Can erosions on MRI of the sacroiliac joints be reliably detected in patients with ankylosing spondylitis? - A cross-sectional study

Introduction Erosions of the sacroiliac joints (SIJ) on pelvic radiographs of patients with ankylosing spondylitis (AS) are an important feature of the modified New York classification criteria. However, radiographic SIJ erosions are often difficult to identify. Recent studies have shown that erosions can also be detected on magnetic resonance imaging (MRI) of the SIJ early in the disease course, before they can be seen on radiography. The goals of this study were to assess the reproducibility of erosion and related features, namely extended erosion (EE) and backfill (BF) of excavated erosion, in the SIJ using a standardized MRI methodology. Methods Four readers independently assessed T1-weighted and short tau inversion recovery (STIR) images of the SIJ from 30 AS patients and 30 controls (15 patients with non-specific back pain and 15 healthy volunteers) ≤45 years old. Erosions, EE, and BF were recorded according to standardized definitions. Reproducibility was assessed by percentage concordance among the six possible reader pairs, kappa statistics (erosion as a binary variable), and the intraclass correlation coefficient (ICC) (erosion as a sum score) for all readers jointly. Results SIJ erosions were detected in all AS patients and six controls by ≥2 readers. The median number of SIJ quadrants affected by erosion recorded by four readers in the 30 AS patients was 8.6 in the iliac and 2.1 in the sacral joint portion (P < 0.0001). For all 60 subjects and all four readers, the kappa value was 0.72 for erosion, 0.73 for EE, and 0.63 for BF; the ICC was 0.79 for erosion, 0.72 for EE, and 0.55 for BF. For comparison, the kappa and ICC values for bone marrow edema were 0.61 and 0.93, respectively. Conclusions Erosions can be detected on MRI with a degree of reliability comparable to that for bone marrow edema, despite the considerable heterogeneity of their appearance on MRI.

Introduction Erosions of the sacroiliac joints (SIJ) on pelvic radiographs of patients with ankylosing spondylitis (AS) are an important feature of the modified New York classification criteria [1]. However, SIJ erosions are often difficult to identify on pelvic radiographs, and training to recognize radiographic structural changes of the SIJ did not improve the performance of radiologists and rheumatologists in detecting radiographic sacroiliitis [2]. A comparison of SIJ radiographs with computed tomography (CT) scans showed a higher sensitivity of CT for detecting structural changes indicative of sacroiliitis (86% versus 72%) at the same specificity (84%) [2]. However, the use of CT to assess SIJ erosions in clinical practice is limited, given recent reports consistently indicating an increased risk of malignancy associated with radiation exposure from pelvic CT [3][4][5]. Recent studies have shown that erosions can also be detected on magnetic resonance imaging (MRI) of the SIJ early in the disease course, before they can be seen on radiography [6], and that erosions may occur in the absence of bone marrow edema (BME) [7]. In the first report, 59% of non-radiographic spondyloarthritis (SpA) patients showed erosions on MRI in at least two SIJ quadrants [6].
The latter study demonstrated that recognition of erosions on T1-weighted spin echo (T1SE) MRI sequences contributes significantly to diagnostic utility in early SpA and that training to recognize lesions on T1SE MRI improves rheumatologists' performance in diagnosing SpA on MRI [7]. A recent retrospective analysis confirmed that erosion on SIJ MRI is a highly specific lesion in patients with SpA [8]. However, data on the reliability of detection of SIJ erosion by MRI are scarce [9,10]. Erosions may extend across major portions of the iliac and sacral subchondral bone, and we call this feature extended erosion (EE). Previous descriptions of erosions on T1SE MRI have cited a complete breach in the subchondral bone with a change of the adjacent marrow signal as the defining characteristics of an erosion (Figure 1) [6,7,11]. However, we have recently observed that the adjacent marrow signal may vary considerably and may even be increased on T1SE MRI, suggesting tissue metaplasia. We have termed this novel appearance 'backfill' (BF) because it is consistent with reparative tissue re-filling an excavated erosion. The appearance of erosion on T1SE MRI may, therefore, vary considerably, and it is essential to determine whether the methodology can be sufficiently standardized and readers sufficiently calibrated to detect these lesions reliably.

Subjects The 60 subjects assessed in this study were randomly selected from a larger population of 187 SpA patients, non-specific back pain (NSBP) patients and healthy controls ≤45 years old, who were recruited at two rheumatology university hospitals [6]. The characteristics of the 30 AS patients meeting the modified New York criteria and of the 30 age- and sex-matched controls (15 NSBP patients and 15 healthy volunteers) are shown in Table 1. The median symptom duration in the two AS groups (15 patients each with symptom duration ≤5 years and >5 to ≤10 years) was three and eight years, and 87% and 80% of the AS patients were HLA-B27 positive, respectively. Both AS groups had similar median Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) [12] values close to 4 on a numeric rating scale from 0 to 10. AS patients with SIJ ankylosis or who had received biologics within six months prior to the SIJ MRI were excluded. The local Ethics Committees approved the study protocol and written informed consent was obtained from all participants.

Reading exercise and MRI protocol Four readers from three rheumatology university hospitals, who were blinded to the diagnosis and patient characteristics, independently assessed semicoronal T1SE and short tau inversion recovery (STIR) sequences of SIJ MRI scans in random order on electronic workstations. These are the sequences used in daily routine MRI evaluation of SpA patients at the participating institutions. The detailed MRI parameters have been published previously [6,7]. The reading exercise of SIJ MRI of the 30 AS patients and the 30 non-specific back pain and healthy controls from the original cohort of 187 subjects was conducted two years after the original assessment of these MRI scans.

Standardized assessment of MR images We recorded lesions according to standardized definitions of active and structural lesions on SIJ MRI developed by the Canada-Denmark MR working group [13]. SIJ erosions are defined as full-thickness loss of the dark appearance of either the iliac or the sacral cortical bone of the SIJ, together with a change in the normal bright appearance of the adjacent bone marrow, on T1SE images [6].
A retrospective analysis of the reading exercise in the original study population, which examined the diagnostic utility of SIJ MRI in SpA patients [6], focused on erosions and identified two main sources of inter-observer disagreement in their reporting. The first feature was extended erosion (EE). We standardized our approach by defining EE as erosion that extends continuously across the entire length of at least one SIJ quadrant of the iliac and/or sacral subchondral bone on the same semicoronal slice (Figure 2). The second feature was termed backfill (BF). BF is characterized by complete loss of iliac or sacral cortical bone with refilling of the excavated area by tissue demonstrating comparable or even increased signal on the T1SE sequence compared with reference normal marrow (Figures 3 and 4). These newly described features, representing the spectrum of abnormalities associated with SIJ erosions on T1SE MRI, were added to a revised reference SIJ MR image set developed by consensus among the study investigators [14]. This reference image set, containing active and structural lesions, served to calibrate the reader team. Calibration comprised three international video-teleconference sessions using SIJ MRI scans of AS patients not part of this study population. MR images of the SIJ were assessed according to the standardized methodology outlined in an online training module [13]. This standardizes the assessment of consecutive semicoronal slices through the SIJ from anterior to posterior, specifying, according to anatomical landmarks, the first anterior and last posterior slices to be scored. We recorded erosions, bone marrow edema (BME) and fat infiltration (FI) in each SIJ quadrant on each semicoronal slice in a customized online data entry module described previously [6]. EE and BF were assessed for each of the four iliac and sacral joint surfaces, irrespective of the number of SIJ slices affected.

Statistical analysis Data description At the patient level, we calculated the frequencies of AS patients and controls with specific MRI abnormalities reported concordantly by ≥2 readers and also by all four readers. We also calculated the median (interquartile range (IQR)) number of quadrants showing erosion, BME and FI by ≥2 readers and also by all four readers. We repeated this analysis to assess the frequencies of MRI lesions according to each of the four joint surfaces (right and left iliac, right and left sacral). We compared the frequencies of lesions observed in the iliac versus the sacral portions of the joint using the Wilcoxon test.

Interobserver reproducibility of specific MRI lesions We calculated percentage agreement for specific MRI lesions according to positive and negative concordance among the six possible reader pairs for lesions detected at the four iliac and sacral joint surfaces per patient (total = 120 for each group). We also compared concordance separately for the iliac and sacral joint surfaces (total = 60 for each group). Kappa statistics and intraclass correlation coefficients (ICC) were used to calculate the reproducibility of specific MRI lesions for all four readers jointly and for all 60 study participants (30 AS patients and 30 controls), per iliac and sacral joint portion, and per subject. Interreader agreement was classified as slight, fair, moderate, substantial, and almost perfect for values of the estimated Cohen's kappa of <0.2, 0.2 to <0.4, 0.4 to <0.6, 0.6 to <0.8, and 0.8 to 1.0, respectively [15].
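To make the agreement computation concrete, the following is a minimal sketch (not the study's analysis code) of pairwise Cohen's kappa for binary lesion scores, together with the agreement bands cited above [15]; the four readers' erosion calls are hypothetical.

```python
# Pairwise Cohen's kappa for binary lesion calls, with Landis & Koch bands.
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two binary ratings of the same units."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                 # each rater's positive rate
    pe = pa * pb + (1 - pa) * (1 - pb)              # chance agreement
    return (po - pe) / (1 - pe)

def agreement_band(kappa):
    """Bands from Landis & Koch, as cited in the text [15]."""
    if kappa < 0.2: return "slight"
    if kappa < 0.4: return "fair"
    if kappa < 0.6: return "moderate"
    if kappa < 0.8: return "substantial"
    return "almost perfect"

# Hypothetical erosion calls (1 = present) by four readers on ten joint surfaces.
readers = {
    "R1": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "R2": [1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
    "R3": [1, 0, 0, 1, 0, 0, 1, 1, 0, 1],
    "R4": [1, 1, 0, 1, 1, 0, 1, 0, 0, 1],
}
for (ra, sa), (rb, sb) in combinations(readers.items(), 2):  # six reader pairs
    k = cohens_kappa(sa, sb)
    print(f"{ra}-{rb}: kappa = {k:.2f} ({agreement_band(k)})")
```

Note that the study also reports kappa for all four readers jointly, which requires a multi-rater generalization rather than the pairwise form sketched here.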
Among the six reported variants of the ICC, we report results for the ICC(3,1) and ICC(2,1) models [16][17][18]. The ICC(3,1) approach considers the study readers a fixed sample and thus not representative of a larger population of raters. In the ICC(2,1) approach, study readers are considered a random sample and, therefore, representative of a larger population of raters. ICC values >0.4, >0.6, >0.8, and >0.9 were regarded as representing moderate, good, very good, and excellent reproducibility, respectively. For kappa, we provide bootstrap confidence intervals based on 1,000 bootstrap replications. All confidence intervals were computed at a confidence level of 95%, and statistical tests were considered significant if the P-value was ≤0.05.
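The two ICC variants and the bootstrap procedure described above can be sketched as follows; this is an illustrative implementation using the Shrout and Fleiss two-way ANOVA definitions, with simulated sum scores rather than study data.

```python
# ICC(2,1) and ICC(3,1) from a subjects-by-raters score matrix, plus a
# percentile bootstrap CI over subjects (1,000 replications, as in the text).
import numpy as np

rng = np.random.default_rng(0)

def icc_2way(scores):
    """scores: (n_subjects, k_raters) array. Returns (ICC(2,1), ICC(3,1))."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)
    msr = k * np.sum((subj_means - grand) ** 2) / (n - 1)    # between subjects
    msc = n * np.sum((rater_means - grand) ** 2) / (k - 1)   # between raters
    resid = scores - subj_means[:, None] - rater_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))           # residual
    icc31 = (msr - mse) / (msr + (k - 1) * mse)              # raters fixed
    icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)  # random
    return icc21, icc31

def bootstrap_ci(scores, stat, reps=1000, level=0.95):
    """Percentile bootstrap CI, resampling subjects with replacement."""
    n = scores.shape[0]
    draws = [stat(scores[rng.integers(0, n, n)]) for _ in range(reps)]
    return tuple(np.percentile(draws, [(1 - level) * 50, (1 + level) * 50]))

# Hypothetical erosion sum scores: 60 subjects x 4 readers.
truth = rng.poisson(5, size=(60, 1))
scores = truth + rng.poisson(1, size=(60, 4))
icc21, icc31 = icc_2way(scores)
print(f"ICC(2,1) = {icc21:.2f}, ICC(3,1) = {icc31:.2f}")
print("95% CI for ICC(3,1):", bootstrap_ci(scores, lambda s: icc_2way(s)[1]))
```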
Data description Frequency of MRI lesions in the AS and control groups Erosions on SIJ MRI were detected in all 30 AS patients by ≥2 readers, whereas five NSBP patients and one healthy control (20% of all controls) also showed lesions meeting the definition of erosion, in one case related to an open epiphyseal growth plate in an 18-year-old man (Figure 5). BME was detected by ≥2 readers in 27 (90%) of the AS patients and in six (20%) controls, whereas FI was recorded in 26 (86.7%) of the AS patients and eight (26.7%) of the controls. The median number of SIJ quadrants showing erosion as recorded by the four readers in the 30 AS patients was statistically significantly higher in the ilium (8.6, IQR 6.9) than in the sacrum (2.1, IQR 2.9; P < 0.0001) (Table 2). EE and BF were also observed statistically significantly more frequently in the iliac joint portion (P < 0.0001 for both lesions). BME and FI showed no statistically significant difference in their distribution between the ilium and sacrum. At the patient level, BME was the most frequently reported MRI lesion in the AS group (median 13.1 SIJ quadrants, IQR 15.1). The frequency of erosions, BME and FI per subject was too low in the 30 controls to draw conclusions about a preferential distribution of these features in the ilium or sacrum. The MRI feature observed most frequently in the control group was FI.

Reproducibility of MRI lesions Percentage concordance among the six possible reader pairs Among the 30 AS patients, the percentage concordance for the detection of erosions in the iliac joint portion was high (80.0%; positive/negative concordance 75.0% and 5.0%, respectively) and comparable with the percentage concordance for BME (79.7%; positive/negative concordance 60.3% and 19.4%, respectively) (Table 2). The sacral joint portion, which had a statistically significantly lower frequency of erosions than the ilium, showed lower concordance among the six possible reader pairs for erosion (67.8%) than for BME (90.3%), resulting in a lower percentage concordance for erosion at the patient level as well (73.9% for erosion versus 85.0% for BME). The lowest percentage concordance was observed for FI, in both the AS and the control group (69.7% and 83.8%, respectively, at the patient level). At the patient level, the percentage concordance for EE (75.7%) and BF (80.0%) was similar to that for erosion (73.9%). The percentage concordance in the control group, where comparatively few lesions were observed, was high for both erosion (92.1%) and BME (89.3%).

Kappa statistics (MRI lesions as binary variables) for four readers jointly For the 60 study participants, reader agreement expressed by kappa values was substantial and comparable for erosion and BME at the subject level (0.72, 95% confidence interval (CI) 0.57 to 0.84, for erosion and 0.61, 95% CI 0.47 to 0.74, for BME) and for the ilium (0.67, 95% CI 0.53 to 0.79, for erosion and 0.64, 95% CI 0.50 to 0.76, for BME) (Table 3). Kappa values were significantly lower in the sacrum for erosion (0.56, 95% CI 0.42 to 0.70) than for BME (0.71, 95% CI 0.57 to 0.82), but the frequency of erosion was also significantly lower in the sacral portion. At the subject level, the kappa values for EE (0.73, 95% CI 0.63 to 0.84) and BF (0.63, 95% CI 0.53 to 0.73) indicated substantial agreement, comparable to the kappa values for erosion and BME. Despite a lesion frequency similar to erosion, FI showed the lowest kappa value (0.55, 95% CI 0.41 to 0.68).

Intraclass correlation coefficients (MRI lesions as sum scores) for four readers jointly At the subject level, reader agreement regarding sum scores of the 60 study participants was higher for BME (ICC(3,1) 0.93 and ICC(2,1) 0.92) than for erosion (ICC(3,1) 0.79 and ICC(2,1) 0.75) (Table 3). This difference was also observed for both the iliac and the sacral joint portion. Consistent with reader-pair percentage concordance and kappa values, assessment of FI also had the lowest reliability according to ICC values (ICC(3,1) 0.71 and ICC(2,1) 0.63). Assessment of EE and BF was less reliable than that of erosion overall, but assessment of these two lesions was based on evaluation of the four cortical surfaces of the entire joint, as compared with the erosion score, which was based on evaluation of eight SIJ quadrants.

Discussion This systematic and controlled evaluation of erosion and related features on SIJ MRI in patients with AS had two findings of clinical relevance. First, the reliability of detecting erosion on SIJ MRI was substantial and comparable to that for BME, provided that readers are trained to recognize abnormalities on T1SE MRI. Familiarity with the variable features of erosions on MRI may also improve reliability. These features include EE and BF. Second, erosions occurred statistically significantly more frequently in the ilium than in the sacrum. This finding has to be taken into account when comparing various lesions detected on SIJ MRI in AS patients, because BME and FI showed a similar distribution across both joint surfaces. A debriefing analysis of the reading exercise in the original study population [6] retrospectively identified EE and BF as the main sources of disagreement in reporting erosions. As suggested previously [19,20], erosion in many SpA patients may be followed by refilling of the excavated bone and eventually ankylosis. Two previous reports focused on agreement data for structural MRI lesions of the SIJ [9,10]. In a cohort of 68 patients with inflammatory back pain according to the Calin criteria, the concordance rates for structural lesions, defined as a composite index of ankylosis, sclerosis and erosions, were 81% and 88%, and the kappa values were 0.37 and 0.66, respectively [9]. However, differences in study design preclude a direct comparison with our reliability data.
Unlike our study cohort, that study population consisted of patients with inflammatory back pain, of whom only 14 (20.6%) met the radiographic modified New York criteria; structural MRI lesions were less frequent (16%); assessment was based on different definitions of MRI lesions; lesions were recorded for the entire right and left SIJ rather than per joint surface; and there was no control group. Another study, which evaluated SIJ MRI in 41 inflammatory back pain patients meeting the European Spondylarthropathy Study Group (ESSG) criteria [21], reported an interreader agreement (percentage agreement/kappa value) between two senior radiologists of 77%/0.54 for erosion [10]. The lesion analysis according to eight SIJ quadrants noted a predominance of erosions in the iliac joint portion, but no formal comparison of lesion frequency between the two joint surfaces was performed. Again, there were major differences in study design compared with our work regarding study population, MRI lesion definitions and imaging technique, and the lack of a control group. Our study was conducted with T1SE and STIR sequences, representing the routine protocol used for MRI evaluation of SpA patients. Additional so-called 'cartilage MRI sequences', such as T1-weighted fat-saturated (T1FS) or T2-weighted gradient echo (T2GE) sequences, may offer advantages in recognizing erosion. However, they require further evaluation as to their reliability for detection of erosion compared with the T1SE sequence alone and as to their implementation into routine MRI scanning protocols with regard to examination time and costs [22]. A study using both T1SE and T1FS sequences to detect SIJ erosions in 37 SpA patients meeting the ESSG criteria reported good inter-observer agreement by kappa statistics between two trained radiologists (0.76 for erosion at the joint level and 0.80 at the patient level) [10]. Erosion was defined as loss of marrow signal on T1SE and T1FS images together with a defect in the overlying cortical bone; erosions were scored by their extent on the joint surface, and the presence of ankylosis was added to the erosion score. However, this study did not directly compare the T1SE and T1FS sequences with regard to reliability for detection of SIJ erosions, a separate analysis without the contribution of ankylosis to the erosion score was not performed, and there was no control group. A recent report compared T1FS sequences with two variants of T2GE sequences, three-dimensional fast low-angle shot (3D-FLASH) and three-dimensional double-echo steady-state (3D-DESS) sequences, in a retrospective analysis of scans of 30 patients with clinically suspected sacroiliitis and nine healthy controls [23]. There was no difference in the number of erosions detected among the three sequences, but erosion scores based on the extent of joint involvement were significantly higher for both T2GE sequences than for the T1FS sequences. The reliability of erosion detection by these three 'cartilage MRI sequences' could not be assessed because only one reader evaluated the scans and there was no comparison with the T1SE sequence often used in daily routine. There are no controlled data on whether CT may have higher sensitivity than MRI for detecting SIJ erosions. However, recent studies have consistently reported an increased risk of malignancy associated with radiation exposure from pelvic CT [3][4][5].
The adjusted lifetime attributable risk was two cancers per 1,000 20-year-old women undergoing pelvic CT examination [4]. This concern about radiation dose may limit the use of CT to assess SIJ erosions in daily routine, and particularly in studies involving healthy controls. We found a marked difference in erosion kappa values between the iliac and the sacral joint portion. However, kappa values depend on the prevalence of the finding under observation [24][25][26]. Kappa values for erosion and BME were virtually identical in the ilium, which may be partly due to a higher occurrence of erosion in this area, comparable to the occurrence of BME. In contrast, erosion kappa values for the sacrum were lower than those for BME, which occurred more frequently than erosion in the sacrum. Another reason for the difference in detecting erosion between the ilium and sacrum may be the different MRI appearances of erosion at the two articular surfaces. Such differences in erosion phenotype may relate to cartilage thickness (cartilage is usually thinner on the iliac side), size and depth of erosions, or MRI artifacts, such as chemical shift, which may impair the assessment of subchondral bone [27,28].

Conclusions This systematic, standardized, and controlled evaluation of SIJ MRI scans in AS patients demonstrated that the reliability among four readers for detection of erosion on SIJ MRI was substantial and comparable to that for BME, and that, in contrast to BME and FI, erosion occurred significantly more frequently on the iliac side. The spectrum of appearance of erosion on MRI is much more heterogeneous than previously reported, and recognition of variants such as 'extended erosion' and 'backfill' may facilitate overall detection of erosion. Moreover, further assessment in prospective studies is required to understand the characteristics of these variants and their role in the evolution of sacroiliitis.
Aetiology of vaginal discharge, urethral discharge, and genital ulcer in sub-Saharan Africa: A systematic review and meta-regression

Background Syndromic management is widely used to treat symptomatic sexually transmitted infections in settings without aetiologic diagnostics. However, underlying aetiologies and consequent treatment suitability are uncertain without regular assessment. This systematic review estimated the distribution, trends, and determinants of aetiologies for vaginal discharge, urethral discharge, and genital ulcer in sub-Saharan Africa (SSA). Methods and findings We searched Embase, MEDLINE, Global Health, Web of Science, and grey literature from inception until December 20, 2023, for observational studies reporting aetiologic diagnoses among symptomatic populations in SSA. We adjusted observations for diagnostic test performance, used generalised linear mixed-effects meta-regressions to generate estimates, and critically appraised studies using an adapted Joanna Briggs Institute checklist. Of 4,418 identified records, 206 reports were included from 190 studies in 32 countries conducted between 1969 and 2022. In 2015, estimated primary aetiologies for vaginal discharge were candidiasis (69.4% [95% confidence interval (CI): 44.3% to 86.6%], n = 50), bacterial vaginosis (50.0% [95% CI: 32.3% to 67.8%], n = 39), chlamydia (16.2% [95% CI: 8.6% to 28.5%], n = 50), and trichomoniasis (12.9% [95% CI: 7.7% to 20.7%], n = 80); for urethral discharge were gonorrhoea (77.1% [95% CI: 68.1% to 84.1%], n = 68) and chlamydia (21.9% [95% CI: 15.4% to 30.3%], n = 48); and for genital ulcer were herpes simplex virus type 2 (HSV-2) (48.3% [95% CI: 32.9% to 64.1%], n = 47) and syphilis (9.3% [95% CI: 6.4% to 13.4%], n = 117). Temporal variation was substantial, particularly for genital ulcer, where HSV-2 replaced chancroid as the primary cause. Aetiologic distributions for each symptom were largely the same across regions and population strata, despite HIV status and age being significantly associated with several infection diagnoses. Limitations of the review include the absence of studies in 16 of 48 SSA countries, substantial heterogeneity in study observations, and assessment of this variability being impeded by incomplete or inconsistent reporting across studies. Conclusions In our study, syndrome aetiologies in SSA aligned with World Health Organization guidelines without strong evidence of geographic or demographic variation, supporting broad guideline applicability. Temporal changes underscore the importance of regular aetiologic re-assessment for effective syndromic management. PROSPERO number CRD42022348045.

Response: The Author Summary has been added (page 3):
Why was this study done?
• Syndromic case management is a common approach for treating sexually transmitted infections in sub-Saharan Africa.
• Characterising the infectious aetiologies (causes) of each syndrome is crucial to ensure adequate choice of treatment.
• There is a lack of recent comprehensive assessments of the aetiologies for vaginal discharge, urethral discharge, and genital ulcer in sub-Saharan Africa.
What did the researchers do and find?
• We conducted a systematic review that included 190 studies in 32 sub-Saharan African countries spanning 1969 to 2022.
• We accounted for the sensitivity and specificity of the different diagnostic tests used across studies and used meta-regression models to estimate the distribution of infections causing each symptom.
• We determined that the main aetiologies for vaginal discharge were candidiasis (69% of cases in 2015), bacterial vaginosis (50%), chlamydia (16%), and trichomoniasis (13%); for urethral discharge were gonorrhoea (77%) and chlamydia (22%); and for genital ulcer were HSV-2 (48%) and syphilis (9%).
• Distributions of infectious aetiologies were similar across regions and population sub-groups but changed over time.

1. There is a lack of attention to heterogeneity between studies: the very informative Figure 1 indicates that there is substantial heterogeneity, but this needs to be quantified. What was the between-study variance (tau squared)? How much of the variance was explained by the variables included in the meta-regression? Which variable explained the most?

Response: Thank you for your comment. Consistent with this advice, we have revised Table S13 to report the following measures for the meta-regression models informing Figure 2: variance attributed to fixed and random effects, variance attributed to observation-level effects, variance attributed to the binomial distribution, and total variance. We have also included these measures in Tables S14 and S15. In the methods section of the main manuscript, we have added (lines 216-218): "Study observation heterogeneity is assessed per model as the percentage of total variance attributed to observation-level random effects (22)." In the results section, we have added (lines 292-295): "Study observations were heterogeneous for all three symptoms, especially vaginal discharge. The percentage of total variance attributed to observation-level random effects was 43.0% for VD, 22.3% for UD, and 25.0% for GU (Table S13)."
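For intuition, the variance-partition arithmetic behind such percentages can be sketched as follows. This is a minimal illustration assuming a logit-link mixed model whose distribution-specific (binomial) variance is pi^2/3, in the style of the Nakagawa and Schielzeth decomposition; the component values are made up, not taken from Table S13.

```python
# Share of total variance attributed to observation-level random effects,
# assuming a logit-link mixed model (level-1 variance = pi^2 / 3).
import math

def obs_level_share(var_fixed, var_group, var_obs):
    """Percentage of total variance at the observation level."""
    var_binomial = math.pi ** 2 / 3          # logit-link residual variance
    total = var_fixed + var_group + var_obs + var_binomial
    return 100 * var_obs / total

# Hypothetical components for one symptom's meta-regression model.
share = obs_level_share(var_fixed=1.2, var_group=0.8, var_obs=3.0)
print(f"{share:.1f}% of total variance at the observation level")
```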
2. The assessment of risk of bias is not appropriate. The JBI instrument does not assess the risk of bias; rather, it is a more general measure that mixes risk-of-bias criteria with reporting and general study quality. Several criteria used have nothing to do with the risk of bias (for example, 1, 7, 10). Further, summary scores should be avoided: they involve inappropriate equal weighting of the different criteria. See also Dekkers et al., COSMOS-E statement, PLoS Med 2019. My advice would be to analyse the criteria related to bias (3, 4, 5, 8) separately in the meta-regression. It would also be interesting to compare the convenience sampling studies with those using consecutive or random sampling.

Response: We thank the editor for the suggestions on the risk of bias assessment. Consistent with the advice, we have revised the assessment to (i) indicate the types of measures assessed by the modified JBI appraisal tool, (ii) remove the summary score, and (iii) analyse each criterion separately in meta-regressions for each symptom.

The methods section has been updated (lines 194-203): "We adapted the Joanna Briggs Institute critical appraisal tool for prevalence studies to assess study design (objective of the study), selection bias (clarity of inclusion criteria, appropriateness of recruitment method, adequacy of participation, and detail of participant characterisation), measurement bias (objectivity in symptom definition, consistency of diagnostic methodology, and avoidance of misclassified results), precision (sufficiency of sample size), and reporting quality (ambiguity of results) (Table S8) (21). Each of the 10 criteria was independently double assessed for each report, with discrepancies resolved through consensus or by a third reviewer. We assessed the association of these criteria with the RTI proportions by extending the meta-regressions to include fixed effects for each criterion."

The results section has been updated (lines 323-337): "Fewer than half of studies aimed to assess the aetiology of genital symptoms (NVD=37/87, NUD=32/55, NGU=40/80, Table S18). Most studies were at risk of selection bias; participant inclusion criteria were often clearly defined (NVD=82/87, NUD=50/55, NGU=73/80), but few studies recruited participants using consecutive or random sampling methods (NVD=32/87, NUD=27/55, NGU=46/80), had adequate participation rates (NVD=34/87, NUD=21/55, NGU=40/80), or provided sufficient detail on participant characteristics to determine their representativeness (NVD=36/87, NUD=21/55, NGU=56/80). Most studies were not susceptible to measurement bias; the majority defined participant symptoms objectively (NVD=48/87, NUD=36/55, NGU=75/80), employed consistent diagnostic methodologies (NVD=81/87, NUD=47/55, NGU=75/80), and avoided misclassifying infections by testing for multiple pathogens (NVD=43/87, NUD=22/55, NGU=48/80). Studies generally had sample sizes of at least 100 participants (NVD=61/87, NUD=39/55, NGU=46/80) and reported results unambiguously (NVD=51/87, NUD=37/55, NGU=52/80). Several of these appraisal criteria were associated with the odds of RTI diagnosis, particularly for vaginal discharge, but the hierarchy of aetiologies per symptom remained the same (Figure S3)."

In the supplementary material, Table S8 now classifies criteria in the appraisal tool as relating to study design, selection bias, measurement bias, precision, and reporting quality. Table S17 now includes a footnote classifying each criterion according to the above categories. Table S18 has been added to summarise the criteria across the studies included for each symptom. Figure S3 has been replaced and now shows the association of each criterion with the RTI proportions for each symptom.

3. I felt the authors tend to overinterpret their findings. For example, the data do not support the recommendation that a survey should only be done every 5 years. Rather, the authors should stress that studies are urgently needed in many countries without data. Also, the limitations of the syndromic approach should be discussed. An interesting question in this context relates to the proportion of patients who will not receive adequate treatment based on their results.
Response: We appreciate this editorial comment. We have revised the discussion accordingly, particularly highlighting the risk of inadequate treatment under the syndromic management approach, the lack of data in one-third of countries, and the insufficient frequency of assessments in most others (lines 358-367 and 376-388): "We also identified a notable proportion of vaginal discharge cases attributed to chlamydia, emphasising recommendations for speculum examination to detect cervicitis in the absence of aetiologic testing (3). Furthermore, approximately 5% of vaginal discharge and urethral discharge cases were attributed to M. genitalium in 2015. Current WHO guidelines recommend assessing for M. genitalium only in instances of recurrent or persistent discharge (3), potentially leaving this infection untreated. Additional attention to MG is therefore warranted, particularly amid ongoing debates on aetiologic testing in higher-income settings (38). Moreover, over a quarter of vaginal discharge and genital ulcer cases lacked an identified cause, presenting a persistent treatment challenge, even if aetiologic testing becomes more widely available."

"Symptom aetiologies have changed over time, particularly for genital ulcer. The leading cause of genital ulcer transitioned from chancroid to HSV-2 between 1990 and 2010… Temporal changes underscore the need for regular aetiologic assessment of syndromes. However, among the 32 countries with data in our review, the publication rate approximated one study every ten years (median of 5 studies per country during 1969 to 2022) and 16 countries had no data at all, falling short of the WHO's recommended assessment frequency of 2 years (6)."

Reviewer 1: Thank you for the opportunity to review this manuscript. This is a valuable and timely study. It is delightful to see this study conducted, and so rigorously. The study is thorough, well-conducted, and conforms to the best standards in conducting systematic reviews. The article is well-written, and the presentation is lucid. The results are of global interest and inform the current intense discussion about the relevance of the syndromic approach for the management of STIs versus the etiological approach. The results directly inform regional (as well as global) guidelines. In the spirit of enhancing the impact and value of this study, I suggest a few minor revisions.

Response: We thank the reviewer for their positive feedback. We have addressed each comment separately below.

1. The definition of the proportion of cases that did not have an identified etiology was unclear to me. I assume the proportion depends on what is being tested in each study, but what is being tested can differ between studies. This makes this proportion not well defined. Can you please clarify and discuss this point?

Response: The definition above is correct and has been modified in the methods for clarity (lines 169-170): "The proportion with "unknown aetiology" was only extracted from studies with observations for three or more RTI pathogens, regardless of the specific pathogens tested." The definition has been discussed as a limitation (lines 437-442): "Finally, due to different numbers of pathogens examined across studies, we were required to subjectively define an unknown aetiology among study participants tested for three or more RTIs, irrespective of the pathogens examined or the number of potential pathogens present." 2.
There is a challenge in applying adjustments for diagnostic test performance. The reason is that the existing reported adjustments tend to be unrepresentative, introducing errors (sometimes even non-real negative values) when applied. It is great that the authors presented results with and without these adjustments, and both overall agreed. I think it might still be useful to discuss limitations of these adjustments in the limitations section.

Response: We used a Bayesian approach for the diagnostic test performance adjustment in order to avoid the introduction of negative prevalence values. This is described in the supplementary file, Text S2. Limitations of the performance adjustments have been discussed (lines 432-434): "Other RTI proportions may also have been over- or under-estimated due to the assigned diagnostic test sensitivity and specificity values, despite our efforts to ensure their accuracy."

3. The results for HSV-1 are interesting on their own but are not sufficiently discussed. The results also align with another relatively recent study (Harfouche M, Chemaitelly H, Abu-Raddad LJ. Herpes simplex virus type 1 epidemiology in Africa: systematic review, meta-analyses, and meta-regressions. J Infect 2019; 79(4): 289-99). I suggest some discussion here given the increasing relevance of this infection as an STI (though not strictly in Africa).

Response: We thank the reviewer for highlighting the study by Harfouche and colleagues. The revised discussion highlights the consistency of our findings with theirs (lines 356-358): "Our estimates attributed 2% and 48% of genital ulcer cases to HSV-1 and HSV-2 in 2015, respectively, consistent with other systematic review findings of 1.2% and 51% in SSA during 1990 and 2015 (36,37)."

4. Although such studies are rare, why did you include studies with participants as young as 10 years? It seems to me that a more appropriate age threshold is 15 years. You may want to justify your choice.

Response: Our primary interest was STI aetiologies among the sexually active adult population. We used a minimum age of 10 years to avoid excluding studies that included eligible participants. The methods have been updated (lines 133-136): "While our primary interest was studies among sexually active adult populations, we used a minimum age of 10 years to avoid excluding studies which enrolled both eligible sexually active adolescent and adult participants; in such studies (N=18), the majority of participants were adults."

5. You may want to indicate in the limitations that the search for grey literature was not strictly systematic.

Response: We have added to the discussion (lines 403-406): "We extensively searched grey literature to identify all available data, particularly surveillance reports that might not appear in peer-reviewed academic literature. The grey literature search used similar criteria to our database search but was not strictly systematic."
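The performance adjustment discussed in point 2 above can be illustrated with a minimal sketch. The values are hypothetical; a truncated Rogan-Gladen estimate stands in for the authors' fuller Bayesian formulation in Text S2, and the sample-size-weighted mean mirrors the approach later described for Table S7.

```python
# Adjusting an observed proportion for test sensitivity/specificity.
# The raw Rogan-Gladen estimate can go negative; truncating to [0, 1] is a
# crude stand-in for the Bayesian guard the authors describe.
import numpy as np

def weighted_performance(values, sample_sizes):
    """Sample-size-weighted mean sensitivity or specificity across sources."""
    return np.average(values, weights=sample_sizes)

def rogan_gladen(p_obs, sens, spec):
    """True-prevalence estimate from apparent prevalence, truncated to [0, 1]."""
    est = (p_obs + spec - 1) / (sens + spec - 1)
    return min(max(est, 0.0), 1.0)

# Hypothetical: three sources report specificity for one serologic test.
spec = weighted_performance([0.95, 0.89, 0.92], sample_sizes=[120, 40, 300])
print(rogan_gladen(p_obs=0.10, sens=0.85, spec=spec))  # adjusted proportion
```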
Reviewer 2: This article represents a truly massive effort to use the existing literature to explore the cause of vaginal and penile discharge and genital ulcers in SSA from 1969 to 2022. The authors recognize a variety of limitations in this effort. The most important problem with the article lies in the failure to discuss the reasons for some of the striking changes offered in the figures: the huge increase in NGC, the virtual disappearance of HD, the reduction in TV, and many, many more observations. For example, did untreated HIV set the stage for spread of HD, which has abated with ART? The purpose of the Discussion is to INTERPRET results, not reiterate them.

Response: We thank the reviewer for their helpful feedback. We have addressed each comment separately below.

1. The authors state: "STI surveillance using syndrome-based assessments is noncomprehensive and requires studies among symptomatic and asymptomatic populations." Did they not find articles that offered a view of unrecognized, untreated asymptomatic infection? For example, guidelines call for routine screening of people living with HIV for STIs?

Response: Our review focused on assessing symptomatic infection and did not extract data on aetiologies among asymptomatic populations. We are therefore unable to comment on surveillance among asymptomatic populations and have accordingly removed this statement from the manuscript.

2. In the Methods the authors say "we searched from inception to 25 July 2022". I think by inception they mean 1969?

Response: Each of the four databases has a different date of inception. Our search included literature from the beginning of each database until the specified end date. We have revised the methods to clarify that this refers to the date of database inception (lines 102-104): "Embase (Ovid), MEDLINE (Ovid), Global Health (Ovid) and Web of Science were searched from database inception to 20 December 2023."

3. The authors discuss bias: "Most studies (NVD=47, NUD=30, NGU=33) had moderate risk of bias (Table 1, Table S17). Studies with higher risk of bias (NVD=19, NUD=11, NGU=12) were predominantly those with alternate study objectives, insufficient description of study participants and/or settings, only one pathogen assessed, and ambiguous reporting of outcomes. Estimates for the proportion diagnosed per pathogen over time were generally consistent when alternatively including studies of any risk level, or only studies with lower and/or moderate risk of bias (Figure S3)." I am not sure they are describing bias (in the epidemiology sense) as much as limitations of the articles available and the veracity of sampling and tests employed? The idea of what they are trying to do could be stated more clearly.

Response: As discussed in response to the Academic Editors, we have revised the assessment to (i) indicate the types of measures assessed by the modified JBI appraisal tool, (ii) remove the summary score, and (iii) analyse each criterion separately in meta-regressions for each symptom. The updated methods state (Table S8) (21): "Each of the 10 criteria was independently double assessed for each report, with discrepancies resolved through consensus or by a third reviewer. We assessed the association of these criteria with the RTI proportions by extending the meta-regressions to include fixed effects for each criterion." 4.
The conclusion of the article does not fit: The authors state "STI surveillance requires prevalence studies among both symptomatic and asymptomatic populations, particularly due to high rates of asymptomatic infection". But this article is NOT about asymptomatic infections. That is an entirely different topic. The authors need to think of what the reader might take from this work. They implore more frequent surveillance, but I am not sure the data support this idea without further explanation of the changes observed and more consideration of the frequency of this effort?

Response: We appreciate the reviewer's comment and have accordingly removed this statement from the conclusion. The conclusion has been updated (lines 443-449): "In conclusion, the aetiologies of three common STI-related symptoms were remarkably similar across regions in sub-Saharan Africa but have evolved over time, underscoring a changing STI transmission landscape and the need for regular re-assessment to inform syndromic management protocols. The observed aetiologic distributions in SSA were largely consistent with WHO recommended syndromic management algorithms without strong evidence of variation by country, context, or population strata, strengthening the generalisability of our findings to settings lacking data in SSA."

5. Most important, the authors do not offer a clear opinion of syndromic management compared to diagnostic results. This descriptive effort surely is designed to better direct syndromic management, which remains the mainstay of STD care. Is that the intention of the authors? On the other hand (for example), focus on treatment of GC, found so commonly in discharge, would leave the far less common MG untreated. It seems unwise to put forth this effort without an opinion about this dilemma, and the potential contribution of this report?

Response: Consistent with the reviewer's advice, the discussion has been revised to highlight the risk of inadequate treatment under the syndromic management approach and the lack of attention to MG in WHO guidelines (lines 358-367): "We also identified a notable proportion of vaginal discharge cases attributed to chlamydia, emphasising recommendations for speculum examination to detect cervicitis in the absence of aetiologic testing (3). Furthermore, approximately 5% of vaginal discharge and urethral discharge cases were attributed to M. genitalium in 2015. Current WHO guidelines recommend assessing for M. genitalium only in instances of recurrent or persistent discharge (3), potentially leaving this infection untreated. Additional attention to MG is therefore warranted, particularly amid ongoing debates on aetiologic testing in higher-income settings (38). Moreover, over a quarter of vaginal discharge and genital ulcer cases lacked an identified cause, presenting a persistent treatment challenge, even if aetiologic testing becomes more widely available."

Reviewer 3 (statistics): Firstly, I would like to commend the authors on the substantive amount of work undertaken in this systematic review. There is a large number of included studies, and I can appreciate the workload behind such a task. Just to be clear, I will only be considering the methods and statistical analyses undertaken. Overall, I believe there is good methodological and scientific rigour utilised throughout the review. The analysis is, for the most part, well explained and is conducted well. I have some points below for the authors' consideration.
Response: We thank the reviewer for their helpful feedback. We have addressed each comment separately below.

1. The authors have presented a lot of details regarding the meta-regression modelling. However, there appears to be a lack of information regarding the initial pairwise meta-analysis which led to this. While there are very helpful and detailed figures (e.g., Figure 1, although I think this is misnamed, as the PRISMA flow diagram is also named Figure 1) showing the aOR by vaginal discharge, urethral discharge, and genital ulcer, they do show that, in some cases, there may be high heterogeneity. For example, Figure 1, vaginal discharge, shows heterogeneity between the groups. There appears to be a lack of quantification of this heterogeneity (i.e., tau: the between-study standard deviation). Please could the authors report this information. Additionally, it would be beneficial to observe how the meta-regression addressed such heterogeneity and which factors were significantly associated with the observed heterogeneity.

Response: We thank the reviewer for their comment. As discussed in response to the Academic Editors, we have revised Table S13 to report the following measures for the meta-regression models informing Figure 2: variance attributed to fixed and random effects, variance attributed to observation-level effects, variance attributed to the binomial distribution, and total variance. We have also included this in Tables S14 and S15. In the methods section of the main manuscript, we have added (lines 216-218): "Study observation heterogeneity is assessed per model as the percentage of total variance attributed to observation-level random effects (22)." In the results, we have added (lines 292-295): "Study observations were heterogeneous for all three symptoms, especially vaginal discharge. The percentage of total variance attributed to observation-level random effects was 43.0% for VD, 22.3% for UD, and 25.0% for GU (Table S13)."

2. The risk of bias assessment includes an overall assessment, which is not recommended. The authors should consider removing this information. Currently, the risk of bias assessment gives equal weighting to each item, and some of the items are to do with the study quality of reporting and not risk of bias per se. Therefore, the inclusion of the overall risk of bias score in an analysis would be misleading, and it would be more pertinent to group studies based on certain risk of bias questions as a categorical outcome (yes, no, unclear), for example, questions 3, 4, 5, and 8. Additionally, the sampling utilised in the studies should also be considered (i.e., compare convenience sampling studies with those using consecutive or random sampling methods). These re-analyses may change the interpretation of the results, and this should be considered carefully. Currently, the results are interpreted positively and maybe too positively.

Response: We appreciate the reviewer's comment. As discussed in response to the Academic Editors, we have revised the assessment to (i) indicate the types of measures assessed by the modified JBI appraisal tool, (ii) remove the summary score, and (iii) analyse each criterion separately in meta-regressions for each symptom.
The methods section has been updated (lines 194-203): "We adapted the Joanna Briggs Institute critical appraisal tool for prevalence studies to assess study design (objective of the study), selection bias (clarity of inclusion criteria, appropriateness of recruitment method, adequacy of participation, and detail of participant characterisation), measurement bias (objectivity in symptom definition, consistency of diagnostic methodology, and avoidance of misclassified results), precision (sufficiency of sample size), and reporting quality (ambiguity of results) (Table S8) (21). Each of the 10 criteria was independently double assessed for each report, with discrepancies resolved through consensus or by a third reviewer. We assessed the association of these criteria with the RTI proportions by extending the meta-regressions to include fixed effects for each criterion."

The results section has been updated (lines 323-337): "Fewer than half of studies aimed to assess the aetiology of genital symptoms (NVD=37/87, NUD=32/55, NGU=40/80, Table S18). Most studies were at risk of selection bias; participant inclusion criteria were often clearly defined (NVD=82/87, NUD=50/55, NGU=73/80), but few studies recruited participants using consecutive or random sampling methods (NVD=32/87, NUD=27/55, NGU=46/80), had adequate participation rates (NVD=34/87, NUD=21/55, NGU=40/80), or provided sufficient detail on participant characteristics to determine their representativeness (NVD=36/87, NUD=21/55, NGU=56/80). Most studies were not susceptible to measurement bias; the majority defined participant symptoms objectively (NVD=48/87, NUD=36/55, NGU=75/80), employed consistent diagnostic methodologies (NVD=81/87, NUD=47/55, NGU=75/80), and avoided misclassifying infections by testing for multiple pathogens (NVD=43/87, NUD=22/55, NGU=48/80). Studies generally had sample sizes of at least 100 participants (NVD=61/87, NUD=39/55, NGU=46/80) and reported results unambiguously (NVD=51/87, NUD=37/55, NGU=52/80). Several of these appraisal criteria were associated with the odds of RTI diagnosis, particularly for vaginal discharge, but the hierarchy of aetiologies per symptom remained the same (Figure S3)."

In the supplementary material, Table S8 now classifies criteria in the appraisal tool as relating to study design, selection bias, measurement bias, precision, and reporting quality. Table S17 now includes a footnote classifying each criterion according to the above categories. Table S18 has been added to summarise the criteria across the studies included for each symptom. Figure S3 has been replaced and now shows the association of each criterion with the RTI proportions for each symptom.

3. PRISMA: The number of reports included was 198, but when adding the subgroups there are 227; please check and make clear.

Response: We have revised the manuscript to indicate more clearly that several studies assess the aetiology of multiple symptoms. The breakdown by study is outlined in Table 1 and the results section (lines 237-241): "Overall, 206 reports were included from 190 independent studies (number of studies per symptom (N): NVD=87, NUD=55, NGU=80) spanning 1969 to 2022 (Table 1). Of these, 166 studies focused on a single symptom, 16 studies on two symptoms, and 8 studies on all three symptoms." The PRISMA diagram (Figure 1) has now been updated to provide a similar breakdown by report. 4.
Text S2: In the supplementary text the authors state that, in the case of multiple sources being available with wide variation, a mean sensitivity and specificity value was calculated. I believe taking the mean would be inadequate, as the variation most likely occurs due to sample size differences and other confounding factors (e.g., high- vs low-risk population). It would therefore be beneficial for these values to be created using a weighted mean based on the sample size.

Response: Consistent with this advice, we have revised the sensitivity and specificity values in Table S7. Where multiple sources were used to determine the sensitivity and specificity of treponemal and non-treponemal tests, the weighted mean has now been used. Supplementary Text S2 has been revised to clarify this (page 7): "In cases where multiple sources were available with wide variation, we calculated a weighted mean for the sensitivity and specificity."

5. The tables need to make clear that they are referring to the number of studies and not the number of participants.

6. There is a lack of information regarding how the adjusted odds ratios were calculated and what factors were used. Please elaborate. A similar supplementary text as for the other analyses would be beneficial.

Response: The following explanation has been added to the manuscript (lines 215-216): "Model coefficients are presented as adjusted odds ratios (aORs), with confidence intervals calculated on the log odds scale before exponentiation." The methods section describes the variables included in the model (lines 181-186): "We estimated time trends in the diagnosed proportion by region via generalised linear mixed-effects meta-regressions for each symptom (19). Models were specified a priori to include fixed effects for RTI, the interaction of RTI and year (midpoint date of data collection measured as continuous calendar year), and the interaction of RTI and sex (genital ulcer only), random intercepts and slopes per year for the interaction of RTI and region (central and western, eastern, or southern Africa), and observation-level random intercepts to account for between-study heterogeneity."

Reviewer 4: This manuscript by Michalow et al. describes a systematic review and meta-regression to characterise aetiologies for vaginal discharge, urethral discharge, and genital ulcer in sub-Saharan Africa. International and national guidelines for STIs across Africa are often informed by sporadic studies and surveillance activities, making amalgamation of this information very helpful. The manuscript itself is clear and comprehensive, appears methodologically sound, and is overall a very impressive piece of work providing very important data, with clear policy implications. I have a few minor comments and suggestions, largely to improve clarity.

Response: We thank the reviewer for their positive feedback. We have addressed each comment separately below.

1. Abstract: It is noted in the methods that results are reported as predictions for the year 2015 as this represents "the most recent quinquennium within the timespan of substantial available data". This makes sense; however, reading the abstract without this explanation is a bit confusing, particularly the sentence "In 2015, primary aetiologies for vaginal discharge were…". I wonder if it is possible to make this more clear, such that readers won't think that the results are from the year 2015, e.g., "In 2015, predicted primary aetiologies…" or providing a brief explanatory sentence if word count allows. 2.
UN M49 standard - The classification of the UN M49 standard is included in the supplementary material. However, I wonder if it might be possible to include this information in the main manuscript, so that readers are not required to review the supplementary material to find this important information. This is perhaps additionally important as countries such as Zambia and Zimbabwe are often considered to be in "Southern Africa" by various other classifications, and I had actually made this assumption before reviewing the supplementary material.

3. Lines 104-106: "Studies were included if: (1) participants were symptomatic at the time of testing, defined by the presence of either self-reported or clinician-evaluated abnormal vaginal discharge, urethral discharge, or genital ulcer" - I think it would be helpful for clarity to state that studies were included if "some" participants were symptomatic at the time of testing (rather than all participants in a study having symptoms). It is apparent from reviewing the included papers, but would be helpful to state explicitly.

Response: The inclusion criteria have been updated accordingly (line 129): "Studies were included if: 1) there were participants symptomatic at the time of testing, defined by the presence of either self-reported or clinician-evaluated abnormal vaginal discharge, urethral discharge, or genital ulcer…"

4. Was there any particular rationale for a cut-off sample size of 10?

Response: The sample size limit aims to strike a balance between excluding excessively imprecise studies (those with fewer than 10 participants) and incorporating small studies (larger than 10 but, for example, below 100 participants) while accounting for their inherent limitations. The approach ensures findings are synthesized from a diverse range of studies. A minimum sample size of 10 is a common convention in systematic reviews, as in the example reference below:
• Chan GJ, Lee AC, Baqui AH, Tan J, Black RE. Risk of early-onset neonatal infection with maternal infection or colonization: a global systematic review and meta-analysis. PLoS Medicine 2013;10(8):e1001502.

5. Lines 107-108: "diagnostic methodology for each infection was described and assessed as valid according to published recommendations." Perhaps a bit more information on how this assessment was made would be helpful.

6. Were fixed and random effects variables chosen a priori for models?

Response: Yes, the model terms were chosen a priori to address the primary research question about regional levels and time trends in STI aetiologies, allowing for study heterogeneity (via observation-level random effects). We have revised the description of the model to clarify this (lines 181-186): "Models were specified a priori to include fixed effects for RTI, the interaction of RTI and year (midpoint date of data collection measured as continuous calendar year), and the interaction of RTI and sex (genital ulcer only), random intercepts and slopes per year for the interaction of RTI and region (central and western, eastern, or southern Africa), and observation-level random intercepts to account for between-study heterogeneity."

8. The four included figures in the main manuscript are numbered as follows: Figure 1, Figure 2, Figure 1, Figure 2. Re-numbering is required.

Response: We thank the reviewer for noting this error; it has been corrected. Discussion: 9.
Discussion:

9. Lines 304-305: "The distribution of aetiologies estimated for each symptom were consistent with WHO syndromic management algorithms." As MG was diagnosed in 5.6% of urethral discharge cases and 5.4% of vaginal discharge cases, but is not included in either management guideline, I wonder if the authors would like to briefly comment on the merits of the inclusion of MG in such algorithms. This may be particularly apt given the finding that "exceptions included higher diagnosed proportions of M. genitalium than chlamydia among younger women with vaginal discharge and HIV-positive men with urethral discharge." I am aware of the global debate around MG testing, so even a comment simply acknowledging this may be appropriate.

Response: Consistent with this advice, we have highlighted the lack of attention to MG in WHO guidelines (line 358-367): "We also identified a notable proportion of vaginal discharge cases attributed to chlamydia, emphasising recommendations for speculum examination to detect cervicitis in the absence of aetiologic testing (3). Furthermore, approximately 5% of vaginal discharge and urethral discharge cases were attributed to M. genitalium in 2015. Current WHO guidelines recommend assessing for M. genitalium only in instances of recurrent or persistent discharge (3), potentially leaving this infection untreated. Additional attention to MG is therefore warranted, particularly amid ongoing debates on aetiologic testing in higher-income settings (38). Moreover, over a quarter of vaginal discharge and genital ulcer cases lacked an identified cause, presenting a persistent treatment challenge, even if aetiologic testing becomes more widely available."

Reviewer 5: This systematic review and characterisation of the aetiologies of the three key STI syndromes (vaginal discharge, urethral discharge and genital ulcers) in sub-Saharan Africa is very timely and useful and extremely thoroughly and competently conducted, and the findings are very well and clearly written. The authors should be congratulated for an excellent job. The main text, figures and tables are very clear and concise. The supplementary material is very detailed and useful. The conclusions broadly support the current recommendations of WHO: 1) that the current syndromic management guidelines cover well the main aetiologies under each syndrome; 2) that there is a need to regularly appraise the aetiological composition of syndromes as these do change over time, particularly genital ulcer syndrome, and may also vary by HIV status (again more so for genital ulcer) and slightly by age (more chlamydia and gonorrhoea in younger ages). Interestingly, the estimates of aetiological composition do not vary much by geographical region, and the estimated distributions have remained relatively constant even despite the change of diagnostic procedures, with the introduction of NAAT tests more recently. Overall, this work supports the robustness of WHO recommendations for syndromic management, which is reassuring. The studies tend to confirm the near disappearance of some aetiologies such as Haemophilus ducreyi (chancroid) and LGV among ulcers, and T. vaginalis among female and male discharges. This is quite likely the result of nearly 30 years of implementation of syndromic management, and it would be interesting to show whether this is supported by prevalence studies in general populations (among the asymptomatic).

Response: We thank the reviewer for their positive feedback. We have addressed each comment separately below.
There are no major comments, but a few points:

1. This reviewer could not verify all the publications that were included in the review and possibly some that may have been omitted, although data appear to have been thoroughly checked.

2. One surprising finding is the high proportion of candidiasis found as an aetiology of VD (and particularly coexisting with BV and/or TV, given that the vaginal pH enabling the yield of CS vs TV/BV goes in opposite directions).

Response: We agree with the reviewer's comment. Estimates for CS are also sensitive to the diagnostic test performance adjustments, although the performance values for these tests were specified using the best available evidence. We have noted this as a potential limitation (line 429-432): "In contrast, the diagnosed proportion for CA was 45% relative to CS, which was below the expected range of 70-90% (51,52). Adjustments to the performance of gram stain and/or wet mount may have overestimated CS proportions, while CA may have been underestimated due to the limited number of observations."

3. One aetiology for GU is surprisingly missing -- donovanosis, which used to be diagnosed in South Africa in the 1990s (e.g. O'Farrell) -- this would not very much change the aetiological profile other than signalling its disappearance.

Response: We used the WHO 2021 guideline for symptomatic STI management to inform the selection of aetiologies included in our study. While donovanosis is briefly mentioned in the guidelines in reference to men with persistent anogenital ulcers, it has not been included in the flow chart for genital ulcer management nor the list of treatment options available for genital ulcer aetiologies, and hence was not considered in this study.

4. Regarding syphilis serology in GU, what rule did the authors (or the primary study authors) use to attribute a syphilis aetiological result, for example in the absence of a demonstrable TP pathogen in the lesion?

Response: When studies used both ulcer swab tests and serology tests, our priority was to extract results from the swab test, even when no identifiable cases of the pathogen were reported. For studies that did not conduct ulcer swab tests, we extracted serologic results as available. This was outlined in the methods (line 165-167): "When multiple diagnostic tests were used for syphilis among those with genital ulcer, we preferentially extracted observations for tests using ulcer swab specimens over serology (3)."

A few minor comments or clarifications:

5. There is a slight discrepancy between the number of study reports mentioned in the text (top of p9 and Table 1, i.e. NVD=83, NUD=53, NGU=78, total 183, but should not this be 214?), when Figure 1 indicates 88, 53 and 86 respectively (total 198, should be 227). I would guess this is because some studies covered more than one syndrome? Perhaps that proportion should be given in the text.

Response: We have revised the manuscript to indicate more clearly that several studies assess the aetiology for multiple symptoms. The breakdown by study is outlined in Table 1 and the results section (line 237-241): "Overall, 206 reports were included from 190 independent studies (number of studies per symptom (N): NVD=87, NUD=55, NGU=80) spanning 1969 to 2022 (Table 1). Of these, 166 studies focused on a single symptom, 16 studies on two symptoms, and 8 studies on all three symptoms." The PRISMA diagram (Figure 1) has now been updated to provide a similar breakdown by report.
6. It is not clear how the data from the two identified databases (NICD and CESAHHR) were aggregated with the rest of the data. The CESAHHR data needed some transformation (weighting) to account for study design (RDS survey); what about the NICD data? Were the results obtained simply added to the other data?

Response: The following has been added to the supplementary file, Text S1. Firstly, to reflect that NICD data were unweighted: "We extracted simple (unweighted) diagnosed proportions, stratified by year, sex, HIV-status, and age group." Secondly, to indicate that CeSHHAR and NICD data tabulations were merged with the other data: "The data extracted from each database, following the approaches outlined below, were combined with published study data before analysis." (A brief illustrative sketch of such design weighting is included at the end of this letter.)

7. It is not clear why the year 2015 is chosen to give the results?

Response: We have described the rationale for year 2015 in the methods section (line 212-214): "Pooled results are reported as meta-regression model predictions for the year 2015, representing the most recent quinquennium within the timespan of substantial available data."

8. Whilst inclusion criteria insisted on studies among symptomatic patients, it appears in several places that such studies accounted for 85% or so of patients? Why?

Response: We extracted aetiologic outcomes for symptomatic study participants, although the study may also have included asymptomatic participants. The inclusion criteria have accordingly been updated for clarity (line 129): "Studies were included if: 1) there were participants symptomatic at the time of testing, defined by the presence of either self-reported or clinician-evaluated abnormal vaginal discharge, urethral discharge, or genital ulcer…"

9. Top of p15/line 358 'our focus on discharge symptoms...': suggest this section starts as a new paragraph -- it seems to be dealing with limitations of the analyses. By the way, the phrase 'discharge symptoms' rather than 'syndromes' is not clear... is it because the authors may have included studies that reported patients with discharge, or dysuria etc., rather than strictly defined as a 'syndrome' -- and did they (or the study authors) treat the aetiologies attached to each symptom as equally contributing to the 'syndrome'?

Response: This topic falls within the penultimate 'limitations' paragraph of the manuscript. We have clarified our focus on discharge symptoms rather than syndromes (line 422-425): "We specifically focused on discharge symptoms, rather than broader syndrome definitions, to accommodate varied reporting across studies and over time. However, this more inclusive definition may have influenced estimated aetiologic distributions."

Supplementary Table S5 -- under CT diagnosis, 'antibody test' done on genital fluid and/or urine: is that correct? Is it not rather an 'antigen detection' assay?

Response: We thank the reviewer for noting this error, it has been corrected to "ELISA".

12. Supplementary Table S7 -- for syphilis -- sensitivity of darkfield of 80%?? This seems rather high -- it depends on the performer!

Response: We used WHO documentation for the sensitivity and specificity values. The range reported for darkfield sensitivity has been fairly consistent over time: 74-86% (WHO 1999) and 75-100% (WHO 2023). While the performer's influence on the accuracy of darkfield is well noted, in the absence of better information to quantify performance in different contexts, we have retained the WHO reported ranges for our primary analysis.
In the limitations section, we note the uncertainty about and heterogeneity in performance characteristics for many of the diagnostics (line 432-434): "Other RTI proportions may also have been over- or under-estimated due to assigned diagnostic test sensitivity and specificity values, despite our efforts to ensure their accuracy."

• World Health Organization. Laboratory tests for the detection of reproductive tract infections. 1999.
• World Health Organization. Laboratory and point-of-care diagnostic testing for sexually transmitted infections, including HIV. 2023.

Reviewer 6: This systematic review was conducted to determine whether the aetiologies of the three main STI syndromes (vaginal discharge, urethral discharge and genital ulcer) in sub-Saharan Africa (SSA) would be treated by the management algorithms recommended by WHO. The findings are interesting and useful; however, some revisions are required.

Abstract:

1. Conclusion needs to be amended. Considering that only studies of STI syndromes (i.e. involving symptomatic patients) were analysed, the findings do not speak to the conclusion that STI surveillance using syndrome-based assessments is non-comprehensive and requires studies in asymptomatic populations. Although this statement is correct, this conclusion cannot be reached from the findings of this meta-analysis.

Response: We appreciate the reviewer's comment and have accordingly removed this statement from the conclusion. The conclusion has been updated (line 42-44): "Syndrome aetiologies in SSA align with WHO guidelines without strong evidence of geographic or demographic variation, supporting broad guideline applicability. Temporal changes underscore the need for regular aetiologic re-assessment."

2. Would also state that aetiologic reassessment needs to be periodic or regular.

Response: This has been updated in the abstract (line 44): "Temporal changes underscore the need for regular aetiologic re-assessment."

3. It is important for the authors to clarify why a symptom is used to define a particular syndrome throughout the manuscript, when STIs are generally managed as syndromes (defined by groups of symptoms) in regions having limited access to laboratory diagnostics. Aetiological studies usually investigate the causative organisms of STI syndromes.

Response: We have clarified our focus on discharge symptoms rather than syndromes in our discussion of limitations (line 422-425): "We specifically focused on discharge symptoms, rather than broader syndrome definitions, to accommodate varied reporting across studies and over time. However, this more inclusive definition may have influenced estimated aetiologic distributions."

4. Lines 67 - 68: this statement regarding WHO needs to be substantiated. When considering the diagnostic performance of treatment algorithms and flowcharts, the WHO guideline development group reviewed the relative prevalence of aetiologies per syndrome in order to determine the predictive value of treatment algorithms (refs 25 - 27).

Response: We acknowledge and agree that prevalence data were reviewed to inform development of the guidelines; however, aetiologic distributions were not quantified and geographic differences were not explicitly presented. We have updated the statement to reflect this (line 90-92): "Although the guidelines accounted for changes over time in the underlying causes of each syndrome, they lacked a comprehensive review and quantification of the STI distribution among symptomatic populations in different geographic areas."

Methods:
5. Lines 106 - 107: how was a sample size of 10 considered appropriate for inclusion? RTI estimates are likely to be very imprecise. The risk of bias assessment tool (Table S8) states that the

10. The numbering of figures in the main text goes off -- Fig 1, 2, are repeated twice.

Response: We thank the reviewer for noting this error, it has been corrected.
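As referenced in the response to Reviewer 5's comment 6 above, tabulations from an RDS survey are typically design-weighted before being pooled with other study data. A minimal sketch of such weighting in Python, with per-participant weights and outcomes invented purely for illustration (the actual CeSHHAR weighting procedure is the one described in Text S1):

    import numpy as np

    # Hypothetical per-participant RDS design weights and binary test
    # results (1 = pathogen detected); values are invented for illustration
    weights = np.array([1.2, 0.8, 2.5, 1.0, 0.5, 1.8])
    positive = np.array([1, 0, 1, 0, 0, 1])

    # Design-weighted diagnosed proportion
    weighted_prop = np.average(positive, weights=weights)

    # Kish effective sample size: the precision cost of unequal weights
    n_eff = weights.sum() ** 2 / (weights ** 2).sum()

    print(f"weighted proportion: {weighted_prop:.3f}")  # 0.705
    print(f"effective sample size: {n_eff:.1f} of n = {len(weights)}")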
Olfactory Mucosa Mesenchymal Stem Cells Alleviate Cerebral Ischemia/Reperfusion Injury Via Golgi Apparatus Secretory Pathway Ca2+-ATPase Isoform 1

Olfactory mucosa mesenchymal stem cells (OM-MSCs) have exhibited their effectiveness in central nervous system diseases and provide an appealing candidate for the treatment of ischemic stroke. Previous evidence has shown that the Golgi apparatus (GA) secretory pathway Ca2+-ATPase isoform 1 (SPCA1) is a potential therapeutic target for ischemic stroke. In this study, we explored the neuroprotective mechanism of OM-MSCs and their effect on the expression and function of SPCA1 during cerebral ischemia/reperfusion. Based on in vitro and in vivo experiments, we discovered that OM-MSCs attenuated apoptosis and oxidative stress in ischemic stroke models, reduced the cerebral infarction volume, and improved the neurologic deficits of rats. OM-MSCs also upregulated SPCA1 expression, alleviated Ca2+ overload, and decreased the edema and dissolution of the GA in neurons. Moreover, we discovered that SPCA1 depletion in oxygen and glucose deprivation/reoxygenation (OGD/R)-treated N2a cells mitigated the protective effects of OM-MSCs. Altogether, OM-MSCs exerted neuroprotective effects in ischemic stroke probably via modulating SPCA1 and reducing the edema and dissolution of the GA in neurons.

INTRODUCTION

Stroke is a leading cause of death and disability worldwide, and ischemic stroke accounts for approximately 71% of all stroke types. In 2017, the global incidence of ischemic stroke events was about 7.7 million, with 2.7 million deaths (GBD 2017 Disease and Injury Incidence and Prevalence Collaborators, 2018; Campbell et al., 2019). However, the available therapy options regarding ischemic stroke have limited effects. Current treatments for acute ischemic stroke are based on reperfusion through thrombolysis or endovascular therapy, but both therapies are limited by the therapeutic time window: thrombolysis by the recombinant tissue plasminogen activator (rtPA) is required within 4.5 h of onset; endovascular therapy, 6 h. Although endovascular therapy can be extended to 24 h if the patient meets the inclusion criteria (Nogueira et al., 2018), only a few patients can actually benefit from it on account of the strict inclusion criteria and the narrow therapeutic time windows. Therefore, many researchers are actively looking for other effective treatments for ischemic stroke, such as cell therapy. Previous preclinical research has shown that stem cell transplantation could lead to functional improvement of ischemic stroke animal models (Wei et al., 2017; Boncoraglio et al., 2019). The stem cells involved in these studies included neural stem cells (NSCs; Boese et al., 2018), mesenchymal stem cells (MSCs; Kranz et al., 2010; Oshita et al., 2020), induced pluripotent stem cells (iPSCs; Gervois et al., 2016), and so on. Due to the diversity of access sources, multiple differentiation potential, and the plasticity of function, MSCs have become an appealing stem cell candidate for the treatment of ischemic stroke (Stonesifer et al., 2017; Wang et al., 2018). Olfactory mucosa mesenchymal stem cells (OM-MSCs), first identified by Tome et al. (2009), are a type of Nestin-positive MSCs that reside in the lamina propria of the olfactory mucosa, having the potential to differentiate into osteocytes, adipocytes, smooth muscle cells, and neurons (Delorme et al., 2010).
OM-MSCs are easily accessible, exhibit an extensive proliferation rate, and eliminate ethical concerns compared with the other stem cell types (Lindsay et al., 2020). Moreover, OM-MSCs promoted central nervous system myelination in vitro by secretion of the chemokine CXCL12, an effect not observed with bone marrow mesenchymal stem cells (BM-MSCs). Another study demonstrated that OM-MSCs had a stronger secretion of immunosuppressive cytokines than adipose-derived mesenchymal stem cells (AD-MSCs; Jafari et al., 2020). The aforementioned advantages supported the idea that OM-MSCs may be an appealing candidate for cell therapy in the treatment of human diseases. Accumulating evidence showed that OM-MSCs exhibited effectiveness and potential in central nervous system diseases, including spinal cord injury, early-onset sensorineural hearing loss, and hippocampal lesions (Zhuo et al., 2017). Huang et al. have concluded that OM-MSCs could inhibit pyroptotic and apoptotic death of microglial cells during ischemia/reperfusion. However, the impact of OM-MSCs on neuronal injury in ischemic stroke remains unclear. At present, the inhibition of reperfusion injury is the key to the treatment of ischemic stroke. Intracellular oxidative stress and Ca2+ overload are the pivotal pathological processes of cerebral ischemia/reperfusion injury (IRI), leading to irreversible neuronal damage. Apart from mitochondria and lysosomes, the Golgi apparatus (GA) also participates in the process of oxidative stress. Jiang et al. (2011) presented the concept of "GA stress": under oxidative stress, the activity of the Ca2+-ATPase in the GA, as well as the morphology and membrane surface components of the GA, change correspondingly. There are Ca2+ release channels and Ca2+ uptake mechanisms in the GA (Li et al., 2013). The Golgi-resident secretory pathway Ca2+-ATPase (SPCA), which is highly expressed in the brain, is mainly responsible for transporting Ca2+ from the cytoplasm to the Golgi lumen and is involved in cytosolic and intra-Golgi Ca2+ homeostasis (He and Hu, 2012; Li et al., 2013). SPCA comprises secretory pathway Ca2+-ATPase isoform 1 (SPCA1) and SPCA2, encoded by ATP2C1 and ATP2C2, respectively (Hu et al., 2000; Xiang et al., 2005). SPCA1 is well understood, while the function of SPCA2 is rarely studied. Previous studies have found that oxidative stress may downregulate the expression of SPCA1 in ischemia/reperfusion rats (Pavlíková et al., 2009; Li et al., 2015; Fan et al., 2016). Besides, SPCA1 was found to be able to protect cells from oxidative stress by interacting with the HSP60 gene (Uccelletti et al., 2005), while the inhibition of SPCA1 function could lead to apoptosis in N2a cells (Sepulveda et al., 2009) and mice (Okunade et al., 2007). Furthermore, the inactivation of SPCA1 could induce alterations of mitochondrial structure and metabolism, which would make the mitochondria more sensitive to oxidative stress (He and Hu, 2012). Based on existing evidence, improving the expression and function of SPCA1 was expected to be a therapeutic target for cerebral IRI. In the present study, we explored the neuroprotective mechanism of OM-MSCs during cerebral ischemia/reperfusion and its effect on the expression as well as function of SPCA1, and further investigated the role of SPCA1 knockdown in the neuroprotective effect of OM-MSCs on cerebral IRI.
MATERIALS AND METHODS

Ethics Statement

Olfactory mucosa mesenchymal stem cells were obtained from two healthy male volunteers for scientific purposes (21 and 28 years old, respectively) at the Second Affiliated Hospital of Hunan Normal University. Human nasal mucosa biopsies were performed by otolaryngology endoscopy operation at the Department of Otolaryngologic Surgery, the Second Affiliated Hospital of Hunan Normal University (Changsha, China). Written informed consent was given by each individual participating in the study before the operation, in accordance with the Helsinki Convention (1964). The investigators and all procedures were approved by the ethics committee of Hunan Normal University (ethical approval document no. 2018-30).

Isolation and Characterization of OM-MSCs

The isolation and culture of OM-MSCs were carried out using a protocol from a previous study (Ge et al., 2016). Briefly, olfactory tissue samples were obtained from the root of the medial aspect of the middle turbinate during endoscopic nasal surgery, washed three times at room temperature with penicillin-streptomycin solution (Invitrogen, Carlsbad, CA, United States), and then cultured in Dulbecco's modified Eagle's medium/nutrient mixture F12 (DMEM/F12; Invitrogen, United States) containing 10% fetal bovine serum (FBS; Gibco, Australia) and incubated at 37 °C in 5% CO2. OM-MSCs at passages 3 and 4 were used for further experiments. Cell surface markers were used to characterize OM-MSCs by flow cytometric analysis.

Oxygen and Glucose Deprivation/Reoxygenation

Mouse N2a cells were purchased from the Cell Storage Center of the Chinese Academy of Sciences (Shanghai, China). N2a cells were cultured in DMEM (Sigma, United States) supplemented with 10% FBS (Gibco, Australia) at 37 °C in 5% CO2. To achieve ischemic-like conditions in vitro, the oxygen and glucose deprivation/reoxygenation (OGD/R) model was established as previously described (Huang and Hu, 2018). Briefly, the N2a cells were placed in a modular incubator chamber (Billups-Rothenberg, Inc., Del Mar, CA, United States), which kept the pO2 value consistently below 0.5%. The culture medium was replaced with deoxygenated glucose-free Hanks' Balanced Salt Solution (Sigma). The cells were maintained in the hypoxic and glucose-free chamber for 4 h. After OGD, the N2a cells were quickly transferred to DMEM without FBS and incubated under normoxic conditions for 0, 4, 12, and 24 h. N2a cells cultured with DMEM containing 10% FBS in normoxia (5% CO2, 37 °C) were used as normal controls.

Co-culture of N2a Cells and OM-MSCs

The co-culture system was set up as previously described (Wei et al., 2019). In brief, the N2a cells (1 × 10^5) grown in six-well plates were subjected to stress by the OGD method, as described above. At the same time as reoxygenation began, the N2a cells were rescued by plating 1 × 10^5 OM-MSCs (OM-MSCs:N2a = 1:1) on Transwell membrane inserts (pore size, 0.4 µm; polycarbonate membrane; Corning, United States) and incubating for 24 h. During reperfusion, DMEM without FBS was used as the conditioning medium.

Animals

All animal procedures were approved by the Laboratory Animal Ethics Committee of the Second Affiliated Hospital of Hunan Normal University (ethical approval document no. 2020-164). All experimental procedures were performed in accordance with the Guide for the Care and Use of Experimental Animals. Male Sprague-Dawley rats weighing 250-300 g were kept under controlled housing conditions with a 12-h light/dark cycle with food and water ad libitum.
Rat Reversible Middle Cerebral Artery Occlusion Model and OM-MSC Transplantation

The right reversible middle cerebral artery occlusion (MCAO) model was established as previously described (Longa et al., 1989). Briefly, rats were fasted for 12 h before surgery with water accessible. Rats were initially anesthetized with 3.5% isoflurane and maintained with 1.0-2.0% isoflurane in 2:1 N2O/O2 using a face mask. The right common carotid artery (CCA), internal carotid artery (ICA), and external carotid artery (ECA) were separated, and an incision was made on the carotid artery using ophthalmic scissors. A surgical filament (0.26-mm diameter; Beijing Cinontech Co. Ltd.) was inserted into the ICA from the incision of the CCA, with the length of the inserted line being 18-20 mm. Resistance implied that the line had reached the beginning of the right middle cerebral artery (MCA), thus blocking the blood flow of the vessel. The filament was withdrawn after 120 min, after which the skin wound was sutured. The body temperature of the rats was maintained at 37 ± 0.5 °C during the whole procedure. For analgesia, post-surgery rats were given a subcutaneous injection of morphine (2.5 mg/kg) every 4 h for 1 day following MCAO. In total, 72 adult male Sprague-Dawley rats were randomly divided into three groups: sham operation group (sham), MCAO group (ischemia/reperfusion, I/R), and MCAO + OM-MSC group (transplantation; n = 24 animals per group). In the transplantation group, the rats received a tail vein injection of 5.0 × 10^6 OM-MSCs dissolved in 1 ml saline at 24 h after MCAO model induction, while the rats in the I/R group received a tail vein injection of 1 ml saline. Rats in each group were sacrificed after anesthesia for experiments on day 7 after reperfusion. All experimental procedures were performed by investigators blinded to group allocation.

Inclusion and Exclusion Criteria

The inclusion and exclusion criteria of the I/R and transplantation groups were based on the Zea-Longa score when the rats were awake after the operation (Longa et al., 1989). The scoring criteria were as follows: 0 points = no neurological symptoms; 1 point = the left forelimb cannot be fully extended; 2 points = Sprague-Dawley (SD) rats rotate to the ischemic side while walking (moderate neurological deficit); 3 points = SD rats fall to the ischemic side when standing; and 4 points = SD rats cannot walk on their own and lose consciousness. Specifically, SD rats with a score of 1-3 were used in the subsequent experiments, while SD rats that died, or scored 0 or 4, were dropped. To compensate for dropouts, three additional animals were enrolled in the study population, resulting in an overall study population of 75 rats.

CCK-8 Assay

Cell viability in each group was measured using the Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Dojindo, Japan) according to the manufacturer's protocol.

LDH Measurement

Immediately following OGD/R, the culture supernatants were collected and, subsequently, the level of lactate dehydrogenase (LDH) was detected using the LDH Cytotoxicity Assay Kit (Nanjing Jiancheng Bioengineering Institute, Jiangsu, China) according to the manufacturer's protocol.

Apoptosis Measurement

Apoptosis of N2a cells and the ipsilateral cortex of SD rats was detected via annexin V-fluorescein isothiocyanate (FITC) and propidium iodide (PI) double staining using a FITC Annexin V Apoptosis Detection Kit I (KeyGen Biotech, Jiangsu, China) according to the manufacturer's instructions.
The fluorescence was measured by flow cytometry (Beckman, United States).

TUNEL and NeuN Double Immunostaining

Apoptosis of neurons in the ipsilateral cortex of SD rats was evaluated by terminal-deoxynucleotidyl transferase-mediated nick-end labeling (TUNEL) and NeuN double immunostaining according to the manufacturer's protocol. The brain sections from each group were incubated with TUNEL reaction mixture (Beyotime, Shanghai, China) for 1 h at room temperature and then stained with anti-NeuN (ab177487, Abcam, Cambridge, United Kingdom) and DAPI (Wellbio, China). The slides were observed using a fluorescence microscope (Motic, China).

ROS Measurement

Intracellular reactive oxygen species (ROS) were detected using an oxidation-sensitive fluorescent probe (2′,7′-dichlorodihydrofluorescein diacetate, DCFH-DA). Following OGD/R, reactive oxygen species detection was performed using a fluorescent probe DCFH-DA kit (Beyotime). The cells were washed twice with phosphate-buffered saline (PBS) and subsequently incubated with 10 µmol/L DCFH-DA at 37 °C for 20 min. After washing three times, the fluorescence was measured by flow cytometry (Beckman).

LPO and T-SOD Measurements

The ipsilateral cortex of the SD rats from each group was used for total superoxide dismutase (T-SOD) and lipid peroxidation (LPO) measurements. The levels of LPO and T-SOD were detected using lipid peroxidation and T-SOD assay kits (Nanjing Jiancheng Bioengineering Institute) according to the manufacturer's instructions.

Western Blot Assay

N2a cells and the ipsilateral cortex of SD rats were processed for Western blot as described (Fan et al., 2016). Immunoblot analyses were performed using the following primary antibodies: anti-SPCA1 (ab126171, Abcam) and anti-β-actin (60008-1-Ig, Proteintech, United States). The anti-rabbit IgG and anti-mouse IgG secondary antibodies were obtained from Proteintech. The proteins were visualized using an enhanced chemiluminescence (ECL) detection kit (Advansta Inc., United States).

Intracellular Ca2+ Measurement

For N2a cells, Golgi vesicles were isolated by a GA protein extraction kit (BestBio, Hunan, China) according to the manufacturer's instructions. The concentrations of Ca2+ in the cytoplasm and Golgi vesicles were detected using the Ca2+ Assay Kit (Nanjing Jiancheng Bioengineering Institute) according to the manufacturer's protocol. For the ipsilateral brain samples of SD rats, intracellular Ca2+ was measured in Fluo-3/acetoxymethyl (AM)-loaded cells by flow cytometry. Briefly, the brain tissues were digested and then incubated with 5 µM Fluo-3/AM (Beyotime) at 37 °C for 0.5 h according to the instructions of the manufacturer. After washing and resuspension in PBS, intracellular Ca2+ levels were measured at an excitation wavelength of 488 nm and an emission wavelength of 530 nm using a flow cytometer (Beckman).

Electron Microscope Test

Electron microscope specimens were prepared as previously described (Fan et al., 2016) and then observed with a Hitachi HT7700 transmission electron microscope (Tokyo, Japan). Section analyses were all performed under the same intensity conditions and the same magnification of the electron microscope.

Infarct Volume Analysis

The rats were sacrificed on day 7 after reperfusion and the brains were removed quickly. Infarct volumes were measured by 2,3,5-triphenyltetrazolium chloride (TTC) staining. All brain slices of rats from each group (n = 3 animals per group) were used to perform TTC staining.
Slices were incubated in 2% TTC solution for 30 min at 37.0 °C, then fixed in 10% formalin, and the border zone of infarction was outlined with Image-Pro Plus Analysis Software (Media Cybernetics, Bethesda, MD, United States). The analysis was done by investigators who were blinded to the experimental groups.

Behavioral Analysis

The modified neurologic severity score (mNSS) and rotarod treadmill were used to evaluate the neurological deficits of rats in each group before they were killed. All rats in each group received behavioral analysis on days 0 (pre-MCAO), 1, 3, and 7 after reperfusion. The mNSS consists of motor, sensory, reflex, and balance tests and was used to grade neurological function on a scale of 0-18. For the rotarod treadmill, the rats were placed on rotating rods which accelerated at 3-20 rpm for 5 min. The time that the animal remained on the rod was the measured parameter. Two observers blinded to the treatment and grouping were assigned to conduct the behavioral analysis.

shRNA Knockdown of SPCA1

For short hairpin RNA (shRNA) knockdown, we chose the shRNA target sequence 5′-ccggccTGCGGACTTACGCTTATTTctcgagAAATAAGCGTAAGTCCGCAggtttttg-3′. N2a cells were silenced with SPCA1 shRNA by using a shRNA transfection kit according to the manufacturer's instructions (GIEE, China). The efficiency of the knockdown of SPCA1 in N2a cells was verified by qPCR and Western blot.

Statistical Analysis

All statistical analyses were performed using SPSS statistical software (SPSS, Inc., Chicago, IL, United States). After testing for normal distribution, data from two independent groups were analyzed using the Mann-Whitney test. For three or more groups, the Kruskal-Wallis test was performed, followed by post hoc analysis using Tukey's test. All data are expressed as mean ± SEM. Differences between mean values were considered significant at P < 0.05.
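To make the statistical workflow above concrete, here is a minimal sketch of the named nonparametric tests in Python with SciPy; the measurement arrays are invented purely for illustration, and a post hoc pairwise procedure following a significant Kruskal-Wallis result would come from a separate package:

    from scipy import stats

    # Hypothetical per-animal measurements (illustrative values only)
    sham = [0.91, 0.88, 0.95, 0.90, 0.93]
    ir = [0.55, 0.61, 0.49, 0.58, 0.52]
    tx = [0.74, 0.70, 0.79, 0.72, 0.76]  # OM-MSC transplantation group

    # Two independent groups: Mann-Whitney U test
    u_stat, p_mw = stats.mannwhitneyu(sham, ir, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.4f}")

    # Three or more groups: Kruskal-Wallis H test
    h_stat, p_kw = stats.kruskal(sham, ir, tx)
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")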
RESULTS

Characterization of OM-MSCs

The morphology of OM-MSCs was typically fibroblastic or spindle-shaped, as shown in Figure 1A. The immunophenotype of OM-MSCs identified by flow cytometry exhibited positive expression of MSC markers (CD44, CD73, CD90, CD105, CD133, and CD146) and negative expression of hematopoietic stem cell (HSC) markers (CD34 and CD45; Figure 1B).

OM-MSCs Ameliorated OGD/R-Induced Apoptosis and Oxidative Stress in N2a Cells

Oxygen and glucose deprivation/reoxygenation-induced N2a cell injury was used as a classical model to mimic cerebral IRI in vitro. After 4 h of OGD exposure, we treated N2a cells with different time courses of reoxygenation. Cell viability was significantly decreased over time compared with the normal group (Figure 2A), while the LDH, apoptosis rate, and ROS level were apparently increased and reached the highest changes at the 24-h reoxygenation time point (Figures 2B-F). Thereby, a 4-h OGD treatment followed by 24-h reoxygenation was applied for further experiments. We then used a Transwell device to co-culture N2a cells with OM-MSCs to investigate whether OM-MSCs could rescue N2a cells from OGD/R injury. The results are shown in Figure 3. OM-MSC co-culture notably reversed the decline in cell viability after OGD/R insult; meanwhile, the upregulated LDH production, apoptosis rate, as well as ROS level were also markedly reduced via OM-MSC co-culture upon OGD/R-induced injury (Figures 3A-F).

OM-MSCs Alleviated Neuronal Apoptosis and Oxidative Stress in I/R Rats

We then established the MCAO rat model to achieve cerebral IRI in vivo. The apoptosis rate in the I/R group was significantly higher than that in the sham group. NeuN and TUNEL double immunostaining further verified the increased neuronal apoptosis in I/R rats. Meanwhile, neuronal apoptosis induced by cerebral ischemia/reperfusion was notably alleviated by OM-MSC injection in the transplantation group rats (Figures 4A-C). We also examined the oxidative stress level in each group of rats. Compared with the sham group, the LPO level was elevated while the T-SOD level was reduced in the I/R group. The opposite alterations of these two indicators supported that cerebral ischemia/reperfusion could indeed lead to increased levels of oxidative stress. Likewise, OM-MSC injection could downregulate the LPO level and upregulate the T-SOD level in I/R rats (Figures 4D,E).

OM-MSCs Reduced Cerebral Infarction Volume and Improved Neurologic Deficits in I/R Rats

The cerebral infarction volume in each group was examined by TTC staining. No infarction was observed in the sham group, while a white infarct lesion occurred in the I/R and transplantation groups, suggesting successful establishment of the MCAO model in rats. The infarction size in the transplantation group was significantly diminished, indicating that OM-MSC injection was able to reduce the cerebral infarction volume (Figures 5A,B). Behavioral function was evaluated by the mNSS and rotarod treadmill. The mNSS was remarkably increased in the I/R 1-day, I/R 3-day, and I/R 7-day groups compared with that of the sham group and notably decreased in the transplantation 3-day and transplantation 7-day groups compared with that in the I/R group (Figure 5C). The rotarod treadmill results were also obviously improved by OM-MSC transplantation at 3 and 7 days (Figure 5D). In brief, our results confirmed that OM-MSC injection could improve neurologic deficits in I/R rats.

OM-MSCs Upregulated SPCA1 Expression in OGD/R-Treated N2a Cells and I/R Rats

The expression level of SPCA1 in N2a cells was identified to be decreased after OGD/R insult compared with the normal group, the alteration reaching a maximum at 24-h reoxygenation at both the mRNA and protein levels (Figures 6A-C). OM-MSCs were able to upregulate the expression of SPCA1 in N2a cells after OGD/R injury (Figures 6D-F). Similarly, cerebral ischemia/reperfusion contributed to a remarkable decline in the mRNA and protein levels of SPCA1 in the I/R group compared with the sham group, and OM-MSC transplantation was capable of upregulating SPCA1 protein expression in I/R rats (Figures 6G-I).

OM-MSCs Attenuated Ca2+ Overload and Improved GA Morphology in OGD/R-Treated N2a Cells and I/R Rats

Because the function of SPCA1 is associated with intracellular Ca2+ homeostasis, we subsequently measured the intracellular Ca2+ concentrations and discovered a notably increased Ca2+ concentration in the cytoplasm after OGD/R exposure and a decreased Ca2+ concentration in the GA, both of which reached the highest changes at the 24-h time point (Figures 7A,B). Interestingly, after OM-MSC co-culture following OGD/R insult, the increase in cytoplasmic Ca2+ concentration was obviously alleviated (Figure 7C), while the Ca2+ concentration in the GA was upregulated (Figure 7D). We also assayed the intracellular Ca2+ concentration in the rats from each group and observed an elevated Ca2+ concentration in I/R rats; likewise, OM-MSC transplantation could notably repress the increase of Ca2+ concentration in I/R rats (Figures 7E,F).
Moreover, we examined the GA ultramicrostructure changes of neurons using an electron microscope. As visualized in Figure 7G, neurons in the sham group had a GA with normal morphology and structure, accompanied by the endoplasmic reticulum, lysosomes, mitochondria, nerve microfilaments, neural tubes, and a complete double nuclear membrane. In the I/R group, the GA was swollen and dissolved, other organelles were also fractured, and the nuclear membrane became blurred. However, the GA was less edematous in the transplantation group. Collectively, the ultramicropathological changes of the GA in the transplantation group were less pronounced compared with those in the I/R group.

FIGURE 5 | Olfactory mucosa mesenchymal stem cell (OM-MSC) injection reduced the cerebral infarction volume and alleviated neurologic deficits in ischemia/reperfusion (I/R) rats. (A,B) The infarct volume was determined by 2,3,5-triphenyltetrazolium chloride (TTC) staining (n = 3 animals per group). (C,D) The line charts show the results of the modified neurologic severity score (mNSS; n = 12 animals per group) and rotarod treadmill (n = 12 animals per group). Data are shown as the mean ± SEM. *p ≤ 0.05, ***p < 0.001, compared with the sham group; &p ≤ 0.05, &&p < 0.01, and &&&p < 0.001, compared with the I/R group.

FIGURE 7 | Olfactory mucosa mesenchymal stem cells (OM-MSCs) attenuated Ca2+ overload and improved Golgi apparatus (GA) morphology in oxygen and glucose deprivation/reoxygenation (OGD/R)-treated N2a cells and ischemia/reperfusion (I/R) rats. (A,B) Ca2+ concentrations in the cytoplasm and GA of OGD/R-treated N2a cells at different time points were determined by the Ca2+ Assay Kit. (C,D) Ca2+ concentrations in the cytoplasm and GA of N2a cells in the normal, OGD4h/R24h, and OM-MSC co-culture groups were measured by the Ca2+ Assay Kit. (E,F) Intracellular Ca2+ of rats' ipsilateral brain samples in the sham, I/R, and OM-MSC transplantation groups was detected by flow cytometry analysis using a Fluo-3/AM kit. (G) Representative image of GA ultramicrostructure changes obtained using an electron microscope (scale bar, 2 µm). The GA is indicated by the magenta arrow. Data are shown as the mean ± SEM based on three independent experiments. *p ≤ 0.05, compared with the normal or sham group; &p ≤ 0.05, compared with the OGD4h/R24h or I/R group.

OM-MSCs Protected N2a Cells From OGD/R-Induced Injury Through Modulating SPCA1

To explore the role of SPCA1 in the neuroprotective effect of OM-MSCs against cerebral IRI, a plasmid containing the SPCA1 shRNA sequence was constructed and transfected into N2a cells before the experiment. The transduction results were verified by PCR as well as Western blot analysis, as shown in Figures 8A-C. Transfection with SPCA1 shRNA contributed to a notable decrease in the mRNA and protein levels compared with control shRNA. Compared with the control shRNA group, the apoptosis rate, ROS levels, LDH production, as well as the Ca2+ concentration in the cytoplasm of N2a cells induced by OGD/R insult were apparently increased in the SPCA1 shRNA group, and SPCA1 depletion in N2a cells mitigated the protective effects of OM-MSCs following OGD/R injury (Figures 8D-I). Meanwhile, after OGD/R injury, the Ca2+ concentration in the GA of N2a cells was significantly lower in the SPCA1 shRNA group than in the control shRNA group, and SPCA1 knockdown in N2a cells restricted the capacity of OM-MSCs to upregulate the Ca2+ concentration in the GA of N2a cells after OGD/R insult (Figure 8J). Overall, the above findings showed that OM-MSCs protected N2a cells from OGD/R-induced injury probably through modulating SPCA1.

DISCUSSION

We successfully established in vivo and in vitro models of cerebral IRI, as previously described (Longa et al., 1989; Huang and Hu, 2018).
It is well known that OGD/R-induced cell injury is mainly characterized by decreased cell viability and increased apoptosis rate and LDH release (Huang and Hu, 2018; Ma et al., 2020), and our results were consistent with previous studies. Meanwhile, significant cerebral infarction lesions were observed by TTC staining of the rats' brain tissues, suggesting the establishment of a successful in vivo model. Oxidative stress, induced by ROS during cerebral ischemia and especially reperfusion, is important in the pathological process of ischemic stroke and is critical for the cascade development of cerebral IRI (Wu et al., 2020). It results in LPO, apoptosis, and, ultimately, neuronal death, together with other pathophysiological mechanisms. We also found that the ROS and LPO levels were increased while the SOD levels were decreased in our models of cerebral IRI. Mesenchymal stem cells have exhibited therapeutic properties in IRI because of their paracrine activity, cell-cell interaction, anti-inflammatory activity, and immunomodulatory effects (Souidi et al., 2013; Barzegar et al., 2019; Oliva, 2019; Tobin et al., 2020). Leu et al. (2010) demonstrated that intravenous injection of AD-MSCs significantly attenuated oxidative stress in an experimental ischemic stroke model. Calio et al. (2014) concluded that the transplantation of BM-MSCs decreases oxidative stress and apoptosis in the brain of stroke rats. Alhazzani et al. (2018) found that MSC co-culture could protect against Ca2+- and oxidant-mediated damage in SH-SY5Y-differentiated neuronal cells. Similarly, our results also showed that OM-MSCs were able to downregulate ROS as well as LPO levels and upregulate antioxidant SOD levels in the cerebral IRI models, eventually reducing neuronal apoptosis and infarction volume. Consequently, we fully believe that OM-MSCs could also confer cerebral protection against IRI by suppressing oxidative stress. In recent years, with the concept of "GA stress" proposed, the complex role of the GA in oxidative stress has been gradually recognized (Jiang et al., 2011; Li et al., 2016). Based on the findings of Jiang et al. (2011) and He and Hu (2012), as one of the Ca2+ transporters of the GA, SPCA1 plays an important role in the process of the GA maintaining intracellular Ca2+ homeostasis under physiological conditions. However, the activity and expression of SPCA1 were decreased during cerebral ischemia/reperfusion, and its ability to take up intracellular Ca2+ was also impaired, leading to intracellular Ca2+ overload (Pavlíková et al., 2009; Li et al., 2015; Fan et al., 2016). It is well known that Ca2+ overload is another fatal molecular event in cerebral IRI (Kalogeris et al., 2016; Radak et al., 2017). Sustained excessive intracellular calcium levels often cause neuronal cell hypercontracture, proteolysis, and, eventually, death (Pittas et al., 2019).
During reperfusion, the metabolites produced by oxidative stress destroy the integrity of the cell membrane and organelle membranes, and Ca2+ release channels on the cell and organelle membranes are opened, resulting in Ca2+ influx into the cytoplasm from the extracellular environment and the endoplasmic reticulum or sarcoplasmic reticulum (Li et al., 2015). More importantly, Ca2+ overload can also enhance oxidative stress, their interaction promoting the pathological process of the IRI cascade (Jiang et al., 2011). In this paper, we discovered a decrease in SPCA1 expression, an increase in cytoplasmic Ca2+ levels, and a decrease in GA Ca2+ levels in the ischemic stroke model, which were in line with previous studies. Besides, GA fragmentation, another typical manifestation of GA stress in ischemic stroke, is often induced by oxidative stress and apoptosis (Zhong et al., 2015; Zhang et al., 2019). The damage of microtubule proteins mainly contributes to the fragmentation and even dissolution of the GA during oxidative stress and apoptosis. Our results showed that the GA was swollen and dissolved in the neurons of I/R rats, which is in accordance with existing evidence. Previous research on the neuroprotective role of MSCs at the subcellular organelle level in ischemic stroke has focused on the mitochondria and endoplasmic reticulum (Xing et al., 2016; Mahrouf-Yorgov et al., 2017); the impact of stem cell therapy on the function and morphology of the GA after cerebral IRI remained unexplored. Based on the results above, our results demonstrated for the first time that OM-MSCs were able to upregulate SPCA1 expression, rescue its function of maintaining intra-Golgi Ca2+ homeostasis, and reduce the edema and dissolution of the GA in neurons of ischemic stroke models. Since SPCA1 has previously been shown to exhibit anti-oxidative-stress and anti-apoptotic effects in ischemic stroke (Uccelletti et al., 2005; Sepulveda et al., 2009), and the upregulation of SPCA1 expression and other neuroprotective effects of OM-MSCs in an ischemia/reperfusion model were confirmed by our results, we asked whether the neuroprotective effect of OM-MSCs on cerebral IRI was associated with their ability to upregulate SPCA1 expression and rescue its function in neurons. Subsequently, we used SPCA1 shRNA to construct a SPCA1 knockdown model in N2a cells and found that SPCA1 shRNA partly restricted the capacity of OM-MSCs to alleviate OGD/R-induced apoptosis and elevated ROS levels, LDH production, as well as intracellular Ca2+ overload. The above findings suggested that the expression and function of SPCA1 in neurons are relevant to the neuroprotective effect of OM-MSCs on cerebral IRI. Nevertheless, we also observed that OM-MSCs still had a partial protective effect, including reduced apoptosis and ROS production and regulated Ca2+ concentration in the cytoplasm, on OGD/R-induced cell injury when SPCA1 was knocked down. We considered that this outcome was related to the functional diversity and plasticity of OM-MSCs. Previous studies have concluded that MSCs exhibit anti-oxidative, anti-apoptotic, endogenous neurogenesis, synaptogenesis, angiogenesis, anti-inflammatory, and immunomodulatory effects in ischemic stroke (Souidi et al., 2013; Hao et al., 2014), and MSCs exert their neuroprotective effects partly by secreting neurotrophic factors, such as VEGF, NGF, BDNF, and bFGF (Stonesifer et al., 2017; Barzegar et al., 2019).
The secretome of OM-MSCs has been characterized by Ge et al. (2016), whose results showed that these secreted proteins are associated with neurotrophy, angiogenesis, cell growth, differentiation, and apoptosis. Consequently, OM-MSCs may be capable of attenuating cerebral IRI partly by secreting a series of neurotrophic factors. Those molecules may reduce apoptosis and oxidative stress levels by acting on the mitochondria and endoplasmic reticulum when SPCA1 is blocked. That the function of injured mitochondria and endoplasmic reticulum can be rescued by other types of MSCs has been confirmed by previous researchers (Chi et al., 2018; Tseng et al., 2020). In terms of the regulation of intracellular Ca2+ by OM-MSCs during cerebral ischemia/reperfusion when SPCA1 was knocked down, the possible mechanisms are as follows: firstly, OM-MSCs reduced the oxidative stress level through other feasible pathways, which inhibited the Ca2+ influx from extracellular stores and the endoplasmic reticulum, eventually leading to a decline in the Ca2+ concentration of the cytoplasm. Secondly, in addition to the GA, there are also Ca2+ uptake channels in the endoplasmic reticulum membrane, which are also impaired by IRI (Jiang et al., 2011). OM-MSCs may be able to rescue these channels and subsequently reduce the Ca2+ overload in the cytoplasm to a certain extent. Accordingly, OM-MSCs could still exhibit part of their ability to reduce apoptosis as well as ROS production and regulate the Ca2+ concentration after SPCA1 knockdown. Nowadays, cell-based therapies are considered to be one of the most promising options to radically advance ischemic stroke treatment (Boltze et al., 2019), and animal models of ischemic stroke are indispensable for their translation into clinical trials. Hence, the establishment of a highly efficient and predictable animal model is conducive to improving the quality of preclinical research regarding cell therapy (Kringe et al., 2020). In the present study, the choice of male rats effectively eliminated the influence of female sex hormones on the effect of cell therapy, and the standardization of the animal housing conditions greatly reduced its impact on the neurological endpoint. Randomized grouping and allocation concealment also avoided the limitations of other confounding factors to some degree (Boltze et al., 2017; Bosetti et al., 2017). Additionally, the reperfusion model we used here was in line with the recommendations of the guidelines for the study of neuroprotective therapies in recanalization scenarios. Consequently, we believe that these advantages make our results more credible. However, there are still some limitations to this study, which are expected to be addressed in subsequent studies. The first is the small sample size. It would be valuable to examine this treatment in a large cohort for subsequent confirmation. Secondly, the mNSS and rotarod treadmill are widely used neurofunctional assessments in experimental stroke, but other evidence indicates that the mNSS and rotarod are not perfectly suited for neurofunctional assessments after stroke in the context of MSC-based therapies, since these two behavioral tests cannot distinguish recovery from compensatory behavior well (Boltze et al., 2014; Balkaya et al., 2018). Therefore, it is recommended to choose behavioral tests that are minimally affected by behavioral compensation in future experimental stroke studies, such as Montoya's staircase and the cylinder test.
Thirdly, studies on other types of cells have found that cryopreservation limited the effectiveness of those cell types (Weise et al., 2014); a similar exploration should also be carried out for cryopreserved OM-MSCs. Fourthly, no immunosuppressive agent was used, although in xenotransplantation the possible graft rejection could directly influence the therapeutic outcome of OM-MSCs. However, other investigators have recommended MSCs as a novel immunomodulatory strategy in preclinical transplantation studies due to their immunosuppressive properties (Diehl et al., 2017), suggesting that MSCs can be applied relatively safely in non-autologous approaches. Lastly, the establishment of SPCA1 gene knockout rats is promising; it will provide a more profound understanding of the mechanism of SPCA1 in the neuroprotective effect of OM-MSCs on cerebral IRI. In summary, our findings suggest that OM-MSCs may be a useful candidate for cell therapy in the treatment of ischemic stroke. OM-MSCs exert neuroprotective effects against cerebral IRI, probably via modulating SPCA1 and reducing the edema and dissolution of the GA in neurons. Further studies will be conducted to highlight the role of SPCA1 in the neuroprotection of OM-MSCs in vivo by constructing gene knockout animal models of ischemic stroke.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethical Committee of Hunan Normal University. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by the Laboratory Animal Ethics Committee of the Second Affiliated Hospital of Hunan Normal University.

AUTHOR CONTRIBUTIONS

ZH acquired the funding. JH participated in research design, all experimental procedures except the animal behavioral tests, data analysis, and drafting of the manuscript. JL participated in cell culture and animal behavioral tests. YH participated in cell culture and data analysis. YZ participated in the animal experiments and behavioral tests. WC took part in the animal experiments. DD and XT discussed the results. ZH and ML took care of all aspects, including research design, data analysis, and manuscript preparation. All authors read and approved the final manuscript.

FUNDING

This work was supported by the National Natural Science Foundation of China (no. 81974213).
Adaptation to Different Human Populations by HIV-1 Revealed by Codon-Based Analyses

Several codon-based methods are available for detecting adaptive evolution in protein-coding sequences, but to date none specifically identify sites that are selected differentially in two populations, although such comparisons between populations have been historically useful in identifying the action of natural selection. We have developed two fixed effects maximum likelihood methods: one for identifying codon positions showing selection patterns that persist in a population and another for detecting whether selection is operating differentially on individual codons of a gene sampled from two different populations. Applying these methods to two HIV populations infecting genetically distinct human hosts, we have found that few of the positively selected amino acid sites persist in the population; the other changes are detected only at the tips of the phylogenetic tree and appear deleterious in the long term. Additionally, we have identified seven amino acid sites in protease and reverse transcriptase that are selected differentially in the two samples, demonstrating specific population-level adaptation of HIV to human populations.

Introduction

Differences in allele frequencies in different populations were used as evidence of natural selection in some classic studies [1-3]. Since the first identification of positive selection within protein sequences [4,5], however, estimation of the relative frequency of synonymous and non-synonymous nucleotide substitution has become a standard tool in molecular evolutionary studies. The power of these analyses to detect selection was substantially increased through the development of codon-based likelihood models that allow selection to vary across sites [6]. We have developed a method, within a maximum likelihood framework, that combines these two approaches to yield novel insights into adaptive evolutionary differences between populations. We have applied this method to investigate the hypothesis that HIV has adapted specifically to distinct human host populations. The immune system is widely recognized as one of the factors that exert a selective effect on pathogen populations. Mutations that allow HIV-1 strains to escape MHC-restricted CTL killing arise in both acute and chronic infection [7-11], but in the absence of an immune response they can reduce fitness [12], and can be selected against on transmission [13,14]. Recently, from correlations of variability and CTL epitopes, it was suggested that adaptation to human CTL responses had led to genetic adaptations in the HIV genome [15]. Both individual- and population-based studies have found overwhelming evidence of positive selection in HIV [6,16-22] but have been unable to discriminate between possible mechanisms. Generally speaking, phylogenetic studies that rely on within-population sequence polymorphism to identify non-neutral evolution may not be able to detect selective sweeps, expressed by substitutions localized to a single branch of the phylogeny of serially sampled sequences; selected alleles driven to fixation in all sequences of the sample, resulting in within-population monomorphic sites; or the direction of selection, for instance acquisition of immune escape mutations versus reversion in the absence of selective forces [23].
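To make the synonymous/nonsynonymous distinction underlying these analyses concrete: a substitution is synonymous when the mutated codon encodes the same amino acid, and nonsynonymous otherwise. A minimal sketch in Python (assuming Biopython is available; the example codons are chosen purely for illustration):

    from Bio.Seq import Seq

    def substitution_type(codon_a, codon_b):
        """Classify a codon change as synonymous or nonsynonymous by
        comparing the encoded amino acids (standard genetic code)."""
        if str(Seq(codon_a).translate()) == str(Seq(codon_b).translate()):
            return "synonymous"
        return "nonsynonymous"

    # CTT and CTC both encode leucine; CTT -> ATT swaps leucine for isoleucine
    print(substitution_type("CTT", "CTC"))  # synonymous
    print(substitution_type("CTT", "ATT"))  # nonsynonymous

Codon-based likelihood models go far beyond this simple counting view, estimating site-specific synonymous and nonsynonymous rates on a phylogeny, but this classification is the primitive on which they build.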
We have developed a suite of fixed effects likelihood-based approaches [22,24], which we have used here to discriminate substitutions occurring on internal branches from those occurring at the tips of the tree, and to estimate whether selective pressure at a given site differs between two populations of HIV-infected individuals. This allows one to distinguish sites that are positively selected because they are associated with adaptation to the individual host from those associated with adaptation to the population.

Results

We analyzed the sequences of the protease (PR) and reverse transcriptase (RT) coding regions of HIV clade C from each of 74 individuals from KwaZulu-Natal [25] and Zambia (ZA sample), and from 63 Ethiopian Falasha immigrants arriving in Israel between 1998 and 2003 [26] (ET sample). These sequences were obtained by population-based sequencing of viral RNA PCR-amplified directly from the blood plasma of the infected individual: each sequence thus reflects the predominant nucleotides in the plasma viral population of that individual at that time. Within-host recombination will not affect a phylogeny based on these sequences. Our maximum likelihood calculations treated ambiguities as partially missing data. Amino acid positions 4 through 99 of PR and 41 through 240 of RT were analyzed; the absence of the first 40 positions of RT and of three positions of PR is an artifact of the sequencing procedure used for generating some sequences in the ET sample. The alignments used in this study did not contain any insertions or deletions, which is common for HIV pol alignments. We screened all sequences for mutations known to confer strong drug resistance and found that none of the sequences included in the study had such mutations [27][28][29].

Each sample formed a separate clade (Figure S1) in the maximum likelihood tree built from PR and RT jointly (reconstructed using the GTR+G+I model with the PhyML package [30]). This clustering was confirmed by Bayesian phylogenetic reconstruction, performed under the same model using the MrBayes software package [31]: 10^6 MCMC samples, thinned by a factor of 100, were generated, and the first 25% of the samples were discarded as burn-in. Sequence alignments and maximum likelihood trees for each sample can be downloaded (in NEXUS format) from http://www.hyphy.org/pubs/DS/sequences.tgz.

Phylogenetic methods can perform poorly if viral sequences have experienced sufficient recombination to result in discordant phylogenetic signal in different parts of the alignment [32,33]. We carried out a simple procedure, similar to the ideas in [34] and [35], to screen for evidence of recombination. Given an alignment, we split the data into two contiguous fragments, reconstructed neighbor-joining trees [36] for each segment, fitted the trees using maximum likelihood, and investigated whether having two trees improves the small-sample AIC score [37] over the model with one tree for the entire alignment. Additionally, we verified phylogenetic incongruence using the Shimodaira-Hasegawa (SH) test [38]. This procedure was carried out for all possible placements of the breakpoint. For the four alignments in this study, no two-tree model both fitted the data significantly better than the single-tree model and yielded a significant SH test result.
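To make the model-comparison step of this screen concrete, the sketch below implements only the small-sample AIC (AICc) bookkeeping; the per-tree maximized log-likelihoods are assumed to come from external neighbor-joining plus maximum likelihood fitting, and all numbers in the example are hypothetical.

```python
def aicc(log_lik: float, k: int, n: int) -> float:
    """Small-sample corrected Akaike information criterion (AICc)."""
    return -2.0 * log_lik + 2.0 * k + (2.0 * k * (k + 1.0)) / (n - k - 1.0)

def two_tree_wins(lnl_one: float, lnl_left: float, lnl_right: float,
                  k_per_tree: int, n_sites: int) -> bool:
    """Decide whether fitting separate trees to the two alignment fragments
    improves AICc over a single tree for the whole alignment. The three
    log-likelihoods are the maximized values for the full alignment and for
    the two fragments on either side of the candidate breakpoint."""
    one_tree = aicc(lnl_one, k_per_tree, n_sites)
    two_trees = aicc(lnl_left + lnl_right, 2 * k_per_tree, n_sites)
    return two_trees < one_tree

# Hypothetical numbers: a modest likelihood gain from splitting does not
# offset the doubled parameter count, so no recombination signal is reported.
print(two_tree_wins(-12500.0, -6240.0, -6255.0, k_per_tree=150, n_sites=1000))  # False
```

In practice this comparison would be repeated for every possible breakpoint, exactly as described above.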
While this screening procedure does not rule out the presence of recombinant sequences in our alignments, it suggests that the impact of recombination is insufficient to cause detectably discordant phylogenies or to heavily bias the inference procedure. As an additional check, we carried out the RDP test [39], which also failed to detect any recombination events in any of the four alignments.

Gene-by-gene maximum likelihood codon rate analyses revealed strong rate heterogeneity of both synonymous and nonsynonymous substitution rates in all four samples (Table 1). Consequently, it is imperative that site-to-site variation in synonymous rates be accounted for, lest the tests for selection suffer high rates of false positives [22,40]. Using three independent maximum likelihood methods for detecting selection at an individual site [22], we identified four sites in PR and seven sites in RT subject to significant positive selection, as detected by the consensus of the methods. Codons 12, 19, and 63 in PR were positively selected in both populations, as were codons 48, 166, 173, and 207 in RT. Other sites gave significant results using only one test, or were positively selected in only one population (Table 2).

Table 1 note: Diversity was computed by averaging pairwise phylogenetic distances between sequences, based on branch lengths fitted using the Dual MG94×(012232) GDD 3×3 codon substitution model [40]. The same model was used to infer the means and coefficients of variation (CV) of the distributions of synonymous (α) and nonsynonymous (β) substitution rates. The p-values for the test of CV(α) > 0 were computed using the likelihood ratio test as described in [40].

Synopsis

Despite the efforts devoted to surveying HIV genetic diversity and the development of an effective vaccine, there is still no consensus on the extent to which the former prejudices the latter. Experimental studies show that escape from cell-mediated immunity is selected for in HIV and SIV, and sometimes very quickly. Conversely, escape mutants may be selected against at transmission, so how much does this selection within individuals influence the genotype of the circulating HIV population overall? Kosakovsky Pond, Leigh Brown, and colleagues have developed a new statistical approach to address this question. Using sequences from the globally most abundant HIV subtype (subtype C), the authors directly compared virus of the same subtype infecting genetically different human populations. They show that at least half of the amino acid sites selected within individuals are not selected at a population level, and they identify six amino acid sites in the RT gene that are selected differentially between populations. We can now partition molecular adaptation between individual and population components for whatever genes may be included in candidate vaccines, in the target populations themselves. The methods are also important beyond the HIV world, where analogous issues arise in the more general question of the evolution of virulence in pathogens.

Individual versus Population-Level Adaptation in HIV

Sharp et al. [41] previously estimated dN/dS ratios for all branches in a phylogeny of HIV-1 and SIVcpz env sequences and correlated that quantity with the "depth" of the branch in the tree to deduce that dN/dS across the entire sequence was smaller for branches that were far from the tips of the tree.
Holmes [42] performed alignment-wide comparisons of dengue virus sequences sampled from individual hosts and from populations of infected hosts and found that average selective pressures were substantially more purifying in between-host samples. We have extended one of the methods used to detect positive selection (fixed effects likelihood) to permit the estimation of the ratio of nonsynonymous (β, or dN) and synonymous (α, or dS) substitution rates separately on internal and terminal branches of the tree connecting these sequences. This has revealed that many recent nonsynonymous substitutions, i.e., those on the terminal branches of the tree, were not represented on internal branches. For both the ZA and ET populations, there are more codons in both PR and RT with only recent nonsynonymous substitutions than there are codons with substitutions on internal branches (Figures 1 and 2). The difference was particularly striking in RT, where the ratio was 35:17 in the ZA sample and 54:16 in the ET sample. This disparity alone may be statistically insignificant because the cumulative length of internal branches in the tree is smaller than that of terminal branches (Figure S1). However, at those codons where internal substitutions are seen, the strength of selection (measured by the dN/dS ratio) along terminal branches is in all cases higher (Figures 1 and 2): in all four comparisons this difference was significant based on the likelihood ratio test (p < 0.05, using the parametric bootstrap to guard against the effect of small sample sizes). At the level of individual sites, three sites were positively selected (p ≤ 0.05) along internal branches in the ET sample and seven in the ZA sample. Simulation results (see Materials and Methods) suggest that the test is conservative for model parameters chosen to resemble those likely to have generated our samples, and is capable of reliably detecting sites that are subject to strong selection. The positive predictive value (PPV) of the test was calculated at 98.8%, hence it is unlikely that the detected sites are false positives. In particular, a high PPV estimate strongly suggests that site-wise testing procedures in this context do not require a correction for multiple testing.

Positively selected nonsynonymous substitutions on internal branches (persistent substitutions) must of necessity be adaptive at both the individual host and the population level. As we have analysed consensus sequences of the within-individual populations, the recent substitutions must have reached a high frequency in the infected individuals but are transient at the population level, suggesting their removal by purifying selection. Based on the elevated rate of adaptation within individuals detected at codons subject to population-level selection, relative to codons where only recent substitutions have been inferred, we conclude that recent substitutions are, on average, maladaptive at the level of the human population. We note that when longitudinal data are not available, comparative phylogenetic methods may be unable to detect directional selection if the population has undergone a selective sweep. The population-level adaptation inferred for our samples could also be due to transient directional selection, or to diversifying selection maintained by acquisition and transmission of escape mutants and reversion to wild type. However, because the time scales of the transmission and reversion processes are not known for this sample, a single mechanism cannot be distinguished.
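Both the site-level and the internal-versus-terminal comparisons above reduce to nested likelihood ratio tests with one degree of freedom, backed by a parametric bootstrap as a small-sample safeguard. A minimal sketch (the model fitting itself is left to external software such as HyPhy, and the example numbers are hypothetical):

```python
from scipy.stats import chi2

def lrt_pvalue(lnl_null: float, lnl_alt: float, df: int = 1) -> float:
    """Asymptotic p-value of a nested likelihood ratio test."""
    lr = 2.0 * (lnl_alt - lnl_null)
    return chi2.sf(max(lr, 0.0), df)

def bootstrap_pvalue(lr_observed: float, simulate_and_refit, reps: int = 100) -> float:
    """Parametric bootstrap: `simulate_and_refit` is a hypothetical callable
    that simulates one data set under the null MLEs, refits both models, and
    returns the resulting LR statistic. The add-one correction keeps the
    estimated p-value away from exactly zero."""
    exceed = sum(1 for _ in range(reps) if simulate_and_refit() >= lr_observed)
    return (exceed + 1) / (reps + 1)

# A likelihood gain of 2.1 log units gives LR = 4.2 and p of about 0.04.
print(lrt_pvalue(-1052.1, -1050.0, df=1))
```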
Differential Adaptive Evolution in Different Populations

Human MHC alleles are remarkably old, and some have been maintained since the human-chimpanzee divergence [43]. Thus, adaptation to human MHC alleles may not only reflect adaptation since the zoonotic transfer of HIV from chimpanzee to humans, but may also include the prior history in the chimpanzee population. However, differential adaptation to different human populations could only be due to a species-specific process. Although many comparisons between HIV sequences from different host populations are confounded by substantial phylogenetic differences between the viral populations, HIV-1 M group clade C has infected a number of ethnically distinct populations. We compared the sequence dataset from southern Africa with another subtype C dataset sampled from Ethiopian Falasha immigrants arriving in Israel between 1998 and 2003 [26]. An earlier study has shown that the Falasha (Amharic) share most genetic markers with other Ethiopian groups [44], and Ethiopian populations have quite distinct allele frequency spectra at HLA loci [45][46][47]. This comparison allows us to test the explicit hypothesis that passage through different human populations has led to adaptive divergence in the virus genome.

Table 2. Sites found to be under positive and/or differential selection.

As transient substitutions would not contribute to interpopulation adaptive divergence, only internal branches were tested for population-specific positive selection. A novel maximum likelihood test for differential selection (see Materials and Methods) permits the direct comparison of selection pressures on individual amino acid sites between populations. The test takes into account nucleotide substitution biases and weights over all possible ancestral codons, while avoiding assumptions regarding the distribution of dN and dS across sites. With this test we identified one codon in PR (60) and six codons in RT (82, 98, 165, 177, 196, and 202) as selected differentially in the two populations, at p ≤ 0.05. Thus there is evidence for differential selection between these two populations at seven codons out of the 296 compared in RT and PR. Based on the high (98.2%) PPV of the test achieved on simulated data (see Materials and Methods) and the overall low power of the test for relatively small sample sizes, we conclude that these seven codons are unlikely to be false positive results, and that they probably constitute only a portion of the codons which evolve differentially between the samples. Figures 3A and 3B (and Figures S2-S6) show a codon-based maximum likelihood reconstruction of the evolutionary history at these codon positions. We note that at many codons, evolution in both populations involves the same residues but drastically different patterns of substitution throughout the tree, with one population showing synonymous and nonsynonymous evolution along terminal branches only, while the other displays nonsynonymous substitutions along internal branches as well as ongoing evolution at the tips (e.g., Figure 3B). We also note that some sites (e.g., PR 12 and RT 48 in Table 2) show evidence of selection in both samples, but sequences appear to be driven towards different residues in each sample. Our method does not distinguish such sites as differentially selected, because they are subject to similar selective pressures in both samples, regardless of which residue appears to be selected for.
Discussion

The MHC-restricted host immune response represents a continuous selective force on pathogens whose effect is dependent on the pathogen genotype. In the case of HIV, viral escape mutations can arise soon after infection and can be transmitted onward, when their fate will depend on the MHC genotype of the new host [13]. In the absence of an active CTL response, due to MHC discordance, such escape mutations can be lost relatively quickly, implying that a second, antagonistic selective force acts on the same genetic variant, possibly replication rate [48,49]. The extent to which the HIV and other viral genomes are shaped by the human immune response will therefore depend on the balance between these two effects. Only those mutations that either do not incur a significant cost in replicative efficiency, or have a sufficiently low probability of being recognized in the human population, would persist at the population level, and such population-level adaptation would be observed on internal branches of a phylogenetic tree of viral sequences.

Analyzing viral pol gene sequences from two populations infected with HIV subtype C, we have found many codons with amino acid substitutions only at the tips. At these sites variation is much lower than at those with both internal and tip substitutions, suggesting that long-term purifying selection has removed many recent substitutions that may have arisen as adaptations to individual hosts. This suggests there are substantial long-term constraints on the extent to which the genome of HIV can be modified by human MHC-restricted immune responses. However, at seven codons there is evidence that substitutions on internal branches are selected differentially between the two human populations studied, confirming that these constraints are sequence context-specific. As the density of CTL epitopes in pol is low relative to that of other genes such as gag, tat, and nef [50,51], the level of population adaptation in other genes could well be even higher.

Our approach looks for differences in evolutionary forces exerted deep in the phylogenetic trees, which are not always readily manifested in the amino-acid composition at a given site, or in raw numbers of inferred synonymous or nonsynonymous substitutions. This approach can augment simpler but less sensitive methods [15,23], which rely on the observed amino-acid composition of a site, or on detecting mutations toward (reversion) or away from (escape) a reference sequence (e.g., a subtype consensus) thought to represent a variant with higher fitness in the absence of selection. Our methodology offers an alternative and more general approach to the "branch-site" class of methods [52], which attempt to identify site-by-site positive selection along a single branch using a random effects approach and empirical Bayes inference. For example, Travers et al. [21] used such methods to locate sites under selection along predefined branches in a phylogeny of HIV-1 sequences from different clades. Our approaches are able to test selection operating along a set of tree branches without assuming an a priori, perhaps unnecessarily restrictive, parametric form for all possible selection regimes. For instance, Bielawski and Yang [53] assumed that there are at most three modes of selection, with fixed selection strength at every site in a given mode.
In contrast, by adopting a fixed effects phylogenetic likelihood framework [22] and inferring the various selection regimes directly at every site, we can sidestep the problems inherent in model mis-specification in the context of branch-site models [54,55] and the uncertainties associated with phylogenetic empirical Bayes inference in general.

In addition, we have developed a novel test to identify differential adaptation in different populations. This test is particularly suitable for exploring the adaptation of parasites to genetically different host populations, and it allowed us to identify a subset of amino acid sites in the PR and RT coding regions of HIV that were differentially selected in two human populations. Previous studies [56] have drawn upon observed correlations between the locations of sites subject to selection to hypothesize concordant or discordant selective pressures on gene regions among populations. While suggestive, correlational studies are unable to rigorously examine two populations for selective forces that differ at the level of an individual site, or of a very short sequence region. Our test is capable of directly testing for such differences, including the case when we are only concerned with a subset of tree branches (e.g., internal branches or tips), and provides a rigorous significance level for such comparisons.

Recent studies [40] provide strong evidence that site-to-site variation in synonymous substitution rates is pervasive in many genes, especially in HIV. Furthermore, it has been shown that failure to model such rate variation can result in uncontrollable rates of false positives and can misidentify variable sites under relaxed selective constraints as sites under strong positive selection pressure [22]. We have demonstrated that the new tests yield well-controlled false positive rates and high (>95%) PPV on data simulated with parameters realistic for HIV evolution. Additionally, the methods have been implemented as part of the parallelized software package HyPhy [57], can be run very quickly (~10 min per 74-sequence sample) on a small computer cluster, and lend themselves to practical investigation of the statistical properties of the method based on simulations, which can be tailored to the specific dataset being analyzed.

Adaptation to the host occurs at many levels in HIV: to the intracellular, intra-individual, and intrapopulation levels we have added an interpopulation level. Novel statistical methodology has allowed us to discriminate adaptation occurring at the last two levels and to answer questions raised by earlier correlation studies [15,50]. We have shown that within-host adaptation is often transient and that the codons at which persistent substitutions occur (which would include the immunological footprint) are subject to a substantially stronger ongoing selective force than those at which transient substitutions are seen. The ability to distinguish transient from persistent substitutions could be important for the development of an effective vaccine [58], as well as opening new routes to the analysis of selection in other settings.

Materials and Methods

Phylogeny reconstruction and substitution models. We used an iterative process [24] to reconstruct a phylogeny of each sample and to select an appropriate nucleotide substitution model, a special case of the general time-reversible Markov model [59].
Independent substitution bias parameters and branch lengths were fitted to each alignment, using the pruning algorithm [60], modified [61] for faster evaluation of phylogenetic likelihood functions, and the numerical optimization routines implemented in HyPhy [57], to obtain maximum likelihood parameter estimates (MLEs). The MG94×REV codon model, which estimates synonymous (α_s) and nonsynonymous (β_s) rates independently at every site s of the alignment (and possibly differing between branches), was then fitted, while holding branch length and nucleotide substitution bias parameters fixed at the MLE values obtained with a nucleotide model on the entire alignment. The rate matrix for this model is a modification of the MG94 [62] model (see also [40]), allowing for variable rates across different branches in the tree and correcting for all possible nucleotide substitution biases. Its (x,y) entry is

\[
q_{xy} = \begin{cases}
\alpha_s\,\theta_{ij}\,\pi_{n,y}, & x \to y \text{ is a one-step synonymous substitution of nucleotide } i \text{ with nucleotide } j,\\
\beta_s\,\theta_{ij}\,\pi_{n,y}, & x \to y \text{ is a one-step nonsynonymous substitution of nucleotide } i \text{ with nucleotide } j,\\
0, & \text{otherwise.}
\end{cases}
\]

To ensure time-reversibility we set θ_ji = θ_ij. Because only the products of rates and times are estimable, one of the parameters θ_ij cannot be identified, and we choose to set θ_AG = 1; the θ_ij estimates are obtained from the entire alignment and reflect the rate of substituting nucleotide i with nucleotide j relative to the A↔G substitution. π_{n,y} denotes the relative frequency of the target nucleotide at position n (1, 2, 3) of codon y. For instance, the target nucleotide for the synonymous ACG → ACT substitution is T in the third codon position, and its corresponding rate is α_s θ_GT π_{3,T}. Under MG94×REV, the stationary frequency of codon y composed of nucleotides i, j, and k is the product of the constituent nucleotide frequencies, scaled to account for the stop codons; for the universal genetic code (stop codons TAA, TAG, and TGA),

\[
F_y = \frac{\pi_{1,i}\,\pi_{2,j}\,\pi_{3,k}}{1 - \pi_{1,T}\pi_{2,A}\pi_{3,A} - \pi_{1,T}\pi_{2,A}\pi_{3,G} - \pi_{1,T}\pi_{2,G}\pi_{3,A}}.
\]

Other genetic codes can easily be accommodated by adjusting the list of stop codons. The (x,y) entry of the transition probability matrix T(t) = exp(Qt) defines the probability of substituting codon x with codon y in time t ≥ 0. All data analyses were conducted on a 40-processor Linux cluster, and all simulation studies were run on 64 processors of the Swansea Blue-C IBM cluster, using the message passing interface (MPI) distributed framework.

Testing for temporal differences in evolution within a sample. Every codon s can be endowed with a single synonymous rate α_s and two nonsynonymous rates: β_s^L (for terminal branches, or leaves) and β_s^I (for internal branches). If the latter two rates differ significantly, we deduce that evolution along internal branches (historical, e.g., influenced primarily by selection for transmission in HIV) and along terminal branches (recent, e.g., influenced by within-patient evolution in HIV) is subject to differing selective constraints. Formally,

H_0 (no selective difference): α_s is free; β_s^I = β_s^L;
H_A (temporally differential evolution): α_s, β_s^I, β_s^L are free to vary. (3)

A straightforward modification of the null hypothesis can be used to test for non-neutral evolution only along the internal branches of the tree:

H_0 (neutral evolution): α_s = β_s^I; β_s^L is free;
H_A (positive or negative selection): α_s, β_s^I, β_s^L are free to vary.

We refer to the latter test as IFEL (internal fixed effects likelihood). Significance is assessed by the likelihood ratio test with one degree of freedom.
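The stationary codon frequency construction above is simple enough to spell out directly; a minimal sketch, assuming the position-specific base frequencies have already been estimated from the alignment:

```python
from itertools import product

BASES = "ACGT"
STOP_CODONS = {"TAA", "TAG", "TGA"}  # universal genetic code

def codon_frequencies(pi):
    """Stationary codon frequencies as products of position-specific
    nucleotide frequencies, renormalized so the 61 sense codons sum to one.
    `pi[n][b]` is the frequency of base b at codon position n (0, 1, 2)."""
    raw = {}
    for i, j, k in product(BASES, repeat=3):
        codon = i + j + k
        if codon in STOP_CODONS:
            continue
        raw[codon] = pi[0][i] * pi[1][j] * pi[2][k]
    total = sum(raw.values())  # equals one minus the mass on stop codons
    return {codon: f / total for codon, f in raw.items()}

# With uniform base composition every sense codon gets frequency 1/61.
uniform = [{b: 0.25 for b in BASES}] * 3
freqs = codon_frequencies(uniform)
assert abs(sum(freqs.values()) - 1.0) < 1e-12
assert abs(freqs["ACG"] - 1.0 / 61.0) < 1e-12
```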
Our simulations (see the simulation strategy details below) have shown that the use of the χ²₁ asymptotic distribution leads to a conservative test, and actual false positive rates (in our simulation scenario) are lower than the nominal significance level of the test (Figure S7). For a given sample size, the power of the test depends on the divergence level and on the disparity between the levels of selection on internal and terminal branches. For example, at p = 0.05, the overall power of the test to detect non-neutral evolution is only 25%. This rather low number can be partially explained by the large proportion of codon sites with a low degree of polymorphism; such sites are nearly impossible to classify within the current phylogenetic framework. However, if we narrow our focus to strongly selected sites (i.e., sites where K = max(β_s^I/α_s, α_s/β_s^I) ≥ 5) with an above-average level of divergence (α_s > 1), the power increases to 41%. For very strongly selected sites (K ≥ 16), the power is boosted to 68%. Overall, the PPV of the test is 98.8%.

Population level adaptation test. Given two partitions of the sites in an alignment ("tips only dN > 0", or A for brevity, and "internal dN > 0", or B), we performed a maximum likelihood fit of the MG94×REV model with each partition having a single synonymous rate (α_A, α_B) and two nonsynonymous substitution rate parameters: the rate for internal branches (β_A^I, β_B^I) and that for terminal branches (β_A^L, β_B^L). Rate parameters are shared by all codons in a partition and estimated by maximum likelihood, whilst branch length parameters are held at the values previously estimated from the entire alignment. We then tested whether the average selective pressure along terminal branches, measured by the ratio β^L/α, was different between partitions A and B; formally, the null hypothesis of the same average selective pressure (R) imposes the constraint β_A^L/α_A = β_B^L/α_B. Significance was assessed by the likelihood ratio test with one degree of freedom using the asymptotic χ²₁ distribution of the LR statistic. Note that this test is an extension of the fixed-sites approach of Yang and Swanson [63], allowing for variable selective pressures in different parts of the tree across partitions. When the sample size is small, the asymptotic distribution of the LR statistic may not be appropriate, hence we verified the significance of the test using the parametric bootstrap with 100 replicates.

Testing for differential evolution between populations. Having fitted α_s, β_s^L, β_s^I to each codon independently in sequences sampled from two different populations, we can test whether the selective pressure along internal branches was discordant between the populations. We say that differential historical evolution has acted on codon s when ω_I = β_s^I/α_s differs significantly between the two populations; formally, the null hypothesis imposes β_s^{1,I}/α_s^1 = β_s^{2,I}/α_s^2, while the alternative leaves both ratios free. Significance of the difference can be assessed assuming the χ²₁ distribution for the likelihood ratio test statistic. Our simulations (see below) suggest that the χ²₁-based determination of significance leads to a conservative test (Figure S8), and actual false positive rates (in our simulation scenario) are lower than the nominal significance level of the test. For a given sample size, the power of the test depends on the divergence level and on the disparity between the levels of internal branch selection in the two samples. The proportion of sites correctly identified as evolving under differential selection also depends on how different the ratio D = (β_s^{1,I}/α_s^1) / (β_s^{2,I}/α_s^2) is from one.
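As a sketch, this site-level comparison reduces to two unconstrained fits and one constrained joint fit, followed by a one-degree-of-freedom likelihood ratio test; the fitting functions are stand-ins for the HyPhy machinery, and the example numbers are hypothetical:

```python
from scipy.stats import chi2

def differential_selection_pvalue(lnl_free_1: float, lnl_free_2: float,
                                  lnl_shared: float) -> float:
    """`lnl_free_*` are the maximized site log-likelihoods in each sample
    with alpha, beta_I and beta_L all free; `lnl_shared` is the joint fit
    under the constraint beta_I^1/alpha^1 == beta_I^2/alpha^2."""
    lr = 2.0 * ((lnl_free_1 + lnl_free_2) - lnl_shared)
    return chi2.sf(max(lr, 0.0), df=1)

# A gain of 3 log units for the unconstrained model: LR = 6, p ~ 0.014.
print(differential_selection_pvalue(-210.0, -195.0, -408.0))
```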
For example, at a nominal p = 0.05, the overall power of the test to identify differential evolution along internal tree branches is merely 8%. The low overall power is attributable to the small extent of polymorphism at many codon positions and to the small sample sizes. However, for sites with medium to high levels of divergence (min(α_s^1, α_s^2) ≥ 1) and where max(D, 1/D) ≥ 8, the power increases to 40%. If max(D, 1/D) ≥ 32, the power goes up to 64%. Overall, the PPV of the test is 98.2%.

Multiple test correction. Likelihood ratio tests for selection at an individual site have been applied in similar contexts in at least three studies [22,64,65]. If the main objective is to test whether or not there is evidence for selection somewhere in the sequence, based on the results of a series of site-by-site tests, then one would have to employ a multiple-test correction procedure, for example the Bonferroni correction or a less conservative false discovery rate [66] approach. However, at the level of any given site, as argued in the three cited manuscripts, it is appropriate to use uncorrected p-values. Furthermore, our Type I error simulation studies (Figures S7 and S8) show that the size of the test at the level of an individual site is actually less than the nominal p-value.

Simulation strategy. Error rates and the power of the tests reported in the previous sections were derived using sequence data simulated under the following protocol. We used the trees, base frequencies, branch lengths (assuming neutral evolution), and nucleotide substitution biases fitted to the ZA RT sample from our study (74 sequences) to simulate 100 replicate alignments (200 codons in each). A neighbor-joining tree (using the Tamura-Nei distance metric [67]) was reconstructed from each data replicate and used for further inference, allowing us to investigate whether the power and error rates of the tests were unduly influenced by errors in phylogenetic reconstruction. Previous studies [22] and the simulation results presented here (Figure S7) suggest that fixed effects likelihood methods are able to infer site-specific substitution rates accurately, on average, with moderate smoothing effects for larger rates (due to a fairly small sample size). With that in mind, we set out to generate sequences under a distribution of substitution rates similar to those which have influenced our real samples. Having fitted the IFEL model (and thus three rates: α_s, β_s^L, and β_s^I) to all four samples, we pooled each type of estimated rate into seven bins, and for each codon we drew α_s, β_s^L, and β_s^I from the appropriate estimated rate distribution (also shown in Figure S8). Sampling from distributions with identical supports ensured that a sufficient proportion of sites was generated under the null distribution (e.g., α_s = β_s^I for IFEL, and β_s^{1,I}/α_s^1 = β_s^{2,I}/α_s^2 for the differential selection test). For the evaluation of the differential selection test, we picked successive pairs of simulations (1-2, 2-3, 3-4, ..., 99-100) for a total of 99 runs of the analysis.

Implementation. All the tests have been implemented as scripts in the HyPhy [57] batch language and are either part of the standard distribution of the package or can be obtained upon request from the authors.
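A small sketch of the rate-resampling and replicate-pairing steps of this protocol; the pooled rate values below are hypothetical placeholders for the binned estimates described above:

```python
import random

random.seed(2006)  # reproducible sketch

def draw_site_rates(alpha_pool, beta_l_pool, beta_i_pool, n_codons=200):
    """Draw per-codon rate triples (alpha, beta_L, beta_I) by resampling from
    the pooled, binned rate estimates fitted to the real samples."""
    return [(random.choice(alpha_pool),
             random.choice(beta_l_pool),
             random.choice(beta_i_pool)) for _ in range(n_codons)]

# Successive pairs of the 100 replicates (1-2, 2-3, ..., 99-100) give the
# 99 paired runs used to evaluate the differential selection test.
pairs = [(i, i + 1) for i in range(1, 100)]
assert len(pairs) == 99

rates = draw_site_rates([0.5, 1.0, 2.0], [0.2, 1.0, 3.0], [0.1, 1.0, 4.0])
assert len(rates) == 200
```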
Figure S1. Joint PR and RT Phylogenies Inferred from the Combined Sample Data. Ethiopian and South African sequence samples appear reciprocally monophyletic, both in the maximum likelihood and in the 50% consensus Bayesian trees.

Figure S7. False Positive Rate for the IFEL Selection Test. The panel depicts the rate of false positives (identifying sites with α_s = β_s^I as non-neutrally evolving along internal branches) versus the significance level of the IFEL selection test. The solid gray line shows the expected error rate. Because the actual rate of false positives (for this simulation scenario) is lower than predicted by the significance level of the test, we deduce that the IFEL test behaves conservatively. (More detail is available in the Materials and Methods section.) Found at DOI: 10.1371/journal.pcbi.0020062.sg007 (487 KB EPS).

Figure S8. False Positive Rate for the Differential Selection Test. The bottom right panel depicts the rate of false positives (identifying sites with β_s^{1,I}/α_s^1 = β_s^{2,I}/α_s^2 as evolving differentially along internal branches) versus the significance level of the differential selection test. The solid gray line shows the expected error rate. Because the actual rate of false positives (for this simulation scenario) is lower than predicted by the significance level of the test, we deduce that the test behaves conservatively.
Heat-response patterns of the heat shock transcription factor family in advanced development stages of wheat (Triticum aestivum L.) and thermotolerance regulation by TaHsfA2-10

Background

Heat shock transcription factors (Hsfs) are present in the majority of plants and play central roles in thermotolerance, transgenerational thermomemory, and many other stress responses. Our previous paper identified at least 82 Hsf members in a genome-wide study of wheat (Triticum aestivum L.). In this study, we analyzed Hsf expression profiles in the advanced development stages of wheat, isolated the markedly heat-responsive gene TaHsfA2-10 (GenBank accession number MK922287), and characterized this gene and its role in thermotolerance regulation in seedlings of Arabidopsis thaliana (L. Heynh.).

Results

In the advanced development stages, the transcription profiles of the wheat Hsf family exhibit different expression patterns and varying heat responses in leaves and roots, and Hsfs are constitutively expressed to different degrees under normal growth conditions. Overall, the majority of group A and B Hsfs are expressed in leaves, while group C Hsfs are expressed at higher levels in roots. The expression of a few Hsf genes could not be detected. Heat shock (HS) caused upregulation of about a quarter of the genes in leaves and roots, while a number of genes were downregulated in response to HS. The highly heat-responsive gene TaHsfA2-10 was isolated through homeologous cloning. qRT-PCR revealed that TaHsfA2-10 is expressed in a wide range of tissues and organs at different development stages of wheat under normal growth conditions. Compared to the non-stress treatment, TaHsfA2-10 was highly upregulated in response to HS, H2O2, and salicylic acid (SA), and was downregulated by abscisic acid (ABA) treatment in two-leaf-old seedlings. Transient transfection of tobacco epidermal cells revealed subcellular localization of TaHsfA2-10 in the nucleus under normal growth conditions. Phenotypic observation indicated that TaHsfA2-10 could improve both the basal and the acquired thermotolerance of transgenic Arabidopsis thaliana seedlings and rescue the thermotolerance defect of the T-DNA insertion mutant athsfa2 during HS. Compared to wild type (WT) seedlings, the TaHsfA2-10-overexpressing lines displayed both higher chlorophyll contents and higher survival rates. Yeast one-hybrid assay results revealed that TaHsfA2-10 has transactivation activity. The expression levels of thermotolerance-related AtHsps in the TaHsfA2-10 transgenic Arabidopsis thaliana were higher than those in WT after HS.

Conclusions

Wheat Hsf family members exhibit diversification and specificity of transcription expression patterns in the advanced development stages under normal conditions and after HS. As a transcription factor markedly responsive to HS, SA, and H2O2, TaHsfA2-10 is involved in the thermotolerance regulation of plants through binding to the HS-responsive elements in the promoter regions of the relevant Hsps and upregulating the expression of Hsp genes.
Background

Owing to greenhouse gas emissions, the global mean surface temperature increased by about 0.65°C from 1956 to 2005 [1]. Rising temperatures have become one of the major climatic disasters restricting crop growth and development around the world [2]. Wheat (Triticum aestivum L.) is the main cereal crop in many countries of the world, and high and stable yield is the most important breeding target. However, wheat crops frequently suffer from the combined stresses of heat and dry wind, causing recent decreases in both quantity and quality [3]. It is therefore necessary to analyse the molecular mechanisms of thermotolerance and to develop wheat cultivars with high resistance to heat stress (HS).

Heat shock transcription factors (Hsfs) in plants play central roles in regulating plant thermotolerance. Hsfs can activate the expression of heat shock protein (Hsp) genes and thermotolerance-related genes by binding to HS responsive elements (HSEs) within promoters [4][5][6][7]. Since the cloning of the yeast Hsf in the 1980s, many Hsfs have been identified at the genome-wide scale in a variety of species [8][9][10][11][12], including the first plant Hsf gene from tomato (Solanum lycopersicum L.) [13]. Plant Hsfs are divided into groups A, B, and C and are further divided into several subgroups based on differences in protein structure [4]. The number of Hsf gene family members varies greatly between species; so far, studies have identified 21 Hsfs in Arabidopsis thaliana, 16 Hsfs in tomato, and 82 Hsfs in wheat [7,14]. Most previous studies on Hsfs have been limited to the A1 and A2 Hsf subclasses in the model plants Arabidopsis thaliana and Solanum lycopersicum (S. lycopersicum) [15][16][17][18].
The S. lycopersicum HsfA1 gene is constitutively expressed at a low level, and the protein it encodes localizes to both the nucleus and the cytoplasm under normal growth conditions. HsfA2 is localized in the cytoplasm owing to a strong cytoplasmic localization signal, and its nuclear entry relies on the binding of HsfA2 to HsfA1 to form a hetero-oligomer during HS [8,17]. HsfA2 expression is strictly induced by HS, and HsfA2 protein can accumulate after continuous or repeated HS and during recovery from HS [8,17]. Only one HsfA2 exists in both Arabidopsis thaliana and S. lycopersicum [13]. Arabidopsis thaliana HsfA2 localizes to both the nucleus and the cytoplasm and can activate downstream Hsp gene expression upon binding with, and activation by, AtHsfA1. When AtHsfA1 is deleted, AtHsfA2 can enter the nucleus and regulate the expression of a series of Hsp and chaperone genes [18]. AtHsfA1 acts mainly as a transcription factor, while AtHsfA2 regulates acquired thermotolerance by activating the expression of genes related to reactive oxygen species and to carbohydrate and lipid metabolism, maintaining cell membrane stability in the later period of HS [19]. In addition, AtHsfA2 can partially perform certain functions of AtHsfA1 during exposure to different heat ranges and oxygen stress and can rescue AtHsfA1 mutant phenotypes [20][21][22]. Most recently, AtHsfA2 was found to regulate transgenerational thermomemory induced by HS in Arabidopsis thaliana by directly activating the H3K27me3 demethylase REF6 (Relative of Early Flowering 6) [23], suggesting that HsfA2 may participate in diverse thermotolerance regulation [15,16,20,21,24].

Studies of the characteristics and functions of wheat Hsf genes have only recently begun. In 2008, seven TaHsfs were identified in wheat, one of which was dramatically upregulated by HS, suggesting that these TaHsfs help regulate thermotolerance [25]. In addition, TaHsfA4a is upregulated by cadmium stress and participates in cadmium tolerance [26]. Expression of the TaHsfA2d gene in Arabidopsis thaliana improves the thermotolerance, salinity tolerance, and drought tolerance of seedlings, with seedlings grown at moderately high temperatures displaying increased biomass and yield [27]. For seedlings of Arabidopsis thaliana expressing TaHsf3, both thermotolerance and cold resistance can potentially be improved [28]. In 2014, 56 Hsf members from classes A, B, and C were identified in T. aestivum, many of which are constitutively expressed, while others in subgroups A2, B2, and A6 are significantly upregulated by HS [29]. TaHsfA6f directly regulates the expression of the genes TaHsps, TaGAAP (Golgi anti-apoptotic protein, GAAP), and TaRof1 (a co-chaperone) and thus enhances seedling thermotolerance [30]. TaHsfs vary in expression levels and in sensitivity to abiotic stresses including heat, salinity, drought, and cold [31]. TaHsfC2a is highly expressed in the grain-filling stage of wheat, and its overexpression upregulates the expression of genes related to drought, heat, and abscisic acid (ABA) responses; TaHsfC2a also provides proactive heat protection in developing wheat grains via an ABA-mediated regulatory pathway [32]. We previously reported that TaHsfB2d can regulate HS responses through a salicylic acid (SA) signalling pathway that is dependent on H2O2 levels [33]. Both basal and acquired thermotolerance are improved in Arabidopsis thaliana overexpressing TaHsfA2e, with increased expression of multiple Hsp genes belonging to different groups [34].
These Hsp genes can improve the thermotolerance of transgenic Arabidopsis thaliana, although their expression responses to HS differed [34]. In another recent report, we identified 82 wheat Hsf genes in a genome-wide study. These TaHsf family members showed diverse expression patterns in both leaves and roots and under stresses such as SA, H2O2, and ABA in two-leaf-old seedlings of wheat. Among the 82 wheat Hsf genes, 9 members of subclass A2 and 17 members of other subclasses were newly identified [14]. However, little is known to date about the characteristics and functions of these genes.

The average temperature over land from 2006 to 2015 was 1.53°C higher than that from 1850 to 1900, and this warming has led to reductions in crop yield [35]. It is estimated that global wheat yield falls by 6% for every 1°C increase in global temperature [36]. It is therefore important to thoroughly investigate Hsf gene expression profiles in the advanced development period of wheat and to understand the thermotolerance-regulating functions of individual Hsf members during HS responses. This is especially relevant for subclass A2, which has previously been reported to be important for acquired thermotolerance during the advanced development periods of wheat [20]. The aim of this study was to investigate the expression characteristics of the wheat Hsf family in the advanced development stages under HS and to further elucidate the thermotolerance-regulating function of an individual wheat Hsf. The results may enable further understanding of the biological functions and molecular mechanisms of Hsf family members and identify target genes for improving the thermotolerance of wheat varieties.

Results

Expression patterns of wheat Hsf genes during HS in the advanced development stages of T. aestivum

Flag leaves and roots of wheat under normal growth conditions and after HS at 37°C were sampled at the anthesis stage and 10 d and 20 d later, and used to analyse the expression profiles of wheat Hsf genes via RNA-Seq (Fig. 1). Eighty wheat Hsf family genes were detected in leaves and roots; only TaHsfA2-11 and TaHsfA2-18 were not detected. The transcription profiles of the TaHsfs revealed complex expression patterns in leaves and roots. Under normal conditions, no differences were detected in the expression profiles of most genes in leaves and roots of wheat at the different stages. However, some genes were expressed at higher levels in leaves at the anthesis stage than at the two later development stages. These genes included the subclass A2 members TaHsfA2-7, TaHsfA2-8, TaHsfA2-9, and TaHsfA2-13, the TaHsfB1 members, the subclass B2 members TaHsfB2-6, TaHsfB2-7, and TaHsfB2-8, and the subclass C2 members TaHsfC2-2, TaHsfC2-3, and TaHsfC2-4. The expression levels of TaHsfA1-1, TaHsfA1-2, and TaHsfA1-3 increased in leaves at the two development stages after anthesis, and similar expression profiles of TaHsfB1-1, TaHsfB1-2, and TaHsfB1-3 were observed in wheat roots. Overall, the majority of class A and B Hsfs were expressed at higher levels in leaves, while class C Hsfs were expressed at higher levels in roots.

Hsf expression in T. aestivum during the advanced development stages exhibited multiple HS response patterns (Fig. 1). In both leaves and roots, Hsf expression levels increased to different degrees under HS, especially for the genes of subclasses A2, B1, and B2. In particular, TaHsfA2-10 and TaHsfA2-12 showed the most obvious increases under HS. In contrast, the three TaHsfA1s were downregulated during HS in leaves and roots of wheat at all three development stages.
The expression levels of the three A6 subclass members were remarkably upregulated by HS in leaves, but not in roots. In addition, the homeologous genes TaHsfC1-7, TaHsfC1-8, and TaHsfC1-9, and both TaHsfC3-4 and TaHsfC3-10, were upregulated by HS in roots, but not in leaves. The expression of some genes was undetectable under both normal and HS conditions: all subclass B4 members, six subclass C1 members, and all subclass C3 members in wheat leaves, and all subclass B4 members and three subclass C1 members in roots.

Amplification of TaHsfA2-10 cDNA and structural analysis of the encoded protein in T. aestivum

The cDNA sequence of TaHsfA2-10 was cloned by homeologous cloning from young leaves of T. aestivum Cang 6005 after HS at 37°C. The full-length sequence of TaHsfA2-10 is 1119 bp long and encodes 372 amino acids. TaHsfA2-10, which is located on chromosome 5AL, is homeologous to the previously identified TaHsfA2-12 on chromosome 5DL [14]. The amino acid sequence of TaHsfA2-10 contains a DNA-binding domain (DBD), an oligomerization domain (OD), a nuclear localization signal (NLS), a nuclear export signal (NES), and an activator peptide motif (AHA). Protein similarity analysis indicated that TaHsfA2-10 is highly identical to AtHsfA2a-like from Aegilops tauschii, HvHsfA2a from Hordeum vulgare, BdHsfA2a from Brachypodium distachyon, and PhHsfA2a from Panicum hallii (Fig. 2).

TaHsfA2-10 expression in different tissues and organs of T. aestivum under abiotic stress

qRT-PCR analysis revealed that TaHsfA2-10 is constitutively expressed in many tissues and organs at different development stages of T. aestivum, with the highest expression levels in mature embryos and relatively lower levels in other tissues and organs, suggesting tissue-specific variation in Hsf gene expression (Fig. 3a). TaHsfA2-10 expression in leaves was upregulated by HS, peaking at 90 min after the onset of HS (Fig. 3b). TaHsfA2-10 levels also increased after application of exogenous SA (Fig. 3c) and H2O2 (Fig. 3d), with peak levels of nearly 40 times and 25 times their respective controls at 120 min and 90 min after treatment, respectively. In contrast, the expression of TaHsfA2-10 was downregulated by exogenous ABA (Fig. 3e).

Analysis of transactivation activity of TaHsfA2-10 in yeast

The transactivation activity of TaHsfA2-10 was evaluated on the yeast medium SD/Trp−/His−/Ade−/X-α-gal. As shown in Fig. 5, the positive control containing pGBKT7-53 grew well, while the negative control barely grew. Yeast transformed with pGBKT7-TaHsfA2-10 grew similarly to the positive control (Fig. 5), suggesting that TaHsfA2-10 possesses transactivation activity in yeast.

Evaluation of thermotolerance regulation by TaHsfA2-10 in transgenic Arabidopsis thaliana

Three T3-generation transgenic Arabidopsis lines overexpressing TaHsfA2-10 were selected, with semi-quantitative RT-PCR confirming TaHsfA2-10 expression (Fig. 6a). Next, the basal and acquired thermotolerance of these TaHsfA2-10-expressing Arabidopsis seedlings was evaluated alongside WT seedlings. No obvious phenotypic differences between the three transgenic lines and WT plants were observed under normal growth conditions (Fig. 6b, d); however, the growth vigour of all TaHsfA2-10-expressing plants was higher than that of WT controls after the two types of HS regimes. Of the transgenic lines generated, line 11_26 exhibited the strongest basal (Fig. 6c) and acquired (Fig. 6e) thermotolerance phenotypes.
Chlorophyll levels and survival rates decreased after HS, but the transgenic lines had significantly higher chlorophyll contents (Fig. 6f) and survival rates (Fig. 6g) than WT under HS conditions. Seedlings of line 11_26 had the highest chlorophyll content (Fig. 6f) and survival rate (Fig. 6g) among the different genotypes.

Fig. 1 Transcription profiles of wheat Hsf family genes in leaves (L) and roots (R) during the advanced development period under normal conditions and heat stress (HS). The heatmap illustrating the relative expression profiles of 80 TaHsfs was drawn with TBtools version 0.66831. Colours correspond to log2-transformed values; red and blue indicate higher and lower relative abundance of each transcript in each sample, respectively. Seedlings of wheat Cang 6005 were grown in the greenhouse at 22°C/18°C (day/night) with a 16 h/8 h photoperiod and 50% humidity throughout their life cycle. Flag leaves and roots were sampled at 60 min and 90 min, respectively, after heat treatment at the anthesis stage (Feekes 10.5.2) and at the following 10 days (10d AA) and 20 days (20d AA), and used for RNA-Seq analysis. For each group, pooled samples of 50 individual plants from three pots were collected and immediately frozen in liquid nitrogen for RNA extraction.

Rescued thermotolerance of the Arabidopsis thaliana mutant athsfa2 by TaHsfA2-10

Three TaHsfA2-10/athsfa2 complementation lines, M16_30, M18_14, and M21_25, were created and used to investigate thermotolerance. Semi-quantitative RT-PCR confirmed the expression of TaHsfA2-10 in the three T3 transgenic lines, while WT and the mutant athsfa2 lacked TaHsfA2-10 expression (Fig. 7a). Phenotypic observation revealed that the growth vigour of the WT, athsfa2, and TaHsfA2-10/athsfa2 lines was similar under normal growth conditions (Fig. 7b). However, the seedlings wilted to different degrees during the recovery period after HS treatment (Fig. 7c). The growth vigour of WT was better than that of athsfa2, while the complementation lines M16_30 and M21_25 showed growth vigour similar to WT. In addition, the M18_14 line showed the least discolouration, suggesting that TaHsfA2-10 can rescue the thermotolerance defect of the mutant athsfa2. M18_14 also showed higher survival rates and chlorophyll levels than WT, the athsfa2 mutant, and the M16_30 and M21_25 lines after HS treatment (Fig. 7d, e).

TaHsfA2-10-regulated Hsp gene expression is related to HS in Arabidopsis thaliana

The expression levels of Hsps, including AtHsa32, AtERDJ3A, AtHsp70T, AtHsp90.1, and AtHsp101, were analysed by qRT-PCR. The results showed that the expression levels of these five AtHsps in the TaHsfA2-10 transgenic line 11_26 were slightly higher than those in WT plants under normal conditions (Fig. 8a). Individual Hsp genes were upregulated to different degrees after HS, with peak expression levels appearing 1 h or 2 h after treatment. The expression levels of AtHsa32 and AtHsp70T were upregulated 4-5 times during HS in the TaHsfA2-10 line compared to WT (Fig. 8b-f). After the production of acquired thermotolerance by HS, the expression levels of most Hsp genes gradually decreased in both WT and the transgenic line 11_26, except for AtHsp90.1, which showed a higher expression level in line 11_26 than in WT plants 4 h after HS; during the recovery periods, the expression levels of AtHsp90.1 in the transgenic line remained higher than those in WT plants.
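The fold changes quoted above come from qRT-PCR relative quantification against a calibrator (WT, or the 0 h time point, normalized to 1). The excerpt does not name the exact quantification scheme, so the widely used 2^-ΔΔCt (Livak) method in this sketch is an assumption for illustration:

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """Fold change by the 2^-ddCt (Livak) method. `ct_target`/`ct_ref` are
    threshold cycles of the gene of interest and a reference gene in the
    treated sample; `*_cal` are the same values in the calibrator sample."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct

# The calibrator itself always maps to a fold change of 1.
assert relative_expression(24.0, 18.0, 24.0, 18.0) == 1.0
# Target crossing threshold one cycle earlier ~ two-fold upregulation.
assert relative_expression(23.0, 18.0, 24.0, 18.0) == 2.0
```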
Overall, Hsp expression levels were higher after HS that induced basal thermotolerance than HS that induced acquired thermotolerance. Five AtHsps were then selected to study the direct binding of HSEs in promoters with TaHsfA2-10 under the normal conditions using the yeast one-hybrid assay. Results revealed that TaHsfA2-10 can bind with HSEs in promoters of all tested AtHsps (Fig. 9); further indicating that TaHsfA2-10 can regulate Hsp genes expression by binding with their HSEs. Discussion Increasing global temperatures have caused diverse and profound effects on plant growth, development and reproduction [37,38], and greatly threaten global crop yields. Plants have evolved sophisticated epigenetic machinery to respond quickly to heat [39]. Thermotolerance can be generated upon expression of Hsp genes induced by HS. In the advanced development stages of wheat, acquired thermotolerance is the predominant factor determining HS responses [40]. Reports from model plants revealed that members of the subclass HsfA2s play central roles in regulating acquired thermotolerance, in recovery from HS, and in transgenerational thermomemory [8,23]. Therefore, in this study, we identified genes expressed in advanced development stages in T. aestivum and evaluated the thermotolerance-regulating roles of individual Hsf gene family members. Our RNA-Seq results reveal that T. aestivum Hsf genes exhibit complex expression profiles and heat-response patterns in the advanced stages of wheat development (Fig. 1). The majority of class A and B Hsfs were predominantly expressed in wheat leaves while class C Hsfs were more highly expressed in wheat roots. Under the normal conditions, no obvious gene expression differences among developmental stages were observed. However, TaHsfA2-7, TaHsfB2-6, TaHsfC2-2, and their two homoeologous genes were more highly expressed during the anthesis stage of leaves. The expression levels of three TaHsfA1 members increased in leaves of the later developmental stages of wheat, and the same trends were observed for three TaHsfB1 members in wheat roots. These results indicate that TaHsfs are differently expressed among tissue types. The study by Xue et al. [29] revealed that members A2b/c/e, A5b, A6c/d/e were predominantly expressed in the endosperm, subclass B1 Fig. 6 The thermotolerance phenotypes, survival rate and the chlorophyll contents of TaHsfA2-10 transgenic Arabidopsis seedlings and wild type (WT) under the normal conditions and subjected to HS. a TaHsfA2-10 relative expression in WT and three transgenic lines of T3 generation by semi-RT-PCR. There were total 50 individual plants of each line of each plate, and the experiment was repeated three times. Single and double asterisks indicate the significant differences between WT and overexpressing lines at P < 0.05 and P < 0.01 level (t-test), respectively. B-E: WT controls and three lines of TaHsfA2-10 overexpressed Arabidopsis (line 2_22, line 10_5 and line 11_26) were used to analyse the basal (BT) and acquired thermotolerances (AT). Five-day-old seedlings (grown in the greenhouse with temperature of 22°C/18°C, 16 h light/8 h dark cycles and light of 100 mmol photons m − 2 s − 1 ) were treated with different HS regimes listed under each phenotype picture, and the seedlings were recovered at 22°C for 8 days, then the phenotypes were observed and photographed. b-c: assays for BT, d-e: assays for the AT. b, d: seedlings under the normal conditions; c, e: seedlings treated with different HS regimes. 
The study by Xue et al. [29] revealed that members A2b/c/e, A5b, and A6c/d/e were predominantly expressed in the endosperm, that subclass B1 members were expressed at higher levels in reproductive organs than in young leaves and young roots, and that three C1 and C2 members were highly expressed in wheat embryos. Expression of most of these genes was very low in both the leaves and roots in our experiments. However, our results showed that the subclass B4 members and three C1 members were nearly undetectable in roots, while Xue's study indicated that B4 subclass members are expressed in the roots and embryos of wheat. We speculate that these differences may be caused by differences in the specific wheat variety examined. Like the subclass B4 members, 13 TaHsfC3s showed very low expression levels in leaves but higher levels in roots at the three advanced developmental stages examined in our experiments under normal conditions.

The RNA-Seq results under HS revealed that the expression of three TaHsfA1s was downregulated during HS in both the leaves and roots of wheat (Fig. 1). This was perhaps caused by the sampling time, because HsfA1s always respond to heat earlier than HsfA2s and function at the early stage of HS [15].

Fig. 7 The thermotolerance phenotypes, survival rates, and chlorophyll contents of rescued athsfa2 Arabidopsis seedlings and WT under normal conditions and HS. a TaHsfA2-10 relative expression in the mutant (M), WT, and three complementary lines of the T3 generation by semi-RT-PCR; b-c WT, the athsfa2 mutant, and its three TaHsfA2-10 complementary homozygous lines (16_30, 18_14, and 21_25) were used to assay rescued thermotolerance. Five-day-old seedlings were treated with the HS regimes listed under each phenotype picture. After the seedlings had recovered at 22°C for 8 days, the phenotypes were observed and photographed. The survival rates (d) were then counted, and the rosette leaves of each line were collected for measurement of chlorophyll contents (e). b-c seedlings under normal conditions and HS. The 50 individual plants of each line were divided into three parts and used for chlorophyll content measurement; three plates were used for each heat treatment. Each bar represents the mean ± SD of triplicate experiments; for raw data see Additional file 2. Single and double asterisks indicate significant differences between WT and overexpressing lines at the P < 0.05 and P < 0.01 levels (t-test), respectively.

The expression of three HsfA6s was upregulated by HS only in wheat leaves at anthesis and the two following sampled stages, showing tissue-specific expression under HS. Wheat TaHsfA6f is expressed constitutively in green organs but is markedly upregulated during HS. TaHsfA6f is a transcriptional activator that directly regulates the TaHsps, TaGAAP, and TaRof1 genes in wheat, and its gene regulatory network has a positive impact on thermotolerance [30]. Arabidopsis AtHsfA6b operates as a downstream regulator of the ABA-mediated stress response and is required for heat stress resistance, although it responds to ABA but not to heat [41].
No further reports on HsfA6s are available. Additionally, in our experiment, the expression of the homologues TaHsfC1-7, TaHsfC3-4, and TaHsfC3-10 was upregulated only in wheat roots, and the expression levels of the subclass B4 members, six members of subclass C1, and 13 subclass C3 members were almost undetectable in leaves during HS, while the expression of the subclass B4 and three C1 members was almost undetectable in roots at the three sampled stages of wheat. These results extend those obtained with two-leaf-old wheat seedlings reported by Duan and co-authors, in which the HsfC3 subclass mainly responded to ABA [14], suggesting that these genes perhaps mainly participate in ABA signal transduction. These results further support the existence of a proactive TaHsfC2-mediated protective mechanism involving an ABA-dependent pathway for regulating heat protection in developing wheat grains [32]. Our results enrich the expression characterization of wheat Hsfs by providing deeper insight into the temporal and spatial expression of the wheat Hsf family. Cis-element analysis showed that the promoters of the majority of the TaHsfCs contain ABA-responsive motifs; only the promoters of TaHsfC3-1, TaHsfC3-2, and TaHsfC3-11 contain a heat-responsive motif (Additional file 1).

In addition, TaHsfB1s and most TaHsfB2s were upregulated in both leaves and roots, suggesting that they are involved in the heat response of wheat. All TaHsfB1s and TaHsfB2s contain HSEs in their promoters (Additional file 1), revealing that these genes can be regulated upstream by Hsfs.

Fig. 8 Five-day-old T3 generation seedlings of the TaHsfA2-10 transgenic line 11_26 and WT on agar plates were subjected to HS, and the rosette leaves were then sampled at different time intervals for qRT-PCR analysis. Meanwhile, rosette leaves of the TaHsfA2-10 transgenic line 11_26 and WT were sampled before each of the two heat treatments. For Hsp gene expression in the transgenic line under normal conditions, the value of WT was normalized to 1 (a). For gene expression under the heat treatments (b-f), the value at 0 h was normalized to 1. Each bar represents the mean ± SD of triplicate experiments; three technical replicates were performed in each experiment; for raw data see Additional file 2. Double asterisks indicate significant differences between WT and overexpressing lines at the P < 0.01 level (t-test).

To date, few TaHsfB genes with functions in thermotolerance regulation are known; previous studies showed that they serve as coregulators or repressors of the HsfAs because they lack a defined activation domain [42]. Zhao et al. reported that TaHsfB2d can improve both the basal and acquired thermotolerance of transgenic Arabidopsis thaliana [33], and Arabidopsis seedlings transformed with CaHsfB2 from Cicer arietinum display relatively high drought resistance and thermal tolerance [43]. Much work remains to be done on the characteristics and functions of the class B Hsfs.

Studies of model plants indicate that HsfA2 members participate in responses to many osmotic stresses, including heat, salt, oxygen, and drought, and in both ABA- and SA-mediated signal transduction. Once activated by HsfA1, HsfA2 induces the expression of many Hsp genes as a key thermotolerance-regulating factor during HS [27]. Among the 82 Hsf genes identified in our previous study, most TaHsfA2 genes exhibit diverse response patterns to osmotic stresses [14].
In this study, TaHsfA2-10, one of the A2 members, was shown to be markedly expressed in both leaves and roots under HS at anthesis and later developmental stages of wheat (Fig. 1) and in mature embryos (Fig. 3a), and it was also significantly upregulated by heat, SA, and H2O2 in two-leaf-old seedlings (Fig. 3b-d), indicating that TaHsfA2-10 is perhaps involved in thermotolerance regulation at different developmental stages of wheat as a key factor. SA has been reported to upregulate AtHsfA2 expression in a manner depending on the presence of H2O2 [44], and TaHsfB2d regulates HS responses through an SA-mediated signalling pathway that depends on the presence of H2O2 [33]. TaHsfC2a appears to serve a proactive role in heat protection in developing wheat grains via an ABA-mediated regulatory pathway [32]. In both our results and Duan's report [14], TaHsfA2-10 expression was downregulated by ABA in two-leaf-old seedlings and at later developmental stages of wheat, leading us to speculate that TaHsfA2-10 perhaps participates in thermotolerance regulation through an SA-mediated signalling pathway rather than through ABA-mediated signal transduction, even though the promoter of TaHsfA2-10 contains both heat- and ABA-responsive cis-elements (Additional file 1). Whether this pathway depends on H2O2 requires further research.

There is only one HsfA2 gene in both tomato and Arabidopsis. Tomato HsfA2 is localized in the cytoplasm, and its nuclear translocation relies on the hetero-oligomer formed between HsfA2 and HsfA1 [17], while Arabidopsis HsfA2 is localized in both the nucleus and the cytoplasm. In contrast, TaHsfA2-10 was confirmed to be localized in the nucleus by the two constructs with N- and C-terminal GFP fusions. We speculate that nuclear localization perhaps enables the Hsf to induce downstream gene expression more quickly to improve thermotolerance. Further phenotypic observation provided convincing evidence for this hypothesis (Figs. 6 and 7). By expressing TaHsfA2-10 in Arabidopsis, we found that TaHsfA2-10 improves both the basal and the acquired thermotolerance of transgenic Arabidopsis thaliana seedlings. In addition, TaHsfA2-10 can rescue the thermotolerance defect of the mutant athsfa2 during HS. The growth vigour of the TaHsfA2-10/athsfa2 complementation lines was better than that of WT, suggesting that TaHsfA2-10 perhaps has a stronger thermotolerance-regulating ability than AtHsfA2. The survival rate and chlorophyll content measurements simultaneously provide strong supporting evidence. A previous study demonstrated that the thermal, salinity, and drought tolerance of TaHsfA2d-expressing Arabidopsis seedlings were all improved and that seedlings growing at moderately high temperatures could accumulate relatively high amounts of biomass and yield compared to WT counterparts [27]. To date, there have been no other reports on TaHsfA2-10. The diverse gene functions of the TaHsfA2s need to be investigated in greater depth in future research.

As molecular chaperones, Hsps play central roles in protecting against stress damage and in assisting with the folding, intracellular distribution, and degradation of proteins [45][46][47]. Hsfs can specifically bind to HSEs in the promoter regions of Hsp genes as key regulators of Hsp genes [4]. Functional HSEs bound by TaHsfA2b were previously identified in the promoter regions of TaHsp17, TaHsp26.6, TaHsp70d, and TaHsp90.1-A1, implying that TaHsp17 and TaHsp90.1-A1 are likely direct targets of TaHsfA2b [29].
In this study, qRT-PCR of AtHsp90.1, AtHsp70T, AtHsp101, AtERDJ3A, and AtHsa32 showed that these Hsp genes were upregulated to different degrees within 4 h of HS in both WT and the transgenic lines (Fig. 8). AtHsp101 and AtHsa32 appear to be involved in long-term acquired thermotolerance in Arabidopsis [20,48,49], and our results suggest that they also participate in basal thermotolerance. In fact, TaHsfA2-10 can induce Hsp expression in transgenic Arabidopsis lines under normal growth conditions, although the resulting expression levels are relatively low (Fig. 8a). In transgenic Arabidopsis lines, AtHsfA2 activated the expression of Hsp genes such as AtHsp101, AtHsa32, and AtHsp-CI, but not AtHsp90, in the absence of HsfA1 member expression under non-stressed conditions [22]. TaHsfA2e and TaHsfA2f dramatically upregulate AtHsp70T expression with the improvement of basal or acquired thermotolerance [32,50], and ZmHsf05 can activate AtHsp21 and AtHsp90 expression during HS [24], revealing that different Hsfs are involved in the heat response by activating the expression of specific Hsps. Yeast one-hybrid analysis further showed that the Hsp genes examined are direct target genes of TaHsfA2-10 (Fig. 9). These results confirm the regulatory role of TaHsfA2-10 in Hsp gene expression during HS and suggest that different Hsf members of the same subclass activate the expression of only certain Hsp genes in different modes of thermotolerance regulation.

Conclusions

Our results expand the expression characterization of wheat Hsfs by providing new insights into the mechanisms governing the temporal and spatial expression of wheat Hsf family members. TaHsfA2-10 was one of a few genes markedly responsive to HS. TaHsfA2-10 showed transactivation activity in yeast and activated the expression of a suite of thermotolerance-related Hsp genes in transgenic Arabidopsis thaliana plants. TaHsfA2-10 improved the basal and acquired thermotolerance of transgenic Arabidopsis seedlings and rescued the thermotolerance defect of the mutant athsfa2 during HS. These findings enrich our understanding of the diversity and specificity of Hsf expression in wheat. The results may also spur further investigation of the biological functions and molecular mechanisms of Hsf family members and the identification of target genes for the genetic improvement of wheat thermotolerance.

Methods

Plant materials, growth conditions, and stress treatments

The T. aestivum cultivar Cang 6005 used in this study was provided by the Cangzhou Academy of Agriculture and Forestry Sciences, Hebei province (E116.83, N38.33). This wheat variety is a winter wheat with a total growth period of about 244 days. It has a reputation for heat and salt tolerance and is mainly planted in the southeast region of Hebei province. Selected seeds were surface sterilized in 0.1% HgCl2 for 10 min, rinsed repeatedly in distilled water, and then germinated in a tray. When the buds were about 1 cm in size, they were divided into two groups. One group of about 30 buds was transplanted into a pot with mesh containing Hoagland nutrient solution, and the buds of the other group were vernalized at 4°C for 40 d and then transferred into potted soil (soil:vermiculite, 3:1) in large pots with 8 plants per pot. The plants were cultivated in a greenhouse at 22°C/18°C (day/night) with a 16 h/8 h light/dark cycle and 50% humidity under approximately 150 μmol photons m−2 s−1 light intensity.
For stress treatments, seedlings at the two-leaf stage were treated with HS, H2O2, SA, or ABA for different durations following the methods described in Zhao's paper [33]. For the HS treatment, 40 seedlings were placed into a new pot containing Hoagland nutrient solution preheated to 37°C in another chamber and treated for 30, 60, 90, 120, or 240 min. For the H2O2 treatment, 40 seedlings were placed into a new pot containing Hoagland nutrient solution with a final concentration of 10 mM H2O2.

The T-DNA insertion mutant line SALK_008978, named athsfa2 and derived from the Arabidopsis Biological Resource Center (Ohio State University, USA), was provided by Dr. Yee-Yung Charng (Agricultural Biotechnology Research Center, Academia Sinica, Taipei). Seeds of WT (ecotype Columbia), athsfa2, and the transgenic lines were surface sterilized and sown on MS medium containing 1% (w/v) sucrose and 0.8% gelrite, then kept at 4°C for 3 days. Plants were grown in the greenhouse at 22°C/18°C (day/night) with a 16 h/8 h light/dark cycle and 50% humidity under approximately 100 μmol photons m−2 s−1 light intensity.

RNA extraction

Total RNA of different tissues from wheat and Arabidopsis thaliana was extracted using the RNarose Reagent Systems kit (Shanghai Huashun Biotechnological Co., Ltd.) according to the manufacturer's protocol, and genomic DNA contamination was removed with RNase-free DNase I. A NanoDrop 2000 (Thermo Fisher Scientific, Rockford, USA) was used to determine RNA concentration and quality.

RNA-Seq analysis of the wheat Hsf family

Flag leaves and roots of wheat at anthesis (Feekes 10.5.2) and post-anthesis were sampled for RNA-Seq analysis after stress treatment. RNA-Seq analysis was performed following the methods described in [14]. Total RNA of each sample was extracted from 50 plants, and genomic DNA was removed with RNase-free DNase I. An Agilent 2100 Bioanalyzer (Agilent Technologies, CA, USA) was used to assess RNA integrity. About 2 μg RNA of each sample was used as input material for library preparation. Sequencing libraries were prepared for Illumina with the VAHTSTM mRNA-seq V2 Library Prep Kit. Paired-end sequencing of the libraries was carried out on HiSeq X Ten sequencers (Illumina, San Diego, CA, USA). Sequencing data quality was evaluated with FastQC (version 0.11.2), and the raw reads were filtered with Trimmomatic (version 0.36). The clean reads were mapped to the wheat reference genome with HISAT2 (version 2.0) using default parameters. The expression abundance of the transcripts was calculated with StringTie (version 1.3.3b). DEGs (differentially expressed genes) were determined with DESeq2 (version 1.12.4). Each sample was sequenced once. A heatmap illustrating the relative expression profiles of the wheat TaHsfs was drawn with TBtools version 0.66831 [51].
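The heatmap step above was carried out in TBtools; a minimal Python sketch of the equivalent log2-transform-and-plot step is given below, purely for illustration (the input file name and the use of a TPM matrix are assumptions, not details from the study):

```python
# Minimal sketch of the log2 heatmap step (the study used TBtools v0.66831).
# "tahsf_tpm.tsv" is a hypothetical tab-separated matrix: rows = 80 TaHsf
# genes, columns = leaf/root samples at anthesis, 10d AA, and 20d AA.
import numpy as np
import pandas as pd
import seaborn as sns

expr = pd.read_csv("tahsf_tpm.tsv", sep="\t", index_col=0)

# log2-transform with a pseudocount so genes with zero expression stay defined.
log2_expr = np.log2(expr + 1)

# Red = higher, blue = lower relative abundance, as in Fig. 1; rows are
# standardized (z_score=0) so colours reflect relative abundance per gene.
g = sns.clustermap(log2_expr, cmap="bwr", col_cluster=False, z_score=0,
                   figsize=(8, 12))
g.savefig("tahsf_heatmap.png", dpi=300)
```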
Cloning of TaHsfA2-10 cDNA and sequence analysis

A total of 1 μg purified RNA was used to synthesize first-strand cDNA using the SuperScript IV First-Strand Synthesis System (Invitrogen). The primers used were: forward primer, 5′-CGGGTTTGGTTCTTTGGA-3′; reverse primer, 5′-CCTTCATCTTCTTTCGCTCA-3′. The high-fidelity enzyme Pyrobest (TaKaRa) was used for PCR amplification. The PCR system and reaction procedures followed the methods described in [33]. The reaction mixture contained 1× reaction buffer, 2.5 mM dNTP mixture, 1 μL first-strand cDNA, 20 μM forward primer, 20 μM reverse primer, and 2 U DNA polymerase in a total volume of 50 μL. The reaction procedure was: 1 min at 94°C; 32 cycles of 10 s at 98°C, 30 s at 56°C, and 1 min at 72°C; and a final extension of 5 min at 72°C.

Expression analysis by quantitative real-time PCR

For the expression analysis of TaHsfA2-10 in wheat, specific primers for amplifying TaHsfA2-10 were designed based on the 5′-UTR sequence (forward primer: 5′-CACCTTCGGGTAGCCCCTG-3′; reverse primer: 5′-GAAAATGTCGCCCTCCTC-3′). The internal reference gene was TaRP15 (F: 5′-GCACACGTGCTTTGCAGATAAG-3′; R: 5′-GCCCTCAAGCTCAACCATAACT-3′) [29]. The expression level in young roots was set to 1 for the tissue-specific expression analysis, and the expression level at 0 h was set to 1 for the stress treatments of wheat. For the expression analysis of the AtHsps in Arabidopsis thaliana, the TaHsfA2-10 transgenic line 11_26 (T3 generation homozygote) was used. Rosette leaves of 5-day-old Arabidopsis seedlings were sampled at 0 h, 1 h, 2 h, 4 h, and 8 h after heat treatment, as described in the thermotolerance assay section. Five Arabidopsis Hsp genes were selected for expression analysis. The internal reference gene was AtActin8, and the expression level of WT at 0 h was set to 1. The primers used are listed in Additional file 4. PCR reactions were 20 μL in total: 10 μL SYBR Premix Ex TaqII, 0.8 μL 10 μM forward primer, 0.8 μL 10 μM reverse primer, 1 μL first-strand cDNA, and 7.4 μL ddH2O. PCR reactions were performed on a 7500 Real-Time PCR System (Applied Biosystems, USA), and the reaction procedures were carried out according to the methods described in [33]. Reactions were predenatured at 95°C for 30 s, followed by 40 cycles of 5 s at 95°C and 34 s at 60°C. The data were analysed using the 2−ΔΔCt method after the reaction. Each group of experiments included three biological replicates, and each biological sample included three technical replicates. The data are presented as mean values ± standard error of three biological replicates for each experiment.
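As a worked illustration of the 2−ΔΔCt (Livak) method named above, the following sketch computes relative expression from Ct values; the numbers are invented examples, not data from this study:

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """2^-ddCt relative expression (Livak method).

    ct_target / ct_reference: Ct values of the gene of interest and the
    internal reference (e.g. AtActin8) in the treated sample.
    ct_target_cal / ct_reference_cal: the same genes in the calibrator
    sample (here, WT at 0 h, whose expression level is set to 1).
    """
    d_ct = ct_target - ct_reference              # normalize to reference gene
    d_ct_cal = ct_target_cal - ct_reference_cal
    dd_ct = d_ct - d_ct_cal                      # normalize to calibrator
    return 2.0 ** (-dd_ct)

# Invented example: triplicate Ct values for one Hsp gene 2 h after HS.
ct_hsp = np.array([22.1, 22.3, 22.0])
ct_actin = np.array([18.5, 18.6, 18.4])
fold = relative_expression(ct_hsp, ct_actin,
                           ct_target_cal=25.9, ct_reference_cal=18.5)
print(f"fold change: {fold.mean():.2f} ± {fold.std(ddof=1):.2f}")
```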
Determination of TaHsfA2-10 subcellular localization using transient expression in tobacco epidermal cells

For N-terminal fusions of TaHsfA2-10 with GFP, specific primers (forward: 5′-GACGAGCTGTACAAGGAGCTCATGGACCCCTTTCAC-3′; reverse: 5′-CGATCGGGGAAATTCGAGCTCTCATGGTAGCTGCGGG-3′; the underlined letters are the SacI restriction sites and the bold letters belong to the coding sequence of TaHsfA2-10) were designed to amplify the coding region of TaHsfA2-10 by PCR. The PCR product was ligated into the vector pCAMBIA1300-GFP after digestion with the restriction enzyme SacI (plasmid map in Additional file 6B). For C-terminal fusions of TaHsfA2-10 with GFP, specific primers (forward: 5′-GAGAACACGGGGGACTCTAGAATGGACCCCTTTCAC-3′; reverse: 5′-GCCCTTGCTCACCATGGATCCCTGGTAGCTGCGGGGC-3′; the underlined letters are the XbaI and BamHI restriction sites, respectively, and the bold letters belong to the coding sequence of TaHsfA2-10) were used to amplify the coding sequence of TaHsfA2-10, which was then ligated into the expression vector pCAMBIA1300-GFP after digestion with the restriction enzymes XbaI and BamHI (plasmid map in Additional file 6C). The recombinants, driven by the 35S CaMV promoter, were constructed according to the manufacturer's protocol using the ClonExpress II kit (Vazyme, Nanjing, China) and transformed into Agrobacterium tumefaciens EHA105 cells, which were then used for tobacco epidermal cell infiltration. The empty vector pCAMBIA1300-GFP served as a control in which only GFP was expressed.

Infiltrated tobacco seedlings were grown in a greenhouse with a 16 h/8 h day/night cycle (23°C/19°C) under 150 μmol m−2 s−1 light intensity and 50% relative humidity for 3 d. After the tobacco epidermal cells were stained with 10 μg/mL DAPI for 5 min and rinsed with physiological saline, the fluorescence of the stained epidermis was examined using a Zeiss LSM 510 META confocal microscope (Zeiss, Oberkochen, Germany).

Transcription activation activity and one-hybrid assays in yeast

Transcription activation activity assays were performed in yeast according to the manufacturer's protocol (TaKaRa, Dalian, China). The coding region of TaHsfA2-10 was cloned by PCR using primers (forward: 5′-GAGGAGGACCTGCATATGATGGACCCCTTTCAC-3′; reverse: 5′-GTTATGCGGCCGCTGCAGTCACTGGTAGCTGCG-3′; the underlined letters are the NdeI and PstI restriction sites, respectively, and the bold letters belong to the coding sequence of TaHsfA2-10) and inserted into the yeast expression vector pGBKT7 after digestion with NdeI and PstI (plasmid map in Additional file 6D). The constructs driven by the T7 promoter, with pGBKT7-53 as a positive control or the empty vector pGBKT7 as a negative control, each together with pGADT7, were transformed into the yeast strain AH109. Yeast cells in exponential growth were diluted to an OD600 of 0.1 and grown on SD/Trp−/His−/Ade−/X-α-gal dropout medium plates. The plates were kept at 30°C until the yeast cells grew well, and the cells were photographed after 3-5 days.

Yeast one-hybrid assays were performed to detect the binding activity between TaHsfA2-10 and the promoters of the AtHsps according to the methods described by Li et al. [24]. Briefly, the coding region of TaHsfA2-10 was obtained by PCR using primers (forward: 5′-GCCATGGAGGCCAGTGAATTCATGGACCCCTTTCAC-3′; reverse: 5′-CAGCTCGAGCTCGATGGATCCTCACTGGTAGCTGCG-3′; the underlined letters are the EcoRI and BamHI restriction sites, respectively, and the bold letters belong to the coding sequence of TaHsfA2-10) and inserted into the vector pGADT7 after digestion with EcoRI and BamHI (plasmid map in Additional file 6E). The promoter sequences of the different AtHsps were cloned by PCR using the primers listed in Additional file 5 and inserted into the vector pHIS2.1 after digestion with EcoRI and SacI (plasmid map in Additional file 6F). pGADT7-TaHsfA2-10, driven by the T7 promoter, and the different pHIS2.1-promoter constructs, driven by the minimal HIS3 promoter, were transformed into the yeast strain Y187. SD/Trp−/Leu−/His− selective medium containing 3-AT (3-amino-1,2,4-triazole) was used in the assay. The yeast cells were grown at 30°C for 3-5 days before being photographed.

Generation of transgenic Arabidopsis thaliana lines

WT (ecotype Columbia) and T-DNA insertion mutant athsfa2 (SALK_008978, Arabidopsis Biological Resource Center, Ohio State University) plants of Arabidopsis thaliana were used for genetic transformation. Seeds were surface sterilized with 75% alcohol for 30 s and then with 10% sodium hypochlorite for 10 min. Sterile seeds were sown on 0.5× Murashige and Skoog (MS) medium (containing 1% sucrose and 0.8% gelrite, San-EiGenFFI Inc., Osaka, Japan, 1× MS salts and vitamins, pH 5.8) in plastic Petri dishes. After incubation for 3 days at 4°C in the dark to ensure synchronized germination, plants were grown in a growth chamber under normal conditions (22°C/18°C with 16 h light/8 h dark cycles and light intensity of 100 μmol photons m−2 s−1).
The coding region of TaHsfA2-10 was amplified by PCR using the primers (forward: 5′-GAGAACACGGGGGACTCTAGAATGGACCCCTTTCACGGC-3′; reverse: 5′-CGATCGGGGAAATTCGAGCTCTCACTGGTAGCTGCGGGG-3′; the underlined letters are the XbaI and SacI restriction sites, respectively, and the bold letters belong to the coding sequence of TaHsfA2-10). The PCR products were purified and cloned into the binary vector pCAMBIA1300 after digestion of the destination plasmid with XbaI and SacI (plasmid map in Additional file 6A). The resulting constructs, driven by the 35S CaMV promoter, were transformed into Agrobacterium tumefaciens strain GV3101. The constructs were then transformed into WT and the Arabidopsis thaliana mutant athsfa2 using the floral dip method under vacuum conditions as described by Clough et al. [52]. All transgenic plants were selected on MS plates containing 25 mg/L hygromycin until T3 generation homozygous lines had been screened.

Thermotolerance assays

For the thermotolerance assays, WT, the mutant athsfa2, and three independent T3 generation homozygous transgenic Arabidopsis lines were used. For basal thermotolerance, 5-day-old seedlings of WT and the TaHsfA2-10 transgenic lines on agar plates were subjected to heat shock for 50 min at 45°C. For acquired thermotolerance assays, 5-day-old seedlings of WT and the TaHsfA2-10 transgenic lines on agar plates were kept at 37°C for 60 min, allowed to recover for 2 d at 22°C, and then subjected to HS for 60 min at 46°C. For rescued thermotolerance assays, 5-day-old seedlings of WT, the mutant athsfa2, and the TaHsfA2-10 complementation lines on agar plates were subjected to HS for 70 min at 44°C and then allowed to continue growth for 8 days at 22°C before photographs were taken. More than 50 plants of each line were used per plate, and the experiments were repeated three times.

Measurements of chlorophyll content

Chlorophyll content was measured spectrophotometrically as previously described by Li et al. [53]. About 0.2 g of fresh Arabidopsis thaliana leaves was placed into a capped test tube containing 20 mL of an acetone-ethanol mixture (acetone:ethanol:ddH2O, 4.5:4.5:1.0). The homogenate was filtered after the leaves had been completely bleached. The contents of chlorophyll a and chlorophyll b were calculated from the A645 and A663 values of the filtrate, respectively.
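The cited protocol [53] converts the two absorbance readings into pigment contents; the sketch below uses the widely used Arnon-type coefficients purely for illustration (the exact coefficients of Li et al. [53], and any correction for the acetone-ethanol solvent, are assumptions here):

```python
def chlorophyll_mg_per_g(a645: float, a663: float,
                         extract_ml: float = 20.0, fresh_g: float = 0.2):
    """Estimate chlorophyll a/b in mg per g fresh weight from absorbances.

    Arnon-type coefficients for acetone extracts are used; the mixed
    acetone:ethanol:water solvent of the paper may need slightly
    different coefficients (assumption flagged in the lead-in).
    """
    chl_a = 12.7 * a663 - 2.69 * a645   # mg/L in the extract
    chl_b = 22.9 * a645 - 4.68 * a663   # mg/L in the extract
    litres_per_g = extract_ml / 1000.0 / fresh_g
    return chl_a * litres_per_g, chl_b * litres_per_g

# Invented example readings:
a, b = chlorophyll_mg_per_g(a645=0.25, a663=0.60)
print(f"Chl a: {a:.3f} mg/g FW, Chl b: {b:.3f} mg/g FW")
```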
2020-08-03T14:49:03.182Z
2020-08-03T00:00:00.000
{ "year": 2020, "sha1": "b0a8c63b5165437d96eb9dc6adabc7e6597b7771", "oa_license": "CCBY", "oa_url": "https://bmcplantbiol.biomedcentral.com/track/pdf/10.1186/s12870-020-02555-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b0a8c63b5165437d96eb9dc6adabc7e6597b7771", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
246165967
pes2o/s2orc
v3-fos-license
Predictive role of atrial fibrillation in cognitive decline: a systematic review and meta-analysis of 2.8 million individuals

Abstract

Aims: To systematically review and meta-analyse the association and mechanistic links between atrial fibrillation (AF) and cognitive impairment.

Methods and results: PubMed, EMBASE, and Cochrane Library were searched up to 27 March 2021 and yielded 4534 citations. After exclusions, 61 were analysed; 15 and 6 studies reported on the association of AF and cognitive impairment in the general population and post-stroke cohorts, respectively. Thirty-six studies reported on the neuropathological changes in patients with AF; of those, 13 reported on silent cerebral infarction (SCI) and 11 reported on cerebral microbleeds (CMB). Atrial fibrillation was associated with a 39% increased risk of cognitive impairment in the general population [n = 15: 2 822 974 patients; hazard ratio = 1.39; 95% confidence interval (CI) 1.25-1.53, I2 = 90.3%; follow-up 3.8-25 years]. In the post-stroke cohort, AF was associated with a 2.70-fold increased risk of cognitive impairment [adjusted odds ratio (OR) 2.70; 95% CI 1.66-3.74, I2 = 0.0%; follow-up 0.25-3.78 years]. Atrial fibrillation was associated with cerebral small vessel disease, such as white matter hyperintensities and CMB (n = 8: 3698 patients; OR = 1.38; 95% CI 1.11-1.73, I2 = 0.0%), SCI (n = 13: 6188 patients; OR = 2.11; 95% CI 1.58-2.64, I2 = 0%), and decreased cerebral perfusion and cerebral volume even in the absence of clinical stroke.

Conclusion: Atrial fibrillation is associated with an increased risk of cognitive impairment. The association with cerebral small vessel disease and cerebral atrophy secondary to cardioembolism and cerebral hypoperfusion may suggest a plausible link in the absence of clinical stroke. PROSPERO CRD42018109185.

Introduction

Atrial fibrillation (AF) and cognitive impairment are important public health problems and represent a significant burden on health resources. 1,2 Population studies have suggested an association between AF and cognitive impairment. Atrial fibrillation and cognitive impairment share similar risk factors, such as age, diabetes, hypertension, and heart failure, which could confound the association. [3][4][5] In addition, stroke, a serious complication of AF, is a well-described risk factor for cognitive impairment. 6 Silent cerebral emboli and chronic hypoperfusion during AF may also represent a plausible pathophysiological link between AF and cognitive impairment, though the evidence is unclear.

Cognitive impairment is defined as a decline from a previous level of performance in one or more cognitive domains (complex attention, executive function, learning and memory, language, perceptual motor, or social cognition). It is designated 'mild' when it does not interfere with the capacity for independence. Severe cognitive impairment (dementia), on the other hand, is a more severe form that interferes with daily function, usually defined either by the Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria or by International Classification of Diseases (ICD) codes. In this study, we undertook a meta-analysis and systematic review of (i) studies evaluating the association between AF and cognitive impairment and (ii) neuropathological lesions in AF patients, to better characterize the mechanistic link between AF and cognitive impairment.
Methods

This systematic review complies with the consensus statement outlined by the Meta-analysis of Observational Studies in Epidemiology (MOOSE) group and with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. 7 The meta-analysis was registered with the PROSPERO International prospective register of systematic reviews (CRD42018109185).

Search strategy

The English scientific literature was searched using PubMed, EMBASE, and Cochrane Library from their inception to 27 March 2021 with the assistance of an experienced librarian. The following keywords were used: atrial fibrillation, cognitive impairment, major neurocognitive disorder, microbleeds, silent cerebral infarcts, and dementia (Supplementary material online, Methods S1).

What's new?
• Atrial fibrillation (AF) is associated with a 39% increased risk of cognitive impairment.
• Cognitive impairment was earlier and more frequent after clinical stroke in the presence of AF.
• Cerebral small vessel disease, such as white matter hyperintensities, microbleeds, silent cortical and subcortical infarction, and decrease in cerebral volume, represents the plausible link between AF and cognitive impairment.

Inclusion and exclusion criteria

Only prospective studies were included in the meta-analysis evaluating the relationship between AF and cognitive impairment. Furthermore, studies reporting neuropathological lesions in the AF population were included and reported in a systematic manner. The exclusion criteria were (i) reviews, editorials, letters, case series, case reports, and conference proceedings; (ii) sample size <50; (iii) studies lacking a control group; (iv) studies in which the control group was selected from patients with other types of arrhythmias; (v) studies where the measure of association was not reported; and (vi) studies that provided unadjusted analyses. Where multiple studies described the same population (sub- and follow-up studies), the study with the most comprehensive data was included.

Study selection and data extraction

Study selection and data extraction were performed using the predefined inclusion and exclusion criteria. Although review articles were excluded, their reference lists were examined for potentially relevant publications. During data extraction, information was collected on the study design, length of follow-up, outcome measures, method of assessing outcomes, inclusion/exclusion criteria, and results. The maximally adjusted risk ratios were extracted and pooled across studies. The data were reviewed by two authors independently (Y.H.K. and L.L.), and disagreements were resolved by consensus with the help of a third investigator (R.M.). The methodological quality of the included studies was assessed using the modified Newcastle-Ottawa Scale. The outcomes of the meta-analysis were defined as (i) the association of AF and cognitive impairment and (ii) the impact of AF on progression of cognitive impairment. The outcome of the systematic review was to systematically define the neuropathological changes in AF.

Statistical analysis

Continuous variables are presented as the mean or median and categorical variables as n (%). Meta-analysis was performed using STATA (StataCorp, TX, Version 15). Hazard ratios (HRs) and risk ratios (RRs) were pooled using the metan function. A P-value <0.05 was considered significant. Heterogeneity was assessed using the I2 value to measure variability in observed effect estimates between studies, and heterogeneity was explored with meta-regression in Comprehensive Meta-Analysis software. 8 Utilizing the unrestricted maximum likelihood assumption, the univariate meta-regression shows the unit change in effect size (i.e. the HR for MCI) standardized to a 10-unit change in the predictor variable (e.g. per 10-year increase in age), with the associated 95% confidence interval (CI) and P-value. Separate meta-analyses were performed for (i) cognitive impairment in all AF patients, with (ii) a subgroup analysis of those without previous stroke, and (iii) the post-stroke cohort; (iv) progression of cognitive impairment; (v) silent cerebral infarction (SCI); and (vi) microbleeds in AF patients. Incidence rate ratios were calculated using follow-up and event data; using the restricted maximum-likelihood estimator random effects model, a Poisson-Normal model with log incidence rate as the outcome measure was fitted. The pooled incidence rate was calculated by back-transformation of the log incidence rates. Where data could not be presented as a meta-analysis, they were reported in a systematic fashion.
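The pooling itself was performed with STATA's metan command; purely to illustrate the computation described above, a minimal Python sketch of DerSimonian-Laird random-effects pooling of hazard ratios, with the I2 statistic, is given below (the input values are invented and are not the study data):

```python
import numpy as np

def random_effects_pool(hr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    hr, ci_low, ci_high: per-study HR and 95% CI bounds.
    Returns the pooled HR, its 95% CI, and the I^2 heterogeneity statistic.
    """
    y = np.log(hr)                                   # effect sizes on log scale
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1 / se**2                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                  # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                        # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (np.exp(y_re), np.exp(y_re - 1.96 * se_re),
            np.exp(y_re + 1.96 * se_re), i2)

# Invented example: three studies.
pooled, lo, hi, i2 = random_effects_pool(
    hr=np.array([1.3, 1.5, 1.2]),
    ci_low=np.array([1.1, 1.2, 0.9]),
    ci_high=np.array([1.54, 1.88, 1.6]))
print(f"pooled HR {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.1f}%")
```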
Results

Search and synthesis of the literature

The online search of PubMed, EMBASE, and Cochrane Library from their inception to 27 March 2021 yielded 4534 citations. Manual searching of the references of reviews did not yield additional citations. Subsequently, duplicate citations (656) and citations not conforming to the inclusion and exclusion criteria (3627) were excluded at primary review. Two hundred and fifty-one citations were identified for secondary review. After removal of studies with a sample size smaller than previously defined (n = 5), studies lacking a control group (n = 12), cross-sectional studies (n = 7), studies with inadequate data for the stipulated research questions (n = 160), and redundant studies (n = 6), 61 studies were included in the final analysis. Of these, 15 studies reported on the association of AF and cognitive impairment in the general population, 6 reported on cognitive impairment in the post-stroke population, and 4 on the association of AF with progression of cognitive impairment. Using the same pool of retrieved articles, 36 studies reported the neuropathological lesions in the AF population. Of these, 13 reported on the association of AF and SCI, and 11 on cerebral microbleeds (CMB) in AF patients. Figure 1 provides the consort diagram for the data search.

Progression of cognitive impairment

Four studies comprising 3186 patients 23,[30][31][32] were analysed to assess the impact of AF on the progression from mild to severe cognitive impairment. Overall, there was no significant increase in the risk of progression from mild to severe cognitive impairment [relative risk (RR) = 2.75, 95% CI 0.46-5.04, I2 = 45.6%] in patients with AF as compared to the control population (Figure 4) over 2.8-10 years.

Atrial fibrillation and neuropathological lesions

Thirty-six studies reported the neuropathological lesions in the AF population. Of these, 32 described findings from brain imaging, while the remaining 4 described post-mortem results. [65][66][67][68] The neuropathological changes of the imaging and autopsy studies are listed in Table 1. Autopsy studies reported gross and subcortical infarcts in patients with AF. [65][66][67][68] Neuritic plaques 66 and neurofibrillary tangles, 68 usually seen with Alzheimer's disease, were noted to be absent.
Imaging studies demonstrated an association of decreased cerebral perfusion 53,54 and decreased total cerebral volume with AF, after adjusting for infarction 35,49,50 and ApoEe4. 35,49 There was a greater decrease in total cerebral volume in permanent as compared to paroxysmal AF. 50 In addition, decreased frontal lobe and hippocampal volumes were associated with AF. 49 Atrial fibrillation was not associated with an Alzheimer's pattern of 18F-FDG PET hypometabolism or PiB uptake (β-amyloid accumulation). 35

Study quality and publication bias

All included studies fulfilled the Newcastle-Ottawa Scale criteria for being representative of the relevant cohort and having an adequate sample size. There was a low risk of detection and information biases in all studies. Supplementary material online, Table S4 provides the modified Newcastle-Ottawa Scale scores for the included studies. Funnel plots and Egger's test revealed no significant publication bias in any of the statistical analyses (Supplementary material online, Figure S1).

Discussion

Atrial fibrillation is associated with serious complications ranging from heart failure to debilitating stroke and death. Our systematic review and meta-analyses provide compelling evidence for an association of AF with cognitive impairment (Graphical abstract). The meta-analyses included only prospective observational studies, which enabled the measurement of events in temporal sequence, providing reliable results. The major findings were: (1) the presence of AF is associated with a 39% increased risk of cognitive impairment, with a lead time of years to decade(s); the association persisted even after exclusion of patients with a previous history of stroke, and the presence of AF results in an early and 2.7-fold increased risk of cognitive impairment after acute stroke; (2) AF is associated with an increased risk of cerebral small vessel disease, such as white matter lesions, SCI, CMB, and reduced cerebral volume, which may represent the plausible link between AF and cognitive impairment; and (3) AF is associated with a 38% increased risk of CMB, and, when present, CMB are associated with an increased risk of death and recurrent stroke in patients with AF.

Atrial fibrillation and cognitive impairment

The Rotterdam Study was the first to describe the association between AF and cognitive impairment. 71 Although some small studies or studies in elderly cohorts have not shown this association, larger cross-sectional and prospective longitudinal studies have confirmed a high risk of cognitive impairment in patients with AF. 6,33 The risk of cognitive decline has also been noted to depend on the duration of exposure, or on whether AF is diagnosed earlier than the eighth decade. 6,9,16 However, this association is confounded by the presence of shared cardiovascular risk factors, such as hypertension, diabetes mellitus, heart failure, and excess alcohol intake. Some studies have shown a lower risk of dementia in patients with AF on oral anticoagulation, providing evidence in favour of a causal association of AF and dementia. 72,73 Similarly, patients managed with rhythm control by catheter ablation may have a lower risk of dementia. 74 The current meta-analysis restricted itself to prospective studies that had adjusted for cardiovascular risk factors, strengthening the association between AF and cognitive impairment.
Table 1 (excerpt). Neuropathological changes associated with AF on brain imaging (CT/MRI):
• Silent cortical infarction, [32][33][34][35][36][37][38][39][40][41][42][43][44]50 more frequent in persistent than in paroxysmal AF 35
• Subcortical infarction [32][33][34]45,46
• Severe periventricular white matter lesions/hyperintensities [45][46][47]
• Cerebral microbleeds [54][55][56][57][58][59][60][61][62][63]
• Decreased brain volume: decrease in total cerebral volume, adjusted for infarction 34,48,49

The lengthy follow-up of the prospective studies also suggests a long lead time in the absence of clinical stroke. However, cognitive impairment was more common and accelerated, occurring within months to years, after acute stroke. The association between AF and progression of cognitive impairment did not achieve significance, which may reflect a lack of adequate power of the analysis.

Link between atrial fibrillation and cognitive impairment

Atrial fibrillation is an established risk factor for stroke, accounting for up to one-third of stroke cases in elderly patients. The presence of AF has been associated with large ischaemic lesions secondary to macro-embolism from the left atrium. 66,68 These lesions increase the risk of developing large vessel dementia. 65 Even in the absence of clinical stroke, several mechanisms have been proposed to explain the increased risk of cognitive impairment in patients with AF. This meta-analysis comprehensively presents the various neuropathological changes associated with AF that could represent a plausible link between AF and cognitive impairment. Silent micro-emboli secondary to the thrombogenic and inflammatory state in AF have been proposed to result not only in silent cortical [33][34][35][36][37][38][39][40][41][42][43][44][45]75 but also in silent subcortical infarction. [33][34][35]46,47 However, associated small vessel disease may contribute to silent subcortical infarction. Hypoperfusion secondary to variability in cerebral blood flow and cerebral vascular rhythm may result in ischaemia, or in impairment of the flowing blood's ability to clear micro-emboli from the vessels, which results in embolic infarction. 76 In addition, the burden of cerebral small vessel disease, such as white matter lesions, increases with a chronic reduction in cerebral blood flow secondary to persistent AF. 36 Although certain studies suggest an association with Alzheimer's disease, autopsy and positron emission tomography (PET) imaging have demonstrated an absence of neuritic plaques, 66 neurofibrillary tangles, 68 and the Alzheimer's disease pattern. 35 Furthermore, decreased cerebral perfusion 54 may explain the reduction in cerebral volume in patients with AF, even after adjustment for infarction 35,49,50 and ApoEe4. 35,49

The meta-analysis also confirmed that CMB, a marker of cerebral small vessel disease, are seen more often in patients with AF. Recent data suggest that novel oral anticoagulants may not increase the risk of CMB, as they do not cross the blood-brain barrier. 77 CMB have been shown to be associated with a greater risk of cerebral haemorrhage and stroke. The risk of recurrent ischaemic stroke is greater than the risk of cerebral haemorrhage, even in patients on oral anticoagulants. 62 Our meta-analysis confirmed that CMB are associated with an increased risk of death and all-cause stroke in patients with AF. To summarize, cerebral small vessel disease and clinical and subclinical infarction are associated with AF.
We hypothesize that these changes, when superimposed on the concomitant cerebral small vessel disease associated with comorbid conditions such as hypertension and diabetes, predispose patients to cognitive impairment (Graphical abstract).

Strengths and limitations

The results of the current study were compiled using meta-analysis of primarily observational data. However, the technique of meta-analysis is well accepted in the literature to aggregate results from observational data and to facilitate synthesis of the available evidence. The definitions of cognitive impairment and of infarction were also heterogeneous in the included studies. Although previous meta-analyses have shown a similar association with cognitive function, 78,79 this updated meta-analysis included prospective longitudinal studies only and demonstrated a robust relationship despite adjustment for cardiovascular risk factors. In addition, this meta-analysis highlights the potential timeline of cognitive impairment based on the length of follow-up. The study also systematically reviews the gamut of neuropathological changes in patients with AF. The impact of oral anticoagulation on cognitive impairment in patients with AF was not evaluated, as it was beyond the scope of the meta-analyses. This meta-analysis provides important information that will be useful to estimate sample sizes and design further prospective studies to inform on measures that may reduce the risk of cognitive impairment due to AF.

Conclusion

Atrial fibrillation is associated with an increased risk of cognitive impairment. Clinical and silent brain infarction, cerebral small vessel disease, and cerebral atrophy secondary to cardioembolism and cerebral hypoperfusion may represent the plausible link in the absence of clinical stroke. Further prospective randomized controlled trials are essential to further the understanding of the mechanisms of cognitive impairment and to develop strategies to prevent cognitive impairment in patients with AF.

Supplementary material

Supplementary material is available at Europace online.
2022-01-23T06:16:25.521Z
2022-01-21T00:00:00.000
{ "year": 2022, "sha1": "62086ddf489760ca47ae9113b96df27d85dba028", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/europace/euac003", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "10f98d36296496a06b6816ab11c8eafd8ec6682d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
263221880
pes2o/s2orc
v3-fos-license
Quantitative characterization of traces of Sobolev maps

We give a quantitative characterization of traces on the boundary of Sobolev maps in $\dot{W}^{1,p}(\mathcal M, \mathcal N)$, where $\mathcal{M}$ and $\mathcal{N}$ are compact Riemannian manifolds, $\partial \mathcal{M} \neq \emptyset$: the Borel-measurable maps $u\colon \partial \mathcal M \to \mathcal{N}$ that are the trace of a map $U\in \dot{W}^{1,p}(\mathcal M, \mathcal{N})$ are characterized as the maps for which there exists an extension energy density $w \colon \partial \mathcal{M} \to [0,\infty]$ that controls the Sobolev energy of extensions from $\lfloor p - 1 \rfloor$-dimensional subsets of $\partial \mathcal{M}$ to $\lfloor p\rfloor$-dimensional subsets of $\mathcal{M}$.

Introduction

Given a compact Riemannian manifold $\mathcal M$ with non-empty boundary $\partial \mathcal M$, we consider the homogeneous Sobolev space defined as
$$\dot W^{1,p}(\mathcal M) := \bigl\{ U \in W^{1,1}_{\mathrm{loc}}(\mathcal M) \,:\, DU \in L^p(\mathcal M) \bigr\}.$$
The classical trace theorem of E. Gagliardo [12] states that for $p > 1$ there is a well-defined, continuous, and surjective trace operator $\operatorname{tr}_{\partial \mathcal M} \colon \dot W^{1,p}(\mathcal M) \to \dot W^{1-1/p,p}(\partial \mathcal M)$. If $\mathcal N$ is a compact Riemannian manifold, which by J. Nash's embedding theorem [21] can be assumed without loss of generality to be isometrically embedded into some Euclidean space $\mathbb R^\nu \supseteq \mathcal N$, then the homogeneous spaces of Sobolev mappings can be defined for $p \ge 1$ as
$$\dot W^{1,p}(\mathcal M, \mathcal N) := \bigl\{ U \in \dot W^{1,p}(\mathcal M, \mathbb R^\nu) \,:\, U \in \mathcal N \text{ almost everywhere in } \mathcal M \bigr\};$$
these nonlinear Sobolev spaces arise naturally, for example, as domains of functionals in the calculus of variations and of partial differential equations in geometric analysis and physical models. As a consequence of the straightforward vector version of Gagliardo's trace theorem, the trace operator $\operatorname{tr}_{\partial \mathcal M}$ is well-defined and continuous from $\dot W^{1,p}(\mathcal M, \mathcal N)$ to $\dot W^{1-1/p,p}(\partial \mathcal M, \mathcal N)$. The question of the surjectivity of the trace operator is, however, much more delicate: given a map $u \in \dot W^{1-1/p,p}(\partial \mathcal M, \mathcal N)$, the classical linear extension construction gives a function $U \in W^{1,p}(\mathcal M, \mathbb R^\nu)$ such that $\operatorname{tr}_{\partial \mathcal M} U = u$, with no guarantee whatsoever about the range of the extension $U$. Here and in the sequel, $\lfloor t \rfloor$ denotes the integer part of the real number $t$, so that $\lfloor t\rfloor \in \mathbb Z$ and $\lfloor t \rfloor \le t < \lfloor t \rfloor + 1$.

Analytical obstructions finally arise locally for the extension problem: there exist maps in $\dot W^{1-1/p,p}(\mathbb B^{m-1}, \mathcal N)$ that are strong limits of smooth maps from $\mathbb B^{m-1}$ to $\mathcal N$ but are not traces of maps in $\dot W^{1,p}(\mathbb B^{m-1} \times (0,1), \mathcal N)$. This is known to happen either when the homotopy group $\pi_\ell(\mathcal N)$ is infinite for some $\ell \in \mathbb N$ with $\ell \le \max\{m,p\} - 1$ [1] (see also [3, Theorem 6]) or when $p \in \mathbb N \setminus \{0,1\}$ and the homotopy group $\pi_{p-1}(\mathcal N)$ is nontrivial [19]. These analytical obstructions can be seen, in view of a nonlinear uniform boundedness principle, as a consequence of the failure of linear estimates on extensions for smooth maps [20]; when $2 \le p < 3$, these analytical obstructions are connected to similar analytical obstructions for the lifting problem in fractional Sobolev spaces [2,18]. We are interested in the question of characterizing in general the range of the trace operator. T. Isobe [15] has provided a characterization of the maps $u \colon \partial \mathcal M \to \mathcal N$ that are the traces of maps in $\dot W^{1,p}(\mathcal M, \mathcal N)$ as the maps satisfying two conditions: a quantitative condition ($o_A$) concerning the extension to a neighbourhood of the boundary and a qualitative condition ($o_B$) concerning the extension to the whole manifold. The goal of the present work is to characterize the image of the trace by the properties of mappings on lower-dimensional subsets.
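For orientation, we recall the standard form of the fractional Sobolev space and of the Gagliardo energy to which the definition labelled (1.1) below refers; the normalization is the usual one and may differ from the authors' convention by a constant factor:
$$\dot W^{1-1/p,p}(\partial \mathcal M, \mathcal N) := \Bigl\{ u \colon \partial \mathcal M \to \mathcal N \text{ Borel-measurable} \,:\, \mathcal E^{1-1/p,p}(u) < \infty \Bigr\},$$
$$\mathcal E^{1-1/p,p}(u) := \int_{\partial \mathcal M} \int_{\partial \mathcal M} \frac{d\bigl(u(x), u(y)\bigr)^p}{d(x,y)^{m-2+p}} \,\mathrm d x \,\mathrm d y,$$
where $m = \dim \mathcal M$ and $d$ denotes the geodesic distance on $\mathcal N$ and on $\partial \mathcal M$, respectively.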
This approach is motivated by the fact that the integrand in the Gagliardo energy appearing in the definition (1.1) of the fractional Sobolev space can be interpreted as a minimal energy in $\dot W^{1,p}$ of one-dimensional extensions. Because of the quantitative nature of the phenomenon of analytical obstructions, we expect any characterization of the trace space to have some quantitative character. Finally, a workable characterization should be based on a robust definition of a generic lower-dimensional set, as developed as topological screening by P. Bousquet, A. Ponce, and J. Van Schaftingen [4].

We first consider the case where the domain manifold $\mathcal M$ is the $m$-dimensional half-space $\mathbb R^m_+$. In order to formulate our results, we settle some terminology and notation. We assume that the simplexes of a simplicial complex inherit the metric and the measure from their canonical realization as an equilateral simplex of side-length 1; on the full complex $\Sigma$, a measure is then defined by additivity, together with a distance $d_\Sigma$. Given simplicial complexes $\Sigma$ and $\Sigma_0$ and $\lambda > 0$, we define the quantity $\gamma^\lambda_{\Sigma_0,\Sigma}$ by (1.2), with the measure in the numerator taken relative to $\Sigma$ and that in the denominator relative to $\Sigma_0$; we note that $\gamma^\lambda_{\Sigma_0,\Sigma} < \infty$ for every $\lambda > 0$ whenever $\Sigma$ is a finite homogeneous simplicial complex and $\Sigma_0 \subset \Sigma$ is a homogeneous simplicial complex of codimension 1; (1.2) is reminiscent of an Ahlfors upper codimension-1 bound (see [16,17,22], where a doubling condition is imposed separately). The quantity $|\sigma|_{\mathrm{Lip}}$ denotes the Lipschitz constant of the map $\sigma \colon \Sigma \to \mathbb R^m_+$,
$$|\sigma|_{\mathrm{Lip}} := \sup_{\substack{x,\,y \in \Sigma \\ x \neq y}} \frac{|\sigma(x) - \sigma(y)|}{d_\Sigma(x,y)}.$$
We define the canonical cubication of the half-space $\mathbb R^m_+$ of size $\kappa$ as follows: for $\ell \in \{0, \dots, m\}$ and $\kappa > 0$, we write $\mathcal Q^{\kappa,\ell}$ for the collection of $\ell$-dimensional faces of the cubes of size $\kappa$, we let $C^{\kappa,\ell}_+$ denote the corresponding $\ell$-dimensional skeleton in $\overline{\mathbb R^m_+}$, and we let $E^{\kappa,\ell}$ denote the dual skeleton. We will state our results for a mapping $u$ from $\partial \mathbb R^m_+$ defined everywhere, following [13, p. 66; 24, p. 5]; in other words, we do not consider equivalence classes of functions equal almost everywhere. (Otherwise, given any $\sigma$ or any $h$, there exists a map equal almost everywhere that satisfies (ii) or (iii) in Theorem 1.1 by being constant on the set $\sigma(\Sigma_0)$ or on the corresponding cubical skeleton.) We obtain the following characterization of traces. (iii) There exist a constant $\theta > 0$, a sequence $(\kappa_i)_{i\in\mathbb N}$ in $(0,\infty)$ converging to 0, and sets

The function $w$ appearing in (ii) can be interpreted as an extension energy density; the mappings $\sigma$ can be interpreted as generalized paths going through $\mathbb R^m_+$. In the paths condition (ii), we emphasize that, as in singular homology, we do not assume anything about the local or global injectivity of $\sigma$ (the map $\sigma$ could even take a constant value where $w$ is finite, in which case $u \circ \sigma|_{\Sigma_0}$ is of course trivially extended by a constant) and that no Jacobian appears in $\int_{\Sigma_0} w \circ \sigma$: we are integrating $w \circ \sigma$ on $\Sigma_0$ rather than integrating $w$ on the set $\sigma(\Sigma_0)$. Assertion (iii) is very rigid because of the presence of a cubication, whereas assertion (ii) is very robust (it is invariant under diffeomorphisms whose derivative and inverse derivative are uniformly controlled) and is thus a natural candidate for a geometrical characterization of the image of the trace operator. In broad terms, the proof of Theorem 1.1 consists in deducing (ii) from (i) by a Fubini-type argument (Section 2.1), (iii) from (ii) by particularization to families of translations of canonical cubical complexes (Section 2.2), and (i) from (iii) by defining homogeneous extensions on cubical skeletons (Section 2.3). We next give a geometric statement of Theorem 1.1 on manifolds.
Here we have defined $d_0 \colon \Sigma \to \mathbb R$ to be the distance to $\Sigma_0$ in $\Sigma$; the quantity $\sup_\Sigma d_0$ quantifies how far points in $\Sigma$ can be from $\Sigma_0$. In comparison with Theorem 1.1, the map $\sigma$ is assumed to satisfy the nonlinear conditions $\sigma(\Sigma) \subseteq \mathcal M$ and $\sigma(\Sigma_0) \subseteq \partial \mathcal M$. The proof of Theorem 1.2 is based on the proof of Theorem 1.1 through suitable localization arguments. Finally, as in Isobe's characterization by ($o_A$) and ($o_B$), the obstruction to the extension can be decoupled into a quantitative obstruction to the extension to a neighbourhood of the boundary and a qualitative obstruction to the extension to the whole manifold. There exists a summable function $w \colon \partial \mathcal M \to [0,\infty]$ such that for every finite homogeneous simplicial complex $\Sigma$ of dimension $\lfloor p \rfloor$, every subcomplex $\Sigma_0 \subset \Sigma$ of codimension 1, and every Lipschitz-continuous mapping $\sigma \colon \Sigma \to \mathcal M$ satisfying $\int_{\Sigma_0} w \circ \sigma < \infty$, one has assertions (a) and (b).

Assertion (a) differs from the condition of Theorem 1.2 by the fact that in (a) we assume the stronger hypothesis that $\sigma(\Sigma) \subseteq \partial \mathcal M$ instead of the weaker hypothesis that $\sigma(\Sigma_0) \subseteq \partial \mathcal M$ and $\sigma(\Sigma) \subseteq \mathcal M$, resulting in a weaker condition; in order to keep the equivalence, we supplement (a) with (b), which is a reformulation of Isobe's condition ($o_B$) as a condition on paths, as it appears in topological screening for the approximation of Sobolev mappings [4]. In the particular case where $p \in \mathbb N$, assertion (b) is equivalent to the fact that $u \circ \sigma|_{\Sigma_0}$ is almost everywhere equal to the restriction to $\Sigma_0$ of some $V \in C(\Sigma, \mathcal N)$. In contrast with Theorem 1.2, Theorem 1.3 does not give a quantitative estimate; such an estimate is precluded by the qualitative character of assertion (b).

There exists $w$ with the following property: suppose that $\Sigma$ is a finite homogeneous simplicial complex, that $\Sigma_0 \subset \Sigma$ is a subcomplex of codimension 1, that the map $\sigma \colon \Sigma \to \mathbb R^m_+$ is Lipschitz-continuous and satisfies $\sigma(\Sigma_0) \subset \partial \mathbb R^m_+$, and that $\int_{\Sigma_0} w \circ \sigma < \infty$. The conclusion of Proposition 2.1, where the complex $\Sigma$ has arbitrary dimension, is slightly stronger than (ii) in Theorem 1.1, where the dimension of $\Sigma$ is $\lfloor p \rfloor$. Here and in the sequel, for $0 < \eta < 1$ and $\rho > 0$, we define the solid spherical cap $C^\eta_\rho$ and note its measure. The main ingredient of the proof is the following integration lemma. We recall that the quantity $\gamma^\lambda_{\Sigma_0,\Sigma}$ was defined in (1.2).

Proof of Lemma 2.2. By a change of variables, in view of (2.2) and of the non-negativity of the last component $\sigma_m$ of $\sigma$, where $\tau > 1$ is to be chosen later (see (2.10) below). Since, by assumption, $\sigma_m(y) \ge 0$ and $\sigma(\Sigma_0) \subseteq \partial \mathbb R^m_+$, we estimate by (2.4). We now estimate the innermost integral on the right-hand side of (2.5). Since the set $\Sigma_0$ is compact, for every $y \in \Sigma$ there exists a nearest point of $\Sigma_0$. It follows from (2.7) and (2.8) that for every $x \in \mathbb R^m_+$ and $z \in \Sigma_0$, recalling that by assumption $(\eta\lambda - 1)\rho - |\sigma|_{\mathrm{Lip}} > 0$, we set (2.10), so that one can directly check it in view of (2.6). Combining (2.5) with (2.9) and (2.11), we conclude.

Proof of Proposition 2.1. Without loss of generality, we assume that, by (2.15), (2.12), and (2.13), we may define the function $w$; we then have (2.16), (2.17), and (2.14). We assume now that $\sigma$ is as in the statement. For each $\xi \in C^\eta_\rho$, we define the map $\sigma_\xi$. We claim that the map $V := U \circ \sigma_\xi$ satisfies the conclusion for a suitable $\xi \in C^\eta_\rho$. Indeed, by Lemma 2.2, we fix such a $\xi$ for the remainder of the proof.
Since we have assumed that the energy of U on R^m_+ is finite, by Lipschitz-continuity of σ_ξ and smoothness of U_j we have D(U_j ∘ σ_ξ) = DU_j(σ_ξ) · Dσ_ξ almost everywhere (here, by · we mean the composition of differentials as linear mappings or, equivalently, the multiplication of the Jacobian matrices), and thus (2.24) follows by (2.23). Thus, by (2.23) and (2.24), we obtain the announced estimate. In order to conclude, we note that since ρ = 2|σ|_Lip/(ηλ − 1) and ξ ∈ C^η_ρ, the Lipschitz constant of σ_ξ is controlled, which gives, by (2.22), the conclusion. We take η := (λ + 1)/(2λ) and multiply w by a suitable constant, so that (2.1) holds.

2.2. From paths to cubical meshes. The implication (ii) ⇒ (iii) in Theorem 1.1 will follow from the next proposition. By a classical realization of cubes as simplicial complexes, we can assume that Σ^j is a simplicial complex of dimension ℓ and that Σ^j_0 is a simplicial subcomplex of Σ^j of codimension 1. Moreover, we observe that for every λ > 1 the quantities γ^λ_{Σ^j_0,Σ^j} remain bounded. By assumption, we have (2.26) and (2.27). By assumption, there exists a map W^j ∈ Ẇ^{1,p}(Σ^j, N) such that tr_{Σ^j_0} W^j = u ∘ σ^j|_{Σ^j_0}. In view of (2.26) and (2.27), we have (2.28). In view of (2.28) and by weak compactness in Sobolev spaces, up to a subsequence we may pass to the limit.

2.3. From cubical meshes to the half-space. We now prove the implication (iii) ⇒ (i) in Theorem 1.1.

Step 1. Construction of U^{j,h} by homogeneous extension. For each j ∈ N and every h ∈ H^j, we define the map U^{j,h} : R^m_+ → N by a homogeneous extension. In order to define the extension we begin by introducing a retraction of R^m_+. For ℓ ∈ {1, . . . , m}, we define the mapping P_{κ,ℓ} : C^{κ,ℓ}_+ → C^{κ,ℓ−1}_+ to be the homogeneous retraction defined on each cube Q ∈ Q^{κ,ℓ} in the following way: let x_Q be the center of the cube Q, so that Q ∩ E^{κ,m−ℓ} = {x_Q} (note that when Q ∩ ∂R^m_+ ≠ ∅ then Q ∩ R^m_+ is a half-cube and x_Q ∈ ∂R^m_+). On this cube the map P_{κ,ℓ} : Q \ {x_Q} → ∂Q (with the boundary taken in the ℓ-dimensional affine plane containing Q) is given by the formula

P_{κ,ℓ}(x) := x_Q + (κ/2) (x − x_Q)/|x − x_Q|_∞,

the sup-norm being computed in the coordinates of the ℓ-plane containing Q. We now define P^{κ,ℓ} : R^m_+ \ E^{κ,m−ℓ−1} → C^{κ,ℓ}_+ by

(2.33) P^{κ,ℓ} := P_{κ,ℓ+1} ∘ · · · ∘ P_{κ,m}.

(The map P^{κ,ℓ} is illustrated in Figure 1.) For any h ∈ H^j, we define U^{j,h} : R^m_+ → N for almost every x ∈ R^m_+ by (2.34), composing with the translated retraction, where V^{j,h} ∈ Ẇ^{1,p}(C^{κ_j,⌊p⌋}_+ + h, N) is a map given by the assumptions, whose trace coincides with u on the boundary part of the translated skeleton.

Step 2. Uniform boundedness in L^p of the gradients. We prove that when h ∈ H^j, the sequence (U^{j,h})_{j∈N} remains bounded in Ẇ^{1,p}(R^m_+). We begin with a well-known lemma.

Lemma 2.5. If ℓ ∈ N, p < ℓ and V ∈ Ẇ^{1,p}(C^{κ,ℓ−1}_+, N), then V ∘ P_{κ,ℓ} ∈ Ẇ^{1,p}(C^{κ,ℓ}_+, N), with a comparable energy estimate.

Lemma 2.5 follows by applying the next Lemma 2.6 to a suitable decomposition of cubes into pyramids (with a factor 2 coming from the fact that, by definition of Ẇ^{1,p}(C^{κ,ℓ−1}_+, N), traces coincide on common faces).

Lemma 2.6. Let ℓ ∈ N and, for κ > 0, consider the homogeneous extension to a pyramid of size κ.

Proof. By Fubini's theorem and the change of variable y′ = κx′/x_ℓ, since p < ℓ, the energy of the homogeneous extension is controlled by the energy of the boundary data.

We pursue the proof of Proposition 2.4. Iterating Lemma 2.5, in view of (2.34), (2.33), and (2.30), we obtain that for every h ∈ H^j we have U^{j,h} ∈ Ẇ^{1,p}(R^m_+, N), with the estimate (2.36); the constants in the estimates depend only on m and p. By the definition of the map U^{j,h} in (2.34) and since U^{j,h} ∈ Ẇ^{1,p}(R^m_+, N) (see (2.36)), for every h ∈ H^j the trace u^{j,h} := tr_{R^{m−1}} U^{j,h} is well defined (2.37). We are going to show that for a suitable choice of h_j ∈ H^j, u^{j,h_j} → u in L^p_loc(R^{m−1}).

Lemma 2.7. Let f : R^ℓ → N be a Borel-measurable function and let Ψ ∈ L^∞(R^ℓ). Assume that for every k ∈ Z^ℓ, Ψ(x + κk) = Ψ(x). Then an averaged estimate over translations holds for every Borel-measurable set.

Lemma 2.7 is reminiscent of the opening of maps [6, Section 1.1] and of the related estimates [5].

Proof.
Since the function Ψ is periodic, we obtain the conclusion of Lemma 2.7 by Fubini's theorem and a change of variable.

Continuing the proof of Proposition 2.4, we set Ψ(x) := x − P^{κ_j,⌊p⌋}(x), so that for every translation h the deviation of u^{j,h} from u is controlled on average. Since u ∈ L^p_loc(R^{m−1}), there exists a sequence (R_j)_{j∈N} diverging to ∞ along which this control is effective. By our assumption (2.29), for j ∈ N large enough we can choose an h_j ∈ H^j ≃ H^j × {0} such that the corresponding averaged estimate holds. We set U_j := U^{j,h_j} (2.38).

Conclusion. By (2.36), the sequence (U_j)_{j∈N} that we defined in (2.38) is bounded in Ẇ^{1,p}(R^m_+, N). Therefore, up to a subsequence, it converges weakly in Ẇ^{1,p}(R^m_+, R^ν) to a map U ∈ Ẇ^{1,p}(R^m_+, R^ν). Since N is compact, by the Rellich–Kondrachov compactness theorem we have strong convergence in L^p(B_R) for every R > 0, which implies, up to a subsequence, convergence almost everywhere; hence U also takes values in the manifold N and thus U ∈ Ẇ^{1,p}(R^m_+, N). Finally, on the boundary we have tr_{R^{m−1}} U_j → tr_{R^{m−1}} U in L^p_loc(∂R^m_+, N) as j → ∞, and thus, in view of (2.37), we conclude by continuity of the trace that tr_{R^{m−1}} U = u.

A qualitative necessary condition. Isobe's characterization of the obstruction to the extension of Sobolev mappings [15] consisted of an analytical obstruction (o_A) and a topological obstruction (o_B). On the other hand, the characterization of Theorem 1.1 is essentially quantitative. As a complement to the proof of Theorem 1.1 and in preparation for the proof of Theorem 1.3, we state and prove the next qualitative necessary condition for the extension (Proposition 2.8). We recall, following [8], that a mapping V : Σ → R^ν belongs to the space VMO(Σ, R^ν) whenever V is Borel-measurable and its averaged mean oscillation on balls of radius r converges to 0 as r → 0.

We recall that Proposition 2.1 gave, under the same assumptions, the conclusion that u ∘ σ|_{Σ_0} = tr_{Σ_0} W for some W ∈ Ẇ^{1,p}(Σ, N). Proposition 2.8 would follow from Proposition 2.1, embeddings of Ẇ^{1,p}(Σ, N) into VMO(Σ, N), and an embedding of Ẇ^{1−1/p,p}(Σ_0, N) into VMO(Σ_0, N), together with a suitable approximation by continuous maps. In order for this approach to work, one would need our assumption dim Σ ≤ p together with a regularity assumption on the simplicial complex: for instance, the embedding theorem fails for a simplicial complex composed of two simplices intersecting on a set of codimension at least p. In order to avoid these technical issues, we follow [4] and give a direct proof of Proposition 2.8.

The global case

In this section we give the proofs of Theorem 1.2 and Theorem 1.3.

Embedding into the half-space. In order to reduce the situation of manifolds to an open subset of a Euclidean half-space, we rely on the following isometric embedding (Proposition 3.1).

Proof. By a collar neighborhood theorem, we can assume that M ⊂ M′, where M′ is a compact Riemannian manifold without boundary and the inclusion is an isometry. We consider a function f : M′ → R such that f^{−1}(0) = ∂M, f^{−1}([0, ∞)) = M, and 0 < |Df| < 1 on M′ with respect to the metric g′ of M′. In particular, g_0 := g′ − Df ⊗ Df also defines a metric. By Nash's embedding theorem, there exist µ ∈ N and an embedding i_0 : M′ → R^{µ−1} which is isometric for the metric g_0. The mapping i′ : M′ → R^µ defined by i′(x) = (i_0(x), f(x)) is then an isometric embedding of M′ endowed with the metric g′, and i := i′|_M : M → R^µ is the required embedding.

Characterization of the trace space. We are now ready to prove Theorem 1.2, characterizing the traces of Sobolev maps between manifolds.
The idea is first to use Proposition 3.1 to pass from maps with a manifold as domain to maps defined on a subset of the Euclidean half-space, by composing the original maps with the retraction, and next to apply to those modified maps a localized version of Theorem 1.1.

Proof of Theorem 1.2. Applying Proposition 3.1, we may assume without loss of generality that the manifold M is identified with its isometric embedding into the half-space R^µ_+ and that Π_M : U → M is the corresponding smooth retraction, where the set U is relatively open in the closed half-space R^µ_+. We define the sets

(3.1) U_0 := U ∩ ∂R^µ_+ and U_+ := U ∩ R^µ_+;

we choose a set V ⊂ R^µ_+, relatively open in R^µ_+, such that M ⊂ V and V̄ ⊂ U, and we define the sets V_0 := V ∩ ∂R^µ_+ and V_+ := V ∩ R^µ_+.

Necessary condition. Fix δ > 0. By assumption there exists a map U ∈ Ẇ^{1,p}(M, N) such that tr_{∂M} U = u. We define the maps Ū := U ∘ Π_M|_{U_+} and ū := u ∘ Π_M|_{U_0}, so that in particular Ū ∈ Ẇ^{1,p}(U_+, N) and tr_{U_0} Ū = ū. We continue by observing that Lemma 2.2 and Proposition 2.1 with m = µ admit localized versions. First, in Lemma 2.2, under the additional assumption that for each y ∈ Σ we have

(3.3) σ(y) + d_0(y) C^η_ρ ⊆ U_+,

the integral in (2.3) can be taken over U_+ instead of R^µ_+, and we thus have (3.4). Indeed, it suffices to observe that the dimension has changed from m to µ and that, in view of the condition (3.3) and of the change of variable x = σ(y) + d_0(y)ξ, the integration domain of all the integrals with respect to x can be restricted to the set U_+. Next, for the localized version of Proposition 2.1, we define the function W̃ : U_+ → [0, ∞] as in (2.15), with R^m_+ replaced by U_+ and U by Ū, and we define w̃ : V_0 → [0, ∞] by (2.17), with the integrals restricted to U_+; we have (3.5). Thus, for any y ∈ Σ we have (3.6), and combining (3.6) with (3.5) we obtain (3.7). This implies, by the choice of the set V, that for a sufficiently large λ > 1 we have, for all y ∈ Σ,

σ(y) + d_0(y) C^η_ρ ⊂ U_+,

and thus condition (3.3) is satisfied. Moreover, since Π_M ∘ σ = σ, the corresponding energies coincide. We now apply the localized Lemma 2.2 and proceed exactly as in the proof of Proposition 2.1, for the Lipschitz-continuous function σ̃. Multiplying w by a suitable constant, we obtain (1.6). This finishes the proof of the necessity part.

Sufficient condition. Let u : ∂M → N and w : ∂M → [0, ∞] with ∫_{∂M} w < ∞ be the Borel-measurable maps given by the assumptions. Since Π_M(U_0) ⊂ ∂M, the map w̃ := w ∘ Π_M : U_0 → [0, ∞] is well-defined. If the mapping σ̃ : Σ → U is Lipschitz-continuous and if we set σ := Π_M ∘ σ̃ : Σ → M, then |σ|_Lip is controlled by |σ̃|_Lip, so that if σ̃(Σ_0) ⊂ V_0, then for δ̃ = δ/C_3 the condition |σ̃|_Lip sup_Σ d_0 ≤ δ̃ implies |σ|_Lip sup_Σ d_0 ≤ δ, and if ∫_{Σ_0} w̃ ∘ σ̃ < ∞, then by assumption there exists a map V ∈ Ẇ^{1,p}(Σ, N) whose trace on Σ_0 is u ∘ σ|_{Σ_0}. Thus, by construction of σ and w̃, the same holds for σ̃ and w̃.

For small enough κ_0 > 0, we have, by construction of U_+ and V_+, the required inclusions. We let Σ^j be a sequence of homogeneous simplicial complexes, Σ^j_0 ⊂ Σ^j a sequence of subcomplexes of codimension 1, and σ^j : Σ^j → W^{j,⌊p⌋} a simplicial parametrization such that the corresponding compatibility conditions hold. We observe that, for every j ∈ N, the Lipschitz constants are controlled, so that taking δ̃ = C_4 we get |σ^j|_Lip sup_{Σ^j} d_0 ≤ δ̃. Moreover, we have for any λ > 0 a uniform bound on γ^λ_{Σ^j_0,Σ^j}. Thus, as in (3.8), we obtain the existence of maps V^j ∈ Ẇ^{1,p}(Σ^j, N).
We may then proceed as in the proofs of Propositions 2.3 and 2.4 to construct a map Ū ∈ Ẇ^{1,p}(V_+, N) such that tr_{V_0} Ū = ū, together with an estimate of its energy on V_+. By Fubini's theorem there is a set of positive measure of h ∈ ∂R^µ_+ such that M + h ⊂ V, Ū|_{M+h} ∈ Ẇ^{1,p}(M + h, N), tr_{∂M+h} Ū|_{M+h} = ū|_{∂M+h}, and the energy of Ū on M + h is controlled. For such an h, we set U := Ū ∘ (Π_M|_{M+h})^{−1} ∈ Ẇ^{1,p}(M, N); we then have tr_{∂M} U = u on ∂M, together with the announced estimate. This finishes the proof of the sufficiency part, for δ = C_3 C_4.

Remark 3.2. In view of (3.4) and (3.12), the infima of ∫_M |DU|^p and ∫_{∂M} w are comparable.

Combining a qualitative and a quantitative condition. In this section we focus on proving Theorem 1.3. Let us first remark that if p ∉ N, then, since dim Σ = ⌊p⌋ and V ∈ W^{1,p}(Σ, N), we obtain by the Morrey–Sobolev embedding and the homotopy extension property that the condition (b) is equivalent to the existence of V ∈ C(Σ, N) such that V|_{Σ_0} = u ∘ σ almost everywhere on Σ_0.

The first tool of the proof is the following proposition about the extension of boundary data already in W^{1,p}(∂M, N) which can be extended trivially to a neighborhood of the boundary: then there exists an extension U ∈ Ẇ^{1,p}(M, N) with tr_{∂M} U = u on ∂M.
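The Morrey–Sobolev embedding invoked above can be made explicit. The following is the standard statement for a compact ⌊p⌋-dimensional complex and is recalled here for convenience only; the constant C depends on Σ and p.

% Morrey-Sobolev embedding: since dim Σ = ⌊p⌋ < p when p ∉ N,
% W^{1,p}(Σ) embeds into a Hölder space, so every such V has a
% continuous representative; this is what reduces condition (b)
% to a purely topological statement.
\[
  \|V\|_{C^{0,\alpha}(\Sigma)} \le C\,\|V\|_{W^{1,p}(\Sigma)},
  \qquad \alpha = 1 - \frac{\lfloor p \rfloor}{p} \in (0,1).
\]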
Rapid vapour deposition and in situ melt crystallization for 1 min fabrication of 10 μm-thick crystalline silicon films with a lateral grain size of over 100 μm

We developed a film deposition method which yielded continuous polycrystalline Si films with large lateral grain sizes of over 100 μm and thicknesses of ∼10 μm in 1 min on growth substrates other than silicon wafers in a single-step process. The silicon source is heated to ∼2000 °C, much higher than the melting point of Si, which enables a high deposition rate. Controlling the temperature of the growth substrate, initially above and later below the melting point of Si, allows the seamless lateral-to-vertical growth of crystalline silicon grains. Thermally and chemically stable substrates of quartz glass and alumina with a 0.1 μm-thick amorphous carbon layer were effective; liquid silicon wetted well by forming a thin SiC interlayer while the substrates stayed stable. Such large-grain polycrystalline silicon films synthesized rapidly in 1 min may be used for low-cost, stable and flexible thin-film photovoltaic cells.

Introduction

Crystalline silicon (Si) wafers and films support our society as key parts of many electronic devices such as semiconductor integrated circuits (ICs), thin-film transistors in flat-panel displays (FPDs) and photovoltaic (PV) cells. The production processes of crystalline Si can be classified into two types; one is wafer production by wire-saw slicing of mono- or polycrystalline ingots made from molten Si,1 and the other is thin-film deposition on growth substrates by chemical vapour deposition (CVD) of Si precursor gases.2,3 Monocrystalline Si wafers, which are high-quality, high-performance and expensive, are indispensable for ICs. Polycrystalline Si wafers or films having larger in-plane grain sizes than thicknesses, which we call large-grain Si hereafter, have been used as fundamental parts of large-area devices such as PV cells and FPDs.4,5

The fabrication processes of large-grain Si films on substrates can be divided by the type of growth substrate: monocrystalline Si substrates and other substrates. Monocrystalline Si films can easily grow on monocrystalline Si substrates through homoepitaxy by CVD, physical vapour deposition, liquid-phase epitaxy and other methods. However, Si films need to be transferred to low-cost carrier substrates (e.g. plastic or glass) in a successive process for cost reduction.3,6,7 The growth substrate and the homoepitaxial film have the same crystal structure, orientation and chemical properties, and therefore it is difficult to separate the film from the growth substrate (i.e. the Si wafer). Sacrificial heteroepitaxial interlayers such as CoSi2 have been reported,8 but the transfer process is complicated. Methods need to be established for easy separation of the films from growth substrates and for reuse of the growth substrates.

Growth substrates other than monocrystalline Si, such as SiO2, are attractive from the viewpoint of the separation of films and substrates. On most such substrates, Si does not grow epitaxially. Si forms either amorphous films on substrates below its crystallization temperature9 or polycrystalline films on substrates above its crystallization temperature.10,11 In polycrystalline film growth, each nucleus has a different orientation and generally grows isotropically, comes in contact with other growing nuclei and forms continuous films.
Then, each nucleus/grain grows in the out-of-plane direction, yielding polycrystalline films with a lateral grain size smaller than the thickness. To obtain polycrystalline films with large grains (i.e. lateral grain size larger than the film thickness), films are post-treated by laser or lamp annealing12-14 to cause lateral grain growth through melting and (re)crystallization processes. In particular, the zone melting recrystallization (ZMR) method can enlarge grains in several-micrometre-thick films from the micrometre scale to the centimetre scale by sweeping a heater, producing several-millimetre-wide molten zones at sweep rates of a few centimetres per minute.15 However, the ZMR process is accompanied by agglomeration of molten Si, the so-called balling-up effect, as a result of surface tension, and therefore Si films have to be covered with a SiO2 capping layer to prevent agglomeration.15,16 Among these processes, the excimer laser annealing process has been used for the production of FPDs.5 However, none of these methods have been practically used for the production of low-cost PV cells. The ribbon Si process, which coats and crystallizes molten Si on heat-resistant substrates, can be used to fabricate large-grain polycrystalline Si films of around 100 μm in thickness,17-19 but further thinning is difficult because of the surface tension of molten Si.

In this report, we propose and develop a new method called "rapid vapour deposition (RVD) of liquid Si and in situ melt crystallization", which yields large-grain polycrystalline Si films directly on heat-resistant substrates other than monocrystalline Si in a single-step process. RVD is a vacuum deposition process realizing a high deposition rate of over 10 μm min−1 by heating a Si source in boats to T_boat ∼ 2000 °C, which is much higher than the melting point of Si, T_m = 1414 °C.20 During RVD, the substrate temperature T_sub is intentionally changed in a controlled manner; T_sub is set at >T_m for the first several seconds to deposit Si as a liquid film, subsequently T_sub is decreased to <T_m to nucleate Si crystals and grow them laterally in the liquid film, and finally the film is thickened to ∼10 μm by continuing the deposition of Si vapours. All these steps are completed seamlessly in one process within 1 min, yielding 10 μm-thick crystalline Si films with a lateral grain size of over 100 μm on heat-resistant quartz-glass and alumina substrates with a 0.1 μm-thick amorphous carbon (a-C) adhesive layer.

Experimental

Quartz glass (SiO2; 15 mm square, 0.5 mm in thickness), a monocrystalline (100) Si wafer with a 50 nm-thick thermal oxide layer (SiO2/Si; 20 mm square, 0.65 mm in thickness, Cz-p type, resistivity of 10-20 Ω cm), polycrystalline alumina (Al2O3; 15 mm square, 1 mm in thickness) (Nilaco AL-017518, Tokyo, Japan), or sapphire (c-plane, 20 mm square, 0.3 mm in thickness) (Kyocera, Kyoto, Japan) were used as substrates. The substrates were pre-treated by immersing them in a mixed solution of H2SO4 (95 wt%) and H2O2 (30 wt%) with a volume ratio of 3 : 1 for 5 min, and then rinsing them with purified water. The pre-treated monocrystalline Si substrates were subsequently dipped in HF solution (5 wt%) for 1 min to partially remove the thermal oxide from half of their surface, and were then rinsed with purified water. A 0.1 μm-thick a-C layer was deposited on some substrates by direct-current magnetron sputtering under 2.5 Pa Ar.
For RVD, a (100) Si wafer (3.5 × 30 mm2, 0.65 mm in thickness, Cz-p type, resistivity of 10-20 Ω cm) was used as the vapour deposition source after pre-treatment by immersion in HF solution (5 wt%) for 1 min and rinsing with purified water. Fig. 1 shows a digital image and a schematic of the internal structure of the RVD apparatus. The components were set from bottom to top as follows: two tungsten boats for the Si source, a substrate stage made of quartz glass, a W0.95Re0.05-W0.74Re0.26 thermocouple, a substrate heater made of a graphite sheet and four reflectors made of molybdenum sheets. After setting a substrate on the stage with its surface facing down and the Si source on the boats, the chamber was evacuated to <3 × 10−4 Pa using a turbomolecular pump with an oil rotary pump. The substrate heater was turned on to bring the thermocouple to a target temperature, and then RVD was carried out by resistive heating of the source boats to ∼2000 °C in several seconds. Si was deposited for 1 min, after which heating of the source boats was turned off to finish the deposition. The structure of the Si films was analysed by scanning electron microscopy (SEM; Hitachi S-4800, Tokyo, Japan) and X-ray diffraction (XRD; Rigaku RINT-TTR III, Tokyo, Japan) with a monochromatised CuKα X-ray source.

Results and discussion

The proposed model with typical results of the deposition method and comparison with conventional methods

Fig. 2 shows a schematic comparing the film growth processes of our deposition method and the conventional deposition method. In common methods such as vacuum deposition or CVD, a substrate is kept at a constant temperature with T_sub < T_m during deposition (Fig. 2b). On a substrate on which epitaxial growth does not occur (e.g. SiO2), Si atoms form small nuclei that grow in the out-of-plane direction and form columnar grains. Grains oriented in the fastest growth direction grow preferentially, often resulting in increasing surface roughness with film growth. In our method, by contrast, Si is deposited on a substrate set at T_sub > T_m, condensing on the substrate to form a liquid film (Fig. 2a). Although Si re-evaporates from the liquid film, the Si source heated at T_boat ≈ 2000 °C ≫ T_m yields a much higher vapour pressure than the liquid film, realizing a deposition rate much larger than the re-evaporation rate and hence net deposition of the liquid film. Then, we lower the T_sub value to <T_m to nucleate Si crystallites from the liquid film, inducing lateral grain growth of planar grains in the liquid film, and thicken the film to ∼10 μm by continuing the vapour deposition. Through this growth model, we tried to obtain large-grain polycrystalline Si films in 1 min.

Fig. 3 shows a typical Si film deposited on a 0.1 μm a-C/SiO2 substrate using our method. Fig. 3a shows the time profile of the temperature of the thermocouple, T_TC; the deposition was started at 0 s with T_TC > T_m, T_TC was quickly decreased to T_TC < T_m at 5-20 s by decreasing the input power for the substrate heater, T_TC was kept almost constant for 20-60 s, and then heating of the boats was turned off at 60 s, resulting in a further decrease in T_TC at 60-70 s. The thermocouple was not in contact with the substrate, to avoid a temperature distribution in the substrate, and therefore T_TC has some deviation from T_sub and is used as a reference.
We made a reference experiment to measure T_sub using a 0.1 μm a-C/SiO2 substrate with a Pt-Pt0.87Rh0.13 thermocouple fixed at the centre of the substrate. We carried out the RVD process with the same time profile of input power for the tungsten boats and the carbon heater, but without putting the Si source material in the boats. As shown in Fig. S1 of the ESI, T_sub proved to be 100-150 °C lower than T_TC, and slightly above the melting point of Si (T_m = 1414 °C) when the RVD process was started. T_sub decreased below T_m upon decreasing the heating power for the upper carbon heater at 5 s. The digital image of the sample in Fig. 3b shows that Si was deposited on the inner 13 mm square area of the 15 mm square substrate and showed a silver-grey colour with patterns coming from the 100 μm grains. The surface and cross-sectional SEM images of the Si film in Fig. 3c and d show that a Si film with a lateral grain size of over 100 μm and a thickness of 10 μm was actually obtained rapidly in 1 min by our deposition method. The out-of-plane XRD pattern of the Si film in Fig. 3e shows the diffraction peaks of a diamond structure without any preferred orientation. The intensity ratios were different from the powder pattern and changed with the measurement point because only a limited number of the large Si grains (over 100 μm) were detected in each XRD measurement.

We next show the growth behaviour of Si films at a temperature of T_TC ∼ 1100 °C < T_m with a fixed substrate heater power. Fig. 4a shows a digital image of the sample deposited on the SiO2/Si(100) substrate with SiO2 removed by HF from half of its surface. The Si film deposited on SiO2 had a cloudy white surface whereas the Si film deposited on Si(100) had a mirror surface. The surface SEM images showed submicrometre-sized grains for the Si film on SiO2 (Fig. 4b) and a flat surface without any texture for the Si film on Si(100) (Fig. 4c). The cross-sectional SEM images showed columnar grains for the Si film on SiO2 (Fig. 4d) and a flat cross-section without any texture or boundary with the substrate for the Si film on Si(100) (Fig. 4e). The low-magnification cross-sectional SEM image near the boundary between SiO2/Si and Si showed a clear change in the Si film structure and rapid deposition of 16 μm in 1 min (Fig. 4f). The out-of-plane XRD spectrum of the Si film on SiO2/Si in Fig. 4g shows a prominent (400) peak coming from the Si(100) substrate with a forbidden (200) peak, which is characteristic of monocrystalline Si.21 In addition, (111), (220) and (331) peaks are observed, and therefore the film is polycrystalline Si without any specific orientation in the out-of-plane direction. The Si film on Si(100) showed the same XRD spectra as the Si(100) substrate; the out-of-plane XRD spectrum showed only (400) and (200) peaks (Fig. 4h) and the Φ scan of the (022) diffraction showed a fourfold symmetry (Fig. 4i). These results prove that the Si film grew by homoepitaxy on the Si(100) substrate. Our RVD method realized the rapid growth of Si films at 16 μm min−1 owing to the high T_boat ∼ 2000 °C: as a homoepitaxial film on the Si(100) surface, possibly owing to the high mobility of Si adatoms at the high substrate temperature, and as a columnar polycrystalline film on a SiO2 surface.
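The decisive role of the source temperature can be sanity-checked with a simple Clausius-Clapeyron estimate. The sketch below is not part of the original analysis: the enthalpy of vaporization of liquid Si is an assumed representative literature-style value (≈359 kJ mol−1), so the result should be read only as an order-of-magnitude check of the claim that the source at T_boat ≈ 2000 °C dominates re-evaporation from the liquid film at ∼T_m.

import math

R = 8.314          # gas constant, J/(mol K)
dH_vap = 359e3     # assumed enthalpy of vaporization of liquid Si, J/mol
T_boat = 2000 + 273.15   # source temperature, K
T_film = 1414 + 273.15   # liquid-film temperature (~melting point of Si), K

# Clausius-Clapeyron: p(T) ~ exp(-dH_vap/(R*T)), so the ratio of equilibrium
# vapour pressures between the source and the liquid film is
ratio = math.exp(-dH_vap / R * (1.0 / T_boat - 1.0 / T_film))
print(f"p(source)/p(film) ~ {ratio:.0f}")   # roughly 7 x 10^2

Even with generous uncertainty in the assumed enthalpy, the source-to-film pressure ratio is two to three orders of magnitude, consistent with the net growth of the liquid film described above.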
As mentioned above, compared with the conventional vapour deposition method at a constant substrate temperature below the melting point, our method can yield Si films with a significantly different structure, with planar large grains, by changing the substrate temperature, starting from T_TC (∼T_sub) > T_m and decreasing to T_TC (∼T_sub) < T_m during deposition. The conditions for large-grain polycrystalline Si films are discussed in detail below.

Effect of cooling rate on the grain sizes of the Si films deposited on a-C/SiO2 substrates

Si films were deposited on a-C/SiO2 substrates with different cooling rates. Deposition was started with T_TC > T_m and continued at the fixed substrate heater power for 5 s, and then the heater power was decreased at three different rates (Fig. 5a). The digital images of the surface (Fig. 5b-d) showed shinier surfaces with coarser patterns for the samples deposited with slower cooling rates. The SEM images of the surface (Fig. 5e-g) showed polycrystalline films with lateral grain sizes changing with the cooling rates, from ∼50 μm by fast cooling to >100 μm by slow cooling. The grain size of the deposited Si films changed with the cooling rate, similar to ordinary crystal growth from molten Si.

Si film growth on SiO2 substrates without the a-C layer

Fig. 6 shows a Si film deposited on the SiO2 substrate without the surface a-C layer. In the digital image (Fig. 6a), the part surrounded by red broken lines is the deposited area; however, a continuous Si film remained only on the grey-coloured central region of a 10 mm circle. A spherical particle with a diameter of ∼1 mm was found at the centre, which should have solidified from a Si droplet repelled from the SiO2 substrate. The SEM images of the surface and cross-section of the Si film at the central region showed a polycrystalline film with grain sizes as small as several micrometres (Fig. 6b and c). In the digital image (Fig. 6a), transparent SiO2 was exposed at the outer region. The surface and cross-sectional SEM images showed small spherical Si particles of ∼10 μm in diameter. Liquid Si is easily repelled from, and becomes discontinuous on, substrates with poor wettability. Conversely, the SiO2 substrates with a 0.1 μm-thick a-C surface layer yielded large-grain Si films uniformly over the deposited area, as shown in Fig. 3 and 5.

Next, we examined the very early stage of Si deposition on a-C/SiO2 to check the behaviour of Si on the a-C layer. We deposited Si for only 5 s, quickly cooled down the substrate, and then analysed the sample by SEM and XRD, as shown in Fig. 7. There are many droplet-like Si grains with smaller heights than lateral dimensions and a much smaller wetting angle of ∼50° (Fig. 7c) than on the SiO2 substrate without an a-C layer (Fig. 6e). The thin Si layer might have de-wetted during the cooling-down process in this experiment of 5 s deposition. During actual deposition for 1 min, however, Si was depositing rapidly on the Si layer, and the thickening Si layer formed a continuous film without de-wetting on the substrate. Fig. 7d is the grazing-incidence XRD spectrum of this sample, which clearly shows the formation of β-SiC. Contact angles were previously reported for liquid Si on vitreous carbon (a-C) to be 40-50° at 1426 °C (ref. 22) and 36° at 1430 °C,23 which agree well with our observation (∼50°, Fig. 7c).
A 0-100 nm a-C layer was previously applied to a microcrystalline alumina substrate to improve the wettability of a Ni-63 at% Si alloy, where the formation of SiC was key to the improved wettability.24 Thus, we conclude that a-C reacted with liquid Si and formed a β-SiC layer, on which liquid Si wetted well. As discussed above, Si formed continuous films with large grains on a-C because of the good wettability of Si with a-C. In contrast, Si had poor wettability with SiO2, yielding the repelled Si droplets and the discontinuous film of solidified Si particles on the SiO2 substrate without an a-C layer. Because liquid Si reacts with an a-C layer to form SiC, it is important that the a-C layer be thin enough (0.1 μm) not to consume much Si in this reaction.

Effects of substrate materials: comparison of SiO2 and Al2O3 substrates

Residual stress between films and substrates can cause breaking and/or delamination of the films. In our method, a mismatch of the linear thermal expansion between the Si films and the substrates causes thermal stress during the cooling process. The linear thermal expansions of Si, fused SiO2, polycrystalline Al2O3, and a-axis Al2O3 in the temperature range of 20-1500 °C were estimated using previously reported equations26 and are summarized in Fig. S2. The figure clearly shows that fused SiO2 has the smallest value, Si an intermediate value and Al2O3 the largest value. Fig. 8 shows the Si films formed on a-C/SiO2 and a-C/Al2O3 substrates. Linear cracks can be seen on a-C/SiO2 at intervals of several tens of micrometres, whereas no cracks were observed on a-C/Al2O3. Tensile stress acted on the Si film on a-C/SiO2 during cooling, resulting in crack formation. It is therefore important to choose appropriate substrates with linear thermal expansion similar to or somewhat larger than that of Si to obtain crack-free continuous Si films.

Conclusions

We proposed a film deposition method in which Si is rapidly deposited on a substrate heated above the melting point of Si and the substrate is subsequently cooled down below the melting point while maintaining rapid deposition. The method yielded continuous polycrystalline Si films with large lateral grain sizes of over 100 μm and thicknesses of ∼10 μm in 1 min in a single process on SiO2 and Al2O3 substrates with a 0.1 μm-thick a-C layer. To realize such films without using monocrystalline Si substrates, the choice of substrates is important. The substrates need to be thermally stable at temperatures above the Si melting point, chemically stable enough against liquid Si to be reusable, and, in addition, need to have good wettability with liquid Si and to apply small or no tensile stress to the resulting Si films. An Al2O3 substrate with an a-C layer was a good combination, yielding crack-free continuous Si films with large grains owing to the good wettability of liquid Si with a-C via the reaction forming SiC and the larger linear thermal expansion of Al2O3 than Si (and thus compressive stress instead of tensile stress acting on the resulting Si films). To practically produce Si films for real applications such as PV cells, how to avoid contamination of the Si films at high temperature and how to scale up the process are the crucial issues. The maximum substrate temperature was slightly above the melting point of Si (Fig. S1). This shows that our process reaches a temperature similar to, but in a much shorter time than, the ingot production process.
Al is an effective dopant for making a back-surface field in bulk Si PV cells27 and thus the Al2O3 substrate would be promising in avoiding the negative effects of contamination. For scale-up, we are planning a continuous process carrying growth substrates over a Si melt bath. It may be possible to directly convert Si melt in a crucible to Si films on Al2O3 with a takt time of ∼1 min, instead of making and slicing Si ingots into thick wafers over many hours or days. There is a long way to go to reach such a goal, but this work should be an important first step. As the next step, we are now working on a method to transfer large-grain Si films from a growth substrate to device substrates (glass and/or plastic) and to reuse the growth substrate, to achieve practical low-cost production of such Si films.
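The cracking behaviour discussed above can likewise be rationalized with a first-order thermal-mismatch estimate. The sketch below is illustrative only: the mean expansion coefficients and the biaxial modulus of Si are assumed round-number values, not the temperature-dependent data of ref. 26.

# Rough thermal-mismatch strain/stress in a Si film cooled from T_m to room
# temperature on a thick substrate. Positive stress = tensile in the film.
alpha = {                 # assumed mean linear expansion coefficients, 1/K
    "Si": 3.5e-6,         # representative mean value over 20-1414 C
    "fused SiO2": 0.6e-6,
    "Al2O3": 8.0e-6,
}
E_biaxial_Si = 180e9      # assumed biaxial modulus E/(1 - nu) of Si, Pa
dT = 1414 - 20            # cooling range, K

for sub in ("fused SiO2", "Al2O3"):
    strain = (alpha["Si"] - alpha[sub]) * dT   # film contracts more -> tensile
    stress = E_biaxial_Si * strain
    print(f"{sub:10s}: strain = {strain:+.2e}, stress = {stress / 1e9:+.2f} GPa")
# fused SiO2 -> ~ +0.7 GPa (tensile, cracks); Al2O3 -> compressive (crack-free)

The sign of the estimated stress, tensile on fused silica and compressive on alumina, matches the observation of cracks on a-C/SiO2 and crack-free films on a-C/Al2O3.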
Polylithiated (OLi2) functionalized graphane as a potential hydrogen storage material

The hydrogen storage capacity, stability, bonding mechanism and electronic structure of polylithiated-molecule (OLi2) functionalized graphane (CH) have been studied by means of first-principles density functional theory (DFT). Molecular dynamics (MD) has confirmed the stability, while Bader charge analysis describes the bonding mechanism of OLi2 with CH. The binding energy of OLi2 on the CH sheet has been found to be large enough to ensure its uniform distribution without any clustering. It has been found that each OLi2 unit can adsorb up to six H2 molecules, resulting in a storage capacity of 12.90 wt% with adsorption energies within the range of practical H2 storage applications.

Introduction

The consumption of energy is increasing at a rapid pace and is predicted to almost double over the next few decades. The current resources of energy are also decreasing with each passing day. The CO2 emissions caused by the extensive use of fossil fuels result in global warming and have devastating effects on the atmosphere. So, there is a strong need for alternative sources of energy which are safe, efficient, abundantly available, and environmentally friendly.1 Hydrogen could be one of the best available choices as a promising energy carrier due to its abundant availability, high energy density, environmental friendliness and low cost.2-5 However, the gaseous nature of hydrogen and the unavailability of storage media for practical applications restrict its use as an energy carrier in fuel cells. Different storage media have been considered for efficient H2 storage in the recent past, and carbon-based nanostructures are considered to be the most promising materials.6-9 Along with their countless other applications, carbonaceous nanomaterials including fullerenes, carbon nanotubes (CNTs), graphene etc. have been used extensively for energy applications, especially hydrogen storage purposes.10-12 The greatest advantages of using carbon-based materials are their light weight and low cost. Alkali metals can be good dopants on carbon nanostructures owing to their low cohesive energy (E_coh ∼ 1 eV)8 and uniform distribution on the surface.

Graphene, having sp2 bonding and possessing unusual electrical, unique mechanical, and extraordinary optical properties, is the most important member of the carbon nanostructure family. It has opened up many windows right after its experimental isolation.16 In the recent past, graphene has been the subject of many studies regarding potential media for H2 storage (e.g., Ataca et al.17). Graphane has been verified both theoretically and experimentally.20,21 The advantage of using graphane as a substrate to bind metal adatoms for storing hydrogen is the strong metal-graphane bonding. There are a few studies describing the strong graphane-metal interaction and its usefulness in H2 storage applications.22,23 Along with the importance of adatom-substrate binding, the interaction of H2 with the adatom (metal) is also very important for a good storage material; lighter elements are preferable in this regard so as to keep the gravimetric capacity high.

Results and discussion

First of all, the geometry of pure graphane is discussed briefly. Fig. 1(a, b) shows the side and top views of the optimized structure of graphane. When the structure is fully optimized, the C-C and C-H bonds are 1.53 Å and 1.12 Å, respectively, which is in good agreement with previous studies.20 Now two out of the eight hydrogen atoms on the graphane sheet are substituted with OLi2 molecules, one each on the (+Z) and (−Z) sides.
This results in a 25% doping concentration of OLi2. Fig. 2a shows the optimized structure of the CHOLi2 system. Van der Waals forces are considered to be responsible for the attraction between the Li+ ions and the H2 molecules. In order to have maximum storage capacity, the H2 molecules should bind to the Li+ ions at physisorption distances and maintain a reasonable separation among themselves so that mutual repulsion is avoided. We have found that at most three H2 molecules can be adsorbed on each Li+ ion in the CHOLi2 system. This results in a very high storage capacity of 12.90 wt%, which is well beyond the DOE target to be attained by 2017. Fig. 2(b, c, d) shows the optimized geometry of CHOLi2 with H2 molecules physisorbed on it. The adsorption energy ∆E_ads of the H2 molecules can be calculated by

∆E_ads = [E(CHOLi2 + nH2) − E(CHOLi2) − nE(H2)]/n    (2)

where ∆E_ads is the average adsorption energy when n H2 molecules are adsorbed on the CHOLi2 sheet, E(CHOLi2) is the energy of the CHOLi2 sheet without H2 molecules and E(H2) is the energy of a single H2 molecule. Table 1 shows the complete results describing the adsorption energies ∆E_ads (eV) of the H2 molecules adsorbed on CHOLi2 and the average H-H bond length ∆d (Å). For reliable results, and to avoid the overestimation of LDA and the underestimation of GGA, we have also employed a van der Waals dispersion correction in our calculations. The consistency in the values of ∆E_ads and ∆d, regardless of the XC functional, is clear from the Table.

Conclusions

By using first-principles calculations, we have predicted that polylithiated (OLi2) functionalized graphane can serve as a fascinating material for high-capacity H2 storage. The structure of CHOLi2 is stable, and the large binding energy of OLi2 on the CH sheet will induce its uniform distribution on the sheet. Bader charge analysis indicates the polar nature of the C-O and Li-O bonds. It has been found that the partially charged Li+ ions of each OLi2 molecule can adsorb up to six H2 molecules in total, resulting in a very high storage capacity of 12.90 wt%. The average adsorption energies of the H2 molecules have been found to be within the range of practical applications.
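As a consistency check on the reported capacity, the gravimetric density implied by the structure described above (a C8H8 cell with two H atoms replaced by OLi2 units, each unit binding six H2) can be computed directly. This back-of-the-envelope check is ours, not part of the original DFT calculations; standard atomic masses are assumed.

# Gravimetric H2 capacity of C8H6(OLi2)2 with 12 adsorbed H2 molecules.
m = {"C": 12.011, "H": 1.008, "O": 15.999, "Li": 6.941}  # atomic masses, amu

host = 8 * m["C"] + 6 * m["H"] + 2 * (m["O"] + 2 * m["Li"])  # C8H6(OLi2)2
n_H2 = 2 * 6                    # two OLi2 units, six H2 molecules each
m_H2 = n_H2 * 2 * m["H"]        # mass of the adsorbed hydrogen

wt_pct = 100 * m_H2 / (host + m_H2)
print(f"capacity ~ {wt_pct:.2f} wt%")   # ~13 wt%

The result, ≈13.0 wt%, agrees with the reported 12.90 wt% to within the rounding of the atomic masses used.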
Key Challenges of Cloud Computing Resource Allocation in Small and Medium Enterprises

Although cloud computing offers many benefits, such as flexibility, scalability, and profitability, some small and medium enterprises (SMEs) are still unable to fully utilize cloud resources, such as memory, computing power, storage, and network bandwidth. This reduces their productivity and increases their expenses. Therefore, the central objective of this paper was to examine the key challenges related to the allocation of cloud computing resources in small and medium enterprises. The method used for this study is based upon qualitative research using 12 interviews with 12 owners, managers, and experts in cloud computing in four countries: the United States of America, the United Kingdom, India, and Pakistan. Our results, based on our empirical data, show 11 key barriers to resource allocation in cloud computing, classified based on the technology, organization, and environment (TOE) framework. Theoretically, this research contributes to the body of knowledge concerning cloud computing technology and offers valuable understanding of the cloud computing resource allocation approaches employed by small and medium enterprises (SMEs). In practice, this research is useful to aid SMEs in implementing successful and sustainable strategies for allocating cloud computing resources.

Introduction

The world of computing has entered a new phase with the development of cloud computing and services, which offer several advantages such as flexibility, scalability, and profitability. The process of Resource Allocation in Cloud Computing (RACC) [1] involves the distribution of resources among different users and applications in the cloud. These resources include memory, computing power, storage, and network bandwidth. However, allocating cloud resources to meet the demands of many organizations, such as small and medium enterprises (SMEs), is a major concern, and these organizations encounter a multitude of obstacles in managing these resources [2]. Therefore, it is necessary to overcome these challenges to enhance cloud computing sustainability by reducing the operational costs and the accompanying carbon footprint of such massive equipment, which compromise the sustainability of cloud services.

SMEs that fail to use efficient resource allocation practices and procedures may encounter diminished productivity and heightened expenses [3]. Therefore, the success and sustainability of SMEs rely heavily on the efficient deployment of resources. However, several obstacles, such as financial limitations, the restricted availability of skilled staff, and the integration of novel technology, have negative effects on the RACC process [4]. Therefore, policymakers, SME owners, managers, and technology experts need to investigate potential solutions [5], practices, and procedures that aid their organizations in enhancing resource usage and attaining their desired outcomes. In this regard, this paper aims to fill this gap by conducting an empirical investigation into RACC to shed light on the obstacles encountered by SMEs in their pursuit of effective RACC.

Several challenges that SMEs experience throughout the resource allocation process are highlighted in the literature. Ref.
[6] introduced the financial aspect of RACC and proposed an economic resource allocation method to increase its efficacy by forecasting resource allocation requests using a heuristic method. However, this type of research focused only on a specific issue and did not cover other challenges.

Ref. [7] proposed a resource allocation and relocation method based on power conversion efficiency. The goal was to reduce the energy consumption of RACC while maintaining acceptable performance levels. Organizations may thus improve their sustainability efforts, lower their carbon footprint, and achieve greater energy utilization in cloud settings by implementing such approaches. However, this study did not focus on other challenges, especially operating expenses.

Ref. [8] suggested a method to locate the best resources for each piece of work being performed in real time by extending a flexible algorithm. Although this method was effective in maximizing resource use, it ignored the energy component. Similarly, ref. [9] developed an approach based on a general method for upgrading virtual machine resources. However, although this method's short execution time and low energy usage were its strongest points, the cost was still substantial.

Ref. [10] suggested a scheduling-based heuristic resource allocation strategy. Although the suggested method reduced costs and met the quality of service (QoS) requirements, it left out considerations of resource utilization and execution time. Overall, these studies focused on finding solutions to specific issues in RACC and ignored other aspects. On the other hand, other research works have attempted to study the challenges of RACC; for example, cost, security, technical skill, top management support, and complexity are a few of the significant aspects that ref. [11] identified as affecting resource allocation in cloud computing. Similarly, ref. [12] emphasized the importance of senior management support, comparative advantage, company scale, competitive pressure, and pressure from trade partners when it comes to cloud computing. Moreover, ref. [13] highlighted how important training programs are for helping workers learn more about using cloud computing resources. Additionally, ref. [14] identified determinants of RACC in SMEs, including compatibility, relative advantage, firm size, uncertainty, trialability, and top management support. In addition, ref. [3] highlighted factors such as usability, convenience, security, privacy, and cost reduction as crucial considerations for the use of cloud resources by SMEs. Ref. [15] found that an organization's limited knowledge of cloud computing technology acted as a challenge. In the context of secure cloud computing, ref. [16] discussed societal issues such as trust, privacy, and user behavior, as well as technological factors such as scalability, reliability, encryption, data rights, and transparency. Moreover, ref. [17] discussed the financial advantages of cloud resource optimization, such as decreased up-front capital expenses and increased resource-use efficiency. However, these studies ignored some important obstacles for RACC in SMEs. In addition, there is a lack of clear classification of the different types of challenges.
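To make the algorithmic side of this literature concrete, the sketch below illustrates the general shape of a scheduling-based heuristic of the kind cited in refs. [8,10]: a greedy first-fit-decreasing placement of tasks onto virtual machines. It is a generic illustration written for this discussion, not the actual algorithm of any cited work; the task demands and VM capacities are made-up example values.

# Greedy first-fit-decreasing placement: sort tasks by demand, then put each
# task on the first VM with enough remaining capacity (a common heuristic
# shape for cost-aware cloud resource allocation).
def allocate(tasks: dict[str, float], vm_capacity: dict[str, float]) -> dict[str, str]:
    remaining = dict(vm_capacity)
    placement: dict[str, str] = {}
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        for vm, free in remaining.items():
            if free >= demand:
                placement[task] = vm
                remaining[vm] = free - demand
                break
        else:
            placement[task] = "unallocated"  # would trigger scale-out or queueing
    return placement

# Hypothetical workload: CPU demand in vCPUs per task, capacity per VM.
print(allocate({"t1": 3.0, "t2": 1.5, "t3": 2.0}, {"vm1": 4.0, "vm2": 2.0}))

In a production setting this simple loop would be extended with QoS constraints, energy terms, or cost models, which is precisely where the cited studies diverge from one another.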
Understanding these challenges enables SMEs to develop tailored strategies to enhance their competitiveness, cost-efficiency, and overall operational effectiveness in today's dynamic digital landscape. In addition, this study reveals efficient resource allocation practices. Furthermore, it ascertains the variables that exert influence on these practices. As a result, this will help academics, practitioners, experts, and managers enhance their knowledge of RACC. The significance of this study lies in its potential to support SMEs in achieving success, thereby contributing to the development of a robust and sustainable global economy. Moreover, this study applies the technological, organizational, and environmental (TOE) framework [18] to categorize the challenges. The TOE framework describes the components that influence RACC and has been widely used to explore the obstacles of technology adoption. Tornatzky and Fleischer [19] state that the TOE framework captures the challenges and opportunities of technological innovation. In their book, the technological perspective embodies the technological obstacles faced by an organization using technology, such as security and scalability. The organizational view covers organizational weaknesses such as a lack of sufficient skills. The environmental perspective examines the challenges of the environment in which the organization carries out essential services, such as laws and regulations.

To address these research objectives, a qualitative research method is used in this study to answer the following research question: "What are the challenges that affect cloud computing resource allocation in Small and Medium Enterprises (SMEs)?" Semi-structured interviews were used to collect data from 12 owners, managers, and experts in SMEs in four countries: the United States of America, the United Kingdom, India, and Pakistan. In addition, the data collected were analyzed using a thematic analysis approach.

Materials and Methods

This section outlines the research methodology used to address the research objectives and questions. It is divided into five sub-sections that cover the research approach, sample selection, data collection methods, data analysis process, and ethical considerations. The next subsection discusses the overall approach used to guide the study.

Research Approach

This study utilized a qualitative research methodology which involved the conducting of semi-structured interviews. The utilization of semi-structured interviews in this study allows for a high degree of research flexibility and adaptability [20]. In addition, the use of this particular methodology enables the researcher to delve further into the perspectives and attitudes of the participants, thereby facilitating a more comprehensive understanding of their viewpoints [20]. Moreover, the employment of the qualitative technique enables the identification of patterns and themes within the acquired data, facilitating the derivation of pertinent inferences regarding the obstacles associated with resource allocation in SMEs.
Participants

Data for this research on RACC obstacles were acquired through purposive sampling [21]. Focusing on individuals with extensive experience in managing cloud computing resources significantly increases the likelihood of obtaining valuable information; thus, we can ensure the gathering of high-quality data [21]. In addition, purposive sampling, at its core, assists in selecting respondents aligned with the study's objectives, leading to a clear and thorough understanding of the subject [22].

Furthermore, in this study, participants were specifically selected using purposive sampling based on their relevant experience, qualities, and roles, to enhance the study findings and contribute to a meaningful understanding of RACC. Therefore, 12 participants with experience in managing cloud computing resources in SMEs were included, representing a varied range of populations, industries, and positions. The participants were from the USA, the UK, India, and Pakistan, and held roles such as system manager, web server administrator, DevOps engineer, head of cloud computing, and team manager. Table 1 gives an overview of the demographic information and characteristics of the interview participants.

Data Collection

The semi-structured interview guide was carefully crafted to study the barriers to RACC flexibly and comprehensively. Ref. [23] has set out a systematic, five-step method for creating such a guide: identifying the prerequisites for employing semi-structured interviews, retrieving and utilizing prior knowledge, formulating a preliminary guide, pilot testing, and presenting the finished guide.

This methodology facilitated an exhaustive investigation of the subject while upholding the emphasis on the distinct experiences of everybody involved. There were twenty primary open-ended questions and eight supplementary open-ended sub-questions spread throughout six sub-domains in the guide. Sub-questions were used when a participant's answer to the main question did not fully address particular subjects of interest. The interview process prioritized the significance of the participants' narratives over rigorous adherence to the question order, even though all respondents were asked identical questions. This adaptable strategy improved the data-gathering process and made it easier to successfully record individual experiences [24]. Furthermore, the interview guide was distributed to the participants along with the invitation to the interview, enabling them to become acquainted with the subjects and organize their ideas beforehand. This method made the interview process more efficient and engaging [25].
This study's semi-structured interview data-gathering process, conducted between 1 January 2024 and 15 March 2024, provided a strong basis for reliable analysis and results. The interviews were performed via online meetings in English, which made communication with the participants easy. This strategy allowed the researcher to reach individuals around the globe and to collect a wide range of information and experiences that greatly enriched the dataset for the study. By drawing on the perspectives of a geographically dispersed participant pool, the study gains a thorough grasp of the subject matter, which ultimately strengthens its credibility and persuasive power. Avrio, an AI-powered transcription tool, was used to record and transcribe all interviews to ensure a high level of accuracy in data acquisition. This technology produced verbatim transcriptions of the interviews, accurately capturing the participants' words and presenting the material in a readable manner. Any unnecessary oral fillers, inaudible parts, or overlapping speech were identified and suitably documented using a standard transcription technique [26]. A cautious approach was used when addressing private and delicate material, using either replacement words or the complete omission of the information. This strategy ensured that the essence of the interviewees' thoughts was retained while respecting their privacy and adhering to ethical guidelines for research.

For the task of thematic analysis, NVivo was used [27]. NVivo, a popular and reliable qualitative data analysis tool, offers strong capabilities for managing, analyzing, and displaying textual data. Its features made it possible to find patterns, themes, and insights in the interview transcripts quickly and effectively, resulting in a thorough and nuanced grasp of the subject [27]. This meticulous method of gathering and analyzing data strengthens both the credibility and the persuasiveness of the study.

The full list of questions and sub-questions for the semi-structured interview can be found in Appendix A of this study.

Data Analysis

A thematic analysis was carried out as part of the methodology's data analysis phase to find patterns and themes that emerged from the transcripts of the interviews. A codebook created through a typical iterative procedure served as guidance for this analysis process [26]. The following six steps were involved in this process:

1. Familiarization with the data: In this step, the data were studied for greater familiarity, and the interview transcripts were reread. Also, pertinent research on RACC was analyzed. This step assisted in gaining a comprehensive comprehension of the data and in identifying initial impressions and ideas [28].

2. Generating initial codes: In this step, data were systematically coded by identifying and labeling meaningful units of information related to RACC challenges. This required highlighting sentences, phrases, or paragraphs that encapsulated essential concepts or ideas. The codes were created based on the study's research question and objectives [14].

3. Searching for themes: After generating the initial codes, they were classified into various themes. Patterns, connections, and relationships between identifiers were sought to identify the data's overarching themes. This procedure entailed sifting and reorganizing codes into meaningful clusters [29].
4. Reviewing and refining themes: To ensure that the identified themes accurately represented the data and conveyed the essence of the participants' experiences and perspectives, they were reviewed and refined. Each theme and its corresponding codes were critically examined and any necessary adjustments were made [29].

5. Creating a thematic relation: To visualize the relationships between themes, a thematic relation was created. This relation illustrated how the different themes were interconnected and related to one another, highlighting the main findings of the analysis [29].

6. Reporting: The results of the thematic analysis were conveyed clearly and concisely. The themes and their supporting evidence were organized into a coherent narrative, with statements or passages from the interviews used to illustrate key points. The findings were then discussed in relation to the existing literature and used to answer the research questions and achieve the study's objectives [14].

Results

RACC in small and medium-sized businesses is primarily challenged by 15 themes, as determined by the data analysis. These themes were then categorized based on the TOE framework; see Table 2. For ethical reasons and to protect the anonymity of the participants, the participants are numbered from 1 to 12 (P1-P12). The following subsections provide comprehensive details on each theme. Table 2 lists the obstacles for RACC in SMEs (n = 12), including items such as inadequate training and development programs for employees, monitoring resource usage and performance, and scalability and performance.

Technological Barriers

The first context in the TOE framework is technological barriers. This theme consisted of four sub-themes: (1) lack of knowledge; (2) lack of expertise; (3) network performance; and (4) optimization.

Lack of Knowledge

One of the important challenges extracted from the participants' explanations of the technological barriers to efficient RACC in SMEs was the lack of knowledge of cloud computing technology.

The participants (30%) emphasized that a lack of understanding of and familiarity with cloud computing technology hampered their SMEs' willingness to accept cloud solutions. Furthermore, one of the gaps mentioned by the participants was not fully understanding the benefits, hazards, and applications of cloud computing in relation to resource allocation objectives. For instance, a participant (P2) stated that knowledge of programming languages along with APIs is very important for RACC:

"I guess, having knowledge of programming languages and APIs for cloud applications is crucial when it comes to automating resource allocation. Being proficient in programming languages allows us to develop scripts and applications that can automate the process of allocating resources in the cloud. In my opinion, understanding various APIs helps us interact with cloud services and efficiently manage resource allocation. It's an important skillset for streamlining the allocation process and maximizing the benefits of cloud computing."
Additionally, the participants referred to other aspects of the lack-of-knowledge issue. For instance, a participant (P6) emphasized basic domain knowledge and the state of the art of RACC:

"Well, domain knowledge is essential when it comes to working with cloud computing. Having a solid understanding of the concepts, principles, and practices in the field allows us to make informed decisions and effectively utilize cloud resources. In my experience, being familiar with various cloud computing software and services is crucial."

Moreover, a participant (P10) considered that knowledge of DevOps and the networking domain is important for RACC operations:

"In my opinion, knowing DevOps and the networking domain is a plus point. Along with that, being familiar with regularly used software development-related tools is also beneficial. I also think that these skills and knowledge areas can greatly enhance an individual's ability to effectively allocate resources in cloud computing."

Lack of Expertise

Another key challenge stated by half of the participants (50%) was a lack of expertise. The participants highlighted the technical expertise required, such as knowledge of cloud computing architecture, virtualization, storage, security, databases, etc. For instance, a participant (P3) said:

"I think, to effectively allocate resources using cloud computing, technical expertise is required in cloud computing architecture, cloud service providers, virtualization, networking, storage and databases, monitoring and management, security and compliance, and programming and automation. In my point of view, proficiency in these areas is necessary to ensure efficient and effective resource allocation in a cloud environment."

Further, a participant (P4) highlighted five key areas of technical expertise that are required for effective RACC:

"Certainly! There are five essential technical expertise areas in cloud computing. First, we have on-demand self-service, which means users can access and provision computing resources as needed without the need for human intervention. Second, there's broad network access, allowing users to access cloud services and applications over the internet from various devices. Third, we have resource pooling, where multiple users share and allocate resources dynamically to meet their individual needs. Fourth, rapid elasticity enables the quick and seamless scaling of resources up or down based on demand. And finally, measured service allows for monitoring and billing based on actual resource usage. These capabilities are fundamental in the world of cloud computing."

Similarly, a participant (P5) emphasized that expertise in DevOps and cloud architecture is important for successful RACC:

"As per my knowledge, to effectively allocate resources using cloud computing, strong technical expertise in several areas is essential. These include cloud architecture, cloud security, DevOps, automation, and orchestration. It's important to have a team of experts who possess the skills and knowledge required to handle these aspects and ensure efficient resource allocation using cloud computing technologies."

Moreover, a participant (P7) referred to other areas of expertise that are needed, such as architecture, automation, server management, etc.:
"Yes, it's important to have a strong grasp of various areas.These include cloud architecture, which involves designing and managing cloud-based systems and services.I guess, containerization is also crucial for efficient deployment and management of applications.Cloud automation is another essential skill, enabling streamlined and automated resource allocation and management," Further, a participant (P9) believed that expertise in lowering the cost and increasing the efficiency of resource allocation is the key: "Certainly!When it comes to resource allocation in the cloud, it's crucial to employ effective techniques for optimizing data management and costing.By strategically managing resources and implementing cost-effective strategies, organizations can ensure efficient allocation of resources in the cloud, leading to improved performance and cost savings." Network Performance According to the participants, network performance is an important obstacle to implementing efficient RACC in SMEs.In this regard, 40% of participants referred to network challenges such as cloud network infrastructure configuration.In addition, the participants emphasized the significance of minimizing latency and ensuring optimal network performance for improved application outcomes; for example, a participant (P9) stated that: "Well, to be honest, I think in my experience network configuration is a big challenge in resource allocation." In addition, a participant (P10) emphasized network latency and the way to deal with it: "I guess to deal with the network traffic we should make the application utilize less resources and the latency.So, if there is a network between traffic managers there should be very little latency and we performance get the best results, accurate results, and then go faster." Similarly, a participant (P11) highlighted the need for a network connection between on-premises and cloud resources, which required a significant amount of time and effort to resolve: "Since I am working in cloud computing I think establishing a network connection between on-premises and cloud resources was a bit challenging and we had to spend a long weekend to sort out this problem." Optimization The participants (50%) explained the complexity of the RACC optimization as one of the obstacles against the successful implementation of resource allocation in cloud computing by SMEs.In this regard, a participant (P10) suggested the need to explore insights related to memory utilization to achieve the better deployment of applications, particularly with the recent use of microservice technology: "So, normally we have the option to explore insights which utilization and, memory utilization.So, when we deploy the application, as recently the microservice technology."Furthermore, a participant (P5) emphasized the importance of cost and usage optimization to meet customer requirements: "optimization is very important because Customers want to accomplish their objectives with less cost.Some of the frequent challenges we face are cost and usage optimization." Another participant (P3) also confirmed this point and referred to load balancing, resources, and network optimization as an important challenge: "Yes, exactly and these are a few critical aspects of cloud resource allocation, and there are several ways to address these challenges Load balancing, Resource optimization, and Network optimization." 
Security and Privacy

The participants (40%) considered security and the preservation of privacy a challenging issue. In this context, a participant (P3) mentioned several security issues that are central to consider when implementing RACC, such as selecting suitable cloud service providers and architectures and ensuring compatibility and integration with existing IT systems:

"I think of few, one of the majors is in resource allocation include ensuring data security and privacy, selecting the appropriate cloud service provider and cloud architecture, and ensuring compatibility and integration with existing IT systems."

Another participant (P4) also confirmed this point and stated:

"I have seen the results from different sectors and results show that the factors of compatibility, security, and trust, as well as a lower level of complexity, lead to a more positive attitude towards cloud adoption."

Moreover, a participant (P9) considered that regulatory compliance requirements, security concerns, and the availability of technical expertise are additional factors that contribute to the challenge of security and privacy in RACC:

"Yes, there are other factors as well and it may include regulatory compliance requirements, security concerns, and the availability of technical expertise."

Organizational Challenges

The second context in the TOE framework is the organizational barriers associated with RACC in SMEs. This theme consisted of three sub-themes: (1) cost efficiency; (2) inadequate training and development programs for employees; and (3) monitoring resource usage and performance.

Cost Efficiency

The participants (30%) identified cost efficiency as one of the key obstacles to RACC in SMEs. In this regard, a participant (P4) emphasized the need for SMEs to carefully evaluate pricing structures, monitor resource utilization, and implement cost-effective strategies:

"Yes absolutely, I think the cost factor is one of the most important factors that you should consider when choosing a Cloud Service Provider. Pricing plays an important role in deciding which cloud service provider you should choose for your business requirements."

Confirming this point, a participant (P5) said:

"Well, in my company we ensure that cloud resources are utilized effectively by closely monitoring usage and optimizing costs."

In addition, the participants mentioned the slow process of achieving cost efficiency as a problem. In this regard, a participant (P8) explained:

"As far as I know, it's a slow process but in the long run it will help to reduce the cost on the infrastructure side."

Inadequate Training and Development Programs for Employees

A lack of suitable training and development programs for employees was identified as a challenge by 80% of participants. These issues must therefore be addressed by instituting comprehensive training and development programs to remedy the lack of skills and knowledge in RACC. In this context, a participant (P10) stated that:

"Yes actually, there are many, we have optional training every time. Such as we have a community practice share, so, um, we're mostly looking into Java, so we're migrating to the how can utilize this framework and programming language, community practice every to that. We get some to get this outside this certified and once clear it can be reimbursed."
Additionally, the participants emphasized the importance of training on server maintenance, load balancing, and selecting the right instance types to improve scalability and performance. In this regard, a participant (P2) said:

"Yes, of course, we provide training to employees about server maintenance, load balancing, or choosing the right instance types for better scalability and performance."

Furthermore, the participants noted that adequate training had improved the adoption and use of resources in the cloud, resulting in greater efficiency and effectiveness in achieving organizational objectives; a participant (P3) stated that:

"In my company, we have provided the training to the employees, this improved adoption and utilization of cloud resources, as well as increased efficiency and effectiveness in achieving organizational objectives."

Monitoring Resource Usage and Performance

Another obstacle reported by the participants (50%) was monitoring resource utilization and performance. SMEs therefore need to adopt strong monitoring mechanisms and performance management methods to meet the difficulty of monitoring resource consumption and performance in RACC; a participant (P3) stated that:

"To be honest, I think monitoring performance is a side-by-side goal to achieve performance from cloud regularly."

In addition, a participant (P3) mentioned the significance of adopting monitoring and management tools to track resource usage and perform capacity planning for optimizing resource allocation:

"In my organization, Monitoring and management tools are used to track resource usage, and capacity planning is performed to optimize resource allocation."

This idea was shared by another participant (P5):

"We try to monitor usage; we ensure that cloud resources are utilized effectively by closely monitoring usage and optimizing costs."

Environmental Challenges

The third context in the TOE framework is the environmental barriers. This theme consisted of three sub-themes: (1) economic factors; (2) market competition; and (3) scalability and performance.

Economic Factors

An important problem stated by the participants was the economic factors associated with RACC in SMEs. In this regard, a participant (P3) stated:

"Cloud computing resource allocation has to be economically efficient. There are several economic benefits of using cloud computing for resource allocation, including reduced upfront capital costs, lower ongoing operational costs, and improved resource utilization efficiency."

Additionally, the participants reported the reduced cost achieved through cloud computing resources by decreasing the need for hardware and software investments; a participant (P5) said:

"I guess the reason why cloud computing is famous these because of its economic benefits. The economic benefits of using cloud computing for resource allocation are significant. It allows us to achieve cost savings by reducing the need for hardware and software investments."

Furthermore, the participants reported the financial advantage of cloud computing compared to on-premises infrastructure, drawing an analogy of renting a car instead of purchasing one to enjoy a ride within budget; in this regard, a participant (P6) noted that:

"Cloud computing is economical in terms of cost compared to on-premises infrastructure, for example, if you want to ride a car and you don't have the budget to buy a car you can simply rent a car and enjoy your ride."
Market Competition

According to the participants (40%), competition in the market is a challenge to RACC in SMEs. To stay competitive, participants emphasized how market dynamics and competitive pressures affect the decision-making process for allocating cloud resources. They also underscored how crucial it is to stay up to date with market trends. In this regard, a participant (P5) said:

"Yes, it's a very important factor and we ensure compliance with all relevant regulations and consider market competition when selecting cloud service providers."

In addition, the participants noted the significance of business needs, regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR), economic conditions, and technological innovations in the context of market competition; a participant (P7) said:

"I guess, Business needs Regulations such as HIPAA, PCI DSS, and GDPR, Market competition such as AI or machine learning, Economic conditions, and Technology innovations."

Further, the participants highlighted the influence of market competition on organizations, driving them to adopt cloud-based solutions as a means to gain a competitive advantage; in this regard, a participant (P9) said:

"Obviously yes, market competition can drive organizations to adopt cloud-based solutions to gain a competitive advantage."

Scalability and Performance

According to the interview results, 80% of respondents identified scalability and performance as a challenge. These findings highlight the need to ensure that cloud resources can expand successfully to meet variable demands, as well as to optimize performance to ensure efficient and responsive cloud services. A few of the responses are given below.

The participants emphasized that scalability is a feature of the cloud and a primary driver of its popularity among businesses; a participant (P4) said:

"If you ask me, I guess, scalability is one of the hallmarks of the cloud and the primary driver of its exploding popularity with businesses."

Additionally, the participants noted the importance of leveraging the latest technology and best practices to tackle scalability and performance challenges in allocating cloud resources; a participant (P5) noted that:

"We leverage the latest technology and best practices to address scalability and performance challenges in allocating cloud resources."

Moreover, the participants highlighted the importance of adopting best practices such as auto-scaling, load balancing, and right-sizing to address the challenges of scalability and performance (a toy auto-scaling sketch follows below); a participant (P9) noted that:

"I have different ways to this, for example, to address the challenges of scalability and performance in cloud resource allocation, organizations should adopt best practices such as auto-scaling, load balancing, and right-sizing."
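As a hedged illustration of the auto-scaling best practice P9 mentions, the toy function below adjusts a replica count against CPU-utilization thresholds. The thresholds, bounds, and sample values are illustrative assumptions, not recommendations derived from the study.

```python
def desired_replicas(current: int, cpu_avg: float,
                     scale_out_at: float = 75.0, scale_in_at: float = 25.0,
                     min_n: int = 1, max_n: int = 10) -> int:
    """Return the next replica count for an average CPU utilization sample."""
    if cpu_avg > scale_out_at:
        return min(current + 1, max_n)   # add capacity under sustained load
    if cpu_avg < scale_in_at:
        return max(current - 1, min_n)   # shed idle capacity to save cost
    return current

# Example run: scale out on the 82% sample, scale back in on the 20% sample.
n = 2
for cpu in (40.0, 82.0, 50.0, 20.0):
    n = desired_replicas(n, cpu)
print(n)  # replica count evolves 2 -> 2 -> 3 -> 3 -> 2
```

A production auto-scaler would add cooldown periods and smoothed metrics so that a single noisy sample does not cause thrashing; the sketch omits these for brevity.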
Discussion and Future Directions

The main aim of this research was to explore the challenges related to cloud computing resource allocation in SMEs. A total of 11 challenges were identified by the 12 participants, as shown in Table 2. These challenges were divided into three contexts based on the TOE framework: technological, organizational, and environmental. The technological challenges were (1) lack of expertise, (2) lack of knowledge, (3) network performance, (4) optimization, and (5) security and privacy. The organizational challenges were (6) cost efficiency, (7) inadequate training and development programs for employees, and (8) monitoring resource usage and performance. Additionally, in the environmental context, there were (9) economic factors, (10) market competition, and (11) scalability and performance; see Figure 1.

The current findings of this study are consistent with previous research, which has mostly focused on technological challenges to successful RACC in SMEs [30]. However, it is important to note that this emphasis on technological challenges highlights a gap in understanding the importance of the organizational and environmental challenges that impact the efficiency of RACC in SMEs. For instance, inadequate training in organizations, in particular, is an organizational challenge to effective RACC in SMEs [31]. This leads to a lack of experience and awareness regarding RACC, which may damage many different aspects of the resource allocation process, including resource management [32,33]. Therefore, SMEs can harness the full potential of cloud computing and improve their resource allocation practices to foster innovation and competitiveness in the digital world by recognizing and overcoming these challenges. As a result, future studies should focus on exploring and overcoming the organizational and environmental impediments to successful RACC in SMEs.

The current study's findings revealed that participants drew more attention to technological challenges than to the other two contexts, with five technological challenges, three organizational challenges, and three environmental challenges, as shown in Figure 2. This may be explained by the participants' experiences in certain contexts; see Table 1, where eight out of twelve participants are technical experts. In addition, this could be due to the technological innovation of cloud computing and the diversity in resources and their configurations.
Similar to the findings reported by ref. [34], the participants of the current study recognized security and privacy as being among the most challenging concerns in RACC in SMEs. This stems from the lack of appropriate security skills. In addition, preserving privacy requires obeying strict data protection laws such as HIPAA and GDPR, which demand adequate skills to control which categories of information should be stored in the cloud, which can be accessed by the public, and which data must be kept secret. Therefore, the most confronting issues reported by participants were security and privacy and lack of expertise; see Figure 1. However, despite the negative impacts of a lack of expertise, and its solid association with other concerns, very few research works have addressed these problems [35]. Therefore, further studies are recommended on these two issues.
Furthermore, the participants in the current study considered that their SMEs may lose control of their assets and resources if they depend on cloud service providers to decide the security of resources. This is because SMEs depend on the provider's skills, rules, and techniques for securing systems. This is in line with the findings of refs. [36]-[38], which also reported that a lack of control can lead to problems with service uptime and customization, and with not being able to fix speed or security problems directly. Moreover, this study showed that when organizations rely on a single cloud service provider for resource allocation, they run into the challenge of vendor lock-in. When it becomes difficult or expensive to transfer to a different cloud provider or bring the services back in-house, this is known as vendor lock-in [39]. The participants of the current study stressed that a lack of interoperability standards and proprietary technologies can restrict an organization's adaptability and ability to negotiate better terms or adapt to changing business requirements. In addition, when allocating cloud resources, organizations in regulated industries confront compliance and legal risks. Therefore, consideration must be given to industry-specific compliance requirements and contractual obligations. Failure to comply with applicable regulations or contractual obligations may result in legal repercussions, monetary penalties, reputational harm, and a loss of consumer confidence [40].

The optimization of RACC was reported as an important challenge for SMEs. This matches the findings of other studies. For example, ref. [17] discussed the advantages of resource optimization in cloud computing, such as decreased up-front capital expenses and increased resource use efficiency. Confirming this point, ref. [33] underlined the necessity for SMEs to carefully monitor resource utilization and optimize expenses. The findings of this study also show how important cloud computing training programs are for helping workers learn how to use cloud resources efficiently. This is in line with the findings of refs. [13,41], which also reported that training has a positive impact on cloud resource usage.

The integration of advanced technologies such as machine learning, artificial intelligence, and automation has the potential to significantly enhance resource allocation procedures within small and medium enterprises (SMEs). Using established principles and optimization algorithms, these technologies can effectively analyze historical data, accurately forecast resource demands, and seamlessly automate the allocation process (a toy illustration of the forecast-then-allocate pattern follows below). Future research may investigate the viability and effectiveness of incorporating these technologies into existing resource allocation practices within SMEs. This exploration could potentially aid in fostering more intelligent and efficient decision-making processes. In addition, future research endeavors may be directed toward the development of analytics-driven resource allocation models and tools, specifically designed to cater to the needs of SMEs, allowing them to effectively monitor and optimize their cloud resources in real time.
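To give the forecast-then-allocate idea above a concrete shape, the sketch below uses a simple moving average as the demand predictor; it stands in for the richer ML/AI models the text envisions. All numbers and the headroom factor are illustrative assumptions.

```python
import math

def forecast_next(demand_history: list[float], window: int = 3) -> float:
    """Moving-average forecast of the next interval's demand."""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

def capacity_for(forecast: float, headroom: float = 1.2,
                 unit: float = 100.0) -> int:
    """Number of fixed-size units needed to cover the forecast plus headroom."""
    return math.ceil(forecast * headroom / unit)

history = [320.0, 410.0, 380.0]   # e.g., requests/sec over recent intervals
f = forecast_next(history)        # (320 + 410 + 380) / 3 = 370.0
print(capacity_for(f))            # ceil(370 * 1.2 / 100) = 5 units
```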
Furthermore, it is worth noting that collaboration and knowledge-sharing practices within small and medium enterprises (SMEs) have the potential to greatly contribute to the improvement of resource allocation in the context of RACC. The establishment of communities of practice, industry networks, and knowledge-sharing platforms has been identified as a potential strategy to facilitate knowledge exchange among SMEs. By leveraging these collaborative mechanisms, SMEs have the opportunity to learn from one another's experiences, share best practices, and collectively address challenges related to resource allocation. Future research endeavors may examine the feasibility of these collaborative methodologies and construct conceptual frameworks that can effectively foster the exchange of knowledge and facilitate collaboration among SMEs within the realm of RACC.

The scope of this study is limited to the key challenges of RACC in SMEs. However, even though large enterprises encounter some of these issues, such as security, privacy, and cost efficiency, they are managed differently. For example, big businesses use intricate, customized cloud system solutions to achieve long-term objectives, utilizing vast resources to satisfy a range of requirements. On the other hand, due to their smaller size and lack of resources, SMEs frequently choose simple, affordable solutions that concentrate on operational requirements and more basic business models. In addition, big businesses frequently choose to form internal teams of specialists dedicated to cloud adoption. The reasoning behind this is that these businesses typically have intricate, highly specialized needs that are insufficiently satisfied by off-the-shelf services. In contrast, SMEs usually go in a different direction and outsource cloud administration. This is because their top priority is cost-effectiveness, and they might not need the same tailored solutions that major enterprises do. Furthermore, when implementing cloud services, large organizations frequently devote a significant amount of IT spending to performance and customization. SMEs, on the other hand, typically focus their IT budget on affordable, basic cloud services and economize on areas such as security services, which introduces more vulnerabilities. However, further studies are needed to dive deeply into these challenges from the large enterprise perspective.

Challenges and Limitations

The current study has several limitations that warrant acknowledgment. The study's sample size was relatively small, potentially limiting the generalizability of the findings. It is crucial to acknowledge that qualitative studies, due to their inherent characteristics, do not strive for generalizability. Consequently, it is inappropriate to assume that the findings can be universally applied to all organizations across various contexts [42].

In addition, it is important to note that the study exclusively utilized a single research method for data collection, without integrating additional complementary methodologies to corroborate and substantiate the obtained results. Implementing a mixed-method methodology, which integrates both qualitative and quantitative data, would have further enhanced the internal validity. The use of diverse data collection methodologies would have augmented the researchers' capacity to ascertain the dependability and credibility of the gathered data.
Furthermore, the present study aimed to investigate the participants' subjective viewpoints and personal experiences of the obstacles encountered during the implementation of cloud computing resource allocation techniques within small and medium enterprises (SMEs). Nevertheless, it is crucial to acknowledge the inherent difficulties associated with evaluating the objectivity and neutrality of participants' responses in studies of this nature. It is imperative to consider that the descriptions provided may potentially be influenced by various biases. Hence, it is recommended that additional surveys be conducted to investigate the perspectives of owners, managers, and employees regarding the allocation of cloud computing resources in SMEs. These surveys should specifically focus on the design and implementation stages of cloud technology applications. The inclusion of surveys would enhance the comprehensiveness of the data collection process, thereby contributing to the validation of the research findings.

Conclusions

Although cloud resources can yield several benefits, the management of these resources in SMEs remains a challenging issue. This study presents a deep exploration, identification, and categorization of the perceptions of managers and experts toward the barriers to the allocation of resources in SMEs. The study has revealed that several barriers have caused inefficient resource allocation in SMEs. The findings revealed 11 challenges categorized into three perspectives, technological, organizational, and environmental, based on the TOE framework. The technological obstacles are as follows: (1) lack of expertise, (2) lack of knowledge, (3) network performance, (4) optimization, and (5) security and privacy. The organizational challenges are (6) cost efficiency, (7) inadequate training and development programs for employees, and (8) monitoring resource usage and performance. Additionally, in the environmental context, there are (9) economic factors, (10) market competition, and (11) scalability and performance.

In comparison to the organizational and environmental challenges, this study indicated that participants paid greater attention to technological obstacles. This highlights a gap that could have a negative impact on the efficiency of resource allocation in cloud computing for SMEs. Therefore, further research is required from an organizational and environmental perspective.

Figure 1. The challenges reported by the study's participants.
Figure 2. The number of challenges identified in each context of the TOE framework.

Appendix A. Semi-structured interview questions.
Section 1:
• How long have you been in operation?
• How many employees does your business have?
• What types of cloud computing-related services are currently being provided by your business?
Section 2: Technology:
• How would you describe your experience with cloud computing?
• What are some of the challenges you have faced when implementing cloud computing for resource allocation?
• What factors influence your decision to adopt cloud computing for resource allocation?
• What kind of technical expertise do you need to effectively allocate resources using cloud computing?
Section 3: Organization:
• How has the adoption of cloud computing for resource allocation affected the organization's overall operations?
• How do you ensure that the cloud resources are utilized effectively to achieve organizational objectives?
• Have you provided any training to employees regarding the use of cloud computing? If so, how effective was it?
Section 4: Environment:
• What external factors (e.g., regulations, market competition) affect your cloud resource allocation decisions?
• What are the economic benefits of using cloud computing for resource allocation?
• How do you address the challenges of scalability and performance in cloud resource allocation?
• What economic factors do you consider when selecting a cloud computing service provider?
Conclusion:
• Is there anything else you would like to add about your experiences with resource allocation in cloud computing?
• Thank you for your time and contributions to the study.

Table 1. Key summary details for each of the 12 interviewees.
Porphyromonas gingivalis-Induced GEF Dock180 Activation by Src/PKCδ-Dependent Phosphorylation Mediates PLCγ2 Amplification in Salivary Gland Acinar Cells: Modulatory Effect of Ghrelin

Phospholipase Cγ2 (PLCγ2) plays a pivotal role in mediation of the inflammatory reaction to bacterial lipopolysaccharide (LPS) and serves as a key target in the modulatory influence of the hormone ghrelin. Here we explore the involvement of Rac1 and its activator, the guanine nucleotide exchange factor (GEF) Dock180, in mediation of PLCγ2 activation in salivary gland acinar cells in response to P. gingivalis LPS and ghrelin. We show that stimulation of the acinar cells with the LPS leads to up-regulation in Dock180 and PLCγ2 activation, reflected in the membrane translocation of Rac1 and PLCγ2, while the effect of ghrelin is manifested by suppression in Rac1 translocation. Further, we reveal that stimulation with the LPS leads to Dock180 phosphorylation on Tyr and Ser, while the modulatory influence of ghrelin, manifested by a drop in membrane Rac1-GTP, is associated with a distinct decrease in Dock180 phosphorylation on Ser. Moreover, we demonstrate that phosphorylation on Tyr remains under the control of Src kinase and is accompanied by Dock180 membrane translocation, while protein kinase Cδ (PKCδ) is involved in the LPS-induced phosphorylation of the membrane-recruited Dock180 on Ser. Thus, our findings underscore the role of Src/PKCδ-mediated GEF Dock180 phosphorylation on Tyr/Ser in modulation of salivary gland acinar cell PLCγ2 activation in response to P. gingivalis as well as ghrelin.

Introduction

Porphyromonas gingivalis, a Gram-negative bacterium found in the periodontal pockets of people with gum disease, is recognized as a major culprit in the etiology of periodontitis, a chronic inflammatory condition that affects about 15% of the adult population and is a major cause of adult tooth loss [1] [2]. The oral mucosal responses to P. gingivalis and its key endotoxin, cell wall lipopolysaccharide (LPS), are characterized by disturbances in NO signaling pathways, a massive rise in epithelial cell apoptosis, and an increase in proinflammatory cytokine production [3]-[6]. Studies into the events underlying proinflammatory signal regulation indicate that P. gingivalis LPS, like the LPS of other Gram-negative bacteria, is capable of Toll-like receptor 4 (TLR4) ligation, resulting in receptor dimerization, followed by TLR4 autophosphorylation at several critical Tyr residues that are essential for the initiation of downstream signaling events [3] [7]-[9]. The key element of this signaling is the receptor-mediated recruitment of phosphoinositide-specific phospholipase C (PLC), which catalyzes the formation of the second messengers, inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG), from membrane phosphatidylinositol 4,5-bisphosphate (PIP2) [10]-[13].
Moreover, PLC activation not only plays a major role in defining the extent of the inflammatory response to LPS, but is also considered a primary target in the modulatory influence of the hormone ghrelin on mucosal responses to bacterial invasion [6] [9] [13]-[15]. This 28-amino-acid peptide, initially isolated from the stomach [16] and subsequently identified in oral mucosa, saliva, and the acinar cells of salivary glands [17], is commonly recognized as an important modulator of the processes of mucosal repair and the control of local inflammatory responses to bacterial infection. Indeed, engagement by ghrelin of the growth hormone secretagogue receptor type 1a (GHS-R1a), a G protein-coupled receptor (GPCR), leads to activation of heterotrimeric G protein-dependent signal transduction pathways, including PLC/PKC, PI3K, and Src/Akt, implicated in signaling to the NO-generating system [6] [13] [14] [18] [19].

The PLC-generated second messengers have far-reaching regulatory and metabolic roles: DAG is known to stimulate the activity of a variety of enzymes, including PKC, while IP3 is recognized for its role in the regulation of the cytoplasmic calcium concentration [10] [11]. Other data point to the existence of cross talk between PLC and PKC, and indicate that while DAG causes stimulation of PKC activation, PKC in turn may be involved in the reciprocal modulation of PLC as well as PI3K activation through phosphorylation on Ser [13] [20]-[22]. Furthermore, several PLC isozymes, including mucosal tissue PLCγ2, appear to be directly activated by the Ras superfamily of small guanosine triphosphatases (GTPases), and many GPCR-initiated signaling pathways also involve Ras activation [11] [23]-[26]. The Ras superfamily of GTPases consists of over 150 small, 20-40 kDa, monomeric proteins, divided into five major families (Ras, Rho, Rab, Ran, and Arf) on the basis of sequence and functional similarities [24] [27] [28].

The small GTPases specifically implicated in the regulation of PLCγ2 activation are represented by two members of the Rho family, Rac1 and Rac2, and their activation status is controlled through the exchange of GDP for GTP, catalyzed by guanine nucleotide exchange factors, also known as GEFs [23] [26] [27]. In mammals, these Rho GEFs comprise 11 members and are referred to as the Dock (dedicator of cytokinesis) 180-related family of GEFs [28]-[30]. Dock180, facilitating GDP/GTP exchange in Rac1, responds to stimuli activating tyrosine kinase receptors (RTKs) by a Src kinase-mediated increase in Rac1-GTP formation, and up-regulation in Rac1 activation has been observed in association with LPS-induced gastric mucosal and pulmonary inflammation [26] [31] [32].

Taking into consideration the pivotal role of PLCγ2 in the propagation of the proinflammatory reaction to bacterial endotoxins, and its being a primary target in the modulatory influence of ghrelin on the extent of the oral mucosal inflammatory reaction [5] [6], in this study we examined the involvement of P. gingivalis LPS in the amplification of PLCγ2 activation associated with salivary acinar cell Rac1 activation by GEF Dock180.
Salivary Gland Acinar Cell Incubation

The acinar cells of sublingual salivary gland, collected from freshly dissected rat (Sprague-Dawley) salivary glands, were suspended in five volumes of ice-cold Dulbecco's modified Eagle's minimal essential medium (DMEM; Gibco), supplemented with fungizone (50 µg/ml), penicillin (50 U/ml), streptomycin (50 µg/ml), and 10% fetal calf serum (Sigma), and gently dispersed by trituration with a syringe [5]. After centrifugation, the cells were resuspended in the medium to a concentration of 2 × 10⁷ cells/ml, transferred in 1 ml aliquots to DMEM in culture dishes, and incubated under 95% O2 and 5% CO2 at 37°C for up to 2 h in the presence of 0-100 ng/ml P. gingivalis LPS [5]. The P. gingivalis used for LPS preparation was cultured from clinical isolates obtained from ATCC No. 33277 [33]. In the experiments evaluating the effects of ghrelin (rat), the Rac1 inhibitor NSC23766, the wide-spectrum PKC inhibitor GF109203X (Sigma), the PLC inhibitor U73122, and the Src family protein tyrosine kinase (SFK-PTK)-selective inhibitor PP2 (Calbiochem), the cells were first preincubated for 30 min with the indicated dose of the agent or vehicle before the addition of the LPS.

Rac1-GTP Activation Assay

The measurements of Rac1 activation in the acinar cells were carried out with the Rac1 Activation Assay Kit (EMD Millipore). The cells from the control and experimental treatments were lysed in magnesium lysis buffer (MLB), containing protease inhibitor cocktail (10 µg/ml leupeptin, 10 µg/ml aprotinin, 1 mM sodium orthovanadate, 1 mM PMSF, and 1 mM NaF), at 4°C for 30 min and centrifuged at 12,000 × g for 10 min. The supernatants were precleared with GST beads and incubated with PAK-1 PBD-agarose for 1 h at 4°C. The beads were washed three times in MLB, resuspended in Laemmli reducing sample buffer, resolved on SDS-PAGE, and immunoblotted for GTP-bound Rac1 using anti-Rac1 antibody.

PKC Activity Assay

Protein kinase C activity measurement in the acinar cells of sublingual salivary gland was conducted with the ELISA PKC Activity Assay Kit (Stressgen). The cells were rinsed with 0.05 M phosphate buffer/saline, pH 7.4, settled by centrifugation, and suspended for 30 min at 4°C in lysis buffer consisting of 20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 4 mM EGTA, 2 mM EDTA, 1% NP40, 1 mM PMSF, 1 mM sodium orthovanadate, and 10 µg/ml each of leupeptin and aprotinin. Following sonication (3 × 10 s pulses), the samples were centrifuged at 12,000 × g for 10 min and the resulting supernatant was subjected to protein determination using the BCA protein assay kit (Pierce). The samples from the various experimental treatments were adjusted to 5 µg of crude protein/30 µl and added to the wells for PKC activity measurement using a peroxidase-conjugated secondary antibody and TMB spectrophotometric quantification [22].

PLC Activity Assay

PLC activity in sublingual salivary gland acinar cells was measured by the production of inositol phosphates [34] [35]. Aliquots of the cell suspension (1 ml) were transferred to DMEM in cell culture dishes containing 2 µCi of myo-[2-³H]inositol and incubated for 16 h under a 95% O2/5% CO2 atmosphere at 37°C. The cells were then centrifuged at 300 × g for 5 min, washed three times with DMEM containing 5% albumin to remove the free radiolabel, and resuspended in fresh albumin-free DMEM containing 10 mM LiCl. After a 10 min equilibration period, the cells were transferred to a medium containing 0 or 100 ng/ml of P. gingivalis
LPS and incubated for 1 h. In the experiments on the effects of ghrelin and of the PLC, PKC, and SFK-PTK inhibitors, the cells were first preincubated for 30 min with the indicated dose of the agent or the vehicle prior to the addition of the LPS. At the end of the specified incubation period, the cells were treated for 30 min at 4°C with 20 mM formic acid, and following neutralization with 20 mM ammonium hydroxide the lysates were centrifuged for 5 min at 12,000 × g to remove particulate material. The supernatants were applied to Dowex (AG1-X8, 100-200 mesh) anion exchange (formate) columns, and following washing with 50 mM sodium formate/5 mM sodium tetraborate, the [³H]inositol phosphates were eluted with 1 M ammonium formate/0.1 M formic acid [34]. The content of [³H]inositol phosphates was measured by scintillation spectrometry and normalized against the protein content in the lysates, determined by the BCA protein assay kit (Pierce).

Cell Membrane Preparation

To assess membrane translocation of Dock180, Rac1, and PLCγ2 in response to the LPS and ghrelin, the sublingual salivary gland acinar cells from the control and experimental treatments were subjected to cell membrane preparation. The aliquots of the acinar cell suspension were settled by centrifugation at 1,500 × g for 5 min, rinsed with phosphate-buffered saline, and homogenized for 10 s at 600 rpm in 3 volumes of 50 mM Tris-HCl buffer, pH 7.4, containing 0.25 M sucrose, 25 mM magnesium acetate, 1 mM EDTA, 1 mM dithiothreitol, 10 mM aprotinin, 10 mM leupeptin, 10 mM chymostatin, and 1 mM PMSF [36]. The cell lysates were then centrifuged at 5,000 × g for 15 min, the supernatant was diluted with two volumes of cold homogenization buffer and centrifuged at 10,000 × g for 20 min. The resulting supernatant was then subjected to centrifugation at 100,000 × g for 1 h at 4°C, and the obtained membrane pellet was suspended in extraction buffer containing 20 mM HEPES, pH 7.9, 25% glycerol, 0.4 M NaCl, 1.5 mM MgCl2, 1 mM EDTA, 1 mM dithiothreitol, and 1 mM PMSF. After 30 min of incubation at 4°C, the suspension was centrifuged at 15,000 × g for 15 min, and the supernatant containing the solubilized membrane fraction was collected and stored at −70°C until use. The protein content of the prepared membrane fraction was analyzed using the BCA protein assay kit (Pierce).
Immunoblotting Analysis

The acinar cells of sublingual salivary gland from the control and experimental treatments were collected by centrifugation and resuspended for 30 min in ice-cold lysis buffer (20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 10% glycerol, 1% Triton X-100, 2 mM EDTA, 1 mM sodium orthovanadate, 4 mM sodium pyrophosphate, 1 mM PMSF, and 1 mM NaF), containing 1 µg/ml leupeptin and 1 µg/ml pepstatin [5] [6]. Following brief sonication, the lysates were centrifuged at 10,000 × g for 10 min, and the supernatants were subjected to protein determination using the BCA protein assay kit (Pierce). The lysates of whole cells as well as those of membrane preparations were then used either for immunoblot analysis, or the proteins of interest were incubated with the respective primary antibodies for 2 h at 4°C, followed by overnight incubation with protein G-Sepharose beads. The immune complexes were precipitated by centrifugation, washed with lysis buffer, boiled in SDS sample buffer for 5 min, and subjected to SDS-PAGE using 40 µg protein/lane. The separated proteins were transferred onto nitrocellulose membranes, blocked for 1 h with 5% skim milk in Tris-buffered Tween (20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 0.1% Tween-20), and probed with specific antibodies directed against Rac1, Dock180, phosphotyrosine (4G10), and PKCδ (EMD Millipore); phosphoserine PKC substrate, phospho-Src (Tyr416), and PLCγ2 (Cell Signaling); and c-Src (Sigma).

Data Analysis

All experiments were carried out using duplicate sampling, and the results are expressed as means ± SD. Analysis of variance (ANOVA) and nonparametric Kruskal-Wallis tests were used to determine significance. Any difference detected was evaluated by means of a post hoc Bonferroni test, and the significance level was set at p < 0.05.
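For readers who want to reproduce this style of analysis, the following minimal sketch (with hypothetical values, not the study's data) shows how the described workflow — ANOVA and Kruskal-Wallis across treatment groups, followed by Bonferroni-adjusted post hoc comparisons at p < 0.05 — can be run in Python with SciPy.

```python
from itertools import combinations
from scipy import stats

groups = {                      # illustrative activity values, not study data
    "control":  [1.0, 1.1, 0.9, 1.0, 1.05],
    "LPS":      [2.1, 2.3, 2.0, 2.2, 2.15],
    "Gh + LPS": [1.4, 1.5, 1.3, 1.45, 1.5],
}

# Omnibus tests across all treatment groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
h_stat, p_kw = stats.kruskal(*groups.values())
print(f"ANOVA p={p_anova:.4g}, Kruskal-Wallis p={p_kw:.4g}")

# Post hoc Bonferroni: scale each pairwise p-value by the number of comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p={p_adj:.4g}{' *' if p_adj < 0.05 else ''}")
```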
Results

In view of the central role of PLCγ2 in the propagation of the proinflammatory response to bacterial endotoxins, as well as its being a key target in the modulatory influence of the peptide hormone ghrelin, we investigated the factors affecting PLCγ2 activation in rat sublingual salivary gland acinar cells exposed to the LPS of the periodontopathic bacterium P. gingivalis. As shown in Figure 1, incubation of the acinar cells with P. gingivalis LPS led to a marked increase in PLC activity, while preincubation with ghrelin elicited a significant reduction in the LPS effect. Moreover, as the literature evidence suggests that PLC activation shows dependence on Rac GTPases [25] [26], we also assessed the influence of the LPS and ghrelin on the acinar cell activity of the Rac1 GEF, Dock180. The results revealed that the effect of the LPS was manifested by a significant increase in Dock180 activity, whereas preincubation with ghrelin exerted a modulatory effect. Furthermore, the activation of PLC and Dock180 by the LPS was susceptible to suppression by the PLC inhibitor, U73122, as well as the inhibitor of Rac1, NSC23766, thus pointing to the existence of cross talk between PLC and Rac1.

Since PLC as well as Rac1 activation involves membrane translocation, we next evaluated the influence of P. gingivalis LPS and ghrelin on the acinar cell membrane recruitment of PLC and Rac1. Western blot analysis of the whole cell lysates as well as the cell membrane fraction, using anti-PLCγ2 and anti-Rac1 antibody, revealed that incubation with the LPS resulted in translocation of both PLC and Rac1 to the membrane fraction, while the effect of ghrelin was manifested by an elevation in membrane-associated PLCγ2 and suppression in the membrane translocation of Rac1 (Figure 2). Moreover, blocking Rac1 activation with NSC23766 had no effect on the LPS- and ghrelin-elicited membrane translocation of PLCγ2, while the inhibitor of PLC, U73122, appeared to exert a less apparent effect on Rac1 translocation.

To gain further insights into the involvement of Rac1 in the regulation of PLC activation, we followed the leads as to the role of Src/PKC in Rac1 GEF activation. Accordingly, the sublingual salivary gland cells, prior to incubation with P. gingivalis LPS and ghrelin, were pretreated with the wide-spectrum PKC inhibitor, GF109203X, or the SFK-PTK inhibitor, PP2, and assayed for PKC and GEF Dock180 activities. As illustrated in Figure 3, the effect of the LPS was associated with elevation in PKC and Dock180 activation, whereas preincubation with ghrelin elicited further stimulation in PKC activity and a reduction in Dock180 activation. The activation of PKC and Rac1 by the LPS, moreover, was susceptible to suppression by the PKC inhibitor, GF109203X, as well as the inhibitor of SFK-PTKs, PP2, thus attesting to the involvement of PKC and Src in the processes of GEF Dock180 activation. Hence, to ascertain the nature of this involvement, the lysates of the acinar cells as well as the cell membrane fraction were immunoprecipitated with anti-Dock180 antibody and subjected to Western blot analysis using anti-Dock180, anti-pSer-PKC substrate, anti-pTyr, and anti-Rac1 antibody (Figure 4). The results revealed that the effect of the LPS was manifested by a membrane elevation in Rac1 associated with Dock180 phosphorylated on Tyr as well as Ser, while the effect of ghrelin, characterized by a drop in membrane-associated Dock180 phosphorylation on Ser, was also reflected in a decrease in membrane translocation of Rac1. Further, we observed that the PKC inhibitor, GF109203X, exerted an inhibitory effect not only on the LPS-induced membrane localization of Rac1 and Dock180 phosphorylation on Ser, but also caused a further decrease in the effect of ghrelin on Rac1 translocation. We also noticed that the effect of the SFK-PTK inhibitor, PP2, was associated with suppression in the LPS- and ghrelin-elicited phosphorylation of membrane-associated Dock180 on Tyr as well as Ser, and a decrease in Rac1 membrane localization. These results suggest that Src kinase-mediated phosphorylation on Tyr may be required for the stimulus-induced membrane localization of Dock180, while the PKC isozyme, identified earlier as PKCδ [22], is involved in the phosphorylation of membrane-recruited Dock180 on Ser. Therefore, to assess the extent of Src and PKCδ influence over the processes of Dock180 activation and their involvement in mediation of salivary gland acinar cell responses to P. gingivalis LPS and ghrelin, we examined the requirements and selectivity of the interactions by co-immunoprecipitation.
The results, presented in Figure 5, demonstrated that PKCδ and Dock180 were present in association in both Dock180 and PKCδ immunoprecipitates following the acinar cell stimulation with the LPS and ghrelin. Further, we found that the interaction between the two proteins was dependent on Dock180 phosphorylation on Tyr as well as the activity of PKCδ, since the SFK-PTK inhibitor, PP2, as well as the wide-spectrum PKC inhibitor, GF109203X, interfered with the co-immunoprecipitation. Moreover, examination of the interaction between Src kinase and Dock180 by co-immunoprecipitation revealed that while the two proteins did not co-precipitate in the absence of stimulation, Dock180 was found in association with Src following the acinar cell stimulation with P. gingivalis LPS and ghrelin (Figure 6). However, this association was subject to interference by the SFK-PTK inhibitor, PP2. Upon further probing the salivary gland acinar cell Src immunoprecipitates with anti-pSrc (Tyr416), we found that the effect of the LPS and ghrelin was manifested by a massive increase in Src phosphorylation on Tyr416, which is a reflection of up-regulation in Src activation. Together, these data underscore the role of Dock180 phosphorylation on Tyr/Ser in modulation of salivary gland acinar cell PLC activation in response to P. gingivalis LPS.

Discussion

The mammalian phosphoinositide-specific PLC is a family of 13 isozymes divided into six subfamilies (PLCβ, γ, δ, ε, η, and ζ) on the basis of their size, amino acid sequences, domain structure, and activation mechanisms [10] [11]. Perhaps the most ubiquitously expressed are the two isoforms of the PLCγ subfamily, PLCγ1 and PLCγ2, which appear to play a key role in the regulation of cell growth and differentiation, and the modulation of immune and inflammatory responses [10] [13] [35]. Indeed, studies indicate that both PLCγ isoforms are activated by receptor and non-receptor tyrosine kinases, and PLCγ2 has been shown to undergo phosphorylation on Tyr and Ser following TLR4 ligation by LPS, resulting in amplification in the enzyme activity [12] [13] [21]. Furthermore, recent evidence suggests that PLCγ2 is subject to modulatory influence by the members of the Rho family of small GTPases, Rac1 and Rac2, whose activation status is controlled by the GEF Dock180 [11] [23] [26] [27] [35] [37]. Hence, considering that PLCγ2 is also the major cellular target of the peptide hormone ghrelin in controlling the extent of oral mucosal inflammatory responses to P. gingivalis [3]-[6], in the present study we explored the involvement of Rac1 GTPase and its GEF, Dock180, in mediation of the amplification of PLCγ2 activation in response to P. gingivalis LPS, and examined the modulatory effect of ghrelin.

Relying on the literature evidence as to the involvement of Rac GTPases in PLC activation [23] [25] [26] [37], we exposed rat sublingual salivary gland acinar cells to incubation with P. gingivalis LPS and ghrelin, in the presence of the Rac1 and PLC inhibitors, NSC23766 and U73122, and followed their influence on the acinar cell activity of the Rac1 GEF, Dock180, and PLC, as well as the membrane localization of Rac1 and PLCγ2.
The results of the analyses revealed that the LPS induced a significant increase in Dock180 and PLC activity, while preincubation with ghrelin exerted a modulatory effect. Moreover, the LPS-induced activation of Dock180 and PLC was susceptible to suppression by the Rac1 inhibitor, NSC23766, as well as the inhibitor of PLC, U73122, thus supporting the existence of cross talk between PLC and Rac1. Indeed, up-regulation in PLC and Rac1-GTP formation (Dock180 activity) has also been observed in association with LPS-induced pulmonary inflammation in mice as well as the gastric mucosal inflammatory response to H. pylori LPS [26] [32] [37]. Furthermore, considering that PLC enzymes are mainly cytosolic and translocate to the membrane upon activation [10] [11] [23], and that Rac proteins undergo regulatory control by GTP binding and membrane translocation for activation, and hydrolysis to GDP for inactivation [24] [27] [29], we evaluated the influence of P. gingivalis LPS and ghrelin on the membrane recruitment of PLCγ2 and Rac1. Western blot analysis revealed that the effect of the LPS was manifested by elevation in the membrane translocation of both PLCγ2 and Rac1, while the influence of ghrelin was reflected in elevation in membrane-associated PLCγ2 and suppression in the membrane translocation of Rac1. Moreover, we observed that blocking Rac1 activation with NSC23766 had no effect on the LPS- and ghrelin-elicited membrane translocation of PLCγ2, while the inhibitor of PLC, U73122, appeared to exert a less apparent effect on Rac1 translocation. Hence, we concluded that up-regulation by P. gingivalis LPS in Rac1 membrane translocation plays a major role in PLCγ2 activation. This contention is consistent with studies suggesting that the enhancement in PLCγ2 activation is a consequence of the membrane proximity of Rac1 and the interaction with the split PH domain of PLCγ2 phosphorylated on Ser, which promotes its association with Rac1 [13] [21] [25] [26] [37].

Next, to address the accumulating evidence as to the role of Src and PKC in the regulation of PLC activation [10] [11] [13], we assessed the influence of the SFK-PTK inhibitor, PP2, and the PKC inhibitor, GF109203X, on the activities of GEF Dock180 and PKC enzymes in the presence of the LPS and ghrelin. We noted that the effect of P. gingivalis LPS was associated with elevation in PKC and Dock180 activation, while ghrelin evoked further stimulation in PKC activity and a reduction in Dock180 activation. The LPS-induced activation of PKC and Rac1, moreover, was susceptible to suppression by the PKC inhibitor, GF109203X, as well as the inhibitor of SFK-PTKs, PP2. Hence, we concluded that PKC and Src are active participants in GEF Dock180 activation.

Our assertion is further supported by the results of Western blot analysis of Dock180, in which the lysates of the acinar cells as well as the cell membrane fraction were immunoprecipitated with anti-Dock180 antibody and subjected to probing with anti-pSer and anti-pTyr antibody. The analyses revealed that incubation with the LPS elicited elevation in membrane Rac1 associated with Dock180, which was phosphorylated on Tyr as well as Ser.
The effect of ghrelin, characterized by the presence of membrane-associated Dock180 phosphorylated on Tyr and a drop in its phosphorylation on Ser, was also reflected in a decrease in the membrane translocation of Rac1. We have also observed that the LPS-induced membrane localization of Rac1, as well as Dock180 phosphorylation on Ser, was susceptible to suppression by the PKC inhibitor, GF109203X, which also caused a further decrease in the effect of ghrelin on Rac1 translocation. On the other hand, the effect of the SFK-PTK inhibitor, PP2, was reflected in the suppression of the LPS- and ghrelin-elicited phosphorylation of membrane-associated Dock180 on Tyr and Ser as well as a decrease in Rac1 membrane localization. The fact that activation of Dock180 by the LPS, reported herein, was susceptible to suppression by both the wide-spectrum PKC inhibitor, GF109203X, and the SFK-PTK inhibitor, PP2, suggests that Src kinase-mediated phosphorylation on Tyr may be required for the stimulus-induced membrane localization of Dock180, while the PKC enzyme, identified earlier as PKCδ [22], is involved in the phosphorylation of membrane-recruited Dock180 on Ser. The above findings thus attest to the functional role of Dock180 phosphorylation on Tyr/Ser in the mediation of proinflammatory consequences of P. gingivalis LPS as well as the modulatory influence of ghrelin on the oral mucosal responses to this periodontopathic bacterium. Indeed, activation of Rac1 by Src-dependent phosphorylation of Dock180 on Tyr has been reported in association with PDGFRα-stimulated glioma tumorigenesis in mice and humans [31], and we have shown recently that the increase in H. pylori LPS-induced gastric mucosal Rac1-GTP generation occurs with the involvement of PKCδ [26] [37]. Therefore, to add further credence to our assertion as to the role of Dock180 phosphorylation on Tyr/Ser in mediation of the signaling pathways triggered by P. gingivalis LPS as well as ghrelin, we investigated the hierarchy of the interactions between Src and PKCδ with respect to Dock180 phosphorylation by co-immunoprecipitation. Our analyses revealed that, while PKCδ did not co-precipitate with Dock180 in the absence of stimulation, the two proteins were found in association in both Dock180 and PKCδ immunoprecipitates following the acinar cell incubation with the LPS and ghrelin. Moreover, the association between Dock180 and PKCδ was dependent on the phosphorylation of Dock180 on Tyr as well as on the activity of PKCδ, since pretreatment with the SFK-PTK inhibitor, PP2, as well as the wide-spectrum PKC inhibitor, GF109203X, interfered with the co-precipitation. Furthermore, we observed that following the acinar cell stimulation with P. gingivalis LPS and ghrelin, the Dock180 protein present in the Src immunoprecipitates was found in association with Src phosphorylated on Tyr416. As phosphorylation of Src on Tyr416 reflects the kinase activation state [6] [38], the findings provide a clear indication of the involvement of Src in GEF Dock180 phosphorylation on Tyr.

Together, our data attest to the involvement of PKCδ and Src in modulation of Dock180 activation through phosphorylation on Tyr/Ser in response to P.
gingivalis LPS. Although the full functional paradigm of GEF Dock180 phosphorylation has not yet been clearly defined, it is tempting to suggest that phosphorylation of Dock180 on Tyr could facilitate membrane anchoring and the interaction of Dock180 with nucleotide-free Rac1, thereby increasing Dock180 binding to Rac1 and its activation, while the LPS-induced phosphorylation of Dock180 on Ser may be responsible for a further increase in Dock180 activation and, hence, up-regulation of GTP loading onto Rac1 and amplification of Rac1-GTP formation.

Conclusion

Our findings suggest that GEF Dock180 activation through Src/PKCδ-mediated phosphorylation on Tyr/Ser plays a pivotal role in salivary gland acinar cell PLCγ2 activation, not only in response to proinflammatory P. gingivalis LPS signaling, but also in reaction to the modulatory action of ghrelin (Figure 7). Although the modulatory influence of ghrelin, signaling through GPCR activation, relies on Src-dependent Tyr phosphorylation of Dock180, while the propagation of proinflammatory events by P. gingivalis LPS relies on TLR4 ligation and subsequent amplification of Dock180 activation through Src/PKCδ-dependent Tyr/Ser phosphorylation, the major consequence of these seemingly opposing inputs is Rac1 activation and the amplification of PLCγ2 activation.

Figure 1. Impact of Rac and PLC inhibitors on the changes induced in sublingual salivary gland acinar cells by P. gingivalis LPS and ghrelin (Gh) in the expression of PLC and Dock180 (GTP-Rac1) activities. The cells, preincubated with 50 µM of Rac1 inhibitor, NSC23766, or 15 µM PLC inhibitor, U73122, were treated with 0.5 µg/ml Gh, and incubated for 1 h in the presence of 100 ng/ml LPS. Values represent the means ± SD of five experiments. * p < 0.05 compared with that of control, ** p < 0.05 compared with that of LPS, *** p < 0.05 compared with that of Gh + LPS.

Figure 3. Effect of PKC and SFK-PTK inhibitors on ghrelin (Gh)-induced changes in the expression of PKC and Dock180 (GTP-Rac1) activities in salivary gland acinar cells exposed to P. gingivalis LPS. The cells, preincubated with 30 µM of SFK-PTK inhibitor, PP2, or 5 µM of wide-spectrum PKC inhibitor, GF109203X (GF), were treated with 0.5 µg/ml Gh, and incubated for 1 h in the presence of 100 ng/ml LPS. The data represent the means ± SD of four separate experiments. * p < 0.05 compared with that of control, ** p < 0.05 compared with that of LPS, *** p < 0.05 compared with that of Gh + LPS.

Figure 4. Impact of SFK-PTK and PKC inhibition on the changes induced by P. gingivalis LPS and ghrelin (Gh) in membrane translocation and phosphorylation of Dock180, and its association with Rac1. The acinar cells, preincubated with 30 µM SFK-PTK inhibitor, PP2, or 5 µM of wide-spectrum PKC inhibitor, GF109203X (GF), were treated with 0.5 µg/ml Gh, and incubated for 1 h in the presence of 100 ng/ml LPS. The lysates of whole cells (T) as well as the corresponding membrane (M) fractions were immunoprecipitated (IP) with anti-Dock180 antibody, and immunoblotted (WB) with anti-Dock180, anti-pTyr, anti-pSer-PKC substrate, and anti-Rac1 antibody (A). The relative densities of phosphorylated proteins are expressed as fold of control (B), and the total (T) Dock180 was used as loading control. The data represent the means ± SD of four separate experiments. * p < 0.05 compared with that of control, ** p < 0.05 compared with that of LPS, *** p < 0.05 compared with that of Gh + LPS.

Figure 5. Impact of SFK-PTK and PKC inhibition on the changes induced by P. gingivalis LPS and ghrelin (Gh) in the association of Dock180 with PKCδ in sublingual salivary gland acinar cells. The cells, preincubated with 30 µM PP2, or 5 µM GF109203X (GF), were treated with 0.5 µg/ml Gh, and incubated for 1 h in the presence of 100 ng/ml LPS. (a) Cell lysates were immunoprecipitated (IP) with anti-Dock180 antibody and immunoblotted (WB) with anti-Dock180 and anti-PKCδ antibody, and the relative densities of proteins (b) are expressed as fold of Dock180 control; (c) Cell lysates were immunoprecipitated with anti-PKCδ antibody, immunoblotted with anti-PKCδ and anti-Dock180 antibodies, and the relative densities of proteins (d) are expressed as fold of PKCδ control. The data represent the means ± SD of four separate experiments. * P < 0.05 compared with that of control. ** P < 0.05 compared with that of LPS. *** P < 0.05 compared with that of Gh + LPS.

Figure 6. Impact of SFK-PTK inhibition on the changes induced by P. gingivalis LPS and ghrelin (Gh) in sublingual salivary gland acinar cell Src kinase phosphorylation and its association with Dock180. The cells, preincubated with 30 µM PP2, were treated with 0.5 µg/ml Gh, and incubated for 1 h in the presence of 100 ng/ml LPS. Cell lysates were immunoprecipitated (IP) with anti-Src antibody and immunoblotted (WB) with anti-Src and anti-Dock180 antibody (A). The Src immunoblots were also reblotted with anti-pSrc (Tyr416) antibody, and the relative densities of proteins (B) are expressed as fold of Src control. * P < 0.05 compared with that of control. ** P < 0.05 compared with that of LPS.

Figure 7. Schematic diagram of the regulatory role of GEF Dock180 phosphorylation on Tyr/Ser in modulation of Rac1 activation in response to P. gingivalis LPS and ghrelin. Ligation by ghrelin of salivary gland acinar cell GHS-R1a activates several G protein-dependent signal transduction pathways, including that of Src kinase-dependent Dock180 phosphorylation on Tyr that maintains the regulatory level of Rac1-GTP formation. Binding of the LPS to TLR4 triggers Src kinase-dependent Dock180 phosphorylation on Tyr and the PLCγ2-mediated PKCδ activation that leads to the PKCδ-induced up-regulation in Dock180 and PLCγ2 activation through phosphorylation on Ser. The up-regulation in Dock180, in turn, stimulates the formation of Rac1-GTP and promotes its association with PLCγ2, thus resulting in the amplification in PLCγ2 activation. G, heterotrimeric G-protein; pS, phosphoserine; pY, phosphotyrosine.
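The densitometric comparisons in Figures 1-6 follow a common pattern: band densities are normalized as fold of control, and group means ± SD are compared at p < 0.05. A minimal sketch of that pattern in Python (hypothetical band-density values; the legends do not state which statistical test was used, so a two-sample t-test is assumed here purely for illustration, with SciPy assumed available):

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry readings (arbitrary units), one value per experiment.
control = np.array([1.02, 0.95, 1.10, 0.93])
lps = np.array([2.85, 3.10, 2.60, 2.95])      # LPS-treated
gh_lps = np.array([1.90, 2.10, 1.75, 2.05])   # ghrelin + LPS

# Express each group as fold of the mean control density.
fold = {name: vals / control.mean()
        for name, vals in [("control", control), ("LPS", lps), ("Gh + LPS", gh_lps)]}

for name, vals in fold.items():
    print(f"{name}: {vals.mean():.2f} ± {vals.std(ddof=1):.2f} (fold of control)")

# Pairwise comparisons mirroring the legend notation (* vs control, ** vs LPS).
for a, b in [("control", "LPS"), ("LPS", "Gh + LPS")]:
    t, p = stats.ttest_ind(fold[a], fold[b])
    print(f"{a} vs {b}: p = {p:.4f}{' (p < 0.05)' if p < 0.05 else ''}")
```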
2017-06-15T21:42:06.229Z
2015-06-24T00:00:00.000
{ "year": 2015, "sha1": "7f446165b819efc9b43d5bbf3d70f2c7581f0776", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=57918", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7f446165b819efc9b43d5bbf3d70f2c7581f0776", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
16902654
pes2o/s2orc
v3-fos-license
Lecithin-Linker Microemulsion Gelatin Gels for Extended Drug Delivery

This article introduces the formulation of alcohol-free, lecithin microemulsion-based gels (MBGs) prepared with gelatin as gelling agent. The influence of oil, water, lecithin and hydrophilic and lipophilic additives (linkers) on the rheological properties and appearance of these gels was systematically explored using ternary phase diagrams. Clear MBGs were obtained in regions of single-phase microemulsions (μEs) at room temperature. Increasing the water content in the formulation increased the elastic modulus of the gels, while increasing the oil content had the opposite effect. The hydrophilic additive (PEG-6-caprylic/capric glycerides) was shown to reduce the elastic modulus of gelatin gels, particularly at high temperatures. In contrast to anionic (AOT) μEs, the results suggest that in lecithin (nonionic) μEs, the introduction of gelatin "dehydrates" the μE. Finally, when the transdermal transport of lidocaine formulated in the parent μE and the resulting MBG were compared, only a minor retardation in the loading and release of lidocaine was observed.

Introduction

Transdermal drug delivery provides convenient and controlled delivery of drugs to patients with minimum discomfort [1]. The stratum corneum of the skin is the main barrier opposing transdermal absorption of drugs [2]. Microemulsions (μEs) have been proposed to overcome this barrier function and improve transdermal drug permeation [3][4][5][6][7][8][9][10][11]. Similar to transdermal delivery, periocular ophthalmic delivery is the least invasive method of delivering drugs to the anterior and posterior sections of the eye [12][13][14][15]. The resistance to drug transport in periocular delivery is associated with the structure of the corneal epithelium and stroma, which act as protective barriers that regulate transport to and from the eye through the cornea and sclera [16]. Aqueous eye drops are the most popular dosage form in spite of their low bioavailability [17]. Eye drops formulated using μEs have been shown to improve the solubility of hydrophobic drugs and improve the efficacy of eye drop formulations [17][18][19].

Among the various μE systems, lecithin μEs are especially desirable since lecithin is a naturally occurring, nontoxic biological surfactant with generally recognized as safe (GRAS) status [20]. However, lecithin cannot produce μEs when utilized as the sole surfactant because of its tendency to form liquid crystalline phases [6]. Earlier lecithin μEs were formulated using medium-chain alcohols, such as pentanol, that promote the μE phase [6]. Unfortunately, these medium-chain alcohols tend to dissolve cell membranes [21]. One alternative to the alcohol-based lecithin microemulsions is the use of linker molecules [22]. Alcohol-free lecithin μEs have been formulated with linkers as potential vehicles for transdermal drug delivery of lidocaine [22,23]. Linker molecules are amphiphilic additives that, when added to the surfactant, tend to segregate near the surfactant tail (lipophilic linkers) or near the surfactant head group (hydrophilic linkers) [24]. The addition of linkers helps by increasing the surfactant-oil (lipophilic linkers) and surfactant-water (hydrophilic linkers) interactions. Furthermore, when hydrophilic and lipophilic linkers are combined, they produce a surfactant-like self-assembled system that offers enhanced solubilization capacity [25,26].
Compared to conventional alcohol-based lecithin μEs, lecithin-linker μEs have substantially lower toxicity, and can be prepared using food- or pharmaceutical-grade surfactants and linkers [22,23]. In addition, lecithin-linker μEs can provide twice the absorption and penetration of lidocaine through the skin when compared to conventional emulsions, and have also demonstrated sustained transdermal delivery of lidocaine for 12 hours [22,23]. Compared to other pharmaceutical-grade μEs, lecithin-linker μEs have an exceptionally low viscosity (<150 mPa·s) that, together with their affinity towards hydrophilic and lipophilic environments and small drop size (<10 nm), makes them a suitable vehicle to penetrate epithelial tissue and use these tissues as a depot for drug delivery [23,27]. While the low viscosity of lecithin-linker μEs is suitable for spray or roll-on methods of application on the skin, these linker μEs spread well beyond the intended area when applied as drops [27]. The low viscosity of lecithin-linker μEs is also an undesirable feature in periocular ophthalmic delivery, where higher viscosities are desirable to increase the residence time and improve the effectiveness of ophthalmic delivery formulations [28,29]. While the advantages and mechanisms of transdermal delivery with lecithin-linker μEs have been established in previous articles [22,23,27,30], the formulation objective for this work was to improve the rheological properties of these formulations by introducing gelatin as a gelling agent, while retaining the desirable transport properties of the original (parent) μEs.

Under certain conditions, oil-continuous μEs can be transformed into highly viscous gels with the addition of certain gelling agents [31]. Since the main component is an organic solvent, these gels can be referred to as organogels [31]. A subgroup within the organogels is the μE-based gels (MBGs), introduced in the 1980s [32]. Generally, MBGs undergo a liquid-to-gel transition when subjected to environmental stimuli such as changes in pH, temperature and electrolyte concentration [33]. Compared to their hydrogel counterparts, MBGs incorporate μE systems that provide a suitable environment for the solubilization of hydrophilic, lipophilic and amphiphilic drugs. Furthermore, due to their rheological properties, such as thermo-reversibility, MBGs have been proposed as potential vehicles for sustained drug and vaccine delivery [34], and as enzyme entrapment media (e.g., lipase) [35]. The gelling agents used to formulate these MBGs include natural polymers such as gelatin [36], κ-carrageenan [36], hydroxypropylmethyl cellulose (HPMC) [37], and block copolymer surfactants such as poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide), commercially known as Poloxamers or Pluronics [38].

Out of the many different MBGs reported, the majority have been prepared using gelatin with a Type II w/o microemulsion system. Many of the reported μEs are systems of sodium dioctyl sulfosuccinate (AOT)/isooctane/water. Isopropyl myristate (IPM) has also been used as the oil phase. With the addition of 10 to 20% w/w solid gelatin, AOT μEs can be transformed into high-viscosity, transparent, thermo-reversible gels [31][32][33]. The gelation mechanism of gelatin-stabilized MBGs has been studied using various techniques, including small-angle X-ray and neutron scattering (SAXS, SANS), 1H and 13C NMR, scanning and transmission electron microscopy (SEM, TEM), differential scanning calorimetry (DSC) and electrical conductometry [39].
Using small-angle neutron scattering, Atkinson et al. proposed a gelation model (for systems far from the phase transition point) in which the gelatin strand network is formed in water-continuous channels and the space in between the strands is filled by an oil-continuous AOT μE [40]. At the same time, for systems near the transition point towards the bicontinuous phase, Petit et al. proposed that only regions of the gelatin strands were formed in water-continuous environments, and that the rest of the strand, as well as the space in between the strands, was filled with an oil-continuous AOT μE [41]. However, these groups agreed, and later studies confirmed, that in AOT μEs the introduction of gelatin had very little impact on the phase behavior of the microemulsion system [40][41][42].

A major limitation of gelatin MBGs is the toxicity associated with the anionic surfactant, AOT, used to formulate these μEs [43,44]. Gelatin MBGs prepared from lecithin μEs have been reported [36,37,45,46]. However, these lecithin MBGs contain short- or medium-chain alcohols, and even the mildest formulation, produced with ethanol, still triggers some level of allergic response [46]. Willimann et al. formulated lecithin organogels (not classified as μEs by the authors) without alcohol or gelling agents and determined that these formulations were non-toxic and produced substantial transdermal permeation enhancement over conventional aqueous formulations [47]. However, the authors also indicated that it is necessary to use highly pure phosphatidyl choline to produce the gels and that the formulation is highly sensitive to the type of oil or drug used in the formulation. While gelatin MBGs have been used as reaction media [45,48,49], their use as drug delivery vehicles is limited since the presence of AOT and/or alcohols in these formulations poses cytotoxicity concerns. Recent articles on MBGs for transdermal delivery have concentrated on AOT-based MBGs, most of them using nonionic surfactants as additives [33,[50][51][52]. Kantaria et al., recognizing the biocompatibility issues of AOT, attempted to produce alternative MBGs formulated with nonionic surfactants but failed to produce these nonionic MBGs [50]. These authors linked this failure to the difference in the phase behavior of nonionic μEs compared to that of AOT (an anionic surfactant) systems. In one recent ion-sensitive nonionic MBG, formulated for periocular delivery of cyclosporine, the authors replaced water with glycerol in order to facilitate the formation of the gel [53]. That periocular delivery system (as well as many other ophthalmic formulations [54], including the formulation introduced in this work) made use of nonionic polyethylene glycol-based surfactants because of their biocompatibility.

In the present study, we hypothesized that nonionic and alcohol-free gelatin-stabilized MBGs could be prepared using low-toxicity linker-based lecithin μEs and that the increase in the viscosity of the formulation would not significantly affect the ability of the μE to permeate through epithelial tissue or membranes. To produce these lecithin-linker μEs, the formulation of Yuan et al. [22] was slightly modified by replacing the hydrophilic linkers sodium octanoate and octanoic acid with a milder combination of PEG-6-caprylic/capric glycerides and decaglycerol monocaprylate/caprate that has been confirmed to be non-irritant to human skin, non-mutagenic and highly biocompatible [27].
This base μE formulation has been used in the delivery of anti-wrinkle active ingredients in humans [27]. However, due to its low viscosity, these μEs spread beyond the intended area. As indicated by Kantaria et al. [50], the formulation of multicomponent nonionic MBGs is a complex task that requires careful consideration of the phase behavior of the μE and the properties of the resulting gel. Such consideration was lacking in the literature and was addressed in this study by comparing a set of phase behavior studies (phase scans and ternary phase diagrams) of the parent μEs with the appearance and rheological properties (particularly the elastic modulus, G') of the resulting MBGs. Furthermore, the effect of each of the μE ingredients on the elastic modulus of the gels was evaluated in order to determine the relative importance of each ingredient in the rheology of the gels. To evaluate the effect of gelatin on drug transport through epithelial tissue, the release profiles of lidocaine, topically absorbed in the skin from a lecithin-linker gelatin MBG and from the parent μE, were obtained. To evaluate the effect of gelatin on transmembrane transport properties, a membrane permeation study was conducted to compare the permeation profiles of lidocaine incorporated in a gelatin MBG and in the parent μE, using conditions relevant to periocular delivery.

Skin

Ear skin from domestic pigs (approximately 6 months old) was used as a surrogate for human epidermis [56]. Porcine ears were purchased from a local market and frozen overnight. They were partially thawed by rinsing with running water for 10 seconds at room temperature to soften the skin for smoother cuts. The skin of the external side of the ear was dermatomed to a thickness ranging from 700 to 900 μm [57]. Areas of the skin with cuts, burns, swelling or abnormal texture were avoided. The dermatomed tissue was cut into circles of 11.4 mm diameter. Before placing the skin samples in the permeation device, the samples were equilibrated to room temperature and then inspected to make sure that there were no pores or other imperfections. The work of Yuan et al. includes more details about pig skin sampling and its use to evaluate transdermal permeation from linker μEs [22,23,30].

Microemulsion (μE) Preparation

μEs were prepared in flat-bottom tubes by mixing lecithin (surfactant), decaglycerol monocaprylate/caprate and PEG-6-caprylic/capric glycerides (hydrophilic linkers), sorbitan monooleate (SMO, lipophilic linker), isopropyl myristate (IPM) and 0.9% sodium chloride solution at various compositions (Table 1). The concentrations of lecithin, PEG-6-caprylic/capric glycerides and decaglycerol monocaprylate/caprate were kept at 5% w/w, 10% w/w and 4% w/w, respectively (giving a 1:2:0.8 weight ratio of lecithin to PEG-6-caprylic/capric glycerides to decaglycerol monocaprylate/caprate). This might not be the optimal composition, but it was the maximum ratio of lecithin to hydrophilic linkers that could be used to produce μE systems without the formation of metastable phases. In addition, a minimum of 2% SMO was also required to prevent these metastable phases. The phase scan of the μE consisted of increasing the SMO concentration from 2% to 10% (g SMO/g total formulation), thus increasing the hydrophobicity of the formulation. After introducing all the ingredients, the tubes were thoroughly vortexed and left to equilibrate for 2 weeks.
MBG Preparation

MBGs were prepared by addition of gelatin powder to o/w μEs at various surfactant/linker/oil/water ratios, according to the method of Kantaria et al. [33]. Briefly, the mixture was first stirred at room temperature for 45 minutes to swell the solid gelatin, and then heated to 50 °C and stirred for 20 minutes until the gelatin was completely dissolved in the μE. The agitation was then stopped and the sample was allowed to cool in an ice bath for 30 minutes. MBGs of various hardness and opacity were obtained.

Ternary Phase Diagrams

Phase behavior studies were performed by constructing ternary phase diagrams at room temperature (25 °C) and at the gelatin activation temperature (50 °C) using the water titration method [58,59]. The "surfactant" vertex of the ternary phase diagrams was a mixture of 1:2:0.8:1.2 weight ratio of lecithin:PEG-6-caprylic/capric glycerides:decaglycerol monocaprylate/caprate:SMO. Mixtures of lecithin + linkers and isopropyl myristate were prepared; then an aqueous 0.9% sodium chloride solution was added until the desired composition along the dilution line was achieved. After each titration the flat-bottom tubes were vortexed for 3 minutes to ensure thorough mixing. The phase behavior of each of the tubes at room temperature was observed after two weeks of equilibration time. The ternary phase diagram at 50 °C was generated by keeping the tubes in a constant-temperature water bath at the desired temperature for two weeks, which allowed the systems to reach the new equilibrium.

Anisotropic liquid crystal (LC) phases in the phase diagrams were identified using cross-polarization microscopy with an Olympus BX-51 microscope (Richmond Hill, ON, Canada). Precipitate (P) phases were identified as systems where a solid residue was observed at the bottom of the test tube. Microemulsion phases were identified as translucent yellow-amber phases (this color was indicative of the presence of lecithin, SMO and the hydrophilic linkers in that phase) that were able to scatter (but not diffuse) a red laser beam (650 nm). Further confirmation of the microemulsion phase (μE) was obtained via electrical conductivity measurements and via repeated temperature cycles (25 °C-50 °C-25 °C), where the microemulsion phase volumes were recovered after completing each cycle. The presence of excess oil (Oil) or aqueous (Water) phases was confirmed via electrical conductivity measurements. It is important to note, however, that these "Oil" or "Water" phases contained part of the linkers that partitioned into these excess phases. Another way of differentiating μEs from the "Oil" and "Water" phases was that the latter were not able to scatter the light of the red laser beam. The characteristics of the surfactant "S" phase identified in the systems containing precipitate (P) were similar to those of a μE system, except that this S phase had a lighter amber color and produced a weaker scattering of the red laser beam than its μE counterpart. The compositions and structures of the S, LC, P, Oil and Water phases were not studied further because they were not useful in the preparation of lecithin-linker MBGs for transdermal delivery.

Physiochemical Characterization

The conductivity of the μEs was measured at room temperature using a VWR bench/portable conductivity meter equipped with a custom-built OEM conductivity microelectrode (Microelectrodes Inc., Bedford, NH, USA).
Viscosity measurements of the μE samples were obtained (in triplicate) using a CV-2200 falling ball viscometer (Gilmont Instruments, Barrington, IL, USA) at room temperature. The hydrodynamic diameters of the μE aggregates were determined via dynamic light scattering at a 90° angle, using a BI 90Plus particle size analyzer equipped with a 35 mW diode laser (wavelength ~674 nm) (Brookhaven Instruments, Holtsville, NY, USA).

Rheological Characterization

Rheological characterizations of the MBGs were performed using a stress- and strain-controlled CSL2 500 rheometer (TA Instruments Ltd., Surrey, UK). The measuring system used was the 4 mm diameter stainless steel cone-and-plate geometry (cone angle 2°). The sample volume was approximately 1 mL. Oscillation experiments were performed at 1.00 Hz frequency and 0.177 Pa oscillation stress over the temperature range of 20 to 50 °C. At the maximum temperature the formulations were liquid, and they then gelled upon cooling. The elastic modulus G' was recorded.

In Vitro Transport of Lidocaine

To evaluate the performance of gelatin-stabilized MBGs as delivery vehicles, a lipophilic drug, lidocaine base, was chosen as the model drug in this work. Lidocaine is an anesthetic used in topical formulations as a pain reliever in the treatment of minor burns, after various laser skin surgeries and during cataract surgery [60][61][62]. It was incorporated in the lecithin-linker μEs and gelatin MBGs by pre-dissolving 10% w/w lidocaine in isopropyl myristate (IPM). The methods described below were adapted from previous transdermal transport studies of lidocaine formulated in lecithin-linker μEs [22,23,30].

Transport of Lidocaine in the Skin

The transdermal in vitro extended release experiments for a selected μE and its gelatin MBG were conducted using a MatTek Permeation Device (MPD) supplied by MatTek Corporation (Ashland, MA, USA) fitted with pig ear skin as the membrane. The experiment was carried out under laboratory conditions that simulated the topical application of these formulations. Briefly, pig ear skin tissues were placed into the MPD with the epidermis layer facing up. The exposed tissue area in the MPD was 0.256 cm². After assembling the device, 400 μL of test μE (25 °C) or liquefied gelatin MBG (initially at 50 °C) was applied in the donor compartment. During the transfer process, the liquid gelatin-μE mixture cooled down to 40 °C or less, making it unlikely for the MBG to affect the structure and permeability of the stratum corneum. The receptor compartment was filled with 5 mL of PBS. The donor μE or gelatin MBG was withdrawn 30 minutes after application [22,23]. The skin surface in the MPD was blotted dry with Kimwipes and was then used for extended release. At predetermined times (1, 3, 6, 12, 24 and 48 h), the receiver solution was withdrawn completely from the receptor compartment and was immediately replaced with fresh PBS solution to maintain sink conditions. The experiment was terminated at 48 h. At the end of the experiment, the pig ear skin from each of the MPDs was collected and used to determine the final concentration of lidocaine absorbed in the skin (and hence the total amount of lidocaine absorbed in the skin). To this end, the skin samples were rinsed with a few droplets of PBS solution and then the residual lidocaine in each sample was extracted with 2 mL of methanol for 48 h [22,23]. All experiments were conducted in quadruplicate at room temperature.
Transmembrane Transport of Lidocaine

In vitro permeation experiments were conducted as described in Section 2.7.1 with four modifications: (A) the skin tissue was replaced with cellulose acetate membranes (Harvard Apparatus, MWCO 100 kDa, ~10 nm pore size); (B) an aliquot of 50 µL of test μE (25 °C) or liquefied MBG (50 °C) was applied to the donor compartment and was kept in the donor compartment throughout the 48 h permeation study (instead of removing it after 30 minutes of loading); (C) the receptor compartment was filled with 5 mL of simulated tear fluid (Table 2) instead of PBS; (D) all permeation experiments were conducted in quadruplicate at 34 °C (ocular surface temperature). These modifications were introduced to simulate the release from the MBG and its corresponding μE into a solution that simulates tear fluid, as a simple model for periocular delivery.

Lidocaine Quantification

The concentration of lidocaine in the skin (determined after extraction with methanol) and in the receiver solutions was quantified using a Dionex ICS-3000 liquid chromatography system equipped with an AS40 automated sampler, an AD25 absorbance detector and a reverse-phase column (Genesis, C18, 4 μm, 150 mm × 4.6 mm). The UV detector was set to 230 nm. A mixture of acetonitrile and 0.05 M NaH2PO4·H2O (pH 2.0) (30:70, v/v) was used as the mobile phase at a flow rate of 1.0 mL/min. The retention time of lidocaine under the described conditions was approximately 2.7 min, and the calibration curve for the area under the peak vs. concentration was linear (R² = 0.9997). Further details of the development and validation of this liquid chromatography method can be found in the work of Yuan et al. [22].

Sorbitan Monooleate (SMO) Phase Scan

The compositions of the linker-based lecithin μEs considered in the SMO scan are shown in Table 1. A picture of the vials employed in this phase scan is presented in Figure 1, showing the transition from a Type I o/w μE (bottom phase) with excess oil phase (top phase), obtained at 2% and 4% SMO, to a single-phase Type IV μE at 6% SMO. The systems with 8% and 10% SMO produced a more complex behavior that included a top-phase oil-continuous (Type II) μE. The types of μEs produced were confirmed using conductivity measurements (Table 3).

Figure 1. Phase scan at room temperature (25 ± 1 °C). All test tubes contained 5% lecithin, 10% PEG-6 caprylic/capric glycerides and 4% decaglycerol monocaprylate/caprate (see Table 1 for additional composition details).

Table 3. Properties of the μEs produced with the phase scan (2-10% sorbitan monooleate, SMO) of Figure 1 (see Table 1 for additional composition details) at room temperature (25 ± 1 °C).

Yuan et al. evaluated Type I, IV and II lecithin-linker μEs as lidocaine delivery vehicles and found that while all these types improved lidocaine loading in the skin and its transdermal flux, Type IV systems produced the largest lidocaine loading in the skin [22]. Furthermore, bicontinuous (Type IV) μEs produce the largest co-solubilization of oil and water. Considering the advantages of Type IV formulations, the 6% SMO system was selected as the base composition to construct ternary phase diagrams and to evaluate the in vitro transport of lidocaine. The viscosities of the μE formulations containing 2 to 6% SMO are presented in Table 3.
The increase in viscosity from 28 to 127 mPa·s with increasing SMO concentration can be explained on the basis that, when approaching the Type I-IV transition (increasing SMO), oil-swollen micelles grow larger and turn cylindrical, which increases the viscosity of the formulation [22,64]. The data in Table 3 confirm the relatively low viscosity of lecithin-linker μEs when compared to medium-chain alcohol lecithin μEs, which reach viscosities as high as 1000 mPa·s [31], and to commercial topical creams such as Lanocort 10, which have viscosities of approximately 1300 mPa·s [65]. The low viscosity of lecithin-linker μEs makes them suitable for spray and roll-on applications, but it is a disadvantage for gel-type topical and ophthalmic applications. The mean hydrodynamic radii, obtained via dynamic light scattering, for the μEs of Figure 1 containing 2%, 4% and 6% SMO are also presented in Table 3. These radii are comparable to the values reported by Yuan et al. for similar linker-lecithin systems [22]. The small droplet size (less than 10 nm) of lecithin-linker μEs has been associated with the use of hydrophilic linkers that increase the interfacial area and reduce the size of oil-swollen micelles and water-swollen reverse micelles [25,66]. Although the hydrodynamic radius for 6% SMO is also reported in Table 3, the meaning of that particular measurement is questionable because the μE does not exist as dispersed droplets but as interconnected (bicontinuous) channels.

Ternary Phase Diagrams

Ternary phase diagrams at 25 °C and 50 °C (the gel activation temperature) were constructed using the surfactant composition corresponding to the 6% SMO formulation of Figure 1. These phase diagrams are presented in Figure 2. At both temperatures, the surfactant mixture was not completely soluble in either the aqueous (0.9% NaCl) solution or in isopropyl myristate (IPM). Liquid crystalline (LC) or surfactant + oil + precipitate (S + O + P) phases were found in systems containing less than 10% IPM or 5% water. Other phases observed in the ternary phase diagrams include isotropic single-phase microemulsions (µE), μE in equilibrium with dispersed liquid crystalline phases (µE + LC), μE in equilibrium with excess oil (µE + oil), μE in equilibrium with excess water (µE + water), and μE coexisting with excess oil and excess water phases (µE + oil + water). The main difference in the ternary phase diagram at 50 °C compared to that at 25 °C is the larger µE + water region at 50 °C. This can be explained on the basis that the hydrophilic linker PEG-6-caprylic/capric glycerides is temperature-sensitive, due to the presence of ethylene oxide groups [67]. As the temperature increases from 25 °C to 50 °C, hydrogen bonding between the ethylene oxide groups of the hydrophilic linkers and the water molecules weakens (dehydrates) and the formulations become more hydrophobic; hence the ability of the μE to solubilize water decreases, resulting in the formation of a larger µE + water region [67][68][69].

Sorbitan Monooleate (SMO) Phase Scan

The systems of Figure 1 were used to produce MBGs formulated with 10% and 20% gelatin. Figure 3a presents pictures of these MBGs. MBGs formulated with 2%, 4% and 6% SMO and 20% gelatin were translucent. The elastic moduli (G') of the MBGs produced with 2% and 10% SMO are presented in Figures 3b and 3c as a function of temperature for formulations containing 10% and 20% gelatin, respectively.
Higher G' values were obtained in systems formulated with 20% gelatin, indicating the formation of stronger gels, likely due to a denser gel network structure. Furthermore, formulations with lower SMO content produced weaker, albeit clearer, gels. The electrical conductivity of the clear gels produced with 2-6% SMO ranged from 50 to 200 μS/cm, suggesting that although the original μEs were water-continuous systems, the structure of the final gel was closer to a bicontinuous μE rich in oil. The fact that oil-rich bicontinuous MBGs were produced is consistent with the work of Atkinson et al. and Petit et al. on AOT MBGs [40,41]. However, in contrast with all the work reported on AOT-based MBGs, where the addition of gelatin does not affect the phase behavior of the parent μE, the phase behavior of the lecithin-linker (nonionic) systems was almost reversed with the addition of 20% gelatin. One explanation for the fact that the presence of gelatin induced the transition from Type I μEs to bicontinuous oil-rich systems is that gelatin "dehydrated" the μE and used that water to form the gel network. According to the ternary phase diagrams of Figure 2, removing water from the μE shifts the formulation into a region of single-phase μEs rich in oil. For the MBGs containing 8 and 10% SMO, their marginally translucent appearance might be a sign that the final μE phase in the MBG is closer to a phase transition.

Fluorescent dyes, sodium fluorescein (hydrophilic) and Nile red (hydrophobic), were added (separately) to the 2% and 10% SMO MBGs produced with 20% gelatin to assess macroscopic phase separation and the structure of the gel fibers, using fluorescence and polarized light microscopy. According to Figure 4, the MBG formulated with 2% SMO showed apparent continuity in both the aqueous (4a) and oil (4b) phases, according to the continuous coloration from fluorescein (water-soluble) and Nile red (oil-soluble), respectively. Areas that showed different color intensity were typically indicative of bubbles or gelatin fibers in the sample. The structure of these fibers was evidenced by the bright strands observed under cross polarizers in Figure 4c. Similarly, the MBG containing 10% SMO also seemed to be continuous in both phases (Figure 5), and also had a three-dimensional network of gelatin fibers. It is important to clarify that at the magnification scale of Figures 4 and 5, it was not possible to assess the continuity of these systems because of the submicron scale of the μE domains. However, the macroscopic observations with the fluorescent dyes were consistent with the electrical conductivity measurements in the gels discussed earlier. (For formulation details, see Figure 1 and Table 1.) According to Figures 5c and 5d, the fibers of the 10% SMO MBG were thicker than those shown in Figure 4c for the 2% SMO MBG. This difference in fiber thickness could explain the higher gel modulus (G') and turbidity of the MBGs prepared with 10% SMO.

Ternary Phase Diagrams

Gel formation in gelatin MBGs occurred between room temperature (25 °C) and 50 °C. At 50 °C, the gelatin activation temperature, the native helical structure of gelatin is denatured, and the protein exists as flexible random coils in solution [70]. Upon cooling, gelatin recovers its native helical structure, producing a gelatin hydrogel network [70].
Using the ternary phase diagrams of Figure 2, the appearance and rheological properties of MBGs along oil and water dilution lines that passed through the optimal 6% SMO formulation (containing 36% water and 39% oil) were evaluated. Figure 6a shows the oil dilution path at 25 °C and 50 °C. Figure 6b presents a picture of the gels prepared along the oil dilution line. For systems containing between 10% and 30% IPM, the gels had a milky appearance, suggesting the presence of multiple phases in the gel. Considering the earlier discussion that gelatin "dehydrated" the original μE, and the ternary phase diagrams of Figure 6a, it is likely that the gelling process for these 10-30% IPM systems led to the formation of μE + LC systems embedded in the gelatin hydrogel. Although the system containing 50% IPM was clear, it was a two-phase system of a μE and an MBG. The 40% IPM formulation was close to the 6% SMO gel of Figure 3a.

According to Figure 7, increasing the oil (IPM) content in the MBG from 10 to 40% reduces the strength of the gel, albeit increasing its clarity. To obtain a measurement for the 50% IPM MBG, only the gel portion of the two-phase system was evaluated, which explains the high strength of this gel. For the 40% IPM linker-lecithin MBG, the zero-shear viscosity was close to 3 Pa·s at 25 °C and 1 Pa·s at 37 °C, which represents a one-order-of-magnitude increase in viscosity with respect to the original μE. Viscosities in the 1-10 Pa·s range are comparable to some topical creams [65]. The gel strength of the system with 40% IPM is lower than that of other commercial lidocaine creams and gels (the measured G' at 25 °C for EMLA® cream and Topicaine gel was approximately 500 Pa and 200 Pa, respectively). As shown in Figure 3, MBGs with high G' values can be obtained with higher gelatin and SMO content.

Figure 8a presents the water dilution path used to produce MBGs containing 30 to 90% water. Figure 8b shows that with increasing water content the MBGs became more turbid. These milky gels reflect the presence of an emulsified phase within the gel. These gel-entrapped emulsions were likely produced when, at 50 °C, a μE phase coexisted with an aqueous solution, as shown in Figure 8a. The gelatin gel was probably formed within the excess aqueous phase and, upon cooling, the μE and any excess water not associated with gelatin were emulsified within the gel. Figure 9 shows cross-polarizer micrographs of the MBG prepared with 80% water, showing the gelatin network (Figure 9a) and the drops of the emulsion entrapped in the gel (Figure 9b). The elastic moduli of the MBGs of Figure 8b were also measured as a function of temperature. In general, increasing the water content in the MBGs increases the strength of the gels. However, the system containing 30% water, similar to the 50% oil MBG, consists of a liquid oil-rich μE that coexists with a strong gel.

Lecithin-Linker Gels

Lecithin-linker gels (not MBGs) were prepared using selected mixtures of lecithin and linkers at the ratios corresponding to the 6% SMO formulation of Figure 1. Figure 11 presents the elastic modulus of these formulations as a function of temperature. Lipophilic components (lecithin, SMO) slightly increased the G' value of the gelatin gel, particularly at 50 °C, although the statistical regressions of Figure 11 indicate no significant difference for temperatures ranging from 25 °C to 40 °C. This observation is consistent with the fact that lecithin and SMO can also produce organogels on their own [34].
On the other hand, introducing the hydrophilic additives (linkers) PEG-6-caprylic/capric glycerides and decaglycerol monocaprylate/caprate decreased the G' of the gelatin gel and lowered its transition temperature from 50 °C to values close to 45 °C. This observation suggests that the ethylene glycol and glycerol groups of the hydrophilic linkers interfere with the self-assembly of the collagen strands during gelation, thus reducing the strength of the resulting gel [70].

In Vitro Transport Studies

The 20% gelatin MBG prepared with 6% SMO (Figure 3a) was evaluated as a delivery vehicle to load lidocaine in the skin, using the skin as a lidocaine reservoir (in situ patch) for extended release. Using a permeable synthetic membrane instead of skin, the gel itself was evaluated as a reservoir for the extended release of lidocaine. The corresponding 6% SMO μE was used as a control to compare the transport of lidocaine in both scenarios.

Transport in the Skin

The total mass of lidocaine loaded in the skin from the μE (control) and from the MBG was determined by adding the mass of lidocaine recovered from methanol extraction of the pig ear skin at the end of the experiment and the mass of lidocaine recovered from the receiver solutions at different times. Slightly less lidocaine (0.8 ± 0.3 mg/cm²) was loaded in the skin from the MBG than from the μE (1.3 ± 0.5 mg/cm²). The loading of lidocaine in pig skin samples from water, IPM and lecithin-linker μEs has been evaluated by Yuan et al. via a skin-donor partition coefficient (Ksd) [30]. This Ksd parameter was obtained after fitting the cumulative lidocaine permeation data to a three-compartment (donor, skin, and receiver) transport model. The value of Ksd represents the partition ratio at equilibrium between the lidocaine concentration in the skin compartment and the lidocaine concentration in the donor compartment. The value of Ksd can be estimated using the concentrations in the donor and in the skin (obtained via mass balance) after 30 minutes of loading [30]. The estimated values of Ksd were 0.36 ± 0.14 for the MBG and 0.46 ± 0.17 for the μE, comparable to the Ksd values reported for oil-continuous (Ksd = 0.3) and water-continuous (Ksd = 0.4) lecithin-linker μEs prepared with 4% lecithin [30], which are close to the parent μE of the MBG. These values are substantially larger than the partition from IPM (Ksd ~ 0.1) and substantially lower than the partition from water (Ksd ~ 1.3) [30]. The observed order of the Ksd values correlates with the hydrophilicity of the donor vehicle; since lidocaine is a lipophilic drug, its partitioning into skin would be higher in more hydrophilic donor vehicles.

Figure 12 shows the lidocaine release profiles for the MBG- and μE-loaded skins. Both release profiles are similar, and akin to a first-order release. The three-compartment transport model of Yuan et al. can be simplified to a two-compartment case for the release of lidocaine from the skin to the receiver [23,30], which under sink conditions leads to a first-order release model where the first-order rate constant is ksr/h, ksr being the skin-receiver transport coefficient (cm/hr) and h the thickness of the skin compartment (0.08 cm). When the data of Figure 12 are plotted in the format of ln(1 − fraction released) vs. time, for the interval of 0-12 hrs for the μE system, ksr = (9 ± 2) × 10⁻³ cm/hr (R² = 0.91 for the first-order model), and for the interval of 0-24 hrs for the MBG system, ksr = (7 ± 4) × 10⁻³ cm/hr (R² = 0.99 for the first-order model).
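To make the fit explicit, a minimal sketch (NumPy assumed; the release fractions below are hypothetical, not the data of Figure 12) of regressing ln(1 − fraction released) against time and recovering ksr from the slope:

```python
import numpy as np

# Hypothetical cumulative release data (fraction of loaded lidocaine released),
# sampled over the 0-12 h interval used for the muE system in the text.
t = np.array([1.0, 3.0, 6.0, 12.0])               # hours
fraction_released = np.array([0.10, 0.28, 0.48, 0.72])
h = 0.08                                           # skin-compartment thickness, cm

# First-order model: ln(1 - F) = -(k_sr / h) * t, so slope = -k_sr / h.
y = np.log(1.0 - fraction_released)
slope, intercept = np.polyfit(t, y, 1)
k_sr = -slope * h                                  # cm/hr

# R^2 of the linearized fit.
y_hat = slope * t + intercept
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"k_sr = {k_sr * 1e3:.1f} x 10^-3 cm/hr, R^2 = {r2:.2f}")
```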
Yuan et al. reported values of ksr = 12 × 10⁻³ cm/hr for water-continuous μEs and ksr = 35 × 10⁻³ cm/hr for oil-continuous μEs prepared with 4% lecithin, 12% SMO and a mixture of sodium octanoate and octanoic acid used as hydrophilic linkers [30]. The authors discussed that lower ksr values were obtained with higher lecithin concentrations, and that this correlation might be associated with the penetration of lecithin, and possibly the linkers, into the skin [30]. It is possible that the low ksr values of the MBG and the parent μE are associated with the relatively high concentration of lecithin and hydrophilic linkers. For extended release/extended action, however, lower ksr values are desirable. These formulations offer the potential for longer-lasting pain relief when compared to commercial lidocaine creams such as EMLA (an emulsion containing 2.5 wt% lidocaine and 2.5 wt% prilocaine), whose action only lasts between 2 and 4 hours [71,72]. Unfortunately, attempts to establish direct comparisons between lecithin-linker μEs and the EMLA cream proved inconclusive, as the EMLA cream could not be applied homogeneously in the MPD device. One important observation derived from the release profiles of Figure 12 is that, other than a minor reduction in lidocaine loading, the increase in viscosity for the MBG formulation did not affect the release of lidocaine from the skin. These results support the initial hypothesis that the gelatin gel network should not affect the loading or release of drugs in the skin, while producing a viscosity suitable for topical/transdermal applications.

Figure 12. Release profile at room temperature (25 ± 1 °C) of lidocaine from pig skin after 30 minutes of loading with: 20% gelatin MBG containing 3.1% lidocaine (♦), and gelatin-free 6% SMO μE containing 3.9% lidocaine (∆). See Table 1 for additional μE composition details.

Transmembrane Transport

In order to produce a simplified model for periocular transport, the skin was replaced by a cellulose acetate membrane with a molecular weight cut-off (MWCO) of 100 kDa, which corresponds to an approximate pore size of 10 nm. The use of synthetic ultrafiltration membranes, particularly for scleral transport, has been reported in the literature [73][74][75]. Various ranges of equivalent pore size of the sclera have been reported in the literature, but for paracellular transport the pore size has been estimated to be on the order of 2-4 nm, although pore sizes approaching 10 nm have been proposed as well [16,76,77]. Figure 13 presents the accumulated mass of lidocaine permeated (per unit of area) through the acetate membrane as a function of time. Assuming that the membrane does not accumulate drug or microemulsion components, a simple permeation model can be used to interpret the data of Figure 13. Assuming sink conditions, the transport equation based on the permeability coefficient simplifies to:

dCd/dt = −(kp·A/V)·Cd

where Cd is the concentration of lidocaine in the donor compartment, kp is the transmembrane permeability of lidocaine, A is the area of the membrane (0.256 cm²), and V is the volume of the donor compartment. Applying a simple mass balance to the data of Figure 13 and the initial concentration of lidocaine in the donor (Cd0), the value of Cd can be estimated as a function of time. A plot of ln(Cd/Cd0) versus time should produce a straight line with a slope of −(kp·A/V). This was the case when the data between 0 and 24 hrs were used.
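A short sketch of this estimation (NumPy assumed; the donor concentrations below are hypothetical values of the kind obtained from the Figure 13 mass balance): the slope of ln(Cd/Cd0) vs. time gives −kp·A/V, from which kp follows.

```python
import numpy as np

A = 0.256   # membrane area, cm^2 (from the text)
V = 0.050   # donor volume, cm^3 (50 uL aliquot, from the text)

# Hypothetical donor concentrations relative to the initial value C_d0,
# back-calculated by mass balance from the cumulative permeated amounts.
t = np.array([1.0, 3.0, 6.0, 12.0, 24.0])          # hours
cd_over_cd0 = np.array([0.97, 0.91, 0.83, 0.69, 0.48])

# dCd/dt = -(kp*A/V)*Cd  =>  ln(Cd/Cd0) = -(kp*A/V)*t, so kp = -slope*V/A.
slope, _ = np.polyfit(t, np.log(cd_over_cd0), 1)
k_p = -slope * V / A

print(f"k_p = {k_p * 1e3:.1f} x 10^-3 cm/hr")
```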
For the case of the MBG, the value of kp was (6 ± 1) × 10⁻³ cm/hr, and for the μE it was (6.3 ± 0.4) × 10⁻³ cm/hr. These values of kp are close to the transdermal permeability of lidocaine, kp ~ 20 × 10⁻³ cm/hr [22], and to the permeability of other organic molecules through the sclera (kp ~ 6 to 20 × 10⁻³ cm/hr) [78]. This transmembrane transport study showed that the addition of gelatin did not affect the transport of lidocaine under conditions that simulate periocular drug delivery. The fact that similar lidocaine loadings (partition) in the skin were obtained with the μE and the MBG further supports the idea that the introduction of gelatin does not affect the transport of the drug.

Figure 13. Permeation profile of lidocaine at 34 ± 1 °C through a 100 kDa Molecular Weight Cut-Off (MWCO) acetate membrane from the gelatin MBG containing 3.1% lidocaine (dose 6.1 mg/cm²) (♦) and from the lidocaine-loaded, gelatin-free 6% SMO μE containing 3.9% lidocaine (dose 7.6 mg/cm²) (∆). See Table 1 for additional μE composition details.

The permeation of lidocaine through the MBG can be interpreted using obstruction-diffusion models applied to heterogeneous hydrogels [79]. According to a simplified form of the equation of Amsden [79], for small drug molecules (e.g., lidocaine) in gel networks with a relatively large distance between fibers, the diffusivity in the gel ≈ diffusivity in the solvent × exp(−(π/4)(thickness of the fiber/distance between fibers)²). Considering the distance between the fibers and the thickness of the fibers in Figures 4 and 5, and the expression of Amsden, one concludes that the fibers are too loosely packed to interfere with the diffusion of molecules through the gel. However, the packing is strong enough to increase the viscosity of the μE.

Conclusions

This work introduced the formulation of alcohol-free, lecithin-based gelatin MBGs. This linker formulation, found to be biocompatible in previous work, used lecithin as the main surfactant, SMO as a lipophilic additive (linker), and a mixture of PEG-6-caprylic/capric glycerides and decaglycerol monocaprylate/caprate as hydrophilic additives (linkers). It was found that in order to prepare clear MBGs (i.e., with no emulsified phase in the final gel), it was important to ensure that the parent μE was a single phase at room temperature before introducing gelatin. When gelatin was added to bicontinuous lecithin (nonionic) μEs, the results suggest that some of the water initially solubilized in the μE was used to produce a dispersed network of gelatin fibers embedded in an oil-rich bicontinuous μE. This observation contrasts with previous findings for anionic (AOT) MBGs, where the addition of gelatin produced minor changes in the morphology of the μE. The elastic modulus (G') of the MBGs increased with increasing water content or decreasing oil content. In lecithin-linker organogels, the addition of the lipophilic components, lecithin and SMO, slightly increased the elastic modulus at high temperature, while the addition of the hydrophilic additives reduced the elastic modulus of the gelatin gel at high temperature. When a single-phase μE system was used as the "parent" μE for the MBG, a clear gel with a viscosity suitable for topical applications was obtained. This MBG produced comparable, although slightly lower, lidocaine loading in and release from the skin than the parent μE.
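For reference, a quick numerical check of the Amsden estimate invoked in the transmembrane discussion above, using hypothetical fiber dimensions of the kind one might read off micrographs such as Figures 4 and 5:

```python
import math

# Simplified Amsden obstruction model:
#   D_gel / D_solvent = exp(-(pi/4) * (fiber thickness / fiber spacing)^2)
def diffusivity_ratio(fiber_thickness, fiber_spacing):
    return math.exp(-(math.pi / 4.0) * (fiber_thickness / fiber_spacing) ** 2)

# Hypothetical values (same length units for both arguments).
print(diffusivity_ratio(2.0, 50.0))  # thin, widely spaced fibers: ~1.0 (no hindrance)
print(diffusivity_ratio(2.0, 5.0))   # densely packed fibers: ~0.88 (mild hindrance)
```

Consistent with the conclusion above, only a dense fiber packing would measurably slow the diffusion of a small molecule such as lidocaine.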
2016-03-22T00:56:01.885Z
2012-01-31T00:00:00.000
{ "year": 2012, "sha1": "24ebfcefaa992341280af0fb8ce9f8b60faa3217", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/1999-4923/4/1/104/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "24ebfcefaa992341280af0fb8ce9f8b60faa3217", "s2fieldsofstudy": [ "Materials Science", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
263924743
pes2o/s2orc
v3-fos-license
Interpretation of the transpulmonary thermodilution curve in the presence of a left-to-right shunt

Nusmeier, Anneliese; van der Hoeven, Johannes G; Lemson, Joris. Comment Letter. United States. Intensive Care Med. 2011 Mar;37(3):550-1; author reply 552-3. Epub 2010 Dec 9.

Dear Editor, In a recent article by Giraud et al. [1] the effect of a left-to-right shunt on the transpulmonary thermodilution (TPTD) curve and the subsequent calculation of extravascular lung water (EVLW) and intrathoracic blood volume (ITBV) are described. The authors conclude that a left-to-right shunt generates recirculation of the thermal indicator, which induces a change in the dilution curve. Although we support their observation, their explanation may lead to confusion. The transpulmonary thermodilution method must fulfill the following conditions: (1) constant blood flow, (2) no or minimal loss of indicator between the injection and detection points, (3) complete mixing of the indicator with blood, and (4) the indicator must pass the detection point only once [2]. To satisfy condition 4, the dilution curve is interrupted at the downslope part, based upon a specific algorithm, to prevent the effects of recirculation [3]. Subsequently the curve is extrapolated from the interrupted point to the baseline in order to calculate the area under the curve [4]. The cases described by Giraud et al. both had a left-to-right shunt (ventricular septal defect (VSD) and aorto-caval fistula (ACF)). The (pulmonary) "recirculation" that occurs in a left-to-right shunt is an extra "short" circuit. The observed TPTD curve is the result of a delay in delivery of the indicator to the systemic circulation and will subsequently show a lower initial peak, followed by a slow re-approximation to the baseline. Unfortunately the authors do not provide the values of the mean transit time (MTt) and downslope time (DSt), but these numbers are easily deduced from Fig. 1 on page 1,084 [1]. The recalculations are explained in Table 1 and show that the left-to-right shunt induces an increase of both time intervals. The increase of the DSt (51%) is twice the increment of the MTt (25%). Recirculation of the indicator passing the detection point for a second time is excluded by the fact that true recirculation will not occur before approximately 60 s (≈2 × MTt in the normal situation). This is long after the interruption of the downslope part. An increment of the DSt and, to a lesser extent, of the MTt can also be observed in the presence of a large volume of lung water. Both situations are the consequence of delayed delivery of indicator to the systemic circulation. In conclusion, it can be stated that a left-to-right shunt induces an increase in DSt and, to a lesser extent, MTt as a consequence of delayed delivery of indicator to the systemic circulation because of the presence of an extra circuit. This phenomenon should not be confused with true recirculation.

Conflict of interest The authors have not disclosed any potential conflicts of interest.
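To make the deduction of MTt and DSt from a dilution curve concrete, a minimal sketch (an idealized synthetic curve; NumPy assumed): MTt is taken as the first moment of the baseline-corrected curve, and DSt as the time constant of the mono-exponential downslope fitted before the recirculation cut-off.

```python
import numpy as np

# Idealized, baseline-corrected thermodilution signal dT(t), sampled at 0.1 s.
t = np.arange(0.0, 40.0, 0.1)
dT = (t / 4.0) * np.exp(-t / 4.0)   # synthetic gamma-shaped dilution curve

# Mean transit time: first moment of the curve (injection at t = 0).
mtt = np.trapz(t * dT, t) / np.trapz(dT, t)

# Downslope time: exponential time constant fitted on the downslope,
# here between 85% and 45% of the peak (before any recirculation cut-off).
peak = dT.argmax()
idx = np.arange(len(t))
mask = (idx > peak) & (dT > 0.45 * dT[peak]) & (dT < 0.85 * dT[peak])
slope, _ = np.polyfit(t[mask], np.log(dT[mask]), 1)
dst = -1.0 / slope                  # seconds

print(f"MTt = {mtt:.1f} s, DSt = {dst:.1f} s")
```

A delayed, flattened curve, as produced by a left-to-right shunt or a large volume of lung water, shifts both quantities upward, which is exactly the pattern described above.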
Rice Price Ceiling (HET) regulation's effect on the rice inflation rate in South Sulawesi

This study analyzes the effect of the highest retail price (HET) policy for rice retailers, issued by the Minister of Trade in September 2017, on the rice inflation rate in South Sulawesi Province. The study used three cities considered to represent South Sulawesi in a comparative calculation: Makassar City, Pare-pare City and Bone District. All variables are analyzed using multiple linear regression. These variables are rice stocks, rice price margins, and the real exchange rate of each region. The data cover April 2016 to December 2018 on a monthly basis. The conclusion is that the policy affected the inflation rate, as evidenced by four of the six estimated equations in which the dummy variable was significant. Therefore, the government should review this policy so that inflation can be brought below the expected target.

Introduction

Inflation is an economic phenomenon that is still the focus of the government in Indonesia today. Hera et al. [1] describe its impact on the aggregate macroeconomy: economic growth, competitiveness, productivity and even income distribution. The output targets set by the government are inseparable from its goal of maintaining economic stability in Indonesia. One of the key factors is the price of rice. Rice prices are special because rice is still the main agricultural commodity and staple food of the Indonesian people. Indonesian consumption of rice reaches 114.6 kg per capita per year. The price of rice is inelastic, which makes the commodity vulnerable to inflation. Apart from the amount of consumption per capita per year, the supply of rice is often disrupted, mainly due to uncertain weather. According to Widiarsih [2], rice also has political value: the availability of supply and surges in prices have an impact on political stability, and when such turmoil cannot be properly managed it spills over into the political sphere. Thus, the availability and stability of rice prices are among the keys to achieving national stability, especially economic stability. Government policy takes the form of price regulation using two instruments: a base price and a maximum price. The base price is set through the government purchase price policy, while the maximum price is set through the highest retail price (HET) policy. The highest retail price is the maximum price at which goods may be sold during a trading period under prevailing trade conditions; sellers are not permitted to raise prices above this maximum [3]. The highest retail price of rice is established by Minister of Trade Regulation No. 57/M-DAG/PER/8/2017 concerning the determination of the highest retail price of rice. Each region in Indonesia has its own maximum selling price for rice. The regulation covers two types of rice, namely premium and medium rice. The highest retail price in the Sulawesi region is IDR 12,800/kg for premium rice and IDR 9,450/kg for medium rice. South Sulawesi is known as a rice producer and one of the leading rice suppliers in Indonesia. However, rice remains one of the commodities contributing to inflation in South Sulawesi year-on-year.
This is reflected in the figures for the five cities and regencies used to compute inflation for South Sulawesi: Makassar City, Pare-pare City, Bone Regency, Palopo City, and Bulukumba Regency. The purpose of this study was to examine whether this policy significantly affected the inflation rate in South Sulawesi. The study took three of the five cities/regencies determined by the Central Bureau of Statistics, namely Makassar City, Pare-pare City, and Bone Regency. The fundamental reason is that these three regions cover large areas of South Sulawesi and lie on rice distribution routes to other regions in Indonesia. The research method used is multiple linear regression with one dependent variable and four independent variables.

Methods

The data used in this study come from various sources and are therefore secondary data. These secondary data were obtained from the Central Bureau of Statistics, the Food Security Agency of the Indonesian Ministry of Agriculture, the Food and Agriculture Organization (FAO) and Bank Indonesia. Multiple linear regression is the analytical tool used in this study; it is useful for establishing the relationship of the independent variables to the dependent variable. The dependent variable is the year-on-year inflation rate. The independent variables are rice supply (X1), rice price margin (X2), the Real Exchange Rate (RER) (X3), and a dummy variable. The dummy variable aims to identify whether there were changes after the highest retail price policy was set in September 2017. Thus, the multiple linear regression equation used is as follows:

Y = β_0 + β_1 X1 + β_2 X2 + β_3 X3 + β_4 Dummy + e

where:
Y = year-on-year inflation rate
X1 = rice supply
X2 = rice price margin
X3 = real exchange rate (ratio)
Dummy = variable with a value of 0 (before) and 1 (after the HET policy)

The above equation combines variables with different units. Therefore, following Gujarati [4], the model is specified using the natural logarithmic (Ln) form, which has the advantage of minimizing the possibility of heteroscedasticity. The equation after transforming the data with natural logarithms is:

Ln(Y) = β_0 + β_1 Ln(X1) + β_2 Ln(X2) + β_3 Ln(X3) + β_4 Dummy + e

Based on these equations, the framework of the study was constructed.

Changes in rice price margin, rice supply and real exchange rate

Changes in these three variables need to be examined as a benchmark for the conclusions of this study. Rice price margin data are averaged for the periods before and after the highest retail price policy was enacted, and the difference is tested for significance using a paired-sample T-test. Based on the data collected, changes in rice price margins in the three regions are presented in Table 1. In Table 1, the average percentage is calculated for the periods before and after the determination of the highest retail price of rice. A significant T-test value indicates an effect of the highest retail price on the increase or decrease in the percentage rice price margin. The results show that in Makassar City and Bone District the price margins for premium rice increased after the stipulation of the highest retail price. According to Azwar [5], this is due to the length of the distribution chain, which causes larger price differences in Makassar City. As for Bone Regency, the margin for medium rice decreased due to better absorption of medium rice [6]. The average rice supply at the three research sites is presented in Table 2.
In Table 2 we can see the average supply of rice delivered to traders for each period specified in the table. Rice supply increased in Bone Regency for both medium and premium rice; this is due to the absorption of rice by Bulog from larger mills and a rice production surplus [6]. Meanwhile, in the other two regions, Makassar City and Pare-pare City, according to Susilowati [7] rice supply declined due to a rice production deficit, and these two areas account for the largest share of rice shipments to other regions in Indonesia. The Real Exchange Rate (RER) in the three regions is presented in Table 3. In Table 3, the average real exchange rate shows a declining ratio after the HET policy was set. The rupiah exchange rate over the calculation period and the increase in domestic rice prices are the factors that influenced the real exchange rate in 2017. Another factor is the increasing demand for rice from Iran and Bangladesh, which reduced rice stocks in countries such as Thailand, Vietnam and India, with an impact on international rice prices [8].

Effect of the highest retail price of rice policy on the inflation rate in South Sulawesi

Classical assumption tests are the first step in determining the effect of the HET rice policy on the inflation rate in South Sulawesi. The results for the six samples in the normality, multicollinearity, heteroscedasticity and autocorrelation tests indicate that the variables used in each sample are suitable for estimation. The next step is to use the adjusted R-square, partial T-test, and F-test to identify the significant influence of each independent variable on the dependent variable. Of the six estimated equations, four show a relationship between the setting of the highest retail price of rice and the inflation rate; this relationship is captured by the dummy variable. The results of this study can be seen in Table 4.

In Table 4, the multiple linear regression equations that have a significant effect can be identified from the significance value of each variable, given in parentheses. The explanations for each region are as follows.

Makassar City. The multiple linear regression equation that is simultaneously (F-test) significant at a level smaller than α = 0.05 is the equation for the premium rice sample. The adjusted R² for this equation is 41%. This means that the independent variables used in this equation explain 41% of the variation in the inflation rate; the remaining 59% (100% − 41% = 59%) is explained by other variables not included in this study. The partial tests of each independent variable in this equation are as follows. a. Constant (β_0). The constant of this equation is −0.380 and is significant at levels below 0.05. This means that if the other independent variables are ignored, the inflation rate in Makassar City for premium rice is −0.380%. b. Rice stock (X1). The coefficient of the rice supply variable is 0.035 and is significant at levels below 0.05. This does not follow the study by [9], which states that the coefficient of rice supply should be negative: rice supply has a negative relationship with the inflation rate, meaning that if the rice supply is increased, the inflation rate should slow down. However, the results of this study indicate that the level of rice consumption in Makassar City is still higher than the available rice supply. c. Rice price margin (X2).
The coefficient of the rice price margin is 0.032 and is significant at levels below 0.05. This is in line with [5]: rice prices influence the inflation rate because the length of the distribution chain affects the price margin. d. Dummy variable. The dummy variable coefficient is 0.046 and is significant at levels below 0.05. Since a dummy value of 1 describes the situation after the highest retail price of rice policy was set, the positive coefficient indicates that the policy is associated with an increase in the inflation rate in Makassar City for premium rice.

Pare-pare City. b. Rice supply (X1). The coefficient of medium rice supply is 0.038 and is significant at levels below 0.05. This does not follow the study by [9], which states that the coefficient of rice supply should be negative, rice supply having a negative relationship with the inflation rate. Rice supply in Pare-pare City strongly influences inflation there, since the city is the largest rice shipping point in South Sulawesi Province [7]. Rice sold in the area is not only for local consumption but is also sent to other regions. Meanwhile, the coefficient of rice supply for premium rice is not significant at levels below 0.05. c. Rice price margin (X2). The coefficient of the rice price margin for premium rice is 0.024 and is significant at levels below 0.05. This is in line with the opinion of Ade [10] that price margins are influenced by prices at the wholesaler and milling levels. If prices at the milling level rise, the price of rice in Pare-pare City, especially in Pasar Lakessi, will be affected. Meanwhile, the price margin for medium rice does not have a significant effect on the inflation rate at the 0.05 level. d. Dummy variable. Both multiple linear regression equations show a relationship between the inflation rate and the setting of the HET rice policy in Pare-pare City. The dummy variable coefficient for the medium rice equation is 0.052, while the coefficient for the premium rice equation is 0.033. Since a dummy value of 1 describes the post-policy period, the determination of the HET policy positively affects the inflation rate in Pare-pare City for both medium and premium rice. This positive relationship can be seen in the positive dummy variable coefficients.

Bone District. The multiple linear regression equation that is simultaneously (F-test) significant at a level smaller than α = 0.05 is the equation for the premium rice sample. The adjusted R² for this equation is 49.8%. This means that the independent variables used in this equation explain 49.8% of the variation in the inflation rate; the remaining 50.2% (100% − 49.8% = 50.2%) is explained by other variables not included in this study. The partial test (T-test) for each of the independent variables of this equation is as follows.
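For readers wishing to reproduce this type of estimation, here is a minimal sketch of the log-linear specification with a policy dummy using statsmodels; the data below are randomly generated placeholders, not the BPS or Bank Indonesia series used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 33  # monthly observations, April 2016 - December 2018

# Hypothetical placeholder series (the real data come from BPS and Bank Indonesia).
df = pd.DataFrame({
    "inflation_yoy": rng.uniform(1.0, 6.0, n),   # Y: year-on-year inflation rate
    "rice_supply": rng.uniform(500, 1500, n),    # X1
    "price_margin": rng.uniform(5, 20, n),       # X2
    "rer": rng.uniform(0.8, 1.2, n),             # X3: real exchange rate (ratio)
})
df["dummy_het"] = (np.arange(n) >= 17).astype(int)  # 0 before, 1 after Sept 2017

# Ln transform of the continuous variables, as in the Methods,
# to reduce the possibility of heteroscedasticity.
X = np.log(df[["rice_supply", "price_margin", "rer"]])
X["dummy_het"] = df["dummy_het"]
X = sm.add_constant(X)
y = np.log(df["inflation_yoy"])

model = sm.OLS(y, X).fit()
print(model.summary())  # adjusted R^2, F-test, and partial t-tests, as read off Table 4
```

The significance of the `dummy_het` coefficient in such a fit plays the role of the paper's test of whether the HET policy shifted the inflation rate.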
Modeling spatial evolution of multi-drug resistance under drug environmental gradients

Multi-drug combinations to treat bacterial populations are at the forefront of approaches for infection control and prevention of antibiotic resistance. Although the evolution of antibiotic resistance has been theoretically studied with mathematical population dynamics models, extensions to spatial dynamics remain rare in the literature, including in particular the spatial evolution of multi-drug resistance. In this study, we propose a reaction-diffusion system that describes the multi-drug evolution of bacteria based on a drug-concentration rescaling approach. We show how the resistance to drugs in space, and the consequent adaptation of growth rate, is governed by a Price equation with diffusion, integrating features of drug interactions and collateral resistances or sensitivities to the drugs. We study spatial versions of the model where the distribution of drugs is homogeneous across space, and where the drugs vary environmentally in a piecewise-constant, linear and nonlinear manner. Although in many evolution models per capita growth rate is a natural surrogate for fitness, in spatially-extended, potentially heterogeneous habitats, fitness is an emergent property that potentially reflects additional complexities, from boundary conditions to the specific spatial variation of growth rates. Applying concepts from perturbation theory and reaction-diffusion equations, we propose an analytical metric for the characterization of average mutant fitness in the spatial system based on the principal eigenvalue of our linear problem, λ_1. This enables an accurate translation from drug spatial gradients and mutant antibiotic susceptibility traits to the relative advantage of each mutant across the environment. Our approach allows one to predict the precise outcomes of selection among mutants over space, ultimately from comparing their λ_1 values, which encode a critical interplay between growth functions, movement traits, habitat size and boundary conditions. Such mathematical understanding opens new avenues for multi-drug therapeutic optimization.

Introduction

Bacterial resistance to antibiotics remains one of the biggest threats to public health. The emergence and selection of strains that are resistant to multiple antibiotics exacerbate the problem of resistance management and control (Laxminarayan et al., 2013; Cassini et al., 2019; Marston et al., 2016; Murray et al., 2022). Different strategies have been proposed to mitigate the problem of antibiotic resistance, including antibiotic cycling vs.
mixing patterns (Beardmore et al., 2017; Brown and Nathwani, 2005; Bergstrom et al., 2004; Nichol et al., 2015; Batra et al., 2021), synergies with the host immune defenses (Gjini and Brito, 2016), maintenance of competition with sensitive strains (Hansen et al., 2017, 2020; Gatenby et al., 2009; West et al., 2020), and, crucially, multidrug evolutionary strategies (Baym et al., 2016). Although the molecular mechanisms underlying the rapid evolution of drug resistance are increasingly understood, it remains difficult to link this molecular and genetic information with multi-species population dynamics at different scales (MacLean et al., 2010; Holmes et al., 2016; Singer et al., 2007; Denk-Lobnig and Wood, 2023). In addition, while the majority of studies focus on bacteria populations in well-mixed environments (Lane et al., 1999; Kawecki et al., 2012; Hughes and Andersson, 2017), natural communities evolve on spatially extended habitats that display multiple biotic and abiotic gradients (Donaldson et al., 2016; Chikina and Vignjevic, 2021) and potentially yield complicated networks of interacting subpopulations (Hanski, 1998; Nicoletti et al., 2023). Understanding the role of this spatial structure in the evolution of resistance is an ongoing challenge, despite the fact that environmental drivers of evolution, for example the local concentration of antibiotic, or the density of susceptible hosts, are known to vary on multiple scales: across different body compartments, organs, or tissues, and on longer length scales, between hospitals and geographic regions.

The impact of spatial heterogeneity on ecological and evolutionary dynamics has been studied in a wide range of contexts, from the spread of COVID (Thomas et al., 2020) and HIV (Zulu et al., 2014; Feder et al., 2021) to conservation ecology (Silva et al., 2006; Hovick et al., 2015). Graph theory and dynamical systems theory offer a number of elegant approaches for studying multi-habitat models on networks (Allen et al., 2017; Marrec et al., 2021) or in particular limits (e.g. with a center manifold reduction) (Constable and McKane, 2014). Theory indicates that the way different subpopulations in a community are topologically "connected" can alter evolution (Lieberman et al., 2005; Marrec et al., 2021), and laboratory experiments in microbes are beginning to confirm some of these predictions (Kreger et al., 2023; Chakraborty et al., 2021). In parallel, a separate body of work has focused on spatial dynamics of microbial communities on agar plates, leading to an increasingly mature understanding of range expansions and cooperation in multi-species communities (Korolev et al., 2010; Korolev, 2013, 2015; Datta et al., 2013; Sharma and Wood, 2021; Martínez-Calvo et al., 2023) or in populations impacted by complex fluid dynamics (Atis et al., 2019; Plummer et al., 2019).

In the specific context of drug resistance, spatial heterogeneity can manifest in multiple ways, from heterogeneity in drug concentrations to host heterogeneity in infectious disease models (Brockhurst et al., 2004; Campos et al., 2008; Chabas et al., 2018).
A number of theoretical and experimental studies have shown that spatial differences in drug concentration significantly impact the evolution of resistance (Galvin et al., 2013; Organization et al., 2022; Fu et al., 2015; Greulich et al., 2012; Hermsen et al., 2012; Kepler and Perelson, 1998; Moreno-Gamez et al., 2015; Zhang et al., 2011; Hermsen and Hwa, 2010; Baym et al., 2016; De Jong and Wood, 2018a). Theory suggests that the presence of spatial gradients of drug tends to accelerate resistance evolution (Hermsen and Hwa, 2010; Hermsen et al., 2012; Kepler and Perelson, 1998; Moreno-Gamez et al., 2015; Fu et al., 2015), though it can be slowed down by tuning the drug profiles (De Jong and Wood, 2018b) or in cases where the fitness landscape is non-monotonic (Greulich et al., 2012).

The connection between spatial heterogeneity and the evolution of resistance is particularly murky when multiple drugs are involved, making predictions difficult, both at a within-host level in a clinical setting, as well as at the higher population or ecological levels (Goossens et al., 2005; Asaduzzaman et al., 2022). Even in the absence of spatial structure, multi-drug therapies are a subject of intense interest. Antibiotics interact when the combined effect of the drugs is greater than (synergy) or less than (antagonism) expected based on the effects of the drugs alone (Loewe, 1953; Greco et al., 1995), and these interactions can accelerate, reduce, or even reverse the evolution of resistance (Chait et al., 2007; Michel et al., 2008; Hegreness et al., 2008; Pena-Miller et al., 2013; Dean et al., 2020; Gjini and Wood, 2021). In addition to these interactions, which occur when drugs are used simultaneously, resistance to different drugs is linked through collateral effects, where resistance to one drug is associated with modulated resistance to other drugs. Collateral effects have been recently shown to significantly modulate resistance evolution (Barbosa et al., 2018; Rodriguez de Evgrafov et al., 2015; Munck et al., 2014; Maltas and Wood, 2019; Maltas et al., 2020; Roemhild et al., 2020; Ardell and Kryazhimskiy, 2021).

Despite substantial progress in understanding spatial heterogeneity, drug interactions, and collateral effects separately, it remains unclear how these three components combine to impact the evolution of multi-drug antibiotic resistance. To address this gap, we study a general mathematical model describing a continuous environment with spatially-varying antibiotic concentrations, in which bacteria move, grow and are selected following deterministic dynamics. Our aim is to build an integrative framework for drug interactions and collateral effects in spatially-extended multi-drug environments and show how specific drug gradients can shape evolutionary outcomes.

The outline of the paper is as follows. In Section 2 we present the spatial model, extending the model first introduced in (Gjini and Wood, 2021) for multi-drug resistance evolution (summarized in BOX 1). In Section 3, we present an analytical quantity for predicting outcomes of selection in multi-drug spatial environments, and describe key cases of multi-drug resistance evolution linking simulations with theoretical predictions. We conclude with a discussion of our study's limitations and potential future extensions.
BOX 1. Modeling framework for multidrug resistance

Drug resistance as a rescaling of effective drug concentration. To link a cell's level of antibiotic resistance with its fitness in a given multidrug environment, we assume that drug-resistant mutants exhibit phenotypes identical to those of the ancestral ("wild type") cells but at rescaled effective drug concentration (Chait et al., 2007; Gjini and Wood, 2021). The phenotypic response (e.g. growth rate) of drug-resistant mutants corresponds to a rescaling of the growth rate function G(x, y) of the ancestral population at concentrations x and y of the two drugs. At such concentrations, the per-capita growth rate (g_i) of mutant i is given by

g_i = G(α_i x, β_i y),   (1)

where α_i and β_i are rescaling parameters that reflect an effective change in drug concentration and, therefore, in that mutant's subpopulation growth rate.

Mutant traits. In a 2-drug environment, each mutant is characterized by a pair of scaling parameters, which one might think of as a type of coarse-grained genotype. They can be measured experimentally, for example, as the ratios of the mutant MICs for two different antibiotics relative to those of the wild-type.

Mean trait evolution and population adaptation. Considering all growth rates g_i of existing mutants (i = 1, ..., M), the population dynamics of the scaling parameters follows naturally as a dynamic weighted average over all sub-populations. The mean resistance traits to drugs 1 and 2 (averaged over all mutants), ᾱ(t) ≡ Σ_{i=1}^{M} α_i f_i(t) and β̄(t) ≡ Σ_{i=1}^{M} β_i f_i(t), evolve as

dᾱ/dt = Σ_{i=1}^{M} α_i df_i/dt,   dβ̄/dt = Σ_{i=1}^{M} β_i df_i/dt,

where f_i(t) is the frequency of mutant i at time t in the population. Assuming exponential growth (dn_i/dt = g_i n_i, with n_i the abundance of mutant i and g_i given by Equation 1), the frequency f_i(t) changes as

df_i/dt = f_i (g_i − ḡ),

where ḡ = Σ_{i=1}^{M} f_i g_i is the (time-dependent) mean value of g_i across all M subpopulations (mutants).

The Price Equation for mean trait evolution. Combining equations, we arrive at

dᾱ/dt = Cov(α, g)_x,   dβ̄/dt = Cov(β, g)_x,

where Cov(α, g)_x ≡ Σ_{i=1}^{M} α_i f_i (g_i − ḡ) is the covariance between the scaling parameters α_i and the corresponding mutant growth rates g_i, and similarly for Cov(β, g)_x. The subscript x refers to the fact that the growth rates g_i and ḡ depend on the external (true) drug concentration x ≡ (x, y).

The model extended to space and spatial gradients of 2 drugs

We consider the case of a population of bacteria growing and diffusing in space (1-d) as a finite set of M subpopulations (mutants/strains), where each mutant has a potentially different level of resistance to the drugs (BOX 1). The total population size at each point in space (z) is given by n(z, t) = Σ_{i=1}^{M} n_i(z, t), with the dynamics of the subpopulations given by

∂n_i/∂t = D ∂²n_i/∂z² + g_i n_i,   (6)

where D represents the common diffusion coefficient and g_i the growth rate of mutant i. For simplicity we choose homogeneous Dirichlet boundary conditions, n_i(0, t) = n_i(L, t) = 0, and initial conditions n_i(z, 0) = n_{0,i}(z), with n_{0,i} the initial distribution of mutant i. This system is an instance of the classical reaction-diffusion system with exponential growth kinetics and Dirichlet boundary conditions. The assumption is that bacteria live in an idealized one-dimensional fixed domain of length L, and die when diffusing out of the habitat, either because they meet inhospitable conditions or because of a lack of resources for growth. Within the domain, we assume there are unlimited resources for growth, and there is no direct interaction between the mutants, i.e. all of them are assumed to grow exponentially, independently of each other.
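Before adding space, the well-mixed dynamics of BOX 1 can be sketched in a few lines of Python. The additive landscape, the drug doses and the mutant traits below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def replicator_step(f, g, dt):
    """One Euler step of df_i/dt = f_i (g_i - g_bar), then renormalise frequencies."""
    f = f + dt * f * (g - np.dot(f, g))
    return f / f.sum()

# Illustrative additive landscape (independent drugs) and a fixed two-drug dose.
G = lambda x, y: 2.0 - (x + y)
x_dose, y_dose = 0.8, 0.8

# Hypothetical (alpha_i, beta_i) traits: wild-type plus three resistant mutants.
traits = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
g = np.array([G(a * x_dose, b * y_dose) for a, b in traits])

f = np.array([0.97, 0.01, 0.01, 0.01])       # initial frequencies
for _ in range(2000):
    f = replicator_step(f, g, dt=0.01)

alpha_bar, beta_bar = f @ traits             # mean resistance traits (Price bookkeeping)
print("final frequencies:", np.round(f, 3))
print("mean traits (alpha_bar, beta_bar):", round(alpha_bar, 3), round(beta_bar, 3))
```

Tracking ᾱ and β̄ through such a simulation reproduces, step by step, the covariance-driven mean-trait change that the Price equation summarizes.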
2.2. Linking the model to multi-drug resistance: g_i = G(α_i x, β_i y)

While system (6) can be studied on its own, just as an abstract framework for mutants which vary in their growth rates and diffuse over space, here we focus on the key scenario that the growth rate is entirely determined by the (α_i, β_i) antibiotic resistance trait of each mutant and the two drug concentrations (x, y) (see BOX 1). Specifically we consider two cases:

• Case 1: g_i = G(α_i x, β_i y) is constant in space, i.e. a spatially homogeneous 2-drug environment. This leads to a constant selection coefficient everywhere in space. The frequency equation for each mutant in this case is given by

∂f_i/∂t = D ∂²f_i/∂z² + f_i (g_i − ḡ),

an instance of the Fisher-KPP equation (Fisher, 1937; Kolmogorov et al., 1937).

• Case 2: g_i(z) = G(α_i x(z), β_i y(z)) varies in space, i.e. a spatially inhomogeneous 2-drug environment, with drug concentrations that vary in z. This leads to a selection coefficient among mutants that is space-dependent, hence the frequency of each mutant at each point in space changes according to

∂f_i/∂t = D ∂²f_i/∂z² + f_i (g_i(z) − ḡ(z, t)),   (9)

where ḡ(z, t) = Σ_i f_i(z, t) g_i(z).

Initially we assume equal movement traits between all mutants, translating into equal diffusivities D, hence only growth differences driving a fitness gradient. The function G can be constructed analytically using abstract functional forms for antagonistic, synergistic or independent drug action (Loewe, 1953), or can be obtained empirically from growth measurements of wild-type bacteria (or an arbitrary reference strain) at a range of two-drug combination doses (x, y) ∈ [x_min, x_max] × [y_min, y_max] (see, for example, (Dean et al., 2020)).

In what follows, we investigate 3 broad regimes for G, corresponding to i) independent drug action, ii) synergistic drug interaction, iii) antagonistic drug interaction. From Equation 9 it becomes evident that a mutant will grow in frequency at some point in space only if its growth rate is higher than the population's mean growth at that point in space, and will decrease in frequency otherwise. Its frequency will not change if g_i = ḡ.

Multi-drug resistance evolution in space. Combining the equations above, we arrive at the following PDE system governing the evolution of the mean rescaling factors to drug 1 (ᾱ) and drug 2 (β̄), a measure of multi-drug resistance:

∂ᾱ/∂t = D ∂²ᾱ/∂z² + Cov(α, g)_{x(z)},   ∂β̄/∂t = D ∂²β̄/∂z² + Cov(β, g)_{x(z)}.

A similar link with the Price equation (Price, 1970, 1972) was derived in our earlier study (Gjini and Wood, 2021). However, compared to the non-spatial model (BOX 1), in this case we are dealing with two partial differential equations, because the mean traits ᾱ(z, t) and β̄(z, t) now evolve over space and time.

Numerical prediction of selection dynamics. While the Price Equation framework is compact, it is not by itself sufficient to predict evolution over multiple time steps, since the covariance terms are dynamic. We need the explicit mutant frequency information at a given time point to be able to simulate the system. Thus, provided with initial conditions, i.e. an initial spatial distribution for all mutants and their multi-drug resistance traits (α_i, β_i), a given drug-action landscape G, and an external drug concentration (x(z), y(z)), we can numerically integrate the equations to obtain solutions f_i(z, t) for the frequencies of all mutants, and finally ᾱ(z, t) and β̄(z, t), as well as the mean adaptation rate of the population ḡ(z, t) = Σ_i f_i(z, t) g_i(z).
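The reaction-diffusion system above can be integrated with a straightforward explicit finite-difference scheme. The sketch below is illustrative only: the grid, time horizon and growth profiles are assumptions, not the paper's exact numerical setup.

```python
import numpy as np

def simulate(g_funcs, D=0.01, L=1.0, K=200, T=50.0):
    """Explicit finite-difference integration of dn_i/dt = D n_i'' + g_i(z) n_i,
    with homogeneous Dirichlet boundaries n_i(0, t) = n_i(L, t) = 0."""
    z = np.linspace(0.0, L, K + 2)
    dz = z[1] - z[0]
    dt = 0.2 * dz**2 / D                       # within the explicit stability limit
    g = np.array([gf(z) for gf in g_funcs])    # (M, K+2) growth rates over space
    n = np.array([0.01 * np.sin(np.pi * z / L) for _ in g_funcs])  # initial profiles
    for _ in range(int(T / dt)):
        lap = np.zeros_like(n)
        lap[:, 1:-1] = (n[:, 2:] - 2.0 * n[:, 1:-1] + n[:, :-2]) / dz**2
        n += dt * (D * lap + g * n)
        n[:, 0] = n[:, -1] = 0.0               # death outside the habitat
    f = n / (n.sum(axis=0, keepdims=True) + 1e-300)  # frequencies f_i(z, T)
    return z, n, f

# Two hypothetical mutants: spatially constant growth vs. a linear drug gradient.
z, n, f = simulate([lambda z: 0.5 + 0.0 * z, lambda z: 1.0 * z])
print("frequencies at the domain center:", np.round(f[:, len(z) // 2], 3))
```

Simulating the subpopulation abundances n_i directly, and computing frequencies afterwards, sidesteps the coupling terms of the frequency PDEs while producing the same f_i(z, t).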
Results

In this study, we consider the context of a cellular population adapting to a multi-drug environment via selection of pre-existing diversity. Although another route to adaptation is provided by de-novo mutations, we limit ourselves here to the case of standing variation, where antibiotic-resistant mutants with different degrees of susceptibility to two drugs are already present from the start, albeit at possibly very low frequencies. We investigate if and how a spatial pattern of diversity in the distribution of antibiotic resistance phenotypes emerges over long time scales from the fitness gradient between such mutants. We observe that a variety of selection outcomes are possible in such an expanding population. We distinguish two qualitative regimes for spatial heterogeneity: i) constant drug concentration over space, and ii) spatially-varying drug concentrations, leading to constant and spatially-varying mutant growth rates over space, respectively.

The main novelty of our approach is that, beyond numerical simulations of the model PDEs, we propose a fully analytical measure of mutant fitness over space via which the outcome of selection can be predicted, studied and controlled.

An average fitness measure to predict selection outcome

When the mutants have the same growth rate everywhere in space, it is intuitive to compare them via their speed of propagation c_i = 2√(D g_i), hence by their growth rates. Yet, when the growth rate is a space-dependent function, it is not clear how to establish fitness hierarchies. Our system is governed by the PDE which describes each sub-population. Assuming separation of variables, and rescaling space to [0, 1], we arrive at the following eigenvalue equation,

D u″ + g(z) u = λ u,   u(0) = u(1) = 0,

where u is the eigenfunction corresponding to eigenvalue λ. This falls within classical Sturm-Liouville problems, whose spectrum consists of a discrete set of eigenvalues λ_n (in our case, decreasing) that determine the behavior of solutions. Importantly, when the principal eigenvalue λ_1 > 0, it follows that the solutions grow away from zero, corresponding to the trivial spatially homogeneous steady state being unstable. These eigenvalues can be numerically computed and evaluated, but they can also be analytically bounded using variational methods, or analytically approximated. The latter is the approach we adopt here. By assuming that g(z) can be written as g_max g_0(z), such that ϵ ≡ g_max L²/D ≪ 1 is a small parameter that describes the ratio of the two relevant timescales, the timescale for diffusion (L²/D) and that for growth (1/g_max, with g_max a suitable scaling factor, e.g.
the maximum value of g(z)), the eigenvalue equation above becomes

u″ + ϵ g_0(z) u = λ̃ u,

with λ̃ = λ L²/D the rescaled eigenvalue. For small ϵ ≪ 1, it is straightforward to derive expressions for λ via classical perturbation theory (see Supplementary material S1-S3), leading to an expression for the nth eigenvalue λ_n, up to any order in ϵ. In particular, to first order we obtain

λ̃_n = −n²π² + 2ϵ ∫_0^1 g_0(z) sin²(nπz) dz.

In the above expression, we can substitute ϵ explicitly. Then, reverting to original space and for n = 1, we obtain the following principal eigenvalue approximation:

λ_1 ≈ −π²D/L² + (2/L) ∫_0^L g(z) sin²(πz/L) dz.   (14)

This principal eigenvalue, in the case of a single population, holds key information for successful invasion, and in the case of multiple mutants, can be used to determine their relative success in growth and propagation over space. It is noteworthy that although, strictly speaking, our approximation for λ_1 is based on assuming ϵ ≪ 1, the practical use of this asymptotic approximation typically gives reasonable results outside of its strict range of applicability. So in many numerical examples we find that the λ_1 approximation predicts the outcome of selection very well also for small diffusion rates. The other extreme of the perturbation approach (growth much faster than diffusion) is formally analyzed in Supplementary material S2.

Special case: constant g(z) ≡ r. In the case where the growth rate is a constant r, Equation 14 for n = 1 yields the survival condition λ_1 > 0, which ultimately says that the growth rate r > π²D/L², a well-known result from spatial spread models (Skellam, 1951), also framed as a critical length required for the trivial steady state to become unstable, L > L_c = π√(D/r). Following this reasoning, the magnitude of λ_1 can become a relative measure by which to compare different mutants i = 1, ..., M growing and spreading in parallel. The mutant with the higher λ_1 should win. This condition can also be applied when mutants vary in their diffusion coefficients: the mutant with the smallest critical length π√(D_i/r_i) should exclude the others.

Arbitrary g(z). In general, and when the population is comprised of more than one sub-population, each experiencing a different space-dependent growth rate g_i(z), the largest eigenvalue (Eq. 14) can be taken as a measure of average fitness over space for each mutant, and we can expect that if we compare two mutants i and j, mutant i will ultimately dominate the population if λ_1(i) > λ_1(j), and vice versa. For direct comparison with simulations, we also present a discrete-space analytical approximation for λ_1 together with some key properties (BOX 2).

BOX 2. How should we define global mutant fitness over space?

To go from spatial growth rate g(z) variation among mutants to the final selection outcome over space z ∈ [0, L], we propose the use of the principal eigenvalue of the linearized equation for each mutant, λ_1. This yields an exact measure of global fitness, which can be numerically computed or analytically approximated (see Supplementary Section S1) to obtain the mutant ranking.
λ_1 when space is discretized. We begin with some suitable discretization of space, for example by subdividing the interval [0, L] with K + 2 equally spaced points (linearly-arranged patches with h_z = L/(K + 1)). In the case where ϵ ≡ g_max L²/D is relatively small, equation 14 for n = 1 can be approximated via second-order centered finite differences to represent the diffusion. Written in matrix form, where also the function g(z) is now discretized as piecewise constant (g_k) within each small sub-interval k, we can then use a Taylor expansion to arrive at an approximation for the principal eigenvalue λ_1. The first-order approximation for λ_1, under mutant growth rate g(z), for suitably high discrete resolution of space (K large), is given by

λ_1 ≈ −π²D/L² + (2/(K + 1)) Σ_k g_k sin²(πk/(K + 1)),   (15)

where k runs over all interior sub-intervals of space (see Supplementary Material S3).

Properties of λ_1 as a fitness measure over space.
• Under Dirichlet boundary conditions, λ_1 indicates that central parts of the domain have a higher value for growth than regions closer to the boundary. This can be seen from the weighting of g(z) via the function sin²(πz/L).
• The between-mutant difference λ_1(i) − λ_1(j) does not depend on diffusion (to first order). The diffusion rate D, when equal, cannot alter selection, but it can do so when it varies, i.e. D_i ≠ D_j.
• In order to increase accuracy for growth functions that are very close, and to enable selection sensitivity to a common diffusion rate, we must compute λ_1 by including additional higher-order terms in the Taylor approximation (see S3.2).
• The difference λ_1(i) − λ_1(j) will predict the same winner as the difference in spatially-averaged growth rates ḡ_i − ḡ_j if ∆g(z) = g_i(z) − g_j(z) does not change sign over space z. This is key to comparing selection under the spatial vs. non-spatial dynamics.
• Through the weighting of ∆g(z) via the function sin²(πz/L) inside the sum (Eq. 15) and the integral (Eq. 14), it is evident that spatial growth function differences ∆g(z) which are antisymmetric around the center (L/2), as when g_i and g_j are mutually symmetric about L/2, will yield λ_1(i) = λ_1(j), hence coexistence between strains i and j under equal diffusion rate D, albeit under possible spatial segregation.

Example dynamics in spatially homogeneous drug concentrations

For g_i = G(α_i x, β_i y), namely growth rates independent of space, the mutant with the highest positive g_i wins everywhere over long times, in competitive exclusion. The spatial dynamics of the frequencies of each mutant correspond to a travelling wave, with speed of spread c_i = 2√(D g_i). When there is no variation in D, the mutant with the highest positive g_i tends to fixation everywhere, irrespective of how this g_i is determined by the confluence of resistance traits and the 2-drug landscape in G(α_i x, β_i y), as illustrated in Figure 1. The only way for two (or more) mutants to coexist is for their fitnesses to be perfectly equal, i.e. if their antibiotic resistance traits are such that g_i and g_j fall on the same contour of G: G(α_i x, β_i y) = G(α_j x, β_j y). But the levels of such coexistence will depend on their initial total distribution, whereby the mutant with an overall advantage at the start also persists at higher frequency in the steady-state coexistence.
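The λ_1 machinery of Eq. 14 and BOX 2 can be checked numerically. The sketch below compares the first-order formula against the principal eigenvalue of the discretized operator for two hypothetical growth profiles with equal spatial means; all profile values are illustrative.

```python
import numpy as np

def lambda1_perturbative(g, D=0.01, L=1.0, K=400):
    """First-order formula: lambda_1 ~ -pi^2 D/L^2 + (2/L) int_0^L g(z) sin^2(pi z/L) dz."""
    h = L / (K + 1)
    z = np.linspace(0.0, L, K + 2)[1:-1]
    return -np.pi**2 * D / L**2 + (2.0 / L) * np.sum(g(z) * np.sin(np.pi * z / L) ** 2) * h

def lambda1_numeric(g, D=0.01, L=1.0, K=400):
    """Largest eigenvalue of the discretised operator D u'' + g(z) u, with u(0)=u(L)=0."""
    h = L / (K + 1)
    z = np.linspace(0.0, L, K + 2)[1:-1]
    lap = (-2.0 * np.eye(K) + np.eye(K, k=1) + np.eye(K, k=-1)) / h**2
    return np.linalg.eigvalsh(D * lap + np.diag(g(z))).max()

# Two hypothetical mutants with equal spatial mean g but opposite spatial placement.
g_central = lambda z: np.where(np.abs(z - 0.5) < 0.25, 1.0, 0.2)
g_edge = lambda z: np.where(np.abs(z - 0.5) < 0.25, 0.2, 1.0)
for name, g in [("central advantage", g_central), ("edge advantage", g_edge)]:
    print(f"{name}: lambda1 ~ {lambda1_perturbative(g):.4f} "
          f"(numeric: {lambda1_numeric(g):.4f})")
```

The sin² weighting visibly rewards the centrally-advantaged profile, even though both profiles have the same spatial average, which is exactly the property exploited throughout the Results.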
Competitive exclusion when two mutants have constant g_i and g_j in [0, L]

Some examples of outcomes among two strains for constant g_i over space are illustrated in Figure 1. If these constant growth rates are different, g_i ≠ g_j, then under equal initial conditions the mutant with the higher g will spread faster and eventually take over everywhere in space (Fig. 1A). In contrast, if these constant growth rates are equal, it is possible that asymmetric initial conditions create a bias, and the mutant with a head start will eventually win (Fig. 1B-D). The bias can be created by the total initial abundance (Fig. 1C) or by the relative distribution over space (Fig. 1D). In the latter case, the mutant with a relative advantage in the center of the domain will effectively spread faster and competitively exclude the other over all space.

Example dynamics in the presence of spatial gradients in drug concentrations

The case of g_i(z) = G(α_i x(z), β_i y(z)), when the drug concentrations can vary over space, and hence also the mutant selective advantages, is the more realistic, the more interesting and naturally the more complex one. We observe mainly two results: competitive exclusion with the same mutant winning everywhere, or coexistence of the same subset of mutants everywhere (although at different frequencies). The final outcome depends on how average fitness over space compares between all mutants. Analytically this PDE case is much more complex, and solutions have been obtained only for special cases under certain regularity conditions. Below we consider some specific scenarios.

3.3.1. Coexistence under g(z) variation but perfect (i, j) symmetry around L/2

An example of linear g_i(z), g_j(z) variation for 2 mutants leading to coexistence everywhere is shown in Figure 2. In this case, one strain is better-adapted in the first half of the domain, the other strain is better-adapted in the second half, with the selective advantages exactly counterbalanced (Fig. 2A). For low diffusion, each strain dominates in frequency in the part of the domain where it experiences a relatively higher growth rate, maintaining a high degree of spatial segregation in the system (Fig. 2B). As diffusion increases, the coexistence frequencies become more similar and tend towards 1/2 in both halves of the domain, leading to a more homogeneous spatial distribution of the strains over space (Fig. 2C-D).

Figure 2: Coexistence example of 2 strains everywhere in space for space-dependent g_i(z) which are mutually symmetric about L/2. A. One strain is better-adapted in the first half of the domain, the other strain in the second half, with the selective advantages exactly counterbalanced. B. For low diffusion, the two strains coexist such that each strain dominates in frequency in the part of the domain where it experiences a relatively higher growth rate, maintaining a high degree of spatial segregation in the system. C. As diffusion increases, the coexistence frequencies become more similar and tend towards 1/2 in both halves of the domain. D. Eventually, for very high diffusion, the growth variation matters less and less, and the two strains tend to the same frequency everywhere, leading to a uniformly homogeneous spatial distribution of diversity over space.
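For completeness, the mutually symmetric scenario of Figure 2 can be approximated by reusing the simulate() helper from the sketch in Section 2.2; the slopes and diffusion values below are illustrative stand-ins for the parameters used in the figure.

```python
import numpy as np

# Mirrored (counterbalanced) linear growth profiles around the domain center.
g_left = lambda z: 1.0 - 0.8 * z      # strain 1 favoured on the left half (hypothetical)
g_right = lambda z: 0.2 + 0.8 * z     # strain 2 favoured on the right half

for D in (0.001, 0.05):               # low vs. high diffusion, as in Fig. 2B-D
    z, n, f = simulate([g_left, g_right], D=D)
    q1, q3 = len(z) // 4, (3 * len(z)) // 4
    print(f"D={D}: strain 1 frequency at z=L/4: {f[0, q1]:.2f}, at z=3L/4: {f[0, q3]:.2f}")
```

At low diffusion the frequencies stay segregated on either side of L/2, while at high diffusion they converge toward 1/2 everywhere, mirroring the panels of Figure 2.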
Competitive exclusion under broken (i, j) symmetry around L/2, even with equal spatial averages of g

This is the example in Figure 3A, where g_1(z) and g_2(z) are piecewise-constant. For equal mean growth rates over space, the situation has long been tackled analytically (Cantrell and Cosner, 1991; Seno, 1988). As recognized in previous theoretical studies, for Dirichlet boundary conditions it is expected that the population with a spatial growth advantage in the center of the domain will experience the maximal fitness, and be the winner (Cantrell and Cosner, 1991). Indeed this is what we obtain when considering such piecewise growth functions which make one mutant more suited to the center of the domain and the other mutant more suited to the borders of the domain. Even though the mean growth rates are equal, the mutant with the central advantage spreads with an advantage and ultimately excludes the other everywhere in space.

Figure 3: Competitive exclusion everywhere in space, but the ultimately winning strain depends on the parameters of g_i(z) variation. A. In this piecewise growth rate example, g_1(z) and g_2(z) are such that the mean growth rates for both strains are the same, ḡ_1 = ∫_0^1 g_1(z)dz = ∫_0^1 g_2(z)dz = ḡ_2, for b = 1/2. Yet, even with equal spatially-averaged growth rates, the strain with the central advantage will be the winner. When b changes, the final winner is a result of b as well as of the magnitude of (max(g) − min(g)). B. In this example, the winner can be overturned by modulating the width of the interval where g_1(z) > g_2(z), while keeping the shape of the two functions. We assume the growth rates are non-monotonic functions of space, represented by a concave and a convex parabola with vertices near the middle of the domain: g_1(z) = m − σ(z − L/2)² + h and g_2(z) = m + 2σ(z − L/2)², with m = 0.3, σ = 0.4, D = 0.015 and h varied. The critical value of h for overturning the final outcome is h = 0.04. Mutant 1 loses if h < 0.04 but wins if h > 0.04, when its fitness advantage in the center of the domain is sufficiently high to compensate for its disadvantage near the boundaries. This cannot be predicted with the mean growth rate difference ḡ_1 − ḡ_2, but can be predicted with the λ_1 difference for mutants 1 and 2.

In our case (Figure 3A), the spatial averages of the two mutant growth rates are simply ḡ_1 = L(b g_max + (1 − b) g_min) and ḡ_2 = L(b g_min + (1 − b) g_max), where g_max, g_min are the maximum and minimum growth rates of each strain. The condition for them to be equal, for any g_max and g_min, is simply b = 1/2. In concordance with previous theory, when ḡ_1 = ḡ_2 we observe that the strain with the central advantage will be the ultimate winner in the system. In contrast, the situation gets more complicated when the spatially-averaged growth rates differ, hence b ≠ 1/2 such that ḡ_1 ≠ ḡ_2. It is not always superiority in mean g or in central advantage that drives selection. We can find cases in which the strain with the central advantage in g (in this case strain 1) loses overall because of its total growth rate, or in which the strain with the superior mean g loses overall because of its central fitness disadvantage. The key lies in the relative λ_1 magnitude of each mutant.

Tuning the spatial heterogeneity of g_i and g_j can invert selection

An example with nonlinear growth rates, leading to competitive exclusion with the possibility to revert the winner via a continuous parameter change, is shown in Figure 3B.
From the parabolic shapes of the growth rates it is not immediately clear which strain should ultimately grow and spread faster over space. Using λ_1 calculations (Eq. 14) for the difference between mutants, we find that for small values of the parameter h it is strain 2 that ultimately excludes strain 1, but for large enough h (here h > 0.04), strain 1's overall fitness is superior and it will be the only strain persisting in the system. This result is impossible to obtain from comparing purely spatial averages of the two growth rates. The information in the spatial average is insufficient because it weighs all points in space equally, whereas in reality death at the boundary, and the competition between diffusion and growth within the domain, make locations near the centre of the domain more valuable for any strain. The appropriate weighting of space is contained in the measure λ_1.

Figure 4: The basis for the atlas of multi-drug resistance evolution patterns over space. A. The four canonical mutant types for resistance phenotypes to two drugs, distributed in the (α, β) space of rescaling parameters: blue - fully resistant to drug 1 and sensitive to drug 2; red - fully resistant to drug 2 and sensitive to drug 1; purple - intermediate resistance to both drugs; brown - wild-type, sensitive to both drugs. B. The 3 drug fitness landscapes used: synergistic (left), independent (center) and antagonistic (right), as specified in Eq. 16. These drug landscapes give rise to g_i(z) = G(α_i x(z), β_i y(z)) as a function of the variation of the two drugs over space, x(z) and y(z). The relative fitnesses of the strains hence depend both on drug variation over space and on the details of the underlying growth landscape G.

An 'atlas' of selection outcomes under drug spatial heterogeneity: single or double resistance?

With this setup, it is now possible to systematically study scenarios of drug variation over space. First we focus on just 4 relevant mutants, which represent the main resistance combinations: mono-resistance to drug 1, mono-resistance to drug 2, double intermediate resistance to both drugs, and wild-type (see Figure 4). These correspond to special locations in (α, β) space, namely (0, 1), (1, 0), (0.5, 0.5) and (1, 1). Then we consider synergistic vs.
antagonistic drug interactions, under the assumption of a low diffusion rate, and several explicit 2-drug concentration variation patterns over space, x(z), y(z). Although more complex fitness landscape formulations are possible (Wood et al., 2014), the growth rates we assume for the drug interaction as a function of drug concentrations x and y can be constructed for illustration via the following simple functions:

G(x, y) = 2 − (x + y) − qxy for synergistic drugs,
G(x, y) = 2 − (x + y) for independent drugs,
G(x, y) = 2 − (x + y) + qxy for antagonistic drugs,   (16)

where q > 0 can be taken as a measure of the strength of interaction, illustrated in Figure 4B-D.

The results of the simulations are presented in Figure 5. Different multidrug concentration gradients give rise to a total of 12 scenarios, a kind of 'atlas'. We show the frequencies obtained by simulating the system long enough for it to have reached a spatial equilibrium. These are not meant to be an exhaustive analysis, but a summary of key cases of spatial heterogeneity that can shape the evolution of multi-drug resistance along the main axes of monomorphic vs. polymorphic phenotypic distributions. A study of more multidrug scenarios, under a more complex growth function G, is shown in Figures S1 and S2-S3, where we also explore different diffusion rates.

Figure 5: An atlas for 2-drug resistance evolution in space under spatial heterogeneity. We consider only four available mutants, each with different resistance phenotypes to two drugs, distributed in the (α, β) space of rescaling parameters: blue - fully resistant to drug 1 and sensitive to drug 2; red - fully resistant to drug 2 and sensitive to drug 1; purple - intermediate resistance to both drugs; brown - wild-type, sensitive to both drugs. We considered a diffusion coefficient of D = 0.01; the spatial equilibrium is obtained numerically by considering the system at t = 1000. For all the simulations, we considered the same initial distributions, with 99% wild type and the remaining 1% distributed equally among the three resistant mutants. Without loss of generality, the initial distributions of each mutant were shaped as the function sin(πz/L), so that the homogeneous Dirichlet boundary conditions were respected. The growth landscapes were as specified in Equation 16. The interaction strength is fixed at q = 0.5 both in the case of synergistic and antagonistic interaction. For more drug gradient scenarios, under a more complex drug interaction profile and two diffusion rates, see Supplementary Figures S1-S3.

These scenarios show that exactly anti-symmetric drug gradients relative to the center of the spatial domain are the ones most likely to lead to coexistence of different types of resistance (Fig. 5A): in the case of independent drugs, both mono-resistant mutants and the double-resistant mutant coexist; in the case of antagonistic drugs, the intermediate double-resistant mutant has a higher chance of excluding the mono-resistant variants; and in the synergistic drugs regime, only the two mono-resistant mutants coexist. When one drug concentration exceeds the other throughout the domain, as intuitively expected, the mutant that gets selected, in a competitive exclusion scenario uniform over space, is the mutant that is resistant to that drug (Fig. 5B-C).
When the drugs co-vary in a non-linear manner over space, depending on the way this gradient translates into relative growth functions among mutants, it is possible to have different selection scenarios, and to fine-tune parameters to invert the hierarchical competitive fitnesses of different types of resistance mutants (Fig. 5D). These selection outcomes can be entirely predicted analytically by computing and comparing the principal eigenvalues (Equation 14) between mutants in each case.

Verifying predictions of selection over space based on λ_1

Next we show how the ranking based on λ_1 comparison between mutants gives an accurate prediction of the final outcome of competition over space in a more complex case with more mutants (M > 2), a more complex drug-interaction function, and an arbitrary distribution of resistance phenotypes (see Figure 6).

Figure 6 (caption fragment): Initial conditions (99% vs 1%: WT vs. all mutants) were assumed equal for all strains, satisfying the boundary conditions n_i(z, 0) ∼ sin(πz).

Selection over space under motility and growth differences among mutants

A direct extension of this model is, instead of assuming an equal D, to allow for mutant-specific diffusion rates D_i in the environment. In this case, there is an additional trait affecting global fitness over space, namely motility. In the case of constant growth rates g_i, it is straightforward to obtain the fittest mutant by ranking on the classical critical-length criterion L_crit,i = π√(D_i/g_i) (also related to λ_1). The mutant with the smallest L_crit,i < L should win. Obviously, when two mutants have exactly the same fitness, perhaps by counter-balancing growth and diffusion, leading to the same L_crit (equivalently the same λ_1), they will coexist. In the case of spatially-varying growth rates g_i(z), one can resort to the same principal eigenvalue approach and compute each mutant's fitness by including the assumption of a different D_i for each mutant. In both these cases, the diffusion traits play a key role in determining the winner or coexistence in the system, with fast diffusion sometimes being able to rescue locally maladapted strains, and slow diffusion sometimes amplifying the fitness of lower-growth variants (see Supplementary Material S4). These theoretical predictions, made accessible here through the λ_1 comparison between variants, could be linked with existing empirical observations on bacterial coexistence and inverted competitive hierarchies driven by motility and spatial competition (Gude et al., 2020).

The case of periodic habitat quality: periodic multi-drug regimes

A special case of environmental variation is periodic habitat quality; in the case of multi-drug regimes, this translates to periodically-varying drug concentrations in space. This case has long been studied in the theoretical ecology literature, for example for finite one-dimensional or two-dimensional space, or for an infinite one-dimensional environment (Berestycki et al., 2005a,b). Typically a discretization approach is used, dividing the landscape into periodically-alternating patches of two or more types. In our case, such alternation of the space into patches of differential suitability for growth comes as a result of fitness being a direct function of the two drug concentrations. Namely, for drug concentrations varying periodically in space,

x(z) = k_1 + A_1 sin(2πz/T_1),   y(z) = k_2 + A_2 sin(2πz/T_2),
where A_1 and A_2 denote the amplitudes of spatial variation and T_1 and T_2 the periods of the variation for each drug, the growth rate at each point in space of the wild-type (reference strain) would be given by g_wt(z) = G(x(z), y(z)), and in general, the growth rate of any variant with resistance traits (α_i, β_i) would be given by g_i(z) = G(α_i x(z), β_i y(z)).

We consider again the simple drug landscapes G specified in Eq. 16, where with a single parameter q > 0 we can vary the strength of the drug interaction. Focusing on three classical mutants with traits (0, 1), (1, 0) and (0.5, 0.5), and computing their fitness based on the principal eigenvalue approach, we can study which type of resistance, whether resistance to drug 1, to drug 2, or intermediate resistance to both drugs, will be favoured in each periodic drug regime, and how the result depends on the type and strength of drug interaction. Indeed we see that for a given periodic variation of the drugs over space, leading to a periodic variation of g over space, typically one mutant has the highest fitness. The theory predicts that under Dirichlet boundary conditions, and fixed mean growth rate over space, the population whose growth variation exhibits the highest amplitude of variation will be the one to win (Berestycki et al., 2005a,b), e.g. the mutant that experiences a less fragmented habitat. This result is not immediately translatable to our case, since what we are controlling is not the average growth rate of each mutant, but the total amount of drug 1 and the total amount of drug 2. In our simulations with the given parameters, when mean growth rates of mutants may vary, we find that the mutant with resistance to drug 1 is selected (Fig. 7, top panels). However, as drug interactions increase in magnitude, the intermediate double-resistant mutant can be selected (Fig. 7, last panel). These outcomes of selection cannot be understood simply by comparing the amplitude and frequency of variation in g(z); they also depend crucially on the mean growth rate, which may change as we vary the two drugs or their interaction. In any case, computing the relative fitnesses on the basis of the principal eigenvalue ranking leads to robust analytical results and very general predictions for any environmental variation, and any coupling from environment to fitness.

Figure 7: Selection outcomes for periodic drug regimes leading to periodic growth rates over space. A. The periodic variation of drug 1 and drug 2 over space, keeping the total amount of each drug equal. The periodic function parameters, under conservation of total drug, are: k_1 = A_1 = 0.5, T_1 = 2 and k_2 = A_2 = 0.72, T_2 = 0.4. Further we show mutant growth rates and selection outcomes under: B. synergistic drug interactions; C. independent drugs; D. antagonistic drug interactions; E. even more antagonistic drugs. The first column shows the resulting growth rates g_i(z) for each mutant following the linear G function combinations in Eq. 16, with q = 0.5 (first three rows), and q = 1.5 in the last row, depicting the case of stronger drug antagonism. The second column shows the associated final fitnesses of the 3 mutants over space, computed on the basis of the principal eigenvalue. The assumed diffusion coefficient is D = 0.01.

Figure 8: Fitness ranking among 3 mutants for periodic drug regimes, as a function of the spatial period of drug 2. A. Synergistic drug interaction. B. Independent drug action. C. Antagonistic drug interaction. The strength of interaction, when assumed, was q = 0.5, and G(x, y) was specified as in Eq. 16.
Figure 8: Fitness ranking among 3 mutants for periodic drug regimes, as a function of the spatial period of drug 2. A. Synergistic drug interaction. B. Independent drug action. C. Antagonistic drug interaction. The strength of interaction, when assumed, was q = 0.5, and G(x, y) was specified as in Eqs. 16. The periodic variation of drug 1, x(z), was held fixed, while the drug 2 concentration y(z) over space was varied by varying the period T2. These parameters were fixed: k1 = A1 = 0.5, T1 = 0.8 and k2 = A2 = 0.5 before normalization, which then leads to total conservation of drug 1 and drug 2 (fixed amount = 1) for each spatial period T2 of drug 2. Final fitnesses of the 3 mutants over space were computed on the basis of the principal eigenvalue. Assumed diffusion coefficient is D = 0.01. In blue: single resistance to drug 1; in red: single resistance to drug 2; in purple: double-resistant mutant with intermediate resistance to each drug.

Open avenues for multi-drug optimization over space

Keeping the total amount of each drug constant and equal when integrated over space, one can then try to optimize periodic drug administration over space so as to select one or the other mutant. For the purposes of illustration, since detailed optimization falls beyond the scope of this paper, we studied systematically how the winning mutant varies depending on the period of a single drug's administration over space (Figure 8), and as a function of both drugs' spatial periods (Figure 9), in the three cases of synergistic, antagonistic and no drug interaction. For our parameter combinations, we observe that synergistic and independent drug action produce very similar selection outcomes for any combination of periods of the two drugs between 0 and 2, always favouring single resistance to one drug, albeit with the winning strain switching at certain critical parameter thresholds.

In contrast, the antagonistic drug scenario is the one that can also lead to selection of the intermediate double-resistant mutant, and does so in a majority of parameter combinations. This confirms that antagonistic drug combinations, even when considering spatial variation, typically constrain selection for high-level single resistance in favour of intermediate double resistance.

Notice that two scenarios displaying the same final selection outcome only means that the ranking of the principal eigenvalues produces the same fittest strain; this does not preclude differences in the transient dynamics leading up to that outcome. Coexistence is obtained when the strain fitnesses computed from λ1 are equal. Sometimes two mono-resistant strains may coexist while excluding the double-resistant mutant (orange border line in Fig. 9A), or the double-resistant mutant may coexist with a single mono-resistant strain (region border lines in Fig. 9C); under independent drugs, three strains can coexist, including both mono-resistant strains and the double-resistant strain (green border line in Fig. 9B).
In contrast, using the spatially-averaged growth rates in these scenarios as a proxy to predict selection would yield very different results from λ1. With the drug landscape defined in Eqs. 16 and the total amounts of the two drugs equal, we would have only one possible outcome under each drug interaction, independently of the periods (T1, T2). Namely, in the synergistic case we would always have coexistence of the mono-resistant mutants; in the independent drug action case, coexistence of the three mutants (both single-resistant mutants and the double-resistant mutant have equal ḡ); and under antagonistic drugs, the double-resistant mutant with intermediate resistance to both drugs would competitively exclude the other mutants. This result can be easily verified analytically via the integrals ḡ = ∫₀ᴸ g(z) dz, using the assumption of equal total drug amounts, ∫₀ᴸ x(z) dz = ∫₀ᴸ y(z) dz, and noting that q > 0. Hence, for whichever variation of x and y over space (i.e., independently of T1 and T2), the single-drug resistances would be selected.

The explicit analytical handle on relative fitness based on λ1 could further be used to design optimal multi-drug regimes over space under certain constraints. For example, fixing the total amount of drug 1 and its spatial variation, what total amount and spatial variation of drug 2 would be needed to drive the double-resistant mutant toward extinction? In Figure S5 we illustrate an answer to this question, identifying precisely those drug-2 gradients over space that would be effective. Further analytical advances in multi-drug therapeutic optimization over spatially extended habitats could be obtained by exploiting previous theoretical results (Cantrell and Cosner, 1991; Berestycki et al., 2005b; Pellacci and Verzini, 2018) and linking them to antibiotic and bacterial realities, or by generating new results on a case-by-case basis for specific microbial ecosystems under spatial gradients.
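The disagreement between the two fitness measures can be checked numerically. In the sketch below (illustrative profiles, not the paper's Eqs. 16), two growth profiles share the same spatial mean, so the ḡ proxy predicts a tie, while the Dirichlet-boundary eigenvalue favours the spatially concentrated profile, in line with the amplitude-of-variation result cited above.

```python
import numpy as np

def lambda1(g, D=0.01, L=1.0):
    # Largest eigenvalue of D*d2/dz2 + g(z) with Dirichlet boundaries.
    n = len(g)
    h = L / (n + 1)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.eigvalsh(D * lap + np.diag(g)).max()

z = np.linspace(0.0, 1.0, 302)[1:-1]
# Two assumed growth profiles with the same spatial mean g_bar = 0.5:
g_flat = np.full_like(z, 0.5)               # uniform growth
bump = np.sin(np.pi * z) ** 4
g_peaky = 0.5 + 0.6 * (bump - bump.mean())  # same mean, concentrated peak

print("g_bar:   flat = %.3f  peaky = %.3f" % (g_flat.mean(), g_peaky.mean()))
print("lambda1: flat = %.3f  peaky = %.3f" % (lambda1(g_flat), lambda1(g_peaky)))
# The average-based proxy predicts a tie, while the eigenvalue fitness
# favours the profile whose growth is spatially concentrated.
```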
Discussion

The spatiotemporal evolution of strain frequencies in a population spreading in a homogeneous environment can be described by parallel travelling fronts, where each strain propagates in space at a constant speed according to the classical Fisher-KPP equation (Fisher, 1937; Kolmogorov et al., 1937), an equation with a long history of study in biological invasions and population genetics (Skellam, 1951; Aronson and Weinberger, 1978). Under classical exponential or logistic growth kinetics, such competition typically leads to competitive exclusion, where the fittest strain is the only one to survive everywhere in space over long times.

In a spatially-varying environment, the local quality of the habitat affects the speed of spread of an invading population. Natural environments where populations grow and spread are generally heterogeneous, composed of different sub-habitats, such as forests, plains and marshes, or consist of patches divided by barriers such as roads, rivers and cultivated fields (Kinezaki et al., 2003). This case has a long history of ecological, environmental and agricultural interest, and a long history of mathematical study, with analytical results on periodic traveling waves (Shigesada et al., 1986), on piecewise environmental variation (Cantrell and Cosner, 1991), and up to the more recent work of Berestycki et al. (2005a,b) on periodic spatial variation, with the pulsating front characterized by its average speed.

The key quantity highlighted in many of these earlier studies is the principal eigenvalue of the linearized equation, which determines the global stability of the stationary state 0. Although these studies were primarily interested in the biological invasion of a single species, their results can be applied to the context of multiple strains of a population spreading in a heterogeneous habitat. One can use the same logic for ranking the fitness of different mutants in such a heterogeneous habitat, or alternatively for determining the relative environmental suitability for each mutant. Ranking the global (in)stability of the 0 steady state, via principal-eigenvalue comparisons, allows one to reach conclusions about the strains' survival in an environment that is differentially suitable, and hence to predict selection outcomes in the system over long times.
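As a toy illustration of this homogeneous baseline, the sketch below evolves two non-interacting strains with exponential growth and diffusion on a Dirichlet domain and tracks the frequency of the fitter strain; the parameters are arbitrary and the setup is deliberately simpler than the models analysed above.

```python
import numpy as np

# Two strains spread by diffusion and grow exponentially (linear kinetics),
# with no direct interaction; all parameter values are illustrative.
D, g = 0.01, {"strain 1": 1.0, "strain 2": 0.8}
nz = 201
z = np.linspace(0.0, 1.0, nz)
dz, dt, steps = z[1] - z[0], 2e-4, 100000       # total time t = 20

n = {k: np.sin(np.pi * z) for k in g}           # equal initial profiles
for _ in range(steps):
    for k in n:
        lap = np.zeros(nz)
        lap[1:-1] = (n[k][2:] - 2 * n[k][1:-1] + n[k][:-2]) / dz**2
        n[k] = n[k] + dt * (D * lap + g[k] * n[k])
        n[k][0] = n[k][-1] = 0.0                # Dirichlet boundaries

f1 = n["strain 1"][1:-1] / (n["strain 1"][1:-1] + n["strain 2"][1:-1])
print("frequency of strain 1 after t = 20: %.4f" % f1.mean())
```

Because the kinetics are linear, the frequency of the higher-growth strain approaches 1 everywhere over time, the simplest form of the competitive exclusion described above.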
Harnessing the analytical foundations of these results, we here go one step further by applying this theory to the context of antibiotic resistance evolution and providing an accurate approximation of this principal eigenvalue (fitness measure). We study arbitrary variation in environmental suitability, linking spatial heterogeneity to explicit multi-drug antibiotic regimes, and integrating fitness landscapes, drug interactions and collateral effects, with the aim of predicting multi-drug resistance evolution as a selection process among arbitrary mutants. Other studies have considered the role of spatial heterogeneity in the evolution of resistance, using the framework of an epidemiological model and focusing on the case of a single drug used with periodic variation in the growth rates of single- and double-resistant genotypes (Griette et al., 2022). Here we develop a more general and comprehensive link between traveling fronts and multi-drug resistance. In our framework, periodic variation can be seen as a special case of spatial heterogeneity, arising when drugs are used at periodic concentrations over space. Furthermore, unlike Griette et al. (2022), we include the possibility of multiple interacting drugs, which affects mutant success and the final competitive outcome between multi-drug-resistant variants.

The main advantage of this framework is that it provides a simple and general template for studying multi-drug resistance evolution in space, applicable in principle to other systems and open to analytical extensions (BOX 3). In particular, it allows for continuous resistance traits, includes collateral effects and drug interactions explicitly, and bases the prediction of final outcomes on a dominant-eigenvalue ranking among mutants, which can be analytically approximated. The framework is easily extendable to spatial variation of more than two drugs, enabling the study of higher-dimensional evolution in antibiotic resistance fitness traits. Especially in the case of 1-d and 2-d environments, many results from the literature can be directly applied to antibiotic resistance, such as the optimal spatial variation to prevent or facilitate global spread of an invading species or strain (Cantrell and Cosner, 1991; Berestycki et al., 2005a).

Mathematically, and strictly speaking, the λ1 presented in our framework corresponds to a growth rate, related to the negative of the principal eigenvalue in other studies (Cantrell and Cosner, 1991; Berestycki et al., 2005a; Pellacci and Verzini, 2018), whose optimization for invasion and persistence would seek a minimum. While this is a convention, our technical choice enables us, conceptually and practically, to rank the mutants more easily, favouring the one with the relatively higher λ1. Other approaches to compute or obtain suitable bounds for λ1 can come from variational methods applied to Sturm-Liouville problems and the Rayleigh quotient (Cantrell and Cosner, 1989). We adopted a perturbation-theory approach which, despite formally requiring the perturbation term to be small, appears to apply well outside this strict range. In particular, the λ1 approximation matches very well the selection results displayed by the model simulations, even in cases of lower diffusion.
A disadvantage of our approach is that while the population and its constituent strains are considered dynamic, the environment, namely the drug concentrations across space, is generally regarded as static. Other modeling frameworks are needed to treat situations with both temporal and spatial variation in the environment, or with mutual feedback, such as resource-based models where cells physically interact with resources at the expanding front, e.g. in biofilms (Young and Allen, 2022; Sinclair et al., 2019). Similarly, by assuming large populations, we neglect stochastic fluctuations in sub-populations, which might affect selection outcomes in certain settings.

BOX 3. General applicability of the framework, outlook and challenges

Spatial growth and selection under multiple stressors. The simple framework based on rescaling parameters (α, β) could be applied to other biological populations, at different scales, growing in response to multiple stressors and spreading through migration in heterogeneous environments that generate a gradient for growth. Differential variant makeup, whether genetic or non-genetic but heritable, that produces different susceptibility traits to these stressors in the population forms the basis for selection on relevant timescales, manifested in its simplest form as competitive exclusion (monomorphism) or coexistence patterns (polymorphism) in spatially extended habitats. The stressors could range from antibiotics (Larsson and Flach, 2022) and agrochemicals (Malagón-Rojas et al., 2020) to temperature and moisture (Jiang et al., 2017), pH and salinity (Wicaksono et al., 2022), nutrients, oxygen, or physiological micro-environments (Chikina and Vignjevic, 2021). The primary dose-responses of stressor-to-growth phenotypic effect can be used to obtain (αi, βi) traits in the sub-populations of interest, and the G(x, y) of the reference (WT) strain can then be rescaled accordingly to obtain growth rates gi for all variants under any combination of stressor concentrations.

Limitations and caveats. In some cases, scaling factors may not fully capture the variation in mutant growth relative to wild-type as a function of stressor concentrations. Much more nonlinear functional transformations may be required to obtain mutant fitness landscapes, and this remains an active area of research (Wood et al., 2014). Similarly, the exponential model may be too simplistic to represent the intricate mechanisms of growth and the interactions between strains (Maciel and Lutscher, 2018; Estrela and Brown, 2018), thus far assumed negligible. We focused on the case of mainly positive growth rates. However, locally negative growth rates (e.g. supra-inhibitory stressor doses) could complicate outcomes, via stronger dependence on initial conditions, or additional sensitivity to variable diffusion rates to compensate for fitness troughs.
Extensions and outlook. Applications can be envisaged in other systems such as gut microbiota, soil bacteria and their spatio-temporal distribution under abiotic gradients, cancer cell populations and drug-resistance selection dynamics along physiological gradients, antibiotic resistance evolution at the epidemiological scale, toxicology data, freshwater aquatic systems and environmental biotechnology. More than two stressors could be implemented, thereby yielding high-dimensional susceptibility traits. An additional axis of resistance cost can be integrated, either independently or via a functional dependence on the (α, β) traits. The model could include space-dependent diffusion or habitat-preference bias at the interface between distinct environmental patches (Maciel and Lutscher, 2018). Analytical extensions could exploit links with homogenization techniques from landscape ecology (Yurk and Cobbold, 2018) and global fitness perspectives from adaptive dynamics (Metz et al., 1992). We only included diffusion, i.e. random movement of cells in space, while other similar reaction-diffusion models focusing on microbiota composition variation along the gut have included both diffusion and directed flow of bacterial lineages along a longitudinal growth gradient (Ghosh and Good, 2022). We also did not explicitly include active mutation processes in the kinetics of the spatial model, a process which would break the independence between the existing strains (Gjini and Wood, 2021) and preclude the straightforward application of the first-eigenvalue approach for fitness comparison. In the particular case in which mutation rates to a given variant are equal among all possible parental strains, one could plausibly use the initial total distribution among mutants as a proxy for such hierarchical mutation biases and still apply the present model. De-novo diversity generation could be added with specific assumptions on parent and offspring phenotypes, dependence on the current environment, population size, and/or a spatially-varying mutation rate. This remains an interesting avenue for the future.

Another mechanism that would break the independence between sub-populations is competition or facilitation between variants, explicitly embedded into their growth kinetics and spatial spread. Other studies have addressed this, e.g. in ecological models (Maciel and Lutscher, 2018; Estrela and Brown, 2018) or epidemiological multi-strain systems (Le et al., 2023; Le and Madec, 2023), leading to potentially very complex replicator-type dynamics. Although we did not focus on analytical results for coexistence levels between strains when their global fitnesses equalize over space, such results can be easily obtained from the same perturbation-based approximation steps that allowed us to compute the dominant eigenvalue for each mutant (see S1-S3). On the other hand, analytical results for total population size and frequencies could be more easily obtained for cases of constant rates over space, applying classical reaction-diffusion theory and the travelling-wave characterization of the solutions.

While the debate on antibiotic resistance management (Raymond, 2019) has optimization at its center, we have only briefly sketched a few aspects of the model that could inform optimal combination multi-drug therapies over space (e.g. Fig.
9 and S5). Analytical results on optimization remain challenging even for simple piecewise growth variation, as recognized in earlier work (Cantrell and Cosner, 1991). Many factors play a role, including sensitivity to boundary conditions and the structure of habitat fragmentation (Berestycki et al., 2005a; Pellacci and Verzini, 2018), modelling assumptions, nonlinearities linking phenotypes to growth and variation in habitat 'quality', and the details of the underlying stressor interaction. However, the general framework presented here for multi-drug gradients provides a basis that can be tailored to specific systems and their optimal control in the future. We expect many opportunities for model-data links both in microbiology and laboratory evolution experiments, as well as in the larger-scale epidemiology of multi-drug resistance evolution.

Figure 1: Example of outcomes among two strains for constant gi over space. A. Competitive exclusion. In this example, strains start at a uniform distribution over space, with g1 > g2; hence the dynamics lead to a traveling wave solution for f1(z, t) and f2(z, t), with strain 1 traveling at a faster speed and ultimately winning everywhere over long times. B. (Neutrally-stable) coexistence at 50:50, because the mutants start at equal total abundances and g1 = g2. C. (Neutrally-stable) coexistence different from 50:50, because the mutants start at different total abundances and g1 = g2. D. (Neutrally-stable) coexistence different from 50:50, because the mutants start at equal total abundances with g1 = g2, but their initial distribution over space favours the one that starts at higher abundance in the center of the domain.

Figure 6: Validating selection predictions based on λ1 ranking among several competing mutants. We illustrate a model simulation under the linear drug gradients in A, with 10 multi-drug-resistant mutants varying in (αi, βi) traits (B), growing (gi(z) in C) and spreading over space with diffusion coefficient D = 0.01. The λ1 values (Eq. 14) in D match very well the spatial selection dynamics observed numerically (E). Initial conditions (99% vs 1%, WT vs. all mutants) were assumed equal for all strains, satisfying the boundary conditions ni(z, 0) ∼ sin(πz).

Figure 9: Drug-resistance selection outcomes for two periodic drugs as a function of their spatial periods T1 and T2. A. Synergistic drugs. B. Independent drugs. C.
Antagonistic drugs. Shaded blue region: single resistance to drug 1 has the highest fitness; shaded red region: single resistance to drug 2 has the highest fitness; shaded purple region: the double-resistant mutant with intermediate resistance to each drug has the highest fitness. The periodic variations of drug 1 and drug 2 over the one-dimensional space z ∈ [0, 1] are constructed in such a way as to keep the total amount of each drug equal to 1. The periodic function parameters are initially specified as k1 = A1 = k2 = A2 for any combination of periods T1 and T2, and then immediately scaled by the integral of the periodic function over space, to obtain a total amount of drug equal to unity in each case. Assumed diffusion coefficient is D = 0.01. The growth functions of each mutant over space are obtained following Eqs. 16, together with the assumption that a mutant with traits (αi, βi) experiences the two drugs at concentrations αi x and βi y. The interaction strength is fixed at q = 0.5 in both the synergistic and the antagonistic case. In the case of synergistic/antagonistic interaction, the effect is to decrease/increase the growth rate of bacteria relative to the simple additive effect of the two drugs. See Figure S4 for the analogous figure under a more complex drug-interaction function, highlighting the sensitivity to the fitness landscape.
Problems in the Test Procedure of Hydrated Microsphere Particle Size

Microsphere profile control is a new deep profile control technology in oilfields that has been developed in recent years. The particle size distribution of hydrated microspheres is an important basis for their application in formation pore throats and has a direct influence on the profile control effect. First, results for hydrated microsphere particle size were obtained with the current testing method, which is based on a static laser particle size analyzer. The existing problems in current particle measurement of hydrated microspheres were then studied. Results showed two main problems in the current test methods: poor reproducibility of the test results, and failure to resolve the true particle size distribution of the hydrated microspheres. Finally, it is suggested that the influencing factors in the test procedure of hydrated microsphere particle size be analyzed.

Introduction

Microsphere flooding is a new deep flooding technology in oilfields that has been developed in recent years. The technology uses many different methods to synthesize spherical or near-spherical microspheres, including micro-emulsion polymerization, suspension polymerization, and solution polymerization. Microsphere particle sizes range from tens of nanometers to hundreds of microns, which is of great significance for improving the water flooding development effect and crude oil recovery. [1][2] Theoretically, the particle size of the microspheres and the core pore throat should match each other. [3] If the hydrated size of the microspheres is too small, stable and effective pore-throat plugging cannot form because the particles pass through easily, while too large a hydrated size leads to difficult injection, which defeats the purpose of deep profile control. Therefore, the size and distribution of the microsphere particles are an important basis for matching them to the pore throat size, which directly affects the profile control effect of the microspheres. [4][5] At present, there are two standard methods for measuring micron-sized particles. [6][7] One is based on dynamic light scattering (DLS), and the other on static light scattering (laser diffraction). However, there is no national or industry standard for measuring the particle size of microspheres. [8] Investigation shows that the main particle size test methods are transmission electron microscopy (TEM) [9][10] and the laser particle size distribution method [1][2]. Since TEM must be operated in vacuum and requires a high level of professional and technical skill, its universal application is limited. A laser particle size analyzer measures the particle size distribution of emulsions, suspensions, and powder samples by means of laser scattering, and has been widely used in analysis and research in the petroleum and petrochemical industry. [1] Most domestic oil fields, including Huabei Oilfield, as well as universities and enterprises related to microsphere production, [11][12][13][14][15][16] use laser particle size analyzers to obtain hydrated microsphere size values by testing aqueous microsphere solution samples, which are then used to guide microsphere application in oil fields.
In this paper, the problems existing in the test procedure of hydrated microsphere particle size were analyzed on the basis of summarized experimental results, and suggestions for analyzing the influencing factors in the test procedure are given.

Experimental steps

Fully shake the microsphere samples with distilled water and prepare an aqueous microsphere solution with a mass fraction of 1%. Subdivide the solution into sealed stainless-steel drums. After hydration in an electric thermostatic drying oven for the set period, take the microsphere solution sample out and let it cool at room temperature. Turn on the laser particle size analyzer, preheat it for 30 minutes, and calibrate the instrument according to the operating procedures. Test distilled water as a blank sample to subtract the background value. Fully shake the measured amount of microsphere sample in a beaker with distilled water, and then adjust the sample concentration so that the transmittance falls within the instrument's suitable range (80%~90% for the red semiconductor laser, 70%~90% for the blue LED light). Finally, test the particle size and distribution of the samples using the laser particle size analyzer at room temperature (dispersion medium refractive index: 1.33; particle refractive index: 1.50).

Results of hydrated microsphere particle size

For the indoor test, 4 different aqueous microsphere solutions used in profile control were prepared. The microsphere samples were hydrated at 70℃ for 4 days, and their particle size and distribution were then measured with the laser particle size analyzer. The median particle size was characterized by D50, the particle size at which the cumulative frequency reaches 50%. The D50 data of the four samples are shown in Table 1. From Table 1, it can be seen that the D50 repeatability of the same microsphere sample was poor. The test data even spanned the nanometer and micron orders of magnitude, and the relative standard deviation was up to 136%.

Problems existing in the test of hydrated microsphere particle size

Taking microsphere sample 1# as an example, Figure 1 shows the particle size distribution curves of the sample under the same experimental conditions. Figure 1(a) shows a size distribution of 0.100~0.389 μm with a median particle size of 0.158 μm, whereas Figure 1(b) indicates a particle size distribution of 1.981~15.175 μm with a median particle size of 3.60 μm. Thus, the measured size of sample 1# was either nanometer- or micrometer-scale: the results differed by orders of magnitude even under the same experimental conditions, so the true particle size distribution could not be distinguished. Figure 2(a) shows that the size distribution of sample 4# was 8.816~58.953 μm, with a median particle size of 12.95 μm, whereas Figure 2(b) indicates a particle size distribution of 17.377~678.504 μm with a median particle size of 25.58 μm. Both distributions of sample 4# were on the micrometer scale and overlapped in some ranges (17.377~58.953 μm); however, the overlap between the two measured distributions was small and the peak positions differed, so the true particle size could not be obtained.

Figure 2. Particle size test results of microsphere sample 4# after 4 days at 70℃
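For reference, the D50 read-out defined above reduces to locating the 50% crossing of the cumulative size distribution. The sketch below uses invented bin values and simple linear interpolation; instrument software may instead interpolate in log-size space.

```python
import numpy as np

sizes = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])    # diameters, um
cum = np.array([2.0, 8.0, 25.0, 47.0, 71.0, 90.0, 100.0])  # cumulative %
d50 = np.interp(50.0, cum, sizes)   # linear interpolation of the 50% crossing
print(f"D50 = {d50:.2f} um")        # -> 4.50 um for these invented values
```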
In addition to the above, some measurements produced a doublet (bimodal) particle size distribution, as shown in Figure 3: the particle size distribution of sample 1# spanned two ranges, 2.269~4.472 μm and 5.122~19.904 μm. This also obscured the true particle size distribution of the microspheres.

Figure 3. Doublet particle size distribution of hydrated microsphere sample 1# after 4 days at 70℃

Based on all the above particle size results of the hydrated microsphere samples, there are several problems in the current test methods for hydrated microsphere particle size, including doublet particle size distributions and results differing by orders of magnitude. These lead to poor reproducibility of the test results and prevent the true particle size distribution of the hydrated microspheres from being recognized.

Suggestions for the test of hydrated microsphere particle size

Since the current testing method based on the static laser particle size analyzer suffers from poor reproducibility and fails to resolve the true particle size distribution of hydrated microspheres, it is extraordinarily important to analyze the factors influencing the test. It is suggested that this be studied by analyzing the morphology and composition of the microspheres, as well as the testing process of the static laser particle size analyzer.

Conclusions

ⅰ. The median particle sizes of four different microsphere samples were obtained with the static laser particle size analyzer. The D50 repeatability of the same microsphere sample was poor; the test data even spanned the nanometer and micron orders of magnitude, and the relative standard deviation was up to 136%.

ⅱ. The existing problems in current particle measurement of hydrated microspheres were studied by analyzing the hydrated particle size results. Two main problems were found in the current test methods: poor reproducibility of the test results, and failure to resolve the true particle size distribution of the hydrated microspheres.

ⅲ. Suggestions for testing hydrated microsphere particle size are given: analyze the morphology and composition of the microspheres, as well as the testing process of the static laser particle size analyzer.
Variation of volatile organic compound levels within ambient room air and its impact upon the standardisation of breath sampling

Interest in the analysis of volatile organic compounds (VOCs) within breath has increased in the last two decades. Uncertainty remains around the standardisation of sampling and whether VOCs within room air can influence breath VOC profiles. Our aim was to assess the abundance of VOCs within room air in common breath sampling locations within a hospital setting and whether this influences the composition of breath; a secondary objective was to investigate diurnal variation in room air VOCs. Room air was collected using a sampling pump and thermal desorption (TD) tubes in the morning and afternoon from five locations. Breath samples were collected in the morning only. TD tubes were analysed using gas chromatography coupled with time-of-flight mass spectrometry (GC-TOF-MS). A total of 113 VOCs were identified from the collected samples. Multivariate analysis demonstrated clear separation between breath and room air. Room air composition changed throughout the day, and different locations were characterized by specific VOCs, which did not influence breath profiles. Breath did not demonstrate separation based on location, suggesting that sampling can be performed across different locations without affecting results.

Volatile organic compounds (VOCs) are carbon-based compounds that are gaseous at room temperature and are the end products of many endogenous and exogenous processes 1. VOCs have been of interest to researchers for several decades for their potential role as non-invasive biomarkers of human diseases. However, there remains ongoing uncertainty regarding the standardisation of both the collection and the analysis of breath samples. One crucial area of interest for breath analysis standardisation is the potential effect of background VOCs within the ambient room air 2. Previous studies have suggested that background VOCs within the ambient room air influence the levels of VOCs detected within exhaled breath 3. A study by Boshier et al. in 2010, utilising selected ion flow tube mass spectrometry (SIFT-MS), examined the levels of seven VOCs in three clinical environments. Differing ambient VOC levels were identified across the three areas, which in turn raised questions about the ability of VOCs of high prevalence in room air to be utilised as disease biomarkers 4. In 2013, Trefz et al. also monitored the ambient room air of an operating theatre over the course of a working day alongside breath samples from hospital staff. They found that levels of exogenous compounds such as sevoflurane had increased in both ambient room air and breath by the end of the working day 5, raising questions as to when and where sampling of patients for breath analysis should be performed to minimise such confounding factors. This was corroborated by a study by Castellanos et al. in 2016, who identified sevoflurane in the breath of hospital workers but not in that of workers outside the hospital 6. In 2018, Markar et al. attempted to demonstrate the impact of variation in room air composition on breath analysis as part of their study assessing the diagnostic capability of exhaled breath for oesophagogastric cancer 7. They utilised steel breath bags and SIFT-MS for their sampling process and identified eight VOCs within room air that differed significantly across sampling locations.
These VOCs, however, were not included within their final diagnostic model of breath VOCs and thus their impact was negated. In 2021, Salman et al. performed a study monitoring VOC levels across three hospital locations over 27 months. They identified seventeen VOCs that acted as seasonal differentiators and proposed that exhaled VOC concentrations above a cut-off of 3 µg/m3 were unlikely to be secondary to background VOC contamination 8. Aside from setting a cut-off level or directly excluding exogenous compounds, alternative methods to negate this background variation include collecting paired samples of room air at the same time as breath sampling, so that the level of any VOCs present in high concentrations in the inhaled room air can be subtracted from the levels found in the exhaled breath 9, providing an "alveolar gradient". A positive gradient is thus suggestive of an endogenous compound 10. Another approach is to have participants inhale "scrubbed" air that is theoretically free from contaminant VOCs 11. However, this is onerous and time consuming, and the equipment itself can generate additional contaminant VOCs. A study by Maurer et al. in 2014 had participants inhale synthetic air, which reduced the intensity of 39 VOCs but increased the intensity of 29 VOCs compared to inhaling ambient room air 12. The use of synthetic/scrubbed air also significantly limits the portability of equipment for breath sampling. It is also anticipated that the levels of VOCs within ambient air alter throughout the day, which could further impact upon the standardisation and accuracy of breath sampling. Advances in mass spectrometry techniques, including the coupling of thermal desorption with gas chromatography and time-of-flight mass spectrometry (GC-TOF-MS), also provide a more robust and powerful VOC profiling approach, enabling the concurrent detection of hundreds of VOCs and, consequently, a more in-depth analysis of room air. This provides the opportunity to present a more detailed characterisation of the composition of ambient room air and its variation across location and time with a larger number of samples. The primary aim of this study is to determine the varying abundance of VOCs within ambient room air in common sampling locations within a hospital setting and how it potentially impacts exhaled breath sampling. Secondary aims were to determine whether there is significant diurnal or locational variation in the VOC profiles of ambient room air.

Results

Breath and room air have distinct VOC profiles. Breath samples were collected in the morning alongside matching room air samples at five different locations and analysed by GC-TOF-MS. A total of 113 VOCs were detected and extracted from the chromatograms. Repeated measures were collapsed to the mean before performing principal component analysis (PCA) on the extracted and normalised peak areas to identify and remove outliers. Supervised analysis through partial least squares-discriminant analysis (PLS-DA) was then able to show a clear separation between breath and room air samples (R2Y = 0.97, Q2Y = 0.96, p < 0.001) (Fig. 1). Group separation was driven by 62 different VOCs with a variable importance in projection (VIP) score > 1. A complete list of the VOCs characterizing each sample type and their respective VIP scores can be found in Supplementary Table 1.

Diurnal variation in room air VOC levels. Differences in room air VOC profiles between morning and afternoon were investigated using PLS-DA.
The model identified significant separation between the two timepoints (R2Y = 0.46, Q2Y = 0.22, p < 0.001) (Fig. 2). This was driven by 47 VOCs with a VIP score > 1. VOCs with the highest VIP scores characterizing morning samples included multiple branched alkanes, oxalic acid and hexacosane, while afternoon samples presented more 1-propanol, phenol, propanoic acid, 2-methyl-, 2-ethyl-3-hydroxyhexyl ester, isoprene and nonanal. A comprehensive list of VOCs characterizing daily variation in room air composition can be found in Supplementary Table 2.

Room air, but not breath, VOC profiles differ across sampling locations. Samples were collected across five different locations: the endoscopy unit, clinical research bay, operating theatre complex, outpatient clinic and a mass spectrometry laboratory within St Mary's Hospital, London. These locations are all commonly used for patient recruitment and breath collection by our research group. Room air, as previously mentioned, was collected both in the morning and afternoon, while breath samples were only collected in the morning. PCA highlighted a separation of room air samples by location through permutational multivariate analysis of variance (PERMANOVA, R2 = 0.16, p < 0.001) (Fig. 3a). Thus, pairwise PLS-DA models were generated, comparing each location against all the others to identify characteristic signatures. All models were significant, and VOCs with a VIP score > 1 were extracted with their respective loadings to identify group contribution. Our results indicate that the composition of ambient air changed by location, and we identified location-characteristic signatures through model consensus. The endoscopy unit was characterized by a higher presence of undecane, dodecane, benzonitrile and benzaldehyde. The clinical research bay (also identified as the liver research unit) samples displayed more α-pinene, di-isopropyl phthalate and 3-carene. The operating theatre complex air was distinguished by a more abundant presence of branched decane, branched dodecane, branched tridecane, propanoic acid, 2-methyl-, 2-ethyl-3-hydroxyhexyl ester, toluene and 2-butenal. The outpatient clinic (Paterson building) was marked by higher levels of 1-nonanol, vinyl lauryl ether, benzyl alcohol, ethanol, 2-phenoxy-, naphthalene, 2-methoxy-, isobutyl salicylate, tridecane, and branched tridecane. Finally, the room air collected in the mass spectrometry laboratory presented more acetamide, 2,2,2-trifluoro-N-methyl-, pyridine, furan, 2-pentyl-, branched undecane, ethylbenzene, m-xylene, o-xylene, furfural, and ethyl anisate. Varying levels of 3-carene were present in all five locations, suggesting this VOC to be a common contaminant, with the highest abundance observed in the clinical research bay. A list of consensus VOCs separating each location can be found in Supplementary Table 3. In addition, univariate analysis was performed on each VOC of interest, comparing all the locations to each other with pairwise Wilcoxon tests followed by Benjamini-Hochberg correction. Boxplots for each VOC are reported in Supplementary Fig. 1. Breath VOC profiles did not appear to be affected by location, as observed in PCA followed by PERMANOVA (p = 0.39) (Fig. 3b). Additionally, pairwise PLS-DA models were also generated between all the different locations for the breath samples, but no significant differences were identified (p > 0.05).
Discussion

In this study, we analysed VOC profiles within ambient room air across five locations commonly used for breath sample collection, to further understand the impact of background VOC levels on breath analysis. Separation of room air samples across all five different locations was observed. Except for 3-carene, which was present in all investigated areas, separation was driven by different VOCs, giving each location a specific signature. In the endoscopy assessment area, VOCs driving separation were predominantly monoterpenes, such as β-pinene, and alkanes, such as dodecane, undecane and tridecane, which are common in essential oils used in cleaning products 13. Given the frequency with which the endoscopy unit is cleaned, it is likely these VOCs are a result of frequent cleaning processes within this space. In the clinical research bay, as with endoscopy, separation was predominantly due to monoterpenes, such as α-pinene, also most likely originating from cleaning products. In the operating theatre complex, the VOC signature predominantly consisted of branched alkanes. These compounds may originate from surgical instruments, since they are abundant in oils and lubricants 14. In the surgical outpatient clinic, characteristic VOCs included a selection of alcohols: 1-nonanol, found in plant oils and consequently in cleaning products, and benzyl alcohol, which can be found in fragrances and local anaesthetics 15-18. VOCs within the mass spectrometry laboratory were largely different from the other areas, which was to be expected given that this was the only non-clinical area assessed. While some monoterpenes were present, a more homogenous group of compounds separated this area from the others (2,2,2-trifluoro-N-methyl-acetamide, pyridine, branched undecane, 2-pentyl-furan, ethylbenzene, furfural, ethyl anisate, o-xylene, m-xylene, isopropyl alcohol, and 3-carene), including aromatic hydrocarbons and alcohols. Some of these VOCs may be secondary to chemicals used within the laboratory, which houses seven mass spectrometry systems operating in both TD and liquid injection modes. Strong separation of room air and breath samples was observed through PLS-DA, driven by 62 of the 113 detected VOCs. Within room air, these VOCs were exogenous and included di-isopropyl phthalate, benzophenone, acetophenone and benzyl alcohol, which are all commonly used within plasticisers and fragrances [19][20][21][22], the latter of which can be found in cleaning products 16. The chemicals identified in breath were a mixture of endogenous and exogenous VOCs. Endogenous VOCs largely consisted of branched alkanes, which are an established by-product of lipid peroxidation 23, and isoprene, a by-product of cholesterol synthesis 24. Exogenous VOCs included monoterpenes such as β-pinene and D-limonene, which can be traced back to essential oils from citrus fruit (also commonly used in cleaning products) and food preservatives 13,25. 1-Propanol can be both endogenous, deriving from amino acid breakdown, and exogenous, present in disinfectants 26. Of the VOCs found at higher levels in room air compared to breath, several have been suggested as possible disease biomarkers. Ethylbenzene has been shown to be a potential biomarker for several respiratory conditions, including lung cancer, COPD 27 and pulmonary fibrosis 28.
n-Dodecane and xylene have also been shown to be higher in patients with lung cancer compared to those without 29, and m-cymene has been found to be higher in patients with active ulcerative colitis 30. Therefore, even if room air differences do not appear to affect the overall breath profiles, they might influence the levels of specific VOCs of interest, and background room air monitoring may therefore still be essential. Separation between room air samples collected in the morning and afternoon was also observed. Morning samples were mainly characterised by branched alkanes, which are commonly found exogenously in cleaning products and waxes 31. The four clinical areas included within this study were all cleaned prior to the sampling of the room air, which would account for this. The clinical areas were all separated by different VOCs; thus the locational separation cannot be attributed to cleaning. Afternoon samples typically presented a mixture of alcohols, hydrocarbons, esters, ketones and aldehydes at higher levels compared to the morning samples. 1-Propanol and phenol can both be found in disinfectants 26,32, which is expected given the regular cleaning that goes on throughout clinical areas during the day. Breath was only collected in the morning. This is due to the multiple other factors that can influence VOC levels within breath over the course of the day and which could not be controlled for, including drink and food consumption prior to breath sampling 33,34 and different levels of exercise 35,36. Analysis of VOCs remains an evolving frontier in the development of non-invasive diagnostics. Standardisation of sampling remains an issue; however, our analysis reassuringly demonstrates no significant difference between breath samples collected at different locations. Within this study we have demonstrated that VOC levels within ambient room air vary between locations and times of day. However, our results also demonstrate that this does not significantly alter the profile of VOCs within exhaled breath, suggesting breath sampling can be performed across varying locations without significantly impacting results. The inclusion of multiple locations over a longer period of time and duplicate sample collection was prioritised. Finally, the separation of room air from different locations and the lack of separation in breath clearly suggest that sampling location does not significantly impact the composition of human breath. This is reassuring for breath analysis studies, as it removes one potential confounder for the standardisation of breath collection. While having all breath samples from a single subject is a limitation of our study, it has the potential to reduce variance from other confounding factors influenced by human behaviour. Single-subject study designs have been used successfully in several previous studies 37. However, further analyses are required to draw definitive conclusions. Routine sampling of room air in parallel to breath sampling is still recommended, to allow exclusion of exogenous compounds and identification of specific contaminants. We would recommend exclusion of isopropyl alcohol given its prevalence within cleaning products, especially within healthcare settings. This study was limited by the number of breath samples taken in each location, and further work with a larger number of breath samples is required to confirm that the background environment in which breath is sampled has no significant impact on its composition.
Furthermore, relative humidity (RH) data were not collected; while we acknowledge that differences in RH might influence VOC distribution, in large-scale studies the logistical challenge of both controlling RH and collecting RH data is substantial. In conclusion, our study has demonstrated that there is variation of VOCs in ambient room air across different locations and times, but that this does not appear to be the case with breath samples. Due to the small sample size, definitive conclusions regarding the impact of ambient room air on breath sampling cannot be drawn and further analysis is required; it is thus recommended to sample room air in parallel to breath to allow interrogation of any potential contaminant VOCs.

Methods

The experiment took place over 10 non-consecutive weekdays in February 2020 at St. Mary's Hospital, London. Each day, two breath samples and four room air samples were collected in each of the five locations, resulting in a total of 300 samples. All methods were carried out in accordance with relevant guidelines and regulations. All five sampling areas were temperature controlled at 25 °C.

Room air sampling. Five locations were selected for room air sampling: the mass spectrometry instrument laboratory, a surgical outpatient clinic room, the operating theatres assessment area, the endoscopy assessment area and the clinical research bay. Each area was selected as it is regularly utilised for participant recruitment for breath analysis by our research group. An air sampling pump from SKC Ltd. was used to draw ambient room air across Tenax TA/Carbograph inert-coated thermal desorption (TD) tubes (Markes International Ltd, Llantrisant, UK) at a rate of 250 mL/min for 2 min, loading a total of 500 mL of ambient room air onto each TD tube. The tubes were then sealed with air-tight brass caps for transportation back to the mass spectrometry laboratory. Room air was sampled from each location in sequence between 9 and 11 a.m. each day and then again between 3 and 5 p.m. Samples were collected in duplicate.

Breath sampling. Breath samples were collected from a single subject, who also performed the room air sampling. The breath sampling process was performed as per the protocol approved by the NHS Health Research Authority-London-Camden & Kings Cross Research Ethics Committee (reference 14/LO/1136). The investigator provided informed written consent. For standardization purposes, the investigator had nothing to eat or drink from midnight the previous evening. A custom-made, single-use Nalophan™ (PET-polyethylene terephthalate) bag with a 1000 mL capacity and a polypropylene syringe acting as a sealable mouthpiece was utilised for the collection of breath, as previously described by Belluomo et al. 2. Nalophan has been demonstrated to be a good medium for breath storage due to its inertness and ability to provide compound stability for up to 12 h 38. After spending a minimum of 10 min in the location, the investigator exhaled into the sample bag during normal tidal breathing. Once the bag was filled to maximum volume, it was sealed with the syringe plunger. As with room air sampling, within 10 min an air sampling pump from SKC Ltd. was used to draw breath from the bag across TD tubes: a wide-bore needle without a filter was attached to a TD tube via plastic tubing, with the SKC air pump at the other end.
The bag was needled, and breath was drawn through each TD tube at a rate of 250 mL/min for 2 min, loading a total of 500 mL of breath onto each TD tube. Samples were once again collected in duplicate to minimise sampling variability. Breath was collected in the morning only. Cold trap desorption (no TD tube) and desorption of a conditioned, clean TD tube were included at the beginning and end of every analytical run to ensure the absence of carryover effects. The same blank analyses were performed immediately before and after breath sample desorption to ensure that samples could be analysed sequentially without the need for TD conditioning. Following visual inspection of the chromatograms, the raw data files were analysed using Chromspace® (Sepsolve Analytical Ltd.). Compounds of interest were identified from representative samples of breath and room air. Annotations were performed using the NIST 2017 Mass Spectral Library based on VOC mass spectra and retention indices. Retention indices were calculated by analysing an alkane mixture (nC8-nC40, 500 μg/mL in dichloromethane, Merck, USA): 1 μL was spiked onto three conditioned TD tubes via a calibration solution loading rig and analysed under the same TD-GC-MS conditions. From the raw compound list, only compounds with a reverse match factor > 800 were kept for analysis. Oxygen, argon, carbon dioxide and siloxanes were also removed. Finally, any compounds with a signal-to-noise ratio < 3 were excluded. The relative abundance of each compound was then extracted from all data files using the compound list generated. 117 compounds were identified in breath samples against NIST 2017. Peak picking was performed using MATLAB R2018b (Version 9.5) and Gavin Beta 3.0 software. Following further interrogation of the data with visual inspection of the chromatograms, a further 4 compounds were excluded, leaving 113 compounds in the downstream analysis. The abundance of these compounds was extracted from all 294 samples that were successfully processed. Six samples were removed due to poor data quality (leaked TD tubes). In the remaining dataset, one-tailed Pearson correlations were calculated between the 113 VOCs in the repeated-measurement samples to assess reproducibility. Correlation coefficients were 0.990 ± 0.016 and p-values 2.00 × 10-46 ± 2.41 × 10-45 (arithmetic mean ± standard deviation).
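A minimal sketch of this duplicate-reproducibility check, using simulated peak areas in place of the real 113-VOC profiles (the study reports r = 0.990 ± 0.016), could look as follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
profile = rng.lognormal(mean=2.0, sigma=1.0, size=113)  # one sample's 113 VOCs
dup_a = profile * rng.normal(1.0, 0.05, size=113)       # duplicate tube A
dup_b = profile * rng.normal(1.0, 0.05, size=113)       # duplicate tube B

# pearsonr reports a two-sided p-value; halve it for the one-tailed test
# (appropriate here since only positive correlation is of interest).
r, p_two_sided = stats.pearsonr(dup_a, dup_b)
print(f"duplicate correlation r = {r:.3f}, one-tailed p = {p_two_sided / 2:.2e}")
```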
Separation due to location was tested in the PCA using PERMANOVA from the 'vegan' package. PERMANOVA allows the identification of group separation based on centroids. This technique has been previously used in similar metabolomic studies [39][40][41] . The 'ropls' package was used to evaluate PLS-DA models significance using a randomised sevenfold cross validation and 999 permutations. Compounds with a variable importance projection (VIP) score > 1 were considered relevant for the classification and retained as significant. Loadings from the PLS-DA models were also extracted to identify group contribution. Location specific VOCs were identified through consensus of pairwise PLS-DA models. To do so, all locations VOCs profiles were tested against each other and if a VOC with VIP > 1 was constantly significant in the models and attributed to the same location, it was then considered location specific. Comparison between breath and room air samples was investigated only on samples collected during the morning since no breath sample was collected in the afternoon. Wilcoxon test was used for univariate analysis and false discovery rate was accounted applying Benjamini-Hochberg correction. Data availability The datasets generated during and analysed during the current study are available from the corresponding author on reasonable request.
Testicular Relapse in Acute Lymphoblastic Leukemia (ALL): Guidelines Must be Changed
Guidelines for the treatment of testicular recurrences need to be changed: a patient with unilateral testicular recurrence of ALL has been unnecessarily castrated by irradiation. The patient has been free of disease for 12 years, having had chemotherapy, several bone marrow transplants and a left orchiepididymectomy. Nevertheless, due to irradiation of the right testicle, he has been left permanently on hormonal replacement and with unavoidable infertility. This outcome could have been avoided if treatment guidelines had been changed and a different approach taken. This confirms some more recent studies, although with the drawback that the number of affected patients is small.

Introduction
The testis is the second most frequent site of extramedullary recurrence in ALL. Local therapy is not uniform across the different study groups. In the classical Protocol POG 8034, as well as in ALL-REZ, COG, BFM 2002, UK ALL-R3 or COPRALL, there is no clear specific reference to unilateral versus bilateral testicular recurrence. But it appears that everyone accepts that recurrence in the testicular sanctuary will always, at least potentially, be bilateral (even if it may be more evident on one side) and then requires local irradiation, classically with 24 Grays. Because this entails complete loss of hormonal function and testicular atrophy (with concomitant sterility), BFM relapse strategies have recommended removing a clinically involved testis and irradiating a contralateral, clinically and bioptically negative testis with 15 Grays. They believe that this strategy offers the chance of spontaneous puberty without hormonal substitution in, at least, a reasonable number of patients. These are the guidelines that I believe really need revision, so that at least most patients (and not only a reasonable but undefined number) remain fertile and not hormonally dependent [1][2][3][4][5].

Mini Review
A 7-month-old male with CALLA-negative B-cell ALL was seen on 2nd January 2002 and treated according to Protocol POG 8034. The following year (October 2003) he had a successful Bone Marrow Transplant. Nevertheless, when aged 5 years he showed a large recurrence in the left testicle (confirmed by FNAC), which rapidly reached the size of a hen's egg, albeit with an apparently clinically normal right testis. Peripheral Blood and Bone Marrow were normal, and two FNACs of the Right Testicle also proved completely normal. At this stage I was asked to do a bilateral orchidectomy, as POG 8034 advised. But I refused to do it, not only because the right testicle was clinically normal and two FNACs were negative, but also because the testicle, due to its location, could be easily evaluated through frequent and simple palpation, even by the Parents [5,6]. Taking into account his future quality of life, I considered that, if the child was going to survive (as fortunately happened), he should still have a functioning testicle, not only from a hormonal point of view but also with regard to fertility (even admitting possible damage from BMT and Chemotherapy with Vincristine, Doxorubicin and Prednisolone, as experience has shown that around half of those with ALL will be infertile, although children have a better prognosis than adults). So I performed only a Left Orchiepididymectomy, leaving the Right Testicle alone (the left spermatic cord proved to be free of disease involvement).
I believe that Guidelines are extremely valuable but certainly not always the final word. Each Patient is a Patient, and I agree that "Guidelines are not God's Lines", each one of us having to question and decide what he thinks is best for the Patient. I still remember that, when the results of Rosen's Osteosarcoma Patients (at the Memorial Hospital, in New York) were reviewed, a significant number of them had had alterations to the Classical Rosen T10 [6,7]. If one irradiates both testicles, even when only one appears clinically and histologically involved (which is not uncommon), one can never prove whether that testicle was really normal or, eventually, minimally involved. So, the question to be asked upfront is how the POG Protocol can justify routine Castration (surgical or radiotherapeutic) and, if so, on what grounds it bases its recommendations. Even the softer attitude proposed by the BFM (1200 or 1500 instead of 2400 centiGrays, i.e. 12 or 15 instead of 24 Grays) looks to me unacceptable, because that strategy only seems to offer a chance of spontaneous puberty without hormonal substitution in a "non-defined" part of the patients. Also, nothing is known about possible congenital malformations brought about by those "irradiated" spermatozoa. Moreover, the numbers of isolated testicular relapses are statistically very small, and many years will have to pass before any acceptable conclusion can be drawn. So, I believe one has, nowadays, only to rely on the literature, reasoning and common sense! It is known that, among the few patients who have had a laparotomy at the time of testicular relapse, most had leukemic infiltration of the abdominal lymph nodes, liver and spleen. Also, treatment by irradiation of the remaining testicle, in an apparently isolated, and usually late, testicular relapse is frequently followed by a bone marrow relapse some time later. If the leukemia recurs, it is almost certainly because the overall disease has not been controlled by the transplant or the chemotherapy given, and certainly not because of the preserved testicle, above all one so easily controlled by palpation. So why be so dogmatic about the need to destroy a clinically and histologically normal testicle? If there is the slightest doubt about a recurrence (testicular enlargement, a check that even the Parents can perform frequently), then an orchidectomy can be rapidly performed. But even if that were to happen as an isolated recurrence, the likelihood of spreading from that sanctuary to the whole body is certainly minimal. And, obviously, neither chemotherapy nor bone marrow transplantation would be excluded, if indicated [8][9][10]. Further, while the preserved contralateral testicle is still present, any alteration in it is most likely an early sign of further generalized recurrence (certainly an earlier and easier way to detect a recurrence than marrow aspirates or blood sampling), thus prompting further salvage chemotherapy or transplant. Unfortunately, my advice was not followed and the testicle that I had refused to remove was "treated" with irradiation (24 Grays), thus nullifying my hopes of a more conservative approach and of a better future quality of life for that Patient. Since then the child has had no further treatment; but now, 12 years after the left-sided orchiepididymectomy and right-sided testicular irradiation, he is needlessly and permanently under hormonal treatment, has a small penis and stunted growth. And he will never be a true father.
I believe that this Patient is real proof of the need to re-evaluate the current guidelines for ALL. Also, a Dutch study using only chemotherapy showed that 5 patients in whom irradiation of the contralateral testicle was avoided remained disease free. When this problem was presented at an IPSO Meeting, almost all the Pediatric Surgeons present agreed on a conservative approach, the only exception being a Pediatric Oncologist quoting the "sacred" POG 8034. So I firmly believe that POG 8034 (and other Protocols for ALL that unfortunately maintain the same "classical" philosophy) needs to be reviewed, so that common sense and the future quality of life of the Patients will prevail, at a minimal health risk. And now some final remarks: now that everyone is worried about costs, apart from being sterile, this male patient's treatment with "Growth Hormone" and "Testosterone" amounts to an expense of around 100 dollars per month. With a life expectancy of more than 60 years (accepting a lowering dose over the years), this will mean an avoidable cost of, at least, many thousands of dollars [11,12].
Effects of dietary addition of ellagic acid on rumen metabolism, nutrient apparent digestibility, and growth performance in Kazakh sheep

Plant extracts have shown promise as natural feed additives to improve animal health and growth. Ellagic acid (EA), widely present in various plant tissues, offers diverse biological benefits. However, limited research has explored its effects on ruminants. This study aimed to investigate the effects of dietary addition of EA on rumen metabolism, apparent digestibility of nutrients, and growth performance in Kazakh sheep. Ten 5-month-old Kazakh sheep with similar body weight (BW), fitted with rumen fistulas, were randomly assigned to two groups: the CON group (basal diet) and the EA group (basal diet + 30 mg/kg BW EA). The experiment lasted 30 days, and individual growth performance was assessed under identical feeding and management conditions. During the experimental period, rumen fluid, fecal, and blood samples were collected for analysis. The results indicated a trend toward increased average daily gain in the EA group compared to the CON group (p = 0.094). Compared with the CON group, the rumen contents of acetic acid and propionic acid were significantly increased in the EA group and reached the highest value at 2 h to 4 h after feeding (p < 0.05). Moreover, the relative abundances of specific rumen microbiota (Ruminococcaceae, uncultured_rumen_bacterium, unclassified_Prevotella, Bacteroidales, Bacteroidota, Bacteroidia, unclassified_Rikenellaceae, and Prevotella_spBP1_145) at the family and genus levels were significantly higher in the EA group (p < 0.05) compared to the CON group. The EA group exhibited significantly higher dry matter intake (p < 0.05) and increased digestibility of neutral detergent fiber and ether extract when compared with the CON group (p < 0.05). Additionally, the plasma activities of total antioxidant capacity (T-AOC), superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GSH-Px) were significantly higher, while the malondialdehyde (MDA) concentration was significantly lower in the EA group compared to the CON group (p < 0.05). In conclusion, dietary supplementation with 30 mg/kg BW EA in 5-month-old Kazakh sheep increased the dry matter intake, apparent digestibility of neutral detergent fiber, and ether extract, as well as the contents of acetic acid and propionic acid in rumen fluid. Moreover, EA supplementation regulated the ruminal microbiota, enhanced antioxidant capacity, and improved daily weight gain.

Introduction
In response to the antibiotic ban policy in animal husbandry, the quest for antibiotic alternatives in animal feed has gained momentum. Plant extracts stand out as vital options due to their medicinal attributes in enhancing animal health and productivity. Compared to conventional chemical drugs, plant extracts offer unique advantages in controlling inflammation and oxidative stress. In addition, they remarkably enhance animal health by bolstering immunity, satisfying healthy breeding standards to avert animal diseases, and improving animal production performance and product quality (1).

Ellagic acid (EA), a dimerized derivative of gallic acid widespread in various plants (2), has emerged as a potent alternative to antibiotics and possesses a variety of biological functions, such as antimutagenic, antibacterial, anti-inflammatory, and antioxidant properties (3,4). Xu et al.
(5) evaluated the effect of gallic acid on calves, observing improved growth performance metrics, such as average daily gain, rumen fermentation parameters (total volatile fatty acids, propionate, and butyrate), and antioxidant levels (catalase and T-AOC). Notably, gallic acid reduced malondialdehyde and tumor necrosis factor-α levels in preweaning dairy calves, increasing the ruminal microbial abundances of Prevotellaceae_UCG-001, Saccharofermentans, and Prevotella_1 while reducing the abundance of Prevotella_7. Gallic acid, a phenolic compound in plant extracts, demonstrates robust antioxidant capabilities, scavenging hydroxyl radicals and exhibiting potent reducing power (6). Previous research has found that the addition of concentrated pomegranate peel extract (containing EA) to the diet of lactating dairy cows can prolong daily ruminating time and enhance the digestibility of dry matter, crude protein, neutral detergent fiber, cellulose, and hemicellulose (7). Moreover, concentrated pomegranate peel extract influences the relative abundance of methanogenic archaea and of the rumen-specific bacteria responsible for cellulose decomposition and lactic acid fermentation, and significantly improves the milk yield and growth performance of cattle (7). Studies involving monogastric animals have shown that dietary supplementation with EA can improve animal growth performance, intestinal health, and antioxidant capacity. Lu et al. found that feeding 500 g/t of EA to 30-day-old weaned piglets for 40 days increased the average daily gain and reduced diarrhea (8). Moreover, Qin et al. (9) demonstrated that supplementing 0.1% EA to the basal diet of weaned piglets increased average daily feed intake and daily gain while reducing the fecal score, thus suggesting an effect on intestinal bacteria. Additionally, it could alleviate oxidative stress and intestinal injury in weaned piglets (9). However, few studies have explored the effects of EA supplementation in ruminants.

EA primarily exists in the tannin form in nature (10). While traditionally considered antinutrients, low doses of plant-derived tannins enhance ruminant protein utilization (11). Condensed tannins bind to dietary crude protein, inhibiting its ruminal degradation. Consequently, this elevates rumen protein concentration, enhances amino acid utilization, and augments digestible protein throughout the digestive tract, ultimately improving production performance (12). Notably, limited studies have explored the effects of dietary supplementation of EA in sheep. In this study, 5-month-old Kazakh sheep received an EA-supplemented diet to investigate the effects of EA on growth performance, rumen metabolism, and apparent nutrient digestibility.

Ethical considerations
All animal care and handling procedures in this study were conducted under the Guidelines for the Care and Use of Laboratory Animals in China and were approved by the Animal Care Committee of Xinjiang Agricultural University of China (protocol permit number: 2020024).
Animal and experimental design
Ten 5-month-old Kazakh sheep (35.61 ± 2.32 kg, rams) of similar body weight (BW), well cared for, healthy, and equipped with rumen fistulas, were randomly assigned to two groups, the CON group (n = 5) and the EA group (n = 5), using a random number generator (http://www.r-project.org/). These sheep were housed in individual feeding pens (1.20 × 1.50 m) within a naturally ventilated barn structure. The ten pens were located inside a barn open on two sides and arranged in two rows of five, separated by the central feeding lane. The pens were enclosed by horizontal metal rail bars, which also delimited the pens at the feeding lane. The floor had a concrete base covered with barley straw bedding, of which one fresh flake (around 1.5 kg) per pen was added over the permanent bedding once a day. The sheep were untethered and did not have any access to a paddock area. The sheep in the CON group received a basal diet devoid of EA, whereas the diet for the EA group included EA supplementation at 30 mg/kg BW. The quality of the hay was checked according to the guidelines in Cavallini et al. (13), ensuring the absence of molds and spores. The corn aflatoxin levels were assessed according to the procedure described in Girolami et al. (14) and were found to be below the maximum tolerable threshold recommended by the EU. Health checks, including fecal consistency and pH, as described below, were completed twice a week. The EA (≥90.00%) was purchased from Wufeng Chicheng Biotechnology Co., Ltd. (Hubei, China). The experimental sheep were raised at the 103rd Regiment experimental base of the 6th Division of the Xinjiang Production and Construction Corps in China. The feed was provided twice a day, at 8:00 am and 8:00 pm, and the EA was weighed and mixed into the concentrate before feeding. The composition and ingredients of the basal diet are presented in Table 1. All sheep had ad libitum access to feed and clean water during the experiment. The experimental period lasted 30 days, preceded by a 5-day adaptation stage.

Sample collection and analysis
The dry matter intake (DMI) of the sheep was recorded daily. On the 21st to 25th days of the experiment, self-made fecal collection bags were used to collect fecal samples for 5 consecutive days. Specifically, the fecal samples were collected four times daily at fixed time intervals, meticulously documented, and pooled across each consecutive 5-day period. Subsequently, 10% of the total collected fecal amount was randomly selected, weighed, and dried for analysis. Feed and feces samples were sent to the Animal Nutrition Laboratory, College of Animal Science, Xinjiang Agricultural University, for dry matter (DM) and chemical analysis.

To determine the DM content, the samples were dried in a forced-air oven at 65 °C until a constant weight was achieved. Upon drying, the samples were ground to pass through a 1 mm screen (Cyclotec Mill, model 1093; Foss Tecator, Höganäs, Sweden). The ash content of the ground samples was analyzed after 4 h of combustion in a muffle furnace at 550 °C. Further, the calcium (Ca) and phosphorus (P) contents of feed and feces were analyzed by atomic absorption spectrophotometer (17). Average daily gain (ADG) was calculated as the difference between final and initial body weight divided by the number of days of feeding. Feed/Gain (F/G) was calculated as the ratio of average daily feed intake to daily weight change (g/g).
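A worked numerical example of the performance measures just defined may help; the function for apparent digestibility uses the standard total-collection formula implied by the fecal sampling protocol above, and all numbers are illustrative rather than the study's data.

def adg(final_bw_kg: float, initial_bw_kg: float, days: int) -> float:
    """Average daily gain (kg/d) = (final BW - initial BW) / days on feed."""
    return (final_bw_kg - initial_bw_kg) / days

def feed_to_gain(daily_intake_g: float, daily_gain_g: float) -> float:
    """F/G = average daily feed intake / average daily weight change (g/g)."""
    return daily_intake_g / daily_gain_g

def apparent_digestibility(intake_g: float, fecal_output_g: float) -> float:
    """Standard total-collection formula: (intake - fecal output) / intake."""
    return (intake_g - fecal_output_g) / intake_g

print(adg(42.0, 35.6, 30))                    # ~0.213 kg/d
print(feed_to_gain(1500.0, 213.0))            # ~7.0 g feed per g of gain
print(apparent_digestibility(1500.0, 600.0))  # 0.60, i.e. 60 %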
Rumen fluid samples were collected on the 15th and 30th days of the trial period, both before feeding (0 h) and at 2, 4, 6, and 8 h after feeding, from the same position of the fistula with a self-made rumen fluid collector. The fluid sampling devices consisted of Tygon tubing terminating in a pot scrubber weighted with several steel nuts, installed in each animal and threaded through holes in the cannula plug to maintain the anaerobic rumen environment. One pot scrubber was placed in the cranial portion of the rumen, and one was placed in the caudal portion. The ends of the Tygon tubing were scored to allow a Luer-lock syringe to be screwed directly onto each tube. For each rumen fluid sample collected, a 50 mL syringe was then used to sample equal volumes of fluid from each sampling line. Samples were mixed in the syringe, and the bulk sample was aliquoted into 2 glass vials (18), filtered with a 60-mesh nylon bag, and packed into frozen tubes immediately after the pH was measured with a portable pH meter (PB-21, Sartorius, Germany), then preserved in liquid nitrogen for the subsequent analysis of rumen fermentation parameters, including volatile fatty acids (VFAs, by gas chromatography) and ammonia nitrogen (NH3-N, by colorimetry on an Agilent Cary 60 UV-Vis spectrophotometer, USA).

16S rDNA sequencing and bioinformatics analysis of the rumen bacteria
Total DNA extraction and PCR amplification of rumen fluid samples followed the methodology outlined in Ma et al.'s study (19). Briefly, total DNA extraction involved the use of cetyltrimethylammonium bromide (CTAB), with subsequent assessment of DNA purity and concentration via 1% agarose gel electrophoresis and spectrophotometry. Universal prokaryotic primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 806R (5′-GGACTACHVGGGTATCTAAT-3′) were used to amplify the V3-V4 variable region of the bacterial 16S rRNA gene. The amplicons were sequenced on the MiSeq PE300 platform (Illumina, USA). The raw 16S rRNA gene sequencing reads were quality filtered using Flash (version 1.20) and QIIME (Quantitative Insights into Microbial Ecology, version 1.8.0) (20). The filtered sequences were clustered by homology to obtain the operational taxonomic units (OTUs). The α and β diversity indices were measured and analyzed. Linear discriminant analysis effect size (LEfSe) was used to identify differential microflora, and PICRUSt analysis was performed to predict microbial function.

Statistical analysis
Preliminary analysis of the experimental data was conducted using Excel 2010. The data on growth performance, rumen fermentation, nutrient apparent digestibility, and plasma antioxidant capacity were first checked for normality using the Shapiro-Wilk test, and further statistical analysis was carried out in SPSS 20.0 (SPSS Statistics 20, IBM Japan, Ltd., Tokyo, Japan) using independent-sample t-tests. The data are expressed as mean ± standard deviation, with p < 0.05 indicating significant differences and 0.05 < p < 0.10 indicating a trend toward a difference. Pearson's correlation analysis was performed to evaluate the correlations between the rumen differential bacteria and the rumen fermentation parameters and apparent digestibility of nutrients, and graphs were rendered using Origin 8.0 (OriginLab Co., Northampton, MA, USA).
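The statistical workflow above (a normality check, then an independent-sample t-test per variable, plus Pearson correlations) can be sketched in a few lines of Python with SciPy instead of SPSS; the arrays below are hypothetical illustrations, not measured values from the study.

import numpy as np
from scipy.stats import shapiro, ttest_ind, pearsonr

con = np.array([61.2, 58.9, 60.4, 59.7, 62.1])  # e.g. NDF digestibility (%), CON group
ea = np.array([65.8, 66.9, 64.3, 67.2, 65.1])   # e.g. NDF digestibility (%), EA group

# Shapiro-Wilk normality test on each group (p > 0.05 -> no evidence against normality)
for name, x in (("CON", con), ("EA", ea)):
    print(name, "Shapiro-Wilk p =", round(float(shapiro(x).pvalue), 3))

# Independent-sample t-test between groups, as in the paper
t, p = ttest_ind(con, ea)
print("t =", round(float(t), 2), "p =", round(float(p), 4))  # p < 0.05 -> significant

# Pearson correlation, e.g. a bacterium's relative abundance vs a fermentation parameter
abundance = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 1.6, 1.8, 1.5, 1.9, 1.7])
acetate = np.array([55, 58, 56, 60, 57, 66, 68, 64, 70, 67])
r, p = pearsonr(abundance, acetate)
print("r =", round(float(r), 2), "p =", round(float(p), 4))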
Growth performance
Table 2 demonstrates that the sheep in the EA group exhibited increased final body weight, average daily gain, and feed conversion efficiency compared to the sheep in the CON group. However, no statistically significant differences were observed between the CON and EA groups (p > 0.05).

Rumen fermentation parameters
Rumen fermentation parameters on Day 15 and Day 30 are presented in Table 3. No significant differences were observed in rumen pH value or in the isobutyric acid, butyric acid, isovaleric acid, valeric acid, and ammonia nitrogen contents between the EA and CON groups (p > 0.05). Nevertheless, the contents of acetic acid (p = 0.003) and propionic acid (p = 0.003) exhibited significant increases, while the lactic acid (p = 0.086) content tended to be higher in the EA group compared to the CON group. Specifically, on the 30th day, the acetic acid content in the EA group increased by 10.32% compared to the CON group (p = 0.056), although other parameters showed no significant differences. As shown in Figure 1, on the 15th day of the experiment, at 1 h after morning feeding, the rumen fluid contents of acetic acid (Figure 1A), propionic acid (Figure 1B), and lactic acid (Figure 1H) in the EA group reached their maximum values, which were higher (p < 0.05) than those of the CON group, and then began to decrease. The contents of butyric acid (Figure 1D) and valeric acid (Figure 1F) showed the same trend; however, no significant difference was observed between the EA and CON groups (p > 0.05). The contents of isobutyric acid (Figure 1C) and isovaleric acid (Figure 1E) in the rumen fluid of the two groups began to decrease after feeding. As shown in Figure 2, on the 30th day of the experiment, the changing trends of acetic acid (Figure 2A), propionic acid (Figure 2B), isobutyric acid (Figure 2C), butyric acid (Figure 2D), isovaleric acid (Figure 2E), valeric acid (Figure 2F), and ammonia nitrogen (Figure 2G) in the rumen fluid of the two groups were similar to those on the 15th day, whereas the lactic acid content showed the opposite trend (Figure 2H). Moreover, 2 h after morning feeding, the contents of acetic acid and propionic acid in the rumen fluid of the EA group were significantly higher than those of the CON group (p < 0.05).

Rumen bacterial diversity
An average of 65,767 effective tags per sample was obtained on the 15th day, yielding an average of 678 OTUs per sample at 97% paired sequence identity. On the 30th day, an average of 66,514 effective tags per sample was obtained, resulting in an average of 692 OTUs per sample at 97% paired sequence identity. The ACE, Chao1, Shannon, and Simpson indices exhibited no significant differences between the CON and EA groups (Figures 3A, B). The intestinal microbiomes of both groups exhibited wide distribution and effective isolation, suggesting EA's impact on rumen microflora composition (Figure 4).

The results of the microbiome composition analysis are presented in Figure 5.
On the 15th day, Bacteroidetes and Firmicutes were the dominant phyla in the rumen of the two groups, accounting for over 85% of the microflora. At the family level, Prevotellaceae and Rikenellaceae were the dominant families in the rumen of the two groups, while at the genus level the abundances of Prevotella, uncultured_rumen_bacterium, and other bacteria in the rumen of sheep in the EA group were marginally but insignificantly higher than those in the CON group (p > 0.05) (Figure 5A). On the 30th day, at the phylum level, Bacteroides and Streptomyces were the most predominant phyla of the two groups, accounting for more than 90% of all microorganisms; at the family level, the abundance of Erysipelatoclostridiaceae in the rumen of the EA group was significantly higher than that of the CON group (p < 0.05); at the genus level, the abundance of SP3_e08 in the rumen of the EA group was higher than that of the CON group, while the abundance of UCG_004 was significantly lower than that in the CON group (p < 0.05) (Figure 5B). LEfSe analysis was used to compare the microbiota in the rumen contents of the two groups. On the 15th day of the experiment, the abundances of Oscillospirales, Ruminococcaceae, and uncultured_rumen_bacterium in the rumen of the EA group were higher than those in the CON group (p < 0.05) (Figure 6A). On the 30th day of the experiment, the rumen abundances of unclassified_Prevotella, Bacteroidales, Bacteroidota, Bacteroidia, unclassified_Rikenellaceae, and Prevotella_spBP1_145 in the EA group were higher than those in the CON group (p < 0.05) (Figure 6B).

PICRUSt analysis revealed comparable functions of the rumen microbiota in both groups, primarily associated with metabolic pathways, the biosynthesis of secondary metabolites, antibiotics, and amino acids, and microbial metabolism (Figure 7).

Apparent nutrient digestibility
As shown in Table 4, compared with the CON group, the intakes of DM and OM of sheep in the EA group increased significantly (p < 0.05). Additionally, the apparent digestibility of NDF and EE increased significantly, by 6.12% and 3.17%, respectively (p < 0.05).

Plasma antioxidant capacity
As shown in Table 5, the activities of T-AOC, SOD, CAT, and GSH-Px in the plasma of the EA group were significantly higher than those in the CON group (p < 0.05), and the content of MDA decreased significantly (p < 0.05).
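For readers unfamiliar with the α-diversity indices reported above, the following short Python sketch shows how the Shannon and Simpson indices are computed from a vector of OTU counts; the count vector is a toy example (ACE and Chao1 additionally require singleton and doubleton counts and are omitted here).

import numpy as np

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero OTU proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2); higher values mean more diversity."""
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    return float(1.0 - (p ** 2).sum())

otu_counts = [420, 310, 120, 80, 40, 20, 8, 2]   # toy OTU count vector
print(shannon(otu_counts), simpson(otu_counts))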
Correlation analysis between rumen differential bacteria and rumen fermentation parameters, apparent digestibility of nutrients, and plasma antioxidant capacity
The correlations between the rumen differential bacteria and the rumen fermentation parameters, apparent digestibility of nutrients, and plasma antioxidant capacity were explored. The acetic acid content in the rumen was positively correlated with uncultured_rumen_bacterium and Bacteroidota abundances and significantly negatively correlated with Bacteroidales_bacterium_Bact_22. The isobutyric acid content was positively correlated with Ruminococcaceae abundance, and the butyric acid content was negatively correlated with Bacteroidales_bacterium_Bact_22 (Figure 8A). The apparent digestibility of NDF was positively correlated with rumen uncultured_rumen_bacterium abundance and negatively correlated with Bacteroidales_bacterium_Bact_22 (Figure 8B). Bacteroidales_bacterium_Bact_22 abundance was negatively correlated with SOD, GSH-Px, and CAT activities and positively correlated with MDA content; uncultured_rumen_bacterium abundance was negatively correlated with MDA content and positively correlated with GSH-Px; unclassified_Bacteria abundance was negatively correlated with GSH-Px and CAT activities and with T-AOC; Bacteroidota was negatively correlated with MDA content (Figure 8C).

Discussion
EA exerts various biological functions, including antioxidative, anticancer, and anti-inflammatory properties, which have spurred substantial research interest in its practical applications (21). At present, there are few reports on the effects of EA on ruminants. To better understand the effects of dietary EA supplementation in sheep, in the present experiment we used 10 ruminally cannulated Kazakh sheep to assess the effects of the dietary addition of EA on growth performance, rumen metabolism, and apparent nutrient digestibility. The DMI required for ruminants to maintain their life activities determines the quality of their growth, development, and reproduction (22). In this study, we found that dietary supplementation with EA tended to increase ADG (p = 0.094), indicating its beneficial effect on growth performance in sheep. The underlying cause may be the higher DMI of the sheep in the EA group. This suggests that the inclusion of EA in the diet could have positive effects on palatability and, consequently, on feed intake. Orzuna-Orzuna et al. evaluated the effects of dietary tannin supplementation on the performance, carcass characteristics, meat quality, oxidative stability, and serum antioxidant capacity of sheep by meta-analysis. The results showed that dietary tannin supplementation did not affect the dry matter intake of sheep but increased the daily gain (23). These findings are similar to those of this study. It could be presumed that tannins reduce the consumption of microbial protein, improve the efficiency of microbial protein synthesis, and promote protein flow to the duodenum, thus improving the production performance of ruminants by inhibiting ciliated protozoa in the rumen (24). Unfortunately, rumen protozoa were not studied in the current research. We plan to investigate this in future studies.
Many plants rich in secondary metabolites or bioactive compounds can affect the growth or activity of rumen microorganisms through different mechanisms to regulate rumen fermentation characteristics (25). The change in the pH of the rumen fluid is a comprehensive reflection and an intuitive manifestation of changes in the internal environment of rumen fermentation. Extreme pH values adversely affect the growth and reproduction of rumen microorganisms and the fermentation of feed substrates (26). In this study, the rumen fluid pH values of the two groups were in the normal range (6.33-6.68) without significant differences, which is consistent with previous studies (27). The rumen NH3-N concentration is not only one of the main internal environmental indicators of rumen fermentation but also the most important N source for microbial protein synthesis in the rumen (28). An increase in rumen VFA and NH3-N production generally indicates an improvement in rumen microbial metabolic activity, nitrogen use efficiency, and the overall productivity of ruminants (29,30). In this study, after the dietary supplementation of EA, we found no significant increase in the rumen ammonia nitrogen concentration, but improved rumen VFAs (acetic acid and propionic acid significantly increased). Manoni et al. used a short-term in vitro rumen fermentation model to better understand the effects of EA and gallic acid on rumen fermentation and discovered that EA had a significant effect on reducing CH4 emission and ammonia formation as well as affecting rumen degradability and total SCFA yield (31), which is consistent with the results of our study. Regrettably, we did not measure the CH4 of the rumen, limiting the scope of inquiry. Xu et al. (5) evaluated the effect of gallic acid on the rumen fermentation of pre-weaning calves. The results showed that the concentrations of total volatile fatty acids, propionate, butyrate, and valerate in the rumen fluid of calves increased linearly with the addition of gallic acid, resulting in a linear decrease in pH (5). EA might have a dose-dependent effect in this experiment; however, since only a single EA concentration was used in this study, follow-up research is required to investigate this aspect. Bhatta et al.
studied the effect of tannin on rumen fermentation in vitro and found that different additions of tannin could reduce the average NH3-N concentration (32). However, the addition of tannin to the diet of dairy cows did not affect the NH3-N concentration (33). The differences in the above results may be related to the source, type, and molecular weight of the tannins. Furthermore, our assessment of rumen fermentation dynamics in Kazakh sheep revealed a pattern: initially, the levels of acetic acid, propionic acid, butyric acid, pentanoic acid, and lactic acid increased before gradually declining, reaching their peak at the 4-hour mark post-feeding (refer to Figures 1, 2). A large and diverse rumen microflora plays a key role in the growth and health of ruminants. There is increasing evidence that condensed tannins from various plants or plant extracts have a significant effect on the rumen microflora of ruminants and can selectively change specific rumen bacteria, thus altering the metabolism of volatile fatty acids (VFAs) in the rumen (34). The effect of EA on rumen microorganisms has not been reported. Herein, we performed 16S rDNA high-throughput sequencing to detect the effect of dietary EA on the composition of the rumen microflora of sheep and found that the trend of microbial diversity was consistent between the two groups of sheep rumen samples (Figure 3). Secondly, whether in the CON group or the EA group, Bacteroides and Streptomyces were the dominant phyla in the rumen of the Kazakh sheep, which is consistent with the findings in other ruminants (35,36), and there was no significant change in the composition of the top 10 dominant families and genera between the two groups (Figure 5). However, dietary EA could regulate the abundance of rumen microorganisms. For example, after the addition of EA to the diet, the abundances of Cyanobacteria and Synergistota in the rumen of sheep in the EA group decreased. Synergistota has been found in a wide range of anaerobic environments, and some members are associated with amino acid transport (37). Cyanobacteria is a common rumen bacterial phylum, which plays a vital role in hemicellulose and pectin degradation and in the reduction of methane production (38). The change in the relative abundance of Cyanobacteria in the rumen may be driven by changes in feed quality and by their ability to degrade plant hemicellulose and pectin. A further limitation of our study is the lack of determination of amino acid content and methane production. Therefore, the effects of dietary EA on energy utilization and amino acid fermentation in sheep need to be further studied. LEfSe was used to characterize the microflora showing abundant differences among the animal subgroups. In this experiment, the differential rumen bacteria of the two groups of sheep were compared. The results showed that the abundances of Ruminococcaceae, uncultured_rumen_bacterium, Prevotella, and SP3_e08 in the EA group were significantly higher than those in the CON group (Figure 6). The increase in Ruminococcaceae abundance could be attributed to the inclusion of EA in the diet, potentially enhancing the cellulose and hemicellulose degradation capabilities in Kazakh sheep; this family possesses a significant quantity of hemicellulase and oligosaccharide-degrading enzymes, which might explain this observed change (39). Subsequently, we analyzed the correlation between the rumen differential bacteria and the rumen fermentation parameters and found that dietary supplementation with EA could significantly increase the rumen Ruminococcaceae abundance in sheep. Moreover, we
observed a significant positive correlation between these bacteria and the isobutyric acid content. Therefore, the increase in rumen Ruminococcaceae abundance in sheep may be responsible for the increase in TVFA production in our study. In addition, we found that Bacteroidales_bacterium_Bact_22, significantly upregulated in the rumen of sheep in the CON group, had a significant negative correlation with the acetic acid and butyric acid contents. In the EA group, the significantly upregulated uncultured_rumen_bacterium was positively correlated with the acetic acid content, and Bacteroidota abundance was also positively correlated with the acetic acid content. Our findings suggest that dietary supplementation of ellagic acid can improve the rumen fermentation of Kazakh sheep by regulating the abundance of rumen microorganisms, which has a beneficial effect on growth performance.

Dietary nutrient digestibility is another important parameter for evaluating the dietary utilization rate (40). The improvement of animal growth performance is related to the high digestibility of the diet. After adding EA to the diet, we observed that the digestibility of NDF increased significantly, and the digestibility of dry matter and crude protein showed a noticeable but insignificant increase (Table 4). In a previous study, digestibility was improved with the addition of 30 mg/kg BW/d EA to equine diets for various components, including DM, OM, gross energy, NDF, ADF, and Ca (41). Hence, based on the results of the current study, sheep supplemented with EA have a high potential for improved digestibility of DM and nutrients. Additionally, we analyzed the correlation between the rumen differential bacteria and the apparent nutrient digestibility. The results showed that the apparent digestibility of NDF was positively correlated with uncultured_rumen_bacterium, upregulated in the rumen of sheep in the EA group, and negatively correlated with Bacteroidales_bacterium_Bact_22, upregulated in the rumen of sheep in the CON group. In terms of apparent digestibility, the apparent digestibility of NDF in the EA group was significantly higher than that in the CON group (Figure 8B). It can be concluded that the dietary supplementation of EA can improve the apparent digestibility of Kazakh sheep, which may be related to the upregulation of the rumen uncultured_rumen_bacterium bacteria.
EA can inhibit oxidative stress by directly scavenging free radicals, inhibiting lipid peroxidation, increasing the activity of antioxidant enzymes and their gene expression, maintaining cell stability, and reducing DNA damage, as reflected in the SOD, MDA, CAT, and GSH-Px levels in the blood (42). We evaluated the effect of dietary EA on the plasma antioxidant indices of Kazakh sheep. The results showed that dietary EA could significantly increase T-AOC and the activities of SOD, CAT, and GSH-Px, and decrease the MDA content (Table 5). Previous studies in piglets (9), mice (43), and broilers (44) have highlighted the antioxidant effect of EA, which is consistent with the results of our experimental study. Moreover, changes in the microbiota are linked to alterations in the redox state (45). A previous study (46) found that antioxidants can regulate the dynamic balance of the intestinal microbiota by scavenging excessive free radicals and strengthening the organism's immunity. Furthermore, some studies have indicated that Lactobacillus and Bifidobacterium possess excellent antioxidant capacity through the scavenging of free radicals (47,48). In our experiment, the dietary supplementation of ellagic acid improved the antioxidant capacity of sheep and regulated the ruminal microbiota. Hence, we analyzed the correlations between the rumen differential bacteria at the phylum, family, and genus levels and the plasma antioxidant capacity. The analysis unveiled significant associations, such as the positive correlation of uncultured_rumen_bacterium abundance in the EA group with GSH-Px activity and its negative correlation with MDA content. Conversely, Bacteroidales_bacterium_Bact_22, upregulated in the CON group, showed negative correlations with SOD, CAT, and GSH-Px activities and a positive correlation with MDA content. These findings suggest that dietary EA may enhance the antioxidant capacity of Kazakh sheep, partly through rumen microorganisms such as Bacteroidota, Bacteroidales_bacterium_Bact_22, and uncultured_rumen_bacterium.

Conclusions
In conclusion, the present study demonstrated that the dietary supplementation of 30 mg/kg BW EA (per sheep per day) for 5-month-old Kazakh sheep improves the dry matter intake and the apparent digestibility of NDF and EE, increases the acetic acid and propionic acid contents in the rumen fluid, regulates the ruminal microbiota, enhances antioxidant capacity, and improves daily weight gain. These findings offer valuable insights into the potential benefits of EA supplementation.
FIGURE FIGUREThe results of α diversity analysis.(A) The ACE, Chao , Shannon, and Simpson indices of rumen microorganisms on the th day of the experiment.(B) The ACE, Chao , Shannon, and Simpson indexes of rumen microorganisms on the th day of the experiment. FIGURE FIGUREThe results of β diversity analysis.(A) Weighted Unifrac PCoA scatter plot on the th day of the experiment.(B) Weighted Unifrac PCoA scatter plot on the th day of the experiment. FIGURE FIGURETaxonomic and stack distribution of di erent species.(A) Taxonomic stack distribution at phylum, family, and genus levels on the th day of the experiment.(B) Taxonomic stack distribution at gate, family, and genus levels on the th day of the experiment. FIGURE FIGURELEfSe analysis of rumen microflora.(A) LEfSe analysis results on the th day of the experiment.(B) LEfSe analysis results on the th day of the experiment.D CON, D CON, fed a basal diet; E EA, E EA, fed a basal diet with EA ( mg/kg BW). FIGURE FIGURE Correlation analysis.(A) Rumen di erential bacteria and Rumen fermentation parameters.(B) Rumen di erential bacteria and apparent digestibility of nutrients.(C) Rumen di erential bacteria and plasma Antioxidant capacity."*" indicates a significant di erence between groups where p < . ."**" indicates a significant di erence between groups where p < . . TABLE Composition and nutrient levels of the diet (DM basis). TABLE E ect of the dietary addition of ellagic acid on growth performance of Kazakh sheep. TABLE E ect of the dietary addition of ellagic acid on rumen fermentation parameters of Kazakh sheep. a,b Values within a row without common superscripts differ significantly (p < 0.05); CON group, fed a basal diet; EA group, fed a basal diet with EA (30 mg/kg BW). TABLE E ect of the dietary addition of ellagic acid on apparent nutrient digestibility of Kazakh sheep. a,b Values within a row without common superscripts differ significantly (p < 0.05); CON group, fed a basal diet; EA group, fed a basal diet with EA (30 mg/kg BW). TABLE E ect of the dietary addition of ellagic acid on plasma antioxidant capacity of Kazakh sheep. a,b Values within a row without common superscripts differ significantly (p < 0.05); CON group, fed a basal diet; EA group, fed a basal diet with EA (30 mg/kg BW).
Estimates of the topological entropy from below for continuous self-maps on some compact manifolds

Extending our results of [17], we confirm that the Entropy Conjecture holds for every continuous self-map of a compact K(π, 1) manifold with the fundamental group π torsion free and virtually nilpotent, in particular for every continuous map of an infra-nilmanifold. In fact we prove a stronger version, a lower estimate of the topological entropy of a map by the logarithm of the spectral radius of an associated "linearization matrix" with integer entries. From this, referring to known estimates of the Mahler measure of polynomials, we deduce some absolute lower bounds for the entropy.

Introduction
Let M be a compact manifold and f : M → M a continuous self-map of M. The topological entropy h_top(f), denoted shortly by h(f), is defined as lim_{ε→0} lim sup_{n→∞} (1/n) log sup #Q, the supremum being taken over all (ε, n)-separated sets Q. A set Q is called (ε, n)-separated if for every two distinct points x, y ∈ Q, max_{j=0,...,n} d(f^j(x), f^j(y)) ≥ ε. Here d is a metric on M consistent with the topology; in fact h(f) does not depend on the metric (cf. [28]).

The Entropy Conjecture, denoted shortly as EC, says that the topological entropy of f is larger than or equal to the logarithm of the spectral radius of the linear operator induced by f on the linear spaces of cohomology of M with real coefficients. It was posed by M. Shub in the seventies, who asked what suppositions on f and M imply EC. We prove the following

Theorem A Assume that a compact manifold M is a K(π, 1)-space with the fundamental group π being torsion free and virtually nilpotent. Then EC holds for every continuous self-map f of M. In particular EC holds for every continuous self-map of any compact infra-nilmanifold.

• A group π is called virtually nilpotent if it contains a finite index nilpotent subgroup Γ. We can assume that Γ is a normal subgroup of π. (Indeed, for any pair of groups K ⊂ L, where K has a finite index in L, one has a homomorphism ρ : L → Sym(L/K) into the symmetry group of the quotient space. Then ker ρ is a normal subgroup of L, it has finite index, and is contained in K.)
• One can replace the assumption that π is virtually nilpotent by the assumption that π has polynomial growth, [9].
• Theorem A is a step towards proving a conjecture by A. Katok [11] saying that EC holds for every continuous map if the universal cover of M is homeomorphic to a Euclidean space R^d.
• One can even ask whether EC holds for every continuous self-map of a K(π, 1) compact manifold (or a finite CW-complex).

Affine maps
We refer to the following theorem by A. Malcev and L. Auslander about the existence of a model ([7], p. 76): Assume that π is a finitely generated torsion free virtually nilpotent group. Then it contains a finite index normal subgroup Γ which can be embedded as a lattice, i.e. a discrete co-compact subgroup, in a connected, simply connected nilpotent Lie group G. The embedding can be extended to an embedding of π in the group Aff(G) of affine mappings of G, so that π ∩ G = Γ. More precisely, if C ⊂ Aut(G) denotes the maximal compact subgroup of the group of automorphisms of G, then π ⊂ G ⋉ C ⊂ G ⋉ Aut(G); this embedding of π is called an almost Bieberbach group. It follows then from the definition that π acts on G properly discontinuously. (First note that if α ∈ π has a fixed point z ∈ G, then α^ℓ(z) = z and α^ℓ ∈ Γ for ℓ = #(π/Γ). Then α^ℓ = e, the unity of π, hence α = e by the assumption that π is torsion free.)
The quotient manifold IN = G/π is called an infra-nilmanifold. It is regularly finitely covered by the nilmanifold N = G/Γ, with the deck transformation group equal to H = π/Γ. Note that every compact manifold finitely covered by a nilmanifold, in particular every infra-nilmanifold, satisfies the assumptions of Theorem A. Indeed, G is homeomorphic to R^d, where d = dim M (cf. [22]). If π were not torsion free, a cyclic subgroup ⟨g⟩ ≅ Z_p of prime order would act freely on R^d. The latter is impossible, as follows from the Smith theory (cf. [1]).

The image of the embedding of π into G ⋉ C ⊂ Aff(G) will be denoted by π_IN. It is the deck transformation group of the cover p_IN : G → G/π_IN and, distinguishing an arbitrary z ∈ G, it can be identified in a standard way with the fundamental group π_1(IN, p_IN(z)) of IN.

Consider now any M being K(π, 1) as in Theorem A. The group π acts properly discontinuously on the universal cover space M̃ and it can be identified with π_M, which is the deck transformation group of the universal cover p_M : M̃ → M. (Similarly to N → IN, we have a regular finite cover M̃/Γ → M = M̃/π, with the deck transformation group H.) Every continuous f : M → M induces an endomorphism F = F_f of π_M, unique up to an inner automorphism. To define this, use f_# : π_1(M, z) → π_1(M, f(z)) between the fundamental groups and the standard identifications of these groups with π_M. For more details see Section 3. When we identify π with π_IN, we can consider F as an endomorphism of π = π_IN. By K. B. Lee [13, Th. 1.1 and Cor. 1.2], there exists an affine self-map Φ = (b, B) ∈ G ⋉ Endo(G) such that

Φ ∘ α = F(α) ∘ Φ for every α ∈ π_IN.    (1)

Hence, in view of (1), there exists a factor φ = φ_f of Φ on IN under the action of π_IN. In general one calls factors of affine Φ on G satisfying (1) affine maps on IN; in particular φ_f is an affine map on IN. Since both M and IN are K(π, 1)-spaces, they are homotopically equivalent, [26]. One can find a homotopy equivalence h : M → IN inducing our identification between π_M and π_IN. Then φ ∘ h is homotopic to h ∘ f, see [26, Section 8.1] and Section 3. Note ([13]) that one may consider the exterior powers ∧^k D_f of the linearization D_f (defined below); sp(∧*D_f), the maximum of their spectral radii, is of course equal to ∏_{j : |λ_j| > 1} |λ_j|, where λ_j ∈ σ(D_f), provided at least one λ_j has absolute value larger than 1. Otherwise it is equal to 1 (attained in ∧^0 D_f).

Linearization matrices
One can assign to an endomorphism F = f_# not only a linear map D_f : R^d → R^d, G ≡ R^d, but also an integer d × d matrix A_[f], called the linearization of the homotopy class [f], because the endomorphisms f_# are in one-to-one correspondence with the homotopy classes [f] of self-maps of a K(π, 1)-space. Sometimes we shall use the notation A_f. An endomorphism F : π → π does not preserve the nilpotent subgroup Γ ◁ π in general. But Γ contains a subgroup Γ′ ◁ π such that Γ′ is nilpotent, has finite index in π and is invariant under F (cf. [14] or Proposition 5). Since Γ′ has finite index in Γ, it is also a lattice in G. By (1) the endomorphism B : G → G is an extension of F : Γ′ → Γ′, see Section 2.

Let G be a nilpotent connected simply connected Lie group and Γ its lattice. By definition, the descending central series of G is G = G_0 ⊃ G_1 ⊃ · · · ⊃ G_{k+1} = {e}, with G_{i+1} = [G, G_i]; put Γ_i := Γ ∩ G_i. If B is a homomorphism of G preserving a lattice Γ, it preserves each subgroup Γ_i, thus it induces an endomorphism B_i on each factor group Γ_i/Γ_{i+1}, 0 ≤ i ≤ k. Clearly Γ_i/Γ_{i+1} is abelian and torsion free, of dimension d_i. Therefore the action of B_i is given by an integer d_i × d_i matrix A_i, which is uniquely defined up to a choice of basis, i.e. up to a conjugation by a unimodular matrix.
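As a quick numerical illustration of the product formula just given for sp(∧*D_f), the following Python lines compute log sp(∧*A) for an integer matrix A as the product of the absolute values of its eigenvalues outside the unit circle; the matrix is an arbitrary editorial example, not one arising from a specific map in the paper.

import numpy as np

A = np.array([[2, 1],
              [1, 1]])                 # an integer hyperbolic matrix with det = 1

eigvals = np.linalg.eigvals(A)
outside = [abs(l) for l in eigvals if abs(l) > 1.0]
sp_wedge = float(np.prod(outside)) if outside else 1.0   # equals 1 (on wedge^0) if none
print(np.log(sp_wedge))   # ~0.9624; a lower bound for h(f) in the sense of Theorem B below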
Finally we put A_[f] := A_0 ⊕ A_1 ⊕ · · · ⊕ A_k (cf. [12] and [10] for some extensions of [17]). In fact the matrices A_i can be defined directly, using F = f_# : π → π, without constructing G: one can use the series of isolators of the lower central series of π.

Conclusion
We are in the position to formulate a sharper version of Theorem A, namely

Theorem B For every continuous self-map f of a compact manifold M which is a K(π, 1)-space with the group π torsion free and virtually nilpotent, h(f) ≥ log sp(∧*A_[f]). In the case M is an infra-nilmanifold the equality holds for every affine map φ : M → M, a factor of an affine Φ satisfying (1), in particular for φ_f. In consequence, for every continuous self-map f of an infra-nilmanifold, h(f) ≥ h(φ_f).

Maybe the consideration of M is not needed and it is sufficient to consider only IN. Since the topological entropy is an invariant of conjugation by a homeomorphism, this would follow from the Borel conjecture, which states that the fundamental group π of a manifold being a K(π, 1)-space determines M up to homeomorphism. This has been confirmed by Farrell and Jones [5] for a class of groups that contains the almost-Bieberbach groups, except in dimension 3.

Formulating Theorems A and B we have followed a suggestion by M. Shub [24] to assume a discrete group point of view. Given an endomorphism F = f_# : π → π of a finitely generated torsion free virtually nilpotent group, we associate to it a linear operator D_f, or an integer d × d matrix A_f. As suggested in [24], the logarithm of sp(∧*D_f), or of sp(∧*A_f), is "a kind of volume growth" of f_#. In Theorem A it is replaced by the spectral radius of the map induced on the real cohomologies of the group π.

We shall present two proofs of Theorems A and B. The first one, in Section 2, concludes Theorems A and B from the analogous theorems in [17], with f : N → N a continuous map of a compact nilmanifold. However this proof of Theorems A and B does not work in dimension 3. The second proof, in Section 3, holds for f on M and uses only a homotopy equivalence between M and IN. It directly repeats the arguments of [17]. An important observation is that A_[f] is an integer matrix. This allows us, in Section 4, to prove absolute estimates from below for sp(∧*A_f), where f is an expanding map of a compact manifold (without boundary) or an Anosov diffeomorphism of a compact infra-nilmanifold. The latter uses number theory results estimating the Mahler measure of an integer polynomial.

The authors would like to express their thanks to K. Dekimpe, E. Dobrowolski, T. Farrell, A. Katok and M. Shub for helpful conversations.

Entropy Conjecture on infra-nilmanifolds
The proof of Theorems A and B we present in this section holds for M finitely covered by a nilmanifold and follows from two standard facts and the main theorem of [17], in which the topological entropy of a continuous map of a nilmanifold is estimated by the corresponding quantities. We begin with the following

Proposition 1 Let p : M̃ → M be a finite covering and let f̃ : M̃ → M̃ be a lift of a continuous map f : M → M, i.e. p ∘ f̃ = f ∘ p. Then h(f̃) = h(f).

Proof: It is elementary and is given in [6]. Briefly: the p-preimage of an (n, ε)-f-separated set in M is (n, ε)-f̃-separated (in the metric d̃ on M̃ being the lift of a metric d on M chosen to define the entropy), hence h(f) ≤ h(f̃). (In fact only the continuity of p was substantial in this proof.) Conversely, let Q be an (n, ε)-f̃-separated set in M̃ consisting of points in a ball B(z, ε/2). Let δ > 0 be a constant such that p is injective on every ball in M̃ of radius δ. We prove that the set p(Q) is (n, ε)-f-separated. Indeed, take ε < δ and suppose that for x, y ∈ Q we have d(f^j(p(x)), f^j(p(y))) < ε for all j = 0, 1, ..., n.
Let j_0 ≥ 0 be the smallest j ≤ n such that d̃(f̃^j(x), f̃^j(y)) ≥ ε. Then d̃(f̃^{j_0}(x), f̃^{j_0}(y)) ≥ δ − ε (i.e. the projections by p are close to each other, but the points are in different components of the preimages of a small ball under the covering map). This is not possible for j_0 = 0, by Q ⊂ B(z, ε/2). If it happens for another j_0, it means that the f̃-image of two points within distance < ε has distance ≥ δ − ε, which for ε small enough contradicts the uniform continuity of f̃. □

Definition 4 Let Γ ◁ π be a normal nilpotent subgroup of finite index in π and let s be an endomorphism of π. We say that a group Γ′ ⊂ Γ is s-admissible if Γ′ is normal in π, has finite index in π, and s(Γ′) ⊂ Γ′.

Proposition 5 For a nilpotent group Γ, normal and of finite index in a group π, there exists a group Γ′ ⊂ Γ, Γ′ ◁ π, admissible for every endomorphism s of π (sometimes such a Γ′ is called a fully characteristic subgroup).

Proof: Repeat verbatim the argument of Lemma 3.1 of [14] and define Γ′ := the group generated by {γ^k : γ ∈ π}, where k is the order of H = π/Γ. This is a subgroup preserved by every endomorphism of π; in particular it is normal in π. Next we define the group Γ(k) := the group generated by {x^k : x ∈ Γ}. Of course Γ(k) ⊂ Γ′. It is enough to show that Γ(k) is of finite index in Γ. Apply an argument used in [14]: since Γ is nilpotent, it is polycyclic, [22]. But for any polycyclic group Γ the corresponding group Γ(k) has finite index, cf. [22, Lemma 4.1]. In particular, adapting the argument of Lemma 4.1 of [22], one shows its assertion for a nilpotent group by an induction over the length of nilpotency. □

Note that the number k used to define the group Γ(k) is not unique; e.g. we can take any multiple of it, getting a smaller group with the required property. To get a larger group Γ′ than that of [14] we can use k equal to LCM{#h : h ∈ H}, the least common multiple of the orders of the elements, instead of k = #H, the order of H.

Corollary 6 Let M be a compact manifold finitely covered by a nilmanifold N = G/Γ and let f : M → M be continuous. Then there exist an f_#-admissible lattice Γ′ ⊂ Γ, a nilmanifold Ñ := G/Γ′ finitely covering M via p̃ : Ñ → M, and a lift f̃ : Ñ → Ñ of f.

Proof: The assertion follows from Proposition 5 for π the fundamental group of M, N = G/Γ for Γ a subgroup of π, and s = f_#. We can assume that Γ (hence Γ′) is normal in π, see Introduction. We define Ñ := G/Γ′. A lift f̃ exists, since the homomorphism f_# : π → π preserves Γ′, identified with p̃_# π_1(Ñ), see [26]. □

Together with Corollary 3 and EC for nilmanifolds, [17], this proves EC for all continuous self-maps of M finitely covered by nilmanifolds, in particular for all M being infra-nilmanifolds.

Proof of Theorem B (for M finitely covered by a nilmanifold G/Γ): As above we find an admissible Γ′ ⊂ Γ. The equality (1) for α = g ∈ Γ′ takes the form Φ ∘ g = F(g) ∘ Φ; comparing both sides one sees that B coincides with F on Γ′. (In fact B in (b, B) was found in [13] just as an extension of f_#|Γ′ to G.) B is a lift to G/Γ′ of φ_f, which is homotopic to f. Theorem B follows from Corollary 6 (B is f̃ there), from Proposition 1, and from Theorem B for self-maps of nilmanifolds, [17]. □

Another proof of EC
Now we provide another proof of Theorems A and B, without additional assumptions, by showing that a modification of the proof for the nilmanifolds given in [17] works. Let us remind the notation: M is a compact manifold, being K(π, 1) for π a virtually nilpotent torsion free group. G is a connected simply connected nilpotent Lie group and IN = G/π, where π is embedded in Aff(G) as π_IN, acting properly discontinuously on G so that IN is an infra-nilmanifold; see the Existence of a Model theorem in the Introduction. We have the universal covers p_M : M̃ → M and p_IN : G → IN. Remark that we use the right action, thus IN = G/π, instead of π\G used in [13].
Then the action of an affine map (d, D), d ∈ G, D ∈ Endo(G), is given as (d, D)x = (Dx)d. We assume that all metrics under consideration are induced by Riemannian metrics. We need the following Lemma 7: the lift τG to G of a metric τIN on IN is equivalent to the right invariant metric ρ on G. Proof: By compactness the lift τΓ of τIN to G/Γ, where Γ = G ∩ π, is equivalent to ρΓ, the projection of ρ to G/Γ. Therefore the lifts to G are also equivalent. □ Let us stop for a while on the homotopy equivalence between M and IN, making some explanations from the Introduction more precise. By construction we get a homotopy equivalence h and its lift h̃. Let xn, n = 0, 1, 2, ..., be an f̃ trajectory. Hence wn = h̃(xn) is a ξ1-Φ-trajectory in the metric τG, hence, by Lemma 7, a ξ2-Φ-trajectory in ρ, the right invariant metric on G. Finally, a direct computation, with the latter equality by the right invariance of ρ, shows that wn is a (ξ2 + ρ(b, e))-trajectory for B′. Note that the spectra of the derivatives (linearizations) DB(e) and DB′(e) coincide, as these operators are conjugate. Now we define a mapping Θ from (wn) to a B′-trajectory in Gu, the unstable subgroup for B′, by proceeding as in [17]: first we define wn → πu(wn), the "projection" to Gu, i.e. we write wn = gcs · gu, where gcs ∈ Gcs, the central stable subgroup, and πu(wn) := gu ∈ Gu. Next Θ(wn) is defined as the unique B′-trajectory in Gu subexponentially "shadowing" πu(wn). Finally, we define θ(x) := Θ ∘ h̃(x). For an arbitrary ε > 0, for ((1 + ε)^j, n)-B′-separated points in Gu, j = 0, ..., n (contained in a small disc), i.e. such that for some j their j-th images under B′ are at distance at least (1 + ε)^j, we choose points w in their Θ-preimages (also in a small disc) and next points x in h̃-preimages, in a small disc. This is a crucial point which uses the fact that h̃ is onto, since |deg h| = 1, compare Remark 4.8 in [17]. If two points pM(x), pM(y) are (ε, n)-f-close (i.e. not separated), then so are x, y. Hence h̃(x) and h̃(y) are (ξ4, n)-close (with respect to Φ, hence B′) in ρ for a constant ξ4. Hence their Θ images are ((1 + ε)^j, n)-B′-close, a contradiction. □ Note that we did not use the admissible group constructed in Proposition 5. Remark 9: The statement of Theorem A, in a weaker form for flat manifolds, was posed as a question by Szczepański in his article [27]. Earlier, a very special case of the entropy conjecture for an affine map of a compact affine manifold was proved by D. Fried and M. Shub in [8]. Absolute estimates of entropy The famous Lehmer conjecture in number theory states that there exists a constant C > 1, called the Lehmer constant, such that for every integer polynomial w(x) = a0x^d + a1x^{d−1} + · · · + ad, not being a product of cyclotomic polynomials (all zeros being roots of 1) and x^k, the Mahler measure M(w) = |a0| · ∏ max(1, |λ|), where the product is taken over all zeros λ of w(x), satisfies M(w) ≥ C. There are estimates of the Mahler measure which depend on the degree of an irreducible polynomial (the degree of an algebraic number). Using an estimate given by Voutier in 1996 (cf. [29]), which is the best known valid for every d > 1, not only asymptotically, we get the following Theorem 11. Proof: Let w(x) = w1(x) · w2(x) · · · wk(x), dj = deg wj, d1 ≤ d2 ≤ · · · ≤ dk, be a decomposition of the characteristic polynomial of the linearization matrix Af into irreducible factors. If h(φf) > 0 then by Theorem B at least one eigenvalue of Af has absolute value larger than 1. Hence, by Theorem B, the assertion follows using the property that the sequence τ(n) is decreasing with respect to n. □ For other estimates of the Mahler measure see for example [4].
In particular, from Smyth's theorem [25] (which is a partial answer to the Lehmer conjecture) it follows that M(w) ≥ τ0 for every non-reciprocal w, where τ0 is the real root of the polynomial τ^3 − τ − 1. □ One can check that τ0 is greater than 1.32471795. In particular, τ0 depends neither on w(x) nor on its degree d. Theorem 11 is a statement about a homotopy property of f. A special case is when Af is a hyperbolic matrix invertible over the integers, i.e. φf is an Anosov automorphism, and d, the dimension of M, is odd. Then obviously the characteristic polynomial of Af is non-reciprocal, hence Theorem 11 applies and we obtain h(f) ≥ log τ0 > log 1.32471795. This is in fact an easy case whose proof does not need the use of Theorem B. Namely, one can refer to Franks' theorem [7, Theorem 2.2], saying that such a map f is semiconjugate to φf, i.e. there exists a continuous map θ : M → M such that θ ∘ f = φf ∘ θ. This θ is found to be homotopic to the identity, hence "onto". Therefore h(f) ≥ h(φf), see Proposition 1. It is easy to check that if f is an Anosov diffeomorphism then Af is a hyperbolic invertible matrix. Other remarks • The "projection-shadowing" construction of Θ in the proof of Theorem B in Section 3 and in [17] can be considered as a strengthening of Franks' theorem to the case where a central direction exists. • It is sufficient to assume that Af is a hyperbolic endomorphism, i.e. without eigenvalues of absolute value 1 and without zero eigenvalues, to apply Franks' theorem. Then φf is an Anosov endomorphism and the semiconjugacy holds between the inverse limits, cf. [23] and [20]. • In the expanding case, i.e. if all the eigenvalues of Af have absolute values larger than 1, the product of these absolute values, equal to |det Af| and hence an integer larger than 1, is at least 2. Therefore h(f) ≥ log 2. In this case, instead of Theorem B, one can refer to Shub's theorem [23] saying that f is semiconjugate to φf. • Finally, if f itself is metric expanding on a compact orientable manifold (i.e. it expands all the distances between points close to each other, at least by a constant factor larger than 1), or at least if f is forward expansive, i.e. ∃δ > 0 such that ∀x ≠ y ∃n ≥ 0 with d(f^n(x), f^n(y)) ≥ δ (as this implies expanding in an appropriate metric, see [21, Section 3.6]), then for its degree d(f) one has immediately h(f) ≥ log |d(f)| ≥ log 2, see [28]. Note that f expanding (in a metric induced by a Riemannian metric) can happen only on infra-nilmanifolds, [9]. • In general h(f) ≥ log |d(f)| for all f being C^1, see [18]. However, the assumption that f is C^1 is essential in the absence of the expanding property; namely, there are easy examples of continuous, but not smooth, maps f for which h(f) < log |deg(f)|.
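To make the Anosov case concrete, the following display works out the bound for one illustrative example that is not taken from the text above: the companion matrix of Smyth's polynomial x^3 − x − 1, which is hyperbolic, has determinant 1, and therefore induces an Anosov automorphism of the 3-torus.

```latex
% Illustrative worked instance (an assumption-free computation, but the
% example itself is chosen by the reader, not by the authors).
% Take A to be the companion matrix of the non-reciprocal polynomial
% w(x) = x^3 - x - 1:
\[
A=\begin{pmatrix}0&0&1\\1&0&1\\0&1&0\end{pmatrix},\qquad
\det A = 1,\qquad \chi_A(x)=x^{3}-x-1 .
\]
% Its roots are \tau_0 \approx 1.3247 and a complex pair of modulus
% \tau_0^{-1/2} \approx 0.8689, so A is hyperbolic and defines an Anosov
% automorphism \varphi_A of T^3. The entropy equals the logarithm of the
% Mahler measure:
\[
h(\varphi_A)=\sum_{|\lambda|>1}\log|\lambda|
            =\log M(\chi_A)=\log\tau_0\approx 0.2812 .
\]
% Since \chi_A is non-reciprocal, this agrees with the Smyth-type lower
% bound h \ge \log\tau_0 discussed above.
```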
2007-05-11T09:42:48.000Z
2007-05-11T00:00:00.000
{ "year": 2008, "sha1": "0868a06a10fec4f23f3dc705c78883cd35168a51", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/dcds.2008.21.501", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "e31e29ccdb65db826075906afa65b4090b945ed3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
115741
pes2o/s2orc
v3-fos-license
Characteristics of mentoring relationships formed by medical students and faculty Background Little is known about the characteristics of mentoring relationships formed between faculty and medical students. Individual mentoring relationships of clinical medical students at Munich Medical School were characterized quantitatively and qualitatively. Methods All students signing up for the mentoring program responded to a questionnaire on their expectations (n = 534). Mentees were asked to give feedback after each of their one-on-one meetings (n = 203). A detailed analysis of the overall mentoring process and its characteristics was performed. For qualitative text analysis, free-text items were analyzed and categorized by two investigators. Quantitative analysis was performed using descriptive statistics and the Wilcoxon test to assess differences in grades between students with and without mentors. Results High-performing students were significantly more likely to participate in the mentoring program (p<0.001). Topics primarily discussed include the mentee's personal goals (65.5%), career planning (59.6%), and experiences abroad (57.6%). Mentees mostly perceived their mentors as counselors (88.9%), providers of ideas (85.0%), and role models (73.3%). Mentees emphasized the positive impact of the mentoring relationship on career planning (77.2%) and research (75.0%). Conclusions Medical students with strong academic performance as defined by their grades are more likely to participate in formal mentoring programs. Mentoring relationships between faculty and medical students are perceived as a mutually satisfying and effective instrument for key issues in medical students' professional development. Practical implications Mentoring relationships are a highly effective means of enhancing the bidirectional flow of information between faculty and medical students. A mentoring program can thus establish a feedback loop enabling the educational institution to swiftly identify and address issues of medical students. Mentoring is increasingly viewed as a key factor contributing to a successful career in academic medicine (1-9). Having a mentor has been found to be vital for facilitating a young medical professional's career advancement and acquisition of clinical and research skills (3-5). In particular, career counseling by mentors leads to an earlier choice in terms of specialty and career by the juniors (10). Also, mentoring increases the odds of participating in research during medical school (11) and correlates with increased research productivity in junior academic physicians (3). In a recent review of the literature, the role of a one-on-one mentor for students pursuing an academic career was highlighted (10). In addition, role models were identified by medical students as an important modality for learning professionalism (12). Lack of mentoring has been identified as a major obstacle hindering career advancement in medicine (13). Furthermore, mentoring and advising enhanced the performance of underrepresented minority students in medical school (14). Despite the importance of mentoring in medical curricula, a cross-sectional study among medical schools in Germany showed that only a limited number of medical students are enrolled in formal mentoring programs and only a small percentage of those receive mentoring in a one-on-one mentoring setting (15). Also, in most other countries a lack of mentoring programs for medical students was observed (5).
Due to a limited number of studies using validated questionnaires on the effects of student mentoring and the confusion about the difference between an advisor, role model, and career mentor (10), there is little understanding of the characteristics of mentoring relationships and their importance for career success. There are very limited data about mentoring relationships involving medical students. In a review of the literature, Frei et al. identified 438 publications relating to mentoring programs, but only 25 of them met the selection criteria for structured programs and student mentoring surveys (10). A cross-sectional study at the University of California, San Francisco (UCSF; 11), found that, in the absence of a formal mentoring program, medical students form mentoring relationships through interactions on clinical clerkships and research rotations. It further showed that in the mentor-mentee relationship, the mentor's role was to provide personal support, role modeling, and career advice. A survey among faculty members and medical students at the Makerere University College of Health Sciences revealed a lack of awareness of the roles of mentors and mentees (16). Others have highlighted different mentoring strategies suitable for different stages of a student's career: while specific, skill-based instruction might be most helpful for new medical students, the role of a more general consultant seems more appropriate to support advanced medical students (17). Many institutions have introduced formal mentoring programs to facilitate the formation of mentoring relationships among medical students (5, 18-20). Such programs provide opportunities for students to find a mentor at the outset of medical school (21). In a preliminary study, we performed a needs analysis survey among all students at the medical school to evaluate the desire for mentoring among medical students (22). The needs analysis showed that despite a high overall satisfaction with the MD program (84.9% positive responses on a 6-point Likert scale), only 36.5% of medical students expressed satisfaction with how the faculty supported their professional development and 86.4% expressed a desire for more personal support. To meet this need for mentoring among medical students, we created a formal program at the LMU Medical Faculty to facilitate the formation of mentoring relationships (22). As aforementioned, little is known about the specific characteristics of mentoring relationships formed by medical students. The topics discussed between medical students and faculty physicians, the role of the mentor, and the impact of mentoring are likely to differ from what has been found for mentoring junior faculty or resident physicians. Therefore, we performed a detailed analysis of the program to characterize the individual mentoring relationships of medical students. In planning our mentoring program, we found the framework by Schapira and colleagues (23) to be inspiring, though it did not meet our expectations completely, as we were looking for a more generic approach to mentoring. So, instead of using this specific framework, we adapted elements and further developed our own variables with respect to mentoring relationships and the perceptions of our mentoring program. We sought to find answers to the following questions: Which students (gender, performance) are more likely to seek a mentoring relationship? What are the expectations of mentees from the mentoring relationships, and what is mentor-mentee interaction effectively about?
What is the mentor's role as perceived by mentees? How do mentors see themselves and the outcome of their mentoring for the development of their mentees? Setting and participants The medical curriculum at LMU Munich consists of two preclinical years followed by four clinical years. Step 1 of the National Board Examination in medicine is taken after the preclinical years. A mentoring program was established at LMU School of Medicine and launched in May 2008 (22). Feasibility considerations regarding the large number of students at this institution resulted in a novel concept, which combines an optional one-on-one mentoring for all students in their clinical years with peer-mentoring societies that provide all students with a network comprising advanced students and physicians willing to share their advice. Participation is entirely voluntary. For the one-on-one mentoring, clinical students and physician mentors are required to complete online matching profiles consisting of 13 items using 6-point Likert scales with regard to professional orientation, work-life priorities, and recreational interests. Based on these profiles, an automated algorithm calculates a weighted correlation score and provides the student with 10 proposals of potential mentors matched by specialty and areas of interest (24). Mentors with three mentees will not be suggested by the matching system, to ensure mentoring quality. The student can then choose a mentor from these proposals (a sketch of this matching step is given at the end of this section). Three hundred and eight out of 2,074 clinical students have thus been matched to personal mentors within 1 year. Students have the opportunity to evaluate and change their mentor at the end of each semester. However, the duration of a mentoring relationship is not limited. Here, we present a detailed analysis of these one-on-one mentoring relationships. Procedure All students signing up for the newly created one-on-one mentoring program were required to complete the online questionnaire addressing their expectations regarding the role of their mentor, the mentoring relationship, and topics to discuss with the future mentor (n = 534, Table 1). In addition, mentees were asked to provide feedback after every personal meeting (n = 203, Table 1; two multiple choice and three free-text items). Feedback questions focused on the duration of the meeting and topics discussed during the meeting. Furthermore, a detailed evaluation of the program was performed at the end of every semester in October 2008 and April 2009 (n = 208 for mentees and n = 66 for mentors, Table 1). Here, mentees were asked to define the roles of their mentors, characterize their mentoring relationships, and judge the impact of mentoring on their academic progress. In addition, mentors were questioned about their perception of the relationships. To further characterize those students who participate in a formal mentoring program, performance at final secondary-school examinations and Step 1 of the German National Board Examination in medicine, as defined by the grades achieved, was compared between students who had chosen a mentor and those who had not. In an online questionnaire sent to all clinical year students (whether or not they had a mentor, n = 2,074), students were asked to voluntarily provide their scores on final secondary-school examinations and Step 1 of the German National Board Examination. Only respondents who had provided both scores were included in the analysis (n = 104 with mentor and n = 356 without mentor, Table 1).
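The matching step described above can be made concrete with a short sketch. The published description specifies only the ingredients, namely 13-item Likert profiles, a weighted correlation score, 10 proposals, and a cap of three mentees per mentor, so the weighting scheme, function names, and data layout below are assumptions rather than the program's actual implementation.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Mentor:
    name: str
    profile: np.ndarray   # 13 Likert items, values 1..6
    mentees: int = 0      # number of currently matched mentees

def weighted_correlation(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> float:
    """Weighted Pearson correlation of two Likert profiles.
    Assumes neither profile is constant (otherwise variance is zero)."""
    ma, mb = np.average(a, weights=w), np.average(b, weights=w)
    cov = np.average((a - ma) * (b - mb), weights=w)
    va = np.average((a - ma) ** 2, weights=w)
    vb = np.average((b - mb) ** 2, weights=w)
    return cov / np.sqrt(va * vb)

def propose_mentors(student: np.ndarray, mentors: list[Mentor],
                    weights: np.ndarray, k: int = 10) -> list[Mentor]:
    """Return up to k mentors ranked by profile similarity; mentors who
    already have three mentees are excluded, mirroring the capacity rule."""
    eligible = [m for m in mentors if m.mentees < 3]
    eligible.sort(key=lambda m: weighted_correlation(student, m.profile, weights),
                  reverse=True)
    return eligible[:k]
```

The student would then pick one mentor from the returned list, which keeps the final choice with the mentee as the program intends.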
All data collected were anonymously saved and processed. Instruments As part of the registration for the mentoring program, students had to complete a web-based survey investigating opinions about the potential mentor's role and topics to be discussed with the mentor, as well as expectations regarding frequency, duration, and mode of mentoring. The categories for mentors' roles and topics discussed were derived from the qualitative analysis of preliminary focus groups. This survey comprised 34 items with 6-level Likert scales, three multiple choice questions, as well as eight free-text items. To further characterize the quality and effectiveness of the mentoring relationships, we used a modified version of the Mentorship Profile Questionnaire and Mentorship Effectiveness Scale developed by the Johns Hopkins University School of Nursing (25). Since some outcome measures proposed in this questionnaire were not applicable for medical students (e.g., grant writing, job promotion), we developed outcome measures suitable for the characteristic situation of medical students. These include positive effects on career planning, research activities, clinical electives, experiences abroad, extra-curricular activities, work-life balance, and preparation for exams. A 6-level bipolar anchor scale was applied for all Likert rating scales, ranging from 1 = not at all to 6 = very much. Thus, no neutral position was provided, to avoid loss of information by central tendency bias (26). No single item was mandatory, as a 'not applicable' option was available to the rater. In addition, we used multiple-choice and free-text items where appropriate. We did not assess reliability or validity of newly created or modified instruments. Bias Acquiescence bias, halo effect, and social desirability response bias may also potentially limit the validity of the results obtained in the analysis. There are no means to entirely exclude acquiescence bias (the tendency to agree with presented statements) and the halo effect (e.g., rating a specific item positive because of an overall positive impression). These biases are not common with Likert scales (27). However, they should be taken into account when drawing conclusions from the data. To minimize the risk of social desirability bias, it was communicated very clearly to respondents that all data would be analyzed anonymously. All data were stored and analyzed using encoded responder IDs. Thus, neither mentors nor mentees nor the investigators themselves had access to an individual's assessment of his or her mentoring relationship. Ethics approval and data privacy The LMU's ethics committee approved the study. All data were collected and stored anonymously using encoded responder IDs. Thus, neither mentors nor mentees nor the investigators had access to an individual's assessment of his or her mentoring relationship. To maintain strict confidentiality while dealing with performance in final secondary-school examinations and Step 1 of the German National Board Examination, an independent faculty official not involved in the administration of the mentoring program related students' exam performance with whether or not they had chosen a mentor. Data analysis For qualitative text analysis, free-text items were analyzed and categorized by two investigators independently.
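The quantitative grade comparison named in the abstract, a Wilcoxon test between students with and without a mentor, can be sketched as follows. The grade vectors here are simulated placeholders rather than study data, and SciPy's two-independent-sample form of the Wilcoxon rank-sum test is exposed as the `mannwhitneyu` function.

```python
import numpy as np
from scipy import stats

# Hypothetical grade vectors (German grades: lower = better); the study
# compared n = 104 students with a mentor vs. n = 356 without.
rng = np.random.default_rng(0)
grades_mentor = rng.normal(2.0, 0.6, size=104).round(1)
grades_no_mentor = rng.normal(2.4, 0.6, size=356).round(1)

# Two-sample Wilcoxon rank-sum test (Mann-Whitney U in SciPy).
stat, p = stats.mannwhitneyu(grades_mentor, grades_no_mentor,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```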
To further characterize students participating in the individual mentoring program, we compared participants to non-participating students regarding their performance in final secondary-school examinations and Step 1 of the German National Board Examination. Students who chose a mentor had a better grade at both their final secondary-school examinations (p < 0.001) and Step 1 of the German National Board Examination (p < 0.001; Fig. 1). Also, 22.5% of the students in their first clinical year had chosen a personal mentor, as compared to 5.4% in the final year (Table 2). After 1 year, the program had enough mentors with completed profiles to offer one-on-one mentoring to 24.7% of all clinical students. Role of mentors Prior to choosing a mentor, future mentees were asked to define what they hoped to be the role of their mentor (n = 534, Table 3). Strongest approval was found for the roles of a counselor (mean 5.5 ± 0.7) and agent for contacts. Free-text analysis confirms these aspects: 'My mentor was very competent and helpful in every issue I raised. She even offered me a research opportunity in her team.' Many mentors illustrated their roles by examples like 'I offered to arrange a clinical elective at Dartmouth Medical School for my mentee.' The relationship also seemed to influence students' attitude toward medical school, as several mentees reported that mentors 'increased [their] motivation for better academic achievements'. Topics discussed in mentoring relationships When signing up for the individual mentoring, students were asked to define which topics they would like to discuss with their future mentors (n = 534, Table 4). Most mentees hoped to discuss personal goals with their mentors (mean 5.2 ± 1.0). A similarly large number of students expected to speak about research/MD thesis (mean 5.2 ± 1.2) and final year electives (mean 5.3 ± 1.1). To evaluate which other topics were discussed in mentoring relationships, we asked mentees to give feedback after every personal meeting with their mentor (n = 203). Here, mentees were found to most frequently seek advice from their mentors about research, including the MD thesis (65.5%), career planning (59.6%), and experiences abroad (57.6%). Note to tables: Completion of an electronic survey with Likert scale and free-text items was mandatory for all students wishing to create matching profiles for the one-on-one mentoring program (n = 534). Students were asked to answer the question 'Which roles do you want your future mentor to adopt?' on 6-level Likert scales ranging from 1 = 'not at all' to 6 = 'very much'. Mean values and standard deviations as well as the frequency of overall positive answers (4-6) are shown. At the end of every semester, mentees were asked to provide an evaluation of their mentoring relationship. Here, students were asked to define 'What has been the role of your mentor?' on 6-level Likert scales ranging from 1 = 'not at all' to 6 = 'very much'. Mean values and standard deviations as well as the frequency of overall positive answers (4-6) are shown. Communication between mentees and mentors While the majority of mentees prior to their matching estimated that two (30.4%) or three (32.6%) meetings with their mentor would be desirable, in reality most mentees met their mentors once (51.4%) or twice (22.6%) in one semester. On average, these meetings lasted 66 ± 44 min. In addition, mentees contacted their mentors twice (24.4%), three to five times (41.3%) or more than five times (18.8%) by email.
The telephone was used as a means of contacting their mentor by 31.6% of mentees. Satisfaction of mentors In the evaluation of the program, we further investigated how mentors perceived the mentoring relationships (n = 66, Table 5). Mentors almost unanimously felt that they had been able to help their mentees (mean 4.6 ± 0.8) and answer their questions (mean 5.1 ± 0.9). Most of them concluded that they had made a difference for their mentees' careers (mean 4.1 ± 1.1). Moreover, analysis of mentors' free-text answers uncovered that, next to social factors like the 'enriching acquaintance with very likeable and motivated students', the mentor-mentee relationship can also provide faculty with helpful feedback and insight into a medical student's development 'by reflection of students' problems especially regarding the choice of a research project and critical discussion of potential weaknesses in supervision and education of students'. Interestingly, only two mentors (3.0%) stated that their mentoring had demanded a disproportionate dedication of time (mean 1.8 ± 0.8). Outcomes of the mentoring relationships Finally, we assessed the self-perceived impact of the one-on-one mentoring on mentees' development (n = 208). Note to tables: Completion of an electronic survey with Likert scale and free-text items was mandatory for all students wishing to create matching profiles for the one-on-one mentoring program (n = 534). Students were asked to answer the question 'Which topics do you wish to discuss with your future mentor?' on 6-level Likert scales ranging from 1 = 'not at all' to 6 = 'very much'. Mean values and standard deviations as well as the frequency of overall positive answers (4-6) are shown. Mentees were asked to give feedback after every personal meeting with their mentor. Here, students were asked to report topics discussed in their meeting. The percentage of mentees who reported discussing a certain topic with their mentor in one semester is shown. *Discussing personal goals for the mentee was defined as indispensable by the program's guidelines. Discussion Mentoring is a key factor for professional success in medicine. While intense research has been performed on mentoring programs for junior faculty physicians and scientists, there are only limited data about mentoring relationships formed between faculty and medical students. Here we briefly present a newly established formal mentoring program with its main characteristics: a novel, computerized algorithm that proposes mentors to mentees based on online matching profiles, with the final choice being made by the student; participation being voluntary both for mentors and mentees; and latitude concerning topics of discussion, number of meetings, and duration of the mentoring relationship. More importantly, we present a detailed characterization of the mentoring relationships formed by medical students within a formal mentoring program. Participation in this voluntary mentoring program varied greatly with students' progress in the curriculum. While in the first clinical semesters around one in four students participated in the program, among final-year students this number was only around one in twenty during the first year of the program. This may indicate that the demand for mentoring decreases with the amount of experience and acquaintances a student has gained in hospitals during the clinical years of study (i.e., mentoring is taking place outside of the program's registry).
However, as mentoring relationships are usually longitudinal and long-lasting, it is to be expected that younger students will continue their relationships with their mentors and their networks throughout their studies. Registered participation for later semesters should therefore rise over time. Interestingly, despite there being close to equal numbers of female and male mentors to choose from in online matching, only about one in five male mentees chose a female mentor. Female students showed no such discernible bias toward their mentors' gender. Our data clearly show that despite the program being offered to all students equally, academically higher-performing students were more likely to participate. In his article about mentoring medical students in academic emergency medicine, Garmel described the main topics for mentors. The most important ones among them were career choice, clinical issues (including interpersonal skills and dealing with difficult situations), research, career satisfaction, and life balance (28). In a previous study of group mentoring for medical students in Germany, the main topics discussed were questions concerning the curriculum and career planning (29). Ninety-eight per cent of mentors at UCSF discussed career planning with their mentees and 60% gave personal advice to them (7). In line with these studies, in our one-on-one mentoring setting we have identified personal goals, career planning, and experiences abroad as the topics most frequently discussed. One additional topic that seems to be very important for our students is research/MD thesis. Of note, participating in research is a prerequisite for obtaining an MD degree (but not for being licensed as a physician) in Germany. In an online survey conducted in October 2009, 99% of medical students at our faculty had already performed research or were planning to engage in research projects during medical school (unpublished data). Therefore, the prominent role of research as a topic in medical students' mentoring relationships might be due to this distinctive feature of medical education in Germany. Based on these data and a review of the literature on definitions of mentoring (22), our program's guidelines defined discussing and establishing short- and long-term goals for the mentee as an essential component of mentoring relationships. In a US study by Aagaard and Hauer, the most common functions of mentors were personal support, role modeling, and career advising (11). This corresponds well with our data, in which mentees most commonly described the role of their mentor as a counselor, provider of ideas, and role model. In a randomized controlled study at the UCLA college program, students enrolled were more satisfied in terms of career planning and opportunities (30). Aagaard and Hauer have emphasized the impact of mentoring on specialty and residency choice (11). We have demonstrated that medical students perceive a particularly strong positive impact of their mentoring relationships on their career planning, research, clinical electives, and experiences abroad. Different definitions of mentoring in formal mentoring programs and wide variation in targeted students, goals of the programs, duration, matching systems, and program structure in the literature make a general characterization of 'the' mentoring relationship between medical students and faculty difficult.
It has been hypothesized that mentoring relationships formed via organized programs are qualitatively different from spontaneous mentoring in intensity, commitment, duration, and structure (17, 19). Indeed, we cannot exclude the possibility that participating in a formal mentoring program influences the shape of the mentoring relationships formed within that program. However, it is likely that this influence is not strong in programs which are voluntary for both mentors and mentees, where mentees are free to choose their mentors and meet them as often as they need. We therefore believe that the results presented here are largely valid even for mentoring relationships formed by medical students outside formal mentoring programs, or at other institutions with different curricula or size. Moreover, our formal mentoring program approach seems to be suitable for medical faculties with a very large number of students. A longer-term evaluation will provide clarity. Limitations Our statistical analysis shows that there is a strong selection bias: students participating in the one-on-one mentoring program had performed significantly better than their non-participating fellows both in their final secondary-school examinations and in Step 1 of the German Medical Board Examination. We conclude that high-performance medical students are more motivated to participate in a formal mentoring program. The reasons for this are unclear: these students might have more time for 'extra-curricular' involvement, such as investing time in a mentoring relationship, because of better time management skills or simply less time needed for studying. Good performance may also lead to a stronger focus on career advancement and therefore to actively seeking contact with faculty through a formal mentoring program. Although the initial goal of the program was to offer mentoring to all medical students, any program that is based on voluntary participation is likely to overrepresent students who share specific characteristics, including high academic performance and an aspiration for a career in academic medicine. This is in line with previous reports that having a mentor strongly correlated with interest in research and academic medicine (11). Though inevitable, this selection bias, together with low response rates among students not participating in the program, limits the generalizability of our findings to the entire population of medical students. Further studies are planned to assess other reasons for not participating in the mentoring program. Practical implications Mentoring relationships are a highly effective means of enhancing the bidirectional flow of information between faculty and medical students. Analyzing the issues discussed in mentoring relationships can provide the faculty with an excellent picture of the questions and challenges students encounter during their time in medical school. Educational institutions can easily use this information to identify and address issues underserved by the current curriculum. For example, the prominent role of MD thesis research in mentoring relationships has prompted our faculty to set up a novel research fair for medical students. A mentoring program could in the future be used as a part of institutional learning, contributing to a feedback loop that enables the faculty to adjust or amend their curriculum according to the needs of medical students.
Conclusion The presented data demonstrate the feasibility of a large-scale one-on-one mentoring program providing hundreds of medical students with suitable mentors. There is some evidence that students with strong academic performance are significantly more likely to choose a personal mentor. However, there is a need to investigate the reasons why students do not participate in mentoring or do not match with a mentor. The role of the mentor identified by survey data is that of a counselor, agent for contacts, and provider of ideas, helping mentees to gain insight and advance in the MD thesis, career planning, and experiences abroad. The key outcomes of mentoring relationships as perceived by medical students are facilitation of their development in the areas of career planning and research, and a close connection to a faculty member who may act as an 'enabler' in terms of clinical electives and experiences abroad.
2016-08-09T08:50:54.084Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "4775ab9243091df57705ae4768a777c415ea26aa", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3402/meo.v17i0.17242", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4775ab9243091df57705ae4768a777c415ea26aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18870269
pes2o/s2orc
v3-fos-license
Functional Polymorphisms of Matrix Metalloproteinases 1 and 9 Genes in Women with Spontaneous Preterm Birth Objective. The aim of this study was to investigate the association of functional MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with spontaneous preterm birth (SPTB; preterm birth with intact membranes) in European Caucasian women, as well as the contribution of these polymorphisms to different clinical features of women with SPTB. Methods and Patients. A case-control study was conducted in 113 women with SPTB and 119 women with term delivery (control group). Genotyping of MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms was performed using the combination of polymerase chain reaction and restriction fragment length polymorphism methods. Results. There were no statistically significant differences in the distribution of either individual or combined genotype and allele frequencies of MMP-1-1607 1G/2G and MMP-9-1562 C/T polymorphisms between women with SPTB and control women. Additionally, these polymorphisms do not contribute to any of the clinical characteristics of women with SPTB, including positive and negative family history of SPTB, gestational age at delivery, and maternal age at delivery, nor to fetal birth weight. Conclusion. We did not find evidence to support the association of MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with SPTB in European Caucasian women. Introduction Preterm birth (PTB) is a common complication of pregnancy and an important perinatal health problem, occurring in 9.6% of all births worldwide [1]. It is defined as childbirth before 37 completed weeks or 259 days of gestation and accounts for almost 70% of neonatal mortality and morbidity [1][2][3]. Additionally, children born prematurely are at an increased risk of long-term health complications [1,3]. Preterm birth can be the consequence of three conditions: preterm premature rupture of membranes (PPROM), medical indications, or preterm labor with intact membranes (spontaneous PTB; SPTB) [3,4]. The latter accounts for almost 50% of all cases of PTB [1,3,4]. Although the causes of SPTB are unknown, epidemiologic data point to a potential contribution of genetic factors [2,3,5,6]. Firstly, women with a personal or family history have an increased risk of PTB compared with women in the general population [3,7]. Additionally, twin studies suggest heritability for PTB ranges from 20% to 40% [8]. Finally, substantial differences have been determined in the rate of PTB across different racial and ethnic groups [9]. The contribution of genetic variability to PTB was evaluated for several candidate genes, divided into two groups. The first group comprises genes encoding products involved in host response to infection and inflammation, whereas the products of genes in the second group participate in extracellular matrix (ECM) remodeling. Although it is unknown whether the initial signal for parturition derives from the fetus or the mother, extensive ECM degradation occurs in fetal membranes, cervix, and decidua during the final weeks of pregnancy, allowing the rupture of fetal membranes, cervix dilatation, and placental detachment from the uterus [10,11]. Degradation of the ECM is controlled by matrix metalloproteinases (MMP), a family of 23 zinc-dependent endopeptidases [12].
The levels of MMPs in the cervix, lower uterine segment, amniotic fluid, fetal membranes, and maternal plasma increase at labor, indicating that their precise spatial and temporal regulation is needed to prevent PTB [11,[13][14][15][16]. Among the MMPs, MMP-1 and MMP-9 have been extensively examined in women with PTB. Most studies reported alterations of MMP-1 and MMP-9 gene expression, in terms of increased levels in serum, amniotic fluid, fetal membranes, cervical fibroblasts, and the cervical mucus plug, in women with PTB compared to women with term delivery [11,[15][16][17][18][19][20]. Although the causes of this altered gene expression are unknown, functional polymorphisms located in the MMP-1 and MMP-9 promoter regions might be a contributing factor. An insertion-deletion polymorphism of a single guanine (1G or 2G) is located at nucleotide -1607 in the MMP-1 gene promoter, and the presence of the additional guanine leads to up to a fourfold increased promoter activity [21,22]. Additionally, a single nucleotide polymorphism at nucleotide -1562 in the MMP-9 gene promoter results from the substitution of cytosine (C) with thymine (T), which increases promoter activity due to the loss of the binding site for an unknown transcription repressor [23]. Two previous studies evaluated the potential role of the MMP-9-1562 C/T gene polymorphism as a factor of predisposition to PPROM in African American and Chinese women, and one study included non-Hispanic white women with PTB, which was not classified into PPROM and SPTB [24][25][26]. However, the association of MMP-1-1607 1G/2G and MMP-9-1562 C/T with SPTB in European Caucasian women has never been tested. Therefore, in view of the important roles of MMP-1 and MMP-9 in the pathogenesis of PTB, the aim of this study was to investigate the association of functional MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with SPTB in European Caucasian women. Furthermore, we analyzed the contribution of these polymorphisms to different clinical features of women with SPTB. Subjects. We conducted a case-control study in order to evaluate the potential association of MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with SPTB. A total of 113 women with SPTB and 119 control women were included in the study. Demographic and clinical data of women with SPTB and their newborn children were collected in accordance with the data set for genetic epidemiology studies of PTB [3]. Data were collected by means of a self-developed questionnaire which was completed by the investigators. All women with PTB had singleton pregnancies following natural conception and spontaneous initiation of PTB before 37 weeks of gestation. Gestational age was determined by the last menstrual period and confirmed by ultrasound in the first trimester. In cases where the estimated gestational age from the last menstrual period and the ultrasound differed by more than 7 days, gestational age was changed according to the ultrasound measurement in the first trimester. The initial study group consisted of 118 women with preterm birth; however, five women with PPROM were excluded from genetic analysis. None of the women had known risk factors for PTB, including diabetes, hypertension, kidney disease, autoimmune conditions, allergic diseases, birth canal infections, in vitro fertilization, and complications of pregnancy. Furthermore, none of the live-born infants had congenital anomalies or evidence of infection. Additional maternal and newborn characteristics are shown in Table 1.
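As a small illustration of the dating rule just described, that is, use the ultrasound estimate whenever it differs from the last-menstrual-period estimate by more than 7 days, here is a minimal sketch; the function names and example values are invented for illustration, not taken from the study.

```python
def gestational_age_days(lmp_based: int, ultrasound_based: int) -> int:
    """Reconcile gestational age per the study's rule: if the estimate
    from the last menstrual period differs from the first-trimester
    ultrasound estimate by more than 7 days, the ultrasound value is
    used. Both arguments are gestational ages in days."""
    if abs(lmp_based - ultrasound_based) > 7:
        return ultrasound_based
    return lmp_based

def is_preterm(ga_days: int) -> bool:
    """Preterm birth: delivery before 37 completed weeks (259 days)."""
    return ga_days < 259

# Example: LMP suggests 230 days, ultrasound 240 days -> use ultrasound.
ga = gestational_age_days(230, 240)
print(ga, is_preterm(ga))   # 240 True
```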
For each woman with SPTB, one woman of the same age and parity with term delivery of a singleton baby after an uncomplicated pregnancy was included in the study. All women from the study and control groups were Caucasians and delivered at the Division of Perinatology, Department of Obstetrics and Gynaecology, University Medical Center Ljubljana, Slovenia. Written informed consent was obtained from all participants. The study was approved by the Slovenian National Medical Ethics Committee. DNA Extraction. Genomic DNA of all women was extracted from peripheral blood leukocytes by a standard procedure using a commercially available kit (Qiagen FlexiGene DNA kit, Qiagen GmbH, Hilden, Germany). Extracted DNA was stored at −20°C. Genotype Analysis. Genotyping of MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms was performed using the combination of polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) methods. Primers, PCR-RFLP reaction conditions, and the expected sizes of PCR products and restriction fragments were described in detail in our previous study [27]. PCR amplification was carried out in a thermal cycler (Mastercycler personal, Eppendorf, Hamburg, Germany). The restriction digestion of PCR products was carried out following the manufacturer's recommended conditions. PCR products and restriction fragments were separated using electrophoresis on 3% agarose gels stained with GelRed (Olerup SSP, Saltsjöbaden, Sweden), and the product bands were visualized under ultraviolet light. Differences in genotype and allele frequencies between women with SPTB and control women were determined using Pearson's chi-square (χ2) test. The association of the MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with SPTB was determined by calculating ORs and their 95% CIs according to different genetic models (dominant, recessive, codominant). The distribution of numerical variables was tested using the Kolmogorov-Smirnov test. One-way analysis of variance (ANOVA) was used for comparison of age and fetal birth weight means between MMP-1-1607 1G/2G genotypes, whereas Student's t-test was used for comparison of age and fetal birth weight means between MMP-9-1562 C/T genotypes. Statistical significance was set at p values < 0.05. Results Genotype distributions in women with SPTB and control women were in Hardy-Weinberg equilibrium for both the MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms (data not shown). Our study had 100% and 90% power to detect a twofold increase in the MMP-1-1607 1G allele and the MMP-9-1562 T allele, respectively. The distributions of MMP-1-1607 1G/2G and MMP-9-1562 C/T genotype and allele frequencies in women with SPTB and control women are shown in Table 2. The association between the two polymorphisms and the risk of SPTB according to dominant, recessive, and codominant genetic models is shown in Table 3. There were no statistically significant differences in the distribution of genotype and allele frequencies of either polymorphism between women with SPTB and control women. Additionally, there was no association between MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes and alleles and the risk of SPTB under any genetic model. Finally, no significant differences were observed in the distribution of any combination of MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes between women with SPTB and control women (data not shown).
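The association analysis just described, a chi-square comparison of genotype frequencies plus odds ratios with 95% CIs under different genetic models, can be sketched as follows. The genotype counts, the variable names, and the choice of a dominant model here are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical genotype counts (rows: cases, controls; cols: CC, CT, TT).
table = np.array([[85, 24, 4],
                  [90, 26, 3]])

# Overall genotype-frequency comparison (Pearson chi-square).
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"genotype chi2 = {chi2:.2f}, p = {p:.3f}")

# Dominant model: carriers of the T allele (CT + TT) vs. CC.
a = table[0, 1] + table[0, 2]   # exposed cases
b = table[0, 0]                 # unexposed cases
c = table[1, 1] + table[1, 2]   # exposed controls
d = table[1, 0]                 # unexposed controls

or_ = (a * d) / (b * c)
se = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
print(f"dominant-model OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The recessive and codominant models follow the same pattern with different groupings of the genotype columns.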
We further evaluated the association between the MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes and alleles and various clinical features of women with SPTB. However, there were no statistically significant differences in the distribution of genotype and allele frequencies of either polymorphism between women with a positive and negative family history of SPTB (Tables 4 and 5), nor between women according to gestational age at delivery (Tables 6 and 7). Furthermore, there were no statistically significant differences between MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes and maternal age at delivery (p = 0.856 and p = 0.807, respectively; full data not shown) nor fetal birth weight (p = 0.850 and p = 0.612, respectively; full data not shown). Discussion In the present study, we investigated for the first time whether there was an association of the functional MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with SPTB in European Caucasian women. Differences in the distribution of individual and combined genotype and allele frequencies of these polymorphisms between women with SPTB and control women did not reach statistical significance. Moreover, there was no association between MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes and alleles and the risk of SPTB according to dominant, recessive, and codominant genetic models. To the best of our knowledge, the association between SPTB in European Caucasian women and the MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms was not previously investigated. However, two studies analyzed the association of the MMP-9-1562 C/T gene polymorphism with PPROM in African American and Chinese women [24,25]. Another study was performed in non-Hispanic white women with PTB, which was not classified into PPROM and SPTB, therefore not allowing an adequate comparison with our results [26]. Chinese women carrying the CT and TT genotypes had a 5.31-fold increased risk of PTB compared with those carrying the CC genotype (95% CI = 1.07-26.44) [25]. Similarly, the MMP-9-1562 C/T gene polymorphism was associated with PTB in non-Hispanic white women; however, the authors did not specify which genotype is the potential risk genotype [26]. In contrast, this polymorphism was not associated with PPROM in African American women, which is comparable to our results in European Caucasian women with SPTB [24]. Although no previous studies investigated the potential role of the MMP-1-1607 1G/2G gene polymorphism in SPTB in women, a significant association was found between the 2G allele and PPROM in the offspring of African American women with PTB [21]. Furthermore, the MMP-1 gene promoter containing the 2G allele had a twofold increased activity compared with the 1G allele in amnion cells, indicating that it could be a risk factor for PTB. Another aim of this study was to determine the contribution of MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms to different clinical features of women with SPTB, which could alter the risk of SPTB. First, we analyzed the distribution of genotype and allele frequencies in women whose mother and/or sibling(s) had SPTB and in those with a negative family history. It is well established that women with a positive family history have an increased risk of PTB compared with women in the general population [3,7,9]. Also, women who were themselves born prematurely have an increased risk of PTB [7].
In the present study, we did not determine any statistically significant differences in MMP-1-1607 1G/2G and MMP-9-1562 C/T genotype and allele frequencies between women with a positive and negative family history of SPTB. Second, we investigated whether the genotype and allele frequencies of the MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms differ between women according to different gestational age at delivery. PTB is usually classified into extreme (<28 weeks), severe (28-31 weeks), moderate (32-33 weeks), and near term (34-37 weeks) [28]. Due to our sample size, we classified women into two categories, moderate preterm (34-37 weeks) and severe preterm (<33 weeks), but there were no statistically significant differences in the distribution of MMP-1-1607 1G/2G and MMP-9-1562 C/T genotype and allele frequencies between these two SPTB subgroups. Third, older maternal age is associated with PTB, although it is unknown whether maternal age is an independent risk factor for PTB or a risk marker that influences PTB in association with other risk factors [29]. However, in this study, the mean age at delivery did not differ between individual MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes in women with SPTB. Finally, we compared the mean birth weight between individual maternal MMP-1-1607 1G/2G and MMP-9-1562 C/T genotypes but found no differences between maternal genotypes and offspring size at birth. Although the results of our study indicate a lack of association between the MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms and SPTB, further studies are needed to evaluate the role of these, as well as other, MMP gene polymorphisms in different populations. The onset of labor involves a sequence of events that are the consequence of ECM degradation, including softening and ripening of the cervix, weakening of the fetal membranes, and uterine contractions [10,13]. The MMPs have a crucial role in all of these processes, MMP-1 enabling the first step of fibrillar collagen cleavage, after which other MMPs, including MMP-2 and MMP-9, further degrade the collagen fragments [30]. During normal gestation, MMP-1 and MMP-9 are found in the amniotic fluid and fetal membranes, and their levels of expression increase during normal and preterm birth, favoring ECM degradation [13-15, 31, 32]. This increase in MMP-9 gene expression possibly contributes to ECM degradation in the fetal membranes and placenta, facilitating fetal membrane rupture and placental detachment at labor [15]. The highest enzymatic activity of MMP-9 occurs at the contact region of the fetal and maternal parts, indicating the importance of MMP-9 in separation of the placenta from the uterus during delivery [33]. Additionally, placental chorionic villus genes affect the initiation of parturition through altered processing of cell surface molecules by MMP-1 [34]. There are several limitations to this case-control study. For example, we analyzed only maternal genotypes, which does not allow us to draw conclusions on the fetal contribution to SPTB or on the potential interaction between maternal and fetal factors in pregnancy outcome. Another limitation is the relatively small sample size. Nevertheless, this study has substantial strengths, such as the inclusion of women with spontaneous PTB only, which according to guidelines for genetic research of PTB increases the homogeneity of the study group and offers increased sensitivity to detect differences in genetic epidemiology studies of PTB [3].
Moreover, we included only women with a standard clinically defined SPTB, had sufficient statistical power, and used peripheral blood samples for DNA analysis. Conclusions In conclusion, we did not find evidence to support an association of the MMP-1-1607 1G/2G and MMP-9-1562 C/T gene polymorphisms with SPTB in European Caucasian women.
2016-05-04T20:20:58.661Z
2014-10-28T00:00:00.000
{ "year": 2014, "sha1": "6b76ccbd10e2e9f59683eb9bf828ef84610a90bd", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/dm/2014/171036.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e15197e04169b9072b605bbbfbc7b2f376fd86a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17114268
pes2o/s2orc
v3-fos-license
Real-Time PCR for detection of herpes simplex virus without nucleic acid extraction Background The speed and sensitivity of real-time polymerase chain reaction (PCR) have made it a popular method for the detection of microbiological agents in both research and clinical specimens. For the detection and genotyping of herpes simplex virus (HSV) in clinical specimens, real-time PCR has proven to be faster, more sensitive, and safer than earlier methods, which included isolation of the virus in cell culture followed by immunofluorescence microscopy. While PCR-based assays for HSV detection possess clear advantages over these earlier techniques, certain aspects of the PCR method remain onerous. The process of extraction and purification of nucleic acid from clinical specimens prior to PCR is particularly cumbersome. Nucleic acid extraction is expensive, time-consuming, and provides a step whereby specimens can become contaminated prior to their analysis. Herein, we investigate the necessity of nucleic acid extraction from swab-based clinical specimens for HSV detection by real-time PCR. We find that nucleic acid extraction is unnecessary for specific and sensitive detection of HSV in clinical specimens using real-time PCR. Methods Prospective (n = 36) and retrospective (n = 21) clinical specimens from various anatomical sites were analyzed for the presence of herpes simplex virus 1 or 2 by real-time PCR using the RealArt HSV 1/2 LC PCR Kit. Specimens were analyzed by PCR both before and following automated nucleic acid extraction. PCR using extracted and unextracted specimens was also compared to cell culture as a means of detecting HSV. Results Detection of HSV 1/2 DNA in clinical specimens by real-time PCR did not require that the specimen be subjected to nucleic acid extraction/purification prior to analysis. Each specimen that was detectable by real-time PCR when analyzed in the extracted form was also detectable when analyzed in the unextracted form using the methods herein. The limit of detection of HSV-1 and HSV-2 particles when analyzed in the unextracted form was found to be approximately 17 and 32 virus particles, respectively, compared to a sensitivity of 10 copies for analysis of purified DNA. Omission of the nucleic acid extraction step reduced both the time and cost of the assay. Conclusion Omission of the nucleic acid extraction step prior to real-time PCR for detection of herpes simplex virus resulted in a more rapid and cost-effective assay, with little impact upon the sensitivity of detection. Background Reliable methods for detection and sub-typing of HSV infections have included enzyme-linked immunosorbent assay (ELISA), immunofluorescence microscopy (IFA), and virus isolation by cell culture. While each of these methods has been very useful in assisting clinical diagnosis, time and technological progress have revealed the limitations of these assays. All three assays are laborious and time consuming, with cell culture often requiring as long as seven days before results are obtained. The sensitivities of these techniques have also been questioned, particularly in reference to more recent methodologies, such as polymerase chain reaction (PCR). The advent of real-time PCR for sensitive and rapid detection of nucleic acid sequences has had a significant impact upon detection of infectious disease agents.
Many laboratories, including our own, have adopted real-time PCR as the primary method for detection of HSV due to the speed, sensitivity and relative lack of complexity of the real-time PCR method [1][2][3][4]. Typically, specimens analyzed by real-time PCR must first be processed in such a way that nucleic acid is extracted and purified from the clinical specimen. The extracted nucleic acid is used as a reactant in PCR to determine if the DNA sequences of interest (i.e. an infectious agent) are present. Extractions of DNA are deemed necessary due to both the assumption and the empirical observation that the efficiency of PCR chemistry can be negatively affected by constituents of biological specimens. While it is true that gross contamination of nucleic acid specimens with biological and chemical factors can inhibit PCR, there are few if any reliable trends which describe such inhibition. Polymerase chain reactions require evaluation on a case-by-case basis to determine their efficiency. Herein, we show that HSV specimens (swabs diluted in a widely-used, commercially available viral transport buffer) are capable of being analyzed by PCR in the absence of any purification or extraction of nucleic acid. Performing PCR on crude specimens does not require any sacrifice of specificity and requires only a minor sacrifice of assay sensitivity.

Methods

Specimens (n = 36) considered for possible HSV infection were collected from outpatients of the STD clinic during October 2005. Specimens were taken by swabbing of lesions, rashes or ulcers from various anatomical sites, including genital (male and female), rectal (male) and facial (male). Swabs were placed into 2 ml of either Cellmatics or Universal Transport Kit buffer (Becton Dickinson, Sparks, MD). Specimens were refrigerated at 4°C until analyzed, and subsequently frozen at -35°C. Retrospective specimens (n = 21) taken between June 2005 and October 2005 and stored at -35°C were also chosen for analysis. Retrospective specimens were chosen from males (n = 13) and females (n = 8), from various anatomical sites (genital, rectal, facial). Aliquots (1 ml) of all prospective and retrospective specimens were combined with A549 cells (Viromed Laboratories, Minnetonka, MN) in shell vials and placed at 37°C. Cultures were visualized 24, 48, 72 and 168 hours after initiation for determination of the presence or absence of cytopathic effect. If cytopathic effect was noted within a cultured specimen, cells from that culture were harvested and smeared onto glass slides prior to being fixed and subjected to immunofluorescent microscopy using the PathoDx Herpes Typing Kit (Remel, Lenexa, KS) in order to confirm the detection of HSV and to determine HSV type (1 or 2).

For detection of HSV by real-time PCR, samples (200 μl) of each clinical specimen were combined with 200 μl of MagNAPure LC Lysis Buffer and subjected to automated nucleic acid extraction using a MagNAPure LC (Roche Diagnostics, Indianapolis, IN) programmed for the "Total Nucleic Acid Extraction Kit I" protocol with external lysis. The final elution volume of each sample at the conclusion of nucleic acid extraction was 50 μl. Specimens were either analyzed by PCR immediately following extraction, or were stored at -35°C. PCR for the detection of HSV 1/2 DNA was carried out using the RealArt HSV 1/2 LC PCR Kit (Qiagen, Germantown, MD). Reactions were set up and performed according to the manufacturer's instructions.
In cases of extracted specimens, 5 μl of extracted sample was added to 15 μl of PCR Master Mix and 0.5 μl of internal control DNA. For unextracted samples, 1 μl of clinical specimen was combined with 4 μl of deionized water, 0.5 μl of internal control DNA and 15 μl of PCR Master Mix. All reactions were performed in a LightCycler 2.0 (Roche, Indianapolis, IN). Data analysis was carried out using LightCycler 4.0 software, with the criterion for positive detection of HSV being designated as any specimen having a crossing point (CP value) less than 30 (using the 640 nm/back 530 nm channel for analysis). This CP value was chosen as follows: based on our laboratory results, 10 purified HSV-2 DNA copies were found to be detectable 100% of the time (4/4 attempts in one experiment), with the highest CP value being 26.77. Five copies were not detectable (0/4 attempts in two experiments). Results were similar for HSV-1, with 25.53 being the highest CP for detection of 10 purified DNA copies. Hence, for purposes of this work, we set the upper boundary for calling an HSV specimen positive at approximately three crossing points higher than 26.77 (to a CP of 30) to account for the possibility of delays in amplification caused by potential impurities when unextracted clinical specimens are analyzed. Specimens with crossing points greater than 30 were considered negative for HSV.

In accordance with the Code of Federal Regulations Title 45 Part 46, this work is exempt from human subjects review as this research involved the study of diagnostic specimens in a manner that patients cannot be identified either directly or through identifiers linked to the specimens.
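As a brief aside, the specimen-calling logic just described (positive below a CP of 30, negative above it, and invalid when the internal control fails to amplify; see the internal-control results below) can be summarized in a minimal sketch. The function and variable names here are illustrative, not part of the LightCycler software:

```python
HSV_CP_CUTOFF = 30.0  # upper CP boundary for a positive call (640 nm channel)

def call_specimen(hsv_cp, internal_control_amplified):
    """hsv_cp: crossing point of the HSV reaction, or None if no amplification.
    internal_control_amplified: True if the IC reaction showed exponential growth."""
    if hsv_cp is not None and hsv_cp < HSV_CP_CUTOFF:
        return "HSV positive"
    if internal_control_amplified:
        return "HSV negative"
    return "invalid - request a new specimen"

print(call_specimen(26.59, True))    # HSV positive
print(call_specimen(None, True))     # HSV negative
print(call_specimen(None, False))    # invalid - request a new specimen
```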
Determining the feasibility of efficient detection of HSV DNA by real-time PCR on untreated clinical specimens

Having established real-time PCR within our laboratory as the method of choice for detection of HSV in clinical specimens, we sought to explore the temporal efficiency of our real-time PCR procedure. We found that approximately 50% (2 hrs) of the total time required to execute the assay procedure was spent on the process of extraction of nucleic acid from clinical specimens. We investigated whether it would be feasible to detect HSV DNA in crude (unextracted) clinical specimens using the same real-time PCR reagents and methods currently utilized in our laboratory. We hypothesized that the diluted nature of the swab specimens that we regularly analyze, along with the typical lack of any gross contamination of the viral transport buffers, would allow specimens to be analyzed directly by PCR. Also considered in this hypothesis was the fact that the first 10 minutes of our PCR procedure included a 95°C denaturation step, which might allow for adequate dissociation of viral nucleic acid from other viral and host components. We explored the feasibility of real-time PCR for the detection of HSV in unextracted clinical specimens by analyzing three specimens which had recently been detected and typed in our laboratory. The three clinical specimens included a positive HSV-1, a positive HSV-2 and a negative HSV specimen (as determined by cell culture and IFA). These three specimens (200 μl each) were subjected to automated nucleic acid extraction with a 50 μl elution volume per specimen. Extracted specimens (5 μl) were then analyzed by HSV 1/2-specific real-time PCR. Simultaneous to this, samples of those same three clinical specimens were also analyzed by real-time PCR, using 5, 2.5 and 1 μl of crude, unextracted specimen combined with water (if necessary) to achieve a final volume of 5 μl.

As shown in Figure 1, the amplification curves for 5 μl of extracted specimens were nearly identical to the curves generated when either 1 μl or 2.5 μl of crude specimen was analyzed. This finding was true for both the HSV-1 and HSV-2 specimens tested (Figures 1A, 1B). The use of 5 μl of unextracted clinical specimen did not appear to significantly alter the crossing points of either specimen relative to extracted sample. However, the use of 5 μl of unextracted specimen did have an impact on some aspect of the amplification or detection process, as such curves were jagged, with much lower maximum fluorescence. The known negative clinical specimen did not show any amplification when 5, 2.5 or 1 μl of crude, unextracted specimen was subjected to PCR, matching the extracted version of the same specimen (Figure 1C). Included in all real-time PCR reactions was an internal control which utilized the same primers, but a different probe than those used to detect HSV (the probes for these internal controls emit light at a wavelength of 705 nm) (Figures 1D, 1E and 1F). The internal control reactions functioned properly (i.e. they showed exponential amplification) for all specimens shown in Figures 1A, 1B and 1C when either 1 or 2.5 μl of specimen was analyzed. However, in each case where 5 μl of crude clinical sample was analyzed, the internal controls either failed to amplify efficiently (Figures 1D, 1E) or did not amplify at all (Figure 1F), indicating that the fundamental chemistry of PCR was negatively affected by something in the crude specimen, but that a significant amount of the crude specimen was required to be added to the reaction for such a negative impact to occur. In the cases where HSV-1 and HSV-2 clinical specimens were tested (Figures 1D and 1E), amplification of the internal control was merely delayed when 5 μl of clinical specimen was used. Hence, these specimens would still have been considered valid specimens for analysis by the testing protocol utilized herein. However, in the case of the negative clinical specimen in which 5 μl of raw specimen was tested (Figure 1F), amplification of the internal control was completely inhibited. Such a specimen would not have been considered 'negative' for HSV. Rather, this specimen would have been considered invalid, and an additional clinical specimen from the patient would have been requested.

Evaluation of PCR performance with extracted and unextracted prospective and retrospective specimens

With preliminary evidence that crude, unextracted clinical swab specimens, when used in the proper amounts, are adequate for direct PCR analysis, we sought to determine the repeatability of this finding. Consecutive prospective specimens submitted to our laboratory for HSV testing (n = 36) were subjected to attempted isolation by cell culture. All cultures with evidence of cytopathic effect were subsequently tested by immunofluorescence assay (IFA) for typing. Simultaneous to those tests, specimens were subjected to HSV-specific real-time PCR in either extracted (5 μl) or unextracted (1 μl) form.
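For reference, the two reaction setups described in the Methods can be laid out explicitly. This is a minimal sketch for batching shared reagents with an assumed 10% pipetting overage; the overage and the helper names are ours, not from the RealArt kit documentation:

```python
# Per-reaction volumes (ul) as described in the Methods section
EXTRACTED   = {"template": 5.0, "water": 0.0, "internal_control": 0.5, "master_mix": 15.0}
UNEXTRACTED = {"template": 1.0, "water": 4.0, "internal_control": 0.5, "master_mix": 15.0}

def batch_volumes(recipe, n_reactions, overage=1.10):
    """Total volume (ul) of each shared component for a run of n reactions
    (the specimen/template is added per tube, so it is excluded)."""
    return {name: round(vol * n_reactions * overage, 1)
            for name, vol in recipe.items() if name != "template"}

print(batch_volumes(UNEXTRACTED, 24))
# {'water': 105.6, 'internal_control': 13.2, 'master_mix': 396.0}
```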
Although both 1 μl and 2.5 μl of crude specimen had performed adequately in PCR in our initial feasibility study, as shown in Figures 1A and 1B, we chose to use 1 μl for the remainder of the study because such a volume would carry over a smaller amount of any potentially inhibiting factors in the clinical specimen. As shown in Table 1 [see Additional File 1], using a crossing point of 30 as the limit of detection, the ability to detect HSV DNA (either type 1 or type 2) in clinical specimens was perfectly concordant for extracted and unextracted specimens. The genetic typing of positive specimens as either HSV-1 or HSV-2 was also 100% concordant between extracted and unextracted specimens. One specimen determined to be undetectable by the method of viral culture was found to be positive by PCR whether the reaction was performed on extracted or crude samples of that specimen. (Of note, the reference trace included in Figure 1C was 10,000 copies of a purified plasmid containing the HSV-2 DNA target fragment provided by the real-time PCR kit.) These findings reinforce that nucleic acid extraction is not necessary when analyzing clinical specimens for the presence of HSV DNA by PCR. These data also indicate that PCR can still be a more sensitive method than viral culture as a means of HSV detection, whether or not the tested specimen is subjected to nucleic acid extraction prior to analysis. All specimens found to be negative by real-time PCR possessed exponential amplification curves for internal control PCR (data not shown).

To determine whether the physiological source of the clinical specimen affects whether extraction is necessary for PCR analysis, we extended our analysis to include 21 retrospectively evaluated specimens. Specimens were selected so that a range of samples from various anatomical sites, from both sexes, would be represented. In addition, one of the chosen retrospective specimens was selected because it had previously been found to be negative by cell culture but positive by PCR (using extraction) in our laboratory. This specimen was analyzed in order to determine whether the improved sensitivity demonstrated by PCR over cell culture using extracted specimens could be maintained when the PCR was performed using unextracted specimens. As shown in Table 2 [see Additional File 2], results for PCR testing of extracted and unextracted versions of all retrospective specimens indicated that both forms of the specimens were detectable. One specimen which was previously determined to be negative by way of cell culture and positive by real-time PCR was found to be positive by PCR whether or not the specimen was subjected to extraction. All specimens found to be negative by real-time PCR possessed exponential amplification curves for internal control PCR (data not shown). These data confirm that extraction of nucleic acid from clinical HSV specimens is not necessary prior to PCR detection, and that the enhanced sensitivity of PCR over cell culture for HSV detection is at least not completely sacrificed when the extraction step is bypassed. Moreover, these data indicate that clinical specimens taken from a variety of anatomical sites may be subjected to PCR without prior nucleic acid extraction.

Comparison of the sensitivities of PCR using extracted and unextracted specimens

On a qualitative basis, the data in Tables 1 and 2 [see Additional Files 1 and 2] show 100% concordance of PCR results for extracted and unextracted clinical specimens.
However, inspection of the crossing point values of each analyzed specimen reveals a trend of disparity between extracted and unextracted specimens. For prospectively analyzed HSV-2 specimens, the average crossing point for extracted specimens was 14.69, while the average crossing point for the same specimens analyzed in unextracted form was 16.26 (a difference of 1.57). Similarly, for prospective HSV-1 specimens, the averages for extracted and unextracted specimens were 15.68 and 17.17, respectively (a difference of 1.49). These differences indicate that specimens analyzed in the extracted form are more readily detected than specimens run in unextracted form. Such a difference would be expected, based on the methodology: during nucleic acid extraction, 200 μl of clinical specimen is lysed, purified, and finally eluted in 50 μl of elution buffer. Hence, assuming that nucleic acid extraction resulted in 100% recovery of HSV DNA, the eluted specimen theoretically contains 4 μl equivalents of original clinical specimen (200 μl original specimen/50 μl eluted specimen) per microliter. When 5 μl of extracted, eluted sample is analyzed by PCR, this correlates to 20 μl equivalents of original clinical specimen. In this study, when the same clinical specimen was analyzed in unextracted form, only 1 μl of original clinical specimen was utilized. This difference in the amount of extracted and unextracted specimen used in PCR implied that the theoretical sensitivity enhancement of HSV PCR using extracted versus unextracted specimens should be at most 20-fold (assuming complete recovery). Others have found, however, that the process of automated nucleic acid extraction we utilized herein results in far less than 100% recovery of nucleic acid from specimens [5]. Hence, we sought to determine the yield of nucleic acid extraction by this automated method in our lab. To do this, we utilized quantified plasmids containing HSV DNA target fragments (provided by the real-time PCR kit), combined with known negative clinical HSV specimens. Such spiked formulations containing known amounts of HSV DNA target fragment were then subjected to nucleic acid extraction, and the eluted samples were quantified by real-time PCR. HSV-2 DNA target fragments (90,000 copies in 200 μl of non-HSV-containing (negative) clinical specimen) were subjected to automated extraction, and were eluted to a final volume of 50 μl. Subsequent quantitative PCR analysis showed that the average yield (recovery) of three extractions was 24.1%. Using this factor, the comparison of sensitivities of extracted and unextracted specimens in PCR was reconsidered: in the context of this work, using 5 μl of extracted specimen was actually the equivalent of analyzing only 4.8 μl of original specimen. Hence, the sensitivity of PCR for HSV detection using extracted specimens is only approximately 5-fold greater than the sensitivity of the same PCR using unextracted specimen. In accordance with this, we have routinely found with the RealArt HSV 1/2 LC PCR Kit that a 3-cycle crossing point differential correlates to an approximate 10-fold difference in target DNA concentration. This is in agreement with the finding that extracted specimens possess crossing points approximately 1.5 cycles lower than those of their unextracted counterparts.
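The volume-and-yield arithmetic above can be laid out explicitly. The values below are taken from the text; the 10-fold-per-3-cycles relation is the authors' empirical figure for this kit rather than a theoretical constant:

```python
import math

specimen_ul, eluate_ul = 200.0, 50.0   # extraction input and elution volumes
extracted_input_ul = 5.0               # eluate analyzed per reaction
unextracted_input_ul = 1.0             # raw specimen analyzed per reaction
recovery = 0.241                       # measured extraction yield (24.1%)

equiv_per_ul = specimen_ul / eluate_ul            # 4.0 ul-equivalents per ul of eluate
ideal_equiv = extracted_input_ul * equiv_per_ul   # 20.0 ul at 100% recovery
actual_equiv = ideal_equiv * recovery             # ~4.8 ul in practice
fold = actual_equiv / unextracted_input_ul        # ~4.8-fold advantage
print(f"theoretical max {ideal_equiv:.0f}-fold, actual ~{fold:.1f}-fold")

# Consistency check against crossing points (~10-fold per 3 cycles, empirical):
predicted_shift = 3 * math.log10(fold)            # ~2.0 cycles
print(f"predicted CP shift ~{predicted_shift:.1f} cycles (observed ~1.5)")
```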
To further evaluate the sensitivity of PCR on unextracted clinical specimens, we performed real-time PCR analysis on dilutions of previously quantified stocks of patient-derived HSV particles. HSV-1 and HSV-2 stocks containing 3.4 × 10⁷ and 1.02 × 10⁸ virus particles per millilitre, respectively, were subjected to 2-fold serial dilution using an HSV-negative clinical specimen as a diluent. Dilutions were subjected to real-time PCR in the unextracted form to determine the maximum dilution of whole, unextracted virus particles detectable by the assay. For HSV-1, a diluted sample theoretically containing 17 virus particles (1 μl of a 1/2000 dilution) was the maximum detectable dilution, giving a crossing point value of 27.98. For HSV-2, a diluted sample theoretically containing 32 virus particles (1 μl of a 1/3200 dilution) was the maximum detectable dilution, giving a crossing point value of 26.59. Purified HSV-1 and HSV-2 target DNA standards (provided by the kit) were found to be detectable at a minimum level of 10 copies in 4 out of 4 reactions, with the highest detected crossing point values being 26.77 for HSV-2 and 25.53 for HSV-1. These data confirm that there is a sensitivity loss for real-time PCR when HSV is detected in the unextracted form compared to the detection of purified DNA.

Discussion

The data described in this work indicate that real-time PCR for the detection of herpes simplex viruses can be performed without previous extraction and purification of nucleic acid from clinical samples. It is important to emphasize that this assertion is made only in the context of specimens collected by swabs which have been put into contact with anatomical sites and subsequently diluted into a commonly utilized viral transport buffer. Specimens collected in this way probably contain very little physiological debris, and what little debris is carried by the swab is diluted greatly into transport buffer. Also, the first step of this (and many other) PCR protocols involves 10 minutes at 95°C. At this temperature, many bio-molecules will be denatured and solubilised, allowing lipids and protein complexes to dissociate and hence exposing target nucleic acids to the detection chemistry reactants (e.g. primers, probes, enzymes). Moreover, PCR protocols call for only a relatively small amount of such diluted specimen to be placed within the PCR reaction, further diluting out any potential inhibitors. Whether or not PCR can be carried out on non-extracted, non-purified specimens is certainly a matter to be determined empirically, on a case-by-case basis. However, the data provided herein imply that it may very well be worth considering omission of nucleic acid extraction steps in cases where dilution of potential inhibitors takes place during specimen collection, or in cases where the clinically collected specimen is thought to be relatively free of potential inhibitors.

Certain conditions demand that nucleic acid extraction and purification be performed prior to PCR. This is true for protocols involving RNA viruses, where reverse transcription of RNA into DNA must take place prior to PCR. Since such protocols involve reverse transcription steps that take place at relatively lower temperatures (often 48°C) before the denaturation step (95°C or greater), access of enzymes to viral RNA is required. Attempts to perform reverse-transcription PCR on specimens known to contain influenza A virus (an RNA virus) in our laboratory were not successful without first extracting and purifying nucleic acid. Our use of PCR to detect HSV in specimens without nucleic acid extraction was not without some sacrifice in the sensitivity of the assay.
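The detection limits quoted above follow directly from the stock titers and the maximum detectable dilutions; a quick check, using the figures stated in the Results:

```python
stock_per_ml = {"HSV-1": 3.4e7, "HSV-2": 1.02e8}   # virus particles per ml
max_dilution = {"HSV-1": 2000, "HSV-2": 3200}      # last detectable dilution
reaction_input_ul = 1.0                            # raw specimen per reaction

for virus, titer in stock_per_ml.items():
    particles = titer / 1000.0 / max_dilution[virus] * reaction_input_ul
    print(f"{virus}: ~{particles:.0f} particles per reaction")
# HSV-1: ~17, HSV-2: ~32 -- matching the reported limits of detection
```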
Using 1 μl of raw clinical specimen resulted in an approximate 2 to 5-fold reduction in sensitivity. This is corroborated by the study herein involving both prospective and retrospective clinical specimens. In all of the positive clinical specimens detected, the highest crossing point value identified by PCR of unextracted specimens was 24.98, with the vast majority of positive specimens possessing crossing point values less than 22, which correlates to approximately 500 virus particles per microliter in our assay (data not shown). Hence, it appears that the consequential loss of sensitivity of approximately 1.5 crossing point values for analysis of unextracted specimens will very rarely render a positive specimen undetectable by the unextracted real-time PCR method. In this body of data, we identified two low-positive specimens (culture-negative, PCR-positive using extracted specimen samples) that were readily detected by real-time PCR when raw, unextracted specimen samples were analyzed. It should also be noted that in this study we chose to analyze only 1 μl of clinical specimen. This amount was chosen in the interest of generating a conservative estimate of the capabilities of unextracted PCR for HSV, while additionally maintaining a low probability of carrying over a physiologic inhibitor to PCR. Our results showed that as much as 2.5 μl could be analyzed per reaction. If such an amount were used, then the sensitivity difference between PCR on extracted and unextracted specimens might be less than 2-fold. Moreover, the data herein indicate that the use of a crossing point value of 30 as a cut-off for detection of impure specimens may have been unnecessarily high. Noting that the highest crossing point generated for an unextracted specimen was 27.98 for the smallest detected amount of whole virus (17 HSV-1 particles), we intend to consider 28 as a crossing point maximum for assigning positive status to a specimen.

The use of PCR without extensive nucleic acid extraction of specimens is not without precedent. Polymerase chain reaction for the detection of Bordetella pertussis has been reliably achieved using swabs merely agitated in water and heated for ten minutes [6]. Real-time PCR for the detection and quantification of adeno-associated viral vectors was shown to work very well on a routine basis, with an approximate two-fold loss in sensitivity in the ability to detect unextracted AAV particles relative to those subjected to nucleic acid extraction [7]. Other simplifications of the overall process of using PCR to diagnose HSV infections have been shown to be effective. Filen et al. have shown that dry cotton swabs in an empty transport tube are just as effective as those placed in viral transport buffer when used subsequently in real-time PCR [1]. Such findings, combined with those in this work, may work well together toward the establishment of a greatly simplified diagnostic protocol. Simplification of the overall protocol might greatly reduce the possibility of contamination during sample processing, while shortening assay time.

Real-time, quantitative PCR is becoming commonplace in both the clinical and public health laboratory settings. The cost of consumables, reagents and labor that is required to operate a PCR-capable laboratory can be prohibitive. Hence, studies that critically evaluate the relevancy of the individual steps of complex protocols such as PCR may result in modifications to protocols which save time and money, with a minimal sacrifice in assay performance.
Conclusion

Clinical specimens consisting of swabbed lesions thought to be caused by herpes simplex virus (HSV, type 1 or 2) can be analyzed by real-time PCR without prior nucleic acid extraction and purification. Analysis of specimens by real-time PCR without previous extraction/purification of nucleic acid results in an approximate 2 to 5-fold loss in sensitivity, with no discernible loss of specificity. The sensitivity loss which occurs when specimens are analyzed in this way still results in a highly sensitive assay relative to cell culture-based isolation. Moreover, the exclusion of nucleic acid extraction results in considerable savings in both time and money.
Wax-Transferred Hydrophobic CVD Graphene Enables Water-Resistant and Dendrite-Free Lithium Anode toward Long Cycle Li-Air Battery

Abstract

One of the key challenges in achieving a practical lithium-air battery is the poor moisture tolerance of the lithium metal anode. Herein, guided by theoretical modeling, an effective tactic for realizing a water-resistant Li anode is reported: a wax-assisted transfer protocol is implemented to passivate the Li surface with an inert, high-quality chemical vapor deposition (CVD) graphene layer. This electrically conductive and mechanically robust graphene coating serves as an artificial solid/electrolyte interphase (SEI), guiding homogeneous Li plating/stripping, suppressing dendrite and "dead" Li formation, as well as passivating the Li surface from moisture erosion and side reactions. Consequently, lithium-air batteries fabricated with the passivated Li anodes demonstrate a superb cycling performance up to 2300 h (230 cycles at 1000 mAh g⁻¹, 200 mA g⁻¹). More strikingly, the anode recycled thereafter can be recoupled with a fresh cathode to run continuously for another 400 hours. Comprehensive time-lapse and ex situ microscopic and spectroscopic investigations are further carried out to elucidate the fundamentals behind the extraordinary air and electrochemical stability.

Introduction

Rechargeable Li-air battery is one of the most tempting electrochemical energy storage solutions due to its ultrahigh theoretical energy density (3500 Wh kg⁻¹), tenfold higher than that of the state-of-the-art Li-ion battery today, and the elimination of reactant tanks such as those required by fuel cells and flow batteries. [1] Practical implementation, however, is impeded by the instability of the Li metal anode, most notably dendrite growth and the formation of "dead" Li. [10] Numerous efforts have been devoted to stabilizing the lithium anodes, including the modification of electrolyte with SEI-stabilizing additives, [11] construction of artificial SEI and protection layers, [12] hosting of Li metal in conductive and lithiophilic 3D scaffolds, [13] and adoption of solid inorganic/polymer electrolytes or spacers. [14] Among the various tactics, the passivation of the lithium metal surface by 2D materials has attracted particular attention by efficiently mediating Li plating/stripping and suppressing dendrite formation. [15] For instance, a Langmuir-Blodgett artificial SEI was constructed with phosphate-functionalized reduced graphene oxides (rGO) to achieve stable operation of Li‖nickel cobalt manganese oxide batteries with minimized n/p ratio. [15b] 2D MoS₂ was directly sputtered onto Li metal, serving as a protective layer, and greatly improved the performance of Li-S batteries. [15c] Through solvent evaporation-assisted self-assembly, our group successfully passivated the Li anodes with a mosaic rGO layer, leading to superb cycling performance of Li-sulfurized polyacrylonitrile cells. [15d] Nonetheless, despite the great progress achieved by implementing these 2D protection layers, which are mostly hydrophilic and defective, there has been no demonstration of water/moisture resistance, which is highly desired by applications such as Li-air batteries with an open-cell configuration, as a tiny amount of H₂O could result in severe corrosion of the Li anode leading to significant performance deterioration. [16] Herein, guided by physical modeling, we report a facile but efficacious method for realizing water-resistant Li anodes by implementing a wax-assisted transfer protocol to passivate the Li surface with high-quality CVD graphene films.
Serving as an artificial SEI, the conductive and robust graphene coating can effectively dissipate local surface charges, homogenize Li deposition, suppress dendrite growth, and protect the Li surface from parasitic reactions with organic electrolytes and moisture. As a result, high Coulombic efficiency of Li plating/stripping and long-term cycling reversibility were witnessed in both half and symmetric cells. Li-air batteries fabricated with the protected Li metal anodes demonstrated an impressive long cycling of 2300 h, and even more strikingly, the recycled anode can be further recoupled with a fresh cathode and continuously operate for extended hours. To help understand the performance enhancement brought by the CVD graphene protection layer, a full spectrum of microscopic and spectroscopic techniques was exploited for time-lapse and ex situ investigations.

Simulation of Li⁺ Flux on Bare Lithium and Graphene-Coated Lithium

First, the interfacial Li⁺ flux and distribution on bare Li with and without graphene coating was modeled using the COMSOL Multiphysics toolbox. For bare Li, a layer of conventional SEI with a pinhole was constructed on the Li surface, whereas on the graphene-coated Li (gLi), the graphene coating serves as a coherent artificial SEI. The simulation of the Li⁺ flux was based on three key parameters of the electrolyte and SEI: the ionic conductance, the diffusion constant, and the surface roughness. Table S1, Supporting Information, lists the values of Li⁺ ionic conductance and diffusion constant in the electrolyte, SEI, and graphene, according to previously reported values. The roughness factors of the SEI and graphene are set to 2 and 0.5, respectively. As illustrated in Figure 1a, the pinhole induced by SEI rupture during repeated plating and stripping creates a hot spot of Li⁺ flux on the bare Li surface, leading to localized Li deposition that would eventually cause dendrite growth and "dead Li" formation (Figure 1c). In contrast, on the gLi surface the Li⁺ flux is uniform due to the coherent graphene film and its high charge conductance, and generates a thin zone of enhanced Li⁺ concentration at the electrolyte/graphene interface owing to the difference in Li⁺ diffusion constant between the two phases (Figure 1b). Accordingly, the Li⁺ flux inside the graphene coating, albeit still uniform, is lower than that in the electrolyte. Based on the simulation here, it can be postulated that a robust and conductive graphene coating should favor homogeneous Li⁺ deposition, both on and across the graphene layer, and thereby stabilize the Li anode. To validate the modeling results, we next sought experimental evidence by passivating the Li anode surface with high-quality CVD graphene.
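The flux-focusing effect predicted by the model can also be reproduced with a minimal two-dimensional diffusion calculation. The sketch below is illustrative only: it is not the authors' COMSOL model, and all diffusivities and geometry are assumed, dimensionless values. A low-diffusivity film covers a Li surface acting as a perfect Li⁺ sink; a pinhole in the film concentrates the interfacial flux, while a coherent coating keeps it uniform.

```python
import numpy as np

ny, nx, dx = 40, 80, 1.0
D_bulk, D_film = 1.0, 0.01          # assumed relative Li+ diffusivities

def surface_flux(pinhole):
    D = np.full((ny, nx), D_bulk)
    D[:3, :] = D_film               # thin film over the Li surface (row 0)
    if pinhole:
        D[:3, 38:42] = D_bulk       # ruptured spot in the film (bare-Li case)
    c = np.zeros((ny, nx))
    c[-1, :] = 1.0                  # bulk electrolyte held at c = 1 (Dirichlet)
    # face-averaged diffusivities for a flux-conservative explicit update
    Dn = 0.5 * (D[1:-1, 1:-1] + D[2:, 1:-1])
    Ds = 0.5 * (D[1:-1, 1:-1] + D[:-2, 1:-1])
    De = 0.5 * (D[1:-1, 1:-1] + D[1:-1, 2:])
    Dw = 0.5 * (D[1:-1, 1:-1] + D[1:-1, :-2])
    dt = 0.2 * dx**2 / D_bulk       # stable explicit time step
    for _ in range(40000):          # march toward steady state
        core = c[1:-1, 1:-1]
        div = (Dn * (c[2:, 1:-1] - core) - Ds * (core - c[:-2, 1:-1])
             + De * (c[1:-1, 2:] - core) - Dw * (core - c[1:-1, :-2]))
        c[1:-1, 1:-1] += dt / dx**2 * div
        c[:, 0], c[:, -1] = c[:, 1], c[:, -2]   # no-flux side walls
    # Li+ flux into the sink at row 0 (where c = 0)
    return 0.5 * (D[0, :] + D[1, :]) * (c[1, :] - c[0, :]) / dx

J_hole, J_coat = surface_flux(True), surface_flux(False)
print(f"pinhole: max/mean surface flux = {J_hole.max() / J_hole.mean():.1f}")
print(f"coated:  max/mean surface flux = {J_coat.max() / J_coat.mean():.2f}")
# The pinhole produces a pronounced deposition hot spot, while the coherent
# film yields a uniform (and lower) flux, echoing Figure 1a,b.
```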
Figure 2a shows the flowchart for coating both sides of the Li foil with the chemical vapor deposition (CVD)-grown graphene film via a wax-assisted protocol, which was previously developed in our group as a facile alternative to the classic PMMA method for graphene transfer. [17] The notable advantages of our wax method lie in the facilitated sample handling and template removal, owing to the favorable thermal properties and solubility of paraffin. More importantly, the transferred high-quality graphene film with excellent integrity can maintain seamless and conformal contact with the underlying Li to avoid perforation and delamination. Specifications of the graphene coating such as thickness can be readily varied by tuning the CVD growth parameters (Figure S1, Supporting Information). For instance, by adjusting the H₂/CH₄ ratio and the growth time, graphene thin films of ≈200, 100, and 30 nm were obtained (Figure S2, Supporting Information), and correspondingly the coated Li foils are denoted as gLi-200, gLi-100, and gLi-30. Of note, the gLi-100 sample manifested the lowest electrochemical impedance in symmetric cells among all gLi-x and bare-Li samples (Figure S3, Supporting Information), possibly owing to a trade-off between the beneficial surface passivation and escalated Li⁺ diffusion resistance (as suggested by the above modeling). Thus, all the following characterizations and electrochemical assessments will be based on gLi-100.

Fabrication and Characterization of Graphene-Coated Lithium Anodes

First of all, the Raman spectrum taken on the as-grown graphene thin film shows a broad 2D peak and the absence of the defect-associated D band (Figure S4, Supporting Information), indicating the defect-free and highly crystalline nature of the sample, which is further affirmed by the atomic image and electron diffraction pattern (Figure 2b) acquired using high-resolution transmission electron microscopy (HR-TEM). As shown in Figure 2c, the XRD pattern of gLi-100 comprises signatures from both the graphene coating and the underlying Li metal, with the distinct peak at 2θ = 26° corresponding to the (002) planes of multilayer graphene. Furthermore, scanning electron microscope (SEM) images (Figure S5, Supporting Information) of the as-obtained gLi-100 clearly show grain boundaries of graphene, further confirming the successful transfer of the graphene film onto the Li metal surface. Atomic force microscopy (AFM) operated under the peak-force tapping mode measured an average Young's modulus of 31.2 ± 4.3 GPa for gLi-100 (Figure S6 and Table S2, Supporting Information), far exceeding the 4.9 GPa reported for Li metal [18] and the 0.15 GPa reported for a conventional SEI, [19] which should help mechanically stabilize the Li surface. The surface of gLi-100 further shows a water contact angle of 107.2° (Figure 2d), similar to that of the pristine graphene film (105.9°, Figure S7, Supporting Information). Collectively, the above observations clearly show that the CVD graphene thin film has been successfully coated onto the Li foil and that its high hydrophobicity and mechanical strength should help protect the underlying lithium surface. What's more, the wax-transferred graphene coating on Li foil can be easily scaled up due to the facile fabrication process, as demonstrated by the large 95 mm × 25 mm gLi-100 foil shown in Figure S8, Supporting Information. We believe that further engineering efforts should help continuously scale up the gLi fabrication, ultimately making the innovation commercially feasible.

The surface morphology of gLi-100 versus bare Li was further investigated by AFM installed in a glove box. Strikingly, gLi-100 shows a much smoother topography than the bare Li does (Figure 2e,f). A few surface ridges, which are characteristic of the CVD-grown graphene film, can be clearly seen on gLi-100, while the bare Li presents a highly rugged morphology exhibiting many surface pits and protrusions. This difference in surface roughness observed by AFM further validates our physical models with varied roughness factors (Figure 1a,b).
As suggested by the modeling results, when used for Li metal anodes the conductive and smooth surface of gLi-100 should help homogenize the local surface charge density and evenly dissipate the Li⁺ flux by avoiding the "tip effect" (Figure 1c). [20]

Air and Electrochemical Stability

The air stability of the as-prepared gLi-100 anodes was examined by both time-lapse photography and XRD. When exposed to ambient air at room temperature with a relative humidity of 45-60%, the bare Li took on a rusty, dark appearance immediately upon being taken out of the glove box (Figure 3a). After 6 h, the color of the bare Li changed to bluish grey, indicative of severe surface corrosion. By contrast, the surface of gLi-100 maintained an overall unchanged appearance and texture after the same period of exposure, corroborating the great protection of the graphene coating against air moisture. More impressively, the gLi-100 anode can even be tossed into water without causing any notable reaction (Video S1, Supporting Information), whereas bare Li flares up immediately (Video S2, Supporting Information). Time-lapse XRD further shows the gradual emergence of a LiOH peak at 2θ = 33.1° on bare Li (Figure 3b) during the 6 h testing period in air (the humidity of the XRD chamber was controlled at ≈50%), while no such peak was observed for gLi-100 (Figure 3c). The observations made here, in conjunction with the previous water contact angle measurements, endorse the superb air and moisture stability of gLi-100.

Next, the electrochemical stability of gLi-100 was examined with respect to its reactivity with electrolyte and suppression of dendrite growth. For that, symmetric cells of gLi-100 were assembled for time-lapse electrochemical impedance spectroscopy (EIS) and optical microscopy measurements, and the results are compared to those obtained for bare Li. At the rest state under open-circuit potential, the impedance of the bare-Li symmetric cell increased from 140 Ω at 0 h to 570 Ω after 120 h (Figure 3d), which can be ascribed to severe SEI formation due to the spontaneous reaction between Li and electrolyte. In stark contrast, the symmetric gLi-100 cell displays fairly consistent Nyquist plots with a stabilized R_SEI at around 50 Ω during the entire 120 h testing period (Figure 3e). These observations strongly support that the highly crystalline graphene film can effectively passivate the Li surface to suppress parasitic side reactions with the organic electrolyte. In situ time-lapse optical microscopy was carried out to monitor the Li plating behavior on the gLi-100 and bare-Li surfaces (Figure 3f,g). Strikingly, while in no time the bare-Li surface evolved into a highly chaotic and mossy morphology, indicative of severe dendrite and "dead" Li formation, the gLi-100 electrode maintained both a clean surface and cross section without obvious dendrite formation during the entire 30 min plating period. This observation coincides with the smooth AFM topography after plating 5 mAh cm⁻² of Li onto the gLi-100 surface (Figure S9, Supporting Information), whereas the bare-Li surface after being plated with the same amount of Li was too rough to be imaged by AFM. Taken together from the above time-lapse microscopic and spectroscopic studies, we can now conclude that the graphene coating on Li anodes endows not only air and chemical stability, but also helps guide smooth Li deposition with alleviated dendrite growth, just as predicted by the previous modeling results.
Electrochemical Properties of Half- and Symmetric Cells

To inspect the Coulombic efficiency of Li plating/stripping, half-cells with gLi-100 or bare Li serving as the working electrode and Cu foil serving as the counter electrode were assembled. At a current density of 1 mA cm⁻², the gLi-100‖Cu half-cell was able to maintain a Coulombic efficiency above 97% for over 140 cycles, whereas the control Li‖Cu half-cell showed a much inferior cycling stability, exhibiting chaotic Coulombic efficiency fluctuation after only 30 cycles (Figure 4a), which is typically associated with cyclic SEI fracture and repetitive formation/dissolution of dendrite and "dead" Li. [21] A closer examination of the serial plating/stripping curves revealed not only a more reversible cyclic capacity on gLi-100 (Figure S10, Supporting Information), but also much reduced and stabilized charge/discharge hysteresis (Figure S11, Supporting Information). This reduced electrode polarization strongly evidences the facilitated Li plating/stripping across a stabilized SEI enabled by the graphene protection layer.

Upon disassembling the half-cells after 100 cycles, SEM images were taken to compare the bare-Li and gLi-100 surfaces. In the top-view images of bare Li, extensive cracks can be observed on the roughened surface (Figure 4b). Inside the cracks, loose Li structures are clearly visualized (Figure 4c), indicative of excessive "dead" Li, which can be further witnessed from the cross-sectional view showing delaminated and loosely packed morphology (Figure 4d). In stark contrast, the disassembled gLi-100 electrode exhibits a smoother surface comprising island-like Li domains (Figure 4e,f), apart from a dense and intact cross section (Figure 4g). Ex situ AFM was further exploited to reveal the evolution of surface topography and texture in greater detail for both the bare-Li and gLi-100 electrodes. After cycling the Li‖Cu half-cell for 50 cycles, the local morphology of the bare-Li electrode appears to be smoother when compared to the pristine Li surface (Figure S12a, Supporting Information, vs Figure 2e). This is possibly due to the formation of SEI that effectively flattens the local Li topography by filling big surface pits. Nonetheless, its surface roughness is still significantly larger when compared to that of gLi-100 after 50 cycles, exhibiting a highly grained texture (Figure S12b, Supporting Information). After 100 cycles, the granular protuberances on the bare-Li electrode grew into loosely packed larger agglomerates (Figure 4h), echoing the above SEM observation. By contrast, the gLi-100 electrode after 100 cycles still displays an overall flat and smooth surface exhibiting numerous small granules (Figure 4i). In general, our half-cell study above clearly shows the more efficient Li plating/stripping on gLi-100, thanks to the graphene coating facilitating homogeneous Li deposition and suppressing Li dendrite formation. In addition, the domain structure viewed by SEM and the grain texture viewed by AFM suggest that at least a portion of the Li⁺ is deposited atop the graphene layer, which is consistent with the previous simulation.

Symmetric cells assembled with the bare-Li or gLi-100 electrodes were further cycled to interrogate the long-term passivation effect of the CVD graphene layer. Figure 5a presents the galvanostatic cycling profiles of both the bare-Li and gLi-100 symmetric cells under a fixed areal capacity of 1 mAh cm⁻² at 1 mA cm⁻².
It can be seen that the gLi-100 electrode shows a superb cycling stability with a charge/discharge overpotential as low as 10 mV for the entire 600 testing cycles (1200 h, Figure S13, Supporting Information), whereas bare Li manifests a severe voltage hysteresis, with the charge/discharge overpotentials rapidly surging to 200 mV after just 80 cycles (160 h). More impressively, even at a high current density of 5 mA cm⁻², the gLi-100 symmetric cell could still deliver long-term stability for over 300 cycles (120 h) with a stabilized overpotential of less than 80 mV (Figure 5b and Figure S14, Supporting Information), which is, again, far superior to the bare-Li cell exhibiting a short-lived, unstable voltage profile. The greatly improved cycling performance of gLi-100 with lowered voltage hysteresis in symmetric cells can be reasonably attributed to the more conductive and robust graphene coating in substitution of the conventional SEI, effectively mitigating local charge accumulation, parasitic side reactions, as well as cyclic SEI fracture.

To consolidate the above point of view with regard to the electrode/electrolyte interfacial stability, EIS measurements were carried out to monitor the impedance evolution of symmetric cells at various cycling states (under 1 mAh cm⁻² at 1 mA cm⁻², Figure 5c,d). For simplicity and to facilitate comparison, the overall impedance R_overall (i.e., the sum of the internal resistance [R_i], interfacial resistance [R_f], and charge-transfer resistance [R_ct]) is adopted for comparing the cell impedance as a whole, and can be directly read from the Nyquist plot at the end of the semicircle at low frequency. For both the bare-Li and gLi-100 electrodes, the values of R_overall dropped continuously in the first 20 cycles (from 139 to 10 Ω for bare Li and 52 to 4 Ω for gLi-100, Figure 5e), indicating an initial activation and conditioning process. [22] Afterward, R_overall of the bare-Li symmetric cell increased again to 18 Ω at the 50th cycle and further to 39 Ω at the 100th cycle, as a result of the accumulation of the SEI layer due to its repetitive rupture and regeneration during cycling, whereas R_overall for gLi-100 remained mostly unchanged at ≈4 Ω for the rest of the cycles (Figure 5e). These observed trends in impedance evolution are in good agreement with the evolution of voltage hysteresis in Figure 5a, both attesting to the greatly reduced and stabilized electrode polarization on gLi-100 upon cycling, apart from the much-improved charge transfer kinetics. [23] Moreover, the slopes of the Z′ versus ω⁻¹/² curves in Figure S15, Supporting Information, for gLi-100 are generally smaller than those observed for the bare-Li symmetric cell, indicating improved Li⁺ diffusion in the artificial SEI. [24]

Similar to the observations on the gLi-100‖Cu and Li‖Cu half-cells, SEM images taken on the bare-Li and gLi-100 electrodes disassembled from the symmetric cells after 100 cycles reveal a cracked surface morphology of the former, and a homogeneous domain-rich morphology of the latter (Figure S16, Supporting Information). The high-resolution ex situ SEM images in Figures 5f-h and 5i-k clearly illustrate the morphological evolution of the bare-Li and gLi-100 electrodes, respectively, along the cycling process.
The bare-Li electrode after the 1st cycle of stripping and plating displays many surface pits (Figure 5f and Figure S16a, Supporting Information), serving as the preferential nucleation sites for subsequent non-uniform Li deposition (Figure 5g and Figure S16b, Supporting Information). During the successive cycling, the rough surface regions due to uncontrolled Li deposition gradually expand and coalesce, and finally develop into surface cracks comprising excessive Li dendrites and "dead" Li (Figure 5h and Figure S16c, Supporting Information). On the other hand, the Li deposition on the gLi-100 surface is more homogeneous, forming increasingly densely packed Li domains as the cycling goes on (Figure 5i-k and Figure S16d-f, Supporting Information). Once again, these microscopic observations are in good agreement with the previous theoretical modeling, corroborating the better interfacial stability of gLi-100, as well as its lowered and stabilized R_overall values upon cycling.

X-ray photoelectron spectroscopy (XPS) characterization was performed to analyze the composition of the SEI layer on both bare Li and gLi-100 after cycling (Figure S17, Supporting Information). Compared with the C 1s spectrum of the bare-Li electrode, that of gLi-100 exhibits more prominent C─C species from graphene but significantly reduced peak intensities of COR, C═O, and HCO₂Li/COOR, indicating mitigated electrolyte decomposition. In addition, a strong -CF₃ peak is observed for the bare-Li electrode, ascribable to the decomposition of LiTFSI. [25] When comparing the O 1s spectra, higher oxygen contents were observed in the SEI of bare Li, with an extra peak of Li₂O at 528.3 eV, which should come from the decomposition of DOL. [26] Furthermore, in the Li 1s spectra the intensity ratio of LiF on the surface of the gLi-100 electrode is notably smaller than that on the surface of bare Li, further affirming the suppressed electrolyte decomposition. [27] Taken together, it is evident that the CVD-grown graphene can serve as an artificial SEI layer to stabilize the interface and mitigate side reactions.

Demonstration of Lithium-Air Batteries

Encouraged by the remarkable air and electrochemical stability of gLi-100, Li-air batteries were fabricated using gLi-100 as the anode and ruthenium-doped carbon nanotubes (Ru@CNT) as the classic cathode catalyst. The batteries were tested under a fixed capacity of 1000 mAh g⁻¹ at a current density of 200 mA g⁻¹ between 2.2 and 4.6 V (vs Li/Li⁺) in air, with bare Li serving for control studies. To ensure testing reproducibility, compressed cylinder air was used with a fixed water content of ≈5600 ppm, as verified by mass spectrometry (Figure S18, Supporting Information). Figures 6a and 6b display the serial discharge/charge profiles at various cycling states for Li-air batteries comprising the bare-Li and gLi-100 anodes, respectively. For the bare-Li‖Ru@CNT cell, both the discharging and charging profiles deteriorate gradually with increasing cycles. The charging terminal voltage surged to 4.8 V after just 47 cycles, which is considered highly detrimental to the electrolyte stability. By contrast, the gLi-100‖Ru@CNT cell was able to operate steadily over 230 cycles (2300 h) with the charge/discharge curves mostly overlapped. Of note, the cycling stability of the gLi-100 cell is more than five times better than that of the bare-Li cell when compared at the same cut-off terminal voltage (Figure 6c).
Apparently, the graphene protection layer greatly extends the anode lifetime in Li-air batteries by synergistically improving the Coulombic efficiency of Li plating/stripping, suppressing dendrite and "dead" Li formation, and passivating the Li surface from moisture attack. More impressively, when the gLi-100 anode was disassembled from the Li-air battery after running for 230 cycles and recoupled with a fresh Ru@CNT cathode, the new cell could continuously run for another 80 cycles at a fixed cycling capacity of 500 mAh g⁻¹ with a cut-off terminal voltage of 4.2 V (Figure S19, Supporting Information). This experiment strongly suggests that the increased overpotential in the first cycling trial was mainly due to the deactivation of the Ru@CNT cathode, and/or the evaporation and decomposition of electrolyte after prolonged cycling. Apart from the cycling stability, the gLi-100‖Ru@CNT cell also demonstrated superior rate capability when the current density was ramped up from 100 to 1000 mA g⁻¹ and then back to 100 mA g⁻¹, whereas the bare-Li cell failed at 500 mA g⁻¹ (Figure S20, Supporting Information). In virtue of the great passivation effect of the graphene coating, the cycling performance of the gLi-100‖Ru@CNT cell demonstrated here ranks among the best Li-air batteries reported today, and is even superior to many of the state-of-the-art Li-O₂ batteries tested in pure oxygen (Figure 6d). [2,4,7b,9,12b,28-39] More strikingly, to showcase the superb water tolerance of the gLi-100 cell in operation, we immersed the as-fabricated Li-air battery in water and found it could still light up an LED for a short period of time with the pre-perfused air (Figure 6e and Video S3, Supporting Information).

Last, post-mortem and operando characterizations employing SEM, XRD and in situ differential electrochemical mass spectrometry (DEMS) were carried out to seek insights into the performance enhancement brought by the graphene protection layer. As expected, the bare-Li anode disassembled from the Li-air battery after 50 cycles revealed a significantly eroded surface full of gravelly "dead" Li (Figure S21a,b, Supporting Information), whereas the gLi-100 anode retained a relatively smooth and compact surface even after 120 cycles (Figure S21c,d, Supporting Information). The corresponding XRD patterns in Figure 6f revealed that on the bare-Li anode the intensity of the LiOH peaks overwhelms that of the Li metal, while on gLi-100 the peaks of metallic Li remain the prominent feature, corroborating the greatly inhibited moisture erosion and side reactions. This argument is further supported by in situ DEMS measurements, showing less CO₂ evolution during the charging process of the gLi-100 cell when compared to that of the bare-Li cell (Figure S22, Supporting Information). The higher CO₂ evolution from the latter is ascribed to aggravated electrolyte decomposition. Furthermore, by quantifying the O₂ evolution versus charge consumption during the charging process, the numbers of electrons transferred per O₂ molecule were determined to be 2.34 and 2.12 for bare Li and gLi-100, respectively, further attesting to the superior Faradaic efficiency of the latter with suppressed side reactions.
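As a back-of-envelope illustration of the DEMS metric just quoted, the electron count per evolved O₂ molecule follows from n = Q/(F·n_O₂); the charge and gas amounts in this sketch are illustrative placeholders, not values reported in the paper:

```python
F = 96485.0                          # Faraday constant, C per mol of electrons

def electrons_per_o2(charge_C, o2_umol):
    """Electrons transferred per O2 molecule: n = Q / (F * n_O2)."""
    return charge_C / (F * o2_umol * 1e-6)

# e.g. 0.212 C of charge passed while 1.0 umol of O2 evolved:
print(f"{electrons_per_o2(0.212, 1.0):.2f} e- per O2")   # ~2.20
# Values near 2 correspond to the ideal 2e-/O2 Li2O2 chemistry; the excess
# above 2 (2.34 for bare Li vs 2.12 for gLi-100) reflects side reactions.
```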
Conclusion

In summary, a physical model was first constructed to illustrate the more homogeneous Li⁺ flux atop the gLi surface versus bare Li. Then, for experimental validation, a wax-assisted transfer method was implemented to coat Li metal anodes with high-quality CVD-grown graphene films of various thicknesses. The optimized gLi-100 anodes thus fabricated demonstrate superb air and electrochemical stability, as evidenced by time-lapse spectroscopic and microscopic studies, electrochemical and morphological characterizations of half- and symmetric cells, as well as the demonstration of stable Li-air batteries. Strikingly, after an impressive long cycling of 2300 h, the recycled gLi-100 anode can be further recoupled with a fresh cathode and continuously run for extended hours. What's more, the Li anode was protected so well by the CVD graphene layer that it is water-resistant. Relieved from the worry of Li-water contact, the as-fabricated Li-air battery can even be immersed in water and still operate for a short period with pre-perfused air. By constructing a conductive and inert graphene layer to guide homogeneous Li plating/stripping, suppress dendrite and "dead" Li formation, and passivate the Li surface from moisture erosion and side reactions, our work offers a practical solution for protecting the Li anodes in Li-air batteries to afford extraordinary electrochemical performance.

Supporting Information

Supporting Information is available from the Wiley Online Library or from the author.
THE EFFECT OF FOLIAR CALCIUM APPLICATION IN TOMATO (Solanum lycopersicum L.) UNDER DROUGHT STRESS IN GREENHOUSE CONDITIONS

Numerous studies have demonstrated the effects of drought stress on tomato yield. The significant role of minerals in alleviating the adverse effects of abiotic stresses has been documented. Calcium is an essential mineral needed for plant growth and development, and it serves as an intracellular messenger. However, alleviation of drought stress in tomato plants with the application of calcium has rarely been addressed. Therefore, this study was conducted to investigate the effects of foliar application of calcium sulphate (CaSO₄) on drought stress in tomato plants. The experiment was conducted under greenhouse conditions. Tomato plants were sprayed with CaSO₄ solution and exposed to drought stress. The foliar application of CaSO₄ increased magnesium and chlorophyll levels. The results indicated that application of CaSO₄ under drought conditions increased and regulated carbohydrate levels of leaves. Metabolite analysis revealed a beneficial effect of CaSO₄ on drought tolerance. The results indicated that foliar CaSO₄ application is promising, improving mineral nutrition efficiency and thereby conferring higher tolerance to drought stress. Calcium application under drought stress also significantly improved yield.

Introduction

Tomato (Solanum lycopersicum L.) belongs to the Solanaceae family, which contains about 2800 species, and is one of the most widely cultivated and economically important vegetables in the world (Lahoz et al., 2016). The tomato, native to Peru, is an annual vegetable and was introduced to Turkey in the 1900s. Tomato is a warm-season crop and thus requires a warm and mild climate (Gebhardt and Thomas, 2002). Total tomato production of Turkey in 2017 was 12,750,000 tons on an area of 187,070 ha, ranking Turkey the third largest tomato producer in the world (FAO, 2017). The nutritional value of tomato is quite high due to its high vitamin A, B and C, calcium and carotene contents (Bose and Som, 1990). Gebhardt and Thomas (2002) indicated that a medium-size (123 g) tomato contains about 94% water, 26 kcal energy, 1 g protein, 6 g carbohydrate, 1.4 g total fiber, 6 mg Ca, 0.6 mg Fe, 273 mg K, 11 mg Na, 766 IU vitamin A, 0.07 mg thiamine, 0.06 mg riboflavin, 0.8 mg niacin and 23 mg ascorbic acid. Tomato is known as an ideal fleshy fruit model system due to unique characteristics such as ease of growth under different conditions, short life span and simple genetics (Bergougnoux, 2014).

Drought is a major abiotic stress factor causing significant losses in yield and product quality (Bray, 2004; Wang and Frei, 2011; Trenberth et al., 2014). Irregular rainfall distribution patterns and excessive use of water resources to meet the demands of a growing population have increased the frequency and severity of drought events in many regions of the world (Bacon, 2004; Lee, 2007). Drought stress reduces the transport of nutrients in tomato plants (Bauer et al., 1997) and the nutrient uptake of roots (Naeem et al., 2017). Calcium (Ca), a macronutrient, is quite immobile in plants, and Ca uptake by roots under drought conditions is adversely affected by limited access to water (Adams and Ho, 1993; Naeem et al., 2017). Calcium has a vital role in the normal growth and development of plants owing to its important role in maintaining membrane structures, increasing nutrient uptake and activating metabolic processes (Tuna et al., 2007; Sarwat et al., 2013).
In addition, Ca is needed to maintain cell wall integrity and to ensure binding between cells (Marschner, 1995). Calcium also reduces the detrimental effects of stress by regulating antioxidant metabolism (Zorrig et al., 2012; Ahmad et al., 2015). Ca deficiency may therefore cause reductions in fruit quality as well as blossom-end rot and many other physiological disorders (Adams and Ho, 1993). The Ca requirement of plants must be met continuously to sustain healthy leaf and root development (Del-Amor and Marcelis, 2003). Foliar application of fertilizer is the most effective way to improve the nutritional status of plants (Shabbir et al., 2015). The aim of this greenhouse study was to investigate the effects of foliar calcium sulphate (CaSO4) application on the yield and quality of tomato under drought stress.

Materials and Methods

The study was carried out under farmer conditions in Birlik village of Silopi town in Sirnak province, Turkey. The experiment was conducted in a 3000 m2 greenhouse over a 4-month tomato production season, and tomato plants were grown in soil. Seedlings of the Aziz F1 tomato variety were planted on March 3, 2020 (Figure 1: a view from the experiment). The experiment had 4 treatments with 20 plants per replication: 100% irrigation (control), 50% irrigation, 50% irrigation + 1% calcium, and control + 1% calcium. The temperature and average humidity values of the greenhouse were recorded throughout the experiment. Drought stress was applied by modifying the method given in Akhoundnejad and Dasgan (2019). The amount of water used during the experiment is given in Table 1. Calcium sulphate (CaSO4) was used as the source of Ca, and a 1% solution was sprayed on leaves using a 16 L backpack pump. Spraying was carried out on April 10, 40 days after planting, and the application was repeated every 20 days. Drought stress in the tomato plants was initiated 30 days after planting the seedlings. Inter-row and intra-row distances of tomato seedlings were 100 and 25 cm, respectively. The fertilizer application rate in all treatments was 110 kg ha-1 N, 190 kg ha-1 K2O, 20 kg ha-1 MgO and 30 kg ha-1 CaO.

Tomato yield and yield components

Tomato fruits were harvested from May to June. The weights for each replication were recorded to determine the total fruit yield (kg plant-1 and kg m-2) for each treatment. The weights of tomato fruits from different replications were averaged to determine the average fruit weight (g fruit-1). The number of tomato fruits in different replications was recorded and averaged to determine the average number of fruits per plant (number plant-1). Fruit juice was extracted from one slice of 5 fruits selected from each replication. The percentage of soluble dry matter in the extracted juice was read using a refractometer (PCE-4582). Flesh firmness (kg) of tomato fruits was determined using a fruit penetrometer (GY-1); five fruits were selected from each replication for the flesh firmness measurements. The amount of water given to the plants was recorded throughout the experiment. Water use efficiency was calculated as the ratio of total fruit yield to the total amount of water applied. Water use efficiency indicates the efficient use of water in tomato production (Akhoundnejad and Dasgan, 2019) and was calculated with Equation 1:

WUE = Y / AW (Eq. 1)

where WUE is the water use efficiency, Y is the yield (g plant-1), and AW is the amount of applied water (L plant-1).
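As a concrete illustration of Equation 1, the short sketch below computes WUE for the four treatments. The function mirrors the equation exactly; the yield and applied-water figures are hypothetical placeholders for illustration, not measurements from this experiment.

```python
# Water use efficiency (Equation 1): WUE = Y / AW.
# Treatment names follow the experimental design above; the yield (Y, g/plant)
# and applied-water (AW, L/plant) values are hypothetical placeholders.

def water_use_efficiency(yield_g_per_plant: float, water_l_per_plant: float) -> float:
    """Return WUE in grams of fruit per litre of applied water."""
    return yield_g_per_plant / water_l_per_plant

treatments = {
    "control (100% irrigation)": (1950.0, 85.0),
    "50% irrigation":            (1180.0, 42.5),
    "50% irrigation + 1% Ca":    (1420.0, 42.5),
    "control + 1% Ca":           (1940.0, 85.0),
}

for name, (y, aw) in treatments.items():
    print(f"{name}: WUE = {water_use_efficiency(y, aw):.1f} g/L")
```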
Chlorophyll content of tomato plants was measured using a SPAD meter (Minolta 502) in the morning hours when the sky was clear. Leaf temperatures (°C) were measured on the 4th leaves of the plants using a Testo 104-IR infrared thermometer between 09.00 and 10.00 am; the measurements were carried out during the third harvest of the experiment. Leaf samples were collected from the 4th to 6th leaves below the growing tip on the 45th day of the experiment. Leaf samples were washed with deionized water and dried in an oven at 60°C. The dried and ground leaf samples were ashed at 550°C for 6-7 hours. The ashes were dissolved in 3.3% (v/v) HCl and filtered. Nitrogen (N), potassium (K), magnesium (Mg), calcium (Ca), iron (Fe), manganese (Mn), copper (Cu) and zinc (Zn) contents of the leaf samples were determined. Potassium, Ca, Mg and Na contents were determined in emission mode, and Fe, Mn, Zn and Cu contents in absorbance mode, on an FS220 model atomic absorption spectrophotometer. Nitrogen content of leaves was determined by wet combustion according to the Kjeldahl method.

The Folin-Ciocalteu method was used to determine the total phenolic content (mg g-1) of tomato leaves (Singleton and Rossi, 1965). For phenolic content, 2 g of dried and ground leaf sample was weighed and 5 ml of 75% methanol (containing 0.1% formic acid) was added. Homogenization was carried out with an Ultra Turrax at 6000 rpm in an ultrasonic water bath (25°C, 10 min). The mixture was centrifuged at 2500 rpm for 10 min at room temperature and the supernatant was poured into a clean tube. The extraction was repeated twice, and the final volume was adjusted to 10 ml with methanol. The extract was diluted by adding 900 ml of distilled water, then 5 ml of Folin-Ciocalteu reagent (0.2 M) was added and the mixture was shaken vigorously. After standing for 8 minutes, 5 ml of sodium carbonate (7.5%) was added and the mixture was vortexed for 20 s. The mixture was kept in the dark for 2 hours at room temperature and the absorbance was read at 765 nm with a spectrophotometer. The result was presented as mg gallic acid/g sample. Total flavonoids were determined using the method specified by Molina-Quijada et al. (2010): 1 ml of extract was mixed with 4 ml of deionized water and 0.3 ml of 5% NaNO2; five minutes later, 0.3 ml of 10% AlCl3, 2 ml of 1 M NaOH and 10 ml of deionized water were added. Absorbances of the mixtures were read at 415 nm using a spectrophotometer.

Chlorophyll is one of the most important pigments providing color in plants and enables photosynthesis to take place; green plants synthesize organic compounds using chlorophyll and light energy. Chlorophyll concentration (mg g-1) was determined according to Arnon (1949). For chlorophyll analysis, 100-200 mg of dried leaf sample was weighed, 10 ml of 80% acetone was added and the sample was homogenized. Absorbance values were read using a UV spectrophotometer at 663 nm, 652 nm, 645 nm and 470 nm. Chlorophyll contents were calculated using the equations of Arnon (1949) (Equations 2, 3 and 4; a computational sketch follows at the end of this section).

Statistical analysis

The effects of the Ca and stress treatments on yield and plant characteristics were evaluated using JMP 13 statistical software. One-way ANOVA was used to test the differences in yield and plant characteristics between the treatments. The least significant difference (LSD) test at 95% probability was used to separate the means where ANOVA indicated significant differences.
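Because the extracted text does not reproduce Equations 2-4, the sketch below uses the standard Arnon (1949) coefficients for pigments extracted in 80% acetone; the paper's exact equation forms may differ slightly. The default extract volume (10 ml) and sample mass (150 mg) follow the protocol described above.

```python
# Chlorophyll per Arnon (1949) for extraction in 80% acetone. The coefficients
# are the widely cited published values; Equations 2-4 of the paper are not
# reproduced in the extracted text, so treat this as an illustrative sketch.

def chlorophyll_mg_per_g(a663: float, a645: float,
                         volume_ml: float = 10.0, sample_g: float = 0.15):
    chl_a_mg_l = 12.7 * a663 - 2.69 * a645        # Eq. 2: chlorophyll a
    chl_b_mg_l = 22.9 * a645 - 4.68 * a663        # Eq. 3: chlorophyll b
    chl_total_mg_l = 20.2 * a645 + 8.02 * a663    # Eq. 4: chlorophyll a+b
    # Convert mg per litre of extract to mg per gram of dried leaf sample.
    factor = volume_ml / (1000.0 * sample_g)
    return chl_a_mg_l * factor, chl_b_mg_l * factor, chl_total_mg_l * factor

# Hypothetical absorbance readings at 663 and 645 nm:
chl_a, chl_b, chl_ab = chlorophyll_mg_per_g(a663=0.65, a645=0.32)
print(f"Chl a = {chl_a:.2f}, Chl b = {chl_b:.2f}, Chl a+b = {chl_ab:.2f} mg/g")
```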
Fruit weight (g fruit-1) and total fruit yield (kg ha-1)

Fruits that reached harvest maturity were collected and weighed throughout the experiment. The mean weights of 5 tomato fruits at harvest maturity for each treatment are presented in Table 2. The difference in mean fruit weights between the treatments was not statistically significant. The highest mean fruit weight (167.14 g fruit-1) was recorded in the control, while the lowest value (152.85 g fruit-1) was in the 50% irrigation application. The fruit weights for each treatment were summed to determine the total yield (kg ha-1) for each treatment (Table 2). The effect of the treatments on total tomato yield was statistically significant (p<0.05). The highest total yields were obtained in the control (7830 kg ha-1) and Ca treatment (7790 kg ha-1), while the lowest value was recorded in the 50% irrigation treatment (4710 kg ha-1). Total fruit yield in the 50% irrigation + Ca treatment was higher than the yield in the 50% irrigation treatment. Daldal (2018) investigated the effects of different CaSO4 doses (0, 100, 200, 300 g Ca m-2) on fruit yield and quality of tomato and reported that the 100 g Ca m-2 dose increased the diameter and size of tomato fruits. Tanveer et al. (2020) investigated the effects of 5 and 10 mM Ca concentrations on germination and growth parameters of tomato and indicated that Ca application increased the growth of tomato seedlings.

Number of fruits (fruit plant-1)

The mean number of fruits for each treatment, collected from the first to the last harvest, is given in Table 2. The effect of the treatments on the number of fruits per plant was significant (p<0.05). The highest number of fruits (47.27) was recorded in the Ca application, while the lowest number of fruits (34.80) was obtained in the 50% irrigation + Ca treatment (Table 2).

Leaf chlorophyll content (SPAD)

Mean chlorophyll contents recorded in the different treatments are given in Table 2. The chlorophyll contents recorded under the different treatments were not statistically different. The highest mean chlorophyll content (53.93) was obtained in the Ca application, while the lowest mean value (45.63) was recorded in the control. Although the difference in chlorophyll content between the control and the treatments was not significant, Ca application may be a useful strategy to increase the drought tolerance of tomato plants and prevent yield losses. The leaf chlorophyll contents recorded were similar to those reported by Mishra et al. (2012) and Sadak (2018), who investigated the effects of drought stress on tomato and pepper seedlings.

Water soluble dry matter content (Brix %)

Water soluble dry matter contents (Brix %) of fruit juice, determined using a hand refractometer, are given in Table 2. The difference in Brix values between the treatments was not statistically significant. The highest Brix value (5.43%) was recorded in the 50% irrigation application and the lowest value (4.57%) was obtained in the control. The results revealed that the water soluble dry matter content of fruit juice increased with increasing drought stress.

Relative water content of leaves

Relative water contents (RWC) of leaves under the different treatments are given in Table 2. The difference in RWC values between the treatments was not statistically significant. The highest RWC value (72.66%) was obtained in the Ca application, while the lowest value (61.95%) was detected in the control. The mean RWC in the 50% irrigation + Ca treatment was higher than that recorded in the 50% irrigation application.
Similarly, Kabay and Şensoy (2016) reported lower RWC values for bean genotypes grown under drought stress compared to the control treatment.

Leaf temperature

Leaf temperatures (°C) recorded during the third harvest period are given in Table 2. The effect of the treatments on leaf temperature was statistically significant (p<0.05). The highest leaf temperature (35.60°C) was recorded in the 50% irrigation + Ca application, while the lowest value (31.60°C) was obtained in the control. The results showed that leaf temperatures of plants under drought stress increased as a result of stomatal closure.

Macro (%) and micro (mg kg-1) nutrient contents of leaves and fruits (mg 100 g-1)

The highest leaf K content (9.51%) was obtained in the control and the lowest (4.31%) in the Ca application. The occurrence of the lowest K content under the Ca application might be attributed to the antagonism between Ca and K. The potassium content under the 50% irrigation treatment was 6.34% (Table 3). The highest fruit K content (130.96 mg 100 g-1) was recorded in the control, while the lowest content (90.74 mg 100 g-1) was obtained in the 50% irrigation treatment (Table 4). Potassium is the most important nutrient for alleviating the effects of stress. Potassium is also an extremely important mineral nutrient for the marketing of fruit, for quality parameters and for human health (Lester et al., 2010). In addition, K plays an important role in vitamin C storage and pigment formation (lycopene and beta-carotene) in fruits (Ramiérez et al., 2012). The highest leaf Ca content (9.70%) was obtained in the Ca application, while the lowest Ca content (7.43%) was obtained in the 50% irrigation treatment. The mean Ca content of tomato leaves in the control treatment was 8.73% (Table 3). The highest (55.37 mg 100 g-1) and the lowest (40.21 mg 100 g-1) fruit Ca contents were determined in the 50% irrigation treatment (Table 4). Sufficient Ca content in individual organs therefore prevents the incidence and severity of physiological disorders caused by adverse external conditions (Poovaiah, 1993; Starck et al., 1995). The effect of the treatments on the Mg content of tomato leaves was statistically significant. The highest Mg content (0.94%) was obtained in the 50% irrigation + Ca treatment, while the lowest Mg content (0.71%) was in the 50% irrigation treatment. The mean Mg content of tomato leaves in the control treatment was 0.88% (Table 3). The highest fruit Mg content (19.30 mg 100 g-1) was recorded in the 50% irrigation treatment, while the lowest Mg content (8.21 mg 100 g-1) was obtained in the Ca application (Table 4). Calcium plays a beneficial role under drought stress in tomato plants: drought stress combined with Ca application increased Mg content and the synthesis of soluble sugars, which positively increased the chlorophyll level in leaves. The highest leaf and fruit nitrogen contents (4.44% and 6.23 mg 100 g-1) were obtained in the Ca application and the lowest values (3.9% and 3.46 mg 100 g-1) were recorded under 50% drought stress (Tables 3 and 4). Photosynthesis of green plants decreases under stress conditions and, therefore, nitrogen content also decreases accordingly. Wahocho et al. (2017) investigated the effects of various nitrogen applications on the economic performance of muskmelon and indicated a positive effect of high N fertilizer application on vegetative traits, such as taller plants with more branches. The researchers also showed significant effects of high N application on fruit characteristics and fruit yield.
In our study, N fertilizer proved to have a significant positive effect on the initial growth of tomato seedlings. Micronutrient contents of tomato leaves changed significantly (p<0.05) with the application of Ca and drought stress. Leaf Fe content in the control was 93 mg kg-1. The highest leaf Fe content (112 mg kg-1) was obtained in the 50% irrigation + Ca treatment, while the lowest Fe content (73 mg kg-1) was recorded in the 50% irrigation treatment. The highest leaf Mn content (168 mg kg-1) was obtained in the 50% irrigation + Ca application, while the lowest Mn content (152 mg kg-1) was in the Ca application. The mean Zn content of leaves in the control was 24.33 mg kg-1, which is sufficient for healthy plant growth. The lowest leaf Zn content (16 mg kg-1) was obtained in the 50% irrigation treatment (Table 3). The mean Zn content of tomato fruits in the 50% irrigation + Ca treatment was 16.66 mg 100 g-1, while the Zn content of fruits under the Ca application was 11.00 mg 100 g-1. The highest and the lowest fruit Fe contents (15.17 and 9.20 mg 100 g-1) were recorded in the control treatment. The highest fruit Mn content (6.32 mg 100 g-1) was obtained in the 50% irrigation treatment and the lowest fruit Mn content (4.36 mg 100 g-1) was in the 50% irrigation + Ca application (Table 4). The results revealed that the stress treatments caused a significant decrease in the micronutrient contents of leaves and fruits; in addition, the effect of stress on plants can be mitigated with the application of Ca. Bjelić et al. (2005) reported that the Cu content of tomato plants in the greenhouse and the open field is quite stable under various environmental conditions (high and low temperature and humidity, early or late harvest time, etc.). Iron is the most abundant microelement in plants. In addition, Fe has a significant influence on the quality of tomato fruits due to its important role in metabolic processes; iron is very active in many enzymatic systems in plants, such as photosynthesis, respiration and chlorophyll synthesis (Houimli et al., 2017). Immobility or slow transfer within the plant is characteristic of Fe, which therefore usually remains in roots and young leaves; this characteristic causes low and unstable Fe content in tomato plants (Bjelić et al., 2005).

Total phenolic and flavonoid compounds (mg g-1)

The effect of the treatments on total phenolic and flavonoid compounds was statistically significant (p<0.01). The lowest mean total phenolic and flavonoid contents (9.59 and 69.42 mg g-1) were recorded in the Ca application, while the highest values (12.60 and 96.17 mg g-1) were recorded in the 50% irrigation treatment (Table 5). Phenolic compounds, commonly found in plants, are products of secondary metabolism and are involved in ecological and physiological events (Okunlola et al., 2017). One of the most important properties of phenolic compounds in plants is their antioxidant activity. Reactive oxygen species are formed in cells as a result of metabolic events. The antioxidant activity of phenolic compounds can be attributed to the fact that free radicals formed by oxidation are quenched by hydrogen donation (Es-Safi et al., 2007). Phenolic compounds inhibit lipid peroxidation by trapping lipid alkyl radicals (Michalak, 2006). The flavonoid content in the control was 73.87 mg g-1 (Table 5). The structural and electrochemical properties of flavonoids suppress lipid peroxidation and play a role in antioxidant activities that protect the membrane structure by reducing lipid oxidation (Eren et al., 2018).
The reduction of lipid peroxidation is due to the removal of reactive oxygen species by flavonoids and the reduction of lipid radicals produced during lipid peroxidation. Antioxidant activity depends on the number of hydroxyl groups in the phenolic species and on the location and structure of the molecule (Kalefetoğlu and Ekmekçi, 2005). Flavonoids, among the phenolic compounds, can scavenge reactive oxygen species. Plants have different adaptation mechanisms to reduce the oxidative damage caused by drought stress, and Ca application is one of the most commonly used methods. In this study, drought stress caused an increase in total phenolic and flavonoid contents.

Chlorophyll contents (mg g-1)

The effects of the treatments on chlorophyll a, chlorophyll b and chlorophyll a+b were statistically significant (p<0.01). The lowest mean chlorophyll a, chlorophyll b and chlorophyll a+b contents (1.31, 0.40 and 1.55 mg g-1) were recorded in the Ca application, while the highest values were recorded under drought: chlorophyll a (2.63 mg g-1) in the 50% irrigation treatment, and chlorophyll b (0.64 mg g-1) and chlorophyll a+b (2.68 mg g-1) in the 50% irrigation + calcium treatment (Table 5). As drought increased, the amount of chlorophyll increased, whereas the amount of chlorophyll decreased in the Ca applications. Water stress causes various significant changes in chlorophyll content and components by inhibiting photosynthesis in plants and damaging the photosynthetic apparatus (Sankar et al., 2008). Ashraf and Arfan (2005) determined the chlorophyll content of okra plants under drought stress and reported that chlorophyll content increased with increasing stress intensity.

Conclusion

This study revealed that foliar application of 1% CaSO4 to tomato plants exposed to drought regulates the nutrient status of the plants and their metabolic and transcription activities, and thus increases drought stress tolerance. Application of 1% CaSO4 to the leaves increased plant dry matter as well as leaf chlorophyll levels. In addition, the 1% CaSO4 application significantly increased the Mg content of leaves and fruits. Application of Ca improved the tolerance to drought-related oxidative stress. Blossom-end rot may occur under unstable or insufficient irrigation conditions; however, in this study, blossom-end rot was not observed with the foliar application of 1% CaSO4 under the insufficient irrigation treatments. Foliar application of 1% CaSO4 enabled tomato plants to better cope with stress by protecting fertile shoots. The growth and development of the tomato plants were relatively strong under the drought stress treatments.
2021-08-27T17:22:27.464Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "addd4644081eb8560d1ae6a06dd3e76227ea3a3a", "oa_license": null, "oa_url": "https://doi.org/10.15666/aeer/1904_29712982", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8af48df750ccdc9754f40a4485d4d55582eae533", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }
226417233
pes2o/s2orc
v3-fos-license
The Application of Comparative Teaching Method in the Course of Aeronautic Equipment Storehouse Mold Proof

In order to improve the classroom teaching effect, cultivate the students' abilities of comparison, analysis and identification, and promote their comprehensive and systematic understanding of problems, the comparative teaching method is adopted in the teaching of this course. Given the characteristics of the course "mold proof technology of the aeronautic equipment storehouse" (a strong theoretical basis, complex knowledge points, complicated and meticulous experimental procedures, and experimental results that are difficult to verify), we adopted the approach of "grasping common features, finding differences and distinguishing advantages and disadvantages" in teaching. Practice proves that the comparative teaching method helps students think about and comprehensively analyze the teaching contents and deeply grasp their regularities, so as to achieve a rational understanding, which plays an important supporting role in improving the students' aeronautic equipment support management skills.

INTRODUCTION

Aeronautic equipment is the material basis for the Naval Aviation Forces to complete various operational and training tasks. Insufficient quantity or quality of the aeronautic equipment stored in the aeronautic equipment storehouse may ground aircraft, reduce the monthly serviceability rate of aeronautic equipment support, and directly affect the formation of the combat power of the Aviation Forces, making it difficult to ensure that the Aviation Forces can complete their military training tasks smoothly. Due to the particularity of the task, the environmental and climatic characteristics of the "three highs" (high temperature, high humidity and high salt, and even high sunshine all year round in some places, commonly known as the "four highs") readily cause mold to breed and spread in storage, resulting in serious degradation of equipment performance. To ensure the quality and quantity of the equipment in storage, the "four nones" standard for aeronautic equipment (one of which is "no corrosion and no mildew") must be met. In the personnel training program of the aeronautic equipment management specialty, at the level of vocational and technical education for non-commissioned officers, the basic course on aeronautic equipment storage mold proof technology has been set up. Through this course, students are required to be able to identify the main types of mold in aeronautic equipment storage; describe the micro-morphological structure and macro colony characteristics of mold; name the asexual and sexual reproduction and spore types of mold; describe the growth conditions of mold in nature and its growth and propagation rules and characteristics; induce the multiple factors influencing mold growth; elaborate the physical and chemical control methods for mold; and conclude the comprehensive control measures for mold in the aeronautic equipment storehouse, so as to finally achieve the course purpose of cultivating students' mold-proofing ability and good professional quality in the aeronautic equipment storehouse.

The connotation of the comparative teaching method

The comparative teaching method is a teaching method which focuses on discriminative thinking and can determine the similarities and differences between teaching contents.
In the teaching process, in order to achieve the expected teaching purpose, teachers first select course contents that are related to yet distinct from each other according to certain comparative standards, and then make a horizontal comparative analysis of these contents to find out the similarities and differences, which helps students form associative memories of the knowledge and improves the teaching effect of the course [1].

The significance of the comparative teaching method

The application of the comparative teaching method is conducive not only to cultivating students' abilities of independent thinking and independent learning, but also to cultivating their abilities of in-depth analysis and active exploration, as well as of drawing inferences from one instance and making a comprehensive study. To help students better master the obscure theoretical knowledge in the course, the teacher can carefully select relevant knowledge points for comprehensive analysis and comparison, explain the similarities and differences of the knowledge from multiple sides, and guide the students step by step to connect knowledge in series and in parallel, thereby improving the students' abilities. The reasonable application of this teaching method can greatly stimulate students' interest in learning, change students from "want me to learn" to "I want to learn", and make full use of the students' subjective initiative in learning, so as to further strengthen the students' post competency [2,3]. To ensure the smooth and complete use of the comparative teaching method in the teaching process, the implementation can be organized according to the steps shown in Fig. 1: creating a situation and organizing teaching; analyzing and contrasting to form a framework; and teaching new courses with in-depth guidance.

A CASE STUDY ON THE APPLICATION OF COMPARATIVE TEACHING METHOD

From the perspective of thinking training, there are two common forms of the comparative teaching method: the similarity-seeking comparison method and the difference-seeking comparison method. The similarity-seeking method compares knowledge which appears similar on the surface but differs in essence, so that students can make use of the connections between knowledge, seek differences in the similar and similarities in the different, deepen understanding, enhance memory, and cultivate the ability to distinguish things. The difference-seeking method addresses the differences between things: one form is the comparison between different things, and the other is the comparison between two different aspects of the same thing, that is, putting two opposite concepts together, analyzing their characteristics from different sides, forming a sharp contrast, and deepening the students' impressions. Generally, these two methods cooperate with and complement each other, so as to build knowledge by analogy.

Different modes of reproduction yield spores with distinct characteristics

For example, when explaining the knowledge points of the asexual propagation of mold, we teach that asexual propagation involves five different kinds of asexual spores: sporangiospores, conidia, arthrospores, chlamydospores and thallospores. Different kinds of mold produce different kinds of spores when they propagate: Rhizopus and Mucor produce sporangiospores, Aspergillus and Penicillium produce conidia, and Geotrichum candidum produces arthrospores. The same kind of mold can also produce different kinds of spores in different growth environments.
For example, Rhizopus usually produces sporangiospores in the asexual propagation stage, while the "yeast cells" formed in liquid medium are a budding type of spore: the asexual spore is formed by budding from the mother cell, in which the hyphal cell produces small germination-like protuberances, formed by the overflow and contraction of the cell wall from the mother cell, which then become spherical spores. What the five kinds of spores have in common is that they are haploid; they differ in whether the spores are endogenous or exogenous, and their modes of formation also vary, for instance swelling and rupture of the hyphal tip or fragmentation of the hyphae at the septa. Spore morphology likewise takes many shapes, such as near-circular, tubular and spherical. When the above contents are listed in a table and the characteristics of each spore are specifically compared, students can clearly identify the knowledge points and truly internalize what they have learned into post competency [4].

Differences in substance transport highlight distinct absorption characteristics

Because the storage mold has no special organ for absorbing nutrients, nutrient intake depends mainly on the whole cell surface. According to the characteristics of the substance transport process, there are four main ways for nutrients to enter cells: simple diffusion, facilitated diffusion, active transport and group translocation. In order to deepen and consolidate the students' understanding and memory of this difficult knowledge, the four absorption modes are first shown to the students through animation demonstrations, and then the characteristics of each mode are explained in detail. Finally, the modes are analyzed and compared in tables with respect to whether a specific carrier protein is involved, the transport speed, the transport direction, the concentrations inside and outside the cell, whether the transported molecules are specific, whether energy is consumed, and whether the structure of the material changes after transport. At the same time, the teaching process also emphasizes seeking differences in the similar and similarities in the different. For example, in both simple and facilitated diffusion, substances move from high to low concentration; the former needs no transport carrier while the latter requires a carrier protein, but both ultimately equalize the concentration of the transported substance inside and outside the cell. Active transport and group translocation both consume energy during transport, but the structure of the substance remains unchanged in the former while it changes in the latter. Through such all-round, multi-angle comparative analysis, students greatly enhance their three-dimensional cognition of the knowledge, which is conducive to the digestion and absorption of the curriculum content and its transformation into ability [5].

Successive growth periods follow different reproduction laws

For another example, the growth and reproduction law of storage mold is also key and difficult content of this course. According to the growth rate of mold, the growth curve can be divided into four periods: the delay (lag) period, the logarithmic period, the stationary period and the decay period.
Why does mold inoculated into a new medium have a delay period? What are the characteristics of each period? Is the growth rate constant of both the delay period and the stationary period basically zero? To answer these questions, each of the four periods is analyzed with the comparative teaching method. The growth rate constant of the delay period is zero because the spores basically do not grow; the growth rate constant of the stationary period is zero because the number of dying spores and the number of newly proliferated spores are almost equal. After comparing the contents of each part, the characteristics of the four periods are summarized to aid the students' memory. The delay period can be summarized as slow division and active metabolism; the logarithmic period as vigorous metabolism and stable speed; the stationary period as increase and decrease offsetting each other in dynamic balance; and the decay period as decrease rather than increase, with numbers plummeting. This highly condensed, multi-dimensional comparison of the course content brings a new learning experience to the students and greatly inspires their enthusiasm for learning [6].

The combination of physical and chemical methods yields a significant mold-proofing effect

The final goal of this course is to prevent mold in aeronautic equipment storage. From identifying mold species to the growth conditions of mold, and from the modes of mold reproduction to the growth and propagation rules of mold and the factors influencing them, all of this theoretical knowledge is preparation for studying the comprehensive treatment measures for mold in aeronautic equipment storage. There are two ways to control mold: the physical method and the chemical method. In the storehouse, the two methods often complement each other and each plays an irreplaceable role. To let the students master the characteristics of the two methods, the physical and chemical methods are compared and weighed against each other in the teaching process. The physical method used in the storehouse is usually ultraviolet lamp irradiation. First, the students are told the principle of ultraviolet sterilization; then it is explained how to determine the number of ultraviolet lamps to install according to the storehouse area and the wattage of the lamps (a sizing sketch follows at the end of this subsection), how long the ultraviolet lamps need to irradiate continuously after being turned on, and the precautions for using ultraviolet lamps; finally, the students learn the effect of ultraviolet light on the growth of mold through practical teaching. The chemical method of mold control is taught in the same order of "sterilization principle, method of use, precautions" and is compared in turn with the physical sterilization method. For example, there are usually four ways to apply a fungicide, and the spraying method is selected in the storehouse. First, the fungicide is prepared according to the storehouse area and the mold growth on the equipment in the storehouse. Then, taking the mold research results of the aeronautic equipment storehouse as an example, the amount of fungicide, when to spray it, and how long the doors and windows should stay closed after spraying are introduced using real experimental data.
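The lamp-sizing step described above can be illustrated with a short calculation. The course material gives no formula, so the power-density target (1.5 W of germicidal lamp power per cubic metre, a commonly cited rule of thumb for room ultraviolet disinfection) and the 30 W lamp rating below are assumptions for illustration, not values from the course.

```python
# Sizing sketch for ultraviolet germicidal lamps in a storehouse. Lamp count is
# derived from room volume and a target power density; the 1.5 W/m^3 density
# and the 30 W lamp wattage are assumed rule-of-thumb values, not course data.
import math

def uv_lamp_count(area_m2: float, height_m: float,
                  lamp_watts: float = 30.0, watts_per_m3: float = 1.5) -> int:
    """Return the number of lamps needed to reach the target power density."""
    required_watts = area_m2 * height_m * watts_per_m3
    return math.ceil(required_watts / lamp_watts)

# Example: a 200 m^2 storehouse with a 4 m ceiling.
print(uv_lamp_count(200.0, 4.0))  # -> 40 lamps of 30 W each
```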
Finally, combining this with the growth and propagation law of mold, students learn to distinguish when each method applies: the physical method is often used for prevention in the early stage of mold growth, and once it is found that mold growth is approaching the logarithmic period with irresistible momentum, the method of spraying fungicide must be adopted immediately. Only when the two methods are combined can both the symptoms and the root cause be addressed. The comparative teaching method makes concise comparisons of abstract and obscure teaching contents, enhances the students' comprehensive analysis ability, improves the efficiency of classroom teaching, and lays a solid foundation for the students to be competent in the post of aeronautic equipment storekeeper [7].

CONCLUSION

The comparative teaching method is a thinking process and method by which teachers distinguish and determine the similarities and differences between teaching contents in teaching practice. In the teaching process, teachers should first extract the knowledge points suitable for comparative teaching by integrating the teaching contents, then guide the students to think holistically and see the essence through the phenomena, carry out in-depth comparison, select the appropriate comparison method, lead the students to try and explore continuously, stimulate the students' sense of participation and competition, and improve the students' enthusiasm and initiative in learning. The reasonable application of the comparative teaching method can cultivate students' ability to integrate and apply knowledge, enable students to learn in a relaxed and pleasant atmosphere, improve teaching efficiency, enhance the learning effect, and enable students to achieve twice the result with half the effort.

ACKNOWLEDGMENT

This paper was supported by an education reform project of Naval Aviation University.
2020-08-06T09:04:17.726Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "d431e74dba155141df726bd4b5ceb8b5261cde16", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125942283.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "00747617103648eecb13cb4c07ee8a5ab6e34598", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
40181369
pes2o/s2orc
v3-fos-license
Intravitreal Bevacizumab Treatment in Type 2 Idiopathic Macular Telangiectasia Objectives: To evaluate the efficacy of intravitreal bevacizumab treatment in type 2 idiopathic macular telangiectasia (IMT). Materials and Methods: Six eyes of 5 patients with type 2 IMT who received intravitreal bevacizumab between 2009 and 2014 were included in this study. All the patients had an ophthalmological examination including best corrected visual acuity (BCVA), dilated fundus examination, spectral domain optical coherence tomography (OCT) and fluorescein angiography. Intravitreal bevacizumab injection was planned for patients who had macular edema and/or decreased visual acuity at baseline. Patients were examined 1 week and 1 month after the intravitreal injection. Intravitreal injection was repeated in patients whose visual acuity decreased and/or whose macular edema persisted or increased. Changes in BCVA, central macular thickness (CMT) and central macular volume from baseline at 1 month after the first injection and at final examination were evaluated. Results: Average age of the patients (4 female and 1 male) was 62±11.8 years. Average follow-up period was 26±11 months. Patients received an average of 2.3 (range 1-4) injections during follow-up. Average Snellen BCVA of the patients was 0.48±0.29. BCVA increased at final examination compared to baseline in all of the patients. The difference between baseline and final visual acuities was significant (p<0.05). The patients’ average CMT was 328±139 µm at baseline and decreased by a mean of 85±153 µm at 1 month after the first injection and 65±142 µm at final examination, but the changes were not significant. CMT decreased at final examination compared to baseline in four patients and increased in both eyes of one patient. Conclusion: Intravitreal bevacizumab injection is a preferable treatment method in regard to both visual acuity and OCT findings. Introduction Idiopathic macular telangiectasia (IMT), first described by Gass and Oyakawa, 1 is a clinical condition of telangiectasia and aneurysmal dilatations of the juxtafoveal retinal capillaries. IMT type 2 affects both genders equally and is more common in the fifth and sixth decades. Telangiectatic changes are the most common changes seen in the fundus. Although patients may initially present with unilateral involvement, long-term follow-up usually reveals changes in the fellow eye as well. 2 Yannuzzi et al. 3 separated IMT into nonproliferative and proliferative subgroups. Clinical findings are highly variable; mild cases may manifest as loss of retinal transparency in the perifoveal temporal region, while more severe cases exhibit prominent telangiectatic vessels on fundoscopy, right-angle venules, intraretinal crystalline deposits, retinal pigment epithelium cell migration, and ultimately transformation to the proliferative type. 2,3 On fluorescein angiography (FA), slight intraretinal staining is observed in the early disease stages, whereas patients with substantial telangiectatic changes exhibit filling of the superficial telangiectatic capillaries and leakage from the deep capillaries. 3 Increased foveal thickness and intraretinal cystoid changes may be observed on spectral domain optical coherence tomography (SD-OCT). 3,4,5,6 Other possible findings are outer retinal atrophy and disruption of the inner segment/outer segment junction. 
5,6 Various treatments such as focal/grid argon laser therapy, 7 transpupillary thermotherapy, 8 photodynamic therapy, 9 subretinal membrane surgical excision, 10 and intravitreal triamcinolone 11,12 have been tried in type 2 IMT patients. In recent years, intravitreal anti-vascular endothelial growth factor (VEGF) injection has been administered to proliferative and nonproliferative patient groups in a variety of studies. 12,13,14,15,16,17,18,19,20,21,22,23,24 Although the results of these studies differ, some patients reportedly benefited from intravitreal anti-VEGF injections. In the present study we aimed to examine the functional and morphologic effects of intravitreal bevacizumab injection in type 2 IMT patients. Materials and Methods The study included 6 eyes of 5 patients treated with intravitreal bevacizumab therapy and followed in our clinic for type 2 IMT between 2009 and 2014. Approval was granted by the local ethics committee and informed consent forms were obtained from all patients. All patients underwent a full ophthalmologic examination including best corrected visual acuity (BCVA) measurement and dilated fundus examination, SD-OCT (RTVue; Optovue Inc, CA, USA) and FA (Visucam; Zeiss, Meditec, Germany). Visual acuity was measured using a Snellen chart and converted to logMAR (logarithm of the minimum angle of resolution) for statistical analysis. OCT measurements were done using an MM5 (5x5 mm2 grid) protocol. Intravitreal bevacizumab injection was indicated in patients with macular edema and/or reduced visual acuity at presentation. Intravitreal injections were performed in sterile operating room conditions. Intravitreal bevacizumab (1.25 mg) (Avastin, Roche, Germany) injections were done using a 27-gauge needle applied 3.5 mm from the temporal limbus in phakic patients and 3 mm in pseudophakic patients. Follow-up examinations were conducted at 1 week and 1 month after intravitreal injection. FA was repeated an average of once every 3 months. Intravitreal bevacizumab injections were repeated in patients whose BCVA decreased and/or whose macular edema persisted or worsened. BCVA, central macular thickness (CMT) and central macular volume (CMV) were compared at baseline, at 1 month after the first injection and at final examination. Statistical Analysis Number Cruncher Statistical System 2007 & PASS (Power Analysis and Sample Size) 2008 Statistical Software (Utah, USA) was used for all statistical analyses. Study data were evaluated using descriptive statistical methods (mean, standard deviation, median, minimum and maximum) and the paired-samples t-test was used to compare quantitative data (a worked sketch of this computation follows at the end of this section). The level of significance was p<0.05. Results Mean age of the patients (4 female and 1 male) was 62±11.8 years. Lesions were nonproliferative in all cases. Mean follow-up time was 26±11 months, during which patients received an average of 2.3 (range 1-4) injections. Patients' BCVA, CMT and CMV values at baseline, 1 month after the first injection and at final examination are shown in Table 1. Mean Snellen BCVA (expressed as decimal) was 0.48±0.29 at baseline, 0.68±0.36 at 1 month after the first injection and 0.77±0.35 at final examination (Figure 1). There was no significant difference in BCVA at 1 month after the first injection compared to baseline, but the increase in BCVA between baseline and final examination was significant (p<0.05). All patients showed improved BCVA at final examination compared to baseline.
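The two analysis steps described in the methods (converting decimal Snellen BCVA to logMAR and comparing paired baseline/final values) can be sketched as follows. The paper performed its analysis in NCSS/PASS; this Python sketch only illustrates the same computation, and the BCVA arrays are hypothetical placeholders rather than the study's actual six eyes.

```python
# logMAR conversion and paired-samples t-test, as described in the methods.
# The baseline/final decimal BCVA arrays below are hypothetical placeholders.
import numpy as np
from scipy import stats

def decimal_to_logmar(decimal_va):
    """logMAR = -log10(decimal visual acuity); e.g. 1.0 (20/20) -> 0.0."""
    return -np.log10(decimal_va)

baseline = np.array([0.20, 0.50, 0.80, 0.05, 0.60, 0.70])  # decimal BCVA
final    = np.array([0.80, 0.80, 1.00, 0.05, 0.90, 1.00])

t, p = stats.ttest_rel(decimal_to_logmar(baseline), decimal_to_logmar(final))
print(f"paired t = {t:.2f}, p = {p:.3f}")  # lower logMAR means better acuity
```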
Mean CMT value was 328±139 µm at baseline, and decreased by a mean of 85±153 µm at 1 month after the first injection and by a mean of 65±142 µm at final examination (Figure 2). However, the reductions in CMT were not statistically significant. CMT decreased in 4 patients at final examination compared to baseline, but increased in both eyes of the other patient. No significant changes in mean CMV were observed during follow-up. Following intravitreal injection, patient 1's Snellen BCVA improved to 20/20 and OCT revealed that the extrafoveal intraretinal cysts had resolved. The juxtafoveal telangiectatic changes observed on FA diminished but did not completely resolve. There were no changes in the patient's BCVA during follow-up, so no further injections were administered. After the first intravitreal injection, patient 2's Snellen BCVA improved from 20/100 to 20/25, the intraretinal cysts seen on OCT shrank, and a reduction in the juxtafoveal telangiectatic structures was observed. Following the first intravitreal injection, patient 3's Snellen BCVA improved to 20/20, the extrafoveal intraretinal cysts detected by OCT completely resolved, and the foveal contours returned to normal. FA showed that the amount of leakage was reduced (Figure 3). Two additional injections were administered during follow-up due to decreased BCVA and increased CMT. After the final injection, BCVA remained stable at 20/20 and the foveal contours returned to normal. Following the first intravitreal injection, patient 4's Snellen BCVA remained at 20/400. The intraretinal cysts were smaller on OCT, CMT was substantially decreased and the degree of leakage seen on FA was reduced. Repeated injections were done because the patient's CMT increased again during follow-up. There were no significant changes in BCVA during follow-up; this was attributed to the development of retinal atrophy due to prolonged macular edema. In patient 5, BCVA improved in both eyes after intravitreal injection. OCT at final examination revealed slightly increased CMT in both eyes, but the intraretinal cysts were smaller in size. Reduced leakage was observed in both eyes on FA. Additional injections were applied to the patient's left eye due to reduced visual acuity. Visual acuity in the right eye remained stable after a single injection. Discussion The pathogenesis of type 2 IMT and the role of VEGF molecules in that pathogenetic process remain controversial. Yannuzzi et al. 3 posited that endothelial cell degeneration may be the triggering factor of vasogenic mechanisms in the absence of pronounced ischemia or inflammation. Other investigators have claimed that, considering the function of Müller cells in supporting the retina, dysfunction in these cells may initiate and accelerate endothelial cell degeneration. 25,26 In their histopathologic study, Green et al. 27 proposed that endothelial degeneration and capillary structural disruption lead to retinal hypoxia, which may increase VEGF release and angiogenic activity. Most studies of intravitreal injection of anti-VEGF agents in type 2 IMT have demonstrated that leakage on FA is generally reduced after injection. 12,15,16,17,18,19,20,21,22 In some of these studies, however, the leakage on FA was reported to return to baseline levels during periods without injections.
15,17,20,22 Similarly, though decreases in macular thickness measured by OCT may be detected initially, 12,16,17,18,19,20,21,22,24 studies with long-term follow-up after the final injection reported that OCT findings also returned to baseline. 17,18,20,22 Besides these studies, there are others in which no substantial changes in OCT findings were observed. 13,14,15 Results concerning visual acuity vary. Some studies show improvements in visual acuity, 12,18,19,20,22 whereas others report no change or even decline over time. 13,14,15,16,20,22,24 Response to treatment varies in terms of disease duration and severity, and degree of neuroretinal degeneration. In the present study, the finding which most strongly supports intravitreal anti-VEGF therapy is the significant improvement in visual acuity at final examination. Although the patients showed some improvement in OCT findings, the changes were nonsignificant. This may be due to the small number of patients. The better results achieved by some patients may be attributable to factors such as individual differences in treatment response, disease duration, and previous therapies. Despite variation in extent of treatment response, our study demonstrates that intravitreal anti-VEGF is a preferable treatment for type 2 IMT in terms of both visual acuity and OCT findings. To date, no treatment protocol has been developed for type 2 IMT. Several treatment modalities are being tested. Studies of intravitreal injection of anti-VEGF agents have yielded conflicting data regarding treatment outcomes. Future studies including larger patient groups may provide results which more clearly demonstrate treatment response. Conclusion In the present study and others in the literature, there are patients who have clearly benefited from intravitreal anti-VEGF therapy. Therefore, patients should be evaluated individually during the course of disease management. Ethics Ethics Committee Approval: It was taken. Informed Consent: Obtained. Conflict of Interest: No conflict of interest was declared by the authors. Financial Disclosure: The authors declared that this study received no financial support.
2017-08-15T01:54:18.904Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "24599ab9b50f49399ca69e0331c438cf012edb64", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4274/tjo.23921", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "24599ab9b50f49399ca69e0331c438cf012edb64", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15034045
pes2o/s2orc
v3-fos-license
Reducing hospital bed use by frail older people: results from a systematic review of the literature

Introduction Numerous studies have been conducted in developed countries to evaluate the impact of interventions designed to reduce hospital admissions or length of stay (LOS) amongst frail older people. In this study, we have undertaken a systematic review of the recent international literature (2007-present) to help improve our understanding of the impact of these interventions. Methods We systematically searched the following databases: PubMed/Medline, PsycINFO, CINAHL, BioMed Central and the King's Fund library. Studies were limited to publications from the period 2007-present and a total of 514 studies were identified. Results A total of 48 studies were included for full review, consisting of 11 meta-analyses, 9 systematic reviews, 5 structured literature reviews, 8 randomised controlled trials and 15 other studies. We classified interventions into those which aimed to prevent admission, interventions in hospital, and those which aimed to support early discharge. Conclusions Reducing unnecessary use of acute hospital beds by older people requires an integrated approach across hospital and community settings. A stronger evidence base has emerged in recent years about a broad range of interventions which may be effective. Local agencies need to work together to implement these interventions to create a sustainable health care system for older people.

[3] Systematic review (M). Systematic review of interventions intended to reduce admission to hospital of older people. Impact: E. Evidence for reducing hospitalisation rates was equivocal. The most effective care was provided by established, integrated teams in the patient's home. The review had some methodological limitations and caution is warranted when interpreting the authors' conclusions.

[4] Systematic review and meta-analysis of randomised controlled trials (H). 108,838 people; 110 randomised controlled trials, 21 incorporated in the meta-analysis. Review of randomised controlled trials evaluating 'complex' social and medical interventions that may help maintain independence in older people. Impact: P. There was an overall benefit of complex interventions in helping older people to live at home, explained by reduced nursing home admissions rather than death rates. Hospital admissions and falls were also reduced in intervention groups. Benefits were largely restricted to earlier studies, perhaps reflecting general improvements in health and social care for older people. [...] was better in the intervention groups than in other groups. Benefit for any specific type or intensity of intervention was not noted.

[6] Literature review (M). Review of randomised controlled trials and observational studies. Overview of the effectiveness of different strategies for reducing hospital demand that may be viewed as primarily targeting the hospital sector (increasing capacity and throughput and reducing readmissions) or the non-hospital sector (facilitating early discharge or reducing presentations and admissions to hospital). Impact: P. In regard to the non-hospital sector, potentially the biggest gains in reducing hospital demand will come from improved access to residential care, rehabilitation services and domiciliary support. More widespread use of acute care and advance care planning within residential care facilities, and population-based chronic disease management programmes, can also assist.

[...] Study of acute geriatric units comparing the mean length of stay (1) with that of similar patients in other medical departments and (2) with the standard average stay in the corresponding autonomous region. Impact: P. The mean length of stay in the acute geriatric unit was 8-19% shorter than that of similar patients in other medical departments. In one hospital, the reduction in the mean length of stay was 21% in patients older than 80 years. In three of the four hospitals where comparisons with the standard average stay in the corresponding autonomous region were performed, the mean length of stay in the acute geriatric unit showed reductions of 7-9%.

[...] Two systematic reviews comparing coordinated multidisciplinary approaches for in-patient rehabilitation of older people versus usual orthopaedic care found no significant difference in mortality.

Mental health liaison: [34] Narrative review (M). 13 papers. Review of joint geriatric/psychiatric wards as a potential solution to improving the care of older patients with both psychiatric and medical illnesses in acute hospitals. Impact: E. These wards share common characteristics and there is evidence that they may reduce the length of stay and be cost-effective, but there are no high-quality randomised controlled trials. This is a narrative rather than a systematic review because the limited number of studies address different aspects of care in different patient populations, and the authors did not consider it meaningful to attempt to combine results.

[...] Pooled analysis of exercise intervention trials found no effect on the proportion of patients discharged to home or on acute hospital length of stay.

[...] Interventions should commence well before discharge. The research shows there is a direct correlation between the quality of discharge planning and readmission to hospital. No mention of cost-effectiveness or economic evaluations.

[42] Systematic meta-review (H). 15 reviews. Synthesis of the evidence presented in the literature on the effectiveness of interventions aimed at reducing post-discharge problems in adults discharged home from an acute general care hospital. Impact: E. Although a statistically significant effect was occasionally found, most review authors reached no firm conclusions that the discharge interventions they studied were effective. We found limited evidence that some interventions may improve patients' knowledge, may help keep patients at home or may reduce readmissions to hospital. Interventions that combine discharge planning and discharge support tend to lead to the greatest effects. There is little evidence that discharge interventions have an impact on length of stay, discharge destination or dependency at discharge.

[43] Quasi-experimental pre-post study design (USA) (L). 237 patients pre-intervention; 185 intervention. Study of the feasibility and effectiveness of a discharge planning intervention to facilitate the transition of older adults from three hospitals back to their homes. The intervention toolkit had five core elements: an admission form with geriatric cues, a facsimile to the primary care provider, an interdisciplinary worksheet to identify barriers to discharge, pharmacist-physician collaborative medication reconciliation, and pre-discharge planning appointments.

Results A total of 48 studies were included for full review, consisting of 11 meta-analyses, 9 systematic reviews, 5 structured literature reviews, 8 randomised controlled trials and 15 other studies (6 before-and-after studies, 6 non-randomised controlled trials, 1 comparator group study, 1 cohort study with case controls and 1 observational cohort study).
With only one exception [3], evidence from meta-analyses and systematic reviews was classified as high, evidence from literature reviews and randomised controlled trials as medium, and evidence from 'other' studies as low. We assessed the impact of the studies based on the reported findings as follows: Positive (P): statistically significant positive impact on hospital admissions/readmissions and/or length of stay; Equivocal (E): some positive but not statistically significant impact; Negative (N): no impact. We classified interventions into those which aimed to prevent admission (Table 1), interventions in hospital (Table 2), and those which aimed to support early discharge (Table 3). We found evidence for the effectiveness of care coordination, preventive health checks and care home liaison in the prevention of admission to hospital. Within the hospital setting, there was evidence for the effectiveness of geriatric assessment units and orthogeriatric units targeting frail older people in reducing length of stay. For services which linked hospital- and community-based care, including discharge planning, information sharing and rehabilitation services provided in the person's home, there was evidence of effectiveness in reducing length of stay and preventing readmission to hospital. For a series of interventions, there was no evidence of impact on hospital bed use. These included multi-factorial falls prevention services, day hospital services, medication reviews, exercise programmes in the community, nutritional enhancement in hospital and nurse-led transitional care units. One review found insufficient objective evidence of economic benefit or improved health outcomes for early discharge hospital at home services.

Discussion Our search for peer-reviewed publications about interventions for reducing hospital bed use by frail older people published since 2007 revealed a large number of studies. There may be further studies which were not captured by our search terms. As the majority of studies we identified were secondary reviews, our study covers a substantial body of evidence from peer-reviewed research on this topic. We have found that the evidence base has strengthened for many interventions in hospital and community settings. These include targeted preventive health checks; care coordination for frail older people, when embedded within integrated health and social care teams; hospital geriatric assessment and orthogeriatric units; community-based rehabilitation services; and better integration of acute and post-acute care through discharge planning and joined-up information systems. We found no evidence to support multi-factorial falls prevention services, community-based medicines reviews, day hospital services, exercise interventions in hospital or nurse-led transitional care, but there were fewer studies of these interventions. It may be that, with further development, some of these interventions will prove effective. Studies of association have shown that falls [51], polypharmacy [52], poor nutrition [53-55] and lack of exercise [56] are all associated with increased hospital bed use in older people, so interventions targeted at these areas have the potential to reduce hospital bed use. Despite huge expectations, telehealth and telecare have not been shown to be effective in randomised trials.
In a recently published randomised trial of telehealth [57] (the Whole Systems Demonstrator telehealth trial), telehealth was no more effective than usual care and did not improve quality of life or psychological outcomes for patients with chronic obstructive pulmonary disease, diabetes or heart failure over 12 months [58]. Reassuringly, no deleterious effects on service users were noted with telehealth. Similarly, a cluster randomised trial comparing telecare (as implemented in the Whole Systems Demonstrator trial) with usual care did not show significant reductions in service use over 12 months [59]. Effective interventions had common features, including anticipatory care targeting older people at risk of adverse outcomes in all settings, well-integrated multidisciplinary practice and inter-agency working. We conclude that services should be developed as a whole system including preventive care, acute hospital care and community care. A shared information system should be created to support patient flow through the system.

Conclusion Reducing unnecessary use of acute hospital beds by older people requires an integrated approach across hospital and community settings. A stronger evidence base has emerged in recent years about a broad range of interventions which may be effective. Local agencies need to work together to implement these interventions to create a sustainable health care system for older people.
Compensated Hypogonadism Identified in Males with Cluster Headache: A Prospective Case-Controlled Study

Androgens have been hypothesized to be involved in the pathophysiology of cluster headache due to the male predominance, but whether androgens are altered in patients with cluster headache remains unclear.

Introduction Cluster headache is the sole primary headache disorder with a well-established predominance among males compared to females [1]. The observed ratio of males to females has fluctuated over time, but it presently stands at 4.3:1 [2]. The recurrent headache attacks consist of excruciating pain in the eye lasting 15-180 minutes, and the attacks are accompanied by cranial autonomic symptoms and restlessness [3]. Most patients experience the headaches in bouts lasting weeks to months, separated by attack-free periods called remissions lasting months to years [2,3]. Cluster headache is classified as episodic if the remission lasts longer than 3 months and as chronic if the remission period is shorter than 3 months or absent within the past year [3].

Androgens have long been of interest in the study of cluster headache [4]. For patients with cluster headache, an androgenic disturbance has been thought to play a role in the aforementioned male predominance, physical appearance, and age at onset. The physical appearance was described by Dr. Graham in 1972 as "rugged, aggressive masculinity with bodies of sturdy, muscular mesomorphs" [5], and the age of onset is typically in the 2nd decade [6], which just follows the testosterone peak occurring around 20 years [7]. Despite the lack of objective replication of these early physical appearance observations, the groundwork for studying androgens was laid. Based on the proposed hypermasculine appearance and the male predominance, an elevated concentration of testosterone was hypothesized as part of cluster headache etiology [8-13]. The studies identified reduced, unchanged, and elevated concentrations of total testosterone [14-18].

The primary androgen is testosterone, whose gonadal synthesis is controlled by the hypothalamic-pituitary-gonadal axis. The vast majority of testosterone in the blood is bound to proteins such as albumin and sex hormone-binding globulin (SHBG), but its effects are primarily exerted via the biologically active (free) testosterone (fT) and its conversion to dihydrotestosterone in target organs [19], both of which bind to the androgen receptor. The main controlling hormone is luteinizing hormone (LH) from the pituitary gland, which in turn is stimulated by the pulsatile secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus. Circulating testosterone exerts negative feedback on the hypothalamo-pituitary unit. If testosterone levels are normal or near normal due to a compensatory elevation of LH, the condition is termed compensated hypogonadism. Accordingly, in clinical practice, the ratio of fT to LH, commonly referred to as the fT/LH ratio, is regarded as a reflection of the functionality of the testicular Leydig cells, whereas the ratio of inhibin B to follicle-stimulating hormone (FSH) assesses testicular Sertoli cell function. However, dehydroepiandrosterone sulphate (DHEAS), produced by the adrenal glands under the control of the hypothalamus-pituitary-adrenal axis, also has the potential of conversion into active androgens. Therefore, to understand the effects of androgen function, it is essential to investigate multiple hormones simultaneously. To clarify the hypothesis on androgen
involvement in cluster headache and to identify a possible treatment target, we aimed to comprehensively investigate androgen concentrations in males with cluster headache using gold-standard, validated liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) techniques in a well-powered sample. Based on the previous data, we hypothesized that episodic cluster headache in bout and chronic cluster headache were associated with a lowered fT/LH ratio. In order to further understand the potential biological relationship between cluster headache and androgen concentrations, we additionally aimed to investigate whether there is a shared genetic link between these 2 entities.

Study Design and Setting This study used data from the Danish Cluster Headache Biobank, a prospective observational case-controlled study. Between October 2018 and December 2021, we recruited participants from the Danish Headache Center, Rigshospitalet-Glostrup, Denmark. Participants with episodic cluster headache could be either in remission or in bout on the 1st study day, but had to be in the opposite state on the 2nd study day. After study day 1, controls and participants with established chronic cluster headache had completed the trial (Fig S1).

Participants Inclusion criteria for participation in the Danish Cluster Headache Biobank were (1) being diagnosed with cluster headache using the ICHD-3 [3] or (2) being a sex- and age-matched healthy control. Participants ranged in age from 18 to 80 years. Exclusion criteria for all participants were chronic headaches (other than cluster headache), known drug misuse, and serious somatic or psychiatric disorders. Participants were instructed to arrive fasting for at least 8 hours on both study days. Patients with episodic cluster headache were defined to be in remission after at least 30 days without a cluster headache attack, whereas being in bout was defined as having at least 1 headache attack within the past week. Participants with cluster headache in their 1st bout were followed for up to 1 year to establish the correct ICHD-3 subgroup, for example, episodic or chronic cluster headache. Apart from a greater occipital nerve block and oral steroids during the previous 30 days, the use of preventive therapies for cluster headache was permitted if the dose was kept stable.

The recruitment of control participants was conducted through announcements on social media platforms and the posting of physical invitations within and around the hospital premises. Controls were excluded if they had a history of primary or secondary headaches as defined by the ICHD-3, except for infrequent tension-type headaches or previous acute headaches caused by alcohol use or associated with infections such as influenza. Furthermore, they were excluded if a 1st- or 2nd-degree relative had a diagnosis of cluster headache. Prior to the blood sample collection, none of the controls had experienced a headache of any kind for at least 7 days.

Inclusion criteria for the androgen analysis were biological males. Exclusion criteria were a body mass index (BMI) >30 kg/m² and use of androgen-modifying medication such as a 5-alpha-reductase inhibitor. The distribution of sex, age, and ethnicity in the episodic group in a 1:1 ratio served as the basis for control matching (Fig 1).

In compliance with the Helsinki Declaration, all participants provided written informed consent. The study was approved by the Capital Region Regional Health Research Ethics Committee (H-16048941) and the Danish Data Protection Agency.
Outcomes and Covariates Our primary outcome was the fT/LH ratio in serum in the 3 cluster headache states, that is, episodic cluster headache in bout, episodic cluster headache in remission, and chronic cluster headache, as compared to the controls. One secondary endpoint was the effect of acute medication and of sleep in the 24 hours prior to sampling on the fT/LH ratio, adjusted for age. Further secondary endpoints were the inhibin B/follicle-stimulating hormone (FSH) ratio and the concentrations of fT, total testosterone, and DHEAS in serum in the 3 cluster headache states as compared to the controls.

Participants' medical histories were recorded on the 1st study day, and a baseline semi-structured interview was conducted. The diagnosis of patients with cluster headache was confirmed by a physician (A.S. and A.S.P.) or a medical student with specialized training (A.F.P.), and in cases of diagnostic ambiguity, a senior neurologist (R.H.J.) made the final decision. The semi-structured interview contained information to confirm the cluster headache diagnosis, and all participants' diagnoses were double-checked by author A.S.P. before the data analysis. On both study days, a structured interview was conducted regarding clinical features in the preceding 24 hours: acute medication for the past attack, sleep in the preceding night (<6 hours, 6-8 hours, >8 hours), and medicine used in the past 24 hours.

Biochemically, testosterone deficiency was defined as serum concentrations of total testosterone below 11 nmol/L and fT below 220 pmol/L [20]. Additionally, the fT had to be below minus 2 standard deviation (SD) scores for age, calculated as previously reported [21]. We did a genetic analysis to find possible genome-wide significant hits (p < 5 × 10⁻⁸) that are linked to both cluster headache and free testosterone. This was done to see whether there are shared genetic risk variants for both testosterone levels and cluster headache.

Data Sources Blood was drawn into standard 9-ml serum clot activator tubes (VACUETTE®) from the antecubital vein and inverted several times. The tubes were left at room temperature for 30 minutes prior to centrifugation at 4 °C for 10 minutes at 1409 g. Afterwards, 1 ml of serum was transferred to polypropylene tubes (Greiner Bio-One) and first kept at −25 °C before being moved to −80 °C pending analysis. To avoid defrosting, the samples were sorted on dry ice at the time of analysis.

The steroids measured were DHEAS, androstenedione, testosterone, and 17-hydroxyprogesterone (17-OHP). Steroids were measured by an isotope-dilution online-TurboFlow-LC-MS/MS method [22].
Limits of quantification were 19 nmol/L for DHEAS, 0.042 nmol/L for androstenedione, 0.012 nmol/L for testosterone, and 0.1 nmol/L for 17-OHP. The concentrations of LH and FSH were determined by chemiluminescence immunoassays (Atellica, Siemens Healthineers, Tarrytown, NY, USA) with limits of detection (LODs) of 0.07 IU/L and 0.3 IU/L, respectively. The concentrations of anti-Müllerian hormone (AMH) and SHBG were measured by chemiluminescence immunoassays (both: Access 2, Beckman Coulter) with LODs of 0.14 nmol/L and 0.33 nmol/L, respectively. The concentration of inhibin B was measured by an enzyme-linked immunosorbent assay (Beckman Coulter Inhibin B Gen II ELISA, Beckman Coulter, Brea, CA, USA) with an LOD of 3 ng/L. All analytical methods used were accredited according to ISO standard 15189:2013, DANAK registration number 1013. Free testosterone (fT) was calculated according to the equation by Vermeulen et al [23], which provides a reliable index of measured fT. The fT/LH ratio was calculated by dividing the concentration of fT (numerator) by the concentration of LH (denominator). Similarly, the inhibin B/FSH ratio was calculated by dividing the concentration of inhibin B by the concentration of FSH.

Regarding the identification of a shared genetic risk variant, the cluster headache measures were derived from the newest meta-analysis in a cohort of 4,777 cases with a confirmed diagnosis of cluster headache. Summary statistics on genome-wide nominally significant risk alleles (N_snps = 802,577) were retrieved from the International Consortium of Cluster Headache Genetics, CCG (clusterheadachegenetics.org) [24], and genome-wide significant hits (p < 5 × 10⁻⁸) from the GWAS catalog [25].

Bias To reduce selection and Berkson bias, we invited persons with cluster headache who contacted the Danish Headache Centre for guidance or oxygen treatment even though they were followed elsewhere, and we excluded participants with other severe illnesses [26]. To avoid hospital control bias, we recruited healthy controls outside the hospitals. We included only males because testosterone has a different regulation and function in females.

Study Size As no previous study in cluster headache had been done with the same methodology of measuring testosterone, the power calculation was done with data from healthy persons. The expected mean fT was 434.5 (SD: 163.8) pmol/L in the control group based on previous findings [27]. With 60 participants in each group and the assumption of an equal SD in the cluster headache group, an alpha of 0.05 and a power of 0.8 enable the study to detect a reduction of 10%. Therefore, the number of participants in each group was determined to be 60.
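The fT calculation and the two ratio endpoints described above lend themselves to a compact illustration. The following is a minimal Python sketch, not the study code: the mass-action (Vermeulen-type) solution uses commonly cited association constants and a default albumin concentration, which may differ from the authors' exact implementation, and all input values are purely illustrative.

```python
import math

# Commonly cited association constants from Vermeulen et al. (1999);
# the authors' exact constants and assumed albumin level may differ.
K_ALB = 3.6e4   # albumin-testosterone association constant (L/mol)
K_SHBG = 1.0e9  # SHBG-testosterone association constant (L/mol)

def free_testosterone(tt_nmol_l: float, shbg_nmol_l: float,
                      albumin_g_l: float = 43.0) -> float:
    """Calculated free testosterone (pmol/L) from total T and SHBG.

    Solves the quadratic mass-action equation of the Vermeulen approach.
    All concentrations are converted to mol/L internally.
    """
    tt = tt_nmol_l * 1e-9
    shbg = shbg_nmol_l * 1e-9
    albumin = albumin_g_l / 69_000.0          # g/L -> mol/L (MW ~69 kDa)
    n = 1.0 + K_ALB * albumin                 # albumin-binding factor
    a = K_SHBG * n
    b = n + K_SHBG * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4.0 * a * tt)) / (2.0 * a)
    return ft * 1e12                          # mol/L -> pmol/L

def ft_lh_ratio(ft_pmol_l: float, lh_iu_l: float) -> float:
    """fT/LH ratio as described: fT (numerator) over LH (denominator)."""
    return ft_pmol_l / lh_iu_l

# Illustrative values only (not study data): TT 15 nmol/L, SHBG 35 nmol/L, LH 4 IU/L.
ft = free_testosterone(15.0, 35.0)
print(f"fT ~ {ft:.0f} pmol/L, fT/LH ~ {ft_lh_ratio(ft, 4.0):.0f} pmol/IU")
```

With these illustrative inputs the sketch yields an fT of roughly 300 pmol/L, that is, a free fraction of about 2% of total testosterone, which lies in the physiologically expected range.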
Statistical Methods Numeric descriptive data are presented as mean with SD if normally distributed and as median with interquartile range (IQR) if non-normally distributed. Categorical variables are presented as counts with percentages. The distribution of the data was inspected visually for normality and with Bartlett's test for variance. If the normality requirements were satisfied, the paired or unpaired t-test was used; otherwise, the Mann-Whitney test was used to examine differences in the fT/LH ratio across the groups. Multiple linear regression was used to test whether the cluster headache disease states were associated with fT/LH independent of acute medication (excluding oxygen) and sleep duration in the 24 hours preceding the blood sampling, compared to healthy controls and adjusting for age. Model requirements were met by checking model assumptions and, if necessary, applying a logarithmic transformation. To avoid violating the assumption of independent observations, we ran the same linear regression twice, once with episodic cluster headache in remission and once with episodic cluster headache in bout. In case of missing values for predefined variables, or if numeric variables were outside the detectable range, participants were excluded from the regression analyses. Missing data are indicated in brackets. Statistical analyses were performed using R Statistical Software (v4.2.2; R Core Team 2021) [28]. A significance level of 5% (p < 0.05, two-tailed) was accepted for all tests. We did not correct for multiple testing, to minimize the risk of type II errors.

Visualizations Visualizations were created in BioRender and in R Statistical Software with the ggplot2 package.

Demographics In total, 211 eligible participants with cluster headache were available in the Danish Cluster Headache Biobank; of these, we excluded 60 due to female sex, 16 with a BMI >30, 1 due to age (78 years), 10 with confounding co-morbidities or medication, and 4 due to missing matching ethnicity in the control group. In total, 60 participants with episodic cluster headache and 60 participants with chronic cluster headache were included (Fig 1). A total of 101 controls were available from the Danish Cluster Headache Biobank; of these, we excluded 3 due to BMI >30 and 9 with confounding co-morbidities, and 29 were deselected to match for sex, age, and ethnicity. After analysis, 3 samples were extreme outliers and were excluded: 2 participants acknowledged testosterone use, and 1 required further workup for pathologies.

The participants were males with a mean age of 42.9 years (SD = 11.7) at the 1st study visit. The median number of days between the 2 study days for participants with episodic cluster headache was 216 (IQR = 243). All participants were of Western European ancestry. Table 1 shows the demographics across all groups and key cluster headache features in the cluster headache groups. The majority of the participants slept fewer than 8 hours in the preceding night, but a higher proportion of participants with cluster headache than of controls slept fewer than 6 hours (46% vs.
12%, p < 0.001). The use of acute medication excluding oxygen in the past 24 hours was, as expected, unevenly distributed: it was used by 3 (5%) of the participants with episodic cluster headache in remission, 21 (36%) of the participants with episodic cluster headache in bout, and 15 (25%) of the participants with chronic cluster headache. None of the participants in the control group had used acute medication within the past 24 hours. Table 2 summarizes the clinical variables on the study day.

Comparison of Androgen Concentrations in Cluster Headache Compared to Controls The fT/LH ratio in serum was reduced by 20% for participants with episodic cluster headache in remission compared to the controls (p < 0.001). As compared to the healthy controls, the fT/LH ratio was reduced by 12% (p = 0.043) in the group with episodic cluster headache in bout and by 38% (p < 0.0001) in the group with chronic cluster headache (Fig 2). Compared to the control group, the concentration of fT was reduced by 45 pmol/L (10%, p = 0.039) in the group with episodic cluster headache in remission, by 56 pmol/L (13%, p = 0.010) in the group with episodic cluster headache in bout, and by 105 pmol/L (24%, p < 0.0001) in the group with chronic cluster headache. A full summary of hormone concentrations is listed in Table 3. Post hoc analysis found that total testosterone, the inhibin B/FSH ratio, and DHEAS in serum were equal in the groups with episodic cluster headache both in remission and in bout as compared to controls (p > 0.2), but in the group with chronic cluster headache, total testosterone was reduced by 11% (p = 0.046), inhibin B/FSH was reduced by 28% (p = 0.004), and DHEAS was reduced by 40% (p = 0.0002).

[Table notes: One participant in each cluster headache group was an extreme outlier, and all 3 were excluded. Abbreviations: cCH = chronic cluster headache, eCH = episodic cluster headache. a Other preventives used in the cCH group: candesartan, melatonin, lithium, gabapentin, and valproate (6 participants were treated with a combination of verapamil and another preventive). b Other preventives used in the eCH group: melatonin and gabapentin. c Seven out of 9 used a low dose (≤240 mg verapamil/day).]

Comparison of Androgen Sex Hormone Concentrations in Different Cluster Headache States Concentrations of fT/LH and fT were reduced in the chronic cluster headache group compared to both episodic cluster headache states (p < 0.05; for details, see Table 3). No differences were detected between the 2 states of episodic cluster headache (i.e., in remission and in bout; N = 58 pairs) regarding fT/LH (mean difference: 9.5 pmol/IU, 95% CI: −1.1 to 30.1, p = 0.361), inhibin B/FSH (mean difference: 2.5 ng/IU, 95% CI: −13.5 to 8.5, p = 0.648), and fT (mean difference: 14.2 pmol/L, 95% CI: −61.5 to 33.2, p = 0.551). The inhibin B/FSH ratio was reduced in the chronic cluster headache group compared to the group with episodic cluster headache in remission (p = 0.011); the same trend was observed in comparison with the group with episodic cluster headache in bout, but the result was not statistically significant (p = 0.060).
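The adjusted group effects reported in the next section derive from the log-scale regression model described under Statistical Methods. As a hedged sketch (synthetic data; all variable and column names are assumptions, not the authors' actual data), the snippet below shows how such a model can be fit with statsmodels and how a log-scale coefficient beta for a group indicator translates into the quoted percent reductions via (1 - exp(beta)) * 100%.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120

# Synthetic stand-in for the study data; all names here are assumptions.
df = pd.DataFrame({
    "is_cCH": rng.integers(0, 2, size=n),                  # 1 = chronic cluster headache
    "age": rng.uniform(18, 80, size=n),                    # years
    "sleep": rng.choice(["<6h", "6-8h", ">8h"], size=n),   # preceding night, categorical
    "acute_med": rng.integers(0, 2, size=n),               # past 24 h, excluding oxygen
})
# Simulate an fT/LH ratio that is ~35% lower in chronic cluster headache.
df["ft_lh"] = np.exp(4.3 + np.log(0.65) * df["is_cCH"]
                     - 0.005 * df["age"] + rng.normal(0.0, 0.3, size=n))

# Log-transformed outcome, adjusted for age, sleep category and acute
# medication, mirroring the model described under Statistical Methods.
fit = smf.ols("np.log(ft_lh) ~ is_cCH + age + sleep + acute_med", data=df).fit()

beta = fit.params["is_cCH"]
# A log-scale coefficient beta corresponds to a (1 - exp(beta)) * 100% change.
print(f"estimated cCH effect: {(1 - np.exp(beta)) * 100:.0f}% reduction vs controls")
```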
Effect of Sleep and Acute Medication in the Past 24 Hours The multiple linear regression model identified a mean reduction in the fT/LH ratio of 35% (95% CI: 21%-47%, p < 0.0001) in participants with chronic cluster headache and of 24% (95% CI: 9%-37%, p = 0.004) in participants with episodic cluster headache in remission compared to controls when adjusting for age, acute medication (excluding oxygen), and sleep in the 24 hours preceding the blood sampling (Fig 3). Neither acute medication nor sleep was significantly associated with fT/LH (Table S1).

Clinical Outcomes Concentrations below 11 nmol/L in total testosterone (N = 20) and below 220 pmol/L in fT (N = 17) were identified in 12 participants, of whom 3 (1 patient with chronic cluster headache and 2 patients with episodic cluster headache) had fT SD scores within the normal range. Thus, 4 out of 59 (7%) participants with chronic cluster headache, 4 out of 59 (7%) participants with episodic cluster headache in remission, and 1 out of 60 (2%) controls had biochemical signs of hypogonadism. For these 9 participants, the median fT SD score was −2.28 (range: −2.03 to −2.78).

Shared Genetic Risk Variants between Cluster Headache and Testosterone Level We identified the overlap between risk variants (p < 5 × 10⁻⁸) from the genome-wide association studies (GWAS) catalog on testosterone measures (https://www.ebi.ac.uk/gwas/efotraits/EFO_0004908) and the variants nominally significantly associated with cluster headache. We identified 127 risk variants, of which 1 cluster headache risk variant, rs112572874, was below the genome-wide suggestive threshold (p = 3.3 × 10⁻⁵, Fig 4). The testosterone measures derived from an association study of fT concentration (p = 6.0 × 10⁻⁹) in a population of 148,248 males of British ancestry from the UK Biobank [25]. Unfortunately, the original investigation does not state whether rs112572874 was associated with a high or low fT concentration, but for both cluster headache and fT the risk allele is G. The risk variant rs112572874 is located in an intron of the microtubule-associated protein tau (MAPT) gene on chromosome 17q21.31. This was also true for the 2nd most significant cluster headache variant, rs58879588.

Discussion A novel key finding of this study is that cluster headache is associated with a reduced fT/LH ratio independent of disease state as compared to matched healthy controls. Our data suggest that this reduction is not merely a secondary occurrence resulting from sleep patterns or the recent use of acute medicine within a 24-hour timeframe. Furthermore, we identified 1 shared risk allele for cluster headache and fT. The previously published studies were conducted more than 30 years prior to our study, and only 1 of them included participants diagnosed according to the ICHD criteria. Methodologies have evolved tremendously since the 1980s; therefore, major differences exist between our study and previous studies. First, total testosterone was determined by radioimmunoassay in all previous studies, whereas we applied LC-MS/MS, the modern gold-standard methodology [29]. Second, fT is considered the biologically active substrate, and only 1 other study calculated fT
[10]. However, we calculated fT using the Vermeulen equation [23], by which fT has been shown to correlate with androgen deficiency symptoms [30], whereas the previously published study calculating fT [10] used an alternative equation. Two of the prior studies are of interest as they applied different sampling methods compared to our study. The 1st study found reduced testosterone concentrations in 9 participants with episodic cluster headache in bout in repeated measurements over a 24-hour period as compared to 7 sex- and age-matched controls [11]. Another study performed a GnRH provocation and found a blunted response in patients with chronic cluster headache but not in patients with episodic cluster headache. Thus, our findings and the overall literature support the existence of an association between the hypothalamus-pituitary-gonadal axis and cluster headache. The largest effect is seen in chronic cluster headache, but the effects persist also in episodic cluster headache during remission.

We identified that compensated hypogonadism is associated with cluster headache. Importantly, compensated hypogonadism is associated with a wide range of adverse health effects, including reduced physical performance, an increased risk of developing metabolic syndrome and a higher all-cause mortality rate [31,32]. This is clinically important because an unhealthy lifestyle is common in patients with cluster headache [33].

[FIGURE 3 caption: Linear regression of the free testosterone/luteinizing hormone (fT/LH) ratio in serum as a function of age. Visualisation of the predicted values with 95% CI of fT/LH as a function of age for 3 groups: chronic cluster headache (lowest line), episodic cluster headache in remission (middle line), and healthy controls (highest line). Compared to controls, fT/LH was reduced in episodic cluster headache (24%, 95% CI: 9%-37%, p = 0.004) and chronic cluster headache (35%, 95% CI: 21%-47%, p < 0.0001). Adjusted for sleep and acute medication in the past 24 hours. Abbreviations: fT = free testosterone, LH = luteinizing hormone, cCH = chronic cluster headache, eCHr = episodic cluster headache in remission, sleep1 = sleep between 6 and 8 hours, sleep2 = sleep >8 hours.]

Compensated hypogonadism could be caused by testicular dysfunction or by defective testosterone feedback on the hypothalamus or the pituitary gland, and it raises the central question of whether the hypogonadism reflects a physiological association with cluster headache or is caused by environmental effects, thus being secondary. The identification of a common risk allele for cluster headache and fT is suggestive of a biological link, especially as the MAPT gene is primarily expressed in the brain [34] and has been associated with several neurodegenerative disorders [35]. However, studies have shown that environmental factors are associated with androgen concentrations. Smoking typically associates with higher testosterone concentrations [36]. Smoking could potentially mask a more pronounced hypogonadism in cluster headache, as we observed a higher smoking prevalence among patients with cluster headache. Use of non-steroidal anti-inflammatory drugs (NSAIDs) has also been associated with compensated hypogonadism [37]. Decreased sleep quality and sleep duration are potential confounders, as they are associated with a reduction in serum testosterone [38] and with cluster headache
[39]. However, we adjusted for both acute medication and hours slept the past night, and the differences persisted at the group level. The number of participants using only NSAIDs was too small for subgroup analysis (N = 15). In remission, patients with cluster headache do not have any attacks and thus use minimal or no medication; the direct effect of the attacks on sleep also stops during remission, yet the compensated hypogonadism persisted in our study. Even though some sleep disturbances persist in remission [39], it is remarkable that patients without clinical symptoms of cluster headache, that is, cluster headache attacks, have an altered neuroendocrinology. Other diseases may also influence androgen concentrations. An example is depression, which might affect fT [40] and is known to be more prevalent in cluster headache [33]. However, a lower fT concentration may also be the cause of depressive symptoms, and based on our data, we cannot determine causality.

We also identified lower DHEAS in chronic cluster headache in a post hoc analysis. DHEAS is almost exclusively secreted from the adrenal glands [41]. Although circulating DHEAS has minimal androgen activity, it can be converted, via intracrine metabolism in target tissues, to more physiologically active androgens [42]. Adrenal DHEAS production is independent of the gonads, being driven by adrenocorticotropic hormone (ACTH). Consequently, our finding of suppressed DHEAS in patients with chronic cluster headache suggests a degree of either adrenal gland dysfunction or hypothalamus-pituitary-adrenal axis dysfunction in persons with chronic cluster headache. The present work, however, does not indicate the cause of the DHEAS suppression. DHEAS is a potent modulator of neuronal activity and an extra-gonadal source of androgens [43]. Further work is required to assess the cause of the DHEAS suppression in chronic cluster headache and its potential physiological consequences.

In summary, our data are suggestive of the existence of a physiological link between cluster headache and compensated hypogonadism, which raises yet another question: is compensated hypogonadism a predisposition for developing cluster headache, or does having cluster headache cause compensated hypogonadism? A speculative explanation could be that, in order to maintain homeostasis, the hypothalamus and the pituitary gland have to continually produce more GnRH and LH, and that this strain creates a vulnerability for developing cluster headache in otherwise susceptible individuals. More research is needed to understand the association. Uncompensated hypogonadism, that is, frank testosterone deficiency, is among other things negatively associated with energy level and mood, and replacement therapy to normal physiological levels can improve these symptoms [44]. Therefore, it could be speculated that testosterone replacement therapy would also benefit patients with cluster headache and biochemical signs of severe uncompensated hypogonadism. There are several published case reports indicating that supplementary treatment with testosterone may ease cluster headache [14,15,17,18], but in a small prospective study [16], a 50% response was obtained in only 1 out of 12 patients with chronic cluster headache treated with intramuscular testosterone propionate 100 mg/day for 14 consecutive days. However, the dose might have been too high, as normally 100-135 mg is given every fortnight
[20]. A randomized controlled trial from the same group investigated the effect of a single intramuscular administration of a slow-release GnRH analogue, leuprolide 3.75 mg, as compared to placebo (saline) [45]. The treatment resulted in a reduction of LH and testosterone concentrations comparable to those observed after castration. A significant reduction in both attack frequency and intensity as compared to placebo was noted among 60 male participants with chronic cluster headache. Thus, it is puzzling that manipulating testosterone both up and down may improve cluster headache. Our data do not support the use of testosterone replacement treatment, as the identified differences are subclinical for most patients. To truly evaluate the potential of such treatment, a randomized controlled trial is necessary, especially because both GnRH analogues and testosterone replacement therapies have potentially severe, long-term side effects.

Strengths and Limitations This study holds several methodological strengths compared to previous reports on androgen concentrations in cluster headache, most importantly the application of gold-standard methodology, the high number of participants, and thorough inclusion and exclusion criteria. Selection bias is inevitable when obtaining data from a tertiary headache center; however, the inclusion of participants without prior association with the Danish Headache Center may reduce the effect of this bias. Another limitation is that patients were allowed to use preventive medication. However, we believe that this approach is representative of a cohort recruited from a tertiary headache center, where verapamil treatment is not uncommon even among patients in remission, probably due to anxiety about recurrence of the attacks. We chose to include only biological males matched by ethnicity. Consequently, only males of Western European ancestry were included in this study, and therefore the results may be representative only for these patients. This, however, increased the homogeneity of the study. Interestingly, in populations of East Asian descent, the cluster headache phenotype is less severe [2], and serum concentrations of total testosterone have been found to be reduced in a healthy cohort [46]. Females with cluster headache remain almost uninvestigated regarding concentrations of sex hormones, but clinically, 1 cross-sectional study did not find an association between hormonal cycles in females and cluster headache symptoms [47]. This highlights the need for future studies to include, if possible, females as well as a trans-ancestry analysis. We did not systematically collect data on BMI in the control group, which is a limitation. Additionally, we did not investigate other neuroendocrine axes, such as cortisol, which could potentially influence testosterone concentrations. In this study, we did not investigate the clinical phenotype either by testicular examination or by validated questionnaires, so we cannot draw any conclusions about whether compensated hypogonadism is associated with altered physical appearance or other clinical symptoms.
Conclusion Our results demonstrate that the male endocrine system is altered to a state of compensated hypogonadism in patients with cluster headache. The reduction in fT relative to LH is independent of sleep pattern and use of acute medication, and this, combined with the lack of normalization of hormone concentrations in the remission phase, suggests that the association is part of cluster headache pathophysiology. This is further supported by the identification of a common risk allele for cluster headache and fT. Future investigations are warranted to investigate the causality and the treatment potential of the subclinical testicular dysfunction in cluster headache.

[FIGURE 2 caption: Violin plot of the free testosterone/luteinizing hormone (fT/LH) ratio in serum in participants with different types of cluster headache and in healthy, sex- and age-matched controls. The central axis of the violin plot represents the interquartile range, while the symmetrical side sections illustrate the probability density of the data. * p < 0.05, *** p < 0.001, **** p < 0.0001, ns: not significant.]

[FIGURE 4 caption: Shared genetic risk variants of cluster headache and serum free testosterone concentration. Visualization of the shared genetic risk variants between cluster headache and the serum concentration of free testosterone. The dotted line indicates nominal significance of association with cluster headache.]

[TABLE 1: Demographics of the populations. Numerical values are presented as mean with SD if normally distributed, otherwise as median with IQR. TABLE 2: Variables on the day of collection. TABLE 3: Concentrations of male sex hormones in the population.]
Analyzing Kinase Similarity in Small Molecule and Protein Structural Space to Explore the Limits of Multi-Target Screening

While selective inhibition is one of the key assets of a small molecule drug, many diseases can only be tackled by simultaneous inhibition of several proteins. Ligands targeting human kinases are an example where achieving selectivity is especially challenging. This difficulty arises from the high structural conservation of the kinase ATP binding sites, the area targeted by most inhibitors. We investigated the possibility of identifying novel small molecule ligands with pre-defined binding profiles for a series of kinase targets and anti-targets by in silico docking. The candidate ligands originating from these calculations were assayed to determine their experimental binding profiles. Compared to previous studies, the acquired hit rates were low in this specific setup, which aimed not only at selecting multi-target kinase ligands but also at designing out binding to anti-targets. Specifically, only a single profiled substance could be verified as a sub-micromolar, dual-specific EGFR/ErbB2 ligand that indeed avoided its selected anti-target BRAF. We subsequently re-analyzed our target choice and in silico strategy based on these findings, with a particular emphasis on the hit rates that can be expected from a given target combination. To that end, we supplemented the structure-based docking calculations with bioinformatic considerations of binding pocket sequence and structure similarity as well as ligand-centric comparisons of kinases. Taken together, our results provide a multi-faceted picture of how pocket space can determine the success of docking in multi-target drug discovery efforts.

Introduction Small-molecule modulators of protein function are the most frequent type of molecules in use for the treatment of diseases due to their favorable pharmacokinetic properties [1]. Such ligands bind to cavities on protein surfaces (the binding sites) and compete with substrates or native ligands, or they alter the protein conformation. For such a molecule to become an efficacious drug, it has to possess adequate affinity for its protein target, solubility, membrane permeability and stability. Furthermore, its overall binding profile has to be compatible with its intended mode of action. On the one hand, unintended binding to proteins other than the primary target can cause side effects. On the other hand, several […]

The results of our study allow us to reflect on the similarity boundaries determining the suitability of structure-based drug design (SBDD) to successfully address a specific multi-target combination. In particular, they show the necessity for ever-larger libraries that hold diverse molecules, in order to increase the likelihood of identifying ligands tailored towards predefined selectivity profiles.

Results and Discussion Herein, the selected kinase profiles are rationalized first, and the virtual screening results against these panels are discussed. Then, the experimental results for the selected compounds are presented. Finally, the similarity between the kinases of the studied profiles is analyzed with respect to different ligand- and protein-centric measures.

Kinase Profiles We focused our analysis on a target panel comprising kinases with medical relevance as well as a typical anti-target known to be associated with frequent side effects of kinase inhibitors. All kinases in this set have been thoroughly characterized in the literature and are summarized in Table 1.
Table 1 (partial; kinase a | gene | UniProt ID | group | family):
PI3K | … | … | Atypical | PIK
VEGFR2 | KDR | P35968 | TK | VEGFR
BRAF | - | P15056 | TKL | RAF
CDK2 | - | P24941 | CMGC | CDK
LCK | - | P06239 | TK | Src
MET | - | P08581 | TK | MET
p38α | MAPK14 | Q16539 | CMGC | MAPK
a EGFR, epidermal growth factor receptor; ErbB2, Erythroblastic leukemia viral oncogene homolog 2; PI3K, phosphatidylinositol-3-kinase; VEGFR2, vascular endothelial growth factor receptor 2; BRAF, rapidly accelerated fibrosarcoma isoform B; CDK2, cyclic-dependent kinase 2; LCK, lymphocyte-specific protein tyrosine kinase; MET, mesenchymal-epithelial transition factor; p38α, p38 mitogen activated protein kinase α.

The Erythroblastic leukemia viral oncogene homolog (ErbB) subclass of Receptor Tyrosine Kinases (RTKs) consists of four members, named from ErbB1 (better known as epidermal growth factor receptor [EGFR]) to ErbB4; they bind the EGF family of peptides with their extracellular region [22]. The ErbB family is involved in the regulation of a multitude of signaling pathways associated with cell development. It is thus not surprising that aberrant ErbB signaling occurs in many cancers. Of note, patients with altered EGFR and ErbB2 expression suffer from a more aggressive disease. Breast cancer overexpressing ErbB2, especially, is associated with poor patient prognosis [23]. Unfortunately, therapy is often effective only for a short time, and tumors will escape inhibition by activating pathways downstream of ErbB receptors via other kinases. This has been demonstrated for the phosphatidylinositol-3-kinase (PI3K) pathway, which is directly or indirectly activated by most ErbBs [24]. After initial downregulation of PI3K activity upon inhibition of ErbBs, this pathway often recovers. Combination therapies are used to circumvent this problem, albeit with limited success. There is also evidence that tumor cells escape the negative effects of EGFR inhibition by upregulating tumor angiogenesis-promoting growth factors. A study used two antibodies, against EGFR and VEGFR2 (vascular endothelial growth factor receptor 2) respectively, to treat gastric cancer grown in nude mice [25]. The combination resulted in significantly greater inhibition of tumor growth.

Based on these experimental observations, we aggregated the investigated kinases in "profiles" (Table 2). Profile 1 combined EGFR and ErbB2 as targets (indicated by a '+') and BRAF (from rapidly accelerated fibrosarcoma isoform B) as a (general) anti-target (designated by a '−'). Out of similar considerations, Profile 2 consisted of EGFR and PI3K as targets and BRAF as anti-target. This profile is expected to be more challenging, as PI3K is an atypical kinase and thus less similar to EGFR than, for example, ErbB2 used in Profile 1. Profile 3, comprised of EGFR and VEGFR2 as targets and BRAF as anti-target, was contrasted with the hit rate that we found with a standard docking against the single target VEGFR2 (Profile 4).

Table 2. Definitions of kinase profiles and the numbers of screening compounds selected for each profile (ID | kinase profile a | no. of tested compounds):
1 | +EGFR +ErbB2 −BRAF | 18 b,c
2 | +EGFR +PI3K −BRAF | 9 b
3 | +EGFR +VEGFR2 −BRAF | 8 c
4 | +VEGFR2 | 4
a + and − indicate targets and anti-targets, respectively. b Three compounds are identical between Profiles 1 and 2 but were independently selected from the docking calculations against both profiles. c One compound is identical between Profiles 1 and 3 but was independently selected from the docking calculations against both profiles.
To broaden the comparison and obtain an estimate for the promiscuity of each compound, the kinases CDK2 (cyclic-dependent kinase 2), LCK (lymphocyte-specific protein tyrosine kinase), MET (mesenchymal-epithelial transition factor) and p38α (p38 mitogen activated protein kinase α) were included in the experimental assay panel and the structure-based bioinformatics comparison as commonly used anti-targets.

Virtual Screening against Kinase Profiles Following our previous approach to identify ligands with tailored selectivity profiles by virtual screening [6], the aim of this study was to evaluate the possibility of adding anti-targets to a kinase profile. We hence modified our previous approach to incorporate profiles with more than two kinases, multiple structures per kinase, and the selection of targets and anti-targets (Equation (1) in Section "Data and Methods"). Starting from the EGFR/ErbB2 pair, we included BRAF as a promiscuous anti-target, resulting in Profile 1 (see Section 2.4.1 for a discussion of promiscuity values). We therefore prioritized molecules with high rank (i.e., favorable docking scores) in EGFR and ErbB2 as well as low rank (i.e., unfavorable docking interactions) in BRAF. The ZINC lead-like and ZINC drug-like subsets, containing 4.6 and 10.6 million molecules, respectively, were docked into each of the selected structures of these kinases (cf. "Data and Methods"). After docking the smaller lead-like subset to EGFR, ErbB2 and BRAF, the kinases comprising Profile 1, we identified a high mutual overlap in terms of well-ranked compounds between these three kinases (6982 common compounds in the top-ranked 25,000 compounds for EGFR and ErbB2, 4732 for ErbB2/BRAF and 4675 for EGFR/BRAF, respectively, each number representing the maximum over all pairwise comparisons of all docking runs of the lead-like ZINC subset into the different structures of these kinases). Thus, many promising poses in EGFR/ErbB2 were invalidated by a high rank in the anti-target BRAF. Therefore, we deemed the docking of the larger drug-like subset necessary to obtain a sufficient number of poses with reasonable binding modes to select from after re-ranking. The re-ranking procedure was devised to prioritize molecules matching the requested profile, i.e., molecules with favorable docking rank in all targets but unfavorable docking ranks in all anti-target structures (see "Data and Methods" for details). Finally, we selected 18 molecules (see Table 2 and Table S1) for this profile based on visual inspection (see "Data and Methods" for more detail) from the re-ranked lists of both molecule sets and evaluated these experimentally. Similarly, for Profile 2, using EGFR and PI3K as targets and again BRAF as an anti-target (Table 2), we docked both the ZINC lead-like and drug-like subsets. Again, we deemed the drug-like subset necessary due to the large overlap of the top-scoring lead-like molecules of the targets with the ones ranked favorably in the anti-target (4683, 4675 and 6591 for EGFR/PI3K, EGFR/BRAF, and PI3K/BRAF, respectively). For this profile, we selected nine molecules (Table 2 and Table S1). The parallel docking calculations for Profiles 3 and 4 yielded eight and four candidate ligands, respectively (Table 2 and Table S1). For Profile 3, the number of common molecules in the top 25,000 was 4610 and 5544 for VEGFR2/EGFR and VEGFR2/BRAF, respectively. As above, the overlap between EGFR and BRAF was 4675.
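Equation (1) itself is not reproduced in this excerpt, so the following Python sketch should be read as one plausible implementation of the described re-ranking and top-N overlap analysis, not the authors' actual scoring function; compound identifiers, kinase keys and the toy ranks are placeholders.

```python
# Sketch of profile-based re-ranking: each kinase maps compound IDs to docking
# ranks (1 = best). The profile score rewards good (small) ranks in all targets
# and poor (large) ranks in all anti-targets. One plausible reading of the
# described procedure, not the authors' Equation (1).

def profile_score(compound, targets, anti_targets, ranks):
    """Lower is better: worst target rank minus best anti-target rank."""
    worst_target_rank = max(ranks[k][compound] for k in targets)
    best_anti_rank = min(ranks[k][compound] for k in anti_targets)
    return worst_target_rank - best_anti_rank

def rerank(compounds, targets, anti_targets, ranks):
    return sorted(compounds,
                  key=lambda c: profile_score(c, targets, anti_targets, ranks))

def top_n_overlap(ranks_a, ranks_b, n=25_000):
    """Common compounds among the n best-ranked of two docking runs, as used
    above to gauge the overlap between kinases."""
    top_a = {c for c, r in ranks_a.items() if r <= n}
    top_b = {c for c, r in ranks_b.items() if r <= n}
    return len(top_a & top_b)

# Toy example with three compounds and Profile 1 (+EGFR +ErbB2 -BRAF):
ranks = {
    "EGFR":  {"c1": 10, "c2": 500, "c3": 40},
    "ErbB2": {"c1": 25, "c2": 300, "c3": 90},
    "BRAF":  {"c1": 90_000, "c2": 20, "c3": 70_000},
}
order = rerank(["c1", "c2", "c3"], ["EGFR", "ErbB2"], ["BRAF"], ranks)
print(order)  # c1 first: well ranked in both targets, poorly ranked in BRAF
```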
Experimental Validation In total, 24 compounds selected from Profiles 1 and 2 (Table 2 and Table S1) were tested in the DiscoverX assay against the kinases EGFR, ErbB2, BRAF, VEGFR2, LCK, CDK2, MET, p38α and PI3K (Table S2), as well as in an additional confirmatory assay by Eurofins against EGFR, ErbB2, BRAF and PI3K (Table S3). Only one of the 24 compounds, DS39984, showed measurable binding to the desired kinases (Profile 1, Table 3 and Tables S1-S3), while binding to neither Profile 1's anti-target BRAF nor any of the other tested kinases (VEGFR2, CDK2, LCK, MET, p38α and PI3K). This compound emerged from the screening campaign against Profile 1 (+EGFR+ErbB2−BRAF) and was picked from the drug-like subset of the ZINC database. We further validated the binding of this ligand and determined binding curves in an independent assay, with IC50 values of 324 and 220 nM against EGFR and ErbB2insYVMA (a variant of ErbB2 with an insertion of four residues distant from the binding pocket), respectively (Table 3, Rauh Lab); note that both enantiomers were docked, with the R-enantiomer more favorably ranked, but the racemate was tested. As shown in the predicted binding modes in EGFR and ErbB2 (Figure 1), DS39984 adopts a similar binding orientation in both proteins, with the pyrimidine portion forming a hydrogen bond to the hinge region. The methylester moiety is oriented more towards the back of the binding pocket, where both kinases feature rather voluminous cavities. This predicted binding mode at the hinge region is consistent with the sensitivity of DS39984 towards the T790M mutation: affinity for the EGFR L858R/T790M double mutant is abolished (IC50 > 10 µM), whereas the affinity for the EGFR L858R mutant is 2351 ± 397 nM. In contrast, in both BRAF structures used herein, the predicted poses are flipped and have their methylester moiety pointing towards the solvent (Figure S1). A hinge-binding interaction similar to that in EGFR and ErbB2 is only present in one of the two poses (in the docking to BRAF structure 1UWH). This occurs despite the fact that in the 1UWH crystal structure the deep back pocket is open due to the crystallized ligand. Thus, in principle, a binding mode of DS39984 similar to the ones predicted in EGFR and ErbB2 is not per se excluded in BRAF for steric reasons. Note that DS39984 is not present in ChEMBL and has low similarity to known kinase ligands in ChEMBL (no ligand with Tanimoto similarity >0.7 as implemented in the ChEMBL web interface as of 18 October 2020). Furthermore, none of the additionally tested kinases (LCK, CDK2, MET and p38α) were inhibited by the molecule, which, together with the absence of BRAF inhibition, underlines the potential of DS39984 as a novel, selective nanomolar EGFR and ErbB2 inhibitor. Eight compounds were selected for Profile 3 (+EGFR+VEGFR2−BRAF, Table 2 and Table S1) and tested in the DiscoverX assay against EGFR, VEGFR2, BRAF and ErbB2. However, none of the compounds exhibited a relevant effect against any of these kinases. To crudely estimate the ligandability of VEGFR2, we docked against this target individually (Profile 4). However, we did not observe many poses that passed our visual inspection (see "Data and Methods" for details) and were able to select only four compounds from the docking to VEGFR2. These were tested in the same assay. Again, none of these compounds showed an effect on VEGFR2 activity.
While the number of tested compounds is certainly too small to draw clear conclusions, the fact that only few compounds could be considered in the first place, and that those few were inactive, might indicate that VEGFR2 is more challenging with respect to the identification of ligands by docking than, for example, EGFR and ErbB2. One explanation for this could be associated with the fact that the vast majority of VEGFR2 structures show DFG-out(like) conformations (ratio of DFG-in/out(like) structures in the PDB: 5/34 for VEGFR2 compared to 168/22 for EGFR, as of KLIFS 25 November 2020). Note that several FDA-approved kinase inhibitors bind to DFG-out(like) VEGFR2 conformations, e.g., axitinib, sunitinib and sorafenib [26]. In contrast, we used DFG-in conformations of VEGFR2 for docking in order to maximize comparability with the other kinase structures used. Unexpectedly, however, we found that one of these four compounds selected for VEGFR2 inhibition, K001MM011, actually inhibited EGFR and, to a lesser extent, ErbB2 (Table 3 and Table S2). While K001MM011 was picked from the docking to VEGFR2 only, we retrospectively inspected the ranking of this compound in the docking to EGFR and ErbB2. In EGFR, K001MM011 was ranked within the best 10,000 compounds (rank 9527) of the lead-like subset in PDB 3POZ, while in ErbB2 K001MM011 was not ranked as highly (best rank: 123,665 in PDB 3PP0). In light of these experimental results and the comparative scarcity of ligands with the intended profiles, we decided to investigate the kinases involved more closely, with a view towards the possibility to predict the sensibility of a particular target combination.

Kinase Similarities Designing kinase inhibitors with intended dual-target activity that avoid binding to one or several specific anti-targets is a non-trivial task, as evidenced by the docking part of our study. To better understand how difficult it may be to design such inhibitors rationally, five different measures of inter-kinase similarity, each contributing a different level of granularity and a different viewpoint, were investigated (Figure 2). Such an analysis potentially enables a priori estimation of the success of these endeavors for a given target/anti-target profile.

Ligand Profile Similarity (LigProfSim) A first glance at the ChEMBL kinase ligand subsets revealed that none of the investigated kinases seems to be overly selective in terms of the ligands it recognizes, which is in accordance with previous kinome-wide profiling studies [21,27]. Given that the promiscuity values (Table 4, diagonal of Figure 2A and Table S4) range from 0.55 for CDK2 to 0.82 for BRAF, all nine kinases bind more than half of the compounds tested against them at an affinity cut-off of 500 nM. Accordingly, BRAF is the most promiscuous kinase in the set, justifying its use as a general kinase anti-target in this study. Second, considering LigProfSim, it becomes evident that EGFR, ErbB2 and BRAF are more similar to each other than to the remaining kinases (top-left quarter of Figure 2A), which renders finding a compound for Profile 1 (Table 2) a difficult task. With LigProfSim values of 0.53 and 0.55, EGFR is more similar to ErbB2 and BRAF, respectively, than to any other kinase in the set (Table S4). The same holds true for ErbB2, while BRAF also has higher similarities to other kinases in the set. In contrast, with a mean similarity value of 0.18, PI3K has the lowest mean LigProfSim to all nine kinases.
This is not unexpected, given that PI3K is the only atypical kinase in the set, but it underlines how challenging the definition of Profile 2 is. Note that, while 4150 compounds were tested against PI3K (with 2706 being active), PI3K has fewer than five common actives with most kinases, except for EGFR (13 common actives of 180 compounds tested against both targets) and VEGFR2 (32 of 175) (see Table 5 and Tables S5 and S6). While all kinases were assayed against at least 1500 compounds, a few other kinase pairs not including PI3K exist that have only a low number of tested compounds in common, e.g., CDK2/BRAF (14), CDK2/p38α (8) or ErbB2/p38α (9; see Table S5), which makes thorough comparison difficult. Finally, with a value of 0.35, EGFR and VEGFR2 do not show high similarity from this ligand-centric perspective, while, as mentioned above, VEGFR2 and BRAF show considerably higher similarity (0.77). These numbers indicate that Profile 3 is very difficult.

Table 4. Kinase promiscuity measures, calculated as the ratio of ligands active on a specific kinase (column 2). In columns 3-6, mean values and standard deviations (s.d.) of ligand profile similarity (LigProfSim), pocket sequence similarity (PocSeqSim), interaction fingerprint similarity (IFPSim) and pocket structure similarity (PocStrucSim) per kinase are given. Note: two kinases having a similar mean value for a particular similarity measure does not imply that they are similar to each other (especially when large s.d. values are associated with the measure; see Figure 2 for pairwise kinase comparisons).

Pocket Sequence Similarity (PocSeqSim) Classically, kinases are clustered based on their full sequence similarity, such as in the well-known phylogenetic human kinome tree by Manning et al. [11]. The kinome tree is often consulted when checking for relationships among kinases, cross-reactivity and anti-targets. Arguably, EGFR and ErbB2 are the most closely related kinases in the set, both belonging to the TK branch and the EGFR family, followed in similarity by VEGFR2 (TK branch, VEGFR family). BRAF is less closely related (tyrosine-kinase-like [TKL] branch, RAF family). Finally, PI3K belongs to the atypical kinases and is only distantly related. Full kinase details are listed in Table 1. Here, we refined this sequence-based view of similarity to consider only the 85 residues forming the binding site in each kinase (PocSeqSim). Also in this "pocket sequence" space, the two EGFR family members EGFR and ErbB2 show the highest similarity of 0.89 (Figure 2B, numbers in Table S7). All other kinase pairs have similarity values below 0.48, thus fewer than 50% identical pocket residues. VEGFR2, MET and LCK, three other kinases from the TK class, have PocSeqSim between 0.42 and 0.47 to EGFR and ErbB2; BRAF (TKL), p38α and CDK2 (both from the CMGC group) have values in the range of 0.32 to 0.40. Again, PI3K shows the lowest similarity to all other eight kinases. This indicates, first, that the pocket sequence similarities follow a similar trend as the whole-sequence similarities and, second, that, due to the close relationship of EGFR and ErbB2, other less similar kinases of the TK branch such as VEGFR2, MET and LCK, but also BRAF (TKL), p38α and CDK2 (both from the CMGC group), could be easier-to-satisfy anti-targets of +EGFR+ErbB2 ligands (Figure 2B).
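As described, PocSeqSim reduces to the fraction of identical residues over the 85 aligned binding-site positions. A minimal sketch follows; the residue strings are illustrative placeholders, and a KLIFS-style alignment of pocket residues is assumed to be given.

```python
def pocket_seq_similarity(pocket_a: str, pocket_b: str) -> float:
    """Fraction of identical residues over aligned binding-site positions.

    Both inputs are one-letter residue strings of equal length (85 positions
    in the setup described here), already aligned to a common numbering.
    """
    if len(pocket_a) != len(pocket_b):
        raise ValueError("pocket sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(pocket_a, pocket_b))
    return matches / len(pocket_a)

# Toy 10-residue example (real pockets would have 85 aligned positions):
print(pocket_seq_similarity("LGAEKVTMFA", "LGSEKVTLFA"))  # 0.8
```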
Note that, for each kinase pair, all available X-ray structures were compared and that only the similarity between the highest-scoring pair is reported (Figure 2C, numbers in Table S8). In the IFPSim matrix, the diagonal describes the best match among all pairwise IFP comparisons between different structures from the same kinase. Interestingly, ErbB2 has a self-similarity of only 0.71. This could be a consequence of the relatively low structural coverage of this kinase. In fact, ErbB2 is only represented by two structures, whereas, for EGFR, 150 structures are available (Table 5). With mean similarity values between 0.61 (lowest for PI3K) and 0.83 (highest for VEGFR2), the IFPSim values are generally higher than the LigProfSim and PocSeqSim values described above (Table 4). EGFR has a high mean similarity to all kinases of 0.81, whereas ErbB2 has a lower mean value of 0.64; note again the low structural coverage of ErbB2. While ErbB2 is most similar to EGFR (0.78) with respect to IFPSim (Figure 2C), it is less similar to BRAF (0.65), which would favor the development of a Profile 1 (+EGFR+ErbB2−BRAF) inhibitor. Interestingly, PI3K shows one of the highest similarities to EGFR (0.65), while it is less similar to BRAF (0.52), which, in contrast to other similarity measures, would support the feasibility of designing +EGFR+PI3K−BRAF compounds (Profile 2). In the case of VEGFR2, although similarity to EGFR is high (0.83), we observe an even higher similarity to BRAF (0.93), giving another indication of how difficult it may be to design out this anti-target. On the other hand, the comparatively high similarity of VEGFR2 to EGFR might give an indication of why our Profile 4 compound actually inhibited EGFR. Pocket Structure Similarity (PocStrucSim) Similarities with respect to structural and physicochemical properties of the binding sites were analyzed using the CavBase fast cavity graph comparison algorithm [28,29] (Figure 2D, numbers in Table S9). Note that binding sites were automatically detected using LigSite and thus may vary in precision throughout the different structures, even within the same kinase. Pairwise kinase similarities range from 0.16 (PI3K/ErbB2) to 0.61 (BRAF/VEGFR2 and LCK/VEGFR2) and are-with a mean value of 0.46 over all kinase pairs-generally lower than the IFPSim values described above (Table 4). Interestingly, EGFR and ErbB2 share only moderate similarity in this measure (0.40), while EGFR is more similar to all other kinases (including BRAF; 0.52), except PI3K (0.24). However, it should be noted that the structural coverage for ErbB2 and PI3K is much lower than for the other kinases, with only two structures each (Table 5). Note that EGFR is most similar to the anti-target BRAF (0.52). Thus, according to PocStrucSim, it appears difficult to develop ligands against all multi-target profiles (1-3, Table 2). Docking Rank Similarity (DockRankSim) Finally, we leveraged the results of our docking experiments to derive a complementary similarity measure based on the rank correlation of the docked lead-like compounds (Figure 2E). DockRankSim values were calculated using only the top-scoring 25,000 lead-like molecules for each structure (about 0.5% of the ZINC lead-like subset at that time), since control calculations taking into account the entirety of docked molecule sets showed poor discrimination between different kinases. This lack of discrimination is likely due to the fact that the majority of molecules in the lead-like set are not kinase inhibitor-like.
Therefore, the docking rank order of molecules past a certain threshold is noisy, i.e., all of them are more or less equally unlikely to bind. However, they will still receive different ranks based on small scoring differences, and these different ranks will lead to rather different-yet meaningless-correlations between the rankings. Only the five kinases that were included in the four docking profiles (Table 2) were considered, i.e., no values for CDK2, LCK, MET and p38α were determined. EGFR and ErbB2 have by far the highest mutual similarity of 0.3 within this set of kinases and a DockRankSim below 0.12 to all other kinases. While their higher mutual DockRankSim is not surprising given the close relationship between EGFR and ErbB2, it is encouraging that the docking results capture this. Interestingly, the second highest DockRankSim observed is between PI3K and BRAF (0.15), followed by BRAF and VEGFR2 (0.13) as well as PI3K and VEGFR2 (0.13). This is surprising, as PI3K, as an atypical kinase, shares a rather low similarity to the remaining kinases according to most other measures employed in this study (Figure 2A-D). The remaining DockRankSim values are around 0.1, which seems to be the center of the distribution. The smallest DockRankSim was observed between EGFR and PI3K (0.04), an indication that Profile 2 (+EGFR+PI3K−BRAF) inhibitor design might be a challenge, at least computationally. Comparison of Similarity Analyses To shed light on the ease of identifying inhibitors for the respective profiles, and on whether the likelihood of success of multi-target design endeavors can be predicted, five different protein similarity measures were calculated (Figure 2A-E). While the individual relationships between the nine kinases studied differ according to the five measures (which might also be due to missing data or noise in the data, as discussed above), several trends can be observed. The similarity scores of the PocStrucSim and the IFPSim comparisons are distributed more evenly and clearly correlate with each other (R = 0.78, p < 0.001, Figure S2). In addition, the pocket structure- and sequence-based comparisons follow a similar trend (PocStrucSim vs. PocSeqSim R = 0.73, p < 0.001). All other pairwise comparisons are less correlated, showing values in the range of R = [0.55, 0.59] with p < 0.001 (Figure S2). While several measurements appeared to be correlated, differences between them are not surprising, since the measures capture diverse and thus complementary views of similarity. Nonetheless, it should be noted that the calculated values highly depend on the amount of available data. The conformational space of a kinase might be underrepresented if few kinase structures are available, which affects the structure-related measurements. Furthermore, since ChEMBL only provides a very sparse kinase-compound matrix of experimental measurements, the basis of compounds considered per kinase pair may differ strongly, affecting the LigProfSim values (as well as the promiscuity as defined here). Besides PocStrucSim, all other measures imply a high similarity between EGFR and ErbB2, which is in favor of +EGFR+ErbB2 inhibitor design. Furthermore, LigProfSim, PocStrucSim and PocSeqSim suggest BRAF as a relevant and frequent anti-target, while this is less clear-cut for the IFPSim and DockRankSim measures. This fact renders design for all three profiles a challenging task. Furthermore, while PI3K is very dissimilar to EGFR from a sequence point of view (cf.
Manning tree annotation), it showed higher similarity based on other measures such as IFPSim, which is encouraging for Profile 2 (+EGFR+PI3K−BRAF) design. In this sense, the fact that our docking results did not yield compounds with such a profile would suggest that a similarity to the anti-target (in this case, BRAF) that is larger than the similarity to the intended target could be a key factor complicating the detection of the desired compounds. Overall, our analyses suggest that ligand-, sequence- and structure-based approaches complement each other and can thus yield consistent insights into kinase similarities. It therefore seems advisable to carry out all of these analyses before a (virtual) screening campaign in order to take appropriate steps, e.g., adaptation of the molecule library to be screened, early on. Our ranking comparisons also suggest that a similarity between one of the targets and the anti-target that is higher than the similarity between the two intended targets can be used as a prognostic indicator for difficult multi-target profiles. Docking-Based Virtual Screening Kinase crystal structures that were suitable for docking in general, as well as for the purpose discussed herein in particular, were carefully selected from the Protein Data Bank [14]. Structures were prioritized based on their resolution and the number of missing heavy atoms, with a focus on residues in and around the binding site. Furthermore, structures for target pairs were selected such that the structures for the two kinases involved were as similar as possible. The rationale behind this aim was to maximize the possibility of identifying inhibitors binding to both structures. This structural similarity included the overall state of the kinase structure, as determined by the conformation of the DFG and αC motifs, as well as visual comparisons of the binding site residues. Structures with similar side-chain conformations of equivalent amino acids were preferred, as far as such structures existed and the equivalence of amino acids could be rationally established, i.e., for homologous amino acids in EGFR/ErbB2 structure pairs, whereas this was not applicable to, e.g., EGFR/PI3K structure pairs due to their higher dissimilarity. Finally, the crystal structures (PDB IDs given in parentheses) for EGFR (1XKK [30], 3POZ [31]), ErbB2 (3PP0 [31], 3RCD [32]), BRAF (1UWH [33], 3PPK [34]), PI3K (4JPS [35]) and VEGFR2 (2P2H, 3WZD [36]) were downloaded from the PDB (a summary of structural details is presented in Table 6; the in/out orientations of the conserved DFG motif and the conformation of the αC-helix listed there are annotations from KLIFS [12]). The structures were prepared following the protocol in Kolb et al. [37]. Briefly, the first protein chain was used in case several were crystallized. Hydrogens were placed and minimized using the CHARMM (version 31b2) HBUILD command. The ZINC12 [38] lead-like and drug-like subsets (as of July 2015), containing 4.6 and 10.6 M molecules, respectively, were docked into the prepared receptor structures using DOCK 3.6 [39][40][41][42][43] as described in Schmidt et al. [6]. For EGFR, for which a ligand/decoy set is available from DUD-E [44], the prepared structures were additionally validated by their ability to enrich ligands over decoys. AUC values were found to be 0.87 (1XKK) and 0.85 (3POZ), which compares favorably to the value of 0.84 as published by DUD-E [44].
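Such an enrichment check amounts to a ROC AUC over the pooled, score-sorted ligand/decoy list; here is a minimal sketch (toy scores only; the sign flip reflects the convention, assumed here, that lower docking scores are better):

```python
from sklearn.metrics import roc_auc_score

def enrichment_auc(ligand_scores, decoy_scores):
    """AUC for separating known ligands from decoys by docking score."""
    labels = [1] * len(ligand_scores) + [0] * len(decoy_scores)
    preds = [-s for s in ligand_scores + decoy_scores]  # lower score = better
    return roc_auc_score(labels, preds)

# Toy numbers; real inputs would be the DUD-E EGFR ligand and decoy scores.
print(enrichment_auc([-45.2, -39.8, -41.0], [-30.1, -35.5, -28.9]))  # 1.0
```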
Based on these docking results, compounds were re-scored according to the different selectivity profiles of interest. In our previous work, we introduced a selectivity score for protein pairs, i.e., two docking runs, both being considered as targets. Compounds were penalized for unfavorable (i.e., high) ranks in each docking run as well as for a high rank difference between these two docking calculations (i.e., good/bad performance in docking A/B; Equation (1) in Schmidt et al. [6]). Here, this procedure was extended to be applicable to more than two proteins, multiple structures per protein and the proper incorporation of anti-targets. Specifically, the docking calculations for multiple structures of the same kinase (e.g., 1XKK and 3POZ for EGFR) were aggregated by using only the best (i.e., numerically smallest) rank in any of the structures. Second, anti-targets were incorporated by inverting the docking rank order, based on the idea that a good docking performance is disfavored in anti-targets. Third, the equation was extended to multiple proteins by using the average rank (note that ranks for anti-targets were inverted beforehand) in all protein docking calculations of the respective profile (e.g., EGFR, ErbB2 and BRAF) and the rank difference between the highest and lowest docking rank in all proteins. Finally, in contrast to our previous procedure [6], logarithmic ranks were used to focus on the top-scoring molecules, based on the notion that the docking scores (and hence docking ranks) become less discriminating beyond the first few percent of the docked database for very large (and diverse) ligand sets, such as the ones used herein. Altogether, the score S of a molecule for the profile comprising kinases 1 to N combines the average logarithmic rank with the logarithmic rank spread; schematically, S = (1/N) Σ_k ln P_k + [max_k ln P_k − min_k ln P_k], with P_k = min_s R_{k,s}, where R_{k,s} = r_{k,s}/m_{k,s} if kinase k was defined as target, or R_{k,s} = (m_{k,s} − r_{k,s} + 1)/m_{k,s} if kinase k was defined as anti-target. Here, P_k denotes the rank of a compound in kinase k aggregated over all structures s of this kinase, and R_{k,s} denotes the scaled docking rank of the compound, calculated from the nominal docking rank r_{k,s} of this compound and the total number of molecules m_{k,s} that were docked into the s-th structure of the k-th kinase. The poses of molecules receiving top ranks after applying this rescoring were visually inspected in their respective protein structure. This inspection is necessary in order to remove compounds which are ranked favorably for the wrong reasons, i.e., because of deficiencies in present-day force fields. Examples are unsatisfied hydrogen bond donors; burial of polar protein residues through apolar ligand moieties; charge mismatches; and ligand conformations with high strain. DiscoverX KINOMEscan Ligand binding experiments for the molecules selected from Profiles 1 and 2 towards nine kinases (EGFR, ErbB2, LCK, CDK2, BRAF, MET, p38α, PI3K and VEGFR2) and for molecules selected from Profiles 3 and 4 towards four kinases (EGFR, ErbB2, BRAF and VEGFR2) were carried out by DiscoverX using the supplied protocol as described in the Supplementary Materials. Briefly, ligand affinity was measured by competition with a resin-bound standard ligand, and the washed-off kinase concentration was determined via qPCR. Summarizing, binding of a compound to a kinase was tested in comparison to a control compound (see Table S2). Lower values generally indicate a higher affinity of the compound to the protein, with values below 35% being considered as significant binding according to the information of DiscoverX.
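To make the rescoring described above tangible, here is a small sketch of the profile score as reconstructed; the published equation may differ in details such as normalization or logarithm base, so this is illustrative rather than a reimplementation:

```python
import math

def profile_score(ranks, db_sizes, roles):
    """Multi-kinase rescoring sketch; lower scores are better.

    ranks:    {kinase: {structure: nominal docking rank r_ks}}
    db_sizes: {kinase: {structure: number of docked molecules m_ks}}
    roles:    {kinase: "target" or "anti-target"}
    """
    log_p = []
    for k, per_structure in ranks.items():
        scaled = []
        for s, r in per_structure.items():
            m = db_sizes[k][s]
            if roles[k] == "target":
                scaled.append(r / m)
            else:
                scaled.append((m - r + 1) / m)  # invert rank order
        log_p.append(math.log(min(scaled)))  # best structure per kinase
    return sum(log_p) / len(log_p) + (max(log_p) - min(log_p))
```

A compound ranked near the top of both target dockings and near the bottom of the anti-target docking obtains uniformly small P_k values, hence a strongly negative average and a small spread, i.e., a favorable score.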
Eurofins In Vitro Assay Kinase inhibition assays for EGFR, ErbB2, PI3K and BRAF were carried out by Eurofins Cerep following the protocols of Weber et al. [45] (EGFR), Qian et al. [46] (ErbB2), Sinnamon et al. [47] (PI3K) and Kupcho et al. [48] (BRAF). Briefly, except for PI3K, compounds were incubated with the respective kinase, ATP, and a substrate analog, and the effect of each compound on phosphorylation was measured. In the case of PI3K, the displacement of biotinylated PIP3 from a PIP3-binding complex by unlabelled PIP3 (produced from PIP2 by PI3K) was measured by Homogeneous Time Resolved Fluorescence (HTRF). Finally, inhibition of the respective kinases is calculated as the percentage inhibition of control activity. According to Eurofins, values above 50% inhibition represent significant inhibition and values between 25% and 50% weak inhibitory effects (Table S3). IC 50 Determination IC 50 determinations for EGFR, its mutants and ErbB2-insYVMA (Carna Biosciences, lot13CBS-0005K for EGFR-wt; Carna, lot13CBS-0537B for EGFR-L858R; Carna, lot12CBS-0765B for EGFR-L858R/T790M; and ProQinase, lot1525-0000-1/003 for ErbB2-insYVMA) were performed with the HTRF KinEASE-TK assay from Cisbio according to the manufacturer's instructions. Briefly, the amount of kinase in each reaction well was set to 0.60 ng EGFR-wt (0.67 nM), 0.10 ng EGFR-L858R (0.11 nM), 0.07 ng EGFR-T790M/L858R (0.08 nM), or 0.01 ng ErbB2-insYVMA (0.01 nM). An artificial substrate peptide (TK-substrate from Cisbio) was phosphorylated by EGFR or ErbB2. After completion of the reaction (reaction times: 25 min for EGFR-wt, 15 min for L858R, 20 min for L858R/T790M, and 40 min for ErbB2-insYVMA), the reaction was stopped by addition of buffer containing EDTA as well as an anti-phosphotyrosine antibody labeled with europium cryptate and streptavidin labeled with the fluorophore XL665. FRET between europium cryptate and XL665 was measured after an additional hour of incubation to quantify the phosphorylation of the substrate peptide. ATP concentrations were set at their respective K m values (9.5 µM for EGFR-wt, 9 µM for L858R, 4 µM for L858R/T790M and 6 µM for ErbB2-insYVMA), while substrate concentrations of 1 µM, 225 nM, 200 nM and 1 µM, respectively, were used. Kinase and inhibitor were preincubated for 30 min before the reaction was started by addition of ATP and substrate peptide. An EnVision multimode plate reader (Perkin Elmer) was used to measure the fluorescence of the samples at 620 nm (Eu 3+ -labeled antibody) and 665 nm (XL665-labeled streptavidin) 50 µs after excitation at 320 nm. The quotient of both intensities for reactions made with eight different inhibitor concentrations was then analyzed using the Quattro Software Suite for IC 50 determination. Each reaction was performed in duplicate, and at least three independent determinations of each IC 50 were made. Kinase Similarity Measures The nine protein kinases investigated in this study were compared with five measures: their ligand binding profiles (LigProfSim), pocket sequence (PocSeqSim), interaction fingerprint (IFPSim) and structural information (PocStrucSim), as well as docking ranks (DockRankSim). Ligand Profile Similarity (LigProfSim) To compare kinases from a ligand point of view, their similarity with respect to binding the same ligands was investigated. The kinase subset of ChEMBL v.27 [49] was used as the profiling dataset, assembled from https://github.com/openkinome/kinodata/releases/tag/_pub_ligprofsim (accessed September 2020).
Only compounds measured in binding assays yielding a standard activity value as IC 50 were taken into account. If the same compound was measured several times in the same assay (against the same kinase), only the lowest IC 50 value was kept (most active). Compounds were considered active against a kinase if their IC 50 value was below 500 nM, otherwise inactive. For each of the nine kinases studied here, the total number of measured compounds and the number of active compounds was determined ( Table 5). The pairwise ligand profile similarity (LigProfSim) between two kinases was calculated as the ratio of compounds active on both kinases divided by the total number of compounds tested on both kinases (Figure 2A, absolute values in Tables S4-S6). Note that, for the individual kinases, this "self-similarity" yields the fraction of active compounds with respect to all compounds tested, which can also be interpreted as a simple measure for promiscuity (Table 4). Pocket Sequence Similarity (PocSeqSim) Pocket sequences and binding site definitions were taken from the KLIFS database [15][16][17]. Based on the analysis of known kinase-ligand crystal structures, van Linden et al. [15] defined the ATP-binding pocket of kinases by 85 residues which cover most interactions with known inhibitors (front and back-cleft binders). These residues include known motifs such as the DFG motif, the hinge region and the αC-helix. To compare kinase binding sites based on sequences, the master multiple sequence alignment (MSA) of the 85 binding pocket residues for all human kinases available from KLIFS was used and the nine kinases investigated in this work were extracted. Pocket sequence similarity (PocSeqSim)-in this case residue identity-between two kinases was computed by comparing the residues at each of the 85 positions. Thus, the PocSeqSim for two binding site sequences equals the ratio of identical residues within the fixed length MSA of 85 positions. The score ranges from 0 to 1, where 0 indicates no identical residues and 1 indicates complete identity (Table S7). Interaction Fingerprint Similarity (IFPSim) All DFG-in and DFG-out structures for the nine human kinases under investigation, namely EGFR, ErbB2, PI3K, MET, CDK2, BRAF, p38α, LCK and VEGFR2, were fetched from the KLIFS database with https://github.com/volkamerlab/opencadd, which uses the KLIFS Swagger API [17]. This query yielded 2091 structures (as of 27 July 2020). Only structures with orthosteric ligands were kept (1817 structures). For many kinases, several PDB structures are available and many structures contain more than one chain (and occasionally also alternative models), which are provided as separate entries in KLIFS. Whenever one structure was represented by more than one chain/alternative model entry, only the entry with the highest KLIFS quality score [16] was selected (if two had the same quality, the first one was kept arbitrarily). The quality score describes the alignment and structure quality ranging from 0 (bad) to 10 (flawless). This yielded a filtered set of 965 kinase structures (numbers per kinase in Table 5). For every structure, KLIFS provides information on the kinase-ligand interaction stored in an Interaction FingerPrint (IFP). 
The IFP encodes seven different interaction types (hydrophobic contact, aromatic face-to-face, aromatic edge-to-face, H-bond donor-acceptor, H-bond acceptor-donor, ionic positive-negative and ionic negative-positive) that can potentially be formed between each of the 85 pocket residues and the respective ligand in a bit string as either present (1) or absent (0) [15,16]. The Tanimoto similarity between every IFP pair of the 965 structures was calculated, resulting in multiple structure-pair comparisons for each kinase pair. Finally, a reduced matrix of size 9 × 9 was produced in which, for each kinase pair, only the highest IFP similarity (IFPSim) score among all structure-pair scores was stored (Table S8). Pocket Structure Similarity (PocStrucSim) For the particular set of kinases investigated here, a set of 183 different PDB structures was compiled manually using the KLIFS dataset and a set of structures that had initially been considered for the docking screens. The manual selection was focused on choosing those kinase structures that featured binding sites similar to EGFR/ErbB2 and high structural quality (such as high resolution and few missing residues), also considering the correlation coefficient of the docking ranks. Furthermore, DFG-in and DFG-out structures were included to allow for diversity. After downloading the structures from the PDB, the files were processed with the API-RP package in the CSD Enterprise suite 2018 by CCDC, detecting all cavities using LigSite [50,51]. The predicted set of 909 cavities for 181 structures was further reduced by filtering for cavities containing at least one orthosteric ligand, resulting in 248 cavities from 176 different structures. It should be noted that some of these cavities emerged from different chains of the same structure and, therefore, contained the same ligand. Although the number of structures was decreased during this process, we made sure that at least two different structures for each kinase were still present in the final cavity set (Table 5). Furthermore, the set contained cavities for each of the structures used during the docking calculations, except for the structure with PDB ID 4JPS (PI3K), for which LigSite was not able to detect the correct cavity. Each of the remaining cavities was then compared to all other cavities using the fast graph comparison method by CCDC [29]. In brief, the binding pocket is described by a graph model based on a set of pseudocenters with assigned surface patches containing information about the properties of the surrounding amino acids. In addition to the original CavBase implementation, the new method includes convexity and concavity measures in the pseudocenters as shape representation. Finally, two binding pockets were compared using a clique detection algorithm which was improved from the original CavBase algorithm [28,29]. Last, as for the IFPSim measure, the maximum similarity over all structure comparisons per kinase pair is reported. Docking Rank Similarity (DockRankSim) The docking rank similarity was calculated based on the notion that similar structures enrich similar ligands in the docking process. The similarity between two docking runs, each targeting a certain structure, was quantified by calculating the Spearman rank correlation of the common molecule set of the top-scoring molecules of both dockings. More precisely, to calculate the DockRankSim between two dockings, the top-ranked 25,000 molecules in both dockings were taken and the molecules common to both sets identified.
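A compact sketch of this procedure, assuming each docking run is represented as a list of molecule IDs ordered best-first (names and data handling are illustrative):

```python
from scipy.stats import spearmanr

def dock_rank_sim(ranking_a, ranking_b, top_n=25000):
    """Spearman correlation over the intersection of two top-N docking lists."""
    top_a, top_b = ranking_a[:top_n], ranking_b[:top_n]
    common = set(top_a) & set(top_b)
    if len(common) < 2:
        return 0.0  # too little overlap for a meaningful correlation
    # Renumber ranks within the intersection, preserving each list's order.
    ranks_a = {mol: i for i, mol in enumerate(m for m in top_a if m in common)}
    ranks_b = {mol: i for i, mol in enumerate(m for m in top_b if m in common)}
    mols = sorted(common)
    rho, _ = spearmanr([ranks_a[m] for m in mols], [ranks_b[m] for m in mols])
    return rho
```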
For the calculation of the DockRankSim, only the dockings of the ZINC lead-like subset were considered. For this intersection, the ranks of the molecules were renumbered and the Spearman rank correlation was calculated. We restricted the calculation of the rank correlation to the top-scoring molecules, as we found this to lead to more discriminating DockRankSim values (data for full set not shown). A cutoff of 25,000 was identified to yield relevant results. However, it must be noted that this cutoff was not systematically optimized to yield the largest possible spread in DockRankSim values. The values calculated in this way describe how similar the compound rankings of two docking runs, i.e., two protein structures, are. To compare kinases instead of structures, we used the maximum observed DockRankSim of all pairwise structure comparisons between the respective two kinases. Conclusions In this study, we investigated parallel docking to disease-relevant kinase profiles, combining two targets and one anti-target. The choice of the initial profile was guided by biology: dual inhibitors of EGFR and ErbB2 are regarded as an advantageous treatment option for several carcinomas, whereas BRAF is a common undesired anti-target. While being biologically meaningful, this profile is also a challenging test case for the precision of docking calculations, given the mutual similarity of the ATP binding sites of the three kinases. Nonetheless, we were able to identify one ligand with the desired profile, namely compound DS39984 against Profile 1, with IC 50 values on the targets below 324 nM. Finding one such ligand is very close to the expectation value: assuming per-kinase hit rates of approximately 10-25%, the probability of matching the full profile is roughly 0.25 × 0.25 × 0.90 = 0.056, which for a selection of 18 molecules from the docking calculations amounts to about one expected hit. We then compared this with another profile combination, +EGFR+PI3K−BRAF (Profile 2), and at the same time investigated whether the likelihood for success (i.e., finding a ligand that fulfils the profile) can be predicted based on data derived from the protein structures. The profile +EGFR+PI3K−BRAF turned out to be hard to find a ligand for, and this was also reflected in the kinase similarity metrics (Figure 2). Finally, we tested a profile including EGFR and VEGFR2 as targets, due to the interest in them for cancer treatment, and tried again to design out binding to BRAF. As in the case of +EGFR+PI3K−BRAF, the higher similarity of VEGFR2 to BRAF (compared to EGFR) in most measures hints at why this docking did not yield the desired results. An alternative option, which would agree with the lack of positive results in the single docking performed for the target VEGFR2, is to select alternative starting structures, if available, or a different ligand database to further explore this profile. Based on our findings and the further investigations into different similarity measures of kinases, several conclusions about the factors that determine the likelihood of successful predictions in multi-target settings can be drawn. First, for the present set of kinases, the various measures we calculated in this work largely agree with respect to which kinases are more similar to each other. This is important, because it means that, for a first estimate, one can go with a measure that can be computed in a fast and computationally inexpensive way and already get a largely correct view of the relationship of the targets involved.
It also means that the ligand-centric and protein-centric views of ligand-protein interactions match to quite some degree. Second, we only managed to pick a few compounds from the docking runs, because few potential hits with plausible binding modes were identified in the top ranks of the combined scoring. Naturally, this means that the results for several of the profiles need to be interpreted with caution, as the numbers of data points are small. However, even if we had picked more compounds from lower ranks, the vast majority of them would likely have been inactive, as docking in general is able to prioritize ligands over nonbinders [52]. Third, the docking rank correlation of the top-ranked poses is very low (Figure 2E), which indicates that there exists only a limited number of substances in chemical databases for a given kinase profile. This lends additional support to docking strategies using (ultra-)large libraries of virtual compounds, as having access to larger and more diverse fractions of chemical space is certainly beneficial [52,53]. It has to be noted, however, that a certain amount of the rank correlation difference might also stem from the use of rigid protein structures in docking. In conclusion, while docking to identify ligands gets progressively harder with more and more elaborate profiles composed of targets and anti-targets, one can try to estimate the chances of success already from protein-structure-, protein-sequence- and ligand-space-based methods. This is encouraging in the sense that protein and ligand space show a certain amount of congruence, i.e., that kinases that are close in structure or sequence space also recognize similar ligands, and supports the ongoing efforts to computationally expand chemical space to search for kinase inhibitors with tailored binding profiles. Supplementary Materials: The following are available online. Figure S1: Docking poses of ligand DS39984 bound to BRAF structures, Figure S2: Comparison of different similarity measures for pairwise kinase structure comparisons, Table S1: IDs and 2D depictions of all compounds tested in the different kinase assays as well as the docking profile they were selected from,
How strange is pion electroproduction? We consider pion production in parity-violating electron scattering (PVES) in the presence of nucleon strangeness in the framework of partial wave analysis with unitarity. Using the experimental bounds on the strange form factors obtained in elastic PVES, we study the sensitivity of the parity-violating asymmetry to strange nucleon form factors. For forward kinematics and electron energies above 1 GeV, we observe that this sensitivity may reach about 20\% in the threshold region. With parity-violating asymmetries being as large as tens p.p.m., this study suggests that threshold pion production in PVES can be used as a promising way to better constrain strangeness contributions. Using this model for the neutral current pion production, we update the estimate for the dispersive $\gamma Z$-box correction to the weak charge of the proton. In the kinematics of the Qweak experiment, our new prediction reads Re$\,\Box_{\gamma Z}^V(E=1.165\,{\rm GeV}) = (5.58\pm1.41)\times10^{-3}$, an improvement over the previous uncertainty estimate of $\pm2.0\times10^{-3}$. Our new prediction in the kinematics of the upcoming MESA/P2 experiment reads Re$\,\Box_{\gamma Z}^V(E=0.155\,{\rm GeV}) = (1.1\pm0.2) \times 10^{-3}$. I. INTRODUCTION The discovery of weak neutral current interactions in parity-violating electron scattering (PVES) [1,2] and in atomic parity violation (APV) [3] provided an important proof for the structure of the Standard Model (SM). The accuracy of modern experiments provides access to physics beyond the Standard Model (BSM) in a mass range which is comparable or complementary to searches at colliders and in astrophysics [4,5]. Two experiments designed to test the SM running of the weak mixing angle at low energies, Qweak at the Jefferson Laboratory in the U.S. [7] and P2@MESA at Mainz University, Germany [8] will constrain SM extensions with mass scales in the 30 -50 TeV range. The interpretation of these high-precision experiments in terms of the fundamental SM parameters is based on a similarly precise calculation of electroweak radiative corrections [4,9]. At the one-loop level, γZ-box graph corrections constitute a numerically important contribution. Their evaluation requires knowledge of the hadron structure at low energies, i.e. in a region where perturbation theory can not be applied. Recently, the vector part of these corrections was re-evaluated in the framework of forward dispersion relations [10], and its value and uncertainty was found to be considerably larger than previously anticipated. Subsequent work allowed to constrain its central value [10][11][12][13][14], but the size of its uncertainty is still an open question. For the kinematics of the Qweak and P2 experiments, the dispersion representation of the γZ-box graph correction involves the inclusive inelastic interference structure functions F γZ 1,2 integrated over the full kinematical range with a strong emphasis on the low-energy range, Q 2 ≤ 1 GeV 2 and W ≤ 4 GeV (Q 2 is the virtuality of the space-like photon or Z-boson originating from electron scattering, and W the invariant mass of the hadronic final state X resulting from the process γ * + N → X with X = πN, 2πN , etc.) 1 . The approach of Refs. [10][11][12][13][14], in the absence of any detailed interference data in the required kinematical range, was based on a purely phenomenological fit to the electromagnetic inelastic total cross section data [15], complemented with an isospin-rotation to predict F γZ 1,2 . 
Unfortunately, this procedure is essentially ad hoc, and leads to model-dependent uncertainty estimates, reflected by the spread of the uncertainties given in Refs. [12][13][14]. In the present work we try to construct the input to the dispersion relations, starting at threshold for pion production, in a more controlled way. In this range, approximately determined by M + m π ≤ W ≤ 2 GeV and Q 2 ≤ 2 GeV 2 , one can rely on very detailed experimental data for pion photo-and electroproduction that allow for partial wave analyses as implemented in MAID [16,17] and SAID [18]. In this approach it is also possible to explicitly take account of constraints due to unitarity and symmetries and include dynamical effects of strong rescattering. In the literature, the weak pion production amplitudes have been constructed from the electromagnetic ones, and observables have been studied upon neglecting strangeness contributions [19,20]. The main distinction of the present work is in avoiding this assumption. This leads to a natural uncertainty estimate due to strangeness contributions which, firstly, is driven by experimental data on strange form factors from elastic PVES [21], and, secondly, brings this uncertainty estimate in direct correspondence with that of inelastic PVES data. Such data above the pion production threshold and reaching into the ∆-resonance region have been taken by the G0 Collaboration at JLab [22] and by the A4 Collaboration at MAMI, Mainz [23]. Our formalism can also serve as a basis for extracting the strange form factors from threshold pion production in PVES experiments. The advantage of this method lies in the fact that PV asymmetries are large, in the range of several tens of p.p.m. as opposed to a few p.p.m. for asymmetries in elastic PVES, which were traditionally used to access strangeness contributions. Another closely related topic concerns hadronic parity violation that leads to induced PV contributions in electromagnetic interactions and may be seen in PVES as well as in parity violation in nuclei [24,25]. In elastic PVES, these contributions manifest themselves in a similar way as effects from the axial-vector coupling of the Z boson at the hadronic side, but can be disentangled from the Z-exchange contribution in PV pion electroproduction due to a different Q 2 -dependence. Ref. [26] used the "DDH best value" of the PV πN N coupling constant h 1 π [27], and showed that PV threshold π + electroproduction at low energies and forward angles is very sensitive to this coupling. On the other hand, there are indications that the actual value of h 1 π is at least four times smaller [28]. Having in mind such contributions, we will focus on electron energy range ∼ 1 GeV and not too forward angles. We postpone the detailed discussion of the interplay of hadronic PV effects with the strangeness to the upcoming work. The article is organized as follows. In Section II we lay out the formalism and define the kinematics, in Section III we explicitly construct the multipoles with the weak vector current and incorporate strangeness contributions. Section IV deals with the sensitivity of the PV asymmetry in inelastic PVES to strange form factors; in Section V we apply the model for weak pion production developed in the previous sections to the calculation of the dispersion γZ-correction to the proton's weak charge. Section VI contains our concluding remarks. II. 
KINEMATICS AND DEFINITIONS In this work we consider pion electroproduction off a nucleon of mass M, e−(ℓ) + N(p) → e−(ℓ′) + π(q) + N(p′), as shown in Fig. 1. The interaction is described by diagrams with the exchange of one boson which carries the four-momentum k = ℓ − ℓ′, both for the contribution of the electromagnetic interaction and for the weak neutral current (NC) interaction. We have defined Q 2 = −k µ k µ > 0. The kinematics of the reaction (γ/Z)(k) + N(p) → π(q) + N(p′) in the πN center-of-mass frame is completely fixed in terms of three Lorentz scalars, for which we take the invariant mass of the hadronic final state W, W 2 = (p′ + q) 2 = (p + k) 2 , the virtuality Q 2 of the initial boson, and the four-momentum transfer to the nucleon t = (p′ − p) 2 < 0. In the following, we will use the center-of-mass frame of the initial nucleon-photon pair (or, equivalently, of the final nucleon-pion pair) defined by k + p = q + p′ = 0. In this reference frame, the kinematics can be specified by the energy of the photon, ω, its virtuality Q 2 , and the pion scattering angle θ; the momentum 4-vectors are parametrized accordingly. A. Invariant amplitudes The invariant amplitudes with the vector current were introduced, e.g., in Ref. [29]; there, U i (U f ) stand for the initial (final) nucleon Dirac spinors, and we have used the average nucleon four-momentum P = (p + p′)/2. The axial-vector part is separated analogously. The scalar amplitudes V γ,Z i , A Z i are functions of the invariants W 2 , Q 2 and t. The last two structures O µ A 7,8 are lepton mass terms and do not contribute to the neutral current process studied here. B. Multipole decomposition It is common to evaluate the covariant tensors introduced in the previous subsection in the center-of-mass frame of the pion and the final nucleon, relating the invariant amplitudes to the CGLN amplitudes [30], with ε µ the photon or Z polarization vector, F j , G j scalar amplitudes, χ i (χ f ) the Weyl spinors for the initial (final) nucleon, respectively, and Σ j , Σ̃ j matrices in spinor space. The CGLN amplitudes allow for a decomposition into multipoles. Here, E l± , M l± , S l± are the multipoles describing the vector electric, magnetic and scalar transitions to the πN state with orbital angular momentum l and total angular momentum j = l ± 1/2, and similarly Ẽ l± , M̃ l± , S̃ l± describe transitions with the axial-vector interaction. The multipoles are functions of W and Q 2 only, and the angular dependence in terms of Legendre polynomials and their derivatives is contained in the coefficients a, b, c, ã, b̃, c̃. The explicit form of the coefficients of the matrices a, b, c for the vector case can be found in Ref. [31], and that of ã, b̃, c̃ in Ref. [29] for the axial-vector case. C. Isospin structure Isospin is not conserved by the electromagnetic interaction. The one-photon exchange diagram has, therefore, an isoscalar and an isovector component. Combining I = 0, 1 of the photon and I = 1 of the pion would lead to three possible isospin amplitudes for I = 0, 1, 2. From the t-channel perspective, i.e. for pion photoproduction γπ → N N̄, however, only I = 0 or I = 1 are possible. As a result, there are three independent isospin channels, which we denote by M 0 and M ± . The scattering amplitude can thus be separated accordingly, with χ i,f the two-component nucleon isospinors, τ the Pauli matrices and π the pion isovector.
In terms of these isospin-channel amplitudes, the charge-channel amplitudes are expressed as linear combinations of M 0 and M ± . These relations are equally valid for multipole, invariant or CGLN amplitudes. The isospin decomposition of the electromagnetic and weak neutral current in terms of quark currents (we consider the three lightest flavors u, d, s only) is written with the quark doublet q = (u, d) T . At tree level in the Standard Model and according to the normalization defined in Eq. (2), the coupling constants appearing in these equations, such as ξ I=1 , are determined by the weak mixing angle, sin 2 θ W . From this decomposition we obtain the standard expressions for the weak form factors of the nucleon. In the same way, the flavor decomposition of the amplitudes for weak vector pion production, keeping strangeness, can be carried out. The presence of the strangeness contribution is the main distinction of this analysis from other calculations of pion production in electroweak reactions. III. VECTOR MULTIPOLES FROM ISOSPIN SYMMETRY AND STRANGENESS FORM FACTORS Upon neglecting the strangeness contribution, the vector multipoles in weak NC pion production are given exactly in terms of the electromagnetic multipoles of Eq. (15). To our knowledge, this is how pion production is dealt with in all phenomenological models that relate pion production in electromagnetic and weak NC reactions. The aim of this section is to go beyond this approximation and model the strangeness contributions. We start from the isobar model as implemented in MAID [16,17]. The pion-production amplitude can be represented as a sum of tree-level amplitudes, as shown in Fig. 2. These can be modeled in terms of tree-level coupling constants and form factors. Rescattering effects are incorporated following the unitarization procedure in the K-matrix approach used in MAID [16]. There, the unitarized Born amplitude M B,α γπ is obtained from the tree-level Born amplitude B α γπ and the elastic pion-nucleon scattering amplitude t α πN = [η α exp(2iδ α ) − 1]/2i (taken from the SAID analysis [35]), expressed in terms of phase shifts δ α and inelasticities η α ; the index α = (j, l, I, . . .) carries all relevant quantum numbers: total and orbital angular momentum, isospin, multipolarity, etc. The unitarized Born amplitude defined in this way has the phase of the pion-nucleon scattering amplitude below the two-pion production threshold and obeys the Watson theorem. Note that the Born amplitude described by the sum of the graphs (a)-(d) in Fig. 2 is gauge invariant, which is achieved by requiring a certain form of the form factor of the contact term, graph (d) in this figure [29,31]. Gauge invariance makes such a unitarization procedure for the Born part meaningful. The MAID approach then consists in adding further contributions like vector meson exchanges (graph (f)) within the same unitarization procedure, and resonances (graph (e)) with an appropriate phase, such that the full amplitude M α γπ = M B,α γπ + M V,α γπ + M R,α γπ has the correct phase [16]. Since the pion is an isovector, only the isovector components of the photon and Z boson couple to the pion field. The same is true for all N → ∆ transitions. Correspondingly, only ZN N and ZN N * couplings can obtain additional contributions from strangeness. This statement operates with "bare" couplings of the photon. However, rescattering effects that could modify the tree-level amplitudes are due to the strong interaction, which conserves isospin and flavor.
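As a numerical illustration of the unitarization step above, the sketch below uses the MAID-type prescription M B = B (1 + i t πN ) (our reading of Refs. [16,17], not a formula quoted verbatim in this text), with placeholder values instead of actual SAID phase shifts:

```python
import cmath

def t_pi_n(delta, eta=1.0):
    """Elastic piN t-matrix, t = [eta * exp(2i*delta) - 1] / (2i)."""
    return (eta * cmath.exp(2j * delta) - 1.0) / 2j

def unitarized_born(born, delta, eta=1.0):
    """K-matrix-style unitarization M_B = B * (1 + i * t_piN).

    For a real Born term and eta = 1 (below the two-pion threshold),
    the result carries the piN phase delta, as the Watson theorem demands.
    """
    return born * (1.0 + 1j * t_pi_n(delta, eta))

M = unitarized_born(born=0.85, delta=0.35)  # placeholder multipole and phase
print(abs(M), cmath.phase(M))               # phase(M) == 0.35 for eta = 1
```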
The most straightforward way to estimate the strangeness contribution is to use the available experimental information on the strange form factors of the nucleon. A recent global analysis [21] gives values of the strange electric and magnetic form factors of the nucleon at Q 2 = 0.1 GeV 2 . For this analysis, the strange form factors were parametrized as smooth functions of Q 2 . With this parametrization, extrapolating G s M from Q 2 = 0.1 GeV 2 to the origin, one obtains µ s = 0.38 ± 0.27. Analyses based on lattice QCD tend to give smaller values of the strange form factors with a much smaller uncertainty [32,33]. For our purpose, the values given in Eq. (17) are sufficient, since one of the goals of this work is to propose a new way of extracting the strangeness contribution from the experimental data. The estimates from lattice QCD are automatically included in the explored parameter range. We also note that the values given above, if rewritten in terms of Dirac and Pauli strange form factors using G s M = F s 1 + F s 2 and G s E = F s 1 − τ F s 2 , are consistent with F s 1 = 0 and F s 2 = µ s G D (Q 2 ). The strange magnetic moment contribution to the Born multipoles can now be calculated in a straightforward way, and we make use of the expressions for the Born multipoles published in Berends et al. [31]. However, we have to extend the analysis of Ref. [31], where the πN N coupling is assumed to be purely pseudoscalar, whereas MAID uses a combination of pseudoscalar and pseudovector couplings that depends on the pion three-momentum q. The parameter Λ m = 450 MeV describes the transition from a pseudovector coupling at the pion production threshold (where | q| = 0) to a pseudoscalar coupling at high energies. It is this form that we adopt here. As a consequence, an additional contact term appears which contributes to four of the multipoles, namely to E 0+ , M 1− , S 0+ , and S 1− . We present the full expressions for the Born strangeness contributions to the multipoles in Appendix A. In Figs. 3-8 we display results for the first few multipoles according to the procedure described above. In all figures, the dotted curves represent the MAID results for electromagnetic pion production, while the dashed curves include the weak neutral current contributions assuming exact isospin symmetry with no strangeness, and the solid curves correspond to the weak neutral current multipoles including strangeness using the strange magnetic moment µ s = 0.38. Thus the difference between the solid and the dashed curves is a measure of the sensitivity of the multipoles to the strange magnetic moment. IV. SENSITIVITY OF THE PV ASYMMETRY TO µ s AT THRESHOLD AND IN THE ∆-REGION Weak pion production can be accessed, e.g., in parity-violating electron scattering (PVES). We will consider inclusive scattering here, i.e., we assume that only the electron in the final state is detected. In terms of the electromagnetic and γZ interference cross sections σ γ,γZ T,L,A , the asymmetry for scattering of a beam of longitudinally polarized electrons off unpolarized protons is given by Eq. (19), with the usual polarization parameter ε ranging from 0 for backward scattering to 1 for forward scattering, and ν = E − E′ the energy of the virtual photon in the laboratory reference frame. The contribution from πN final states, πN σ γ,γZ T,L,A , to these cross sections is expressed in terms of multipoles. Here we have used k CM γ = (W 2 − M 2 )/2W, which stands for the real-photon three-momentum in the center-of-mass frame.
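To make the strangeness input of Sec. III concrete before turning to the multipole expressions, here is a minimal numerical sketch of the parametrization quoted above (F s 1 = 0, F s 2 = µ s G D ); the dipole mass parameter 0.71 GeV 2 is the conventional choice and an assumption on our part:

```python
M_N = 0.938   # nucleon mass in GeV
MU_S = 0.38   # central value of the strange magnetic moment used in the text

def g_dipole(q2):
    """Standard dipole form factor; q2 in GeV^2."""
    return (1.0 + q2 / 0.71) ** -2

def strange_sachs(q2, mu_s=MU_S):
    """Strange Sachs form factors for F1s = 0, F2s = mu_s * G_D(Q^2)."""
    tau = q2 / (4.0 * M_N**2)
    f2s = mu_s * g_dipole(q2)
    return -tau * f2s, f2s   # (G_E^s = F1 - tau*F2, G_M^s = F1 + F2)

print(strange_sachs(0.1))  # roughly (-0.008, 0.29) at Q^2 = 0.1 GeV^2
```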
Expressions for πN σ γ T,L in terms of multipoles have been obtained, e.g., in Ref. [34]. In contrast to that reference, here the longitudinal cross section is written in terms of scalar multipoles, S l± , rather than longitudinal multipoles, L l± . The two sets of multipole moments are related by gauge invariance. Expressions for the interference cross sections πN σ γZ T,L are a straightforward generalization of their electromagnetic counterparts, but to our knowledge have not been reported in the literature before, as is the case with the axial term πN σ γZ A . Note that the longitudinal axial-vector multipoles do not enter πN σ γZ A , which is hence purely transverse. As in Ref. [19], the PV asymmetry defined in Eq. (19) can be represented as a sum of three terms, A 1 , A 2 and A 3 . The first term, A 1 = 2 − 4 sin 2 θ W , is model independent and results from isolating the isovector contribution in the numerator and denominator. The other two terms encode the isoscalar and strange (A 2 ) and the axial-vector (A 3 ) contributions. This form follows from the isospin decomposition of Eq. (15), but we do not explicitly rewrite Eq. (19) here. We are interested in studying the sensitivity of the parity-violating asymmetry to strangeness contributions. To simplify the discussion and to keep the analysis uncontaminated, we start with results without taking the axial multipoles into account. In fact, their contribution is suppressed by the small weak charge of the electron, with 1 − 4 sin 2 θ W (0) ≈ 0.048. In addition, the factor √(1 − ε 2 ) leads to a further suppression for forward kinematics; however, this suppression is absent at backward angles. In Figs. 9 and 10, we plot our results as a function of W in the range between the pion production threshold and the ∆-resonance region, assuming forward kinematics as measured in the A4 experiment at MAMI. We see that in this kinematic range there is considerable sensitivity to the strangeness contribution at the level of 10-20%, as indicated by the difference between the solid and dashed lines in Figs. 9 and 10. This sensitivity is quite promising in view of the high precision of the experimental data, which is at the level of 5% [23]. A targeted analysis of the available inelastic PVES data between threshold and the ∆ resonance region will offer an alternative way to constrain the strangeness form factors of the nucleon, virtually independent of the conventional measurements of elastic PVES. Historically, the parity-violating asymmetry in electron scattering has been proposed to measure the axial N → ∆ transition [19]. Numerically, its contribution at the ∆ resonance position in the backward kinematics of the G0 experiment is at the level of 6% [22]. Our estimates show that at the ∆ position an extraction of the axial N → ∆ transition would still be possible, since the uncertainty due to strangeness is below 3%. The largest sensitivity to strange form factors is, not unexpectedly, observed between the threshold and the ∆ resonance. V. AN UPDATE OF γZ-BOX GRAPH CORRECTIONS TO ELASTIC PVES EXPERIMENTS The γZ-box graph corrections to elastic PVES have attracted significant attention recently. They can be evaluated in an approach based on dispersion relations from γZ interference structure functions for inelastic ep scattering. Their vector part, Re □ V γZ , is sensitive to low-energy form factor input and depends strongly on the beam energy. For the kinematics of the Qweak experiment, the box graph corrections exceed the previously claimed theory uncertainty by a factor of 8.
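Numerically, the dispersive evaluation described in the following paragraphs reduces to a two-dimensional integral over W 2 and Q 2 ; a skeletal quadrature sketch is given below, with the physics kernel (interference structure functions times the dispersion weight of the sum rule in Ref. [10]) deliberately left as a user-supplied stub, and the Q 2 cutoff reflecting the near-saturation of the integral below 2 GeV 2 noted in the text:

```python
from scipy.integrate import dblquad

M_N, M_PI = 0.938, 0.140  # masses in GeV

def box_gamma_z(energy, kernel, w_max=4.0, q2_max=2.0):
    """Nested quadrature for the gammaZ-box sum rule.

    kernel(w2, q2, energy) must implement the F1/F2 gammaZ structure
    functions times the dispersion weight (see Ref. [10]); it is
    deliberately left abstract here.
    """
    w2_thr = (M_N + M_PI) ** 2  # pion production threshold
    val, err = dblquad(
        lambda q2, w2: kernel(w2, q2, energy),  # inner variable first
        w2_thr, w_max**2,                       # outer: W^2 range
        lambda w2: 0.0, lambda w2: q2_max,      # inner: Q^2 range
    )
    return val, err
```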
Currently, the size of Re □ V γZ has settled at 5.4 × 10 −3 [10][11][12][13][14], but the uncertainty estimates range from 0.4 × 10 −3 [14] to 2.0 × 10 −3 [13]. In the framework of forward dispersion relations, the γZ-box correction obeys a sum rule in terms of the inclusive interference structure functions F γZ 1,2 as a function of the lab energy of the electron, where ν = (W 2 − M 2 + Q 2 )/2M is the virtual photon energy in the laboratory frame, and ω = (ν + √(ν 2 + Q 2 ))/2. The integral requires knowledge of the structure functions F γZ 1,2 in the whole kinematic range in the two variables W and Q 2 . For a meaningful evaluation of this integral, the πN contribution to the inclusive structure functions developed in the previous section needs to be extended to W > 2 GeV and Q 2 > 2 GeV 2 , and contributions from higher-mass states need to be included. First, we deal with extending the πN state contribution to higher energies using the Regge theory framework. Assuming that at W = 2 GeV the background contribution (mainly the vector meson exchanges) dominates over the resonances in the cross section, we can use the Regge theory expectation F 1 (W → ∞) ∼ (W 2 /1 GeV 2 ) α M (0) and F 2 (W → ∞) ∼ (W 2 /W 2 0 ) α M (0)−1 . The meson Regge trajectory α M (t) = α 0 M + α′ M t is taken at t = 0, and the relevant ρ-ω trajectories correspond to α 0 ρ ≈ α 0 ω ≈ 0.5. Correspondingly, a simple power-law Ansatz with α M (0) ≈ 0.5 was adopted for the continuation to W > 2 GeV. The Q 2 -dependence beyond Q 2 = 2 GeV 2 is assumed to follow a dipole ∼ [1 + Q 2 /Λ 2 V ] −2 with Λ V = 0.84 GeV, reminiscent of nucleon form factors. This model is not very sophisticated; but since the integral is saturated to 99% by the range Q 2 < 2 GeV 2 , the details of the model for higher Q 2 are irrelevant at the level of the required precision. With the model specified above, the πN state contribution was evaluated numerically, with the uncertainty coming from varying the strange form factors within their error bars and from treating the part of the integral from Q 2 > 2 GeV 2 as an additional uncertainty, the two errors being added in quadrature. The πN continuum is only one part of the proton inelastic spectrum. Further contributions are due to higher resonance excitations in channels other than πN , such as 2πN and ηN , included in the original fit by Christy and Bosted [15]. Their contributions can be evaluated by switching off the πN decay channel for the resonances. Further details can be found in Ref. [13]. Finally, to include the non-resonant part of multiparticle intermediate states, we adopt a background model resembling the one described in Ref. [13], which starts at the 2π-production threshold. Adding these three contributions, we obtain a new prediction for the γZ V dispersive correction in the Qweak kinematics, Re □ V γZ (E = 1.165 GeV) = (5.58 ± 1.41) × 10 −3 (see also Fig. 9 for E = 0.855 GeV). Due to a more precise account of the lowest part of the proton excitation spectrum, we observe that the uncertainty is reduced. Similarly, for the energy range covered by the upcoming MESA/P2 experiment in Mainz, we find for the total γZ box graph correction Re □ V γZ (E = 0.155 GeV) = (1.07 ± 0.18) × 10 −3 . VI. CONCLUSIONS To summarize, we studied the effect of strangeness on parity-violating pion electroproduction. We showed how the inclusion of the strange magnetic form factor modifies the Born contribution, obtained expressions for the vector multipoles for the subprocess Z + p → π + N, and performed a unitarization procedure that follows the approach of Refs. [16,17].
At present, we have not attempted to model the strangeness contributions to the N → N * transition form factors, while no such contributions are present in the purely isovector N → ∆ transitions. This effectively limits the applicability of our model to energies between the pion production threshold and the Roper resonance. We applied this model to the PV asymmetry in inclusive scattering of longitudinally polarized electrons off an unpolarized proton target, and demonstrated that at higher energies and moderate Q 2 ∼ 0.6 GeV 2 , as in the kinematics of the A4@MAMI experiment, the sensitivity to the value of the strange magnetic moment is at the level of 20% relative to the asymmetry size of ∼ 40 p.p.m. At lower electron energy and correspondingly lower Q 2 , this sensitivity decreases to ∼ 10%. In view of the intrinsic statistical uncertainty of the asymmetry data at the level of 5-10%, this sensitivity offers a quite promising new way to address strangeness with inelastic, rather than elastic, PVES. The main advantage lies in the much larger asymmetries than in the elastic case. Elastic and inelastic measurements, even with the same apparatus, will have different systematics, making the extraction of the strange magnetic form factor from the data below and above the pion production threshold practically independent. The model for pion production with a weak probe allowed us to calculate the contribution of the πN state to the inclusive interference structure functions F γZ 1,2 that enter the calculation of the dispersive γZ-correction to the weak charge. This correction has to be taken into account for the extraction of the weak mixing angle from the elastic PVES data within the Qweak and P2 experiments. A more careful modeling of the lower part of the nucleon excitation spectrum done here allowed us to shift the uncertainties to higher energies, leading to a reduction of the uncertainty of the dispersive calculation of the energy-dependent correction □ V γZ (E). In the Qweak kinematics, E = 1.165 GeV, the new uncertainty estimate is δ□ V γZ (1.165 GeV) = 0.00141, about 2% of the Standard Model expectation for the proton's weak charge. For the P2 kinematics, E = 0.155 GeV, the new uncertainty estimate is an order of magnitude smaller.
2015-09-29T14:44:55.000Z
2015-09-29T00:00:00.000
{ "year": 2016, "sha1": "c8c67e661de88c90c203c6d9f865ae123743fe3f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2015.11.038", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "c8c67e661de88c90c203c6d9f865ae123743fe3f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55566012
pes2o/s2orc
v3-fos-license
The Effect of Eurasian Orientation on the Relations between Russia and America

The international system is an arena of relations and interactions. Sometimes these relations are peaceful, and sometimes marked by conflict and tension. Accordingly, countries adopt certain ideologies and policies that can lead to the promotion or demotion of their position in the international system. The collapse of the Soviet Union was a turning point that changed the nature of the international system from bipolar to unipolar, and Russia's attempt to redefine its lost identity prepared the ground for an influx of new discussions and approaches in the international arena. Seeking its lost identity, Russia came up against the reality that Atlantic Orientation and following the unipolar system not only did not solve its problems but led to its decline, disappointment and frustration at the internal and international levels. Therefore, with Putin's accession to power came another major turning point in Russian foreign policy, as economic conditions in Russia improved following the increase in oil prices and Putin's strategic leadership and management. The policy and approach taken by Russia originated from the self-confidence it had built up. This not only increased Russia's room for maneuver in the region but also paved the way for its confrontation with America and its challenge to the unipolar system. As a result, the adoption of the Eurasian Orientation policy since 2000 has not only turned Russia's attention to the countries of Central Asia and the Caucasus but has also promoted an aggressive or offensive attitude in this country's policies. Thus, taking an Offensive Realism approach, the present study seeks to answer this question: What is the role and effect of the Eurasian Orientation policy on Russia's foreign policy towards America? The data for this study were collected by drawing on dissertations, libraries and research findings and analyzed using a descriptive-analytic approach.
Introduction

After the collapse of the Soviet Union, the international system changed from its bipolar form into a unipolar system under America's hegemony, and the Soviet Union broke into several independent countries with their own particular foreign policies, but Russia still considered itself the father and chief of the Caucasus region and Eurasia. Particularly after Putin's accession to power and the end of Yeltsin's rule, in addition to Putin's progressive policies, the changes in the international system went in this country's favor. In this period, Russia made an attempt to rebuild and bring back its lost reputation in the region and, accordingly, spared no effort in redefining its identity. At the same time as Putin's rise to power, the price of energy resources also increased. The rise in energy prices was followed by the promotion of Russia's self-confidence and an increase in its power. These developments generated and strengthened Russia's motivation for confronting and taking positions against United States policies and challenging the unipolar system, and Moscow's foreign policy changed from 'following' to a 'balance of power' (power equilibrium) policy. This was followed by Russia's adoption of an offensive attitude. In the first section of the present article, we will look at the commonalities between Offensive Realism and the redefinition of Russian foreign policy in this framework; in the second part, at the reasons and factors behind the Eurasian Orientation policy; and, finally, the role and effect of these two factors (i.e., the redefinition of Russian foreign policy and the Eurasian Orientation policy) on the relations between Moscow and Washington will be analyzed, considering Russia's aggressive behavior during Putin's rule, its relationship with Putin's adoption of the Eurasian Orientation policy, and its role in the relations with America.
Theoretical Background: Offensive Realism

Offensive Realism is a branch of realism, and assumptions such as power, security, government, anarchy and great powers are its major components, as in classic realism. Unlike other schools of thought in the criticism and examination of international relations and policies, each of which has been introduced through certain well-known figures, this school of thought has been less clearly associated with such figures. In other words, only a very small number of scholars have tried to explain and provide a theoretical framework for Offensive Realism. In line with the other schools of thought in the international arena, Offensive Realism has based its intellectual foundation on realist doctrine and assumptions. The followers of this school of thought recognize governments, within the realist framework, as the main actors in the international relations arena and believe that it is the great powers that shape international politics, with great-power outputs playing a determining role and affecting the international system (Schwelle, 1996). According to Offensive Realism, anarchy forces governments to maximize their relative power or influence because, from their perspective, there is a relationship between systemic requirements and state behavior. This follows the neo-realists, including both defensive and offensive realists, who claim that the world is competitive and uncertain and that the structure of the international system has made the politics of power the dominant political paradigm. This perspective fits the system of interests and beliefs of the majority of military strategists and foreign policy decision-makers who are in power in the world today (Baylis & Smith, 2004, 445).

To ensure their survival, governments must put keeping or improving their position of power on the agenda of their foreign policies, and since power is defined in the final analysis as the ability to declare a war, governments always put the emphasis on creating and developing their military establishment (Pfaltzgraff and Dougherty, 2004, 445). According to Mearsheimer, one of the most important theoreticians of Offensive Realism, the most fundamental bases of the international system are formed by the conduct of the great powers and, to achieve absolute security, these powers think of eliminating the potential threats from other powers that are likely to practice hegemony. Therefore, talk of issues such as democratic peace and security-building cooperation serves only to justify the offensive behaviors of the great powers (Salimi, 2005, 34-36). Mearsheimer's theory, i.e., Offensive Realism, emphasizes the assumption that international politics is basically an attempt to increase relative power, and that governments will not quit their efforts as long as they have not become a hegemonic power. According to Mearsheimer (2001, 139), this behavior originates from the fear induced by the anarchic situation of the international system, which makes the desire for survival a requirement and leads governments to adopt and display offensive behavior. Offensive Realism is based on this very assumption.
In summary, Offensive Realism helps us understand international politics by explaining foreign policies, and it considers achieving absolute security as the most important demand of the great powers, achievable only via power and via maintaining and practicing hegemony. These powers are the main actors in the international system and try to ensure their survival and increase their power in an anarchic atmosphere. Therefore, they attempt to outperform others in their foreign policy planning.

Eurasian Orientation and Russia's Offensive Policy

When Putin came to power after the end of Yeltsin's rule, with Putin's progressive views and policies, the changes in the international system happened as desired and favored by Russia. In this period, Russia sought to restore its lost reputation and to preserve its rule and influence over the region. Accordingly, every attempt was made to redefine its identity. Simultaneously, the price of energy sources such as oil rose. The increase in energy prices was followed by an increase in Russia's self-confidence and power, which provided the necessary motivation for Russia to take positions against the United States, pose a challenge to the unipolar international system and change its policy from 'following' to 'balance of power'. This led Russia to adopt an offensive approach and strategy, and today, as Russia further advances this policy, its tensions in the region and in the international system increase to the same extent, and it becomes more offensive.

Having gained the necessary power and strength, Russia's first ultimatum to the world was to challenge the unipolar system imposed by America. With large energy resources as a source of power, Russia attempted to make decisions on behalf of the world, and especially the European countries. Based on the Offensive Realism theory, the most important ambition of any country is to reach absolute security, and Russia uses all the necessary and effective tools to achieve this goal. Threats, the halting of gas exports, the imposition of sanctions and even the use of weapons are all evidence for the claim that Russia resorts to any instrument for gaining power and achieving absolute security. Apart from that, any approximation of America and NATO to the countries of the Caucasus region would be a bullet in Russia's heart.

In analyzing and evaluating Russia's foreign policy towards the neighboring countries, especially those emerging from the collapse of the Soviet Union, there is a belief that Russia considers itself the big brother in the region and seeks to keep and preserve the union which was dominant during the Soviet period. In other words, it seeks to form the Russian Union once again, but this time not through authoritarianism. Under the mask of Eurasian Orientation, Russia wants to rule over Asia and Europe and considers any tension or conflict in these areas as a threat to its political position and its security borders.
Russia considers the presence and challenge of any power besides the regional countries as a threat to its security and interests. Therefore, fighting America and preventing the expansion of NATO in the region is one of the Eurasian Orientation strategies used by Russia, indicative of an offensive approach that puts a stop to any feeling of insecurity. The collapse of the Soviet Union, in fact, changed the conditions of the international system, with America ruling over the world as a hegemon and making every effort to maintain the status quo. In contrast, Russia still had its old ambition in mind, i.e., the ambition of ruling over the whole world or at least, at the present time, over Europe and Asia; it is not happy with the existing situation and can use the domino of attaching regional territories such as Crimea to Russia to achieve this goal.

As previously mentioned, one of the notable assumptions of Offensive Realism is persistence in achieving absolute security and, from this perspective, Russia is one of the greatest powers in the production of weapons and missiles and one of the powers possessing nuclear capability, which is deterrent in nature.

At the same time as the change in Russia's policies from 'following' to the policy of 'balance of power', and in the attempt to achieve a superior power position in the region after its earlier alignment with the hegemon, Russia attempts to change the international system in its own favor through a pragmatic policy and behavior. In addition, it tries to maintain its dominance over the independent states of the former Soviet Union. In line with that, one of the most important approaches taken by Russia is attention to Eurasian Orientation. The proponents of this theory want to achieve two goals through creating unions and promoting cooperation at the regional level: on the one hand, improving regional cooperation leads to the countries' dependence on Russia, and in this way Russia can influence the foreign policy of these countries, because the common pattern in these countries is that when they become friends with Russia, they normally turn away from America.

Becoming more powerful than the other competitors increases the great powers' chances of survival. The pursuit of everlasting power implies that the great powers are looking for opportunities to change the global distribution of power to their advantage. If they have the required capabilities and capacities, they will make the best of these opportunities. Accordingly, it can be stated that these powers have offensive thoughts in mind from the very beginning. Moreover, a great power does not merely seek to gain more power against its opponents; it also tries to neutralize their power by resorting to its own. Therefore, a great power will only support the balance of power when the changes are to its advantage and to the opponents' disadvantage, and it seeks to undermine the balance of power when the changes run against its demands and desires (Mearsheimer, 2004, 3). Based on this belief, like other countries in the international system and particularly in its current conditions, Russia tries to increase its relative power, and it can resort to war and force to achieve this goal. The attack on Ukraine and its offensive policies in this region in 2014 are an indication of this fact and evidence for this claim.
The Bases of Putin's Foreign Policy

Since the end of the 1990s, the proponents of the Normalized Modern Great Power approach, headed by Vladimir Putin, appeared in the political arena of Russia as the main critics of Primakov's foreign policy, which, according to Putin, was too ambitious, ideological, confrontational and anti-Western. Paying serious attention to the need for adaptability in foreign policy, Putin set three fundamental goals as the basis of his foreign policy: economic modernization, achieving a proper position in the process of global competition, and restoration of Russia's position in the international system as a great power. Taking a pragmatic approach based on Russia's hardware and software capabilities and mindful of circumstances, Putin put a great deal of effort into operationalizing these goals (Trenin, 2004). As Russia lagged behind in many respects, including the economic, political and military fields, Russian politicians sought to rebuild their domestic and foreign policies. After the collapse of the Soviet Union, the main focus of Russia's domestic and foreign policies has been on reviving the country's economy, fighting against America's unilateralism and restoring Russia's lost power and authority (Olikier, 2009, 38). Therefore, after Putin's accession to power, the most important components of Russian foreign policy centered on establishing and improving Russia's reputation and authority. For achieving this goal, a shift from Atlantic Orientation to Eurasian Orientation seemed the smoothest and most logical path to the Russian politicians of the time. In addition to this change in discourse, the bad economic conditions and the circumvention of the law were other issues that influenced the setting of priorities and the bases of Russia's foreign policy during Putin's rule and became basic necessities of Russian foreign policy in this period.

At the end of the communist era, the Soviet Union's politicians, who saw the danger of collapse, tried to make certain reforms to save themselves from falling into the abyss of disintegration. These reforms included the Glasnost and Perestroika plans, which did not prove effective. After the collapse of the Soviet Union and the rise of the new system, an attempt was made to follow moderate policies in the political arena and in relations with the West and, in the economic area, to protect and preserve the country's position as a first-class power. But pursuing these policies during Yeltsin's presidency led to the lowering of Russia's position in the global arena. The end of Yeltsin's rule and Vladimir Putin's accession to power marked the beginning of basic changes in the country's policies, particularly in the international arena and in its foreign policy. Putin sought to rebuild and increase Russia's power in the international arena (Mancevic, 2006, 6).
To summarize, the basis of Russian foreign policy during this period had the following features:

- The dominance of a combination of Slavic Orientation and Eurasian Orientation over Russia's foreign policy system,
- Redefinition of the country's identity at the international level,
- Economic modernization and advancement of Russia's economic interests within the framework of adaptation to the international system,
- Achieving a proper position in the process of global competition,
- Rebuilding Russia's position of authority as a "great power",
- Taking a pragmatic approach in the framework of the Normalized Great Power theory.

The soft balance between Russia and the United States following the collapse of the Soviet Union and of the bipolar international system rested on Russia's regional strategy in the geopolitical area of "the near abroad". At the end of 1992, this country adapted its strategy to the idea of controlling the regional countries. Hansen believes that during this period Russia made attempts to recover its influence and authority in the bordering areas and opposed the expansion of the great powers into these strategically important regions. According to the country's prominent officials, including Putin, the countries and lands neighboring the Russian Federation have had a special significance for Russia since old times for security, cultural and economic reasons (Khatami, 2004, 27). With Putin at the top of the political power pyramid in Russia, for the first time the foreign policy of this country was shaped and established in its real form (Afshordi, 2002, 276). For this purpose, Russia first turned its attention to the regional countries, the region referred to as "the near abroad" in its political literature, an area this country considers its backyard (Dadak, 2010, 90). The presence of a third player, i.e., America, in this region and, consequently, the threat to Russia's political, economic and security interests have led this country to take a particular view of the South Caucasus and to try to decrease the opponent's influence and authority there (Shafee, 2010, 24). Competition over energy resources and their transfer routes, the regional countries' security dependence on Russia or America, the powers' conflicting bloc-building, the attempts to eliminate or weaken other regional actors such as Iran and Turkey, and the militarization of the region under the shadow of Georgia's conflict with Russia and the Karabakh conflict have sustained the competitive atmosphere, so that neither of the two opponents is able to win absolute dominance over the region.

Russia's Foreign Policy during Putin's Presidency

During Putin's presidency, Russia adopted an offensive approach in its foreign policy with the aim of reviving its old place in the international system. In line with this purpose, Putin initially prepared and developed the theoretical background for this new approach in the framework of four documents in 1999-2000, generally referred to as Putin's doctrine. In the early years of the third millennium, certain developments such as the color revolutions and America's offensive policies towards Russia, together with Putin's success in Russia's internal affairs, America's failures in Iraq and Afghanistan and Russia's increased economic capabilities, helped to implement the policies intended by Putin and led to Russia's increasing militarism and diplomatic expansionism (Talebi Arani, 2007, 1).
During Putin's time, Russia was influenced more than anything else by international developments such as globalization in all its aspects, the process of integration or convergence in Europe, America's expansionism, the growth of Islam, international terrorism, energy security, the increasing power of China and India, and the changes in the international economy. The pragmatic approach taken by Putin in foreign policy was focused on certain principles such as an emphasis on action, positive play and confrontation in foreign policy. From the beginning of his rise to power, Putin based his policies on three major goals: economic modernization, achieving a proper position in the process of global competition, and restoration of Russia's position in the international system as a new great power. For achieving these goals, he proposed the "Normalized Modern Great Power" theory, in which visionary democratic nationalism plays a pivotal role, as the new strategy in his foreign policy (MosallaNejad, 2013, 122).

Following an offensive policy in the field of energy, emphasizing the establishment of its power as an "energy superpower" in the New Middle East policies, pursuing costly military modernization programs and the sale of military equipment, opposing the deployment of an American missile defense shield in Eastern Europe and, consequently, suspending the Conventional Armed Forces in Europe Treaty, establishing new anti-missile defense bases near St. Petersburg, unexpectedly announcing that it had pointed its nuclear missiles towards Europe, and resisting America and the European Union over the declaration of independence of Kosovo are among the main issues of this period through which Russia, concentrating actively on them, tries to remind its foreign counterparts, and particularly the West, of its promoted position in the international arena and to propose itself as a great trans-regional power in the regional arrangements and equations (Baylis, 2005, 32). Putin's accession to power is considered a milestone in the domestic and international policies of this country after the Cold War. On the one hand, Putin has tried to resolve internal crises and return stability and solidarity to Russia and, on the other hand, to stand against the constraints imposed mainly by the West and against Western expansionism, bringing the country out of its shell with a progressive and offensive approach. By this, Putin aimed at restoring Russia's old reputation and recovering it as a great world power. To this end, it can be stated that Putin's offensive approach in foreign policy was a reaction against Yeltsin's submissive attitude towards the West and particularly America. Therefore, by pursuing the Eurasian Orientation policy, Putin not only stood against America's unilateralism but was able to play the role of an empire in the Caucasus and Eurasia regions. He also took a new look at the Asian countries, trying to establish Russia's position in the region, and certain behaviors shown by Russia are indicative of this fact. Some of these behaviors include:

- Meeting with great powers such as America as a prudent and pragmatic strategy,
- Paying attention to the region and the commonwealth countries,
- Strong and authoritative support for the revival of Russia's role in the international arena, with Syria viewed as Russia's last base in the Middle East. The history of the friendship between the two countries, the old relationship between Russia
and the Syrian Ba'ath Party, the importance of Syria's port of Tartus and the academic relations between the two countries can all be considered evidence of this friendship.
- Making visits to North Korea and Cuba,
- Abolition of the Gore-Chernomyrdin agreement,
- Forming a coalition with China to confront America's unilateralism, along with the creation of the Shanghai Cooperation Organization and making Iran an observer member of this organization in 2004,
- Russia's defense of Iran's nuclear rights.

Another area of conflict between Moscow and Washington in 2009, which also consumed part of the energy of the Russian diplomatic system, was the issue of Iran's nuclear program. Following the relative improvement in relations between Moscow and Washington, some sources referred to a deal between them over Iran, and the emphasis by Medvedev and other Kremlin officials on the possibility of Moscow's agreement and compliance with the policy of imposing sanctions against Tehran, should other avenues fail to produce a solution, led to further speculation about this possibility; nevertheless, Moscow was able to preserve Russia's interests in both Tehran and Washington in 2009 through its diplomatic skills (Noori, 2009).

The formation of regional organizations such as the Commonwealth of Independent States (CIS), together with the strategic position and the rich energy resources of the region, has made the regional and global powers, particularly the United States, pay particular attention to this region (Naumkin, 2002, 31). The entrance of these powers, and especially America, into this area has led Russia to follow the changes in the region more carefully, the region referred to in Russia as "the near abroad".

As a symbol of authoritarianism in Russia, Putin reacted to the emerging political and social events in his country differently from the traditionalist leaders of authoritarian systems in undeveloped countries. Gaining power through fully democratic processes during the years from 2000 to 2012, withdrawing from the 2008 presidential election, helping to create new democratic political and economic institutions during his presidency, allowing the opposition to participate in the presidential and parliamentary elections and, finally, showing a constructive and favorable reaction to the urban demonstrations of the opposition groups are considered evidence of Putin's commitment to the principles of political development. In addition, in a very important article entitled "Democracy and the Quality of Government", published in February 2012 in the Kommersant newspaper, Putin declared his absolute commitment to political development and the strengthening of civil society as current requirements of Russia and announced some of his plans for satisfying these requirements.
The most important obstacles facing Putin in the realization of political development include: concern about the interference of foreign agents in Russia's internal affairs; the dependence of Putin and his followers on the oil and gas economy; the likely turning away of some parts of Putin's social support base during his future presidency; the looming economic crises in Russia and the West and the middle class's concern about the undesirable effects of these crises on political and economic development in Russia; the growing unresolved crises in the underdeveloped regions of the North Caucasus and the Far East; and the growing national chauvinism among the Russians (Shargh Newspaper, 2012, 7).

Although the assumption of the homogeneity of Putin's foreign policy during the eight years of his presidency is questionable, and analysts divide this policy into different periods from different perspectives, and although at some points Putin's foreign policy diverged from his pragmatic principles under the influence of highly sensitive geopolitical and identity issues, such as the tension with Estonia over the Monument of the Unknown Soldier, the strong Moscow-Kiev confrontation over the possibility of Ukraine's membership in NATO, tensions with Washington over America's missile defense shield and diplomatic arguments with London over the death of Alexander Litvinenko, a brief review of Russia's general behavior in foreign policy during Putin's government reveals that he was practically committed to the principles of the pragmatic approach (Koulayi, 2010, 213). Despite the Kremlin policy-makers' attempt to present Russia as a normalized great power in the international system, Putin was considered by the majority of the world's countries as an abnormal or non-normalized agent in the international system for his positions during the Ukraine-Moscow crisis, and this is the result of the radicalization of the Eurasian Orientation policy. As long as the stance on Eurasian Orientation was moderate and softened, with an emphasis on pragmatic behavior, Russia had been able to introduce itself as a normalized agent in the international system to a certain extent.

The Offensive Aspects of Putin's Policies

Putin's doctrine of 1999 marks the beginning of a new period in the foreign policy of Russia. After this doctrine was developed and formulated, the September 11th events took place, which resulted in increased unilateral actions by America. Following these unilateral actions, Russia also took an offensive approach in practice. But in addition to this factor, i.e., America's unilateralism, other pre-existing factors caused this to happen and played a very significant role in the development of Putin's new doctrine. For instance, such issues as the expansion of NATO to the East, the color revolutions in the independent states of the former Soviet Union, America's growing military presence in Central Asia and the Caucasus, America's interference in Russia's internal affairs, and the exclusion of Russia from participation in international decision-making and agenda-setting caused the focus and behavior of Russia's foreign policy to change and to become impregnated with militarism, weapons, threats and invasion.

The most important issues in Russia's foreign policy during Putin's tenure include the following. The manifestations of this situation can be seen in Russia's plans for increasing the number of its conventional weapons and missile arsenals, improving the quality of Russian weapons, its lack of interest in membership in military and security unions that create limitations for Russia in confronting America, and its attempts to withdraw from military pacts and commitments. These aspects of Russia's offensive policy are particularly concerned with the United States. The battlefield for these two countries during the last two years has not been limited to Eastern Europe but has expanded to the Atlantic.
- The Use of Weapons as a Means of Putting Pressure: This aspect of Russia's offensive foreign policy is mainly directed at the European countries. But in addition to the European countries, some neighboring countries such as Ukraine, which wants to follow policies that run against Russia's security, are also targets of this policy. Using its huge gas reserves as a dominant supplier of gas to Europe, Russia attempts to use this economic weapon as a means of pressure in its foreign policy. An example is Russia's threat to Ukraine in 2014 that it would cut gas supplies if Ukraine did not pay its deferred debt.

The Opportunities and the Substrata of Russia's Eurasian Orientation Policy during Putin's Tenure

After the collapse of the former Soviet Union and especially after Yeltsin's submissive policies and the worsening of economic conditions, Russia sought to redefine and restore its authoritative identity in the international arena. Under these conditions, following the increase in the price of energy resources, its near-monopoly on energy supplies and the dependence of other countries on Russia, the country got a new life and, thanks to the improving economic conditions, tried to restore its international reputation. In this context, only an enemy can confirm the identity and authority of a country; therefore, Russia claimed that only America could come to grips with Russia. From this perspective, taking the Eurasian policy with an offensive tinge has had different effects on Russia and its status in the field of international relations and at the regional level, some of which will be mentioned in this section.

At the Internal Level

It may seem that the choice of the Eurasian Orientation policy with an offensive tinge has no role in Russia's internal affairs, but the selection of this policy and its effects on internal developments are important in some respects. When Putin came to power, the only things he inherited from the previous period were bad economic conditions and social dissatisfaction. Accordingly, the fortunate coincidence of Putin's policies with the rise in energy prices led to his economic and political success. As a consequence, the trust and confidence generated by the improving conditions of the country, together with Russia's offensive foreign policies, promoted Putin's standing as a national hero and the symbol of a modern and powerful Russia; getting Russia out of the mire of its problems and establishing its position in the international arena became the major strategic goal of Putin's charismatic leadership.

In addition, Russia's offensive stance in the international arena has led to internal solidarity and unity, and Russia's attention has shifted from 'following' and Atlanticism to Eurasian Orientation. It can be claimed that Russia's identity issue has been resolved as a result of the Eurasian Orientation. Therefore, it seems that Russia's offensive policy and the Eurasian Orientation policy are directly related.
At the Regional Level

At the regional level, there is both a positive and a negative view of the Eurasian Orientation policy. On the positive view, Russia can justify its interference in some regional countries' America-related policies within the framework of its confrontation with that country. On the other hand, this can have negative consequences: some regional countries may feel insecure because of Russia's offensive policy toward their affairs, and in this way conflicts may arise at the regional level.

At the Level of the International System

The new policy adopted by Russia, based on Eurasian Orientation and framed by the offensive realism theory, which originates from Putin's doctrine and carries anti-American symbols, has had various effects on Russia's reputation on the international scene.

First, this policy posed a challenge to the unipolar system and America-centrism and introduced Russia as a strong barrier against America. Such a claim is made within the multipolar system theory.

Second, the existence of such a phenomenon reveals that America's power is not eternal and that the appearance of another pole may challenge the nature of hegemony.

Third, challenging America at the international level has made Russia important in the region; in fact, for those opposed to America, Russia is the hero and their supporter in international decision-making. This attitude has also challenged the order desired by the United States in sensitive and critical regions.

Eurasian Orientation: The Motive for a New Cold War

The collapse of the Soviet Union was a significant milestone in changing the dominant pattern of the international system and, consequently, the dominant attitude in Russia's foreign policy. Although at the beginning, after the collapse of the Soviet Union, the prevailing view was to follow the hegemon, and Russia considered friendship with America and following Washington as its nature and identity, when Putin came to power his pragmatic policy directed all attention to Eurasian Orientation; the self-confidence created by the improving economic conditions and rising energy prices convinced Russia to redefine itself, not only to stop following the hegemon but to oppose it and to restore the role and identity of the original Russia to the land that had been broken into pieces. With such a view, and with the presence of theoreticians such as Dugin, Russia's major concern became defining policies and perspectives based on Eurasianism and its view of the region. Therefore, after Russia became relatively powerful, following the improved conditions in the country, its offensive policies expanded at the regional and international levels.

There are two different views about whether the competition between America and Russia is turning into a new Cold War:

- According to one view, the competition between Russia and America will not turn into another cold war.
The reasons provided for this claim are the signs of the relationship between the two countries after the Cold War: despite ups and downs, their post-war relations have gone through a process indicative of cooperation and interaction rather than confrontation and competition. A cold-war reading of the relations between America and Russia pays little attention to the global trends requiring the development of regional cooperation, to the need for dynamic interaction in the world economy, and to the dominance and spread of liberal democratic values. Finally, it can be stated that the improved internal conditions and the rising price of energy resources created a self-confidence in Russian leaders for more serious confrontation with anti-Russia provocations in the Caucasus. Extensive military intervention by Russia in Southern Eurasia is indicative not only of Russia's geopolitical reappearance in the Caucasus but also of a deterrent strategy and a serious warning and threat to the regional countries to reconsider their anti-Russian attitudes (Ebrahimi and Mohammadi, 2011, 17).

- However, the developments in Ukraine, Crimea's accession to Russia, and America's and Russia's interference in the region have supported the view that another cold war is emerging.

The differences and conflicts between Russia and Ukraine had a great impact upon the standing of the member countries of the Commonwealth of Independent States (CIS). Ukraine has the largest population and the highest economic and political capacity in the CIS after Russia. Accordingly, it has always attracted the attention of Russia, on the one hand, and the West, on the other. Part of their conflict concerns the status of the Crimean peninsula. This peninsula, which was separated from Russia and attached to Ukraine during Khrushchev's tenure, has a special position and significance in the Black Sea; the largest Soviet naval base is at the port of Sevastopol on the Black Sea coast. Furthermore, another aspect of the differences and conflicts between the two countries is motivated by economic issues. Ukraine claims a share in the Soviet assets abroad, which Russia has taken over on the grounds that it is the legal successor to the Soviet Union.

The relations between Russia and America in the last years of Putin's presidency had a particular quality. In this period, taking an offensive pragmatist approach and drawing on the self-confidence built up through achievements in the political and military fields and, particularly, the unexpected inflow of oil dollars into the Russian economy, Putin adopted the policy of "direct resistance" against the expansionism of the West, and particularly America, implicitly from 2006 and openly from the beginning of 2007. The point of departure for this trend, which Derek Averre refers to as Putin's Munich doctrine, was his speech at the Munich Security Conference, in which he severely criticized America's unilateral approach (Koulayi, 2007, 213).
Russia was the heart of the Soviet Union, and many of the social and structural features of the Soviet Union can be seen in the political and strategic foundations and components of this country. Its competition with the United States goes back to the 19th century. These rivalries were initially formed within the framework of the balance of power in international politics. America and the Soviet Union formed the bipolar international system from 1945 to 1991 and, currently, after the collapse of the former Soviet Union and the end of the bipolar international system, Russia still follows the strategic policies of those times and claims to be fighting the unipolar system. The most important areas of conflict between Russia and America are security issues. Russia is totally opposed to the expansion of NATO and to the growing unilateralism of the United States and always reacts to them. Examples of this strategy and approach can be observed in its disagreement with America's missile defense plan and its criticism of America's strategic policy and intention to withdraw from the Anti-Ballistic Missile Treaty (Koulayi, 2007).

Russian politicians believe that great powers need great enemies. A superpower needs another superpower that stands against it. The more powerful and the greater the enemy, the more glorious it naturally is to stand against it. Therefore, it is only America which has the competence and capability to be an enemy to Russia. Russia needs a totally new identity that can replace the identity and ideology of the Soviet era. With this new identity, Russia can determine its position in the world and among the great powers; the country is defining itself in terms of a new promised land. The Western model, which had been considered a desirable pattern for Russia at the beginning of Yeltsin's tenure, has faded into insignificance in the eyes of both the Russian people and the politicians. The debate over the missile defense shield raised some points regarding the future of the relations between Russia and America at the macro level, starting from the existence of two trends of thought in Russia, i.e., Atlanticism (West-orientation) and Eurasianism; there are both extremist and moderate groups within each trend.

The West's sanctions, which seemed totally ineffective at the beginning, are now having a very negative effect on Russia's economy. Above all, the significant decline in oil prices over the past few months has greatly affected Russia, leading to a sharp drop in its export earnings and in the value of the ruble. Eastern Ukraine has also turned into a dangerous region for Russia at the present time due to the killing of Russian soldiers involved in the war. In the same way, the West's disagreement with and even fear of Russia has increased, and European economic and political leaders have become less inclined to invest in Russia, help Moscow and cooperate with Putin (Katz, 2014). The Ukraine crisis can also become a pretext for intensive geopolitical competition between America and Russia all over the world, but common interests such as limiting nuclear weapons can increase the areas of cooperation between the two countries. Otherwise, if this competition is not moderated, it can turn into a significant hidden rivalry in international relations, influencing many other issues.
Today, half of the Russian population believes that, setting aside Russia's frustration in Afghanistan, international conditions during the former Soviet Union were more stable, more reliable and more favorable to Russia. Only 5% of the Russian people believe Russia had a good position in the international arena during Yeltsin's rule. Russia needs a foreign enemy in order to find and determine its direction. When Putin refers to an enemy, the Russian people realize, even if he does not mention its name, that he is referring to the United States, which angrily dropped the first nuclear bomb on Hiroshima, devastated Vietnam with different types of poison and, more importantly, has put all its efforts into destroying the glory of their native land, i.e., Russia. Russian people, however, like Europe, in contrast with the United States.

According to the Russian people, Europe is a praiseworthy entity that nobody is really afraid of. 80% of the Russians believe in the existence of positive and warm feelings between Russians and Europe. Nevertheless, Russians still credit their Asian origin. 50% of the Russians do not consider themselves European and believe that Russia has never been European. This group of Russians is proud of Russian traditions and values. Europe and, particularly, the individual European countries are, in Russian eyes, too small to count as an enemy and play the role of an enemy (Der Stürmer, 2011, 291).

Considering the conflict with the West over the establishment of the missile defense shield, and with the end of the period of trust-building between the two sides, Putin, at the peak of mutual distrust, wanted the West to reform its wrong behavior under a "Moderate Eurasian Orientation", despite the fact that he believed in creating a balance between Russia's relations with the East and the West. Russia's relations with America have always had ups and downs. However, these relations can be divided into two periods: the first started with the collapse of the Soviet Union and continued until September 11th, 2001, and the second started with the attack on the twin towers of the World Trade Center and has continued to the present. In the first period, the ideological competition between the two countries had ended with the collapse of the Soviet Union, and America's security policies and military doctrine, founded upon opposing and containing communism, were beset by strategic confusion and ambiguity; yet America still viewed itself as superior to Russia within the new unipolar structures. In the second period, Putin's accession to power and the September 11th events were two important factors that moved the relations between America and the Russian Federation into a new stage.

Putin had improved the undesirable conditions in Russia to a large extent. On the one hand, he showed a cooperative orientation towards fighting terrorism and, on the other hand, consolidated power within the country. But, overall, neither of the players could ignore the other side. Thus, the competition for gaining more benefits and interests continued (Ebrahimi & Mohammadi, 2011, 8).
Generally, the measures taken by Russia in its new confrontation with America and NATO include the following:

- The massive sale of weapons to countries challenging America and Europe,
- Carrying out a 200-billion project for empowering the Russian army,
- An attempt to establish Russian rule over some regions of the North Pole,
- The resumption of flights of strategic bombers in distant areas,
- The suspension of Russia's membership in the Treaty on Conventional Armed Forces in Europe,
- The expulsion of English diplomats from Russia,
- The activation of Russian diplomacy in the Middle East.

Encouraging the Middle East governments to pursue security cooperation within the region in the framework of the Shanghai Cooperation Organization, instead of models of mutual military-security agreements with America, and gauging its success in doing so; turning the conflict between the West (under the leadership of America) and the East (by facilitating coordination between China and Russia) into an identity conflict; and Putin's clear disagreement with NATO's unilateral actions under American leadership for solving the crises in Europe, particularly in Eastern Europe and the Balkans, are all further examples of these measures. Nonetheless, it seems that the world is moving toward a new phase of polarization.

Russia's worry about its immediate security perimeter increased with the West's, and particularly NATO's, growing presence after the September 11th events. America's intrusion into Russia's traditional area of influence in the mid-1990s posed a threat to Russian interests, but the peak of this forward movement by America came after the September 11th events, when the United States established a direct presence in the region by deploying its forces to bases in the regional countries. At first, the Russians reacted passively, watching the growing presence of the Americans in the Caucasus and Central Asia without much pessimism. After the initial shock of those horrific events subsided, and America's strategic goals in maintaining a presence, establishing military bases and fomenting color revolutions in the Central Asian countries and the Caucasus were revealed as a direct result of these developments, the Russians felt a looming threat, and their way of interacting with America changed greatly. They used different strategies as levers to challenge America's desired order in the region and to force America away from their geostrategic area. Among these strategies and reactions were the absolute rejection of America's plan to operate a missile shield in Eastern Europe, the preparation and establishment of new missile sites near St. Petersburg, and the announced use of the fleet of long-range nuclear patrol aircraft. These measures reached their peak with the establishment of the Shanghai Cooperation Organization, which was the result of America's encroachment upon Russia's and China's geopolitical area, bringing these two countries together (MojtahedZadeh and RashidiNejad, 2011, 14). In fact, Russia's foreign policy in the Middle East, as a country that claims to be reviving its lost power in the international arena, can be evaluated with respect to this claim and in terms of its positions on the wars in Afghanistan and Iraq, the nuclear issue of the Islamic Republic of Iran, Syria, Palestine and the countries experiencing the Islamic awakening.
On the whole, Russia has always been opposed, both overtly and covertly, to America's presence in the Middle East. This opposition has been much stronger when that presence has involved military operations or has not been approved and supported by the United Nations. Russia was opposed to America's attack on Iraq from the very beginning, such that some experts spoke of a likely Russian veto of the plan in the Security Council before America started the attack.

Russia's measures were, in fact, also the beginning of a new polarization after the Cold War. In line with that, an opposing power is emerging and developing against a power that seeks to establish a hegemonic order. The anti-hegemonic power is being formed at Russia's initiative. Although Russia is not as powerful as America, the polarization trend against America can, at least, create a challenge to American power (Talebi Arani, 2007, 23).

Based on the principles mentioned, Putin believed in avoiding confrontation while gaining advantage, up to the point where the threshold of tolerance is crossed. Evidence for this includes Russia's support for the war in Afghanistan, the initial not-so-serious opposition to the war in Iraq, and the lack of serious opposition to the military cooperation between America and Georgia (Harutyunyan, 2007, 11). By replacing the concept of a "multipolar system" with the concept of "multilateralism", Putin attempted to substitute the components of competition and tension in the idea of a multipolar system with the components of competition and cooperation in the concept of multilateralism. Within the concept of multilateralism, Russia defined a positive role for itself in cooperation with the West and even America and considered itself a partner in international strategic arrangements, particularly on security issues (Koulayi, 2010, 217-219).

Taking the Eurasian Orientation policy and standing against America instead of following and complying with it has had different effects on Russia and its status in international relations and at different regional levels. At the internal level, it built trust and promoted Putin's standing inside Russia, presenting him as a national hero and the symbol of a powerful and modern Russia; saving Russia from the mire of its problems and establishing its position in the international system became a major strategic goal. Furthermore, Russia's offensive stances in the international arena have created internal solidarity, and the change of Russia's policy from 'following' and Atlanticism to Eurasian Orientation has helped to resolve Russia's identity issues to some extent. Therefore, it seems that the offensive policy and the Eurasian Orientation are directly related.

There is also a positive and a negative view of the Eurasian Orientation policy at the regional level. According to the positive view, Russia can justify some of its interference in the regional countries' America-related policies within the framework of its confrontation with America. On the other hand, this can have negative consequences: some regional countries feel insecure because of Russia's interference in their affairs, which can cause conflicts at the regional level and lead to the weakening and decline of Russia.
However, the new policy taken by Russia, i.e., Eurasian Orientation, framed by the offensive realism theory, originating from Putin's doctrine and carrying anti-American symbols, has had different consequences for Russia's reputation in the international arena. First, this policy challenged the unipolar system and America-centeredness and introduced Russia as a powerful player against America; this claim is raised within the framework of the multipolar system theory. Second, the existence of such a phenomenon indicates that America is not an eternal great power and that the appearance of another pole may challenge the nature of hegemony. Third, challenging America at the international level is the source of Russia's importance in the region; in fact, from the perspective of America's enemies, Russia is their hero and supporter in international decision-making. This attitude has also challenged the order desired by the United States in sensitive and critical regions.

In addition to the regional countries, Russia considers the presence and enmity of any other power as an obstacle and threat to its security and interests. As a result, fighting America and preventing the expansion of NATO in the region originate from Russia's Eurasian Orientation and are, consequently, indicative of its offensive approach, which tries to nip any feeling of insecurity in the bud. The collapse of the Soviet Union, in fact, changed the conditions in the international system, making America a hegemon that rules over the world and spares no effort to preserve the status quo. In contrast, Russia is still thinking of realizing its old dream, i.e., gaining control over the whole world or, at least for the present, domination over Europe and Asia. Therefore, it is not content with the existing situation and can use the domino of attaching regional territories such as Crimea to Russia to realize this dream.

Conclusion

Russian politicians believe that great powers need great enemies. A superpower needs another superpower in front of and against it. The stronger and more powerful the enemy, the more glorious it naturally is to stand against it. Therefore, it is only America which has the capability to be an enemy to Russia. Russia needs a totally new identity that can replace the identity and ideology of the Soviet era. Accordingly, following the inefficiency of Russia's 'following' and submissive policy during Yeltsin's tenure and the worsening of economic conditions during this period, Putin rose to power as the savior of Russia.

Putin's adoption of the Eurasian Orientation policy not only furthered Russia's quest for domination over Europe and Asia but also improved the country's economic conditions. Through unions including the CIS, Putin could bring the independent states of the former Soviet Union closer together and exercise influence over the economic and political conditions in these countries. Russia's attempt to prevent the membership of the regional countries in NATO is an example of Russia's intervention in the region in the framework of its Eurasian Orientation policy.
Reinforcement of the Commonwealth of Independent States has been one of the priorities of Russia's foreign policy since Putin's accession to power, in line with his Eurasian Orientation policy. This community aims at the convergence and unity of the Eurasia region. In contrast stands the GUAM Union, which promotes divergence rather than convergence in the Eurasia region, fights in particular against Russia's domination of the region, and inclines towards the entrance of trans-regional powers such as America into the region and towards membership in NATO. Despite the establishment of the CIS as a large union, we thus see the formation of smaller unions such as GUAM at its heart. The member countries of these small unions recall the bad memories of the Soviet era and fear that Russia might threaten their sovereignty and independence. Thus, by turning to America, they try to dampen or dissipate Russia's offensive policies and domination.

The United States of America is the greatest challenge to Russia at the international level. On the other hand, the CIS countries' fear of Russian domination is one of the most important challenges and obstacles to Russia's advancement of the Eurasian Orientation policy. On the whole, the most important effect of the Eurasian Orientation discourse on Russia's behavior is that, as the Eurasian Orientation policy becomes more prominent in Russia's foreign policy, that policy becomes more offensive, leading to Russia's isolation in the international arena. Accordingly, one of the most important strategies for Russia is to follow a Moderate Eurasian Orientation. Although Eurasian Orientation can play a moderating role and lead to the empowerment of Russia, if it moves away from moderation and towards extremism it will, over time, not only bring no power to this country but also lead to its isolation in the international arena.
2018-12-11T18:31:10.974Z
2016-08-25T00:00:00.000
{ "year": 2016, "sha1": "923a7b7fa557eb9ce3a969c653beed5eaab13fb9", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ach/article/download/61400/33572", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "923a7b7fa557eb9ce3a969c653beed5eaab13fb9", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
3430557
pes2o/s2orc
v3-fos-license
“Salt in the Wound” Electronic health record (EHR) data can be extracted for calculating performance feedback, but users' perceptions of such feedback impact its effectiveness. Through qualitative analyses, we identified perspectives on barriers and facilitators to the perceived legitimacy of EHR-based performance feedback in 11 community health centers (CHCs). Providers said such measures rarely accounted for CHC patients' complex lives or for providers' decisions as informed by this complexity, which diminished the measures' perceived validity. Suggestions for improving the perceived validity of performance feedback in CHCs are presented. Our findings add to the literature on EHR-based performance feedback by exploring provider perceptions in CHCs.

This lack of trust in the legitimacy and accuracy of EHR-based performance feedback both stems from and illustrates the challenges of creating quality metrics based on readily extractable EHR data (Baker et al., 2007). Previous qualitative research identified some strategies for creating EHR data-based feedback measures that providers consider credible and valid, but with a few exceptions (Ivers et al., 2014a; Kansagara et al., 2014; Rowan et al., 2006), this research was conducted in large, academic/integrated health care settings. Little is known about perceptions of and strategies for improving such feedback in the community health center (CHC) setting. Yet CHCs, the United States' health care "safety net," differ from other health care settings in critical ways, most notably their patients' socioeconomic vulnerability. Thus, there is a need to better understand barriers to the perceived legitimacy of EHR-based feedback measures in primary care CHCs, and approaches to crafting performance feedback that effectively improves care quality in this setting. To that end, we present an in-depth qualitative assessment of how primary care providers perceived EHR-based performance feedback, and their suggestions for increasing the utility of such feedback data, as reported in data collected in the context of a clinic-randomized implementation trial conducted in CHCs. The terminology used to describe performance feedback in the literature varies. Here, performance metrics means the aggregate measurement of a given care point (eg, rate of guideline-concordant statin prescribing, shown as a percentage on a graph). Data feedback means potentially actionable data linked back to individual patients (eg, a list of patients with diabetes who are indicated for a statin but not prescribed one). Performance feedback encompasses both types of measurement.

METHODS
The "ALL Initiative" (ALL) is an evidence-based intervention designed to increase the percentage of patients with diabetes who are appropriately prescribed cardioprotective statins and angiotensin-converting enzyme inhibitors (ACEI)/angiotensin II receptor blockers (ARB). The data presented here were collected in the context of a 5-year pragmatic trial of the feasibility and impact of implementing ALL in 11 primary care CHCs in the Portland, Oregon, area. The ALL intervention included encounter-based alerts, patient panel data roster tools, and educational materials, described in detail elsewhere (Gold et al., 2012, 2015). We also extracted data from the study CHCs' shared EHR to create performance metrics on the percentage of diabetic patients who had active prescriptions for statins and ACEI/ARBs, if indicated for those medications per national guidelines.
These study-specific metrics were calculated for each clinic and clinician, using aggregated data (Figure 1), and given to the study CHCs' leadership as monthly clinic-level reports; in addition, patient panel summaries were given to each provider at the study clinics at varying intervals. Individual patients' "indicated and active" status was also given to providers by request (Figure 2). The CHCs' leaders and individual providers distributed this performance feedback to clinic staff as desired; for details on how the feedback was disseminated, see Table 1. Using a convergent design within a mixed-methods framework (Fetters et al., 2013), we collected qualitative data on the dynamics and contextual factors affecting intervention uptake (Bunce et al., 2014). The extent of qualitative data collection at each clinic was informed by pragmatic constraints and data saturation (the point at which no new information was observed) (Guest et al., 2006). Table 2 details our methods and sampling strategy. The intervention was implemented in June 2011 and supported through May 2015; qualitative data were collected between December 2011 and October 2014. Qualitative analysis was guided by the constant comparative method (Parsons, 2004), wherein each finding and interpretation is compared to previous findings to generate theory grounded in the data. We used the software program QSR NVivo to organize and facilitate analysis of the interview and group discussion transcripts and observation field notes. We identified key emergent concepts, or codes, and assigned them to appropriate text segments. Each code's scope and definition was then refined, and additional themes identified, through iterative immersion/crystallization cycles (deep engagement with the data followed by reflection) (Borkan, 1999). Our interpretations of the data were confirmed through regular discussions among the research team, which included experts in clinical care, quality improvement, and quantitative and qualitative research, and clinic leadership at the study CHCs. This study presents our qualitative findings on CHC physicians' perspectives on the performance feedback provided to them as described earlier. This study was approved by the Kaiser Permanente NW Institutional Review Board. Study participants (clinic staff) gave verbal consent prior to data collection.
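To make the metric construction above concrete, the sketch below shows how a provider-level "indicated and active" percentage and a patient-level "indicated but not active" roster might be derived from extracted EHR data. This is a minimal illustration under assumed names: the schema, the fields (provider_id, indicated_statin, active_statin_rx), and the use of pandas are hypothetical, not the study's actual extraction code.

```python
# Hypothetical sketch of the "indicated and active" metric; field names
# and schema are assumptions, not the study's actual extraction logic.
import pandas as pd

# One row per patient with diabetes, as flagged by the extraction algorithm.
patients = pd.DataFrame({
    "provider_id":      ["A1", "A1", "A1", "B2", "B2"],
    "indicated_statin": [True, True, False, True, True],   # guideline-indicated
    "active_statin_rx": [True, False, False, True, True],  # active Rx in last year
})

# Denominator: patients indicated for the medication.
indicated = patients[patients["indicated_statin"]]

# Performance metric: percent "indicated and active", aggregated per provider
# (a clinic-level report is the same computation without the groupby).
metric = (indicated.groupby("provider_id")["active_statin_rx"]
                   .mean().mul(100).round(1))
print(metric)  # A1: 50.0, B2: 100.0

# Data feedback: actionable roster of "indicated but not active" patients.
print(indicated[~indicated["active_statin_rx"]])
```

As the findings below illustrate, the hard part is not this arithmetic but what the flags fail to capture, such as undocumented reasons why an indicated medication was not prescribed.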
CHC providers' perceptions of performance feedback measures
CHC providers stated that they often questioned the validity of performance feedback measures, usually because the feedback measures did not account for CHC patients' needs or the complexity of their lives, or for clinical decisions made by providers informed by this complexity.

Table 1 (excerpt). Dissemination of performance feedback, by organization:
A — In study year 4, intending to increase reliance on the real-time alert, began inserting only the intervention logic into the EHR problem list. The roster continued to be used by an RN diabetes QI lead for individual meetings with providers to discuss the overall care of their diabetic patients.
B — Quarterly, the site coordinator sent metrics for each provider and clinic to the medical director, who then disseminated them to clinic-based lead providers. Monthly, the site coordinator posted roster-based lists of patients "indicated but not active," by care team (usually 2 providers), on the organization's shared drive. Some lead providers presented the metrics at clinic-specific provider team meetings; staff had to take the initiative to search for and pull the list. Graphs depicting overall clinic progress were sometimes posted on clinic bulletin boards, at the discretion of clinic managers.
C — The site coordinator pulled provider-specific percentages of "indicated and active" from the study results and e-mailed them in graph form (along with the clinic-wide percentages) to individual providers 4 times over the course of the 5-y study. Approximately every 6 wk, the site coordinator created roster-based provider-specific lists of patients "indicated but not active." Leadership sometimes used the clinic metrics as a springboard for discussion in leadership and QI meetings. Copies were distributed on paper in person and electronically by e-mail (varied), usually given only to providers but, by request, sometimes shared with other members of the care team.

Defining the population: Who counts as "indicated and active"?
In this study, the feedback measures' denominator was the number of patients indicated for a given medication (ACEI/ARB or statin), and the numerator was the number prescribed the indicated medication in the last year. However, CHC patients' socioeconomic circumstances (eg, lack of money to pay for the medication and housing instability) or related clinician judgment (eg, perceived likelihood of medication nonadherence and preference for a stepwise approach to prescribing for patients with complex needs) could be barriers to prescribing a given "indicated" medication. For example, some CHC patients bought medications in their home country, where they cost less, or took medications that family members or friends had discontinued. Without documentation of these circumstances in the EHR, however, the feedback measures would identify the patient's prescription as expired. Two examples illustrate this: [Provider] asked about one medication that [the patient] said he was taking but it looked in the chart like he was out of. Patient explained that his son was taking the same medication but had recently been prescribed a higher dose, so he gave his dad (the patient) his remaining pills of the lower dose. "Because I don't have money." (Field note) . . . If patients are not on the medications, it is not because it wasn't offered. [The provider] believes that if the patient was not on medications it is due to education level affecting understanding, lack of resources for scripts and tests, or patient flat out refuses. The concern . . . is how this information is reflected in the statistics or data.
(Field note) Furthermore, CHC providers reported that their patients are often unable to see their primary provider for periods of time (eg, if they are out of the country, or in prison), or are not available for other reasons (eg, transient populations and inaccurate/frequently changing contact information). When patients on a provider's panel were temporarily receiving care elsewhere (eg, while in jail), and medication data were not shared between care sites, feedback measures would be affected. Similarly, migrant workers remain on the provider's panel (and thus in the feedback measures' denominator) even when they are out of the country and cannot be reached by the clinic. Their prescriptions might expire while the patient was unable to see their provider, negatively impacting rates of guideline-based prescribing in the performance feedback measures. In addition, CHC patients are not enrolled members, which can affect measurement of care quality by making it unclear whether a patient was not receiving appropriate care or was simply out of reach. Patients identified as lost to follow-up were removed from the feedback measures' denominators; however, as clinics used different methods for defining patients as lost to follow-up, accounting for this accurately in data extraction was difficult. For example, patients could be considered in a given provider's denominator if they were "touched" by the clinic in the last year (eg, by attempted phone calls) even if no actual contact was made. Thus, patients who were never seen in person could be included in the feedback measure's denominator. Situations such as these could not be effectively captured by the extraction algorithm, as the EHR lacked discrete data fields where providers could record them, so these exceptions were not reflected in the performance data. As a result, the CHC providers often questioned the measures' validity and fairness. One provider said that receiving such reports can feel like "salt in the wound." Another provider noted: . . . we get these stupid reports all the time telling you you're good, you're bad. I mean, just one less thing to like have somebody pointing fingers at me. . . . It's horrible as a provider, really, to get all of these measurements . . . It's like saying you're going to be graded on this. (PCP)

Gap between potential and actuality
Providers consistently described struggling between a desire to use feedback data to improve patient care and their inability to do so given inherent situational constraints. This could lead to feeling overwhelmed, anxious, frustrated, or guilty when they received the feedback reports. . . . the possibilities for data and what we could do with it in a systematic way are amazing. But we are so completely overloaded . . . that we just can't even deal with the data that we get . . . (PCP) However, a few positives were noted. Some providers acknowledged that performance feedback can be a helpful reminder of the importance of the targeted medications in diabetes care, which motivated them to discuss this with their patients. I like it, personally, because . . . somebody is helping me to see. Sometimes it is difficult to see the whole picture . . . . It is not because you lack the knowledge or the experience. But you can't catch everything. (PCP) Others appreciated the feedback as a safeguard, even though they were often already aware of the patients flagged as needing specific actions or medications.
Conversely, others thought it was not worth reviewing the reports, as they already knew their patients' issues: [I would look over] the patients who were indicated for certain meds . . . that weren't on them, and just kind of just quickly review who those patients were. Just kind of . . . do I recognize this patient? Oh, am I surprised that they're not on a statin or ACE? No I'm not. Okay. (PCP)

Provider suggestions for improving performance feedback measures
Despite the tensions described earlier, most providers said they wanted to receive feedback data, but many noted that organizational changes (eg, to workflow, staffing, and productivity expectations) would be necessary precursors to its effective use. Without these changes, providers thought such data would primarily serve as snapshots of current care quality, but not as tools to improve performance. They suggested a number of ways to improve both the acceptability and utility of performance feedback.

Staffing and resources
Dedicated, management-supported "brain time" was suggested as a means to enable care teams to review feedback data together and identify next steps to addressing care gaps. Providers also recommended designating a trusted team member (eg, an RN) as responsible for identifying potentially actionable items from the feedback data. . . . [what] I'm kind of looking for is a QI [quality improvement] person to come in here that has the data, and goes to the team meetings, and can [be] sort of non-judgmentally preventive. . . . So it's not so much as you bad person . . . but hey, we look a little low here, how about if we just talk for a few minutes about, you know, what one little step we could take, and let's try it for a few months and see how it works. But being in the team so that they can support that work, and then checking back in. (PCP)

Action plans
Providers also requested concrete suggestions for how to prioritize and act on feedback data (along with resources to do so), saying that data alone are insufficient to drive change. [What] I'd really like is here's your data, and here's what we're going to do with this. . . . Here's the twelve patients that you have six things wrong with them, that if you got these patients in they're really high yield, something like that. (PCP)

Holistic, patient-specific format
Many providers commented that patient-level data would be more useful than aggregate performance metrics, and asked that such feedback data include patient-specific information along with the panel-based metrics. Many also requested that such patient-level feedback data include relevant clinical indicators in addition to measures targeted by a given initiative (eg, a diabetes "dashboard" that shows HbA1c, blood pressure, and low-density lipoprotein results along with guideline-indicated medications) for a more holistic view of the patient's needs. I guess . . . if you were trending in the wrong direction that would be useful information . . . But for me . . . probably meatier is pulling lists and looking at specific individuals and saying, you know, here's this woman . . . she's not on statin . . . is there a reason why? (PCP) The patient panel data (Figure 2), an example of this approach, were generally well received by care teams. The colors are a "stoplight" tool: green indicated measurement within normal limits, yellow indicated that a measure was approaching a concerning level, and red indicated a problem.
Our findings concur with, and add to, this literature by exploring provider perceptions within the safety-net setting. CHC patients are often unable to follow care recommendations for financial reasons, may receive care elsewhere for periods of time, or may be otherwise unavailable to clinic staff, leading to inaccuracies in feedback measures. CHC providers, understanding their patients' barriers to acting on recommended care, are understandably disinclined to trust feedback data that do not account for such barriers. Thus, in this important setting, creating EHR-based performance feedback that users perceive as valid may be particularly challenging because of limitations in how effectively such measures can account for the socioeconomic circumstances of CHC patients' lives. Limitations on the ability to extract data in a way that accounts for such factors are inherent to most EHRs (Baker et al., 2007; Baus et al., 2016; Gardner et al., 2014; Persell et al., 2006; Urech et al., 2015). EHR data extraction entails accessing data recorded in discrete fields accessible and searchable by a computer algorithm. The type of "nonclinical" patient information discussed earlier as barriers to care, as well as the reasoning behind the nonprovision of recommended care, is rarely documented in standardized locations or in discrete data fields (if at all) (Behforouz et al., 2014; Matthews et al., 2016; Steinman et al., 2013), compromising the ability to extract comprehensive performance feedback data recognized as legitimate by users. Improved EHR functions for documenting exceptions might enable more accurate quality measurement, and thus improve providers' receptiveness to and trust of feedback data. In prior research, providers were more receptive to EHR-based clinical decision support when documentation of exceptions was enabled (Persell et al., 2008); the same may apply for feedback measures. Another EHR adaptation that could improve such measures' accuracy would be heightened capacity for health information exchange, so that data on care that CHC patients receive external to their CHC could be reviewed by their primary care provider. This study's CHC providers' suggestions for improving the legitimacy and utility of EHR data-based performance feedback did not directly speak to the challenges of using EHR data to create accurate measures, but they did so indirectly. For example, the providers recommended giving designated staff time and support for reviewing and acting on performance feedback. Such support could include ensuring that the appropriate people understand how each measure is extracted and constructed, and what a given measure might miss due to limitations in data structures. Providers who dispute performance feedback that is extracted from their own EHR data may feel more confident in the feedback if they understand how the metrics and reports are calculated from the raw data (eg, the algorithm will not catch free-text documentation of patient refusal; to remove that patient from the measure denominator it is necessary to use the alert override option). In addition, the providers' ambivalence about the performance measures illuminates the need to acknowledge that care quality cannot be judged simplistically, and to ensure that focusing on measurement does not conflict with patient-centered care. Proactively acknowledging these needs and working with providers to address them could further strengthen trust in feedback measures. This study has several limitations.
The study clinics were involved in other, concurrent practice change efforts, some of which also involved performance feedback. Given this, provider reactions may have been atypical, limiting the generalizability of the findings. Interviews and observations were conducted by members of the research team potentially perceived to have an investment in intervention outcomes; respondents may therefore have moderated their responses. Finally, results are purely descriptive and are not correlated with any quantitative outcomes.

CONCLUSION
Provider challenges to the legitimacy of EHR data-based performance feedback measures have impeded the effective use of such feedback. Addressing issues related to such measures' credibility and legitimacy, and providing strategies and resources to take action as necessary, may help realize the potential of EHR data-based performance feedback in improving patient care.
2018-04-03T04:47:14.161Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "09c2630c41fb95029e25097ae2a343b24b10b816", "oa_license": "CCBYNC", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5137808", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "354eb34fd77990670aee78fa9731877e5f045183", "s2fieldsofstudy": [ "Psychology", "Business" ], "extfieldsofstudy": [ "Medicine" ] }
220295202
pes2o/s2orc
v3-fos-license
Integrated network modeling approach defines key metabolic responses of soil microbiomes to perturbations The soil environment is constantly changing due to shifts in soil moisture, nutrient availability and other conditions. To contend with these changes, soil microorganisms have evolved a variety of ways to adapt to environmental perturbations, including regulation of gene expression. However, it is challenging to untangle the complex phenotypic response of the soil to environmental change, partly due to the absence of predictive modeling frameworks that can mechanistically link molecular-level changes in soil microorganisms to a community’s functional phenotypes (or metaphenome). Towards filling this gap, we performed a combined analysis of metabolic and gene co-expression networks to explore how the soil microbiome responded to changes in soil moisture and nutrient conditions and to determine which genes were expressed under a given condition. Our integrated modeling approach revealed previously unknown, but critically important aspects of the soil microbiomes’ response to environmental perturbations. Incorporation of metabolomic and transcriptomic data into metabolic reaction networks identified condition-specific signature genes that are uniquely associated with dry, wet, and glycine-amended conditions. A subsequent gene co-expression network analysis revealed that drought-associated genes occupied more central positions in a network model of the soil community, compared to the genes associated with wet and glycine-amended conditions. These results indicate the occurrence of system-wide metabolic coordination when soil microbiomes cope with moisture or nutrient perturbations. Importantly, the approach that we demonstrate here to analyze large-scale multi-omics data from a natural soil environment is applicable to other microbiome systems for which multi-omics data are available.

Key questions that subsequently arose but remain unanswered are how condition-specific genes are structurally connected to other genes and how central they are to the response of the soil microbiome as a whole. To address these questions, we aimed to integrate our previous work with a complementary gene interaction network model [10][11][12] . Previous studies of soil gene expression profiles have examined how the soil responds to one or more conditions in isolation 13,14 . While these approaches can be useful for determining how the soil microbiome responds to specific conditions of interest, a high-level view of the system can only be obtained when all of the data are combined and instances of co-expression between genes across conditions can be viewed as a network. Networks of this type, where genes are linked based on co-expression, have been inferred for a number of prokaryotic and eukaryotic species 15,16 but are just starting to be examined for communities consisting of multiple species 17 . Some studies have linked species in networks based on their co-abundance 18,19 . However, a network of genes based on co-expression can provide more detailed information about how specific pathways are related and which processes are central not only to specific conditions but to the biological system as a whole. Such approaches have previously been used to identify gene-to-gene connections (pointing to their centrality in the network and their importance to the system) 10,11 and to show coordinated responses across conditions 17 .
Here, we used a general modeling platform that integrates metabolic and gene co-expression networks to reveal the fundamental relationships between condition-specific gene functions and their centralities in the soil microbiome. For this purpose, we created metabolic models using multi-omics data collected from a native prairie soil microbiome that was subjected to different perturbations, including changes in soil moisture and nutrient addition. Previously, we used MEMPIS to identify condition-specific genes and reactions in response to changes in soil moisture 9 . We showed that our metabolic network-based prediction of condition-specific genes is more sensitive and powerful compared to typical feature selection, for example approaches that only focus on genes that are up- or downregulated when comparing pairs of conditions 20 . Here, we compared different environmental perturbations, including addition of nutrients to soil (glycine, a common root exudate 21,22 ), with existing moisture perturbation data 9 to infer gene co-expression networks. We aimed to determine the centrality of those genes identified by MEMPIS that responded to specific conditions (e.g., the degree to which the responding genes are linked to other genes and how critical they are to the structure of the network). This allowed us to address new hypotheses related to the importance of processes responding to certain conditions (wet, dry, and glycine addition) within a global network of the soil microbiome. The combination of network analyses presented here revealed that most genes associated with dry conditions occupied highly central positions in the network, more so than genes responding specifically to wet conditions or glycine amendment. Our integrative network approach offers a powerful way to interrogate the metaphenotypic response 23 of complex and diverse microbial communities to a number of specific perturbations.

Results
Identification of signature genes and their functional implications in metabolic pathways. Application of MEMPIS, an algorithm that simultaneously integrates metabolite and gene expression profiles into metabolic networks, led to the identification of microbial reactions and genes (referring to gene functions described by EC numbers derived from transcript sequences) that are uniquely associated with specific soil perturbations: dry, wet, and glycine-amended soils (Supp Table 1). Unique genes for each condition were defined as those predicted to be associated with only one specific perturbation condition. The number of uniquely responsive genes varied across the conditions, with 8, 4, and 10 unique genes for dry, wet, and glycine-amended conditions, respectively (Supp Table 1 and Fig. 1). In contrast with our previous study 9 that focused only on moisture perturbations, the list of genes here was determined by including the results from glycine amendment. We note that, despite this additional perturbation dataset, the resulting unique genes for dry and wet conditions remained the same, indicating that the responses of the soil microbiome to water stress and nutrient perturbations were metabolically distinct. To understand the functional implications of the condition-dependent unique genes that were expressed and identified in the data in Supp Table 1, we mapped predicted gene sets onto the KEGG reaction network. Many of the 'dry-associated genes' were found in the pathway for trehalose metabolism, part of sucrose and starch metabolism (Supp Fig. 1A and Supp Fig. 2A).
By contrast, 'wetting-associated genes' were found sporadically across different reaction modules and located in isolation, making it difficult to identify connected reaction pathways as biochemical signatures. This prediction supports our previous work 9 by reconfirming the activation of a set of dry-associated genes/reactions in the trehalose synthesis pathway even after newly incorporating glycine-amended data. Most of the unique glycine genes were involved in butanoate metabolism and connected reactions (Supp Fig. 1B and Supp Fig. 3). These genes included those encoding hydroxybutyrate dehydrogenase and poly(3-hydroxybutyrate) depolymerase, which are related to energy storage and the availability of nitrogen, phosphorus or oxygen in the environment [24][25][26] . We also found that genes primarily associated with fatty acid synthesis were commonly predicted under all three conditions (Supp Fig. 4). Compared to traditional statistical data analysis, the metabolic network-based predictions above provided deeper insights into condition-specific biochemical reactions in soils. For example, our method predicted the synthesis of sugars such as trehalose and maltose in dry soils (and their degradation in wet soils) 9 , but metabolite (i.e., GC-MS) data showed no such changes across dry and wet conditions. With differential expression analysis, or more advanced feature selection methods, we could not fully predict the trehalose synthesis pathway as a biochemical signature for dry soils (Supp Table 2). By contrast, the integration of metabolites and genes using metabolic network models pinpointed which specific pathways could be distinctively activated in soils across conditions.

Inference of co-expression network of soil transcriptomic data. We next inferred a gene co-expression network for the soil microbiome by integrating data from all perturbation conditions. The network was inferred using CLR and the resulting gene networks were ranked (see Methods) before selecting a network of 1,096 nodes and 2,000 edges (Fig. 2A). Within this network each node represents a gene (annotated with an E.C. number) and each edge represents an instance of co-expression: gene pairs were included as edges in the network if they had a Z-score of at least Z_TH (~4.20, i.e., Z_TH standard deviations above the mean of all mutual information scores). As a final step the main connected cluster of the network was selected so that centrality analyses would be the most accurate. This resulted in a sub-network of 1,061 nodes and 1,978 edges. Subsequently, we determined which genes occupied central positions in the network.

Figure 1. Condition-specific genes predicted from MEMPIS and the associated metabolic pathways. 21 condition-specific genes (except for EC 6.5.1.1) are broadly associated with 21 KEGG pathways with only a few overlaps (see Supp Table 1). The starch and sucrose metabolism pathways include four dry-associated genes and one wet gene, and the butanoate metabolism pathway includes five glycine genes. Overall, carbohydrate metabolism responds to both moisture and carbon amendments while glycine genes are associated with amino acid metabolism more so than other condition-specific genes.
The centrality of network genes can be measured by several metrics, including how many edges a particular gene has (more edges equates to higher centrality) or how much a gene acts as a bridge between two separate clusters of genes (genes that occupy important bridging positions have higher centrality). Other studies have found that genes that have high centrality by either of these measures are critically important to the system 11,12 . We identified the most central genes in the networks inferred here (Fig. 2B). Two different measurements of centrality were applied: degree (number of edges) and betweenness (how much a gene acts as a bridge). Degree was used as a proxy for genes that are critically important to a small number of pathways and have many connections to other genes. Betweenness was used as a proxy for genes that may be involved in multiple different pathways and are linked to genes in disparate portions of the network. Genes of high centrality in the network are shown in Table 1 and include several genes involved in key metabolic pathways such as gluconeogenesis and starch and sucrose metabolism. Genes involved in respiration and with synthesis of, or resistance to, antibiotics were also highly central. One gene, encoding glycoaldehyde transferase, was of very high centrality when ranked by both betweenness (0.053, ranked 4th out of 1,061 genes) and degree (21, ranked 3rd out of 1,061 genes).

Centralities of condition-specific genes and their functional relationships to other genes. As centrality can be used as a proxy for functional importance, we next aimed to determine if any of the genes that were associated with specific growth conditions occupied central positions in the network. All genes were graphed and their associated centrality values for both degree and betweenness were determined. This showed that genes associated with dry conditions had much higher centrality values compared to other genes, even those preferentially associated with either wet or glycine conditions (Fig. 3). The average betweenness value for genes in the network was 0.006, while 'dry-associated genes' in the network had an average betweenness value of 0.017 (2.83-fold higher than average). The average degree value for genes in the network was 3.72, while 'dry-associated genes' in the network had an average degree value of 9.375 (2.5-fold higher than average). Only two 'dry-associated genes', EC 2.7.1.29 (glycerone kinase) and EC 3.4.11.5 (prolyl aminopeptidase), had betweenness and degree values that were lower than the average (Table 2). This finding contrasts with genes associated with wet or glycine-amended conditions. The three genes in the network that were associated with wet conditions had an average betweenness value of 0.007, only 1.1-fold higher than average, with 2/3 of the genes having below-average betweenness, and an average degree value of 6 (1.6-fold higher than the average) (Table 2). Genes associated with glycine were of even lower centrality, with eight genes in the network having an average betweenness value of 0.008, 1.36-fold higher than average, but with 4/8 genes showing lower than average betweenness. 'Glycine-associated genes' had an average degree value of 3 (lower than the average), with 5/8 of the genes having a below-average degree value compared to all genes in the network (Table 2).
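To illustrate the two centrality measures used above, the short sketch below computes degree and betweenness for a toy co-expression graph with NetworkX. The edge list and gene labels are invented for demonstration; the study itself calculated centralities in Cytoscape.

```python
# Toy illustration of degree and betweenness centrality; the edge list is
# invented (the study computed these values in Cytoscape).
import networkx as nx

edges = [("EC 2.2.1.1", "EC 2.7.1.29"), ("EC 2.2.1.1", "EC 3.2.1.1"),
         ("EC 2.2.1.1", "EC 5.3.1.9"), ("EC 5.3.1.9", "EC 2.7.1.11"),
         ("EC 2.7.1.11", "EC 2.7.1.29")]
g = nx.Graph(edges)

degree = dict(g.degree())                   # edges per gene (local hubness)
betweenness = nx.betweenness_centrality(g)  # bridging role between clusters

# Rank genes by betweenness, as done for the paper's Table 1.
for gene in sorted(g.nodes, key=betweenness.get, reverse=True):
    print(gene, degree[gene], round(betweenness[gene], 3))
```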
Networks present powerful ways to view not only which processes occupy central positions and are thus potentially 'important', but also how genes and processes are related to each other. Therefore, we next determined which genes were connected to the highly central genes associated with dry conditions. This was performed by forming a subnetwork consisting of genes that had an edge with at least one of the seven genes associated with dry conditions, excluding EC 3.4.11.5, which was not in proximity to other 'dry-associated genes'. This subnetwork contained 55 genes (including the seven associated with dry conditions) with 178 edges between them (Fig. 4). Among these 55 genes, enriched functions included the biosynthesis of secondary metabolites.

Discussion
In recent years, multi-omics technologies have advanced to the point that they can now be used to help decipher functions carried out by complex soil microbial communities 27 . However, the resulting data are still computationally challenging to interpret due to the complexity and diversity of the data. Here, we demonstrated that successful integration of two modeling approaches to multi-omics data derived from soil that had been subjected to different environmental perturbations (wetting, desiccation or nutrient amendment) not only enabled prediction of unique genes and pathways that responded to each of the conditions, but also revealed their relationships with structural centralities. By combining two complementary modeling approaches (metabolic and gene network modeling) we were able to achieve a deeper understanding of the metaphenomic response of the soil microbial community to the specific perturbations. Development of reliable computational network models poses a challenge due to intrinsic hurdles associated with collection of omics data from soil samples. In particular, metabolite extraction from soil can be affected by a number of variables not present in more controlled systems, including soil pH, moisture, temperature, and particle size. Chemical functional groups of metabolites can sorb to hydrophobic/hydrophilic particles in soil, and temperature and pH can influence solubility and extraction. All of this means that metabolites with different chemical moieties might not be extracted and analyzed equally. Due to these challenges, we conservatively used only a subset of metabolites that were identified in different conditions. While rigorous evaluation of the level of bias was not possible, we confirmed that (1) these metabolites were compounds commonly detected in environmental samples, and (2) they were almost identical across perturbation conditions. This implies that prediction of "condition-specific" genes/reactions was primarily driven by differential gene expression profiles rather than metabolite data. However, successful prediction of those signature molecules required inclusion of metabolite data due to their role as hard constraints on metabolic network models. Integrating transcriptomic and metabolomic data therefore allowed the two data types to complement each other, minimizing the challenges of obtaining unbiased data. The analysis of gene expression networks provided new insight that could not be obtained by metabolic network modeling alone.
Previous studies of gene co-expression network structure have revealed that centrality can be a proxy for functional importance 10,11 , and that there is a significant overlap between genes in bacterial co-expression networks that occupy highly central positions and those that are part of central metabolic pathways that are crucial for growth 12 . Here, we find that (1) the unique genes associated with certain conditions occupy various centralities in our gene co-expression network and (2) dry-associated genes occupy more central positions in the network than other condition-specific genes. The observation that dry-associated genes are more central in our network may suggest that such pathways are critical to soil microbiomes as they respond to a number of other conditions as well. It is important to note that our gene co-expression network is made from data representing several different conditions; therefore, centrality values are derived from a model that shows the overall collective response to all of these conditions. Drought conditions not only lead to a great deal of environmental stress on the soil microbiome, but also increase other kinds of stress, such as the lack of nutrients (as they are no longer soluble), an increase in salt stress, etc. Other studies have also shown that lack of water leads to larger changes in the soil microbiome compared to other stresses 28 , perhaps explaining the central position that drought response occupies. These results indicate that the ability to respond to drought stress is central and important, more so than the response to excessive water or influxes of carbon.

We also showed evidence that drought processes are critically important based on their links within the network to other pathways. Processes that are linked in networks reflect points of coordination and similar expression between those processes. The fact that dry-associated genes are linked to genes involved in central metabolic pathways (pentose phosphate, glycolysis/gluconeogenesis) strongly indicates that processes responding to dry conditions are central to the functioning of the soil microbiome. Dry-associated genes were also linked to siderophore genes, suggesting that these processes (drought response, siderophore production) are correlated. Siderophore production has been linked to the responses of plants and bacteria during drought stress [29][30][31] and, while no plants were included in these studies, soil samples were from fields where plants were present, suggesting that bacterial processes linked to plant-microbe interactions are correlated with drought responses. The studies here lead to two general conclusions: (1) a combined approach of multiple modeling strategies provides a new understanding of soil biochemistry (such as the relationships between a gene's structural centrality and condition specificity) that cannot be obtained by each approach in isolation, and (2) dry-associated genes occupy central and important positions in a network model of the soil microbiome, suggesting that for this soil, it was critical for the soil microorganisms to be able to respond to soil drying, as would be expected under drought. Future studies will make use of additional omics data (such as proteomics) to increase the value of network models of microbiomes. The use of modeling approaches, specifically the combinatorial approach shown here, is a powerful way to interpret large amounts of data describing complex systems.
The hypotheses generated can be tested experimentally in natural soil systems, providing new information about how these systems respond to a changing environment, such as is expected to occur with climate change.

Soil samples and perturbation experiments. Soil samples were collected from the Konza Prairie Biological Station (KPBS), as previously described 9,32 . In brief, composite samples (0-15 cm) were obtained from three field locations (sites A, B and C) representing a natural hydrologic gradient. The soil was frozen in liquid nitrogen in the field and shipped frozen on dry ice to the Pacific Northwest National Laboratory (PNNL). Immediately upon receipt at PNNL, the soil was quickly thawed and the individual field replicates were immediately sieved (<2 mm) and proportioned into ~50 g aliquots in eighteen 50 ml Falcon tubes per field location (resulting in 18 identical reps per site A, B and C). The soil aliquots were stored frozen (6 months to 1.5 years) at −80 °C until used in perturbation experiments. Three replicates of each field location were subjected to two different types of perturbations: nutrient (glycine) addition or soil moisture stress (wetting to saturation or drying). Glycine was chosen as a nutrient amendment because it is a common root exudate that the soil microbiome is likely to be exposed to in soils 21,22 . Soil samples were thawed and pre-incubated at 21 °C overnight before the onset of the respective perturbation experiments. For nutrient addition, a glycine solution (10 mM) was added to 10 g field-moist soil in 50 mL Falcon tubes to a final concentration of 0.027 mmol g⁻¹ dry weight soil and mixed using sterile pipette tips. Nine microcosms (3 sites × 3 replicates) were supplemented with glycine and are referred to as "Gly-positive" samples, and another 9 were maintained as controls after adjusting with de-ionized water. The 18 microcosms thus constructed were incubated at 21 °C in the dark for 48 h, the period during which the highest respiration activity was measured 9 . In a separate experiment using the same soil samples, herein referred to as the soil moisture perturbation, soils were similarly pre-incubated and subjected to three moisture conditions: saturated, air-dried to constant weight, or maintained at field-moist (control) conditions, in triplicate microcosms, as previously described 9 . At the end of the respective perturbation experiments, subsamples from each replicate microcosm were collected and analyzed to determine which soil microbial community genes were expressed (metatranscriptomes) and the metabolic compositions of the soil communities. Details of ribonucleic acid (RNA) and metabolite extractions (using MPLEX), sequencing the metatranscriptome and gas chromatography-mass spectrometry (GC-MS) analysis of the metabolome, and raw data processing were previously described 9 . We note that metatranscriptomes from soil B that had undergone moisture perturbations could not be obtained due to challenges with obtaining sufficient RNA 9 .

Prediction of active metabolic reactions in each condition using metabolic network models. The MEMPIS algorithm 9 was applied to the multi-omics datasets (i.e., genes and metabolites) to identify condition-specific pathways or subnetworks of reactions. To reiterate, both metabolite and gene expression data were available for the control and treatment samples, which included dry soils A and C, wet soils A and C, and glycine-amended soils A, B, and C.
A complete biochemical reaction map obtained from the comprehensively curated KEGG database was used as a master metabolic network to incorporate metabolites and genes. While the master metabolic network was generic, the pathways resulting from network-omics integration were condition-specific through the combination of site-specific omics profiles. The MEMPIS algorithm identified minimal subnetworks that connect (1) all identified metabolites and (2) over-expressed genes that satisfy two prescribed thresholds for fold changes and adjusted p-values in each perturbation against its control sample.

Data-driven feature selection. For comparison to the metabolic network-based identification of condition-specific genes/reactions, data-driven feature selection methods were performed to extract key signatures from the metatranscriptomic data that effectively represented each experimental condition. Recursive feature elimination with cross-validated selection was performed using tree-based estimators to differentiate dry, wet, glycine and control conditions, and was implemented with the Python package scikit-learn (https://scikit-learn.org/). We performed PCA and ANOVA tests using the same Python package to extract statistically significant features. Features identified by these selection methods were considered statistically significant if the adjusted p-values were < 0.05 (in the ANOVA test).

Gene co-expression networks. Gene expression data collected from the two perturbation experiments were used with the Context Likelihood of Relatedness (CLR) 33 program to infer a network where genes were nodes and edges were instances of high co-expression between nodes. CLR was run using default settings, with the output being a matrix of Z-scores of mutual information values between all gene pairs. Gene pairs with higher Z-scores are considered to be more tightly co-expressed. The weighted Z-score matrix was converted to an unweighted matrix that replaced all Z-scores with either a zero (if it was below our cutoff for an edge) or a one (if it was above our cutoff). A critical decision point in inferring an unweighted matrix for network analysis is the choice of cutoff used to define an edge in the network. Here, we tested several cutoffs and chose 4.20, meaning that genes with a mutual information score that was at least 4.20 standard deviations above the mean of all mutual information scores in the matrix were connected by an edge in the network. This cutoff was chosen because it was high enough to ensure that only biologically relevant edges were included in our results (a score of 4.2 corresponds to a p-value of < 5E-5) and because it led to a network with significant structure for analysis. The resulting network has a node degree distribution that fits a power law (R² value of 0.935), a common feature of scale-free biological networks 34 . The resulting unweighted networks were viewed in Cytoscape 35 . Centrality values, betweenness and degree, were also calculated using Cytoscape. Annotations for genes were pulled from KEGG 36 .
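As a rough sketch of the post-CLR steps just described (thresholding the Z-score matrix into an unweighted network, then keeping the main connected cluster), the snippet below reproduces the logic on synthetic data. The random matrix stands in for real CLR output, and a looser demonstration cutoff is used so that the toy data yield edges; the study itself applied a cutoff of 4.20 standard deviations to genuine mutual-information Z-scores.

```python
# Post-CLR network construction on synthetic data. The random matrix is a
# stand-in for real CLR output; with real scores one would use PAPER_TH.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_genes = 200
z = rng.normal(size=(n_genes, n_genes))
z = (z + z.T) / 2            # symmetric, like pairwise mutual-information scores
np.fill_diagonal(z, 0)

PAPER_TH = 4.20              # cutoff used in the study (p < 5e-5)
DEMO_TH = 2.5                # looser cutoff so this toy matrix yields edges
cutoff = z.mean() + DEMO_TH * z.std()

adj = (z >= cutoff).astype(int)          # unweighted adjacency: 1 = edge, 0 = none
g = nx.from_numpy_array(adj)

# Keep only the largest connected component before any centrality analysis.
main = g.subgraph(max(nx.connected_components(g), key=len)).copy()
print(main.number_of_nodes(), main.number_of_edges())
```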
2020-07-02T15:47:34.653Z
2020-07-02T00:00:00.000
{ "year": 2020, "sha1": "09193c7c564d8d017f498f05a8d6fd8ffd941adb", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-67878-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a5df716438392a4e1cc8ac876556a1085fa13dd9", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258910293
pes2o/s2orc
v3-fos-license
Eating disorder symptoms among children and adolescents in Germany before and after the onset of the COVID-19 pandemic Background Disordered eating is highly prevalent among children and adolescents. Since the outbreak of the COVID-19 pandemic, hospitalizations due to eating disorders have peaked and overweight has risen. The aim of this study was to determine differences in the prevalence of eating disorder symptoms among children and adolescents in Germany before and after the onset of the COVID-19 pandemic and to identify associated factors. Materials and methods Eating disorder symptoms and associated factors were examined in a sample of n = 1,001 participants of the nationwide population-based COPSY study in autumn 2021. Standardized and validated instruments were used to survey 11–17-year-olds along with a respective parent. To identify differences in prevalence rates, logistic regression was used to compare results with data from n = 997 participants of the prepandemic BELLA study. Multiple logistic regression analyses were performed to examine associations with relevant factors in the pandemic COPSY sample. Results Eating disorder symptoms were reported by 17.18% of females and 15.08% of males in the COPSY study. Prevalence rates were lower overall in the COPSY sample compared to before the pandemic. Male gender, anxiety, and depressive symptoms were associated with increased odds for eating disorder symptoms in the pandemic. Conclusion The pandemic underscores the importance of further research, but also of prevention and intervention programs that address disordered eating in children and adolescents, with a focus on age- and gender-specific differences and developments. In addition, screening instruments for eating disorder symptoms in youths need to be adapted and validated.

Introduction
For more than 2 years, the daily lives of people around the world have been affected by the outbreak of the SARS-CoV-2 pandemic. Although children and adolescents experience fewer symptoms of a COVID-19 infection compared to adults (1), the pandemic has severely impaired their social, school and family life and poses a great challenge to their mental health (2). A growing body of evidence, including systematic reviews and longitudinal studies at the international (3,4) and national level (5,6), reports an increase in a range of mental health problems, such as depression and anxiety symptoms, as well as lower quality of life. Given that mental health problems in childhood are associated with an enhanced risk for mental disorders in adulthood, these findings are of great public health importance (7). One aspect of mental health that has been affected by the pandemic is eating behavior. Studies report a number of changes in eating behaviors during the pandemic, including an increase in restrictive eating, but also in binge eating. Children consumed more salty and sweet snacks and were less physically active (8)(9)(10). Research findings also indicated an increase in weight among children and adolescents as well as a rise in overweight and obesity (11,12). Already at the onset of the pandemic, experts raised concerns about a potential increase in eating disorders (EDs) due to the loss of protective factors and elevated risk factors, such as disruption of routines (13)(14)(15)(16). Even before the pandemic, studies had reported an increase in prevalence and incidence rates of EDs over time across ages and genders (17,18).
EDs are associated with increased mortality rates, comorbidity, and long-term functional impairments, including chronicity (19)(20)(21)(22). An early age of onset is related to a longer duration of illness and higher symptomatology (23). Prior to the pandemic, symptoms of EDs, like a distorted body image and restrictive eating, were found in approximately 20% of German adolescents (24). However, it is still unclear to what extent the COVID-19 pandemic has impacted ED symptoms in children and adolescents. Considering that these symptoms are precursors to the development of EDs (25)(26)(27), research about ED symptoms in youth is crucial to identify at-risk groups. In the etiology and course of disordered eating behaviors, a number of individual, family, societal, and environmental factors play a role, in addition to sociodemographic factors such as female gender and migration background (24,28,29). Thus, self-efficacy, family climate, and social support have been identified as protective factors (28,30-33). Disordered eating behaviors have also been shown to be predicted by comorbid mental health problems such as depression and anxiety (28,34,35). Also, a higher level of emotional problems and parental depression were identified as risk factors in children and adolescents (24,28). Recent studies have identified a range of potential contributing factors to EDs and disordered eating behavior which are associated with the COVID-19 pandemic, including increased exposure to triggering social media content (36)(37)(38). Further, high COVID-19-related stress likely exacerbates pre-existing EDs and puts individuals at higher risk for ED symptoms such as binge eating, restrictive dieting, and body image concerns (39)(40)(41). Pandemic-related contact restrictions increased feelings of loneliness (42), a feeling closely related to EDs (43)(44)(45). At the same time, family conflicts escalated more frequently during the pandemic (46). Findings from two systematic reviews show that family conflicts were associated with worse ED outcomes among adolescents (37,47). A growing number of systematic reviews addressing EDs and disordered eating behaviors in the pandemic emphasize that most existing studies focus on clinical samples with a history of EDs (40,47-49). Despite the early age of onset of EDs and their high prevalence in adolescents (18,50), there are few studies focusing on these vulnerable populations since the onset of the pandemic. Adolescents with preexisting EDs appear to be at high risk for recurrence, exacerbation of symptoms, and severe impairment (51)(52)(53)(54)(55). Incidence rates of EDs have also increased, particularly among adolescents with anorexia nervosa (56,57). In line with these findings, clinicians report substantial increases in the symptom severity and hospitalizations of children and adolescents with EDs since the onset of the COVID-19 pandemic (43,56,58-61). An increase in hospital referrals related to diagnosed EDs was also found by analyzing health insurance records for children and adolescents in Germany (62). Yet, it is unclear whether this rise in hospital admissions and incidences is due to an exacerbation of symptoms in groups already at risk or to an increase in disordered eating in the general population (16). Large-scale population-based studies are still scarce, and the results of existing studies focusing on children and adolescents vary (36,63,64).
Among adults, studies mostly report a worsening of ED symptoms, such as an increase in binge eating, restrictive dieting, and worries about food and figure (48). An overall increase in the prevalence of eating pathology between the pre- and peri-COVID-19 era, from 15.3 to 23.3%, was reported in a recent meta-analysis (65). Considering that most population-based studies are based on cross-sectional study designs and retrospective recall and are of low or moderate methodological quality (37,48), representative studies in general populations are needed to estimate the burden of ED symptoms in the pandemic (16,66). Furthermore, there is a need to systematically assess changes in disordered eating behaviors that have arisen and to investigate which existing and newly emerging risk factors might influence ED symptoms in the pandemic. Building on findings prior to and during the COVID-19 pandemic, the present study aims to fill the aforementioned research gap by answering the following research questions: (1) What is the current prevalence of ED symptoms in children and adolescents in Germany? (2) How has this prevalence changed in the general population and in age- and gender-specific subgroups compared with prepandemic findings? (3) Which factors (general and pandemic-specific) are related to ED symptoms among children and adolescents in the pandemic? Based on these findings, recommendations for further research and clinical practice are drawn. The study will further inform policy makers and professionals about the impact of the pandemic on disordered eating among children and adolescents in Germany.

Study design and sample
The longitudinal, population-based COPSY study (COVID-19 and Psychological Health) investigates the impact of the COVID-19 pandemic on the mental health of children and adolescents in Germany. It has been conducted since the beginning of the COVID-19 pandemic in 2020. The first wave of the COPSY study took place in May/June 2020. During a nationwide lockdown in Germany, participants were re-contacted in winter 2020/2021 for the second wave of the COPSY study. After a summer with low infection rates and loosened restrictions, n = 1,618 families with children aged 7 to 18 years agreed to participate in the third wave of the COPSY study and completed the online survey between September and October 2021. A cross-sectional subsample of n = 1,001 children and adolescents aged 11 to 17 years who participated in the third wave of the COPSY study and provided information on eating disorder symptoms was included in the present analysis. The method and design of the COPSY study were developed in accordance with the population-based BELLA study, which is the mental health module of the National Health Survey of Children and Adolescents in Germany (KiGGS) (67). Data from n = 997 participants of the second wave of the KiGGS and the parallel fourth wave of the BELLA study (2014-2017) were used as a reference sample prior to the pandemic. The datasets of the COPSY and BELLA study were each weighted to reflect the sociodemographic characteristics of the German population. Weights for the COPSY sample were calculated according to the 2018 Microcensus. Individual weights ranged from 0.2 to 3.93. Further details about the study design and methodology of the COPSY and BELLA studies are provided elsewhere (5,68). The COPSY study was approved by the Local Psychological Ethics Committee (LPEK-0151) and the data protection commissioner of the University of Hamburg.
Measures
Sociodemographic information
Children and adolescents self-reported their age and gender. Information on parental education, migration background, and occupational status was obtained in the proxy survey among parents. Parental education status was classified according to the Comparative Analyses of Social Mobility in Industrial Nations (CASMIN) (69).
Eating disorder symptoms
ED symptoms were assessed using the SCOFF (Sick, Control, One stone, Fat, Food) screening instrument (70). The five dichotomous questions of the SCOFF address core features of anorexia nervosa and bulimia nervosa, including deliberate vomiting, loss of control over eating, distorted body image, impact of food on life, and weight loss. The latter item was adapted, rewording the weight loss of one stone to six kilograms, as has been done in other studies (24, 71). The diagnostic accuracy of the SCOFF was considered to be good overall according to a meta-analysis of international studies (72). Results from German studies vary: while a validation study in a representative sample of German adults found low sensitivity and a high number of false negatives (73), overall satisfactory psychometric properties but a low positive predictive value were found in a sample of 12-year-olds (74). In addition, low internal consistency (Cronbach's α = 0.44-0.66) was found in most studies (75). The SCOFF is known to have a tendency toward overinclusion, which is why reaching the cut-off for ED symptoms does not necessarily imply having an eating disorder (76). Nevertheless, the SCOFF is considered a useful and effective screening tool for detecting symptoms of EDs (72, 74). An established cut-off score of ≥2, which expresses suspicion of an ED, was applied.
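As a concrete illustration of the screening logic just described, the following is a minimal sketch of SCOFF scoring with the established cut-off of ≥2. The item keys and the record format are illustrative assumptions, not the COPSY study's actual data layout.

SCOFF_ITEMS = ["sick", "control", "one_stone", "fat", "food"]  # five yes/no items

def scoff_score(responses: dict) -> int:
    # Sum of affirmative answers (1 = yes, 0 = no) over the five items.
    return sum(int(bool(responses[item])) for item in SCOFF_ITEMS)

def screens_positive(responses: dict, cutoff: int = 2) -> bool:
    # A score at or above the cut-off expresses suspicion of an ED;
    # it does not imply a diagnosis (the SCOFF tends toward overinclusion).
    return scoff_score(responses) >= cutoff

# Example: two affirmative answers reach the cut-off.
example = {"sick": 0, "control": 1, "one_stone": 1, "fat": 0, "food": 0}
print(screens_positive(example))  # True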
Associated factors
Emotional problems were assessed by the parent-reported version of the respective subscale of the Strengths and Difficulties Questionnaire (SDQ) (77). Participants self-reported symptoms of depression using seven items from the German version of the Center for Epidemiological Studies Depression Scale for Children (CES-DC) (78, 79) and anxiety on nine items of the generalized anxiety subscale of the Screen for Child Anxiety Related Disorders (SCARED) (80). Parental depressive symptoms were assessed using the Patient Health Questionnaire (PHQ-8) (81). Scores can range between 0 and 10 for the SDQ, 0 and 21 for the CES-DC, 0 and 18 for the SCARED, and 0 and 24 for the PHQ-8. For all scales, higher scores indicate stronger symptoms. A four-item subscale of the Family Climate Scale (FCS) was administered to assess family cohesion (82). Social support was self-reported using four modified items from the Social Support Scale (SSS) (83, 84). Sum scores range between 4 and 16 for the FCS and 4 and 20 for the SSS. The five-item Personal Resources Scale (PRS) was administered to assess self-efficacy, with scores between 5 and 20 (85). Higher sum scores on the FCS, SSS, and PRS correspond to more pronounced resources.
Pandemic-specific factors
Children and adolescents rated feelings of loneliness using a short version of the UCLA Loneliness Scale (86). The four items used in this study had already been used with adolescents in population-based surveys (87), and a slightly modified response scale (1 = never to 5 = always) was used, resulting in an overall score between 4 and 20, with higher scores representing more loneliness. Pandemic-related burden, increases in family conflicts, and digital media use were assessed by newly developed items. Participants were asked to compare the frequency of family conflicts and the duration of digital media use to the prepandemic period on a 5-point Likert scale (1 = much less to 5 = much more). Both variables were dichotomized into 1 = increase (response options "more" and "much more") and 0 = no increase in family conflicts/digital media use (response options "much less," "slightly less," and "same").
Data analysis
Based on self-reported ED symptoms according to the SCOFF, the prevalence of ED symptoms in the pandemic was calculated with 95% CIs, stratified by age and gender. n = 8 participants of the COPSY study who reported their gender as "other" were excluded from the calculation of prevalence rates. Differences in symptomatology between age groups (11-13-year-olds vs. 14-17-year-olds) and genders were examined by bivariate chi-square (χ²) statistics and logistic regression. To evaluate differences in the prevalence of ED symptoms before and during the pandemic, cross-sectional data from the BELLA study (prepandemic, control group) and the COPSY study (pandemic, index group) were pooled. Sociodemographic characteristics of COPSY and BELLA participants and differences in responses to single items were compared using bivariate tests (χ² and independent t-test). Furthermore, a multiple logistic regression with study (COPSY/BELLA group), age, gender, and the interaction between gender and age as predictors of the total SCOFF score and specific ED symptoms was conducted. In a second explorative step, interactions between study and age as well as study and gender were included in the regression model. To further describe the association between selected general and pandemic-specific factors (predictors) and ED symptomatology (outcome), unadjusted and adjusted logistic regression analyses with stepwise inclusion of general and pandemic-related factors were conducted using the COPSY dataset. All adjusted regression models were controlled for age, gender, and the interaction of gender and age.
All analyses were carried out in IBM SPSS, version 27, and a p value ≤0.05 was considered an indicator of significant differences or effects. Effect sizes Cohen's d (d), Pearson's r (r), and Phi (ɸ) are interpreted with regard to Cohen (88). Internal consistency was determined by Cronbach's alpha (α) (89). A power analysis was conducted prior to data analysis using the software G*Power 3.1. For determining the assumed OR to test for small effects in logistic regression analysis between two groups at a particular age (11-13 years, 14-17 years) and gender (girls, boys), we assumed an OR of 1.436/0.696 as suggested by Chinn (90). This resulted in a minimum required sample size of n = 302 to test for statistical significance with alpha < 0.05 and a power of 0.8.
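A minimal sketch of the pooled regression described above (study group, age, gender, and a gender x age interaction predicting a positive SCOFF screen) could look as follows. The column names and the weight variable are assumptions, and statsmodels is used here for illustration although the study's analyses were run in SPSS.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_pooled_model(df: pd.DataFrame):
    # Assumed columns: scoff_pos (0/1), study ("COPSY"/"BELLA"),
    # age (years), gender ("male"/"female"), weight (survey weight).
    model = smf.glm(
        "scoff_pos ~ C(study) + age + C(gender) + C(gender):age",
        data=df,
        family=sm.families.Binomial(),
        freq_weights=np.asarray(df["weight"]),
    ).fit()
    odds_ratios = np.exp(model.params)  # coefficients reported as ORs
    return model, odds_ratios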
Sample description
The present analysis is based on two subsamples of n = 1,001 (COPSY) and n = 997 (BELLA) 11- to 17-year-olds and a respective parent who participated in the COPSY or BELLA study. Girls participated slightly more often than boys in both studies (COPSY: 51.95%, BELLA: 54.16%). In the COPSY study, the mean age was 14.47 years (SD = 2.05) for children and 45.48 years (SD = 7.09) for participating parents. Comparing the unweighted subsamples, participating children and adolescents and their parents in the COPSY study were about 1 year older than in the prepandemic BELLA subsample [children's age: t(1980.92) = 11.36, p < 0.001, d = 0.51; parental age: t(1836.66) = 2.69, p = 0.007, d = 0.12]. Accordingly, differences were also found in children's occupation, with more children still attending school in BELLA [χ²(1) = 90.70, p < 0.001, ɸ = −0.22]. Another significant difference was found in the educational level of parents [χ²(2) = 81.83, p < 0.001, ɸ = 0.20], indicating that more parents reported a low educational level in the pandemic sample. The samples differed significantly in terms of migration background [χ²(1) = 9.93, p = 0.002, ɸ = −0.07], with more migrants in the COPSY sample. With the exception of children's age, the differences found were of small effect size. Sample details are provided in Table 1.
Comparison with prepandemic findings from the BELLA study
Table 2 shows the prevalence of ED symptomatology in BELLA and COPSY across age groups and genders and the results of unadjusted logistic regressions. Descriptive analyses revealed a 3.9 percentage point lower prevalence of eating disorder symptoms in COPSY compared to prepandemic results. This was confirmed by the regression analysis, as the odds for disordered eating were significantly lower in the COPSY study. Compared to girls, an inverse development was found in boys, who reported a significantly higher prevalence of 15.08% (95% CI = 11.96-18.20%) in the pandemic compared to 9.35% (95% CI = 6.78-11.92%) in the BELLA study prior to the pandemic. Unadjusted logistic regressions stratified by age group and gender showed that participation in the COPSY study was significantly associated with lower odds of ED symptoms in 14- to 17-year-olds (OR = 0.59; p < 0.001) and females (OR = 0.45; p < 0.001), but with increased odds in boys (OR = 1.75; p = 0.004).
In a multiple logistic regression model with age, female gender, and the interaction between the two as covariates, COPSY participants exhibited lower odds of disordered eating. Thus, participation in COPSY was associated with significantly reduced odds [OR = 0.74 (95% CI = 0.58-0.94)] of ED symptoms. Age and the interaction of age and male gender were also significant predictors, with overall higher odds in females with increasing age. However, boys were less likely to show disordered eating with increasing age. Inclusion of interaction effects between study and age as well as study and gender improved the overall model fit according to Nagelkerke R². Model 2 showed a significant interaction between study and male gender, indicating that boys were more likely to reach the cut-off value of the SCOFF in the pandemic as compared to prior to the pandemic. In contrast, the main effects of gender and study were not significant in model 2. Age remained a significant predictor. Details for models 1 and 2 are provided in Table 3. A visualization of significant interaction effects is provided in Figure 2.
Symptom prevalence
Table 4 shows the prevalence of each of the five ED symptoms assessed by the SCOFF in the prepandemic and pandemic samples. There were significant differences between the two samples for items 2 to 5, with fewer participants reporting symptoms of eating disorders in the COPSY study compared to BELLA. However, the proportion of participants reporting recent weight loss was almost twice as high in the pandemic. The highest prevalence was found for item 5, whereas intentional vomiting and recent weight loss were reported by less than 10% of participants in both samples.
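For the prevalence estimates with 95% CIs quoted above, a simple normal-approximation computation is sketched below. The exact CI method used in the study is not stated, and the counts in the example are invented for illustration.

import math

def prevalence_ci(k: int, n: int, z: float = 1.96):
    # Point estimate and normal-approximation 95% CI for a proportion.
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

p, (lo, hi) = prevalence_ci(k=72, n=478)  # invented counts
print(f"{100 * p:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")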
Multiple regressions for individual symptoms revealed that the interaction of study and male gender was associated with two- to fourfold increased odds for all symptoms [OR = 2.48 (item 4) to 3.76 (item 2), p < 0.05]. As in the full model, the interaction of age and male gender was related to lower odds for items 1 to 4 [OR = 0.70 (item 1) to 0.80 (item 3), p < 0.05], whereas the interaction of study and age was associated with lower odds for item 5 (OR = 0.77, p < 0.001). Higher age was associated with increased odds for items 2 to 4 [OR = 1.15 (item 4) to 1.38 (item 3), p < 0.05]. The strongest effects were found for male gender as a predictor of items 1 and 2 [OR = 65.44, p = 0.011 (item 1); OR = 19.03, p = 0.002 (item 2)] and for participation in the COPSY study as a predictor of item 5 (OR = 19.95, p < 0.001). Results of the multiple logistic regression analysis for each of the five symptoms assessed by the SCOFF are provided in Supplementary Table S1.
(Table 1 footnotes: 4, based on the CASMIN classification; 5, COPSY: n = 18 missing, BELLA: n = 13 missing; 6, BELLA: n = 1 missing.)
Figure 1. Prevalence of eating disorder symptoms in the COVID-19 pandemic by gender. Based on n = 993 participants of the COPSY study; n = 8 COPSY participants who reported "other" as gender are not included. Error bars indicate 95% confidence intervals. Significant differences between groups were examined by chi-square tests. n.s., not significant.
Intercorrelations of predictor variables
Correlations between general and pandemic-specific predictor and control variables are shown in Table 5. Most variables displayed small to medium correlations. Sociodemographic variables had only weak correlations, whereas moderate and strong correlations were found between other predictors. Symptoms of depression, anxiety, and emotional problems intercorrelated with large effects (r ≥ 0.60, p < 0.001) and showed the strongest correlations with other variables. Negative correlations were found between all non-sociodemographic variables and social support, personal resources, and family climate, which in turn correlated positively with each other at r ≥ 0.35. Among the pandemic-specific factors, the strongest correlation was found between loneliness and depressive symptoms (r = 0.53, p < 0.001). Loneliness as well as increased family conflicts and digital media use showed significant but small associations with symptoms of mental health problems. Collinearity statistics indicated no multicollinearity (VIF 1.02-2.44, tolerance 0.43-0.98) according to Menard (91) and Myers (92).
Results of the univariate logistic regression analyses
In a series of unadjusted logistic regressions, all factors except for female gender and migration background were significantly associated with ED symptomatology in the pandemic. Thus, higher emotional problems, symptoms of depression and anxiety, as well as higher parental depressive symptoms were associated with increased odds of disordered eating. In contrast, higher personal resources, social support, and a better family climate were associated with reduced odds. Further, all pandemic-specific factors (increased digital media use, family conflicts, higher burden, and greater loneliness) were related to higher ORs. High pandemic burden and increased family conflicts were associated with more than twice the odds of disordered eating. The results of the univariate logistic regressions are provided in Supplementary Table S2.
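The collinearity check reported above (VIF and tolerance) can be reproduced along the following lines. The predictor data frame is a placeholder, and statsmodels is used here although the original analyses were run in SPSS.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    # VIF is computed on the design matrix including a constant;
    # tolerance is simply 1 / VIF.
    X = sm.add_constant(predictors)
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs)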
Figure 2. Visualization of significant interaction effects for eating disorder symptom predictors. n = 1,956; outcome: eating disorder symptoms according to the SCOFF. Only significant interaction effects (p < 0.05) from model 2 (Table 3) are included. OR, odds ratio.
Results of the multiple logistic regression analyses
The results of the multiple logistic regression are presented in Table 6 for the full sample and, stratified by gender, in Supplementary Table S3. In model 1, only general factors were incorporated as predictors of ED symptomatology in the total sample. Generalized anxiety, symptoms of depression, and gender were significantly associated with disordered eating. Inclusion of factors related to the pandemic (model 2) did not improve the model significantly. Female gender was associated with reduced odds (OR = 0.07; p = 0.044), while symptoms of anxiety (OR = 1.08; p = 0.004) and depression (OR = 1.10; p = 0.002) were associated with slightly increased odds of disordered eating. None of the other factors that were significant in the univariate models predicted eating disorder symptomatology in the multiple models. Regressions stratified by gender (Supplementary Table S3) revealed that anxiety symptoms were a significant predictor only in female adolescents (OR = 1.09; p = 0.012), whereas depressive symptoms were associated with increased odds of eating disorder symptoms in both females (OR = 1.09; p = 0.028) and males (OR = 1.13; p = 0.020). All multiple logistic regression models were statistically significant, with Nagelkerke R² ranging between 0.181 and 0.202.
Discussion
The aim of this study was to estimate the prevalence of ED symptoms in children and adolescents 1.5 years after the outbreak of the COVID-19 pandemic in Germany and to compare the results with prepandemic data. In addition, factors associated with ED symptoms during the pandemic were to be identified.
Prevalence of eating disorder symptoms in the pandemic
An overall prevalence of ED symptoms of 16.20% was found, with 17.18% of female and 15.08% of male participants reaching the SCOFF cut-off. Other studies administering the SCOFF in more homogeneous samples reported considerably higher prevalence rates of 18.4 and 31.1% for males and 25.3 and 51.8% for females in 2020/2021, respectively (63, 93). According to our descriptive results, prevalence was slightly higher among girls and older participants. However, significant gender differences were only found in 14- to 17-year-olds. Most studies from the pandemic period and before report similar but more pronounced differences in disordered eating behaviors between genders and age groups (65, 71).
Comparison with prepandemic findings
We found a significant difference in the prevalence of ED symptoms compared with the prepandemic BELLA study, with an overall reduced likelihood of ED symptoms in the pandemic. Prior to the pandemic, girls had a three times higher prevalence compared to boys. Interestingly, findings from our regression analysis indicate that in boys, in contrast to girls, the risk for ED symptoms increased significantly in the pandemic. Consequently, boys had a higher prevalence during the pandemic compared to before it. As the effect found is based on a large standard error and a wide confidence interval, these results should be interpreted with caution. Gender differences were also found in terms of age-specific developments: boys were less likely to have ED symptoms with increasing age, whereas older girls were more likely to show disordered eating behavior.
While the symptoms "deliberate vomiting," "loss of control over eating," "distorted body image," and "impact of food on life" decreased or did not differ significantly during the pandemic compared to the prepandemic sample, the percentage of participants reporting "recent weight loss" increased. In addition, boys were more likely to show all symptoms of EDs during the pandemic. Contrary to our findings, a school-based study in Germany found no changes in disordered eating habits at the beginning of the pandemic compared to prepandemic data (64), while a significant increase in perceived disordered eating and overeating was observed in a sample of female adolescents in the summer/autumn of 2021 (36). International studies using the SCOFF in older students also found significant increases in the prevalence of ED symptoms in both male and female participants from 2018/2019 to the first and second year of the pandemic, respectively (63, 93).
Only limited evidence is available regarding the potential increase of ED symptoms in boys, particularly at a young age. Consistent with our findings, boys were more likely to show increased consumption of snacks, soft drinks, and carbohydrates and to gain weight during the pandemic, especially between the ages of 10 and 12 (8). The high prevalence of ED symptoms in boys may also be due to an increase in binge eating during the pandemic (9). This is underscored by evidence showing that subclinical forms of binge eating disorder were as common in boys as in girls even before the pandemic (94). In addition, the ongoing discussion concerning the historically female-oriented diagnostic framework and assessment of disordered eating should be considered (94, 95). An increase in diagnosed EDs was only found in young men between 20 and 24 years of age, but not in boys, in the first year of the pandemic according to German health insurance data (96). Others reported a decrease in the number of ED-related hospital admissions among boys, but an increase among girls (62). For girls, younger age was associated with increases in disordered eating and EDs (36, 97).
As noted above, in contrast to the sharp increases in EDs, particularly anorexia nervosa, reported by clinicians and health care data, our findings show a decrease in disordered eating behaviors after the onset of the COVID-19 pandemic. There are several possible explanations for this discrepancy. First and foremost, ED symptoms do not necessarily lead to diagnosed EDs, so the number of reported diagnosed cases and the self-reported prevalence may differ. This was also the case before the pandemic, and it has been suggested that this may be attributable to an awareness effect: greater societal awareness of EDs and a greater willingness to seek medical consultation could explain the increase in diagnosed EDs (24, 98). The increase in clinically relevant cases could also be due to an exacerbation of symptoms in risk groups or patients with pre-existing EDs (e.g., 47, 54). As families had to stay at home during nationwide lockdowns, parents might have noticed disordered eating habits earlier and intervened. Further, given that family conflicts escalated more frequently during the pandemic, parents may have been more inclined to bring children in for treatment in order to reduce tensions at home. Another hypothesis is that the pandemic has led to positive developments in children and adolescents with disordered eating behaviors.
This might include families supporting at-risk children through supervised or shared mealtimes at home. Increased time for self-care and reflection may also be beneficial (37, 49). This is in line with the results of our univariate regression analysis, where family climate was identified as a protective factor.
Furthermore, the use of the SCOFF as an instrument to assess ED symptoms can be seen as a limitation. The weak psychometric properties of the SCOFF, such as a low positive predictive value and a high number of false negatives, are particularly evident in heterogeneous population-based samples (73, 75). In line with others, we also found very low internal consistency (α = 0.52) (74). Given the limited reliability in this study, all observations need to be interpreted carefully. In addition, the SCOFF does not assess all major symptoms of disordered eating behaviors, including laxative abuse and excessive exercise. As a result, it does not capture symptoms of other highly prevalent eating disorders, such as binge eating disorder, or newly emerging forms of disordered eating such as orthorexia nervosa (75, 99). However, the SCOFF was developed to detect core symptoms of anorexia nervosa and bulimia nervosa (70). Yet, studies show that the SCOFF captures more symptoms in overweight children and adolescents, suggesting that, for example, item 2 ("Do you worry you have lost control over how much you eat?") could be understood as experiencing binge eating (71). Because of their usability and efficiency, screening tools such as the SCOFF are essential for both clinical assessment and public health research to estimate the burden of EDs and to identify at-risk groups. Since, to date, there is a lack of evaluated, standardized screening tools to measure EDs in children and adolescents (75, 100), there is a high need for further research.
In addition, the time of data collection in the COPSY study should be taken into account. Since ED symptoms were first assessed 1.5 years after the onset of the pandemic in the third wave of the COPSY study, the prevalence at the beginning of the pandemic is unknown. Therefore, the progression of ED symptoms from the beginning to later points in the pandemic cannot be compared in the same way that other changes to mental health can. For instance, anxiety symptoms increased in the first year of the pandemic but decreased slightly in the third wave of the COPSY study (5). This might be due to greater awareness of the adverse impact of the pandemic on young people's mental health and the increased availability of support services. To better understand the development of disordered eating behavior over the course of the pandemic and beyond, there is a high need for longitudinal studies.
Associated factors
The results of the univariate regression analyses of the COPSY study showed that there were associations between all factors examined and a positive SCOFF score, with the exception of gender and migration background. However, a multiple regression model showed that only gender, depression, and anxiety symptoms were associated with ED symptoms 1.5 years after the onset of the pandemic. The association between symptoms of anxiety and depression and disordered eating is consistent with findings from other studies conducted before and during the pandemic (34, 35, 101). However, in contrast to recent findings among adults (102), anxiety was only a significant predictor among girls.
One possible explanation is that female gender has been identified as a risk factor for anxiety symptoms in the pandemic (103). Furthermore, we found that girls were less likely than boys to show ED symptoms 1.5 years after the pandemic outbreak when other factors were considered. As mentioned before, this contrasts with the reported increase in ED-related hospitalizations among girls (62). In the first model, Nagelkerke R² was <0.2, and the addition of pandemic factors did not significantly improve model fit; none of the pandemic-specific factors was significant in the multiple model. This might be due to the fact that these factors become less significant in interaction with other factors. Furthermore, it is known that, in addition to the investigated factors, there are other determinants of disordered eating. Besides predisposing factors like genetics, ethnicity, self-esteem, and negative childhood experiences, these include stress factors like thin-body preoccupation, negative life events, negative family perception, and social pressure (30, 31, 104). Other factors that may be relevant in times of the pandemic could be intolerance of uncertainty, food insecurity, and socioeconomic status (36, 47). The latter has been identified as a risk factor for higher weight gain (8) and other mental health problems in the pandemic (5).
Strengths and limitations
This study has the following strengths. The COPSY study is one of the first nationwide population-based studies focusing on child and adolescent mental health following the COVID-19 pandemic outbreak (6). By comparing the results with nationally representative prepandemic data, it is possible to draw conclusions about changes in prevalence in specific subgroups. In addition, established instruments for the assessment of mental health as well as risk and protective factors were administered. This allowed the inclusion of a range of potential predictors in the analyses.
In addition to the use of the SCOFF despite its weak psychometric properties, there are a number of other limitations. First, height and weight were not assessed in the COPSY study. Given the high prevalence of ED symptoms in overweight and underweight individuals (24, 71, 101) and the increase in overweight that has been reported in the pandemic (11), it is highly relevant to consider the association of body mass index with ED symptoms in the pandemic. Second, it is not possible to draw causal relationships between the reported associations given the cross-sectional design of the study. Third, it should be considered that biases are likely to occur in self-reported surveys; since patients in the early stages of an ED in particular often deny symptoms (105), this should be given particular consideration. Further, most pandemic-specific factors in the regression model were assessed with single items because of the broad range of issues covered by the COPSY study. Future studies should examine potential pandemic-specific risk factors in more detail by assessing them with standardized and validated instruments. Lastly, the findings of the COPSY study are not necessarily generalizable to other countries, especially given differences in the course and handling of the COVID-19 pandemic.
Implications for further research and practice
To the best of our knowledge, this study provides the first estimate of the prevalence of self-reported ED symptoms among children and adolescents in a nationwide sample in Germany since the onset of the COVID-19 pandemic.
Our findings indicate an overall decrease compared to prepandemic findings and highlight gender-specific developments. Thus, we found an increase in disordered eating among boys, especially in the younger age group. This emphasizes the need for further research examining the relevance of gender- and age-specific developments of disordered eating in children and adolescents in the pandemic. In addition, family-based intervention and prevention programs targeting at-risk groups and taking up gender- and age-specific approaches are highly warranted. Our results indicate that symptoms of anxiety and depression are significant predictors of ED symptoms in children and adolescents in the pandemic. Given that these have also increased in the pandemic (4), their association with ED symptoms needs to be examined in further studies to detect cause-and-effect relationships. In clinical practice, screening for ED symptoms to ensure early detection could also become part of the diagnosis and treatment of children and adolescents with depressive and anxiety symptoms. Furthermore, future research should focus on predictors of specific forms of EDs, such as anorexia nervosa, bulimia nervosa, and binge eating disorder, in the pandemic.
High-quality screening instruments are essential for the early detection of ED symptoms to prevent these symptoms from developing into clinical forms of EDs. By using valid and reliable screening instruments in longitudinal and large-scale population-based studies, it is possible to provide highly relevant and valid data to investigate the public health burden and incidence of ED symptoms in children and adolescents. Considering that adolescence is a high-risk period for the onset of EDs and that the pandemic has exacerbated mental health problems in young people, there is a high need for better evaluation of existing instruments and for the development of alternative screening tools that also allow for disease-specific screening in children and adolescents.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Local Psychological Ethics Committee and the Commissioner for Data Protection of the University of Hamburg, Germany. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Author contributions
A-KN performed the statistical analyses, interpreted the data, and wrote the first draft of the manuscript. UR-S and AK were principal investigators of the COPSY study, responsible for its design, funding, and general decisions on measurement; they supervised data cleaning and preparation and revised the manuscript critically. JW and ME revised the manuscript critically. All authors contributed to the article and approved the final manuscript.
Funding
The COPSY study was funded by the Kroschke Child Foundation, the Fritz and Hildegard Berg Foundation, the Jaekel Foundation, and the Foundation "Wissenschaft in Hamburg." The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We acknowledge financial support from the Open Access Publication Fund of the UKE (Universitätsklinikum Hamburg-Eppendorf) and the DFG (German Research Foundation).
2023-05-27T13:06:00.329Z
2023-05-26T00:00:00.000
{ "year": 2023, "sha1": "4234cb9b5093382b64a46e6dabeff4479557de49", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "4234cb9b5093382b64a46e6dabeff4479557de49", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
55094607
pes2o/s2orc
v3-fos-license
Study on the Influence Caused by Opening Different Types of Community on Surrounding Traffic
In February 2016, China formally promulgated a policy stating that new communities should adopt the block system. This new policy sparked heated debate. To demonstrate the rationality of this decision, the article studies the influence that opening different types of communities has on surrounding traffic. The results show that there is a large difference in the influence on surrounding traffic between opening different types of communities. The decision to open communities should therefore take full account of the particularities of each community and adapt measures to local conditions.
Introduction
As the economy of China develops, cities grow larger and larger and the number of vehicles increases sharply, which places higher demands on urban traffic networks. In February 2016, the "Several Opinions of the Central Committee of the Communist Party of China and the State Council on Further Strengthening the Management of Urban Planning and Construction" were formally promulgated, stating that new housing developments should adopt the block system. This new policy sparked heated debate.
In China, influenced by the traditional living model, urban land is divided into blocks. The road network is sparse and traffic flow concentrates on the trunk roads, causing traffic jams. Originally, the government tried to reduce congestion by widening roads and building more of them, but the effect of this approach was not ideal, and reducing traffic jams has become an important problem for the government. Scholars have emphasized urban spatial structure in researching methods to reduce congestion. Cao and Gu pointed out the disadvantages of closed communities and proposed opening communities gradually. Mei put forward the idea of opening communities so that community road networks and the urban road network become integrated. Li, using traffic analysis theory and the Braess paradox, evaluated and compared traffic conditions with communities open and closed. Wang used simulation technology to develop a complete set of traffic engineering measures, covering network performance analysis, optimization of intersection signal timing schemes, optimization of traffic organization, and intersection channelization design. In recent years, the study of urban traffic "microcirculation," which is closely connected to opening communities, has also gradually risen. Some scholars have made optimization studies of existing urban microcirculation systems. Zhang analyzed and optimized the microcirculation system from three aspects: the road network system, traffic organization, and residents' street living environment. Zhong designed a bi-level optimization programming model and used a genetic algorithm to optimize the microcirculation system. Lu, using a TOPSIS whitening pattern and designing a grey integer TOPSIS solving model, made a further study of planning methods for urban traffic microcirculation. Wang discussed principles and methods for optimizing the urban traffic microcirculation system based on multi-objective decision making.
From what has been discussed above, China has a wide range of studies on opening communities and urban traffic microcirculation, which established the foundation of the policy. But there is no study of the influence that opening different types of communities has on surrounding traffic. This problem is of great significance for the effective implementation of the community-opening policy.
Therefore, the article first sets up a reasonable evaluation index system to evaluate the effect of opening communities on surrounding traffic. Then the article compares the differences in influence on surrounding traffic when communities have different shapes, different road network structures, and different sizes. Finally, it makes some suggestions about the existing community-opening policy based on the study results.
Evaluation Indexes
To study the influence of opening communities, the article studies the change in indexes reflecting the influence on surrounding traffic. Therefore, the article sets up a scientific and reasonable evaluation system and then determines the weight of every index by AHP (the Analytic Hierarchy Process). The article uses a comprehensive traffic index to measure the influence of opening communities on surrounding traffic. In this system, the indexes of road network structure include the increase in road network density, the reduction of the nonlinear coefficient, and the increase in the connectivity index. The indexes of traffic efficiency include the reduction in intersection delay time and the reduction in straight-section delay time. The evaluation system is shown in Figure 1.
The road network density is the ratio of the total length of roads to the regional land use area. It is an important index for evaluating the traffic microcirculation system of an area, and it affects the accessibility of streets. The roads of a community cannot become part of the regional public road network while the community is closed, but they do after the community is opened. The increase in road network density is
Δρ = (Σ_{a=1}^{m} Σ_{b=1}^{n_a} L_ab) / S,
where L_ab is the length of the b-th road in community a, m is the number of communities in the area, n_a is the number of roads in community a, and S is the area of the region.
Reduction of the Nonlinear Coefficient
The nonlinear coefficient is the ratio of the actual travel distance to the straight-line distance between two points. Generally, the bigger the nonlinear coefficient, the greater the actual distance between the two points and the worse the surrounding traffic congestion. Because the roads in a closed community are exclusive, vehicles cannot cut across the community and can only detour around it to reach their destination. The existence of a closed community thus makes the actual driving distance larger than the straight-line distance between two points. The nonlinear coefficient is
C = L_a / d,
where C is the nonlinear coefficient, L_a is the actual distance between two points, and d is the straight-line distance between the two points. The reduction of the nonlinear coefficient equals the difference between the nonlinear coefficients with the community closed and opened.
The connectivity index reflects the degree of development of the traffic network. It describes the relation between the number of nodes and the number of edges in the road network. The higher the connectivity index, the fewer dead-end roads exist in the road network, indicating a higher net rate; conversely, a lower value indicates a lower net rate. The connectivity index J is computed from P, the number of nodes in the road network, and Q, the number of edges in the road network.
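A minimal sketch of the two structural indexes with explicit formulas above (the density increase and the nonlinear coefficient) is given below; the input format is simplified, and the example numbers are illustrative rather than taken from the article's case study.

def density_increase(road_lengths_km, area_km2):
    # Delta-rho = (sum over communities a and roads b of L_ab) / S.
    return sum(sum(lengths) for lengths in road_lengths_km) / area_km2

def nonlinear_coefficient(actual_km, straight_km):
    # C = L_a / d; the improvement from opening is C(closed) - C(open).
    return actual_km / straight_km

# Example: one opened community contributing 1.4 km of internal roads
# to a 10 km^2 region, and a 0.63 km detour versus a 0.4 km direct path.
print(density_increase([[0.4, 0.4, 0.3, 0.3]], 10.0))  # 0.14 km/km^2
print(nonlinear_coefficient(0.63, 0.4))                # 1.575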
Reduction in Intersection Delay Time D1
The article uses time-based traffic impact analysis (T-TIA) to analyze delay. The reduction in intersection delay time reflects the level of delay caused by congestion at an intersection. Generally, the higher the delay level, the longer the delay time, indicating that the intersection has a lower traffic capacity; conversely, a lower delay level indicates a better traffic capacity. The article uses the reduction in intersection delay time to reflect the traffic capacity of intersections. It is assumed that the vehicles on the primary route (along the dotted line in Figure 4) are blocked by the closed community and detour around its edge, that vehicle arrivals follow a Poisson distribution, that all intersections have signal lights, and that right-turning vehicles have a dedicated lane. The reduction in intersection delay time D1 is computed from the per-intersection delay, whose parameters are: T, the signal cycle; y, the split of the intersection; x, the road saturation; q0, the maximum traffic volume at the entrance, equal to the maximum number of vehicles passing a road cross-section per unit time (pcu/h, the number of passenger car units per hour); QΔ, the volume of detouring traffic (pcu/h); and C, the partition coefficient of the traffic volume at the entrance.
Reduction in Straight-Section Delay Time D2
The reduction in straight-section delay time reflects the level of delay caused by congestion on straight road sections. Generally, the higher the delay level, the longer the delay time, indicating that the straight section has a lower traffic capacity; conversely, a lower delay level indicates a better traffic capacity. We use the reduction in straight-section delay time to reflect the traffic capacity of straight roads. Again, it is assumed that the vehicles on the primary route (along the dotted line in Figure 4) are blocked by the closed community and detour around its edge, and that vehicle arrivals follow a Poisson distribution. S_ij is the straight-section delay time caused by closing the community, equal to the time saved by opening the community:
S_ij = (L / v) [1 + α (Q / V0)^β],
where V0 is the maximum traffic volume, α and β are retardation factors with α = 0.15 and β = 4, v is the speed on the road, and L is the length of the road. The reduction in straight-section delay time is then D2 = (Σ S_ij) / m, averaged over every pair of points, where m is the number of point pairs.
Based on this evaluation system, the article uses AHP to determine the weight of every index; for reasons of space, the details are not given here. Finally, we obtain the evaluation formula for the influence of opening communities on surrounding traffic:
E = 0.2495ρ + 0.5806ψ + 0.0643λ + 0.4243D1 + 0.1414D2, (1)
where E is the comprehensive index of traffic capacity, ρ is the normalized increase in road network density, ψ the normalized reduction of the nonlinear coefficient, λ the normalized increase in the connectivity index, and D1 and D2 the normalized reductions in intersection and straight-section delay time.
Calculation and Analysis of the Influence Caused by the Policy
The influence of opening communities is affected by the shape of the community, the surrounding road network structure, and the size of the community. Due to geographic, political, economic, social, cultural, and other factors, communities in China show both homogeneity and diversity. In terms of shape, the common shapes in China are square, triangle, and trapezoid. In terms of surrounding road network structure, there are the mesh structure, the tree structure, and the main street structure; besides, the sizes of communities also differ. This section analyzes the differences in the influence of opening communities on surrounding traffic.
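Before turning to the specific cases, the delay and scoring steps can be sketched as follows. The BPR-style travel-time form is suggested by the stated retardation factors (α = 0.15, β = 4) but remains a reconstruction of the partly garbled source formulas, and the second function simply applies formula (1) to already-normalized index values.

def bpr_travel_time_s(length_km, speed_kmh, flow_pcu_h, capacity_pcu_h,
                      alpha=0.15, beta=4.0):
    # t = (L / v) * (1 + alpha * (Q / V0)^beta), returned in seconds.
    free_flow_s = length_km / speed_kmh * 3600.0
    return free_flow_s * (1.0 + alpha * (flow_pcu_h / capacity_pcu_h) ** beta)

def comprehensive_index(rho, psi, lam, d1, d2):
    # Formula (1): E = 0.2495*rho + 0.5806*psi + 0.0643*lambda
    #              + 0.4243*D1 + 0.1414*D2, on normalized values.
    return 0.2495 * rho + 0.5806 * psi + 0.0643 * lam + 0.4243 * d1 + 0.1414 * d2

# Detour travel time with the baseline parameters used below:
print(bpr_travel_time_s(0.63, 50.0, 600.0, 1200.0))  # about 45.8 s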
Analysis of the Influence of Opening Communities of Different Shapes
Because of city planning, the shapes of communities differ, which results in different internal road structures, so there are differences between the influences of opening communities of different shapes. The common community shapes in China are square, triangle, and trapezoid. Since most communities are square, analyzing the influence of opening square communities on surrounding traffic is representative and of practical meaning. Therefore, the article chooses the square community as the baseline of the study and compares it with the triangular community to evaluate the influence of opening communities of different shapes.
As shown in Figure 4, the baseline is a diagrammatic square community. In practice, a community commonly covers about 10 hectares. To simplify the calculation, the article assumes that this community is 400 meters long and 300 meters wide, covering an area of 12 hectares. The vertical distance between two nodes inside the community is 133 m and the horizontal distance is 100 m. The surrounding road structure is the main street structure, which is popular in China, and the surrounding roads are one-way streets. Since the size of the traffic flow does not depend on the moving direction, and the total traffic flow is the sum of the flows in all directions, the article studies only one direction of traffic flow: vehicles enter the area from the left side and leave it on the right side. Vehicles cannot enter the community and can only go around its edge roads when the community is closed. The detour road length is 0.63 km. It is assumed that the traffic volume on the roads surrounding the community is 600 pcu/h, the maximum traffic volume is 1200 pcu/h, the road saturation is 0.5, the partition coefficient is 0.5, the retardation factors are α = 0.15 and β = 4, and the average speed is 50 km/h. There are 4 signal lights at the intersections, the signal cycle is 60 seconds, and the split is 1/4. As the baseline community of the analysis, these parameters are considered representative of most communities in China.
After the community is opened, as shown in Figure 5, the fence of the community disappears and the roads in the community connect with the urban roads. The transverse and longitudinal internal roads connect with the urban network and form a complete traffic network in this area. The article inserts the parameters into the evaluation model and uses MATLAB to calculate the indexes of road network structure and traffic operation efficiency shown in Table 1. Substituting the normalized values into formula (1), the comprehensive index of traffic capacity is 0.4915.
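The baseline parameters just listed can be bundled so that the shape, road-structure, and size cases that follow differ in only a few fields. This is an organizational sketch, not code from the article (which used MATLAB).

from dataclasses import dataclass

@dataclass
class Scenario:
    detour_km: float = 0.63        # detour length around the closed community
    flow_pcu_h: float = 600.0      # volume on the surrounding roads
    capacity_pcu_h: float = 1200.0 # maximum traffic volume
    speed_kmh: float = 50.0
    signal_cycle_s: float = 60.0
    split: float = 0.25            # green split of 1/4
    saturation: float = 0.5
    partition_coeff: float = 0.5

baseline_square = Scenario()
# The triangular case below keeps the same traffic parameters;
# only the internal road geometry differs.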
For the triangular community, the article likewise inserts the parameters into the evaluation model and uses MATLAB to calculate the indexes of road network structure and traffic operation efficiency shown in Table 2.
Table 2. The indexes of opening the triangular community
Increase in road network density (km/km²): 0.0083
Reduction of nonlinear coefficient: 0.58
Increase in connectivity index: 0.57
Reduction in intersection delay time (s): 516
Reduction in straight-section delay time (s): 82.69
Substituting the normalized values into formula (1), the comprehensive index of traffic capacity is 0.3656. From the comparison, the article finds that traffic capacity improves greatly after opening either the square or the triangular community when the surrounding road structure is the main street structure. But the comprehensive index of the square community is higher than that of the triangular community, confirming that opening a square community improves surrounding traffic more than opening a triangular one. Comparing the individual indexes, the article finds that the reduction in straight-section delay time is larger for the triangular community than for the square one, which may be because the internal roads of a triangular community are more complex. The actual shapes of communities are not standard squares and triangles, but we infer that the closer the shape is to a square, the better the influence on traffic capacity; similarly, complex community structures like the triangle perform worse. Communities of other shapes, such as the trapezoid, can mirror this method of calculation.
Analysis of the Influence of Different Surrounding Road Network Structures
There are three kinds of road structure in China: the mesh structure, the tree structure, and the main street structure, as shown in Figure 8. Among the three, the main street structure is used in most cities; some booming cities have the mesh structure; the tree structure is rarely used in cities. This section analyzes the influence of opening communities under the main street structure and the mesh structure. The surrounding road structure of the square community in 3.1.1 is a typical main street structure; therefore, the article compares the influence of opening a community when the surrounding structure is the mesh structure with that under the main street structure.
3.2.1 Calculation of the Influence When the Surrounding Road Structure Is the Mesh Structure
As above, the parameters are the same as for the square community in 3.1.1, except for the surrounding road structure. The article inserts the parameters into the evaluation model and uses MATLAB to calculate the indexes of road network structure and traffic operation efficiency shown in Table 4.
Table 4. Indexes of opening the square community under the mesh road structure
Increase in road network density (km/km²): 0.012
Reduction of nonlinear coefficient: 0.5
Increase in connectivity index: 1
Reduction in intersection delay time (s): 688
Reduction in straight-section delay time (s): 60.57
Substituting the normalized values into formula (1), the comprehensive index of traffic capacity is 0.3870.
Analysis of the Influence of Opening Communities Caused by Different Road Structures
Comparing with the indexes in 3.1.1, the article obtains Table 5.
Table 5. Comparison of the influence of opening the community under different road structures
Structure: Comprehensive index; Increase in road network density (km/km²); Reduction of nonlinear coefficient; Increase in connectivity index; Reduction in intersection delay time (s); Reduction in straight-section delay time (s)
Main street structure: 0.4915; 0.012; 0.88; 1; 688; 60.57
Mesh structure: 0.3870; 0.012; 0.5; 1; 688; 60.89
From the comparison, the article finds that traffic capacity improves greatly after opening the community under both the main street and the mesh structure. But the comprehensive index of opening the community under the main street structure is higher than under the mesh structure, confirming that opening a community under the main street structure improves traffic capacity more. Although the indexes are close, some differences remain: the reduction of the nonlinear coefficient under the mesh structure is smaller than under the main street structure, and the reduction in straight-section delay time also differs slightly. This may be because, under the mesh structure, the external road network density is already high, so the influence on the road network is not obvious and the reduction of detouring is also limited. Therefore, when the surrounding road structure is the mesh structure, the influence is not as good as under the main street structure, although opening still improves traffic capacity to a certain extent. In practice, the policy should be effective, because the main street structure is the most common in China.
Analysis of the Influence of Community Size
The population density in China is high and the clustering characteristics are obvious, so communities in China are also large, around 10 hectares. But there are also some small communities, shaped by territory, population, and economy. This section analyzes the different influence of community size on the surrounding traffic capacity by comparing a roughly 10-hectare community with a 3-hectare community. The square community in 3.1.1 covers 12 hectares, which is close to the regular community size, and serves as the comparison for the small community.
As before, since the size of the traffic flow does not depend on the moving direction and the total traffic flow is the sum of the flows in all directions, the article studies only one direction of traffic flow: vehicles enter the area from the left side and leave it on the right side. Vehicles cannot enter the community and can only go around its edge roads when the community is closed. The detour road length is 0.63 km. It is assumed that the traffic volume on the roads surrounding the community is 600 pcu/h, the maximum traffic volume is 1200 pcu/h, the road saturation is 0.5, the partition coefficient is 0.5, the retardation factors are α = 0.15 and β = 4, and the average speed is 50 km/h. There are 4 signal lights at the intersection, the signal cycle is 60 seconds, and the split is 1/4. As a baseline for the analysis, these parameters are considered representative of most communities.
After the community is opened, as shown in Figure 12, the fence of the community disappears and the roads in the community connect with the urban roads. The transverse and longitudinal internal roads connect with the urban network and form a complete traffic network in this area. The article inserts the parameters into the evaluation model and uses MATLAB to calculate the indexes of road network structure and traffic efficiency shown in Table 6.
Table 6. Indexes of opening the small square community
Increase in road network density (km/km²): 0.00875
Reduction of nonlinear coefficient: 0
Increase in connectivity index: 1
Reduction in intersection delay time (s): 688
Reduction in straight-section delay time (s): 67.31
Substituting the normalized values into formula (1), the comprehensive index of traffic capacity is 0.2203.
Analysis of the Influence of Opening Communities Caused by Different Community Sizes
Comparing with the indexes in 3.1.1, the article obtains Table 7.
Table 7. Comparison of the influence of opening the community under different community scales
Community: Comprehensive index; Increase in road network density (km/km²); Reduction of nonlinear coefficient; Increase in connectivity index; Reduction in intersection delay time (s); Reduction in straight-section delay time (s)
Large (12-hectare) community: 0.4915; 0.012; 0.88; 1; 688; 60.57
Small (3-hectare) community: 0.2203; 0.0058; 0; 1; 688; 67.31
From the comparison, the article finds that traffic capacity improves greatly after opening both the big and the small community. But the comprehensive index of opening the big community is higher than that of opening the small community, confirming that opening a big community improves traffic capacity more than opening a small one. Although the indexes are similar, some differences remain: the reduction of the nonlinear coefficient when opening the small community is zero, the reduction in straight-section delay time is 67.31 s, and the increase in road network density is also small. This may be because the detour distance around a small community is short: the shortest distance from one point to another barely changes, and only the number of roads increases. Although opening a small community improves the surrounding traffic capacity, the effect is not as good as for a big community. Small communities are few in China, and the government can consider partially opening or not opening them.
Conclusions and Suggestions
The article sets up a reasonable evaluation index system and establishes a comprehensive index of traffic efficiency to evaluate the influence of opening different kinds of communities, using AHP to determine the weights of the indexes. From the above analysis, opening any kind of community will improve the traffic efficiency of the surrounding roads, but there are large differences between the different kinds of opening.
Figure 2. Network of the closed community.
Figure 3. Analysis graphics for the reduction in intersection delay time.
Figure 4. Closed square community.
Figure 8. Basic road structures in China.
Figure 9. The closed community.
Figure 11. The closed small community.
Table 1. The index of opening the square community.
Table 3. The comparison of the influence of opening the community.
2018-12-11T00:41:42.559Z
2016-12-25T00:00:00.000
{ "year": 2016, "sha1": "e3c1583d87b8b6e75a9dd9d47fa6e8537cd8792e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5539/emr.v6n1p9", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "e3c1583d87b8b6e75a9dd9d47fa6e8537cd8792e", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Political Science" ] }
249890311
pes2o/s2orc
v3-fos-license
Scaleability of dielectric susceptibility $\epsilon_{zz}$ with the number of layers and additivity of ferroelectric polarization in van der Waals semiconductors
We study the dielectric response of few-layer crystals of various transition metal dichalcogenides (TMDs) and hexagonal boron nitride (hBN). We show that the out-of-plane polarizability of a multilayer crystal (which characterizes its response to an external displacement field) scales linearly with the number of layers, $\alpha_{zz}^{NL} = N \alpha_{zz}^{1L}$, independently of the stacking configuration in the film. We also establish the additivity of the ferroelectric polarizations of consecutive interfaces in the case when such interfaces have broken inversion symmetry. We then use the obtained monolayer data $\alpha_{zz}^{1L}$ to calculate the values of the dielectric susceptibilities of bulk semiconductor TMD and hBN crystals.
I. INTRODUCTION
Dielectric permittivity is an important parameter for modeling optoelectronic devices. It characterizes how a material is polarized under an external electric field, which is relevant for modeling field-effect transistors [1, 2], capacitors [3], and ferroelectrics-based memristors [4, 5]. In layered materials, the dielectric permittivity reflects [6-9] a strong anisotropy of crystalline and electronic properties, which is particularly strong in van der Waals (vdW) layered crystals such as graphite, black phosphorus, hexagonal boron nitride (hBN), and transition metal dichalcogenides (TMDs). Because of the layered nature of these compounds, all of them have already been implemented as components in various field-effect transistor devices, where the electrostatics is determined by the out-of-plane component of the dielectric permittivity tensor, ε_zz. Despite its importance for device modeling, only a few theoretical studies have been dedicated to the evaluation of ε_zz in layered vdW semiconductors such as InSe, GaSe, MoS2, WS2, MoSe2, WSe2, or MoTe2, and the published results [10-23] broadly disagree on their values and even on their qualitative dependence on the material thickness.
Here, we perform a detailed ab initio density functional theory (DFT) study of ε_zz in few-layer films of MX2 (M = Mo, W and X = S, Se, Te) and hBN. We compute the out-of-plane polarizability, α_zz, of crystals with different numbers of layers and establish the linear scaleability of α_zz with the number of layers, as illustrated in Fig. 1(a). This indicates that each layer screens the external electric field independently, in agreement with other works [20, 24, 25].
In this study we take into account both the electronic and the ionic polarizabilities, which enables us to establish the static and high-frequency (above the optical phonon frequencies) values of ε_zz; this distinction turns out to be particularly important for hBN. We also model various stacking arrangements of the layers in the multilayers, e.g., as shown in Fig. 1(b), in particular those that allow for inversion-symmetry-broken interfaces. For such interfaces a ferroelectric polarization is possible due to interlayer charge transfer, which we also show to be additive for consecutive interfaces. Below, in Section II, we start by discussing the DFT method, which enables us to compute the values of α_zz for monolayers and multilayers, taking into account that some of the latter exhibit ferroelectric interfaces. The results presented in Section II demonstrate the scaleability of α_zz in semiconducting TMDs and multilayer hBN. In Section III the computed values of α_zz are recalculated into ε_zz of a bulk crystal, which turns out to be a parameter independent of the number of layers in the slab.

II. COMPUTATION OF POLARIZABILITY α_zz

In this work the method of choice is to compute, using DFT, the dependence of the energy of a thin slab of a crystal on an out-of-plane displacement field. In this calculation the displacement field enters via the gradient of a sawtooth potential imposed onto periodically placed few-layer 2D materials with a large spacer along the z-axis. The external displacement field induces a dipole moment α_zz D, which screens the displacement field inside the film and determines the material dielectric constant, as discussed in the next section. Here we describe the results of DFT calculations of α_zz for monolayer, bilayer, trilayer, and tetralayer crystals of the compounds listed in the introduction.

A. TMD monolayers

Here we use the approach implemented earlier in the analysis of the out-of-plane polarizability of monolayer graphene [26]. For this we compute the total energy of a 2D crystal per unit-cell area (U) as a function of the out-of-plane displacement field (D) and fit it with a parabolic dependence [27],

U(D) = U_0 − α_zz D²/(2 ε_0 A).   (1)

Here U_0 is the energy of a unit cell at D = 0, A is the area of the unit cell, and ε_0 is the permittivity of vacuum. The DFT calculations were carried out using the plane-wave code implemented in the Quantum Espresso package [28,29]. A plane-wave cut-off of 70 Ry was used for all calculations with TMDs, and the integration over the Brillouin zone was performed using the scheme proposed by Monkhorst and Pack [30] with a grid of 13 × 13 × 1. We used fully relativistic ultrasoft pseudopotentials with spin-orbit interaction and an exchange-correlation functional approximated by the PBE method [31]. The convergence threshold for self-consistency was set to 10⁻⁹ Ry. A Coulomb truncation [32] in the out-of-plane direction was used for all calculations, and the displacement field was implemented with a z-dependent sawtooth potential. Calculations that allowed relaxation of the atomic coordinates used the BFGS quasi-Newton algorithm, with the atoms relaxed until the total force acting on them was smaller than 10⁻⁵ Ry/Bohr. The DFT modeling described above was implemented in three ways. In the first we used frozen lattice positions of all the ions, with spacings set to values from the earlier literature [33,34].
In the second calculation we relaxed the positions of the ions for D = 0, keeping the in-plane lattice constant fixed to the experimentally known value, without allowing further relaxation at finite D. These two calculations return values of α_zz that can be attributed to a purely electronic response to the external perturbation, which enables us to describe the dielectric permittivity at frequencies higher than the optical phonon frequencies. In the third calculation we implemented lattice relaxation (still fixing the lateral lattice constant) for all values of D, which yields the combined electronic and ionic polarizability and enables us to describe the static (ω = 0) susceptibility of the crystal. In Fig. 2(a) we show a typical dependence U(D), exemplified for MoS2. To demonstrate the convergence of the result with respect to the out-of-plane period, the data in Fig. 2(a) are shown for two calculation outputs: circles for an out-of-plane period of 40 Å and crosses for 70 Å. The data in Fig. 2(a) correspond to α_zz^{1L} = 44.46 Å³. In Fig. 2(b) we show the contribution to α_zz resolved across the unit cell of a TMD, which indicates that the polarizability is dominated by the contribution of chalcogen orbitals. The values of α_zz computed in the three ways described above are gathered in Table I for the various TMDs. [Table I. Electronic (e) and ionic (i) contributions to the computed α_zz for all studied TMDs; ω_0 corresponds to the optical frequency.] The comparison of the last two columns of Table I indicates that the ionic contribution to the TMD polarizability is less than 0.2%. Therefore, in the following analysis of few-layer crystals we implement the computationally less expensive first method (of the three described), switching off lattice relaxation at all stages and using the experimentally known positions of atoms in the crystal.

B. TMD bilayers

In the analysis of TMD bilayers we take into account that these can be composed of qualitatively different stackings. In one, known as 2H stacking (commonly synthesized in bulk crystals), the unit cells of consecutive layers are inverted, as shown in Fig. 2(c). In the other, which corresponds to consecutive layers stacked as in 3R-TMD polytypes, the consecutive layers have parallel orientations of the unit cells and alignment of a chalcogen atom in one layer with a metal in the other, as illustrated in Fig. 2(c). It has recently been shown [35-43] that such bilayers exhibit interlayer charge transfer and a spontaneous out-of-plane ferroelectric polarization with opposite orientations for the XM and MX stacking configurations (see Fig. 2(c)). The results of the crystal energy computation, U(D), for all of these configurations are shown in Fig. 2(d), using MoS2 as a typical example. [Fig. 2(d) caption: symbols represent DFT data, whereas dashed and solid lines represent fits to the DFT data computed using the expression in Eq. (2); the results are well converged for an out-of-plane periodicity of L_z = 70 Å.] We use the computed data to fit both the spontaneous ferroelectric polarization, P, and the value of α_zz, using a parabolic dependence which now incorporates a linear term P D to account for the spontaneous interface electric dipole,

U(D) = U_0 − P D/ε_0 − α_zz D²/(2 ε_0 A).   (2)

The values extracted for the out-of-plane polarizability of MoS2 bilayers with all the described stackings coincide within the DFT computation accuracy (see Table II) and appear to be approximately twice the value of the monolayer polarizability.
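As a minimal illustration of this fitting step — assuming the forms of Eqs. (1) and (2) as reconstructed above, and with hypothetical numbers standing in for real DFT output — both P and α_zz fall out of an ordinary quadratic least-squares fit:

```python
import numpy as np

# Hypothetical stand-ins for DFT output (units chosen for illustration only):
eps0 = 0.005526      # vacuum permittivity in e / (V * Angstrom)
A = 8.65             # unit-cell area in Angstrom^2 (assumed value)
D = np.linspace(-0.1, 0.1, 11)               # sampled displacement fields
P_true, alpha_true = 0.002, 88.9             # made-up bilayer parameters
U = 1.0 - P_true * D / eps0 - alpha_true * D**2 / (2 * eps0 * A)

# Fit U(D) = c2*D^2 + c1*D + c0, then map the coefficients back onto
# Eq. (2): c2 -> -alpha_zz/(2*eps0*A), c1 -> -P/eps0.
c2, c1, c0 = np.polyfit(D, U, 2)
alpha_zz = -2 * eps0 * A * c2
P = -eps0 * c1
print(f"alpha_zz = {alpha_zz:.1f} Angstrom^3, P = {P:.4f}")
# For a monolayer (or any inversion-symmetric stack) c1, and hence P,
# fits to zero, and the same code reproduces the Eq. (1) case.
```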
The parameters obtained from our fittings are gathered in Table II. [Table II. Values of P and α_zz obtained by fitting the DFT data to Eqs. (1) and (2) for monolayer and multilayer MoS2 crystals, respectively. The second column indicates the interfaces that contribute to the ferroelectric polarization (FP) of each crystal (see Figs. 1 and 2); the fifth column shows the ratio between α_zz and α_zz^{1L}, where the latter corresponds to the α_zz of a monolayer.] The factor-of-two relation between the monolayer and bilayer polarizabilities is systematically reproduced for all four other studied TMDs.

C. Trilayers and tetralayers of TMDs

To test the scaleability of α_zz further, we considered trilayers and tetralayers with various stacking interfaces (2H-2H, XM-MX, XM-2H, XM-XM, 2H-2H-2H, XM-2H-MX, 2H-2H-XM, XM-2H-XM). The DFT-computed U(D) dependences for each of these systems are displayed in Fig. 3(a) and (b) and analysed using Eq. (2). [Fig. 3 caption: the interfaces used for the trilayer and tetralayer TMDs are shown in panels (c) and (d), respectively, together with the directions of the ferroelectric polarizations P_MX and P_XM due to interlayer charge transfer.] This produces polarizability values that scale linearly with the number of layers for all of these configurations, as listed in Table II. The values of P obtained from the same data using Eq. (2) also correspond well to an algebraic summation of the independent contributions of consecutive interfaces, which either compensate each other or double the resulting value (see Table II), depending on the type of interfaces in the layer stacking illustrated in Fig. 3(c) and (d). Overall, the data for all five different TMDs are collected in Fig. 1(a), where one can see that α_zz^{NL} = N α_zz^{1L}, where N is the number of layers.

D. Monolayer and multilayer hBN crystals

In this subsection we repeat the analysis of α_zz for hexagonal boron nitride crystals; this includes the study of lattice relaxation, the ionic contribution to the polarizability, and the additivity of the ferroelectric polarization due to interlayer charge transfer at inversion-asymmetric interfaces. We carried out DFT calculations using the plane-wave code implemented in the Quantum Espresso package [28,29]. A plane-wave cut-off of 90 Ry was used for all calculations with hBN films, and the integration over the Brillouin zone was performed using the scheme proposed by Monkhorst and Pack [30] with a grid of 9 × 9 × 1. For the hBN calculations we used norm-conserving pseudopotentials and an exchange-correlation functional approximated by the PBE method [31], with the inclusion of the Tkatchenko-Scheffler van der Waals functional (vdW-TS) [44]. The convergence threshold for self-consistency was set to 10⁻⁹ Ry. A z-dependent sawtooth potential was used to induce an out-of-plane displacement field, and an out-of-plane Coulomb truncation [32] was applied. Relaxation calculations for the atomic coordinates were done using the BFGS quasi-Newton algorithm, with a convergence threshold for the total force acting on the atoms of less than 10⁻⁵ Ry/Bohr. In contrast to the TMDs, the out-of-plane relaxation of the ions (for a lattice constant fixed to the experimentally known value) gives two clearly distinguishable U(D) dependences, which result in the values of α_zz^e and α_zz^{e+i} listed in Table III.
An example of the calculations for three stacking configurations of hBN bilayers (with parallel and anti-parallel unit-cell orientations), performed with lattice relaxation implemented in the code, is displayed in Fig. 4(b). The difference between the U(D) dependences results from the ferroelectric polarization of the bilayers with parallel unit-cell orientations [40,41,43,45]. The values of α_zz^{e+i} for these bilayers, as well as for trilayers and tetralayers, listed in Table III demonstrate perfect scaleability with the number of layers, α_zz^{NL} = N α_zz^{1L}, for both the electronic and the combined electronic-plus-ionic responses. Moreover, the ferroelectric polarization exhibits additivity of the contributions of the individual interfaces. [Table III. Values of P and α_zz obtained by fitting the DFT data to Eq. (1) for monolayer (1L) and multilayer (2L, 3L, 4L) hBN crystals, respectively. The second column indicates the interfaces that contribute to the ferroelectric polarization (FP) of each crystal (see Fig. 4); the sixth and ninth columns show the ratio between α_zz and α_zz^{1L} with and without the ionic contribution.]

III. DISCUSSION

In order to recalculate the computed α_zz into the dielectric constant of a medium composed of many layers in a TMD or hBN crystal, we use the following expression,

ε_zz = [1 − α_zz^{1L}/(A d)]⁻¹,   (3)

where d is the interlayer distance. This expression has been successfully implemented before in the analyses of layered materials [25,26,39]. In Table IV we gather the resulting values of ε_zz (obtained from the corresponding computed values of α_zz^{1L}) for all the materials studied here. As we found in Section II that the ionic contribution is negligibly small for TMDs, their ε_zz values are expected to be the same at both zero and high frequencies. For hBN the low- and high-frequency dielectric constants are distinguishable due to a substantial contribution of the ions to the static polarizability, evident in Table IV. Nevertheless, the overall dielectric constant of hBN is smaller than that of the TMDs due to a weaker electronic polarizability, which we attribute to the much larger band gap of this material.

IV. CONCLUSIONS

The analysis presented here demonstrates the linear scaling of the polarizability of multilayer TMDs and hBN with the number of layers of these van der Waals materials, which suggests that the layers respond to the out-of-plane perturbation independently of each other. This enabled us to quantify the dielectric constants of these materials, as determined by the layer polarizabilities, with computed values that we compare with the results of previous computations and the available experimental data in Table IV.

ACKNOWLEDGMENTS

This work was supported by EC-FET European Graphene Flagship Core3 Project, EC-FET Quantum Flagship Project 2D-SIPC, EPSRC grants EP/S030719/1 and EP/V007033/1, and the Lloyd Register Foundation Nanotechnology Grant. Computational resources were provided by the Computational Shared Facility of the University of Manchester and the ARCHER2 UK National Supercomputing Service (https://www.archer2.ac.uk) through EPSRC Access to HPC project e672.
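As a quick numerical check of Eq. (3) — with the MoS2 unit-cell area and interlayer distance taken as approximate literature values (assumptions here, not numbers quoted in the text) — the monolayer polarizability from Section II gives an ε_zz of order 6, consistent with reported bulk values:

```python
# Bulk out-of-plane dielectric constant from the monolayer polarizability,
# using eps_zz = 1 / (1 - alpha_zz / (A * d)) as in Eq. (3).
alpha_1L = 44.46   # Angstrom^3, MoS2 monolayer value quoted in the text
A = 8.65           # Angstrom^2, hexagonal cell area for a ~ 3.16 A (assumed)
d = 6.15           # Angstrom, interlayer distance of 2H-MoS2 (assumed)

eps_zz = 1.0 / (1.0 - alpha_1L / (A * d))
print(f"eps_zz(MoS2) ~ {eps_zz:.1f}")   # ~6, of the expected order
```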
How to Reduce the Latent Social Risk of Disease: The Determinants of Vaccination against Rabies in Taiwan

To control the latent social risk of disease, the government usually spreads accurate information and attempts to improve the public's attitude toward adopting prevention. However, these methods, based on the Knowledge, Attitudes, and Practices (KAP) model, do not always work. Therefore, we used the theory of planned behavior (TPB) to understand dog owners' behavior and distinguished the knowledge effect into objective knowledge (OK) and subjective knowledge (SK). A total of 310 dog owners completed a questionnaire based on our model. We employed structural equation modeling to verify the structural relationships and found three main results. First, our model fit well, and each path was significant: people with better attitudes, stronger subjective norms, and greater perceived behavioral control have stronger behavioral intention. Second, perceived behavioral control, not attitude, was the best predictive index in this model. Finally, subjective knowledge showed more influence on perceived behavioral control than objective knowledge. We successfully extended TPB to explain the behavioral intention of dog owners and present workable recommendations. To reduce the latent social risk of disease, the government should address not only dog owners' attitudes, but also their subjective norms and perceived behavioral control. Indeed, perceived behavioral control and SK showed the most influence in this model, implying that the self-efficacy of dog owners is the most important factor in such a behavior. Therefore, the government should focus on enhancing dog owners' self-efficacy first while devoting itself to prevention activities.

According to the WHO report, failure to control rabies is associated with: (1) a large number of infected canines; (2) ineffective canine management; (3) a canine vaccination rate below 80%, including stray dogs; and (4) people's ignorance because of insufficient general knowledge and insufficient education budgets. Conversely, the effective investigation of epidemic situations, a high vaccination rate (over 80%), and effective canine management are key factors in successfully controlling rabies infections. According to the report, since 1973 the WHO has widely promoted two rabies prevention measures: broad vaccination programs and strict control of stray dogs. Among KAP-model promotions, improving knowledge and instilling positive attitudes toward prevention should be the main elements of a Taiwanese rabies-prevention campaign. However, Taiwan's current vaccination rate is only between 30% and 40% [4,5], far below the WHO's 80% recommendation [2]. To close this gap, we focused on owners' intentions to vaccinate their dogs and attempted to find the determinants of vaccination against rabies in Taiwan. We had three main purposes. First, we argued that the KAP model's attitude concept should be extended to behavioral intention and that the theory of planned behavior (TPB) would be a suitable structural model for the behavioral intention of vaccination against rabies, given its wide application in research on forecasting and modeling rational behavior. TPB has been used to predict, for instance, people's intention to vaccinate against influenza as well as many other health behaviors [6]. We used structural equation modeling (SEM) to verify that TPB can explain people's intention to vaccinate dogs against rabies [7]. Second, we argued that attitude is not the only key factor, or even the best factor, for understanding dog owners' behavioral intention.
Given the low vaccination rate in Taiwan, we considered that merely improving knowledge and instilling a positive attitude are not enough to control the latent risk of rabies. By applying TPB to the intention of rabies vaccination, we described the relationships among the variables to determine the best predictive index. Finally, we argued that the knowledge concept of the KAP model should be distinguished into two types, objective knowledge (OK) and subjective knowledge (SK). OK indicates the level of accurate information in one's cognition about the target; SK indicates the level of one's perception of what or how much one knows about the target [8,9]. Ajzen et al. [10] argued that knowledge, especially objective knowledge, would affect attitude and enhance self-efficacy (perceived behavioral control, PBC), but they did not test the correlation between subjective knowledge and TPB. Moreover, we expected that SK, rather than OK, would be more likely to prompt people to vaccinate their dogs. In conclusion, we proposed rabies prevention policies to the government and suggestions for raising the vaccination rate through TPB, thus not only helping prevent a rabies outbreak in Taiwan but also mitigating latent risk in similar situations.

Behavior for Vaccination against Rabies

Vaccination is a health behavior, that is, a personal act to preserve or strengthen one's health [11-13]. Many methods have been employed to increase vaccination rates, for example, increasing knowledge and improving attitudes, but these strategies have shown only limited success [14,15]. Descriptions of vaccination determinants have come mainly from physician perspectives, and past studies have often ignored those who actually make the decisions [15]. Hence, the limited success of these interventions clearly indicates the need for a fresh approach and new methods. Besides, vaccinating one's dog is not purely a health behavior. It involves a variety of factors: health, emotions, risk aversion, social perception, and so on. From the human perspective, dogs may be movable property, personal goods, and beloved pets, and those who vaccinate dogs against rabies may do so for one or more of the following motivations: enhancing their dogs' health; loving their dogs; perceiving the risk of rabies; thinking other people hope they will; and others. Furthermore, vaccinating dogs is not only a personal act but also a social behavior involving moral perception. At least partly because authorities such as the WHO and the Centers for Disease Control (CDC) have advocated epidemic prevention for several years, vaccinating dogs has, for some people, become an ethical and moral act of socialization.

Theories for Understanding the Behavior of Vaccination against Rabies

Certain theoretical frameworks have proven well suited to the design of health behavior change interventions [6,14,15]. Among the theories commonly used to understand health behavior [14-16] are the theory of reasoned action (TRA) and the TPB, which have effectively explained interventions and induced health behavior changes [14-16]. These two theories have increased understanding of the processes involved in vaccination decision-making at the individual level. Constructed by Ajzen (1991) [17], the TPB is an especially well established framework for predicting various types of health behaviors [6]. A central element in TRA and TPB is the individual's intention to perform a given behavior [17].
Previous research shows that the immediate antecedent of any behavior is the intention to perform that behavior. People who have a stronger intention to act are more likely to perform the behavior [18-20], especially for reasoned actions and planned behaviors. Developed by Fishbein and Ajzen, the TRA is based on two assumptions. The first is that intentions best predict behaviors, and the second is that human behavior is quite rational and employs the limited information available to the individual [21,22]. In this theory, two independent factors determine one's intention: attitudes and subjective norms. Attitudes consist of general evaluations of behavioral performance and beliefs about the consequences of performing the behavior, weighted by the individual's evaluation of each consequence. Subjective norms reflect general perceptions of social pressure to perform the target behavior and are affected by the expectations of important referents, weighted by the individual's motivation to comply with each referent. Researchers have conducted many empirical studies on this topic over the past 30 years and provided evidence supporting the TRA's ability to explain health and social behavior [23-28]. However, although previous studies have successfully verified that TRA is helpful in predicting intention and behavior, other studies have revealed its limitations [17,29,30]. For the most part, the behaviors investigated through TRA have been subject to considerable volitional control [31]. Some studies using the health belief model, the source of all health behavior change models [32], have added self-efficacy to their models [31-33]. In contrast to TRA, TPB contains perceived behavioral control, which includes the concept of self-efficacy [17,29]. Owing to TRA's limitations, Ajzen developed an enhanced behavior prediction model for situations in which the individual may not have considerable volitional control or may not be able to perform well [17,29]. Ajzen argued that the construct of perceived behavioral control is belief-based, similar to attitudes and subjective norms in TRA [34-36]. Perceived behavioral control represents one's belief about how easy or difficult it is to perform a behavior and is easily measured with a questionnaire [29,30]. Using the TRA as a base, Ajzen constructed TPB to incorporate perceptions of control over the performance of behavior as an additional predictor [17,29]. He then used TPB to predict behavior that an individual may not be able to perform at will [20]. Ajzen also proposed that perceived behavioral control affects behavior not only indirectly through intention but also directly [17,29]. In fact, many theoretical and empirical studies provide evidence supporting TPB. In 1985, Ajzen presented "From intentions to actions: A theory of planned behavior" to open the theoretical discussion of TPB. He theorized that the relationship between behavioral intention and behavior is stronger when perceived behavioral control is high. To provide a powerful foundation for TPB, he also presented arguments about social psychology [34], organizational behavior [17], self-efficacy [36], laws of human behavior [37], and the relationship between consumer attitudes and behavior [38]. In addition to this psychological research, Armitage and Conner argued that TPB can be applied to health behavior and also extended TPB to such fields as moral behavior, technological behavior, and exercise behavior [39].
These researchers found that TPB explained an average of 39% of the variance in intention and 27% of the variance in behavior. The TPB concept has received strong empirical support in applications to a variety of domains. Nevertheless, the current study is one of only a few attempts to use TPB as a conceptual framework for vaccination, and more specifically, canine vaccination against rabies. This behavior involves morals, social impressions, and health concepts. Researchers have repeatedly used TPB to interpret moral behavior [40], including behaviors of health promotion [41], environmental friendliness [42], and tax compliance [43]. In the social behavior domain, researchers have used TPB to examine alcohol abuse [44], volunteer behavior, substance use [45], blood donations [18], and others. In the health domain, TPB has explained various behaviors, for example, smoking [46-48], giving up smoking [49], and drinking [48,50]. Researchers have also used TPB to predict a variety of attendance decisions for many types of health behaviors, including the decision to attend health checks and health clinics [51,52], breast cancer screenings [53], and workplace health and safety courses [54]. Therefore, TPB could be a powerful theory for predicting a rise in the rate of vaccination against rabies. Although TPB has never been applied to explain the behavior of vaccination against rabies, it has been applied in previous research to predict a wide range of other behaviors, including health behavior, social behavior, and moral behavior, and these behaviors resemble the concepts underlying the targeted behavior. Therefore, we employed TPB to construct a theoretical framework for explaining the behavior of owners ensuring that their dogs are vaccinated against rabies. As in the original TRA, intention determines actual behavior [17,29]. Intentions are assumed to capture the motivational factors that influence a behavior; they are indications of how hard people are willing to try and how much effort they are planning to exert to perform the behavior [17,29]. Furthermore, in the TPB model, intention is jointly determined by attitudes, subjective norms, and perceived behavioral control. First, attitudes refer to the degree to which an individual favorably or unfavorably evaluates the behavior in question; second, subjective norms refer to social pressure to perform or not to perform the behavior; and third, perceived behavioral control refers to whether the individual anticipates the action's performance as relatively easy or difficult. Presumably, this third measure reflects past experiences and anticipated hindrances. Generally speaking, a person with a more favorable attitude, more positive subjective norms, and higher perceived behavioral control has a stronger intention to perform the target behavior [17,29]. Based on TPB, we argue that behavioral intention is determined by an individual's attitude toward rabies vaccination, subjective norms about this behavior, and perceived behavioral control, i.e., whether one can manage taking a dog to receive the rabies vaccine. In other words, this study hypothesizes that favorable attitudes, high subjective norms, and good perceived behavioral control enhance the behavioral intention of rabies vaccination. Besides, Ajzen argued that individual behaviors sometimes can be predicted best by self-efficacy, especially when the behaviors need to be controlled [55,56].
For taking a dog to receive the rabies vaccine, we also expected perceived behavioral control, not attitude, to be the best predictive index:

H1: Attitude (A) toward the vaccination of rabies positively affects behavioral intention (BI).
H2: Subjective norms (SN) about the vaccination of rabies positively affect BI.
H3: Perceived behavioral control (PBC) over vaccination positively affects BI.
H3b: The PBC effect is greater than the attitude effect.

The Knowledge Effect on Attitude and Perceived Behavioral Control

Knowledge changes people's cognition and affects their behavior [57]. Knowledge can be defined as a kind of stored information, which people obtain and acquire from processing data [58]. However, in previous studies, knowledge has been discussed according to two concepts, objective knowledge (OK) and subjective knowledge (SK) [8,9]. OK indicates the level of accurate information in one's cognition about the target; SK indicates the level of one's perception of what or how much one knows about the target [8,9]. The two concepts are related but must be distinguished: specifically, people cannot actually recognize whether their perceptions of how much they know are correct. In other words, a cognitive gap usually exists between OK and SK. Moreover, OK can be measured with objective scales, whereas SK relates more to one's self-confidence [8,59]. In this study, we defined the level of dog owners' accurate information about rabies as OK and the level of dog owners' perceptions of how much they know about rabies as SK. People use their knowledge to develop a cognitive system and to judge whether to perform a specific behavior [10]. In the KAP model, people improve their preventive attitudes as they increase their knowledge of disease. When people receive accurate information about the prevention of diseases, they know what coping behaviors should be taken and improve their attitudes about performing these behaviors [60]. In this study, we likewise presumed that owners will have better attitudes about taking their dogs to be vaccinated when they possess greater objective knowledge: people who have higher OK will have more positive attitudes about taking their dogs to be vaccinated.

H4: Objective knowledge (OK) about rabies positively affects attitude (A) about rabies prevention.

Perceived behavioral control, which includes the concept of self-efficacy, is the distinguishing feature of TPB. When people do not have considerable volitional control or are not able to perform well, perceived behavioral control becomes a good predictor for explaining behavioral intention [17,29]. Ajzen argued that perceived behavioral control represents one's belief about how easy or difficult it is to perform a behavior and that the construct of perceived behavioral control is belief-based [34-36]. An individual with knowledge about a specific behavior has reduced feelings of impediment and increased perceived behavioral control [17,29]. In other words, when people have enough knowledge about rabies and about vaccinating their dogs, they gain self-efficacy and then feel confident about performing the behavior. Specifically, both OK and SK can affect the ability to perform a specific behavior, but through different mechanisms [9]. OK provides information and skill, reducing the impediments to bringing dogs to be vaccinated; SK enhances dog owners' self-efficacy so that they feel they can perform the behavior well. Therefore, we argued that people with higher levels of OK and SK will have higher perceived behavioral control.
Furthermore, Ajzen et al. [10] argued that knowledge positively affects attitude and perceived behavioral control. However, they did not test the correlation between SK and TPB. Because SK combines knowledge and self-confidence, it is more important in problem-solving [61,62]. Hence, we also argued that SK affects dog owners' perceived behavioral control more than OK does. Figure 1 displays the proposed hypotheses for this study:

H5: Objective knowledge (OK) about rabies positively affects perceived behavioral control (PBC) about rabies prevention.
H6a: Subjective knowledge (SK) about rabies positively affects PBC about rabies prevention.
H6b: The SK effect is greater than the OK effect.

Questionnaire

This study administered a questionnaire to assess: (1) attitude; (2) subjective norms; (3) perceived behavioral control; (4) behavioral intention; (5) objective knowledge; (6) subjective knowledge; and (7) basic demographic data. The first four scales were adapted from the sample TPB questionnaire designed by Icek Ajzen. Four sections evaluated attitudes, subjective norms, perceived behavioral control, and behavioral intention as to whether owners would take their dogs to receive the rabies vaccine injection. The last two scales (OK and SK) rested on the literature on objective and subjective knowledge and were revised based on the outcome of expert pretesting.

Intention. This study used three items with 5-point semantic differential scales to measure participants' intentions to have their dogs vaccinated. First, the statement "I would like to take my dog to have the rabies vaccine injection" was rated on a 5-point semantic differential scale ranging from extraordinarily impossible (1) to extraordinarily possible (5). Second, "I will take my dog to have the rabies vaccine injection in the near future (3 months)" was rated on a 5-point semantic differential scale ranging from absolutely incorrect (1) to absolutely correct (5). Lastly, "I plan to take my dog to have the rabies vaccine injection in the near future (1 year)" was rated on a 5-point scale ranging from absolutely incorrect (1) to absolutely correct (5).

Attitudes. This study used three items with 5-point scales to assess attitudes toward the behavior. The scales ranged from strongly disagree (1) to strongly agree (5). These items were modified from "Constructing a TPB Questionnaire" by Ajzen and included "For me to take my dog to have the rabies vaccine injection is good," "For me to take my dog to have the rabies vaccine injection is beneficial," and "For me to take my dog to have the rabies vaccine injection is helpful."

Subjective norms. To assess subjective norms, this study used three 5-point scale items ranging from strongly disagree (1) to strongly agree (5). We not only focused on the opinions of participants' relatives and friends about taking their dogs for the rabies vaccine injection, but also considered whether those relatives and friends would do so themselves. These items included: "My family thinks I should take my dog to have the rabies vaccine injection," "My friends think I should take my dog to have the rabies vaccine injection," and "My relatives and friends have taken their dogs to have the rabies vaccine injection."

Perceived behavioral control. This study used three items with 5-point scales to measure perceived behavioral control. These items were also adapted from Ajzen's "Constructing a TPB Questionnaire."
The item "To bring my dog to have the rabies vaccine injection every year" was rated on a 5-point semantic differential scale ranging from "I can't make this happen" (1) to "I can make this happen" (5); the item "I have the ability to take my dog to have the rabies vaccine injection" was rated on a 5-point semantic differential scale ranging from completely incorrect (1) to completely correct (5); and the item "To bring my dog to have the rabies vaccine injection" was rated on a 5-point semantic differential scale ranging from "I have no control over this" (1) to "I have control over this" (5). Objective knowledge. Aligning with the CDC and WHO reports [1,2], we designed 17 items with key information on rabies. After our pretesting, we performed item analysis and deleted 7 items. Finally, 10 items were used to evaluate rabies knowledge in the "the objective knowledge of rabies index." Each item employed a dichotomous scale (Yes or No question). We summarized ten scores to represent the objective knowledge of dog owners. Subjective knowledge. According to the literature, subjective knowledge can be measured as a kind of self-confidence [8,59]. We took one subjective knowledge item from a self-report and three subjective knowledge items to assess the respondent's self-confidence of rabies knowledge as compared with other dog owners, pet traders, and prevention experts. We used a Likert 5-point scale to measure the score, ranging from strongly disagree (1) to strongly agree (5). Pre-Testing and Sampling This study's questionnaire in this study was reviewed by 10 epidemic prevention experts and staff members selected from among veterinary professors in universities and personnel at the bureau of animal and plant health inspection and quarantine. Besides that, it was pretested on 133 dog owners in Taiwan. According to their suggestions, we revised some items. The geographic scope of this study is Taiwan and the Kimen district. We distributed the samples around Taipei, Taichung, Kaohsiung, Taitung, and Kinmen. To increase the response rate, each participant received a questionnaire accompanied by a gift valued at one US dollar. In total, 310 participants completed the questionnaire. The respondents were almost equally male (163; 52.6%) and female (147; 47.4%). Their age ranged from 16 to 73, with an average age of 37.6 years old (with a standard deviation of 12.33 years). As for the level of education completed, 12.9% (N = 42) had a junior/senior school degree, 72.6% (N = 225) had a bachelor's degree, 11.9% (N = 37) had a master's degree, and 1.9% (N = 6) had a doctorate degree. Results For this study, we used SEM to verify whether TPB can explain the intention of people to have their dogs vaccinated and whether knowledge of rabies can positively affect people's attitude and perceived behavioral control. Besides that, we tried to review the relationships of these variables and find a determinant to explain the dog owners' intention. We employed LISREL 8.7 to achieve this goal. The Measurement Model According to the hypotheses, based on TPB and KAP, there are six latent variables in this study: objective knowledge (OK), subjective knowledge (SK), attitudes (A), subjective norms (SN), perceived behavioral control (PBC), and behavioral intention (BI). Table 1 shows the means and standard deviations of all variables, for which there were no significant differences in gender, age, and education level. 
To evaluate internal consistency, we used Cronbach's α to test the reliability of the A, SN, PBC, BI, and SK scales. In this study, Cronbach's α was 0.903 for A, 0.839 for SN, and 0.940 for PBC; for the BI scale it was 0.884, and for the SK scale 0.945. All values of Cronbach's α exceeded 0.80 and are thus well within the commonly accepted range of reliability [7,63] (Table 2). Convergent validity can be assessed by reviewing the average variance extracted (AVE) and composite reliability (CR) for each construct; these values should exceed 0.5 for AVE and 0.7 for CR [7,63]. In this study, all values of AVE and CR were greater than 0.639 and 0.841, respectively, well within the acceptable range (Table 2), providing evidence that the convergent validity in this study is acceptable. The AVE can also be used to assess discriminant validity, which is acceptable when the AVE score is greater than the squared correlation coefficients among variables. In this study, the AVE scores, shown on the diagonal of Table 3, were all greater than the squared correlation coefficients, confirming discriminant validity.

The Structural Model

This study used SEM to examine the structural relationships of our model based on TPB and KAP and to determine the key factors in owners' intention to take their dogs for the rabies vaccine. In addition, Ajzen argued that there could be some correlation among attitude, subjective norm, and perceived behavioral control [17,29], and several studies have found evidence supporting the relations among these variables [64,65]. Hence, guided by the modification indices (MI), we allowed the correlations among these variables in TPB to be freely estimated. All indicators used to test the fit of the SEM model were acceptable (Table 4), confirming the fit of our model. Figure 2 illustrates the SEM results. Almost all the paths and relations were significant, and the hypotheses were supported, except H5. For the A-BI path (H1), the standardized coefficient was 0.28, with a t-value of 2.60 (p < 0.01). Hence, the stronger the attitude, the stronger the behavioral intention to take a dog to be vaccinated against rabies. For the SN-BI path (H2), the standardized coefficient was 0.22, with a t-value of 2.47 (p < 0.05). Accordingly, people who felt more social pressure and had higher subjective norms exhibited stronger intention to take their dogs to be vaccinated. For the PBC-BI path (H3), the standardized coefficient was 0.43, with a t-value of 6.22 (p < 0.001). In other words, subjects with a higher sense of behavioral control had a higher intention to take their dogs to be vaccinated against rabies. Therefore, people who had a more positive attitude, stronger subjective norms, and greater perceived behavioral control had stronger behavioral intention to take their dogs for vaccination against rabies. Together, the three indices explained 69% of the variance in behavioral intention. Subjective norm positively affected attitude, and attitude positively affected perceived behavioral control. Regarding the knowledge effect on attitude and perceived behavioral control, we also obtained evidence supporting our hypotheses. For the OK-A path (H4), the standardized coefficient was 0.14, with a t-value of 3.11 (p < 0.05). In other words, dog owners with higher objective knowledge had a more positive attitude toward taking their dogs to be vaccinated.
For the OK-PBC path (H5), the standardized coefficient was 0.02, with a t-value of 0.50 (p > 0.05); for the SK-PBC path (H6), the standardized coefficient was 0.11, with a t-value of 2.29 (p < 0.05). In other words, people who have more subjective knowledge have greater perceived behavioral control over performing rabies prevention. Therefore, for vaccinating a dog, objective knowledge enhanced an individual's attitude, and subjective knowledge enhanced an individual's perceived behavioral control. Furthermore, according to the path analysis results, perceived behavioral control was the strongest predictor of the behavioral intention of vaccination against rabies (H3b), and the effect of subjective knowledge on perceived behavioral control was greater than that of objective knowledge (H6b). That is to say, attitude is not the best factor, and subjective knowledge must be considered, for understanding dog owners' behavioral intention. Therefore, when devoting effort to raising the vaccination rate against rabies, we need to revise the traditional KAP model, which contains only attitude and objective knowledge.

Conclusions

Vaccinating dogs is the most effective way to prevent an outbreak of rabies. Recently, there have been some animal cases, but no human cases, in Taiwan. However, many latent risks still surround this area, especially those coming from China. With the ECFA deal, more exchange between Taiwan and China could lead to a higher chance of a rabies outbreak in Taiwan. Although the administration has tried to improve knowledge and instill positive attitudes, the vaccination rate in Taiwan is still between 30% and 40%, far below the 80% rate recommended by the WHO. Hence, it is necessary to better understand and predict owners' behavior regarding vaccinating their dogs. In this study, we integrated KAP and TPB to achieve these goals. The SEM results showed that all the indices were acceptable and confirmed the fit of our model. This means that our model is suitable not only for measurement but also for exploring the behavioral intention of vaccination against rabies. In explaining behavioral intention through TPB, each path was significant, supporting Hypotheses 1-3. In other words, people with more positive attitudes, stronger subjective norms, and greater perceived behavioral control have stronger behavioral intention to vaccinate their dogs. Through these results, we verified that TPB is a suitable structural model for the behavioral intention of vaccination against rabies and successfully extended TPB to explain the behavioral intention of dog owners. In addition, perceived behavioral control, not attitude, is the strongest index for predicting the target behavioral intention. In other words, the results confirm our argument that attitude, although important, is not the best index for understanding dog owners' behavioral intention. Whether the Taiwanese vaccinate their dogs is mostly related to their belief about how easy or difficult it will be to accomplish. Regarding the knowledge effect on preventive behavior, the SEM results also supported Hypotheses 4 and 6. People who had more objective knowledge of rabies tended to have more positive attitudes about taking their dogs to be vaccinated. As in the KAP model, objective knowledge could strengthen attitudes about prevention and provide the accurate information and skill that reduce impediments to vaccination.
At the same time, people who had more subjective knowledge tended to have greater perceived behavioral control toward vaccination. In other words, subjective knowledge could change people's perceptions and considerations about preventing rabies and enhance their self-efficacy so that they feel they can perform the behavior well. Furthermore, the results confirmed our argument that subjective knowledge shows greater influence than objective knowledge on perceived behavioral control: if we want to improve the vaccination rate by raising dog owners' perceived behavioral control, enhancing their subjective knowledge is more effective than providing greater objective knowledge.

Discussion and Suggestions

Based on these findings, we make several contributions in both theory and practice. First, we successfully extended TPB to explain behavioral intention for a behavior performed on behalf of another (one's dog), not only behavior planned for the individual. This study is the first to use TPB as a conceptual framework for canine vaccination against rabies. Previous authors successfully applied TPB in studies of people's intentions to obtain vaccinations for themselves; this study shows that TPB also adapts to situations in which people decide to perform a behavior for their dogs. Moreover, for the KAP model, we found evidence that it can describe behavior against rabies, but other important factors must be considered at the same time. Therefore, we should extend the KAP model to a Knowledge, Intention, Practices (KIP) model and attend, at least, to perceived behavioral control, attitude, and subjective norms. Second, we found that when people decide to perform this kind of behavior, perceived behavioral control may be the most important factor. This result suggests that when people must make a decision outside of their own control, they might not feel that they have considerable volitional control; their control was a primary determinant of their behaviors [55,56]. In other words, for the TPB model, we found evidence to support Ajzen's argument. Furthermore, for the KAP model, we believe that when the owners of dogs, or other animals, try to bring them to be vaccinated against disease, the owners' perceived behavioral control should matter more than their attitude. Finally, we found that subjective knowledge influences perceived behavioral control, the strongest predictor of the behavioral intention of vaccination against rabies, more than objective knowledge does. Ajzen et al. argued that knowledge is positively correlated with attitude and perceived behavioral control [10]. We not only found evidence supporting their argument but also clarified the mechanism. We added the concept of subjective knowledge to our model and found that the SK-PBC-BI path should be the most effective route for raising the vaccination rate against rabies. In other words, an SK-PBC-BI-Practices path would be better than the traditional KAP model. Self-efficacy is the key factor in such preventive behavior. People with greater self-efficacy feel more subjective knowledge and perceived behavioral control, and then they perform better. Therefore, in order to reduce the latent social risk, the government should first focus on raising perceived behavioral control toward the behavior. According to our findings, perceived behavioral control was the primary factor influencing behavioral intention. In other words, an effective epidemic prevention policy must be aimed at this factor.
To influence perceived behavioral control, the government should provide manageable conditions and a comfortable situation, for instance, a vaccination subsidy, more convenient vaccination locations, and so on. Moreover, subjective norms and attitude should also be addressed. Owners who consider vaccination necessary within their social circle and believe that vaccination against rabies is beneficial will have stronger intentions toward this behavior. At the same time, subjective knowledge plays an important role in positively affecting perceived behavioral control. In other words, in epidemic prevention activities, enhancing dog owners' self-efficacy is more important than confirming their learning: we should first address people's confidence in their rabies knowledge, and then confirm how much knowledge they actually have. Furthermore, our results not only offer the government a reference for disaster prevention but also suggest some interesting directions for further research. The TPB model achieved a 69% prediction rate for dog owners' behavioral intention, which still leaves 31% unexplained. In other words, people who tend to vaccinate their dogs are influenced by other factors, including risk perception, quality of care for their dogs, and temporal immediacy. These may affect not only the intention to vaccinate but also the behavior's practical execution.

Author Contributions

Ku-Yuan Lee: initiated the research, preparation of the text, study conception and design, acquisition of data, drafting of the manuscript. Li-Chi Lan: acquisition of data, analysis and interpretation of statistical data, preparation of the revision. Jiun-Hao Wang: study conception and design, acquisition of data, final approval of the article. Chen-Ling Fang: initiated the research, study conception and design, submission and correspondence of the article. Kun-Sun Shiao: initiated the research, preparation of the text.
Review of Jet Measurements in Heavy Ion Collisions

A hot, dense medium called a Quark Gluon Plasma (QGP) is created in ultrarelativistic heavy ion collisions. Early in the collision, hard parton scatterings generate high momentum partons that traverse the medium and then fragment into sprays of particles called jets. Understanding how these partons interact with the QGP and fragment into final state particles provides critical insight into quantum chromodynamics. Experimental measurements of high momentum hadrons, two particle correlations, and fully reconstructed jets at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) continue to improve our understanding of energy loss in the QGP. Run 2 at the LHC recently began, and a jet detector at RHIC is under development. Now is the perfect time to reflect on what the experimental measurements have taught us so far, the limitations of the techniques used for studying jets, how the techniques can be improved, and how to move forward with the wealth of experimental data such that a complete description of energy loss in the QGP can be achieved. Measurements of jets to date clearly indicate that hard partons lose energy. Detailed comparisons of the nuclear modification factor between data and model calculations have led to quantitative constraints on the opacity of the medium to hard probes. However, while there is substantial evidence for the softening and broadening of jets through medium interactions, the difficulties in comparing measurements to theoretical calculations limit further quantitative constraints on energy loss mechanisms.
Since jets are algorithmic descriptions of the initial parton, the same jet definitions, including the treatment of the underlying heavy ion background, must be used when making data and theory comparisons. We call for an agreement between theorists and experimentalists on the appropriate treatment of the background, Monte Carlo generators that enable experimental algorithms to be applied to theoretical calculations, and a clear understanding of which observables are most sensitive to the properties of the medium, even in the presence of background. This will enable us to determine the best strategy for the field to improve quantitative constraints on properties of the medium in the face of these challenges.

I. INTRODUCTION

In ultrarelativistic heavy ion collisions, the temperature is so high that the nuclei melt, forming a hot, dense liquid of quarks and gluons called the Quark Gluon Plasma (QGP). Hard quark and gluon scatterings occur early in the collision, prior to the formation of the QGP. These quarks and gluons, known as partons, traverse the medium and then fragment into collimated sprays of particles called jets. The partons lose energy to the medium, and the jets they produce are thus modified. This process, called jet quenching, is studied with experimental measurements of high momentum hadrons, two particle correlations, and reconstructed jets at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). After nearly two decades of experimental measurements, we reflect on what they have taught us so far, the limitations of the techniques used for studying jets, how the techniques can be improved, and how to move forward with the wealth of experimental data such that a complete description of energy loss in the QGP can be achieved. Our goal in the following sections is to provide an overview of what we have learned from jet measurements and what the field needs to do in order to improve our quantitative understanding of jet quenching and the properties of the medium, from RHIC energies (√s_NN up to 200 GeV) to LHC energies (√s_NN = 2.76-5.02 TeV). We discuss measurements using the ALICE, ATLAS, and CMS detectors at the LHC, and the BRAHMS, PHENIX, PHOBOS, and STAR detectors at RHIC. The main goal of this paper is to review experimental techniques and measurements. While we discuss some models and their interpretation, a full review of the theory of partonic interactions with the medium is outside the scope of this paper. In this section, we provide an overview of the formation of the QGP and other processes that impact the measurement of jets and their interaction with the medium. One key factor in measuring jets in heavy ion collisions is accounting for the effect of the fluctuating background on different observables. Section II discusses the various measurement techniques and approaches to background subtraction and suppression, and how these techniques may impact the results and their interpretation. We include measurements of nuclear modification factors, dihadron and multi-hadron correlations, and reconstructed jets. We follow this with a discussion of results in Section III, organized by what they tell us about the medium. Do jets lose energy in the medium? Is fragmentation modified in the medium? Do jets modify the medium? Are there cold nuclear matter effects? We show that there is substantial evidence for both partonic energy loss and modified fragmentation. The evidence for modification of the medium by jets is considerably more scant.
Our understanding of cold nuclear matter effects is rapidly evolving, but currently there do not appear to be substantial cold nuclear matter effects for jets. We conclude with a discussion of what we have learned and the way forward for the field in Section IV. There are extensive detailed measurements of jets, enabled by improved detector technologies, high cross sections, and higher luminosities, and there have been dramatic improvements in our theoretical understanding and capabilities. However, experimental techniques and the bias they may impose are frequently neglected, and it is not currently possible to apply experimental algorithms to most models. The current status of comparisons between models and data motivates our call for an agreement between theorists and experimentalists on the appropriate treatment of the background, Monte Carlo generators that enable experimental algorithms to be applied to theoretical calculations, and a clear understanding of which observables are most sensitive to the properties of the medium, even in the presence of background. This will enable us to quantitatively constrain properties of the medium.

[Figure 1: T_fo denotes the thermal freeze-out temperature, T_ch the chemical freeze-out temperature, and T_c the critical temperature at which the phase transition between a hadron gas and a QGP occurs; τ_0 is the formation time of the QGP. Figure courtesy of Thomas Ullrich.]

A. Formation and evolution of the Quark Gluon Plasma

Quarks and gluons become deconfined under extremely high energy and density conditions. This deconfined state became known as the QGP (Shuryak, 1980). With the advancements in accelerator physics, it can be created and studied in high energy heavy ion collisions. The formation of the QGP requires energy densities above 0.2-1 GeV/fm^3 (Bazavov et al., 2014; Karsch, 2002). These energy densities can currently be reached in high energy heavy ion collisions at RHIC, located at Brookhaven National Laboratory in Upton, NY, and the LHC, located at CERN in Geneva, Switzerland. Estimates of the energy density indicate that central heavy ion collisions with an incoming energy per nucleon pair as low as √s_NN = 7.7 GeV, the lower boundary of collision energies accessible at RHIC, can reach energy densities above 1 GeV/fm^3 (Adare et al., 2016e) and that collisions at 2.76 TeV, accessible at the LHC, reach energy densities as high as 12 GeV/fm^3 (Adam et al., 2016i; Chatrchyan et al., 2012d). Contrary to initial naïve expectations of a gas-like QGP, the QGP formed in these collisions was shown to behave like a liquid of quarks and gluons (Adams et al., 2005b; Adcox et al., 2005; Arsene et al., 2005b; Back et al., 2005; Heinz and Snellings, 2013). The heavy ion collision and the evolution of the fireball, as depicted in Figure 1, has several stages, and the measurement of the final state particles can be affected by one or all of these stages depending on the production mechanism and interaction time within the medium. The initial state of the incoming nuclei is not precisely known, but its properties impact the production of final state particles. The incoming nuclei are often modeled as either an independent collection of nucleons called a Glauber initial state (Miller et al., 2007), or a wall of coherent gluons called a Color Glass Condensate (Iancu et al., 2001).
In either initial state model, both the impact parameter of the nuclei and fluctuations in the positions of the incoming quarks or gluons, called partons, lead to an asymmetric nuclear overlap region. This asymmetric overlap is shown schematically in Figure 2. The description of the initial state most consistent with the data is between these extremes (Moreland et al., 2015). The proposed electron ion collider is expected to resolve ambiguities in the initial state of heavy ion collisions (Aprahamian et al., 2015). In all but the most central collisions, some fraction of the incoming nucleons do not participate in the collision and escape unscathed. These nucleons, called spectators, can be observed directly and used to measure the impact parameter of the collision.

Before the formation of the QGP, partons in the nuclei may scatter off of each other just as occurs in p+p collisions. An interaction with a large momentum transfer (Q) is called a hard scattering, a process which is, in principle, calculable with perturbative quantum chromodynamics (pQCD). The majority of these hard scatterings are 2→2, which result in high momentum partons moving 180° apart in the plane transverse to the beam as they traverse the evolving medium. These hard parton scatterings are the focus of this paper.

As the medium evolves, it forms a liquid of quarks and gluons. The liquid reaches local equilibrium, with temperature fluctuations in different regions of the medium. The liquid QGP phase is expected to live for 1-10 fm/c, depending on the collision energy (Harris and Muller, 1996). As the medium expands and cools, it reaches a density and temperature where partonic interactions cease, a hadron gas is formed, and the hadron fractions are fixed. This point in the collision evolution is called chemical freeze-out (Adam et al., 2016j; Adams et al., 2005b; Fodor and Katz, 2004). As the medium expands and cools further, collisions between hadrons cease and hadrons reach their final energies and momenta. This stage of the collision, thermal freeze-out, occurs at a somewhat lower temperature than the chemical freeze-out. Thermal photons, in a manner analogous to black body radiation, reveal that the QGP may reach temperatures of 300-600 MeV in central collisions at both 200 GeV (Adare et al., 2010a) and 2.76 TeV (Adam et al., 2016g). The temperature can also be inferred from the sequential melting of bound states of a bottom quark and antiquark (Chatrchyan et al., 2012g). The ratios of final state hadrons are used to determine that the chemical freeze-out temperature is around 160 MeV (Adam et al., 2016j; Adams et al., 2005b; Fodor and Katz, 2004) and that the thermal freeze-out occurs at about 100-150 MeV, depending on the collision energy and centrality (Abelev et al., 2013b; Adcox et al., 2004; Arsene et al., 2005a; Back et al., 2007).

The properties of the medium are determined from the final state particles that are measured. The initial gluon density can be related to the final state hadron multiplicity through the concept of hadron-parton duality (Van Hove and Giovannini, 1988), leading to estimates of gluon densities of around 700 per unit pseudorapidity at the top RHIC energy of √s_NN = 200 GeV (Adler et al., 2005) and 2000 per unit pseudorapidity at the top LHC energy of √s_NN = 5.02 TeV (Aad et al., 2012, 2016c; Aamodt et al., 2010; Adam et al., 2016d; Chatrchyan et al., 2011a).
The azimuthal anisotropy in the momentum distribution of final state hadrons is the result of the initial state anisotropy. The survival of these anisotropies provides evidence that the medium flows in response to pressure gradients (Aad et al., 2014b; Adam et al., 2016a; Adler et al., 2001, 2003c; Alver et al., 2007; Chatrchyan et al., 2014b). This asymmetry is illustrated schematically in Figure 2. The shape and magnitude of these anisotropies can be used to constrain the viscosity to entropy ratio, revealing that the QGP has the lowest viscosity to entropy ratio ever observed (Adams et al., 2005b; Adcox et al., 2005; Arsene et al., 2005b; Back et al., 2005).

Hadrons containing strange quarks are enhanced in heavy ion collisions above expectations from p+p collisions (Abelev et al., 2013f, 2014b; Khachatryan et al., 2017d). This is due to a combination of the suppression of strangeness in p+p collisions due to the limited phase space for the production of strange quarks, and the higher energy density available for the production of strange quarks in heavy ion collisions. Correlations between particles may provide evidence for increased production of strangeness due to the decreased strange quark mass in the medium (Abelev et al., 2009c; Adam et al., 2016f). Baryon production is enhanced for both light (Abelev et al., 2006; Adler et al., 2004; Arsene et al., 2010) and strange quarks (Abelev et al., 2008, 2013f, 2014b; Khachatryan et al., 2017d), an observation generally interpreted as evidence for the direct production of baryons through the recombination of quarks in the medium (Dover et al., 1991; Fries et al., 2003; Greco et al., 2003; Hwa and Yang, 2003).

Hard parton scatterings occur early in the collision evolution, prior to the formation of the QGP, so their interactions with the QGP probe the entire medium evolution. Therefore, they can be used to reveal the properties of the medium, such as its stopping power and transport coefficients. Since the differential production cross section of these hard parton scatterings is calculable in pQCD, and these calculations have been validated over many orders of magnitude in proton-proton collisions, in principle they form a well calibrated probe. In the absence of nuclear effects, the initial production scales with the number of binary nucleon-nucleon collisions, so interactions with the medium appear as deviations from this scaling. Since the majority of these hard partons are produced in pairs, they can be used both as a probe and a control. Particle jets of this nature are formed in e^+e^- and proton-proton (p+p) collisions as well and are observed to fragment similarly in e^+e^- and p+p collisions. In a heavy ion collision, where a QGP is formed, the hard scattered quarks and gluons are expected to interact strongly with the hot QCD medium due to their color charges, and lose energy, either through collisions with medium partons or through gluon bremsstrahlung. The energy loss of high momentum partons due to strong interactions is a process called jet quenching, and results in modification of the properties of the resulting jets in heavy ion collisions compared to expectations from proton-proton collisions (Baier et al., 1995; Bjorken, 1982; Gyulassy and Plumer, 1990). This energy loss was first observed in the suppression of high momentum hadrons produced in heavy ion collisions at RHIC (Adams et al., 2003b; Adler et al., 2003b; Back et al., 2004) and later also observed at the LHC (Aamodt et al., 2011b; Chatrchyan et al., 2012e).
The modification can be observed through measurements of jet shapes, particle composition, fragmentation, splitting functions, and many other observables. Detailed studies of jets to characterize how and why partons lose energy in the QGP require an understanding of how evidence for energy loss may be manifested in the different observables, and the effect of the large and complicated background from other processes in the collision.

Early studies of the QGP focused on particles produced through soft processes, measuring the bulk properties of the medium. With the higher cross sections for hard processes with increasing collision energy, higher luminosity delivered by colliders, and detectors better suited for jet measurements, studies of jets are enabling higher precision measurements of the properties of the QGP (Akiba et al., 2015). The 2015 nuclear physics Long Range Plan (LRP) (Aprahamian et al., 2015) highlighted the particular need to improve our quantitative understanding of jets in heavy ion collisions. Here we assess our current understanding of jet production in heavy ion collisions in order to inform what shape future studies should take in order to optimize the use of our precision detectors.

B. Jet definition

In principle, using a jet finding algorithm to cluster all of the daughter particles of a given parton will give access to the full energy and momentum of the parent parton. However, even in e^+e^- collisions, the definition of a jet is ambiguous, already at the partonic level. For instance, in e^+e^- → qq̄, the quark may emit a gluon. If this gluon is emitted at small angles relative to the quark, it is usually considered part of the jet, whereas if it is emitted at large angles relative to the parent parton, it may be considered a third jet. This ambiguity led to the Snowmass Accord, which stated that in order to be comparable, experimental and theoretical measurements had to use the same definition of a jet and that the definition should be theoretically robust (Huth et al., 1990).

The choice of which final state particles should be included in the jet is also somewhat arbitrary and more difficult in A+A collisions than in p+p collisions. Figure 3 shows an event display from a Pb+Pb collision at √s_NN = 2.76 TeV, showing the large background in the event. If a hard parton emits a soft gluon and that gluon thermalizes with the medium, are the particles from the hadronization of that soft gluon part of the jet or part of the medium? Any interaction between daughters of the parton and medium particles complicates the definition of what should belong to the jet and what should not. This ambiguity in the definition of the observable itself makes studies of jets qualitatively different from, e.g., measurements of particle yields. These aspects of jet physics need to be taken into account in the choice of a jet finding algorithm and background subtraction methods in order to be able to interpret the resulting measurements.

One of the main motivations for studies of jets in heavy ion collisions was to provide measurements of observables with a production cross section that can be calculated using pQCD, which yields a well calibrated probe. In certain limits, this is feasible, although it is worth noting that many observables are sensitive to non-perturbative effects. One such non-perturbative effect is hadronization, which can affect even the measurements of relatively simple observables such as the jet momentum spectra.
In addition to the ambiguities inherent in the definition of what is and is not a jet, there is the question of how to deal with the large background in heavy ion collisions. For example, measurements of reconstructed jets usually have a minimum momentum threshold for constituents in order to suppress the background contribution. If the corrections for these analysis techniques are insensitive to assumptions about the background and hadronization, the results may still be perturbatively calculable. However, these techniques for dealing with the background may also bias the measured jet sample, for instance by selecting gluon jets at a higher rate than quark jets. In the context of jets in a heavy ion collision, these analysis cuts are part of the definition of the jet and cannot be ignored. The interpretation of the measurement of any observable cannot be fully separated from the techniques used to measure it because both measurements and theoretical calculations of jet observables must use the same definition of a jet. As we review the literature, we discuss how the jet definitions and techniques used in experiment may influence the interpretation of the results. Even though our goal is an understanding of partonic interactions within the medium, a detailed understanding of soft particle production is necessary to understand the methods for suppressing and subtracting the contribution of these particles to jet observables.

C. Interactions with the medium

There are several models used to describe interactions between hard partons and the medium; however, a full review of theoretical calculations is beyond the scope of this paper. We briefly summarize theoretical frameworks for interactions of hard partons with the medium here and refer readers to (Burke et al., 2014; Qin and Wang, 2015) and the references therein for details. The production of final state particles in nuclear collisions is described by assuming that these processes can be factorized (Majumder, 2007a; Majumder and Van Leeuwen, 2011). The nuclear parton distribution functions x_a f_a^A(x_a) and x_b f_b^B(x_b) describe the probability of finding partons with momentum fraction x_a and x_b, respectively. The differential cross sections for partons a and b interacting with each other to produce a parton c with a momentum p can be described using pQCD. The production of a final state hadron h is then given by the fragmentation function D_c^h(z), where z = p_h/p is the fraction of the parton's momentum carried by the final state hadron. The differential cross section for the production of hadrons as a function of their transverse momentum p_T and rapidity y at leading order is then given by

dσ_h/(dy d²p_T) = Σ_{abc} ∫dx_a ∫dx_b f_a^A(x_a) f_b^B(x_b) (dσ_{ab→c}/dt̂) D_c^h(z)/(π z),    (1)

where t̂ = (p̂ − x_a P)², p̂ is the four-momentum of parton c, and P is the average momentum of a nucleon in nucleus A. The nuclear parton distribution functions and the fragmentation functions cannot be calculated perturbatively. The parton distribution functions describe the initial state of the incoming nuclei. Any differences between the nuclear and proton parton distribution functions, which describe the distribution of partons in a nucleon, are considered cold nuclear matter effects. Cold nuclear matter effects may include coherent multiple scattering within the nucleus (Qiu and Vitev, 2006), gluon shadowing and saturation (Gelis et al., 2010), or partonic energy loss within the nucleus (Bertocchi and Treleani, 1977; Vitev, 2007; Wang and Guo, 2001).
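To make the structure of Equation 1 concrete, the sketch below folds a steeply falling parton spectrum with a fragmentation function to obtain a hadron spectrum. The power-law slope and the form of D(z) are illustrative assumptions, not fitted parametrizations.

```python
import numpy as np

# Toy parton spectrum, dN/dp ~ p^-n; the slope n is an assumption.
def parton_spectrum(p, n=6.0):
    return p ** (-n)

# Toy fragmentation function, D(z) ~ (1 - z)^2 / z; arbitrary normalization.
def frag_function(z):
    return (1.0 - z) ** 2 / z

# Fragmentation convolution: dN_h/dp_T = int dz/z D(z) dN_parton/dp at p = p_T/z.
def hadron_spectrum(pt, zmin=0.05, nz=400):
    z = np.linspace(zmin, 1.0, nz)
    integrand = frag_function(z) * parton_spectrum(pt / z) / z
    return np.trapz(integrand, z)

for pt in (10.0, 20.0, 50.0):
    print(pt, hadron_spectrum(pt))
```

For a power-law parton spectrum this convolution is dominated by a narrow range of z, so a p_T-independent suppression of the parton spectrum carries over almost unchanged to the hadron spectrum; this is the sense in which hadron observables remain interpretable despite hadronization.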
Most models for interactions of partons with a QGP factorize this process and only modify the fragmentation functions (Majumder, 2007a). One goal of studies of high momentum particles in heavy ion collisions is to study the modification of these fragmentation functions, which will allow us to understand how and why partons lose energy within the QGP and to determine the microscopic structure of the medium. We note that the theoretical definition in Equation 1 associates the production of a final state hadron with a particular parton. This is not possible experimentally, so the experimentally measured quantity also referred to as a fragmentation function is not the same as D_c^h(z) in Equation 1.

Medium-induced gluon radiation (bremsstrahlung) and collisions with partons in the medium cause the partons to lose energy to the medium, often described as a modification of the fragmentation functions in Equation 1. There are four major approaches to describing these interactions. The GLV model (Djordjevic and Gyulassy, 2004; Djordjevic et al., 2005; Djordjevic and Heinz, 2008; Vitev and Gyulassy, 2002; Wicks et al., 2007) and its CUJET implementation (Buzzatti and Gyulassy, 2012) assume that the scattering centers in the medium are nearly static and that the mean free path of a parton is much larger than the color screening length in the medium. This assumption is valid for a thinner medium. The Higher Twist (Majumder, 2012) framework assumes medium modified splitting functions during fragmentation, calculated by including higher twist corrections to the differential cross sections for deep inelastic scattering off of nuclei. These corrections are enhanced by the length of the medium. The higher twist model has also been adapted to include multiple gluon emissions (Collins et al., 1985; Majumder, 2012; Majumder and Van Leeuwen, 2011). The BDMPS approach, in contrast, assumes a thick medium in which the parton undergoes many soft scatterings. The energy loss mechanism in the AMY model is similar to BDMPS, but the rate equations for partonic energy loss are solved numerically and convoluted with differential pQCD cross sections and fragmentation functions to determine the final state differential hadronic cross sections (Arnold et al., 2002; Jeon and Moore, 2005; Qin et al., 2008, 2009). This is applied in a realistic hydrodynamical environment (Qiu and Heinz, 2012; Qiu et al., 2012; Song and Heinz, 2008a,b). The MARTINI model (Qin et al., 2008; Schenke et al., 2011) is a Monte Carlo implementation of the AMY formalism which uses PYTHIA (Sjostrand et al., 2006) to describe the hard scattering and a Glauber initial state (Miller et al., 2007). Partonic energy loss occurs in the medium, taking temperature and hydrodynamical flow into account (Nonaka and Bass, 2007; Schenke et al., 2010, 2011). There are additional approaches, including embedding jets into a hydrodynamical fluid (Tachibana et al., 2017) and using the correspondence between anti-de Sitter space and conformal field theories (Gubser, 2007). There is a new description of jet quenching in which coherent parton branching plays a central role in the jet-medium interactions (Casalderrey-Solana et al., 2013; Mehtar-Tani and Tywoniuk, 2015). In this work it is assumed that the hierarchy of scales governing jet evolution allows the jet to be separated into a hard core, which interacts with the medium as a single coherent antenna, and softer structures that will interact in a color decoherent fashion.
In order for this to be valid, there must be a large separation of the intrinsic jet scale and the characteristic momentum scale of the medium. While this certainly is valid for the highest momentum jets at the LHC, it is not clear at which scales in collision energy and jet energy this assumption breaks down. We refer readers to a recent theoretical review for a more complete picture of theoretical descriptions of partonic energy loss in the QGP (Qin and Wang, 2015).

Medium-induced bremsstrahlung occurs when the medium exchanges energy, color, and longitudinal momentum with the jet. Since both the energy and longitudinal momentum of the hard partons exceed those of the medium partons, these exchanges cause the parton as a whole to lose energy. Additionally, since the hard partons have much higher transverse momentum than the medium partons, any collision will reduce the momentum of the jet as a whole. Both of these effects will broaden the resulting jet and soften the average final state particles produced from the jet. Collisional energy loss similarly broadens and softens the jet. Partonic energy loss in the medium is quantified by the jet transport coefficients: q̂ = Q²/L, where Q is the transverse momentum lost to the medium and L is the path length traversed; ê, the longitudinal momentum lost per unit length; and ê_2, the fluctuation in the longitudinal momentum per unit length (Majumder, 2013; Muller, 2013). The JET collaboration systematically compared each of these models to data to determine how well the transport properties of partons in the medium can be constrained (Burke et al., 2014). This substantially improved our quantitative understanding of partonic energy loss in the medium, but only used a small fraction of the available data. The Jetscape collaboration (Jetscape Collaboration, 2017) has formed to develop a Monte Carlo framework which enables combinations of different models of the initial state, the hydrodynamical evolution of the medium, and partonic energy loss to be used within the same framework. The goal is a Bayesian analysis comparing models to data to quantitatively determine properties of the medium, similar to (Bernhard et al., 2016; Novak et al., 2014). Jetscape will incorporate many of the available jet observables into this Bayesian analysis. Part of the motivation for this paper is to evaluate which experimental observables might provide effective input for this effort and what factors need to be considered for these comparisons.

In light of the ambiguities in the jet definition discussed above, we note that whether or not the energy is lost depends on this definition. The functional experimental definition of lost energy is any energy which no longer retains short-range correlations with the parent parton, meaning that it is further than about half a unit in pseudorapidity and azimuth. Energy which retains short-range correlations with the parent parton is still considered part of the jet, and any short-range modifications are considered modifications of the fragmentation function.

D. Separating the signal from the background

Hard partons traverse a medium which is flowing and expanding, with fluctuations in the density and temperature. Since the mean transverse momentum of unidentified hadrons in Pb+Pb collisions at √s_NN = 2.76 TeV is 680 MeV/c (Abelev et al., 2013g), sufficiently high p_T hadrons are expected to be produced dominantly in jets and production from soft processes is expected to be negligible.
It is unclear precisely at which momentum the particle yield is dominated by jet production rather than medium production. Moreover, most particles produced in jets are at low momenta even though the jet momentum itself is dominated by the contribution of a few high p_T particles. Particularly if jets are modified by processes such as recombination, strangeness enhancement, or hydrodynamical flow, these low momentum particles produced in jets may carry critical information about their parent partons' interactions with the medium. Methods employed to suppress and subtract background from jet measurements are dependent on assumptions about the background contribution and can change the sensitivity of measurements to possible medium modifications. The resulting biases in the measurements can be used as a tool rather than treated as a weakness in the measurement; however, they must first be understood.

The largest source of correlated background is due to collective flow. The azimuthal distribution of particles created in a heavy ion collision can be written as

dN/dφ = (N/2π) [1 + 2 Σ_n v_n cos(n(φ − ψ_R))],    (2)

where N is the number of particles, φ is the angle of a particle's momentum in azimuth in detector coordinates, and ψ_R is the angle of the reaction plane in detector coordinates (Poskanzer and Voloshin, 1998). The Fourier coefficients v_n are thought to be dominantly from collective flow at low momenta (Adams et al., 2005b; Adcox et al., 2005; Arsene et al., 2005b; Back et al., 2005), although Equation 2 is valid for any correlation because any distribution can be written as its Fourier decomposition. The magnitude of the Fourier coefficients v_n decreases with increasing order. The sign of the flow contribution to the first order coefficient v_1 is dependent on the incoming direction of the nuclei and changes sign when going from positive to negative pseudorapidities. For most measurements, which average over the direction of the incoming nuclei, v_1 due to flow is zero, although we note that there may be contributions to v_1 from global momentum conservation. The even v_n arise mainly from anisotropies in the average overlap region of the incoming nuclei, considering the nucleons to be smoothly distributed in the nucleus with the density depending only on the radius. The odd v_n for n > 1 are generally understood to arise from the fluctuations in the positions of the nucleons within the nucleus. These fluctuations also contribute to the even v_n, though these coefficients are dominated by the overall geometry. Jets themselves can lead to non-zero v_n through jet quenching, complicating background subtraction for jet studies. At high momenta (p_T ≳ 5-10 GeV/c) the v_n are thought to be dominated by jet production. Furthermore, the v_n fluctuate event-by-event even for a given centrality class. This means that independent measurements, which differ in their sensitivity to jets, averaged over several events cannot be used blindly to subtract the correlated background due to flow.

To measure jets, experimentalists have to make some assumptions about the interplay between hard and soft particles and about the form of the background. Without such assumptions, experimental measurements are nearly impossible. Some observables are more robust to assumptions about the background than others; however, these measurements are not always the most sensitive to energy loss mechanisms or interactions of jets with the medium. An understanding of data requires an understanding of the measurement techniques and assumptions about the background.
We therefore discuss the measurement techniques and their consequences in great detail in Section II before discussing the measurements themselves in Section III.

II. EXPERIMENTAL METHODS

This section focuses on different methods for probing jet physics, including inclusive hadron measurements, dihadron correlations, jet reconstruction algorithms, and jet-particle correlations, and a brief description of relevant detectors. In addition to explaining the measurement details and how the effect of the background on the observable is handled for each, this section highlights strengths and weaknesses of these different methods which are important for interpreting the results. We emphasize background subtraction and suppression techniques because of the potential biases they introduce.

[Caption fragment: ... TeV from (Srivastava et al., 2016), assuming that T_c = 155 MeV from the extrapolation of the chemical freeze-out temperature using comparisons of data to statistical models in (Floris, 2014).]

A. Detectors

Measurements of heavy ion collisions often focus on midrapidity, with precision, particle identification, and tracking in a high multiplicity environment. Some measurements, such as those of single particles, are not significantly impacted by a limited acceptance, while the acceptance corrections for reconstructed jets are more complicated when the acceptance is limited. We briefly summarize the colliders, RHIC and the LHC, and the most important features of each of their detectors for measurements of jets, referring readers to other publications for details. The properties of the medium are slightly different at RHIC and the LHC, with the LHC reaching the highest temperatures and energy densities and RHIC providing the widest range of collision energies and systems. The relevant properties of each collider are summarized in Table I. Some properties of each detector are summarized in Table II.

The BRAHMS (Adamczyk et al., 2003), PHENIX (Adcox et al., 2003), and PHOBOS (Back et al., 2003) experiments have completed data taking at RHIC. The STAR (Ackermann et al., 2003) experiment is taking data at RHIC and sPHENIX (Adare et al., 2015) is a proposed upgrade at RHIC to be built in the existing PHENIX hall. STAR has full azimuthal acceptance and nominally covers pseudorapidities |η| < 1 with a silicon inner tracker and a time projection chamber (TPC), surrounded by an electromagnetic calorimeter (Ackermann et al., 2003). An inner silicon detector was installed before the 2014 run. Particle identification is possible both through energy loss in the TPC and a time of flight (TOF) detector. STAR also has forward tracking and calorimetry. The PHENIX central arms cover |η| < 0.35 and are split into two 90° azimuthal regions (Adcox et al., 2003). They consist of drift and pad chambers for tracking, a TOF for particle identification, and precision electromagnetic calorimeters. There are both midrapidity and forward silicon detectors for precision tracking and forward electromagnetic calorimeters. PHENIX also has two muon arms at forward rapidities (−2.25 < η < −1.15 and 1.15 < η < 2.44) with full azimuthal coverage. The PHOBOS detector consists of a large acceptance scintillator with wide acceptance for multiplicity measurements (|η| < 3.2) and two spectrometer arms capable of both particle identification and tracking covering 0 < |η| < 2 and split into two 11° azimuthal regions (Back et al., 2003).
The BRAHMS detector has a spectrometer arm capable of particle identification with wide rapidity coverage (0 ≲ y ≲ 4) (Adamczyk et al., 2003). sPHENIX will have full azimuthal acceptance and acceptance in pseudorapidity of approximately |η| < 1 with a TPC combined with precision silicon tracking and both electromagnetic and hadronic calorimeters (Adare et al., 2015). sPHENIX is optimized for measurements of jets and heavy flavor at RHIC.

The LHC has four main detectors: ALICE, ATLAS, CMS, and LHCb. ALICE, which is primarily devoted to studying heavy ion collisions at the LHC, has a TPC, silicon inner tracker, and TOF covering |η| < 0.9 and full azimuth (Aamodt et al., 2008). It has an electromagnetic calorimeter (EMCal) covering |η| < 0.7 with two azimuthal regions covering 107° and 60° in azimuth, and a forward muon arm. Both ATLAS and CMS are multipurpose detectors designed to precisely measure jets, leptons, and photons produced in p+p and heavy ion collisions. The ATLAS detector's precision tracking is performed by a high-granularity silicon pixel detector, followed by the silicon microstrip tracker and complemented by the transition radiation tracker for the |η| < 2.5 region. The hadronic and electromagnetic calorimeters provide hermetic azimuthal coverage in the |η| < 4.9 range. The muon spectrometer surrounds the calorimeters, covering |η| < 2.7 with full azimuthal coverage (Aad et al., 2008). The main CMS detectors are silicon trackers which measure charged particles within the pseudorapidity range |η| < 2.5, an electromagnetic calorimeter partitioned into a barrel region (|η| < 1.48) and two endcaps (|η| < 3.0), and hadronic calorimeters covering the range |η| < 5.2. All CMS detectors listed here have full azimuthal coverage (Chatrchyan et al., 2008). LHCb focuses on measurements of charm and beauty at forward rapidities. The LHCb detector consists of a single spectrometer covering 1.6 < |η| < 4.9 and full azimuth (Alves et al., 2008). This spectrometer arm is capable of tracking and particle identification; however, tracking is limited to low multiplicity collisions.

B. Centrality determination

The impact parameter b, defined as the transverse distance between the centers of the two colliding nuclei, cannot be measured directly. Glancing interactions with a large impact parameter generally produce fewer particles, while collisions with a small impact parameter generally produce more particles, with the number of final state particles increasing monotonically with the overlap volume between the nuclei. This correlation can be used to define the collision centrality as a fraction of the total cross section. High multiplicity events have a low average b and low multiplicity events have a large average b. The former are called central collisions and the latter are called peripheral collisions. In large collision systems, the variations in the number of particles produced due to fluctuations in the energy production by individual soft nucleon-nucleon collisions are small compared to the variations due to the impact parameter. The charged particle multiplicity, N_ch, can then be used to constrain the impact parameter. Usually the correlation between the impact parameter and the multiplicity is determined using a Glauber model (Miller et al., 2007).
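In practice, as elaborated below, centrality classes are percentile slices of the measured multiplicity distribution. A minimal sketch of this binning, with a made-up N_ch distribution standing in for data, is:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a measured N_ch distribution; a real analysis fits data with
# a Glauber-based parametrization rather than drawing from this toy gamma.
n_ch = rng.gamma(shape=2.0, scale=400.0, size=100_000).astype(int)

# Centrality is a fraction of the total cross section, ordered from the
# highest multiplicities down: the top 10% in N_ch is the 0-10% class.
edges = np.percentile(n_ch, [0, 20, 40, 60, 80, 90, 100])
labels = ["80-100%", "60-80%", "40-60%", "20-40%", "10-20%", "0-10%"]
classes = np.digitize(n_ch, edges[1:-1])  # 0 (peripheral) .. 5 (central)
for i, label in enumerate(labels):
    print(label, (classes == i).sum())
```

The anchor-point uncertainty discussed below enters through the lowest edge, where the trigger efficiency distorts the measured distribution.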
The distribution of nucleons in the nucleus is usually approximated by a Woods-Saxon (Fermi) distribution, and the multiplicity is assumed to be a function of the number of participating nucleons (N_part) and the number of binary interactions between nucleons (N_bin). The experimentally observed multiplicity is fit to determine a parametric description of the data and the data are binned by the fraction of events. For example, the 10% of all events with the highest multiplicity are referred to as 0-10% central. There are a few variations in technique which generally lead to consistent results (Abelev et al., 2013c). Centralities determined assuming that the distribution of impact parameters at a fixed multiplicity is Gaussian are consistent with those using a Glauber model (Das et al., 2017).

The largest source of uncertainty from centrality determination in heavy ion collisions is due to the normalization of the multiplicity distribution at low multiplicities. In general an experiment identifies an anchor point in the distribution, such as identifying the N_ch where 90% of all collisions produce at least that multiplicity. Because the efficiency for detecting events with low multiplicity is low, the distribution is not measured well for low N_ch, so identification of this anchor point is model dependent. This inefficiency does not directly impact measurements of jets in 0-80% central collisions because these events are typically high multiplicity; however, it can lead to a significant uncertainty in the correct centrality. This uncertainty is largest at low multiplicities, corresponding to more peripheral collisions.

As the phenomena observed in heavy ion collisions have been observed in increasingly smaller systems, this approach to determining centrality has been applied to these smaller systems as well. While the term "centrality" is still used, this is perhaps better understood as event activity, since the correlation between multiplicity and impact parameter is weaker in these systems and other effects may become relevant (Alvioli et al., 2014, 2016; Alvioli and Strikman, 2013; Armesto et al., 2015; Bzdak et al., 2016; Coleman-Smith and Muller, 2014). The interpretation of the "centrality" dependence in small systems should therefore be done carefully.

C. Inclusive hadron measurements

Single particle spectra at high momenta, which are dominated by particles resulting from hard scatterings, can be used to study jets. To quantify any modifications to the hadron spectra in nucleus-nucleus (A+A) collisions, the nuclear modification factor was introduced. The nuclear modification factor in A+A collisions is defined as

R_AA = [d²N_AA/(dη dp_T)] / [(N_bin/σ_NN) d²σ_pp/(dη dp_T)],    (3)

where η is the pseudorapidity, p_T is the transverse momentum, N_bin is the average number of binary nucleon-nucleon collisions for a given range of impact parameter, and σ_NN is the integrated nucleon-nucleon cross section. N_AA and σ_pp in this context are the yield in A+A collisions and the cross section in p+p collisions for a particular observable. If nucleus-nucleus collisions were simply a superposition of nucleon-nucleon collisions, the high p_T particle cross section would scale with the number of binary collisions and therefore R_AA = 1. An R_AA < 1 indicates suppression and an R_AA > 1 indicates enhancement. R_AA is often measured as a function of p_T and centrality class.
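Operationally, Equation 3 is a bin-by-bin ratio of spectra. A minimal sketch, with placeholder spectra and an assumed N_bin rather than measured inputs:

```python
import numpy as np

def r_aa(yield_aa, cross_section_pp, n_bin, sigma_nn):
    """Equation 3 per p_T bin: the A+A per-event yield divided by the
    binary-scaled p+p cross section."""
    return yield_aa / ((n_bin / sigma_nn) * cross_section_pp)

pt = np.linspace(5.0, 50.0, 10)          # p_T bin centers, GeV/c
pp = pt ** (-6.0)                        # placeholder p+p spectrum (arb. units)
n_bin, sigma_nn = 1000.0, 42.0           # assumed values for a central class

# Build a toy A+A yield with a flat factor-5 suppression for illustration.
aa = 0.2 * (n_bin / sigma_nn) * pp
print(r_aa(aa, pp, n_bin, sigma_nn))     # -> 0.2 in every bin
```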
Measurements of inclusive hadron R_AA are relatively straightforward as they only require measuring the single particle spectra and a calculation of the number of binary collisions for each centrality class based on a Glauber model (Miller et al., 2007). Theoretically, hadron R_AA can be difficult to interpret, particularly at low momenta, because different physical processes that are not calculable in pQCD, such as hadronization, can change the interpretation of the result. Interpretation of R_AA usually focuses on high p_T, where calculations from perturbative QCD (pQCD) are possible. An alternative to R_AA is R_CP, where peripheral heavy ion collisions are used as the reference instead of p+p collisions:

R_CP = [N_bin^peri d²N_AA^cent/(dη dp_T)] / [N_bin^cent d²N_AA^peri/(dη dp_T)],    (4)

where cent and peri denote the values of N_bin and N_AA for central and peripheral collisions, respectively. This is typically done either when there is no p+p reference available or when the p+p reference has much larger uncertainties than the A+A reference. R_CP has the advantage that other nuclear effects may be present in both the central and peripheral cross sections and cancel in the ratio, and that these collisions are recorded at the same time and thus have the same detector conditions. However, there can be QGP effects in peripheral collisions, which can make the interpretation difficult. The pQCD calculations used to interpret these results are sensitive in principle to hadronization effects; however, if the R_AA of hard partons does not have a strong dependence on p_T, the R_AA of the final state hadrons will not have a strong dependence on p_T. R_AA will therefore be relatively insensitive to the effects of hadronization and more theoretically robust.

D. Dihadron correlations

A hard parton scattering usually produces two partons that are separated by 180° in the transverse plane (commonly stated as back-to-back). In a typical dihadron correlation study (Aamodt et al., 2012; Abelev et al., 2009b; Adler et al., 2003a, 2006d; Alver et al., 2010), a high-p_T hadron is identified and used to define the coordinate system because its momentum is assumed to be a good proxy for the jet axis of the parton it arose from. This hadron is called the trigger particle. The azimuthal angle of other hadrons' momenta in the event is calculated relative to the momentum of this trigger particle. These hadrons are commonly called the associated particles. This is illustrated schematically in Figure 4. The associated particles are typically restricted to a fixed momentum range, higher than the p_T of most tracks in the event and lower than the momenta of trigger particles. The distribution of associated particles relative to the trigger particle can be measured in azimuth (∆φ), pseudorapidity (∆η), or both. Figure 5 shows a sample dihadron correlation in ∆φ and ∆η and its projection onto ∆φ for trigger momenta 10 < p_T^t < 15 GeV/c within pseudorapidities |η| < 0.5 and associated particles within |η| < 0.9 with momenta 1.0 < p_T^a < 2.0 GeV/c in p+p collisions at √s = 2.76 TeV in PYTHIA (Sjostrand et al., 2006). The peak near 0°, called the near-side, is narrow in both ∆φ and ∆η and results from associated particles from the same parton as the trigger particle. The peak near 180°, called the away-side, is narrow only in ∆φ and is roughly independent of pseudorapidity. This peak arises from associated particles produced by the parton opposing the one which generated the trigger particle.
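The correlation function itself is a histogram of pair angle differences. A minimal sketch of building the per-trigger ∆φ distribution (the particle arrays and selections here are hypothetical placeholders, not an experimental implementation):

```python
import numpy as np

def delta_phi_correlation(trig_phi, assoc_phi, nbins=36):
    """Histogram Delta-phi for all trigger-associated pairs, wrapped into
    [-pi/2, 3pi/2) so the near side (0) and away side (pi) are both visible.
    Returns the per-trigger yield and the bin edges."""
    dphi = trig_phi[:, None] - assoc_phi[None, :]
    dphi = np.mod(dphi + np.pi / 2.0, 2.0 * np.pi) - np.pi / 2.0
    counts, edges = np.histogram(
        dphi.ravel(), bins=nbins, range=(-np.pi / 2.0, 3.0 * np.pi / 2.0))
    return counts / max(len(trig_phi), 1), edges

# Hypothetical angles; in an analysis these would be tracks passing the
# trigger (e.g. 10 < p_T < 15 GeV/c) and associated (1 < p_T < 2 GeV/c) cuts.
rng = np.random.default_rng(0)
trig = rng.uniform(0.0, 2.0 * np.pi, size=5)
assoc = rng.uniform(0.0, 2.0 * np.pi, size=200)
per_trigger_yield, edges = delta_phi_correlation(trig, assoc)
```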
The partons are back-to-back in the frame of the partons, but the rest frame of the partons is not necessarily the same as the rest frame of the incoming nuclei because the incoming partons may not carry the same fraction of the parent nucleons' momentum, x. Since most of the momenta of both the partons and the nucleons are in the direction of the beam (which is universally taken to be the z axis), a difference in pseudorapidity is observed, while the influence on the azimuthal position is negligible. This causes the away-side to be broad in ∆η without requiring modified fragmentation or interaction with the medium, as evident in Figure 5.

Background subtraction methods

Dihadron correlations typically have a low signal to background ratio, often less than 1:25. The raw signal in dihadron correlations is typically assumed to arise from only two sources, particles from jets and particles from the underlying event, which are correlated with each other due to flow. The production mechanisms of the signal and the background are assumed to be independent so they can be factorized. These assumptions are called the two source model (Adler et al., 2006b). The correlation of two particles in the background due to flow is given by (Adler et al., 2003a; Bielcikova et al., 2004)

dN/d∆φ = B [1 + Σ_n 2 v_n^t v_n^a cos(n∆φ)],    (5)

where B is a constant which depends on the normalization and the multiplicity of trigger and associated particles in an event, the v_n^t are the v_n for the trigger particle, the v_n^a are the v_n for the associated particle, and ∆φ is the difference in azimuthal angle between the associated particle and the trigger. The v_n for the trigger particle may arise either from flow, if the trigger particle is not actually from a jet, or from jet quenching, since the path length dependence of partonic energy loss leads to a suppression of jets out-of-plane. Because dihadron correlations are typically measured by averaging over positive and negative pseudorapidities, the average v_1 due to flow is zero and the n = 1 term is usually omitted. Global momentum conservation also leads to a v_1 signal which is approximately inversely proportional to the particle multiplicity (Borghini et al., 2000). The momentum conservation term is typically assumed to be negligible, which may be valid for higher multiplicity events. The pseudorapidity range for both trigger and associated particles is typically restricted to a region where the v_n do not change dramatically so that the pseudorapidity dependence of dN/dφ is negligible. The azimuthal dependence of any additional sources of long range correlations could be expanded in terms of their Fourier coefficients without loss of generality.

There are two further assumptions commonly used in order to subtract this background: that the appropriate v_n are the same as the v_n measured in other analyses, and that there is a region near ∆φ ≈ 1 rad where the signal is zero. The latter assumption is called the Zero-Yield-At-Minimum (ZYAM) method (Adams et al., 2005a). Early studies of dihadron correlations fit the data near ∆φ ≈ 1 rad to determine the background level (Adams et al., 2004a; Adare et al., 2007b; Adler et al., 2003a, 2006c). Later studies typically use a few points around the minimum (Adler et al., 2006b; Agakishiev et al., 2010; Aggarwal et al., 2010). An alternative to ZYAM for determining the background level, B in Equation 5, is the absolute normalization method (Sickles et al., 2010).
This method makes no assumption about the background level based on the shape of the underlying background but rather estimates the level of combinatorial pairs from the mean number of trigger and mean number of associated particles in all events as a function of event multiplicity.

It has been suggested that Hanbury-Brown-Twiss (HBT) correlations (Lisa and Pratt, 2008; Lisa et al., 2005), quantum correlations between identical particles from the same source, may contribute to the near-side peak in some momentum regions. If the momenta of the trigger and associated particles are sufficiently different, these contributions are expected to be negligible. Distinguishing resonances from jet-like correlations is more difficult. A high momentum resonance can itself be considered a jet or part of a jet. The appropriate classification for lower momentum resonances is less clear, but functionally any short range correlations are considered part of the signal in dihadron correlations.

The background is then dominated by contributions from flow. However, this does not mean that the v_n measured in other analyses are necessarily the Fourier coefficients of the background for dihadron correlations. Methods for measuring v_n have varying sensitivities to non-flow (such as jets) and fluctuations (Voloshin et al., 2008). Fluctuations in v_n may either increase or decrease the effective v_n, depending on their physical origin and its correlation with jet production. The correct v_n in Equation 5 is also complicated by proposed decorrelations between the reaction planes for soft and hard processes, which would change the effective v_n (Aad et al., 2014a; Jia, 2013). A recent method uses the reaction plane dependence of the background in Equation 5 to extract the background level and shape from the correlation itself.

The majority of measurements of dihadron correlations in heavy ion collisions in the literature omit odd v_n since these studies were done before the odd v_n were observed and understood to arise due to collective flow. The first direct observation of the odd v_n was in high-p_T dihadron correlations, where subtraction of only the even v_n led to two structures called the ridge (on the near-side) (Abelev et al., 2009b; Alver et al., 2010) and the shoulder or Mach cone (on the away-side) (Abelev et al., 2009b; Adare et al., 2008a,d; Afanasiev et al., 2008; Agakishiev et al., 2010). This means that the majority of studies of dihadron correlations at low and intermediate momenta (p_T ≲ 3 GeV/c) do not take the odd v_n into account and therefore include distortions due to flow. Exceptions are studies which used the ∆η dependence on the near-side to subtract the ridge and focused on the jet-like correlation (Abelev et al., 2009b, 2010a, 2016; Agakishiev et al., 2012c). An understanding of the low momentum jet components is important because many of the medium modifications of jets manifest as differences in distributions at low momenta. While some of the iconic RHIC results showing jet quenching did not include odd v_n (Adams et al., 2004a) and the complex structures at low and intermediate momenta are now understood to arise due to flow rather than jets, some of the broad conclusions of these studies are robust, and studies at sufficiently high momenta (p_T ≳ 3 GeV/c) are still valid because the impact of the higher order v_n is negligible. Section III focuses on results robust to the omission of the odd v_n and on more recent results.
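A minimal sketch of the two source subtraction: build the flow shape of Equation 5, normalize it with ZYAM at the minimum of the measured correlation, and subtract. The v_n values and the toy correlation below are placeholders, not measurements:

```python
import numpy as np

def flow_shape(dphi, vn_trig, vn_assoc):
    """Flow background of Equation 5 with B = 1:
    1 + sum_n 2 v_n^t v_n^a cos(n dphi)."""
    return 1.0 + sum(2.0 * vt * vn_assoc[n] * np.cos(n * dphi)
                     for n, vt in vn_trig.items())

def zyam_subtract(dphi, signal, vn_trig, vn_assoc):
    """Scale the flow shape so the subtracted yield is zero at its minimum
    (Zero-Yield-At-Minimum), then subtract it from the raw correlation."""
    shape = flow_shape(dphi, vn_trig, vn_assoc)
    b = np.min(signal / shape)      # ZYAM normalization of B in Equation 5
    return signal - b * shape

# Placeholder inputs: near- and away-side Gaussian peaks sitting on a
# v2/v3-modulated background with signal:background well below 1.
dphi = np.linspace(-np.pi / 2.0, 3.0 * np.pi / 2.0, 72)
vn_t, vn_a = {2: 0.10, 3: 0.05}, {2: 0.08, 3: 0.04}
raw = (10.0 * flow_shape(dphi, vn_t, vn_a)
       + np.exp(-dphi**2 / 0.08)
       + 0.5 * np.exp(-(dphi - np.pi)**2 / 0.3))
jet_like = zyam_subtract(dphi, raw, vn_t, vn_a)
```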
E. Reconstructed jets

A jet is defined by the algorithm used to group final state particles into jet candidates. In QCD any parton may split into two nearly collinear partons, each carrying a fraction of the energy and moving in approximately the same direction. This is a difficult process to quantify theoretically and leads to divergences in theoretical calculations. A robust jet finding algorithm would find the same jet with the same p_T regardless of the details of the fragmentation and would thus be collinear safe. Additionally, QCD allows for an infinite number of very soft partons to be produced during the fragmentation of the parent parton. All experiments have low momentum thresholds for their acceptance, so these particles cannot generally be observed, and the production of soft partons leads to theoretical divergences as well. A robust jet finding algorithm will find the same jets even in the presence of a large number of soft partons and would thus be infrared safe. In order for the jet definition to be robust, the jet-finding algorithm must be both infrared and collinear safe (Salam, 2010).

Jet finding algorithms are generally characterized by a resolution parameter. In the case of a conical jet, this is the radius R of the jet, with constituents lying within

∆R = √(∆φ² + ∆η²) < R,

where ∆φ is the distance from the jet axis in azimuth and ∆η is the distance from the jet axis in pseudorapidity. A conical jet is symmetric in ∆φ and ∆η, although it is not theoretically necessary for jets to be symmetric. We will focus the discussion on conical jets, since they are the most intuitive to understand. The most common jet-finding algorithm in heavy ion collisions, anti-k_T, usually reconstructs conical jets. The majority of jet measurements include corrections up to the energy of all particles in the jet, whether or not they are observed directly. The ALICE experiment also measures charged jets, which are corrected only up to the energy contained in charged constituents.

We emphasize that a measurement of a jet is not a direct measurement of a parton. A jet is a composite object comprising several final state hadrons. If the jet reconstruction algorithm applied to theoretical calculations and data is the same, experimental measurements of jets can be comparable to theoretical calculations of jets. However, even theoretically, it is unclear which final state particles should be counted as belonging to one parton. What the original parton's energy and momentum were before it fragmented is therefore an ill-posed question. The only valid comparisons between theory and experiment are between jets composed of final state hadrons and reconstructed with the same algorithm. This understanding was the conclusion of the Snowmass Accord (Huth et al., 1990). Ideally both the jet reconstruction algorithms and the treatment of the combinatorial background in heavy ion collisions would also be the same for theory and experiment.

Jet-finding algorithms

Infrared and collinear safe sequential recombination algorithms such as k_T, anti-k_T, and Cambridge/Aachen (CAMB) are encoded in FastJet (Cacciari et al., 2008a,b, 2011, 2012; Salam, 2010). The FastJet (Cacciari et al., 2012) framework takes advantage of advanced computing algorithms in order to decrease computational times for jet-finding. This is essential for jet reconstruction in heavy ion collisions due to the large combinatorial background.
Due to the ubiquity of the anti-k_T jet-finding algorithm in studies of jets in heavy ion collisions, it is worth describing this algorithm in detail. The anti-k_T algorithm is a sequential recombination algorithm, which means that a series of steps for grouping particles into jet candidates is repeated until all particles in an event are included in a jet candidate. The steps are:

1. For every particle i, compute the beam distance d_i = 1/p_T,i², and for every pair of particles i and j, compute

d_ij = min(1/p_T,i², 1/p_T,j²) ∆R_ij²/R², with ∆R_ij² = (η_i − η_j)² + (φ_i − φ_j)²,

where p_T,i and p_T,j are the momenta of the particles, η_i and η_j are the pseudorapidities of the particles, and φ_i and φ_j are the azimuthal angles of the particles.

2. Find the minimum of the d_ij and d_i. If this minimum is a d_ij, combine these particles into one jet candidate, adding their energies and momenta, and return to the first step.

3. If the minimum is a d_i, this is a final state jet candidate. Remove it from the list and return to the first step. Iterate until no particles remain.

The original implementation of the anti-k_T algorithm used rapidity rather than pseudorapidity (Cacciari et al., 2008a); however, in practice most experiments cannot identify particles to high momenta and the difference is negligible at high momenta, so pseudorapidity is used in practice. The anti-k_T algorithm has a few notable features for jet reconstruction in heavy ion collisions. Since d_ij is smallest for pairs of high-p_T particles, the anti-k_T algorithm starts clustering high-p_T particles into jets first and forms a jet around these particles. The anti-k_T algorithm creates jets which are approximately symmetric in azimuth and pseudorapidity, at least for the highest energy jets.

Particularly in heavy ion collisions, it must be recognized that the "jets" from a jet-finding algorithm are not necessarily generated by hard processes. Since all final state particles are grouped into jet candidates, some jet candidates will comprise only particles whose production was not correlated through a hard process but which randomly happen to be in the same region in azimuth and pseudorapidity. These jet candidates are called fake or combinatorial jets. Particles that are correlated through a hard process will be grouped into jet candidates, which will also contain background particles. Care must therefore be used when interpreting the results of a jet-finding algorithm, as it is possible to have jet candidates in an analysis that come from processes that may not be included in the calculation used to interpret the results.

There are two important additional points to be made with regard to jet-finding algorithms as applied to heavy ion collisions. While jet-finding algorithms have been optimized for measurements in small systems such as e^+e^- and p+p collisions, these algorithms are computationally efficient and well-defined both theoretically and experimentally. Although we may want to consider how we use these algorithms, there is no need for further development of jet-finding algorithms for use in heavy ion collisions. However, there is a difference between jet-finding in principle and in practice. While these jet-finding algorithms are infrared and collinear safe if all particles are input into the jet-finding algorithm, most experimental measurements restrict the momenta and energies of the tracks and calorimeter clusters input into the jet-finding algorithms. Some apply other selection criteria to the population of jets, such as requiring a high momentum track, which are not infrared or collinear safe.
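A minimal, unoptimized sketch of the recombination steps above (real analyses use FastJet, which implements the same algorithm far more efficiently; scalar-p_T recombination and a p_T-weighted axis are simplifications of full four-vector addition):

```python
import numpy as np

def antikt(particles, R=0.4):
    """Toy anti-kT clustering. particles: list of (pT, eta, phi).
    Returns jets as (pT, eta, phi) tuples."""
    objs = [list(p) for p in particles]
    jets = []
    while objs:
        best, is_pair = None, False
        for i, (pti, etai, phii) in enumerate(objs):
            di = pti ** (-2)                      # beam distance d_i
            if best is None or di < best[0]:
                best, is_pair = (di, i, None), False
            for j in range(i + 1, len(objs)):
                ptj, etaj, phij = objs[j]
                dphi = np.mod(phii - phij + np.pi, 2.0 * np.pi) - np.pi
                dr2 = (etai - etaj) ** 2 + dphi ** 2
                dij = min(pti ** (-2), ptj ** (-2)) * dr2 / R ** 2
                if dij < best[0]:
                    best, is_pair = (dij, i, j), True
        _, i, j = best
        if is_pair:                               # merge particles i and j
            pti, etai, phii = objs[i]
            ptj, etaj, phij = objs[j]
            pt = pti + ptj
            objs[i] = [pt, (pti * etai + ptj * etaj) / pt,
                       (pti * phii + ptj * phij) / pt]
            del objs[j]
        else:                                     # i becomes a final jet
            jets.append(tuple(objs.pop(i)))
    return jets

# Collinear-safety illustration: splitting the hard particle into two
# collinear halves leaves the leading jet unchanged.
event = [(100.0, 0.0, 0.0), (5.0, 0.1, 0.2), (2.0, -2.0, 3.0)]
split = [(50.0, 0.0, 0.0), (50.0, 0.0, 0.0)] + event[1:]
print(max(antikt(event)), max(antikt(split)))
```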
These techniques are not necessarily avoidable, especially in the high background environment of heavy ion collisions; however, they must be considered when interpreting the results.

Dealing with the background

Combinatorial jets and distortions in the reconstructed jet energy due to background need to be taken into account in order to interpret a measured observable. This can be done either in the measurement, or in theoretical calculations that are compared to the measurement. The latter is particularly difficult in a heavy ion environment because the background has contributions from all particle production processes. While it is impossible to know which particles in a jet candidate come from hard processes and which come from the background, and indeed it is even ambiguous to make this distinction on a theoretical level, differences between particles in the signal and the background on average can be used to reduce the impact of particles from the background and to calculate the impact of the remaining background on an ensemble of jet candidates. As mentioned in Section I, the average momentum of particles in the background is much lower than that of those in the signal. Figure 6 shows a comparison of HYDJET to STAR data (Lokhtin et al., 2009b) and the particles produced by hard and soft processes in HYDJET. At sufficiently high p_T, particle production is dominated by hard processes. HYDJET has been tuned to match fluctuations and v_n from heavy ion collisions, so this qualitative conclusion should be robust. Jets themselves can contribute to background for the measurement of other jets; however, the probability of multiple jets overlapping spatially and fragmenting into several high momentum particles is low. Therefore, introducing a minimum momentum for particles to be used in jet-finding reduces the number of background particles in the jet candidates. This also reduces the number of combinatorial jets, since there are very few high momentum particles which were not created from a hard process. While this selection criterion reduces the background contribution, it is not collinear safe. Additionally, as most of the modification of the jet fragmentation function is observed for constituents with p_T < 3 GeV/c, this could remove the modification signature for particular observables.

The effect of the background can also be reduced by focusing on smaller jets or higher energy jets. For a conical jet, the jet area is A_jet = πR². The average number of background particles in the jet candidate is proportional to the area. The background energy scales with the area of the jet but is independent of the jet energy (assuming that the signal and background are independent), so the fractional change in the reconstructed jet energy due to background is smaller for higher energy jets, as the majority of the jet energy is focused in the core of the jet. Furthermore, in elementary collisions, the distribution of final state particles in the jet as a function of the fraction of the jet energy carried by the particle is approximately independent of the jet energy. This means that the difference in the average momentum for signal particles versus background particles is larger for high energy jets. Since jets that interact with the medium are expected to lose energy and become broader, studies of high momentum, narrow jets alone cannot give a complete picture of partonic energy loss in the QGP.
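The area scaling can be made quantitative. For a background density ρ per unit (η, φ) area, the expected background under a conical jet is ρπR², independent of the jet energy, so the fractional distortion falls with jet p_T and grows quickly with R. A worked example with an assumed ρ (a placeholder of roughly the magnitude relevant for central heavy ion collisions, not a measured value):

```python
import numpy as np

rho = 100.0  # assumed background density, GeV/c per unit area (placeholder)

for R in (0.2, 0.3, 0.4):
    bkg = rho * np.pi * R ** 2           # expected background in the cone
    for jet_pt in (20.0, 100.0):
        print(f"R={R:.1f}  jet pT={jet_pt:5.0f} GeV/c  "
              f"background={bkg:5.1f} GeV/c  fraction={bkg / jet_pt:.2f}")
```

With these numbers the background under an R = 0.4 cone exceeds the energy of a 20 GeV/c jet, while it is a modest correction for a 100 GeV/c jet in an R = 0.2 cone, which is why small radii and high jet energies are favored in central collisions.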
Furthermore, even in p+p collisions, theoretical calculations are more difficult for jets with smaller cone sizes because they are sensitive to the details of hadronization (Abelev et al., 2013d). The fraction of combinatorial jet candidates can also be reduced by requiring additional evidence of a hard process, such as requiring that the candidate jet has at least one particle above a minimum threshold, requiring that the jet candidate has a hard core, or identifying a heavy flavor component within the jet candidate. We note that the distinction between fake jets and the background contribution in jets from hard processes is ambiguous, particularly for low momentum jets; however, the corrections for these effects are generally handled separately. Below we review methods for addressing the impact of background particles on the jet energy and corresponding methods for dealing with any remaining combinatorial jets. Each of these methods has strengths and weaknesses, and each may lead to biases in the surviving jet population.

[Figure 6: Comparison of HYDJET (Lokhtin et al., 2009a) calculations to STAR data (Abelev et al., 2006). Particle production in HYDJET is separated into contributions from hard and soft processes, showing that at sufficiently high momenta particle production is dominated by hard processes.]

There are five classes of methods for background subtraction in the four experiments which have published jet measurements in heavy ion collisions. ALICE and STAR use measurements of the average background energy/momentum density in the event to subtract the background contribution from jet candidates. ATLAS uses an iterative procedure, first finding jet candidates, then omitting them from the calculation of the background energy distribution, and then using this background distribution to find new jet candidates. CMS subtracts the background before jet finding, omitting jet candidates from the background estimate. In addition, an event mixing method was recently applied to STAR data to estimate the average contribution from the background to both the jet energy and combinatorial jets. Constituent subtraction refers to corrections that account for the background before jet finding. Each of these is described in greater detail below.

ALICE/STAR

In this method the background contribution to a jet candidate is assumed to be proportional to the area of that candidate. The area of each jet is estimated by filling the event with many very soft, small-area particles (ghost particles), rerunning the jet-finder, and then counting how many ghosts are clustered into a given jet. The background energy/momentum density per unit area (ρ) is measured by either using randomly oriented jet cones or the k T jet-finding algorithm and calculating the momentum over the area of the cone or k T jet. The median of the energy per unit area of this collection is used in order to reduce the impact of real jets in the event on the determination of the background density, and the two highest energy jets in the event are omitted from the distribution of jets used to determine the background energy density. Since the background has a p T modulation that is correlated with the reaction plane, an event plane dependent ρ can be determined as well (Adam et al., 2016b). This method was proposed in (Cacciari et al., 2008b) for measurements in p+p collisions under conditions with high pile-up, and its feasibility in heavy ion collisions was demonstrated in (Abelev et al., 2012a).
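A minimal sketch of this area-based correction, assuming the k T jets and their ghost-estimated areas are already available (the function names are ours; production analyses use FastJet's area and background-median utilities):

```python
import statistics

def estimate_rho(kt_jets, n_exclude=2):
    """Median pt/area of kT jets, excluding the n_exclude hardest jets
    so that true jets do not inflate the background estimate."""
    soft = sorted(kt_jets, key=lambda j: j[0], reverse=True)[n_exclude:]
    return statistics.median(pt / area for pt, area in soft)

def subtract_area(pt_raw, area, rho):
    """Area-based jet energy correction: pt_corr = pt_raw - rho * A_jet."""
    return pt_raw - rho * area
```

Taking the median rather than the mean, and excluding the hardest jets, is what keeps real jets from biasing ρ upward.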
The strength of this method is that it can be used even for jets clustered from low momentum constituents. However, the energy of an individual jet is not known precisely: only the average background contribution is subtracted, while the background itself fluctuates from region to region, which smears the measurement of the jet energy and momentum. Additionally, measurements of the background energy density can include some contribution from real jets. Subtracting the average contribution to a jet candidate due to the background may also not fully take into account the tendency of jet-finding algorithms to form combinatorial jets around hot spots in the background.

ATLAS

We outline the approach in (Aad et al., 2013b); we note that the details of the analysis technique are optimized for each observable. ATLAS measures both calorimeter and track jets. Track jets are reconstructed using charged tracks with p T > 4 GeV/c. The high momentum constituent cut strongly suppresses combinatorial jets, and ATLAS estimates that a maximum of only 4% of all R = 0.4 anti-k T track jet candidates in 0-10% central Pb+Pb collisions contain a 4 GeV/c background track. For calorimeter jet measurements, ATLAS estimates the average background energy per unit area and the v 2 using an iterative procedure (Aad et al., 2013b). In the first step, jet candidates with R = 0.2 are reconstructed. The background energy is estimated using the average energy modulated by the v 2 calculated in the calorimeters, excluding seed jet candidates in which the maximum tower E T is large compared to the mean tower E T . Jets from this step with E T > 25 GeV and track jets with p T > 10 GeV/c are used to calculate a new estimate of the background and a new estimate of v 2 , excluding all clusters within ∆R < 0.4 of these jets. This new background, modulated by the new v 2 , is subtracted, and jets with E T > 20 GeV are retained for subsequent analysis. Combinatorial jets are further suppressed by the additional requirement that they match a track jet with high momentum (e.g. p T > 7 GeV/c) or a high energy cluster (e.g. E T > 7 GeV) in the electromagnetic calorimeter (Aad et al., 2013b). These requirements strongly suppress the combinatorial background; however, they may lead to fragmentation biases and may suppress the contribution from jets which have lost a considerable fraction of their energy in the medium. These biases are likely small for the high energy jets which have been the focus of ATLAS studies, but the bias is stronger near the 20 GeV lower momentum threshold of those studies.

CMS

In measurements by CMS the background is subtracted from the event before the jet-finding algorithm is run. The average energy and its dispersion are calculated as a function of η. Tower energies are recalculated by subtracting the mean energy plus the mean dispersion, and negative energies after this step are set to zero. These tower energies are input into a jet-finding algorithm and the background is recalculated, omitting towers contained in the jets. The tower energies are then recalculated once more by subtracting the new mean energy plus dispersion, again setting negative values to zero.

Event Mixing

The goal of event mixing is to generate the combinatorial background, which in the case of jet studies means fake jets. In STAR, the fraction of combinatorial jets in an event class is estimated by creating a mixed event in which every track comes from a different event (Adamczyk et al., 2017c).
The data are binned in classes of multiplicity, reconstructed event plane, and z-vertex position so that the mixed event accurately reflects the distribution of particles in the background. Jet candidates are reconstructed in these mixed events in order to estimate the contribution from combinatorial jets, which can then be subtracted from the ensemble. This is a very promising method, particularly for low momentum jets, but we note that it is sensitive to the details of the normalization at low momenta. It is also computationally intensive, which may make it impractical for some analyses, and it is unclear how to apply it to all observables.

Constituent Subtraction

The constituent background subtraction method was first developed to remove pile-up contamination in the LHC experiments, where it is not unusual to have contributions from multiple collisions in a single event. Unlike the area-based subtraction methods described above, the constituent method subtracts the background constituent-by-constituent. The intention is to correct the 4-momentum of the particles, and thus the 4-momentum of the jet (Berta et al., 2014). Considering the full jet 4-momentum is necessary for some of the newer jet observables that will be described in this paper, such as the jet mass. The process is an iterative scheme that utilizes the ghost particles, nearly zero momentum particles with a very small area (on the order of 0.005 in the η-φ plane) which are embedded into the event by many jet finding algorithms. The jet finder is then run on the event, and the area is determined by counting the number of ghost particles contained within the jet. Essentially, the local background density is determined and then subtracted from the constituents, which are discarded if their momentum reaches zero. The effect of this background scheme on the applicable observables is under study, and it is not yet clear how it compares to the more traditional area-based background subtraction schemes.

F. Particle Flow

The particle flow algorithm was developed in order to use the information from all available sub-detectors in creating the objects that are then clustered by a jet-finding algorithm. Many particles leave signals in multiple sub-detectors; for instance, a charged pion will leave a track in a tracker and a shower in a hadronic calorimeter. Naively using the information from both detectors would double count the particle, but excluding a particular sub-detector would discard information about the energy flow in the collision. Tracking detectors generally provide better position information, while hadronic calorimeters are sensitive to more species of particles, although the positions at which charged particles reach the calorimeters are altered by the high magnetic field necessary for tracking. The goal is to use the best information available to determine a particle's energy and position simultaneously. The particle flow algorithm operates by building stable particles from the available detector signals. Tracks from the tracker are extrapolated to the calorimeters; in the case of CMS, an electromagnetic calorimeter and a hadronic calorimeter (CMS, 2009). If there is a cluster in the associated calorimeter, it is linked to the track in question. Only the closest cluster to the track is kept, as a charged particle should have only a single track. The energy of the cluster and the momentum of the track are then compared. If the cluster energy is low enough compared to the track momentum, only a single hadron is created, with momentum equal to a weighted average of the track and calorimeter measurements.
The exact threshold should depend on the details of the detector and its energy resolution. If the cluster energy is above a certain threshold relative to the track momentum, neutral particles are created out of the excess energy. If that excess is only in the electromagnetic calorimeter, the neutral particle is assumed to be a photon; if the excess is in the hadronic calorimeter, the neutral particle is assumed to be a hadron. If there is some combination, multiple neutral particles may be created, with the photon given preference in terms of "using up" the excess energy. By grouping the information into individual particles, the particle flow algorithm reduces the sensitivity of the measured jet energy to the jet fragmentation pattern. This is a correction that can be applied prior to unfolding, which is described below. The particle flow algorithm can be a powerful tool; however, its design depends on the details of the sub-detectors that are available, their energy resolution, and their granularity. For example, the ALICE detector has precision tracking detectors and an electromagnetic calorimeter but no hadronic calorimeter. The optimal particle flow algorithm for the ALICE detector is to use the tracking information when available and to use information from the electromagnetic calorimeter only when there is no matching information from the tracking detectors. Additionally, the magnetic field strength plays a role, as it dictates how much the charged particle paths diverge from one another and how far charged particles are deflected before reaching the calorimeters. To fully utilize this algorithm, the energy resolution of all calorimeters must be known precisely, and the distribution of charged and neutral particles must be known.

G. Unfolding

Before comparing measurements to theoretical calculations or to other measurements, they must be corrected for both detector effects and smearing due to background fluctuations. Both the jet energy scale (JES) and the jet energy resolution (JER) need to be considered in any correction procedure. The jet energy scale is a correction applied to the jet to recover the true 4-vector of the original jet (and not of the parton that created it). The background subtraction methods described above are examples of corrections to the jet energy scale due to the addition of energy from the underlying background. Precision measurements of the energy scale, as done by the ATLAS collaboration (ATL, 2015a), are an important step in understanding the detector response and are necessary to reduce the systematic uncertainties. The jet energy resolution is a measure of the width of the jet response distribution; an example from the ALICE experiment can be seen in Figure 7. In heavy ion collisions there are two components: the broadening of the distribution due to the fluctuating background that is clustered into the jet, and that due to detector effects. In most measurements of reconstructed jets, the jet energy resolution is on the order of 10-20% for high momentum jets, where detector effects dominate. This can be understood because even a hadronic calorimeter is not equally efficient at observing all particles; in particular, the measurement of neutrons, antineutrons, and the K 0 L is difficult. The high magnetic field necessary for measuring charged particle momenta leads to a lower threshold on the momenta of reconstructed particles and can sweep charged particles into or out of the jet. As a result, even an ideal detector has a limited accuracy for measuring jets.
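A toy example makes the consequence concrete. The spectrum shape and the 20% Gaussian resolution below are illustrative assumptions of ours, not measured values; because the spectrum falls steeply, smearing feeds more jets up into a bin than out of it:

```python
import random

random.seed(1)
# Toy truth spectrum dN/dpt ~ pt^-6 above 20 GeV/c, smeared with a 20%
# Gaussian relative resolution. Both numbers are illustrative assumptions.
truth = [20.0 * (1.0 - random.random()) ** (-0.2) for _ in range(200000)]
reco = [pt * random.gauss(1.0, 0.20) for pt in truth]

for lo, hi in ((40, 60), (60, 80), (80, 100)):
    t = sum(lo <= pt < hi for pt in truth)
    r = sum(lo <= pt < hi for pt in reco)
    print(f"{lo}-{hi} GeV/c: reconstructed/true yield = {r / t:.2f}")
```

The reconstructed yield in each bin exceeds the true yield even though the smearing is symmetric, which is precisely the distortion that the correction procedures below must undo.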
The large fluctuations in the measured jet energy due to these effects distort the measured spectrum. This is qualitatively different from measurements of single particle observables, where the momentum resolution is typically 1% or better, often negligible compared to other uncertainties. This means that measurements of jet observables must be corrected for fluctuations due to the finite detector resolution before they can be compared to theoretical calculations, to measurements of the same observable in a different detector, or even to measurements from the same detector under different running conditions. Fluctuations in the background in A+A collisions lead to further distortions in the reconstructed jet energy. Correcting for these effects is generally referred to as unfolding in high energy physics, although it is called unsmearing or deconvolution in other fields. Here we summarize unfolding methods, based on the discussion in (Adye, 2011; Cowan, 2002). If the true value of an observable in a bin i is given by y true i , then the observed value in bin j, y reco j , is given by

$y^{reco}_j = \sum_i R_{ij}\, y^{true}_i$,    (9)

where R ij is the response matrix relating the true and reconstructed values. The response matrix is generally determined using Monte Carlo models, including particle production, propagation of those particles through the detector material, simulation of the detector response, and application of the measurement algorithm, although sometimes data-driven corrections are incorporated into the response matrix. As an example, we consider the analysis of jet spectra. The truth result (y true i ) is usually generated by an event generator such as PYTHIA (Sjostrand et al., 2006) or DPMJET (Ranft, 1999). The jet finding algorithm to be used in the analysis is run on this truth event, which generates the particle level jets comprising y true i . The truth event is then run through a simulation of the detector response. It is common, but not required, to include a simulated background from a generator such as HIJING (X.-N. Wang and M. Gyulassy, 1991). This creates the reconstructed event, and, as before, the jet finding algorithm used in the analysis is run on this event to create the detector level jets that make up y reco j . Next, the particle level jets must be matched to detector level jets to build the response matrix, with unmatched jets determining the reconstruction efficiency.

[Figure 7: The standard deviation of the combined jet response (black circles) for R = 0.2 (left) and R = 0.3 (right) anti-k T jets in 0-10% central Pb+Pb events, together with the separate contributions from background fluctuations and detector effects. The background effects increase the jet energy resolution more for larger jets, as can be seen from the difference between the background distributions in the two panels. For high momentum jets, where the momentum of the jet is much larger than background fluctuations, the jet energy resolution is dominated by detector effects.]

There are several ambiguities in this method. The first is that it comes with an assumption of the spectral shape and fragmentation pattern of the jets from the simulation. The second is that there is not always a one-to-one correspondence between the truth and detector level jets; the detector response may cause the energy of a particular truth jet to be split into two detector level jets.
However, the response matrix requires a one-to-one correspondence, which necessitates a choice. If one could simply invert the response matrix, it would be possible to determine y true i directly; however, response matrices for jet observables are generally ill-conditioned and not invertible in practice. The further the jet response matrix is from a diagonal matrix, the more difficult the correction procedure is. This is one reason the background subtraction methods outlined in the preceding section are employed: by correcting the jet energy scale on a jet-by-jet basis, the response matrix is brought much closer to a diagonal matrix. This alone is not a sufficient correction, and the process of unfolding is thus required to determine y true i from the information in Equation 9. One of the main challenges in unfolding is that it is an ill-posed statistical inverse problem, which means that even though the mapping of y true i to y reco j is well-behaved, the inverse mapping of y reco j to y true i is unstable with respect to statistical fluctuations in the smeared observations. This is a problem even if the response matrix is known with precision. The issue is that, within the statistical uncertainties, the smeared data can be explained by the actual physical solution, but also by a large family of wildly oscillating unphysical solutions. The smeared observations alone cannot distinguish among these alternatives, so additional a priori information about physically plausible solutions needs to be included. This method of imposing physically plausible solutions is called regularization, and it is essentially a way to reduce the variance of the unfolded truth points by introducing a bias. The bias generally comes in the form of an assumption about the smoothness of the observable; however, this assumption always results in a loss of information. If an observable is described well by models, it may be possible to correct the measurement using the ratio of the observed to the true value in Monte Carlo:

$\hat{y}^{true}_j = \frac{y^{true,MC}_j}{y^{reco,MC}_j}\, y^{reco}_j$,

where $\hat{y}^{true}_j$ is the estimate of the true value, y true,MC j is the true value in the Monte Carlo model, and y reco,MC j is the measurement predicted by the model. This approach is called a bin-by-bin correction. It is also satisfactory when the response matrix is nearly diagonal, which is generally true when the bin width is wider than the resolution in the bin. In this circumstance, the inversion of the response matrix is generally stable and the measurement is not affected significantly by statistical fluctuations in the measurement or the response matrix. For example, bin-by-bin efficiency corrections to measurements of single particle spectra may be adequate as long as the momentum resolution is fairly good and the input spectra have roughly the same shape as the true spectra. This approach can also work for measurements of reconstructed jets in systems such as p+p collisions (e.g. for fragmentation function measurements). Unfortunately, for typical jet measurements, the desired binning is significantly narrower than the jet energy resolution, and fluctuations in the response matrix then lead to instabilities if the response matrix is inverted. Additionally, the high background environment of heavy ion collisions leads to a poorer energy resolution, and Monte Carlo models generally do not describe the data well. Bin-by-bin corrections are therefore usually inadequate for measurements in heavy ion collisions. Several algorithms have been developed to solve equation 9.
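Before describing those algorithms, a small numerical sketch, using an invented, illustrative response matrix, shows both the folding of equation 9 and why naive inversion fails once counting fluctuations are present:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Invented response: 40% of each truth bin stays put, 30% leaks into each
# neighbouring reconstructed bin (edge bins keep the leaked fraction).
R = 0.4 * np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
R[0, 0] += 0.3
R[-1, -1] += 0.3

y_true = 1e6 * np.exp(-0.6 * np.arange(n))  # steeply falling truth spectrum
y_reco = R @ y_true                          # the folding of equation 9
y_meas = rng.poisson(y_reco).astype(float)  # add counting fluctuations

naive = np.linalg.solve(R, y_meas)  # direct inversion of equation 9
print(np.round(naive[-5:]))         # oscillates, can even turn negative
print(np.round(y_true[-5:]))        # the smooth truth it should reproduce
```

The directly inverted spectrum oscillates from bin to bin and can turn negative in the sparsely populated tail; regularization exists precisely to tame this instability.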
The two most commonly used algorithms are Singular Value Decomposition (SVD) (Hocker and Kartvelishvili, 1996) and Bayesian unfolding (D'Agostini, 1995). Bayesian unfolding uses a guess at the true distribution, called the prior and usually taken from a Monte Carlo model, as the start of an iterative procedure. This method is regularized by choosing how many iterations to use: stopping at an early iteration results in a distribution that is closer to the prior, and is thus more strongly regularized. As the number of iterations increases, a positive feedback driven by fluctuations in the response matrix and the spectra makes the unfolded spectrum diverge sharply from reality. The SVD formalism is a way to factorize a matrix into a set of matrices. This is used to write the unfolding equation as a set of linear equations, with the response matrix R decomposed into three matrices such that R = U SV T , where U and V are orthogonal and S is diagonal. The regularization in SVD unfolding uses a damped least squares method to couple the resulting linear equations and solve them; one then chooses a parameter k, corresponding to the k-th singular value of the decomposed matrix, which suppresses the oscillatory divergences in the solution. It is worth noting that for any approach there is a trade-off between the potential bias imposed on the results by the Monte Carlo input and the uncertainty in the final result. In practice, different methods and different training inputs for the Bayesian unfolding are compared when determining the systematic uncertainties. For measurements where models describe the data well or where the resolution leads to minimal bin-to-bin smearing, bin-by-bin corrections are often preferred, both because of the potential bias and because of the difficulty of unfolding. In order to confirm whether a particular unfolding algorithm is valid, it is necessary to perform closure tests: demonstrations that the method leads to the correct value when applied to a Monte Carlo model. The simplest test is to convolute the Monte Carlo truth distribution with the response matrix to form a simulated detector distribution; this distribution can then be unfolded and compared to the original truth distribution. For this test, one should use roughly the same statistical precision as will be available in the data, given how strongly the unfolding procedure is driven by statistics. However, this does not test the validity of the response matrix, the choice of spectral shape for the input distribution, or the effect of the combinatorial jets that will appear in the measured data. A more rigorous closure test can be done by embedding the detector level jets into minimum bias data and performing the background subtraction and unfolding procedures on the embedded data, comparing the result with the truth distribution. Another approach is to "fold" the reference to take detector effects into account. For example, the initial measurements of the dijet asymmetry did not correct for the effect of background or detector resolution in Pb+Pb, but instead embedded p+p jets in a Pb+Pb background in order to smear the p+p data by an equivalent amount (Aad et al., 2010; Chatrchyan et al., 2011b).
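Returning to the iterative Bayesian method, a minimal sketch of the procedure described above (the efficiency is taken as unity and the function name is ours; analyses commonly use implementations such as RooUnfold):

```python
import numpy as np

def dagostini(y_meas, R, prior, n_iter=4):
    """Iterative Bayesian (D'Agostini) unfolding, efficiency taken as one.

    R[j, i] = P(reco bin j | true bin i), columns normalized to unity.
    n_iter is the regularization knob: few iterations stay close to the
    prior, many iterations amplify statistical fluctuations.
    """
    u = np.asarray(prior, dtype=float).copy()
    for _ in range(n_iter):
        folded = R @ u                 # expected reco given current guess
        # Bayes' theorem: P(true i | reco j) = R[j, i] u[i] / folded[j]
        M = (R * u).T / folded
        u = M @ y_meas                 # re-estimate the truth
    return u
```

Applied to the y_meas and R from the previous sketch with a smooth prior, a few iterations give a far more stable estimate than direct inversion, and scanning n_iter exhibits the bias-variance trade-off discussed above.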
Folding the reference to take detector effects into account, as in these dijet asymmetry measurements, may lead to a better comparison between data and a particular theory, but since the response matrix is generally not made available outside the collaboration, it can only be done by experimentalists at the time of publication. It would nevertheless be an important cross-check for any model, as it removes the mathematical uncertainty due to the ill-posed inverse problem.

H. Comparing different types of measurements

The ultimate goal of measurements of jets in heavy ion collisions is not to learn about jets but to learn about the QGP. Measurements of jets in e + +e − and p+p collisions are already complicated, and the addition of a large combinatorial background in heavy ion collisions imposes greater experimental challenges. Suppressing and subtracting the background imposes biases on the resulting jet collections. Additionally, selection criteria applied to the collection of jet candidates in order to remove the combinatorial contribution also impose a bias. The exact bias imposed by these choices cannot be known without a complete understanding of the QGP, which is what we are trying to gain by studying jets. Occasionally various methods are claimed to be "unbiased", but it is unclear what this means precisely, since every measurement is biased towards a subset of the population of jets created in heavy ion collisions. Any particular measurement may have several types of bias; we discuss a few of them below.

Survivor bias

As jets interact with the medium and lose energy to it, they may begin to look more like the medium. There are fluctuations in how much energy each individual parton loses in the medium, and selecting jets which look like jets in vacuum may skew our measurements towards partons which have lost less energy in the medium.

Fragmentation bias

Many measurement techniques select jets which have hard fragments, which may lead to a survivor bias, since interactions with the medium are expected to soften the fragmentation function. Some measurements may preferentially select jets which fragment into a particular particle, such as a neutral pion or a proton. This in turn can bias the jet population towards quark or gluon jets. If fragmentation is modified in the medium, it could also bias the population towards jets which either have or have not interacted with the medium.

Quark bias

Even in e + +e − collisions, quark and gluon jets have different structures on average, with gluon jets fragmenting into more, softer particles at larger radii (Abreu et al., 1996; Akers et al., 1995). A bias may also be imposed by the jet-finding algorithm: OPAL found that gluon jets reconstructed with the k T jet finding algorithm generally contained more particles than those reconstructed with the cone algorithm of (Abe et al., 1992), and that gluon jets contain more baryons (Ackerstaff et al., 1999). The measurement techniques described above generally focus on higher momentum jets which fragment into harder constituents and have narrower cone radii, which induces a bias towards quark jets. Since gluon jets are expected to significantly outnumber quark jets (Pumplin et al., 2002), this may not be quantitatively significant overall, depending on the measurement and the collision energy. In some measurements, survivor bias is used as a tool. For instance, measurements of hadron-jet correlations select a less modified jet by identifying a hard hadron and then look for its partner jet on the away-side (Adam et al., 2015c).
Correlations requiring a trigger on both the near and away sides select jets biased to be near the surface of the medium (Agakishiev et al., 2011). These biases are inherently unavoidable, and they must be understood in order to properly interpret the data. Once they are well understood, however, the biases can be engineered to purposefully select particular populations of jets, for instance selecting jets biased towards the surface in order to increase the probability that the away-side jet has traversed the maximum possible amount of medium. As our experience with the v n modulated background in dihadron correlations shows, the issue is not merely which measurements are most sensitive to the properties of the medium, but also the possibility that our current understanding of the background may be incomplete. The potential error introduced varies widely by measurement: single particle spectra, dihadron correlations, and reconstructed jets all have completely different biases and assumptions about the background. Our confidence in the interpretation of the results is therefore enhanced if the same conclusions can be drawn from measurements of multiple observables. We therefore discuss a variety of different measurements in Section III and demonstrate that they all lead to the same conclusions: partons lose energy in the medium, and their constituents are broadened and softened in the process.

III. OVERVIEW OF EXPERIMENTAL RESULTS

RHIC and the LHC have provided a wealth of data which enhance our understanding of the properties of the QGP. This section reviews the experimental results available at the time of publication and is organized according to the physics addressed by the measurement rather than by observable, in order to focus on the implications of the measurements; the same observable may therefore appear in multiple subsections. The questions that jet studies attempt to answer in order to understand the QGP are: Are there cold nuclear matter effects which must be taken into consideration in order to interpret results in heavy ion collisions? Do partons lose energy in the medium, and how much? How do partons fragment in the medium: is fragmentation the same as in vacuum, or is it modified? Where does the lost energy go, and how does it influence the medium? In the next section we will discuss how well these questions have been answered and which questions remain.

A. Cold nuclear matter effects

Cold nuclear matter effects refer to observed differences between p+p and p+A or d+A collisions, where a hot medium is not expected but where the presence of a nucleus in the initial state could influence the production of the final observable. These effects may result from coherent multiple scattering within the nucleus (Qiu and Vitev, 2006), gluon shadowing (Gelis et al., 2010), or partonic energy loss within the nucleus (Bertocchi and Treleani, 1977; Vitev, 2007; Wang and Guo, 2001). While such effects are interesting in their own right, if present they would need to be taken into account in order to interpret heavy ion collisions correctly. Studies of open heavy flavor at forward rapidities through spectra (Adare et al., 2012a) and correlations (Adare et al., 2014b) of leptons from heavy flavor decays indicate that heavy flavor is suppressed in cold nuclear matter. The J/ψ is also suppressed at forward rapidities (Adare et al., 2013d).
Recent studies have also indicated that there may be collective effects for light hadrons in p+A collisions (Aad et al., 2014d; Adam et al., 2016h; Khachatryan et al., 2015a) and even in high multiplicity p+p events (Aad et al.). Measured values of the charged hadron R pPb (Khachatryan et al., 2015b, 2017a) are consistent with one within the systematic uncertainties of these measurements, indicating that the large hadron suppression observed in A+A collisions cannot be due to cold nuclear matter effects. This is shown in Figure 8. We note that the CMS results shown here were updated with a p+p reference measured at √s NN = 5.02 TeV (Khachatryan et al., 2017a), which is also consistent with an R pPb of one.

Reconstructed jets

Measurements of reconstructed jets in d+Au collisions at √s NN = 200 GeV and p+Pb collisions at 5.02 TeV indicate that the minimum bias R dAu (Adare et al., 2016b) and R pPb (Aad et al., 2015a; Adam et al., 2016c), respectively, are also consistent with one. Figure 9 shows R pPb measured by the CMS experiment and compared with NLO calculations including cold nuclear matter effects. The theoretical predictions and the experimental measurements in Figure 9 show that cold nuclear matter effects are small for jets at all p T and pseudorapidities measured at the LHC. A centrality dependence at midrapidity in 200 GeV d+Au and 5.02 TeV p+Pb collisions is observed which cannot be fully explained by the biases in the centrality determination, as studied in (Aad et al., 2016a; Adare et al., 2014a). It has been proposed that the forward multiplicities used to determine centrality are anti-correlated with hard processes at midrapidity (Armesto et al., 2015; Bzdak et al., 2016), or that the rare high-x parton configurations of the proton which produce high-energy jets have a smaller cross-section for inelastic interactions with nucleons in the nucleus (Alvioli et al., 2014, 2016; Alvioli and Strikman, 2013; Coleman-Smith and Muller, 2014). The latter suggests that high p T jets may be used to select proton configurations with varying sizes due to quantum fluctuations. While this is interesting in its own right, and there may be initial state effects, there are currently no indications of large partonic energy loss in small systems; scaling the production in p+p collisions with the number of binary nucleon-nucleon collisions therefore appears to be a valid reference for comparison to larger systems.

Dihadron correlations

Detailed studies of the jet structure in d+Au collisions, compared to both PYTHIA and p+p collisions using dihadron correlations at √s NN = 200 GeV, found no evidence for modification of the jet structure at midrapidity in cold nuclear matter (Adler et al., 2006d). Studies of correlations between particles at forward rapidities (1.4 < η < 2.0 and -2.0 < η < -1.4), performed in order to search for fragmentation effects at low x, also found no evidence for modified jets in cold nuclear matter (Adler et al., 2006a). However, jet-like correlations with particles at higher rapidities (3.0 < η < 3.8) indicated modifications of the correlation functions in d+Au collisions at √s NN = 200 GeV (Adare et al., 2011d). This indicates that nuclear effects may have a strong dependence on x and that studies of cold nuclear matter effects for each observable are important in order to demonstrate the validity of the baseline for studies in hot nuclear matter. While there is little evidence for effects at midrapidity, observables at forward rapidities may be influenced by effects already present in cold nuclear matter.
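The nuclear modification factors quoted throughout this section share one construction; a schematic per-p T -bin sketch (the function name and inputs are our illustrative choices):

```python
def r_pa(yield_pa, yield_pp, n_coll):
    """Nuclear modification factor per pt bin.

    yield_pa, yield_pp: per-event yields (e.g. dN/dpt) in matching bins;
    n_coll: mean number of binary nucleon-nucleon collisions from a
    Glauber model for the chosen centrality class. R_pA = 1 means the
    p+A yield is a simple binary-collision scaling of the p+p yield.
    """
    return [ya / (n_coll * yp) for ya, yp in zip(yield_pa, yield_pp)]
```

The model dependence enters entirely through n_coll, which is why centrality determination biases in small systems complicate the interpretation described above.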
Searches for acoplanarity of jets in p+Pb collisions observed no difference between jets in p+Pb and p+p collisions (Adam et al., 2015b).

Summary of cold nuclear matter effects for jets

Based on current evidence from p+Pb and d+Au collisions, p+p collisions are an appropriate reference for jets; however, since numerous cold nuclear matter effects have been documented, each observable should be measured in cold nuclear matter in order to properly interpret data in hot nuclear matter. We therefore conclude that, based on the current evidence, p+Pb and d+Au collisions are appropriate reference systems for hard processes in A+A collisions, although caution is needed, particularly at large rapidities and high multiplicities, and future studies in small systems may lead to different conclusions.

B. Partonic energy loss in the medium

Electroweak probes such as direct photons, which do not interact via the strong force, are expected to escape the QGP unscathed, while probes which interact strongly lose energy in the medium and are suppressed at high momenta. Figure 10 shows a compilation of charged hadron R AA results from the LHC (CMS, 2016a; Aamodt et al., 2011b; Chatrchyan et al., 2012e). The R AA of the charged hadron spectra appears to reach unity at p T ≈ 100 GeV/c (CMS, 2016a). This is expected in all QCD-inspired energy loss models: at some point R AA must approach one, because at leading order the differential cross section for interactions with the medium is proportional to 1/Q 2 (Levai et al., 2002). Studies of R CP as a function of collision energy indicate that suppression sets in somewhere between √s NN = 27 and 39 GeV (Adamczyk et al., 2017a). At intermediate p T the shape of R AA as a function of p T is mass dependent, with heavier particles approaching the light particle suppression level at higher momenta (Agakishiev et al., 2012a). However, even hadrons containing heavy quarks are suppressed at levels similar to light hadrons (Abelev et al., 2012b). QCD-motivated models are generally able to describe the inclusive single particle R AA qualitatively; however, the details of each calculation make it difficult to compare results between models directly and to extract quantitative information about the properties of the medium from such comparisons (Adare et al., 2008b). The JET collaboration was formed explicitly to make such comparisons between models and data, and its extensive studies determined that for a 10 GeV/c hadron the jet transport coefficient is $\hat{q} = 1.2 \pm 0.3\ \mathrm{GeV}^2/\mathrm{fm}$ in Au+Au collisions at √s NN = 200 GeV and $\hat{q} = 1.9 \pm 0.7\ \mathrm{GeV}^2/\mathrm{fm}$ in Pb+Pb collisions at √s NN = 2.76 TeV (Burke et al., 2014). These detailed comparisons between data and energy loss models are among the most important results in heavy ion physics and are among the few results that directly constrain the properties of the medium. We emphasize that these constraints came from a careful comparison of a straightforward observable to various models. While we discuss measurements of more complicated observables later, this highlights the importance of both precision measurements of straightforward observables and careful, systematic comparisons of data to theory. Similar approaches are likely needed to further constrain the properties of the medium. It is remarkable that the R AA values for hadrons at RHIC and the LHC are so similar, since one would expect energy loss to increase with increased energy density, which should result in a lower R AA at the LHC with its higher collision energies.
However, the hadrons in a particular p T range are not totally quenched but rather appear at a lower p T , so it is useful to study the shift of the hadron p T spectrum in A+A collisions relative to p+p collisions rather than only the ratio of yields. Note that the spectral shape also depends on the collision energy. Spectra generally follow a power law, $dN/dp_T \propto p_T^{-n}$, at high momenta, and the spectra of hadrons are steeper in 200 GeV than in 2.76 TeV collisions (n ≈ 8 and n ≈ 6.0, respectively, for the p T range 7-20 GeV/c) (Adare et al., 2012b, 2013c). Therefore, for R AA , greater energy loss at the LHC could be counteracted by the flatter spectral shape. To address this, another quantity, the fractional momentum loss S loss , has been measured to better probe a change in the fractional energy loss of partons ∆E/E as a function of collision energy. This quantity is defined as

$S_{loss} = \delta p_T / p_T^{pp} = (p_T^{pp} - p_T^{AA}) / p_T^{pp}$,

where p AA T is the p T of the A+A measurement. p pp T is determined by first scaling the p T spectrum measured in p+p collisions by the nuclear overlap function T AA of the corresponding A+A centrality class and then determining the p T at which the yield of the scaled spectrum matches the yield measured in A+A at the p AA T point of interest. This procedure is illustrated pictorially in Figure 12. Indeed, a greater fractional momentum loss was observed for the most central 2.76 TeV Pb+Pb collisions compared to the 200 GeV Au+Au collisions (Adare et al., 2016d). The analysis found that S loss scales with energy density related quantities such as the multiplicity dN ch /dη, as shown in Figure 12, and dE T /dy/A T , where A T is the transverse area of the system. The latter quantity can be written in terms of the Bjorken energy density ε Bj and the equilibration time τ 0 such that dE T /dy/A T = ε Bj τ 0 , and it has been shown to scale with dN ch /dη (Adare et al., 2016e). On the other hand, S loss does not scale with system size variables such as N part . Assuming that S loss is a reasonable proxy for the mean fractional energy loss of the partons, these scaling observations imply that the fractional energy loss of partons scales with the energy density of the medium at these collision energies.

Jet RAA

Measurements of hadronic observables blur essential physics due to the complexity of the theoretical description of hadronization and the sensitivity to nonperturbative effects. In principle, measurements of reconstructed jets are expected to be less sensitive to these effects. Next-to-leading order calculations demonstrate the sensitivity of jet R AA measurements to the properties of the medium-induced gluon radiation. These measurements can differentiate between competing models of parton energy loss mechanisms, reducing the large systematic uncertainties introduced by different theoretical formalisms (Majumder, 2007b). Figure 13 shows the reconstructed anti-k T jet R AA from ALICE (Adam et al., 2015d) with R = 0.2 for |η| < 0.5, ATLAS (Aad et al., 2015b) with R = 0.4 for |η| < 2.1, and CMS (Khachatryan et al., 2017c) with R = 0.2, 0.3, and 0.4 for |η| < 2.0. At lower momenta, the ALICE data are consistent with the CMS data for all radii, while the ATLAS R AA is higher than that of ALICE. At higher momenta, the measurements from all three experiments agree within the experimental uncertainties. A jet is defined by the parameters of the jet finding algorithm and by selection criteria such as those used to identify background jets arising from fluctuations in heavy ion events.
When making comparisons of jet observables between different experiments and to theoretical predictions, not only the jet definitions but also the effects of the selection criteria need to be considered carefully. While the difference in pseudorapidity coverage is unlikely to cause the difference between the ATLAS and ALICE results, given the relatively flat distribution at midrapidity, the resolution parameter R as well as the different selection criteria could cause a difference such as that observed at low transverse momenta. The ATLAS approach to the combinatorial background, which favors jets with hard constituents, may bias the jet sample towards unmodified jets, particularly at the low momenta where the ATLAS and ALICE measurements overlap. The ATLAS and CMS jet measurements agree at high momenta, where jets are expected to be less sensitive to the measurement details. We therefore interpret the difference between the jet R AA measured by the different experiments not as an inconsistency, but as different measurements with different biases. We implore the collaborations to construct jet observables using the same approaches to background subtraction and suppression of the combinatorial background so that the measurements can be compared directly. Ultimately, the overall consistency of R AA at high p T , even with widely varying jet radii and inherent biases in the jet samples, indicates that more sensitive observables are required to understand jet quenching quantitatively. Although the observation of jet quenching through R AA was a major feat, it still leaves several open questions about hard partons' interactions with the medium. How do jets lose energy: through collisions with the medium, gluon bremsstrahlung, or both? Where does that energy go? Are there hot spots, or is the energy distributed isotropically in the event? Few experimental observables can compete with R AA for overall precision; however, more differential observables may be more sensitive to the energy loss mechanism.

Dihadron correlations

The precise mechanism responsible for the modification of dihadron correlations cannot be determined from such studies alone, because many mechanisms can lead to modification of the correlations, including not only energy loss and modification of jet fragmentation but also modifications of the underlying parton spectra. Dihadron correlations are nevertheless less ambiguous than spectra alone, because the requirement of a high momentum trigger particle enhances the fraction of particles from jets. Figure 14 shows dihadron correlations in p+p, d+Au, and Au+Au collisions at √s NN = 200 GeV, demonstrating suppression of the away-side peak in central Au+Au collisions. The first measurements of dihadron correlations showed complete suppression of the away-side peak and moderate enhancement of the near-side peak (Adams et al., 2003a, 2004a; Adler et al., 2003a). However, as noted above, a majority of dihadron correlation studies did not take the odd v n due to flow into account, including those in Figure 14. A subsequent measurement with similar kinematic cuts including the higher order v n shows that the away-side is not completely suppressed, as shown in Figure 14, but rather that there is a visible but suppressed away-side peak. Studies at higher momenta also see a visible but suppressed away-side peak (Adams et al., 2006). The suppression is quantified by

$I_{AA} = Y_{AA} / Y_{pp}$,

where Y AA is the yield in A+A collisions and Y pp is the yield in p+p collisions.
[Figure 12: The first panel (left) is a cartoon demonstrating how δp T is determined. The fractional momentum loss S loss is plotted as a function of the multiplicity dN ch /dη for several heavy ion collision energies, for hadrons with p pp T of 12 GeV/c (middle) and 6 GeV/c (right), where p pp T refers to the transverse momentum measured in p+p collisions. The Pb+Pb data are from ALICE, measured over |η| < 0.8, while all other data are from PHENIX, which measures particles in the range |η| < 0.35. These results indicate that the fractional energy loss scales with the energy density of the system.]

The yields must be defined over finite ∆φ and ∆η ranges and are usually measured for a fixed range in associated momentum, p a T . Similar to R AA , an I AA greater than one means that there are more particles in the peak in A+A collisions than in p+p collisions, and an I AA less than one means that there are fewer. Gluon bremsstrahlung or collisional energy loss would result in more particles at low momenta and fewer particles at high momenta, leading to an I AA greater than one at low momenta and an I AA less than one at high momenta, at least as long as the lost energy does not reach equilibrium with the medium. Both radiative and collisional energy loss would lead to broader correlations. Partonic energy loss before fragmentation would lead to a suppression on the away-side but no modification on the near-side and no broadening, because the near-side jet is biased towards the surface of the medium. Changes in the parton spectra can also impact I AA , because harder partons hadronize into more particles and higher energy jets are more collimated. No differences between d+Au and p+p collisions are observed on either the near- or away-side at midrapidity (Adler et al., 2006a,d), indicating that any modifications observed are due to hot nuclear matter effects. The near-side yields at midrapidity in A+A, d+Au, and p+p collisions agree within errors at RHIC (Abelev et al., 2010a; Adams et al., 2006; Adare et al., 2008a), even at low momenta (Abelev et al., 2009b; Agakishiev et al., 2012c), indicating that the near-side jet is not substantially modified, although the data are also consistent with a slight enhancement. A slight enhancement of the near-side is observed at the LHC (Aamodt et al., 2012), and a slight broadening is observed at RHIC (Adare et al., 2008a; Agakishiev et al., 2012c; Nattrass et al., 2016). The combination of broadening and a slight enhancement favors moderate partonic energy loss rather than a change in the underlying jet spectra, since higher energy jets are both more collimated and contain more particles. The away-side is suppressed at high momenta at both RHIC (Abelev et al., 2010a; Adams et al., 2006) and the LHC (Aamodt et al., 2012). An earlier reaction plane dependent dihadron correlation measurement with trigger momenta 4 < p t T < 6 GeV/c is now understood to be quantitatively incorrect because of erroneous assumptions in the background subtraction; after reanalysis, only partial suppression is seen on the away-side. STAR studies (Agakishiev et al., 2010, 2014) at low momenta using a new background method which takes the odd v n into account observed suppression on the away-side but no broadening, even though broadening was observed on the near-side at the same momenta. This may indicate that the away-side width is a less sensitive observable, because the width is broadened by the decorrelation between the near- and away-side jet axes, rather than indicating that these effects are not present.
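To make the I AA construction concrete, a schematic per-trigger-yield calculation (the names are ours, and the flow-modulated background is assumed to be subtracted already):

```python
def i_aa(pairs_aa, ntrig_aa, pairs_pp, ntrig_pp):
    """I_AA = Y_AA / Y_pp from per-trigger associated yields.

    pairs_*: background-subtracted associated-particle counts in a fixed
    (dphi, deta, pt_assoc) window; ntrig_*: number of trigger particles.
    Values below one indicate suppression of the correlated jet yield.
    """
    return (pairs_aa / ntrig_aa) / (pairs_pp / ntrig_pp)
```

Normalizing per trigger is what removes the trivial difference in the number of hard scatterings between the two collision systems.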
Reaction plane dependent studies can constrain the path length dependence of energy loss because, as shown in Figure 2, partons traveling in the reaction plane (in-plane) traverse less medium than those traveling perpendicular to the reaction plane (out-of-plane). The I AA is highest for low momentum particles and is at a minimum for trigger particles at intermediate angles relative to the reaction plane, rather than in-plane or out-of-plane. This likely indicates an interplay between the effects of surface bias and partonic energy loss. Energy loss models are generally able to describe I AA qualitatively; however, there has been no systematic attempt to compare data to models, as was done for R AA . Simultaneous comparisons of R AA and I AA are expected to be highly sensitive to the jet transport coefficient $\hat{q}$ (Jia et al., 2011; Zhang et al., 2007). Such a theoretical comparison is complicated by the wide range of kinematic cuts used in the experimental measurements and by the fact that most measurements neglected the odd v n in the background subtraction.

Dijet imbalance

The first evidence of jet quenching in reconstructed jets at the LHC was observed by measuring the dijet asymmetry A J . This observable measures the energy or momentum imbalance between the leading and subleading (opposing) jets in each event. Due to kinematic and detector effects, the energies of dijets are not perfectly balanced even in p+p collisions; therefore, to interpret this measurement in heavy ion collisions, data from A+A collisions must be compared to the distributions in p+p collisions. Figure 15 shows the dijet asymmetry measurement from the ATLAS experiment, where $A_J = (E_{T,1} - E_{T,2})/(E_{T,1} + E_{T,2})$ (Aad et al., 2010). The left panel on the top row shows the A J distribution for peripheral Pb+Pb collisions and demonstrates that it is similar to that from p+p collisions. However, dijets in central Pb+Pb collisions are more likely to have a higher A J value than dijets in p+p collisions, consistent with expectations from energy loss. The bottom panel shows that these jets retain a similar angular correlation with the leading jet, even as they lose energy. The CMS measurement of $A_J = (p_{T,1} - p_{T,2})/(p_{T,1} + p_{T,2})$ (Chatrchyan et al., 2011b) shows similar trends. The structure in the distribution of A J is partially due to the 100 GeV lower limit on the leading jet and the 25 GeV lower limit on the subleading jet, and partially due to detector effects and the background in the heavy ion collision. These measurements are not corrected for detector effects or for distortions in the observed jet energies due to fluctuations in the background; instead, the jets from p+p collisions are embedded in a heavy ion event in order to take the effects of the background into account. Recently, ATLAS measured A J and unfolded the distribution in order to take background and detector effects into account (ATL, 2015b), with similar conclusions. For jets above 200 GeV, the asymmetry is observed to be consistent with that in p+p collisions, indicating that sufficiently high momentum jets are unmodified; this is consistent with the observation that R AA is consistent with one for hadrons at p T ≈ 100 GeV/c (CMS, 2016a). Energy and momentum must be conserved, so the balance should be restored if jets can be reconstructed in such a way that the particles carrying the lost energy are included.
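For reference, the asymmetry itself is a one-line computation; a schematic event-loop sketch (our own illustrative names, with thresholds mirroring the cuts quoted above):

```python
def a_j(pt1, pt2):
    """Dijet asymmetry A_J = (pt_1 - pt_2) / (pt_1 + pt_2)."""
    return (pt1 - pt2) / (pt1 + pt2)

def dijet_asymmetries(events, pt1_min=100.0, pt2_min=25.0):
    """A_J of the two hardest jets per event passing the kinematic cuts.

    events: per-event lists of background-subtracted jet pt values. The
    back-to-back dphi requirement used in the real analyses is omitted
    for brevity.
    """
    out = []
    for jets in events:
        if len(jets) < 2:
            continue
        pt1, pt2 = sorted(jets, reverse=True)[:2]
        if pt1 > pt1_min and pt2 > pt2_min:
            out.append(a_j(pt1, pt2))
    return out
```

The asymmetric thresholds on the leading and subleading jets are one source of the structure in the measured A J distributions noted above.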
For jets reconstructed with low momentum constituents, the background due to combinatorial jets is non-negligible, but requiring the jet to be matched to a jet reconstructed from higher momentum constituents, as well as requiring a higher momentum jet, suppresses the combinatorial jet background. STAR measurements of A J using a high momentum constituent selection (p T > 2 GeV/c) observed the same energy imbalance seen by ATLAS and CMS. However, the energy balance was recovered by matching these jets reconstructed with high p T constituents to jets reconstructed with low momentum constituents (p T > 150 MeV/c) and then constructing A J from the jets with the low momentum constituents (Adamczyk et al., 2017b).

γ-hadron, γ-jet and Z-jet correlations

At leading order, direct photons are produced via Compton scattering, q+g → q+γ, and quark-antiquark annihilation, as shown in the left two and right two Feynman diagrams in Figure 16, respectively. Due to the dearth of antiquarks and the abundance of gluons in the proton, Compton scattering is the dominant production mechanism for direct photons in p+p and A+A collisions; therefore, jets recoiling from a direct photon at midrapidity are predominantly quark jets. In the center of mass frame at leading order, the photon and recoil quark are produced exactly 180° apart in the transverse plane with equal momenta. At higher orders, fragmentation photons and gluon emission affect the correlation, so that the momenta are not entirely balanced and the back-to-back positions are smeared, even in p+p collisions. Since photons do not lose energy in the QGP, the photon escapes the medium unscathed and the energy of the opposing quark can be estimated from the energy of the photon. This channel is called the "Golden Channel" for jet tomography of the QGP because it is possible to calculate experimental observables with less sensitivity to hadronization and other non-perturbative effects than for dihadron correlations and measurements of reconstructed jets. Additionally, direct photon analyses remove some of the ambiguity with respect to differences between quarks and gluons, since the outgoing parton opposing the direct photon is predominantly a quark. Correlations of direct photons with hadrons can be used to calculate I AA , as for dihadron correlations. Studies of γ-h correlations at RHIC led to conclusions similar to those reached with dihadron correlations, as shown in Figure 17, demonstrating suppression of the away-side jet (Abelev et al., 2010c; Adamczyk et al., 2016; Adare et al., 2009, 2010b). In addition, γ-h correlations can be used to measure the fragmentation function of the away-side jet under the assumption that the jet energy equals the photon energy; this is discussed in Section III.C.2. It should be noted that nonzero photon v 2 and v 3 have been observed (Adare et al., 2012c, 2016a), leading to a correlated background. The physical origin of this v 2 is unclear, since photons do not interact with the medium, so it is also unclear whether v 3 and the higher order v n impact the background. Measurements at high momenta are robust because the background is small and the photon v 2 appears to decrease with p T . In (Adare et al., 2013b), the systematic uncertainty due to v 3 was estimated and included in the total systematic uncertainty.
Since the direct photon-hadron correlations are extracted by subtracting the photon-hadron correlations from decays (primarily π 0 → γγ) from the inclusive photon-hadron correlations, the impact of the v n on the final direct photon-hadron correlations is reduced compared to dihadron and jet-hadron correlations. Direct photons can also be correlated with a reconstructed jet; in principle, this is a direct measurement of partonic energy loss. Figure 18(a) shows measurements of the energy imbalance between a photon with energy E > 60 GeV and a jet at least 7π/8 away in azimuth with E jet > 30 GeV. Even in p+p collisions, the jet energy does not exactly balance the photon energy because of next-to-leading order effects and because some of the quark's energy may fall outside of the jet cone. The lower limit on the energy of the reconstructed jet is necessary in order to suppress the background from combinatorial jets, but it also imposes a lower limit on the fraction of the photon energy observed. Figure 18(a) demonstrates that the quark loses energy in Pb+Pb collisions. Figure 18(b) shows R Jγ , the average fraction of isolated photons matched to a jet. In p+p collisions nearly 70% of all photons are matched to a jet, but in central Pb+Pb collisions only about half of all photons are matched to a jet. These measurements provide unambiguous evidence for partonic energy loss. However, the kinematic cuts required to suppress the background leave some ambiguity regarding the amount of energy that was lost; some of the energy could simply be swept outside of the jet cone. The preliminary results of an analysis with higher statistics for the p+p data and the addition of p+Pb collisions also show no significant modification in the smaller systems, confirming that the Pb+Pb imbalance does not originate from cold nuclear matter effects (Collaboration, 2013b). By construction, measurements of the process q+g → q+γ can only probe the interactions of quarks with the medium. Since there are more gluons in the initial state, and quarks and gluons may interact with the medium in different ways, studies of direct photons alone cannot give a full picture of partonic energy loss. With the large statistics data collected during the 2015 Pb+Pb running of the LHC at 5 TeV, another "Golden Probe" for jet tomography of the QGP, the coincidence of a Z 0 and a jet, became experimentally accessible (Neufeld et al., 2011; Wang and Huang, 1997). While this channel has served as an essential calibration of the jet energy in TeV p+p collisions, in heavy ion collisions it can be used to calibrate in-medium parton energy loss, as the Z 0 carries no color charge and is expected to escape the medium unattenuated, like the photon, while photon measurements at high momentum are limited by the large background from decay photons. Measurements in Pb+Pb collisions at √s NN = 5.02 TeV (Sirunyan et al., 2017c) show that the angular correlations between Z bosons and jets are mostly preserved in central Pb+Pb collisions. However, the transverse momentum of the jet associated with the Z boson is shifted to lower values with respect to the observations in p+p collisions, as expected from jet quenching.

Hadron-jet correlations

Correlations between a hard hadron and a reconstructed jet have been measured to overcome the explicit bias imposed by the background suppression techniques described in Section II.E.
Hadron-jet correlations

Correlations between a hard hadron and a reconstructed jet were measured to overcome the downside of the explicit bias imposed by the background suppression techniques described in Section II.E. Similar to dihadron correlations, a reconstructed hadron is selected and the yield of jets reconstructed within |π − ∆φ| < 0.6 relative to that hadron is measured (Adam et al., 2015c). For sufficiently hard hadrons, a large fraction of the jets correlated with those hadrons originate from a hard process; for low momentum hadrons, however, the yield is dominated by combinatorial jets. The yield of combinatorial jets should be independent of the hadron momentum, so the difference between the recoil jet yields for a high momentum trigger hadron (20 < p_T < 50 GeV/c) and a low momentum reference hadron (8 < p_T < 9 GeV/c), denoted ∆_recoil, is calculated to subtract the background from the ensemble of jet candidates. This difference in yields is then compared to the same measurement in p+p collisions. Since the hard hadron requirement is opposite the jet being studied, no fragmentation bias is imposed on the reconstructed jet. This measurement may therefore be more sensitive to modified jets than observables that require selection criteria on the jet candidates themselves. Figure 19 shows the ratio of ∆_recoil in Pb+Pb collisions to that in p+p collisions, ∆I_AA = ∆_recoil^PbPb / ∆_recoil^PYTHIA, demonstrating the suppression of jets 180° away from a hard hadron. PYTHIA is used as a reference rather than data due to the limited statistics available in the data at the same collision energy; PYTHIA agrees with the data from p+p collisions at √s = 7 TeV. These data demonstrate that there is substantial jet suppression, consistent with the results discussed above. Measurements of hadron-jet correlations by STAR (Adamczyk et al., 2017c) used a novel mixed event technique for background subtraction in order to extend the measurement to low momenta. The conditional yield correlated with a high momentum hadron was clearly suppressed in central Au+Au collisions relative to that observed in peripheral collisions, though substantially less so at the lowest momenta. A benefit of this method is that, in principle, the conditional yield of jets correlated with a hard hadron can be calculated with perturbative QCD.
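As an illustration of the ∆_recoil construction described above, here is a minimal numpy sketch, with toy inputs, of the per-trigger recoil jet yield difference; it is not the published ALICE or STAR analysis code, and the trigger classes and toy spectra are assumptions for illustration.

import numpy as np

def recoil_yield(trigger_count, jet_pts, pt_bins):
    """Per-trigger yield of recoil jets, binned in jet pT.

    jet_pts: pT of jets already selected within |pi - dphi| < 0.6
    of the trigger hadrons (the recoil acceptance)."""
    counts, _ = np.histogram(jet_pts, bins=pt_bins)
    return counts / float(trigger_count)

def delta_recoil(n_sig, sig_jet_pts, n_ref, ref_jet_pts, pt_bins):
    """Delta_recoil: signal (e.g. 20-50 GeV/c trigger) minus
    reference (e.g. 8-9 GeV/c trigger) per-trigger yields.
    The combinatorial jet yield, being trigger-independent,
    cancels in the difference."""
    return (recoil_yield(n_sig, sig_jet_pts, pt_bins)
            - recoil_yield(n_ref, ref_jet_pts, pt_bins))

# Toy example: 1000 signal triggers, 5000 reference triggers.
pt_bins = np.arange(0.0, 60.0, 10.0)
rng = np.random.default_rng(1)
combinatorial_sig = rng.exponential(4.0, size=3000)   # same shape for both
combinatorial_ref = rng.exponential(4.0, size=15000)
true_recoil = rng.exponential(12.0, size=400) + 10.0  # only with hard triggers
sig = np.concatenate([combinatorial_sig, true_recoil])
print(delta_recoil(1000, sig, 5000, combinatorial_ref, pt_bins))

The per-trigger combinatorial contribution is the same (about three jets per trigger in this toy) in both classes and cancels, leaving only the hard recoil component, which is the point of the construction.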
Path length dependence of inclusive R_AA and jet v_n

The azimuthal asymmetry shown in Figure 2 provides a natural variation in the path length traversed by hard partons, and the orientation of the reaction plane can be reconstructed from the distribution of final state hadrons. Correlations with this reaction plane can therefore be used to investigate the path length dependence of partonic energy loss. The reaction plane dependence of inclusive particle R_AA demonstrates that energy loss is path length dependent (Adler et al., 2007a), as expected from models. The path length changes with collision centrality, system size, and angle relative to the reaction plane; however, the temperature and lifetime of the QGP also change when the centrality and system size are varied. When particle production is studied relative to the reaction plane angle, the properties of the medium remain the same while only the path length is changed. Because the eccentricity of the medium, and therefore the path length, can only be determined within a model, any attempt to determine the absolute path length is model dependent. Attempts to constrain the path length dependence of R_AA were explored in (Adler et al., 2007a). While these studies were inconclusive, they showed that R_AA is constant at a fixed mean path length and that there is no suppression for path lengths below L = 2 fm, indicating either that there is a minimum time a hard parton must interact with the medium or that there are substantial effects from surface bias. More conclusive statements would require more detailed comparisons to models. At high p_T, the single particle v_n in Equation 2 are dominated by jet production, and a non-zero v_2 indicates path length dependent jet quenching. Above 10 GeV/c, a non-zero v_2 is observed at RHIC (Adare et al., 2013a) and the LHC (Abelev et al., 2013a; Chatrchyan et al., 2012a) and can be explained by energy loss models (Abelev et al., 2013a). Above 10 GeV/c, v_3 in central collisions is consistent with zero (Abelev et al., 2013a). The v_n of jets themselves can be measured directly; however, only the jet v_2 has been measured (Aad et al., 2013a; Adam et al., 2016b). Figure 20 compares jet and charged particle v_2 from ATLAS and ALICE. The ALICE measurements are of charged jets, which are constructed only from charged particles and not corrected for the neutral component, with R = 0.2 and |η| < 0.7, while the ATLAS measurements are of reconstructed jets with R = 0.2 and |η| < 2.1. The v_2 observed by ALICE is higher than that observed by ATLAS, although consistent within the large uncertainties. The ALICE measurement is unfolded to correct for detector effects, but it is not corrected for the neutral energy contribution. Both measurements use methods to suppress the background which could lead to greater surface bias or a bias towards unmodified jets. The ALICE measurement requires a track above 3 GeV/c in the jet to reduce the combinatorial background. The ATLAS measurement requires the calorimeter jets used in the measurement to be matched to a 10 GeV track jet or to contain a 9 GeV calorimeter cluster. Because of the higher momentum requirement, the ATLAS measurement has a greater bias than the ALICE jet sample. These measurements provide some constraints on the path length dependence; however, this is not the only relevant effect. Theoretical calculations indicate that both event-by-event initial condition fluctuations and jet-by-jet energy loss fluctuations play a role in v_n at high p_T (Betz et al., 2017; Noronha-Hostler et al., 2016; Zapp, 2014a). This is perhaps not surprising, and is analogous to the importance of fluctuations in the initial state for measurements of the v_n due to flow. However, it does indicate that much more insight is needed from theory into which observables are most sensitive to path length dependence and to the role of fluctuations in energy loss.
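For orientation, one common way to quantify such an asymmetry is the event-plane average v_2 = <cos 2(φ_jet − Ψ_2)>; the sketch below assumes that method (the text does not specify which method each experiment used) and omits the event-plane resolution correction that a real analysis requires. The toy input and names are illustrative.

import numpy as np

def jet_v2_event_plane(jet_phi, psi2):
    """Raw jet v2 relative to the second-order event plane Psi_2.

    v2 = <cos 2(phi_jet - Psi_2)>; a real measurement divides by
    the event-plane resolution, which is omitted here."""
    return np.mean(np.cos(2.0 * (np.asarray(jet_phi) - np.asarray(psi2))))

# Toy jets preferentially emitted in the event plane (input v2 = 0.05).
rng = np.random.default_rng(2)
psi2 = 0.0
phi = rng.uniform(-np.pi, np.pi, 100000)
# Accept-reject against dN/dphi ~ 1 + 2*v2*cos(2 phi).
keep = rng.uniform(0.0, 1.1, phi.size) < (1.0 + 2 * 0.05 * np.cos(2 * phi)) / 1.1
print(jet_v2_event_plane(phi[keep], psi2))  # ~0.05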
Heavy quark energy loss

The jet quenching due to radiative energy loss is expected to depend on the species of the fragmenting parton. The simplest example is gluon jets, which are expected to lose more energy in the medium than quark jets due to their larger color factor. Similarly, the mass of the initial parton also plays a role, and the interpretation of this effect depends on the theoretical treatment of parton-medium interactions. Strong coupling calculations based on the AdS/CFT correspondence predict large mass effects at all transverse momenta, while in weak-coupling calculations based on pQCD, mass effects may arise from the "dead-cone" effect (Dokshitzer and Kharzeev, 2001), the suppression of gluon emission at small angles relative to a heavy quark, but may be limited to a small range of heavy-quark transverse momenta comparable to the heavy-quark mass. However, the relevance of the dead-cone effect in heavy ion collisions is debated (Aurenche and Zakharov, 2009). Searches for a decreased suppression of heavy flavor using single particles are still inconclusive due to large uncertainties, although they indicate that heavy quarks may indeed lose less energy in the medium. As shown in Figure 10, the R_AA of single electrons from decays of heavy flavor hadrons is within uncertainties of that of hadrons containing only light quarks. Measurements of single leptons are somewhat ambiguous because of the difference between the momentum of the heavy meson and the decay lepton. Since the mass effect is predicted to be momentum dependent, with negligible effects for p_T ≫ m, the decay kinematics may wash out any mass effect. The R_AA of D mesons is within uncertainties of the light quark R_AA (Adam et al., 2015a; Adamczyk et al., 2014b). Particularly at the LHC, these results may be somewhat ambiguous because D mesons may also be produced in the fragmentation of light quark or gluon jets; B mesons are much less likely to be produced by fragmentation. Preliminary measurements of the B meson R_AA show less suppression than for light mesons, although the uncertainties are large and prohibit strong conclusions (CMS, 2016b). Experimentally, heavy flavor jets are primarily identified using the relatively long lifetimes of hadrons containing heavy quarks, which result in decay products significantly displaced from the primary vertex. A variant of the secondary vertex mass, requiring three or more charged tracks, is also used to extract the relative contribution of charm and bottom quarks to various heavy flavor jet observables. However, these methods cannot discriminate between heavy quarks from the original hard scattering, which then interact with the medium and lose energy, and those from a parton fragmenting into bottom or charm quarks (Huang et al., 2013). Requiring an additional B meson in the event could ensure a purer sample of bottom tagged jets (Huang et al., 2015); however, this is not currently experimentally accessible due to the limited statistics. Figure 21 shows a compilation of all current measurements of heavy flavor jets at the LHC (Chatrchyan et al., 2014a; Khachatryan et al., 2016d; Sirunyan et al., 2017b). The R_AA of bottom quark tagged jets is measured using the Pb+Pb and p+p data collected at √s_NN = 2.76 TeV. Bottom tagged jet measurements in p+Pb collisions have also been performed to study cold nuclear matter effects in comparison to expectations from PYTHIA at the 5 TeV center of mass energy (Khachatryan et al., 2016d). Jets associated with charm quarks in p+Pb collisions have also been studied with a variant of the bottom tagging algorithm (Sirunyan et al., 2017b). A strong suppression of the R_AA of jets associated with bottom quarks is observed in Pb+Pb collisions, while the R_pPb is consistent with unity. These CMS measurements demonstrate that jet quenching does not have a strong dependence on parton mass and flavor, at least in the jet p_T range studied (Chatrchyan et al., 2014a; Khachatryan et al., 2017c).
The charm jet R_pPb is also consistent with negligible cold nuclear matter effects when compared with the measurements from p+p collisions.

Summary of experimental evidence for partonic energy loss in the medium

Partonic energy loss in the medium is demonstrated by numerous measurements of jet observables. To date, the most precise quantitative constraints on the properties of the medium come from comparisons of R_AA to models by the JET collaboration (Burke et al., 2014). The interpretation of R_AA as partonic energy loss is confirmed by measurements of dihadron, γ-hadron, jet-hadron, hadron-jet, and jet-jet correlations. The assumptions about the background contribution and the biases of these measurements vary widely, so the fact that they all lead to a coherent physical interpretation strengthens the conclusion that they are due to partonic energy loss in the medium. This energy loss scales with the energy density of the system rather than the system size. Reaction plane dependent inclusive particle R_AA, inclusive particle v_2, and jet v_2 indicate that this energy loss is path length dependent, perhaps requiring a parton to traverse a minimum of around 2 fm of QGP to lose energy. Comparison of jet v_n to models indicates that jet-by-jet fluctuations in partonic energy loss significantly impact reaction plane dependent measurements; however, this is not yet fully understood theoretically. Measurements of heavy quark energy loss are consistent with expectations from models, but they are also consistent with the energy loss observed for gluons and light quarks. Studies of heavy quark energy loss will improve substantially with the slated increases in luminosity and detector upgrades. The STAR heavy flavor tracker has already enabled higher precision measurements of heavy flavor at RHIC, and one of the core goals of the proposed detector upgrade, sPHENIX, is precision measurements of heavy flavor jets. Run 3 at the LHC will enable higher precision measurements of heavy flavor, including studies of heavy flavor jets in the lower momentum region, which may be more sensitive to mass effects. The key question for the field is how to constrain the properties of the medium further. The Monte Carlo models the Jetscape collaboration is developing will include both hydrodynamics and partonic energy loss, and the Jetscape collaboration plans Bayesian analyses similar to (Bernhard et al., 2016; Novak et al., 2014) incorporating jet observables. These models will also enable the exact same analysis techniques and background subtraction methods to be applied to data and theoretical calculations. We propose including single particle R_AA (including particle type dependence), jet R_AA (with experimental analysis techniques applied), high momentum single particle v_2, jet v_2, hadron-jet correlations, and I_AA from both γ-hadron and dihadron correlations. The analysis method for all of these observables should be replicable in Monte Carlo generators. We omit A_J because a majority of these measurements are not corrected for detector effects. Bayesian analyses comparing theoretical calculations to data may be the best avenue for constraining the properties of the medium using measurements of jets. This approach is also likely to improve our understanding of which observables are most useful for constraining models.

C. Influence of the medium on the jet

Section III.B examined the evidence that partons lose energy in the medium, but did not examine how partons interact with the medium.
Understanding modifications of the jet by the medium requires a bit of a paradigm shift. As highlighted in Section II, a measurement of a jet is not a measurement of a parton but a measurement of the final state hadrons generated by the fragmentation of the parton. Final state hadrons are grouped into the jet (or not) based on their spatial correlations with each other (and therefore with the parton). Whether the lost energy retains its spatial correlation with the parent parton depends on whether the lost energy has had time to equilibrate in the medium. If a bremsstrahlung gluon does not reach equilibrium with the medium, it will still be correlated with the parent parton when it fragments. Interactions with the medium shift energy from higher momentum final state particles to lower momentum particles and broaden the jet. Similar apparent modifications could occur if partons from the medium become correlated with the hard parton through medium interactions (Casalderrey-Solana et al., 2017). Whether this lost energy is reconstructed as part of a jet depends on the jet finding algorithm and its parameters. Whereas the observation that energy is lost is relatively straightforward, there are many different ways in which the jet may be modified, and we cannot be sure which mechanisms actually occur in which circumstances until we have measured observables designed to look for these effects. There are several different observables indicating that jets are indeed modified by the medium, each with different strengths and weaknesses. We distinguish between mature observables, those which have been measured and published, usually by several experiments, and new observables, those which have either only been published recently or are still preliminary. Mature observables largely focus on the average properties of jets as a function of variables which we can either measure directly or calculate straightforwardly, such as momentum and the position of particles in a jet. These include dihadron correlations (h-h); correlations of a direct photon or Z with either a hadron or a reconstructed jet (γ-h and γ-jet); the jet shape (ρ(r)); the dijet asymmetry (A_J); the momentum distribution of particles in a reconstructed jet, called the fragmentation function (D_jet(z), where z = p_T/E_jet); the identification of constituents (PID); and heavy flavor jets (HF jets). Where our experimental measurements of these observables have limited precision, this is due either to the limited production cross section (heavy flavor jets and correlations with direct photons) or to limitations in our understanding of the background (identified particles). Our improving understanding of parton-medium interactions has largely motivated the search for new, more differential observables. Partonic energy loss is a statistical process, so ensemble measurements such as the average distribution of particles in a jet, or the average fractional energy loss, are important but can only give a partial picture of partonic energy loss. Just as fluctuations in the initial positions of nucleons must be understood to properly interpret the final state anisotropies of the medium, fluctuations play a key role in partonic interactions with the medium. The average shape and energy distribution of a jet is smooth, but each individual jet is a lumpy object. These new observables include the jet mass (M_jet), the N-subjettiness (τ_N), LeSub, the splitting function (z_g), the dispersion (p_T^D), and the girth (g).
We leave the definitions of these variables to the following sections and focus our discussion on observables which have been measured in heavy ion collisions, omitting those which have only been proposed to date. In general these observables are sensitive to the properties and structure of individual jets, and they are adapted from advances in jet measurements in particle physics. Investigations of new observables are important because they will allow access to well defined pQCD observables, which increases the sensitivity of our measurements to the properties of the QGP. The goal of each new observable is to construct something that is sensitive to properties of the medium that our mature observables are not sufficiently sensitive to, or to disentangle physics processes that are not directly related to the medium properties, such as the difference in fragmentation between quark and gluon jets. Most measurements of these new observables are still preliminary and we therefore avoid drawing strong conclusions from them. Our understanding of these observables is still developing, particularly our understanding of how they are impacted by analysis cuts and by the approach used to remove background effects. An observable which is highly effective for, say, distinguishing between quark and gluon jets in p+p collisions may not be as effective in heavy ion collisions. We summarize the current status of observables sensitive to medium modifications of jets in Table III. This list of observables also shows the evolution of the field. Early on, due to statistical limitations, studies focused on dihadron correlations. These measurements are straightforward experimentally; however, they are difficult to calculate theoretically because all hadron pairs contribute and the kinematics of the initial hard scattering are poorly constrained. In contrast, as discussed in Section III.B.4, when direct photons are produced in the process q+g → q+γ, the initial kinematics of the hard scattered partons are known more precisely. In some kinematic regions these measurements are limited by statistics, and in others they are limited by the systematic uncertainty, predominantly from the subtraction of background photons from π0 decays. Measurements of reconstructed jets are feasible over a wider kinematic region, but the kinematics of the initial hard scattering are less well constrained. Nearly all measurements are biased towards quarks for the reasons discussed in Section II; however, it may be possible to tune the bias either by using identified particles or by using new observables that select for particular fragmentation patterns. Table III summarizes whether modifications, particularly broadening and softening, have been observed with each observable and which experiments have measured them. This table demonstrates that each measurement has strengths and weaknesses and that all observations contribute to our current understanding. Modifications of the jet structure have been observed for most observables, but not all. Since each observable is sensitive to different modifications, all provide useful input for differentiating between jet quenching models and for understanding the effects of different types of initial and final state processes. We begin our discussion of measurements indicating modification of jets by the medium with mature observables. For each observable we revisit these issues in a discussion of what we have learned from that observable.
Fragmentation functions with jets

Fragmentation functions are a measure of the distribution of final state particles resulting from a hard scattering and represent the sum of parton fragmentation functions D_i^h, where i represents each parton type (u, d, g, etc.) contributing to the final distribution of hadrons h. Typically, fragmentation functions are measured as a function of z or ξ, where z = p_h/p and ξ = −ln(z), with p the momentum of the parton produced by the hard scattering. Jet reconstruction can be used to determine the jet momentum p_jet as an approximation of the parton momentum p, while the momenta of the hadrons, p_h, are measured for each hadron clustered into the jet by the jet reconstruction algorithm. In collider experiments, the transverse momentum p_T is typically substituted for the total momentum p in the fragmentation function. It should be noted that this is not precisely the same observable as what is commonly referred to as the fragmentation function by theorists. The fragmentation functions for jets in Pb+Pb collisions at √s_NN = 2.76 TeV have been measured by the ATLAS (Aad et al., 2014c) and CMS (Chatrchyan et al., 2012c, 2014c) Collaborations. The ratios of the fragmentation functions for several different centrality bins to the most peripheral centrality bin are shown in Figure 22. The most central collisions show a significant change in the average fragmentation function relative to peripheral collisions. At low z there is a noticeable enhancement, followed by a depletion at intermediate z. This suggests that the energy lost by mid to high momentum hadrons is redistributed into low momentum particle production. We note that this corresponds to only a few additional particles and is a small fraction of the energy that R_AA, A_J, and the other energy loss observables discussed in Section III.B indicate is lost. Arguably, this is the most direct observation of the softening of the fragmentation function expected from partonic energy loss in the medium. However, the definition of a fragmentation function in Equation 1 uses the momentum of the initial parton and, as discussed in Section II, a jet's momentum is not the same as the parent parton's momentum. Fragmentation functions measured with jets with large radii are approximately the same as the fragmentation functions in Equation 1, but this is not true for the jets with smaller radii measured in heavy ion collisions. It is important to note that the initial fragmentation measurements from the LHC used only dijet samples with large momentum constituents (p_T > 4 GeV/c), which indicated that there was no modification of the fragmentation functions (Chatrchyan et al., 2012c). With increased statistics and improved background estimation techniques, these fragmentation functions were re-measured later with inclusive jets and constituent tracks with p_T > 1 GeV/c using the 2011 data. Figure 23 compares the two CMS measurements using the 2010 and 2011 data. The initial 2010 analysis did not include lower momentum jet constituents due to the difficulty of background subtraction in those data (Chatrchyan et al., 2012c, 2014c). Even though the two measurements are consistent, the 2010 data in isolation indicate that fragmentation is not modified, while the 2011 data, which extend to lower momenta and use a less biased jet sample, clearly show modification at low momenta (high ξ).
This highlights the difficulty of drawing conclusions from a single measurement, particularly when neglecting possible biases: the 2010 analysis covered a more restricted kinematic region and focused on leading and subleading jets, and while the two measurements are consistent, the conclusion drawn from the 2010 data alone was that there was no apparent modification of the jet fragmentation functions. This illustrates how critical biases are to the proper interpretation of measurements. The high momentum of these jets, combined with the background subtraction and suppression techniques, also means that the data in both Figure 22 and Figure 23 are likely biased towards quark jets.

Boson tagged fragmentation functions

As described previously, bosons can be used to tag the initial kinematics of the hard scattering. For fragmentation functions, this gives access to the initial parton momentum in the calculation of the fragmentation variable z. At the top Au+Au collision energy at RHIC, √s_NN = 200 GeV, there have been no direct measurements of fragmentation functions from reconstructed jets so far; however, γ-hadron correlations have been measured in both p+p and Au+Au collisions. The fragmentation function was measured in p+p collisions at RHIC as a function of ξ (Adare et al., 2010b) and is shown in Figure 24. The p+p results agree well with the TASSO measurements of the quark fragmentation function in electron-positron collisions, which is consistent with the production of a quark jet opposite the direct photon as expected in Compton scattering. Using the p+p results as a reference, direct photon-hadron correlations were measured in Au+Au collisions at RHIC (Adare et al., 2013b). The resulting I_AA are shown in Figure 25 as a function of ξ = ln(1/z) = ln(p_jet/p_had). A suppression is observed for ξ < 1 (z > 0.4), while an enhancement is observed for ξ > 1 (z < 0.4). This suggests that energy lost at high z is redistributed to low z. Comparing these results to the results from STAR (Abelev et al., 2010c; Adamczyk et al., 2016) suggests that this is not a z_T dependent effect but rather a p_T dependent effect. STAR measured direct photon-hadron correlations over a similar z_T range but does not observe the clear enhancement exhibited in the PHENIX measurement. However, STAR reaches low values of z_T by increasing the trigger photon p_T, while PHENIX reaches low z_T by decreasing the associated hadron p_T. Preliminary PHENIX results as a function of photon p_T are consistent with the conclusion that modifications of fragmentation depend on associated particle p_T rather than z_T. Furthermore, STAR does observe an enhancement for jet-hadron correlations with hadrons of p_T < 2 GeV/c, which is consistent with the PHENIX direct photon-hadron observation. The direct photon-hadron correlations also suggest that the low p_T enhancement occurs at wide angles with respect to the axis formed by the hard scattered partons. Figure 25 shows the yield measured by PHENIX for different ∆φ windows on the away-side; the enhancement is most significant for the widest window, |∆φ − π| < π/2, a shift consistent with expectations from energy loss models.
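Since the variables z and ξ recur throughout this section, a minimal Python sketch of the bookkeeping may help; it is illustrative only (the function name and toy numbers are mine), and real analyses correct for detector response and background.

import numpy as np

def frag_variables(hadron_pt, scale_pt):
    """Fragmentation variables z = pT(h)/pT(scale) and xi = -ln(z).

    scale_pt is the reconstructed jet pT in jet-based measurements,
    or the direct photon pT in boson-tagged measurements."""
    z = np.asarray(hadron_pt) / float(scale_pt)
    return z, -np.log(z)

# Toy jet: 40 GeV/c scale with a few constituents.
z, xi = frag_variables([20.0, 8.0, 4.0, 1.0], 40.0)
print(np.round(z, 3))   # [0.5   0.2   0.1   0.025]
print(np.round(xi, 2))  # [0.69  1.61  2.3   3.69]

Low momentum constituents thus sit at high ξ, which is where the modifications discussed above appear.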
Dihadron correlations

Measurements of dihadron correlations are sensitive to modifications of fragmentation, although the interpretation is complicated because the initial kinematics of the hard scattering are poorly constrained. Differences observed in the correlations can be due either to medium interactions or to changes in the parton spectrum. At high p_T there are no indications of modification of the near- or away-side at midrapidity in d+Au collisions (Adler et al., 2006a,d), so any effects observed in A+A are hot nuclear matter effects, and either d+Au or p+p can be used as a reference for A+A collisions. The near-side peak can be used to study the angular distribution of momentum and particles around the triggered jet. The away-side peak is wider than the near-side due to the resolution of the triggered jet peak axis and the effect of the acoplanarity momentum vector, k_T. Dihadron correlations have been measured in p+p collisions to determine the intrinsic k_T; measurements of p_T,pair = √2 k_T as a function of √s are shown in Figure 26. The effect of the nucleus on k_T has been studied in d+Au collisions at 200 GeV (Adler et al., 2006d) and in p+Pb collisions at 5.02 TeV (Adam et al., 2015b) via dihadron correlations and reconstructed jets, respectively. The dihadron measurements in d+Au are consistent with the PHENIX p+p measurements shown in Figure 26, while the p+Pb dijet results agree with PYTHIA expectations. Since no broadening has been observed in p+Pb or d+Au collisions, any broadening of the away-side jet peak in A+A collisions would be the result of modifications from the QGP. Assuming this broadening is purely from radiative energy loss, the transport coefficient q̂ can be extracted directly from a measurement of k_T according to q̂ ∝ k_T² (Tannenbaum, 2017). Figure 27 shows the Gaussian widths of the near-side peak in ∆φ and ∆η as a function of p_T^t (for 1.5 GeV/c < p_T^a < p_T^t), of p_T^a (for 3 < p_T^t < 6 GeV/c), and of the average number of participant nucleons N_part, for d+Au, Cu+Cu, and Au+Au collisions at √s_NN = 62.4 and 200 GeV (Agakishiev et al., 2012c). The near-side is broader in both ∆φ and ∆η in central collisions. This broadening does not have a strong dependence on the angle of the trigger particle relative to the reaction plane. One interpretation is that jet-by-jet fluctuations in partonic energy loss are more significant than path length dependence for this observable (Zapp, 2014a). Higher energy jets have higher particle yields and are more collimated, so if the changes were due to an increase in the average parton energy, the yield would increase but the width would decrease. In contrast, interactions with the medium would lead to broadening and to the softening of the fragmentation function, which would lead to more particles. The near-side yields are not observed to be modified (Agakishiev et al., 2012c), although I_AA at RHIC is also consistent with the slight enhancement seen at the LHC (Aamodt et al., 2012). This indicates that the increase in width is most likely due to medium interactions rather than changes in the parton spectra.
Recent studies of the away-side do not indicate a measurable broadening, at least for the low momenta in this study (4 < p_T^t < 6 GeV/c, p_T^a > 1.5 GeV/c). This is in contrast to earlier studies, which neglected the odd v_n in the background subtraction and indicated dramatic shape changes. These earlier studies are discussed in greater detail in Section III.D.3 because the modifications observed were generally interpreted as an impact of the medium on the jet. We note that broadening is observed on the away-side for jet-hadron correlations, as discussed below. The current apparent lack of broadening in dihadron correlations may indicate that this is not the most sensitive observable, because of the decorrelation between the trigger on the near-side and the angle of the away-side jet. It may also be a kinematic effect, because modifications are extremely sensitive to momentum. The away-side I_AA decreases with increasing p_T^a, indicating a softening of the fragmentation function of surviving jets. A large collection of experimental measurements in e+e− collisions shows that jets initiated by gluons exhibit differences with respect to jets from light-flavor quarks (Abreu et al., 1996; Acton et al., 1993; Akers et al., 1995; Barate et al., 1998; Buskulic et al., 1996). First, the charged particle multiplicity is higher in gluon jets than in light-quark jets. Second, the fragmentation functions of gluon jets are considerably softer than those of quark jets. Finally, gluon jets appear to be less collimated than quark jets. These differences have already been exploited to differentiate between gluon and quark jets in p+p collisions (Collaboration, 2013a). The simplest and most studied variable used experimentally is the multiplicity, the total number of constituents of the reconstructed jet. Since gluon hadronization produces jets which are 'wider' than jets induced by quark hadronization, jet shapes could be studied with jet width variables to distinguish quark and gluon jets. Since there are significant differences in baryon and meson production in A+A collisions compared to p+p collisions, such differences may exist for jets. Furthermore, energy loss is different for quark and gluon jets, so species-dependent energy loss may mean that there are differences between jets with different types of leading hadrons. These differences may be observed through comparisons of jets with leading baryons and mesons, or with light and strange hadrons. The OPAL collaboration measured the ratio of K_S^0 production in gluon jets to that in quark jets in e+e− collisions to be 1.10 ± 0.02 ± 0.02, and the ratio of Λ production in gluon jets to that in quark jets to be 1.41 ± 0.04 ± 0.04 (Ackerstaff et al., 1999), meaning that jets containing a Λ or a proton are somewhat more likely to arise from gluon jets than jets which do not contain a baryon. This difference is small; however, a large difference in the interactions between quark and gluon jets in heavy ion collisions may be observable. Measurements of dihadron correlations with identified leading triggers may be sensitive to these effects. Studies of identified strange trigger particles found a somewhat higher yield in jets with a leading K_S^0 than in those with a leading unidentified charged hadron or Λ at the same momentum (Abelev et al., 2016).
This was also observed in d+Au collisions, indicating that the more massive leading Λ simply takes a larger fraction of the jet energy. The slight centrality dependence indicates there may be medium effects; however, these could arise from differences between quark and gluon jets or between strange and non-strange jets. Ultimately these data are inconclusive due to their low precision. Dihadron correlations with identified pion and non-pion triggers (Adamczyk et al., 2015), shown in Figure 28 for 0-10% central Au+Au collisions at √s_NN = 200 GeV and minimum bias d+Au collisions, observed a higher yield in jets with a leading pion than in those with a leading kaon or proton. This difference was larger in Au+Au collisions than in d+Au collisions, which (Adamczyk et al., 2015) proposes may be due to fewer baryon trigger particles coming from jets because of recombination, and may be an indication of differences in partonic energy loss for quarks and gluons in the medium. Both of these results could be impacted by several effects: differences between quark and gluon jets in the vacuum, differences in energy loss in the medium for quark and gluon jets, and modified fragmentation in the medium. Since both studies observe differences, at least some of these effects are present in the data; however, the data cannot distinguish which.

Jet-hadron correlations

Measurements of jet-hadron correlations are sensitive to the broadening and softening of the fragmentation function, but have the advantage over dihadron correlations that the jet is more closely correlated with the kinematics of its parent parton than a high p_T hadron. Figure 29 shows jet-hadron correlations measured by CMS (Khachatryan et al., 2016a) as a function of ∆η from the trigger jet (symmetrized ∆η distributions for tracks with 1 < p_T < 2 GeV/c, with the difference between per-jet yields in Pb+Pb and p+p collisions shown in the bottom panels). Not shown here are the results as a function of ∆φ from the trigger jet; however, the conclusions were quantitatively the same. The jets in this sample had a resolution parameter of R = 0.3 and a leading jet p_T > 120 GeV/c in order to reduce the effect of the background on the trigger jet sample. The background removal for the jets reconstructed in Pb+Pb was done via the HF/Voronoi method, described in (CMS, 2013), a slightly different method than described in Section II. The effect of the combinatorial background on the distribution of associated tracks was removed by a sideband method, in which the background is approximated by the measured two dimensional correlations in the range 1.5 < |∆η| < 3.0. Jets in Pb+Pb are observed to be broader, with the greatest increase in the width for low momentum associated particles. This is consistent with expectations from partonic energy loss. These studies found that the subleading jet was broadened even more than the leading jet, indicating a bias towards selecting less modified jets as the leading jet. Jet-hadron correlations have also been studied at RHIC energies, where the width and yield of the away-side peak, rather than the associated particle correlations themselves, can be seen in Figure 30 for √s_NN = 200 GeV collisions, compared to YaJEM-DE model calculations (Renk, 2013b).
This figure shows the away-side widths and D_AA, the momentum-weighted difference of the away-side yields,

D_AA = Y_Au+Au ⟨p_T^assoc⟩_Au+Au − Y_p+p ⟨p_T^assoc⟩_p+p,

where Y_Au+Au and Y_p+p are the numbers of particles in the away-side in Au+Au and p+p collisions (Adamczyk et al., 2014a), for two different ranges of jet p_T. The width in p+p is consistent with that in Au+Au within uncertainties, although the uncertainties are large due to the large uncertainties in the v_n. The D_AA shows that momentum is redistributed within the jet, with enhancement (D_AA > 0) for associated particles with p_T < 2 GeV/c and suppression (D_AA < 0) above 2 GeV/c. This indicates that the suppression at high momenta is balanced by the enhancement at low momenta, which means that this change in the jet structure likely comes from modification of the jet rather than modification of the jet spectrum. The enhancement at low p_T occurs at the same associated momentum for both jet energies, which may indicate that the enhancement does not depend on the energy of the jet but on the momentum of the constituents.

Dijets

The LHC A_J measurements shown in Figure 15 show a significant energy imbalance for dijets due to medium effects in central collisions (Aad et al., 2010; Chatrchyan et al., 2011b), while RHIC A_J measurements suggest that the energy imbalance observed for jet cones of R = 0.2 can be recovered within a jet cone of R = 0.4 for measurable dijet events (Adamczyk et al., 2017b). The STAR measurements demonstrate that the energy imbalance is recovered when including low p_T constituents (Adamczyk et al., 2017b), also indicating a softening of the fragmentation function. Comparing these two results is complicated since they have very different surface biases, both due to the experimental techniques and the different collision energies. In order to interpret such comparisons and draw definitive conclusions, a robust Monte Carlo generator is required, because the differences in these observables are not analytically calculable. To develop a better picture of the transverse structure of the jets, it is best to measure observables specifically designed to probe the transverse direction. The effect on dijets along the direction transverse to the jet axis was studied by measuring the angular difference between the reconstructed jet axes of the leading and sub-leading jets (Aad et al., 2010; Chatrchyan et al., 2011b). These results are shown in Figure 15, and little change in the angular deflection of the sub-leading jet is observed in central Pb+Pb collisions compared to p+p collisions. It is important to point out that the tails in the p+p distribution may be due to 3-jet events, while those pairs in Pb+Pb events may be the result of dijets undergoing energy loss.

Jet Shapes

Another observable related to the structure of the jet is called the jet shape. This observable is constructed with the idea that the high energy jets we are interested in are roughly conical. First a jet finding algorithm is run to determine the axis of the jet, and then the transverse momenta of the tracks in concentric rings about the jet axis are summed together and divided by the total transverse jet momentum.
The differential jet shape observable ρ(r) is thus the radial distribution of the transverse momentum:

ρ(r) = (1/δr) (1/N_jet) Σ_jets Σ_{tracks ∈ (r_a, r_b)} p_T^track / p_T^jet,

where the jet cone is divided into rings of width δr with inner radius r_a and outer radius r_b, and r is the distance from the jet axis. The differential and integrated jet shape measurements by CMS are shown in Figure 31. For this CMS study, inclusive jets with p_T > 100 GeV/c, a resolution parameter R = 0.3, and constituent tracks with p_T > 1 GeV/c were used. The effect of the background on the signal jets was removed through the iterative subtraction technique described in Section II. The associated tracks were not explicitly required to be the constituent tracks; however, given that the momentum selection criteria are the same and that jets at this energy are conical, they are essentially the same. The effect of the background on the distribution of the associated particles was removed via an η reflection method, in which the analysis was repeated for an R = 0.3 cone at the same φ but opposite sign η. This preserves the flow effects in a model independent way in the determination of the background. The differential jet shapes in the most central Pb+Pb collisions are broadened in comparison to measurements in p+p collisions at the same center of mass energy (Chatrchyan et al., 2013a). As shown in other measurements, the effect is centrality dependent. These measurements demonstrate that the modification is enhanced with increasing angle from the jet axis, indicating a broadening of the jet profile, along with a depletion near r ≈ 0.2.
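The binned sum above translates directly into code; the following is a minimal numpy sketch of ρ(r) for a single jet. The toy input is illustrative, and a measurement averages over many jets and subtracts background as described above.

import numpy as np

def jet_shape_rho(track_pt, track_dr, jet_pt, r_max=0.3, dr=0.05):
    """Differential jet shape rho(r) for one jet.

    track_dr: angular distance of each track from the jet axis.
    Returns bin centers and rho(r) = (1/dr) * sum(pT in ring) / pT(jet)."""
    edges = np.arange(0.0, r_max + dr, dr)
    pt_in_ring, _ = np.histogram(track_dr, bins=edges, weights=track_pt)
    rho = pt_in_ring / (dr * jet_pt)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, rho

# Toy jet: a hard core plus a soft wide tail.
track_pt = np.array([40.0, 20.0, 5.0, 3.0, 2.0])
track_dr = np.array([0.02, 0.06, 0.12, 0.21, 0.27])
centers, rho = jet_shape_rho(track_pt, track_dr, jet_pt=70.0)
print(np.round(rho, 2))
print(np.sum(rho) * 0.05)  # integrates to 1 when all pT lies inside r_max

Broadening of the kind discussed above would show up in this sketch as weight migrating from the innermost bins to the outer rings.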
Particle composition

Theory predicts higher production of baryons and strange particles in jets fragmenting in the medium relative to jets fragmenting in the vacuum (Sapeta and Wiedemann, 2008). The only published study searching for modified particle composition in jets in heavy ion collisions is the Λ/K_S^0 ratio in the near-side jet-like correlation of dihadron correlations in Cu+Cu collisions at √s_NN = 200 GeV by STAR (Abelev et al., 2016), shown in Figure 32 for 3 < p_T^trigger < 6 GeV/c and 2 < p_T^associated < 3 GeV/c in 0-60% central collisions, compared to calculations from PYTHIA (Sjostrand et al., 2006) using the Perugia 2011 tune (Skands, 2010) and Tune A (Field and Group, 2005). This measurement indicated that particle ratios in the near-side jet-like correlation are comparable to the inclusive particle ratios in p+p collisions. At high momenta, the inclusive particle ratios in p+p collisions are expected to be dominated by jet fragmentation and are therefore a good proxy for direct observation of the particle ratios in reconstructed jets. PYTHIA studies show that the inclusive particle ratios in p+p collisions are approximately the same as the particle ratios in dihadron correlations with similar kinematic cuts; differences are well below the uncertainties on the experimental measurements. The consistency between the Λ/K_S^0 ratio in the jet-like correlation in Cu+Cu collisions and the inclusive ratio in p+p collisions is therefore interpreted as evidence that the particle ratios in jets are the same in A+A collisions and p+p collisions, i.e., that at least the particle ratios are not modified. In contrast, the inclusive Λ/K_S^0 ratio reaches a maximum near 1.6 (Agakishiev et al., 2012b), a few times that in p+p collisions. Preliminary measurements from STAR dihadron correlations (Suarez, 2012) and from the ALICE collaboration, in both dihadron correlations (Veldhoen, 2013) and reconstructed jets (Kucera, 2016; Zimmermann, 2015), support this conclusion. However, the experimental uncertainties are large, and for the studies with dihadron correlations, results are not available for the away-side, while the near-side is known to be surface biased.

LeSub

One of the new observables constructed in an attempt to define well defined QCD observables is LeSub, the difference between the transverse momenta of the leading and subleading constituents of a jet:

LeSub = p_T^lead track − p_T^sublead track.

LeSub characterizes the hardest splitting, so it should be insensitive to background; however, it is not collinear safe and therefore cannot be calculated reliably in pQCD. It agrees well with PYTHIA simulations of p+p collisions and is relatively insensitive to the PYTHIA tune (Cunqueiro, 2016), which is not surprising, as the hardest splittings in PYTHIA do not depend on the tune. LeSub calculated in PYTHIA agrees well with the data from Pb+Pb collisions for R = 0.2 charged jets. This indicates that the hardest splittings are likely unaffected by the medium. Modifications may depend on the jet momentum, as the ALICE results are for relatively low momentum jets at the LHC. The ALICE measurement is also for relatively small jet radii, which preferentially select more collimated fragmentation patterns, but it indicates that observables that depend on the first splittings are insensitive to the medium.

Jet Mass

In a hard scattering the partons are produced off-shell, and the amount by which they are off-shell is the virtuality (Majumder and Putschke, 2016). When a jet showers in vacuum, at each splitting the virtuality is reduced and momentum is produced transverse to the original scattered parton's direction, until the partons are on-shell and hadronize. For a vacuum jet, if the four-vectors of all of the daughters of the original parton are combined, the mass calculated from the combination of the daughters is precisely equal to the virtuality. The virtuality of the hard scattered parton is important as it is directly related to how broad the jet itself is, since it determines how much momentum transverse to the jet axis the daughters can have. The mass of a jet might therefore serve as a way to better characterize the state of the initial parton. It is important to construct observables where the only difference between p+p collisions and heavy ion collisions is due to the effects of jet quenching, and not the result of biases in the jet selection. Jet mass may allow a much closer comparison between heavy ion and p+p observables by selecting more similar populations of parent partons than could be achieved by selecting differentially in transverse momentum alone.
Secondly, the measured jet mass itself could be affected by in-medium interactions, as the virtuality of the jet can increase at a given splitting due to the medium interaction, unlike in the vacuum case. Figure 33 shows the ALICE measurement of the fully corrected mass distribution of anti-k_T charged jets with R = 0.4 in the 10% most central Pb+Pb collisions (Acharya et al., 2017). No difference is observed between the PYTHIA Perugia 2011 tune (Skands, 2010) and the data from Pb+Pb collisions in any jet p_T bin, indicating no apparent modification within uncertainties. In addition to PYTHIA, these distributions were compared to three different quenching models: JEWEL (Zapp, 2014a) with recoil on, JEWEL with recoil off, and Q-PYTHIA (Armesto et al., 2009). Both Q-PYTHIA and JEWEL with the recoil on produce jets with a larger mass than the data, whereas JEWEL with the recoil off gives slightly lower values than the data. This implies that the jet mass distribution in these energy and momentum ranges is rather insensitive to medium effects, as JEWEL and Q-PYTHIA both incorporate medium effects whereas PYTHIA describes vacuum jets. The agreement between PYTHIA and the data could also indicate that the jets selected in this analysis were biased towards those that fragmented in a vacuum-like manner. More differential measurements are needed to determine the usefulness of the jet mass variable.

Dispersion

Since quark jets have harder fragmentation functions, they are more likely to produce jets with hard constituents that carry a significant fraction of the jet energy. This can be studied with the dispersion

p_T^D = √(Σ_i p_T,i²) / Σ_i p_T,i.

This observable was initially developed to distinguish between quark and gluon jets, with quark jets yielding a larger mean p_T^D (Collaboration, 2013a). The ALICE experiment has measured p_T^D in Pb+Pb collisions (Cunqueiro, 2016), shown unfolded in Figure 34. The data from Pb+Pb collisions for R = 0.2 charged jets with transverse momentum between 40 and 60 GeV/c are compared to PYTHIA with the Perugia 2011 tune, to JEWEL calculations, and to quark and gluon PYTHIA templates. In Pb+Pb collisions, the mean p_T^D was found to be larger than the PYTHIA reference, which had been validated by comparisons with p+p data. This may indicate either a selection bias towards quark jets or harder fragmenting jets.

Girth

The jet girth, g, is another new observable describing the shape of a jet: the p_T weighted width of the jet,

g = Σ_i (p_T,i / p_T,jet) r_i,

where r_i is the angular distance between particle i and the jet axis and the sum runs over the jet constituents. If jets are broadened by the medium, we would expect g to increase; conversely, if jets are collimated, g would be reduced. While the distributions overlap, gluon jets are broader and have a higher average g than quark jets. The ALICE experiment has shown that distributions of g in p+p collisions agree well with PYTHIA distributions, indicating that it is a reasonable probe and that PYTHIA can be used as a reference.
In Pb+Pb collisions, the ALICE experiment found that g is slightly shifted towards smaller values compared to the PYTHIA reference for R = 0.2 charged jets with 40 < p_T^ch < 60 GeV/c (Cunqueiro, 2016), although the significance of this shift is unclear. This indicates that the core may appear to be more collimated in Pb+Pb collisions than in p+p collisions. The measurements are compared to JEWEL and PYTHIA calculations in Figure 35. JEWEL includes partonic energy loss and predicts little modification of the girth in heavy ion collisions. The PYTHIA calculations include inclusive jets, quark jets, and gluon jets; the data are closest to the PYTHIA predictions for quark jets. This may be due to a bias towards quarks in surviving jets in Pb+Pb collisions. One of the unanswered questions regarding jets in heavy ion collisions is whether jets start to fragment while they are in the medium, or whether they simply lose energy to the medium and then fragment after reaching the surface, similarly to fragmentation in vacuum. If the latter is true, jet quenching would be described as a shift in parton p_T followed by vacuum fragmentation, which would mean that jet shapes in Pb+Pb collisions would be consistent with jet shapes in p+p collisions. If g is shifted, this favors fragmentation in the medium; if it is not, it favors vacuum fragmentation. These observations are qualitatively consistent with the measurements of p_T^D and of the jet shape discussed above.
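Because M_jet, LeSub, p_T^D, and g are all simple functions of the jet constituents, they can be illustrated together. The following is a minimal numpy sketch using the definitions above; it assumes massless constituents for the mass calculation, uses toy inputs, and is not any collaboration's analysis code.

import numpy as np

def jet_structure(pt, eta, phi):
    """Constituent-based jet observables for one jet.

    Assumes massless constituents; pt, eta, phi are arrays."""
    pt, eta, phi = map(np.asarray, (pt, eta, phi))
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    e = pt * np.cosh(eta)                      # massless: E = |p|
    # Jet four-vector and axis.
    E, Px, Py, Pz = e.sum(), px.sum(), py.sum(), pz.sum()
    jet_pt = np.hypot(Px, Py)
    jet_eta = np.arcsinh(Pz / jet_pt)
    jet_phi = np.arctan2(Py, Px)
    # Angular distance of each constituent from the jet axis.
    dphi = np.angle(np.exp(1j * (phi - jet_phi)))
    r = np.hypot(eta - jet_eta, dphi)
    mass = np.sqrt(max(E**2 - Px**2 - Py**2 - Pz**2, 0.0))
    ptsort = np.sort(pt)[::-1]
    lesub = ptsort[0] - ptsort[1] if pt.size > 1 else ptsort[0]
    ptd = np.sqrt(np.sum(pt**2)) / np.sum(pt)
    girth = np.sum(pt * r) / jet_pt
    return {"M_jet": mass, "LeSub": lesub, "pTD": ptd, "girth": girth}

# Toy jet with four charged constituents.
print(jet_structure(pt=[30.0, 10.0, 4.0, 2.0],
                    eta=[0.10, 0.15, 0.02, 0.25],
                    phi=[1.00, 1.05, 0.92, 1.18]))

In this toy setting, a quark-like jet (few hard, collimated constituents) gives a large p_T^D and small g, while a gluon-like jet (many soft, wide constituents) gives the opposite, which is the discrimination exploited above.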
Grooming

Jet grooming algorithms (Butterworth et al., 2008; Dasgupta et al., 2013; Ellis et al., 2010; Krohn et al., 2010) attempt to remove soft radiation from the leading partonic components of the jet, isolating the larger scale structure. The motivation for algorithms such as jet grooming was to develop observables which can be calculated with perturbative QCD and which are relatively insensitive to the details of the soft background. This allows us to determine whether the medium affects the jet formation process from the hard process through hadronization, or whether the parton loses energy to the medium with fragmentation affected only at much later stages. It is important to realize that the answers to these questions will depend on the jet energy and momentum, so there will not be a single definitive answer. Jet grooming allows separation of the effects of the length scale from the effects of the hardness of the interaction. Essentially, this will allow us to see whether we are scattering off pointlike particles in the medium or off something with structure. However, to properly apply this class of algorithms to the data, a precision detector is needed. The jet grooming algorithm takes the constituents of a jet, recursively declusters the jet's branching history, and discards the resulting subjets until the transverse momenta p_T,1 and p_T,2 of the current pair fulfill the soft drop condition (Larkoski et al., 2014):

min(p_T,1, p_T,2) / (p_T,1 + p_T,2) > z_cut θ^β,

where θ is a measure of the relative angular distance between the two subjets and z_cut and β are parameters which set how strict the soft drop condition is. For the heavy ion analyses conducted so far, β has been set to zero and z_cut to 0.1. A measurement of the first splitting of a parton in heavy ion collisions was performed by the CMS collaboration in Pb+Pb collisions at √s_NN = 5 TeV. The splitting function is defined as z_g = p_T,2/(p_T,1 + p_T,2), with p_T,2 the transverse momentum of the less energetic subjet and p_T,1 the transverse momentum of the more energetic subjet, applied to those jets that pass the soft drop condition outlined above. Figure 36 shows the ratio of z_g in Pb+Pb to that in p+p from CMS for several centrality intervals for jets within the transverse momentum range 160-180 GeV/c, with the p+p jet energy resolution smeared to match that in Pb+Pb (Sirunyan et al., 2017a). While the measured z_g distribution in peripheral Pb+Pb collisions agrees with the expected p+p measurement within uncertainties, a difference becomes apparent in more central collisions. This observation indicates that the splitting into two branches becomes increasingly unbalanced for more central collisions for jets in this transverse momentum range; it may indicate either a difference in the structure of jets in the two systems or an impact of the background. A similar preliminary measurement by STAR observes no modification of z_g (Kauder, 2017). The apparent modifications seen by CMS were proposed to be due to a restriction to subjets with a minimum separation between the two hardest subjets, R_12 > 0.1 (Milhano, 2017). This indicates that there may be modifications of z_g limited to certain classes of jets but not observed globally; this dependence may be a result of interactions with the medium. While grooming and measurements of the jet substructure are promising, we emphasize the need for a greater understanding of the impact of the large combinatorial background and of the bias of kinematic cuts on z_g.
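A minimal sketch of the soft drop selection and z_g for a single declustering step follows. It is illustrative only: a real analysis reclusters the jet (e.g. with the Cambridge/Aachen algorithm) and walks the full branching history, which is omitted here, and the toy numbers are assumptions.

def soft_drop_zg(pt1, pt2, dr12, r0=0.4, zcut=0.1, beta=0.0):
    """Return z_g if the pair passes the soft drop condition, else None.

    Condition: min(pt1, pt2)/(pt1 + pt2) > zcut * (dr12/r0)**beta.
    With beta = 0 (the heavy ion choice so far) this reduces to
    a pure momentum-fraction cut z > zcut."""
    z = min(pt1, pt2) / (pt1 + pt2)
    if z > zcut * (dr12 / r0) ** beta:
        return z
    return None  # groomed away; a full analysis declusters further

print(soft_drop_zg(90.0, 30.0, 0.2))   # 0.25 -> passes
print(soft_drop_zg(110.0, 5.0, 0.2))   # ~0.043 -> fails, returns None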
Subjettiness

The observable $\tau_N$ is a measure of how many hard cores there are in a jet. It was initially developed to tag jets from Higgs decays in high energy p+p collisions. A jet from a single parton usually has one hard core, but a hard splitting or a bremsstrahlung gluon would lead to an additional hard core within the jet. An increase in the fraction of jets with two hard cores could therefore be evidence of gluon bremsstrahlung.

The jet is reclustered into N subjets, and the following sum is computed over the tracks in the jet:

$$\tau_N = \frac{1}{R_0 \sum_i p_{T,i}} \sum_i p_{T,i}\, \min(\Delta R_{1,i}, \Delta R_{2,i}, \ldots, \Delta R_{N,i}),$$

where $\Delta R_{N,i}$ is the distance in $\eta$-$\phi$ between the $i$th track and the axis of the $N$th subjet, and the original jet has resolution parameter $R_0$. In the case that all particles are aligned exactly with one of the subjets' axes, $\tau_N$ will equal zero. In the case where there are more than N hard cores, a substantial fraction of tracks will be far from the nearest subjet axis; however, all tracks must have $\min(\Delta R_{1,i}, \Delta R_{2,i}, \ldots, \Delta R_{N,i}) \le R_0$ because they are contained within the original jet. The maximum value of $\tau_N$ is therefore one, the case when all jet constituents are at the maximum distance from the nearest subjet axis. Jets that have a low value of $\tau_N$ are therefore more likely to have N or fewer well defined cores in their substructure, whereas jets with a high value are more likely to contain at least N+1 cores. A shift in the distribution of $\tau_N$ in a jet population towards lower values can indicate fewer subjets, while a shift to higher $\tau_N$ can indicate more subjets.

The observable $\tau_2/\tau_1$ was measured by the ALICE experiment (Zardoshti, 2017). Similar to the approach in (Adam et al., 2015c; Adamczyk et al., 2017c), the background was subtracted using the coincidence of jets with a trigger hadron, comparing jets recoiling from a high momentum trigger hadron to those recoiling from a soft trigger hadron, which should have only a weak correlation with jet production; the result can be seen in Figure 37. A jet where this ratio is close to zero most likely has two hard cores. This observable is relatively insensitive to fluctuations in the background, as a fluctuation would have to carry a significant fraction of the jet momentum to modify it. The ALICE result shows that the structure of the jets was unmodified for R = 0.4 charged jets with $40 \le p_{T,\mathrm{jet}}^{ch} < 60$ GeV/c compared to PYTHIA calculations. This implies that medium interactions do not lead to extra cores within the jet, at least for the selection of jets in this measurement. As for many jet observables, this observable may be difficult to interpret for low momentum jets in a heavy ion environment.
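A minimal sketch of the $\tau_N$ calculation described above, reusing the delta_r helper from the girth example. The subjet axes would in practice come from reclustering the jet into exactly N subjets (for example with the $k_T$ algorithm); here we simply assume they are given.

```python
def tau_n(tracks, subjet_axes, r0):
    """N-subjettiness: pT-weighted distance of each track to the nearest of
    the N subjet axes, normalized by R0 so that 0 <= tau_N <= 1.
    `tracks` is a list of (pt, eta, phi); `subjet_axes` a list of (eta, phi)."""
    numerator = sum(pt * min(delta_r(eta, phi, ax_eta, ax_phi)
                             for ax_eta, ax_phi in subjet_axes)
                    for pt, eta, phi in tracks)
    return numerator / (r0 * sum(pt for pt, _, _ in tracks))
```

The ratio $\tau_2/\tau_1$ measured by ALICE is then simply tau_n(tracks, axes_2, r0) / tau_n(tracks, axes_1, r0), evaluated with the corresponding sets of subjet axes.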
Summary of experimental evidence for medium modification of jets

The broadening and softening of jets due to interactions with the medium is demonstrated clearly by several mature observables which measure the average properties of jets. These include fragmentation functions measured with both jets and bosons, widths of dihadron correlations, jet-hadron correlations, and measurements of the jet shape. On average, no change in the particle composition of jets in heavy ion collisions as compared to p+p collisions is observed. There are some indications from dihadron correlations that quark and gluon jets do not interact with the medium in the same way. These observables generally preferentially select quark jets over gluon jets, even in p+p collisions. Some of the observables have a strong survivor bias due to the kinematic cuts that are applied in order to reduce the combinatorial background. As our understanding of partonic energy loss has improved, the community has sought more differential observables. This is motivated in part by an increased understanding of the importance of fluctuations (while the average properties of jets are smooth, individual jets are lumpy) and by a desire to construct well defined QCD observables.

These new observables give us access to different properties of jets, such as allowing distinction between quark and gluon jets, and therefore may be more sensitive to the properties of the medium. Since the exploration of these observables is in its early stages, it is unclear whether we fully understand the impact of the background or of the kinematic cuts applied in the analyses. It is therefore unclear in practice how much additional information these observables can provide about the medium without applying them to Monte Carlo events with different jet quenching models. We encourage cautious optimism and more detailed studies of these observables. For future studies by the Jetscape collaboration aiming to maximize our understanding of the medium using a Bayesian analysis, we propose first producing comparisons between dihadron correlations, jet-hadron correlations, and γ-hadron correlations to ensure that the models have properly accounted for the path length dependence, initial state effects, and the basics of fragmentation and hadronization. We do not list $R_{AA}$ here as it is likely that this observable will be used to tune some aspects of the model, as it has been used in the past. For the most promising jet quenching models, we would propose that these studies be followed by comparisons of observables that depend more heavily on the details of the fragmentation but are still based on the average distribution, such as jet shapes, fragmentation functions, and particle composition. Finally, it would be useful to see the comparison of $z_g$ to models. We urge that initial investigations of the latter happen early so that the background effect can be quantified. We note that the same analysis techniques and selection criteria must be used for analyses of the experiment and of the models in order for the comparisons to be valid. This is particularly true for studies using reconstructed jets, where experimental criteria to remove the effects of the background can bias the sample of jets used in construction of the observables. We omit $A_J$ from consideration because nearly any reasonable model gives a reasonable value; thus it is not particularly differential. We also omit heavy flavor jets because current data do not give much insight into modifications of fragmentation, and it is not clear whether it will be possible experimentally to measure jets with a low enough $p_T$ that the mass difference between heavy and light quarks is relevant. Inclusion of new observables into these studies may increase the precision with which medium properties can be constrained, but it is critical to replicate the exact analysis techniques. In order to compare experimental data with other experimental data, or experimental data with theory, not only is it necessary for the analyses to be conducted the same way, as stated above, but they should also be on the same footing. Thus comparing unfolded results to uncorrected results is not useful. In general, we urge extreme caution in interpreting uncorrected results, especially for observables created with reconstructed jets. Since it is unclear how much the process of unfolding may bias the results, an important check would be to compare the raw results with the folded theory. However, this requires complete documentation of the raw results and the response matrix on the experimental side, and a complete treatment of the initial state, background, and hadronization on the theory side.
This comparison, which we could think of as something like a closure test, would still require that the same jet finding algorithms with the same kinematic selections are applied to the model.

D. Influence of the jet on the medium

The preceding sections have demonstrated that hard partons lose energy to the medium, most likely through gluon bremsstrahlung and collisional energy loss. Often an emitted gluon will remain correlated with the parent parton so that the fragments of both partons are spatially correlated over relatively short ranges ($R = \sqrt{\Delta\phi^2 + \Delta\eta^2} \lesssim 0.5$). Hadrons produced from the gluon may fall inside or outside the jet cone of the parent parton, depending on the jet resolution parameter. Whether or not this energy is then reconstructed experimentally as part of the jet depends on the resolution parameter and the reconstruction algorithm. For sufficiently large resolution parameters, the "lost" energy will still fall within the jet cone, so that the total energy clustered into the jet remains the same. "Jet quenching" is then manifest as a softening and broadening of the structure of the jet. The evidence for these effects was discussed in the previous section. If, however, a parton loses energy and that energy interacts with or becomes equilibrated in the medium, it may no longer have short range spatial correlations with the parent parton. This energy would then be distributed at distances far from the jet cone. Alternatively, the energy may have very different spatial correlations with the parent parton, so that it no longer looks like a jet formed in vacuum, and a jet finding algorithm may no longer group that energy with the jet that contains most of the energy of its parent parton. Evidence for these effects is difficult to find, both because of the large and fluctuating background contribution from the underlying event and because it is unclear how this energy would differ from the underlying event. We discuss both the existing evidence that there may be some energy which reaches equilibrium with the medium, and the ridge and the Mach cone, which are now understood to be features of the medium rather than indications of interactions of hard partons with the medium. We also discuss searches for direct evidence of Molière scattering off of partons in the medium.

Evidence for out-of-cone radiation

The dijet asymmetry measurements demonstrate momentum imbalance for dijets in central heavy ion collisions, implying energy loss, but do not describe where that energy goes. To investigate this, CMS looked at the distribution of momentum parallel to the axis of a high momentum leading jet in three regions (Chatrchyan et al., 2011b), shown schematically in Figure 38. The jet reconstruction used in this analysis was an iterative cone algorithm with a modification to subtract the soft underlying event on an event-by-event basis, the details of which can be found in (Kodolova et al., 2007). Each jet was selected with a radius R = 0.5 around a seed of minimum transverse energy of 1 GeV. Since energy can be deposited outside R = 0.5 even in the absence of medium effects, and medium effects are expected to broaden the jet, the momenta of all particles within a slightly larger region, R < 0.8, were summed, regardless of whether the particles were jet constituents or subtracted as background. This region is called in-cone and the region R > 0.8 is called out-of-cone.
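This partition is straightforward to express in code; the sketch below is our own illustration (reusing the delta_r helper defined earlier), not CMS analysis code.

```python
def split_in_out_of_cone(tracks, jet_eta, jet_phi, r_max=0.8):
    """Partition tracks into in-cone (delta_R < r_max from the jet axis)
    and out-of-cone (delta_R >= r_max). `tracks` holds (pt, eta, phi)."""
    in_cone, out_of_cone = [], []
    for pt, eta, phi in tracks:
        if delta_r(eta, phi, jet_eta, jet_phi) < r_max:
            in_cone.append((pt, eta, phi))
        else:
            out_of_cone.append((pt, eta, phi))
    return in_cone, out_of_cone
```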
CMS investigated these different regions of the events with a measurement of the projection of the $p_T$ of reconstructed charged tracks onto the leading jet axis. For each event, this projection was calculated as

$$\not\!p_T^{\parallel} = \sum_i -p_T^{i} \cos\left(\phi_i - \phi_{\mathrm{leading\ jet}}\right),$$

where the sum is over all tracks with $p_T$ > 0.5 GeV/c. These results were then averaged over events to obtain $\langle \not\!p_T^{\parallel} \rangle$ (a minimal sketch of this projection appears after the figure caption below). This momentum imbalance in-cone and out-of-cone is shown as a function of $A_J$ as the black points in Figure 39. The momentum parallel to the jet axis in-cone is large, but should be balanced by the partner jet 180° away in the absence of medium effects. A large $A_J$ indicates substantial energy loss for the away-side jet, while a small $A_J$ indicates little interaction with the medium. This shows that the total momentum in the event is indeed balanced. For small $A_J$, the $\not\!p_T^{\parallel}$ in the in-cone and out-of-cone regions is consistent with zero, as expected for balanced jets. For large $A_J$, the momentum in-cone is non-zero, balanced by the momentum out-of-cone. These events were compared to PYTHIA+HYDJET simulations in order to understand which effects were simply due to the presence of a fluctuating background and which were due to jet quenching effects. In both the central Pb+Pb data and the Monte Carlo, an imbalance in jet $A_J$ also indicated an imbalance in the $p_T$ of particles within the cone of R = 0.8 about either the leading or subleading jet axes. To investigate further, CMS added up the momentum contained by particles in different momentum regions. The imbalance in the direction of the leading jet is dominated by particles with $p_T$ > 8 GeV/c, but is partially balanced in the subleading direction by particles with momenta below 8 GeV/c. The distributions look very similar in both the data and the Monte Carlo for the in-cone particle distribution. The out-of-cone distributions told a slightly different story. For both the data and the Monte Carlo, the missing momentum was balanced by additional, lower momentum particles in the subleading jet direction. The difference is that in the Pb+Pb data, the balance was achieved by very low momentum particles, between 0.5 and 1 GeV/c, whereas in the Monte Carlo the balance was achieved by higher momentum particles, mainly above 4 GeV/c, which indicates a different physics mechanism. In the Monte Carlo, the results could be due to semi-hard initial- or final-state radiation, such as three jet events. The missing transverse momentum analysis was recently extended by examining the multiplicity, angular, and $p_T$ spectra of the particles using different techniques. As above, these results were characterized as a function of the Pb+Pb collision centrality and $A_J$ (Khachatryan et al., 2016c). This extended the results to quite some distance from the jet axes, up to a $\Delta R$ of 1.8. The angular pattern of the energy flow in Pb+Pb events was very similar to that seen in p+p collisions, especially when the resolution parameter is small. This indicates that the leading jet could be getting narrower and/or the subleading jet broader due to quenching effects. For a given range in $A_J$, the in-cone imbalance in $p_T$ in Pb+Pb collisions is found to be balanced by relatively low transverse momentum out-of-cone particles with 0.5 < $p_T$ < 2 GeV/c.

FIG. 39 Figure from CMS (Chatrchyan et al., 2011b). Average missing transverse momentum for tracks with $p_T$ > 0.5 GeV/c, projected onto the leading jet axis, shown as solid circles. The average missing $p_T$ values are shown as a function of dijet asymmetry $A_J$ for 0-30% centrality, inside a cone of $\Delta R$ < 0.8 around one of the leading or subleading jet axes (left) and outside ($\Delta R$ > 0.8) the leading and subleading jet cones (right). For the solid circles, the vertical bars and brackets represent the statistical and systematic uncertainties, respectively; for the individual $p_T$ ranges, the statistical uncertainties are shown as vertical bars. Missing momentum is found outside of the jet cone, indicating that the lost energy may have equilibrated with the medium.
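As noted above, the event-wise projection can be sketched in a few lines; this is our own illustration of the formula, not CMS code. Tracks opposite to the leading jet contribute positively and tracks along it negatively, so the in-cone and out-of-cone sums must balance in a momentum-conserving event.

```python
def missing_pt_parallel(tracks, phi_leading_jet, pt_min=0.5):
    """Projection of track pT onto the leading-jet axis, summed over all
    tracks with pT > pt_min (GeV/c). Averaging this quantity over events
    gives the average missing pT parallel plotted in Figure 39.
    `tracks` is a list of (pt, eta, phi) tuples."""
    return sum(-pt * math.cos(phi - phi_leading_jet)
               for pt, _eta, phi in tracks if pt > pt_min)
```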
This balance, carried by particles with 0.5 < $p_T$ < 2 GeV/c, is quantitatively different from that in p+p collisions, where most of the momentum balance comes from particles with 2 < $p_T$ < 8 GeV/c. This could indicate a softening of the radiation responsible for the $p_T$ imbalance of dijets in the medium formed in Pb+Pb collisions. In addition, a larger multiplicity of associated particles is seen in Pb+Pb than in p+p collisions. In every case, the difference between p+p and Pb+Pb observations increased for more central Pb+Pb collisions. However, some caution should be used in interpreting the result, as these measurements make assumptions about the background and require certain jet kinematics, which may limit how robust the conclusions are. It is unlikely that the medium would focus the leading jet so that it becomes more collimated; rather, a selection bias causes narrower jets to be selected in Pb+Pb collisions for a given choice of R and jet kinematics. Additionally, as with any analysis that attempts to disentangle the effects of the medium on the jet from the effects of the jet on the medium, the ambiguity in what is considered part of the medium and what is considered part of the jet can complicate the interpretation of this result. While the results demonstrate that there is a difference in the missing momentum between Pb+Pb and p+p collisions, in order to identify the mechanism responsible, the data would need to be compared to a Monte Carlo model that incorporates jet quenching and preserves momentum and energy conservation between the jet and the medium.

Searches for Molière scattering

The measurement of jets correlated with hard hadrons in (Adam et al., 2015c) was also used to look for broadening of the correlation function between a high momentum hadron and jets. Such broadening could result from Molière scattering of hard partons off other partons in the medium, coherent effects from the scattering of a wave off of several scatterers. No such broadening is observed, although the measurement is dominated by the statistical uncertainties. Similarly, STAR observes no evidence for Molière scattering (Adamczyk et al., 2017c). We note that this would mainly be sensitive to whether or not the jets are deflected rather than whether or not jets are broadened.

3. The rise and fall of the Mach cone and the ridge

Several theoretical models proposed that a hard parton traversing the medium would lose energy similarly to the loss of energy by a supersonic object traveling through the atmosphere (Casalderrey-Solana et al., 2005; Renk and Ruppert, 2006; Ruppert and Muller, 2005). The energy in this wave forms a conical structure about the object called a Mach cone. Early dihadron correlation studies observed a displaced peak on the away-side (Adare et al., 2007b, 2008d; Adler et al., 2006b; Aggarwal et al., 2010). Three-particle correlation studies observed that this feature was consistent with expectations from a Mach cone (Abelev et al., 2009a).
Studies indicated that its spectrum was softer than that of the jet-like correlation on the near-side (Adare et al., 2008d) and its composition similar to the bulk (Afanasiev et al., 2008), as might be expected from a shock wave generated by a parton moving faster than the speed of sound in the medium. Curiously, the Mach cone was present only at low momenta (Adare et al., 2008a; Aggarwal et al., 2010), whereas some theoretical predictions indicated that a true Mach cone would be more significant at higher momenta (Betz et al., 2009). At the same time, studies of the near-side indicated that there was a feature correlated with the trigger particle in azimuth but not in pseudorapidity (Abelev et al., 2009b; Alver et al., 2010), dubbed the ridge. The ridge was also observed to be softer than the jet-like correlation (Abelev et al., 2009b) and to have a particle composition similar to the bulk (Bielcikova, 2008; Suarez, 2012). Several of the proposed mechanisms for the production of the ridge involved interactions between the hard parton and the medium, including collisional energy loss (Wong, 2007, 2008) and recombination of the hard parton with a parton in the medium (Chiu and Hwa, 2009; Chiu et al., 2008; Hwa and Yang, 2009). However, the observation of odd $v_n$ in heavy ion collisions (Aamodt et al., 2011a; Adamczyk et al., 2013; Adare et al., 2011b) indicated that the Mach cone and the ridge may be artifacts of erroneous background subtraction. Since the ridge was defined as the component correlated with the trigger in azimuth but not in pseudorapidity, it is now understood to be entirely due to $v_3$. Initial dihadron correlation studies after the observation of odd $v_n$ are either inconclusive about the presence or absence of shape modifications on the away-side (Adare et al., 2013b) or indicate that the shape modification persists (Agakishiev et al., 2014). A reanalysis of STAR dihadron correlations (Agakishiev et al., 2010, 2014) using a new method for background subtraction found that the Mach cone structure is not present. This new analysis indicates that jets are broadened and softened, as observed in studies of reconstructed jets (Aad et al., 2014c; Chatrchyan et al., 2014c). While the ridge is currently understood to be due to $v_3$ in heavy ion collisions, a similar structure has also been observed in high multiplicity p+p collisions (Aaboud et al., 2017; Khachatryan et al., 2010). There are some hypotheses that this might indicate that a medium is formed in violent p+p collisions (Khachatryan et al., 2017b), although there are other hypotheses such as production due to gluon saturation (Ozonder, 2016) or string percolation (Andrés et al., 2016). Whatever the production mechanism for the ridge in p+p collisions, there is currently no evidence that it is related to or correlated with jet production in either p+p or heavy ion collisions.

Summary of experimental evidence for modification of the medium by jets

Measurements of the impact of jets on the medium are difficult because of the large combinatorial background. The background may distort reconstructed jets, and requiring the presence of a jet may bias the event selection. Because the energy contained within the background is large compared to the energy of the jet, even slight deviations of the background from the assumed structure used to subtract its effect could skew results.
A confirmation of the CMS result indicating that the lost energy is at least partially equilibrated with the medium will require more detailed theoretical studies, preferably using Monte Carlo models so that the same analysis techniques applied to the data can be applied to the model. The misidentification of the ridge and the Mach cone as arising from partonic interactions with the medium highlights the perils of an incomplete understanding of the background.

E. Summary of experimental results

Section III.A reviews studies of cold nuclear matter effects, indicating that currently it does not appear that there are substantial cold nuclear matter effects modifying jets at mid-rapidity, and that therefore the effects observed thus far on jets in A+A collisions are primarily due to interactions of the hard parton with the medium. We note, however, that our understanding of cold nuclear matter effects is evolving rapidly, and we recommend that each observable be measured in both cold and hot nuclear matter in order to disentangle effects from hot and cold nuclear matter. Section III.B shows that there is ample evidence for partonic energy loss in the QGP. Nearly every measurement demonstrates that high momentum hadrons are suppressed relative to expectations from p+p and p+Pb collisions in the absence of quenching. Section III.C reviews the evidence that these partonic interactions with the medium result in more lower momentum particles and particles at larger angles relative to the parent parton, as expected from both gluon bremsstrahlung and collisional energy loss. Table III summarizes the physics observations, selection biases, and ability to constrain the initial kinematics for the measured observables. Section III.D discusses the evidence that at least some of this energy may be fully equilibrated with the medium and no longer distinguishable from the background. For future studies to maximize our understanding of the medium, most observables can be incorporated into a Bayesian analysis. We encourage exploration of comparisons of new observables to describe the jet structure. However, we caution that many observables are sensitive to kinematic selections and analysis techniques, so that a replication of these techniques is required for the measurements to be comparable to theory.

IV. DISCUSSION AND THE PATH FORWARD

In the last several years, we have seen a dramatic increase in the number of experimentally accessible jet observables for heavy-ion collisions. During the early days of RHIC, measurements were primarily limited to $R_{AA}$ and dihadron correlations, and reconstructed jets were measured only relatively recently. Since the start of the LHC, measurements of reconstructed jets have become routine, fragmentation functions have been measured directly, and the field is investigating and developing more sophisticated observables in order to quantify partonic energy loss and its effects on the QGP. The constraint of $\hat{q}$, the average squared transverse momentum exchanged with the medium per fm of medium traversed, using $R_{AA}$ measurements by the JET collaboration is remarkable. However, studies of jets in heavy ion collisions largely remain phenomenological and observational. This is probably the correct approach at this point in the development of the field, but a quantitative understanding of partonic energy loss in the QGP requires a concerted effort by both theorists and experimentalists to both make measurements which can be compared to models and use those measurements to constrain or exclude those models.
Below we lay out several of the steps we think are necessary to reach this quantitative understanding of partonic energy loss. We think that it is critical to quantitatively understand the impact of measurement techniques on jet observables in order to make meaningful comparisons to theory. We encourage the development of new observables but urge caution: new observables may not have as many benefits as they first appear to once their biases and sensitivities to the medium are better understood. Many experimental and theoretical developments pave the way towards a better quantitative understanding of partonic energy loss. However, we think that the field will not fully benefit from these without discussions targeted at a better understanding of, and consistency between, theory and experiment, and at evaluating the full suite of observables considering all their biases. One of the dangers we face is that many observables are created by experimentalists, which often yields observables that are easy to measure, such as $A_J$, but that are not particularly differential with respect to constraining jet quenching models.

A. Understand bias

As we discussed in Section II, all jet measurements in heavy ion collisions are biased towards a particular subset of the population of jets produced in these collisions. The existence of such biases is transparent for many measurements, such as the surface bias in measurements of dihadron correlations at RHIC. However, for other observables, such as those relating to reconstructed jets, these biases are not always adequately discussed in the interpretation of the results. As the comparison between the ALICE, ATLAS, and CMS jet $R_{AA}$ at low jet momenta shows, requiring a hard jet core in order to suppress background and reduce combinatorial jets leads to a strong bias which cannot be ignored. The main biases that pertain to jets in heavy ion collisions are the fragmentation, collision geometry, kinematic, and parton species biases. The fragmentation bias can be simply illustrated by the jet $R_{AA}$ measurement. Requiring a particular value of the resolution parameter, a particular constituent cut, or even the particular trigger detector used by the experiment selects a particular shower structure for the jet. The geometry bias is commonly discussed as a surface bias, since the effect of the medium increases with the path length, causing more hard partons to come from the surface of the QGP. The kinematic bias is somewhat related to the fragmentation bias, as the fragmentation depends on the kinematics of the parton, but energy loss in the medium means that jets of given kinematics do not come from the same selection of initial parton kinematics in vacuum and in heavy ion collisions. The parton species bias arises because gluons couple more strongly to the medium and thus are expected to be more modified. This can be summarized by stating that nearly every technique favors measurement of more quark jets over gluon jets, is biased towards high $z$ fragments, and is biased towards jets which have lost less energy in the medium. While some measurements may claim to be bias free because they deal with the background effects in a manner which makes comparisons with theoretical models more straightforward, they still contain biases, usually towards jets which interacted less with the medium and therefore have lost less energy.
For example, for the hadron-jet coincidence measurements, it is correct to state that the away-side jet does not have a fragmentation bias, since the trigger hadron is not part of its shower. However, this does not mean that the measurement is completely unbiased, since the trigger hadron may select jets that have traveled through less medium or interacted less with the medium. In addition, the very act of using a jet finding algorithm introduces a bias (particularly toward quark jets) that is challenging to calculate. Given the large combinatorial background, such biases are most likely unavoidable. We propose that these biases should be treated as tools, through jet geometry engineering, rather than as a handicap. These experimental biases should also be made transparent to the theory community. Frequently the techniques which impose these biases are buried in the experimental method section, with little or no mention of the impact of these biases on the results in the discussion. Theorists should not neglect the discussion of the experimental techniques, and experimentalists should make a greater effort to highlight the potential impacts on the measurement of the techniques used to suppress and subtract the background.

B. Make quantitative comparisons to theory

With the explosion of experimentally accessible observables, much of the focus has been on making as many measurements as possible, with less consideration of whether such observables are calculable or capable of distinguishing between different energy loss models. Even without direct comparisons to theory, these studies have been fruitful because they contribute to a phenomenological understanding of the impact of the medium on jets and vice versa. While we still feel that such exploratory studies are valuable, the long term goal of the field is to measure the properties of the QGP quantitatively, making theoretical comparisons essential. Some of the dearth of comparisons between measurements and models is due to the relative simplicity of the models and their inability to include hadronization. The field requires another systematic attempt to constrain the properties of the medium from jet measurements. The Jetscape collaboration has formed in order to incorporate theoretical calculations of partonic energy loss into Monte Carlo simulations, which can then be used to directly calculate observables using the same techniques used for the measurements. This will then be followed up by a Bayesian analysis similar to previous work (Bernhard et al., 2016; Novak et al., 2014) but incorporating measurements of jets. This is essential, both to improve our theoretical understanding and to provide Monte Carlo models which can be used for more reliable experimental corrections. In our opinion, it should be possible to incorporate most observables into these analyses. However, we urge careful consideration of all experimental techniques and kinematic selections in order to ensure an accurate comparison between data and theory. The experimental collaborations should cooperate with the Jetscape collaboration to ensure that response matrices detailing the performance of the detectors for different observables are available.

C. More differential measurements

The choices of what to measure, how to measure it, and how to both define and treat the background are key to our quantitative understanding of the medium.
There have been substantial improvements in the ability to measure jets in heavy ion collisions in recent years, such as the extended kinematic reach due to accelerator and detector technology improvements. Additionally, our quantitative understanding of the effect of the background on many observables has also significantly improved. Given the continuous improvement in technology and analysis techniques, it is vital that some of the better understood observables, such as $R_{AA}$ and $I_{AA}$, are repeated with higher precision. Theoretical models should be able to simultaneously predict these precisely measured jet observables with different spectral shapes and path length dependencies. While this is necessary, it is not sufficient to validate a theoretical model. Given that these will also depend on the collision energy, comparisons between RHIC and the LHC would be valuable, but again only when all biases are carefully considered. Now that the era of high statistics and precision detectors is here, the field is exploring several new observables in an attempt to identify those best suited to constrain the properties of the medium. Older observables, such as $R_{AA}$, were built with the mindset that the final state jet reflects the kinematics of its parent parton, and that the change in these kinematics due to interactions with the medium would be reflected in the change in the jet distributions. One of the lessons learned is that the majority of the modification of the fragmentation occurs at a relatively low $p_T$ compared to the momentum of the jet. However, jet finding algorithms were specifically designed not to be sensitive to the details of the soft physics, which means that the very thing we are trying to measure and quantify is obscured by the jet finder. The new observables are based on the structure of the jet rather than on its kinematics alone. Specifically, they recognize that a hard parton could split into two hard daughters. If this splitting occurs in the medium, not only can the splitting itself be modified by the presence of the medium, but each of the daughters could lose energy to the medium independently. This would actually be rather difficult to see in an ensemble structure measurement such as the jet fragmentation function, which yields a very symmetric picture of a jet about its axis, and so requires the specific structures within the jet to be quantified. While these new observables hold a lot of promise in terms of our understanding, caution must also be used in interpreting them until the role played by the background removal process and by detector effects in these measurements is carefully studied. The investigations into these different observables are very important, since we have likely not yet identified the observables most sensitive to the properties of the medium. We cannot forget that we want to quantify the temperature dependence of the jet transport coefficients, as well as determine the size of the medium objects off of which the jets scatter. While these are global and fundamental descriptors of a medium, the fact that the process by which we make these measurements is statistical means that the development of quantitative Monte Carlo simulations is key. Not only will they allow calculations of jet quenching models to be compared with the same initial states, hadronization schemes, etc., but they could also make the calculations of even more complicated observables feasible.
However, the sensitivity of simple observables should not be underestimated: with every set of new observables there are new mistakes to be made, and we can be reasonably sure that we understand the biases inherent in these simple observables. While it is not likely that comparison between $R_{AA}$ and theories will constrain the properties of the medium substantially better than the JET collaboration's calculation of $\hat{q}$, calculations of γ-hadron, dihadron, and jet-hadron correlations are feasible with the development of realistic Monte Carlo models. The relative simplicity of these observables makes them promising for subsequent attempts to constrain $\hat{q}$ and other transport coefficients, especially since we now have a fairly precise quantitative experimental understanding of the background. This may be a good initial focus for systematic comparisons between theory and experiment. Interpreting a complicated result with a simple model that misses a lot of physics is a misuse of that model and can lead to incorrect conclusions. We caution against overconfidence, and encourage scrutiny and skepticism of measurement techniques and all observables. For each observable, an attempt needs to be made to quantify its biases and determine which dominate. Observables should be measured in the same kinematic region and, if possible, with the same resolution parameters in order to ensure consistency between experiments. If initial studies of a particular observable reveal that it is either not particularly sensitive to the properties of the medium or too sensitive to experimental technique, we should stop measuring that observable. We urge caution when using complicated background subtraction and suppression techniques, which may be difficult to reproduce in models and which require Monte Carlo simulations that accurately model both the hard process that produced the jet and the soft background. Given that the response of the detector to the background differs from experiment to experiment, complicated subtraction procedures may make direct comparisons across experiments and energies difficult. We also caution against the overuse and blind use of unfolding. Unfolding is a powerful technique which is undoubtedly necessary for many measurements. It also has the potential to impose biases by shifting measurements towards the Monte Carlo used to calculate the response matrix and by obfuscating the impact of detector effects and analysis techniques. When unfolding is necessary, it should be done carefully in order to make sure all effects are understood and that the result is robust. Since most effects are included in the response matrix rather than corrected for separately, it can be difficult to understand the impact of individual effects, such as the track reconstruction efficiency and the energy resolution. Unfolding is not necessarily superior to careful studies of detector effects and corrections, together with attempts to minimize their impact on the chosen observables. Given the relative simplicity of folding a result, for all observables we should perform a theory-experiment closure test where the theoretical results are folded and compared to the raw data. Since the robustness of a particular measurement depends on the unfolding corrections, the details of the unfolding method should also be transparent to both the experimental and theoretical communities. Of course, making more differential measurements is aided by better detectors.
The LHC detectors use advanced detector technology and are designed for jet measurements. However, the current RHIC detectors were not optimized for jet measurements, which has limited the types of jet observables accessible at these lower energies. Precise measurements of jets over a wide range of energies are necessary to truly understand partonic energy loss. The proposed sPHENIX detector will greatly aid these measurements by utilizing some of the advanced detector technology that has been developed since the design of the original RHIC experiments (Adare et al., 2015). This high rate and hermetic detector will improve the results by reducing detector uncertainties and increasing the kinematic reach, so that a true comparison between RHIC and the LHC can be made. In particular, upgrades at both RHIC and the LHC will make possible precise measurements of heavy-flavor-tagged jets and boson-tagged jets, which constrain the initial kinematics of the hard scattering.

D. An agreement on the treatment of background in heavy ion collisions

The issues we listed above are complicated and require substantive, ongoing discussions between theorists and experimentalists. A start in this direction can be found in the Lisbon Accord, where the community agreed to use Rivet (Buckley et al., 2013), a C++ library developed for particle physics which provides a framework and tools for calculating observables at the particle level. Rivet allows event generator models and experimental observables to be validated. Agreeing on a framework that all physicists can use is an important first step; however, it is not sufficient. It would not prevent a comparison of two observables with different jet selection criteria, or a comparison of a theoretical model with a different treatment or definition of the background than a similar experimental observable. The problems we face are similar to those faced by the particle physics community as they learned how to study and utilize jets, making them one of the best tools we have for understanding the Standard Model. An agreement on the treatment of the background in heavy ion collisions, both experimentally and theoretically, is required, as it is part of the definition of the observable. Theorists and experimentalists need to understand each other's techniques and find common ground, to define observables that experimentalists can measure and theorists can calculate. We need to recognize that observables based on pQCD calculations are needed if we are to work towards a textbook formulation of jet quenching and of what we learn about QCD from studying the strongly coupled QGP. However, observables that are impossible to measure are not useful, nor is it useful to measure observables that are impossible to calculate or that are insensitive to the properties of the medium. We propose a targeted workshop to address these issues in heavy ion collisions, with the goal of an agreement similar to the Snowmass Accord. Ideally we would agree on a series of jet algorithms, including selection criteria, that all experiments can measure, and a background strategy that can be employed both in experiment and theory.
High-Speed VLSI Architecture Based on Massively Parallel Processor Arrays for Real-Time Remote Sensing Applications

Introduction

Developing computationally efficient processing techniques for massive volumes of hyperspectral data is critical for space-based Earth science and planetary exploration (see, for example, (Plaza & Chang, 2008), (Henderson & Lewis, 1998) and the references therein). The availability of remotely sensed data from different sensors on various platforms, with a wide range of spatiotemporal, radiometric and spectral resolutions, has made remote sensing, perhaps, the best source of data for large scale applications and study. Applications of Remote Sensing (RS) in hydrological modelling, watershed mapping, energy and water flux estimation, fractional vegetation cover, impervious surface area mapping, urban modelling and drought predictions based on soil water index derived from remotely sensed data have been reported (Melesse et al., 2007).
Also, many RS imaging applications require a response in (near) real time in areas such as target detection for military and homeland defence/security purposes, and risk prevention and response. Hyperspectral imaging is a new technique in remote sensing that generates images with hundreds of spectral bands, at different wavelength channels, for the same area on the surface of the Earth. Although in recent years several efforts have been directed toward the incorporation of parallel and distributed computing in hyperspectral image analysis, there are no standardized architectures or Very Large Scale Integration (VLSI) circuits for this purpose in remote sensing applications. Additionally, although the existing theory offers a manifold of statistical and descriptive regularization techniques for image enhancement/reconstruction, in many RS application areas there remain unsolved theoretical and processing problems related to the computational cost of the recently developed complex techniques (Melesse et al., 2007), (Shkvarko, 2010), (Yang et al., 2001). These descriptive-regularization techniques must contend with the unknown statistics of random perturbations of the signals in turbulent medium, imperfect array calibration, finite dimensionality of measurements, multiplicative signal-dependent speckle noise, uncontrolled antenna vibrations and random carrier trajectory deviations in the case of Synthetic Aperture Radar (SAR) systems (Henderson & Lewis, 1998), (Barrett & Myers, 2004). Furthermore, these techniques are not suitable for (near) real time implementation with existing Digital Signal Processors (DSP) or Personal Computers (PC). To address this class of real time implementation, the use of specialized arrays of processors in VLSI architectures, as co-processors or stand-alone chips aggregated with Field Programmable Gate Array (FPGA) devices via hardware/software (HW/SW) co-design, becomes a real possibility for high-speed Signal Processing (SP) in order to achieve the expected data processing performance (Plaza & Chang, 2008), (Castillo Atoche et al., 2010a, 2010b). Also, it is important to mention that cluster-based computing is the most widely used platform in ground stations; however, several factors, like space, cost and power, make clusters impractical for on-board processing. FPGA-based reconfigurable systems in aggregation with custom VLSI architectures are emerging as newer solutions which offer enormous computation potential in both the cluster-based and embedded systems areas. In this work, we address two particular contributions related to the substantial reduction of the computational load of the Descriptive-Regularized RS image reconstruction technique, based on its implementation with massively parallel processor arrays via the aggregation of high-speed low-power VLSI architectures with a FPGA platform. First, at the algorithmic level, we address the design of a family of Descriptive-Regularization techniques over the range and azimuth coordinates in the uncertain RS environment, and provide the relevant computational recipes for their application to imaging array radars and fractional imaging SAR operating in different uncertain scenarios. Such descriptive-regularized algorithms are computationally adapted for their HW-level implementation in an efficient mode using parallel computing techniques in order to achieve the maximum possible parallelism.
Second, at the systematic level, the family of Descriptive-Regularization techniques based on reconstructive digital SP operations is conceptualized and employed with massively parallel processor arrays (MPPAs) in the context of the real time SP requirements. Next, the arrays of processors for the selected reconstructive SP operations are efficiently optimized in fixed-point bit-level architectures for their implementation in a high-speed low-power VLSI architecture using 0.5um CMOS technology with low power standard cell libraries. The achieved VLSI accelerator is aggregated with a FPGA platform via the HW/SW co-design paradigm. Alternative propositions related to parallel computing, systolic arrays and HW/SW co-design techniques for achieving the near real time implementation of regularized procedures for the reconstruction of RS imagery have been previously developed in (Plaza & Chang, 2008), (Castillo Atoche et al., 2010a, 2010b). However, it should be noted that the design in hardware (HW) of a family of reconstructive signal processing operations has never before been implemented in a high-speed low-power VLSI architecture based on massively parallel processor arrays. Finally, we report and discuss the implementation and performance issues related to real time enhancement of large-scale real-world RS imagery, indicative of the significantly increased processing efficiency gained with the proposed implementation of high-speed low-power VLSI architectures of the descriptive-regularized algorithms.

Remote sensing background

The general formalism of the RS imaging problem presented in this study is a brief presentation of the problem considered in (Shkvarko, 2006, 2008), hence some crucial model elements are repeated for convenience of the reader. The problem of enhanced remote sensing (RS) imaging is stated and treated as an ill-posed nonlinear inverse problem with model uncertainties. The challenge is to perform high-resolution reconstruction of the power spatial spectrum pattern (SSP) of the wavefield scattered from the extended remotely sensed scene via space-time processing of finite recordings of the RS data distorted in a stochastic uncertain measurement channel. The SSP is defined as a spatial distribution of the power (i.e. the second-order statistics) of the random wavefield backscattered from the remotely sensed scene observed through the integral transform operator (Henderson & Lewis, 1998), (Shkvarko, 2008). Such an operator is explicitly specified by the employed radar signal modulation and is traditionally referred to as the signal formation operator (SFO) (Shkvarko, 2006). The classical imaging with an array radar or SAR implies application of the method called "matched spatial filtering" to process the recorded data signals (Franceschetti et al., 2006), (Shkvarko, 2008), (Greco & Gini, 2007). A number of approaches have been proposed to design constrained regularization techniques for improving the resolution in the SSP obtained by ways different from the matched spatial filtering, e.g., (Franceschetti et al., 2006), (Shkvarko, 2006, 2008), (Greco & Gini, 2007), (Plaza & Chang, 2008), (Castillo Atoche et al., 2010a, 2010b), but without aggregating the minimum risk descriptive estimation strategies and specialized hardware architectures via FPGA structures and VLSI components as accelerator units.
In this study, we address an extended descriptive experiment design regularization (DEDR) approach to treat such uncertain SSP reconstruction problems, one that unifies the paradigms of minimum risk nonparametric spectral estimation, descriptive experiment design and worst-case statistical performance optimization-based regularization.

Problem statement

Consider a coherent RS experiment in a random medium under the narrowband assumption (Henderson & Lewis, 1998), (Shkvarko, 2006), which enables us to model the extended object backscattered field by imposing its time invariant complex scattering (backscattering) function $e(x)$ in the scene domain (scattering surface) $X \ni x$. The measurement data wavefield $u(y) = s(y) + n(y)$ consists of the echo signals $s$ and additive noise $n$ and is available for observations and recordings within the prescribed time-space observation domain $Y = T \times P$, where $y = (t, p)^T$ defines the time-space points in $Y$. The model of the observation wavefield $u$ is defined by specifying the stochastic equation of observation (EO) in operator form (Shkvarko, 2008):

$$u(y) = (\tilde{S}e)(y) + n(y) = \int_X \tilde{S}(y, x)\, e(x)\, dx + n(y), \quad (2)$$

where the mean $\langle \tilde{S}(y, x) \rangle = S(y, x)$ (Henderson & Lewis, 1998), (Shkvarko, 2008) is referred to as the nominal SFO in the RS measurement channel, specified by the time-space modulation of signals employed in a particular radar system/SAR (Henderson & Lewis, 1998), and the variation about the mean, $\Delta S(y, x) = \zeta(y, x)\, S(y, x)$, models the stochastic perturbations of the wavefield at different propagation paths, where $\zeta(y, x)$ is associated with zero-mean multiplicative noise (the so-called Rytov perturbation model). All the fields $e$, $n$, $u$ in (2) are assumed to be zero-mean complex valued Gaussian random fields. Next, we adopt an incoherent model (Henderson & Lewis, 1998), (Shkvarko, 2006) of the backscattered field $e(x)$ that leads to the $\delta$-form of its correlation function, $R_e(x_1, x_2) = b(x_1)\,\delta(x_1 - x_2)$. Here, $e(x)$ and $b(x) = \langle |e(x)|^2 \rangle$ are referred to as the scene random complex scattering function and its average power scattering function or spatial spectrum pattern (SSP), respectively. The problem at hand is to derive an estimate $\hat{b}(x)$ of the SSP $b(x)$ (referred to as the desired RS image) by processing the available finite dimensional array radar/SAR measurements of the data wavefield $u(y)$ specified by (2).

Discrete-form uncertain problem model

The stochastic integral-form EO (2) is now converted to its finite-dimensional approximation (vector) form (Shkvarko, 2008),

$$u = \tilde{S}e + n, \quad (3) \qquad \tilde{S} = S + \Delta, \quad (4)$$

in which the perturbed SFO matrix $\tilde{S}$ represents the discrete-form approximation of the integral SFO defined for the uncertain operational scenario by the EO (2). The correlation matrices of $e$, $n$ and $u$ are $R_e = D = D(b) = \mathrm{diag}(b)$ (a diagonal matrix with the vector $b$ at its principal diagonal), $R_n$, and $R_u = \langle \tilde{S} R_e \tilde{S}^+ \rangle_{p(\Delta)} + R_n$, respectively, where $\langle \cdot \rangle_{p(\Delta)}$ defines the averaging performed over the randomness of $\Delta$ characterized by the unknown probability density function $p(\Delta)$, and the superscript $+$ stands for Hermitian conjugate. Following (Shkvarko, 2008), the distortion term $\Delta$ in (4) is considered as a random zero-mean matrix with bounded second-order moment $\langle \|\Delta\|^2 \rangle \le \eta$. Vector $b$ is composed of the elements $b_k = \langle |e_k|^2 \rangle = \langle e_k e_k^* \rangle$; $k = 1, \ldots, K$, and is referred to as a K-D vector-form approximation of the SSP, where $\langle \cdot \rangle$ represents the second-order statistical ensemble averaging operator (Barrett & Myers, 2004). The SSP vector $b$ is associated with the so-called lexicographically ordered image pixels (Barrett & Myers, 2004).
The corresponding conventional K y K x rectangular frame ordered scene image B = {b(k x , k x ); k x , = 1,…,K x ; k v , = 1,…,K y } relates to its lexicographically ordered vector-form representation b = {b(k); k = 1,…,K = K y  K x } via the standard row by row concatenation (so-called lexicographical reordering) procedure, B = L{b} (Barrett & Myers, 2004). Note that in the simple case of certain operational scenario (Henderson & Lewis, 1998), (Shkvarko, 2008), the discrete-form (i.e. matrix-form) SFO S is assumed to be deterministic, i.e. the random perturbation term in (4) is irrelevant, Δ = 0. The digital enhanced RS imaging problem is formally stated as follows (Shkvarko, 2008): to map the scene pixel frame image B via lexicographical reordering B = L{b } of the SSP vector estimate b reconstructed from whatever available measurements of independent realizations of the recorded data vector u. The reconstructed SSP vector b is an estimate of the second-order statistics of the scattering vector e observed through the perturbed SFO (4) and contaminated with noise n; hence, the RS imaging problem at hand must be qualified and treated as a statistical nonlinear inverse problem with the uncertain operator. The high-resolution imaging implies solution of such an inverse problem in some optimal way. Recall that in this paper we intend to follow the unified descriptive experiment design regularized (DEDR) method proposed originally in (Shkvarko, 2008). DEDR method 2.3.1 DEDR strategy for certain operational scenario In the descriptive statistical formalism, the desired SSP vector b is recognized to be the vector of a principal diagonal of the estimate of the correlation matrix R e (b), i.e. b = {ˆe R } diag . Thus one can seek to estimate b = {ˆe R } diag given the data correlation matrix R u pre- by determining the solution operator (SO) F such that where {·} diag defines the vector composed of the principal diagonal of the embraced matrix. To optimize the search for F in the certain operational scenario the DEDR strategy was proposed in (Shkvarko, 2006) that implies the minimization of the weighted sum of the systematic and fluctuation errors in the desired estimate b where the selection (adjustment) of the regularization parameter  and the weight matrix A provide the additional experiment design degrees of freedom incorporating any descriptive properties of a solution if those are known a priori (Shkvarko, 2006). It is easy to recognize that the strategy (7) is a structural extension of the statistical minimum risk estimation strategy for the nonlinear spectral estimation problem at hand because in both cases the balance between the gained spatial resolution and the noise energy in the resulting estimate is to be optimized. www.intechopen.com From the presented above DEDR strategie, one can deduce that the solution to the optimization problem found in the previous study (Shkvarko, 2006) where represents the so-called regularized reconstruction operator; 1  n R is the noise whitening filter, and the adjoint (i.e. Hermitian transpose) SFO S + defines the matched spatial filter in the conventional signal processing terminology. 
DEDR strategy for uncertain operational scenario

To optimize the search for the desired SO F in the uncertain operational scenario with the randomly perturbed SFO (4), the extended DEDR strategy was proposed in (Shkvarko, 2006):

min_F max_Δ ℛ̃(F),   (11)

subject to the conditioning term

⟨‖Δ‖²⟩_p(Δ) ≤ η,   (12)

which represents the worst-case statistical performance (WCSP) regularizing constraint imposed on the unknown second-order statistics ⟨‖Δ‖²⟩_p(Δ) of the random distortion component Δ of the SFO matrix (4); the DEDR "extended risk" is defined by

ℛ̃(F) = ⟨trace{(F(S + Δ) − I) A (F(S + Δ) − I)^+}⟩_p(Δ) + α trace{F R_n F^+},   (13)

where the regularization parameter α and the metrics-inducing weight matrix A compose the processing-level "degrees of freedom" of the DEDR method. To proceed with the derivation of the robust SO from (11), the risk function (13) was decomposed and evaluated at its maximum value by applying the Cauchy-Schwarz inequality and the Loewner ordering (Greco & Gini, 2007) of the weight matrix, A ≥ γI, with the scaled Loewner ordering factor γ = min{γ : A ≥ γI} = 1. With these robustifications, the extended DEDR strategy (11) is transformed into the following optimization problem with the aggregated DEDR risk function:

min_F ℛ_Σ(F);   ℛ_Σ(F) = trace{(F S − I) A (F S − I)^+} + α trace{F R_nΣ F^+}.   (14)

The optimization solution of (14) follows as a structural extension of (9) for the augmented (diagonally loaded) noise correlation matrix R_nΣ, which yields

F_R = K_Σ S^+ R_nΣ^(-1);   K_Σ = (S^+ R_nΣ^(-1) S + α A^(-1))^(-1),   (15)

with

R_nΣ = R_n + η I,   (16)

where F_R represents the robustified reconstruction operator for the uncertain scenario.

DEDR imaging techniques

In this subsection, three practically motivated DEDR-related imaging techniques (Shkvarko, 2008) are presented that will be used at the HW co-design stage, namely, the conventional matched spatial filtering (MSF) method and two high-resolution reconstructive imaging techniques: (i) the robust spatial filtering (RSF) and (ii) the robust adaptive spatial filtering (RASF) methods.

1. MSF: The MSF algorithm is the member of the DEDR-related family specified for α ≫ ‖S^+ S‖, i.e., the case of a dominating priority of suppression of noise over the systematic error in the optimization problem (7). In this case, the SO (9) is approximated by the matched spatial filter (MSF), F_MSF ∝ S^+.

2. RSF: The RSF method implies no preference for any prior model information (i.e., A = I) and a balanced minimization of the systematic and noise error measures in (14), achieved by adjusting the regularization parameter to the inverse of the signal-to-noise ratio (SNR). In that case the SO F becomes the Tikhonov-type robust spatial filter

F_RSF = (S^+ S + ρ_RSF I)^(-1) S^+,   (17)

in which the RSF regularization parameter ρ_RSF is adjusted to the particular operational scenario model, namely, ρ_RSF = N_0/b_0 for the certain operational scenario and ρ_RSF = N_Σ/b_0 in the uncertain operational scenario case, respectively, where N_0 represents the white observation noise power density, b_0 is the average a priori SSP value (the prior average gray level of the image), and N_Σ = N_0 + η corresponds to the augmented noise power density in the correlation matrix specified by (16).

3. RASF: In the statistically optimal problem treatment, α and A are adjusted in an adaptive fashion following the minimum-risk strategy, i.e., α A^(-1) = D̂^(-1) with D̂ = diag(b̂), the diagonal matrix with the estimate b̂ at its principal diagonal, in which case the SOs (9), (17) become solution-dependent operators that result in the following robust adaptive spatial filters (RASFs):

F_RASF = (S^+ R_n^(-1) S + D̂^(-1))^(-1) S^+ R_n^(-1)

for the certain operational scenario, and

F'_RASF = (S^+ R_nΣ^(-1) S + D̂^(-1))^(-1) S^+ R_nΣ^(-1)

for the uncertain operational scenario, respectively.
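The three filters can be prototyped in a few lines. The sketch below is again only illustrative: the operator shapes follow (9), (16), (17) and the RASF description above, while the scenario parameters (N0, η, b0, the number of fixed-point iterations) and the use of a model data correlation matrix are assumptions made for the demonstration:

import numpy as np

rng = np.random.default_rng(1)
M, K = 64, 16                                          # assumed sizes
S = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
b_true = rng.uniform(0.5, 2.0, K)
N0, eta, b0 = 0.1, 0.05, 1.0                           # assumed noise/uncertainty/prior levels
Ru = S @ np.diag(b_true) @ S.conj().T + N0 * np.eye(M) # model data correlation matrix

def msf(S):
    # Matched spatial filter: F ~ S+ (regime alpha >> ||S+ S||)
    return S.conj().T

def rsf(S, rho):
    # Tikhonov-type robust spatial filter (17): F = (S+ S + rho I)^-1 S+
    return np.linalg.solve(S.conj().T @ S + rho * np.eye(S.shape[1]), S.conj().T)

def rasf(S, Rn_inv, Ru, b0, n_iter=5):
    # Solution-dependent RASF: re-form the SO with D = diag(b_hat) at every pass.
    # The fixed-point loop is an assumed numerical scheme for the adaptive
    # operator described in the text, not the chapter's prescribed iteration.
    b_hat = np.full(S.shape[1], b0)
    SRS = S.conj().T @ Rn_inv @ S
    SR = S.conj().T @ Rn_inv
    for _ in range(n_iter):
        F = np.linalg.solve(SRS + np.diag(1.0 / np.maximum(b_hat, 1e-9)), SR)
        b_hat = np.real(np.diag(F @ Ru @ F.conj().T))
    return F, b_hat

F_msf = msf(S)
F_rsf = rsf(S, rho=N0 / b0)                            # certain scenario
F_rsf_rob = rsf(S, rho=(N0 + eta) / b0)                # uncertain: N_Sigma = N0 + eta
Rn_sigma_inv = np.eye(M) / (N0 + eta)                  # diagonally loaded statistics, (16)
F_rasf, b_hat = rasf(S, Rn_sigma_inv, Ru, b0)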
Using the SOs defined above, the DEDR-related data processing techniques in the conventional pixel-frame format can now be unified as follows:

B̂^(p) = L{b̂^(p)};   b̂^(p) = {F^(p) R̂_u F^(p)+}_diag;   p = 1, …, 4,   (19)-(22)

with F^(1) = F_MSF, F^(2) = F_RSF, F^(3) = F_RASF and F^(4) = F'_RASF for the certain and uncertain scenarios, respectively. Any other feasible adjustments of the DEDR degrees of freedom (the regularization parameters α, ρ and the weight matrix A) provide other possible DEDR-related SSP reconstruction techniques, which we do not consider in this study.

VLSI architecture based on Massively Parallel Processor Arrays

In this section, we present the design methodology for the real-time implementation of specialized processor arrays in VLSI architectures based on massively parallel processor arrays (MPPAs), used as coprocessor units integrated with an FPGA platform via the HW/SW co-design paradigm. This approach represents a real possibility for low-power, high-speed reconstructive signal processing (SP) for the enhancement/reconstruction of RS imagery. In addition, the authors believe that FPGA-based reconfigurable systems in aggregation with custom VLSI architectures are emerging as newer solutions that offer enormous computational potential in RS systems. A brief perspective on the state of the art of high-performance computing (HPC) techniques in the context of remote sensing problems is provided. The wide range of computer architectures (including homogeneous and heterogeneous clusters and groups of clusters, large-scale distributed platforms and grid computing environments, specialized architectures based on reconfigurable computing, and commodity graphics hardware) and data processing techniques exemplifies a subject area that has developed at the cutting edge of science and technology. The utilization of parallel and distributed computing paradigms opens ground-breaking perspectives for the exploitation of high-dimensional data sets in many RS applications. Parallel computing architectures made up of homogeneous and heterogeneous commodity computing resources have gained popularity in the last few years due to the possibility of building a high-performance system at a reasonable cost. The scalability, code reusability and load balance achieved by the proposed implementation in such low-cost systems offer an unprecedented opportunity to explore methodologies in other fields (e.g., data mining) that previously looked too computationally intensive for practical applications because of the immense files common to remote sensing problems (Plaza & Chang, 2008). To address the near-real-time computational mode required by many RS applications, we propose a high-speed, low-power VLSI co-processor architecture based on MPPAs that is aggregated with an FPGA via the HW/SW co-design paradigm. Experimental results demonstrate that the hardware VLSI-FPGA platform of the presented DEDR algorithms makes appropriate use of the resources in the FPGA and provides a near-real-time response that is acceptable for newer RS applications.

Design flow

The all-software execution of the prescribed RS image formation and reconstructive signal processing (SP) operations in modern high-speed personal computers (PCs) or on any digital signal processor (DSP) platform may be intensively time consuming. The high computational complexity of the general-form DEDR-POCS algorithms makes them definitely unacceptable for real-time PC-aided implementation.
In this section, we describe the specific design flow of the proposed VLSI-FPGA architecture for the implementation of the DEDR method via the HW/SW co-design paradigm. HW/SW co-design is a hybrid method aimed at increasing the flexibility of the implementation and improving the overall design process (Castillo Atoche et al., 2010a). When a co-processor-based solution is employed in the HW/SW co-design architecture, the computational time can be drastically reduced. Two opposite alternatives can be considered when exploring the HW/SW co-design of a complex SP system. One of them is the use of standard components whose functionality can be defined by means of programming. The other is the implementation of this functionality via a microelectronic circuit specifically tailored for that application. It is well known that the first alternative (the software alternative) provides solutions with great flexibility, in spite of high area requirements and long execution times, while the second one (the hardware alternative) optimizes size and operation speed but limits the flexibility of the solution. Halfway between both, hardware/software co-design techniques try to obtain an appropriate trade-off between the advantages and drawbacks of these two approaches. In (Castillo Atoche et al., 2010a), an initial version of the HW/SW architecture was presented for implementing the digital processing of large-scale RS imagery in the operational context. The architecture developed in (Castillo Atoche et al., 2010a) did not involve MPPAs and is considered here simply as a reference for the newly pursued HW/SW co-design paradigm, in which the corresponding blocks are designed to speed up the digital SP operations of the DEDR-POCS-related algorithms developed at the preceding SW stage of the overall HW/SW co-design, so as to meet the real-time imaging system requirements. The proposed co-design flow encompasses the following general stages: (i) algorithmic implementation (reference simulation in MATLAB and C++ platforms); (ii) partitioning of the computational tasks; (iii) aggregation of parallel computing techniques; (iv) architecture design mapping the addressed reconstructive SP computational tasks onto HW blocks (MPPAs).

(i) Algorithmic implementation

In this subsection, the procedures for the computational implementation of the DEDR-related robust space filter (RSF) and robust adaptive space filter (RASF) algorithms in the MATLAB and C++ platforms are developed. This reference implementation scheme will subsequently be compared with the proposed architecture based on the use of a VLSI-FPGA platform.

(ii) Partitioning of the computational tasks

One of the challenging problems of HW/SW co-design is to perform an efficient HW/SW partitioning of the computational tasks. The aim of the partitioning problem is to find which computational tasks can be implemented in an efficient hardware architecture, looking for the best trade-offs among the different solutions. The solution to the problem requires, first, the definition of a partitioning model that meets all the specification requirements (i.e., functionality, goals and constraints).
Note that, from the formal SW-level co-design point of view, the DEDR techniques (20), (21), (22) can be considered as a properly ordered sequence of vector-matrix multiplication procedures that can be performed in an efficient, high-performance computational fashion following the proposed bit-level high-speed VLSI co-processor architecture. In particular, for implementing the fixed-point DEDR RSF and RASF algorithms, we consider at this partitioning stage the development of a high-speed VLSI co-processor for the computationally complex matrix-vector SP operation, in aggregation with a powerful FPGA reconfigurable architecture via the HW/SW co-design technique. The rest of the reconstructive SP operations are implemented in SW on a 32-bit embedded processor (MicroBlaze). This novel VLSI-FPGA platform represents a new paradigm for the real-time processing of newer RS applications. Fig. 1 illustrates the proposed VLSI-FPGA architecture for the implementation of the RSF/RASF algorithms. Once the partitioning stage has been defined, the selected reconstructive SP sub-task is mapped into the corresponding high-speed VLSI co-processor. In the HW design, a precision of 32 bits is used for all fixed-point operations; in particular, a 9-bit integer part and a 23-bit fractional part are used in the implementation of the co-processor. Such precision guarantees numerical computational errors of less than 10^-5 with reference to the MATLAB Fixed-Point Toolbox (Matlab, 2011).

(iii) Aggregation of parallel computing techniques

This subsection focuses on how to improve the performance of the complex RS algorithms by aggregating parallel computing and mapping techniques onto HW-level massively parallel processor arrays (MPPAs). The basic algebraic matrix operation (i.e., the selected matrix-vector multiplication), which constitutes the core of the most computationally demanding reconstructive SP applications, is transformed into the required parallel algorithmic representation format. A manifold of different approaches can be used to represent parallel algorithms, e.g., (Moldovan & Fortes, 1986), (Kung, 1988). In this study, we consider a number of loop optimization techniques used in high-performance computing (HPC) in order to exploit the maximum possible parallelism in the design: loop unrolling, nested-loop optimization and loop interchange. In addition, to achieve such maximum possible parallelism in an algorithm, the so-called data dependencies in the computations must be analyzed (Moldovan & Fortes, 1986), (Kung, 1988). Formally, these dependencies are expressed via the corresponding dependence graph (DG). Following (Kung, 1988), we define the dependence graph G = [P, E] as a composite set in which P represents the nodes and E represents the arcs or edges, where each e ∈ E connects p1, p2 ∈ P and is represented as e: p1 → p2. Next, the data dependency analysis of the matrix-vector multiplication algorithm, y = Ax with y_j = Σ_i a_ji x_i, should be performed with a view to its efficient parallelization; here y is the n-dimensional (n-D) output vector and a_ji is the corresponding element of A. The first SW-level transformation is the so-called single assignment algorithm (Kung, 1988), (Castillo Atoche et al., 2010b) that performs the computation of the matrix-vector product. This single assignment algorithm corresponds to a loop unrolling method whose primary benefit is to perform more computations per iteration.
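The loop transformations just named can be made concrete on the selected matrix-vector kernel. The following sketch (didactic Python only; the chapter's reference implementations are in MATLAB and C++) contrasts the baseline nested-loop form, the single-assignment form that exposes the local dependencies of the DG, and a 4-way unrolled variant:

import numpy as np

def matvec_nested(A, x):
    # Baseline nested-loop form: y[j] accumulates in place (not single assignment).
    n, m = A.shape
    y = np.zeros(n)
    for j in range(n):
        for i in range(m):
            y[j] += A[j, i] * x[i]
    return y

def matvec_single_assignment(A, x):
    # Single-assignment form: s[j, i] is written exactly once, which exposes the
    # dependence s[j, i] -> s[j, i+1] used to build the dependence graph (DG).
    n, m = A.shape
    s = np.zeros((n, m + 1))
    for j in range(n):
        for i in range(m):
            s[j, i + 1] = s[j, i] + A[j, i] * x[i]   # local (neighbour) dependency only
    return s[:, m]

def matvec_unrolled(A, x, unroll=4):
    # Loop unrolling: more computations per iteration, fewer branch points.
    n, m = A.shape
    assert m % unroll == 0, "illustrative sketch assumes m divisible by the unroll factor"
    y = np.zeros(n)
    for j in range(n):
        acc = 0.0
        for i in range(0, m, unroll):
            acc += (A[j, i] * x[i] + A[j, i + 1] * x[i + 1]
                    + A[j, i + 2] * x[i + 2] + A[j, i + 3] * x[i + 3])
        y[j] = acc
    return y

A = np.arange(16, dtype=float).reshape(4, 4)
x = np.ones(4)
assert np.allclose(matvec_nested(A, x), matvec_single_assignment(A, x))
assert np.allclose(matvec_nested(A, x), matvec_unrolled(A, x))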
Unrolling also reduces the overall number of branches significantly and gives the processor more instructions between branches (i.e., it increases the size of the basic blocks). Next, we examine the computation-related optimizations, followed by the memory optimizations. Typically, when working with nests of loops, we are working with multidimensional arrays, and computing in multidimensional arrays can lead to non-unit-stride memory access. Many optimizations can be performed on loop nests to improve the memory access patterns. The second SW-level transformation consists of transforming the matrix-vector single assignment algorithm into a locally recursive algorithm representation without global data dependencies (i.e., into a recursive form). At this stage, nested-loop optimizations are employed in order to avoid large routing resources, which would translate into a large number of buffers in the final processor array architecture. The variable being broadcast in the single assignment algorithm is removed by passing the variable through each of the neighbouring processing elements (PEs) in the DG representation. Additionally, loop interchange techniques for rearranging a loop nest are also applied: for performance, inner and outer loops are interchanged to pull the computations into the centre loop, where the unrolling is implemented.

(iv) Architecture design onto MPPAs

Massively parallel co-processors are typically part of a heterogeneous hardware/software system; each co-processor is a massively parallel system consisting of an array of PEs. In this study, we propose an MPPA architecture for the selected reconstructive SP matrix-vector operation. This architecture is first modelled as a processor array (PA) and, next, each processor is itself implemented as an array of PEs (i.e., in a highly pipelined bit-level representation). Thus, we achieve the pursued MPPA architecture following space-time mapping procedures. First, some fundamental proven propositions are given in order to clarify the mapping procedure onto PAs.

Proposition 1. There are types of algorithms that are expressed in terms of regular and localized DGs, for example, basic algebraic matrix-form operations and discrete integral transforms such as convolution, correlation and digital filtering, which can also be represented in matrix formats (Moldovan & Fortes, 1986), (Kung, 1988).

Proposition 2. As the DEDR algorithms can be considered as properly ordered sequences of vector-matrix multiplication procedures, they can be performed in an efficient computational fashion following the PA-oriented HW/SW co-design paradigm (Kung, 1988).

Following the propositions presented above, we are ready to derive the proper PA architectures. (Moldovan & Fortes, 1986) proved the mapping theory for the linear transformation T that maps the N-dimensional index space of the DG onto an (N − 1)-D processor array, where N represents the dimension of the DG (see proofs in (Kung, 1988) and details in (Castillo Atoche et al., 2010b)).
Second, the desired linear transformation matrix operator T can be segmented into two blocks as follows:

T = [Π; Σ],   (24)

where Π is a (1 × N)-D vector (composed of the first row of T) which, in the segmenting terms, determines the time scheduling, and the (N − 1) × N sub-matrix Σ in (24) is composed of the remaining rows of T, which determine the space processor specified by the so-called projection vector d (Kung, 1988). Next, such segmentation (24) yields the regular (N − 1)-D PA specified by the mapping

K = T Φ,

where K is composed of the revised schedule vector (represented by the first row of the mapped description) and the inter-processor communications (represented by the remaining rows), and the matrix Φ specifies the data dependencies of the parallel algorithm representation; a worked numerical example of this mapping is given at the end of this subsection. For a more detailed explanation of this theory, see (Kung, 1988), (Castillo Atoche et al., 2010b). In this study, for a simplified test case of the matrix-vector algorithm mapped onto PAs, we specify the following operational parameters: m = n = 4, a clock period of 10 ns and a 32-bit data-word length. Now we are ready to derive the specialized bit-level matrix-format MPPA-based architecture. Each processor of the vector-matrix PA is in turn refined into an array of processing elements (PEs) at the bit level. Once again, the space-time transformation is employed to design the bit-level architecture of each processor unit of the matrix-vector PA. Specifications were chosen for the schedule vector, the projection vector and the space processor of the bit-level multiply-accumulate architecture; with these specifications the transformation matrix becomes

T = [1 2; 0 1].

The specified operational parameters are the following: l = 32 (the data word length in bits) and a clock period of 10 ns. The developed architecture is illustrated in Fig. 2. From the analysis of Fig. 2, one can deduce that, with the MPPA approach, the real-time implementation of computationally complex RS operations can be achieved thanks to the highly pipelined MPPA structure.

Bit-level design based on MPPAs of the high-speed VLSI accelerator

As described above, the proposed partitioning of the VLSI-FPGA platform considers the design and fabrication of a low-power, high-speed co-processor integrated circuit for the implementation of the complex matrix-vector SP operation. Fig. 3 shows the full adder (FA) circuit used throughout the design. An extensive design analysis was carried out on the bit-level matrix-format MPPA-based architecture, and the resulting hardware was studied comprehensively. In order to generate an efficient architecture for the application, various issues were taken into account. The main one was reducing the gate count, because it determines the number of transistors (i.e., the silicon area) to be used for the VLSI accelerator; power consumption is also determined by it to some extent. The design also has to be scalable to other technologies. The VLSI co-processor integrated circuit was designed using a low-power standard cell library in a 0.6 µm double-poly triple-metal (DPTM) CMOS process using the Tanner Tools® software. Each logic cell from the library is designed at the transistor level. Additionally, S-Edit® was used for the schematic capture of the integrated circuit following a hierarchical approach, and the layout was generated automatically through the Standard Cell Place and Route (SPR) utility of L-Edit from Tanner Tools®.
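As announced above, the space-time mapping can be illustrated numerically. In the sketch below, every node p = (j, i)^T of the 4 × 4 matrix-vector DG is assigned a firing time t = Π p and a PE index pe = Σ p; the particular choices Π = (1, 1) and Σ = (1, 0) (equivalently, projection along d = (0, 1)^T) are assumed textbook-style values for illustration, not the exact design vectors of the chapter:

import numpy as np

# Assumed space-time mapping for the 4x4 matrix-vector DG: schedule row Pi
# (first row of T) and processor-allocation row Sigma (remaining row of T).
Pi    = np.array([1, 1])      # time-scheduling vector (assumption)
Sigma = np.array([1, 0])      # space allocation: one PE per output row j
T = np.vstack([Pi, Sigma])    # segmented transformation, cf. eq. (24)

n = m = 4
print("node (j,i) -> (t, pe)")
for j in range(n):
    for i in range(m):
        p = np.array([j, i])
        t, pe = T @ p          # firing time and PE index of DG node p
        print(f"({j},{i}) -> ({t},{pe})")

# Validity check: every DG dependency must advance in time by at least one cycle.
deps = [np.array([0, 1]),      # partial sum s: (j,i) -> (j,i+1), same PE
        np.array([1, 0])]      # operand x propagated to the neighbouring PE
assert all(Pi @ d >= 1 for d in deps)
assert np.linalg.det(T) != 0   # T must be nonsingular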
Performance analysis

4.1 Metrics

For the evaluation of the proposed VLSI-FPGA architecture, we consider a conventional side-looking synthetic aperture radar (SAR) with a fractionally synthesized aperture as the RS imaging system (Shkvarko et al., 2008), (Wehner, 1994). The regular SFO of such a SAR is factored along two axes in the image plane: the azimuth or cross-range coordinate (horizontal axis, x) and the slant range (vertical axis, y). The conventional triangular function Ψ_r(y) and the Gaussian approximation Ψ_a(x) = exp(−x²/a²), with the adjustable fractional parameter a, are considered for the SAR range and azimuth ambiguity functions (AFs), respectively (Wehner, 1994). In analogy to image reconstruction, we employ a quality metric defined as the improvement in the output signal-to-noise ratio (IOSNR):

IOSNR_p = 10 log10 [ Σ_k (b̂_k^(MSF) − b_k)² / Σ_k (b̂_k^(p) − b_k)² ],   k = 1, …, K;  p = 1, 2,   (26)

where b_k represents the value of the kth element (pixel) of the original image B, b̂_k^(MSF) represents the value of the kth element (pixel) of the degraded image formed by applying the MSF technique (19), and b̂_k^(p) represents the value of the kth pixel of the image reconstructed with the two developed methods, where p = 1 corresponds to the RSF algorithm and p = 2 corresponds to the RASF algorithm, respectively. The quality metric defined by (26) allows one to quantify the performance of different image enhancement/reconstruction algorithms in a variety of respects: the higher the IOSNR, the better the improvement attained with the particular algorithm (a code sketch of this metric is given at the end of this section).

RS implementation results

The reported RS implementation results are achieved with the MPPA-based VLSI-FPGA architecture for the enhancement/reconstruction of RS images acquired with different fractional SAR systems characterized by a PSF of Gaussian "bell" shape in both directions of the 2-D scene (in particular, 16 pixels wide at 0.5 of its maximum for the 1K-by-1K BMP pixel-formatted scene). The images are stored on, and loaded from, a compact flash device for the image enhancement process, i.e., particularly for the RSF and RASF techniques. The initial test scene is displayed in Fig. 4(a), and Fig. 4(b) shows the corresponding degraded scene image formed with the MSF method. From Fig. 4 and Table 1, one may deduce that the RASF method outperforms the robust non-adaptive RSF in all simulated scenarios.

MPPA analysis

The matrix-vector multiplier chip and all modules of the MPPA co-processor architecture were designed by gate-level description. As already mentioned, the chip was designed using a standard cell library in a 0.6 µm CMOS process (Weste & Harris, 2004), (Rabaey et al., 2003). The resulting integrated circuit core has dimensions of 7.4 mm × 3.5 mm. The total gate count is about 32K, using approximately 185K transistors. The 72-pin chip will be packaged in an 80 LD CQFP package and can operate at both 5 V and 3 V. The chip is illustrated in Fig. 5. From Figs. 4 and 5, one can deduce that the VLSI-FPGA platform based on MPPAs via HW/SW co-design provides a novel high-speed SP system for the real-time enhancement/reconstruction required by computationally demanding RS systems.
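As announced above, the IOSNR metric (26) admits a direct numerical sketch; the synthetic scene and "reconstructions" below are placeholders chosen only to exercise the formula:

import numpy as np

def iosnr_db(b_true, b_msf, b_rec):
    # IOSNR (26): improvement of the reconstructed image b_rec over the
    # MSF-degraded image b_msf, both measured against the original scene b_true.
    num = np.sum((b_msf.ravel() - b_true.ravel()) ** 2)
    den = np.sum((b_rec.ravel() - b_true.ravel()) ** 2)
    return 10.0 * np.log10(num / den)

# Placeholder scene and reconstructions (illustrative only)
rng = np.random.default_rng(2)
b = rng.uniform(0.0, 1.0, (64, 64))
b_msf  = b + 0.30 * rng.standard_normal(b.shape)   # heavily degraded (MSF-like)
b_rsf  = b + 0.10 * rng.standard_normal(b.shape)   # p = 1: RSF reconstruction
b_rasf = b + 0.05 * rng.standard_normal(b.shape)   # p = 2: RASF reconstruction
print(f"IOSNR(RSF)  = {iosnr_db(b, b_msf, b_rsf):5.2f} dB")
print(f"IOSNR(RASF) = {iosnr_db(b, b_msf, b_rasf):5.2f} dB")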
On the one hand, the reconfigurable nature of FPGAs gives increased flexibility to the design, allowing an extra degree of freedom in the partitioning stage of the pursued HW/SW co-design technique. On the other hand, the use of VLSI co-processors introduces a low-power, high-speed option for the implementation of computationally complex SP operations. The high-level integration of modern ASIC technologies is a key factor in the design of bit-level MPPAs. Considering these factors, the VLSI/ASIC approach turns out to be an attractive option for the fabrication of high-speed co-processors that perform the complex operations constantly demanded by many applications, such as real-time RS, where the required high-speed, low-power computations exceed the capabilities of FPGAs alone.

Conclusions

The principal result of the reported study is the addressed VLSI-FPGA platform using MPPAs via the HW/SW co-design paradigm for the digital implementation of the RSF/RASF DEDR RS algorithms. First, we algorithmically adapted the RSF/RASF DEDR-related techniques over the range and azimuth coordinates of the uncertain RS environment for their application to imaging array radars and fractional imaging SAR. These descriptive-regularized RSF/RASF algorithms were computationally transformed for efficient HW-level implementation using parallel computing techniques in order to achieve the maximum possible parallelism in the design. Second, the RSF/RASF algorithms based on reconstructive digital SP operations were conceptualized and employed with MPPAs in the context of real-time RS requirements. Next, the bit-level array of processing elements for the selected reconstructive SP operation was efficiently optimized in a high-speed VLSI architecture using 0.6 µm CMOS technology with low-power standard cell libraries. The achieved VLSI accelerator was aggregated with a reconfigurable FPGA device via the HW/SW co-design paradigm. Finally, the authors consider that the bit-level implementation of specialized processor arrays in VLSI-FPGA platforms represents an emerging research field for real-time RS data processing in newer geospatial applications.
Effect of Cross-Rolling on Microstructure, Texture and Magnetic Properties of Non-Oriented Electrical Steels

Hot rolled non-oriented electrical steel samples were subjected to cold cross-rolling to 80% reduction in thickness. The cross-rolled samples were then annealed at 650, 750 and 850 °C for 1 hr, 2 hrs and 4 hrs respectively. The role of cross-rolling in the microstructure, texture and magnetic properties of the samples after annealing has been investigated. Two different samples were used for the present investigation: one had higher Al content (sample S1), while the other had higher C, Si, Mn, P and S content (sample S2). It was observed that sample S1 had a larger grain size than sample S2 after annealing. Cross-rolling was observed to control the texture development in the samples, and the texture factor was found to be identical in all directions of the sample. The core losses in the samples were found to decrease with increasing grain size.

Introduction

Non-oriented electrical steels are widely used in electrical generators and motors where the magnetic field is important in all directions [1-3]. To achieve adequate magnetic properties in all directions, both grain size and texture have been controlled through different thermo-mechanical processing routes for these steels [4-7]. An optimum grain size is essential for minimum watt loss in the material; for example, the optimum grain sizes are 100 and 150 micron for 1.85 and 3.2% Si steels, respectively [8]. Texture components like (001)<uvw> and (111)<uvw> are considered good and bad texture components, respectively, from the magnetic properties point of view [9,10]. However, various texture components other than the (001)<uvw> texture, i.e., Goss {110}<100>, cube {100}<001> and eta {hkl}<100>, are also good texture components from the magnetic properties viewpoint [11]. At present no method is feasible for the production of (001)<uvw> texture in all directions; therefore, throughout the world, isotropic non-oriented steels with multi-component texture have been developed for this purpose. Multistep cross-rolling (MSCR) has been observed to be an efficient method to randomize the texture of a material [12-15]. During MSCR, the material is rotated by 90° in the intermediate steps of the rolling [16].

In the present study, the effects of MSCR processing and subsequent annealing of hot rolled non-oriented electrical steels on the microstructure, texture and magnetic properties of these steels have been investigated.

Material and Sample Preparation

Hot rolled steels of 2.3 mm thickness were used as the starting material for the present study. Two different samples were used for the present investigation: sample S1, which had higher Al content, and sample S2, which had higher C, Si, Mn, P and S content. The chemical composition (in wt.%) of the samples is shown in Table 1. The steels were subjected to multi-step cross-rolling (MSCR) in a laboratory rolling mill down to 80% reduction in thickness. During each step of MSCR, the true strain was maintained at 10%. The MSCR samples were annealed at 650, 750 and 850 °C for 1 hr, 2 hrs and 4 hrs respectively. The samples were then metallographically polished for subsequent characterizations.

Optical Microscopy

Optical microstructures of the samples were characterized using a Zeiss AxioCam ERc 5s image analyzer. The samples were etched with nital solution for the microstructural analysis.
X-Ray Diffraction (XRD)

XRD was carried out in a Bruker D8-Advance system using Co Kα radiation. Three pole figures, (110), (200) and (211), were measured on the ND plane containing the RD and TD directions; RD corresponds to the rolling direction, whereas ND and TD correspond to the normal and transverse directions, respectively. Orientation distribution functions (ODFs) were calculated using the Labotex 3.0 software [17]. Using the software, the volume fractions of different texture components were estimated by the integration method with a 15° tolerance on the deviation from the ideal orientation(s).

Magnetic Measurements

Magnetic measurements were carried out on a Brockhaus MPG 200 instrument. The core losses at 1.5 T and 50 Hz were obtained along the RD of the samples. The samples were measured on a single sheet tester of size 210 mm × 210 mm. For each condition, three samples were measured in order to establish the accuracy of the measurement.

Results

Figures 1 and 2 respectively show the optical microstructures of sample S1 and sample S2 at different annealing conditions. Near-equiaxed grains were observed in both samples. However, a distinct bimodal grain size distribution was observed during annealing of sample S1 at the lower temperature (650 °C), and at higher annealing temperatures this sample showed abnormal grain growth. A regular increase in grain size as a function of both annealing temperature and annealing time was observed. Figure 3 shows the average grain size of the samples at the different annealing conditions; the rate of grain growth was relatively higher at 850 °C, and sample S1 had a larger grain size than sample S2 after annealing.

Figures 4 and 5 respectively show the ODFs, at constant φ2 = 45°, of samples S1 and S2 at different annealing conditions. The γ-fibre was distinct in sample S2, whereas in sample S1 the fibre was not clear except after annealing at 650 °C and at 850 °C for 2 hrs. The volume-fraction ratio defining the texture factor, i.e., (111)<uvw>/(001)<uvw> [10], as a function of annealing temperature and time is shown in Figure 6 for both samples. The following observations may be made from Figure 6:
• At 650 °C: the texture factor decreased, though not significantly, with annealing time for sample S1, whereas for sample S2 it increased up to 2 hrs and then decreased on further annealing to 4 hrs.
• At 750 °C: for sample S1, the texture factor decreased up to 2 hrs and then increased at 4 hrs, whereas for sample S2 it decreased with increasing annealing time.
• At 850 °C: the texture factor increased with annealing time for sample S1, whereas for sample S2 it initially decreased up to 2 hrs and then increased on further annealing to 4 hrs.
As shown in Table 2, the texture factor along all other directions was approximately similar to the values observed along the RD. Figure 7 shows the core losses of the samples at the different annealing conditions; the core losses decreased with increasing annealing time, and sample S1 had relatively lower core losses than sample S2.
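Before turning to the discussion, we note that the texture-component volume fractions underlying Figure 6 and Table 2 were estimated with a 15° tolerance about each ideal orientation (see the XRD subsection). A simplified stand-in for such an estimate on discrete orientation data is sketched below; this Python routine is not the Labotex procedure, and the orientation sampling, weighting and test data are illustrative assumptions:

import numpy as np
from itertools import permutations, product

def cubic_ops():
    # The 24 proper rotations of cubic crystal symmetry: signed permutation
    # matrices with determinant +1.
    ops = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            M = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                M[row, col] = s
            if np.isclose(np.linalg.det(M), 1.0):
                ops.append(M)
    return ops

OPS = cubic_ops()

def misorientation_deg(g1, g2):
    # Minimum rotation angle between two orientation matrices under cubic symmetry,
    # from trace(R) = 1 + 2 cos(theta).
    dg = g1 @ g2.T
    cos_best = -1.0
    for O in OPS:
        c = (np.trace(O @ dg) - 1.0) / 2.0
        cos_best = max(cos_best, min(1.0, c))
    return np.degrees(np.arccos(cos_best))

def volume_fraction(orientations, ideal, tol_deg=15.0, weights=None):
    # Fraction of (weighted) orientations within tol_deg of the ideal component.
    w = np.ones(len(orientations)) if weights is None else np.asarray(weights)
    hit = np.array([misorientation_deg(g, ideal) <= tol_deg for g in orientations])
    return float(w[hit].sum() / w.sum())

# Placeholder data: random orientations tested against the cube component
# {100}<001>, whose orientation matrix is the identity.
rng = np.random.default_rng(3)
gs = [np.linalg.qr(rng.standard_normal((3, 3)))[0] for _ in range(200)]
gs = [g * np.linalg.det(g) for g in gs]       # force det = +1 (proper rotations)
print("cube fraction:", volume_fraction(gs, np.eye(3)))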
Discussion

The magnetic properties of electrical steels largely depend on the chemical composition, crystallographic texture and grain size [4,10,18]. Elements such as P, Si, Al and Mn are generally added to these steels to improve the resistivity [9,10,18]. Other major elements, such as C and S, impair the magnetic properties of the steels, and their contents should be low for better magnetic properties. It has also been reported that the harmful effects of C and S can be minimized through second-phase precipitation [5,19]. The alloying elements also help in controlling the texture of the electrical steels through growth inhibition [18]; Mn is a good example of an element generating growth inhibitors during annealing of non-oriented electrical steels [18]. This might be the reason why sample S2 had a smaller grain size than sample S1 after annealing. Abnormal grain growth was nevertheless seen in sample S1 (Figure 1), although sample S1 had the higher Al content, which can also generate growth inhibitors [5]. However, Al has been observed to be detrimental to the workability of the samples, and this could affect the stored energy of the sample during cold rolling, which in turn affects the subsequent grain growth.

Controlling the crystallographic texture and grain size of these steels depends mostly on the thermo-mechanical processing steps during production. It has been found that an optimum grain size and a low texture factor, (111)<uvw>/(001)<uvw>, are required for good magnetic properties of electrical steels [10]. However, unlike CRGO (cold rolled grain oriented) electrical steels, where the ideal texture is required along one direction/orientation, in CRNO (cold rolled non-oriented) electrical steels the ideal texture is required in all directions. The present study is an attempt to weaken/randomize the texture of the steels rather than to strengthen the ideal texture in all directions. Cross-rolling has been a successful method for weakening the texture of cubic metals [12-15]. This was also observed in the present study, which showed that the texture factor was identical in all directions of the samples (Table 2). The formation of suitable recrystallization textures can be attributed to a combination of oriented nucleation and oriented growth, deformation-accumulated stored energy, geometric softening and orientation pinning [18,20,21]. In the present study, however, the mechanism of recrystallization texture development is not debatable, as the texture factor in the samples was not very different (Figure 6, Table 2). The significant increase in the texture factor of sample S1 annealed at 850 °C for 2-4 hrs (Figure 6a) may be attributed to the oriented growth mechanism (Figure 1), whereas the increase in the texture factor of sample S2 annealed at 650 °C for 2 hrs (Figure 6b) could be attributed to a combination of different recrystallization texture mechanisms [21]. The magnetic property of the samples, represented by the core losses, was found to depend mainly on the average grain size, although the texture factor was also important (Figures 3, 6 and 7): the larger the grain size of the sample, the lower the core losses. This also indicates that the abnormal grain growth in sample S1 was not detrimental from the magnetic property viewpoint.

Summary

In the present study, the effects of chemistry and cross-rolling on the microstructure, texture and magnetic properties of non-oriented electrical steels have been investigated. It was found that the electrical steel sample with higher Mn content showed normal grain growth, whereas the sample with higher Al content showed abnormal grain growth during annealing. The cross-rolling prior to annealing had a significant effect on the texture development, and the samples had a similar texture factor in all directions. It was also observed that the samples with larger grain size had lower core losses.
Figure 1: Optical microstructures of sample S1 at different annealing conditions.

Figure 2: Optical microstructures of sample S2 at different annealing conditions.

Figure 3: Average grain size of the samples as a function of temperature and time of annealing: (a) sample S1 and (b) sample S2.

Figure 7: Core losses as a function of annealing temperature and annealing time for (a) sample S1 and (b) sample S2. The core loss before annealing was 12.17 W/kg for sample S1 and 14.183 W/kg for sample S2.

Table 1: Chemical composition (in wt.%) of the steel samples used in the present study; the balance is Fe.
Development of a recipe and technology of pumpkin soup puree with the addition of functional polysaccharides

The effect of polysaccharides (PS), xanthan and guaran, on the quality of our "Pumpkin milk soup puree" was studied, and a recipe and technology for this culinary product were developed. A positive effect of the studied PS on the quality of the pumpkin soup puree was revealed, namely: the dry matter content decreased (by 4% on average), the acidity increased (by 25% on average), the palatability improved, and the shelf life of the pumpkin soup puree increased to 48 h. Adding PS did not significantly affect the cost of the samples under study; a decrease in cost by an average of 0.6% took place. As a result of sensory analysis and physicochemical and microbiological studies, our developed dish "Pumpkin soup puree" with the addition of 0.25% guaran can be recommended for introduction into the food industry (catering) as a functional and dietary product.

Introduction

The formula "Health is a function of nutrition" is the basis of modern food science. Over the past two hundred years, our nutrition has undergone significant changes. First, human consumption of refined foods, deprived of many vitamins, dietary fibre and other essential food components, has sharply increased. Second, the composition and ratio of the food components involved in providing the body with plastic and regulatory compounds have changed: our ancestors' food contained a smaller amount of protein, while containing more of various mineral salts, dietary fibre and antioxidants (by 2, 4-10 and 10 times, respectively). Third, the intake of lactic acid bacteria into the modern human body has sharply decreased [1-3]. There is evidence that the food currently consumed by Russians does not meet the physiological needs of humans, resulting in an increase in overall nutrition-related morbidity, a decrease in performance, and a significant reduction in life expectancy and population of the Russian Federation [4]. At the same time, there is a tendency to improve the quality of life, including through healthy nutrition, in particular within the development of the new Foodnet market of the National Technological Initiative, whose goal is to create "smart" services and products that will become leaders in world markets by 2035 thanks to the best technological solutions for human food security [5]. Therefore, based on the foregoing, the development of a recipe and technology of pumpkin soup puree for functional purposes is an urgent task.

The aim of our study was to develop a recipe and technology of pumpkin soup puree with the addition of functional polysaccharides. To achieve this goal, the following tasks were set:
• to theoretically substantiate and experimentally confirm the feasibility of using polysaccharides in the pumpkin soup puree recipe;
• to estimate the physicochemical characteristics of the pumpkin soup puree;
• to estimate the nutritional and energy values of the pumpkin soup puree;
• to estimate the biological value of the pumpkin soup puree;
• to explore the structural and mechanical properties of the pumpkin soup puree;
• to evaluate the microbiological parameters of the pumpkin soup puree; and
• to calculate the cost of our pumpkin soup puree.

Objects and research methods

The object of our study was "Pumpkin milk soup" taken from the collection of recipes for diet food production for catering enterprises [6].
Two polysaccharides were used in the work: xanthan (xanthan gum, Deosen, China) and guaran (guar gum, Guarsar, India), both of which are classified as dietary fibre. Sampling for organoleptic analysis was carried out in accordance with GOST 31986-2012 "Catering services. Method of organoleptic assessment of the quality of catering products" on a five-point scale [7]. The mass fraction of solids was measured in a Chizhov apparatus at a temperature of 150 ± 5 °C for seven minutes according to GOST R 54607.2-2012 [8-10]. Total acidity was estimated by titration according to the guidelines for laboratory quality control of catering products [9,10]. The number of mesophilic aerobic and facultative anaerobic microorganisms (NMAFAM) was estimated in accordance with GOST 10444.15-94 "Food products. Methods for measuring the number of mesophilic aerobic and facultative anaerobic microorganisms" [11]. The presence of Escherichia coli bacteria (ECB) was detected according to GOST 31747-2012 "Food products. Methods for detecting and determining the number of Escherichia coli bacteria (coliform bacteria)" [12]. The research was conducted at the chairs of "Food Technology" and "Microbiology, Biotechnology and Chemistry". The results were statistically processed using Microsoft Office Excel 2007 and MathCad 14 for Windows [13].

Results and discussion

When developing the recipe and technology of the pumpkin soup puree, semolina was replaced with polysaccharides. The selected PS have a number of technological advantages, namely a wide range of viscosity, high thermal stability, absence of syneresis (stability of product quality during storage), an antioxidant effect, economic efficiency, and the possibility of use in gluten-free and dietary nutrition [14]. The optimal concentrations of these PS were selected using sensory analysis [7,15], on the basis of which organoleptic profiles were plotted (see Figures 1 and 2). Figure 1 shows that the best concentrations for the group 1 samples were 0.15% (29.3 points) and 0.2% (29.5 points), since these samples had a more pronounced milk-pumpkin taste and a homogeneous consistency. A further advantage of these pilot samples was the absence of the film that usually forms on the soup surface during storage. For the group 2 samples, as can be seen from Figure 2, the best concentrations were 0.25% (28.7 points) and 0.3% (28.9 points), since the pumpkin taste and aroma were more pronounced in these samples than in the control. In addition, we noted that the use of guaran led to the formation of a "mucous" consistency, which allows these soups to be recommended for dietary nutrition. As a result of the organoleptic evaluation, samples 1.2, 1.3 and 2.3, 2.4, which had the best organoleptic properties, were selected for further physicochemical studies (see Table 1). From Table 1 it can be seen that the solids content in the pilot samples (with added polysaccharides) decreased by an average of 4% compared with the control sample; this decrease is associated with the replacement of the semolina with a polysaccharide. In addition, Table 1 shows that the acidity of the pilot samples increased by an average of 25%, presumably because of the ability of PS to affect the pH level of the product [16-18]. Using data from the reference book "Chemical composition and calorie content of Russian food products" [19], we calculated the nutritional and energy values of the studied products, which are presented in Table 2.
As can be seen from Table 2, the change in the recipe composition of the pumpkin soup puree affected the content of its main nutrients. The amounts of protein, fat and carbohydrates decreased by 8.97%, 1.01% and 20.86%, respectively, compared with the control; as a result, the calorie content decreased by 12.25%. Moreover, there was a slight decrease in the level of vitamins and minerals in the pilot samples of our soup (by 2.6% on average), except for retinol and ascorbic acid. At the same time, the content of dietary fibre in the selected samples with added PS increased by an average of 12.7% per 100 g of product compared with the control. From literature data [20] it is known that, according to physiological norms, the dietary fibre requirement is 20 g/day for an adult and 10-20 g/day for children over 3 years old. On this basis, our developed product can be classified as functional, the recommended portion size being 300-400 g. In order to characterize microbiological safety and to estimate the shelf life, we conducted five microbiological studies, whose results are presented in Table 3. From Table 3 it can be seen that, after 24 h, a small number of mesophilic aerobic and facultative anaerobic bacteria was found only in sample 1.3 of the pumpkin soup puree with added xanthan, not exceeding the SanPiN limit, and no Escherichia coli bacteria were detected in samples 1.2 and 1.3. After 48 h, a slight increase in these microorganisms was observed only in sample 1.2, in significantly smaller quantities than in the control; at the same time, no microorganisms were detected in sample 1.3. In the pumpkin soup puree with added guaran, neither mesophilic aerobic and facultative anaerobic bacteria nor Escherichia coli bacteria were found in samples 2.3 and 2.4 after 24 h. After 48 h, a small NMAFAM value was detected in samples 2.3 and 2.4, but significantly lower than in the control sample; Escherichia coli bacteria were detected only in sample 2.4, and in very small quantities (Table 3). From the foregoing it can be concluded that the introduction of the polysaccharides into the pumpkin soup puree helps to prolong the shelf life of the finished product. As a result of our preliminary economic calculations, the cost of the finished product (both control and pilot samples) was calculated; it is presented in Figure 3. Figure 3 shows that the change in the component composition of the pumpkin soup puree led to a decrease in the cost of the pilot samples by an average of 0.6%. This is due to the replacement of the semolina that was in the recipe of the control sample.
Does rTMS Alter Neurocognitive Functioning in Patients with Panic Disorder/Agoraphobia? An fNIRS-Based Investigation of Prefrontal Activation during a Cognitive Task and Its Modulation via Sham-Controlled rTMS

Objectives. Neurobiologically, panic disorder (PD) is supposed to be characterised by cerebral hypofrontality. Via functional near-infrared spectroscopy (fNIRS), we investigated whether prefrontal hypoactivity during cognitive tasks in PD-patients compared to healthy controls (HC) could be replicated. As intermittent theta burst stimulation (iTBS) modulates cortical activity, we furthermore investigated its ability to normalise prefrontal activation. Methods. Forty-four PD-patients, randomised to a sham or verum group, received 15 iTBS sessions above the left dorsolateral prefrontal cortex (DLPFC) in addition to psychoeducation. Before the first and after the last iTBS treatment, cortical activity during a verbal fluency task was assessed via fNIRS and compared to the results of 23 HC. Results. At baseline, PD-patients showed hypofrontality including the DLPFC, which differed significantly from the activation patterns of HC. However, verum iTBS did not augment prefrontal fNIRS activation. Solely after sham iTBS, a significant increase of measured fNIRS activation in the left inferior frontal gyrus (IFG) during the phonological task was found. Conclusion. Our results support findings that PD is characterised by prefrontal hypoactivation during cognitive performance. However, verum iTBS as an "add-on" to psychoeducation did not augment prefrontal activity. Instead, we only found increased fNIRS activation in the left IFG after sham iTBS application. Possible reasons, including task-related psychophysiological arousal, are discussed.

Introduction

According to DSM-IV, panic disorder (PD) is characterised by the sudden onset of unexpected panic attacks, resulting in constant worries about possible reasons and negative consequences of the attacks. Moreover, in the case of comorbid agoraphobia, this eventually leads to behavioural avoidance of situations from which escape might be difficult in case of an attack [1]. On a neurobiological level, functional imaging studies of PD-patients with and without agoraphobia have found hypoactivity of the prefrontal cortex (PFC), paired with hyperactivity of fear-relevant brain structures such as the amygdala, suggesting an inadequate inhibition by the PFC in response to anxiety-related stimuli [2-4]. In fact, hypofrontality of PD-patients has not just been observed in response to emotional stimuli [5], but also during cognitive tasks without any emotional content. For example, in a near-infrared spectroscopy study, Nishimura et al. [6] reported hypoactivation of the left PFC in particular, while Ohta et al. [7] found that PD-patients, as well as patients with a depressive disorder, showed lower bilateral prefrontal activation than healthy controls during a verbal fluency task. Moreover, Nishimura et al. [8] investigated a potential relation between the frequency of panic attacks/agoraphobic avoidance and PFC activation during a cognitive task, indeed finding an association between altered activation patterns in the left inferior prefrontal cortex and panic attacks, as well as between the anterior part of the right PFC and the severity of agoraphobic avoidance. Cortical activation patterns can be selectively modified by means of repetitive transcranial magnetic stimulation (rTMS) via electromagnetic induction [9].
This way, rTMS has been shown to modulate neurotransmitter release [10] and, depending on its stimulation frequency, normalise prefrontal hypoactivity [11]. In fact, even though results are still inconsistent [12], rTMS has been shown to have a moderate antidepressant effect [13,14]. Within this framework, it is of special interest that the method seems to alter not just affective states but also cognitive functioning [15,16]. Functional near-infrared spectroscopy (fNIRS) is an imaging method which allows for a less complicated and faster application compared to other imaging methods such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET) [17]. Especially psychiatric patients with claustrophobic fears benefit from the fact that they merely need to sit in a chair while optodes that emit and receive near-infrared light are attached to their heads [18]. This way, task-related changes in oxygenated and deoxygenated haemoglobin concentrations can be examined. Even though disadvantages such as a relatively low spatial resolution (approximately 3 cm), a limited penetration depth (approximately 2 to 3 cm) [19,20], and influences of extracranial signals do exist (for a review see [21]), fNIRS has proven to be a useful tool in psychiatric research [22]. Based on these findings and considerations, the goal of the current study was to (1) clarify whether the findings of Ohta et al. [7] concerning prefrontal hypoactivity in PD-patients compared to healthy controls during a cognitive paradigm (verbal fluency task) could be replicated via fNIRS in a larger sample. Also, a sham-controlled rTMS protocol was applied over the time course of three weeks above the left DLPFC to (2) examine whether excitatory rTMS can serve as an adequate tool to improve cognitive dysfunction in terms of prefrontal hypoactivation in PD-patients. In this regard, the patients' behavioural performance during the verbal fluency task was also taken into account.

2.1. Participants. Patients were recruited via the outpatient departments of the two study centres, advertisements in newspapers, the internet, and information material sent to local physicians. Exclusion criteria for all participants were age under 18 or over 65 years, pregnancy, and severe somatic disorders (e.g., cardiovascular disease, epilepsy, and neurological disorders). Also, patients fulfilling rTMS contraindications, such as ferromagnetic implants or significant abnormalities in routine EEG, were excluded. All patients were diagnosed with PD with or without agoraphobia according to DSM-IV-TR criteria [1]. Nonprominent comorbid psychiatric disorders (except for bipolar or psychotic disorder, borderline personality disorder, acute substance abuse disorders, and acute suicidality) were not exclusion criteria. Psychopharmacological treatment was permitted if the dosage had been stable for at least three weeks prior to baseline assessment (t1). Benzodiazepines, tricyclic antidepressants (except for Opipramol), and antipsychotics (except for Quetiapine with a maximal dosage of 50 mg) were excluded. Healthy controls who suffered from any axis-I psychiatric disorder (except for specific phobia) or had a family history of psychiatric disorders were excluded. A total of 23 controls and 44 PD-patients, of which 22 were randomised to the sham and 22 to the verum rTMS group, were selected for the study. Groups did not differ with respect to gender, age, years of education, and handedness (Table 1).
After a comprehensive study description, written informed consent was obtained. The study was approved by the Ethics Committees of the Universities of Muenster and Tuebingen, and all procedures were in accordance with the latest version of the Declaration of Helsinki.
2.2. Design. PD-patients received a total of 15 rTMS applications during three weeks at one of the study centres (Muenster or Tuebingen). Before the first and after the last rTMS-session, brain activation was assessed with fNIRS while patients were performing a cognitive task. Between the first and the second fNIRS assessment, all patients received three group sessions of psychoeducation concerning PD. Healthy control subjects attended the two fNIRS measurements but received no rTMS in-between. Enrolment took place between January 2011 and July 2013. Patients and therapists were blinded to rTMS group assignment. This investigation was conducted within the framework of a larger study which included 9 weeks of cognitive behavioral therapy for patients with panic disorder/agoraphobia and additional fNIRS investigations described elsewhere (Deppermann et al., in preparation [23]).
2.3. Psychoeducation. Psychoeducation sessions were held in groups of up to 6 participants and were conducted by trained psychologists, who were supervised regularly by clinical psychotherapists. A state-of-the-art, standardised treatment manual was used [24,25]. The content of the sessions included information about the pathogenesis of PD and agoraphobia, the vicious cycle of anxiety, somatic components of anxiety, and the sharing of personal experiences among the patients.
2.4. Verbal Fluency Task (VFT). All subjects were assessed twice within a three-week interval between the first (t1) and the second (t2) measurement time point. During the measurements participants sat in a comfortable chair and were advised to keep their eyes closed and relax in order to avoid head or body movements. The VFT consisted of a phonological, a semantical, and a control task. During the phonological task, subjects were instructed to produce as many nouns as possible beginning with a certain letter, whereas during the semantical task they had to name as many nouns as possible belonging to a certain category while avoiding repetitions and proper nouns. During the control task the participants were instructed to recite the weekdays at a pace that approximately matched the number of recited days to the number of nouns produced in the other tasks. The VFT started with a resting-state phase of 10 seconds, followed by the different tasks and further resting-state periods, which lasted 30 seconds each. The sequence of the three tasks and resting phases was repeated three times, each time with a different letter or category (see the timing sketch below). The letters and categories were chosen from the "Regensburger Wortflüssigkeitstest" [26]. Different letters/categories were used at t1 and t2 and counterbalanced between subjects. During the resting phase, participants were told to relax.
2.5. rTMS. Starting after the first fNIRS measurement, intermittent theta burst stimulation (iTBS, [27]) was applied in the patient group during 15 daily sessions on workdays over three weeks, with a figure-of-eight coil (MCF-B65, 2 × 75 mm diameter, n = 34; MAGSTIM 9925-00, 2 × 70 mm, n = 9) by means of a MagOption/MagPro X100 stimulator (MagVenture, Denmark; n = 34) and a MAGSTIM RAPID 2 T/N 3567-23-02 stimulator (n = 9), respectively.
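To make the block timing of the preceding VFT section concrete, the following minimal sketch lays out the sequence described there: a 10-second initial rest, then alternating 30-second task and rest blocks over three cycles. The fixed task order inside each cycle is an assumption made for illustration; in the study, letters, categories, and orders were counterbalanced.

```python
# A minimal sketch of the VFT block design, assuming a fixed task order.
TASKS = ["phonological", "semantical", "control"]

def build_schedule(n_cycles: int = 3, initial_rest: float = 10.0,
                   block_dur: float = 30.0, rest_dur: float = 30.0):
    """Return a list of (label, onset_s, duration_s) tuples."""
    schedule = [("rest", 0.0, initial_rest)]
    t = initial_rest
    for _ in range(n_cycles):
        for task in TASKS:
            schedule.append((task, t, block_dur))   # 30 s task block
            t += block_dur
            schedule.append(("rest", t, rest_dur))  # 30 s resting phase
            t += rest_dur
    return schedule

for label, onset, dur in build_schedule():
    print(f"{onset:6.1f} s  {label:12s} ({dur:.0f} s)")
```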
iTBS was used in order to achieve a facilitating effect on cortex excitability, as this could be demonstrated for the motor cortex, but also for more frontal cortex areas, in previous studies [27,28]. The iTBS protocol consisted of a total of 600 pulses applied in intermittent biphasic bursts at a frequency of 15 pulses per second via 2-second trains starting every 10 seconds, as described by Huang et al. [27]. The time of day for iTBS application did not vary by more than 2 hours from one day to the next, as the circadian rhythm is known to influence cortical excitability [29]. The participants' individual resting motor threshold was determined prior to each iTBS session on the left motor cortex, and stimulation intensity was set to 80% of this threshold. The stimulation site was F3 (left DLPFC) according to the international 10-20 system for electrode placement [30]. In order to ensure that the site of stimulation stayed constant over all sessions, F3 was drawn onto an individual textile cap for each participant prior to the first session. Additionally, other orientation points such as the nasion, the inion, and the auricles were sketched on. While the coil was held tangentially to the scalp, forming a 45° angle to the mid-sagittal line of the head (handle pointing in a posterior direction), for verum stimulation, it was flipped away from the scalp at a 90° angle for the sham stimulation. The post-fNIRS measurement (t2) was set to be conducted no earlier than 12 hours after the last rTMS-session to avoid the measurement of acute rTMS effects.
2.6. fNIRS. Relative temporal changes in oxygenated (O2Hb) and deoxygenated haemoglobin (HHb) were measured from a 10-second baseline using the ETG-4000 optical topography system (Hitachi Medical Co., Japan). For this purpose, the ETG-4000 uses laser diodes which emit light of two wavelengths (695 ± 20 nm and 830 ± 20 nm) and photodetectors which receive the scattered light intensity. Since the main light absorbers in this setup are the two types of haemoglobin, changes in measured light intensity between the emitter-detector pairs can be related to haemodynamic changes, which are coupled to neural activation, using a modified Beer-Lambert equation [31]. Altogether, the probe set consisted of 16 photodetectors and 17 light emitters arranged in a 3 × 11 fashion with an interoptode distance of 3 cm, resulting in 52 distinctive channels with a penetration depth of approximately 2 to 3 cm [19,20]. The probe set was attached over the participants' prefrontal cortex with the central optode of the lowest row on FPz, stretching out towards T3 and T4, respectively, according to the international 10-20 EEG system [32]. The sampling frequency was 10 Hz. The unit used to quantify haemoglobin concentration changes was mmol × mm. Subsequently, the recorded data were averaged over the corresponding blocks and exported into Matlab R2012b (The MathWorks Inc., Natick, USA), where they were first corrected for changes in the NIRS signal that were not directly due to functional changes in haemoglobin concentration related to the attended tasks. To this end, frequencies that exceeded 0.05 Hz were removed using a low-pass filter, and clear technical artefacts (e.g., due to an optode losing contact with the scalp during measurement) were corrected by means of interpolation, replacing the values of the corresponding channels with the values of the circumjacent channels in a Gaussian manner (closer channels were taken more into account).
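A minimal sketch of these first cleaning steps follows, assuming the 10 Hz sampling rate and the 0.05 Hz cut-off given above; the filter order and the zero-phase filtering are our assumptions, and the cbsi() function anticipates the correlation-based correction described in the next paragraph.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 10.0      # ETG-4000 sampling frequency (Hz)
CUTOFF = 0.05  # low-pass cut-off (Hz)

def lowpass(x: np.ndarray, order: int = 4) -> np.ndarray:
    """Remove frequency content above 0.05 Hz (zero-phase Butterworth)."""
    sos = butter(order, CUTOFF, btype="low", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def cbsi(o2hb: np.ndarray, hhb: np.ndarray) -> np.ndarray:
    """Correlation-based signal improvement after Cui et al. [33]:
    corrected = (x - alpha * y) / 2 with alpha = SD(x) / SD(y)."""
    x, y = o2hb - o2hb.mean(), hhb - hhb.mean()
    alpha = x.std() / y.std()
    return 0.5 * (x - alpha * y)

# Example on simulated data for a single channel.
t = np.arange(0, 60, 1 / FS)
o2hb = 0.1 * np.sin(2 * np.pi * 0.02 * t) + 0.05 * np.random.randn(t.size)
hhb = -0.3 * o2hb + 0.02 * np.random.randn(t.size)
activation = cbsi(lowpass(o2hb), lowpass(hhb))
```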
In order to further remove artefacts due to head movements, a correlation-based signal improvement (CBSI) procedure according to Cui et al. [33] was applied, adjusting the values for each channel via the equation x0 = (x − αy)/2, where x and y denote the mean-centred O2Hb and HHb signals and α = SD(x)/SD(y) [33]. According to this approach, cortical activation should result in a negative correlation between O2Hb and HHb concentrations, so in cases of a positive correlation the O2Hb signal is adjusted. Even though exceptions regarding a strictly negative correlation during brain activation exist [34], Brigadoi et al. [35] showed promising results for this procedure. Finally, the CBSI-adjusted signal was once more interpolated in a Gaussian manner, using a within-subject variance threshold of 4 as an interpolation criterion and assuming that exceeding values were most likely the result of further artefacts. Altogether, a total of 5% of all channels were replaced. After preprocessing, the data were averaged for all three groups within a time frame of 0-45 seconds after the onset of each task. The amplitude integrals in CBSI concentration between 5 and 40 seconds were taken as the basis for statistical analysis, as a delay of the haemodynamic response after task onset can be assumed.
2.7. Regions of Interest (ROI). Based on prior studies investigating verbal fluency [6-8, 36, 37], different a priori ROIs were defined. Accordingly, in addition to temporal areas (middle and superior temporal gyrus (MSTG)) and the inferior frontal gyrus (IFG) comprising Broca's area, the DLPFC is also supposed to be critically involved when performing a VFT. Corresponding channels were chosen using a virtual registration procedure as described by Tsuzuki et al. [38], Rorden and Brett [39], and Lancaster et al. [40] (cf. Figure 1).
2.8. Clinical Assessment. PD with or without agoraphobia was diagnosed by experienced clinical psychologists with the German version of the Structured Clinical Interview for DSM-IV, Axis I Disorders (SCID-I [41,42]). Anxiety was measured with the following questionnaires: Panic and Agoraphobia Scale (PAS; [43]), Hamilton Anxiety Rating Scale (HAM-A; [44]), and Cardiac Anxiety Questionnaire (CAQ; [45,46]). All questionnaires were completed at t1 and t2. For all scales, higher scores indicate more severe symptoms. In case of missing questionnaire items, a last observation carried forward (LOCF) analysis was conducted. If less than 10% of all items were left out, missing values were substituted by the participant's mean on the relevant scale.
2.9. Statistical Analyses. All analyses were conducted with IBM SPSS Statistics 20 and 21, respectively. The sample characteristics were assessed by means of χ² tests (gender, handedness, and first language) or t-tests (age, years of education, duration of illness for patients, and questionnaire data for t1 and t2), directly comparing the experimental groups (active versus sham, sham versus controls, and active versus controls). If numbers for the corresponding categories were below 5, Fisher's exact test was considered instead of asymptotic significance. The effects of patients' blinding regarding the rTMS treatment condition were evaluated using binomial tests (test proportion: 0.5) for the subjectively perceived rTMS condition in each patient group separately. The optimal sample size was determined based on previous studies investigating the effect of high-frequency rTMS on symptom severity in depression (e.g., [47]). The effect size of such a treatment protocol was estimated to approximate 0.5, while power was defined as 80%. The α-level was set to 5%.
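As an illustration of this sample-size reasoning, the calculation can be reproduced with a standard power analysis; treating the pre/post contrast as a paired t-test is our assumption, not a detail given in the paper.

```python
# Hedged sketch: required n for effect size d = 0.5, power = 0.80, and a
# two-sided alpha of 0.05, assuming a paired (pre/post) t-test.
import math
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(math.ceil(n))  # -> 34 participants under this assumption
```

Under this assumption the estimate lands near 34 participants, which is compatible with the more conservative target of 40 patients mentioned next.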
Since the effect of rTMS protocols in patients suffering from anxiety disorders is still difficult to quantify [48], it was decided to follow a more conservative assessment, resulting in a target sample size of n = 40 patients. For the baseline assessment, fNIRS data for all ROIs were analysed by means of analyses of variance (ANOVA) with the between-subject factor group (patients versus controls). The corresponding behavioural performance was analysed accordingly. In order to verify that changes in CBSI concentration were task-related, effects of hemispheric lateralisation were further analysed using a 2 × 3 repeated-measures ANOVA (RM-ANOVA) with the within-subject factors hemisphere (left versus right) and task (semantical versus phonological versus control task). As the factor time was of no relevance within this context, the corresponding data were averaged across the two measurement time points. Accordingly, the phonological and semantical tasks should elicit a left lateralisation in the language-relevant ROIs (IFG and MSTG) [36]. To evaluate the effects of rTMS on prefrontal activity, 2 × 3 RM-ANOVAs for each ROI and cognitive task were conducted (within-subject factor time (t1 versus t2), between-subject factor group (verum versus sham versus controls)). The total number of produced nouns for the phonological and semantical task was investigated analogously to the collected fNIRS data via a 2 × 3 RM-ANOVA with the within-subject factor time (t1 versus t2) and the between-subject factor group (verum versus sham versus controls). The number of weekdays was not considered in the analysis, as it was matched to fit the number of nouns in the other tasks. In case of violations of the sphericity assumption, the degrees of freedom in the ANOVAs were corrected using the Greenhouse-Geisser or Huynh-Feldt procedure, depending on ε (ε > 0.75: Huynh-Feldt; ε < 0.75: Greenhouse-Geisser; see [49]). To avoid α-error accumulation due to multiple testing, the significance level of α = 0.05 was adjusted using a Bonferroni-Holm (BH) [50] correction procedure for the ROIs in each hemisphere separately. Post hoc analysis was conducted by means of two-tailed t-tests for paired and independent samples. In order to assess the relationship between cortical activation and behavioural performance, correlations between the number of recited words and CBSI concentration were calculated at t1 and t2 for each group and task separately by means of Spearman's rho. To further directly consider changes over time, correlations between the differences (t2 − t1) in CBSI concentrations and number of recited words were calculated. For post hoc t-tests and correlations, one-tailed P-values were considered in case of directed hypotheses.
Results
Sample Characteristics. Tables 1 and 2 give an overview of the sociodemographic sample characteristics at baseline and the clinical questionnaire data for t1 and t2. Sociodemographic data did not differ between groups. For the clinical questionnaire data, no significant differences emerged between the sham and verum stimulated group at t1. The verum group versus controls and the sham group versus controls, respectively, revealed significant differences on all scales in the expected directions (data shown for HAM-A, self-rated PAS, and CAQ, Table 2). When patients were asked to guess whether they had received active or sham rTMS, 16 patients in the sham group thought that they had been sham stimulated, while 5 thought that it had been the active protocol.
Fourteen patients in the verum group thought they had obtained the active protocol, and 4 said that they had received a placebo treatment. Additionally, 5 patients (1 sham, 4 verum) did not reply to the question. For each patient group, these guesses differed significantly from chance (binomial test, sham group: P = 0.027 and verum group: P = 0.031).
Behavioural Performance. Table 3 contains means and standard deviations for the number of produced nouns for the phonological as well as the semantical task for each group and each measurement time point. With respect to the behavioural data, no significant baseline differences could be found between patients and controls. Further, the 2 × 3 RM-ANOVA revealed no significant changes for either the phonological or the semantical task.
Prefrontal Activity at Baseline. Because one patient missed t2, the fNIRS data of this subject were excluded from all analyses. Concerning the remaining subjects, significant results were found for all ROIs on both hemispheres for the phonological task (Figure 2), whereby the healthy controls displayed more activation than the patients. For the control task no significant differences were found (Figure 3).
Table 2 (notes): Over the course of treatment, the degree of assessed symptoms on HAM-A, self-rated PAS, and CAQ significantly declined in the verum and the sham stimulated group; however, no significant differences between these two groups occurred after rTMS treatment. a: P < 0.001 compared with sham rTMS (t1); b: P < 0.001 compared with verum rTMS (t1); c: P < 0.001 compared with sham rTMS (t2).
Effects of Hemispheric Lateralisation. Regarding hemispheric lateralisation effects, the 2 × 3 RM-ANOVA showed a significant main effect of hemisphere for the two language-related ROIs, the IFG (F(1,65) = 15.030, P < 0.001 (<0.0167, BH-corrected)) and the MSTG (F(1,65) = 8.317, P = 0.005 (<0.025, BH-corrected)), where activation, as indicated by CBSI concentration, was higher for the left hemisphere. A significant main effect of task was identified for all ROIs (DLPFC: F(2,100) = 24.275, P < 0.001 (<0.0167, BH-corrected); MSTG: F(2,100) = 55.974, P < 0.001 (<0.025, BH-corrected); and IFG: F(2,100) = 61.718, P < 0.001 (<0.05, BH-corrected)). The interaction hemisphere × task was significant for the IFG (F(2,130) = 8.151, P < 0.001 (<0.0167, BH-corrected)) and the MSTG (F(2,114) = 3.478, P = 0.040 (<0.05, BH-corrected)). Post hoc analyses showed that this was due to a left lateralisation concerning the phonological (IFG, right versus left: t(65) = −3.734, P < 0.001; MSTG, right versus left: t(65) = −2.983, P = 0.002) and partly the semantical (IFG, right versus left: t(65) = −4.034, P < 0.001) task, while there was no significant difference for the control task. Regarding the DLPFC, no significant main effect of hemisphere was found, whereas the interaction hemisphere × task was significant (F(2,130) = 11.040, P < 0.001 (<0.025, BH-corrected)). For the DLPFC, results were in contrast to the above-mentioned findings, with a significant lateralisation effect in terms of increased activation in the right hemisphere for the control task (t(65) = 5.072, P < 0.001) but no significant difference for the two active verbal fluency tasks. Differences between tasks were significant for all comparisons for the IFG (right hemisphere: t(65) ≥ 2.7, P ≤ 0.005; left hemisphere: t(65) ≥ 3.37, P < 0.001) and the left MSTG (t(65) ≥ 3.322, P < 0.001), with activation during the phonological task > activation during the semantical task > the control task.
For the right hemisphere of the DLPFC, activation during the phonological task was also higher than for the semantical task (t(65) = 6.083, P < 0.001). For the left DLPFC, participants showed similar activation patterns as for the IFG and left MSTG with respect to the three test tasks (phonological > semantical > control; for all: t(65) ≥ 3.114, P ≤ 0.0015).
Effects of rTMS on Prefrontal Activity. For the semantical task, a significant main effect of group was found for the left and the right DLPFC (for both: F(2,63) ≥ 5.30, P ≤ 0.007 (<0.0167, BH-corrected)). For both areas, actively stimulated patients showed a significantly reduced cortical activation compared to healthy controls (left DLPFC: t(35) = −2.78, P = 0.005; right DLPFC: t(43) = −2.60, P = 0.007). Also, sham stimulated patients showed significant hypoactivation compared to healthy participants with respect to the right (t(38) = −3.19, P = 0.002) and left DLPFC (t(34) = −2.316, P = 0.014). No significant main effects of time or significant interactions of time and group were discerned for the left and right DLPFC, respectively. No significant differences between sham and verum stimulated patients existed with regard to the left or right DLPFC for the phonological and semantical task, respectively. The analyses of the control task for the left and right DLPFC revealed neither significant main effects of group nor significant main effects of time. Also, no significant interaction effects of time and group were found. For reasons of clarity, solely the significant results for the IFG with respect to the three test tasks are depicted in Table 4. For the MSTG, no significant outcomes were found.
Discussion
The present study aimed to confirm the finding that PD-patients are characterised by prefrontal hypoactivation during cognitive tasks as compared to healthy controls [7]. Moreover, it additionally addressed the question of whether a potential hypoactivation of the PFC can be normalised by means of repeated iTBS. Patients with PD were investigated via fNIRS while performing a VFT prior to and after receiving daily prefrontal iTBS application over a time course of three weeks in addition to weekly group sessions of psychoeducation. The VFT results were compared with those of healthy control subjects. Regarding our first hypothesis, our results are in line with the above-mentioned findings concerning hypofrontality during cognitive tasks in PD-patients. With respect to our second hypothesis, unexpectedly, an increase in activation over time could only be found for the left IFG in sham stimulated patients. In more detail, before the start of rTMS treatment, differences in cortical activation (as indicated by CBSI data) between patients and controls were observed for specific task conditions of the VFT. In fact, as predicted by our hypothesis, patients did not differ from controls during the control task but displayed decreased prefrontal activation in all ROIs during the phonological task and partly also during the semantical task. The missing differences during the control task indicate that the differences in CBSI concentration between healthy controls and patients during the two active tasks were indeed due to altered cognitive processing and not to more general effects elicited by the measurement situation. Still, it cannot be excluded that our fNIRS signal may have been affected by components that are not directly related to cognitive processing but still lead to a (task-related) change in blood flow and hence a change of the measured signal.
Regarding more general effects that might influence the fNIRS signal, a recent study by Takahashi et al. [51] showed that the verbal fluency task is particularly affected by confounding effects due to stress-induced skin blood flow, especially for NIRS channels located over the forehead. In order to verify that we still mainly measured cortical activation, we presumed that lateralisation effects in terms of increased left-hemispheric activation should be found for language-related areas such as the MSTG and IFG but not for the DLPFC. Further, increases in these two ROIs should only exist for the semantical and phonological but not for the control task. In line with previous studies [36], we could confirm these assumptions and accordingly ascribe our finding mainly to differences in cortical activation. Contrary to our second hypothesis, no significant changes in prefrontal activation after rTMS treatment could be found in the verum group. In fact, the only significant change was found for the sham group, which showed an increase in CBSI concentration in the left IFG during the phonological task. As these findings are hard to interpret at first glance, we further analysed the prefrontal activation patterns in relation to the behavioural performance of healthy controls and the two patient groups. Regarding only the behavioural data, healthy controls could, descriptively, name more nouns than both patient groups; however, this difference was not significant. Further, when associating CBSI concentrations in the different ROIs with the number of recited nouns at baseline, no significant correlations could be revealed for either group. Interestingly, however, at the second measurement time point, negative correlations between the behavioural performance and activation patterns in nearly all ROIs existed for the healthy controls. Even though we originally applied one-sided testing (assuming a positive relationship between behavioural performance and cortical activation), we still think that it is worthwhile to give these negative correlations some consideration, as they might be helpful for a better understanding of our results. Similar to the finding in healthy controls, negative associations between changes in the number of recited nouns from t1 to t2 and changes in DLPFC activation bilaterally during the phonological task could be found for both patient groups. In order to interpret these results in a meaningful way, it has to be considered that multiple distinct mechanisms might have an influence on the fNIRS signal. Firstly, according to our hypothesis, it can be assumed that a demanding cognitive task leads to an increase in cortical activation which then triggers a certain performance at the behavioural level. In this context, higher cortical activation should lead to a better behavioural performance, as it implies that more cognitive resources can be recruited to fulfil the task as well as possible. From another perspective, one could also assume that in subjects with highly efficient cortical processing (i.e., in case of a subjectively non-challenging task situation) fewer cognitive resources are needed to achieve good results. In this case, low cortical activation should be associated with high behavioural performance.
However, it needs to be kept in mind that the fNIRS signal might not just contain components which are due to cortical activation but might also be influenced by extracranial signal components that relate to peripheral processes such as psychophysiological arousal-induced changes in blood flow. In particular, in frontopolar regions, these components have been shown to also trigger an increase in the fNIRS signal due to stress-induced vasodilation during a verbal fluency task [51]. In this context, higher CBSI concentrations might then also be associated with a decrease in behavioural performance, as it can be presumed that too much psychophysiological arousal should have a negative effect on cognitive functioning. Even though we tried to control for such arousal effects by performing a control task and considering lateralisation effects, we cannot exclude that they still had an effect on our results. Accordingly, we conclude that we could not find any significant correlations at the baseline measurement time point because psychophysiological arousal was probably very high for all participants, hence confounding the fNIRS signal components due to cortical activation. At the second measurement time point, cortical activation should have been the same for the healthy controls, while arousal may have decreased for some participants as the situation was more familiar, leading to a reduction in signal intensity and negative correlations with behavioural performance due to improved cognitive function (with reduced arousal). While it cannot be excluded that these negative correlations also imply that the task was not challenging enough for some of the healthy subjects, the study by Takahashi et al. [51] points more in favour of an interpretation in terms of a decrease in psychophysiological arousal. In fact, the authors could show that already a repetition of the verbal fluency task within one measurement led to a significant repetition effect in the form of a decrease in psychophysiological arousal and the associated fNIRS signal intensity. Concerning the PD-patients, psychophysiological arousal should also have decreased, but possibly not as much as in the healthy controls, as the measurement situation still represented a typical panic-relevant situation (patients had to sit in a small room with the fNIRS probe set attached to their heads, so a sudden escape was not possible). At the same time, it can be expected that arousal effects, which are prominent in the frontopolar area of the PFC, also have an effect especially on the DLPFC which cannot be neglected [52]. A possible explanation especially for the influence on DLPFC activation through the frontopolar region is given by Kirilina et al. [53], who found that the vein responsible for arousal effects in the forehead also stretches out to dorsolateral regions. Consequently, apparent effects of a slight decrease in arousal would most likely be expected in the DLPFC, hence explaining the negative correlations between changes in behavioural performance and changes in CBSI concentrations for the patients. Even though correlations between CBSI concentrations and behavioural performance during the semantical task were not significant, it is noteworthy that the direction of the correlations was generally the same, supporting our prior assumptions. We therefore conclude that healthy controls as well as patients in both groups were generally less affected by psychophysiological arousal during the second measurement time point.
In this regard, the increase in activation from the first to the second measurement time point for the left IFG in the sham group might not be related to an increase in cognitive functioning but might merely represent a more general, possibly arousal-related, effect. A further factor which might have contributed to the increase in CBSI concentrations after sham iTBS is simple regression towards the mean. In this regard, it needs to be considered that sham and verum stimulated patients did not differ significantly in their activation patterns after rTMS application. Instead, sham stimulated patients showed a significantly decreased baseline CBSI concentration in the left IFG compared to healthy controls. All in all, our findings confirm our first hypothesis that PD-patients show a prefrontal dysfunction that is at least partly independent of panic-related tasks. However, an increase in cortical activation after verum iTBS was not found. Instead, we could accentuate the need to consider task-related, arousal-induced effects, especially when investigating patients with anxiety disorders. To our knowledge, this is the first controlled study investigating effects of add-on theta burst stimulation (TBS) on prefrontal activation and cognitive functioning in patients with PD/agoraphobia. So far, only a few open studies have investigated the effects of TBS on psychiatric symptoms (e.g., [54,55]). However, limitations of this study have to be mentioned. The stimulation condition (verum versus sham) was correctly identified by the majority of patients, so one could argue that placebo effects might have affected our results. Possibly, patients exchanged their perceptions about rTMS during the psychotherapy group sessions, as they became acquainted with each other over the course of psychoeducation. For further investigations, we therefore emphasise the need for specialised sham coils which produce a superficial electrical current on the skull, as demonstrated by Rossi et al. [56]. Although sufficient blinding could not be reached in our study, promising results of rTMS in controlled studies with electromagnetic placebo coils could demonstrate specific effects of verum stimulation on psychiatric symptoms (e.g., for PTSD and comorbid depression by Boggio et al. [57]). Referring to the choice of the rTMS frequency, we used a protocol which is assumed to facilitate motor cortex excitability [27]. A facilitation of frontal activity could also be demonstrated; for example, speech repetition accuracy was promoted by intermittent theta burst stimulation of the left posterior inferior frontal gyrus [28]. Nevertheless, rTMS effects seem to be influenced by a wide range of factors, for example, genetic variables or the mode of application. Cheeran et al. [58] could demonstrate a significant influence of the brain-derived neurotrophic factor (BDNF) gene on TBS efficacy for the primary motor cortex. Also, TBS after-effects seem to hinge on the NMDA receptor [59]. Further, a study by Gamboa et al. [60] demonstrated reversed iTBS effects after a prolonged, single application of 1200 instead of 600 stimuli. Taken together, it is questionable whether iTBS consistently facilitates the excitability of stimulated neurons. Moreover, in our study, rTMS was generally applied after the psychoeducation sessions. However, an application prior to psychoeducation could have led to a different processing of the afterwards presented information.
We therefore suggest that future studies should systematically assess temporal effects of rTMS applications in relation to additional intervention methods. Regarding methodology, we have already discussed the problems that arise from the confounding skin blood flow component in the fNIRS data. A possible solution to this, which allows for an even more precise interpretation of the results, might be to measure the skin components selectively by additionally placing optodes with shorter interoptode distances on the probe set [51]. Finally, concerning the diagnostic process, PD/agoraphobia was diagnosed prior to t1 with the help of structured clinical interviews. However, the time lag between these interviews and t1 was not standardized in our study.
Conclusion
This pilot study investigated cortical activation patterns of patients with PD/agoraphobia compared to healthy controls. Further, effects of add-on iTBS on cortical activation and cognitive performance in PD/agoraphobia were analysed. Findings of a baseline cortical hypoactivation could be replicated. However, an increase in cortical activation after verum iTBS could not be supported. Instead, we only found increased CBSI concentrations for the left IFG after sham iTBS application. By integrating behavioural performance into our analysis, we could attribute this finding to more general effects such as task-related psychophysiological arousal and regression towards the mean. Taken together, our results confirm that PD is characterised by prefrontal hypoactivation. As we could not verify an increase in cortical activation after verum iTBS, further studies that control for task-related psychophysiological arousal are needed in order to evaluate under which circumstances iTBS might serve as a therapeutic tool in the treatment of PD.
The Biological and Clinical Relevance of G Protein-Coupled Receptors to the Outcomes of Hematopoietic Stem Cell Transplantation: A Systematized Review
Hematopoietic stem cell transplantation (HSCT) remains the only curative treatment for several malignant and non-malignant diseases at the cost of serious treatment-related toxicities (TRTs). Recent research on extending the benefits of HSCT to more patients and indications has focused on limiting TRTs and improving immunological effects following proper mobilization and engraftment. Increasing numbers of studies report associations between HSCT outcomes and the expression or the manipulation of G protein-coupled receptors (GPCRs). This large family of cell surface receptors is involved in various human diseases. With ever-better knowledge of their crystal structures and signaling dynamics, GPCRs are already the targets for one third of the current therapeutic arsenal. The present paper assesses the current status of animal and human research on GPCRs in the context of selected HSCT outcomes via a systematized survey and analysis of the literature.
Hematopoietic Stem Cell Transplantation (HSCT)
The field of hematopoietic stem cell transplantation (HSCT) has witnessed tremendous progress since its origins in the 1950s [1]#. The number of HSCTs has exploded, along with its range of indications, candidates, and donor sources [2,3]#. HSCT remains indispensable for treating several malignant and non-malignant disorders. The use of peripheral blood stem cells (PBSCs) is well established in autologous transplantation [4]#, and they have become the preferred source of allogeneic hematopoietic stem cells (HSCs), at least in adults [5,6]#. In both scenarios, the number of circulating HSCs mobilized from the bone marrow is closely associated with the engraftment outcome [7]#. Before the graft infusion, most HSCT protocols require a preparation phase, which aims to kill malignant cells to make room for the newly infused HSCs to engraft or to induce immunosuppression. The latter is important to avoid graft rejection and graft-versus-host disease (GvHD) in allogeneic settings. This so-called conditioning regimen comprises high doses of chemotherapeutic drugs and/or radiotherapy that cause a cytotoxic burst of tumor and/or normal cells. This results in a pro-inflammatory status [8,9]#, which is desired.
Mobilization
Mobilization of HSC from the BM into peripheral blood (PB) is usually measured by the number of circulating CD34+ and/or nucleated blood cells harvested using leukapheresis [42]#. The standard mobilization agent is recombinant granulocyte colony-stimulating factor (G-CSF; filgrastim or lenograstim), an endogenous growth factor responsible for inducing granulocyte expansion and maturation in times of infection or stress [43]#. Within the group of chemokines (Table 1), the CXCL12 3'UTR A allele (rs1801157; g.44372809G>A) has shown a positive correlation with mobilization in both healthy donors and patients undergoing autologous transplantation [44][45][46]. The functional consequence of this CXCL12 (SDF-1) polymorphism is still unclear, but it may lead to lower protein levels [47]#. This would concur with abounding evidence on CXCR4, the CXCL12 receptor, whose blockade promotes mobilization when using plerixafor. As expected, several publications on plerixafor (47 in total) were relevant, with most assessing mobilization for autologous HSCT in MM and lymphoma patients.
Although an improvement may be achieved by increasing the dose [48]#, plerixafor has been demonstrated to be less efficient as a monotherapy than in combination with G-CSF [49]. Interestingly, in patients responding poorly to G-CSF (<20 × 10^6/L CD34+ cells in PB), pre-emptive plerixafor treatment led to a final yield equivalent to that of a rescue strategy administered to patients with insufficient leukapheresis [50]. Several additional studies have endorsed the use of plerixafor in autologous transplantation for diabetic patients [51] and pediatric patients [52][53][54], whereas other articles have supported its use in elderly patients and those with renal insufficiency [55,56]#. Two early studies also showed plerixafor to be efficient in mobilizing healthy allogeneic donors with a reasonable safety profile [57,58], and this was later reported by a phase I/II trial [59]#. Examining these varied studies, an extension of plerixafor indications is to be expected in the coming years, as are new pharmacological alternatives. Indeed, new compounds targeting CXCR4 are in development: small molecules (TG-0054 [60][61][62]) such as plerixafor, but also peptides (BL-8040 [63], (BK)T140 [64], POL6326 [65], LY2510924 [66]) or oligonucleotides (NOX-A12 [67]). All have already been tested in humans as part of phase I or early phase II clinical trials. Finally, although the CD34+ count in PB remains the most used predictor for guiding cost-efficient mobilization regimens [68]#, new biomarkers are being eagerly sought to improve individualized prescriptions. Nonetheless, the expression of CXCR4 on CD34+ HSC in correlation with mobilization has thus far shown discordant findings [69][70][71], and additional studies are needed.
Engraftment
Engraftment in humans is assessed in PB and defined by the stable recovery of blood cell counts after myeloablative conditioning and graft infusion: platelets > 50 × 10^9/L in the absence of transfusion (platelet engraftment), or neutrophils > 500 × 10^6/L (neutrophil engraftment) [105]#. In allogeneic HSCT, additional genetic testing for chimerism is performed to confirm the donor origin of the hematopoietic recovery [106,107]#. The absence of engraftment or the loss of donor cells after initial engraftment constitutes primary and secondary graft failure (GF), respectively [108]#. In animal studies, mostly on mice, competitive repopulation assays allow for a much larger toolkit of measurements of HSC engraftment capacity [109]#. The use of anti-CXCR4 compounds for mobilization in the donor did not preclude engraftment in humans [61,85,94,95,110,111] or mice [112], with some studies reporting even better engraftment in mice [113,114] (Table 2). Targeting CXCR4 could also improve engraftment by vacating the hematopoietic niches in the recipient before HSCT, either via chimeric antigen receptor (CAR) T cells co-expressing CXCR4 and c-kit or via plerixafor [115][116][117]. Despite discordant results in mice [118], plerixafor administration post-HSCT in human recipients improved engraftment in one phase I/II clinical trial [119]. In this study, "mobilizing" doses of plerixafor were started from day 2 post-HSCT and continued until day 21 or neutrophil engraftment. Conversely, CXCR4 expression in both mouse and human cells correlated positively with autologous and xeno-engraftment [120,121]. In humans, following G-CSF mobilization, CXCR4 expression showed a positive correlation with engraftment [122][123][124].
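To make the neutrophil-engraftment definition quoted above operational, a minimal sketch follows. Requiring three consecutive days above the threshold as a proxy for "stable recovery" is our assumption and not part of the cited definition [105].

```python
# Hedged sketch: flag neutrophil engraftment from serial counts using the
# >500 x 10^6/L (= 0.5 x 10^9/L) threshold quoted above; the three-day
# "stable recovery" rule is a common operationalization, assumed here.
def neutrophil_engraftment_day(anc_by_day, threshold=0.5, run=3):
    """anc_by_day: iterable of (day, ANC in 10^9/L) on consecutive days.
    Returns the first day of the qualifying run, or None."""
    streak = 0
    for day, anc in sorted(anc_by_day):
        streak = streak + 1 if anc > threshold else 0
        if streak == run:
            return day - run + 1  # first day of the run
    return None

print(neutrophil_engraftment_day(
    [(10, 0.2), (11, 0.6), (12, 0.7), (13, 0.9)]))  # -> 11
```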
Surprisingly, here, the CXCL12 3'UTR A polymorphism, whose occurrence had been associated with increased mobilization (see the Mobilization subsection), was associated with faster hematopoietic recovery in autologous transplant patients [125]. Indeed, if it really decreased protein expression, one would expect reduced homing of the graft's CXCR4+ HSC by CXCL12-expressing stromal cells. However, more research seems warranted to define the timing of CXCR4 requirements both before and during the course of engraftment. Concerning other chemokines, high levels of interferon gamma-dependent CXCL9 [132,133] have been associated with GF in humans. In mice, knocking out CXCR2 delayed hematopoietic recovery [134]. On the other hand, CCR1 expression marked human HSC as responsible for high levels of xeno-engraftment in mice [135]. These are some examples of the contribution of chemokines to hematopoietic-niche integrity. There is less evidence available for other classes of GPCR. For instance, the engraftment of cells mobilized by cannabinoid receptor 2 (CB2) agonism [136] in animals or beta-3 adrenergic receptor (B3AR) agonism [102] in humans was equivalent to that of cells mobilized by G-CSF. Frizzled-6 (Fzd-6), a class F GPCR for Wnt protein ligands [137]#, is another potential contributor, as it was shown to be necessary for BM reconstitution beyond the homing phase [138]. A potentially clinically relevant finding is the presence of auto-antibodies activating the angiotensin 1 receptor (AT1R) in human allogeneic HSCT recipients, described in auto-immune settings [139]# and solid organ allo-rejection [140]#, and their association with decreased engraftment [141].
Sinusoidal Obstruction Syndrome (SOS)
Some early HSCT complications such as thrombotic microangiopathy and SOS are initiated by endothelial cell damage [156,157]#. SOS, formerly called veno-occlusive disease of the liver (VOD), occurs in 5-60% of HSCT patients, depending on prophylaxis and risk factors [158][159][160]# such as the underlying disease, the use of alkylating agents for conditioning, patient age, or liver disease. Sinusoidal endothelial cell damage is the key step in the pathophysiology of SOS, leading to the activation of the coagulation cascade, centrilobular thrombosis, and consequent post-sinusoidal hepatic hypertension and, potentially, multiple-organ failure [157]#. Clinically, SOS is characterized by jaundice, fluid retention, painful hepatomegaly, and often thrombocytopenia refractory to transfusion [160,161]#. Our review strategy identified no direct associations between any GPCRs and SOS occurrence or severity, yet some additional reports caught our attention. For example, recombinant thrombomodulin (rTM) is approved in Japan to treat disseminated intravascular coagulation (DIC) and has been shown to reduce SOS and the occurrence of thrombotic microangiopathy in HSCT patients [162,163]#. In two murine SOS models, one using monocrotaline (MCT) and the other using busulfan/cyclophosphamide conditioning followed by HSCT, rTM's cytoprotective effect was demonstrated to depend on its fifth epidermal growth factor-like region (TME5) [164,165]#. A murine model of tacrolimus-induced vascular injury showed that the pro-angiogenic functions of TME5 depended on its binding to G protein-coupled receptor (GPR) 15 [165,166]#. rTM was also able to mitigate aGvHD in mice in a GPR15-dependent manner [167]#. However, this GPR15 dependency has yet to be demonstrated directly for SOS in vivo.
Interestingly, the oligonucleotide defibrotide, the only FDA/European Medicines Agency (EMA)-approved drug for the treatment of SOS [168]#, was shown to increase thrombomodulin expression in humans [169]#. A traditional Japanese medicine called Dai-kenchu-to (DKT) was able to attenuate liver damage but not prevent the development of SOS induced by MCT [170]#. MCT-induced CXCL1 (or CINC1) upregulation was suppressed in the DKT-treatment group, which could be a potential mechanism explaining the associated reduction of neutrophil accumulation in the liver.
Acute GvHD
Acute GvHD (aGvHD) occurs when naïve T cells from an allogeneic donor are activated by recipient or donor antigen-presenting cells to attack recipient cells [171]#. This process is triggered by the inflammatory setting of HSCT. Once activated within lymph nodes, the alloreactive effector T cells migrate to the skin, the gastrointestinal (GI) tract, or the liver, causing further inflammation and damage [172]#. Some of the main determinants of aGvHD risk are the sources of HSCs themselves, donor-recipient HLA mismatches, the intensity of the conditioning regimen, and the absence of any GvHD prophylaxis [173]#. Immunosuppression is systematically used to prevent and treat aGvHD [174]#. Like that of other immune cells, T cell trafficking is regulated by myriad chemo-attractants, including chemokines. A study of the expression kinetics of a panel of chemokines and receptors in GvHD-target organs following allo-HSCT compared that expression to the histopathological changes occurring in the same organs [175]#. Characterization of the individual contributions of each chemokine/receptor would be needed to draw further conclusions, but it highlights that aGvHD is a dynamic process with a complex spatiotemporal network of chemo-attractants at play. A number of chemokines or their receptors are associated with the development of aGvHD (Table 3). For instance, higher CCL8 levels correlated with more severe murine aGvHD [176], and CCR2 expression on CD8+ effector T cells was necessary for their migration to the murine gut and the liver and for the generation of aGvHD [177]. In contrast, broad inhibition of CCL2, CCL3, and CCL5 reduced murine liver aGvHD [178]. Also in mice, anti-CD3 treatment during preconditioning reduced aGvHD by limiting both CCR7+ dendritic cells homing to lymph nodes and CCR9+ effector T cells homing to aGvHD target organs, without reducing GvL [179]. In humans, both a CCL5 (RANTES; Regulated on Activation, Normal T Cell Expressed and Secreted) haplotype of three polymorphisms [180] and the expression of the CX3CL1/CX3CR1 pair [181] positively correlated with the occurrence of aGvHD. Depending on the cells bearing them, other chemokine receptors can prevent aGvHD. The presence of CCR8 on regulatory T cells (Tregs) is crucial to their anti-GvHD action in mice [182], whereas ChemR23, another chemo-attractant receptor, prevents intestinal aGvHD in mice [183]. The CXCL12 3'UTR A allele previously discussed for mobilization and engraftment was here associated with reduced risk and severity of aGvHD [184], highlighting the favorable prognosis carried by this allele. The anti-CCR4 antibody mogamulizumab is currently approved for human use before HSCT to treat certain adult T-cell leukemias. This might accelerate subsequent aGvHD because it not only targets CCR4+ tumor cells but also CCR4+ Tregs [185,186].
Higher CCR5 and CCR9 levels were detected on children's memory effector T cells before they developed GI aGvHD [187]. CCR5 is particularly interesting, as it is used by the human immunodeficiency virus (HIV) as a co-receptor for entry into CD4+ T cells, thus partly explaining the genetic susceptibility to HIV infection [188]#. Maraviroc, a CCR5-antagonist, was approved in 2016 for the treatment of HIV. In the context of HSCT, the CCR5 ∆32 mutation was first associated with lower aGvHD [189,190]. Several related studies subsequently showed that different subgroups of CCR5+/CD4+ T cells could be associated with intestinal aGvHD [191][192][193]. Similarly, dendritic cells expressing CCR5 could be associated with aGvHD [194][195][196], showing that CCR5 could be a chemo-attractant for several causative immune cell types. Two phase I/II trials have now tested the safety and the efficacy of CCR5 blockade using maraviroc for the prevention of aGvHD. The first trial, conducted in adults [197], proved successful and led to follow-up studies by the same group of researchers [198][199][200][201][202] as well as an ongoing phase II study (NCT01785810). Another trial [203], published in 2019, included adults and children but had inconclusive findings due to unrelated toxicities. According to its authors, CCR5 blockade could prevent lymphocyte homing but not their activation, highlighting the temporal complexity of immune activation. In mice, three studies have shown the absence of CCR5 to accelerate aGvHD [204][205][206]. Among adrenergic receptors, alpha-2 adrenergic receptor (A2AR) agonism [230,231] or beta-adrenergic receptor (BAR) activation under stressful conditions [232,233] was associated with lower aGvHD in mice, and so was P2Y2 knock-out [234]. The previously mentioned AT1R auto-antibodies were also revealed to be associated with increased aGvHD in humans [141]. Some interesting candidates have also emerged from the class of lipid mediators. The role played by the endocannabinoid system (ECS) in inflammation is now established [28]#, and the ECS was previously implicated in solid organ rejection [235,236]#. In mice, CB1/2 activation with tetrahydrocannabinol (THC) was able to mitigate aGvHD [237], whereas transplants in which CB2 was knocked down induced higher aGvHD [238]. In a human phase II trial, cannabidiol was also able to prevent aGvHD [239]#. The broad S1P1 agonist fingolimod is approved for the treatment of multiple sclerosis and works by sequestering lymphocytes in secondary lymphoid organs [240]#. A more specific agonist (CYM-5442) was shown to reduce the severity of murine aGvHD by inhibiting macrophage recruitment via a reduction of CCL2 and CCL7 expression on endothelial cells [241]. Among the other GPCR classes, the receptors for the complement 3/5 activation fragments (C3aR/C5aR) [242] and the platelet-activating factor receptor (PAFR) [243] in mice, as well as a microsatellite in human EGF, Latrophilin, and Seven Transmembrane Domain-Containing Protein 1 (ELTD1) [244], have all shown a positive correlation with aGvHD. A frizzled agonist was able to rescue Lgr5+ gastric stem cells from murine aGvHD [245], underlining the importance of each target organ's microenvironment. Activated protein C (aPC) signaling via protease-activated receptors 2/3 (PAR2 and PAR3) expanded Tregs and mitigated aGvHD in mice [246]. rTM depends on GPR15 to mitigate murine aGvHD [167], whereas human patients receiving rTM were shown to have lower CCL5 levels and less aGvHD [247].
The case of GPR43 merits further discussion; it is a sensor of gut microbiota-derived metabolites, such as short-chain fatty acids (SCFAs). These metabolites limit a number of inflammatory processes via action on endothelial cells [248]#, by modulating neutrophil recruitment [249]#, or via the CD8+ T cell's effector function [250]#. In the intestine, GPR43 contributes to epithelial integrity, and GPR43 knock-out in mice was associated with increased severity of aGvHD [251].
Table 3. Acute GvHD occurrence/severity in animal (A) or human (H) studies; columns: aGvHD, studies, correlation with outcome, references. See the Methods section regarding the reporting of results (Section 3.2).
Chronic GvHD
Chronic GvHD (cGvHD) historically develops from 100 days after allogeneic HSCT, but it can nevertheless overlap with aGvHD, as it shares some initiating events, although it has a different pathophysiology and clinical manifestations [257]#. Although it has not been completely elucidated, cGvHD pathogenesis involves chronic inflammation, aberrant tissue repair, and fibrosis, while the underlying immune dysregulation affects multiple cell types [258]#. The therapeutic arsenal against cGvHD is limited [259,260]#, making cGvHD the main contributor to TRM in long-term HSCT survivors [261]#. Due to its timescale, cGvHD overlaps with other chronic and/or age-related conditions, such as metabolic syndrome, chronic infections, or second primary cancers [262]#. cGvHD can affect virtually any organ but strikes the following systems in particular: skin and its appendages, mucosae, muscles and joints, and lungs.
[Table: cGvHD, studies, effect.]
Lung Toxicity
Pulmonary complications following HSCT are a cause of morbidity and mortality. They arise from infections, iatrogenic fluid overload, and idiopathic pneumonia syndrome (IPS), and as a consequence of renal or cardiac failure or cGvHD [285]#. IPS is an early complication of allogeneic HSCT that encompasses a spectrum of clinical presentations arising from acute, widespread alveolar injury [286]#. The type and the intensity of conditioning medication, especially cyclophosphamide, and the activation and the migration of donor T cells are important contributors to that injury [287,288]#. Various cellular and soluble inflammatory mediators are thought to play a role in the development of IPS [286,289]#. Our systematized search of the literature found several animal studies reporting associations between chemokines and/or their receptors and the occurrence of IPS (Table 5). CXCL9 and CXCL10, and their receptor CXCR3 [290], in addition to CCL5 (RANTES) [178,291], showed a positive correlation with IPS. For CCL2 [178,292,293] and CCL3 [178,294], the evidence was more conflicting. As with GvHD, specific chemokines probably correlate with distinct immune cell functions during the course of IPS [286]#. No reports of associations were found for any other functional class of GPCR. It is interesting to note that no new studies on this topic have been published in the last ten years.
Treatment-Related Mortality (TRM)
TRM comprises deaths not due to the underlying disease. In cases of malignant diagnoses, this means death not due to a relapse of the disease, also sometimes called non-relapse mortality (NRM) [295]#. GvHD, VOD, lung toxicity, and infections due to HSCT-related immunosuppression are important causes of TRM [285]#. Mortality rates are tightly linked to responses to the initial treatments for each one of those complications; refractory aGvHD, for instance, is fatal in up to 80% of cases [285]#.
Animal transplantation models do not usually recapitulate the underlying diseases for which human HSCTs are indicated; thus, defining TRM seems futile in animal studies. In contrast, in human studies, two human polymorphisms, in CCL2 (rs1024610, NG_012123.1:g.2936T>A) [296] and CXCL10 (rs3921, NM_001565.3:c.*140G>C) [297], have been associated with increased and decreased TRM, respectively (Table 6). The study on CCL2 found no significant associations between the variant and aGvHD, suggesting that another TRT could be the cause of the observed TRM. In contrast, in the study on the CXCL10 variant, lower TRM was associated with lower organ failure. CXCL9 was part of a four-biomarker panel associated with TRM [268]. High CCR5 expression in recipient T cells increased TRM [198,254], whereas a CD4+CCR5+ cell population was associated with higher TRM [191]. In another study, CCR7+CD4+ T cells were associated with death from cGvHD [298]. No reports of associations were found for any other functional class of GPCR.
Systematized Search
We used the MEDLINE (www.ncbi.nlm.nih.gov/pubmed/) and EMBASE (www.embase.com/) databases to carry out a systematized review [299]# of articles published in English up to 4 March 2019 (see Appendix A.4). The search extended to in vivo models and human interventional and non-interventional studies (see Appendix A.3). The following HSCT outcomes were selected: mobilization, engraftment, SOS, acute GvHD, chronic GvHD, lung toxicity, and TRM. The rationale for this selection and the measurement methods are explained in Appendix A.8. Due to the advances in research on plerixafor and the existence of another recent systematic report reviewing its use for its approved indications [41]#, we restricted our search to human studies where mobilization was the measured outcome. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; www.prisma-statement.org/) guidelines were followed to ensure a systematized search, although some of the requirements were not applicable due to the quite inclusive selection criteria used [300]#, as explained in Appendices A.6-A.11. The selection and data collection processes are described in Appendices A.6 and A.7 as well. The search workflow and its output are reported in Figure 1.
Reporting of the Results
In the Results and Discussion section, each table lists the GPCRs, GPCR ligands, or related proteins whose expression/activity was reported to correlate with the HSCT outcome under consideration, as well as the corresponding reference(s) from the systematized search. Whenever an increase in the gene/protein expression or activity of a GPCR, a GPCR ligand, or a related protein was associated with an increase in the incidence/level/severity of the outcome under consideration, the correlation is described as positive (+). The same applies whenever a decrease in a GPCR expression/activity was associated with a decrease in the outcome. Conversely, whenever an increase or a decrease in the GPCR expression/activity was associated with a decrease or, respectively, an increase in the outcome, the correlation is negative (−). Whenever there was no association between a GPCR expression/activity and the outcome, the correlation is null (0). As for polymorphisms (identified as "haplotype", "microsatellite", or by the variant number), their presence can correlate either positively (+) or negatively (−) with the outcome, yet their effect on protein level/function is not necessarily known.
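Returning to the search protocol above, the MEDLINE arm of such a strategy can be reproduced programmatically through the NCBI E-utilities; the query string below is illustrative only, as the actual search strategy is given in Appendix A.4.

```python
# Hedged sketch: query PubMed (MEDLINE) via the NCBI esearch endpoint with
# an illustrative GPCR/HSCT query and the review's cut-off date.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = ('("G protein-coupled receptor" OR GPCR) AND '
         '"hematopoietic stem cell transplantation"')

resp = requests.get(ESEARCH, params={
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",
    "mindate": "1900/01/01",
    "maxdate": "2019/03/04",  # last search date used in this review
    "retmax": 100,
    "retmode": "json",
}, timeout=30)
result = resp.json()["esearchresult"]
print(result["count"], "records; first PMIDs:", result["idlist"][:5])
```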
GPCRs or their ligands are grouped according to functional classes: chemokines [C-C ligand/receptor (CCL/R), C-X-C ligand/receptor (CXCL/R), C-X3-C ligand/receptor (CX3CL/R); blue], adrenergic receptors (orange), lipid mediators/receptors (green), and "others" (gray). To introduce topics or to enrich the discussion, we considered additional studies, which were not selected by the research query and/or criteria, as well as reviews. These references, along with those cited in the introduction, are specifically identified (#).

Conclusions and Perspectives

This systematized review reports on a significant number of GPCRs showing consistent associations with mobilization and engraftment and for which research has moved on to more advanced stages. Although there is some evidence that GPCRs play a role in SOS, GvHD, lung toxicity, and TRM in HSCT settings, there is a flagrant paucity of clinical associations. For several target GPCRs, the evidence is lacking or conflicting. In contrast, chemokines and their receptors make promising potential targets/biomarkers, as there are numerous potential candidates in various settings. Despite the difficulties in isolating the contributions of individual GPCRs, research has made significant progress for several of them. Targeting CXCR4 for mobilization has proven its utility, with the marketing authorization of plerixafor coming in 2008. Further work is needed to extend plerixafor's indications, and the new anti-CXCR compounds in development could offer interesting pharmacological alternatives. The timing of CXCR4's role during engraftment remains unclear, but CXCR4 blocking during mobilization does not seem to prevent engraftment, and CXCR4 could be manipulated so that it vacates the recipient niche or stimulates engraftment. No direct link between a GPCR and SOS has been consistently demonstrated in vivo. As for aGvHD, CCR5 blockade, such as with the anti-HIV drug maraviroc, is on track to become a therapeutic option for its prophylaxis. Combined or alternated blockade of CXCR3 and CCR5 might bring further benefits. Activating cannabinoid receptors could be another prospect. GPR43 also merits further investigation, as the importance of the gut microbiota in inflammatory processes is increasingly recognized. It seems that research on GPCRs in the context of cGvHD is less advanced than in that of aGvHD. The current state of knowledge involves multiple chemokines but is based either on single studies or on reports with conflicting findings. Studies on lung toxicity and IPS were scarce, and no relevant contribution to this field has been made in the last ten years. For both cGvHD and IPS, a better understanding of the molecular pathogenesis will probably be required before any useful biomarkers are revealed. As for TRM, it is often multifactorial; it may thus prove more challenging to associate death with a single biomarker than it is for individual or even combined toxicities. The absence of an assumed common toxicity-related pathway may explain the paucity of studies revealed by our literature search strategy. The methodology used in the present paper strived to follow the PRISMA protocols, which, due to the nature of the search performed, could not be followed strictly. However, given the broad range of GPCRs, using the PRISMA methodology helped the authors to guide their search, resulting in a systematized review [299].
Because our search included both pre-clinical and clinical studies, the quality, precision, and developmental stage of the evidence were inevitably heterogeneous and could not be reported or summarized using quantitative measures. Despite our best efforts to cover all GPCRs, certain reports that we judged significant enough to mention were missed by the search strategy. Nevertheless, the structure of this article allowed the authors to include such papers in the Results and Discussion section in order to properly cover the subject. Also, time and human resource limitations did not allow for quality or bias assessment by multiple independent reviewers, as should be expected from a completely systematic review. Regardless of the limitations of our systematized approach, it did allow for a comprehensive scope and was meant to inform scientists and clinicians of the latest developments in a field that is (re-)gaining momentum.

This manuscript is a review using some of the systematic elements recommended in the latest PRISMA guidelines [300]#. Its protocol was not registered prior to its completion. Cochrane, Prospero, and Epistemonikos databases were searched (see Appendix A.4) to verify that this manuscript was not repeating any existing review.

Appendix A.2. Rationale and Objectives

The rationale was explained in the main text. The general objective was to genuinely report on the state of knowledge regarding the identification and the targeting of GPCRs in the management of hematopoietic stem cell transplantation (HSCT) outcomes. Hence, we used an inclusive approach in selecting the types of studies under consideration. Clinical studies, both observational and interventional, prospective randomized clinical trials, and retrospective cohort or case-control studies were included in the search. Both adult and pediatric studies were considered. Preclinical studies were included provided they used animal models (mice, rats, primates, zebrafish) that reproduced conditioning and hematopoietic stem cell transplantation comparable to those in humans. We thus excluded animal studies that only investigated the mobilization stage of HSCT. Conference abstracts were included, whereas reviews, editorials, and case reports were excluded. Only English language literature was included. As there was no existing comprehensive review of the field to start from (see Appendix A.1), we did not set any anterior limit on the time of publication. We ran the search for the last time on 4 March 2019.

Appendix A.3.2. Interventions/Observations

Studies were considered for this review if they reported:
• any association between the expression of a GPCR, a GPCR ligand, or a related protein (e.g., GPCR kinases, beta-arrestins) and one of the selected outcomes (see Appendix A.8) of autologous and/or allogeneic HSCT;
• any intervention on a GPCR, a GPCR ligand, or a related protein to change one of the selected outcomes of autologous/allogeneic HSCT.
The list of GPCRs provided by Uniprot (available online: www.uniprot.org/docs/7tmrlist.txt) was used as a reference and was sometimes cross-checked with other public databases.

Appendix A.5. Data Management

The references were assembled and screened using EndNote X9.1.1 Desktop software for MacOS. The extracted data were stored in an Excel form.

Appendix A.6. Selection Process

Appendix A.6.1. De-Duplications

HG used EndNote's automatic duplication search function and also conducted a manual curation of the assembled articles to remove obvious duplicates before screening.
As this review was not completely systematic, for the reasons previously explained, we decided against running a thorough de-duplication algorithm [301]#.

Appendix A.6.2. Screening

HG screened the title and the abstract of each article found using the search strategy described above to determine whether it fulfilled the eligibility/exclusion criteria. A rating system was used to discuss the least obvious exclusions with TN and SJM. Records for which important information was missing, typically the abstract, but for which titles indicated a likely match to our topic were further screened, along with all the included records. HG performed this second screening step based on full-text articles, at the same time as he collected the data.

Appendix A.7. Data Collection Process

HG extracted the data from the full-text records using an Excel piloting form drafted with TN and SJM. The investigators undertook no data verification. No systematic publication quality assessment was conducted. The item variables sought were as follows:
• Outcome, effect of GPCR, and direction of the effect: mobilization, engraftment, VOD, acute GvHD, chronic GvHD, lung toxicity, treatment-related mortality.

Appendix A.8. Outcomes and Measurement

The search and the ensuing data collection considered the following outcomes, as they were identified as the most common or the most important issues in HSCT in the latest European Society for Blood and Marrow Transplantation (EBMT) Handbook [285]. The measurement method is indicated in brackets, when appropriate.
- Stem cell mobilization in donors (allogeneic) or hosts (autologous), as measured using circulating CD34+ (HSC) and/or nucleated blood cells harvested through leukapheresis.
- Engraftment (neutrophil and/or platelet recovery, lab diagnosis). Better engraftment was measured by shorter recovery times or lower rates of graft failure.

Due to the heterogeneity of the studies found and the fact that the screening was executed by one reviewer only, the risk of bias in individual studies was not properly assessed. This is obviously an important shortcoming of this review.

Appendix A.10. Data Synthesis

Data were qualitatively analyzed and the results summarized in Tables 1-6. Due to the heterogeneity of the studies, no quantitative analysis could reasonably be carried out.

Appendix A.11. Meta-Biases and Cumulative Evidence

Due to the heterogeneity of the studies reviewed, no meta-bias analysis was undertaken, and this is a shortcoming of this review. No systematic approach was undertaken to assess the quality of individual studies due to the constraints and heterogeneity previously underlined.
1,3-Dipolar cycloadditions of organic azides to ester or benzotriazolylcarbonyl activated acetylenic amides

Reactions of 3-lithiopropiolate 10 with isocyanates or diisocyanates gave mono-carbamoylpropiolates 11a,b and bis-carbamoylpropiolates 12a−d in 40−76% yields. 1,3-Dipolar cycloadditions of benzyl azide (1a) and mono-acetylenes 11a,b under thermal conditions gave mono-triazoles 13a,b in 83 and 84% yields, respectively. The structure of 13a was confirmed by X-ray crystallography. Microwave-induced cycloadditions of mono-azide 1a with bis-carbamoylpropiolates 12a−d furnished the bis-triazoles 14a−d. Similar reactions of 3-(azidomethyl)-3-methyloxetane (15) with mono-acetylenes 11a,b or bis-acetylenes 12a,d produced the mono- and bis-triazoles 16a,b and 17a,b, respectively. Reactions of 1,4-bis(azidomethyl)benzene (1b) with mono-acetylenes 11a,b gave the azido-triazoles 18a,b, and microwave irradiation with simultaneous air-cooling gave bis-triazoles 19a,b. 1,3-Dipolar cycloaddition of the benzotriazolylcarbonyl-substituted acetylene 4 and benzyl azide (1a) proceeded smoothly under microwave irradiation or thermal conditions to give the corresponding triazole 20, which on further treatment with a variety of amines gave the C-carbamoyl triazoles 21a−d in 54−91% yields.

Introduction

1,2,3-Triazoles possess therapeutic value,1 are synthetic intermediates in the preparation of medicinal compounds,2 and find numerous applications in the chemical industry.3 Triazole-oligomers have been considered as new, robust binder systems for high-energy explosive and propellant formulations.4 The design and synthesis of such compounds is presently in the initial stage of development, but it is already known that structural features such as the length of chains between the triazole cross-links and the substituents on the triazole ring significantly impact the mechanical properties of triazole-oligomers. 1,3-Dipolar cycloaddition of azides to alkynes is the optimum method for the preparation of 1,2,3-triazoles,3a,5 and copper(I)-catalyzed reactions offer good regioselectivity.5f Cycloadditions are faster with electron-withdrawing substituents on the acetylene moiety, while their presence on the azide has the opposite effect.5d Previously utilized activating substituents on the alkyne include especially alkoxycarbonyl6 and other electron-withdrawing groups such as carboxyl, acyl, cyano, aryl, haloalkyl, trimethylsilyl, phenylsulfonyl or phosphonate.7 Functionalities on the acetylene play an important role in the kinetics of 1,3-dipolar cycloaddition reactions; for example, while reactions with alkoxycarbonyl substituents are fast and require low reaction temperatures, carbamoylacetylenes require high temperatures and reaction times of 24 h to one week.7e,8 The low reactivity of acetylenic carboxamides towards 1,3-dipolar cycloaddition with azides has remained a problem for direct access to important 1,2,3-triazoles with a carbamoyl substituent; the preparation of these compounds has generally involved the use of easily available 1,2,3-triazole esters,1a,9 -acids or -imines10 as intermediates, followed by a functional group transformation to the amide.
Synthesis of oligomers with 1,2,3-triazole subunits is an emerging area in macromolecular chemistry, with examples of the preparation of bis-triazoles or triazole-oligomers by the 1,3-dipolar cycloaddition of diacetylenes and diazides,11a,b diacetylenes and monoazides,11c diazides and monoacetylenes,11d or tris-acetylenes and diazides.11e The reported examples commonly use ester substituents and mostly require long reaction times (1-5 days) and relatively high temperatures (80-100 °C).11a In continuation of an ongoing program in our laboratories to develop strategies for low-temperature synthesis of 1,2,3-triazoles12 and oligo- and poly-triazoles as new high-energy explosive and propellant ingredients, we now report the 1,3-dipolar cycloadditions of organic azides to ester or benzotriazolylcarbonyl activated acetylenic carboxamides under mild conditions.

Results and Discussion

Preparation of acetylenic carboxamides and preliminary experiments on triazole formation. A literature search for the preparation of acetylenic carboxamides revealed few reports: (i) reaction between methyl propiolate and an amine (conducted at -30 °C, for the desired 1,2-addition to predominate over 1,4-addition);13a,b (ii) Ritter reaction between cyanoacetylene and an appropriate carbenium ion generated in the presence of concentrated sulfuric acid;13c (iii) reaction of amines with either the N-hydroxysuccinimide ester or a mixed anhydride of propiolic acid.13d Methods (i) and (ii) give low to moderate yields or mixtures with products resulting from 1,4-addition, while method (iii) usually affords a 1:1 mixture of the required acetylenic amide with an amide formed from ethyl chloroformate.13d A general and mild procedure for the preparation of primary, secondary and tertiary amides from carboxylic acids via N-acylbenzotriazoles was recently reported by our group,13e but this procedure was not previously tested with acetylenic acids. Since few methods for the preparation of acetylenic amides are available in the literature, we explored the N-acylbenzotriazole route. Interestingly, reaction of phenylpropiolic acid (2) with 1-(methylsulfonyl)-1H-benzotriazole (3)13e furnished the N-propioloylbenzotriazole 4 in 50% yield. Reaction of 4 with morpholine (5) in THF at 25 °C for 12 h gave the corresponding acetylenic amide 6 in 53% yield. Under similar conditions, reaction of 4 with 1,4-diaminocyclohexane (7) gave the acetylenic diamide 8 in 65% yield (Scheme 1).
Our objective was to prepare the triazoles under mild conditions. Reactions of acetylenic amides 6, 8 with benzyl azide (1a) were attempted in refluxing acetone for 12 to 24 h, but no triazole formation could be detected by TLC or 1H NMR analyses and the starting materials were recovered (Scheme 1). Literature reports support the need for high temperatures (>100 °C) to effect the 1,3-dipolar cycloadditions of acetylenic amides and organic azides.7e

Scheme 1

The presence of electron-withdrawing substituents on the acetylene facilitates the 1,3-dipolar cycloaddition with organic azides owing to the mechanism and energies of the HOMO-LUMO interactions involved in forming the 1,2,3-triazole ring.5d,14 Alkoxycarbonyl has been the most widely used alkyne substituent, and 1,3-dipolar cycloadditions of acetylenic esters and organic azides proceed under mild conditions (50−60 °C) to give the corresponding triazoles in good to excellent yields.3a The failure of the 1,3-dipolar cycloaddition of benzyl azide (1a) with acetylenic carboxamides 6, 8 in refluxing acetone and the requirement of higher reaction temperatures in the reported examples7e,8 indicate that the degree of activation provided by the carbamoyl group is much lower than that available from an alkoxycarbonyl substituent. It was concluded that low-temperature triazole formation cannot be realized by the presence of the carbamoyl group alone on the acetylene. Therefore, we decided to incorporate an ester group into the acetylenic amides to study their 1,3-dipolar cycloaddition with organic azides under mild conditions.

Preparation of mono- and bis-carbamoylpropiolates. Treatment of ethyl propiolate (9) with n-BuLi at -78 °C and reaction of the resulting 3-lithiopropiolate 10 with phenyl isocyanate or p-tolyl isocyanate gave the carboxamido-substituted propiolates 11a and 11b in 76 and 64% yields, respectively.15 Using this procedure, we also prepared bis-carbamoylpropiolates 12a−d. Thus, reaction of the carbanion 10 with 1,4-phenylene diisocyanate, tolylene

Preparation of mono- and bis-triazoles. The concept of increasing the activation of acetylenic amides by further substitution with an ester functionality was realized when 1,3-dipolar cycloadditions of benzyl azide (1a) with carbamoyl-substituted propiolates 11a or 11b proceeded smoothly in refluxing acetone to give the N-substituted 1,2,3-triazoles 13a and 13b as the major regioisomers in 83 and 84% yields, respectively. The successful preparation of triazoles 13a,b is the first example of low-temperature 1,3-dipolar cycloaddition of organic azides to ester-activated acetylenic amides under thermal conditions (Scheme 3) (Table 1).

Microwave heating has emerged as a useful technique to promote a variety of chemical reactions.16 We recently reported our preliminary results on microwave-induced 1,3-dipolar cycloadditions of acetylenic carboxamides and organic azides under mild conditions.17
Herein, we report the extension of this method to synthesize substituted mono-triazoles by the 1,3-dipolar cycloaddition of mono-azides with mono-acetylenes, and bis-triazoles from mono-azides and diacetylenes or di-azides and mono-acetylenes, under microwave irradiation. Thus, microwave reaction of benzyl azide (1a) with bis-carbamoylpropiolate 12a at 100 °C and 120 W irradiation power for 1 h gave a regioisomeric mixture of bis-triazoles. The regioisomers were separated and characterized as bis-triazoles 14a′ and 14a′′ in 42 and 37% yields, respectively. Similar reactions of benzyl azide (1a) with bis-carbamoylpropiolates 12b, 12c or 12d gave the corresponding bis-triazoles 14b, 14c or 14d as the major regioisomers in 41, 37 or 73% yields, respectively (Scheme 3) (Table 1). The corresponding minor isomers were present in the mixtures but were not isolated pure.

For identity of R, see Table 1.

Under similar conditions, microwave reactions of 3-(azidomethyl)-3-methyloxetane (15) with carbamoylpropiolates 11a or 11b at 55 °C and 120 W microwave irradiation power gave the triazoles 16a or 16b as the major regioisomers in 72 and 53% yields, respectively. Also, the reactions of 15 with bis-carbamoylpropiolates 12a or 12d furnished the bis-triazoles 17a and 17b as the major isomers in 43 and 42% yields, respectively (Scheme 4) (Table 1). Structures of all the isolated mono- and bis-triazoles were confirmed by NMR (1H and 13C) and elemental analysis or high-resolution mass spectrometry.

Scheme 4. For identity of R, see Table 1. a Isolated yields.

Next, we explored the preparation of bis-triazoles by 1,3-dipolar cycloadditions of di-azides and mono-acetylenes. Microwave reactions of di-azide 1b with carbamoylpropiolates 11a or 11b at 120 W irradiation power and 55 °C for 30 min resulted in 1,3-dipolar cycloaddition at only one of the azido moieties to give the regioisomeric mixtures of azido-triazoles that were isolated as 18a and 18a′ in 60 and 12% yields or 18b and 18b′ in 54 and 18% yields, respectively (Scheme 5) (Table 1). Triazole formation at the second azido moiety in di-azide 1b could not be induced even after repeated trials with different reaction conditions. Increasing the temperature or irradiation power to higher levels resulted in charring and decomposition. Interestingly, use of a new model microwave synthesizer equipped with simultaneous irradiation and an external air-cooling system proved beneficial. The reaction of 1,4-bis(azidomethyl)benzene (1b) with 2 equiv of ethyl 4-anilino-4-oxo-2-butynoate (11a) in toluene under continuous microwave irradiation (120 W) with simultaneous cooling at 75 °C for 1 h furnished a mixture of regioisomeric bis-triazoles; the major regioisomer 19a was isolated by column chromatography in pure form in 54% yield. Similarly, bis-triazole 19b was isolated in 65% yield from the reaction of di-azide 1b and ethyl 4-oxo-4-(4-toluidino)-2-butynoate (11b) by the simultaneous cooling and irradiation procedure (Scheme 5) (Table 1). Thus, using microwave irradiation we have developed new methods for the preparation of substituted bis-triazoles by the 1,3-dipolar cycloadditions of mono-azides and bis-acetylenes or di-azides and mono-acetylenes.

Scheme 5. For identity of R, see Table 1.
The structure of 13a was confirmed by X-ray crystallography (Figure 1), which unambiguously showed that this is the 5-(phenylcarbamoyl) regioisomer. In the solid state the ester and amide groups are approximately coplanar with the triazole ring [angles between mean planes = 7.7(2)° and 10.5(2)°, respectively] and are held in place by an intramolecular hydrogen bond between the amide hydrogen and the ester carbonyl oxygen. In contrast, the plane of the phenyl ring of the benzyl substituent is approximately orthogonal to the triazole ring [79.1(2)°]. The benzylic protons adjacent to N-1 of the triazole ring in regioisomer 13a resonate at 6.2 ppm as a singlet. The 1H NMR spectra of regioisomers 13b, 18a and 18b also display the benzylic protons as singlets at 6.2 ppm, and 13b, 18a and 18b were therefore assigned the 5-(phenylcarbamoyl) structures. In the 1H NMR spectra of azido-triazoles 18a′ and 18b′, the benzylic proton singlet resonated at 5.8 ppm, and regioisomers 18a′ and 18b′ were assigned the 4-(phenylcarbamoyl) structure. Two separate singlets at 6.2 and 5.8 ppm for the benzylic protons in the bis-triazoles 14a′ and 19a suggest the unsymmetrical structures displayed, with one triazole ring having a 5-(phenylcarbamoyl) and the other a 4-(phenylcarbamoyl) substituent. Similarly, a singlet at 6.2 ppm for four benzylic protons indicated a symmetrical structure with both triazole rings having a 5-(phenylcarbamoyl) substituent in bis-triazoles 14b, 14c, 14d and 19b. The methylene protons adjacent to N-1 of the triazole ring in 16a and 16b resonated at 5.2 ppm as a singlet, and these regioisomers were assigned the 5-(phenylcarbamoyl) structure. Similarly, a singlet for four methylene protons at 5.2 ppm in bis-triazoles 17a and 17b suggested a symmetrical 5-(phenylcarbamoyl) structure.

1,3-Dipolar cycloaddition of benzotriazolylcarbonyl activated acetylenes and organic azides. The benzotriazolyl group has been used as a synthetic auxiliary in many chemical transformations.18 It was of interest to see whether the presence of a benzotriazolylcarbonyl group on the acetylene provides the required activation for 1,3-dipolar cycloaddition with an organic azide. Indeed, the thermal reaction of benzyl azide (1a) with N-propioloylbenzotriazole 4 in refluxing acetone for 18 h gave the benzotriazolylcarbonyl-substituted 1,2,3-triazole 20 in 32% yield. Alternatively, the microwave reaction of benzyl azide (1a) with 4 at 120 W and 100 °C for 1 h provided 20 in an improved yield of 75%. Further treatment of 20 with amines13e such as morpholine, p-chloroaniline, phenethylamine or benzylamine in dichloromethane at 25 °C for 12 h replaced the benzotriazolyl group to give the corresponding C-carbamoyl 1,2,3-triazoles 21a−d in 54-91% yields. This strategy demonstrates the utility of the benzotriazolylcarbonyl group as an activating group for the 1,3-dipolar cycloaddition of azides with alkynes, with subsequent displacement of the benzotriazolyl group by the amine moiety to form the corresponding C-carbamoyl triazoles under mild conditions (Scheme 6).

Conclusions

In summary, we have introduced a convenient and general method for the preparation of substituted C-carbamoyl mono- and bis-triazoles by the 1,3-dipolar cycloaddition of a variety of organic azides with ester or benzotriazolylcarbonyl activated acetylenic amides under thermal or microwave reaction conditions.
Experimental Section

General Procedures. Melting points are uncorrected. All of the reactions under microwave irradiation were conducted in heavy-walled Pyrex tubes sealed with aluminum crimp caps fitted with a silicon septum. Microwave heating was carried out with a single-mode cavity Discover Microwave Synthesizer (CEM Corporation, NC, USA), producing continuous irradiation at 2455 MHz and equipped with a simultaneous external air-cooling system. 1H NMR (300 MHz) and 13C NMR (75 MHz) spectra were recorded in CDCl3 (with TMS for 1H and chloroform-d for 13C as the internal reference) unless specified otherwise.

General procedure for triazole formation under thermal conditions

Substituted acetylene (1 mmol) and benzyl azide (1a) (1.2 mmol) were dissolved in acetone (20 mL) and the solution was refluxed for the specified time. The solvent was removed under reduced pressure and the residue was purified by column chromatography on silica gel using hexanes/ethyl acetate (4:1) as the eluent to give the pure triazoles. Using this procedure, acetylenic amides 6, 8 failed to give the corresponding triazoles on reaction with benzyl azide (1a), while the reaction of 1a

General procedure for triazole formation under microwave irradiation

A dried heavy-walled Pyrex tube containing a small stir bar was charged with mono-acetylene (1 mmol) and mono-azide (1.2 mmol), or bis-acetylene (1 mmol) and mono-azide (2.2 mmol), or mono-acetylene (2 mmol) and di-azide (1.2 mmol). The tube containing the reaction mixture was sealed with an aluminum crimp cap fitted with a silicon septum and then exposed to microwave irradiation according to the conditions specified in Schemes 3 and 4. The build-up of pressure in the closed reaction vessel was carefully monitored and was found to be typically in the range 4−10 psi. After the irradiation, the reaction tube was cooled with high-pressure air through an inbuilt system in the instrument until the temperature had fallen below 40 °C (ca. 2 min). The crude product was purified by column chromatography on silica gel using hexanes/ethyl acetate (4:1) as the eluent to give the pure triazoles 14 and 16−20.

X-Ray Crystallography

Data were collected with a Siemens SMART CCD area detector, using graphite-monochromatized MoKα radiation (λ = 0.71073 Å). The structure was solved by direct methods using SHELXS22 and refined on F2, using all data, by full-matrix least-squares procedures using SHELXTL.23 Hydrogen atoms were included in calculated positions, with isotropic displacement parameters 1.2 times the isotropic equivalent of their carrier carbons, except for the NH hydrogen, which was found in a difference map and its position refined.
Pregnancy, Children and Inter-Relating Factors Affected by Geohelminthiasis

A life-threatening parasitic infection arising in developing countries, principally prevalent in children below 5 years and pregnant women, has led to growing interest in understanding the condition known as geohelminthiasis. Decreased cell-mediated immunity (a necessity for fetal retention) leading to a compromised immunological response is what makes pregnant women more prone to the infection, thereby increasing the risk of maternal anemia, preterm deliveries and stillbirths based on reports. An outcome of geohelminthiasis in children is its deteriorative effect on cognition. This chapter highlights the relationship between the helminthic infection and pregnant women and children, additionally focusing on other associated factors, such as poverty and hygiene, that further contribute to the decline in quality of life in developing countries.

Introduction

The general term used to describe a worm is "helminth." These invertebrates fall under two categories, namely flatworms or Platyhelminthes (flukes and tapeworms) and roundworms or Nematoda [1,2]. They survive in aquatic and terrestrial environments, either as parasites or free of a host. Of the various types, intestinal nematodes or soil-transmitted helminths (STH), also known as "geohelminths," are the most common worldwide. The World Health Organization (WHO) estimates that 1.5 billion people worldwide, constituting 24% of the world's population, are infected by STH, with wide distributions in Sub-Saharan Africa, the Americas, China and East Asia in tropical and subtropical regions [3]. The major STH infections originate from Ascaris lumbricoides (commonly called the large intestinal roundworm or the common roundworm) and Trichuris trichiura (whipworm) [4,5]. Hookworm (Ancylostomatidae) infection is another common chronic infection found in humans that contributes to STH [6]. Children are frequent victims of an STH attack, as many of them are school-aged and live in areas of extensive disease transmission, requiring treatment interventions and preventive measures [7]. Pregnant women are secondary victims of this infection, with an estimated 44 million affected globally every year [8]. Improvements in potable water services, drainage, sanitary food control, living quarters, and individual and community anti-vector action are a few measures that can be implemented for the eradication of this infectious outbreak [9]. The central focus of this chapter is to gain insight into how aspects of childhood and pregnancy are connected to geohelminthiasis, along with various other inter-relating factors, such as poverty and hygiene, that cause a drop in the quality of life in developing countries.

Immune responses to STH

Preclinical data from animals suggest that Th2 (T-helper 2) cells are triggered by cytokine release, along with immunoglobulin E (IgE) of the host immune system, aiding in the elimination of helminthic burdens [10,11]. However, innate and adaptive immunity are often found to remain markedly suppressed. This indicates that immune responses triggered by a helminthic infection could antagonize host protective responses against microbial pathogens [12,13].
Recent findings on the involvement of so-called alternatively activated macrophages suggest that these cells can also be a contributing factor in the inflammatory response on contact with a helminth [14].

Children and geohelminthiasis

School children of countries affected by this epidemic were found to exhibit the greatest incidence and severity of the outbreak. No ill effects (with respect to morbidity) were traditionally thought to be experienced by children with light infections. However, recent evidence opposes this traditional notion, with reports of light- or minimal-intensity infections causing a significant decrement in the development and growth of children [15]. How various factors affect geohelminthiasis in children is discussed below.

Nutrition

Nutrition plays a key role as a target for the alleviation of helminthic infections. Many settings in the developing world are impacted by malnutrition and helminthic infection, both as main or supplementary factors governing mortality [16]. Impaired digestion, malabsorption, diminished food consumption and poor growth rates are often noted in children who endure this helminthic incursion [17]. Recent studies also show that malnutrition is in direct proportion to the intensity of Ascaris infection [18]. Other factors governing the scale of infection include the extent of nutritional deficiency and the concurrent presence of single or multiple infections and single or multiple nutritional deficiencies [19]. Increased loss of endogenous protein, paired with disturbance of energy and mineral metabolism, are the mechanisms by which an intestinal nematode reduces food intake by the host. Better nutrition, through consumption of a diet rich in metabolizable proteins, can improve the rate of adult worm rejection [20]. Improvement of the nutritional status of school children would therefore be an essential remedy for disease alleviation [21,22].

Environment

The environmental variables contributing to the risk of this parasitic outbreak cannot be ignored, as the correlation between this aspect and the disease condition is highly prevalent. Recent studies of various schools report the presence of other influential environmental factors governing the infection, such as inadequate water supply, the need for regular water/sanitation maintenance regimes and overcrowding in classrooms, which can be taken into consideration for disease management [23].

Anti-helminthic treatment in children

Since the primary mode of therapy includes the use of anti-helminthics, development of resistance due to their administration is a crucial factor governing geohelminthiasis. The known variables that contribute to anti-helminthic resistance are medication frequency, refugia (the percentage of the parasite population not exposed to drugs) and the possibility of underdosing [24]. Prior anti-helminthic treatment was also observed as a causative factor leading to intestinal obstruction in children [25]. Although specific IgE antibodies are believed to participate in protection against helminthic infection, the polyclonal stimulation of IgE caused by helminthic parasites could be the sole reason for re-infection [26]. In a follow-up investigation of growth-retarded children in whom anti-helminthic therapy was discontinued after successful alleviation, the extent of re-infection was found to increase dramatically, which could compromise the quality of life of those concerned [27].
Cognition

The negative influence of STH infections on cognitive processes, notably in school children, has been noted by researchers since 1900. Prolonged anemia and toxemia were factors held accountable for the substantial increase in the degree of cognitive delay with respect to the level of infection. However, clarification remains to be produced regarding the mechanism by which worms impact cognition. Certain postulates implicate malnutrition and fatigue in children troubled by the infection as contributors to diminished cognition. Reports of medication reversing this adverse effect also exist and are very much essential for effective control of the disease [28][29][30].

Pregnancy and geohelminthiasis

Helminthic infections are suggested to be extremely damaging, with detrimental effects on maternal anemia and birth outcomes in cases of pregnancy; the total global impact is estimated at 44 million pregnancies [31,32].

Probable mechanism of susceptibility to STH in pregnancy

A characteristic feature of pregnancy is the successful retention of the fetus due to hormonal, dietary and immunological changes occurring during the period [33]. This is a unique illustration of how the body adjusts a potentially destructive immune response during pregnancy [34]. Therefore, studies have clearly defined the characteristic of pregnancy as immune modulation and not immune suppression. In other words, an alteration to the immune system contributes to differential responses not merely on the basis of the microorganism but also on the basis of the stage of pregnancy [35]. Although the involvement of periparturient immunosuppression remains unclear, one of the proposed mechanisms is the avoidance of particular processes of host immune defense by the parasitic helminth [36,37]. The resemblance between the immune reactions to helminths and those of pregnancy may be a sign that tolerance is invoked by analogous mechanisms (i.e., type 2 responses). Another suggestion has been that helminths may have undergone self-adaptation in order to combat immune responses from the mother by utilizing mechanisms similar to those used by a human fetus [38]. These could be some among the many reasons why pregnant women's susceptibility to helminthic attack is widespread. The WHO reports that far more than half of the pregnant women in emerging economies have iron-deficiency anemia, which could be a result of an elevated metabolic requirement for iron during childbirth coupled with poor nutrition. This STH-related iron deficiency has been associated with an augmented mortality rate, premature birth and low birth weight during pregnancy [39,40].

Co-infection

Considering pregnancy, susceptibility to co-infections cannot be ignored, due to the immunological modulations associated with this stage. In cases of pregnancy, data indicate a higher prevalence of Trichuris trichiura, followed by Ascaris lumbricoides infection, with single infections found at a higher percentage than co-infections [41]. Among co-infections associated with pregnancy-related STH, the malarial parasite Plasmodium falciparum was found to co-exist with hookworms more than with roundworms and whipworms [42,43].

Geophagy (soil eating)

Another causal component of STH disease is geophagy, which is practiced among some African females.
While the exact reason remains a mystery, beliefs such as curing heartburn and alleviating morning sickness persist [44]. Adequate data indicate that geophagy can be associated with an enhanced risk of anemia and reduced hemoglobin levels [45]. Geophagy in lactating mothers resulted in re-infection, and immediate interventions were therefore advised to tackle disease transmission [46].

Maternal anemia

The greater the severity of hookworm infestation, the greater was the percentage of blood loss or anemia observed in pregnant women in a survey from an endemic area [47]. During pregnancy, the hookworm in particular was considered to be the source of mild associated anemia, while the other STHs were involved in mild deficiencies of iron [48,49]. A connection between co-infection and anemia reported in the latest studies indicates that the latter is not a companion of helminthic attack alone [50]. Since there is an additional relationship between anemia and birth outcomes (increased risk of preterm birth or low birth weight), a helminthic outbreak could also be affiliated with the latter during pregnancy [51]. All the above findings indicate that the anemia associated with STH can be debilitating in pregnant women.

Birth outcomes

One reason for the problem of low birth weight was exposure to a hookworm attack resulting in intrauterine growth retardation, especially in HIV-infected subjects [52]. A lower prevalence of low birth weight was the end result of periodic anti-helminthics and weekly iron-folic acid supplements before pregnancy [53]. Another adverse birth outcome experienced was premature birth. Similar to the case of maternal anemia, the co-existence of other infections with STH brought about greater negative birth outcomes.

Present beneficial hypotheses

Although helminthic infections are difficult for children and for pregnant women, the asymptomatic stage of a helminthic infection was found to act as a gatekeeper against immunological syndromes [54]. An unusual hygiene hypothesis for inflammatory bowel disease (characterized by chronic gastrointestinal inflammation) suggests a lack of exposure to intestinal helminths as an important environmental factor contributing to the development of such illnesses [55,56]. The data of one study raised the possibility of predisposition to Crohn's disease (an idiopathic inflammatory bowel disease, most often involving the ileum, colon and, in certain cases, the esophagus) due to lack of exposure to helminthic parasites [57,58]. A similar small cross-sectional study showed the prevalence of STH to have beneficial effects in patients with type 2 (insulin-resistant) diabetes. However, this may prove damaging in areas where helminthic treatment options are a must to curb disease morbidity [59].

Other inter-relating factors

All the variables discussed above in association with children and pregnant women are also dependent on geographical circumstances, poverty and bad hygiene. The STH assault is restricted to rural regions of the tropics, especially coastal regions, where temperature, humidity and soil type are appropriate for development and growth. Exposure to larvae and eggs in farming areas, where individuals expose their skin to the hot and humid soil, is what aids disease transmission. Sandy soils provide better growth conditions for these worms when compared to clayey soils.
An important inverse link between socioeconomic status and the incidence or severity of helminthic disease can also contribute towards the spread of STH. It was found that the prevalence of disease was lower in higher-income groups. Bad sanitation or hygiene due to lack of income is also an associated factor leading to an attack of STH [60][61][62][63].

Conclusion

Geohelminthiasis or soil-transmitted helminthiasis is recognized as a life-threatening parasitic outbreak in developing nations, predominantly in children under 5 years of age and pregnant women, and has resulted in increased concern. Nutrition, environment, resistance to treatment and cognition were the associative parameters found in children, whereas in the condition of pregnancy the existence of co-infections, geophagy, maternal anemia and birth outcomes were found to be the variables inter-relating with STH. The avoidance of particular processes of host immune defense, and self-adaptation to combat immune responses from the mother by utilizing mechanisms similar to those used by a human fetus, were the proposed mechanisms by which pregnant women are more prone to the attack. All the associative parameters discussed above were found to increase disease burden. Tackling these factors is therefore a must for achieving an improved quality of life. Although recent or upcoming beneficial hypotheses could play an important role in the eradication of associated diseases, the same benefit may count for little when poverty is involved. Improvements in hygiene and improved access to anti-helminthic drugs are some of the factors that could establish better alleviation of the disease attack. Further research and proper awareness among groups where the disease is endemic are, however, still a requisite for devising strategies to fight against the disease.
A comparative scanning electron microscope investigation of cleanliness of root canals using hand K-Flexofiles, rotary RaCe and K3 instruments.

INTRODUCTION The most important aims of root canal preparation are the removal of vital pulp tissue, remaining necrotic debris and infected dentin, eliminating the bulk of bacteria present in the root canal system. The aim of this study was to compare the cleaning efficacy of hand K-Flexofiles and rotary RaCe and K3 instruments in root canal preparation. MATERIALS AND METHODS A total of 60 single-rooted teeth with a maximum curvature of <20º were selected and divided into three groups of 20 teeth each. Canals were prepared with K-Flexofiles, K3 and RaCe instruments using the crown-down preparation technique, up to size #30. After instrumentation, the root canals were flushed with 5 mL of 2.5% NaOCl solution. The amount of debris and smear layer was quantified on the basis of the Hulsmann method using a scanning electron microscope. The data were statistically analyzed with the one-way ANOVA test at a significance level of P<0.05. RESULTS None of the three groups achieved completely debrided root canals. In general, K-Flexofiles were able to achieve cleaner canals compared to K3 and RaCe instruments (P<0.05). There were no significant differences between the three groups in smear layer removal throughout the root canal walls (P>0.05). CONCLUSION The K-Flexofile group had less remaining debris when compared to K3 and RaCe instruments.

INTRODUCTION The major objectives of root canal preparation are the elimination of residual pulp tissue, infected dentin and debris and a decrease in the number of microorganisms in the root canal system (1,2). The quality of root canal cleaning is evaluated via debris and smear layer removal. Debris contains vital and/or necrotic pulp tissue and dentinal chips that loosely attach to the root canal walls; it is usually infected (3). Debris therefore inhibits the removal of bacteria from the root canal (4). A smear layer 1-2 µm thick remains on the root canal wall after instrumentation (1,5). This layer contains dentinal particles, residual pulpal tissue and bacteria that remain after irrigation, sealing the dentinal tubules; this can inhibit the removal of bacteria from the root canal system and therefore the root canal seal (5,6). There are many conflicting reports on the cleaning ability of different hand and rotary instruments (7)(8)(9)(10)(11). The past decade has seen the development of nickel-titanium rotary instruments with advanced blade designs, developed to improve cleaning efficiency during root canal preparation. The rake angle of the cutting blade may affect the cutting and cleaning efficiency of an endodontic hand instrument. There are some clues that the flute design of rotary nickel-titanium files may be a key factor in the cleaning efficiency of these instruments. According to a recent report, instruments with sharp cutting edges seem to be superior to those having radial lands in cleaning the root canal (12). Positive rake angles will cut more efficiently than neutral or negative rake angles, which scrape the inside of the root canal (13). Variable helix angles and pitch are another feature that can improve the removal of the cutting debris formed by instrumentation (14). Once the instrument has cut into dentin, debris needs to evacuate the canal space. Compression occurs when debris is caught between the canal wall and instrument flutes.
If the instrument becomes clogged, there will not be any space left for debris to move out of the root canal system. Instruments with a consistent helix angle and pitch may allow debris to accumulate, particularly in the coronal part of the file, blocking the escape route of cutting debris (13). One of the NiTi rotary files is the RaCe file (short for Reamers with Alternating Cutting Edges). This file possesses an alternating spiral and has a cutting shank of 8 mm, giving variable helical angles and a variable pitch. A recently produced NiTi rotary file is the K3 file. It has a modified radial land with a slightly positive rake angle. The helix flute angle increases from the tip to the handle. Additionally, it has a variable pitch throughout the cutting shank. The manufacturer claims that this design will effectively cut the dentin surface and that dentinal debris can easily be irrigated away (13). However, there are not sufficient data regarding the ability of these instruments to remove smear layer and debris. The aim of this investigation was to compare the cleaning efficacy after preparation with rotary NiTi K3 and RaCe instruments and hand K-Flexofiles.

MATERIALS AND METHODS A total of 60 single-rooted extracted human teeth with closed apices were selected for this in vitro study. Standard buccolingual and mesiodistal radiographs were taken for the purpose of appropriate selection of the studied samples. Teeth with abnormal apices or calcified canals were excluded. Root curvature was determined by using the Schneider method, and teeth with <20º curvatures were chosen (15). The teeth were decoronated with a diamond disk (D&Z, Berlin, Germany) and 15 mm of root structure was left. Working length was determined as 1 mm less than the length of the initial file (size #15) at the apical foramen. The teeth were then randomly divided into 3 groups as follows (each containing 20 teeth):

RaCe (FKG dentaire, La Chaux-de-Fonds, Switzerland): these instruments were set to a rotational speed of 500 rpm with an 8:1 reduction handpiece powered by a torque-limited electric motor (Novage, Konstanz, Germany). Instrumentation was completed using the crown-down technique, according to the manufacturer's instructions (16). All canals were sequentially prepared to the apical size #30. The preparation sequence was:
1) 0.1 tapered size #40 instruments were used to one-third of the working length
2) 0.08 tapered size #35 instruments were used to one-half of the working length
3) 0.06 tapered size #25 instruments were used to two-thirds of the working length
4) 0.04 tapered size #25 instruments were used to full working length
5) 0.02 tapered size #25 instruments were used to full working length
6) 0.02 tapered size #30 instruments were used to full working length

K3 (SybronEndo, CA, USA): these instruments were set to a rotational speed of 250 rpm with an 8:1 reduction handpiece powered by a torque-limited electric motor (TCM 3000 Novage, Konstanz, Germany). Instrumentation was completed using the crown-down technique (17).
All canals were sequentially prepared to the apical size #30 according to the manufacturer's instructions as follows:
1) 0.1 tapered size #25 instruments were used to one-third of the working length
2) 0.08 tapered size #25 instruments were used to one-half of the working length
3) 0.04 tapered size #40 instruments were used to two-thirds of the working length
4) 0.04 tapered size #35 instruments were used to near the working length
5) 0.04 tapered size #30 instruments were used to full working length

K-Flex Files (Dentsply, Maillefer, Ballaigues, Switzerland): hand instrumentation with these instruments was completed using the crown-down technique. All canals were sequentially prepared to the apical size #30.
1) Sequential use of file #45 in coronal parts to #15 at full working length.
2) Sequential use of file #50 in coronal parts to #20 at full working length.
3) Sequential use of file #55 in coronal parts to #25 at full working length.
4) Sequential use of file #60 in coronal parts to #30 at full working length.

During instrumentation, the root canals were flushed with 5 mL of 2.5% NaOCl, and after instrumentation 5 mL of normal saline was used with a plastic syringe (Yazd Syringe, Yazd, Iran) and a 27-gauge needle (Iran Needle, Iran) as a final rinse in all groups. After the final rinse with normal saline, two longitudinal grooves were prepared using a No. 1 diamond disk on the buccal and lingual aspects of the teeth. The teeth were separated into two halves with a plastic instrument, and both halves were prepared for SEM evaluation and examined under the Leo-440i SEM (Leo Electron Microscopy, Cambridge, UK) at ×500 for debris and ×1500 for smear layer evaluation. The cleanliness of each root canal was evaluated in three areas (apical, middle and coronal thirds of the roots) by means of a numerical evaluation scale (3). Canal cleanliness was evaluated by blind observation. The following scheme was used (8):

Debris:
Score 1: clean canal wall, few debris particles
Score 2: few small agglomerations
Score 3: many agglomerations, less than 50% of the canal wall covered
Score 4: more than 50% of the canal wall covered
Score 5: complete or nearly complete covering of the canal wall by debris

Smear layer:
Score 1: no smear layer, orifices of dentinal tubules patent
Score 2: small amount of smear layer, some open dentinal tubules
Score 3: homogenous smear layer along almost the entire canal wall, only very few open dentinal tubules
Score 4: the entire root canal wall covered with a homogenous smear layer, no open dentinal tubules
Score 5: a thick, homogenous smear layer covering the entire root canal wall

Scores 1 and 2 were considered suitable scores (18). The data were statistically analyzed with the one-way ANOVA test at a significance level of P<0.05.

RESULTS When comparing the amount of debris in the three parts of the root canal (coronal, middle and apical), significant differences were only found in the coronal parts of the root canal walls; K-Flexofiles resulted in less debris compared to RaCe and K3 (P<0.05) (Figures 1 and 2). Smear layer evaluation of the three root areas interestingly found a significant difference in the middle part of the canals, in which K-Flexofiles resulted in less smear layer compared to RaCe and K3 (P<0.05) (Figures 4 and 5).

DISCUSSION The smear layer created by root canal preparation has a thickness of 1-2 µm (1,5). It is composed of mostly inorganic materials and is not found on uninstrumented areas (19).
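As a supplementary note on the statistics, the group comparison described in the Methods (one-way ANOVA on the cleanliness scores, significance at P<0.05) can be sketched as follows. This is a hedged illustration only: the score vectors are hypothetical stand-ins, not the study's data, and ordinal scores of this kind are sometimes analyzed instead with a non-parametric test such as Kruskal-Wallis.

```python
from scipy import stats

# Hypothetical debris scores (1 = clean ... 5 = fully covered) for one
# canal third per instrument group; NOT the actual data from this study.
k_flexofile = [1, 2, 2, 1, 3, 2, 2, 1]
k3          = [2, 3, 3, 2, 4, 3, 2, 3]
race        = [3, 2, 4, 3, 3, 2, 3, 4]

# One-way ANOVA across the three instrument groups, as in the Methods.
f_stat, p_value = stats.f_oneway(k_flexofile, k3, race)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference in mean debris score between groups")
```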
Although there is much controversy about the effectiveness of smear layer removal in endodontic therapy, its removal seems desirable because it will increase dentin permeability, allowing better disinfection of deeper layers of the infected root canal dentin (13). Debris is defined as dentin chips and residual vital or necrotic pulp tissue attached to the root canal walls, which is usually infected with bacteria (3). Thus, debris might prevent the efficient removal of bacteria from the root canal system. Also, debris may occupy part of the root canal space, which might also prevent complete obturation of the root canal (4). In this study, the cutting and cleaning efficacy of three instrumentation methods was examined on the basis of a separate numerical evaluation scheme for debris and smear layer (Hulsmann method) by means of SEM evaluation in the coronal, middle and apical portions of the canals (3). In previous studies, different magnifications ranging from ×15 to ×2500 were used (20,22). At low magnification large amounts of debris can easily be seen, but details such as remnants of the smear layer or the identification of dentinal tubules need to be observed at higher magnifications. A disadvantage of using a higher magnification is the small size of the area of evaluation, potentially leading to misinterpretation (23). Using the data of a pilot study in the present investigation, SEM evaluation was performed at ×500 and ×1500 magnifications for analysis of debris and smear layer (8). To prevent discrepancy and bias in the results, a key consideration is the consistency of the examiner's evaluation and the blindness of the examiner to the various groups. The samples in the present study were coded and randomly examined under SEM, and the clinicians had no knowledge of the codes or the methods employed in the preparation procedures. In this study, partially uninstrumented areas with remaining debris and smear layer were found in all canal sections, concurring with other studies (8). In general, the use of K-Flexofiles resulted in significantly less remnant debris compared to canal instrumentation with rotary K3 and RaCe instruments; these results corroborate a previous report that showed hand K-Flexofiles to be superior in cleaning efficacy (9). The use of K-Flexofiles showed significantly less smear layer in the middle part of the canal compared to RaCe and K3. This finding was not in agreement with previous studies (9). Interestingly, within the apical third no statistical difference was observed between the instrument groups. The clinical significance of this finding may have greater weight in endodontics, because the microorganisms which remain in the apical portion of the root canal are considered to be the main cause of treatment failure (24). One of the reasons that may explain why hand K-Flexofiles show lower debris and smear layer scores than rotary RaCe and K3 instruments is the greater stiffness of K-Flexofiles; the greater force against the root canal wall may result in more efficient cleaning. In contrast, NiTi instruments, used only in a rotary motion and without lingual and buccal pressure, tend to only partially remove tooth structure (23).

CONCLUSION K-Flexofiles resulted in less remnant debris within the root canal system compared to K3 and RaCe instruments. There were no significant differences between the three groups with regard to smear layer removal in all three portions of the canal system.
2018-04-03T01:19:49.649Z
2008-10-01T00:00:00.000
{ "year": 2008, "sha1": "033964c9f34f4dae48d49bcdf5ba7196840d192e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "033964c9f34f4dae48d49bcdf5ba7196840d192e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
10585508
pes2o/s2orc
v3-fos-license
Synergistic Proinflammatory Responses by IL-17A and Toll-Like Receptor 3 in Human Airway Epithelial Cells

Viral respiratory infections activate the innate immune response in the airway epithelium through Toll-like receptors (TLRs) and induce airway inflammation, which causes acute exacerbation of asthma. Although increases in IL-17A expression were observed in the airway of severe asthma patients, the interaction between IL-17A and TLR activation in airway epithelium remains poorly understood. In this study, we demonstrated that IL-17A and polyI:C, the ligand of TLR3, synergistically induced the expression of proinflammatory cytokines and chemokines (G-CSF, IL-8, CXCL1, CXCL5, IL-1F9), but not type I interferon (IFN-α1, -β) in primary culture of normal human bronchial epithelial cells. Synergistic induction after co-stimulation with IL-17A and polyI:C was observed from 2 to 24 hours after stimulation. Treatment with cycloheximide or actinomycin D had no effect, suggesting that the synergistic induction occurred without de novo protein synthesis or mRNA stabilization. Inhibition of the TLR3, TLR/TIR-domain-containing adaptor-inducing interferon β (TRIF), NF-κB, and IRF3 pathways decreased the polyI:C- and IL-17A/polyI:C-induced G-CSF and IL-8 mRNA expression. Comparing the levels of mRNA induction between co-treatment with IL-17A/polyI:C and treatment with polyI:C alone, blocking of the NF-κB pathway significantly attenuated the observed synergism. In western blotting analysis, activation of both NF-κB and IRF3 was observed in treatment with polyI:C and co-treatment with IL-17A/polyI:C; moreover, co-treatment with IL-17A/polyI:C augmented IκB-α phosphorylation as compared to polyI:C treatment alone. Collectively, these findings indicate that IL-17A and TLR3 activation cooperate to induce proinflammatory responses in the airway epithelium via TLR3/TRIF-mediated NF-κB/IRF3 activation, and that enhanced activation of the NF-κB pathway plays an essential role in synergistic induction after co-treatment with IL-17A and polyI:C in vitro.

Introduction

Bronchial asthma is a chronic inflammatory disease of the airway involving many cells and cellular elements [1][2][3]. Airway inflammation in asthma is associated with hyperresponsiveness and reversible airflow obstruction, which leads to clinical manifestations such as wheezing, chest tightness, breathlessness, and coughing. Current asthma practice guidelines emphasize the importance of inhaled corticosteroids (ICSs) as an anti-inflammatory therapy, which contributes substantially to improving the quality of life and disease control in asthma patients [4,5]. Nevertheless, even if anti-inflammatory therapy is administered, persistent airway inflammation cannot always be controlled, leading to acute exacerbation of asthma symptoms. Viral respiratory infections have been demonstrated to be one of the most common causes of acute exacerbation in asthma patients [3,4]. Airway epithelial cells play an important role during viral infections as a first line of defense in the lung [6]. Viral detection by intracellular receptors in the airway such as Toll-like receptors (TLR) 3, 7, or the RIG-I-like receptor (RLR), collectively called pattern recognition receptors, elicits a strong immune response [7,8]. Polyinosinic:polycytidylic acid (polyI:C), which is a synthetic double-stranded RNA viral mimic and a ligand of TLR3, causes severe inflammation in the lung in mouse models [9].
We have recently shown that polyI:C treatment strongly induced epithelial cell-derived cytokines and anti-microbial peptides, including IL-17C, colony-stimulating factor (CSF) 3, human β-defensin (hBD) 2, and S100A12 in normal human bronchial epithelial (NHBE) cells through the TLR/TIR-domain-containing adaptor-inducing interferon-β (TRIF)/nuclear factor (NF)-κB signaling pathway [10]. PolyI:C also elicited strong inflammatory responses inducing proinflammatory cytokines, chemokines, and metalloproteases in small airway epithelial cells [11]. These findings suggest that an excessive inflammatory response in the airway epithelium during a viral infection is closely related to the exacerbation of asthma; however, the precise molecular mechanisms have not been fully elucidated. The proinflammatory cytokine IL-17A is mainly produced by Th17 and γδT cells and is part of the "IL-17 family" together with five other members (IL-17B-F). IL-17A is well known for its protective properties during bacterial infections and its involvement in autoimmune diseases [12]. IL-17A initiates the innate host defenses and repair responses that include the induction of proinflammatory cytokines and chemokines from epithelial cells, fibroblasts, endothelial cells, chondrocytes, and adipocytes, by binding to the receptors IL-17RA and IL-17RC [13][14][15]. IL-17A induces proinflammatory cytokines, chemokines (e.g., CXCL-1, -2, -3, -5, -6, CXCL8/IL-8, CCL20, and IL-19), and hBD-2 in airway epithelial cells [13][14][15][16][17], which enhance inflammatory responses. We have previously shown that IL-17A or Th2 cytokines also enhanced mucin (MUC) 5AC and MUC5B induction in NHBE cells [18][19][20], which causes the overproduction of mucus in chronic airway disorders such as chronic obstructive pulmonary disease (COPD) and asthma. Recently, an increased level of IL-17A in sputum and serum has been reported in patients with severe asthma [21][22][23], which is associated with neutrophilic airway inflammation [2,21]. Approximately half of the mild-to-moderate asthma patients have neutrophilic airway inflammation, a disease phenotype that responds poorly to inhaled corticosteroids [24]. These findings suggest that the existence of IL-17A in the airway increases the severity of asthma and is involved in the pathophysiology of exacerbation of asthma symptoms by enhancing the inflammatory response. In viral-induced exacerbation of asthma, however, the role of IL-17A and the interplay between IL-17A and TLR3 activation have not been fully elucidated. In this study, we examined the molecular basis of the interaction between TLR3-mediated innate immune responses and intracellular signaling of IL-17A in airway epithelial cells. We showed that IL-17A synergistically enhances polyI:C-induced expression of proinflammatory cytokines and chemokines (IL-8, CSF3, CXCL1, CXCL5, and IL-1F9) but not type I interferon (IFN-α1, -β), in NHBE cells and BEAS-2B, which is a bronchial epithelial cell line. The subsequent mechanistic study reveals a crucial role for the transcriptional factors NF-κB and interferon regulatory factor 3 (IRF3) in IL-17A/polyI:C-provoked synergistic induction in airway epithelial cells.

Culture conditions

Primary NHBE cells were purchased from Lonza (Catalog No. CC-2540; Basel, Switzerland). NHBE cells were seeded in 6-well plates at 1.8 × 10⁴ cells/cm² in commercially available bronchial epithelial growth medium (BEGM, Lonza).
The BEAS-2B (ATCC® CRL-9609™) cell line was obtained from American Type Culture Collection (ATCC) through Summit Pharmaceuticals International Corporation (Tokyo, Japan). BEAS-2B cells were plated in 6-well plates at 0.3-1.0 × 10⁴ cells/cm² in LHC-9 serum-free medium (Gibco, Grand Island, NY). Both types of cells were incubated at 37°C in a humidified atmosphere with 5% CO₂. Submerged cells were stimulated with polyI:C and/or IL-17A. Conditioned media were collected from the cultured NHBE cells and stored at -80°C for immunoassays.

TLR ligands and cytokine treatments

PolyI:C (high molecular weight) was purchased from Imgenex (San Diego, CA). Recombinant human IL-17A was from R&D Systems (Minneapolis, MN). Concentrations of polyI:C and IL-17A used in this study were 50 μg/ml and 10 ng/ml, respectively. In our preliminary data and a previous report [10], we performed a concentration study of polyI:C treatment (0.1-200 μg/ml) in NHBE cells. Because 50 μg/ml of polyI:C potently stimulated proinflammatory cytokine mRNA expression (data not shown), we chose 50 μg/ml of polyI:C treatment in the present study. Concerning the dose of IL-17A, we have previously shown that 10 ng/ml of IL-17A induced MUC5AC mRNA expression in NHBE cells, and a slight decrease in stimulation was seen when doses higher than 20 ng/ml were used [18]. Thus, we selected 10 ng/ml IL-17A in this study.

RNA isolation and real-time RT-qPCR

Total RNA was extracted using the TRIzol reagent (Invitrogen, Carlsbad, CA) and stored at -80°C. Total RNA was quantified using a spectrophotometer (NanoDrop® ND-1000; Thermo Scientific, Chicago, IL) before reverse transcription PCR. Preparation of first-strand cDNA was performed using the ReverTra Ace® qPCR RT Master Mix (TOYOBO, Osaka, Japan) from 2 μg of total RNA. The PCR mixture consisted of 10 μl of the THUNDERBIRD® SYBR® qPCR Mix (TOYOBO), 0.3 μM of forward and reverse primers, and the cDNA samples (total volume of 20 μl). Real-time PCR analysis was performed using the 7500 FAST Detection System (Applied Biosystems, Foster City, CA) according to the manufacturer's instructions as described previously [10,18,19]. Specifically, the relative amount of mRNA was calculated from comparisons between the threshold cycle (Ct) of each sample and the Ct of the housekeeping gene β-actin or glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The results were presented as 2^-(Ct of gene of interest - Ct of β-actin or GAPDH) in arbitrary units. The list of primers used in real-time RT-qPCR analysis is described in S1 Table. A single peak on the dissociation curve was used as evidence of purity for each amplified product. There were no visible fluctuations in the Ct values of housekeeping genes from differently treated cells throughout this study (data not included).

ELISA

To determine the concentration of the granulocyte-colony stimulating factor (G-CSF) and the IL-8 protein in the conditioned media, double-sandwich ELISAs for human G-CSF and IL-8 were performed using a Quantikine® ELISA Kit (R&D Systems). Absorbance was read at 450 nm with wavelength correction at 540 nm using a microplate reader (Synergy HT, BIOTEK, Winooski, VT). All the measurements were performed in duplicate.

Inhibitor treatments

To investigate the influence of de novo protein synthesis, 5 μg/ml of cycloheximide (Calbiochem by Merck KGaA, Darmstadt, Germany) was administered together with IL-17A and/or polyI:C treatment.
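The relative-quantification formula above can be illustrated with a few lines of Python. This is a minimal sketch assuming invented Ct values; the sample names and numbers are placeholders, not measurements from this study.

```python
# Hypothetical sketch of the 2^-(dCt) relative-quantification formula used
# above. All Ct values are invented for illustration, not measured data.
samples = {
    # sample: (Ct of gene of interest, Ct of housekeeping gene, e.g. beta-actin)
    "untreated":        (28.4, 17.1),
    "IL-17A":           (27.6, 17.0),
    "polyI:C":          (24.9, 17.2),
    "IL-17A + polyI:C": (22.8, 17.1),
}

for name, (ct_goi, ct_ref) in samples.items():
    rel_expr = 2.0 ** -(ct_goi - ct_ref)   # arbitrary units
    print(f"{name:>17s}: {rel_expr:.2e}")
```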
To explore the stability of the mRNA, the cells were stimulated with polyI:C overnight (approximately 15 hours) to induce the expression of cytokines. Then, actinomycin D (1 μg/ml; SIGMA, Saint Louis, MO) was added together with IL-17A and/or polyI:C to block further mRNA synthesis, and mRNA was harvested at different time points (0.5, 2, 6 hours) after actinomycin D treatment. BAY11-7082 (InvivoGen, San Diego, CA), an IκB-α phosphorylation inhibitor, was added 1 hour before stimulation with IL-17A, polyI:C, and co-treatment of IL-17A/polyI:C to inhibit IκB-α phosphorylation. Cycloheximide, actinomycin D, and BAY11-7082 were dissolved in dimethyl sulfoxide before use.

Small-interfering RNA (siRNA) and transient transfection of BEAS-2B cells

The siRNAs for TLR3, Toll-like receptor adaptor molecule 1 (TICAM-1, also known as TRIF), IRF3, and tumor necrosis factor receptor 1 (TNFR1) were purchased from Santa Cruz Biotechnology (Dallas, TX). NF-κB p65 siRNA and a random oligomer for negative control were obtained from Ambion Biotech (Austin, TX). BEAS-2B cells were transiently transfected with siRNAs using a DharmaFECT-based transfection kit (Thermo Scientific), as described previously [10,18,19]. Briefly, BEAS-2B cells were transfected using a transfection mix that contained 1 μM of siRNA. After 24 hours of transfection, the transfection mix was replaced with fresh LHC-9 medium. Cells were harvested 72 hours post transfection for real-time qPCR (after stimulation for 24 hours).

Western blot analysis

Total protein lysates from different treatments were harvested using RIPA lysis buffer (ATTO Corporation, Tokyo, Japan) and quantified with a DC protein assay (Bio-Rad, Hercules, CA). Before loading, 20 μg of the cell lysate and 4× reducing sample buffer were mixed and heated at 95°C for 8 minutes. The proteins were separated on a Mini-PROTEAN® TGX gel (Bio-Rad) and transferred electrophoretically to PVDF membranes. The membranes were blocked with 3% bovine serum albumin (BSA) in 50 mM Tris-buffered saline (TBS) or 5% nonfat milk for 30 minutes at room temperature before incubation with each primary antibody overnight at 4°C or 2 hours at room temperature. Then, the membranes were incubated with HRP-conjugated secondary antibodies for 30 minutes at room temperature. The ECL chemiluminescence reagent was used to detect the signal bands as described previously [10], and semi-quantitative analyses using densitometry were performed using ImageJ version 1.48v (National Institutes of Health, Bethesda, MD).

Antibodies. Phospho-IκBα mouse monoclonal antibody (mAb) (Catalog No. #9246), phospho-IRF3 rabbit mAb (#4947), and IRF3 rabbit mAb (#4392) were purchased from Cell Signaling Technology (Boston, MA) and diluted 1:1000 in 3% BSA/TBS or 5% nonfat milk. The monoclonal anti-β-actin antibody was produced in mice (SIGMA; A5441) and used at a 1:3000 dilution.

Statistical analysis

Measurements are described as mean ± S.E. Experiments were carried out in duplicate and repeated in at least three independent cultures. Differences among the groups were compared using one-way or two-way analysis of variance (ANOVA) and the Tukey-Kramer multiple comparison test. The p values that were less than or equal to 0.05 were considered statistically significant.
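As a worked illustration of the statistical analysis described above (one-way ANOVA followed by a Tukey-type multiple comparison), the sketch below uses scipy and statsmodels. The expression values are invented placeholders, and statsmodels' pairwise_tukeyhsd is used here as a stand-in for the Tukey-Kramer test.

```python
# Hypothetical sketch of the statistical workflow described above: one-way
# ANOVA followed by Tukey(-Kramer) multiple comparisons. The expression
# values are invented placeholders, not the study's measurements.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = {
    "control":        [1.0, 1.2, 0.9],
    "IL-17A":         [1.8, 2.1, 1.6],
    "polyI:C":        [9.5, 8.7, 10.2],
    "IL-17A+polyI:C": [48.0, 55.5, 51.3],   # synergistic induction
}

f_stat, p = f_oneway(*data.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2e}")

values = np.concatenate([v for v in data.values()])
groups = np.concatenate([[k] * len(v) for k, v in data.items()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```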
Synergistic induction of a proinflammatory response following polyI:C and IL-17A treatment in NHBE cells

To investigate the proinflammatory responses induced by viral respiratory infection in airway epithelial cells, primary NHBE cells in submerged cultures were challenged with polyI:C, which is a synthetic double-stranded RNA analogue and a ligand of TLR3, for 24 hours. The mRNA levels of cytokines and chemokines were measured by real-time RT-qPCR. The mRNA expression of proinflammatory mediators such as G-CSF, IL-8, CXCL1, CXCL5, and IL-1F9 was increased after 24 hours of exposure to polyI:C (50 μg/ml) as compared to untreated NHBE cells (Fig 1A-1E). Notably, co-stimulation with IL-17A (10 ng/ml) and polyI:C resulted in a synergistic up-regulation of G-CSF, IL-8, CXCL1, CXCL5, and IL-1F9 mRNA expression in NHBE cells (Fig 1A-1E). In terms of antiviral gene expression, polyI:C slightly induced IFN-β mRNA expression but not IFN-α1 in NHBE cells (Fig 1F and 1G). Moreover, the addition of IL-17A to polyI:C-treated cells did not cause any additional induction of type I interferon, IFN-α1 or IFN-β (Fig 1F and 1G). Time course analyses indicated that the mRNA expression of G-CSF and IL-8 increased after stimulation with polyI:C (Fig 2A and 2B). Notably, co-treatment with IL-17A and polyI:C synergistically increased G-CSF and IL-8 mRNA expression from 2 to 24 hours after stimulation (Fig 2A and 2B). Although the IFN-β mRNA levels were induced at 2 hours after polyI:C stimulation, there was no synergistic increase in IFN-α1 or -β mRNA levels by co-treatment with IL-17A/polyI:C throughout the time course (Fig 2C and 2D). ELISAs demonstrated a similar trend for the synergistic induction of G-CSF or IL-8 proteins in NHBE cells after 24 hours of treatment (Fig 2E and 2F).

IL-17A acts synergistically with polyI:C to promote G-CSF and IL-8 induction independent of de novo protein synthesis and mRNA stabilization

Prior to performing mechanistic analyses on the synergistic induction of expression, we confirmed that the mRNA expression of G-CSF and IL-8 was also synergistically upregulated by co-treatment with IL-17A and polyI:C in BEAS-2B cells, which is an epithelial cell line from the human bronchus, as in NHBE cells (Fig 3A and 3B). To assess whether any de novo protein synthesis was involved in the IL-17A/polyI:C-provoked synergistic induction of proinflammatory cytokines, BEAS-2B cells were treated with the protein synthesis inhibitor cycloheximide in the presence of IL-17A and polyI:C. Cycloheximide had no effect on the IL-17A/polyI:C-provoked synergistic induction of G-CSF and IL-8 mRNA levels (Fig 3C and 3D). To determine whether mRNA stability was related to the synergy, BEAS-2B cells were stimulated with polyI:C overnight and treated with IL-17A and/or polyI:C in the presence of actinomycin D. The relative amount of G-CSF and IL-8 mRNA significantly decreased over time after the addition of actinomycin D, and IL-17A and/or polyI:C treatment had no effect on the persistence of these mRNAs (Fig 3E and 3F). Taken together, these results indicate that the IL-17A/polyI:C-induced synergistic proinflammatory gene expression occurred without any participation of de novo protein synthesis or mRNA stabilization.

IL-17A/polyI:C-provoked synergistic induction was mediated by the TRIF, NF-κB, and IRF3 pathways

We next evaluated the signaling pathways and transcription factors contributing to the IL-17A/polyI:C-provoked synergistic induction of G-CSF and IL-8.
We first ascertained the involvement of the TLR3-TRIF (also known as TICAM-1) axis using TLR3 and TICAM-1 siRNAs. TRIF/TICAM-1 is an adaptor that responds to the activation of TLR3. BEAS-2B cells were transiently transfected with siRNAs for TLR3 or TICAM-1.

Induction of G-CSF and IL-8 mRNA expression increased over time and was significantly higher in co-treatment with IL-17A and polyI:C than in controls or other treatments. IFN-β mRNA was significantly upregulated by polyI:C or co-treatment with IL-17A and polyI:C at 2 and 6 hours. However, IFN-β mRNA expression was not different between polyI:C treatment and co-treatment with IL-17A/polyI:C. Gray dashed lines with circles, unstimulated control; gray solid lines with squares, IL-17A; black dashed lines with triangles, polyI:C; black solid lines with diamonds, co-treatment with IL-17A and polyI:C. The concentrations of G-CSF (E) and IL-8 (F) proteins in the conditioned medium of submerged cultures treated with IL-17A and/or polyI:C for 24 hours were detected by ELISA. Synergistic increases in G-CSF and IL-8 protein levels were observed after co-treatment with IL-17A and polyI:C. Results are shown as the mean with S.E. of three independent experiments. * p < 0.01 versus co-treatment with IL-17A and polyI:C.

Submerged BEAS-2B cells were stimulated with IL-17A and/or polyI:C for 24 hours. G-CSF (A) and IL-8 (B) mRNA induction was evaluated by real-time RT-qPCR and normalized to β-actin levels. Co-treatment with IL-17A and polyI:C synergistically upregulated mRNA expression of these genes. Next, BEAS-2B cells were treated for 24 hours with IL-17A, polyI:C, and cycloheximide (CHX) (5 μg/ml). The mRNA levels of G-CSF (C) and IL-8 (D) were evaluated using real-time RT-qPCR and normalized to β-actin levels. CHX had no effect on the IL-17A/polyI:C-provoked synergistic induction. After overnight stimulation with polyI:C (50 μg/ml) in submerged cultures, BEAS-2B cells were incubated with actinomycin D (Act D, 1 μg/ml) together with IL-17A and/or polyI:C. Total RNA was harvested at different time points (0, 0.5, 2 and 6 hours) from Act D, IL-17A and/or polyI:C treated cells. Next, G-CSF (E) and IL-8 (F) mRNA levels were evaluated by RT-qPCR and normalized to GAPDH levels. There was no significant difference between each treatment in two-way ANOVA analyses. Gray dashed lines with circles, Act D only; gray solid lines with squares, Act D + IL-17A; black dashed lines with triangles, Act D + polyI:C; and black solid lines with diamonds, Act D + IL-17A + polyI:C. Results are shown as the mean with S.E. of three independent experiments. * p < 0.01, † p < 0.05 vs. co-treatment with IL-17A and polyI:C.

The TLR3 siRNA suppressed 90% of its own message (Fig 4A). At 72 hours post transfection, mRNA was harvested after 24 hours of stimulation with IL-17A and/or polyI:C. TLR3 siRNA significantly attenuated polyI:C-induced G-CSF and IL-8 expression, as well as the synergistic induction by co-treatment with IL-17A/polyI:C (Fig 4B and 4C). The TICAM-1 siRNA was effective in attenuating its own message (Fig 4D). At 2 days post transfection, cells were stimulated for 24 hours with IL-17A and/or polyI:C. The TICAM-1 siRNA clearly attenuated the polyI:C-induced G-CSF and IL-8 expression and the IL-17A/polyI:C-provoked synergistic induction of these genes (Fig 4E and 4F). These results demonstrated that TLR3-TRIF was involved in both the polyI:C-induced gene expression and the IL-17A/polyI:C-induced synergistic gene expression in airway epithelial cells.
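Two quantities summarize the siRNA experiments above: percent knockdown of the targeted message and fold induction relative to unstimulated cells. A minimal sketch of both calculations, with invented numbers chosen only to mirror the magnitudes reported above:

```python
# Hypothetical sketch: percent knockdown and fold induction from relative
# expression values (arbitrary units). All numbers are invented placeholders.
def percent_knockdown(expr_sirna, expr_neg_control):
    """Percent reduction of the target mRNA by its siRNA."""
    return 100.0 * (1.0 - expr_sirna / expr_neg_control)

def fold_induction(expr_treated, expr_untreated):
    """Fold change of a cytokine mRNA relative to unstimulated cells."""
    return expr_treated / expr_untreated

# e.g. TLR3 message with negative-control oligomer vs. TLR3 siRNA
print(f"TLR3 knockdown: {percent_knockdown(0.11, 1.0):.0f}%")      # ~90%

# e.g. G-CSF induction by IL-17A/polyI:C with and without TLR3 siRNA
print(f"fold induction (neg. ctrl):  {fold_induction(52.0, 1.0):.1f}x")
print(f"fold induction (TLR3 siRNA): {fold_induction(6.5, 1.0):.1f}x")
```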
Contribution of the TLR3-TRIF axis to the IL-17A/polyI:C-provoked synergistic induction of gene expression. Transient transfection of BEAS-2B cells was performed using a TLR3 siRNA, a TICAM-1 siRNA or a random oligomer. At 24 hours after transfection, the transfection mix containing siRNAs was replaced with fresh LHC-9 medium. At 2 days after transfection, the cells were treated for 24 hours with IL-17A and/or polyI:C. mRNA levels of G-CSF, IL-8, TLR3, and TICAM-1 were evaluated using quantitative real-time PCR. TLR3 siRNA (solid bars) attenuated its own mRNA level as compared with the random oligomer negative control (shadowed bars) (A). PolyI:C-induced G-CSF and IL-8 mRNA expression was suppressed by TLR3 siRNAs, and IL-17A/polyI:C-provoked synergistic induction was attenuated (B, C). TICAM-1 siRNA (solid bars) suppressed 80% of its own message compared with the negative control (shadowed bars) (D). PolyI:C-induced mRNA expression of G-CSF and IL-8 was suppressed by TICAM-1 siRNAs, and synergistic mRNA induction of G-CSF and IL-8 by co-treatment with IL-17A and polyI:C was significantly attenuated (E, F). All measurements were averaged from duplicate samples, and the experiment was repeated three times. * p < 0.001, siRNA vs. negative control. † p < 0.01, siRNA vs. negative control in polyI:C treatment. ‡ p < 0.01, siRNA vs. negative control in co-treatment with IL-17A and polyI:C. § p < 0.05, siRNA vs. negative control in polyI:C treatment.

Next, we examined the role of the NF-κB pathway, which is related to signaling pathways downstream of both TLR3 and IL-17A, using a p65 siRNA. The p65 siRNA attenuated 90% of its own message (Fig 5A). The IL-17A/polyI:C-induced synergistic increases in G-CSF and IL-8 mRNA levels were strongly suppressed by the p65 siRNA as compared to the negative control (Fig 5B and 5C). Comparing the induction levels between IL-17A/polyI:C co-treatment and polyI:C treatment alone, the p65 siRNA significantly decreased the amplification ratio (co-treatment/polyI:C) of both G-CSF and IL-8 mRNA expression as compared to the negative control (mean, 5.87 to 2.29 and 5.37 to 2.62, respectively; S2 Table). To confirm the impact of the inhibition of the NF-κB pathway, submerged BEAS-2B cells were pre-incubated with BAY11-7082 (5-10 μM), an IκB-α phosphorylation inhibitor, for 1 hour before stimulation, and then the cells were stimulated for 24 hours with IL-17A and/or polyI:C in the presence of BAY11-7082. BAY11-7082 treatment inhibited the IL-17A/polyI:C-provoked synergistic G-CSF and IL-8 induction in a dose-dependent manner (Fig 5D and 5E). These results indicate that the NF-κB pathway was indispensable for the IL-17A/polyI:C-provoked synergistic G-CSF and IL-8 mRNA induction in airway epithelial cells.

To further confirm the involvement of IRF3 in the IL-17A/polyI:C-induced synergistic proinflammatory response, we used an siRNA approach to knock down IRF3. As shown in Fig 6A, the IRF3 siRNA was effective in reducing IRF3 expression by 80%. The IRF3 siRNA treatment also suppressed the IL-17A/polyI:C-provoked induction of G-CSF and IL-8 (Fig 6B and 6C).

IL-17A/polyI:C-provoked synergistic induction of proinflammatory cytokines depends on the NF-κB pathway. BEAS-2B cells were transfected with siRNA for NF-κB p65 (solid bars) or a negative control (shadowed bars) for 24 hours. At 2 days after exposure to siRNA, transfected cells were treated for 24 hours with IL-17A and/or polyI:C, and then the expression of mRNA was evaluated by quantitative real-time PCR.
Treatment with p65 siRNA reduced endogenous p65 mRNA levels by about 90% as compared with the random oligomer negative control (A). PolyI:C-induced G-CSF and IL-8 mRNA expression was suppressed by p65 siRNA, and the IL-17A/polyI:C-provoked synergistic induction was strongly attenuated (B, C). BEAS-2B cells were pre-incubated in the presence of the IκB-α inhibitor BAY11-7082 (0-10 μM) for 1 hour and treated with IL-17A and/or polyI:C in submerged cultures. After 24-hour treatments, total RNA was harvested, and G-CSF (D) and IL-8 (E) mRNA expression was evaluated using quantitative real-time PCR normalized to β-actin levels. BAY11-7082 treatment inhibited IL-17A/polyI:C-provoked synergistic G-CSF and IL-8 induction in a concentration-dependent manner. All measurements were averaged from duplicate wells, and the experiment was repeated three times. * p < 0.001, p65 siRNA vs. negative control. † p < 0.01, p65 siRNA vs. negative control in polyI:C treatment. ‡ p < 0.01, p65 siRNA vs. negative control in co-treatment with IL-17A and polyI:C. § p < 0.05, vs. BAY11-7082 absent. doi:10.1371/journal.pone.0139491.g005

The IRF3 siRNA decreased the amplification ratio (co-treatment/polyI:C) of IL-8 mRNA expression (mean, 9.94 to 5.26, S2 Table); however, there was no reduction observed in the amplification ratio of G-CSF mRNA expression (mean, 6.78 to 11.0, S2 Table), which suggests that the IRF3 pathway was involved in the synergy of IL-8 mRNA expression but not in that of G-CSF mRNA expression. The detailed mechanisms of IL-17A/polyI:C-provoked synergistic G-CSF expression and of IL-17A/polyI:C-provoked synergistic IL-8 expression may therefore be slightly different. These results indicate that the IRF3 pathway is partially involved in the IL-17A/polyI:C-induced synergistic proinflammatory response in airway epithelial cells; on the other hand, the NF-κB pathway is more important for the synergism than the IRF3 pathway.

TNF receptor signaling did not contribute to IL-17A/polyI:C-provoked synergistic induction

To date, several groups have demonstrated the synergistic induction of proinflammatory cytokines by IL-17A and tumor necrosis factor (TNF) α treatment [25,26]. We investigated whether TNF receptor signaling is involved in IL-17A/polyI:C-induced synergistic proinflammatory responses using siRNAs for TNF receptor 1 (TNFR1). Although the TNFR1 siRNA was effective in attenuating its own mRNA expression by 80% (Fig 7A), transfection of TNFR1 siRNA did not have any effect on G-CSF and IL-8 mRNA expression induced by IL-17A and/or polyI:C treatment (Fig 7B and 7C). These results indicate that TNF receptor signaling did not contribute to the IL-17A/polyI:C-induced synergistic proinflammatory response in airway epithelial cells.

Activation of NF-κB and IRF3 by treatment with polyI:C and/or IL-17A

To evaluate the activation of NF-κB signaling, we performed western blotting analysis to detect phosphorylation of IκB-α. BEAS-2B cells were cultured in submerged conditions until the cells were 90% confluent, and then stimulated with IL-17A and/or polyI:C. PolyI:C treatment or co-treatment with IL-17A and polyI:C induced IκB-α phosphorylation from 30 to 240 minutes after treatment (Fig 8A, S1 and S2 Files).

Effects of IRF3 siRNA on inflammatory cytokine gene expression. BEAS-2B cells were transfected with IRF3 siRNA (solid bars) or a random oligomer negative control (shadowed bars).
After 24 hours of transfection, the siRNA transfection mix was replaced with fresh LHC-9 medium. At 2 days after siRNA treatment, transfected cells were incubated for 24 hours with IL-17A and/or polyI:C and mRNA expression was evaluated by real-time RT-qPCR. The IRF3 siRNA reduced endogenous IRF3 mRNA expression (A). Synergistic mRNA induction of G-CSF (B) and IL-8 (C) was significantly suppressed by the siRNA knockdown of IRF3. All measurements were averaged from duplicate wells, and the experiment was repeated three times. * p < 0.001, IRF3 siRNA vs. negative control. † p < 0.01, IRF3 siRNA vs. negative control in polyI:C treatment. ‡ p < 0.01, IRF3 siRNA vs. negative control in co-treatment with IL-17A and polyI:C. doi:10.1371/journal.pone.0139491.g006

In semi-quantitative analyses using densitometry, the relative amount of IκB-α phosphorylation was significantly higher in co-treatment conditions with IL-17A and polyI:C as compared to the single treatments at 120 and 240 minutes (Fig 8B). Phosphorylation of IRF3 was also analyzed to assess the activation of IRF3. PolyI:C treatment or co-treatment conditions with IL-17A and polyI:C induced IRF3 phosphorylation at 60 to 120 minutes after treatment (Fig 8C, S3 and S4 Files). No significant difference was found in the relative amount of IRF3 phosphorylation between the single treatment and co-treatment conditions with IL-17A and polyI:C (Fig 8D). These results suggest that activation of both the NF-κB and the IRF3 signaling pathways was essential in polyI:C/IL-17A-induced proinflammatory responses. Moreover, the NF-κB pathway may play a primary role in mediating the synergistic induction of inflammatory genes after co-treatment with IL-17A and polyI:C in airway epithelial cells.

Discussion

In this study, we investigated the interaction between IL-17A and TLR3 signaling in airway epithelial cells, and the regulatory mechanism of the proinflammatory response mediated by IL-17A and polyI:C, a ligand of TLR3. We found that IL-17A and polyI:C synergistically induced proinflammatory cytokine and chemokine expression (G-CSF, IL-8, CXCL1, CXCL5, and IL-1F9), but not antiviral gene expression (IFN-α1 and -β) in primary cultures of NHBE cells. The IL-17A/polyI:C-induced synergistic proinflammatory response occurred in the absence of de novo protein synthesis and mRNA stabilization. Attenuation of TLR3, TICAM-1 (also known as TRIF), NF-κB, and IRF3 using specific inhibitors or siRNAs decreased the synergistic effects on G-CSF and IL-8 mRNA expression. Comparing the ratio of mRNA induction between co-treatment with IL-17A/polyI:C and treatment with polyI:C alone, blocking the NF-κB pathway significantly attenuated the synergism. Western blotting analysis revealed that both NF-κB and IRF3 activation were observed in polyI:C single treatment and co-treatment with IL-17A and polyI:C. In addition, NF-κB activation was potentiated by co-stimulation with IL-17A and polyI:C as compared with the single treatment. These findings provide evidence that IL-17A and TLR3 signaling cooperate to enhance the expression of proinflammatory cytokines in the airway epithelium via TLR3/TRIF-mediated NF-κB/IRF3 activation; moreover, enhanced NF-κB-dependent pathway activation may play a key role in the synergism. Our data demonstrate that IL-17A and TLR3 signaling synergistically enhanced proinflammatory cytokine and chemokine expression in airway epithelial cells.
Viral respiratory infections activate innate immune responses through pattern recognition receptors, particularly TLRs and RLRs, in airway epithelial cells [3,7,27], which causes exacerbation of chronic airway disorders (e.g., asthma and COPD). Patients with severe asthma are highly susceptible to viral infections leading to acute exacerbation of asthma symptoms [4], and have a higher level of IL-17A in induced sputum and bronchial biopsies [23]. The airway epithelial lining is the first line of defense; thus, the response of airway epithelial cells during viral infections is likely to be related to the pathogenesis of acute exacerbation of asthma. When allergen challenge was followed by polyI:C exposure in an asthma mouse model, there were similarities to the observations made in viral-induced exacerbation of human asthma. Mahmutovic-Persson and colleagues [28] demonstrated that airway polyI:C challenge in an allergic experimental asthma mouse model produced an exacerbation-like condition, with increased lung tissue inflammation and increased levels of neutrophils and CXCL1 expression in bronchoalveolar lavage (BAL). In this view, the present observation of the synergistic expression of G-CSF, IL-8, CXCL1, and CXCL5 after co-treatment with IL-17A and polyI:C in NHBE cells indicates a significant role of airway epithelial cells in promoting excessive neutrophilic inflammation in viral-induced exacerbations of asthma. In addition, a recent study using human skin fibroblasts showed that a combined IL-17A/polyI:C treatment resulted in the synergistic upregulation of IL-6 and IL-8 expression [29]. These results, including our data, highlight the impact of the synergism between IL-17A and polyI:C in promoting excessive inflammation during viral infections. In asthma, this synergism is likely to be associated with viral infection-induced acute exacerbation, and may therefore be a therapeutic target for preventing the exacerbation of asthma. To date, few studies describe the interaction between IL-17A and TLR signaling in airway epithelial cells, and the molecular mechanism governing the synergism elicited by IL-17A and TLRs has not been fully elucidated. Wiehler et al. [30] have shown the synergism of IL-17A and rhinovirus infection in NHBE cells, demonstrating that IL-17A enhanced human rhinovirus-induced IL-8 and hBD2 expression. However, their study did not mention the involvement of TLR3 and the downstream signaling molecules (e.g., TRIF, NF-κB, and IRF3). In the present study, we provide evidence that IL-17A interacts with the TLR3/TRIF signaling pathway to augment proinflammatory gene expression in primary cultures of NHBE cells, and that the activation of the downstream transcriptional factors, NF-κB and IRF3, was required in the synergistic proinflammatory response. Concerning other TLRs, Mizunoe and colleagues [31] reported that IL-17A enhanced the production of IL-8 induced by peptidoglycan (TLR2 agonist) or lipopolysaccharide (TLR4 agonist) in bronchial epithelial cells from cystic fibrosis patients, but not in NHBE cells. The interplay between IL-17A and TLR signaling may differ among disease conditions and cell types. Further studies are warranted to elucidate the molecular details of this process.

Activation of NF-κB and IRF3 signaling pathways after treatment with IL-17A and/or polyI:C. BEAS-2B cells were stimulated with IL-17A and/or polyI:C in submerged cultures. Whole cell lysates were obtained at different time points (30, 60, 120 and 240 minutes) after treatment.
Phosphorylation of IκB-α was assessed by western blotting. β-actin was used as a loading control. Band images from the representative experiment (A) and semi-quantification of phosphorylated IκB-α using densitometry (B) are shown. IκB-α phosphorylation was strongly induced at 30-240 minutes in cells treated with polyI:C alone or co-treated with IL-17A and polyI:C. The relative amount of IκB-α phosphorylation was significantly higher in co-treatment conditions with IL-17A and polyI:C as compared to polyI:C treatment alone at 120 and 240 minutes (p = 0.036 and 0.044, respectively). Phosphorylation of IRF3 was also assessed by western blotting. The representative experiment (C) and densitometric analysis (D) are shown. IRF3 phosphorylation was enhanced by polyI:C treatment or co-treatment with IL-17A and polyI:C, and no difference in the relative amount of IRF3 phosphorylation was observed between polyI:C treatment alone and co-treatment with IL-17A and polyI:C. Results of densitometric analysis represent the mean with S.E. from three independent experiments. C, control. * p < 0.05, polyI:C vs. co-treatment with IL-17A and polyI:C. doi:10.1371/journal.pone.0139491.g008

Synergism between IL-17A and other cytokines, including TNF-α, IL-1β, IL-4, and IL-13, in inflammatory gene expression has been appreciated in various cell types [26,[32][33][34][35][36]. A variety of mechanisms, including the action of transcription factors, has been reported to be involved in the synergism [34,36]. Our siRNA knockdown results highlight the importance of NF-κB and IRF3 activation in polyI:C-induced and IL-17A/polyI:C-induced synergistic G-CSF and IL-8 expression in airway epithelial cells. NF-κB and IRF3 are key transcription factors that are downstream of the TLR3/TRIF signaling pathway; thus, it is understandable that both NF-κB and IRF3 are involved in the induction. Moreover, comparing the induction levels between treatment with polyI:C alone and co-treatment with IL-17A/polyI:C, the blockade of the NF-κB pathway significantly attenuated the IL-17A/polyI:C-provoked synergistic induction of G-CSF and IL-8 mRNA. In addition, western blotting analysis revealed that combined treatment with IL-17A and polyI:C significantly augmented NF-κB activation, although the difference in IκB-α phosphorylation between polyI:C treatment and co-treatment with polyI:C/IL-17A was modest. These findings suggest a critical role for the NF-κB-dependent pathway in the IL-17A/polyI:C-induced synergistic proinflammatory cytokine expression. Considering the broad nature of NF-κB activation, changes in NF-κB binding ability, or an interplay between NF-κB and IRF3, may be involved in the mechanisms of IL-17A/TLR3-mediated synergism, which will be a topic for future studies. In terms of other transcription factors, Huang and colleagues [34] have shown that IL-17A and IL-4/IL-13 synergistically upregulated IL-19 expression in airway epithelial cells through STAT6-dependent mechanisms. Several groups have shown synergistic proinflammatory cytokine expression induced by IL-17A and TNF-α treatment [25,26]. The ability of IL-17A in combination with TNF-α to promote enhanced mRNA stability for cytokines and chemokines, including IL-8, has been previously shown in fibroblasts and airway smooth muscle cells [33,37]. In our system, IL-17A had no effect on mRNA stability in IL-17A/polyI:C-induced synergistic G-CSF and IL-8 expression.
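The mRNA stability argument above rests on the actinomycin D time course. One common way to analyze such data is a log-linear decay fit that yields an mRNA half-life; the sketch below assumes invented relative-expression values at the sampling times used in this study.

```python
# Hypothetical sketch: estimating mRNA half-life from an actinomycin D
# time course by log-linear regression. Values are invented placeholders,
# not the study's measurements.
import numpy as np

hours = np.array([0.0, 0.5, 2.0, 6.0])           # sampling times used above
rel_mrna = np.array([1.00, 0.82, 0.45, 0.09])    # relative mRNA (a.u.)

# ln(m(t)) = ln(m0) - k*t  ->  half-life = ln(2)/k
slope, intercept = np.polyfit(hours, np.log(rel_mrna), 1)
k = -slope
print(f"decay constant k = {k:.3f} / h, half-life = {np.log(2) / k:.2f} h")
```

Comparable half-lives across the treatment groups would indicate, as reported above, that IL-17A does not act through mRNA stabilization.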
In addition, treatment with cycloheximide had no effect on the synergy, suggesting that de novo protein synthesis is dispensable for the synergism. Moreover, blocking TNF receptor signaling did not attenuate the IL-17A/polyI:C-induced synergistic expression. Based on these findings, IL-17A/polyI:C-induced synergistic G-CSF and IL-8 expression in airway epithelial cells is mediated by transcriptional activation, possibly through NF-κB and IRF3 activation. The different mechanisms by which IL-17A augments proinflammatory gene expression may be partly explained by the different cell types and stimulants used in each study. The detailed molecular mechanisms of synergistic inflammatory cytokine induction may be considerably complicated. Recently, Qiao et al. [38] demonstrated that IFN-γ promotes chromatin remodeling to increase chromatin accessibility and augment TLR4-induced inflammatory gene expression in primary human macrophages. IFN-γ induced sustained occupancy of STAT1 and IRF1, and associated histone acetylation at the promoter and enhancer, which increased recruitment of TLR4-induced transcription factors, such as NF-κB and C/EBPβ, and transcription of inflammatory cytokine genes. In this study, we did not evaluate the participation of chromatin remodeling and histone acetylation in IL-17A/polyI:C-induced synergistic inflammatory cytokine gene expression; however, it is feasible that similar mechanisms exist for the synergism between IL-17A and TLR3 signaling in the airway epithelium, which may augment NF-κB activation and transcription of these genes. In summary, our data provide an explanation for the IL-17A/polyI:C-induced synergistic proinflammatory cytokine and chemokine expression in airway epithelial cells. IL-17A and polyI:C synergistically induced proinflammatory (G-CSF, IL-8, CXCL1, CXCL5, and IL-1F9) but not antiviral (IFN-α1 and -β) gene expression in primary cultured NHBE cells, which promotes the attraction of other immune cell types (e.g., neutrophils) to the airway and excessive airway inflammation. Analysis of regulatory mechanisms for IL-17A/polyI:C-induced synergistic proinflammatory responses revealed the importance of TLR3/TRIF-mediated NF-κB/IRF3 activation; moreover, enhanced activation of the NF-κB-dependent pathway may play an essential role in the observed synergism. Patients with severe asthma are highly susceptible to viral respiratory infection, which results in severe airway inflammation and acute exacerbation of asthma. This study highlights an important interaction between IL-17A and TLR3 signaling in excessive airway inflammation, and control of IL-17A/TLR3-mediated NF-κB/IRF3 activation may be a novel therapeutic target to prevent exacerbation of asthma.

S2 Table. Induction ratio between co-treatment with IL-17A/polyI:C and polyI:C. (DOCX)
2016-05-10T04:03:37.173Z
2015-09-29T00:00:00.000
{ "year": 2015, "sha1": "a40c5c263e09780a5ccb2359758c0b4fdca4cae3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0139491", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a40c5c263e09780a5ccb2359758c0b4fdca4cae3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
237142599
pes2o/s2orc
v3-fos-license
Iterative learning control with discrete-time nonlinear nonminimum phase models via stable inversion

Output reference tracking can be improved by iteratively learning from past data to inform the design of feedforward control inputs for subsequent tracking attempts. This process is called iterative learning control (ILC). This article develops a method to apply ILC to systems with nonlinear discrete-time dynamical models with unstable inverses (i.e. discrete-time nonlinear non-minimum phase models). This class of systems includes piezoactuators, electric power converters, and manipulators with flexible links, which may be found in nanopositioning stages, rolling mills, and robotic arms, respectively. As these devices may be required to execute fine transient reference tracking tasks repetitively in contexts such as manufacturing, they may benefit from ILC. Specifically, this article facilitates ILC of such systems by presenting a new ILC synthesis framework that allows combination of the principles of Newton's root finding algorithm with stable inversion, a technique for generating stable trajectories from unstable models. The new framework, called Invert-Linearize ILC (ILILC), is validated in simulation on a cart-and-pendulum system with model error, process noise, and measurement noise. Where preexisting Newton-based ILC diverges, ILILC with stable inversion converges, and does so in less than one third the number of trials necessary for the convergence of a gradient-descent-based ILC technique used as a benchmark.

INTRODUCTION

Iterative learning control (ILC) is the process of learning an optimal feedforward control input over multiple trials of a repetitive process based on feedback measurements from previous trials. Compared to real-time-feedback and/or feedforward control techniques, many case studies of ILC have shown a substantial reduction in tracking error. Relevant applications include robot-assisted stroke rehabilitation 1 , high speed train control 2 , laser additive manufacturing 3 , and vehicle-mounted manipulators 4 , all of which use nonlinear models. In fact, while the majority of ILC literature focuses on linear systems, the prevalence of nonlinear dynamics in real-world systems has motivated the development of numerous ILC theories for discrete-time nonlinear models [5][6][7][8][9][10] . In addition to the state nonlinearities most commonly treated by nonlinear systems literature, many real-life systems exhibit dynamics well-represented by models with at least one of the following properties: (P1) relative degree ≥ 1, (P2) input nonlinearities, (P3) time-variation, and (P4) instability of the model inverse. For example, (P1) may be exhibited in the position control of myriad systems including piezoactuators 11 , motors 12 , robotic manipulators 13 , and vehicles 14 . (P2) may be exhibited by piezoactuators 11 , electric power converters 15 , wind energy systems 16 , magnetic levitation systems 17 , and flexible-link manipulators 13 . (P3) may be exhibited by any feedforward-input-to-output model of systems using both feedforward and feedback control, as is often done for robotic manipulation 18 . Finally, and of primary concern in this work, (P4) may be exhibited by piezoactuators 19 , electric power converters 15 , wind energy systems 16 , DC motor and tachometer assemblies 20 , and flexible-link manipulators 13 . However, published discrete-time-nonlinear-model-based ILC theories exclude at least one of properties (P1)-(P4) from consideration.
While the prior art makes important contributions such as foundational nonlinear ILC theory [5][6][7] , relaxation of process repetitiveness assumptions 8 , robustness to packet dropout in measurement and controller signals 9 , and integration of ILC with adaptive control 10 , these studies' analyses are limited to specific system structures. As a consequence, (P1) is not addressed by references 6, 7, 9, (P2) is not addressed by references 5-7, 9, 10, (P3) is not addressed by references 5, 8, and (P4) is not addressed by references 5-9 † . The fact that many of the above example systems exhibit multiple properties and many of the above ILC theories exclude multiple properties from consideration illustrates that it can be challenging to find a model-based ILC synthesis scheme appropriate for many real-world applications. Indeed, flexible-link manipulators exhibit all four properties, and they are relevant to the fast and cost-effective automation of pick-and-place and assembly tasks as well as to the control of large structures such as cranes 21, ch. 6 . Such application spaces would benefit from having a versatile ILC scheme compatible with (P1)-(P4). Additionally, while ILC seeks to converge to a satisfactorily low error, this learning is not immediate, and trials executed before the satisfactory error threshold is passed may be seen as costly failures from the perspective of the process specification. It is thus desirable to develop ILC schemes that converge as quickly as possible.

One ILC scheme that comes close to meeting these needs for versatility and speed is that of Avrachenkov 22 , called Newton ILC (NILC) here. NILC is the application of Newton's root finding algorithm to a complete finite time series (as opposed to individual points in time). NILC's synthesis procedure and convergence analysis are unusually broad in that they admit discrete-time nonlinear models with properties (P1)-(P3) 22,23 . Additionally, Newton's method has been shown to deliver faster convergence in ILC than schemes such as P-type ILC 24, ch. 5 , upon which much of the relevant prior art on the ILC of discrete-time nonlinear systems is founded [5][6][7][8][9] . However, this work demonstrates that when synthesized from models with unstable inverses, i.e. nonminimum phase models, NILC typically generates control signals that diverge towards very large magnitudes. In other words, NILC may be incompatible with models exhibiting (P4). This article presents a new ILC framework inheriting the benefits of NILC while surmounting this shortcoming.

For linear models with unstable inverses, a common way to obtain feedforward control signals is to systematically synthesize approximate dynamical models with stable inverses by individually changing the model zeros and poles, e.g. the work of Tomizuka 25 . However, it is difficult to prescribe analogous systematic approximation methods for nonlinear models because the poles and zeros do not necessarily manifest as distinct binomial factors in the system transfer function that can be individually inverted or modified. An alternative is to harness the fact that a scalar difference equation that is unstable when evolved forward in time from an initial condition is stable if evolved backwards in time from a terminal condition. If the stable and unstable modes of a system are decoupled and evolved in opposite directions, a stable total trajectory can be obtained. This process is called stable inversion.
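The backward-evolution idea can be seen in a one-line scalar example. For x(k+1) = a x(k) + u(k) with |a| > 1, the forward recursion diverges, but the same equation rearranged as x(k) = (x(k+1) - u(k))/a and swept backward from a terminal condition is contractive, since 1/|a| < 1. The toy system below is purely illustrative and is not taken from this article.

```python
# Toy illustration of the backward-evolution idea behind stable inversion.
# The scalar system below is illustrative only and is not from the article.
import numpy as np

a, N = 1.5, 50                       # |a| > 1: unstable forward recursion
u = np.sin(0.2 * np.arange(N))       # arbitrary bounded input sequence

# Forward evolution x(k+1) = a x(k) + u(k): grows without bound
x_fwd = np.zeros(N + 1)
for k in range(N):
    x_fwd[k + 1] = a * x_fwd[k] + u[k]

# Backward evolution x(k) = (x(k+1) - u(k)) / a from a terminal condition:
# the same recursion, but contractive (factor 1/a) in this sweep direction
x_bwd = np.zeros(N + 1)              # terminal condition x_bwd[N] = 0
for k in reversed(range(N)):
    x_bwd[k] = (x_bwd[k + 1] - u[k]) / a

print(f"max |x| forward:  {np.abs(x_fwd).max():.3e}")   # huge
print(f"max |x| backward: {np.abs(x_bwd).max():.3e}")   # bounded
```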
For linear systems on a bi-infinite timeline, with boundary conditions at time ±∞, stable inversion gives an exact solution to the output tracking problem posed by the unstable inverse model. In practice on a finite timeline, a high-fidelity approximation is obtained by ensuring the reference is designed with sufficient room for pre- and post-actuation, i.e. with a "flat" beginning and end. However, unlike ILC, stable inversion alone cannot account for model error. To address this, Zundert et al 26 details stable inversion and presents an ILC scheme for linear systems that incorporates a process similar to stable inversion. Extension of stable inversion to nonlinear models involves additional complexities. Some of these challenges, e.g. the difficulty of completely decoupling the stable and unstable parts of a nonlinear system, have been addressed by works such as those of Devasia et al 27,28 for continuous-time systems and Zeng et al 29 for discrete-time systems. However, the following challenges remain. First, this prior art assumes that if the state and input are both zero at a particular time step, then the state will be zero at the next time step. This is not true for most representations of systems employing both feedback and feedforward control because if the reference is nonzero it drives state change via the feedback controller despite the initial state and feedforward input being zero. Stable inversion erroneously based on this assumption can have poor performance, and stable inversion has not been proven to converge when this assumption is relaxed. Secondly, Zeng et al 29 does not translate from the theoretical solution on a bi-infinite timeline to an implementable solution on a finite timeline. This work addresses these challenges. In short, although the work to date on NILC and stable inversion has made great strides, gaps remain between the prior art and a synthesis scheme for ILC that is fast and applicable to a wide variety of models, including nonlinear non-minimum phase models. This leads to the main contribution of the present article: an ILC framework enabling controller synthesis from models satisfying all of (P1)-(P4). The key elements of this framework are

• reversing the order of the linearization and model inversion processes in NILC to circumvent issues associated with matrix inversion,
• reformulation of the model inversion in NILC as stable inversion,
• proof of stable inversion convergence with relaxed assumptions on state dynamics, enabling treatment of a wider array of feedback control and other time-varying models, and
• development of a structured method for implementing the stable inversion technique proposed in this work.

The proposed framework is validated in simulation on a nonlinear, relative degree 2, time-varying, non-minimum phase cart-and-pendulum system with model error and process and measurement noise. The remainder of the paper is organized as follows. Section 2 provides technical details from the prior art in NILC 23 and stable inversion 29 necessary to present the novel contributions of the present work. Section 3 presents analysis that justifies the attribution of a class of NILC failures to inverse instability, and provides a new ILC framework that enables the circumvention of this failure mechanism by incorporating stable inversion. Section 4 provides proof of convergence of stable inversion for an expanded class of systems and provides improved methods for practical implementation.
Section 5 details and discusses the validation of the new ILC framework with stable inversion through benchmark simulations on a non-minimum phase cart-and-pendulum system. This includes demonstration of conventional NILC's divergence when applied to the same system. Section 6 presents conclusions and areas for future work.

Newton ILC

Consider SISO, discrete-time, nonlinear, time-varying models

x̂(k + 1) = f̂(k, x̂(k), u(k)), x̂(0) = x̂₀, (1)
ŷ(k) = ĝ(k, x̂(k)), (2)

where x̂ ∈ ℝⁿ is the state vector, u ∈ ℝ is the control input, ŷ ∈ ℝ is the output, and k is the discrete time index. The system is made to perform repeated trials of a reference tracking task, where N ∈ ℤ>0 is the number of time steps in a trial (i.e. the number of samples minus 1), and j ∈ ℤ≥0 is the trial index ‡. Additionally, consider the trial-invariant reference r(k) ∈ ℝ. Hats, ^, are used to emphasize that (1) is an imperfect model of some true system. It is assumed that the control input and trial-invariant initial condition are perfectly known. A classical ILC structure is given by

u_{j+1} = u_j + L_j e_j, (3)

where u_j ∈ ℝ^(N−ρ+1) and e_j ∈ ℝ^(N−ρ+1) are input and error time series vectors, ρ is the relative degree of (1), and L_j ∈ ℝ^((N−ρ+1)×(N−ρ+1)) is the learning matrix, which must be designed by a human or generated by an automatic synthesis procedure.

‡ j is used for the trial index because i and l will be used for matrix element indexing, k is used for the discrete time index, t is avoided to prevent confusion with continuous time, and j is the next letter in the alphabet and thus commonly used for indexing.

The time series vectors, also called lifted vectors, are explicitly given by

u_j = [u_j(0), u_j(1), ⋯, u_j(N − ρ)]ᵀ, e_j = [r(ρ) − y_j(ρ), ⋯, r(N) − y_j(N)]ᵀ, (4)

where y_j(k) ∈ ℝ is the measured output of the true, but unknowable, system. These unknown system dynamics are represented as the function P : ℝ^(N−ρ+1) → ℝ^(N−ρ+1), which takes in u_j and outputs y_j. The work of Avrachenkov 22 analyzes the convergence of (3) within a ball around the solution input u_d ("solution" meaning that P(u_d) = r). In the present context this ball can be defined as B = {u : ‖u − u_d‖₂ < δ} with δ > 0 and ‖⋅‖₂ being the Euclidean norm. Three conditions are posited: (C1) The true dynamics P are continuously differentiable with respect to u in B, and their Jacobian is Lipschitz continuous with respect to u in B. NILC is the use of the Newton-Raphson root finding algorithm to derive an automatic synthesis formula for the trial-varying learning matrix L_j. The learning matrix is derived from the lifted representation of (1)-(2), ŷ = F̂(u), which is defined as follows. Elements of ŷ output by F̂ are given via

ŷ(k) = ĝ(k, f̂^(k−1)(x̂(0), u)), k = ρ, ⋯, N,

where the parenthetical superscript notation indicates function composition of the form

f̂^(k)(x̂(0), u) = f̂(k, f̂^(k−1)(x̂(0), u), u(k)).

Because x̂(0) = x̂₀ is known in advance and the time argument is determined by the element index of the lifted representation, F̂ is a function of u only. Note that because the first element of ŷ is ŷ(ρ), it explicitly depends on u(0). Using Newton's method to find the root of the error time series e_j yields

L_j = (∂P/∂u(u_j))⁻¹,

where ∂P/∂u is the Jacobian of P with respect to u as a function of u. This learning matrix formula is impossible to evaluate because of its dependence on the unknown dynamics P. Thus, F̂ is used as an approximation of P to yield the implementable NILC learning matrix formula

L_j = (∂F̂/∂u(u_j))⁻¹. (11)

When NILC was originally developed, large Jacobians such as ∂F̂/∂u were prohibitively difficult to derive and store as functions of u, necessitating the definition of additional approximation techniques. However, with automatic differentiation tools such as CasADi 30 , the barrier to Jacobian computation is vastly reduced, and the Jacobian can be computed directly in many cases.
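A compact numerical sketch of the Newton update (3), (11): each trial solves a linear system with the model Jacobian rather than forming an explicit inverse. The toy lifted map, finite-difference Jacobian, and all constants below are illustrative assumptions, not the article's model.

```python
# Hedged sketch of a Newton-ILC trial update u_{j+1} = u_j + J^{-1} e_j,
# on a toy lifted map; the model below is illustrative, not the article's.
import numpy as np

N = 20
r = np.sin(np.linspace(0, np.pi, N))         # reference time series

def F(u):
    """Toy lifted input-output map y = F(u) (stable, minimum phase)."""
    y = np.zeros(N)
    x = 0.0
    for k in range(N):
        x = 0.5 * x + np.tanh(u[k])          # simple nonlinear recursion
        y[k] = x
    return y

def jacobian(u, eps=1e-6):
    """Finite-difference Jacobian dF/du evaluated at u."""
    J = np.zeros((N, N))
    base = F(u)
    for i in range(N):
        du = np.zeros(N); du[i] = eps
        J[:, i] = (F(u + du) - base) / eps
    return J

u = np.zeros(N)
for trial in range(5):
    e = r - F(u)                             # tracking error e_j
    u = u + np.linalg.solve(jacobian(u), e)  # Newton step, no explicit inverse
    print(f"trial {trial}: ||e||_2 = {np.linalg.norm(e):.2e}")
```

For a non-minimum phase model, this is exactly the step that fails: the linear solve becomes ill-conditioned, which motivates the analysis in Section 3.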
Stable Inversion

The first step of stable inversion is deriving the conventional inverse. To synthesize a minimal inverse system representation, first assume (1) is in the normal form

x̂^i(k + 1) = x̂^(i+1)(k), i = 1, ⋯, ρ − 1, (12a)
x̂^i(k + 1) = f̂^i(k, x̂(k), u(k)), i = ρ, ⋯, n, (12b)
ŷ(k) = x̂^1(k), (12c)

where x̂(0) = x̂₀, and the superscripts indicate the vector element index, starting from 1. Note the ILC trial index subscript j is omitted in this section, as stable inversion on its own does not involve incrementing j. Equation (12a) captures the time delay arising from the system relative degree, while equation (12b) captures the remaining system dynamics. One method of deriving this normal form from a system not in normal form is given in Eksteen et al 31 . Note that this coordinate transformation is performed in advance of any stable inversion or ILC analysis or synthesis. Thus the coordinate transform does not interfere with satisfaction of the identical initial condition assumption in (1). Given this normal form, use (12c) to replace the first ρ state variables with output variables via

x̂^i(k) = ŷ(k + i − 1), i = 1, ⋯, ρ. (13)

Similarly, replace the ρth state variable incremented by one time step (i.e. the left side of (12b) for i = ρ) with an output variable via

x̂^ρ(k + 1) = ŷ(k + ρ). (14)

These substitutions are made to facilitate the inversion of system (12), as the inverse of a system with relative degree ρ ≥ 1 is necessarily acausal with dependence on some subset of {ŷ(k), ŷ(k + 1), ⋯, ŷ(k + ρ)} at each time step k. For notational compactness, define the ŷ-preview vector ŷ_ρ(k) ≡ [ŷ(k), ⋯, ŷ(k + ρ)]ᵀ. Then inverting (12b) with i = ρ yields the conventional inverse output function

u(k) = f̂_ρ⁻¹(k, η̂(k), ŷ_ρ(k)), (15)

where η̂(k) ≡ [x̂^(ρ+1)(k), ⋯, x̂^n(k)]ᵀ collects the remaining states and f̂_ρ⁻¹ is the inverse of f̂^ρ, i.e. (12b, i = ρ) solved for u(k). This output equation is substituted into (12b, i > ρ) along with (13)-(14) to yield the entire inverse system dynamics

η̂(k + 1) = q̂(k, η̂(k), ŷ_ρ(k)). (16a)

Next, a similarity transform is to be applied to this inverse system to decouple the stable and unstable modes of its linearization about the initial condition. Consider the Jacobian

A ≡ (∂q̂/∂η̂)(0, η̂†, 0),

where η̂† is the solution to q̂(0, η̂†, 0) = 0. Then let T be the similarity transform matrix such that

T⁻¹AT = blkdiag(Ã_s, Ã_u),

where Ã_s ∈ ℝ^(s×s) has all eigenvalues inside the unit circle, and Ã_u ∈ ℝ^((n−ρ−s)×(n−ρ−s)) has all eigenvalues outside the unit circle. This can be satisfied by deriving the real block Jordan form of A. The corresponding inverse system state dynamics are

η̃(k + 1) = q̃(k, η̃(k), ŷ_ρ(k)), η̃ ≡ T⁻¹η̂, (21)

where the tilde on q̃ indicates application to η̃ rather than η̂. Note that despite using a linearization-derived linear similarity transform, (21) describes the same nonlinear time-varying dynamics as (16a), but with the linear parts of the stable and unstable modes decoupled. If (1) has an unstable inverse, then (21) is unstable and η̃(k) will be unbounded as k increases. However, given an infinite timeline in the positive and negative direction, the equation

η̃∞(k) = Σ_{m=−∞}^{∞} G(k − m) g̃(m, η̃∞(m), ŷ_ρ(m)), (22)

where g̃ is the remainder of (21) after its decoupled linear part is removed and G is the bounded Green's function of that linear part, defines η̃∞, an exact, bounded solution to (21), provided the right hand side of (22) exists for all k ∈ ℤ. However, (22) is implicit, and thus cannot be directly evaluated. A fixed-point problem solver (Zeng et al 29 uses Picard iteration) must be used to find η̃∞, and sufficient conditions for the solver convergence and solution uniqueness must be determined. The Picard iterative solver 32, ch. 9 for (22) is

η̃_(p+1)(k) = Σ_{m=−∞}^{∞} G(k − m) g̃(m, η̃_(p)(m), ŷ_ρ(m)), (24)

where the parenthetical subscript (p) ∈ ℤ≥0 is the Picard iteration index.
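A minimal sketch of the Picard iteration (24) for a scalar, purely unstable internal dynamic, where the bounded solution is built from the anticausal part of the Green's function. The system, forcing, and tolerances below are illustrative assumptions, not the article's.

```python
# Hedged sketch of the Picard iteration (24) for a scalar unstable internal
# dynamic eta(k+1) = a*eta(k) + g(k, eta(k)), |a| > 1. The bounded solution
# uses the anticausal Green's kernel G(tau) = -a^(tau - 1) for tau <= 0.
# System and signals are illustrative, not the article's.
import numpy as np

a, N = 1.5, 60
y = np.exp(-((np.arange(N) - N / 2) ** 2) / 40.0)    # "flat"-ended reference

def g(k, eta):
    return 0.4 * y[k] + 0.05 * np.tanh(eta)          # mildly nonlinear forcing

eta = np.zeros(N)                                    # (Z2)-style zero initial iterate
for it in range(30):                                 # Picard iterations
    new = np.zeros(N)
    for k in range(N):
        # bounded solution: eta(k) = sum_{m >= k} -a^(k - m - 1) g(m, eta(m))
        for m in range(k, N):
            new[k] += -(a ** (k - m - 1)) * g(m, eta[m])
    if np.max(np.abs(new - eta)) < 1e-10:
        break
    eta = new

print(f"converged in {it + 1} iterations, max |eta| = {np.abs(eta).max():.3f}")
```

Here the contraction constant is roughly the Lipschitz constant of g in eta (0.05) times the kernel sum 1/(a - 1), so convergence is fast; this is the role the sufficient conditions of Section 4 play in general.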
To prove that (24) converges to a unique solution, Zeng et al 29 make the assumptions that

(Z1) the transformed inverse dynamics vanish at the origin, i.e. $\tilde{\psi}(k, 0, 0) = 0$ for all $k$, and

(Z2) the initial Picard iterate is the zero trajectory, $\tilde{\eta}_{(0)}(k) = 0$ for all $k$.

Note that the continuous time literature also makes these assumptions 27,28. The first assumption is violated for many representations of systems incorporating both feedback and feedforward control. An example of such a system is given in Section 5, where $u$ is the feedforward control input and the feedback control is part of the time-varying dynamics of $\hat{f}$. This feedback control influences $\hat{x}$ regardless of whether or not $u(k) = 0$. While there may often be a change of variables that enables satisfaction of (Z1), (12) already imposes constraints on the states and outputs, and for many systems it is unlikely for there to exist a change of variables satisfying both assumptions. Furthermore, while for systems satisfying (Z1), (Z2) may be the zero-input state trajectory, this is untrue for systems violating (Z1). For these systems, the zero state trajectory (Z2) is essentially arbitrary, and may degrade the quality of low-iteration Picard iterates if far from the solution trajectory. This jeopardizes convergence because the computational complexity of the Picard iteration solution grows exponentially with the number of iterations. It is thus desirable to reach a satisfactory solution in as few iterations as possible, i.e. it is desirable to have high-quality low-iterates. Section 4 addresses these limitations by proving a new set of sufficient conditions for the unique convergence of (24) that relaxes (Z1), (Z2).

ILC ANALYSIS AND DEVELOPMENT

In order to develop a new ILC framework for non-minimum phase models, it is necessary to concretely identify the failure mechanism of Section 2.1's NILC. Such analysis is absent in the literature, and is thus provided in Section 3.1. Section 3.2 then presents a new learning matrix formula overcoming this failing.

Failure for Models with Unstable Inverses

The NILC scheme (3), (11) provides convergence of $e_j$ to $0$ in theory. However, this assumes perfect computation of the matrix inversion in (11). In practice, the precision to which $\frac{\partial \hat{P}}{\partial u}(u_j)$ can be inverted is governed by its conditioning, i.e. by the magnitude of its smallest singular values; the elements of this Jacobian are

$$\left[\frac{\partial \hat{P}}{\partial u}(u)\right]_{i,l} = \frac{\partial\, \hat{h}\big(\hat{f}^{(i+\rho-2)}(u)\big)}{\partial u(l-1)}. \tag{25}$$

For linear systems, this magnitude is directly dependent on the zero magnitudes, and thus on the inverse system's stability. If inverse instability degrades the conditioning of $\frac{\partial \hat{P}}{\partial u}$ for linear models, it is guaranteed to do so for nonlinear models. This is because the Jacobian evaluated at a particular input trajectory, $\frac{\partial \hat{P}}{\partial u}(u^*)$, is equal to the constant matrix $\bar{\mathbf{P}}$, where $\bar{\mathbf{P}}$ is the lifted input-output model of the linearization of (1) about the trajectory $u^*$,

$$\delta x(k+1) = \bar{A}(k)\,\delta x(k) + \bar{B}(k)\,\delta u(k), \qquad \delta y(k) = \bar{C}(k)\,\delta x(k). \tag{27}$$

Lifting (27) in the same manner as (1) yields the output perturbation as a function of the input perturbation time series via

$$\delta y = \bar{\mathbf{P}}\, \delta u. \tag{28}$$

Because of (27)'s linearity, $\bar{f}^{(k-1)}(\delta u)$ can be explicitly expanded as

$$\bar{f}^{(k-1)}(\delta u) = \sum_{l=0}^{k-1} \left[\prod_{q=l+1}^{k-1} \bar{A}(q)\right] \bar{B}(l)\, \delta u(l), \tag{29}$$

where $\prod$ is ordered with the factor of least $q$ on the right and the factor of greatest $q$ on the left. The terminal condition of the recursive function composition is $\bar{f}^{(-1)} = \hat{x}(0)$. From (28) and (29) it is clear that the elements of $\bar{\mathbf{P}}$ are given by

$$\bar{\mathbf{P}}_{i,l} = \bar{C}(i + \rho - 1) \left[\prod_{q=l}^{i+\rho-2} \bar{A}(q)\right] \bar{B}(l - 1), \tag{30}$$

which is equal to (25) because the lifted vectors $\delta u$ and $\delta y$ share the identical structures (4) of $u$ and $e$ with respect to time indexing. Thus, if (1) is such that the inverse of its linearization (27) is unstable, $\frac{\partial \hat{P}}{\partial u}(u_j)$ will suffer ill-conditioning and attempts to compute the learning matrix (11) may yield a matrix with large arbitrary elements. Such a learning gain matrix may in turn cause $u_{j+1}$ to contain large arbitrary elements, causing the learning law to diverge. Therefore, for the learning law (3) to converge for a system with an unstable inverse in practice, a learning matrix synthesis that does not require matrix inversion of $\frac{\partial \hat{P}}{\partial u}(u_j)$ is desired.
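The conditioning argument of this subsection can be reproduced in a few lines for a toy LTI case; the plant below, with a zero at $z = 2$, is a placeholder chosen only to exhibit an unstable inverse, not a system from the paper.

```python
import numpy as np

# A non-minimum phase plant H(z) = (z - 2)/(z^2 - 0.5 z): its lifted matrix
# becomes exponentially ill-conditioned in the trial length, so the NILC
# inversion (11) is numerically hopeless.

def lifted_matrix(N):
    A = np.array([[0.5, 0.0], [1.0, 0.0]])   # controllable canonical form
    B = np.array([1.0, 0.0])
    C = np.array([1.0, -2.0])                # numerator z - 2: NMP zero at z = 2
    h = np.zeros(N)                          # Markov parameters h[k] = y(k+1)
    x = B.copy()                             # state after a unit impulse u(0) = 1
    for k in range(N):
        h[k] = C @ x
        x = A @ x
    P = np.zeros((N, N))                     # lower-triangular Toeplitz lift
    for i in range(N):
        P[i:, i] = h[: N - i]
    return P

for N in (10, 20, 40):
    print(f"N = {N:2d}: cond = {np.linalg.cond(lifted_matrix(N)):.2e}")
```

The condition number grows roughly like $2^N$, mirroring how the unstable inverse pole (at the plant zero) dominates the smallest singular values of the lifted map.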
Alternative Learning Matrix Synthesis

To circumvent issues associated with inverting $\frac{\partial \hat{P}}{\partial u}(u_j)$, a new learning matrix definition seeking to satisfy the requirements (C2)-(C3) in the spirit of Newton's method, but without the matrix inversion requirement of (11), is given by

$$L_j = \frac{\partial \hat{P}^{-1}}{\partial y}\big(y_j\big), \tag{31}$$

where $\hat{P}^{-1} : \mathbb{R}^{N-\rho+1} \to \mathbb{R}^{N-\rho+1}$ is a lifted model of the inverse of (1). This makes $\frac{\partial \hat{P}^{-1}}{\partial y}$ a function of the output of (1), namely $\hat{y}$. As stated in Section 2.1, $\hat{y}$ is the output of a necessarily erroneous model, and thus is merely a prediction of the accessible, measured output $y$. Hence $y_j$ is used as the input to $\frac{\partial \hat{P}^{-1}}{\partial y}$ in (31). In short, this work proposes using the linearization of the inverse of (1) rather than the inverse of the linearization, and thus the new framework (3), (31) will be referred to as "Invert-Linearize ILC" (ILILC).

The first step in deriving $\hat{P}^{-1}$, and thus in deriving (31), is the inversion of the original model (1). A direct method of inverting (1) is to solve (1b) for $u(k)$, and substitute the resulting function of $\hat{y}(k), \hat{y}(k+1), \cdots, \hat{y}(k+\rho)$ into (1a). However, if (1) has an unstable inverse, this method of inversion will yield unbounded states $\hat{x}(k)$ as $k$ increases. Thus, $\hat{P}^{-1}$ is derived via the stable inversion procedure described in Section 4 rather than direct inversion. Note, though, that (31) also admits the use of other stable approximate inverse models for $\hat{P}^{-1}$ should they be available.

STABLE INVERSION DEVELOPMENT

This section proves a relaxed set of sufficient conditions for the convergence of Picard iteration to the unique solution to the stable inversion problem, i.e. the unique solution to (22) from Section 2.2. This enables stable inversion—and thus ILC—for a new class of system representations capturing simultaneous feedback and feedforward control. Additionally, a new initial Picard iterate prescription is given to suit the broadened scope of stable inversion, and a procedure for practical implementation is described. This procedure enables the derivation of $\hat{P}^{-1}$.

Fixed-Point Problem Solution

Several definitions are needed to prove the relaxed set of sufficient conditions for convergence of the fixed-point problem solver used for stable inversion.

Definition 1 (Lifted Matrices and Third-Order Tensors). Given the vector and matrix functions of time $v(k) \in \mathbb{R}^a$ and $M(k) \in \mathbb{R}^{a \times b}$, the corresponding lifted matrix and third-order tensor are given by upright bold notation: $\mathbf{v} \in \mathbb{R}^{\tau \times a}$ and $\mathbf{M} \in \mathbb{R}^{\tau \times a \times b}$. $\tau$ is the time dimension, and may be $\infty$. Elements of the lifted objects are $\mathbf{v}_{k,i} \equiv v_i(k)$ and $\mathbf{M}_{k,i,l} \equiv M_{i,l}(k)$.

Definition 2 (Matrix and Third-Order Tensor Norms). $\|\cdot\|_\infty$ refers to the ordinary $\infty$-norm when applied to vectors, and is the matrix norm induced by the vector norm when applied to matrices (i.e. the maximum absolute row sum). Additionally, the entry-wise $(\infty, 1)$-norm is defined for the matrices and third-order tensors $\mathbf{v}$ and $\mathbf{M}$ from Definition 1 as

$$\|\mathbf{v}\|_{\infty,1} \equiv \sup_k \sum_i |\mathbf{v}_{k,i}|, \qquad \|\mathbf{M}\|_{\infty,1} \equiv \sup_k \sum_i \sum_l |\mathbf{M}_{k,i,l}|.$$

With these definitions a new set of sufficient conditions for Picard iteration convergence is established. Proof of this theorem shares the approach of Zeng et al 29; the induction proceeds through the following chain of bounds: first by the Picard iterative solver (24); then by the triangle inequality; then by the fact that for matrix norms induced by vector norms $\|Av\| \leq \|A\|\,\|v\|$ for matrix $A$ and vector $v$; then by the fact that $\sum_{l=-\infty}^{\infty} \|G(k - l)\|_\infty$ has the same value $\forall k$; then by (C5); and finally by (C6). By (C7), the denominator of (C8) is positive. Thus both sides of (C8) can be multiplied by this denominator without changing the inequality direction. Thus by (35) and algebraic rearranging of (C8) $\forall k$, this implies that $\tilde{\eta}_{(p)}(k)$ is within the locally approximately linear neighborhood $\forall p, k$.
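As a rough numerical companion to the contraction argument, the snippet below estimates a contraction constant of the form $\big(\sum_l \|G(k-l)\|_\infty\big)\cdot \mathrm{Lip}(\tilde{g})$ for the toy system of the earlier Picard sketch. This bound form is an assumption consistent with the proof's structure, not a formula quoted from the paper's conditions (C5)-(C8).

```python
import numpy as np

# Reuses As, Au, and the 0.1-Lipschitz toy remainder g from the Picard sketch.
As, Au = np.array([[0.5]]), np.array([[2.0]])

def G_norm_sum(K=200):
    """Approximate sum over the timeline of ||G(k)||_inf (same value for all k)."""
    s = 0.0
    for k in range(1, K):                       # stable, forward part
        s += np.abs(np.linalg.matrix_power(As, k - 1)).sum()
    for k in range(0, -K, -1):                  # unstable, backward part
        s += np.abs(np.linalg.matrix_power(np.linalg.inv(Au), 1 - k)).sum()
    return s

lip_g = 0.1                                     # Lipschitz constant of g in eta
rho_contract = G_norm_sum() * lip_g
print(f"contraction constant ~ {rho_contract:.3f} (< 1 means Picard converges)")
```

For these placeholder values the constant is about 0.3, matching the fast convergence seen in the earlier sketch.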
Remark 1. Neither the preceding presentation nor the nonlinear stable inversion prior art 29 explicitly discusses the intuitive foundation of stable inversion: evolving the stable modes of an inverse system forwards in time from an initial condition and evolving the unstable modes backwards in time from a terminal condition. Unlike for linear time invariant (LTI) systems, this intuition is not put into practice directly for nonlinear systems because the similarity transforms that completely decouple the stable and unstable modes of linear systems do not necessarily decouple the stable and unstable modes of nonlinear systems. However, the same principle underpins this work. This is evidenced by the fact that the intuitive LTI stable inversion is recovered from (22) when $\hat{\psi}$ is LTI, as illustrated briefly below. For LTI $\hat{\psi}$, $\tilde{\psi}$ takes the form

$$\tilde{\psi}\big(k,\, \tilde{\eta},\, \hat{\mathbf{y}}_\rho(k)\big) = \begin{bmatrix} \tilde{A}_s & 0 \\ 0 & \tilde{A}_u \end{bmatrix} \tilde{\eta} + \begin{bmatrix} \tilde{B}_s \\ \tilde{B}_u \end{bmatrix} \hat{\mathbf{y}}_\rho(k).$$

Then the implicit solution (22) becomes the explicit solution

$$\tilde{\eta}_s(k) = \sum_{l=-\infty}^{k-1} \tilde{A}_s^{\,k-1-l}\, \tilde{B}_s\, \hat{\mathbf{y}}_\rho(l), \qquad \tilde{\eta}_u(k) = -\sum_{l=k}^{\infty} \tilde{A}_u^{\,k-1-l}\, \tilde{B}_u\, \hat{\mathbf{y}}_\rho(l),$$

which is the forward evolution of the stable modes and backward evolution of the unstable modes where the initial and terminal conditions at $k = \pm\infty$ are zero.

Initial Picard Iterate $\tilde{\eta}_{(0)}$ Selection and Implementation

This subsection addresses the need to select a new initial Picard iterate $\tilde{\eta}_{(0)}(k)$ in the absence of (Z2). Also addressed is the fact that (24) is a purely theoretical, rather than implementable, solution because it contains infinite sums along an infinite timeline; the implementable counterpart (54) truncates these sums to a finite window. Under the new prescription, for the first Picard iteration ($p + 1 = 1$) the (54)-generated and (24)-generated trajectories are identical over a finite range of $k$. Because output tracking of systems with unstable inverses typically requires preactuation, for this range of $k$ to contain a practical control input trajectory there must be sufficient leading zeros in the reference starting at $k = 0$. For the following Picard iterates the theoretical and implementable trajectories are unlikely to be equal, but can be made closer the more leading zeros are included in the reference. Ultimately, applying (54) for any number of iterations $p_{\text{final}} \geq 1$ yields an expression for each time step of $\tilde{\eta}_{(p_{\text{final}})}(k)$ whose only variable parameters are the elements of $\hat{\mathbf{y}}$. This is because the recursion calling $\tilde{\eta}_{(p_{\text{final}})}(k)$ terminates at the known trajectory $\tilde{\eta}_{(0)}(k)$, and because $\hat{y}(k) = 0$ for $k \in \{0, ..., \rho - 1\}$ due to the known initial condition $\hat{x}(0) = 0$. The concatenation of these expressions plugged into the inverse output function (16b) yields the lifted inverse system model

$$\hat{u} = \hat{P}^{-1}(\hat{\mathbf{y}}), \tag{55}$$

which enables the synthesis of the ILILC learning matrix (31). With this, the complete synthesis of ILILC with stable inversion—starting from a model in the normal form (12)—can be summarized by Procedure 1:

1: Derive the conventional inverse system (16a)-(16b) from the normal form (12) via the substitutions (13)-(14).
2: Apply similarity transform $T$ (from (20)) to derive the inverse state dynamics representation $\tilde{\psi}$ (from (21)) with decoupled stable and unstable linear parts.
3: Prescribe the initial Picard iterate $\tilde{\eta}_{(0)}(k)$.
4: Apply the implementable Picard solver (54) for $p_{\text{final}} \geq 1$ iterations to express $\tilde{\eta}_{(p_{\text{final}})}(k)$ in terms of the elements of $\hat{\mathbf{y}}$.
5: Substitute the result into the inverse output function (16b) to form the lifted inverse model (55), and derive its Jacobian $\frac{\partial \hat{P}^{-1}}{\partial y}$ via automatic differentiation.
6: At each trial $j$, evaluate the learning matrix (31) at the measured output $y_j$ and update the input via (3).

The computational cost of Steps 1-5 grows with the number of samples in the time series, and can become relatively long. However, the overwhelming majority of this computation is performed before the execution of the zeroth trial and need not be repeated. This allows for minimal computation time—i.e. minimal downtime—between trials. More specifically, Steps 1-5 are all performed before trial zero execution, with Step 5 being the most computationally intensive. These steps yield a function $\frac{\partial \hat{P}^{-1}}{\partial y}(\cdot)$ that arithmetically produces a learning matrix given an output time series. Step 6—the only step featuring intertrial computation—merely needs to call this function and the simple matrix-vector multiplication of (3). The fixed-point problem solving and automatic differentiation do not need to be redone.
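As a concrete illustration of Remark 1's LTI special case, the sketch below evolves a placeholder stable mode forward in time and a placeholder unstable mode backward in time over a finite window; both stay bounded despite the unstable eigenvalue.

```python
import numpy as np

As, Bs = np.array([[0.6]]), np.array([[1.0]])   # stable internal mode
Au, Bu = np.array([[1.8]]), np.array([[1.0]])   # unstable internal mode

K = 60
y = np.zeros(K); y[20:40] = 1.0        # output trajectory driving the inverse

eta_s = np.zeros((K, 1))
for k in range(K - 1):                  # stable part: forward recursion
    eta_s[k + 1] = As @ eta_s[k] + Bs @ y[k:k+1]

eta_u = np.zeros((K, 1))
Au_inv = np.linalg.inv(Au)
for k in range(K - 2, -1, -1):          # unstable part: backward recursion
    eta_u[k] = Au_inv @ (eta_u[k + 1] - Bu @ y[k:k+1])

# both trajectories stay bounded even though Au has |eig| > 1
print(f"max |eta_s| = {np.abs(eta_s).max():.3f}, "
      f"max |eta_u| = {np.abs(eta_u).max():.3f}")
```

Note that the backward-evolved unstable mode is nonzero before the forcing window begins, which is exactly the preactuation behavior the subsection above associates with leading zeros in the reference.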
For reference, the validation system's computation times for each step of Procedure 1 are given in Section 5.4, Table 2.

VALIDATION

This section presents validation of the fundamental claim that the original NILC fails for models with unstable inverses and that the newly proposed ILILC framework—when used with stable inversion—succeeds. Additionally, while the intent of ILC is to account for model error, overly erroneous modeling can cause violation of (C3), which may cause divergence of the ILC law. Thus this section also probes the performance and robustness of ILILC with stable inversion over increasing model error in physically motivated simulations.

The ILILC law (3), (31) is applied as a reference shaping tool to a feedback control system (sometimes called "series ILC"). This represents the common scenario of applying a higher level controller to "closed source" equipment. The resultant system (1) is a nonlinear time-varying system with relative degree $\rho = 2$. Modeling error is simulated by synthesizing the ILC laws from a nominal "control model" of the example system, and applying the resultant control inputs to a set of "truth models" featuring random parameter errors and the injection of process and measurement noise. Finally, to give context to the results for ILILC with stable inversion, identical simulations are run with a benchmark technique that does not require modification for models with unstable inverses: gradient ILC.

Benchmark Technique: Gradient ILC

Gradient ILC is gradient descent applied to the optimization problem

$$\arg\min_{u_j}\, \frac{1}{2}\, e_j^\top e_j, \tag{56}$$

which yields the ILC law

$$u_{j+1} = u_j + \gamma \left[\frac{\partial \hat{P}}{\partial u}(u_j)\right]^\top e_j, \tag{57}$$

where $\gamma > 0$ is the gradient descent step size. Note that (57) is free of the matrix inversion that inhibits the application of NILC to systems with unstable inverses. Past work on gradient ILC 40 has been limited to linear systems due in part to the difficulty of synthesizing $\frac{\partial \hat{P}}{\partial u}$ for nonlinear systems. This article extends gradient ILC to nonlinear systems by using the automatic differentiation tool CasADi to synthesize $\frac{\partial \hat{P}}{\partial u}$. The tuning parameter $\gamma$ influences the performance-robustness trade-off of (57). Reducing $\gamma$ improves the probability that (57) will converge for some unknown model error, but may also reduce the rate of convergence. For the sake of comparing the convergence rates between gradient ILC and ILILC, here we choose $\gamma$ such that the two methods have comparable probabilities of convergence over the set of random model errors tested: $\gamma = 1.1$.
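A minimal runnable sketch of the gradient ILC law (57) follows; the lifted plant, the model-error perturbation, and the step-size normalization are illustrative assumptions, not the paper's benchmark or its tuning.

```python
import numpy as np

# Gradient ILC: u_{j+1} = u_j + gamma * J^T e_j.  No matrix inversion appears,
# so an ill-conditioned lifted Jacobian slows convergence but cannot produce
# huge arbitrary inputs.  The plant is a toy lower-triangular lifted LTI map.

rng = np.random.default_rng(0)
N = 30
h = 0.8 ** np.arange(N)                        # toy impulse response
P = np.array([[h[i - l] if i >= l else 0.0 for l in range(N)]
              for i in range(N)])
P_hat = P * (1 + 0.05 * rng.standard_normal((N, N)) * (P != 0))  # model error

r = np.sin(np.linspace(0, 2 * np.pi, N))
u = np.zeros(N)
gamma = 1.1 / np.linalg.norm(P_hat, 2) ** 2    # step size scaled for stability
for j in range(200):
    e = r - P @ u                              # "measured" error, eq. (2)
    u = u + gamma * P_hat.T @ e                # gradient ILC law, eq. (57)
print(f"final ||e||_2 = {np.linalg.norm(r - P @ u):.2e}")
```

The step-size normalization by the squared spectral norm is one common way to keep the gradient iteration contractive; it plays the role of the robustness-versus-speed trade-off discussed above.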
Example System

Consider the system pictured in Figure 1, consisting of a pendulum fixed to the mass center of a cart on a rail. This subsection presents the first-principles continuous-time equations of motion for this plant, the method for converting these dynamics to the discrete-time normal form (12), and the control architecture of the system. The cart is subjected to an applied force $F$, and viscous damping occurs both between the cart and the rail and between the pendulum and the cart. The equations of motion for this plant are given by (58)-(59), where $\theta(t)$ is the pendulum angle, $s(t)$ is the cart's horizontal position, $\mathrm{g} = 9.8\ \mathrm{m/s^2}$ is gravitational acceleration, and the process noise $\nu(t)$ is a random sample from a normal distribution with 0 mean and standard deviation $3.15 \times 10^{-2}$ N. $\ell$ is the pendulum half-length, $m_c$ and $m_p$ are the cart and pendulum masses, and $b_c$ and $b_p$ are the cart-rail and pendulum-cart damping coefficients, respectively. The time argument of $\theta$, $s$, and their derivatives has been dropped for compactness. The output to be tracked is the pendulum tip's horizontal position, $z$.

Obtaining a discrete-time state space model of this system in the normal form (12) requires first a change of coordinates such that the desired output is a state, and then discretization. The change of coordinates is

$$\theta = \arcsin\!\left(\frac{z - s}{2\ell}\right), \tag{60}$$

with associated derivative substitutions (61)-(62) obtained by differentiating (60). Then the equations of motion are solved for the accelerations in terms of the new coordinates. In the present case (58)-(62) can be solved for $\ddot{s}(t)$ and $\ddot{z}(t)$ as functions of $s(t)$, $z(t)$, $\dot{s}(t)$, $\dot{z}(t)$, and $F(t)$. Next, forward Euler discretization is applied recursively to the equations of motion to reformulate the state dynamics in terms of discrete time increments rather than derivatives, as is required by the normal form. The innermost layer of the recursion is the first derivatives

$$\dot{s}(t) \approx \frac{s(k+1) - s(k)}{T_s}, \qquad \dot{z}(t) \approx \frac{z(k+1) - z(k)}{T_s}, \tag{63}$$

where the sample period $T_s = 0.016$ s in this case. These can be plugged into $\ddot{s}(t)$ and $\ddot{z}(t)$ to eliminate their dependence on first derivatives. The next—and in this case final—layer is the forward Euler discretization of the second derivatives. The outermost layer can be rearranged to yield the discrete-time equations of motion

$$s(k+2) = \ddot{s}(k)\, T_s^2 + 2\, s(k+1) - s(k), \qquad z(k+2) = \ddot{z}(k)\, T_s^2 + 2\, z(k+1) - z(k), \tag{64}$$

which are directly used to define the state dynamics in terms of the state vector $x(k) = [z(k),\, z(k+1),\, s(k),\, s(k+1)]^\top$. The explicit expressions of (64) are too long to print here, but can be easily obtained in Mathematica, the MATLAB symbolic toolbox, etc. via the algebra described in (60)-(64).

The output must track the reference $r(k)$ given in Figure 2. To accomplish this the plant is equipped with a full-state feedback controller, modeled as (65)-(66), which acts on the effective reference $r^*(k) = r(k) + u_j(k)$ rather than on $r(k)$ directly. Here, $r^*(k)$ is the effective reference and $u_j(k)$ is the control input generated by the ILC law. In other words, the ILC law adjusts the reference delivered to the feedback controller to eliminate the error transients inherent to feedback control. Finally, the error signal input to the ILC law is subject to measurement noise $n(k)$, where the noise's distribution has 0 mean and standard deviation $5 \times 10^{-5}$ m.

The ILC law itself is synthesized from a control model that is identical in structure to the truth model presented above, but has $\hat{\nu}(t) = \hat{n}(k) = 0$ and uses the model parameters tabulated in Table 1. Stable inversion for the synthesis of the learning matrix (31) is performed with a single Picard iteration, i.e. $p_{\text{final}} = 1$ in (55). To simulate model error, the hatless truth model parameters differ from the behatted control model parameters in a manner detailed in Section 5.3. This ultimately results in the system block diagram given in Figure 3.

Simulation and Analysis Methods

Let $\hat{\theta} \in \mathbb{R}^{10}$ be a vector of the control model parameters in Table 1. Then a truth model can be specified by the vector $\theta$, generated via

$$\theta = \hat{\theta} \odot (\mathbf{1} + \delta), \tag{68}$$

where $\odot$ is the Hadamard product and $\delta \in \mathbb{R}^{10}$ is a random sample of a uniform distribution. Under (68), each element of $\delta$ is the relative error between the corresponding elements of $\theta$ and $\hat{\theta}$. Thus, $\|\delta\|_2$ provides a scalar metric for the model error between the control model and a given truth model. The range $\|\delta\|_2 \in [0, 0.1]$ is divided into 20 bins of equal width, and 50 truth models are generated for each bin. Both ILC schemes are applied to each truth model with 50 trials, and $u_0(k) = 0\ \forall k$. A full set of 50 trials of one of the ILC laws applied to a single truth model is referred to as a "simulation." The results of these simulations are used to characterize the probability of convergence and rate of convergence of each ILC law.
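The truth-model sampling of (68) can be sketched as follows; the nominal parameter vector and the rescaling used to spread samples across the $\|\delta\|_2$ bins are assumptions for illustration, not the paper's exact sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_hat = np.ones(10)                       # placeholder control-model params

def sample_truth_model(max_norm=0.1):
    """theta = theta_hat ⊙ (1 + delta), eq. (68), with a random ||delta||_2."""
    delta = rng.uniform(-1, 1, size=10)
    delta *= rng.uniform(0, max_norm) / np.linalg.norm(delta)
    return theta_hat * (1 + delta), np.linalg.norm(delta)

# bin the sampled model errors over ||delta||_2 in [0, 0.1], 20 equal bins
bins = np.zeros(20, dtype=int)
for _ in range(1000):
    _, err = sample_truth_model()
    bins[min(int(err / 0.1 * 20), 19)] += 1
print("samples per ||delta||_2 bin:", bins)
```

Each sampled `theta` would then parameterize one truth model, to which 50 trials of an ILC law are applied to produce one "simulation" in the sense defined above.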
For each iteration of a simulation, the normalized root mean square error (NRMSE) is given by

$$\mathrm{NRMSE}_j = \frac{1}{\max_k r(k) - \min_k r(k)} \sqrt{\frac{1}{N - \rho + 1} \sum_{k=\rho}^{N} e_j(k)^2}. \tag{69}$$

A simulation is deemed convergent if there exists $j^*$ such that $\mathrm{NRMSE}_j$ is less than some tolerance for all $j \geq j^*$. This work uses a tolerance of $5 \times 10^{-4}$, which is close to the NRMSE floor created by noise.

FIGURE 3 The ILC law blocks are defined with the control model parameters in Table 1 and by $\hat{\nu}(t) = \hat{n}(k) = 0$. The plant and controller gain blocks are defined with the truth model parameters generated according to Section 5.3. Inter-trial signals from trial $j$ are stored and used to compute the input for trial $j+1$.

Let $j^*_{b,m,\Lambda}$ be the minimum $j^*$ for truth model $m \in [1, 50]$ in bin $b \in [1, 20]$ under ILC law $\Lambda \in \{\text{ILILC}, \text{gradient ILC}\}$, and let $\mathcal{C}$ be the set of all $(b, m)$ for which both ILILC and gradient ILC converge. Then the mean transient convergence rate

$$\bar{j}^{\,*}_\Lambda = \frac{1}{|\mathcal{C}|} \sum_{(b,m) \in \mathcal{C}} j^*_{b,m,\Lambda} \tag{70}$$

offers a numerical performance metric. Note that Avrachenkov 22 gives a theoretical convergence analysis for the ILC structure (3) in general (covering NILC, ILILC, and gradient ILC). This analysis can be used to lower bound performance (i.e. upper bound convergence rate) via multiple parameters computed from the learning matrix and the true dynamics $P$. The mean transient convergence rate (70) may thus serve as a specific, measurable counterpart to any theoretical worst-case-scenario analyses performed via the formulas in the work of Avrachenkov 22.

Finally, to verify the fundamental necessity and efficacy of ILILC for systems with unstable inverses, 2 trials of traditional stable-inversion-free NILC (3), (11) are applied to each truth model. All computations are performed on a desktop computer with a 4 GHz CPU and 16 GB of RAM.

Results and Discussion

The condition number of $\frac{\partial \hat{P}}{\partial u}(u_0)$ is $1 \times 10^{17}$. Attempted inversion of this matrix in MATLAB yields an inverse matrix with average nonzero element magnitude of $4 \times 10^{13}$ and max element magnitude of $3 \times 10^{16}$. Consequently, $u_1$ generated by (3), (11) has an average element magnitude of $2 \times 10^{10}$ m and a max element magnitude of $8 \times 10^{11}$ m, which is so large that $y_1$ and $\hat{y}_1$ contain NaN elements for all simulations. Conversely, while some simulations using ILILC, i.e. (3), (31), diverge due to excessive model error, the majority converge. Additionally, the computation times given in Table 2 show that Procedure 1 successfully front-loads almost all of the required computation; intertrial computation time is almost always less than 150 ms. Together, these results validate the fundamental claim that the direct application of Newton's method in NILC is insufficient for systems with unstable inverses, and that the combination of ILILC and stable inversion fills this gap.

To accompany the quantitative metric $\|\delta\|_2$, Figure 4 offers a qualitative sense of the degree of model error in this study by comparing two representative ILILC solution trajectories $u_{50}(k)$ with the solution to the $\|\delta\|_2 = 0$, $\nu(t) = n(k) = 0$ scenario. The lower-model-error representative solution is from within the range of $\|\delta\|_2$ for which all simulations converged, while the higher-model-error solution comes from a bin in which some simulations diverged. A more detailed analysis of the boundaries in $\delta$-space determining convergence or divergence of a simulation is beyond the scope of this work. However, the given trajectories illustrate that even in the conservative subspace defined by the 100% convergent bins learning bridges a visible performance gap, and that beyond this subspace there are far greater performance gains to be had.
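The convergence bookkeeping of this section can be sketched as below: a per-trial NRMSE in the sense of (69) (normalized here by the reference span, an assumption about the normalization) and detection of the first trial $j^*$ after which the NRMSE stays below tolerance. The error histories are synthetic examples.

```python
import numpy as np

def nrmse(e, r):
    """RMS of the error normalized by the reference span, cf. eq. (69)."""
    return np.sqrt(np.mean(e ** 2)) / (r.max() - r.min())

def first_converged_trial(nrmse_history, tol=5e-4):
    """Smallest j* with NRMSE_j < tol for all j >= j*, else None."""
    below = np.asarray(nrmse_history) < tol
    for j_star in range(len(below)):
        if below[j_star:].all():
            return j_star
    return None

r = np.sin(np.linspace(0, 2 * np.pi, 100))
history = [nrmse(0.5 ** j * (r - r.mean()), r) for j in range(30)]
print("j* =", first_converged_trial(history))
```

Averaging the resulting `j*` values over the jointly convergent simulations would reproduce the metric (70) for each ILC law.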
Finally, a statistical comparison of the performance and robustness of ILILC with stable inversion and gradient ILC is given in Figure 5. The tuning of gradient ILC indeed yields comparable robustness to ILILC, with ILILC 97% as likely to converge as gradient ILC over all simulations. The convergence rates of the two ILC schemes, however, differ substantially, with gradient ILC converging markedly more slowly; the mean transient convergence rates in Table 3 give a more portable quantification of ILILC's advantage, ILILC having a mean trials-to-convergence nearly half that of gradient ILC's. This analysis confirms that ILILC with stable inversion is an important addition to the engineer's toolbox because it enables ILC synthesis from nonlinear non-minimum phase models and delivers the fast convergence characteristic of algorithms based on Newton's method.

FIGURE 4 Representative input solution trajectories from low- and high-model-error ILILC simulations compared with the solution to the zero-model-error problem. The zero-model-error solution is the input trajectory that would be chosen for feedforward control in the absence of learning, and differs notably from both minimum-error trajectories found by ILILC with stable inversion.

CONCLUSION

This work introduces and validates a new ILC synthesis scheme applicable to nonlinear time-varying systems with unstable inverses and relative degree greater than 1. This is done with the support of nonlinear stable inversion, which is advanced from the prior art via proof of convergence for an expanded class of systems and methods for improved practical implementation. In all, this results in a new, broadly implementable ILC scheme displaying a competitive convergence speed under benchmark testing. Future work may focus on further broadening the applicability of ILILC by relaxing reference and initial condition repetitiveness assumptions, and on the extension of ILILC with a potentially adaptive tuning parameter or other means to enable the exchange of some speed for robustness when called for. Levenberg-Marquardt-Fletcher algorithms may offer one source of inspiration for such work.

APPENDIX

Each of references 5-9 proposes sufficient conditions for the convergence $\lim_{j \to \infty} e_j = 0_{N-\rho+1}$ of a particular ILC scheme applied to a particular class of nonlinear dynamics. All of these classes of nonlinear dynamics are supersets of the SISO LTI dynamics

$$x_j(k+1) = A\, x_j(k) + B\, u_j(k), \qquad y_j(k) = C\, x_j(k), \tag{A1}$$

with relative degree $\rho = 1$, i.e. $CB \neq 0$. Additionally, assume (A1) is stable and $x_j(0)$ is such that $y_j(0) = r(0)\ \forall j$. Given a system of this structure, the ILC schemes and convergence conditions of the past works reduce to the learning law (A2) and the conditions (C9)-(C11); reference 6 uses the combination of (C9) and (C10). The counterexample plant (A3) with learning gain (A4) is simulated with the reference given in Figure 2 and the zeroth control input $u_0(k) = 0\ \forall k$. This system has an unstable inverse. The plant (A3) satisfies (C11), and with (A4) it satisfies (C9) and (C10) for $\gamma_1 = 1$. Thus, according to references 5-9 the ILC scheme (A2) is guaranteed to yield tracking error convergence in a model-error-free simulation. However, Figure A1 shows that the tracking error diverges under (A2), meaning that satisfaction of (C9)-(C11) is not actually sufficient for the convergence of all systems (A1) under the learning law (A2) in practice. This illustrates that the failure to account for phenomena arising from inverse instability is not unique to NILC, but rather pervades the literature on ILC with discrete-time nonlinear systems.
In light of the counterexample given by (A3) to the sufficiency of (C9)-(C11) for the convergence of the ILC schemes in references 5-9, it is desirable to formalize an additional condition that precludes systems such as (A3) from consideration for the application of these ILC schemes. Such a condition is given by:

(C12) equation (21) must be asymptotically stable about its solution.

For SISO LTI systems with relative degree $\rho \geq 1$ (i.e. systems of class (A1)), (C12) is equivalent to requiring that the spectral radius of

$$A - B\,\big(C A^{\rho-1} B\big)^{-1} C A^{\rho}$$

be less than one, where this matrix is the state matrix of the inverse system. (A3) violates this condition, but many systems satisfy it, including all damped harmonic oscillators discretized via the forward Euler method. While sufficient, note that (C9)-(C12) might not be necessary conditions. Analysis of necessary conditions for error convergence under past works' ILC schemes is beyond the scope of this work. As shown in Figure A1, the ILC scheme proposed by the present article is capable of solving the problem presented by the given counterexample—(A3)—to past works' ILC schemes.

FIGURE A1 NRMSE trajectories of the ILC schemes of references 5-9 applied to (A3). These NRMSEs monotonically increase, confirming the inability of the past work on ILC with discrete-time nonlinear systems to account for unstable inverses. The NRMSE trajectory yielded by the stable-inversion-supported ILILC scheme proposed by this article is also displayed. The convergence of this ILC scheme when applied to (A3) reiterates its ability to control such non-minimum phase systems.
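A quick numerical check of (C12) for the relative-degree-$\rho$ LTI case can be written as below. The example plant is a forward-Euler-discretized damped harmonic oscillator with position output, which the appendix cites as satisfying the condition; the parameter values are placeholders.

```python
import numpy as np

def inverse_state_matrix(A, B, C, rho):
    """State matrix of the inverse system: A - B (C A^{rho-1} B)^{-1} C A^rho."""
    M = C @ np.linalg.matrix_power(A, rho - 1) @ B   # first nonzero Markov parameter
    return A - B @ np.linalg.inv(M) @ C @ np.linalg.matrix_power(A, rho)

Ts, wn, zeta = 0.01, 3.0, 0.2                        # placeholder oscillator params
A = np.array([[1.0, Ts], [-wn**2 * Ts, 1.0 - 2 * zeta * wn * Ts]])
B = np.array([[0.0], [Ts]])
C = np.array([[1.0, 0.0]])                           # position output -> rho = 2

spec_rad = np.max(np.abs(np.linalg.eigvals(inverse_state_matrix(A, B, C, 2))))
print(f"spectral radius of inverse state matrix: {spec_rad:.3f} "
      f"(C12 satisfied: {spec_rad < 1})")
```

For this plant the inverse state matrix is nilpotent (spectral radius 0), so (C12) holds comfortably, consistent with the appendix's remark about forward-Euler damped oscillators.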
Fragmented QRS electrocardiogram--the hidden Talisman? The asynchronous excitation of muscle fibers causing fragmentation of the electrocardiogram may be due to poorly interconnected muscle bundles separated by high-resistance intercellular connective tissue produced by the healing process. This effect is most pronounced at the bordering areas of necrosis, where the connective tissue invades the surviving 'islands' of muscle, causing separation and distorted orientation of muscle fibers. Any form of gross structural abnormality, such as large chamber dilatation, may also cause a similar picture. Flowers et al [2] have even suggested the high-frequency notching of the QRS complex as a screening device for any structural heart disease causing biventricular enlargement.

Introduction

There are several stigmas on the resting surface electrocardiogram that are indicators of past myocardial injury. A broad QRS pattern with bundle branch block, Q waves, and persistent ST elevation are some of these markers, which may at times even be considered definitive signs of left ventricular impairment. We would like to focus here on a lesser known entity of the surface electrocardiogram - the fragmented QRS complex. This marker of myocardial injury may often be the only electrocardiographic marker in patients with non-Q myocardial infarction and in patients with a resolved Q wave. It can also be a reliable pointer to left ventricular functional compromise. Fragmented QRS electrocardiograms were for the first time recorded from canine hearts with experimentally induced acute ischemia and healing. It was found that fragmented electrocardiograms were more frequently observed in healed myocardial infarctions more than 2 weeks old than in preparations from 5-day-old infarcts [1]. The asynchronous excitation of muscle fibers causing fragmentation of the electrocardiogram may be due to poorly interconnected muscle bundles separated by high-resistance intercellular connective tissue produced by the healing process. This effect is most pronounced at the bordering areas of necrosis, where the connective tissue invades the surviving 'islands' of muscle, causing separation and distorted orientation of muscle fibers. Any form of gross structural abnormality, such as large chamber dilatation, may also cause a similar picture. Flowers et al [2] have even suggested the high-frequency notching of the QRS complex as a screening device for any structural heart disease causing biventricular enlargement.

Ischemic heart disease and fragmented QRS

It has been observed in various studies [3,4] that fragmented QRS on the resting electrocardiogram has a moderate sensitivity (62.2%) and high specificity (up to 94%) in detecting ischemic heart disease. It is especially relevant in cases where the baseline electrocardiogram does not have a Q wave. The presence of a Q wave in addition to QRS fragmentation further augments the sensitivity of detecting ischemic heart disease up to 92.4%. Many studies have reiterated the significance of the fragmented QRS in patients with resolved Q waves and non-Q myocardial infarction.
Myocardial scar, left ventricular aneurysm and fragmented QRS

Das MK et al [5], using myocardial perfusion imaging, have shown that fragmented QRS has a superior sensitivity and negative predictive value compared to Q waves in detecting myocardial scar, though there was a small compromise in specificity - especially in inferior wall myocardial necrosis. The presence of an rsR' pattern or its variants in the left-sided precordial leads was found to be an excellent marker of extensive confluent scarring and, hence, ventricular aneurysm [6]. The sensitivity and specificity of f-wQRS in detecting myocardial scar are 86.8% and 92.5%, respectively [7]. Broad premature ventricular complexes (≥160 ms) with notched QRS (notch separation >40 ms) were found to be a reliable marker of a global form of ventricular dysfunction involving ventricular mass, chamber size or function [8]. They may also be indicative of the chronic nature of the underlying disease.

Localization value of fragmented QRS

Flowers et al [9] found a certain amount of localizing value of fragmented QRS, especially in patients without chamber dilatation in the absence of Q waves. This can be useful in patients with resolved Q waves and non-transmural myocardial infarction. Postero-inferior lesions are more reliably localized (inferior axis leads) than anterior lesions (anterior lesions are larger and their periphery may extend laterally).

Left ventricular function and fragmented QRS

The fragmented QRS electrocardiogram is an independent predictor of left ventricular function. It is a marker of higher stress myocardial perfusion abnormalities and functional deterioration [5]. This was also observed in other studies, where gross left ventricular dilation and decreased ejection fraction were found to be faithfully reflected by the fragmented electrocardiogram [2,6].

Fragmented QRS and prognosis

QRS fragmentation with or without Q waves was found to predict higher mortality and more recurrent cardiac events than either a Q wave alone or a resolved Q wave without QRS fragmentation [10,11]. Hence fragmented QRS, though not extensively studied yet, is probably a reliable indicator of past myocardial ischemia in the absence of Q waves. It also suggests increased scar burden and poorer prognosis. This promising and simple noninvasive modality of investigation may be of immense help in evaluating coronary artery disease patients, but it needs to be energetically promoted in routine clinical practice, where it is a neglected entity at present.

Conclusion

Fragmented QRS on the resting surface electrocardiogram is a simple, fast and inexpensive noninvasive investigation that can be of great value in predicting the cardiac status and prognosis of an individual being evaluated for coronary artery disease.
Effect of Different Protein and Energy Levels in Concentrate Diets on Nutrient Intake and Milk Yield of Saanen x Etawah Grade Goats

Supriyati, Krisnan R, Budiarsana IGM, Praharani L. 2016. Effect of different protein and energy levels in concentrate diets on nutrient intake and milk yield of Saanen x Etawah Grade goats. JITV 21(2): 88-95. DOI: http://dx.doi.org/10.14334/jitv.v21i2.1356

The dairy goat contributes to food and nutrition security. However, information on nutrient consumption, milk yield, and milk composition of Saanen x Etawah (SAPERA) grade goats is limited. This experiment was done to evaluate the nutrient intake, milk yield, and milk composition of lactating SAPERA goats fed different levels of dietary energy and protein in the concentrate diet. Thirty multiparous SAPERA goats were used in a randomized block design with three treatments (R1, R2 and R3) and ten replications for 12 weeks of lactation. The concentrate diets were formulated to contain: 18% CP and 72% TDN (R1), 17% CP and 75% TDN (R2), and 16% CP and 78% TDN (R3). The does were penned individually and fed a basal diet (fresh chopped King Grass ad libitum, 500 g of fresh mixed forages) and 1 kg of experimental concentrate. Results showed that the treatments had significant (P<0.05) effects on CP, DIP, Ca, and P intakes and FCR, but had no significant (P>0.05) effects on DM and TDN intake. No significant differences were found in milk yield and milk composition between treatments. In conclusion, this trial suggested that the best feed for lactating SAPERA goats was the mixture of chopped grasses, mixed forages and the concentrate diet with 16% CP and 78% TDN, giving 160 g/kg CP and 750 g/kg TDN in the total DM, which produced a milk yield of 1.55 kg/d with 90 g/day of milk fat, 43 g/day of milk protein and 75 g/day of milk lactose.

INTRODUCTION

The goat population in Indonesia was around 18.88 million head in 2015 (DGLAH 2015), used for milk and meat production. The Saanen breed was introduced into the breeding program of the Indonesian Research Institute for Animal Production to improve the quality and quantity of goat milk yield. This Saanen breed often produces triplets (Mellado et al. 2011) and has higher milk yield compared to Etawah Grade (Praharani 2014) and Angora goats (Anwar et al. 2015). Therefore, Saanen genetics were used to produce a new goat breed with higher milk yield that is well adapted to Indonesian environmental conditions. Saanen goats were crossed with Etawah goats to produce crossbred Saanen and Etawah grade goats, named SAPERA.

Information on feed intake and nutrient utilization of these SAPERA goats under traditional or intensive production systems is infrequent in Indonesia. Goat feeding involves combining various feedstuffs into an acceptable and palatable ration to meet nutrient requirements. These requirements vary depending on the stage of growth, gestation and lactation. The nutrients considered in diet formulation are energy, protein, minerals, vitamins and water. The balance of nutrients will determine the performance of a dairy goat. A lactating doe requires high levels of energy, protein, and water for milk yield. Basal diets of dairy goats were often supplemented with concentrate to meet their requirements.
Nutritional requirements of energy and protein of goats have been reported and reviewed by previous researchers. Krishnamoorthy & Moran (2011) reviewed that the nutritional requirements of goats in the tropics could be referred to the values recommended by the Nutrient Requirement Council (NRC). The energy required by female Etawah grade goats was 1.1 times NRC (Supriyati et al. 2014a) and for female Anglo Nubian goats was 1.2 times NRC (Supriyati et al. 2014b). Martínez-Marín et al. (2011) reported that intake of metabolizable energy (ME) was 5.4% greater than that recommended by the NRC for young female Murciano-Granadina dairy goats. Park et al. (2010) suggested that the minimum dietary level of protein and energy was 15% CP and 60% TDN in mid lactation for Saanen dairy goats.

This study was aimed at evaluating the effect of different protein and energy levels in concentrate diets on nutrient intake, milk yield and milk composition of SAPERA goats during the first 12 weeks of lactation.

Animal and feeding trial

Thirty multiparous SAPERA goats, around 3-4 years old with an average body weight of 40.75±3.35 kg, were used in this trial. Animals were grouped into three concentrate diet treatments. The concentrate diets were formulated at different crude protein (CP) and total digestible nutrients (TDN) levels, i.e. R1 = 18% CP and 72% TDN, R2 = 17% CP and 75% TDN, and R3 = 16% CP and 78% TDN on a dry matter (DM) basis. Animals were offered chopped fresh King grass ad libitum, 500 g of fresh mixed forages and 1 kg of concentrate diet as feed during the first 12 weeks of the lactation period. Table 1 shows the chemical composition of the feed. The experimental design applied was completely randomized with three treatments and ten replications. Each animal was housed in an individual cage. The cages had galvanized metal wire floors, and attached to each cage was a secured wooden container for feed. Water was provided through a nipple in each cage. Feed intakes were measured daily.

Parameters observed were nutrient intakes of DM, CP, digestible intake protein (DIP), TDN, neutral detergent fiber (NDF), acid detergent fiber (ADF), calcium (Ca) and phosphorus (P). DM, CP, NDF, ADF, Ca and P contents of the grass, mixed forage and concentrate diets were analyzed according to the AOAC method (AOAC 2012) modified in our laboratory. Gross energy values were determined by bomb calorimeter (Adiabatic Bomb, Parr Instrument Co), and these values were used for TDN calculation as described by NRC (1981). The percentage of total digestible nutrients (TDN) = ME (kcal/kg) divided by 0.0361, where ME equals 0.62 x gross energy (kcal/kg) (NRC 1981) and 0.0361 is the conversion factor of ME to TDN as described by Langston University's ME calculator. At the end of the experiment, digestible intake protein (DIP) was measured using the total collection technique in metabolism cages. Four animals of each treatment from the same experimental goats were placed in individual metabolism cages. These animals were allowed ten days to adjust to the feed, followed by seven days of collection. Feed intake, refusals and fecal output were recorded and kept, and a sub-sample of each (10% of daily output in the case of feces) was retained for analysis. Samples were then dried, ground, and analyzed for protein.
Digestibility of protein intake (DIP) was calculated as follows:

DIP (%) = [(Protein intake - Protein in feces) / Protein intake] x 100

Milk yield and samples

Goats were milked by hand in the morning and evening. Individual morning and evening milk yields were recorded daily for each goat. The 4% fat corrected milk (FCM) for each goat was calculated from milk yield and percentage of milk fat using the formula given by Gaines (1928), i.e. 4% FCM = (0.4 x milk yield (g)) + (0.15 x milk yield (g) x % fat). The FCR value during lactation was determined as the amount of DM intake required to produce 1 kg of 4% FCM.

Milk samples from consecutive evening and morning milkings were collected from each goat on day seven of each week of lactation, starting at the first week. Approximately 30 ml of milk from each goat were composited and stored at +4°C until subsequent analysis for milk composition. Milk compositions of fat, protein, lactose, solids non-fat (SNF), total solids (TS) and specific gravity were analyzed using a Lacto-Scan Milk Analyzer.

Statistical analysis

Data on feed intake, milk yield and milk quality of goats were subjected to analysis of variance using the General Linear Model (GLM) procedure of SAS (SAS 2002). If there was a significant difference between treatments, the difference was compared using Duncan's Multiple Range Test at a significance level of P<0.05.

Nutrient intake

Table 2 shows feed (grass, concentrate, forages, and total DM), CP, TDN, NDF, ADF, Ca and P intakes during lactation. The feed (grass, concentrate, mixed forages and total DM) intakes were not significantly different (P>0.05) among the treatments. However, there was a significant difference in CP, DIP, Ca and P intakes between treatments (P<0.05), but no effect on TDN, NDF, ADF or the ratio of roughage to concentrate intakes during the lactation period. Average total daily DM and TDN intakes were not significantly different (P>0.05) among the three treatments during the first 12 weeks of lactation. In this trial, the does were separated from the kids; therefore, the nutrient requirements of goats during lactation were considered similar to the values recommended by NRC (2007) for a single kid. Furthermore, results of this trial showed that the average litter size of the goats was 1.4 (data not shown). The mean daily total DM and TDN intakes in this trial were lower than (0.84 times) and similar to (0.98 times) the NRC requirements, respectively. According to NRC (2007), the daily requirements of DM and TDN for early lactation of a dairy goat with a single kid at 40 kg of BW and -21 g ADG were 1.67 kg and 1.03 kg, respectively. Kearl (1982) recommended DM and TDN intakes for the first 10 weeks of lactation of goats at 40 kg of BW of 1.90 and 1.05 kg, respectively. From the above results, only the TDN intake of lactating goats in this trial was close to Kearl's and NRC's recommendations.

In this trial, different levels of protein and energy did not affect DM intake during the lactation period. A similar result was reported by Goetsch et al. (2001), who found that increased energy level had no effect on DM intake of lactating Alpine dairy goats. However, our findings were in contrast to those reported by Rufino et al. (2012), that supplementation of concentrate as a source of protein and energy up to 1.5% BW under grass pasture increased DM and nutrient intakes of goats. Furthermore, Teh et al. (1994) reported that high-yielding goats required great amounts of energy during early lactation.
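As a worked example, the DIP, 4% FCM, and FCR formulas above can be evaluated as follows; the input numbers are illustrative placeholders, not measurements from this trial.

```python
def dip_percent(protein_intake_g, protein_feces_g):
    """Digestibility of protein intake: (intake - fecal) / intake * 100."""
    return (protein_intake_g - protein_feces_g) / protein_intake_g * 100

def fcm4_g(milk_yield_g, fat_percent):
    """Gaines (1928): 4% FCM = 0.4 * milk + 0.15 * milk * %fat (grams)."""
    return 0.4 * milk_yield_g + 0.15 * milk_yield_g * fat_percent

def fcr(dm_intake_kg, fcm_kg):
    """kg of DM intake required per kg of 4% FCM."""
    return dm_intake_kg / fcm_kg

milk_g, fat = 1550.0, 5.8            # e.g. 1.55 kg/d of milk at 5.8% fat
fcm = fcm4_g(milk_g, fat) / 1000     # -> kg/d of 4% FCM
print(f"DIP = {dip_percent(250, 60):.1f}%  "
      f"FCM = {fcm:.2f} kg/d  FCR = {fcr(1.4, fcm):.2f}")
```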
Moreover, different levels of protein and energy in the concentrate diets significantly influenced (P<0.05) the mean daily CP intakes during the lactation period (Table 2). CP intake was higher than Kearl's recommendation (Kearl 1982) and the NRC requirement (NRC 2007). The requirements of total protein for lactating goats at 40 kg of BW and -20 g ADG were 160 g (Kearl 1982), and 89 g UIP (40%) and 80 g DIP for a single kid (NRC 2007), respectively. Intakes of Ca and P in this trial were higher than Kearl's recommendation (Kearl 1982) and the NRC requirements (NRC 2007). The requirements of Ca and P for lactating goats at 40 kg of BW and -20 g ADG were 5 g and 3.5 g (Kearl 1982), and 5.9 g and 3.9 g for a single kid (NRC 2007), respectively.

The mean daily intakes of NDF in this trial were not significantly different (P>0.05), but the mean daily intakes of ADF were significantly (P<0.05) different among the concentrate diets. NDF percentages in total DM intakes were 51.78, 50.23, and 50.66% for R1, R2, and R3, respectively. Meanwhile, ADF contents were 32.82, 33.07, and 33.38% for R1, R2, and R3, respectively. NDF and ADF contents of feed intakes were higher than the NRC recommendation. A level of 18 to 20% ADF or 41% NDF is nutritionally adequate for high-producing lactating dairy goats (Lu et al. 2008; Mirzaei-Aghsaghali & Maheri-Sis 2011). Moreover, the ratio of roughage to concentrate intakes in this trial was in the recommended range, except for the R3 diet, which was slightly higher than the recommendation (40 : 60%). The minimum recommended dietary NDF and ADF are 25 to 28% and 19 to 21%, respectively, with at least 75% of this NDF from forages rather than concentrate. A lower dietary fiber level could depress milk fat percentage and increase fat storage in the body of the doe during lactation.

In this trial, different levels of protein and energy in the concentrate diets had no influence (P>0.05) on the ratio of roughage to concentrate intakes during lactation. Intake of concentrates was in the range of 58-61% of total DM intake. These ratios of roughage to concentrate intakes were in the normal range of feed intakes, except for the R3 diet, which was slightly higher than recommended. Concentrate diets should make up 50-60% of the diet.

From the above results, average TDN intake was adequate to meet the requirement (Kearl 1982; NRC 2007). CP, DIP, NDF, ADF, Ca, and P intakes were higher than the nutrient requirements of lactating goats as recommended by the international feeding systems (Kearl 1982; NRC 2007).

Milk yield

Table 3 summarizes the effects of different levels of protein and energy in concentrate diets on average daily milk yield at different weeks of lactation, 4% FCM yields, total milk yields over 12 weeks of production, FCR, milk constituents and milk composition yields. During the milk yield period, the different levels of protein and energy in the concentrate diets, where the three treatments contained 17.14% CP and 71.31% TDN (R1), 16.44% CP and 72.33% TDN (R2), and 15.95% CP and 74.67% TDN (R3) of total feeds, did not affect (P>0.05) the average weekly milk yields or the total 12-week milk yields.

These results were similar to the results of previous researchers (Bava et al. 2001; Goetsch et al. 2001; Zambom et al. 2012) showing that milk yield was not affected by different levels of protein and energy intakes. Bava et al. (2001) reported that milk yield was similar for a silage-based control diet and a non-forage diet (high CP content) of dairy goats.
Goetsch et al. (2001) reported that milk yield in the first 12 weeks of the subsequent lactation was not affected by dietary treatments of different levels of energy and concentrate or parity of Alpine dairy goats. Zambom et al. (2012) evaluated milk yield of Saanen goats fed diets with soybean hulls replacing ground corn (0, 50, and 100% replacement) in early lactation, and the results showed that milk yield was not affected by the three different diets containing 13% CP and 66.49% TDN, 14.5% CP and 63.33% TDN, or 15% CP and 57.34% TDN. However, the results of this trial were in contrast with those obtained by other researchers (Sahlu et al. 1995; Park et al. 2010; Souza et al. 2014; Nascimento et al. 2014), who found that different levels of protein and energy affected milk yield. Sahlu et al. (1995) reported that milk yield in the subsequent lactation increased quadratically in response to pre-partum CP and TDN concentration. Park et al. (2010) reported that milk yield of Saanen goats on the diet containing 15.19% CP and 62.60% TDN was the highest among the treatments of 11.90% CP and 70.08% TDN, 12.73% CP and 67.03% TDN, and 16.60% CP and 57.90% TDN. Souza et al. (2012) observed that increasing the dietary energy level of Saanen goats using calcium salts of fatty acids changed their lactation curves, resulting in the best milk yield response with 76.18% TDN in DM diets. Nascimento et al. (2014) reported that daily milk yield of dairy goats showed linear improvement with increasing TDN content from 65% to 75% and 85%. The differences in milk yield from the results of previous studies might be due to variation in goat response to the treatment diets, breed, or stage of lactation.

Daily average milk yields of SAPERA goats in this trial were higher compared to the milk yield of Etawah Grade goats (Supriyati et al. 2016) and lower than the milk yield of Saanen goats (Gomes et al. 2014; Zambom et al. 2012). Supriyati et al. (2016) reported that the average daily milk yield of Etawah Grade goats fed diets containing 12.6% CP and 70.1% TDN during 12 weeks of lactation was 0.678 kg/d. However, Gomes et al. (2014) reported that the average daily milk yield of Saanen goats fed diets based on soybean meal containing 23% CP during the first 60 days of lactation was 3.29 kg/d. Furthermore, Zambom et al. (2012) reported that the average daily milk yield of Saanen goats fed a soybean hull-based diet containing 22% CP and 85% TDN during the 50 days of lactation was 3.64 kg/d. From the above results, it could be concluded that the milk yield of SAPERA goats was in the middle range between Etawah Grade and Saanen goats.
Milk constituents and composition yields

Table 3 summarizes milk constituent and composition yields of goats fed different levels of energy and protein. Different levels of energy and protein in the concentrate diets had no influence (P>0.05) on milk fat, protein, lactose, specific gravity, SNF, or TS. Milk constituent yields were also not influenced (P>0.05) by the different levels of energy and protein in the concentrate diets.

In this trial, milk samples were collected from each goat on day seven of each week of lactation. In this period, milk samples would represent milk quality during the whole experiment. As reported by Zeng et al. (1997), milk sample collection was carried out when does were in one to two weeks of lactation. They also reported that the daily variation in the concentration of milk components did not change significantly. Milk components change depending on the stage of lactation (Zeng et al. 1997) and traits (Silva et al. 2013).

Milk fat and total solids of goats in this trial were in the ranges reported by Sutama (2009) for Etawah Grade goats under tropical conditions, from 4.42 to 6.4% and 13.62 to 15.72%, respectively. Milk protein and milk lactose in this trial were less than those reported by Sutama (2009), who reported that milk protein and milk lactose of Etawah Grade goats were 3.78 to 4.52% and 5.08 to 5.62%, respectively. The protein percentage was lower, and the fat and lactose percentages were higher, than those reported by Silva et al. (2013), who worked with Saanen goats; they obtained values of 3.13, 3.78 and 4.25, respectively. Differences from previous studies might be due to differences in feeds, breed and lactation period. The above results show that milk content is the most variable nutrient because of differences between breed, feeding and their interaction.

During the milk yield period, the different levels of protein and energy in the concentrate diets, where the three treatments contained 17.14% CP and 71.31% TDN (R1), 16.44% CP and 72.33% TDN (R2), and 15.95% CP and 74.67% TDN (R3) of the total feeds, did not affect (P>0.05) milk composition or milk constituent yields. However, our findings were in contrast to those obtained by other researchers (Sahlu et al. 1995; Park et al. 2010; Zambom et al. 2012). Sahlu et al. (1995) reported that milk fat percentage increased linearly in response to increased pre-partum energy. Park et al. (2010) reported that the decrease of energy and increase of protein in the diets of mid-lactation Saanen goats significantly reduced the milk fat content, but the yields of milk protein and lactose increased significantly. Zambom et al. (2012) reported that milk quality of Saanen goats fed diets with soybean hulls in early lactation was not affected by three different diets containing 13% CP and 66.48% TDN, 14.5% CP and 62.33% TDN, or 15% CP and 57.34% TDN. Furthermore, Park et al. (2010) suggested that the minimum dietary level of protein and energy for mid-lactation Saanen dairy goats producing the best milk composition and milk constituent yields was 15% CP and 60% TDN.

The different responses of milk yields and milk composition yields to protein and energy levels among research reports might be due to many factors, such as the forage to concentrate ratio (Tufarelli et al. 2009; Park et al. 2010) and breed and traits (Ciappesoni et al. 2004). However, the forage to concentrate ratio in this trial likely did not affect milk yield and milk composition, since the ratios were not significantly different, as shown in Table 2. As reported by Tufarelli et al. (2009), a 35/65 forage-to-concentrate ratio provided greater milk yield compared to 50/50 and 65/35 ratios, without influencing milk composition, during the lactation period of Jonica breed goats.

CONCLUSION

Levels of protein and energy in concentrate diets had significant effects on CP, DIP, Ca, and P intakes and FCR, but not on DM, TDN, NDF, and ADF intakes during lactation. No significant differences were found in milk yield or milk composition between the different levels of protein and energy in the concentrate diets. This trial suggested that the best feed for lactating SAPERA goats was the mixture of chopped grasses, mixed forages and the concentrate diet with 16% CP and 78% TDN, giving 160 g/kg CP and 750 g/kg TDN in the total DM, which produced a milk yield of 1.55 kg/day with 90 g/day of milk fat, 43 g/day of milk protein and 75 g/day of milk lactose.
Table 2. Average daily nutrient intake of goats fed different levels of protein and energy during lactation. a,b,c: Values in the same row having different letters show a significant (P<0.05) difference.

Table 3. Milk yield, milk composition and milk constituent yields of goats fed different levels of protein and energy. a,b: Values in the same row having different letters differ significantly (P<0.05).
Copy number variants and rasopathies: germline KRAS duplication in a patient with syndrome including pigmentation abnormalities

RAS/MAPK pathway germline mutations have been described in RASopathies, a class of rare genetic syndromes combining facial abnormalities, heart defects, short stature, skin and genital abnormalities, and mental retardation. The majority of the mutations identified in the RASopathies are point mutations that increase RAS/MAPK pathway signaling. Duplications encompassing RAS/MAPK pathway genes (PTPN11, RAF1, MEK2, or SHOC2) have been described more rarely. Here we report a syndromic familial case of a 12p duplication encompassing the dosage-sensitive gene KRAS, whose phenotype overlapped with RASopathies. The patient was referred because of a history of mild learning disabilities, small size, facial dysmorphy, and pigmentation abnormalities (café-au-lait and achromic spots, and axillary lentigines). This phenotype was reminiscent of RASopathies. No mutation was identified in the most common genes associated with Noonan, cardio-facio-cutaneous, Legius, and Costello syndromes, as well as neurofibromatosis type 1. The patient's constitutional DNA exhibited a ~10.5 Mb duplication at 12p, including the KRAS gene. The index case's mother carried the same chromosome abnormality and also showed developmental delay with short stature, and numerous café-au-lait spots. Duplication of the KRAS gene may participate in the propositus phenotype, in particular the specific pigmentation abnormalities. Array-CGH or some other assessment of gene/exon CNVs of RAS/MAPK pathway genes should be considered in the evaluation of individuals with RASopathies.

Electronic supplementary material The online version of this article (doi:10.1186/s13023-016-0479-y) contains supplementary material, which is available to authorized users.

RASopathies are a class of genetic syndromes caused by germline mutations in the RAS/mitogen-activated protein kinase (RAS/MAPK) cascade [1], better known for its role in growth factor and cytokine signalling and cancer pathogenesis [2]. Individuals with these syndromes typically present with some combination of facial abnormalities, heart defects, and short stature, although skin and genital abnormalities as well as mental retardation are also common. Germline mutations of genes encoding components of the RAS/MAPK pathway have been described in Noonan (NS; OMIM 163950), cardio-facio-cutaneous (CFC; OMIM 115150), Legius (LS; OMIM 611431), and Costello (CS; OMIM 218040) syndromes, capillary malformation-arteriovenous malformation (OMIM 608354) and neurofibromatosis type 1 (NF1; OMIM 162200). The majority of the mutations identified in the RASopathies are mutations that increase RAS/MAPK pathway signaling, many of which are missense mutations [3]. Whole-gene deletions have also been reported in patients with NF1 [4], and duplications encompassing other RAS/MAPK pathway genes (PTPN11, RAF1, MEK2, or SHOC2) have been described more rarely [5][6][7][8]. However, it is sometimes difficult to conclude that an altered RAS/MAPK pathway gene copy number variation (CNV) is critical for the associated phenotype. Here we report, to the best of our knowledge, the first syndromic familial case of a large 12p duplication encompassing the dosage-sensitive gene KRAS, whose phenotype overlapped with RASopathies.
We report a patient who was evaluated in our clinic at ages 12 and 17 years because of a history of mild learning disabilities (two years behind at school), small size (1.35 m as an adult = -4 SD), and pigmentation abnormalities: nine café-au-lait spots over the whole body (the largest 3 cm in size), 14 achromic spots, and axillary lentigines (Fig. 1a). We did not observe any evidence of a spatial relationship between the café-au-lait spots and the achromic macules. Facial dysmorphy was also noticed, including a long face with a broad forehead and a large philtrum. Bone X-rays were normal. She was a premature baby (born at 27 weeks of pregnancy with a birth weight of 1090 g) and had an interauricular communication which resolved spontaneously. Her mother had the same phenotype, with small size (1.25 m) and a coarse face. She died at age 51 years of an unknown cause. This phenotype was reminiscent of RASopathies, among which neurofibromatosis type 1 (NF1), Legius syndrome, cardio-facio-cutaneous (CFC) syndrome, and Noonan syndrome represent prototypic entities [9]. The study was approved by the local ethics committee. Informed consents to participate and to publish were obtained from the patient and her parents.

High-molecular-weight DNA was prepared by standard proteinase K digestion followed by phenol-chloroform extraction from whole-blood leukocytes. Genome-wide array-CGH was performed as previously described [11] to identify potential genetic rearrangements. Patient DNA (labelled with Cy5-dUTP) was hybridized on Agilent whole human genome 244K microarrays (Agilent Technologies) using a pool of genomic constitutional DNAs (leukocyte DNA labelled with Cy3-dUTP) from non-affected individuals as reference. The array was scanned with an Agilent DNA microarray scanner (G2565BA). Log2 ratios were determined with Agilent Feature Extraction software. Results were visualized and analysed with Agilent's Genomic Workbench 5.0 software.

The patient's constitutional DNA exhibited a ~10.5 Mb duplication at 12p (Fig. 1b, c), including 49 protein-coding genes, two microRNA genes, and one long non-coding RNA gene (Additional file 1: Table S1). The patient's mother carried the same chromosome abnormality (karyotype: dup(12)(p12.1p11.1)) and also showed developmental delay with short stature, and numerous café-au-lait spots that were not distinguishable from those of NF1 and Legius syndrome.

The duplication observed in the propositus included the KRAS gene. RASopathy-associated constitutional activating mutations in KRAS lead to increased RAS signalling. These mutations are responsible for less than 5% of PTPN11-mutation-negative Noonan patients or of patients with CFC [9,12]. The possibility that CNVs encompassing dosage-sensitive genes can lead to inherited or sporadic diseases from de novo rearrangements has been discussed previously [13]. The authors questioned whether an increase in the expression of a functionally normal signalling component can mimic the effects of a hyperactive mutant protein. The contribution of CNVs to phenotype can be complex, and interpretation is frequently complicated by the size and type of chromosomal rearrangements, and by epigenetic regulation. Whole-gene duplication may lead to a weaker increase in protein expression than the oncogenic activating mutations actually found in the BRAF or KRAS genes. However, although many of the activating mutations are similar to activating somatic mutations seen in cancer, on the whole they tend to be less strongly activating in RASopathies.
For example, the most common oncogenic BRAF mutation, p.Val600Glu, does not occur in CFC syndrome, and the specific KRAS mutations associated with Noonan syndrome are not the same as the known recurrent somatic mutations associated with cancer. It is likely that the strongly activating oncogenic mutations cannot be tolerated as constitutional mutations [14]. RASopathy-specific phenotypic traits were sometimes lacking in previously reported PTPN11, MAP2K2, or RAF1 constitutional duplications [6,7]. Our observation suggests that duplication of the KRAS gene may contribute to the propositus' phenotype, in particular the specific pigmentation abnormalities. The RAS/MAPK pathway has been identified as crucial for controlling pigmentation [15], and some perturbations in the RAS/MAPK cascade can result in multiple café-au-lait spots, although the exact mechanism remains to be elucidated. Café-au-lait macules are a key diagnostic phenotype of RASopathies: they are the most common first sign of NF1 (and also of the rare Legius syndrome) and they are present in 95% of NF1 patients by the age of 1 year [16][17][18]. We conclude that duplication of the region containing KRAS may partly account for the observed syndromic phenotype. Array-CGH or some other assessment of gene/exon CNVs of RAS/MAPK pathway genes should be considered in the evaluation of individuals with RASopathies in whom no point mutation is identified by sequencing.
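To make the dosage logic behind this recommendation concrete, the sketch below shows how a constitutional duplication typically appears in array-CGH data: probe-level log2(Cy5/Cy3) ratios shift from about 0 toward log2(3/2), roughly +0.58, across the duplicated segment. This is an illustrative toy analysis only; the function name, window size and threshold are our assumptions and do not reproduce the Agilent Feature Extraction/Genomic Workbench pipeline used in the study.

```python
import numpy as np

def call_duplication(log2_ratios, window=11, threshold=0.3):
    """Flag probes whose smoothed log2 ratio suggests a copy-number gain.

    A heterozygous duplication (3 copies vs. 2) is expected near
    log2(3/2) ~ +0.58; a single-copy loss would sit near log2(1/2) = -1.0.
    """
    ratios = np.asarray(log2_ratios, dtype=float)
    # A running median smooths single-probe noise before thresholding.
    half = window // 2
    padded = np.pad(ratios, half, mode="edge")
    smoothed = np.array([np.median(padded[i:i + window])
                         for i in range(len(ratios))])
    return smoothed > threshold  # boolean mask of putatively duplicated probes

# Toy example: 30 normal probes followed by 20 probes inside a duplication.
rng = np.random.default_rng(0)
probes = np.concatenate([rng.normal(0.0, 0.15, 30),
                         rng.normal(0.58, 0.15, 20)])
mask = call_duplication(probes)
print(f"{mask.sum()} of {mask.size} probes called duplicated")
```

In real pipelines, dedicated segmentation algorithms are used instead of a fixed per-probe threshold, but the underlying dosage signal is the same.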
Primary malignant mixed Müllerian tumor of the peritoneum: a case report with review of the literature

Malignant mixed Müllerian tumor is a rare malignancy of the genital tract and is extremely uncommon in extragenital sites. This report describes a case of malignant mixed Müllerian tumor arising in the lower peritoneum of a 72-year-old female patient. The patient presented with ascites, a lower abdominal mass and pleural effusion. The serum level of CA125 was elevated. At operation, diffuse carcinosis associated with a tumor mass measuring 20 × 15 × 10 cm in the vesicouterine and Douglas' pouches was found. The uterus and the adnexa were unremarkable. Histopathology revealed a typical malignant mixed Müllerian tumor, heterologous type. The epithelial component was positive for cytokeratin 7 and vimentin, whereas the mesenchymal component was positive for vimentin, S100 and focally for CK7. The histogenesis of this tumor arising from the peritoneum is still speculative. Based on previous reports and the immunohistochemical analysis of our case, we believe that this is a monoclonal tumor with carcinoma being the "precursor" element. Nevertheless, further molecular and genetic evidence is needed to support such a conclusion.

Background
Malignant mixed Müllerian tumor (MMMT) is a rare entity that arises from structures that are embryologically related to the Müllerian system [1,2]. The usual location of MMMT is the female genital tract. Extragenital origin is extremely rare [3][4][5]. Histologically and by immunohistochemistry, the tumor exhibits both epithelial and mesenchymal components.

Case report
A 72-year-old woman with an unremarkable gynaecological history presented with chest pain and dyspnoea, increasing in intensity over the previous three weeks. A chest X-ray showed pleural effusion, and the subsequent fine needle aspiration cytology revealed malignant epithelial cells. The serum level of CA125 was 712 U/ml. CT scans of the abdomen and chest revealed ascites and pleural effusion but no tumor mass. Pelvic ultrasonography, however, revealed excrescences adjacent to the interior surface of the abdominal wall and tumor load in the lower part of the abdomen. The uterus and the right ovary were described as normal; the left ovary was not visualised. An ultrasound-guided biopsy from the tumor reported a carcinosarcoma (see later). The patient underwent exploratory laparotomy. A widely spread peritoneal carcinosis and a tumor measuring 20 × 15 × 10 cm in the vesicouterine and Douglas' pouches were found. Biopsy samples were taken from the tumor as well as from the serosa of the urinary bladder. A complete hysterectomy with partial omentectomy was also performed. There was no suspicion of intrahepatic metastasis. The gallbladder, stomach, pancreas and appendix were unremarkable. Histopathology was consistent with the diagnosis of a primary peritoneal malignant mixed Müllerian tumor, given that the uterus and the adnexa were unremarkable. Postoperatively, the patient underwent chemotherapy (carboplatin in doses of 468 mg/360 mg every third week, as it was not felt that the patient was fit for more aggressive treatment). The disease progressed despite treatment and the subsequent introduction of treosulfan. The patient passed away 12 months after diagnosis. No autopsy was performed.

Gross pathology
The tumor tissue submitted for pathology consisted of fragmented, irregular tissue masses measuring altogether 16 × 12 × 4.5 cm. All the fragments were tan-white, irregular fleshy masses with areas of necrosis and hemorrhage.
The uterus and the Fallopian tubes were unremarkable. The right and left ovaries were of normal dimensions. Numerous sections were taken from the tumor as well as from the uterus and adnexa.

Histology and histochemistry
The tissue was fixed in 10% neutral buffered formalin (pH 7.0), routinely processed, and embedded in paraffin using standard methods. Four-micrometer sections were stained with hematoxylin and eosin, periodic acid-Schiff (PAS) with diastase predigestion, and Alcian/PAS.

Immunohistochemistry
Formalin-fixed, paraffin-embedded tissue was stained with the peroxidase method using the EnVision visualisation system (Additional file 1).

Most of the tumor tissue had the characteristics of a poorly differentiated carcinoma (Figure 1). There was no squamous cell or glandular differentiation. The mesenchymal component was composed of sheets of undifferentiated spindle cells and areas of cartilage (Figure 2). Numerous sections from the uterus and the adnexa showed no evidence of tumor. Immunohistochemical staining for cytokeratin 7 decorated cells of the epithelial component and scattered cells within the mesenchymal component (Figures 3 and 4). Vimentin was strongly positive in the mesenchymal component and sporadically in the epithelial areas (Figure 5). Cytokeratin 20 and calretinin were negative in both epithelial and mesenchymal elements. MMMTs were traditionally regarded as a subtype of uterine sarcomas or a mixture of true carcinoma and sarcoma; however, several reports have suggested a monoclonal origin of these tumors [1][2][3]. Interestingly, molecular data published by Wada and co-workers [4] suggested that although most carcinosarcomas are combination tumors, some develop as collision tumors.

Discussion and review of the literature
The morphology of the present tumor is consistent with malignant mixed Müllerian tumor. The epithelial component exhibited positivity for cytokeratin 7 and vimentin and was negative for cytokeratin 20. Moreover, the mesenchymal component was diffusely positive for vimentin, focally for CK7, and exhibited areas of heterologous malignant cartilage. The primary peritoneal location and origin were confirmed after thorough gross and microscopic examination of the uterus and adnexa. Furthermore, the absence of calretinin expression suggested that the tumor was of Müllerian rather than pure mesothelial origin [6]. A search of the literature revealed 30 previously reported cases of extragenital MMMT (Additional file 2), with the majority of the patients being of postmenopausal age [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Twenty-two of the reported cases were of primary peritoneal origin, and most of them arose in the pelvis [19]. Thirteen were of the heterologous type [19]. In the homologous type of MMMT, mesothelioma would rightfully be considered as a possibility [6]. However, in that case a positive reaction of the cells to calretinin would be expected [6]. Immunoreactivity of the epithelial cells to cytokeratin 7 and a negative reaction for cytokeratin 20 also point toward a Müllerian origin. A theoretical possibility is of course an origin in endometriosis or endosalpingiosis. However, although impossible to rule out, the lack of demonstrable endometriosis associated with the patient's current tumor makes this hypothesis unlikely [16].
Several theories have emerged in the attempt to explain the biphasic appearance of the tumor, the most important of which are the "collision", "conversion" and "combination" theories [1]. Sternberg et al. [20] were the first to suggest conversion, stating that sarcomatous elements may develop from carcinoma. They described a case of metastatic heterologous-type carcinosarcoma of the omentum with a primary endometrial origin. There was no evidence of a sarcoma component in the primary tumor. From that time onward, a number of cases with metachronous or synchronous gynaecologic carcinoma have been reported [11,14,16]. Masuda et al. [21] further supported the conversion theory with their study, in which cell lines established from malignant mixed Müllerian tumors showed the ability of the epithelial tumor cells to undergo epithelial, mesenchymal or both types of differentiation in vitro, while the mesenchymal cells did not show similar capabilities. MMMT has a poor prognosis, with most patients following a rapidly fatal course regardless of the initial tumor stage [9]. A review by Garamvoelgyi et al. [14] showed that most patients passed away within one year, with the median postoperative survival time being 14 months. Due to the rarity of the disease, limited data regarding the management of peritoneal MMMT exist. Treatment modalities include surgery, chemotherapy and irradiation, with various survival outcomes. Ko et al. [19] reported on a patient who was treated with optimal tumor debulking and combination chemotherapy with ifosfamide and cisplatin, followed by pelvic irradiation. There were no signs of recurrence for 48 months, the longest disease-free survival in the reported literature.
Atypical hemolytic uremic syndrome and acute tubular necrosis induced by complement factor B gene (CFB) mutation

Abstract
Rationale: Atypical hemolytic uremic syndrome (aHUS) is an uncommon and serious disease that manifests with hemolytic anemia, thrombocytopenia, and acute kidney injury. Genetic complement abnormalities have been shown to be responsible. Compared with aHUS caused by other mutated genes, aHUS secondary to CFB mutation in adults is extremely rare. We report an adult with a CFB mutation who developed aHUS.
Patient concerns: A 56-year-old man was admitted for a 4-day history of nausea and fatigue, anuria for 2 days, and unconsciousness for 10 hours.
Diagnoses: The patient presented with life-threatening anemia, thrombocytopenia, acute kidney injury, and nervous system abnormalities. The patient had schistocytes on the peripheral blood smear and increased lactate dehydrogenase (LDH) and plasma-free hemoglobin levels. The patient was later found to harbor a pathogenic variant in the CFB gene (c.1598A>G), and was diagnosed with aHUS and acute kidney injury.
Intervention: The patient was treated with plasmapheresis, continuous renal replacement therapy, blood transfusion, and anti-infective and antihypertensive treatment.
Outcomes: After the treatment, the patient's consciousness returned to normal, and the hemoglobin, platelet count, and serum creatinine recovered. The disease activity remained quiescent during follow-up.
Lessons: A rare heterozygous variant, c.1598A>G p.Lys533Arg, in the CFB gene, associated with adult-onset aHUS, is described and was successfully treated. This case can help in understanding the early diagnosis and effective therapies of this rare disease.

Introduction
As a rare and serious microvascular thrombotic disorder, atypical hemolytic uremic syndrome (aHUS) is characterized by microangiopathic hemolytic anemia, platelet consumption, and the development of acute kidney injury. [1] aHUS is caused by dysregulation of the alternative complement pathway, [1,2] and aHUS patients have been found to have mutations that involve C3, complement factor H (CFH), factor I (CFI), factor B (CFB), and membrane cofactor protein (MCP, or CD46). [2] Compared with other mutated genes, mutations that involve CFB are rare, and adult-onset cases are even rarer. [3][4][5][6] We report an adult with life-threatening anemia, thrombocytopenia, nervous system abnormalities, and acute kidney injury, who harbored a pathogenic variant in the CFB gene (c.1598A>G). The patient was diagnosed with aHUS and acute tubular injury, and was successfully treated with plasmapheresis.

Clinical presentation
A 56-year-old male was admitted for a 4-day history of nausea and fatigue. The patient also had anuria for 2 days and had been unconscious for 10 hours. The patient had no significant medical history. On physical examination, the patient was comatose, with a pale appearance and impaired blood oxygenation. Rales and rhonchi were present in both lungs. The laboratory investigations yielded features suggestive of microangiopathic hemolytic anemia, including a hemoglobin level of 35 g/L, schistocytes on the peripheral blood smear (1%), and increased lactate dehydrogenase (LDH) (416 IU/L) and plasma-free hemoglobin levels. The patient's direct Coombs test results were negative, and the platelet and white blood cell counts were 88 × 10⁹/L and 19.43 × 10⁹/L, respectively.
Furthermore, the serum creatinine and blood urea nitrogen (BUN) levels were 1046 µmol/L and 63.19 mmol/L, respectively, suggestive of renal impairment. The urinalysis revealed microscopic hematuria and 3+ proteinuria. There was no evidence of infection caused by hepatitis B or C virus, or human immunodeficiency virus. The anti-DNA and antiphospholipid antibodies and cryoglobulin were all negative. Chest computed tomography revealed pulmonary edema, and the culture revealed Staphylococcus aureus. Immediately after admission, continuous renal replacement therapy (CRRT) and mechanical ventilation were started, and blood transfusion was administered. After 1 day of treatment, the patient regained consciousness with less dyspnea, and mechanical ventilation was discontinued. However, the patient's hemoglobin and platelet count remained persistently low, accompanied by anuria and hypertension. Given that the patient had microangiopathic hemolytic anemia, thrombocytopenia, and acute kidney injury, the diagnosis of aHUS was considered. Plasmapheresis with a dose of 3000 mL per session for 7 sessions was initially prescribed (Fig. 1A). In addition, antibiotics were given, and hemodialysis was continued. Losartan (200 mg/day) was given to reduce blood pressure for potential vascular protection. After 2 sessions of plasmapheresis, the patient's urine output gradually increased to the degree of polyuria and normalized thereafter. Meanwhile, the patient's serum creatinine decreased (Fig. 1A), while the platelet count (Fig. 1B) and hemoglobin (Fig. 1C) recovered. After 15 months of follow-up, the patient's platelets and hemoglobin remained normal, and the serum creatinine trended toward normality (Fig. 1).

Pathology of renal biopsy
On the 21st day after hospitalization, the patient's platelet count and hemoglobin had recovered to a safe level, and renal biopsy was performed for pathologic examination. The results revealed severe vacuolar degeneration of the tubular epithelia (Fig. 2A) and disruption of the brush border (Fig. 2B). A large number of high-density brown granules were found impacted in the tubular lumina (Fig. 2C), compatible with hemoglobin casts. Immunofluorescence staining for immunoglobulin G (IgG), IgA, IgM, C3, and C1q was negative. Electron microscopy revealed severe vacuolar degeneration of the renal tubular epithelia and segmental foot process fusion without electron-dense deposits (Fig. 2D).

Complement component analysis
After admission, the levels of the complement components were checked. The von Willebrand factor (VWF) activity was 72% (normal: 40-99%). The CFH concentration was 408.9 µg/mL (normal: 247.00-1010.80 µg/mL), and the anti-CFH antibody was negative.

Gene analyses
A mutation was identified in exon 12 of CFB, which changed a lysine at amino acid position 533 to an arginine (c.1598A>G p.Lys533Arg). No mutations were identified in the other complement genes examined.

Discussion
Hemolytic uremic syndrome (HUS) is a rare and serious disorder characterized by intravascular hemolysis, thrombocytopenia, and acute kidney injury. [4] Approximately 30% of HUS patients also develop central nervous system abnormalities and fever. [6] HUS typically follows a diarrheal episode associated with Escherichia coli O157:H7 infections. [5] However, 10% of patients with similar presentations do not have diarrhea and are diagnosed with aHUS. [4] aHUS portends a poor prognosis. [6]
That is, 25% of patients die during the acute phase, and 50% of patients eventually progress to end-stage renal disease (ESRD). [7] It can be challenging to make a correct diagnosis of aHUS initially, but an abrupt onset of hemolytic anemia, thrombocytopenia, and acute kidney injury should prompt the consideration of HUS. In the present case, the evidence of red cell fragmentation (schistocytes and polychromasia) on the peripheral blood smear, elevated LDH, and indirect hyperbilirubinemia aided in the diagnosis of aHUS, and the negative direct Coombs test helped to exclude the possibility of autoimmune hemolytic anemia. In cases of aHUS, the renal pathology typically presents findings such as glomerular endothelial damage leading to microthrombus formation within the glomerular capillaries and subsequent endothelial proliferation, thickening of the basement membrane, and the formation of double contours. [8] Surprisingly, in the present case, the renal damage mainly resulted from the hemolytic anemia, with hemoglobin casts blocking the tubules and leading to acute tubular necrosis. One possible reason may be that the renal biopsy was performed late in the disease course, by which time the endothelial damage had already recovered and was no longer discernible on pathologic examination. Nonetheless, based on the pathological findings of the present case, we consider that tubular damage in the form of acute tubular necrosis can be another possible presentation of aHUS. Genetic abnormalities are found in approximately 60% of aHUS patients. [9,10] Most mutations and variants in complement regulatory proteins related to aHUS involve the CFH, CFI, MCP, CFHR1-5 (complement factor H-related proteins), DGKE, and CFB genes. [2,11-13] Among complement-associated HUS, CFB mutations are relatively rare, [14][15][16][17] with a frequency of 1% to 2%. [11] The CFB gene encodes the factor B protein, an important component of the alternative pathway; its activation provides the active subunit Bb, which binds C3b to form the C3 convertase C3bBb. The catalytic site of C3bBb required for amplifying the alternative pathway is located within the Bb portion of the convertase. [16] In the literature, CFB mutations can enhance the formation of the C3 convertase or increase its resistance to inactivation, causing complement C3 overactivation, which in turn causes vascular endothelial injury, hemolytic anemia, and an episode of aHUS with kidney injury. [18] The pathogenic variant in the CFB gene (c.1598A>G) reported here has been described as a cause of HUS in one previous literature report. [19] The present report confirms that the mutation can lead to the occurrence of HUS. Although the present patient had a CFB mutation, he did not develop aHUS during childhood. The trigger of the patient's aHUS was likely the serious S. aureus pulmonary infection. The infection can activate the complement system, and a defect in factor B can then induce complement overactivation and precipitate an episode of aHUS. Consequently, the development of aHUS in the present patient may stem from the interaction between environmental factors and inherent genetic defects. At present, one of the most effective therapeutic regimens for aHUS is plasma exchange or infusion to correct the complement dysregulation. [20] This patient underwent plasma exchange, received antibiotics and hemodialysis, and obtained a favorable outcome.
After 15 months of follow-up, the patient remained clinically stable without recurrence. In conclusion, a rare heterozygous variant, c.1598A>G p.Lys533Arg, in the CFB gene, associated with adult-onset aHUS, was described and successfully treated. More reports of aHUS in patients with CFB gene mutations would likely provide further insights into the pathogenesis of this rare disease, possibly uncovering the underlying mechanisms to aid early diagnosis and the development of effective therapies.
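As a side note, the internal consistency of the reported HGVS nomenclature can be checked against the standard genetic code. The sketch below is purely illustrative and assumes the reference codon for Lys533 is AAA (lysine is encoded by AAA or AAG, and only AAA yields arginine when its second base changes A>G):

```python
# Minimal check that c.1598A>G is consistent with p.Lys533Arg; the reference
# codon AAA is an assumption, not taken from the sequencing data in the report.
CODON_TABLE = {"AAA": "Lys", "AAG": "Lys", "AGA": "Arg", "AGG": "Arg"}

cdna_pos, ref, alt = 1598, "A", "G"
aa_pos = (cdna_pos - 1) // 3 + 1          # 1598 falls within codon 533
offset = (cdna_pos - 1) % 3               # 0-based position inside the codon

ref_codon = "AAA"                         # assumed reference codon for Lys533
assert ref_codon[offset] == ref           # the reference base must match
alt_codon = ref_codon[:offset] + alt + ref_codon[offset + 1:]

print(f"codon {aa_pos}: {ref_codon} ({CODON_TABLE[ref_codon]}) -> "
      f"{alt_codon} ({CODON_TABLE[alt_codon]})")
# codon 533: AAA (Lys) -> AGA (Arg), matching p.Lys533Arg
```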
Relationship between Sleep Quality and Cognitive Function in Patients with Mild-to-Moderate Parkinson's Disease

To the Editor: Parkinson's disease (PD) is a common degenerative disease of the central nervous system (CNS) in middle-aged and elderly people. PD is characterized by resting tremor, myotonia, bradykinesia, and abnormal posture and gait. The incidence of PD increases with age. In addition to motor symptoms, nonmotor symptoms have raised additional concerns in recent years. Cognitive impairment is very common in PD patients. It is estimated that the prevalence of PD mild cognitive impairment (PD-MCI) is 20-50%,[1] which is present at the initial visit in some patients, and a great number of patients with PD-MCI eventually develop PD with dementia (PDD). PD patients are much more likely to develop dementia than the normal population. It has been found that two-thirds of PD patients suffer from different forms of sleep disorders, which are among the common nonmotor symptoms in PD patients. The symptoms of the various sleep disorders in PD patients include night insomnia, increased sleepiness, sleep fragmentation, reduced sleep efficiency, and rapid eye movement (REM) sleep behavior disorder, which have a serious impact on the patient's sleep quality and increase the risk of dementia.[2] Therefore, cognitive dysfunction and sleep disorders are two important nonmotor symptoms of PD, exerting a major impact on quality of life (QOL). Here, we evaluated the relationship between sleep quality and cognitive function in PD patients. From May 2016 to May 2017, we enrolled a total of 111 native Chinese patients with primary PD and without audiovisual dysfunction who were admitted to the Department of Neurology, the First Hospital of Hebei Medical University. The diagnosis of PD was based on the criteria of the United Kingdom PD Society Brain Bank. Moreover, the diagnosis of PDD conformed to the diagnostic criteria of dementia proposed by the Movement Disorder Society Task Force: (1) diagnosis of primary PD; (2) PD-related cognitive decline on the Mini-Mental State Examination (MMSE); (3) cognitive impairment that affected the patient's daily life ability; and (4) development of the extrapyramidal symptoms of PD before dementia, with a definite time interval between them. Patient information was collected by two trained neurologists, including name, sex, age, duration of illness, and educational level. Duration of disease was calculated from the time the patient initially complained of discomfort. The assessments were performed after medications had been initiated. The Unified PD Rating Scale (UPDRS) was used to assess the severity of the disease, and Hoehn-Yahr (H-Y) staging was utilized for disease rating. According to severity, the patients were divided into stages 1.0-5.0, comprising mild (1.0-2.0), moderate (2.5-3.0), and severe (4.0-5.0). Based on the complaints of patients and their families concerning declining cognitive function and interference with daily life activities, PD patients were divided into three groups (normal, MCI, and dementia) according to MMSE and MoCA (Beijing version) scores: MoCA ≥26 points with denial of cognitive decline was categorized as normal cognitive function; MoCA <26 points and MMSE ≥26 with a complaint of decreased cognitive function but no interference with daily life activities as MCI; and MMSE <26 with a complaint of decreased cognitive function and affected daily life as dementia.
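For clarity, the grouping rule above can be written out explicitly. The following sketch is a hypothetical restatement of the criteria (the function and argument names are ours, and the "unclassified" fallback for patients fitting none of the three definitions is our assumption):

```python
def classify_cognition(moca, mmse, complains_decline, daily_life_affected):
    """Assign a PD patient to the normal / MCI / dementia group."""
    if moca >= 26 and not complains_decline:
        return "normal"
    if moca < 26 and mmse >= 26 and complains_decline and not daily_life_affected:
        return "MCI"
    if mmse < 26 and complains_decline and daily_life_affected:
        return "dementia"
    return "unclassified"  # assumed fallback, not stated in the letter

print(classify_cognition(moca=27, mmse=29, complains_decline=False,
                         daily_life_affected=False))  # normal
print(classify_cognition(moca=23, mmse=27, complains_decline=True,
                         daily_life_affected=False))  # MCI
print(classify_cognition(moca=20, mmse=23, complains_decline=True,
                         daily_life_affected=True))   # dementia
```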
The Pittsburgh Sleep Quality Index (PSQI) was used to evaluate the quality of sleep in PD patients over the past month. Patients with a PSQI score ≥6 points were classified as suffering from sleep disorders, and patients with long-term use of sleeping pills were excluded. SPSS version 19.0 software (SPSS, USA) was used for statistical analysis. Student's t-test was used for the comparisons. The significance level was set at P < 0.05. There were a total of 111 (61 males and 50 females) PD patients, with a mean age of 66.5 ± 8.7 years, a mean educational level of 11.2 ± 3.5 years, and a mean disease duration of 6.1 ± 4.1 years. The mean H-Y stage was 2.20 ± 0.80, which was classified as mild-to-moderate PD, comprising mild disease in 59 (53.15%) patients and moderate disease in 52 (46.85%) patients. Patients with PSQI ≥6 were diagnosed as having sleep disorders. There were 67 (60.36%) patients with cognitive impairment, including 30 (27.03%) with MCI and 37 (33.33%) with dementia. The MMSE score in the dementia group was 23.04 ± 1.73, which was categorized as mild dementia. The comparison of sleep quality in PD patients at different cognitive levels showed that, among mild PD patients, sleep quality differed by cognitive level, with the poorest sleep quality in the dementia group. Pairwise comparison showed that the PSQI score was significantly higher in the dementia group than in the normal group, and the difference was statistically significant (P < 0.05) [Supplementary Table 1]. Different cognitive levels in moderate PD patients were also associated with different sleep quality, again with the poorest quality in the dementia group. The pairwise comparison indicated significantly higher PSQI scores in the dementia group than in the normal group (P < 0.01) [Supplementary Table 2]. Regarding sleep quality, no significant difference was noted between mild and moderate PD patients (P = 0.935). With respect to general information in PD patients with different sleep quality, there was no significant difference between the two PD groups regarding age, educational level, duration of disease, and H-Y staging (P > 0.05). However, the UPDRS score was significantly higher in the sleep disturbance group than in the normal group, whereas the MoCA and MMSE scores were remarkably lower than those in the normal group (both P < 0.05) [Table 1]. The correlation analysis showed a positive correlation between cognitive level and sleep quality in PD patients after controlling for H-Y stage (r = 0.461, P < 0.01). PD is a common degenerative disease of the CNS in the elderly. In addition to motor symptoms, nonmotor symptoms such as cognitive decline, sleep disorders, autonomic nerve damage, anxiety, depression, and psychiatric symptoms seriously affect the QOL of patients, thereby raising additional concerns. A considerable number of people experience decreased sleep quality at the onset of early PD or before PD symptoms appear. Common types of sleep disorders in PD patients include difficulty falling asleep, wakefulness/sleep fragmentation, daytime lethargy, sleep deprivation, REM sleep behavior disorder, restless legs syndrome, and sleep episodes. In addition, altered sleep parameters, including changes in sleep structure, have been observed. Patients may experience subtle difficulty falling asleep, difficulty maintaining sleep, disordered sleep structure, a higher incidence of asymptomatic periodic limb movements, and REM sleep behavior disorder, all adversely impacting patients' QOL.
There are studies suggesting that REM sleep behavior disorder (RBD) is a risk factor for cognitive impairment in PD patients. [3] As we reported, the mean H-Y stage of all 111 patients was 2.20 ± 0.80, indicating mild-to-moderate PD. The PSQI scale was used to assess the quality of sleep in the 111 PD patients enrolled, indicating a prevalence of sleep disorders of 54.95%. PD sleep disorders have been reported to be associated with aging, severity of illness, depression, and dopaminergic dose. There is no consensus on the prevalence of sleep disorders in PD patients across studies, although it is generally higher than our result. Several studies have shown that PD patients suffer abnormal sleep at an early stage, which is likely to worsen as the disease progresses. Even at an early stage, PD patients have a significantly decreased health-related quality of life (HR-QOL) compared with their peers, and PD impacts HR-QOL in various ways. PD at an early stage has a limited effect on HR-QOL due to relatively mild motor symptoms; thus, priority should be given to nonmotor symptoms. The authors of that study recognized depression, fatigue, and sleep disorders as the leading causes of the decline in HR-QOL in early PD patients. [4] The current findings reveal that a high rate (54.95%) of sleep disorders is noted in mild-to-moderate PD patients, and sleep disorders negatively impact cognitive function in PD patients, thereby leading to a further decline in HR-QOL. Sleep disorders and cognitive disorders, as the two major nonmotor symptoms of PD, interact with each other. Sleep disorders can be seen as a concomitant symptom of cognitive impairment and can reflect involvement of the brainstem nuclei, thus impairing cognitive function. The current results showed that sleep disorders in PD patients were associated with a high prevalence of cognitive impairment and a marked decline in cognitive function. Consistent with the above findings, PD patients with dementia had poorer sleep quality than those with normal cognitive function, suggesting an interaction between sleep quality and cognitive function. Based on studies in Western countries, [5] α-synuclein, Aβ protein, and tau protein have been found in multiple functional brain areas of PD patients with sleep disorders. Currently, Aβ protein and tau protein are recognized to be associated with the pathological changes of Alzheimer's disease. [6] In addition, α-synuclein abnormally aggregates to form Lewy bodies, the characteristic pathological change of Lewy body dementia, suggesting a common pathological basis of sleep disorders and cognitive decline. This study has several limitations. First, the patients enrolled had mild-to-moderate PD and relatively mild cognitive impairment. Second, the sample size was small. Third, the effect on sleep of the medications patients took was not taken into account. Last, the types of sleep disorders patients suffered were not specified. Patients with severe PD should be included, and a larger sample size is needed. In addition, research methods need to be improved to obtain more detailed and accurate data. In conclusion, our findings suggest that sleep disorder in PD patients can be considered a concomitant symptom of cognitive decline and can further aggravate cognitive impairment, indicating a potential interaction. Although the underlying pathophysiological processes remain incompletely understood, the association between sleep disorders and cognitive function in PD patients may suggest a decrease in dopamine levels in the limbic system.
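The "controlling for H-Y stage" adjustment reported above is a first-order partial correlation. As a minimal sketch, with simulated data standing in for the raw scores (which are not reproduced here), the standard formula r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)) can be implemented directly:

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Simulated stand-ins: H-Y stage, PSQI (sleep quality) and a cognitive score.
rng = np.random.default_rng(1)
hy = rng.normal(2.2, 0.8, 111)
psqi = 6 + 0.5 * hy + rng.normal(0, 2, 111)
cognitive = 4 + 0.8 * psqi + 0.3 * hy + rng.normal(0, 2, 111)
print(round(partial_corr(cognitive, psqi, hy), 3))
```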
Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Improving outcomes in patients with acute upper gastrointestinal bleeding

Editorial
In this issue of the Journal, there are three articles on acute upper gastrointestinal bleeding (AUGIB). Kola et al. [1] report a randomized controlled trial (RCT) that compared a restrictive with a liberal transfusion strategy in patients with AUGIB. The transfusion thresholds were 7 g/dl and 8 g/dl, respectively. The primary outcome endpoint was mortality at day 45. In the restrictive and liberal groups, 10 of 112 (8.9%) and 12 of 112 (10.7%) patients, respectively, met the primary endpoint (absolute difference 1.8%; 95% CI -6.27% to 9.93%). The RCT was of a non-inferiority design with a margin of 3.5%. The authors concluded that a restrictive transfusion strategy would not be inferior to a liberal transfusion strategy. There is a methodological issue over the acceptance of non-inferiority.
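The issue can be made concrete with a back-of-the-envelope calculation: a Wald 95% confidence interval for the mortality difference approximately reproduces the reported interval, and the bound relevant to the 3.5% margin lies well beyond it. The sketch below is our own illustration, not the trial's actual analysis:

```python
from math import sqrt

def risk_diff_ci(e1, n1, e2, n2, z=1.96):
    """Wald 95% CI for a difference in two proportions (group 1 minus group 2)."""
    p1, p2 = e1 / n1, e2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2, p1 - p2 - z * se, p1 - p2 + z * se

# Restrictive: 10/112 deaths; liberal: 12/112 deaths at day 45.
diff, lo, hi = risk_diff_ci(10, 112, 12, 112)
print(f"restrictive - liberal: {diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# ~ -1.8% (95% CI -9.6% to +6.0%), the mirror image of the reported
# -6.27% to 9.93%. The upper bound relevant to the restrictive strategy
# (~+6%) exceeds the 3.5% margin, so non-inferiority is not formally shown.
```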
The authors [2] should nonetheless be commended for this RCT, which adds to other RCTs in the literature. In the current trial, 48% of patients had cirrhosis. With a small difference between transfusion thresholds (7 vs. 8 g/dl), one would not be surprised to see no difference in outcomes. The mean number of packed red cell units transfused was not significantly different between groups (1.72 and 1.96 units, respectively). The Spanish multicenter study by Villanueva et al. [3] enrolled 921 patients, 21% of whom had variceal hemorrhage and 31% of whom had cirrhosis. This landmark trial compared a transfusion threshold of 7 g/dl with one of 9 g/dl. It found a 4% difference in mortality at day 45, mostly observed in patients with Child-Pugh class A or B cirrhosis. The TRIGGER trial [4], conducted in the United Kingdom, was a clustered RCT. Six hospitals were randomized to adopt either a restrictive or a liberal transfusion strategy (8 g/dl vs. 10 g/dl). A total of 936 patients were enrolled. Fewer patients in the restrictive transfusion group received blood transfusion (33 vs. 46%), and the mean number of packed red cell units transfused was 1.2 and 1.9, respectively; this difference was not statistically significant. Not surprisingly, in this trial, clinical outcomes following either strategy were similar. All of the above-cited RCTs varied in their designs and patient demographics. The caveats are, however, in exsanguinating patients and in patients with cardiovascular co-morbidities, in whom withholding red cell transfusion can be hazardous. In a pilot RCT [5] that enrolled 110 patients with acute coronary syndrome or angina undergoing percutaneous coronary intervention, 6 (10.9%) patients in the liberal group (transfusion when Hb <10 g/dl) met the primary outcome (death, myocardial infarction or unscheduled revascularization) compared with 14 (25.5%) in the restrictive group (transfusion when Hb <8 g/dl). A second study in this issue of the Journal is a retrospective study by Almadi et al. [6], conducted in a university hospital. In this cohort of 259 patients with a mean age of 57.1 years, 80.1% were bleeding from a non-variceal cause. The authors compared their study with a large United Kingdom audit of AUGIB in 2007. [7] There was more ulcer disease (36 vs. 27%), a lower rebleeding rate (8.9 vs. 13%) and a lower crude mortality rate (4.4 vs. 10%). Interestingly, only 13.9% in this cohort, compared with 43% in the UK audit, received red cell transfusion. The authors suggested a link between the lower rate of transfusion and mortality. In this series, none of the patients required surgery for hemostasis. The third study was a time-trend analysis of the causes of upper gastrointestinal bleeding in 2075 patients over 13 years (2004-2016) from a single tertiary care public hospital in Saudi Arabia. [8] The causes of bleeding were quite consistent throughout these years, with non-variceal causes constituting 80.5% of them. Gastro-duodenal ulcers (34.3%) were the dominant endoscopic diagnoses. The authors of these studies ought to be commended for their contribution to medical knowledge. With these studies, readers are allowed a glimpse of the upper gastrointestinal bleeding landscape in the Kingdom, the epidemiology of the condition, and how it has been managed. Through critical appraisal, authors can compare and contrast current practice with what is happening in the rest of the world. Advancements in the management of patients with AUGIB have come in small increments.
Endoscopic treatment represents a major advance and is the cornerstone of the management of AUGIB. Acid suppression and the use of vasoactive drugs have reduced recurrent bleeding in non-variceal and variceal bleeding, respectively. Patients with severe variceal bleeding are increasingly salvaged using transjugular intrahepatic portosystemic shunts. In refractory non-variceal cases, angiographic treatment is now preferred over surgery. As evident from one of the studies, transfusion medicine is now an integral component of management. The International Consensus Group on the management of nonvariceal upper gastrointestinal bleeding suggested several research areas with a view to further improving the care of patients with AUGIB. [9] These include issues in critical care, including the optimal fluid regimen; the management of anti-thrombotics in the acute setting and in secondary prophylaxis; the identification of those at risk of further bleeding and death; novel endoscopic treatments, specifically TC-325 (a hemostatic powder) and over-the-scope clips; and the efficacy of different regimens of acid suppression. We are excited to see research studies abound on the management of AUGIB. These studies will no doubt continue to lead to improvements in patient care.
General scientific and regional conceptual approaches to compilation of Red Data Books of Soils

The authors explain the need to compile the Red Data Book of soils similar to the existing Red Data Books of plants and animals. The paper reveals some theoretical and methodological approaches, from general scientific and regional points of view, relevant to the compilation of such works. The main soil objects in the pedosphere of the Kirov region that meet the zonal standards are identified, i.e. rare, unique and endangered soils. These soils are to be included in the environmental documents mentioned above. The results can be used to develop a local network of specially protected natural territories based on a new, pedogenic, category of objects of conservation of natural heritage as part of reserves, micro-reserves, and soil monuments of nature. These materials are of interest to the scientific community and environmental services of the Kirov region and other constituent entities of the Russian Federation.

Introduction
The idea of compiling the Red Data Book of soils is based on the provision that the unique and irreplaceable soil cover performs a number of important biospheric functions. According to G.V. Dobrovol'sky and E.D. Nikitin [1], these functions are biodiversity preservation, ensuring the bioproduction process, maintaining the stable gas composition of the atmosphere and the chemical composition of natural waters, and, consequently, the preservation of life on Earth. This movement, which was born in the late 1970s, was inspired by our domestic scientists E.D. Nikitin and G.V. Dobrovol'sky [2][3][4][5][6]. At first, it was informal since the country's environmental legislation did not have an appropriate regulatory framework. The situation changed in 2002 when the government of the Russian Federation adopted Federal Law No. 7 on Environmental Protection, Article 62 of which states, "Rare and endangered soils are subject to state protection, and the Red Data Book of soils of the Russian Federation and the Red Data Books of soils of the constituent entities of the Russian Federation are established for the purpose of their registration and protection..." [7]. The latter served as an incentive to intensify work in this direction by enthusiasts from a number of the country's regions [8][9][10][11][12][13]. These works focused on the foundation of scientific approaches to compiling the Red Data Book of soils and on determining the taxonomic rank and list of soil objects which need priority protection. At the suggestion of one of the paper's authors, the first edition of the Red Data Book of soils of Russia [14] included passport data on a series of valuable soil objects identified by him in the 1980-90s in the territory of the Kirov region. To date, we have collected representative materials on typical and rare components of the soil cover of the Kirov region necessary to set an urgent goal: to prove the need to compile the Red Data Book of soils. The aim of this paper is to propose and discuss conceptual approaches to compiling the regional Red Data Book of soils, taking into account general scientific and regional aspects.

Theoretical and methodological provisions
In our opinion, the following principles and approaches should be considered the general scientific provisions on the compilation of the Red Data Books of soils (Table 1).

Table 1. Conceptual principles and approaches to the compilation of the Red Data Book of soils
Principles: (1) phenomenological principle; (2) principle of equivalence of soil cover components; (3) principle of priority of virgin soil objects; (4) principle of representativeness; (5) principle of rarity of soil cover components.
Approaches: (1) regional approach; (2) zonal approach; (3) azonal approach; (4) catenary approach; (5) natural historical approach.
The phenomenological principle implies the recognition of the importance of any component of the pedosphere, and of nature as a whole, as an original natural historical body that should be studied and preserved. This was recognized by V.V. Dokuchaev at the dawn of genetic soil science in the provision on soils as the unique "fourth kingdom" of nature alongside plants, animals and minerals. In the second half of the 20th century, it was developed in the form of the doctrine of the irreplaceability of the soil cover for a number of biospheric and anthropospheric functions, including the preservation of life on Earth.

The principle of equivalence of soil cover components is the relative equivalence of soil cover components regardless of the occupied area, due to their possible irreplaceability as an ecological niche for the inhabitants of local biomes that are closely related to soils as their living environment.

According to the regional approach, the local natural conditions and features of the soil cover of any constituent entity of the Russian Federation are taken into account. For the Kirov region, these include: location in three subzones, terrain heterogeneity, diversity of parent and underlying rocks, the belonging of soils to the podzolic, gray forest, sod, peat and alluvial types of pedogenesis, the polygenetic soil cover, and some others, as described below.

The principle of representativeness implies the inclusion in the Red Data Book of soils of the main zonal, azonal and intrazonal soil cover components, which reveal the diversity of local soils and of the modes and processes of pedogenesis.

According to the principle of the priority of virgin soil objects, these are considered the only reference samples of the natural, virgin soil cover, which serve as a reference point for assessing the primary state (morphology, substantive properties, functioning, dynamics, development, evolution of the local pedosphere) under conditions of possible technogenic transformation of soils. At the same time, they are the natural, reproductive and evolutionary habitat of most flora and fauna species, including the microbiota.

The zonal approach implies the mandatory inclusion of background soil cover components in the Red Data Book of soils. In the Kirov region, located in the subzones of the middle and southern taiga and mixed forests, the podzolic, sod-podzolic and gray soils on clay-loam soil-forming types are to be included in the Red Data Book of soils.

The azonal (lithogenic) approach is an addition to the zonal approach, with a selective sampling of soil-forming types according to two complementary criteria: a) typicality and homogeneity, for zonal standards; b) exoticism or uniqueness of the soil-forming substrates on which certain soil differences of scientific or other interest were formed. In the Kirov region, cover loams are considered the optimal soil-forming substrate for zonal soil standards. They are known in all the subzones and most geomorphic positions and are a good option due to their stable properties: the homogeneity of granulometric and chemical-mineralogical composition, structure, etc.
Exotic substrates, by contrast, include the phosphate-bearing Jurassic-Cretaceous deposits of the Vyatka-Kama district, which are exceptionally rare in the region.

The catenary approach is a conjugate representation of soil types and subtypes of the eluvial, semihydromorphic and hydromorphic series of different subzones developed on homogeneous soil-forming types. This allows us to reveal, in a comparative aspect, the soil-geographical regularities of soil functioning and the lateral material-energy exchange between soils more fully.

The natural historical approach implies the selection of objects taking into account the regional history of the soil cover in the post-glacial period and/or earlier stages of the evolution of nature. This approach is of particular importance for the Vyatka region due to the location of its southern and central parts near the boreal ecotone of European Russia, at the junction of the taiga and forest-steppe biomes. At a very dynamic post-glacial stage of development, in the pre-boreal, boreal, Atlantic, sub-boreal and sub-Atlantic stages of the Holocene, this caused significant climate changes and the migration of landscape zones [15][16][17]. An important consequence was the formation of a number of soil types with clear morphological and analytically fixed signs of polygenesis in the form of relict (residual and buried) humus horizons, etc. In particular, these are soils with so-called second humus horizons, which are now widely known as part of several background types of watershed soils as well as of valley landscapes on various soil-forming types.

The principle of rarity of soil cover components is closely related to the above approach. It consists in ranking soil cover components taking into account their scientific value, biospheric role, productivity, and the threat of degradation and extinction of certain taxa. This principle refers to almost all soils with a polygenetic profile in the Vyatka region, as well as to the intensively exploited soils of the gray forest type.

Objects and results
In the light of the above, the following groups of reference and rare soils are to be included in the Red Data Book of soils (Table 2).

Table 2. Groups of reference and rare soils of the Vyatka region
I. Reference soils: a) primary standards; b) local standards; c) reference complexes.
II. Rare soils: a) unique soils; b) rare soils on the territory of the Russian Federation; c) rare soils on the territory of the region; d) endangered species.

Zonal types and subtypes of soils (podzolic, sod-podzolic, gray) formed under upland conditions under virgin or conditionally indigenous forests on cover loam, known in the corresponding subzones of the region, are considered primary standards. However, when moving to the south, it is difficult to sample virgin geosystems due to the high degree of agrogenic transformation of the land. Another difficulty is the partial preservation of traces of former pedogenesis in the profiles of sod-podzolic and gray soils in the form of second humus horizons, i.e. relics of the boreal-Atlantic time of the post-glacial period. Sod-podzolic soils under secondary forests are, in fact, often derivatives of the most eluvial varieties of soils with second humus horizons. The criteria for the selection of local standards are the features of the lithology of the soil-forming types, the topography, the hydrothermal regime or the historical development. On the territory of the Vyatka region, soils developed on morainic loams, on the eluvium of Permian bedrock or on binomial deposits (sands on clays and vice versa) are local standards.
They are quite common in the middle and southern taiga and, in part, in the mixed forest landscapes of our region. The flat-undulating plateaus of the southern right bank of the Vyatka river are zonal reference complexes with soil combinations due to meso- and microrelief. Here, along the slopes, the following soil series can be distinguished: gray humus - gray dark humus - gray dark humus gleyic - gray dark humus gley soils. Rare soil standards are usually soils that are formed on sparsely distributed soil-forming types or under unusual hydrothermal conditions and are characterized by a complex history of development, which has affected their appearance and properties. According to the natural historical approach, reflecting the complex history of the formation of the soil cover of the Vyatka region in the post-glacial period, this category should first of all include soils with second humus horizons and soils with buried humus horizons. These polygenetic soils are at once rare, unique and/or endangered because they are relics of earlier Holocene stages. Their traces have been preserved to this day in the morphology and in the properties of the mineral and organic phases. We have studied such rare pedo-objects using physical, chemical, biochemical, physical-chemical and geochronological (14C) methods as part of a series of soil types and subtypes in the south of the Kirov region. They are found mainly in the interfluve areas (sod-podzolic, humus-gley, gray, gray gleyic and sod-carbonate soils) and, to a lesser extent, in valley landscapes (paleoalluvial soils, etc.). The most typical among them are soils with remnants of second humus horizons from earlier stages of pedogenesis, lying at the level of the modern near-surface humus-accumulative AU(B)[hh], accumulative-eluvial AEL[hh] or, less often, middle (illuvial-textural B[hh]) horizons. Regardless of the depth of occurrence (from 15-50 to 100-120 cm), the second humus horizons reveal a considerable age (about 5-8 thousand years or more) of the humic acids in the composition of the organic matter, which corresponds to the early and middle Holocene. These soils with a binary humus profile were formed during the temporary shift of natural zones to the north, under a different combination of pedogenetic factors more appropriate to the former forest-steppe environments. Starting from the second half of the Holocene, they entered the phase of accumulative-eluvial soil formation following the return migration of natural zones and the expansion of boreal landscapes to the south. As a result, degradation processes of the organic (and mineral) phases were provoked, which is why the traces of the early Holocene accumulative stage of pedogenesis are only partly preserved in the appearance and substantive properties of the soils. Among them are the residual second humus horizons themselves, their ancient age, the humate-calcium composition of the organic matter of the second humus horizons, and other markers of the former intensive bio-accumulative stage of soil genesis. If the direction of the spontaneous evolution established about 5 thousand years ago persists, we can expect soils with second humus horizons to disappear completely from the soil map of the area in the near or distant future. This explains the relevance of the priority inclusion of these polygenetic soils in the Red Data Book of soils.
Their presence in our region throws further light not only on the history of the development of the soil and vegetation cover, climate and landscapes in general in the late- and post-glacial period (12-0 thousand years ago), but also has significant potential for predicting scenarios for the future state of the pedosphere and landscapes of the Vyatka region. The soils with a binary humus profile are of high scientific value due to the discovery of gray residual-carbonate soils with second humus horizons and in situ paleocarbonate pedospheric relics, theoretically predicted earlier by N.A. Karavaeva, A.E. Cherkinsky and S.V. Goryachkin [18], but unknown until our research on the right bank of the lower Vyatka [16]. The sod-podzolic soils with second humus horizons developed on the boulder-loam sediments of the Chepetsko-Kilmezskaya upland (near the village of Medvezhena) should also be included in the Red Data Book of the region. No less valuable are the gray soils with second humus horizons and buried humus horizons on the floodplain terraces of the Gon'binka river in the Malmyzhsky district.

Conclusion
The obvious conclusion is that the soil cover is unique, especially in the areas of cover loam of the middle and southern Vyatka basin, and has been strongly transformed during agricultural development. In agricultural landscapes, most soils with relict phenomena have almost disappeared due to the plowing of the second humus horizons and subsequent water erosion. Therefore, it is necessary to urgently conserve certain areas of these unique bodies, containing biotic and abiotic components that are disappearing at present, as the priceless natural heritage of the Vyatka region for science, the biosphere and society. The first steps in this direction should be the Red Data Book of soils and soil reserves and mini-reserves with reference zonal, rare, unique and endangered soils.
EGF Functionalized Polymer-Coated Gold Nanoparticles Promote EGF Photostability and EGFR Internalization for Photothermal Therapy
The application of functionalized nanocarriers in photothermal therapy for cancer ablation is of wide interest. The success of this application depends on the therapeutic efficiency and biocompatibility of the system, but also on the stability and biorecognition of the conjugated protein. This study aims at investigating the hypothesis that EGF functionalized polymer-coated gold nanoparticles promote EGF photostability and EGFR internalization, making these conjugated particles suitable for photothermal therapy. The conjugated gold nanoparticles (100-200 nm) showed a plasmon absorption band located within the near-infrared range (650-900 nm), optimal for photothermal therapy applications. The effects of temperature, of polymer-coated gold nanoparticles and of UVB light (295 nm) on the fluorescence properties of EGF have been investigated with steady-state and time-resolved fluorescence spectroscopy. The fluorescence properties of EGF, including the formation of Trp and Tyr photoproducts, are modulated by temperature and by the intensity of the excitation light. The presence of polymer-coated gold nanoparticles reduced or even prevented the formation of Trp and Tyr photoproducts when EGF was exposed to UVB light, protecting in this way the structure and function of EGF. Cytotoxicity studies of conjugated nanoparticles carried out in normal-like human keratinocytes showed small, concentration-dependent decreases in cell viability (0-25%). Moreover, conjugated nanoparticles could activate and induce the internalization of overexpressed Epidermal Growth Factor Receptor in human lung carcinoma cells. In conclusion, the gold nanoparticles conjugated with Epidermal Growth Factor and coated with biopolymers developed in this work show a potential application for near-infrared photothermal therapy, which may efficiently destroy solid tumours while reducing the damage to healthy tissue.
Introduction
Nanocarriers with improved characteristics, such as size, shape and plasmonic surface properties, are selected for photonic therapeutic applications in cancer treatment [1]. One of the most studied and most promising applications is near-infrared (NIR) photothermal therapy based on gold-nanoparticle-mediated hyperthermia and, consequently, protein denaturation and tissue necrosis [1]. As multifunctional systems, nanocarriers are further functionalized with small targeting peptides [2]. Therefore, the success of photothermal therapy depends on the therapeutic efficiency and biocompatibility of the system, but also on the stability and biorecognition properties of the conjugated biomolecule. Bio-functionalization of nanoparticles with EGF has been applied to specifically target cancer cells that overexpress EGFR and is therefore of ample interest for photothermal cancer treatment. EGF offers many advantages for this type of pharmaceutical application: 1) EGF is smaller (53 amino acids; MW: 6 kDa) than antibodies or other EGFR-specific ligands used for the same purpose; 2) unlike EGF, antibodies can trigger severe immune responses leading to cytotoxicity [1]; 3) EGF has three SS bonds, two Trp, five Tyr and hydrophobic residues, all suitable for interactions with nanocarriers [3]; and 4) EGF is stable at physiological conditions and neutral pH, since its pI value is around 4.55, conferring on the peptide a negative charge at pH > 7 [4].
Fourier Transform Infrared (FT-IR) studies also showed that EGF presents a thermal unfolding at pH 7.2 that starts at 40°C, with the transition midpoint at 55.5°C; complete denaturation is observed above 76°C [5]. Another study evaluated the application of EGF in skin patches, showing the resistance of this peptide to temperature (Tm of approximately 79°C) [6]. However, the potential physiological activation and stimulation of cancer growth has hindered the use of EGF as a targeting peptide in drug delivery systems, an effect that depends on the release of the anticancer drug [7]. When conjugated to metallic nanoparticles, EGF promotes a rapid internalization into cancer cells [8], and cancer destruction can be achieved by injecting the light-absorbing nanoparticles locally and applying laser-mediated hyperthermia directly to the tumour. Therefore, it is our aim to use EGF-conjugated HAOA-coated gold nanoparticles with a plasmon absorption band located in the near-infrared (NIR) range (i.e., 650-900 nm) for photothermal therapy and local hyperthermia, without damage to the surrounding tissues [9]. This study describes the behaviour of EGF when exposed to temperature, UVB light (295 nm) and quenchers, such as gold nanoparticles coated with hyaluronic and oleic acids (HAOA). The use of oleic acid (OA) and polymers like hyaluronic acid (HA) can further promote the interaction and entrapment of EGF onto gold nanoparticles, independently of the pI and the pH of the solution, as previously reported [10]. In addition, HA is described as an excellent fluorescence quencher [11] and as conferring structural stability to small proteins [12]. OA is also described as a good protein fluorescence quencher [13]. The presence of quenchers shortens the fluorescence lifetimes and may confer protection against photochemistry. Therefore, we have investigated whether the presence of gold nanoparticles coated with OA and HA protects the attached EGF from UV wavelengths normally used to trigger protein fluorescence, such as 295 nm. UV excitation of proteins causes protein conformational changes upon excitation of the aromatic residues, i.e., tryptophan (Trp), tyrosine (Tyr) and phenylalanine (Phe). Three main photoproducts are kynurenine (Kyn, a photoproduct of Trp), N-formylkynurenine (NFK, a photoproduct of Trp) and dityrosine (DT, a photoproduct of Tyr) [14][15][16][17][18]. Furthermore, UV excitation of the side chains of aromatic residues induces the disruption of disulphide (SS) bonds, mediated by an electron transfer process, leading to the formation of a transient disulphide electron adduct and to changes in the fluorescence quantum yield of proteins [16,17]. The effect of UV light on the structure and function of key medically relevant proteins, such as the Epidermal Growth Factor Receptor (EGFR) [19], insulin [14] and plasminogen [15], has been reported. The present study reports the time-dependent effect of continuous 295 nm excitation of free EGF on the peptide's fluorescence emission intensity, as a function of irradiance level (power/unit area) and temperature. Trp was selected as an intrinsic molecular probe and SYPRO Orange was used as an extrinsic molecular probe in order to monitor protein conformational changes [20]. The formation of the photoproducts NFK, Kyn and DT has been monitored. Moreover, the expected protective effect provided by HAOA-coated gold nanoparticles against 295 nm-induced photochemistry on EGF was investigated by fluorescence spectroscopy and, structurally, by circular dichroism spectroscopy.
Binding of EGF and of EGF-conjugated HAOA-coated gold nanoparticles to EGFR, present on the cell membrane of A549 human lung carcinoma cells, was monitored using confocal fluorescence microscopy, and cytotoxicity assays (MTT) were carried out in non-cancerous human immortalized keratinocytes, the HaCaT cell line.
Fluorescent probes were obtained from Life Technologies and used as molecular probes for confocal microscopy and protein conformational studies. Primary mouse monoclonal anti-EGFR neutralizer antibody LA1 was obtained from Millipore (05-101). The water used for buffer preparation was purified through a Millipore system. Thiazolyl Blue Tetrazolium Bromide (MTT), Fetal Bovine Serum (FBS), puromycin and penicillin/streptomycin were supplied by Sigma-Aldrich (Steinheim, Germany), of cell culture grade. Dulbecco's Modified Eagle's Medium (DMEM) was supplied by Biowest (Nuaillé, France) and DMSO was supplied by Merck (Darmstadt, Germany).
Preparation of EGF stock solution and EGF-conjugated gold nanoparticles
A 2.5 μM (16.5 μg/mL) stock solution of EGF was prepared in 2 mM Phosphate Buffered Saline (PBS) at pH 7.4. In order to prepare EGF-conjugated gold nanoparticles, the EGF stock solution at 2.5 μM was mixed with the gold nanoparticles solution (0.22 mM) and the hyaluronic acid-oleic acid (HAOA) solution (1 mg/mL), at a 1:1 (v/v) ratio. The reaction mixture was kept for 30 min at room temperature and then left overnight at 4°C, protected from the light. Gold nanoparticles were produced based on the addition of an aqueous extract of Plectranthus saccatus (10 mg/mL) as the main reducing and capping agent [21]. The aqueous plant extract was used as an alternative to cetyl trimethylammonium bromide (CTAB) and prepared according to the procedure described by Rijo et al. (2014), using a microwave method [22]. The nanoparticles suspension was centrifuged twice at 500 x g for 20 min in a FV2400 Microspin (BioSan, Riga, Latvia) to remove unbound peptides. The pellet was re-suspended in PBS buffer (pH 7.4). The EGF stock solution was stored at -20°C until further use.
EGF structure analysis and gold nanoparticles structure design
Crystallographic data used for the display of the 3D protein structure (Fig 1) were extracted from 1JL9.pdb (3D structure of EGF, chain B), using Discovery Studio 4.1 (Accelrys Software, San Diego, CA, USA). Distances between protein residues were obtained by using the monitor tool in the program (Table 1).
Table 1. Shortest spatial distances between disulphide (SS) bonds and aromatic residues (tryptophan and tyrosine) in EGF chain B (1JL9.pdb). The shortest distances (< 12 Å) between atoms of each pair of elements (Trp, Tyr and disulphide bonds) were considered. For Trp and Tyr residues, only atoms belonging to the indole and benzene rings were considered, and for SS bonds one of the SG atoms. (W = Trp; Y = Tyr; the PDB atom-type descriptor is given in parentheses.)
Adobe Illustrator CS5 (Adobe Systems Software Ireland Ltd.) was used in order to graphically display the EGF-conjugated HAOA-coated gold nanoparticles.
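As an aside, the shortest-distance survey in Table 1 can also be reproduced computationally. The sketch below (not part of the original workflow, which used Discovery Studio) uses Biopython; the file name, chain and atom selections follow the text, and the 12 Å cut-off matches the table's criterion.

```python
# Minimal sketch: shortest ring-to-SS-bond distances in EGF (1JL9, chain B).
# Assumes 1JL9.pdb has been downloaded locally; Biopython Atom objects
# overload '-' to return the inter-atomic distance in angstroms.
from Bio.PDB import PDBParser

RING_ATOMS = {
    "TRP": {"CG", "CD1", "CD2", "NE1", "CE2", "CE3", "CZ2", "CZ3", "CH2"},
    "TYR": {"CG", "CD1", "CD2", "CE1", "CE2", "CZ"},
}

chain = PDBParser(QUIET=True).get_structure("EGF", "1JL9.pdb")[0]["B"]

# SG atoms of all cysteines stand in for the disulphide bridges.
sg_atoms = [(res, atom) for res in chain if res.get_resname() == "CYS"
            for atom in res if atom.get_name() == "SG"]

for res in chain:
    name = res.get_resname()
    if name not in RING_ATOMS:
        continue
    ring = [a for a in res if a.get_name() in RING_ATOMS[name]]
    # Closest (ring atom, SG atom) pair for this aromatic residue.
    dist, atom, cys, sg = min(
        ((a - s, a, r, s) for a in ring for r, s in sg_atoms),
        key=lambda t: t[0])
    if dist < 12.0:  # the paper only tabulates distances below 12 angstroms
        print(f"{name}{res.id[1]}({atom.get_name()}) - "
              f"CYS{cys.id[1]}(SG): {dist:.1f} A")
```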
Steady-state fluorescence spectroscopy studies
Steady-state fluorescence emission spectra were collected upon excitation of the Trp pool of the protein at 295 nm. Excitation spectra were acquired with the emission wavelength at 330 nm. All measurements were conducted on an RTC 2000 fluorescence spectrometer (Photon Technology International, Canada, Inc., 347 Consortium Court, London, Ontario N6E 2S8) with a T-configuration, using a 75-W Xenon arc lamp coupled to a monochromator. Samples were analyzed in a high-precision quartz cell with a 10 mm x 2 mm light path (Hellma Analytics) and gently shaken before each measurement. All slits were set to 5 mm.
Continuous 295 nm illumination of EGF
Temperature effect on EGF photochemistry. Continuous 295 nm illumination of EGF (fresh sample, 2.5 μM) was carried out for 2 hours and the protein's fluorescence emission intensity at 330 nm was monitored at five different temperatures: 10°C, 15°C, 20°C, 25°C and 30°C (Fig 2). The excitation slit was set at 0.8 mm, with an equivalent lamp power of 1.67 μW. Fresh samples were used for each experiment. Emission and excitation intensity spectra were corrected in real time for oscillations in the emission intensity of the excitation lamp. The Arrhenius plot for free EGF was also represented and all parameters calculated, as explained further in the "Data analysis" section (Fig 3).
Light power effect on EGF photochemistry. Continuous 295 nm illumination of EGF (fresh sample, 2.5 μM) was carried out for 2 hours and the peptide's fluorescence emission intensity at 330 nm was monitored using different excitation slit openings: 0.1 mm, 0.5 mm, 0.8 mm, 1.2 mm and 2.0 mm, corresponding to 0.12 μW, 0.30 μW, 1.67 μW, 2.34 μW and 4.40 μW, respectively (Fig 4). Fluorescence excitation (em. fixed at 330 nm) and emission (exc. fixed at 295 nm) spectra of EGF were acquired before and after each EGF illumination using the different excitation slit openings. The relation between excitation slit size and excitation power was determined by measuring the power level at the cuvette location with a power meter (Ophir Photonics StarLite Meter ASSY ROHS, P/N7Z01565, Jerusalem, Israel) and a power head (Ophir Photonics, 30A-BB-18 ROHS, P/N7Z02692, Jerusalem, Israel) upon varying the excitation slit size, as previously reported for lysozyme [23]. The temperature of the solution was kept at 20°C using a Peltier element at the cuvette holder location. A fresh sample was used for each illumination session.
SYPRO Orange: probing EGF conformational changes induced by 295 nm light and temperature. SYPRO Orange is used as a molecular probe in order to monitor protein conformational changes, since its fluorescence is greatly enhanced upon contact with hydrophobic environments [24]. A 2 μL aliquot (dilution 1:1000) of SYPRO Orange stock solution (5,000x concentrate in DMSO) was added to a cuvette containing a fresh sample of EGF (2.5 μM, 0.2 mL) prior to the 295 nm continuous illumination experiment. The sample was gently shaken to mix both solutions. Fluorescence emission of SYPRO Orange at 580 nm was monitored upon continuous illumination at 470 nm for 2 hours, at each of the above-mentioned temperatures, i.e., 10°C, 15°C, 20°C, 25°C and 30°C. Fluorescence intensity changes were quantified. In addition, the fluorescence emission of SYPRO Orange at 580 nm was monitored upon continuous illumination at 470 nm for 2 hours, at each of the above-mentioned power levels, i.e., 0.12 μW, 0.30 μW, 1.67 μW, 2.34 μW and 4.40 μW (corresponding to 0.1 mm, 0.5 mm, 0.8 mm, 1.2 mm and 2.0 mm slits, respectively). Fluorescence spectral changes were quantified.
Photoproducts of tryptophan and tyrosine. Fluorescence excitation and emission spectra of the Trp and Tyr photoproducts (e.g., NFK, Kyn and DT) were monitored.
Excitation and emission fluorescence spectra of the photoproducts differ from those of Trp and Tyr: NFK and Kyn are excited at 320 nm and 360 nm and show maximum emission between 400-440 nm and between 434-480 nm, respectively [25][26][27]. Therefore, EGF fluorescence intensity changes and spectral shifts were quantified before and after the illumination of EGF at 295 nm, at the different temperatures and the different excitation slit openings.
Photochemistry of EGF conjugated with HAOA-coated gold nanoparticles. The effect of continuous 295 nm excitation of EGF has been investigated for EGF conjugated to gold nanoparticles covered by natural polymers, namely hyaluronic acid (HA) and oleic acid (OA) (Fig 5). Results were compared with data obtained with free EGF. Four samples were continuously illuminated with 295 nm light for 2 hours at 20°C and their fluorescence emission intensity at 330 nm was monitored: a) free EGF, b) EGF-conjugated HAOA-coated nanoparticles, c) plain non-coated gold nanoparticles and d) HAOA-coated gold nanoparticles (Fig 6). The excitation slit was set to 2.0 mm, with an equivalent power of 4.40 μW at the entrance of the excitation chamber. Conjugation of EGF onto the HAOA-coated gold nanoparticles was confirmed using steady-state fluorescence spectroscopy. Fluorescence excitation (em. fixed at 330 nm) and emission (exc. fixed at 295 nm) spectra of non-conjugated EGF, of the supernatant after centrifugation of the solution containing conjugated and non-conjugated EGF, and of EGF conjugated onto HAOA-coated gold nanoparticles were acquired in order to detect the presence of protein (Fig 7). In order to detect likely light-induced conformational changes in EGF, SYPRO Orange was used as a molecular probe. Fluorescence emission spectra of SYPRO Orange (excitation fixed at 470 nm) and fluorescence excitation spectra of SYPRO Orange (emission fixed at 580 nm) were also acquired prior to and after continuous illumination of EGF and EGF-conjugated HAOA-coated gold nanoparticles at 295 nm for 2 hours (Figs 8 and 9). Formation of Trp photoproducts (Kyn and NFK) upon 295 nm excitation of free EGF and EGF-conjugated HAOA-coated gold nanoparticles was confirmed using steady-state fluorescence spectroscopy (Fig 10). In order to detect Kyn and NFK, fluorescence emission spectra were acquired upon 320 nm excitation of the solution before and after 2 hours of continuous illumination at 295 nm. In order to detect the presence of Kyn, emission spectra were obtained upon 360 nm excitation before and after the 295 nm continuous excitation. Fluorescence spectral changes were quantified and compared for free and conjugated EGF. A fresh sample was used for each illumination run.
Physical characterization of EGF-conjugated HAOA-coated gold nanoparticles
Mean particle size, polydispersity index (PI) and zeta potential (ZP) of the EGF-conjugated HAOA-coated gold nanoparticles were determined with a Coulter Nano-sizer Delsa Nano C (Fullerton, CA). A low PI value (< 0.25) indicates a narrow nanoparticle size distribution. D-values were determined as the sizes below which 10%, 50% and 90% of the nanoparticle population falls [28]. EGF-conjugated HAOA-coated gold nanoparticles were characterized by UV-visible spectroscopy (Evolution 600, UK) and the respective maximum absorbance wavelength (λmax) was determined.
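For clarity, the D-values mentioned above are simple percentiles of the particle-size distribution. A minimal sketch of that calculation follows; the diameters are synthetic stand-ins, not instrument data.

```python
# Sketch: D10/D50/D90 from a set of measured particle diameters (nm).
# The log-normal sample below is illustrative only.
import numpy as np

diameters_nm = np.random.default_rng(0).lognormal(
    mean=np.log(180.0), sigma=0.25, size=1000)

d10, d50, d90 = np.percentile(diameters_nm, [10, 50, 90])
print(f"D10 = {d10:.0f} nm, D50 = {d50:.0f} nm, D90 = {d90:.0f} nm")

# A common single-number width measure derived from the D-values
# (the laser-diffraction "span"; note this is not the DLS PI reported
# by the Delsa Nano instrument).
print(f"span = {(d90 - d10) / d50:.2f}")
```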
TEM analysis of EGF-conjugated HAOA-coated gold nanoparticles
Structure and surface morphology of EGF-conjugated HAOA-coated gold nanoparticles were analyzed by Transmission Electron Microscopy (TEM, Zeiss M10, Germany) (Fig 5). Samples were prepared through the "sequential two-droplet" method by re-suspending the nanoparticles in distilled water and placing a drop (5-10 μL) of the suspension onto a formvar grid for 30-60 sec. When the nanoparticles suspension had partly dried, the surface of the grid was washed three times with distilled water and the excess water was removed with a filter paper. Then, sodium phosphotungstate (PTA, 2%, w/v) was applied to the grid for 10 sec, the excess stain was removed with a filter paper and the grid was left to dry at room temperature for 24 hours. Samples were analyzed at a voltage setting of 10-20 kV. Different fields of the images were recorded digitally.
Confocal fluorescence microscopy studies with EGF-conjugated HAOA-coated gold nanoparticles
EGF-conjugated HAOA-coated gold nanoparticles were marked with two different fluorescent probes, Coumarin-6 and Alexa Fluor 647, as described below, for confocal microscope visualization and colocalization (Fig 11). Firstly, an aliquot (20 μL) of a saturated solution of Coumarin-6 (λmax_ex = 460 nm, λmax_em = 500 nm) in ethanol was added to an aqueous suspension containing the HAOA polymer and the gold nanoparticles at 1:1 (v/v). Then, EGF marked with Alexa Fluor 647 (λmax_ex = 650 nm, λmax_em = 665 nm) was added to the HAOA-coated gold nanoparticles suspension. The Coumarin-6-labeled nanoparticles were allowed to conjugate with the EGF-Alexa Fluor 647 for 30 min at room temperature and were left for 24 hours at 4°C, protected from the light. The suspension was centrifuged twice at 500 x g for 20 min in a FV2400 Microspin (BioSan, Riga, Latvia) to remove unbound EGF. The pellet was re-suspended in PBS buffer (pH 7.4). Confocal Laser Scanning Microscopy (CLSM, Leica, SP5, Mannheim, Germany) was used to verify the colocalization of both dyes on the EGF-conjugated HAOA-coated gold nanoparticles. The chosen He-Ne excitation laser line was 561 nm and the selected fluorescence emission range was 569-666 nm. Each sample was analyzed at room temperature upon letting it dry on a glass slide. Different fields of the images were recorded digitally.
Circular dichroism spectroscopy
Far-UV circular dichroism (CD) spectroscopy was carried out to detect any changes in EGF secondary structure after conjugation with HAOA-coated gold nanoparticles, using a Jasco J-720 spectropolarimeter (Jasco Corporation, Easton, MD, USA) with a photomultiplier suitable for the 200-700 nm range (Fig 12). After calibration to remove the noise of the device, the PBS buffer used to prepare the native EGF solution and the Milli-Q water used for the nanoparticle formulations were used as references to obtain the respective baselines. Far-UV spectra were acquired using a quartz cell containing solutions of free EGF (0.3 mg/mL), EGF-conjugated HAOA-coated gold nanoparticles (16.5 μg/mL) and HAOA-coated gold nanoparticles (without peptide). Furthermore, spectra of EGF extracted from the HAOA-coated gold nanoparticles by two different methods were recorded: 1) non-conjugated EGF present in the supernatant after centrifugation of EGF-conjugated HAOA-coated gold nanoparticles at 7200 x g for 10 min, and 2) EGF obtained after incubation of EGF-conjugated HAOA-coated gold nanoparticles in PBS pH 5.5, at 37°C, for 72 hours, followed by centrifugation at 9000 x g for 3 min.
Scanning of each sample was conducted from 200 nm to 260 nm with a resolution of 1 nm bandwidth, 3 accumulations, a scan speed of 100 nm/min and a 2-second response time. Data were processed using 10-point smoothing in Origin 8.1 (OriginLab Corporation, Northampton, MA, USA).
Cytotoxicity assays in the HaCaT cell line model
Cell viability studies were conducted in human immortalized keratinocytes (HaCaT, CLS Cell Lines Service GmbH, Eppelheim, Germany) using the MTT assay [29,30] in order to assess the cytotoxicity of EGF-conjugated HAOA-coated gold nanoparticles (Fig 13). Cells were cultured in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin solution. HaCaT cells were seeded onto a 96-well plate at a density of 5,000 cells/well to reach the desired confluence. EGF-conjugated HAOA-coated gold nanoparticles were tested at different concentrations: 0-80 μM (based on the concentration of gold). DMSO 5% (v/v) was used as the positive control. Cells were exposed to the nanoparticles for 24 hours. After this period, the cells were washed twice with PBS and incubated with MTT solution (0.5 mg/mL in culture medium) for 2.5 hours at 37°C. The culture medium was then removed and the cells were washed again with PBS. DMSO (200 μL per well) was added to dissolve the formazan crystals and the absorbance was read at 595 nm (Thermo Scientific Multiskan FC, Shanghai, China). Three to four independent experiments were carried out, each comprising four replicate cultures.
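The percent-viability figures reported from this assay follow the usual blank-corrected normalization against untreated controls. A minimal sketch of that arithmetic is shown below; the absorbance values are hypothetical, not the study's data.

```python
# Sketch: MTT viability as percent of untreated control (A595 readings).
import numpy as np

a_blank = np.array([0.06, 0.05, 0.07, 0.06])      # medium + MTT, no cells
a_control = np.array([1.10, 1.05, 1.12, 1.08])    # untreated cells
a_treated = np.array([0.88, 0.85, 0.90, 0.83])    # e.g., nanoparticle-exposed

control_signal = a_control.mean() - a_blank.mean()
viability = 100.0 * (a_treated - a_blank.mean()) / control_signal

print(f"viability = {viability.mean():.1f} +/- {viability.std(ddof=1):.1f} %")
```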
EGFR binding assay on A549 GFP-EGFR cells
In vitro studies were carried out in A549 cells in which the genomic EGFR gene has been endogenously tagged with a Green Fluorescent Protein (GFP) gene (Sigma-Aldrich ref. CLL1141), since this is a specific and well-studied cell model for EGFR binding assays [30]. A549 cells were cultured in DMEM medium with 10% FBS and 1 μg/mL puromycin and maintained at 37°C in a 5% CO2 atmosphere, in order to analyze the effects of adding free EGF (4 μg/mL), EGF-conjugated HAOA-coated gold nanoparticles (4 μg/mL EGF; 60 μM gold nanoparticles) and HAOA-coated gold nanoparticles (non-conjugated; 60 μM). EGF-conjugated HAOA-coated gold nanoparticles were marked with two different fluorescent probes, Coumarin-6 (λmax_ex = 460 nm, λmax_em = 500 nm) and Alexa Fluor 647 (λmax_ex = 650 nm, λmax_em = 665 nm), as previously described, for confocal microscope visualization and colocalization experiments (Fig 14). Free EGF was marked with Alexa Fluor 647, while the nanoparticles were labeled with Coumarin-6. Prior to image acquisition in the confocal fluorescence microscope, the cells were incubated for 1.5 hours with the free EGF and with the EGF-conjugated HAOA-coated gold nanoparticles. In some wells, a primary mouse monoclonal anti-EGFR antibody (1 μg/mL of neutralizer antibody LA1, Millipore 05-101) was used to block EGFR. After 1 hour of incubation with the antibody, free EGF or EGF-conjugated HAOA-coated gold nanoparticles were added to the A549 cells' incubation medium for 1.5 hours of incubation, to see whether they compete for receptor binding and consequent receptor internalization. As controls, non-treated cells and HAOA-coated gold nanoparticles loaded with Coumarin-6 (without EGF) were used. EGFR binding and activation were analyzed by confocal fluorescence microscopy (CLSM, Leica, SP5, Mannheim, Germany). Ligand binding to EGFR activates the receptor, and the GFP-tagged receptor, initially localized on the cell membrane, is then internalized. This leads to the appearance of fluorescent granules in the cell cytoplasm. In Fig 14, CN1 corresponds to the non-treated cells, while CN2 shows the exposure to HAOA-coated gold nanoparticles (without any dye). As for the treatment groups: A1) free EGF with Alexa Fluor 647; B1) EGF-conjugated HAOA-coated gold nanoparticles (only EGF is marked with Alexa Fluor 647); and C1) EGF-conjugated HAOA-coated gold nanoparticles (EGF and the HAOA-coated gold nanoparticles are marked with Alexa Fluor 647 and Coumarin-6, respectively). For A2, B2 and C2, anti-EGFR antibodies were added 1 hour before the addition of the tested samples.
Data Analysis
All data analysis, plotting and fitting procedures were done using Origin 8.1 (OriginLab Corporation, Northampton, MA, USA).
Emission and excitation spectra. Emission and excitation spectra were first smoothed using a 10-point adjacent averaging. All fluorescence spectra were then Raman-corrected by subtracting the spectra recorded for the buffer solution. Normalized emission and excitation spectra were obtained by dividing each data point by the maximum intensity value in each spectrum.
Fitting Procedures
EGF fluorescence emission kinetic traces (em. at 330 nm) upon 295 nm continuous excitation as a function of light power and temperature. Each decay curve acquired upon 2 hours of continuous 295 nm illumination of EGF at the different temperatures (10°C, 15°C, 20°C, 25°C and 30°C) and the different excitation slit openings (0.5 mm, 0.8 mm, 1.2 mm and 2.0 mm) was fitted using a single exponential decay model, F(t) = y0 + C1*exp(-k1*t), or a double exponential decay model, F(t) = y0 + C1*exp(-k1*t) + C2*exp(-k2*t), where F(t) is the fluorescence emission intensity at 330 nm (a.u.) upon 295 nm excitation at time t (min), y0, C1 and C2 are constants and k1 and k2 are the rate constants of the fluorescence emission intensity decrease (min^-1); the y0 value was fixed to 0. The adjusted R^2 was > 0.99 for all fitted traces. A double exponential decay model was selected if the single decay model did not provide a good fit. Data obtained with the 0.1 mm slit size were fitted using a linear model, F(t) = y0 + C1*t. A good fit was judged from the errors associated with the different parameters and from the adjusted R^2. Fitted parameter values with their errors, and the adjusted R^2 values obtained after fitting the 330 nm emission kinetic traces, are displayed in Tables 2 and 3.
EGF photochemistry: Arrhenius plot and activation energy. The temperature dependence of the decay constant of the EGF kinetic traces (Fig 2A, where the fluorescence emission intensity at 330 nm is displayed upon 295 nm excitation) was analyzed using four different temperatures: 15°C, 20°C, 25°C and 30°C. Data were fitted according to the logarithmic form of the Arrhenius equation, ln k = ln A0 - Ea/(RT), where A0 is the pre-exponential factor, Ea is the activation energy, R is the universal gas constant (R = 8.314 J/(mol.K)) and T is the temperature (in Kelvin). The Arrhenius plot and extracted parameters are displayed in Fig 3.
Table 2. Single exponential fit using the model F(t) = C1*exp(-k1*t) + y0 for each decay curve of EGF at 10°C, 15°C, 20°C and 30°C; for the decay curve of EGF at 25°C, a double exponential fit using the model F(t) = y0 + C1*exp(-k1*t) + C2*exp(-k2*t) was selected (see Fig 2A). Fit parameters are displayed in the table. Adj. R^2 stands for adjusted R-square.
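To make the model selection above concrete, the following sketch fits a synthetic 330 nm kinetic trace with both models and applies the adjusted-R^2 criterion quoted in the text. It is illustrative only; the trace and starting parameters are invented, and Origin was used in the actual analysis.

```python
# Sketch: single vs. double exponential fit of a 330 nm decay trace.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, c1, k1):
    return c1 * np.exp(-k1 * t)            # y0 fixed to 0, as in the text

def double_exp(t, c1, k1, c2, k2):
    return c1 * np.exp(-k1 * t) + c2 * np.exp(-k2 * t)

def adj_r2(y, y_fit, n_params):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - n_params - 1)

t = np.linspace(0.0, 120.0, 241)           # minutes, 2 h trace
f = 0.7 * np.exp(-0.05 * t) + 0.3 * np.exp(-0.005 * t)
f += np.random.default_rng(1).normal(0.0, 0.005, t.size)   # noise

p1, _ = curve_fit(single_exp, t, f, p0=(1.0, 0.01))
p2, _ = curve_fit(double_exp, t, f, p0=(0.5, 0.05, 0.5, 0.005))

# Keep the single exponential unless it fails the adjusted-R^2 > 0.99
# criterion; otherwise fall back to the double exponential.
for label, func, p in (("single", single_exp, p1), ("double", double_exp, p2)):
    print(label, np.round(p, 4),
          f"adj R^2 = {adj_r2(f, func(t, *p), len(p)):.4f}")
```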
Free EGF and EGF-conjugated HAOA-coated gold nanoparticles fluorescence kinetics (em. at 330 nm) upon 295 nm excitation. Fluorescence emission intensity kinetic traces at 330 nm for free EGF, plain non-coated gold nanoparticles, HAOA-coated gold nanoparticles and EGF-conjugated HAOA-coated gold nanoparticles are displayed in Fig 6. Traces were acquired upon continuous 295 nm illumination for 2 hours, at 20°C, except for the plain gold nanoparticles and the HAOA-coated gold nanoparticles without EGF, which were illuminated for 1 hour. All traces were fitted using a double exponential decay model, F(t) = y0 + C1*exp(-k1*t) + C2*exp(-k2*t), where F(t) is the fluorescence emission intensity at 330 nm (a.u.) upon 295 nm excitation at time t (min), y0, C1 and C2 are constants and k1 and k2 are the rate constants of the fluorescence emission intensity decrease (min^-1); the y0 value was fixed to 0. The adjusted R^2 was > 0.99 for all kinetics. The fitted parameter values with their errors, and the adjusted R^2 values obtained after fitting the 330 nm emission kinetic traces, are displayed in Table 4.
Table 3. Single exponential fit using the model F(t) = C1*exp(-k1*t) + y0 for each decay curve of EGF at excitation slit sizes of 0.5 mm and 0.8 mm (corresponding to 0.30 μW and 1.67 μW, respectively), and double exponential fit using the model F(t) = y0 + C1*exp(-k1*t) + C2*exp(-k2*t) for each decay curve of EGF at slit sizes of 1.2 mm and 2.0 mm (corresponding to 2.34 μW and 4.40 μW, respectively) (see Fig 4A). For the 0.1 mm slit (0.12 μW), a linear model was selected. Fit parameters are displayed in the table. Adj. R^2 stands for adjusted R-square.
Table 4. Double exponential fit using the model F(t) = y0 + C1*exp(-k1*t) + C2*exp(-k2*t) for free EGF, EGF-conjugated HAOA-coated GNP (gold nanoparticles), plain non-coated GNP (control) and HAOA-coated GNP (control) (see Fig 6). Fit parameters are displayed in the table.
Results
Although EGF is formed by two amino acid chains (A and B), only chain B is represented in Fig 1. In total, EGF has 2 Trp residues, 5 Tyr residues and 3 SS bridges. Table 1 lists the shortest distances between each Trp and Tyr residue and the nearest SS bonds. The shortest distance between Tyr13 (atom CD1) and the SS bridge C14-C31 is 4.4 Å. All considered distances were < 12 Å. In addition, EGF has no phenylalanine (Phe) residues but has a considerable number of arginine (Arg) residues in its structure, close to the Trp residues. Arg residues are of considerable importance since they quench the fluorescence emission of the aromatic residues when their NH2 groups become protonated. The closest distances between these two amino acids occur between Arg45 (NE) and Trp50 (CH2) and Trp49 (CE3), at 4.5 Å and 7.5 Å, respectively. Firstly, the behavior of the free peptide in response to temperature and light exposure was assessed. Fig 2A and 2B display the fluorescence kinetic traces for EGF upon 2 hours of excitation at 295 nm (emission fixed at 330 nm) and, for SYPRO Orange, an analogous experiment (excitation at 470 nm and emission fixed at 580 nm), respectively. At all acquired temperatures, the fluorescence emission intensity of Trp is observed to decay as a function of illumination time. On the other hand, the fluorescence emission intensity of SYPRO Orange increases with illumination time.
At 10°C and 15°C, EGF showed similar fluorescence emission decays, with a decrease in Trp fluorescence emission intensity at 330 nm of 56.2% and 52.8%, respectively. The corresponding increase in the fluorescence emission of SYPRO Orange after 2 hours of excitation of EGF at 295 nm at 10°C and 15°C was 15.6% and 22.7%, respectively. At 20°C and 30°C, the fluorescence emission intensity of Trp decreased by 59.6% and 59.1%, respectively, after 2 hours of excitation of EGF at 295 nm, while the fluorescence emission intensity of SYPRO Orange increased by 2.3% and 17.3%, respectively. Lastly, continuous 295 nm excitation of EGF at 25°C led to a 59.7% decrease in the fluorescence emission intensity of the protein and to a 6.7% increase in the fluorescence emission intensity of SYPRO Orange. From the decays of free EGF at the different temperatures, an Arrhenius plot was obtained, as displayed in Fig 3. From the temperature dependence of the EGF rate constant (k), recovered from the fluorescence emission decays at 330 nm (excitation at 295 nm), we obtained an activation energy (Ea) and a pre-exponential factor (A0) of 19.9 ± 0.9 kJ/mol and 0.44 ± 0.37 s^-1, respectively. The fitted line was y = -1933.4x - 1.1 (R^2 = 0.994).
The kinetic traces for free EGF during 2 hours of excitation at 295 nm (emission at 330 nm) at 20°C and the kinetic traces for SYPRO Orange (excitation at 470 nm and emission at 580 nm) using different excitation powers are displayed in Fig 4A and 4B. Excitation of free EGF at 295 nm for 2 hours with excitation slit sizes of 0.1 mm, 0.5 mm, 0.8 mm, 1.2 mm and 2.0 mm led to an 8.0%, 48.6%, 59.6%, 65.6% and 70.8% decrease in Trp fluorescence emission intensity, respectively. After 295 nm excitation of EGF for 2 hours, the fluorescence emission intensity of SYPRO Orange increased by 9.1%, 2.3%, 21.5% and 6.1% for slit sizes of 0.5 mm, 0.8 mm, 1.2 mm and 2.0 mm, respectively. In the same experiments, the fluorescence emission intensity of SYPRO Orange increased maximally by 9.1%, 3.2%, 26.8% and 19.3% for 0.5 mm, 0.8 mm, 1.2 mm and 2.0 mm, respectively. No change was observed in the fluorescence emission intensity of SYPRO Orange at a slit size of 0.1 mm (a 0.2% decrease, i.e., approximately 0%). A single exponential model (F(t) = y0 + C1*exp(-k1*t)) was selected to fit the 330 nm decay curves obtained with the 0.5 mm and 0.8 mm slit openings. The traces obtained with the larger slit openings (1.2 mm and 2.0 mm) were fitted with a double exponential model. The corresponding fitted parameter values (C1, C2, k1, k2, y0) and errors, as well as adjusted R^2 values, are displayed in Table 3. The 330 nm fluorescence decay obtained with the 0.1 mm slit was best fitted by a linear model.
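As a cross-check of the Arrhenius analysis above, the sketch below recovers Ea and A0 by linear regression of ln k against 1/T over the four fitting temperatures. The k values are synthetic, generated to land near the reported 19.9 kJ/mol; they are not the paper's fitted constants.

```python
# Sketch: Arrhenius regression, ln k = ln A0 - Ea/(R*T).
import numpy as np
from scipy.stats import linregress

R = 8.314                                        # J/(mol*K)
T = np.array([15.0, 20.0, 25.0, 30.0]) + 273.15  # K
k = 0.44 * np.exp(-19900.0 / (R * T))            # synthetic rate constants

fit = linregress(1.0 / T, np.log(k))
Ea = -fit.slope * R                              # J/mol, from the slope
A0 = np.exp(fit.intercept)

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A0 = {A0:.2f}, "
      f"R^2 = {fit.rvalue**2:.3f}")
```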
After studying the temperature and power dependence of the kinetic traces for free EGF, the behavior of this peptide was monitored after conjugation with a nanosystem made of a gold core and a biodegradable polymeric coating of hyaluronic and oleic acids (HAOA). HAOA-coated gold nanoparticles (i.e., non-conjugated with EGF) showed a mean particle size of 300 nm (PI: 0.2) and a negatively charged surface (-19 mV) [24]. After conjugation with EGF, the volume distribution for 90% of the HAOA-coated gold nanoparticles (D90%) was 220 nm, as confirmed by TEM analysis, where EGF-conjugated HAOA-coated gold nanoparticles showed a size around 100-200 nm and a spherical morphology (see Fig 5). EGF-conjugated HAOA-coated gold nanoparticles are composed of a dense gold core, observed in the TEM image as a dark core, and of a soft polymeric HAOA coating on the surface, visible in the TEM image as a grey area around the core. EGF may be associated with the HAOA coating of the gold nanoparticles, as illustrated in Fig 5. The zeta potential (ZP) of EGF-conjugated HAOA-coated gold nanoparticles was around -5 mV, compared to the lower value of -19 mV for the HAOA-coated gold nanoparticles alone. In addition, EGF-conjugated HAOA-coated gold nanoparticles showed a maximum absorbance peak at 655 nm, compared to 800 nm observed for the plain non-coated gold nanoparticles, indicating that a 145 nm blue shift occurred after conjugation. The EGF fluorescence emission intensity at 330 nm during 2 hours of continuous 295 nm excitation is displayed in Fig 6 and compared for free EGF, EGF-conjugated HAOA-coated gold nanoparticles, empty HAOA-coated gold nanoparticles and non-coated plain gold nanoparticles. Plain gold nanoparticles (i.e., without HAOA coating) and HAOA-coated gold nanoparticles were used as controls. The double exponential fit model (F(t) = y0 + C1*exp(-k1*t) + C2*exp(-k2*t)) used to fit the kinetic traces for free EGF and EGF-conjugated HAOA-coated gold nanoparticles showed that the fluorescence emission intensity of free EGF decayed faster than that of EGF conjugated with HAOA-coated gold nanoparticles. The decay constants for conjugated EGF were 1.5-fold (k1) and 1.3-fold (k2) lower than those for free EGF. Also, the initial Trp 330 nm fluorescence emission intensity (excitation at 295 nm) for EGF-conjugated HAOA-coated gold nanoparticles is almost three times lower than the initial fluorescence emission intensity of free EGF. Fitting results are presented in Table 4. Afterwards, the effect of conjugation on the fluorescence spectra of EGF was investigated. Fluorescence excitation spectra (emission fixed at 330 nm) and fluorescence emission spectra (excitation fixed at 295 nm) were compared for free EGF in the supernatant, EGF-conjugated HAOA-coated gold nanoparticles (before centrifugation) and EGF-conjugated HAOA-coated gold nanoparticles (after centrifugation) (see Fig 7A). Centrifugation at 500 x g for 20 min was essential for the elimination of the non-conjugated EGF, and EGF was only illuminated with the light necessary for obtaining the represented spectra. Isolated EGF-conjugated HAOA-coated gold nanoparticles (after centrifugation) showed a clear emission peak at 326 nm, which confirms the presence of Trp residues at the HAOA-coated gold nanoparticles' surface (see Fig 7B). Figs 8 and 9 display the fluorescence excitation and emission spectra of EGF, as free peptide and as conjugated with HAOA-coated gold nanoparticles, and of SYPRO Orange. The fluorescence emission and excitation intensities of SYPRO Orange are 10- and 18.6-fold higher, respectively, when added to EGF-conjugated HAOA-coated gold nanoparticles than when added to free EGF. The fluorescence emission intensity of free EGF at 328 nm decreased by 74.8% after illumination, and a blue shift occurred from 344 nm to 328 nm, while the fluorescence emission intensity of EGF-conjugated HAOA-coated gold nanoparticles at 347 nm decreased by 25.7%. As for the fluorescence emission intensity of SYPRO Orange, the values decreased by 21.4% and 23.8% after illumination of free EGF and EGF-conjugated HAOA-coated gold nanoparticles, respectively.
Interestingly, the fluorescence emission spectra of SYPRO Orange showed a blue shift (from 610 nm to 594 nm) when added to free EGF, while, when added to EGF-conjugated HAOA-coated gold nanoparticles, the peak of the SYPRO Orange emission spectra showed a red shift from 584 nm to 628 nm (see Fig 9). In order to detect the putative presence of photochemical species such as NFK and Kyn, fluorescence emission spectra upon 320 nm excitation were acquired for free EGF and for EGF-conjugated HAOA-coated gold nanoparticles, before and after 295 nm continuous illumination of the samples (see Fig 10A). Spectra of HAOA-coated gold nanoparticles, before and after 2 hours of illumination at 295 nm, were used as controls. For free EGF, a peak centered at 418 nm was observed upon 320 nm excitation. The fluorescence emission intensity of this peak increased by 51.0% after continuous excitation at 295 nm for 2 hours. For EGF-conjugated HAOA-coated gold nanoparticles, two peaks were observed: a peak centered at 392 nm and a larger peak at 598 nm. The second peak, at 596-598 nm, is also visible for the HAOA-coated gold nanoparticle controls without EGF, before and after continuous illumination, though 3 to 4 times less intense. After continuous excitation at 295 nm for 2 hours, the fluorescence emission intensity decreased by 14.8% and 5.8% for the peaks centered at 392 nm and 598 nm, respectively. Fig 10B displays the fluorescence emission intensity spectra upon 360 nm excitation, acquired in order to detect the putative presence of the photochemical species Kyn and NFK. Spectra of HAOA-coated gold nanoparticles, before and after 2 hours of illumination at 295 nm, were again used as controls. For free EGF, a peak centered at 460 nm is observed upon 360 nm excitation. The fluorescence emission intensity of this peak increased by 127% after continuous excitation at 295 nm for 2 hours. Two emission peaks were observed for EGF-conjugated HAOA-coated gold nanoparticles: a peak centered at 461 nm and a larger peak at 580 nm. The second peak, at 580 nm, is also visible for the control HAOA-coated gold nanoparticles without EGF, before and after continuous illumination, but with less intensity, as observed upon 320 nm excitation. After continuous excitation at 295 nm for 2 hours, the fluorescence emission intensity decreased by 5.7% and 40.8% for the peaks centered at 461 nm and 580 nm, respectively. In order to characterize the binding of EGF to HAOA-coated gold nanoparticles, colocalization experiments were conducted in a confocal microscope (Fig 11; scale bar at 5 μm). EGF labeled with Alexa Fluor 647 appears in red and HAOA-coated gold nanoparticles labeled with Coumarin-6 appear in green; EGF-conjugated HAOA-coated gold nanoparticles appear in yellow. Circular dichroism (CD), on the other hand, is a good method to evaluate changes in the secondary structure of proteins after binding. Fig 12 shows the far-UV CD spectra collected for the different samples. The spectra show that, after conjugation with HAOA-coated gold nanoparticles, EGF maintains its secondary structure. Although free EGF (non-conjugated) has a signal of higher intensity than the rest of the studied samples (i.e., EGF-conjugated HAOA-coated gold nanoparticles, EGF in the supernatant and EGF extracted with an acidic pH solution), its concentration was also 18 times higher. The CD spectra indicate that EGF has a mixed secondary structure, with contributions from different secondary-structure elements.
This is suggested by the presence of a negative peak around 208-210 nm, characteristic of α-helix structure. However, the negative band at 220 nm, also characteristic of α-helix structure, was not detected. The absence of CD bands above the 215-220 nm range suggests the presence of EGF's β-sheets. Finally, cell culture experiments allowed us to understand how the nanoparticles interact with in vitro biological systems. When exposing human keratinocytes (HaCaT) to EGF-conjugated HAOA-coated nanoparticles for 24 hours, no aggregates were visible after addition of the nanoparticles to the plate wells (Fig 13). In addition, EGF-conjugated HAOA-coated gold nanoparticles at 80 μM, the highest concentration tested, showed a cell viability of around 75% of that of non-treated control cultures. As for the EGFR binding assay carried out with human lung carcinoma A549 cells, images were taken 1.5 hours after the cells came into contact with EGF, as well as for the negative controls (Fig 14). Three different samples were tested: free EGF labeled with Alexa Fluor 647 (A1); EGF-conjugated HAOA-coated gold nanoparticles, with EGF labeled with Alexa Fluor 647 (B1); and EGF-conjugated HAOA-coated gold nanoparticles, with EGF labeled with Alexa Fluor 647 and the HAOA-coated gold nanoparticles labeled with Coumarin-6 (C1). In addition, the same samples were tested after the cells had been incubated for 1 hour with anti-EGFR antibody, in order to block the EGF receptors. Finally, two control groups were studied: CN1, corresponding to cells from the non-treated group (i.e., cells without the addition of EGF, nanoparticles or the anti-EGFR antibody), and CN2, corresponding to cells in the presence of HAOA-coated gold nanoparticles (without dye or EGF conjugation). In panels A1, B1 and C1 it can be observed that EGF has induced EGFR internalization, both alone and when conjugated with the HAOA-coated gold nanoparticles. Both free EGF and EGF-conjugated HAOA-coated gold nanoparticles (panels B1 and C1) entered the cells' cytoplasm but not their nuclei. The anti-EGFR antibody blocked the binding of EGF to EGFR, preventing receptor internalization (panel A2); however, the EGF-conjugated HAOA-coated gold nanoparticles could still enter the cells (panels B2 and C2) despite the presence of the antibody. The controls (panels CN1 and CN2) confirm that in the absence of EGF and in the absence of nanoparticles there is no EGFR activation.
Discussion
The presented data have shown that the structure of EGF (Fig 1) can be modulated by UV light (295 nm) and that the photochemical changes are reduced when EGF is bound to HAOA-coated gold nanoparticles (Fig 6). Like other small proteins and peptides (e.g., cutinase, insulin, α-lactalbumin) [14,17,31], EGF is an interesting model protein for photostability studies due to the close spatial proximity between its aromatic residues and its disulphide (SS) bridges. SS bridges are key structural elements in small proteins [31], responsible for maintaining the proteins' structure and therefore their function. Disruption of SS bonds induced by UV excitation of aromatic residues in these peptides will most likely destroy their structure and impair their function [14,17,31]. Table 1 lists the three SS bonds of EGF located in close spatial proximity to aromatic residues (Trp and Tyr). The observed close distances will allow for electron transfer between the aromatic residues and the SS bridges [17], leading to the disruption of such bridges.
EGF is a protein in which these reactions will occur in the presence of UVB light, leading to protein conformational changes and to loss of functionality. EGF is a natural ligand for EGFR with significant biomedical importance in cancer treatment and diagnostics [19]. Changes in the fluorescence spectra of the extrinsic fluorescence probe SYPRO Orange confirmed structural changes of EGF induced by temperature (Fig 2), prolonged illumination at 295 nm (Fig 4) and pH, as its fluorescence emission is enhanced upon binding to hydrophobic regions of the protein [20]. Temperature-dependent, time-based photochemical studies (see Fig 2A) show that EGF photochemistry and the protein conformational space are temperature dependent, being similar at 10°C and 15°C and at 20°C and 30°C, but distinct at 25°C. At 25°C, SYPRO Orange appears to bind less to the peptide than at other temperatures, indicating that EGF has fewer hydrophobic surfaces exposed to the solvent. The Arrhenius plot (see Fig 3) showed that the activation energy (Ea) associated with the photochemical reactions induced by 295 nm light on EGF was 19.9 ± 0.9 kJ/mol (i.e., 4.76 kcal/mol). This value is similar to the one found for α-lactalbumin (Ea = 21.8 ± 2.3 kJ/mol) [31]. Power-dependent irradiation studies of EGF (see Fig 4A) reveal that the larger the power used, the faster the kinetics associated with the fluorescence decays. A single exponential model was used to fit the Trp decay curves acquired with 0.30 μW and 1.67 μW (see Table 3), but for larger powers (2.34 μW and 4.40 μW) a double exponential model was needed. This shows that different photochemical processes are initiated at higher powers compared to lower powers. Experiments carried out with SYPRO Orange (see Fig 4B) show the same trend. The conformational changes induced in EGF are larger when illumination is carried out with higher powers: when using a 2.0 mm slit opening, the fluorescence emission intensity of SYPRO Orange was higher than when working with a 0.1 mm slit, indicating that the extrinsic probe is in contact with a larger hydrophobic surface rendered accessible by light-induced conformational changes. Conjugation of EGF to HAOA-coated gold nanoparticles protected EGF from photochemistry (Fig 6, Table 4): the presence of the particles decreased the rate of the light-induced fluorescence changes and induced quenching. HA, OA and gold are all known fluorescence quenchers. EGF has a promising therapeutic value as a targeting ligand for tumours overexpressing EGFR, such as melanoma [32]. Therefore, it has been coupled to nano-sized delivery systems made of metallic and/or polymeric materials [33,34]. The photochemical protection conferred by nanoparticulate carriers is advantageous. Gold nanoparticles were prepared according to a seed-growth method [21,35]. An aqueous extract of Plectranthus saccatus (Benth.), rich in anti-oxidative compounds (e.g., rosmarinic acid, caffeic acid and chlorogenic acid [36]), was used as the main reducing and capping agent. Furthermore, a coating made of hyaluronic and oleic acids (HAOA) was added to the gold nanoparticles. Natural polymers can work as reducing and capping agents, enabling "green" reduction of gold and being less toxic to healthy tissues, which makes them advantageous for controlling the reduction and morphology of gold nanoparticles [35,37]. Furthermore, the use of polymeric coatings is also interesting as a way to control drug release and to increase the adsorption of ligands. Recently, Su et al.
(2014) showed that hyaluronic acid (HA) scaffolds can increase the adsorption and sustained release of EGF, attached to the polymeric surface through self-assembly and electrostatic interactions [38]. In addition, HA is reported to confer structural stability to proteins [12]. Herein, we studied the effect of mounting EGF onto HAOA-coated gold nanoparticles. Table 4 and Fig 6 confirm that the polymers and the gold core promoted protein quenching and induced slower decay kinetics when compared with the data obtained for free EGF, protecting EGF from photochemistry. The same has been reported by Oliveira when studying the photochemistry of free lysozyme and comparing it to the photochemistry of lysozyme mounted onto HAOA-coated gold nanoparticles [23]. It is likely that the structure of EGF, after conjugation to HAOA-coated nanoparticles, sustains excitation at 295 nm for longer time periods before possible loss of structure and function. The presence of EGF conjugated to the HAOA-coated gold nanoparticles was confirmed by fluorescence spectroscopy, after centrifugation and re-suspension in PBS, demonstrating a clear emission peak at 326 nm for Trp residues (Fig 7). The conjugation of EGF onto HAOA-coated gold nanoparticles conferred enhanced photostability to EGF. This is observed in Fig 8, where a smaller intensity reduction occurs in both the fluorescence excitation spectra (41.0% versus 81.2%) and the fluorescence emission spectra (25.7% versus 74.8%) after 295 nm continuous illumination. Furthermore, the blue shift observed in the fluorescence emission spectra of free EGF after 295 nm continuous illumination is no longer visible for EGF-conjugated HAOA-coated gold nanoparticles (Fig 8). This is probably due to the fact that conjugation of EGF has prevented the conformational changes that would render the Trp moieties more apolar and that are responsible for the blue shift. SYPRO Orange was used as an extrinsic probe for monitoring UV-light (295 nm) induced conformational changes in EGF (Fig 9). Firstly, it was observed that SYPRO Orange showed affinity towards hydrophobic moieties in HAOA-coated gold nanoparticles, leading to an increase of its fluorescence emission intensity signal compared to its fluorescence emission intensity in free EGF solution (Fig 9). Secondly, the fluorescence emission spectra of EGF-conjugated HAOA-coated nanoparticles suffered a red shift (from 584 nm to 638 nm) after 2 hours of continuous 295 nm illumination. On the other hand, the fluorescence emission spectra of free EGF suffered a blue shift (from 610 nm to 594 nm) after 2 hours of continuous 295 nm illumination. The observed red shift for SYPRO Orange reveals that the probe is in a more polar environment after illumination. Finally, the antioxidant compounds of Plectranthus saccatus, present in the HAOA-coated nanoparticle formulation, can also have an important role in protecting EGF from light-induced reactions. Phenolic compounds are abundant in natural plant extracts and are described to have anti-oxidant effects on Trp oxidation and to be fluorescence quenchers [39]. It has been shown that the oxidation of the indole ring of Trp, and consequently the formation of NFK and Kyn, can be inhibited by associating proteins with phenolic compounds from natural plant extracts [39,40]. This is consistent with our data. Conjugation of EGF to HAOA-coated gold nanoparticles reduced or even prevented the formation of photoproducts such as NFK and Kyn (see Fig 10A and 10B).
The presence of oxidative conditions induced by light can lead to the oxidation of the aromatic residues in proteins [14,15,19,31,41]. UVB excitation of aromatic residues in proteins leads to the disruption of SS bridges [14][15][16][17][19] and to the formation of photoproducts, such as N-formylkynurenine (NFK), kynurenine (Kyn) [25,42] and dityrosine (DT) [26]. Since 295 nm light specifically excites Trp residues, it is very likely that the photoproducts formed are Trp derivatives, such as NFK and Kyn, and not Tyr derivatives like DT. Furthermore, excitation of EGF at 295 nm leads to a fluorescence emission spectrum that peaks around 330 nm, which makes it unlikely that Tyr residues will be excited by the EGF emission. Two excitation wavelengths were used in order to detect the presence of photochemical products: 320 nm (Fig 10A) and 360 nm (Fig 10B). Light at 320 nm excites both NFK (εNFK(321 nm) = 3750 M^-1 cm^-1) [42][43][44][45] and Kyn (εKyn(321 nm) = 1812 M^-1 cm^-1) [46]. At 315 nm, DT has an extinction coefficient of 5200 M^-1 cm^-1 but, as explained above, it is unlikely that it has been formed [47,48]. Light at 360 nm excites NFK (εNFK(360 nm) = 1607 M^-1 cm^-1) [46] and Kyn (εKyn(365 nm) = 4530 M^-1 cm^-1) [49,50] but does not excite DT. In Fig 10A, the peak with maximum fluorescence emission intensity (320 nm excitation) for free EGF occurs at 418 nm, and for EGF-conjugated HAOA-coated gold nanoparticles it occurs at 392 nm. In Fig 10B, the peak with maximum fluorescence emission intensity (360 nm excitation) for free EGF occurs at 460 nm, and for EGF-conjugated HAOA-coated nanoparticles it is seen at 461 nm. This peak cannot belong to DT, since DT is not excited at 360 nm. Therefore, it can be Kyn, since the wavelength of maximum fluorescence emission of Kyn lies within 434-480 nm.
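As a rough numerical illustration of the wavelength choice discussed above, Beer-Lambert absorbances (A = ε·c·l) computed from the quoted extinction coefficients show why 320 nm excitation reports a mixture dominated by NFK (and, if present, DT), while 360 nm reports mainly Kyn and leaves DT dark. The equal concentrations are hypothetical, chosen only for comparison.

```python
# Sketch: relative photoproduct absorbances at the two excitation lines,
# using the extinction coefficients cited in the text (nearest wavelengths:
# 321/315 nm grouped under 320 nm, 360/365 nm under 360 nm).
eps = {                       # M^-1 cm^-1
    "NFK": {320: 3750, 360: 1607},
    "Kyn": {320: 1812, 360: 4530},
    "DT":  {320: 5200, 360: 0},
}
c, path = 1e-5, 1.0           # mol/L and cm, hypothetical equal amounts

for wl in (320, 360):
    absorbance = {p: eps[p][wl] * c * path for p in eps}  # A = eps*c*l
    total = sum(absorbance.values())
    shares = ", ".join(f"{p}: {a / total:.0%}" for p, a in absorbance.items())
    print(f"{wl} nm -> {shares}")
```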
Colocalization experiments carried out with confocal fluorescence microscopy (Fig 11) confirmed that EGF (red colour) appears to be associated and colocalized with HAOA-coated gold nanoparticles (green colour), which can be visualized as yellow coloured spots. Since EGF has a pI around 4.55 and HAOA-coated gold nanoparticles have a negative surface charge (-19 mV), attractive electrostatic interactions between the protein and the nanocarrier are not likely to occur at pH 7.4. In spite of this, a slight increase of the nanoparticles' surface charge after EGF conjugation (-5 mV) is observed and, as already mentioned, the peptide conjugation onto the particles has also been confirmed by fluorescence spectroscopy. Moreover, the literature describes that EGF is likely to associate with hyaluronic acid (HA) scaffolds through the polymer's carboxylic groups, and, since HA is a hydrogel with a highly hygroscopic character, interactions between EGF and the polymer can also occur through hydrophilic interactions [12,38,51]. Another possible mechanism for EGF conjugation onto HAOA-coated gold nanoparticles is binding between an amino acid residue of the peptide and the HAOA coating or gold core. Histidine has been described as a very strong metal-binding amino acid [52]. EGF has two histidine residues (His10 and His16). Lysine residues (Lys28 and Lys48) of EGF have also been pointed out as potential binding sites for EGF conjugation with the HA polymer [53], especially Lys48, which is located at the end of the lateral chain of EGF. The far-UV CD spectra showed that EGF maintained its non-helical, random coil structure before and after conjugation, and after extraction from the HAOA-coated gold nanoparticles upon incubation at 37°C in pH 5.5 phosphate buffer (Fig 12). The native structure of EGF is described to be mainly composed of random coil elements (72%) and β-sheet elements (25%), with only a trace of α-helical content [3]. The random coil secondary structure contributes to the presence of a negative peak at 200-210 nm [54]. Although a typical shoulder formation at 220 nm is described for EGF [54], indicative of the presence of random non-helical forms and β-sheets, a spectrum with a flat curve around 215-225 nm is also expected for EGF, suggesting a low content of β forms [55]. This flat curvature was observed above 215-220 nm for all spectra of the EGF samples (Fig 12). EGF-conjugated HAOA-coated gold nanoparticles showed a negative peak at 208 nm, like free EGF in its native form in PBS pH 7.4. However, a spectral blue shift was detected both for EGF in the supernatant, i.e., the unbound peptide recovered after centrifugation of the EGF-conjugated HAOA-coated nanoparticles, and for EGF extracted after incubation of EGF-conjugated HAOA-coated gold nanoparticles in phosphate buffer pH 5.5, at 37°C, for 72 hours. The peaks of both the EGF in the supernatant and the extracted EGF shifted to 206 nm. Furthermore, HAOA-coated gold nanoparticles without EGF showed an intense signal for the main negative peak at 200-210 nm, with a minimum at 201 nm, which can be due to the fact that the HAOA coating of the gold nanoparticles, especially HA, also absorbs in the far-UV range [56]. After characterizing the EGF-conjugated HAOA-coated gold nanoparticles in terms of pharmaceutical technology and protein stability, their potential biological application was evaluated. Firstly, cell viability in normal-like human keratinocytes (HaCaT cell line) was tested in order to verify whether our nanoparticles were safe when in contact with a healthy, non-cancerous tissue (see Fig 13). Statistical analysis was based on Student's t-test for comparisons between cell viability values obtained with non-conjugated HAOA-coated gold nanoparticles [21] and with EGF-conjugated HAOA-coated nanoparticles (present work). No significant differences (p > 0.05) were found between the viability of HaCaT cells treated with HAOA-coated gold nanoparticles and with EGF-conjugated HAOA-coated gold nanoparticles at equal concentrations. Finally, when testing the biological activity of EGF-conjugated HAOA-coated nanoparticles, compared to free EGF, for EGFR binding, we used a well-studied cell model described previously for the Human EGFR Live Cell Fluorescent Biosensor Assay [30]. It was observed that both the free and the conjugated peptide were able to bind to the receptor and activate its internalization (visible green fluorescence, as shown in Fig 14), making downstream signal transduction possible. It was also observed in our study that the anti-EGFR antibody competitively inhibits the binding of free EGF to EGFR, but not that of EGF conjugated to HAOA-coated gold nanoparticles. Therefore, the nanoparticles can also enter A549 cells by a putative internalization mechanism different from EGFR-mediated internalization. EGF-conjugated HAOA-coated gold nanoparticles may modulate the endocytic pathways used to enter the cells.
One example is the use of other cell receptors, such as CD44, for which hyaluronic acid is a specific ligand and which is also described as overexpressed in many solid tumor cells, including breast, melanoma and lung cancer, like the A549 cell model used in this study [57,58]. Qhattal et al. (2011) confirmed this possibility when they demonstrated by fluorescence microscopy that HA-coated liposomes, made of high-molecular-weight HA, led to increased uptake of liposomes by A549 cells, with high and irreversible binding affinity [59]. As observed in Fig 14, EGF-conjugated HAOA-coated gold nanoparticles accumulate around the perinuclear area but do not penetrate the cell nucleus after 1.5 hours of incubation. Wang et al. (2013) showed that lipid-coated gold nanoparticles promoted the formation of acidic compartments, which appear to be lamellar bodies, in A549 cells [60]. These vesicles appear to internalize the nanoparticles, allowing them to enter the cell. Wang et al. (2013) also state that internalization occurs as a result of the negative charge of gold nanoparticles penetrating the lung surfactant, which primarily contains a mixture of phosphatidylcholine and phosphatidylglycerol lipids [60]. Other polymer-coated gold nanoparticles, also highly negatively charged (−40 mV), were retained in the endolysosomal compartments, also predominantly at the perinuclear region, after 1 and 2 hours of incubation with A549 cells [61]. This internalization was reported to occur very quickly, within around 1 to 2 hours, probably due to the small size of EGF and its high affinity for EGFR. As a consequence, clearance may also be faster in lysosomes and in cells with high expression of EGFR [62]. Finally, in another study using A549 lung cancer cells, sulfhydryl-activated EGF conjugated to lipidic nanoparticles was colocalized with the labeled EGF receptors, and the internalization of the EGF-conjugated nanoparticles was visible [33]. In conclusion, the EGF-conjugated HAOA-coated gold nanoparticles developed in this work show potential for application in near-infrared (NIR, 650-800 nm) photothermal therapy, which may efficiently destroy cancer cells. NIR photothermal therapy reduces the damage to healthy tissue compared to visible-light photothermal therapy, as this optical radiation is the least absorbed by human tissue [1]. HAOA-coated gold nanoparticles protected EGF from 295 nm-induced photochemistry and did not induce EGF denaturation, reducing the formation of photoproducts such as NFK and Kyn. Moreover, EGF-conjugated HAOA-coated nanoparticles did not markedly decrease HaCaT cell viability, showing high biocompatibility with healthy tissues, and were able to enter the EGFR-overexpressing tumor cell line A549 by different internalization mechanisms. As future prospects, cancer treatment could benefit from a combined approach using multiple targeting moieties for a specific cancer cell pool and acting through several molecular pathways. Another advantage would be the conjugation of a light-absorbing core, for photothermal therapy, with a polymeric coating capable of incorporating anticancer drugs, for local chemotherapy.
Reconfigurable Size-Sorting of Micro-Nanoparticles in a Chalcogenide Waveguide Array Optical tweezers are considered revolutionary, allowing the manipulation of particles ranging in scale from a few hundred nanometers (nm) to several micrometers (μm). Near-field optical forces allow effective trapping of a broad range of entities, from atoms to living cells. Yet, existing on-chip photonic trapping techniques face formidable challenges in simultaneously handling, at will, multiple entities across scales (i.e., from nm to μm). Herein, optical transportation and trapping of polystyrene particles with different diameters are demonstrated in an optofluidic nanophotonic sawtooth waveguide array (ONSWA) made of the chalcogenide alloy Sb2Se3. The chalcogenide ONSWA produces sawtooth-like light fields that can be actively modulated via the phase transition of Sb2Se3. In particular, the chalcogenide ONSWA can stably trap polystyrene particles with diameters of 500 nm and 1 μm in the amorphous and crystalline states, respectively. It is experimentally demonstrated that the phase transition of Sb2Se3 from amorphous to crystalline and vice versa can be achieved in nanoseconds. Using the technique of nanosecond laser-induced phase transition of Sb2Se3, a dynamically reconfigurable size-sorting of objects on the identical ONSWA is proposed. Introduction. Since the experimental demonstration of optical tweezers in the 1970s, [1] they have rapidly developed into a noninvasive and versatile tool to manipulate atoms [2] and biomolecules. [3,4] In particular, optical trapping is based on the gradient force, whose direction and magnitude are determined by the local field gradient. [5][6][7] In order to manipulate multiple entities with different sizes (e.g., viruses and DNAs), optical tweezers need to be assisted by other forces. [8] For example, by combining a quasi-Bessel beam with the forces induced by both fluid and photoresist, sub-100 nm particles can be separated. [9] Other schemes, such as acoustics [10,11] and microfluidics, [12][13][14] are also promising for separating biomolecules with diameters on the order of micrometers. [15] Ultimately, however, optical tweezers face two critical challenges: first, diffraction limits how tightly the beam can be focused, hence limiting the trapping strength; second, the short focal depth of the trapping area prevents continuous optical transportation of nanoparticles using free-space light. [16,17] As such, the diffraction limit of optical tweezers restricts the manipulated particles' size to the micrometer scale. Thus, optical trapping and long-distance transport of particles with sizes ranging from nanometers to micrometers by a free-space beam is extremely difficult. To manipulate subwavelength particles, optical waveguides (WGs) confining the light beam within solid structures have been intensively investigated. Such light confinement can lead to a self-consistent beam that transmits indefinitely through the WG without loss or variation of its form. [18,19] To this end, optical forces produced by different kinds of dielectric nanostructures, such as channel WGs, [20][21][22] WG loops, [23] rib WGs, [24] and slot WGs, [16,17] have attracted intense attention. These structures produce an E-field that decays exponentially, confined to a region below the diffraction limit. They can trap nanometer-sized objects at the WG's surface and transport them via the transmitted fields.
Most WG structures are designed to manipulate nanoparticles and are rarely employed for microscale objects. In the meantime, the size of fluidic channels cannot be scaled to span both nanoscale biomolecules (i.e., DNA and viruses) and microscale liquid droplets. [25] This issue has been somewhat resolved by manipulating and sorting microspheres that act as carriers of biomolecules, with a quantity of DNAs or proteins attached to them. [26,27] The new-generation "lab-on-chip" sorting system [28,29] will merge optical manipulation, microfluidics, and other techniques to achieve size-sorting of micro-nanoobjects on a single chip. Such a system entails tunable manipulation techniques that can convey and trap particles of different sizes freely between regimes. The chalcogenide alloys, pioneered by Ovshinsky, [30] are well known for their successful applications in phase-change memory and rewritable optical discs, owing to their rapid switching speed, excellent scalability, high cyclability, and thermal stability. [31,32] In particular, the reversible and prompt phase change of chalcogenide semiconductors between amorphous (AM) and crystalline (CR) states [33][34][35] makes these compounds an exceptional ingredient for fast tunable photonic devices. [36,37] The phase transition significantly changes the permittivity of the chalcogenide alloy, which, in turn, leads to a massive shift of the working frequency of the devices and hence altered functionality. Note that chalcogenide alloys have proved to be promising and useful for optical WGs, [38,39] fibers, [40] and photonic crystal devices. [41] Our study extends the above knowledge into the area of manipulation of micro-nanoparticles. In this work, we demonstrate a reconfigurable size-sorting of micro-nanoparticles on an identical optofluidic nanophotonic sawtooth waveguide array (ONSWA) based on Sb2Se3, a member of the family of chalcogenide phase-change materials. [42] A coupled hotspot array can be produced by beam coupling between the paired WGs. We theoretically study the optical transport and trapping of polystyrene particles of various sizes, from the nanoscale to the microscale.
By reversibly switching the structural state of Sb2Se3 between AM and CR, the coupling length between the hotspots can be reconfigurably engineered. This results in on-demand size-selective sieving of the micro-nanoparticles in the hotspots. Just as importantly, a variable-angle spectroscopic ellipsometry (VASE) measurement shows that the Sb2Se3 layer undergoes a radical variation of its complex refractive index and a very fast phase transition, on the nanosecond scale, between the AM and CR phases. This makes it possible to dynamically manipulate, at will, multiple objects across scales from nanometer to micrometer. Our study may be a step forward for biomedical applications in which ultrafast and reconfigurable size sieving of micro-nanoobjects is a major concern. 2. Results. 2.1. Design of the Sb2Se3 ONSWA. Herein, we demonstrate, using a sawtooth WG array made of Sb2Se3, that for the AM state an optical gradient force can be exerted on a polystyrene nanoparticle with a diameter of 500 nm to stably trap it inside one hotspot. Upon changing the phase of the Sb2Se3 from as-deposited (AD) AM to CR by heating the WG above the crystallization temperature of Sb2Se3 (T_C = 200 °C) but below its melting temperature (T_m = 610 °C), the gradient force can trap a microparticle with a diameter of 1 μm inside the hotspot. Upon transiting the structural state back from CR to melt-quenched (MQ) AM by heating the structure above T_m = 610 °C using nanosecond laser pulse excitation, [43,44] followed by fast cooling, the gradient force can be switched back to trapping the nanoparticle. As shown in Figure 1a, the ONSWA is composed of an Sb2Se3 photonic nanowaveguide array embedded in a SiO2 substrate. A microfluidic channel runs over the sawtooth WG, conveying the particles to the trap. Thus, the top cladding is defined by the refractive index of water. The gap (G), thickness (T), and width (W) of the paired WG are G = 0.2, T = 0.22, and W = 0.35 μm, respectively. Polystyrene particles were chosen because of their low index contrast relative to the water surroundings and their low absorption cross sections. Moreover, polystyrene particles roughly mimic the characteristics of biological and organic materials. The incident light has a power of 20 mW inside the WG and an excitation wavelength of λ = 1.55 μm. In Figure 1b, a variable-angle spectroscopic ellipsometry (VASE) measurement shows that the Sb2Se3 layer possesses a radical variation of its complex refractive index (N_se = n_se + i·k_se) and a very fast phase-change time of nanoseconds (ns) between the AM and CR states, where n_se and k_se represent the real and imaginary components of N_se, respectively. We first sputtered a 40 nm-thick Sb2Se3 film onto the substrate using a radio-frequency (RF) magnetron sputtering system, where the substrate is a 200 μm-thick Si wafer. Before the deposition, the Si substrate was cleaned ultrasonically in acetone, isopropanol, and deionized water and dried with dry nitrogen. We then sputter-deposited the 40 nm-thick Sb2Se3 laminate on the Si substrate. A detailed description of the fabrication processing can be found in Methods. The phase changes of amorphization and recrystallization occur at different temperatures and on different time scales. [43,44] For example, we crystallize the AD-AM Sb2Se3 layer by heating it for 5 min at T_C = 200 °C on a hotplate in flowing Ar gas.
To reversibly change the phase from CR to MQ-AM, we melt the crystal lattice and quench it to the AM state (room temperature) at a rate of 10^9-10^10 K/s, preventing recrystallization of the atomic structure. [45] Such a high quench rate can be realized by employing either ultrashort laser pulses or electrical Joule pulses. [46] In Figure 1b, we experimentally determine N_se of the 40 nm-thick bare Sb2Se3 layer for the AD-AM (red line), CR (blue line), MQ-AM (pink line), and recrystallized (R-CR, cyan line) states. VASE is employed to measure both n_se (solid lines) and k_se (dashed lines), which are fitted with a Tauc-Lorentz model. The large change in n_se between the two structural phases provides the tunable near-infrared (N-IR) resonances. The change in both n_se and k_se originates from a bonding change, from predominantly covalent bonds in the AM structural phase to resonant bonds in the CR structural phase. [47] Moreover, in the N-IR spectra the photon energy is lower than the photonic bandgap of both AM- and CR-Sb2Se3. This yields a very low extinction coefficient (k_se) over the spectrum from 1000 to 1600 nm. The variation in n_se controls the spectrum of the optical gradient force, and the very low k_se contributes only small losses. In our proposed system, light at λ = 1.55 μm is chosen because both AM- and CR-Sb2Se3 are transparent at that wavelength. The N_se spectra for the CR and R-CR Sb2Se3 films, as well as for the AD-AM and MQ-AM Sb2Se3 films, are nearly the same, indicating that the Sb2Se3 layer is reversible. The nonvolatile characteristic is another advantage of the Sb2Se3 phase change. The structural phases are stable at room temperature, and thermal energy is needed only for the phase-change process, not for upholding a particular state. [48,49] This makes the reconfigurable size sieving of micro-nanoparticles interesting from a green-technology point of view. As presented in Figure 1c, the coupling length (C_L) between neighboring hotspots, which is associated with the structural state of the Sb2Se3 of the paired sawtooth WGs, is a crucial parameter for particle manipulation in the ONSWA. Size sieving of cross-scale particles (from nanoscale to microscale) can be achieved by adjusting the C_L between the hotspots. We employ, for instance, a nanosphere with a diameter of d_nano = 500 nm and a microsphere with a diameter of d_micro = 1 μm. The refractive index of the polystyrene beads is n_poly = 1.59. Herein, the parameters of the WG were chosen to enable the WG to selectively trap particles of different sizes by controlling the C_L via the phase transition of Sb2Se3. Simulations are conducted using the finite-difference time-domain (FDTD) method solver within Lumerical Solutions. The electric (E-) field distribution in the paired sawtooth WGs is calculated by solving Maxwell's equations for the WG geometry, including the upper (water) and lower (SiO2) cladding areas. The mesh size is set to 3 nm along all Cartesian axes (Δx = Δy = Δz = 3 nm) to diminish numerical errors. Perfectly matched layer (PML) boundary conditions are used along the x-, y-, and z-axes. The E-field intensity distribution for the transverse electric (TE) mode 10 nm above the top surface is presented at λ = 1.55 μm. The region of maximum E-field intensity above the WG forms the "hotspot" that traps the particles in the x-y plane.
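For readers who want to reproduce the kind of dispersion model used in the VASE analysis, the sketch below evaluates the imaginary part of a Tauc-Lorentz dielectric function (the Jellison-Modine form); the parameters A, E0, C, and Eg are placeholder values, not the fitted Sb2Se3 ones, and the real part would follow from a Kramers-Kronig integral of ε2.

```python
import numpy as np

def tauc_lorentz_eps2(E, A, E0, C, Eg):
    """Imaginary part of the Tauc-Lorentz dielectric function
    (Jellison-Modine form): zero below the bandgap Eg, and
    A*E0*C*(E-Eg)^2 / (((E^2-E0^2)^2 + C^2*E^2) * E) above it."""
    E = np.asarray(E, dtype=float)
    num = A * E0 * C * (E - Eg) ** 2
    den = ((E**2 - E0**2) ** 2 + C**2 * E**2) * np.maximum(E, 1e-12)
    return np.where(E > Eg, num / den, 0.0)

# Evaluate over the visible/near-IR range (photon energy in eV); the
# parameter values are illustrative placeholders, not the Sb2Se3 fit.
E = np.linspace(0.5, 3.0, 500)
eps2 = tauc_lorentz_eps2(E, A=100.0, E0=2.2, C=1.0, Eg=1.1)
# The real part eps1 follows from a Kramers-Kronig integral of eps2.
```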
For the AM state, the C_L is around 7 μm, as shown in the left column of Figure 1c. In this case, the effect of the hotspots on the nanosphere is independent between the lower and upper WGs. The nanoparticle can be stably trapped in the hotspots via the optical gradient force (F_g). The trapping potential well (U) exerted on the nanoparticle in the x-y plane, 10 nm above the top surface of the WG, possesses a valley-like energy shape, indicating stable trapping (the central column of Figure 1c). However, the diameter of the microsphere is close to the center-to-center distance between two WG modes; the microsphere momentarily trapped in the lower hotspot is influenced by the upper hotspot, resulting in an unstable trapping condition. It is easy for the microparticle to jump from the WG with the weak E-field to that with the strong E-field. The F_g from the upper hotspot attracts the microsphere and causes it to rotate by generating a torque M_z. The M_z can be calculated by integrating the Minkowski stress tensor T over the surface of the microsphere, as M = ∮_S r × (T · n̂) dS, where r represents the position vector of a point on the surface and S is an arbitrary surface that encloses the sphere. [50][51][52] A detailed analysis is given in Methods. Due to the slight rotation, the part of the microsphere temporarily trapped in the lower hotspot can shift into the upper hotspot. This eventually causes the microsphere to escape, because a stable trapping location is absent in the upper WG. Moreover, the microsphere in the upper WG is drawn to the next paired WGs owing to the narrow pitch of 1 μm between the paired WGs. Thereby, the energy landscape of U shows that there is no trapping location for the larger microsphere (see the right column of Figure 1c). Namely, although the U for the microsphere along the y-axis possesses a valley-like energy shape, the U along the x-axis cannot trap the microsphere. Herein, the optical scattering force F_s dominates the gradient force F_g, and the resultant force conveys the microsphere to the microchannel edge, where the flow stream flushes the microsphere away. Note that the Rayleigh approximation of F_s is only accurate when the radii of the target objects are smaller than 1/10 of the operating wavelength. [53,54] The radii of our two target spheres are 250 and 500 nm, respectively, much larger than 1/10 of the operating wavelength (λ = 1550 nm). Thus, F_s was not theoretically calculated. Upon switching the structural state from AM to CR, the C_L between adjacent hotspots becomes ≈25 μm, which is much larger than the diameter of the microsphere (d_micro = 1 μm), as presented in the left column of Figure 1d. The microsphere can be trapped in the hotspots because it is influenced by only one hotspot and does not interfere with the coupled hotspot. In this case, the U acting on the microparticle exceeds 10 k_B T in the x-y plane, stably trapping the microsphere, as shown in the right column of Figure 1d. Meanwhile, the small nanoparticle (d_nano = 500 nm) escapes from the paired WGs because the U acting on it is lower than 10 k_B T, as presented in the central column of Figure 1d. In summary, when the structural state of Sb2Se3 is AM, the paired WGs can efficiently catch the nanoparticle inside a single hotspot while releasing the microparticle. This size sieving of the particles is reversed by crystallizing the Sb2Se3 WGs.
The bandgaps of Sb2Se3 in the different structural states are almost the same, around a wavelength of 1.1 μm, [55] which is shorter than λ = 1.55 μm. Therefore, the bandgap of the Sb2Se3 should not affect the capturing process. 2.2. Optical Force Acting on the Micro-Nanoparticles Above the Sb2Se3 ONSWA. The force exerted on a polystyrene particle positioned in time-harmonic electromagnetic (EM) fields can be obtained via linear momentum conservation. This linear momentum can be either field or mechanical momentum, and the sum of the two is conserved. By illuminating the particle, momentum is transferred from the optical field to mechanical motion, leading to an optical force acting on the particle. Thus, the optical force is associated with the change of mechanical momentum (p) with time (dp/dt). The EM-field momentum flux in a linear medium of permeability μ and permittivity ε is given by the time-averaged Maxwell stress tensor [56,57] ⟨T⟩ = (1/2) Re[ ε E E* + μ H H* − (1/2)(ε|E|^2 + μ|H|^2) I ], where ⟨...⟩ denotes the time-average operation (1/T) ∫_0^T ... dt with T = 2π/ω, I is the identity matrix, and E E* and H H* are the outer products of the fields. The total time-averaged force (F_total) exerted on the sphere is calculated using the Maxwell stress tensor formalism and is expressed as F_total = ∮_s ⟨T⟩ · n̂ ds, where n̂ is the vector perpendicular to the surface and the integration is calculated over a closed surface s that surrounds the sphere. The EM fields are calculated on the surface of a square box surrounding the polystyrene particle. An optical force map in the x-y plane 10 nm above the Sb2Se3 ONSWA is numerically calculated to determine the trapping locations of both the nanoparticle and the microparticle. In Figure 2a, we show the 2D map of the force F_x acting on the nanoparticle (d_nano = 500 nm) above the paired WG in the AM state. The F_x switches between repelling and dragging forces in each WG along the light propagation direction. In the figure, the green region and black lines represent nearly zero F_x and the contour of F_x = 0, respectively. Figure 2b shows the map of the force along the y-direction, F_y, composed of the fluidic drag force F_y,drag and the optical force F_y,opt, where the map of F_y,opt is presented in Figure S1a, Supporting Information. The drag force can be expressed as [58] F_y,drag = 3πσdυ, (3) where σ is the viscosity of the buffer, υ the velocity of the flow, and d the diameter of the sphere. Herein, the trapping positions are close to the WG edge, where F_x = F_y = 0. This is different from single-WG trapping systems, in which the particles are trapped at locations along the central axis of the WG. To stably trap the particles, restoring optical forces are required on both sides of the trapping position (F_x = F_y = 0). A detailed description can be found in Figure S2, Supporting Information. In Figure 2c, the stable trapping positions of F_x = 0 and F_y = 0 are shown as red and blue lines, respectively. The black dot at the crossing of the red and blue lines is the final trapping place for the nanoparticle (see the central column of Figure 1c). In contrast, the AM ONSWA cannot offer stable trapping positions for the microsphere. In Figure 2d,e, we numerically compute the 2D force maps of F_x and F_y for the microsphere, and show the contours of F_x = 0 (red lines) and F_y = 0 (blue lines) in Figure 2f. The F_y,opt exerted on the microsphere with a diameter of d_micro = 1 μm is shown in Figure S1b, Supporting Information.
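Equation (3) and the 10 k_B T trapping criterion can be checked numerically; the minimal sketch below uses the viscosity of water quoted later in the text and the 5 μm/s flow velocity, and reproduces the ≈0.04 pN drag value stated in the next paragraph.

```python
import math

eta = 0.89e-3        # Pa*s, viscosity of water (value used later in the text)
v = 5e-6             # m/s, flow velocity quoted for the microsphere case
for d in (500e-9, 1e-6):                   # nanosphere and microsphere
    F = 3 * math.pi * eta * d * v          # Equation (3): Stokes drag
    print(f"d = {d*1e9:4.0f} nm -> F_drag = {F*1e12:.3f} pN")
# The d = 1 um case gives ~0.042 pN, matching the value quoted below.

kB, T = 1.380649e-23, 300.0
print(f"10 kB*T = {10*kB*T:.2e} J")        # trapping-well criterion, ~4.1e-20 J
```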
The F_y,drag acting on the microsphere is around 0.04 pN when the flow velocity is 5 μm/s. A smaller flow velocity may reduce the trapping performance due to the lower momentum that suppresses the pulling well of the evanescent field. [59,60] As observed, the contour of F_x = 0 does not intersect the contour of F_y = 0. This indicates that the AM ONSWA cannot provide stable trapping places for the microparticles (see the right column of Figure 1c). Upon transiting the structural state of Sb2Se3 from AM to CR, in Figure 2g,h we numerically simulate the 2D force maps of F_x and F_y acting on the nanoparticle (d_nano = 500 nm), presenting the contours of F_x = 0 and F_y = 0 in Figure 2i. The corresponding F_y,opt for the nanoparticle is shown in Figure S1c, Supporting Information. As observed, although an intersection occurs between the two contours, the U acting on the nanoparticle is less than 10 k_B T and cannot stably capture the nanoparticle (see the central column of Figure 1d). In contrast, as shown in Figure 2l, the CR ONSWA can produce a stable trapping position at F_x = F_y = 0 (the intersection of the contours of F_x = 0 and F_y = 0) for the microsphere (see the right column of Figure 1d). In Figure S3, Supporting Information, we investigated the effect of the flow rate on the trapping position of the target particle (d_nano = 500 nm) above the ONSWA in the AM state. The particle was stably trapped at a flow velocity of 5 μm/s (see Figure S3a, Supporting Information). On increasing the flow velocity to 8 μm/s, the stable trapping position changed from (−37, 0.3 μm) to (−38, 0.1 μm) (see Figure S3b, Supporting Information). This is because the higher flow velocity induces a larger Stokes drag force, which counteracts the optical force and, in turn, shifts the stable trapping location. [61] However, the stable trapping position was absent when the flow velocity was above 12 μm/s (see Figure S3c, Supporting Information). [Figure 2 caption: Distributions of F_x, F_y, and the contours of F_x = 0 (red lines) and F_y = 0 (blue lines) acting on the nanoparticle (d_nano = 500 nm) and on the microparticle (d_micro = 1 μm), 10 nm above the Sb2Se3-ONSWA in the AM state (panels a-f) and the CR state (panels g-l); zero-force contours are denoted in black.] 2.3. Optical Force Acting on the Micro-Nanoparticles Above the Sb2Se3-ONSWA. The particle locomotion, random Brownian motion, and optical forces from neighboring potential wells cause the microsphere (d_micro = 1 μm) to rotate in the Sb2Se3-ONSWA. In Figure 3a,b, we study the rotation-induced force F_y and torque M_z by placing the microsphere on the central line of the bottom WG (y = 0) at various locations along the x-axis.
For x < 2 μm, F_y is negative when the rotational angle θ (i.e., the angle between the WG and the central axis of the microsphere) is positive, and vice versa. This indicates that the microsphere is aligned inside the hotspot along the propagation direction of the incident light, similar to the behavior of a microsphere above the surface of a single WG. For x ≤ 2 μm and θ < 0°, F_y is always positive because the upper hotspot does not affect the microsphere. Nevertheless, when x > 2 μm and θ > 0°, the upper hotspot starts affecting the microsphere. Thus, a slight rotation may produce a large F_y pointing to the center of the upper hotspot, as presented in Figure 3a. On increasing θ above 75°, a large part of the microsphere coincides with the upper hotspot. The F_y exerted on the lower and upper parts of the microsphere then cancel each other, leading to a smaller resultant F_y. Consequently, F_y reaches its maximum at θ = 30° and decreases to nearly zero for θ > 70°. The large positive F_y at small θ makes the microsphere rotate from the lower hotspot to the upper one. Such a rotation releases the microsphere from the AM Sb2Se3-based ONSWA. The incident light can exert a torque M_z on the microsphere and thus rotate it around the z-axis. In Figure 3b, we simulate the M_z on the microsphere. Herein, the Minkowski stress tensor is used to obtain the distributed force dF on the surface of the microsphere in the light field, expressed as dF = (⟨T⟩ · n̂) dS. The M_z can subsequently be derived from the cross product between the position vector r and dF, integrated over the sphere surface: M_z = ∮_S [r × dF]_z. The surface integral of dF gives rise to a resultant force (F_sum) of F_x and F_y at the edge of the microsphere. The F_sum around the border can be mimicked by a force of identical magnitude at the center of the sphere together with the M_z. Thereby, the dynamics of the microsphere in the complex light field can be decomposed into a translation of the center of mass under F_sum and a self-rotation under M_z. In Figure S4, Supporting Information, we study the rotation-induced force F_x by placing the microsphere at x = 0 while changing its position along the y-axis. The magnitude of F_x is one order weaker than F_y; thus, the effect of F_x on the microsphere is ignored. In Figure 3c-f, we demonstrate four typical actions of the microsphere in the twisted light (light coupling between the two neighboring WGs). For −90° < θ ≤ 0° and x ≤ 3 μm, the M_z is positive (anticlockwise) because the gradient optical force pulls the microsphere and lines it up with the WG (Figure 3c). For 0° < θ < 30° and x > 3 μm, the microsphere rotates anticlockwise owing to the dragging gradient force from the upper WG (Figure 3d). For 30° ≤ θ < 90° and x > 3 μm, the microsphere rotates clockwise, driven by the dragging force of the upper WG (Figure 3e). Finally, for θ = 90°, the microsphere can propagate through the gaps between the coupled hotspots, where both F_y and M_z are weak (Figure 3f). We explicitly describe the simulation of the optical force and torque in Methods. The evanescent E-field appears at the top surface of the paired Sb2Se3 WGs. [20] Target particles at different locations along the z-axis can be attracted to the surface of the structure by F_g; thus, the optical forces were only simulated in the x-y plane. [8]
2.4. The Trajectory of a Polystyrene Micro-Nanoparticle Above the Sb2Se3-Based ONSWA. Taking into account the random Brownian motion of micro-nanoparticles in water, the stability of a particle must be studied. [62][63][64] The movement of a polystyrene particle is modeled by the Langevin equation [65] m d²x/dt² = F_x(x, y) − α dx/dt + N_x(t), m d²y/dt² = F_y(x, y) − α dy/dt + N_y(t), where x(t) and y(t) are the positions of the particle, F_x(x,y) and F_y(x,y) are the transverse forces, N_x(t) and N_y(t) are the stochastic noise terms that simulate random collisions of fluid molecules along the x- and y-axes, respectively, m is the mass of the sphere, and α = 3πη d_sphere is the drag coefficient (from Stokes' law for a spherical particle), with the viscosity of water η = 0.89 mPa·s and d_sphere the diameter of the polystyrene particle. The scaling constant for the stochastic noise term is given by sqrt(2 k_B T α), where T = 300 K and k_B is Boltzmann's constant. The simulation algorithm is performed for 10,000 time steps with a time step of 10 μs. To simplify the model, we do not consider the influence of the optical force along the z-axis (F_z) on the movement of the particle. In Figure 4a,b, we model the stabilities of the polystyrene nanosphere (d_nano = 500 nm) and microsphere (d_micro = 1 μm), respectively, by observing time sequences of the motions of the spheres placed 10 nm above the ONSWA with AM Sb2Se3. The spheres are traced in the x-y plane with nanometer accuracy. In Figure 4a, a 100 ms trajectory of the nanoparticle above the AM ONSWA is shown by the white solid line. Due to the positions of the hotspots in the ONSWA (see Figure 1c), the gradient force is generated along the +y axis (F_y > 0; see Figure 2b) and thus conveys the nanosphere toward the hotspot. The particle is stably trapped at the position of the pink circle at the end of 100 ms (Movie S1, Supporting Information). For the microsphere, however, as shown in Figure 4b, the optical scattering force is much larger than the gradient force, enabling the microsphere to pass over the surface of the ONSWA (Movie S2, Supporting Information). Upon transiting the structural state of the Sb2Se3 from AM to CR, the nanoparticle (d_nano = 500 nm) passes over the top surface of the CR ONSWA (see the white solid line), because the U exerted on the nanoparticle is smaller than 10 k_B T and thus cannot stably trap it (Movie S3, Supporting Information). On the contrary, in Figure 4d, the ONSWA can trap the microsphere (Movie S4, Supporting Information). Our proposed strategy may also be applicable to other active materials such as graphene. [66]
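A minimal version of the trajectory simulation can be sketched as follows. Because the momentum relaxation time m/α of a 1 μm sphere in water (~0.1 μs) is far shorter than the 10 μs time step, the sketch integrates the overdamped limit of the Langevin equation, and a harmonic restoring force with a hypothetical stiffness k stands in for the FDTD-computed optical force map.

```python
import numpy as np

rng = np.random.default_rng(0)

kB, T = 1.380649e-23, 300.0      # J/K, K
eta = 0.89e-3                    # Pa*s, viscosity of water (as in the text)
d = 1e-6                         # m, sphere diameter
alpha = 3 * np.pi * eta * d      # Stokes drag coefficient
k = 1e-7                         # N/m, toy trap stiffness (assumption)
dt, nsteps = 10e-6, 10_000       # 10 us step, 10,000 steps (as in the text)

pos = np.zeros((nsteps, 2))      # (x, y) trajectory, starting at the trap center
kick = np.sqrt(2 * kB * T * dt / alpha)      # thermal displacement per step
for m in range(1, nsteps):
    F = -k * pos[m - 1]                      # restoring force toward the hotspot
    pos[m] = pos[m - 1] + (F / alpha) * dt + kick * rng.standard_normal(2)

rms = np.sqrt((pos**2).sum(axis=1).mean())
print(f"rms excursion: {rms*1e6:.3f} um")    # equipartition: ~sqrt(kB*T/k) per axis
```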
Conclusions. Herein, we have expanded the possibility of using the Sb2Se3-based ONSWA to develop an optical sieving technique for multiple particles across scales ranging from 500 nm to 1 μm. By changing the structural state between AM and CR, our proposed ONSWA can actively modulate the sawtooth-like light fields. For the AM state, the ONSWA can capture the nanoparticle in each single hotspot. However, the microsphere is twisted into the other hotspots owing to the microsphere movement and the Brownian-motion-induced rotation, in which a large optical force and torque are generated to drag the microsphere away from the original hotspot, leading to an escape. For the CR state, the ONSWA can trap the microparticle inside a single hotspot due to the increased coupling length between neighboring hotspots. Yet, unstable trapping of the nanoparticle results, because the force-balance location of the nanoparticle inside one hotspot along the x-axis is suppressed by the force along the y-axis from the other coupled hotspot. Note that our experimental measurements show that the Sb2Se3 planar film is dynamically reconfigurable. This builds a solid basis for making the ONSWA selectively trap particles between the nanoscale and microscale within 100 ms by switching the Sb2Se3 phase between AM and CR. Flexible manipulation of the interactions of the coupled hotspots opens an avenue for observing collective phenomena of cross-scale entities and may be harnessed to provide a new twist on multifunctional object-manipulation optofluidic chips based on chalcogenide phase-change materials. Supporting Information is available from the Wiley Online Library or from the author.
Critical behavior of the Ashkin-Teller model with a line defect: a Monte Carlo study We study magnetic critical behavior in the Ashkin-Teller model with an asymmetric defect line. This system is represented by two Ising lattices of spins $\sigma$ and $\tau$ interacting through a four-spin coupling $\epsilon$. In addition, the couplings between $\sigma$-spins are modified along a particular line, whereas the couplings between $\tau$-spins are kept unaltered. This problem has been previously considered by means of analytical field-theoretical methods and by numerical techniques, with contradictory results. For $\epsilon>0$, field-theoretical calculations give a magnetic critical exponent for the $\sigma$-spins which depends on the defect strength only (it is independent of $\epsilon$), while the $\tau$-spin magnetization decays with the universal Ising value $1/8$. On the contrary, numerical computations based on the density matrix renormalization group (DMRG) give, for $\epsilon>0$, similar scaling behaviors for $\sigma$ and $\tau$ spins, which depend on both $\epsilon$ and the defect intensity. In this paper we revisit the problem by performing a direct Monte Carlo simulation. Our results agree well with the DMRG computations. We also discuss some possible sources of the disagreement between the numerical and analytical results. INTRODUCTION. Despite having been massively studied for decades, two-dimensional lattice spin models keep attracting great interest as a toolbox for the understanding of phase transitions and critical phenomena. In particular, when defects are present and systems lose translational invariance, critical behavior becomes nontrivial and physical properties on the defects can differ from those in the bulk [1]. In addition, the critical properties of these models are significant not only from an academic point of view, but are also relevant to fields as diverse as biology [2] and the physics of cuprates in condensed matter systems [3]. Some of these models, such as the Ashkin-Teller (AT) [4] and the eight-vertex model [5], have very rich phase diagrams, which feature partially ordered intermediate phases and various first-order and continuous phase transitions, and exhibit, as a salient feature, nonuniversal critical behavior, i.e., the critical exponents of certain operators are continuous functions of the parameters of the Hamiltonian. More recently, many studies have focused on the quantum version of the AT model as a prototypical model for analyzing the efficacy of various sophisticated renormalization schemes [6,7]. A particularly interesting and fertile arena is the study of the role played by defects on local characteristics of these types of models, such as the local magnetization and the correlation functions, though much less is known in these inhomogeneous cases. In the paradigmatic Ising lattice with a line defect, the critical exponent of the magnetization depends continuously on the defect strength [8,9], whereas the scaling index of the energy density at the defect line remains unchanged. This problem was considered in a more complex system, the AT lattice with a line defect, in [10]. Let us recall that the AT lattice can be viewed as two Ising lattices with spin variables $\sigma$ and $\tau$, respectively, interacting through their corresponding energy densities. Thus, in the absence of this interaction one has two independent Ising systems.
In [10] an asymmetric line defect, affecting only one type of spins (to be definite, let us say the $\sigma$ spins), was introduced, and the critical behavior of spin-spin correlations was determined, through field-theoretical methods, for both $\sigma$ and $\tau$ spins. The magnetic critical exponents were found to be independent of the coupling between the Ising models. More specifically, $\sigma-\sigma$ correlations decay as in Bariev's model [8], whereas $\tau-\tau$ correlations behave as in the usual (homogeneous) Ising model, with the universal $1/4$ exponent. These results stimulated a numerical study of the local critical behavior at an asymmetric defect in the AT model [11]. By using density matrix renormalization, in the region of parameters where the numerical computation can be compared with the field-theoretical results, these authors found that the magnetization exponents of both $\sigma$ and $\tau$ spins depend on the interaction between the Ising spins and on the defect strength. These conclusions are in clear contradiction with the field-theoretical calculations of [10]. The discrepancies described above call for revisiting the critical properties of the AT model with a defective line. Here we report on a numerical study of the AT model with a line defect over the self-dual critical line with nonuniversal exponents that separates the ferromagnetically ordered phase from the completely disordered one. We make use of the simple Metropolis algorithm to compute the critical exponents. Our results agree well with the DMRG study of [11]. THE MODEL. The AT model can be represented as two superimposed copies of the square-lattice Ising model coupled by means of a four-spin interaction, $H = -\sum_{\langle r,r' \rangle} \left[ J \left( \sigma_r \sigma_{r'} + \tau_r \tau_{r'} \right) + J_4\, \sigma_r \sigma_{r'} \tau_r \tau_{r'} \right]$, where $r = (i, j)$ labels the lattice sites, $\sigma_r$ and $\tau_r$ represent the two Ising spins, $\langle \ldots \rangle$ indicates a sum over nearest neighbors, and $J_4$ represents the four-spin coupling. We introduce an asymmetric line defect located at $j = 0$ by modifying the coupling between $\sigma$ spins over a single line, so that the effective coupling over the defective line is given by $J + J_l$. Periodic boundary conditions are assumed in both directions. As is known, the phase diagram of the AT model is very rich. We shall be interested in the critical line defined by the self-duality equation $\sinh 2K = e^{-2K_4}$, where $K = J/k_B T$ and $K_4 = J_4/k_B T$, $T$ being the temperature, which separates the ferromagnetic and paramagnetic phases for $K < K_4$ (for $K > K_4$ an intermediate, partially ordered phase appears). Over this line the clean system exhibits nonuniversal critical behavior. We performed all calculations over this critical line. We shall consider the critical behavior of the spin correlations on the defect line, $\langle \sigma_{(i,0)} \sigma_{(i',0)} \rangle \sim |i - i'|^{-2x_\sigma}$ and $\langle \tau_{(i,0)} \tau_{(i',0)} \rangle \sim |i - i'|^{-2x_\tau}$. In the absence of the line defect, these exponents take the universal values $x_\sigma = x_\tau = 1/8$ [12,13]. Another limit that deserves attention, since it will be useful as a check, corresponds to the case in which the defect line is present but the Ising lattices are decoupled, i.e., $\epsilon = J_4/J = 0$. In this case, the system reduces to two decoupled Ising models, one of them (the one identified with the $\sigma$ spins) defective. The Ising model with a defective line was studied by Bariev [8], who found an exponent $x_\sigma$ that varies continuously with $K_l = J_l/k_B T$. The $\tau$ spins become independent, forming a clean Ising plane, and therefore $x_\tau = 1/8$. NUMERICAL RESULTS. We performed Monte Carlo simulations on square lattices of size $L \times L$ with periodic boundary conditions and considered values up to $L = 128$.
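A toy version of the Metropolis sampler is sketched below for concreteness. The couplings are illustrative (not tuned to the self-dual line), the defect is applied to the σ bonds lying along the row j = 0, which is one reading of the defect geometry, and only the single-spin update rule and the defect-line magnetization measurement are meant to be taken literally.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 32
K, K4, Kl = 0.35, 0.05, 0.5        # reduced couplings (illustrative values)
sigma = rng.choice((-1, 1), size=(L, L))
tau = rng.choice((-1, 1), size=(L, L))

def ksig(j, j2):
    """Reduced sigma-sigma coupling of a bond between columns j and j2:
    K + Kl on the bonds lying along the defect row j = 0 (an assumption
    about the defect geometry), K elsewhere."""
    return K + Kl if (j == 0 and j2 == 0) else K

def sweep():
    for _ in range(2 * L * L):
        i, j = rng.integers(L, size=2)
        if rng.random() < 0.5:               # update a sigma spin
            s, t, kb = sigma, tau, ksig
        else:                                # update a tau spin (clean bonds)
            s, t, kb = tau, sigma, lambda j, j2: K
        dlogw = 0.0                          # change of -beta*H under a flip
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            i2, j2 = (i + di) % L, (j + dj) % L
            keff = kb(j, j2) + K4 * t[i, j] * t[i2, j2]
            dlogw += -2.0 * keff * s[i, j] * s[i2, j2]
        if np.log(rng.random()) < dlogw:     # Metropolis acceptance
            s[i, j] = -s[i, j]

for _ in range(500):                         # equilibration (short, toy run)
    sweep()
msig = mtau = 0.0
nmeas = 500
for _ in range(nmeas):                       # measure defect-line magnetizations
    sweep()
    msig += abs(sigma[:, 0].mean()) / nmeas
    mtau += abs(tau[:, 0].mean()) / nmeas
print(f"L={L}: m_sigma(defect)={msig:.3f}, m_tau(defect)={mtau:.3f}")
```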
We analyzed several values of the defect intensity and of the coupling between the Ising planes. Instead of working with correlation functions, we computed the size dependence of the magnetization of the $\sigma$ and $\tau$ spins on the defect line. According to finite-size scaling theory, these magnetizations behave asymptotically as $m_\sigma(L) \sim L^{-x_m^\sigma}$ and $m_\tau(L) \sim L^{-x_m^\tau}$. For small system sizes, the behavior of the magnetization departs from the power law, as can be appreciated in Figs. 2 and 4. However, for larger systems, typically for $L > 64$, the power law is restored, and fits of the size-dependent magnetization in those regions allow the extraction of the exponents. To begin with, we consider the decoupled model and compute the critical local magnetization of the $\sigma$ Ising plane having a ladder defect [1]. In this case, $K_4 = 0$ implies that $K = K_c^{Ising}$, with $\sinh 2K_c^{Ising} = 1$. Fig. 1 shows the magnetization as a function of $L$ (on a logarithmic scale), and we observe that the size dependence of the exponents is negligible. The finite-size magnetization exponents are plotted in the inset as a function of the defect intensity, where we observe complete agreement with Bariev's analytic result [8]. The critical exponent for the $\tau$ spins is independent of $K_l$ and takes the value $x_m^\tau = 1/8$, as expected. When the coupling between $\sigma$ and $\tau$ spins is nonvanishing, the defect magnetization does not behave exactly as a power law; in other words, the exponents $x_m^\sigma$ and $x_m^\tau$ keep a residual $L$ dependence. This can be appreciated in Fig. 2, where we show the defect magnetization of $\sigma$ and $\tau$ spins as a function of $L$ for different values of $K_l$ and $\epsilon$. One clearly observes deviations from power laws. Still, for values of $L$ larger than 64 (for $\epsilon = 0.75$), a linear dependence on a logarithmic scale is approached, and we use fits in this range to extract the exponents. For smaller values of $\epsilon$, the power-law behavior starts at smaller values of $L$. We observe that the slope at large sizes increases with the value of $K_l$. The curve corresponding to the clean system, $K_l = 0$, has in all cases a slope close to $1/8$ and shows very small deviations from that value. The behavior of the critical exponents with the intensity of the defect is shown in Fig. 3 for three values of the coupling $\epsilon$. Deviations from the decoupled case increase with $\epsilon$ and are stronger for $x_l^\sigma$. Notice that for large values of $\epsilon$ the difference between $x_l^\sigma$ and $x_l^\tau$ is significantly reduced and seems to vanish for very large coupling. For positive $K_l$, the exponents take lower values than in the clean plane and tend to zero for large positive defect intensities. This can be explained in terms of the phase transition taking place in the system and its effects on the defect line. The spins lying on this line are coupled among themselves by an effective constant $K_{eff} = K + K_l$ that is stronger than the coupling with the spins in the bulk. Thus, there is a tendency on the defect to order for values of $K$ smaller than the critical bulk value $K_c$, and the defective line already finds itself in a sort of "quasi-ordered" state while the bulk is still in transit from disorder to order. The local order is reflected in a smaller critical exponent. By the same reasoning, when the defect intensity is negative, the effective coupling among spins on the defective line is smaller than the coupling in the bulk, and therefore the defective line is still disordered, or in transition to order, when $K$ equals the critical bulk value $K_c$. The finite-size magnetization curves in this region of the phase diagram are shown in Fig. 4.
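The exponent extraction just described amounts to a linear fit on a log-log scale restricted to the larger sizes; a sketch with placeholder (not measured) magnetization values:

```python
import numpy as np

# Log-log fit extracting x from m(L) ~ L^(-x), using only L >= 64, where the
# text says the power law is restored. The m values here are placeholders.
L = np.array([16, 32, 48, 64, 96, 128], dtype=float)
m = np.array([0.62, 0.56, 0.53, 0.505, 0.480, 0.462])   # hypothetical data

mask = L >= 64
slope, _ = np.polyfit(np.log(L[mask]), np.log(m[mask]), 1)
print(f"estimated exponent x = {-slope:.3f}")            # m ~ L^(-x)
```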
The calculated exponents are shown in Fig. 5. We observe in this case that the behavior of the exponents for $\sigma$ and $\tau$ spins is different. The defect magnetization of the $\sigma$ spins follows the same behavior as in the positive-coupling case and decreases with the intensity of the defect. On the other hand, the behavior of the magnetization of the $\tau$ spins is the reverse: it monotonically increases with the defect magnitude and tends to zero for $K_l$ large and negative. CONCLUSIONS. In this paper we have reconsidered the computation of the magnetic critical exponents in the Ashkin-Teller model with an asymmetric line defect. The main motivation for this analysis is the discrepancy between the analytical results of Ref. [10] and the numerical findings obtained in [11]. Our results agree well with the latter work. In particular, for four-spin coupling $\epsilon > 0$, which corresponds to the case studied in [10], we obtain $\sigma$ critical exponents that depend on both $\epsilon$ and $K_l$. Moreover, the form of this dependence is analogous to the one presented in [11] (see Fig. 3). On the other hand, the analytical calculation, based on functional integrals, gives critical exponents that are independent of $\epsilon$, i.e., depend only on $K_l$. In view of these results, it becomes natural to ask about the reason for this disagreement. Let us recall that the method employed in [10] is based on the evaluation of a fermionic determinant, which involves a regularization procedure. If one uses a gauge-invariant prescription, as done in [10], then no $\epsilon$-dependent contribution to the determinant appears. And this is so because it is precisely the four-spin coupling in the Hamiltonian that breaks gauge invariance. This means that when performing the path-integral computation, a general regularization containing $\epsilon$-dependent counterterms should be used, instead of the gauge-invariant choice made in [10]. Following this idea, one could reconcile the numerical and analytical results for this problem. The details of this procedure will be worked out elsewhere.
In-Air Continuous Writing Using UWB Impulse Radar Sensors We developed an impulse radio ultra-wideband (IR-UWB) radar-based system that can recognize alphanumeric characters in midair without the need for any handheld device. The hardware consists of four IR-UWB radar sensors set up in a rectangular geometry. Writing a single character in midair results in artifacts that make some characters look similar on a position-trajectory-based (x, y) plane, which makes them difficult to classify. Thus, we developed an algorithm that transforms 2D coordinate image data into trigonometric ratios (i.e., tangents) and plots them against the time axis to obtain unique images for training a convolutional neural network. An extended Kalman filter is used to obtain the 2D trajectories of hand motions. To evaluate our proposed method, we first applied it to characters that may be written in midair very simply without creating artifacts and compared its performance with that of a state-of-the-art digit classification algorithm. Then, we considered combining characters written in midair with and without artifacts. After the individual character recognition, we combined the characters into words. We defined a specific marker based on an energy threshold to detect the start and end of a character for midair writing. The energy level was found to change drastically when the hand is pulled in and out of the radar plane. The proposed method was found to outperform the current state of the art at character classification when artifacts are present in the images. I. INTRODUCTION. Gesture recognition allows a user to comfortably interact with a computer or other consumer electronic device for entertainment and/or communication without physical contact or voice commands. Different sensors have been considered for gesture recognition, such as cameras [1], gloves [2], and radiofrequency identification (RFID) [3]. However, sensors that are attached to the body [4] are often uncomfortable for the user, and vision-based sensors [5] suffer from privacy issues and do not work efficiently in dark or extremely bright environments. Radar-based gesture recognition has no privacy issues and can work well in environments with various levels of illumination [6]. The impulse radio ultra-wideband (IR-UWB) approach is characterized by the emission of extremely short pulses with very low power and no harmful effects on the human body. Thus, it can use a large part of the radio spectrum without disturbing the narrowband systems that already operate in different frequency bands. Other benefits of this approach are its robustness in harsh environments, high-precision ranging, low power consumption, and high penetration capability [7]. IR-UWB has been used in many applications, such as multi-human detection [8], people counting [9], vital sign monitoring [6], [10]-[15], 3D positioning [16], gesture recognition [6], [17], [18], human-computer interaction for disabled people [19], and digital menu board implementation [20]. Although some studies have considered radar-based gesture recognition [21], [22], they used raw data such as spectrograms. Leem et al.
[23] used an IR-UWB radar sensor and hand trajectories instead of raw data to recognize digits; however, they only considered simply written numeric characters that did not result in any artifacts, unlike alphabetic writing, and used an already available handwritten dataset from the image processing field to train their convolutional neural network (CNN). However, midair alphabetic writing in different styles results in different artifacts, because these characters are written in a continuous fashion, which makes the resulting trajectories differ from those of characters written with a pen. Because of the artifacts, some characters may produce similar patterns that make them difficult to distinguish (e.g., ''5'' vs. ''6'' or ''a'' vs. ''b'') when only the position trajectory is considered, which reduces the overall recognition accuracy. In this study, we incorporated temporal information between radar pulses (i.e., slow time in the radar literature) with 2D localization information to get the real-time trajectory and writing style for a particular alphanumeric character. Therefore, even if some characters have similar shapes on a position-trajectory-based (x, y) image, they produce different patterns when the temporal information is included in the (x, y, t) image. II. PROBLEM STATEMENT AND RELATED WORK. In this study, we considered English alphanumeric characters for in-air writing. Some characters cannot be written continuously on paper without lifting the pen up and then down (e.g., ''X,'' ''F''). In the case of midair alphabetic writing, however, the tracking algorithm continuously monitors the motion of the hand, which results in artifacts. A previous issue with radar-based gesture recognition using deep learning was that the raw data would change abruptly as the orientation or distance of the hand from the radar sensor changed, which reduced the accuracy [22]. The current state of the art of radar-based in-air handwriting [23] solved this problem by using the trajectory of the hand instead of raw data and then employing a CNN for classification. However, Leem et al. [23] only considered the numeric digits 0-9 and did not discuss artifacts that may occur during in-air writing. Writing complex alphabetic characters may cause artifacts where the real-time trajectory differs from the original character written with a pen on paper. Fig. 1 gives two examples: the characters ''X'' and ''F.'' The black lines show the trajectory that is the same as the character written on paper, while the red lines show the trajectories that result in artifacts (bc in Fig. 1(a) and cd in Fig. 1(b)). In this study, we examined how some of these artifacts generate similar patterns for different characters and thus reduce the classification accuracy. The main contributions of this work are as follows. It is the first to address the problem of artifacts that occur during midair writing using radar sensors. It is the first study on continuous in-air writing using radar sensors. In addition, we optimized our own CNN for radar-based image classification, which has a simpler structure than widely used pre-trained CNNs. We verified our results through the leave-one-person-out cross-validation (LOPO-CV) scheme, where one user is excluded from the training data. The objectives of this study were as follows: 1. To solve the problem of artifacts related to in-air character writing. 2. To consider continuous character writing.
Previous studies only classified individual characters, but in this study we used an energy threshold algorithm to segment the stream of radar data into blocks and then applied localization and classification algorithms to detect individual characters. III. PROPOSED METHOD FOR CHARACTER RECOGNITION. We used a setup consisting of four radar sensors, placed as shown in Fig. 2. Characters are written by hand on the plane set up by the four sensors. We used four sensors rather than three because each sensor had a narrow beam width (around 60°). Covering the whole plane with only three sensors was difficult and led to low accuracy, because the hand gestures sometimes did not occur within the beam widths of the transceivers, which reduced the radar cross-section (RCS) values. Using four sensors improved the recognition accuracy because of the diversity effect. Note that only one character was written on the plane at a time. The writing was continuous in that one character was followed by another, but they shared the same space. As stated above, the two main objectives of this study were to classify characters individually and to detect the exact intervals within which characters are written. Fig. 3 shows the block diagram of our proposed method for detecting continuous handwriting. After the raw data, which are in fact the signal reflected from the hand and the background environment, are obtained from the radar sensors, the static clutter due to the background signal is removed. Then, the index of the maximum-magnitude sample is identified for each slow-time signal component. This process is repeated over the whole slow-time duration of the gesture. An EKF with a median filter is used to get the position trajectory of the hand during a gesture. Since hand tracking using the trilateration technique is a nonlinear problem, the EKF gives better results than the classical KF. A position-velocity (PV) model is used to model the hand motion. The median filter is used to remove outlier values before the EKF step. After the trajectory is determined, the tangent ratio of the (x, y) data is computed and plotted against the time axis. The main reason for using the trigonometric ratio instead of the (x, y) coordinate data is that we can easily plot the ratio along slow time without adding an additional axis. The resulting plot of the trigonometric ratio against slow time contains the writing-style information, which improves the classification accuracy for characters with artifacts. The stored images are then processed to be compatible with the CNN, which is used to classify the pattern of each gesture corresponding to a specific character. We used a simple architecture for the CNN because the images are not very complex. We fine-tuned the hyperparameters of the CNN structure to ensure fast and accurate character recognition. Because our focus was on continuous writing, we also developed a technique for detecting characters from a continuous stream that uses a marker for the start and end of individual characters. This technique is based on the principle that, if the user's hand is inside the plane of the radar sensors, its RCS will be greater (i.e., higher energy), while a hand position outside the plane will result in a smaller RCS (i.e., lower energy). Hence, an energy threshold can be set during the training period, as sketched below. Each step is discussed in detail in the following sections.
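A minimal sketch of such an energy-threshold segmenter is given below; the frame layout and the helper name are assumptions, and the threshold itself would come from the calibration/training period mentioned above.

```python
import numpy as np

def segment_characters(frames, threshold, min_len=10):
    """Split a stream of clutter-removed radar frames (slow time x fast time)
    into per-character blocks. A frame whose energy exceeds the threshold is
    taken to mean the hand is inside the radar plane; the threshold itself
    would be set during the training/calibration period."""
    energy = np.sum(np.abs(frames) ** 2, axis=1)   # energy per slow-time frame
    inside = energy > threshold
    blocks, start = [], None
    for m, flag in enumerate(inside):
        if flag and start is None:
            start = m                              # character start marker
        elif not flag and start is not None:
            if m - start >= min_len:               # ignore spurious blips
                blocks.append((start, m))          # character end marker
            start = None
    if start is not None and len(inside) - start >= min_len:
        blocks.append((start, len(inside)))
    return blocks
```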
A. CLUTTER REMOVAL The signal reflected from the hand contains information on the gesture as well as the background. We used a background subtraction filter to remove unwanted echoes (i.e., clutter) [24]. The simple loopback filter is represented as c m (n) = αc m−1 (n) + (1 − α)r m (n) and s m (n) = r m (n) − c m (n), where m is the slow time index, n is the fast time index, r m (n) is the received signal, c m (n) is the estimated clutter signal, s m (n) is the signal from which the clutter signal is removed, and α is the weighting constant that controls the sensitivity of the clutter removal process. We set α to 0.85 in our experiments. Fig. 4 shows the signal before and after clutter removal. The normalized values of the signal amplitude in the fast time range (i.e., within a radar pulse) are shown for easy comparison. The signal before clutter removal is represented by a dotted red line and initially had higher values (samples 1-25), which indicates the clutter signal. The signal after clutter removal is represented by a solid blue line, where the main signal due to the hand gesture is amplified around sample 48. B. POSITIONING WITH THE EXTENDED KALMAN FILTER The input signals from the four radar sensors are represented by r 1 (n), r 2 (n), r 3 (n), and r 4 (n), respectively. The clutter-free signals s 1 (n), s 2 (n), s 3 (n), and s 4 (n) are obtained with the background subtraction filter described in Section III-A. We used the time-of-arrival (TOA) of the hand with respect to each radar sensor as the index value of the maximum-magnitude sample in fast time. After the TOA is estimated for the four radar sensors, we use the EKF to track the hand in midair, which we implemented with a PV model. The detailed algorithm for EKF-based positioning using multiple sensors is given by Khan et al. [25]. The variables for the state space representation of the EKF are defined as follows. The state vector for the PV model is s k = [x, v x , y, v y ] T . The state transition matrix is F = [1 ∆t 0 0; 0 1 0 0; 0 0 1 ∆t; 0 0 0 1], where ∆t is the pulse repetition interval. The observation vector z k for 2D space is z k = [d 1 , d 2 , d 3 , d 4 ] T . The relation between the distances and coordinates of the target and radar sensors is given by d i = ((x − x i ) 2 + (y − y i ) 2 ) 1/2 , where d i is the distance from the i th radar sensor to the hand and (x i , y i ) is the position of the radar sensor. The objective is to estimate the hand position (x, y) from the noisy observation. Because the measurement model is nonlinear, it needs to be linearized with the Jacobian matrix H k , whose i th row is [(x − x i )/d i , 0, (y − y i )/d i , 0]. Applying the EKF to the TOA-derived distance data yields the trajectory of the hand motion for in-air writing. Fig. 5 shows the localization results for the digit ''6'' and character ''b.'' These characters make similar 2D patterns on the (x, y) plane that are difficult to classify using only x, y coordinate data. Although the patterns look similar, they are created differently. The digit ''6'' is usually drawn with the bottom circle counterclockwise in the order shown by the green arrows in Fig. 5(a), while the character ''b'' is drawn with the bottom circle clockwise in the order shown by the green arrows in Fig. 5(b). In other words, the trajectories are drawn in different orders. With radar, trajectories are obtained sequentially according to time (i.e., the slow time index), so it is easy to determine in what order trajectories are written. Based on these radar properties, two characters that look similar but differ in the order in which they are written can be differentiated. A sketch of this EKF-based positioning step is given below.
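A minimal numpy sketch of the positioning step, assuming a constant-velocity PV model and the range Jacobian given above. The sensor coordinates, time step, and noise covariances are placeholders (not taken from the paper), and the process noise is simplified to a scaled identity.

```python
import numpy as np

SENSORS = np.array([[0.0, 0.0], [0.6, 0.0], [0.0, 0.6], [0.6, 0.6]])  # assumed (x_i, y_i)

def ekf_track(distances, dt=0.01, q_var=1e-3, r_var=1e-4):
    # distances: (T, 4) array of TOA-derived ranges to the four sensors.
    F = np.array([[1.0, dt, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, dt],
                  [0.0, 0.0, 0.0, 1.0]])   # PV-model state transition
    Q = q_var * np.eye(4)                  # simplified process noise
    R = r_var * np.eye(4)                  # range-measurement noise
    s = np.array([0.3, 0.0, 0.3, 0.0])     # state [x, v_x, y, v_y]
    P = np.eye(4)
    track = []
    for z in distances:
        s = F @ s                          # predict
        P = F @ P @ F.T + Q
        diffs = s[[0, 2]] - SENSORS        # (4, 2): hand position minus sensor positions
        d = np.linalg.norm(diffs, axis=1)  # predicted ranges h(s)
        H = np.zeros((4, 4))
        H[:, 0] = diffs[:, 0] / d          # partial of d_i with respect to x
        H[:, 2] = diffs[:, 1] / d          # partial of d_i with respect to y
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        s = s + K @ (z - d)                # update with the range innovation
        P = (np.eye(4) - K @ H) @ P
        track.append(s[[0, 2]].copy())
    return np.array(track)                 # (T, 2) trajectory of (x, y)
```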
We constructed images by using our proposed method given in Section III-C and plotted them in two different ways: the x coordinate vs slow time and the y coordinate vs slow time. Fig. 6 indicates a clear difference between the images even to the naked eye. The patterns differ especially after sample 35, which represents the bottom portion of these characters being written. Based on our observations, we developed an image transformation method that includes both the (x, y) coordinates and time (t) of the positioning data. In the following section, we explain the image transformation method in detail with an example. C. IMAGE TRANSFORMATION FROM (X, Y) TO (Y/X, T) We need to create an image that uses both the positioning data (i.e., x and y coordinates) and the slow time data. However, the three variables cannot be plotted simultaneously on a 2D image, which has only two axes. A three-dimensional representation would also be inefficient, because the trajectory occupies only a sparse set of points within the data volume. Instead, we use the tangent angle transformation method: we take the ratio of the y and x coordinates and plot it against slow time to obtain a 2D image that incorporates all three variables with a unique shape for each character. For practical application, we do not want to restrict the user to drawing characters in a specific area; characters may be drawn anywhere on the virtual plane. Therefore, we first cancel the effect of the shift in distance from the origin in the horizontal and vertical directions by subtracting the mean horizontal and vertical values from each horizontal and vertical value. This can cause some values to become negative, so we add the absolute minimum value to both axes to shift the character shape into the positive quadrant. Then, we find the ratio of the vertical and horizontal axis values and plot it against slow time to get the transformed image. Because humans naturally cannot control the exact speed and duration of a specific gesture, we cancel the effect of the writing speed by resizing the resultant image to a constant size of 100 × 100 pixels; without resizing, slow writing results in a larger image size and vice versa. The steps of the algorithm (Algorithm 1) follow this procedure: mean subtraction, shifting to the positive quadrant, taking the y/x ratio against slow time, and resizing to 100 × 100 pixels. Fig. 7(a) shows the initial image obtained from the (x, y) coordinate data for the character ''T.'' Fig. 7(b) shows the image after the DC removal step is applied to nullify the distance shift effect. Fig. 7(c) shows the image after the normalization step, and Fig. 7(d) shows the image transformed to the tangent ratio vs slow time. After the image is constructed, we use a CNN classifier to extract the features from the images and train the network, as discussed in the next section. A sketch of the transformation is given below.
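The sketch below implements the transformation just described. The rasterization details (how the ratio curve is drawn into the 100 × 100 image) are an assumption, since only the plotting and resizing steps are specified.

```python
import numpy as np

def ratio_image(track, size=100, eps=1e-9):
    # track: (T, 2) array of (x, y) positions from the EKF step.
    x, y = track[:, 0].astype(float), track[:, 1].astype(float)
    x -= x.mean(); y -= y.mean()            # cancel the shift from the origin (DC removal)
    x += abs(x.min()); y += abs(y.min())    # move the shape into the positive quadrant
    ratio = y / (x + eps)                   # tangent ratio y/x of each sample
    # Rasterize ratio vs slow-time index onto a size x size binary image.
    img = np.zeros((size, size), dtype=np.uint8)
    t = np.linspace(0, size - 1, len(ratio)).astype(int)   # resize in time
    r = ratio - ratio.min()
    r = (r / (r.max() + eps) * (size - 1)).astype(int)     # resize in amplitude
    img[size - 1 - r, t] = 1                # flip so larger ratios appear higher
    return img
```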
D. GESTURE CLASSIFICATION WITH A CNN We used a CNN to classify the image patterns. CNNs are extensively used as a deep learning technique that mimics the human vision system [26]. A CNN consists of convolutional, pooling, and fully connected layers. In convolutional layers, the key features of the input image are extracted by a convolutional filter. These layers have several feature extraction filters, and each filter performs a convolution operation while sliding over the input image to generate a feature map. The first convolutional layer extracts partial features such as the edge components of the input image, and later convolutional layers extract global features [27]. Pooling layers reduce the total data size by subsampling operations. They reduce the number of weights and biases to be optimized so that the CNN can be trained without overfitting. Pooling methods include max pooling to select the maximum value, median pooling to select the median value, and mean pooling to select the mean value. In fully connected layers, the input image is classified through the output of the last convolutional layer and a deep neural network. In the flattening process, the output data of the convolutional layer are converted into one-dimensional data, which are input to the deep neural network. The output of the deep neural network is applied to the softmax layer to calculate the probability that the input image belongs to each category. Fig. 8 shows the CNN structure used in this study to extract the optimal features from the hand gesture image pattern. The CNN structure of the proposed method consists of five convolutional layers and four pooling layers. The number of convolutional layers and the size of the convolutional filter were optimized through trial and error to achieve the desired accuracy. For example, because the accuracy was highest with five convolutional layers and did not improve with additional layers, the number of convolutional layers was set to five. The rest of the hyperparameters were determined in a similar manner. The rectified linear unit (ReLU) f (x) = max(0, x) was used as the activation function because it performs better than the tanh or sigmoid functions [28]. In addition, the max pooling technique was used because recent studies have demonstrated its excellent performance compared to other pooling techniques such as median pooling and mean pooling. The CNN was trained with the backpropagation algorithm, and the parameters were updated through stochastic gradient descent with momentum [29]. The initial values of the weights were set according to a normal distribution with a mean of 0 and standard deviation of 0.01, and the initial bias was set to 0. A sketch of an architecture with this shape is given below.
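A PyTorch sketch of a network matching the description above (five convolutional layers, four max-pooling layers, ReLU activations, a softmax output used via the cross-entropy loss, SGD with momentum, and normal(0, 0.01) weight initialization). The filter counts, kernel size, and learning rate are assumptions, since the exact values of Fig. 8 are not given here.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    # Five conv layers and four max-pool layers; input is a 1 x 100 x 100 image.
    def __init__(self, n_classes=10):
        super().__init__()
        chans = [1, 16, 32, 32, 64, 64]                    # filter counts are assumptions
        layers = []
        for i in range(5):
            layers += [nn.Conv2d(chans[i], chans[i + 1], 3, padding=1), nn.ReLU()]
            if i < 4:
                layers.append(nn.MaxPool2d(2))             # 100 -> 50 -> 25 -> 12 -> 6
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(64 * 6 * 6, n_classes)
        for m in self.modules():                           # normal(0, 0.01) init, zero bias
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.normal_(m.weight, mean=0.0, std=0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        # Returns logits; the softmax is applied inside nn.CrossEntropyLoss during training.
        return self.head(self.features(x).flatten(1))

model = GestureCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```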
E. CHARACTER INTERVAL SEGMENTATION ACCORDING TO THE SIGNAL ENERGY Finding the start and end of a character is very important. Because of the narrow beam width of the antenna, the signal magnitude drops abruptly when the hand is taken outside the writing plane. Thus, we separated characters according to the signal magnitude by taking the hand in and out of the writing plane after a character was finished and before starting the next one. Algorithm 2 presents the steps in detail. Because this method depends on the reflected signal energy, it is specific to radar sensors. It provides good performance and yet is a very simple technique for separating characters. Fig. 9 shows some characters written in a continuous fashion. The interval for each character was identified with Algorithm 2 based on the energy reflected from the hand. Samples 146-228 showed a higher energy level, which indicates when the hand was moving in the writing plane, while the other samples showed low energy values. Hence, the gesture interval was accurately identified with the algorithm. A sketch of this threshold-based segmentation is given below.
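A sketch of the thresholding idea behind Algorithm 2. The exact algorithm listing is not reproduced here, so the minimum-length filtering used to drop spurious blips is an added assumption.

```python
import numpy as np

def segment_characters(clean, threshold, min_len=10):
    # clean: clutter-removed stream of shape (slow_time, fast_time).
    # A pulse is 'active' when its reflected energy exceeds the threshold learned
    # during training; consecutive active pulses form one character interval.
    energy = (np.abs(clean) ** 2).sum(axis=1)     # per-pulse reflected energy
    active = energy > threshold
    intervals, start = [], None
    for m, flag in enumerate(active):
        if flag and start is None:
            start = m
        elif not flag and start is not None:
            if m - start >= min_len:              # drop spurious short blips (assumption)
                intervals.append((start, m - 1))
            start = None
    if start is not None:
        intervals.append((start, len(active) - 1))
    return intervals
```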
IV. EXPERIMENTAL RESULTS AND DISCUSSION We performed experiments to verify the effectiveness of the proposed method at character classification. A. HARDWARE SETUP We placed four IR-UWB radar sensors at fixed locations as shown in Fig. 2 to make a virtual plane for in-air handwriting. As discussed previously, we used four radar sensors because the transceivers had a narrow beam width (around 65°) that made it difficult to cover the whole plane with only two or three radar sensors. Using four sensors improved the recognition accuracy. Fig. 10 shows the Xethru X4 (Novelda, Norway) IR-UWB radar module used in this study. Table 1 gives the parameters of the radar sensors. B. CLUTTER REMOVAL RESULTS Fig. 11 shows the clutter removal results for the slow and fast times of a gesture. The signal in Fig. 11(a) clearly contains some clutter information at samples 80-100, which made target tracking difficult. However, Fig. 11(b) shows that this high-amplitude signal was removed after clutter removal. C. LOCALIZATION AND IMAGE CONSTRUCTION RESULTS The results for some characters are presented here. We used MATLAB for the image processing and for the classification with deep learning. Fig. 12 plots the images of some characters using both the conventional positioning data and our proposed transformation method. Figs. 12(g) and 12(i) show that the conventional positioning data can lead to confusion between the digits ''6'' and ''5.'' However, Figs. 12(h) and (j) show that the transformed images for the corresponding digits are clearly different. Hence, the transformed images are unique even if the 2D localization data are affected by artifacts and show similar patterns. D. CLASSIFICATION RESULTS OF THE STATE-OF-THE-ART AND PROPOSED METHODS FOR CHARACTERS WITH ARTIFACTS We compared the classification results of the state-of-the-art 2D trajectory-based method [23] and our proposed method for characters with artifacts by using a confusion matrix. For our experiments, three male participants between 27 and 32 years old performed the gestures. For all cases, we used 100 samples for training, while 300 samples were used for testing. As per the leave-one-person-out cross-validation (LOPO-CV) scheme, one person did not participate in the training session and was only included in the test session to show the independence of the algorithm with regard to the hand shape and size of a person. 1) COMPARISON OF CLASSIFICATION RESULTS FOR CHARACTERS (DIGITS WITHOUT ARTIFACTS) The accuracy results of the conventional and proposed methods were compared for the digits 0-9. First, the data were collected for training: 10 samples for each character. After training, we used 30 gestures to test each character. Tables 2 and 3 indicate that the accuracy results did not differ much for characters without artifacts. Thus, we next considered the accuracy for characters with artifacts. 2) COMPARISON OF CLASSIFICATION RESULTS FOR CHARACTERS WITH ARTIFACTS We selected 10 alphanumeric characters that result in artifacts during in-air writing for comparison. The first 10 gesture samples of each character were collected for training, and the next 30 gestures were used for testing. Tables 4 and 5 present the recognition accuracy results of the conventional and proposed methods. The accuracy of the conventional method decreased because some characters with artifacts produced (x, y) patterns similar to those of other characters. In contrast, the proposed method drastically improved the recognition accuracy by adding the time information to the 2D coordinates, because this captured not only the shape of a character written in midair but also the writing style of each character. For example, the classification accuracy for ''5'' and ''6'' was much improved because, although these two characters have similar shapes, the writing styles are different. Similarly, the proposed method greatly improved the recognition accuracy of the characters ''X'' and ''a.'' E. ACCURACY RESULTS FOR CONTINUOUS CHARACTER WRITING In continuous writing, there is no constraint on the interval between characters. This means that the interval between two consecutive characters depends upon the user's intention and comfort. We applied the energy threshold algorithm to certain words to demonstrate the energy levels when a character was being written and the interval between two characters. Fig. 13 shows the segmentation results for the words ''CAT'' and ''RADAR.'' In order to show some diversity, we intentionally made the intervals between characters of variable length. For example, the intervals between characters are slightly longer for ''CAT'' than for ''RADAR.'' The energy level in slow time was much higher when the hand was in the radar plane than when it was outside it. The energy level was very low without the hand because we had already removed the static clutter signal through background subtraction. The large difference in energy levels with and without the hand in the radar plane shows that the energy threshold algorithm is very effective at segmenting the characters during continuous writing. The segmentation accuracy was 100% in this study. We tested a set of 10 words for each word length ranging from two to seven characters. Thus, a total of 60 words were used to evaluate the segmentation accuracy of the energy threshold algorithm for radar-based continuous writing. V. CONCLUSIONS In this study, we classified characters using IR-UWB radar sensors and deep learning with a CNN. In our proposed method, 2D positioning data reflected from the hand in midair are collected by radar sensors and preprocessed with a localization algorithm before being transformed into tangent ratio data along a slow time axis. The resulting image is input to the CNN for character classification. An energy threshold algorithm was also developed for accurate character segmentation of continuous writing in midair. The main objective of our study was to overcome the problems caused by artifacts that occur in midair writing. We showed that our proposed method using transformed images improved the accuracy even for cases with artifacts. The accuracy was improved by 8.2% for 10 characters. In addition, the energy threshold algorithm accurately segmented the characters of midair handwriting. In future research, we plan to extend our work to all characters, including special characters, so that a complete in-air keyboard can be developed.
2020-05-21T00:06:55.804Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "5eb8a282f5b354b733d5dcf7f52b57f3cd037f19", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09092989.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "7e55f6050a968f50d3da3d6883ac915b749b284e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
117171561
pes2o/s2orc
v3-fos-license
An inverse problem for a class of canonical systems and its applications to self-reciprocal polynomials A canonical system is a kind of first-order system of ordinary differential equations on an interval of the real line parametrized by complex numbers. It is known that any solution of a canonical system generates an entire function of the Hermite-Biehler class. In this paper, we deal with the inverse problem to recover a canonical system from a given entire function of the Hermite-Biehler class satisfying appropriate conditions. This type of inverse problem was solved by de Branges in the 1960s. However, his results are often not enough to investigate the Hamiltonian of the recovered canonical system. In this paper, we present an explicit way to recover a Hamiltonian from a given exponential polynomial belonging to the Hermite-Biehler class. After that, we apply it to study distributions of roots of self-reciprocal polynomials. Introduction Let H(a) be a 2 × 2 real symmetric matrix-valued function defined almost everywhere on a finite interval I = [a 1 , a 0 ) (0 < a 1 < a 0 < ∞) with respect to Lebesgue measure da. We refer to a first-order system of differential equations of the form (1.1) as a canonical system if H(a) is positive semidefinite almost everywhere on I; otherwise, (1.1) is not a canonical system. A number of different second-order differential equations, such as Schrödinger equations and Sturm-Liouville equations of appropriate form, and systems of first-order differential equations such as Dirac type systems of appropriate form are reduced to a canonical system. Fundamental results on the spectral theory of canonical systems were established in works of Gohberg-Kreȋn [5], de Branges [3] and many other authors; see the survey articles Winkler [28], Woracek [29] and references therein for historical details on canonical systems. Note that the variable a in (1.1) is the variable on the multiplicative group R >0 . By the change of variable a = e −x , (1.1) is transformed into the equation for the variable x on the additive group R, and the right endpoint of a for the initial value is transformed into the left endpoint of x for the initial value. The transformed equation is the one treated by de Branges, Kreȋn, Kac, and others; see the final paragraph of the introduction for the reason we use a multiplicative variable. The subject of the present paper is an inverse spectral problem for canonical systems. In order to state the problem, we review the theory of the Hermite-Biehler class. We use the notation F ♯ (z) = F̄ (z̄) (the bar denoting complex conjugation) for functions of the complex variable z, and denote by C + the (open) upper half-plane {z = x + iy ∈ C : y > 0}. An entire function E(z) satisfying |E ♯ (z)| < |E(z)| for every z ∈ C + (1.2) and having no real zeros is said to be a function of the Hermite-Biehler class, or the class HB, for short. (This definition of the class HB is equivalent to the definition of Levin [11, Section 1 of Chapter VII] if "the upper half-plane" is replaced by "the lower half-plane", because (1.2) implies that E(z) has no zeros in C + . We adopt the above definition for the convenience of using the theory of canonical systems via the theory of de Branges spaces.) Suppose that the system (1.1) is a canonical system endowed with a solution (A(a, z), B(a, z)). Then E(a, z) := A(a, z) − iB(a, z) is a function of the class HB for every fixed point a ∈ I. In particular, lim aրa 0 E(a, z) = 1 and E(a 1 , z) is a function of the class HB. Therefore, an inverse problem for canonical systems is to recover their Hamiltonians from given entire functions of the class HB satisfying appropriate conditions.
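As a numerical illustration of definition (1.2), the Python sketch below tests the Hermite-Biehler inequality on a grid in the upper half-plane for an exponential polynomial. The normalization E(z) = Σ_{n=−g}^{g} C_n q^{−inz} used here is an assumption made for illustration, and the grid test is only a heuristic check, not a proof.

```python
import numpy as np

def E(z, C, q):
    # Exponential polynomial sum over n of C_n * q^{-i n z}; this normalization
    # of the polynomial is an assumption for illustration.
    g = (len(C) - 1) // 2
    powers = np.arange(g, -g - 1, -1)   # C = (C_g, ..., C_0, ..., C_{-g})
    return sum(c * q ** (-1j * n * z) for c, n in zip(C, powers))

def looks_like_HB(C, q, grid=60):
    # Heuristically test |E#(z)| < |E(z)| on a grid in the upper half-plane,
    # using E#(z) = conj(E(conj(z))).
    for x in np.linspace(-5.0, 5.0, grid):
        for y in np.linspace(0.05, 3.0, grid):
            z = complex(x, y)
            if abs(np.conj(E(np.conj(z), C, q))) >= abs(E(z, C, q)):
                return False
    return True
```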
Usually, such an inverse problem is difficult to solve in general, because it is an inverse spectral problem ( [8,Section 2], [18,Section 7]). However, it was already solved by the theory of de Branges in the 1960s ( [3]). For example, for fixed I and an entire function E(z) of exponential type belonging to the class HB and satisfying ∞ −∞ (1+x 2 ) −1 |E(x)| −2 dx < ∞ and E(0) = 1, there exists a canonical system (that is, there exists a Hamiltonian H(a) on I) having a solution corresponding to E(z) (see [18,Theorem 7.3] with [4,Lemma 3.3]). Moreover, such a canonical system is uniquely determined by E(z) and I under appropriate normalizations. A more general situation is treated in Kac [7]; see also [28] and [29]. As above, de Branges's theory ensures the existence of canonical systems or Hamiltonians for given functions of the class HB, but it does not provide explicit or useful expressions of Hamiltonians. In fact, an explicit form of H(a) is not known except for a few examples of E(z), as in [3,Chapter 3], [4,Section 8], and some additional examples constructed from such known examples using transformation rules for Hamiltonians and Weyl functions [27]. In this paper, we deal with the above inverse problem for a special class of exponential polynomials, together with the problem of explicit constructions of H(a), and apply the results to the study of the distribution of roots of self-reciprocal polynomials. Let R * := R\{0}, g ∈ Z >0 , and q > 1. We denote by C a vector of length 2g +1 of the form C = (C g , C g−1 , · · · , C −g ) ∈ R * × R 2g−1 × R * , and consider exponential polynomials E(z) of the form (1.3). A basic fact (see, e.g., [11, Chapter VII, Theorem 6]) is that an exponential polynomial E(z) of (1.3) belongs to the class HB if and only if it has no zeros in the closed upper half-plane C + ∪ R = {z = x + iy ∈ C : y ≥ 0}. Therefore, there exists a Hamiltonian H(a) of a canonical system corresponding to E(z) if E(z) has no zeros in C + ∪ R. Beyond such results resting on a general theory, we show that there exists a real symmetric matrix-valued function H(a) of a quasi-canonical system corresponding to an exponential polynomial E(z) if E(z) satisfies a weaker condition assumed in Theorem 1.1. It is constructed explicitly as follows. We define lower triangular matrices E + and E − of size 2g + 1 whose entries are built from the coefficients (so that, for instance, the last row of E ± reads C ±g C ±(g−1) · · · C 0 · · · C ∓(g−1) C ∓g ), and define square matrices J n of size 2g + 1. Then, we obtain the following results for the inverse problem associated with the exponential polynomial (1.3). Suppose that det D n (C) ≠ 0 for every 1 ≤ n ≤ 2g. Then the Hamiltonian H(a) is given by (1.10). We mention another way of constructing (γ(a), A(a, z), B(a, z)) in Section 6. The positive-definiteness of the Hamiltonian H(a) in (1.10) is characterized by Theorem 1.2 in the following, to-be-expected way. (1) Suppose that E(z) belongs to the class HB. Then H(a) is well-defined and is positive definite for every 1 ≤ a < q g . Hence the quasi-canonical system attached to (1.8) and (1.9) is a canonical system, and (A(a, z), B(a, z))/E(0) is its solution. (2) Suppose that H(a) is well-defined and is positive definite for every 1 ≤ a < q g . Then E(z) belongs to the class HB. As mentioned above, E(z) of (1.3) belongs to the class HB if and only if it has no zeros in C + ∪ R. On the other hand, the following conditions are equivalent to each other by ∆ 0 = 1 and definitions (1.6), (1.7), and (1.10): (i) H(a) is positive definite for every 1 ≤ a < q g , (ii) γ(a) > 0 for every 1 ≤ a < q g , (iii) 0 < ∆ n < ∞ for every 1 ≤ n ≤ 2g.
Therefore, we obtain the following corollary. Corollary 1.1. An exponential polynomial E(z) of (1.3) has no zeros in C + ∪ R if and only if 0 < ∆ n < ∞ for every 1 ≤ n ≤ 2g. If the hypothesis of Theorem 1.4 is satisfied, one proves that E(1, z) belongs to the class HB in a way similar to the proof of Theorem 1.2 (2) in Section 5.2. On the other hand, γ n > 0 for every 1 ≤ n ≤ 2g, and γ 1 = 1 if we start from the exponential polynomial of the form (1.3), by Theorem 1.2 and (5.8) below. Hence, as a consequence of the above theorems, the exponential polynomials (1.3) belonging to the class HB are characterized in terms of positive-definiteness of Hamiltonians. Now we turn to an application of Corollary 1.1. A nonzero polynomial P (x) = c 0 x n + c 1 x n−1 + · · · + c n−1 x + c n with real coefficients is called a self-reciprocal polynomial of degree n if c 0 ≠ 0 and P satisfies the self-reciprocal condition P (x) = x n P (1/x); equivalently, c 0 ≠ 0 and c k = c n−k for every 0 ≤ k ≤ n. The roots of a self-reciprocal polynomial either lie on the unit circle T = {z ∈ C : |z| = 1} or are distributed symmetrically with respect to T . Therefore, a basic problem is to find a "nice" condition on the coefficients of a self-reciprocal polynomial under which all roots of P lie on T . Quite a few results for this problem can be found in the literature; see, e.g., the books of Marden [14], Milovanović-Mitrinović-Rassias [15], Takagi [25, Section 10] and the survey paper of Milovanović-Rassias [16] for several systematic treatments of roots of polynomials. As an application of Corollary 1.1, we study roots of self-reciprocal polynomials of even degree. The restriction on the degree is not essential, because if P (x) is a self-reciprocal polynomial of odd degree, there exists a self-reciprocal polynomial P̃ (x) of even degree and an integer r ≥ 1 such that P (x) = (x + 1) r P̃ (x). In contrast, the reality of coefficients is essential. We denote by P g (x) a self-reciprocal polynomial of degree 2g of the form (1.12) and identify the polynomial with the vector c = (c 0 , c 1 , · · · , c g ) ∈ R * × R g consisting of its coefficients. For a vector c ∈ R * × R g and a real number q > 1, we define the numbers δ n (c) (1 ≤ n ≤ 2g) by (1.13), where C is the vector defined by (1.14). Theorem 1.5. Let g ∈ Z >0 , q > 1. Let c ∈ R * × R g be coefficients of a self-reciprocal polynomial P g (x) of the form (1.12). Define C ∈ R * × R 2g−1 × R * by (1.14). Then, for every 1 ≤ n ≤ 2g, it is independent of q whether det(E + ± E − J n ) are zero, and the numbers δ n (c) of (1.13) are independent of q if det(E + ± E − J n ) are nonzero. Moreover, a necessary and sufficient condition for all roots of P g (x) to be simple roots on T is that 0 < δ n (c) < ∞ for every 1 ≤ n ≤ 2g. Remark 1.2. As a function of indeterminate elements (c 0 , · · · , c g ), δ n (c) is a rational function of (c 0 , · · · , c g ) over Q. The criterion of Theorem 1.5 may not be new, because it seems that the quantity δ n (c) in Theorem 1.5 probably essentially coincides with quantities in a classical theory on the roots of polynomials recalled in Section 7.5. The author has not yet considered rigorously whether δ n (c) and the classical quantities indeed have the same meaning. However, the theory of this paper at least provides a new bridge between the study of the roots of polynomials and the theory of canonical systems.
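Since δ n (c) is defined through the matrices E ± and J n , a direct numerical cross-check of the conclusion of Theorem 1.5 is often simpler: compute the roots and test them, as in the sketch below. This checks the conclusion (all roots simple and on T) directly via numpy, not via δ n (c); the tolerance and the assumed coefficient layout of (1.12) (P g (x) = c 0 x 2g + · · · with c k = c 2g−k) are illustrative assumptions.

```python
import numpy as np

def roots_simple_on_T(c, tol=1e-6):
    # c = (c_0, ..., c_g): the free coefficients of a self-reciprocal polynomial of
    # degree 2g, so the full coefficient list is (c_0, ..., c_g, c_{g-1}, ..., c_0).
    coeffs = list(c) + list(c[:-1][::-1])       # mirror back down to c_0
    r = np.roots(coeffs)                        # np.roots takes the leading coefficient first
    on_circle = np.all(np.abs(np.abs(r) - 1.0) < tol)
    simple = all((np.abs(r - ri) < tol).sum() == 1 for ri in r)
    return bool(on_circle and simple)

# Example: x^2 + x + 1 (g = 1, c = (1, 1)) has the primitive cube roots of unity
# as its roots, which are simple and lie on T.
print(roots_simple_on_T([1, 1]))   # True
```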
In order to deal with the case that all roots of P g (x) lie on T but P g (x) may have a multiple zero, we modify the above definition of δ n (c) as follows. For a vector c ∈ R * × R g and real numbers q > 1, ω > 0, we define the numbers δ n (c ; q ω ) (1 ≤ n ≤ 2g) by (1.15), where C is the vector defined by (1.16). Theorem 1.6. Let g ∈ Z >0 , q > 1. Let c ∈ R * × R g be coefficients of a self-reciprocal polynomial P g (x) of the form (1.12). Then a necessary and sufficient condition for all roots of P g (x) to lie on T is that 0 < δ n (c ; q ω ) < ∞ for every 1 ≤ n ≤ 2g and ω > 0. Furthermore, the quantities δ n (c) and δ n (c ; q ω ) are related as follows. Theorem 1.7. Let δ n (c) and δ n (c ; q ω ) be as above. Then lim q ω ց1 δ n (c ; q ω ) = δ n (c) as a rational function of c = (c 0 , · · · , c g ) over Q. Suppose that all roots of a self-reciprocal polynomial (1.12) lie on T and are simple. Then estimates hold in which the implied constants depend only on c. Finally, we comment on the reason we use a multiplicative variable on R >0 and the right endpoint for the initial value in (1.1). It comes from the author's personal motivation for the work of this paper: the inverse problem for entire functions of the class HB obtained by Mellin transforms of functions on [1, ∞). This problem was stimulated by Burnol [1], in particular Sections 5-8, and was partially treated in [24] in the number theoretic setting. In [1], Burnol often uses a multiplicative variable on R >0 , and the left endpoint (= +∞) corresponds to the initial value. In order to apply the method of [1] to the above inverse problem, the author noted the formulas in which exponential polynomials appear on the left-hand side of the second formula. If we fix X > 1, q > 1 and put 2g = ⌊log X/ log q⌋, we obtain an exponential polynomial of degree 2g. Recall that the results of the paper correspond the exponential polynomial to the canonical system on [1, q g ). Then, if we imagine the limiting situation q → 1 and X → +∞, it is expected (but not proved rigorously) that the canonical system corresponding to the Mellin transform should be a system on [1, ∞). For a realization of the above heuristic discussion on the original motivation, the right endpoint may be more useful and convenient for the initial value than the left endpoint, although the left endpoint is useful for the initial value in the usual theory of canonical systems. In any case, the above number theoretic aspect of de Branges' theory is important. The paper is organized as follows. In Section 2, we describe the outline of the proof of Theorem 1.1. In Section 3, we prepare several lemmas and the notation used to prove the statements of Section 2 and Theorem 1.1 in Section 4. In Section 5, we prove Theorems 1.2, 1.3, and 1.4. In Section 6, we mention a way of constructing (γ(a), A(a, z), B(a, z)) which is different from that of Sections 2 and 4. In Section 7, we prove Theorems 1.5, 1.6, and 1.7 and compare them with classical results on the roots of self-reciprocal polynomials. Acknowledgments The author thanks Shigeki Akiyama for suggesting the book of Takagi ([25, Section 10]). The author also thanks the referee for a number of helpful suggestions for improving the article and for careful reading. In particular, Theorems 1. Every f ∈ L 2 (T q ) has the Fourier expansion For 0 < a ∉ q Z/2 = {q n/2 : n ∈ Z}, we define the vector space V a of functions a −iz f (z) + a iz g(z) (f, g ∈ L 2 (T q )) of z ∈ R. As a vector space, V a is isomorphic to the direct sum L 2 (T q ) ⊕ L 2 (T q ), since a −iz f (z) + a iz g(z) = 0 if and only if (f, g) = (0, 0). In fact, if one of f and g is zero and a −iz f (z) + a iz g(z) = 0, the other one is also zero.
On the other hand, if f ≠ 0, g ≠ 0, and a −iz f (z) + a iz g(z) = 0, then a 2iz = −f (z)/g(z), and hence a ∈ q Z/2 . The maps p 1 : (a −iz f (z)+ a iz g(z)) → a −iz f (z) and p 2 : (a −iz f (z)+ a iz g(z)) → a iz g(z) are projections from V a to the first and the second components of the direct sum, respectively. We define the inner product on V a by (2.2), where φ j (z) = a −iz f j (z) + a iz g j (z) (j = 1, 2). Then V a , with this inner product, is a Hilbert space and is isomorphic to the (orthogonal) direct sum L 2 (T q ) ⊕ L 2 (T q ) of Hilbert spaces ([9, Chapter 5]). We put X(k) and Y (l) for k, l ∈ Z and a > 0. We regard X(k) and Y (l) as functions of z, functions of (a, z), or symbols, depending on the situation. For a fixed 0 < a ∉ q Z/2 , the countable set consisting of all X(k) and Y (l) is linearly independent over C as a set of functions of z, since linear dependence of {X(k), Y (l)} k,l∈Z would imply the existence of a nontrivial pair of functions f, g ∈ L 2 (T q ) satisfying a −iz f (z) + a iz g(z) = 0. Using these vectors, we have On the other hand, we have for f j (z) = Σ k∈Z u j (k)q ikz ∈ L 2 (T q ) (j = 1, 2). Note that, for φ ∈ V a , p 1 φ and p 2 φ are not periodic functions of z, but the integrals are independent of the intervals I = [α, α + 2π/ log q] (α ∈ R). We write φ ∈ V a as φ(z) (respectively, φ(a, z)) to emphasize that φ is a function of z (respectively, (a, z)). If we regard X(k) and Y (l) as symbols, V a , endowed with the norm defined by (2.2), is an abstract Hilbert space and is isomorphic to l 2 (Z) ⊕ l 2 (Z). For each nonnegative integer n, we define the closed subspace V a,n of V a and denote by P * n the projection operator V a → V a,n . Then, for the conjugate operator P * * n := JP * n J : V a → V a of P * n by the involution J : V a → V a , we have Therefore, P n := P * * n P * n maps V a into V a,n for every nonnegative integer n. In addition, JP n also maps V a,n into V a,n for every nonnegative n, because for φ n ∈ V a,n . Note that P 0 | V a,0 = JP 0 | V a,0 , since P * 0 J| V a,0 = 0, by definition. For an exponential polynomial E(z) of (1.3), we have for C ♯ := (C −g , C −(g−1) , . . . , C g ) ∈ R * × R 2g−1 × R * , by a simple calculation. Using E(z) and E ♯ (z), we define the multiplication operators (2.5). These operators map V a into V a , because E and E ♯ are expressed as above. Suppose that E(z) has no zeros on the real line. Then the operator E is invertible on V a (Lemma 4.1 (1)). Thus the operator Θ is well-defined, and ΘJP n (W a,n ) ⊂ W a,n for each nonnegative integer n, where W a,n := V a,n + ΘJP n V a,n . 2.2. Quasi-canonical systems associated with exponential polynomials. In the above settings, a quasi-canonical system associated with an exponential polynomial E(z) of (1.3) is constructed starting from the solutions of the set of linear equations (2.6), where I is the identity operator. The set of 4g + 2 equations (2.6) is a discrete analogue of the (right) Mellin transform of the differential equations [1, (117a) and (117b), Section 6]; see also [24,Section 4]. Under the assumption that both I ± ΘJP n are invertible on W a,n for every 0 ≤ n ≤ 2g, that is, (I ± ΘJP n ) −1 exist as bounded operators on W a,n , we define A * n (a, z) and B * n (a, z) by (2.7), using the unique solutions of (2.6). The functions A * n (a, z) and B * n (a, z) are entire functions of z and are extended to functions of a on (0, ∞) (by formula (4.5)). Here we define functions A * (a, z) and B * (a, z) of (a, z) ∈ [1, q g ) × C by A * (a, z) := A * n (a, z), B * (a, z) := B * n (a, z) (2.8) for q (n−1)/2 ≤ a < q n/2 .
In general, the function A * (a, z) is discontinuous at a ∈ [1, q g ) ∩ q Z/2 , because A * n (q n/2 , z) = A * n+1 (q n/2 , z) may not hold. The same is true of B * (a, z). However, we will see that the ratios α n and β n of (2.9) are independent of z for every 1 ≤ n ≤ 2g (Proposition 4.9). Therefore, we obtain functions A(a, z) and B(a, z) of (a, z) ∈ [1, q g ) × C which are continuous in a and entire in z by the modification A n (a, z) := α 1 · · · α n · A * n (a, z), B n (a, z) := β 1 · · · β n · B * n (a, z) for q (n−1)/2 ≤ a < q n/2 . Moreover, (A(a, z), B(a, z)) satisfies the system (1.8) endowed with the boundary conditions (1.9) for the locally constant function γ(a) defined by (2.12) and γ(a) := γ(a; C) := γ n if q (n−1)/2 ≤ a < q n/2 (2.13) (Proposition 4.6 and Theorem 4.1). This γ(a) is equal to the function of (1.7) (Proposition 4.9). Therefore, (A(a, z), B(a, z))/E(0) is a solution of a quasi-canonical system on [1, q g ) if E(0) ≠ 0 and α n , β n ≠ 0 for every 1 ≤ n ≤ 2g. As a summary of the above argument, we obtain Theorem 1.1; see Section 4 for details. To prove Theorems 1.2, 1.3, and 1.4, we use the theory of de Branges spaces, which are a kind of reproducing kernel Hilbert space consisting of entire functions. Roughly speaking, the positivity of H(a) corresponds to the positivity of the reproducing kernels of de Branges spaces; see Section 5 for details. Lemma 3.3 (Laplace's expansion formula). Let A be a square matrix of size n. Let i = (i 1 , i 2 , · · · , i k ) (respectively, j = (j 1 , j 2 , · · · , j k )) be a list of indices of k rows (respectively, columns), where 1 ≤ k < n and 1 ≤ i 1 < i 2 < · · · < i k ≤ n (respectively, 1 ≤ j 1 < j 2 < · · · < j k ≤ n). Denote by A(i, j) the submatrix of A obtained by keeping the entries in the intersection of any row and column that are in the lists. Denote by A c (i, j) the submatrix of A obtained by removing the entries in the rows and columns that are in the lists. Laplace's formula for determinants yields det A = Σ j (−1) |i|+|j| det A(i, j) det A c (i, j), where |i| = i 1 + i 2 + · · · + i k , |j| = j 1 + j 2 + · · · + j k , and the summation is taken over all k-tuples j = (j 1 , j 2 , . . . , j k ) for which 1 ≤ j 1 < j 2 < · · · < j k ≤ n. 3.2. Definition of special matrices, I. We define several special matrices for the convenience of later arguments. We define the matrices E ± 0 to be the square matrices of size 8g, where e ± 0 (C) are lower triangular matrices of size 4g. Replacing the column at the left edge of e − 0 by the zero column vector, we obtain further matrices. We also define block matrices, where the right-hand sides indicate the size of each block of matrices in the middle terms. In addition, we define square matrices e ♯ 0 of size 4g; replacing n columns from the left edge of e ♯ 0 by zero column vectors, we obtain the matrices e ♯ n . We denote by I (m) the identity matrix of size m and by J (m) n the corresponding square matrix of size m. We also use the notation χ n = t (1 0 · · · 0) = the unit column vector of length n. 3.3. Definition of special matrices, II. For every nonnegative integer k, we define the square matrix P k (m k ) of size 2k + 2, parametrized by m k , and the (2k + 2) × (2k + 4) matrix Q k as follows. For k = 0, 1, these are given directly; for k ≥ 2, we define P k (m k ) and Q k blockwisely, and the matrices W ± k are defined by adding column vectors t (1 0 · · · 0) to the right-side end of the matrices V ± k . In particular, P k (m k ) is invertible if and only if m k ≠ 0. Proof. This is trivial for k = 1.
Suppose that k = 2j + 1 ≥ 3 and write P k (m k ) as (v 1 · · · v 2k+2 ) in terms of its column vectors v l . At first, we make the identity matrix I k+2 at the left-upper corner by exchanging the columns v (k+5)/2 , · · · , v k+1 and v k+2 , · · · , v (3k+3)/2 . Then, by eliminating every 1 and −m k under the I k+2 at the left-upper corner, we obtain a reduced form. Here Z k is the k × k matrix for which Z k,1 is the j × j antidiagonal matrix with −1 on the antidiagonal line. The resulting formula for det P k (m k ) implies the desired result. The case of even k is proved in a way similar to the case of odd k. Proof. Noting the definitions of P k (m k ), Q k , and M k,l (1 ≤ l ≤ 4), the identity is checked in an elementary way. In this section, we complete the proof of Theorem 1.1 by showing each statement of Section 2. We fix g ∈ Z >0 , q > 1, and C ∈ R * × R 2g−1 × R * throughout this section. Proof. It is sufficient to prove that E is invertible on p 1 V a and p 2 V a , since e ±2itz (1/E(z)) × f (z) = g(z) is impossible for any nonzero f, g ∈ L 2 (T q ). We have 1/E(z) ∈ L ∞ (T q ), by assumption. Therefore, multiplication by 1/E(z) defines a bounded operator E −1 . Let us consider the equations (4.1) instead of (2.6). They are equivalent to (2.6) if E is invertible. Then ΘJP n defines a compact operator on W a,n for each 0 ≤ n ≤ 2g, and the resolvent set of ΘJP n | Wa,n contains both ±1. In particular, I ± ΘJP n are invertible on W a,n , and (4.1) has unique solutions in W a,n for each 0 ≤ n ≤ 2g. Each of ±1 is either an eigenvalue of ΘJP n | Wa,n or an element of the resolvent set of ΘJP n | Wa,n . Assume that ΘJP n φ n = ±φ n . Then ΘJP n φ n = φ n . Because Θ and p i (i = 1, 2) commute, comparing the coefficients of {X(k)} −g≤k≤−g+n−1 and {X(k)} g≤k≤g+n−1 in the equality E ♯ JP n φ n ± Eφ n = 0 yields the linear system (4.2). Here det A ± ≠ 0 by det D n (C) ≠ 0. Therefore (4.2) has no nontrivial solutions, which means φ n = 0. Consequently, neither 1 nor −1 is an eigenvalue, and hence both belong to the resolvent set of ΘJP n | Wa,n . This means that (I ± ΘJP n | Wa,n ) −1 exist and are bounded on W a,n . In the remaining part of the section, we assume that det D n (C) ≠ 0 for 1 ≤ n ≤ 2g, so that both I ± ΘJP n are invertible on W a,n for every 0 ≤ n ≤ 2g; cf. Lemmas 4.1 and 4.2. As we see in Section 5.1, E has no real zeros if det D 2g ≠ 0. Note that each φ ∈ W a,n has an absolutely convergent expansion if ℑ(z) > 0 is large enough. This is trivial for φ ∈ V a,n and follows for φ ∈ ΘJP n V a,n from (2.4). Substituting φ ± n in (4.1) and then comparing coefficients of X(k) and Y (l), we obtain the stated relations for every K ≥ 4g and L ≤ −2g − 1. Proof. For fixed n, equations (4.1) have unique solutions φ ± n if I ± ΘJP n are invertible. On the other hand, the solutions φ ± n correspond to solutions of the pair of linear equations (4.3) and (4.4). Therefore, det(E + 0 ± E ♯ n J) ≠ 0. On the other hand, by (4.1), we can write E φ ± n (a, z) in terms of some real numbers p ± n (k) and q ± n (k). Hence E φ ± n (a, z) are extended to smooth functions of a on (0, ∞) by the right-hand side of (4.5). We use the same notation for such extended functions. We provide two lemmas used to prove Proposition 4.6. Proof. By (4.1), and then by Lemma 4.7, comparing the right-hand sides of the above formulas for (I ± J)E φ ± n with the formulas for (I ± J)E φ ± n in Lemma 4.5, we obtain Lemma 4.8. Proof of Proposition 4.6.
By definition of X(k) and Y (l), (4.5), and Lemma 4.5, the differentiability of A * n (a, z) and B * n (a, z) with respect to a is trivial, and the derivatives −a(d/da)A * n (a, z) and −a(d/da)B * n (a, z) can be computed. Applying Lemma 4.8 to the right-hand sides, we obtain Proposition 4.6 by definition (2.7). As mentioned above, A * n (a, z) and B * n (a, z) of (2.7) are smooth functions of a on (0, ∞) for every 0 ≤ n ≤ 2g. However, A * (a, z) and B * (a, z) of (2.8) may be discontinuous at a ∈ (0, ∞) ∩ q Z/2 . Therefore, as mentioned in Section 2, the next aim is to make modifications so that A * (a, z) and B * (a, z) become continuous functions of a on [1, q g ). The essential part is the following proposition. Proposition 4.9. Let 1 ≤ n ≤ 2g. Suppose that I ± ΘJP n−1 are invertible on W a,n−1 for every q (n−2)/2 < a < q n/2 and that I ± ΘJP n are invertible on W a,n for every q (n−1)/2 < a < q n/2 . Define α n and β n by (2.9). Then (4.12) holds. In particular, α n and β n depend only on C. We prove Proposition 4.9 after preparing several lemmas. Proof. If 1 ≤ n ≤ 2g, we have the stated identity and the proof is complete. Proof. By the proof of Lemma 4.10, and on the other hand, applying Lemma 3.3 to the columns (2g + n + 1, · · · , 4g) of E + 0 ± E ♯ n,1 J, and then applying Lemma 3.3 to the rows (2g + n + 1, · · · , 4g) of the resulting matrix, we obtain (4.19). We complete the proof by comparing (4.18) and (4.19). Finally, we prove the latter half of (1.9). By definition (2.11), it is sufficient to prove the corresponding identity. On the other hand, we find that the first row of S n has the form (1 1 · · · 1 0 0 · · · 0), with n + 2 ones followed by n + 2 zeros, by induction using (3.1). Hence the claim follows. Proofs of Theorems 1.2, 1.3 and 1.4 We use the theory of de Branges spaces together with the theory of canonical systems to prove Theorems 1.2, 1.3, and 1.4. De Branges spaces are a kind of reproducing kernel Hilbert space consisting of entire functions; see [3,4,8,18] for details. Firstly, we review two propositions from these theories as preparation for the proofs of Theorems 1.2, 1.3 and 1.4. Note that their proofs presented below are almost the same as the arguments in the literature on canonical systems and de Branges spaces; see, for example, the proof of equation (2.4) and Lemma 2.1, and Step 1 of the proof of Theorem 5.1 in [4]. However, we purposely give their detailed proofs to confirm that the positive semidefiniteness of the Hamiltonian, which is usually assumed in the theory of canonical systems, is not necessary for their proofs. (1) Assume that γ(a) ≠ 0 and |γ(a)| < ∞ for every 1 ≤ a ≤ a 0 . Then there exists a 2×2 matrix-valued function M (a 1 , a 0 ; z) such that all entries are entire functions of z, that satisfies (5.1), and that det M (a 1 , a 0 ; z) = 1. (2) Assume that γ(a) ≠ 0 and |γ(a)| < ∞ for every 1 ≤ a < a 0 . Then the matrix-valued function M (a 1 , a; z) of (1) is left-continuous as a function of a, and (5.2) holds as a vector-valued function of z ∈ C. Proof. (1) The system (1.8) for 1 ≤ a < a 0 is written in integrated form. By assumption, both γ(a) and γ(a) −1 are integrable on [a 1 , a 0 ]. Hence, we obtain (5.3), where I = I (2) . Therefore, taking C(a 0 , a 1 ) := sup{γ(a), γ(a) −1 ; a ∈ [a 1 , a 0 ]} and using the corresponding formula for every 1 ≤ i, j ≤ 2, where [M ] ij means the (i, j)-entry of a matrix M , we obtain an estimate. This estimate implies that the right-hand side of (5.3) converges absolutely and uniformly if z lies in a bounded region. (2) The matrix-valued function M (a 1 , a; z) is left-continuous with respect to a by the above definition, since γ(a) is left-continuous by definition (2.13).
Because A(a, z) and B(a, z) are continuous with respect to a by definitions (2.10) and (2.11) and Proposition 4.6, we obtain (5.2) from (5.1). Proof. We obtain (5.6) easily by substituting (5.4) into (5.5). By integration by parts, together with (1.8), we obtain two equations. Moving the second terms of the right-hand sides of the two equations to the left-hand sides, then adding both sides of the resulting two equations, and finally dividing both sides by (z − w), we obtain the identity. This implies (5.7). Proof of Theorem 1.2(1). The polynomial f (T ) attached to E(z) and f ♯ (T ) := T 2g f (1/T ) have a common root if and only if det D 2g (C) is zero ([17, Lemmas 11.5.11 and 11.5.12]). The former condition is equivalent to E and E ♯ having a common zero. If E belongs to the class HB, it has no real zeros and |E ♯ (z)| < |E(z)| in C + by the definition of the class HB. Therefore E and E ♯ have no common zeros. Hence det D 2g (C) ≠ 0, which implies that det D n (C) ≠ 0 for every 1 ≤ n ≤ 2g. Hence γ n ≠ 0, ∞ for every 1 ≤ n ≤ 2g by Theorem 4.1 (1). Therefore, it is sufficient to prove that E(z) is not a function of the class HB if γ n < 0 for some 1 ≤ n ≤ 2g. We proceed in three steps as follows. Step 1. We show that there is no loss of generality in assuming that there exists 1 ≤ n 0 ≤ 2g − 1 such that γ n > 0 for every 1 ≤ n ≤ n 0 and γ n 0 +1 < 0. We have the stated formula by definition (2.12) and Proposition 4.9. In addition, |C g /C −g | < 1 if E(z) belongs to the class HB, for some k ≥ 1 and C ∈ R. This implies that γ 1 > 0 if E(z) belongs to the class HB. Step 2. Let n 0 be the number of Step 1. In this part, we show that E(z) is not a function of the class HB if E(a, z) of (5.4) is not a function of the class HB for some 1 < a ≤ q (n 0 +1)/2 . We have (5.9) for 1 ≤ a < q (n 0 +1)/2 by applying (5.1) to (a 1 , a 0 ) = (1, a), when γ(a) ≠ 0 and γ(a) −1 ≠ 0 for 1 ≤ a < q (n 0 +1)/2 . Suppose that E(a 0 , z) is not a function of the class HB for some 1 < a 0 ≤ q (n 0 +1)/2 ; that is, E(a 0 , z) has a real zero for some 1 < a 0 ≤ q (n 0 +1)/2 , or |E ♯ (a 0 , z)| ≥ |E(a 0 , z)| for some z ∈ C + and 1 < a 0 ≤ q (n 0 +1)/2 . If E(a 0 , z) has a real zero for some 1 < a 0 ≤ q (n 0 +1)/2 , then A(a 0 , z) and B(a 0 , z) have a common real zero, since they are real-valued on the real line. Therefore, (5.9) and det M (1, a 0 ; z) = 1 imply that A(z) and B(z) have a common real zero. Hence E(z) has a real zero, by E(z) = A(z) − iB(z). Thus E(z) is not a function of the class HB. On the other hand, assume that E(a, z) has no real zeros for every 1 < a ≤ q (n 0 +1)/2 but that it has a zero in the upper half-plane for some 1 < a 0 ≤ q (n 0 +1)/2 . By (2.11) and (5.4), E(a, z) is a continuous function of (a, z) ∈ [1, q (n 0 +1)/2 ] × C. Therefore, any zero locus of E(a, z) is a continuous curve in C parametrized by a ∈ [1, q (n 0 +1)/2 ]. Denote by z a ⊂ C a zero locus through a zero of E(a 0 , z) in the upper half-plane; that is, E(a, z a ) = 0 for every 1 ≤ a ≤ q (n 0 +1)/2 . If Im(z a 1 ) < 0 for some 1 ≤ a 1 < a 0 , then Im(z a 2 ) = 0 for some a 1 < a 2 < a 0 . This implies that E(a 2 , z) has a real zero at z = z a 2 , which is a contradiction. Therefore, Im(z a ) ≥ 0 for every 1 ≤ a < a 0 ; in particular, Im(z 1 ) ≥ 0. This implies that E(z) = E(1, z) is not a function of the class HB. Assume that E(a, z) ≠ 0 for every Im z ≥ 0 and 1 < a ≤ q (n 0 +1)/2 but |E ♯ (a 0 , z 0 )| ≥ |E(a 0 , z 0 )| for some 1 < a 0 ≤ q (n 0 +1)/2 and Im(z 0 ) > 0. We derive a contradiction.
Because A(a, z) and B(a, z) are bounded on the real line as functions of z by definition (2.11), E(a, z) is a function of the Cartwright class [12, the first page of Chapter II]. Therefore, we have the factorization (7.3); see [12, Remark 2 of Lecture 17.2]. Here Im(ρ) < 0 for every zero ρ of E(a 0 , z) by the assumption. This contradicts the assumption |E ♯ (a 0 , z 0 )| ≥ |E(a 0 , z 0 )|. Step 3. For the number n 0 of Step 1, we prove that E(z) is not a function of the class HB if γ n 0 +1 < 0. Considering the argument in Step 2, we assume that E(a, z) is a function of the class HB for every 1 < a ≤ q (n 0 +1)/2 in both cases and find a contradiction. The conclusions of Steps 2 and 3 show that E(z) is not a function of the class HB if γ n fails to be positive and finite for some 1 ≤ n ≤ 2g. Inductive construction. The pair of functions (A(a, z), B(a, z)) of (2.11) is written explicitly for q (n−1)/2 ≤ a < q n/2 by (2.10), (4.3), (4.9), and (4.10), where I = I (8g) and J = J (8g) . The resulting formula is explicit but rather complicated from a computational point of view. In contrast, the following method, based on Proposition 4.17, is often useful for computing the triple (γ(a), A(a, z), B(a, z)). Proof. Let C be a numerical vector such that the exponential polynomial (1.3) has no zeros on the real line. Then γ n and Ω n of (2.12) and (4.28) satisfy (6.1) and (6.2), by the definitions of P k (m k ), Q k , and (4.29). Therefore, γ n ≠ 0 as a function of C for every 1 ≤ n ≤ 2g, by Theorem 1.1, since the cyclotomic polynomial of degree 2g is a self-reciprocal polynomial of degree 2g, all of whose roots are simple and lie on T . Hence, Lemma 3.4 implies that Ω̃ 1 , Ω̃ 2 , · · · , Ω̃ 2g are uniquely determined from the initial vector Ω 0 . Therefore, Ω n = Ω̃ n for every 1 ≤ n ≤ 2g, by the definition Ω̃ 0 = Ω 0 . Proposition 6.1. Let C = (C g , C g−1 , · · · , C −g ) be a vector consisting of 2g + 1 indeterminate elements. Define γ̃ n (C) by (6.1) and (6.2), starting with the initial vector (4.30). Define ∆̃ n accordingly, where γ̃ n are the functions of C in Theorem 6.1. Then ∆̃ n = ∆ n for every 1 ≤ n ≤ 2g if C is a numerical vector such that the exponential polynomial (1.3) has no zeros on the real line. Here we mention that the vector Ω n of (6.2) can be defined from Ω n−1 in a slightly different way according to the following lemma. Lemma 6.2. For every 1 ≤ n ≤ 2g, the right-hand side is independent of the indeterminate element m 2g−n . 7. Applications to self-reciprocal polynomials 7.1. Proof of Theorem 1.5. For a self-reciprocal polynomial P g (x) of (1.12) and a real number q > 1, we define E q (z), A q (z), and B q (z) by (7.1) and (7.2). Then the reality of the coefficients of P g (x) and the self-reciprocal condition P g (x) = x 2g P g (1/x) imply that A q (z) (respectively, B q (z)) is an even (respectively, odd) real entire function of exponential type; namely, A q (−z) = A q (z) and A ♯ q (z) = A q (z) (respectively, B q (−z) = −B q (z) and B ♯ q (z) = B q (z)). In particular, E ♯ q (z) = A q (z) + iB q (z). By (7.1), all roots of P g (x) are simple and on T if and only if A q (z) has only simple real zeros. The following lemma enables us to obtain Theorem 1.5 as a corollary of Theorems 1.1 and 1.2. Lemma 7.1. Let E q (z), A q (z), B q (z) be as above. Then (1) E q (z) satisfies condition (1.2) if and only if A q (z) has only real zeros, and (2) E q (z) is a function of the class HB if and only if A q (z) has only simple real zeros. Proof. (1) Assume that E q (z) satisfies (1.2). Then A q (z) ≠ 0 for Im z > 0.
Furthermore, A q (z) ≠ 0 for Im z < 0, by the functional equation A q (z) = A q (−z). Hence all zeros of A q (z) lie on the real line. Conversely, assume that all zeros of A q (z) are real. Then A q (z) has the factorization (7.3), because A q (z) is real, even, and of exponential type. Therefore, for Im z > 0, E q,ω (z) satisfies inequality (1.2) for every ω > 0, because of the behaviour for z = x + iy with y > 0 given by the factorization (7.3). Moreover, E q,ω (z) has no real zeros for every ω > 0, by definition (7.4) and by assumption. Hence E q,ω (z) is a function of the class HB for every ω > 0. Conversely, suppose that E q,ω (z) is a function of the class HB for every ω > 0. Then all zeros of A q,ω (z) and B q,ω (z) are real and simple, and they interlace (see [11, Chapter VII, Theorems 3, 5, p. 313], but note the footnote of the first page). In particular, A q,ω (z) has only real zeros for every ω > 0. Hence A q (z) = lim ωց0 A q,ω (z) has only real zeros by Hurwitz's Theorem ([14, Theorem (1,5)]). Proof of necessity. By Lemma 7.3, P g (x) has a zero outside T if and only if E q,ω (z) is not a function of the class HB for some ω > 0. Hence it is sufficient to prove that E q,ω 0 (z) is not a function of the class HB if there exists ω 0 > 0 such that δ n (c ; q ω 0 ) ≤ 0 or δ n (c ; q ω 0 ) −1 ≤ 0 for some 1 ≤ n ≤ 2g. This is proved similarly to Theorem 1.1. This equality follows from the formula for lim q ω ց1 Ω̃ n by the definitions of γ̃ n (c ; q ω ) and γ n (c ; log q). Hence lim q ω ց1 δ n (c ; q ω ) = δ n (c) for every 1 ≤ n ≤ 2g. Because δ n (c ; q ω ) is a rational function of q ω , we obtain the second formula of Theorem 1.7. 7.4. Remark on Theorem 1.7. We have convergence as ω → 0 + if z lies in a compact subset of C. Therefore, it seems that E q,ω (z) is similar to E q (z) = A q (z) − iB q (z) for small ω > 0, but there is an obvious gap after the limit ω → 0 + is taken. To resolve this gap, we consider Ẽ q,ω (z) := A q,ω (z) − iωB q,ω (z). We do not know whether an analogue of Lemma 7.3 holds for Ẽ q,ω (z). However, if such an analogue holds, we may obtain results for Ẽ q,ω (z) analogous to those for E q,ω (z). 7.5. Comparison with classical results. A necessary and sufficient condition for all roots of P (x) ∈ C[x] to lie on T is that P (x) be self-inversive (i.e., P (x) = x deg P P (1/x)) and that all roots of the derivative P ′ (x) lie inside or on T (Gauss-Lucas [13], Schur [23], Cohn [2]). If P (x) is a self-inversive polynomial, P ′ (x) has no zeros on T except at the multiple zeros of P (x) ([14, Lemma (45.2)]). Therefore, a necessary and sufficient condition for all roots of a self-inversive polynomial P (x) ∈ C[x] to be simple and on T is that all the roots of P ′ (x) lie inside T . The following classical result, which involves the resultant of Q and Q ♯ , is quite useful for checking this condition. For k = 2, . . . , n, given the 2k × 2k matrix D k (Q), define the 2(k − 1) × 2(k − 1) matrix D k−1 (Q) by deleting the k-th and 2k-th rows and columns of D k (Q). In particular, the matrices D 2 (Q) and D 1 (Q) are built from a 0 , a 1 , a n−1 , a n and their complex conjugates; for instance, D 1 (Q) is the 2 × 2 matrix with rows (a 0 , ā n ) and (a n , ā 0 ). That is, essentially, δ n+1 (P g ) is the determinant of a square matrix of size n + 1. Therefore, it seems plausible that the criterion of Theorem 1.5 can be deduced from the known criterion using some linear algebra identities. However, the author does not have an idea of how to realize such an argument. As mentioned in [14] and [25], an origin of Theorem 7.1 is in the work of Hermite and Hurwitz.
They related the distribution of roots of polynomials to the signature of quadratic forms. Concerning the approach of the present paper, the reproducing kernel (5.5) of the de Branges space B(E) may play the role of the quadratic forms. 7.6. Concluding remarks. Corollary 5.2 and the arguments of Subsections 5.1, 5.2, and 7.1 show that a self-reciprocal polynomial P g (x) of (1.12) has only simple zeros on T if and only if there exist 2g positive real numbers γ 1 , · · · , γ 2g satisfying the corresponding relations, since P g (1) = E q (0) = A q (0). Compare this with the factorization P g (x) = P g (0) ∏ g j=1 (x 2 − 2λ j x + 1) (λ j ∈ C). As described in the introduction and in Section 6, we have at least two simple algebraic algorithms for calculating γ 1 , · · · , γ 2g from the coefficients c 0 , · · · , c g , but it is impossible to calculate λ 1 , · · · , λ g in general, since the Galois group of a general self-reciprocal polynomial P g (x) is isomorphic to S g ⋉ (Z/2Z) g . In addition, it is understood that the positivity of γ 1 , · · · , γ 2g is equivalent to the positivity of a Hamiltonian, but a plausible meaning of |λ j | < 1 and λ i ≠ λ j (i ≠ j) is not clear. We now comment on two important classes of self-reciprocal polynomials. The first one is the class of zeta functions of smooth projective curves C/F q of genus g: Z C (T ) = Q C (T )/((1 − T )(1 − qT )), where Q C (T ) is a polynomial of degree 2g satisfying the functional equation Q C (T ) = (q 1/2 T ) 2g Q C (1/(qT )). Hence P C (x) = Q C (q −1/2 x) is a self-reciprocal polynomial of degree 2g with real coefficients. Weil [26] proved that all roots of P C (x) lie on T as a consequence of Castelnuovo's positivity for divisor classes on C × C. The second one is the class of polynomials P A (x) attached to n × n real symmetric matrices A = (a i,j ) with |a i,j | ≤ 1 for every 1 ≤ i < j ≤ n (no condition is imposed on the diagonal): P A (x) = Σ I⊔J={1,2,··· ,n} x |I| ∏ i∈I,j∈J a i,j , where I ⊔ J means a disjoint union. The polynomials P A (x) are obtained as partition functions of a ferromagnetic Ising model and are self-reciprocal polynomials of degree n with real coefficients. The fact that all roots of any P A (x) lie on T is known as the Lee-Yang Circle Theorem [10]. Ruelle [20] extended this result and characterized the polynomials P A (x) in terms of multi-affine polynomials being symmetric under a certain involution on the space of multi-affine polynomials [21]. It seems that the discovery of an arithmetical, geometrical, or physical interpretation of the positivity of γ 1 , · · · , γ 2g or H(a) (for some restricted class of polynomials) is quite an interesting and important problem. We hope that our formula for γ 1 , · · · , γ 2g contributes to such a philosophical interpretation.
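As a small numerical illustration of the Lee-Yang class just described, the sketch below builds P_A(x) for a random real symmetric A with |a_{i,j}| ≤ 1 by brute force over the partitions I ⊔ J and checks that the roots lie on T. The choice of n and the random seed are arbitrary, and this is only a spot check of the Circle Theorem, not a proof.

```python
import numpy as np
from itertools import combinations

def lee_yang_poly(A):
    # P_A(x) = sum over I disjoint-union J = {1..n} of x^{|I|} * prod_{i in I, j in J} a_{ij}.
    n = A.shape[0]
    coeffs = np.zeros(n + 1)                       # coeffs[k] = coefficient of x^k
    for k in range(n + 1):
        for I in combinations(range(n), k):
            J = [j for j in range(n) if j not in I]
            coeffs[k] += np.prod([A[i, j] for i in I for j in J])  # empty product = 1
    return coeffs[::-1]                            # descending powers for np.roots

rng = np.random.default_rng(0)
B = rng.uniform(-1, 1, (4, 4))
A = (B + B.T) / 2                                  # real symmetric with |a_ij| <= 1
roots = np.roots(lee_yang_poly(A))
print(np.abs(roots))                               # all close to 1, as the theorem predicts
```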
2016-09-23T15:08:33.000Z
2013-08-01T00:00:00.000
{ "year": 2013, "sha1": "e34dde7c5c6a6ffd6c5fc0c0f15758bc0a99811d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1308.0228", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "759586be48aa0506076e5acbee92b99b74d75428", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
243843153
pes2o/s2orc
v3-fos-license
Co-curation: Archival interventions and voluntary sector

There is a growing trend across the social sciences to engage with archives. Within human geography, this has stimulated a debate about the nature of archives, including moving from considering "archive as source" to "archive as subject." We build on and extend this thinking, suggesting that an even more active appreciation of the dynamic nature of relationships between researchers, owners of records, and archival material is needed. This paper draws on an interdisciplinary study of voluntary action and welfare provision in England in the 1940s and 2010s to highlight how the different iterative processes involved in collaborative archival research are part of what we call co-curation. Co-curation involves the negotiated identification, selection, preparation, and interpretation of archival materials. This has implications for both research processes and outcomes.

| INTRODUCTION

The archive has recently emerged as a subject of methodological interest in a range of disciplines beyond history or archival science, including human geography (Hyacinth, 2019; Mills, 2013a). This "archival turn" is partly indebted to a Foucauldian analysis of the archive as an artefact of knowledge production (Foucault, 1969). Rather than being seen simply as a system of files, the archive is defined as the practice that determines what is filed (Basu & De Jong, 2016, pp. 5-6). This involves a move from considering "archive as source" to "archive as subject" that examines "the practices of collecting, classifying, ordering, display and reuse" (Ashmore et al., 2012, p. 82; see also Stoler, 2002). While archival work has traditionally been perceived to be a solitary process, Ashmore et al. (2012, p. 81) have reflected on their experiences of working with the owners of archival collections as a "collaborative practice, communal knowledge formation" (2012, p. 82). In this paper we extend this thinking to suggest that an even more active appreciation of the dynamic nature of relationships is needed, particularly for private archives that are not mediated by professional archivists. We build too on the growing literature on "archival interventions" made by scholars, which has been variously conceptualised as participatory historical-geographical research, archival activism, or historian-activism (Bressey, 2014; DeLyser, 2014; Flinn, 2011; Mills, 2013b; Oppenheimer, 2020). In doing so, we develop the idea of co-curation: the identification, selection, preparation, and interpretation of archival materials as it is negotiated between researchers and owners of records.

This paper offers fresh insights into the iterative processes involved in collaborative archival research with voluntary sector partners. It draws on an interdisciplinary study of voluntary action and welfare provision in England in the 1940s and 2010s, to explore the process of co-curation between the research team and institutional owners of records. The archives and records of voluntary organisations are strategic assets with huge importance for research, but, like other private archives, should be considered "at risk" because they lack the long-term legal protection afforded to records produced by government. Many such collections retained in-house are subject to the vagaries of waxing and waning organisational interest, staff turnover, office relocation, and mergers (McMurray, 2014).
This paper is our first attempt to define and explore the concept of co-curation. Co-curation, we suggest, is an ongoing process through which the owners of private records work alongside researchers at every stage of a study. It enables access to previously little-used sources as well as generating insights not available in a more conventional research project. Co-curation has benefits for other scholars through interventions that improve the long-term preservation and accessibility of collections. In actively engaging staff in voluntary organisations in work with institutional archives, it also builds interest, skills, knowledge, and capacity, helping to ensure the research has lasting impact for practice. In what follows we set out our concept of co-curation as it applies to working with organisational partners in the voluntary sector. First, we briefly review existing literature on the use of voluntary sector archives in human geography, before outlining the research study and discussing how we engaged with such sources. Subsequently we discuss co-curation as a "process" and then as an "outcome." We touch on important, yet more mundane, aspects of the process that are rarely discussed in methodological literature (Ashmore et al., 2012). We conclude by reflecting on the wider implications of co-curation for human geography and beyond.

| USING VOLUNTARY SECTOR ARCHIVES IN HUMAN GEOGRAPHY

There is new-found recognition of the value of archives and records of and within the UK voluntary sector. High-profile inquiries into the history of public, corporate, and charitable bodies have highlighted the evidential value of records. In the humanitarian sector, leading aid agencies like CARE, Save the Children, and Oxfam have "begun to recognise that their archives are strategic assets for analysing the evolution of humanitarianism in a changing political landscape" (Götz et al., 2020, p. 308). There is growing understanding too that charity archives may preserve stories of marginalised individuals and communities whose lives are not recorded elsewhere. For example, significant contributions to historical geographies of black women in Britain have been enabled through collections such as the Barnardo's photographic archive (Bressey, 2002). However, charity archives have long been under the radar (Newton, 2004), under-resourced, and consequently at risk. Research has examined the vulnerability of in-house archive services (McMurray, 2014; Newton, 2004), records management in charities (Dawson et al., 2004), questions of cataloguing and user engagement (Mills, 2013b), third-party deposit of charity archives (Oppenheimer, 2020), and research uses (Brewis, 2020).

Ketelaar argues that archivalisation is "the conscious or unconscious choice (determined by social and cultural factors) to consider something worth archiving" (2001, p. 133). Like all collections, voluntary organisations' records have been affected by subjective decisions about what has been, or will be, preserved. For private collections there are additional questions over how and in what ways outside researchers might be able to utilise their holdings (Boyer, 2004; Hyacinth, 2019). Oppenheimer argues that we need to understand not just the agency of archivists and record keepers, but, for voluntary organisations, also "the process by which the organisation itself came to value its records in a particular way" (2020, p. 172).
Oppenheimer's description of herself as "historian-activist" is relevant here, in that she had been "agitating" for the Australian Red Cross to deposit its archive into a public repository since her first encounter with the records in the 1980s, a process that in fact took over 25 years to achieve. Boyer (2004, p. 170) reflected on working with historical sources through the lens of feminist geography, which included an awareness of how power places and structures identity, lived experience, and social relations in spaces of the past. She emphasised the importance of finding sources at the boundary of public knowledge, such as the non-public archives of professional organisations and charities, to expand one's base of sources. But Boyer also highlighted the challenges that can ensue from accessing and using private archives, including how organisations can choose to "filter" who can access collections and may "preserve documents selectively" in order to present themselves in a more favourable light (2004, p. 172).

In her work on historical geographies of abortion, Moore (2010, p. 265) discussed the absence of archival materials and the silences she faced. Moore also highlighted the potential effects of making personal and delicate information public, and the possible conflicts of interest between researcher and participant, even when the participant is dead (2010, p. 268). Dwyer and Davies (2010) considered the contradictory processes of archiving, of giving form to the identities and capacities of past communities, spaces, and landscapes, while simultaneously erasing or eliding that which cannot easily be captured (2010, p. 260). This has resonance with sustained efforts across the humanities and social sciences to increase the representativeness of archives (Johnston, 2001).

Building on such scholarly insights about the potential significance of private collections for research and those which foreground participatory and inclusive approaches to the archive, this paper argues that the process of co-curating research between records owners in the voluntary sector and academics is a form of archival intervention with the potential to become a mainstream research methodology in historical and human geography.

| THE DISCOURSES OF VOLUNTARY ACTION STUDY

In this paper, we develop the idea of co-curation by drawing on our experience of undertaking an interdisciplinary, collaborative research project exploring voluntary action and welfare provision in the 1940s and 2010s. These two decades can be considered as "transformational moments" in which the boundaries between state, voluntary action, and others were rethought (Brewis et al., 2021). The project analysed narratives about the role, position, and contribution of voluntary organisations that emanated from the voluntary movement, the public, and the state. It focused on four fields of voluntary action: children, youth, older people, and the voluntary movement/sector as a whole.
For each field of activity we identified a key voluntary sector infrastructure or umbrella body that was active in the 1940s and which continued into the 2010s: Age UK, Children England, the National Council for Voluntary Organisations (NCVO), and the National Council of Voluntary Youth Services (NCVYS). Our intention to work with one partner in each field was disrupted by closures and mergers within the voluntary youth sector towards the end of the 2010s, leading us to work with two additional organisations - Ambition and UK Youth (Table 1). We also worked in partnership with the Mass Observation Archive (MOA), with Mass Observation data used to explore public narratives, while government policy documents, speeches, and parliamentary debates were used as the sources for exploring state narratives (see Brewis et al., 2021). It is, however, the process of working with our partner voluntary organisations that forms the focus of this paper.

Table 1: Partner organisations by field of voluntary action

Voluntary movement: The voluntary movement/sector was explored through the papers of the National Council for Voluntary Organisations (NCVO). Founded as the National Council for Social Service (NCSS) in 1919, NCVO has deposited its archive at the London Metropolitan Archives. We worked with colleagues at NCVO to identify and select 2010s material.

Children: Children England is the 'children's specialist' membership body for voluntary organisations in England. The Associated Council of Children's Homes was established in 1941 by four of the largest charities then providing residential care for children in the UK, with others soon joining. It became Children England in 2009. We acquired the organisation's surviving archival material, dating back to the 1940s, on temporary deposit at UCL Special Collections, and worked with the staff team to select 2010s source material.

Youth: We used the archive of the National Council for Voluntary Youth Services (NCVYS), established in 1936 by 11 national voluntary youth organisations, and which closed in 2016 just as the research was beginning. After closure, the collection was donated to UCL Special Collections in association with this project. Subsequently, we accessed records of Ambition, which was founded in 1925 as the National Association of Boys' Clubs (NABC). These papers are today privately held by UK Youth, following its 2018 merger with Ambition. UK Youth began life in 1911 as the National Association of Girls' Clubs; its archive is at the University of Birmingham.

Older people: The National Old People's Welfare Committee was established in 1940 as part of NCSS. It gained independence in 1970 and became Age Concern. Age UK was created in 2009 following the merger of Age Concern and Help the Aged. We acquired surviving material, dating back to the 1940s, which was taken on temporary deposit at UCL Special Collections for the duration of the research. We worked with colleagues at Age UK to identify and select records relating to the 2010s.
Apart from NCVO and UK Youth, the organisations we partnered with did not have archival records in the public domain. Even for these two organisations, their archives did not include the documents from the 2010s that we hoped to examine. This epitomises the challenges facing research into subjects such as the roots of the mixed economy of welfare, which is often hampered by a lack of access to such sources. Our project addressed a major gap in knowledge by accessing data from these private collections. Indeed, the viability of the research proposal depended on being able to access and interpret the "archival voice" of these organisations. Conversations were held with the four original partners during the preparation of a funding proposal, building on established relationships between members of the research team and those organisations. Subsequent discussions were held with Ambition and UK Youth after the closure of NCVYS, and the subsequent merger of Ambition into UK Youth.

| CO-CURATION AS PROCESS

The co-curation of the archives involved the negotiated identification, selection, preparation, and interpretation of materials. At each stage decisions were made that shaped and re-shaped the form, content, and understanding of the archive. Each stage raised questions about what records to include and exclude as well as highlighting the varying capacities of our partners to engage with the project.

| Governance

Co-curation depends on forging successful collaborative partnerships. Collaboration is often described as being between organisations, but it is enacted by people in organisations (Hardill & Mills, 2013). In securing access, we drew on a mix of past research connections and team members' long track record working with voluntary organisations, including earlier archival interventions (see, for example, Brewis' British Academy-funded "Archiving the Mixed Economy of Welfare" project and AH/W002353/1 AHRC-Collaborative Doctoral Partnership "Charity and voluntary sector archives at risk: Conceptualising and contextualising a neglected archives sector").

We established a steering group with members drawn from our partners, academics with relevant expertise, and a professional archivist from UCL. It met five times over two years, and provided suggestions and feedback on data gathering, joined in on the analysis process, discussed emerging findings, and co-designed dissemination activities.

The formal arrangement of each partnership was specified in detailed Memoranda of Understanding (MoU), which covered access to source material, ethics, copyright, intellectual property, outputs, knowledge exchange, and depositing of data. For example, we agreed that each partner would have the chance to review every publication in which the organisation was mentioned and secured permission to reproduce copyrighted material. The MoU were signed off by senior staff at each organisation and by legal services at the lead university (for further details, see Brewis, 2020).
| Identification

Identification of source materials was the next step. We were able to access some material in third-party repositories: NCVO had deposited its archive at the London Metropolitan Archives (LMA) in the 1990s, while the transfer of the NCVYS archive to UCL was facilitated by the research team during the planning stages of the project. UK Youth had also deposited its archive (1909-c.2015) at the University of Birmingham. For everything else we had to rely on private collections, with the situation reflecting the differing levels of priority accorded to records across different organisations over time. The process of tracking materials down was not always simple and often relied on the knowledge of key long-serving staff members. Finding the "right" staff member(s) was a crucial step. The Children England papers from the 1940s, consisting of two boxes of board minutes and circulars, had been kept safe by a staff member and were transferred to UCL as a temporary deposit. Available records from Age UK for the National Old People's Welfare Committee in the 1940s were fragmentary; the few boxes that were tracked down were transferred to UCL on temporary deposit but lacked board minutes or printed reports. In order to fill some of the gaps, additional material was located by the research team at the British Library and this was supplemented with the purchase of other material from eBay (see DeLyser et al., 2004). Access to the papers of Ambition was complicated by the merger with UK Youth part-way through our negotiations. Eventually, the research team visited the organisation to select material to transfer to UCL for temporary deposit. While the Ambition archive was the most comprehensive collection of 1940s material out of those we acquired on loan, some of the material was in poor condition after being stored for decades in a damp basement and posed a contamination threat to other collections.

For materials from the 2010s, we faced the issue of the scale of records, alongside the need to provide reassurance to partners regarding the research team's access to sensitive material. The 2010s material was far more extensive in volume, taking time to locate and organise. We were heavily reliant on staff members to help us identify and retrieve relevant documents. Apart from the NCVYS archive at UCL (which was complete up to 2016), this material did not form part of archive collections but consisted of internal working documents, often a mix of hard copies of a few key documents with the rest available as soft copies, filed in multiple places across internal virtual storage systems. Rarely were they subject to formal records management procedures.
Access to contemporary sets of board papers represented the greatest concern for our partners, particularly as the timing of the project coincided with a period of sensitive restructuring and merger discussions, which were played out in the board minutes of several organisations that were involved in the study. The introduction in 2016 of the General Data Protection Regulation (GDPR) had shone a spotlight on sharing personal data, which further affected attitudes towards access. One organisation became particularly anxious about sharing documents in which living individuals were named after concerns regarding a potential breach of GDPR elsewhere within the organisation. Without the trust that had been built between the research team and the organisations, and the additional reassurance provided by the MoU, access to these more sensitive documents would not have been possible. Indeed, we did not get full sets of minutes for all organisations, with concerns about access combining with a lack of capacity within the organisation proving insurmountable within the time available for the research.

These private archives lacked the order and structure taken for granted when using archives deposited in a third-party repository. None had a catalogue or box list. This created a challenge, both at the stage of identification and also later when it came to referencing materials without the familiar fall-back of box, file, and item codes. If, for 1940s materials, there was a concern about a lack of documents, for the newer materials it was one of having too many. Identification and selection was a process of negotiation between the research team and partner organisations. Decisions were made by both sides that affected what was included. There were also examples of missing documents, which only became apparent when their existence was indicated through other sources. Within the process of co-curation, materials may be forgotten, not thought relevant, not possible to locate, or purposefully retained. Our experiences reflect the broader challenge of archival research on voluntary action, which entails accessing what is often considered "dispensable ephemera" via private archives or tracking down scattered records to reconstruct an organisational archive (Brewis, 2014, p. 10).
| Preparation

After identifying and acquiring the materials, the next task was to prepare the documents for use within the project. We had assembled an enormous amount of material, much of which did not address our research questions directly, although it was of wider value in providing background and context. A considerable amount of time was spent collecting, collating, cataloguing, reviewing, prioritising, and preparing documents for analysis. The 1940s material was for the most part administrative material, including minutes, annual reports, newsletters, and, in some cases, correspondence and publications. These sources began life as typescript, printed, or handwritten documents. After professional scanning, selected documents were converted into readable PDF or Word files using Optical Character Recognition software, supplemented by manual data "cleaning." The preparation of such sources was both labour and resource intensive, and could not have been undertaken without funding. The 2010s material was either "born digital," produced as Word or PDF files, or in some cases scanned from print copies to create readable PDF files. A key issue here was the scale of the data, running to thousands of documents: without any pre-existing catalogue, all had to be read in order to select the most relevant, inevitably making choices and compromises.

The original intention was to use computer-assisted qualitative data analysis software, more specifically NVivo10, to facilitate cross-team analysis. After an initial reading, those documents or sections of documents judged to be the most relevant were prepared and imported into NVivo. The range of documents and formats that can be analysed within NVivo has expanded in recent years, including enabling the inclusion of both text-based and image-only PDFs. In order to code and query the documents, however, the PDFs needed to be text-based, meaning that even some of the materials from the 2010s had to be converted. In practice, timescales and differing levels of familiarity with the software across the team meant that we did not use it as consistently as we had initially intended, but even our limited use demonstrated its potential utility, particularly given the scale of the data and the ability to share analysis across the team.

| Interpretation

The practice of co-curation included being in regular discussion with key staff at partner organisations to help contextualise and interpret the source material. This happened throughout the study, beginning with extensive initial conversations with potential partner organisations before the proposal was drafted. These conversations continued throughout the research, and - along with our existing knowledge and reading of wider literature and theoretical framings - helped to shape our interpretation of the materials, drawing our attention to certain documents or particular lines of argument, for example.

Steering group meetings were used more directly to inform our interpretation. At two of the meetings, for example, we shared selected documents with members who then worked collectively to identify key emerging themes. This fed directly into the development of a coding frame.
Towards the end of our analysis, we organised workshops with each partner organisation. These events presented emergent findings to groups of staff, trustees, or member bodies, and helped us test out and extend our interpretation of the data from individual organisational perspectives. They pointed us towards some important new avenues for our analysis, in some cases highlighting developments that our own analysis of the documents had not. In addition, we ran several wider events in which our partners joined us in sharing the emerging findings and reflecting on their relevance with other voluntary sector organisations and academics: the ensuing discussions helped refine our analysis, while also raising awareness of the value of charity archives.

| CO-CURATION AS OUTCOME

Co-curation has benefits, we suggest, for all concerned. The quality of our data collection and analysis was enhanced by the co-curation process, with implications for the publication of the project research findings (Brewis et al., 2021). As was noted earlier, research into the roots of the mixed economy of welfare has been hampered by a lack of access to privately held archives of voluntary organisations. The approach that we adopted enabled access, helped with identification, strengthened our interpretation/analysis, and refined our outputs. The evidence that has been generated is of a higher quality, and more robust, as a result. It was also a personally rewarding experience for the researchers involved.

For our project partners, one important outcome from co-curating their archives was the rediscovery of previously little-known or lost documents, images, or objects, which offers potential for new interpretations of the earlier work of an organisation. One example was the identification for Age UK of the forget-me-not pin badge produced in the 1940s for members of the local National Old People's Welfare Committee lunch clubs. We shared with the organisations the digitised versions of documents we produced, making them more readily accessible and useable. The uncovering of written sources and visual images was welcomed by our partners. However, while rediscovery is exciting, it can also be disruptive of an organisation's own interpretation of its history, which may draw heavily on a foundation narrative (Hilton et al., 2013) or have been reworked to shape current agendas. Sensitivity to this is needed.

A third outcome relates to the long-term preservation of the organisational archives we worked with. The research team is continuing to work with our partners to secure a sustainable future for all these collections. The acquisition of the NCVYS archive by UCL was a serendipitous outcome that coincided with discussions about involvement in the research. At the time of writing, plans are underway to retain the Children England archive at UCL Institute of Education, and negotiations are ongoing about Age UK's and Ambition's archives. This will ensure that these important records are preserved and available for others to research in the future.
While our experience of co-curation was positive, it was not without its challenges. The timescales of academic research can often feel at odds with those of voluntary organisations. While our partners were deeply supportive of the research, it was rarely a central priority. At times this meant that we were asking more of the organisations than they had the capacity for. It likely also meant that partners were frustrated by the relatively long timescale of the study, particularly when this appeared at odds with our occasional requests to turn things around quickly. Taking the time to nurture the relationships and build trust was key. There is a risk that getting too close to research partners makes it difficult to keep a critical distance. We suggest that the ways in which our project brought voluntary sector archives into conversation with each other, and with state narratives and public narratives, enabled triangulation and allowed more critical and challenging questions to be debated. Finally, while co-curation can help secure the future of valuable archives, it will likely shape the collections in ways that reflect not just the agendas of the archive holders themselves but also those of the researchers. This is likely to be particularly marked for our project, through its focus on the 1940s and 2010s, which may have skewed the archival records towards these two time periods.

| CONCLUSIONS

In this paper we have provided an account of co-curation through a focus on our engagement with the private archival collections of voluntary organisations. Securing this access enabled us to address a lacuna in research into the roots of the mixed economy of welfare. Methodologically we have moved beyond conceiving of an archive as a system of files, to thinking of an archive as a practice, to a third stage of co-curation. Co-curation might be seen as part of a broader iterative approach, one that is not neatly staged, is actively negotiated and shaped, and that involves choices being made - by all concerned and at all stages in the process - about what records are included or excluded. We have focused on the different iterative processes involved in collaborative archival research, which we argue leads to the production of co-curated collections.

Co-curating private archives demands the allocation of time and resources by the owners of records. Co-curation includes the identification of questions, partners, and materials and is built on trust and sustained through negotiation. Staff can lack the time to search for records and are unlikely to have professional archiving or records management skills. Co-curation also involves academics actively intervening in discussions about archive records. It can offer opportunities to researchers for reciprocity, to give back to organisations, the owners of private archives, and for the (re)discovery of the past by organisations. We were fortunate to have both the support from the organisations and the funding from a research grant to support our endeavours.
Co-curation has implications for human geography, and for other allied disciplines. Importantly, co-curation could open sources at the boundary of public knowledge, such as the private archives of voluntary organisations (Boyer, 2004), which to date have remained an under-utilised resource. It may enhance research quality through promoting methodological innovation that will lead to new, substantive insights. It may also enhance research engagement and impact through the relationships that are at its core. While co-curation should not be approached uncritically, it should, we suggest, be seen as a useful addition to the human geographer's methodological toolbox.
2021-11-08T16:05:53.397Z
2021-11-05T00:00:00.000
{ "year": 2021, "sha1": "b46db8624c2a2aba9daf445a7553d222af7bae1d", "oa_license": "CCBY", "oa_url": "http://pure-oai.bham.ac.uk/ws/files/155289853/area.12768.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "77b0d400b3bcc50d45cea1a08bf887db7bf71b9d", "s2fieldsofstudy": [ "Sociology", "History" ], "extfieldsofstudy": [] }
2858318
pes2o/s2orc
v3-fos-license
Genome-wide linkage analyses of two repetitive behavior phenotypes in Utah pedigrees with autism spectrum disorders

Background: It has been suggested that efforts to identify genetic risk markers of autism spectrum disorder (ASD) would benefit from the analysis of more narrowly defined ASD phenotypes. Previous research indicates that 'insistence on sameness' (IS) and 'repetitive sensory-motor actions' (RSMA) are two factors within the ASD 'repetitive and stereotyped behavior' domain. The primary aim of this study was to identify genetic risk markers of both factors to allow comparison of those markers with one another and with markers found in the same set of pedigrees using ASD diagnosis as the phenotype. Thus, we empirically address the possibilities that more narrowly defined phenotypes improve linkage analysis signals and that different narrowly defined phenotypes are associated with different loci. Secondary aims were to examine the correlates of IS and RSMA and to assess the heritability of both scales.

Methods: A genome-wide linkage analysis was conducted with a sample of 70 multiplex ASD pedigrees using IS and RSMA as phenotypes. Genotyping services were provided by the Center for Inherited Disease Research using the 6 K single nucleotide polymorphism linkage panel. Analysis was done using the multipoint linkage software program MCLINK, a Markov chain Monte Carlo (MCMC) method that allows for multilocus linkage analysis on large extended pedigrees.

Results: Genome-wide significance was observed for IS at 2q37.1-q37.3 (dominant model heterogeneity lod score (hlod) 3.42) and for RSMA at 15q13.1-q14 (recessive model hlod 3.93). We found some linkage signals that overlapped and others that were not observed in our previous linkage analysis of the ASD phenotype in the same pedigrees, and regions varied in the range of phenotypes with which they were linked. A new finding with respect to IS was that it is positively associated with IQ if the IS-RSMA correlation is statistically controlled.

Conclusions: The finding that IS and RSMA are linked to different regions that only partially overlap regions previously identified with ASD as the phenotype supports the value of including multiple, narrowly defined phenotypes in ASD genetic research. Further, we replicated previous reports indicating that RSMA is more strongly associated than IS with measures of ASD severity.

Background

Although it is generally accepted that genetic factors play a major role in the etiology of autism spectrum disorders (ASDs) [1], identification of specific genetic risk markers is complicated by the phenotypic complexity of clinical diagnoses. For example, the Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV) [2] diagnostic criteria for autistic disorder (AD) require impairments in three domains: social interaction, communication, and repetitive and stereotyped behavior. Each of these three domains has been shown to be heritable, but their covariation in the general population is modest, and genetic modeling suggests distinct genetic influences for each [3][4][5]. Thus, it has been argued that the ability to identify susceptibility loci for ASD would be increased if specific ASD/AD traits were used as phenotypes [3,6]. Specific ASD/AD traits have been employed in genetic studies most often either to stratify pedigrees for linkage analysis or as the dependent variable in association tests for specific alleles.
For example, the first approach has found stronger ASD linkage signals in pedigrees with more abnormal levels of phrase speech delay [7,8], repetitive behavior [9][10][11] and savant skills [12], but there have been failures in replication [13]. The second approach has resulted in significant genotype associations with repetitive behavior [14][15][16]. A third, less common approach has been to use the specific trait as a quantitative or qualitative phenotype in linkage analyses. For example, we used the Social Responsiveness Scale (SRS) [17] score as the phenotype in linkage analyses of multiplex ASD pedigrees (Coon et al., Genome-wide linkage using the Social Responsiveness Scale (SRS) in Utah autism pedigrees, submitted). Although each of these methods has merit, it should be noted that the first method attempts to reduce heterogeneity of the diagnostic phenotype by stratification on a specific trait, whereas the second and third approaches seek to identify risk markers for the trait itself.

Repetitive and stereotyped behavior is a promising candidate for further genetic study because it probably comprises at least two even more specific phenotypes that differ in their behavioral correlates, familiality, and relation to genetic linkage with ASD. The 'restricted and repetitive stereotyped behavior' (RRSB) domain of the Autism Diagnostic Interview-Revised (ADI-R) [18,19] is a well-accepted measure of the repetitive behavior phenotype. To uncover the factor structure of RRSB, a variety of factor analytic techniques have been used with different subsets of RRSB items and with study populations that differ in ASD severity and ethnicity [11,[20][21][22][23][24][25]. Remarkably, in spite of their methodological differences, these analyses converge on a two-factor solution comprising 'repetitive sensory-motor actions' (RSMA) and 'insistence on sameness' (IS). RSMA items investigate repetitive physical mannerisms and unusual sensory interests, whereas IS items investigate compulsive behaviors. There are two exceptions to the common two-factor solution. First, an exploratory factor analysis of RRSB items [26] recovered essentially the same RSMA and IS factors but also found a third factor ('circumscribed interests'). This finding does not detract from the conclusion that RRSB comprises RSMA and IS, but rather suggests that RRSB may measure additional factors as well. Second, a principal components analysis of all ADI-R items identified six factors, including a 'compulsions' factor that contained some items from both the IS and RSMA factors, and a 'social intent' factor that combined social interaction items with the RSMA item of 'hand and finger mannerisms' [27]. Despite this, the preponderance of statistical evidence indicates that RSMA and IS are distinct factors within the RRSB domain.

It is well established that IS and RSMA have different patterns of relationship with other ASD traits. Specifically, RSMA, but not IS, has been reported to be associated with lower IQ, less adaptive behavior, and later age of appearance of first words and phrases [6,20,21], which suggests that RSMA may be more correlated with ASD severity [6]. These findings support the validity of treating IS and RSMA as different phenotypes. There is more empirical support for a genetic effect on IS than on RSMA. Whereas modest evidence of familial concordance occurs for IS, no reported concordance occurs for RSMA [21,25]. Thus, the IS factor may account for earlier findings that RRSB is familial [28,29].
Indeed, Silverman et al. [28] reported that RRSB categories that include IS items were familial, whereas those that include RSMA items were not. Further, a linkage analysis across the 15q11-q13 region in a subset of families with the highest IS scores resulted in increased LOD scores for AD [11] over scores obtained without stratification. By contrast, stratification on RRSB or RSMA did not increase LOD scores. Finally, obsessive compulsive disorder (OCD) features in parents were associated with IS, but not RSMA, in children with AD [30], which suggests that IS may be part of a broader autism phenotype of obsessive behavior. We are not aware of previous genetic linkage studies with either IS or RSMA as the phenotype.

The primary aim of the present study was to perform a genome-wide linkage analysis with both IS and RSMA as phenotypes using large extended ASD pedigrees. Thus, our goal was to identify genetic risk regions for IS and RSMA in ASD cases rather than to stratify on IS and RSMA to reduce ASD heterogeneity. Because IS and RSMA data were available only for ASD cases rather than for all pedigree members, we focused our analyses on these specific phenotypes in ASD cases and did not include clinically unaffected family members in this study. Signals obtained with these phenotypes were compared with those found in the same set of pedigrees using ASD diagnosis [31]. Contrasting results obtained with IS and RSMA with those obtained by ASD categorical diagnosis addresses empirically the possibilities that more narrowly defined phenotypes improve linkage analysis signals, and that different narrowly defined phenotypes are associated with different loci. Secondary aims were to examine the correlates of IS and RSMA and to assess the heritability of both scales.

Methods

This study has ongoing approval from the University of Utah institutional review board (IRB). All adults participating in the research signed informed consent documents. All subjects under the age of 18 signed assent documents and their parents or guardians signed parental permission documents. These documents were approved by the University of Utah IRB.

Subjects

Subjects were members of 70 pedigrees having at least two family members with ASD. In total, 653 subjects were genotyped, 192 of whom had a study diagnosis of ASD. Study diagnosis was based in almost all instances on both the ADI-R [18,19] and the Autism Diagnostic Observation Schedule-Generic (ADOS-G) [32]. These pedigrees were used in our recent genome-wide linkage analyses of ASD [31]. All of the families studied are part of the Utah collection of multiplex ASD pedigrees. We did not include pedigrees from other collections or repositories. Additional sample characteristics including pedigree sizes, ascertainment and assessment methods were reported previously [31].

RSMA and IS scales

RSMA and IS scales were derived from the RRSB domain of the ADI-R, which was available for 183 subjects with a study diagnosis of ASD. RSMA and IS items were ADI-R items that reliably loaded on one scale or the other in previous factor analytic studies [11,[20][21][22][23][24][25]. For both scales, scores were the unweighted sum of ADI-R item 'ever' ratings of 0-3. We believe this method of scoring the two scales is less susceptible to chance inter-item correlations in our data than would be factor scales derived from our data alone. RSMA items included 'hand and finger mannerisms', 'unusual sensory interests', 'repetitive use of objects', 'complex mannerisms' and 'rocking'.
IS items included 'difficulties with minor changes in personal routine or environment', 'resistance to trivial changes in environment' and 'compulsions/rituals'.

Language delay

Items from the ADI-R ('age of first words' and 'age of first phrases') were used to assess language delay in ASD cases. For parents who indicated normal onset but who could not remember the exact ages, values were set to 23 months for words and 32 months for phrases (acquiring language after these ages is considered abnormal on the ADI-R). For parents who indicated delayed onset but could not remember the exact ages, values were set to 1.5 standard deviations above the mean. For subjects who never acquired language, values were set to 3 standard deviations above the mean.

Intellectual function

IQ was measured in subjects with ASD using an assessment instrument appropriate for the subject's age and developmental level. IQ measures included the Wechsler Intelligence Scale for Children, 3rd revision (WISC-III) [33], the Wechsler Adult Intelligence Scale, 3rd revision (WAIS-III) [34], the Differential Abilities Scale (DAS) [35] and the Mullen Scales of Early Development [36].

SRS

The SRS is a quantitative measure of social ability ranging continuously from significantly impaired to above-average social abilities [17]. Although the SRS can be used with a general population, in our study the SRS was used only with ASD cases. The SRS mannerisms scale, which contains items that measure stereotypical behaviors and restricted interests, was used to determine whether IS or RSMA was more highly associated with another accepted measure of repetitive behavior in ASD cases.

Genotyping

Genotyping services were provided by the Center for Inherited Disease Research (CIDR), using the 6 K single nucleotide polymorphism (SNP) linkage panel. Methods and quality control procedures have been described in detail previously [31]. After quality control, there were genotypes from 6,044 SNPs on 653 pedigree members who were members of 67 informative families. Eliminating linkage disequilibrium (LD) between markers in linkage studies has been strongly recommended, as false-positive results can occur in the presence of LD, particularly with extended multigenerational pedigrees for which ancestral genotypes are unavailable [37]. Recommended thresholds of acceptable LD vary, but a pair-wise r² value of 0.05 between SNPs has been supported with extensive simulation studies [37]. Therefore, before linkage analysis, we screened SNPs for LD using the PLINK software package [38], which recursively removes SNPs within a sliding window. We set a window size of 50 SNPs, shifted the window by 5 SNPs at each step, and used a variance inflation factor (VIF) of 1.5, which is equivalent to an r² of 0.33 regressed simultaneously over all SNPs in the selected window. This r² considers not only the correlations between SNPs but also between linear combinations of SNPs [38], and corresponds in our data to a pair-wise r² value of approximately 0.05. This screening for LD removed 1,207 SNPs. As part of the validation procedure, we also removed 115 SNPs with a minor allele frequency < 0.10 and 4 SNPs that were not in Hardy-Weinberg equilibrium (standard 1 degree of freedom test failed at the 0.05 level). The total number of SNPs left after this phase was 4,718.
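For readers who want to reproduce this kind of LD screen, the windowed VIF pruning described above corresponds to PLINK's --indep option. The following is a minimal sketch of a two-step invocation; the file names are hypothetical, while the thresholds are those reported in the text.

```python
import subprocess

# Step 1: windowed VIF pruning (50-SNP window, 5-SNP step, VIF threshold 1.5).
# PLINK writes the list of retained markers to pruning.prune.in.
subprocess.run(["plink", "--bfile", "asd_pedigrees",
                "--indep", "50", "5", "1.5",
                "--out", "pruning"], check=True)

# Step 2: keep the pruned-in SNPs and apply the MAF and HWE filters from the text.
subprocess.run(["plink", "--bfile", "asd_pedigrees",
                "--extract", "pruning.prune.in",
                "--maf", "0.10", "--hwe", "0.05",
                "--make-bed", "--out", "linkage_panel"], check=True)
```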
Analyses

Heritability

The heritability (proportion of variance in the trait due to genetic influences) of IS and RSMA was estimated using SOLAR software [39]. For discrete traits, SOLAR uses a threshold model to estimate polygenic heritability [40]. Estimates were also computed using jPAP software [41]; no substantive differences were found.

Linkage analysis

We used the genetic map provided by CIDR based on the deCODE genetic map [42]. Base pair positions were obtained from the March 2006 human reference sequence (hg18) assembly. Analysis was performed using the multipoint linkage software MCLINK, a Markov chain Monte Carlo (MCMC) method that allows for multilocus linkage analysis on large extended pedigrees [43]. Using blocked Gibbs sampling, MCLINK generates inheritance vectors from the Markov chain. Each state in this chain is an inheritance state, indicating the grandpaternal or grandmaternal origin of an allele at each marker locus, with changes in the origin of alleles along the inheritance vector indicating points of recombination. MCLINK then estimates linkage statistics from the log-likelihood function. Internally, MCLINK runs the analysis five times to ensure a consistent solution. MCLINK has been used previously to identify candidate genomic regions for a number of complex diseases [44][45][46][47][48]. Results from MCLINK have shown a high degree of similarity to other MCMC linkage methods [49], and to exact linkage methods and variance components linkage methods as applied to extended pedigrees [50]. Allele frequencies for the MCLINK analysis were estimated using all of the observed data.

We performed nonparametric and general parametric model-based analyses. Although nonparametric methods are the standard analytic approach for complex psychiatric disorders, parametric methods have some advantages in the analysis of a complex trait such as ASD, particularly when using large extended pedigrees. Parametric models, which are based on assumptions about the genotype-phenotype relationship, simplify the parameter space and allow for more powerful and efficient analyses without leading to false-positive results [51,52]. We decided to use two simple dominant and recessive models based on an extensive set of simulation analyses in which the results of various simple inheritance models were compared with the results of analyses based on a specified true model of inheritance [53]. Those simulation analyses found that the power to reach a given LOD score using the simple models was approximately 80% that of the true model, and that the expected LOD scores for the simple models approached the true expected LOD scores. The multipoint HLOD score allows for unlinked pedigrees and variation in the recombination fraction. The HLOD provided by MCLINK is robust to model mis-specification, and may reflect the true position of linkage regions more accurately under conditions of appreciable heterogeneity [54]. HLOD scores have been shown to be more powerful than homogeneity LOD scores or model-free methods under these conditions [55,56]. The HLOD has been shown to produce scores consistent with other published methods [57,58].

For both IS and RSMA, the phenotype was coded as unknown if the measure was not available, unaffected if the score was in the lowest tertile for the scale, and affected if the score was in the upper two tertiles. This approach re-codes affection status for all subjects rather than selecting a subset of subjects with high values on the traits. For IS, raw score tertile bins were 0-1, 2-3 and > 3; for RSMA, they were 0-3, 4-6 and > 6.
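To make the affection-status recoding concrete, here is a minimal sketch of the mapping just described (ours, not part of the published pipeline; the data and column names are hypothetical). The liability-class weighting of the middle versus upper tertile, described next, is then applied within the linkage model itself.

```python
import numpy as np
import pandas as pd

# Hypothetical scores for a handful of ASD cases; NaN = measure not available.
df = pd.DataFrame({"IS":   [0, 2, 5, np.nan, 1],
                   "RSMA": [3, 7, 2, 5, np.nan]})

# Tertile cut points reported in the text: IS 0-1 / 2-3 / >3; RSMA 0-3 / 4-6 / >6.
CUTS = {"IS": (1, 3), "RSMA": (3, 6)}

def code_phenotype(score, cuts):
    """Return (status, liability_class).
    status: 0 = unknown, 1 = unaffected (lowest tertile), 2 = affected (upper two tertiles).
    liability_class: 1 = middle tertile, 2 = upper tertile (weighted more strongly)."""
    if pd.isna(score):
        return 0, None
    if score <= cuts[0]:
        return 1, None
    return 2, (1 if score <= cuts[1] else 2)

for scale, cuts in CUTS.items():
    coded = [code_phenotype(s, cuts) for s in df[scale]]
    df[scale + "_status"] = [c[0] for c in coded]
    df[scale + "_liability"] = [c[1] for c in coded]

print(df)
```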
The tertiles were given different liability classes (penetrances) to weight those in the upper tertile more strongly. Our recessive model assumed a disease allele frequency of 0.05 and penetrances for the three genotypes of 0.0014, 0.0014 and 0.8 in the upper tertile, and 0.01, 0.01 and 0.5 in the middle tertile. For the dominant model, the disease allele frequency was 0.0025. The penetrances were 0.0014, 0.8 and 0.8 in the upper tertile, and 0.01, 0.5 and 0.5 in the middle tertile. These model parameters roughly reproduce the reported population frequency of ASDs [1].

Linkage analyses were repeated on the basis of residual scale scores to determine whether signals could be replicated using measures of the IS and RSMA phenotypes that were statistically independent of each scale's correlation with the other. Thus, for each scale, residual scores were computed using the other scale as a covariate (that is, IS-Adj = IS adjusted for RSMA and RSMA-Adj = RSMA adjusted for IS). Then, residual scores were divided into tertiles, and phenotype and liability values were coded in the same manner as were raw scores, that is, the lowest tertile was coded as unaffected and the top two tertiles were coded as affected, and the penetrance of the highest tertile was greater than that of the lower two tertiles. For HLOD scores, results are presented using the Lander and Kruglyak [59] genome-wide criteria. Suggestive linkage evidence was defined by a LOD score ≥ 1.86 and significant genome-wide linkage evidence was defined by a LOD score ≥ 3.30.

Results

Scale correlates

RSMA was more strongly associated than IS with other ASD features (Table 1). Both IS and RSMA raw scores were correlated with ADI-R domain scores and the SRS mannerisms scale, but the RSMA correlations with the ADI-R social and SRS mannerisms scales were significantly greater than those for IS. RSMA but not IS was correlated with ADOS score (after controlling for the effect of ADOS module scale), age of first phrases and IQ measures. With the exception of IQ measures, criterion variables significantly associated with raw scale scores tended to have lower correlations with residual scores, which suggests that the variance that IS and RSMA have in common may reflect a broader ASD trait. IQ measures, which were negatively correlated with RSMA, tended to be even more negatively associated with RSMA-Adj, although this trend was nominally significant (P < 0.01) only for non-verbal IQ. IS-Adj was positively correlated with IQ measures even though raw score IS was not, and IS-IQ correlations were significantly higher with residual than with raw scores. Thus, the unique variance of both IS and RSMA was less strongly associated with ASD but more strongly associated with IQ, although the direction of the relations with IQ was opposite (Table 1).

Heritability

The heritability of both scales was significant. For IS, H² was 0.85 (P < 0.0004, SE = 0.21), and for RSMA, H² was 0.51 (P < 0.03, SE = 0.26). Because the scales were significantly correlated, we also estimated the heritability of each with the other as a covariate. With RSMA as a covariate, IS was still significant (H² = 0.69, P < 0.004, SE = 0.23) and RSMA was a significant covariate (P = 0.003). By contrast, when IS was entered as a covariate for RSMA, RSMA was not significantly heritable (H² = 0.31, P = 0.13, SE = 0.27), but IS was a significant covariate (P < 0.0001).
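The residual-score adjustment used for IS-Adj and RSMA-Adj is a standard regression step; the paper does not name the exact procedure, so the sketch below assumes ordinary least squares with an intercept, applied to toy data.

```python
import numpy as np

def residualize(y, x):
    """Residuals of y after removing its linear (OLS) dependence on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
rsma = rng.integers(0, 11, size=100).astype(float)    # toy RSMA raw scores
is_raw = 0.4 * rsma + rng.normal(0.0, 1.5, size=100)  # toy IS scores, correlated with RSMA

is_adj = residualize(is_raw, rsma)     # IS-Adj: IS adjusted for RSMA
rsma_adj = residualize(rsma, is_raw)   # RSMA-Adj: RSMA adjusted for IS

# By construction the adjusted score is uncorrelated with its covariate;
# tertile cut points for affection status are then taken from the adjusted scores.
print(np.corrcoef(is_adj, rsma)[0, 1])  # ~0
```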
Table 2 lists all regions with at least suggestive evidence of linkage (HLOD ≥ 1.86 for parametric tests [59] or P < 0.005 for nonparametric tests). There was strong correspondence between regions for which there was evidence of linkage with the recessive model and nonparametric linkage (NPL), which suggests that these linkage findings are resistant to model mis-specification. Fewer tests of the dominant model, compared with the recessive model, were suggestive or significant. Thus, to simplify presentation of genome-wide results, Figures 1 and 2 display the genome-wide distribution of HLOD scores for the recessive model only (Table 2, Figures 1 and 2).

Linkage

Evidence of linkage reached genome-wide significance levels (HLOD > 3.30) for two regions, 2q37.1-q37.3 and 15q13.1-q14 (Table 2), so we examined the linkage evidence for these regions in greater detail (Table 3). For 2q37.1-q37.3, the linkage evidence was greater for the dominant model, so dominant model HLOD scores across chromosome 2 are shown in Figure 3 along with ASD HLOD scores from our earlier work [31]. The evidence of linkage to 2q37.1-q37.3 was greater for IS than for IS-Adj, RSMA and RSMA-Adj. Note too that we observed no evidence of ASD linkage to this region in our earlier study [31]. Taken together, these findings suggest 2q37.1-q37.3 may harbor a genetic risk marker for repetitive behavior, particularly IS, which is not strongly associated with ASD (Table 3, Figure 3).

Linkage results for chromosome 15 were of particular interest, both because of the different pattern of signals for IS and RSMA, and because of the linkage magnitude. Linkage evidence for both IS and RSMA at 15q13.1-q14 was greater for the recessive than for the dominant model (Table 3). Because there also was suggestive evidence with the recessive model of IS linkage to 15q21.1-q22.2 (Table 3), Figure 4 shows HLOD scores for the recessive model across chromosome 15. The linkage evidence at 15q13.1-q14 was greater for RSMA than for IS, but nonetheless the evidence for IS was suggestive. A different pattern of findings was observed at 15q21.1-q22.2. Not only was there no RSMA signal at this location, but the IS-Adj signal was much stronger than the unadjusted IS signal (HLOD = 3.03 and 1.88, respectively; NPL = 3.10 and 2.60, respectively). This was the largest difference in linkage values between adjusted and unadjusted phenotype HLODs for any locus at which at least suggestive linkage evidence was observed for both raw and residual data. Thus, it appears that the shared variance between IS and RSMA actually dampened the IS signal at 15q21.1-q22.2. Finally, note in Figure 4 that 15q13.1-q14 and 15q21.1-q22.2 both lie within a broader region in which we found evidence at genome-wide significance levels of linkage with ASD in our previous study with the same pedigrees [31]. Linkage evidence for ASD in the 15q13.1-q14 region is comparable with that for the two RSMA variables, but even stronger evidence of ASD linkage was observed in the 15q21.1-q22.2 region (Figure 4).

Discussion

In a large sample of multiplex ASD pedigrees, we found evidence that IS and RSMA are distinct phenotypes that can be differentiated by both their phenotypic and genotypic relations. Further, the results suggest that ASD susceptibility loci vary in the breadth of their phenotypic effects. Finally, the results illustrate the value of using narrowly defined phenotypes to detect the specific contribution of implicated susceptibility loci to the heterogeneous ASD phenotype.
IS and RSMA as distinct phenotypes

The overall pattern of relations of the two RRSB scales and their residuals with other ADI-R and ADOS measures suggests that although both RSMA and IS are indices of ASD severity, the relation with ASD severity is greater for RSMA than for IS and is in part a function of the shared variance between IS and RSMA. This general conclusion that RSMA is more closely associated with ASD severity is consistent with a previous report of the correlates of these scales [6]. The negative correlation between RSMA and IQ and the absence of a significant correlation between IS and IQ are consistent with previous reports [6,20], but the finding that the absolute magnitude of IQ correlations with both RSMA-Adj and IS-Adj is greater than IQ correlations with the raw scale values has not been reported previously. Taken together, these correlational findings suggest that the shared variance between IS and RSMA is associated with ASD severity but not with IQ.

The hypothesis that the positive relation between IS-Adj and IQ is mediated by anxiety is offered for further investigation. Anxiety, which is a common comorbid condition for ASD [60][61][62], has been reported to be positively correlated with IQ in children and adolescents with ASD [60,61]. If obsessive behaviors are attempts to regulate anxiety [63], then perhaps the positive relation between IS-Adj and IQ we observed is in part a consequence of the positive relation that others have reported between anxiety and IQ. Given that no data are available to support an association between the IS-Adj scale and anxiety, the hypothesis that the relation between IQ and IS-Adj is mediated by anxiety remains to be tested empirically.

Our results indicate that whereas both IS and RSMA are heritable, the estimated heritability was greater for IS. Further, the heritability of RSMA may not be independent of its relation with IS. Our findings are consistent with previous reports of significant heritability for IS [21,25], but in our families we find significantly positive heritability for RSMA as well. It is possible that the weaker RSMA heritability effect was not detected in those earlier reports.

Finally, we found different linkage patterns for IS and RSMA. There were many instances of suggestive signals (Figures 3 and 4), but consideration of linkage results for residual scales and linkage results for ASD at the two loci suggests different interpretations of these suggestive signals. At 2q37.1-q37.3, where there was a significant signal for IS, the suggestive signal for RSMA was not observed with RSMA-Adj and there was no linkage with ASD. Thus, it is possible that this region is relatively specific to IS, and that the suggestive signal for RSMA can be attributed to the correlation of RSMA with IS. By contrast, at 15q13.1-q14, where there was a significant signal for RSMA, suggestive signals were found for both IS and IS-Adj, indicating that the IS signal was not due to the RSMA-IS correlation; the region was also linked to ASD in our earlier study. Thus, it seems likely that RSMA, being more strongly correlated with other ASD criteria, was more strongly linked to 15q13.1-q14, which appears to harbor risk markers for a broad range of ASD traits.
Implications for studies of narrow phenotypes Some of the IS- and RSMA-specific findings not replicated in our affected status analyses (for example, the significant signal specific to IS at 2q37.1-q37.3) may be examples of the hoped-for outcome of identifying susceptibility loci that are specific to narrowly defined phenotypes [6]. Given that ASD is probably caused by many genes, each with relatively small effects [64,65], increasing our ability to detect such genes is crucial. Thus, these findings encourage further research with narrowly defined phenotypes to uncover linkage signals not observed with broader diagnostic categories. Further, our findings provide an example of the increased knowledge of the nature of genetic effects that may be possible with more homogeneous phenotypes. Previously, we reported possibly distinct ASD regions with evidence of linkage at 15q13.1-q14, 15q14-q21.1 and 15q21.1-q22.2 [31]. We now report that 15q13.1-q14 is linked to both RSMA and IS, but is linked more strongly to RSMA, and that 15q21.1-q22.2 is linked to IS but not to RSMA. Thus, these two loci appear to affect different aspects of repetitive behavior, a possibility that was missed in our analysis of affected status. The variability observed in this study in the phenotypic scope of linkage regions leads us to suggest that multiple ASD phenotypes should be used in future genetic studies to characterize the nature and breadth of the phenotypic linkage or association of risk variants. It is possible that variants with broad phenotypic effects may affect the root causes of ASD, whereas variants with narrow effects contribute to phenotypic heterogeneity among individuals with ASD. The use of multiple phenotypes emphasizes the importance of additional research aimed at developing an empirical model of the relations and interactions between specific features of ASD. Such a model should lead to identification of a set of phenotype measures that assess all the key specific features of ASD. The work of previous investigators to identify IS and RSMA as distinct features of repetitive behavior is a substantial contribution to this goal. We note that our results are again consistent with the well-replicated finding of complexity and heterogeneity in ASD genetics. Our lod scores showed inter- and intrafamily heterogeneity. For extended pedigrees, the scores expected under an assumption of a shared haplotype across all affected members exceeded by several lod units those actually found, depending upon the pedigree and model assumptions. Homogeneity clearly did not exist across all pedigrees in our sample; for any given region, multiple pedigrees showed no evidence of linkage. Previous genetic studies of repetitive and stereotyped behavior Shao et al. [13] reported the only linkage study of which we are aware that stratified pedigrees on either IS or RSMA. That study differs from the present study in several regards. First, they limited their linkage analysis to the 15q11-q13 region, whereas we did a genome-wide scan. Second, they used nuclear families rather than extended pedigrees. Third, they used the diagnosis of AD as the phenotype, whereas we used IS and RSMA as phenotypes. Finally, they used ordered-subset analysis and we did not. Shao et al. did not find significant evidence of linkage in the 15q11-q13 region across all 81 families they studied, but they did find significant evidence of linkage in the region of marker GABRB3 in the subset of 23 families with the highest mean IS scores.
Stratifying families by RSMA or RRSB did not enhance the signal. GABRB3 is located at 24.4 Mb, which is upstream of the lower boundary (27.94 Mb) of 15q13.1-q14. We did not choose subsets of our sample, but rather re-defined affection status based on IS or RSMA phenotypic information, using information from all ASD members of the pedigrees. The methodological differences between our study and that of Shao et al. preclude firm conclusions about why they found that stratifying on IS but not RSMA enhanced the AD linkage signal, whereas we found both RSMA and IS, but particularly RSMA, to be associated with a region just downstream. Studies that stratified pedigrees by other repetitive behavior measures, including individual RRSB items and the 'compulsions' factor examined by Tadevosyan-Leyfer et al. [27], report increased HLOD scores for AD at chromosome 1 [7] and at 17q11.2 [10]. Significant associations between SLC25A12 alleles (2q31.1) have been reported for both the RRSB 'routines and rituals' category (similar to IS) [15] and the compulsions factor [16]. None of these loci overlaps signals that we obtained for IS or RSMA linkage. These differences may again be due in part to methodological differences between choosing subsets versus re-defining phenotypes. The suggestive evidence of IS linkage that we observed on chromosome 9 spans a region implicated as a susceptibility locus for OCD in two studies [66,67]. This replication is noteworthy because the earlier two studies did not include subjects with ASD. We did not find evidence of linkage for ASD diagnosis in this region using our full set of families, although we did find evidence of linkage for ASD in this region in our analysis of a single large extended pedigree [68]. Previous research has indicated that OCD features in parents of children with AD are correlated with scores for IS but not RSMA in probands [30]. Thus, this region at the chromosome 9 telomere may underlie a broader autism phenotype of repetitive behavior rather than ASD. Limitations Our sample was a cohort of multiplex ASD pedigrees, and IS and RSMA data were collected only on subjects thought to have ASD. We believe our method is appropriate to the valid aim of uncovering susceptibility loci for ASD and related phenotypes within extended families containing multiple members with ASD. However, we acknowledge that our method limits the generalizability of our findings to other research aims. For example, the absence of repetitive behavior phenotype data for family members without ASD limits our ability to answer the question of whether repetitive behavior is a broader autism phenotype that occurs in unaffected relatives [30,69]. Further, because our sample is not population-based, we cannot generalize our findings to the search for genetic markers for repetitive behavior in the general population [3]. Finally, our study includes analyses of the IS and RSMA phenotypes under two simple models, dominant and recessive. If we conservatively assume that these models and phenotypes are not correlated, then significance thresholds would be adjusted by log 10 (4) ≈ 0.6 lod score units (a worked restatement of this adjustment follows the Conclusions). Our thresholds would then be 2.46 for suggestive evidence and 3.9 for significant evidence. With this adjustment, results on chromosome 15 remain significant and many other results remain suggestive, but other results would be considered as nominal. Conclusions IS and RSMA, two factors within the ADI-RRSB domain, were found to be linked to largely non-overlapping chromosomal regions.
Genome-wide significance was observed for IS at 2q37.1-q37.3 (dominant model HLOD = 3.42) and for RSMA at 15q13.1-q14 (recessive model HLOD = 3.93). Regions varied in the range of phenotypes with which they were linked. These findings support the value of including multiple, narrowly defined phenotypes in ASD genetic research.
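For reference, the multiple-testing adjustment described under Limitations can be written out explicitly; this is a worked restatement, assuming the four tests are the two genetic models crossed with the two phenotypes:

$$
\Delta = \log_{10}(4) \approx 0.60, \qquad
1.86 + \Delta \approx 2.46 \ \text{(suggestive)}, \qquad
3.30 + \Delta \approx 3.90 \ \text{(significant)}.
$$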
2018-04-03T04:01:11.855Z
2010-02-22T00:00:00.000
{ "year": 2010, "sha1": "1d8288d39b15faad37a4bc97b950a609486cc8de", "oa_license": "CCBY", "oa_url": "https://molecularautism.biomedcentral.com/track/pdf/10.1186/2040-2392-1-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5b585ffa58eb7d4a5cb473f1cf5b8b3bc93a471", "s2fieldsofstudy": [ "Psychology", "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
242932237
pes2o/s2orc
v3-fos-license
Landscape diagnostic survey data of wheat production practices and yield in eastern India Approximately 7,600 wheat plots were surveyed and geo-tagged in the 2017-18 winter or rabi season in Bihar and eastern Uttar Pradesh (UP) in India to capture farmers' wheat production practices at the landscape level. A two-stage cluster sampling method, based on Census data and electoral rolls, was used to identify 210 wheat farmers in each of 40 districts. The survey, implemented in Open Data Kit (ODK), recorded 226 variables covering major crop production factors such as previous crop, residue management, crop establishment method, variety and seed sources, nutrient management, irrigation management, weed flora and their management, harvesting method and farmer-reported yield. Crop cuts were also made in 10% of fields. Data were very carefully checked with enumerators. These data should be very useful for technology targeting, yield prediction and other spatial analyses. 1 INTRODUCTION: Crop yields are known to vary widely in space and time, and the so-called yield gap (defined here as the difference in yield between the best and worst 10% of farmers) can be substantial (e.g. Global Yield Gap Atlas http://www.yieldgap.org/web/guest/home). Closing this yield gap through good agronomic practices or best management practices is the aim of most extension programs. But what are the best management practices being used by farmers? How do they vary spatially (and temporally)? Are they predictable? The Landscape Diagnostic Survey (LDS), which is being implemented in wheat and rice systems in eastern India, is designed to capture farmers' current practices for cultivating wheat and rice at the scale of large landscapes, and to use these data, and other spatial data, to understand and predict yield and the key drivers of production. The Cereal System Initiative for South Asia (CSISA: https://csisa.org/), in collaboration with the Indian Council of Agricultural Research (ICAR https://icar.org.in/) and State Agricultural Universities (SAU), surveyed wheat farmers in the Indian states of Bihar and eastern UP to capture their current production practices. ICAR and SAUs between them have an extensive network in the field through the Krishi Vigyan Kendra (KVK) system (https://kvk.icar.gov.in/). All partners jointly developed the survey questionnaire and methodology. The survey covered all aspects of production, including agronomic, social, economic and market variables, assuming that yield might also depend on other factors besides agronomic management. The sampling methodology was devised to ensure a representative sample keeping in mind the cost and time involved. Special emphasis was given to the randomized selection of farmers and their spatial distribution. Farmers were interviewed individually and their production practices on the largest wheat plot were recorded for the winter or rabi season 2017/2018. Physical crop-cuts were also planned for a sub-set of samples to check the deviation between survey-reported and crop-cut yields. The survey aimed to generate data-based evidence around current crop production practices that can be wisely utilized by national and state level policy makers for enhancing crop productivity in the region. 2. FIELD SURVEY AND DATA COLLECTION 2.1 Sampling method: Two-stage cluster sampling was applied to ensure a balance between available resources and desired accuracy (Sedgwick, 2014). A District was considered as a survey unit and villages as clusters within a District.
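To make the two-stage design concrete, here is a minimal Python sketch of the procedure detailed in the next paragraphs: probability-proportionate-to-size (PPS) selection of 30 villages per district, followed by simple random sampling of seven households per village. It is an illustration on synthetic data, not the survey's actual code; the village-size filters and sample counts are taken from the description below.

```python
# Illustrative two-stage cluster sampling for one district (synthetic data).
import numpy as np

rng = np.random.default_rng(42)

def select_villages(households_per_village, n_villages=30):
    """Stage 1: PPS selection of villages, after dropping extremes."""
    hh = np.asarray(households_per_village, dtype=float)
    # Remove extremely small (<50 HHs) and extremely large (>5000 HHs) villages.
    eligible = np.where((hh >= 50) & (hh <= 5000))[0]
    p = hh[eligible] / hh[eligible].sum()  # selection probability ~ village size
    return rng.choice(eligible, size=n_villages, replace=False, p=p)

def select_households(house_numbers, n_hh=7):
    """Stage 2: simple random sampling of households from the voter list."""
    return rng.choice(house_numbers, size=n_hh, replace=False)

village_sizes = rng.integers(20, 6000, size=200)   # toy district of 200 villages
sample = {int(v): select_households(np.arange(village_sizes[v]))
          for v in select_villages(village_sizes)}
print(sum(len(h) for h in sample.values()), "households sampled")  # 30 x 7 = 210
```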
In the first stage, villages were selected through the probability proportionate to size (PPS) method as villages vary in size. Larger villages were assigned a higher probability of selection than smaller villages (Skinner, 2020). In the second stage, the same number of households (HHs) was selected randomly in each sampled village so that each unit sampled had an equal chance of being selected. Village selection was performed using data from the '2011 Census of India'. All villages within a district were enlisted along with their sizes (number of HHs). Villages listed under the 'urban' category, those having more than 5000 HHs (extremely big) and those having fewer than 50 HHs (extremely small) were removed. The remaining villages formed the sampling frame for village selection. PPS was applied on this frame to draw 30 villages randomly. Farmer selection relied on the 'list of voters' fetched from the State Election Commission website. The village list provided the names of all residents along with unique house numbers. These house numbers were used to construct the sampling frame for HH selection. From each sampled village, seven HHs were selected using simple random sampling. Accordingly, 210 HHs were interviewed in each District. An example of the sample distribution in Gopalganj District of Bihar is portrayed (Figure 1). Digital survey instrument and ODK tool: The survey was deployed electronically using ODK. This enabled real-time progress monitoring, automation in data compilation and error minimization during interviews. The questionnaire was programmed in an offline version (.xlsx version) of ODK Build. The survey instrument had been refined over a number of cycles such that there were no open-ended questions and minimum-maximum ranges were applied to reduce errors in entering values. Enumerators used ODK Collect, an Android application (app), to capture interview responses. Raw data sent by enumerators was stored on ODK Aggregate, an open-source Java app which also hosted the blank questionnaire in XForm version. Enumerators downloaded the blank questionnaire on their Android devices, completed interviews and sent filled-in questionnaires back to Aggregate (https://docs.getodk.org). The survey instrument and the ODK version (XML) are included in the downloadable files. Survey deployment: The survey was deployed through the staff of the Krishi Vigyan Kendras (KVK), a Government agricultural extension centre in each District. The concerned staff of all 40 centres attended an orientation on the sampling method and survey questions, and training on the application of ODK, in four separate batches, comprising one day of classroom training followed by mock interviews of farmers using the ODK Collect app on the next day. Participants were provided with the list of sampled villages and respondents of their respective Districts. During survey deployment, they received technical and logistical support from the project. Coverage: The survey covered 40 Districts and 7648 wheat farmers in Bihar and eastern UP. All these districts together form a large area in the eastern Indo-Gangetic plain of India (Figure 2) where the rice-wheat cropping system prevails. There were 31 Districts with 5793 farmers from Bihar, and nine districts with 1855 farmers from UP. The survey was conducted on the selected farmer's largest wheat plot. Figure 3 shows the interlinkage among survey steps. The digital survey form (questionnaire) was designed on the .xlsx version of ODK Build (1). The blank form was uploaded to the ODK server (2).
Mobile devices linked with this server pulled blank forms for use (3). Selected farmers were interviewed (4) and completed forms were sent to the server (5). Raw data aggregated at the server was imported as a .csv file (6). Data was curated by carefully screening and validating entries with the enumerators (7). The curated and cleaned file was analysed with the open-access software R to identify key yield-attributing factors (8). Data repository and format: The data is available from the CIMMYT CSISA Dataverse (https://data.cimmyt.org/dataverse/csisadvn). Data is available in an .xls file with metadata and variables, links to documents with the sampling method and survey instrument, and also the R script to read the data. 3. DATA SUMMARY: A summary of a few key agronomic variables is given in Table 1, although the survey has captured many other ecological, social, economic and market-related parameters. The random sampling approach enabled us to spread data across different land typologies as conceptualized by farmers (Figure 4). Seventy-one percent of data points were from medium land types, defined as lands that neither dry up quickly nor face waterlogging after rain. The survey covered approximately 1100 villages and highlighted that 95% of the wheat plots are planted through the broadcasting method. The remaining 5% of the surveyed plots were line-sown, after tillage and under zero tillage in almost equal proportion. Planting time of wheat is an important variable whose influence on yield is well established (Malik et al., 2007). Wheat planting time in this part of India generally starts in the month of November and finishes by the end of December. The survey captured this planting pattern; planting date ranged from 25 October to 26 January with a peak (28%) happening in the last week of November (Figure 5). Late sowing of wheat results in a yield penalty. The survey categorically recorded reasons for delayed wheat planting wherever farmers had previously reported a planting time after November. Farmers reported using 66 different wheat varieties but, interestingly, three varieties were mentioned by more than half of the farmers. These were PBW 343 (21%), HD 2967 (20%) and UP 262 (12%). Fertilizer application information was captured in complete detail. This part of the survey records the names of applied fertilizers, their respective doses in splits, application time with reference to the planting day, and availability. The wheat crop needs to be irrigated adequately to harvest an optimal yield (Zaveri and Lobell, 2019). In the survey, farmers were asked to provide detailed information on wheat plot irrigation: availability, accessibility, number of irrigations, crop stage(s) at which irrigation was applied and irrigation decisions (when to irrigate). Similarly, data was recorded on the practices farmers follow to control weeds: number of times herbicide(s) were applied, herbicide names, time of application with reference to the planting day, number of times weeds were removed manually and time of manual operations with reference to the planting day. Information on weed control measures was followed by pictorial identification of the top five weeds infesting the surveyed wheat plot. Weeds identified with the help of a weed poster were then ranked by farmers based on their severity of damage. Wheat grain yields of farmers' largest wheat plots were fairly normally distributed (Figure 6). The mean value was 3.0 t ha-1 with a standard deviation of 0.85.
Twenty percent of farmers obtained yields >4 t ha-1, suggesting considerable scope to increase productivity. We tried to understand gaps in production practices, namely why one-third of farmers settled for low yields (<3 t ha-1). At the end, the survey recorded household size, number of members engaged in farming, marketable surplus and the percent contribution of agriculture/the wheat crop to household income. Each interview ended after the geo-coordinates of the surveyed plot were captured with acceptable accuracy. ACKNOWLEDGEMENTS: Funding for this survey was provided by the Bill & Melinda Gates Foundation as part of the CSISA project. We sincerely acknowledge the effort put forward by our project team members (Ajay Pundir, Anurag Kumar, Pankaj Kumar, Prabhat Kumar, Deepak Kumar Singh, Madhulika Singh and Moben Ignatius) for coordinating data collection with KVK partners. We highly appreciate the continued engagement of all forty KVK personnel in data collection. The authors are grateful to ICAR for collaborating in this endeavour and taking up the survey at a much larger scale through its extension wing.
2021-10-15T16:12:12.103Z
2021-10-06T00:00:00.000
{ "year": 2021, "sha1": "352ede05be907baa5e45d59563d5dbb4501d1c98", "oa_license": "CCBY", "oa_url": "https://odjar.org/article/download/17959/17566", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6ad1366ab53a916bab57fb286953479f900cd539", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
3667749
pes2o/s2orc
v3-fos-license
Optimal Transmit Antenna Selection for Massive MIMO Wiretap Channels In this paper, we study the impacts of transmit antenna selection on the secrecy performance of massive MIMO systems. We consider a wiretap setting in which a fixed number of transmit antennas are selected and then confidential messages are transmitted over them to a multi-antenna legitimate receiver while being overheard by a multi-antenna eavesdropper. For this setup, we derive an accurate approximation of the instantaneous secrecy rate. Using this approximation, it is shown that in some wiretap settings under antenna selection the growth in the number of active antennas enhances the secrecy performance of the system up to some optimal number and degrades it when this optimal number is surpassed. This observation demonstrates that antenna selection in some massive MIMO settings not only reduces the RF-complexity, but also enhances the secrecy performance. We then consider various scenarios and derive the optimal number of active antennas analytically using our large-system approximation. Numerical investigations show an accurate match between simulations and the analytic results. I. INTRODUCTION Over the past few years, the popularity of smart phones, electronic tablets and video streaming, as well as the sharp rise in the number of service providers, has led to an explosive growth of data traffic in wireless networks. This increasing demand for capacity in mobile broadband communications poses challenges for designing the next generation of cellular networks (5G) in the near future [2]. Given this backdrop, confidential and private transmission of data in the next generation of wireless networks is of paramount importance. In this respect, physical layer security for 5G wireless networks has gained significant attention in recent years, aiming at the design of reliable and secure transmission schemes [3], [4]. Unlike the traditional approaches relying on cryptographic techniques [5], physical layer security provides secrecy by exploiting the inherent characteristics of wireless channels. Although cryptographic techniques employed in the upper layers of networks protect processed data securely, physical layer security is a potential solution throughout the communication phase [6]. This work has been presented in part at GLOBECOM 2017 [1]. The basic model for physical layer security is the wiretap channel, in which messages transmitted to a legitimate receiver are overheard by an eavesdropper. Wyner demonstrated that secrecy is obtained in this setting as long as the legitimate receiver communicates over a channel whose quality is better than that of the eavesdropper channel [7]. Based on this framework, several techniques such as artificial noise generation [8], [9] and cooperative jamming [10] were proposed for secrecy enhancement. The extension of Wyner's framework to Multiple-Input Multiple-Output (MIMO) settings has moreover shown a promising performance of such settings in the presence of eavesdroppers [11]- [13]. In fact, in MIMO wiretap channels, also referred to as Multiple-Input Multiple-Output Multiple-Eavesdropper (MIMOME) channels, the Base Station (BS) can focus its main transmit beam toward the legitimate terminals, and thus reduce the information leakage to the eavesdroppers. This technique in massive MIMO settings [14] asymptotically cancels out passive malicious terminals in the network, making these settings robust against passive eavesdropping [15].
Despite the promising characteristics of massive MIMO systems, they are known to pose a high Radio Frequency (RF) cost and complexity. In fact, employing a separate RF chain per antenna in massive MIMO systems imposes a burden from the implementational point of view [16]. This issue has established antenna selection [17], along with other approaches such as spatial modulation [18] and hybrid analog-digital precoding schemes [19], [20], as prevalent strategies in massive MIMO. In antenna selection, only a subset of antennas is set to be active in each coherence time. This subset is in general selected with respect to some performance metric such as achievable transmission rate, outage probability or bit error rate [17]. The optimal approaches to antenna selection, however, require an exhaustive search which is not computationally feasible in practice. Alternatively, several suboptimal, but complexity-efficient, methods have been proposed in the literature; see for example the approaches in [21]- [24]. The investigations have shown that these suboptimal approaches do not impose a significant loss on the performance for several MIMO settings [23], [25], [26]. In the context of massive MIMO systems, recent studies have demonstrated that the large-system properties of these systems are maintained even via simple antenna selection algorithms [27], [28]. In addition to implementational complexity reduction, antenna selection was also observed to be beneficial in MIMO systems with respect to some performance measures such as secrecy rate [29], [30], energy efficiency [31] and effective rate [32] in some special cases. For instance, it was shown in [33] that single Transmit Antenna Selection (TAS), i.e., only one transmit antenna being active, in a conventional MIMO setup can achieve high levels of security, especially when the total number of transmit antennas increases. The study was later extended in [34] to cases with multi-antenna eavesdroppers, demonstrating that similar results hold also in these settings. In [35], secure transmission in a general MIMOME channel was investigated under single TAS. Such results were further extended in the literature for other MIMOME settings. For example in [36], secure transmission was studied for Nakagami-m fading channels under single TAS. The impacts of imperfect channel estimation and antenna correlation were also investigated in [37]. The average secrecy rate and secrecy diversity analysis for a simple single TAS scheme were moreover studied in [38], [39]. In [40], TAS with outdated Channel State Information (CSI) was analyzed for scenarios with single-antenna receivers. The effect of single TAS at the BS in the presence of randomly located eavesdroppers with a full-duplex receiver was moreover studied in [41]. In contrast to single TAS, the secrecy performance of MIMOME channels under multiple TAS, i.e., setting multiple transmit antennas to be active, has not yet been addressed in the literature. In fact, under multiple TAS, the growth in the number of selected transmit antennas is beneficial to both the legitimate receiver and the eavesdropper, and therefore, its effect on the overall secrecy performance is not clear. This paper intends to study the impact of multiple TAS in massive MIMOME settings. Contributions and Organization We study the secrecy performance of a MIMOME channel in which the BS employs a computationally simple TAS algorithm to select a fixed number of transmit antennas.
For this setting, the distribution of the instantaneous secrecy rate in the large-system limit, i.e., when the number of transmit antennas grows large, is accurately approximated. This approximation is then utilized to investigate the secrecy performance in two different scenarios: Scenario (A) in which the eavesdropper's CSI is available at the transmit side, and Scenario (B) in which the BS does not know the eavesdropper's CSI. Our investigations demonstrate that in both scenarios, there exist cases in which the secrecy performance is optimized when the number of active antennas is less than the total number of transmit antennas. In other words, the growth in the number of selected antennas in some cases enhances the secrecy performance up to an optimal value; however, it becomes destructive if the number of active antennas surpasses this optimal value. Invoking our large-system results, we develop a framework to derive this optimal value analytically. The consistency of our approach is then confirmed through numerical investigations. The remainder of this manuscript is structured as follows: Section II describes the system model. In Section III, we conduct analyses for large dimensions. The impacts of TAS on the secrecy performance are investigated in Section IV, where we also give some numerical results and discussions. Finally, the concluding remarks are given in Section VI. The proofs of the main theorems are moreover provided in the appendices. Notations: Throughout the paper, scalars, vectors and matrices are denoted by non-bold, bold lower case, and bold upper case letters, respectively. ℂ represents the complex plane. The Hermitian of H is indicated with H H , and I N is the N × N identity matrix. The determinant of H and the Euclidean norm of x are shown by |H| and ‖x‖, respectively. ⌊x⌉ refers to the integer with minimum Euclidean distance from x. The binary and natural logarithms are denoted by log (·) and ln (·), respectively, and 1 {·} represents the indicator function. E {·} is the mathematical expectation, and Q(x) and φ(x) denote the standard Q-function and the zero-mean, unit-variance Gaussian density, respectively. II. PROBLEM FORMULATION We consider a Gaussian MIMOME wiretap setting in which the transmitter, the legitimate receiver and the eavesdropper are equipped with multiple antennas, represented by M , N r and N e , respectively. The main channel, from the transmitter to the legitimate receiver, and the eavesdropper channel, from the transmitter to the eavesdropper, are assumed to be statistically independent and experience quasi-static Rayleigh fading. The CSI of both channels is considered to be available at the receiving terminals. The transmitter is moreover assumed to know the CSI of the main channel. In practice, the CSI is obtained at the respective terminals by performing channel estimation which depends on the duplexing mode of the system. Massive MIMO settings are usually considered to operate in the time division duplexing mode, in which it is sufficient to estimate the channel only in the uplink training mode due to the channel reciprocity. More details on channel estimation in massive MIMO settings are found in [42, Chapter 3]. Based on the availability of the eavesdropper's CSI at the transmitter, we consider two different scenarios in this paper: (A) The eavesdropper's CSI is available at the transmitter. (B) The transmitter does not know the eavesdropper's CSI. A. System Model The encoded message x M×1 is transmitted over the main channel.
In this case, the received signal y Nr×1 reads as in (1), where H m ∈ ℂ Nr×M represents the main channel matrix, ρ m denotes the average Signal-to-Noise Ratio (SNR) at each receive antenna and n m is zero-mean and unit-variance complex Gaussian noise, i.e., n m ∼ CN (0, I Nr ). Since the channel is assumed to be quasi-static Rayleigh fading, the coherence time is significantly larger than the transmission interval and the entries of H m are modeled as independent and identically distributed (i.i.d.) complex-valued Gaussian random variables with zero mean and unit variance. At the eavesdropper, x is overheard and the received signal is given by (2). The TAS protocol S selects the L antennas which correspond to the first L indices in the ordered index list W given by the ordering in (3), i.e., W S := {w 1 , . . . , w L }. Corresponding to the TAS protocol, the effective main and eavesdropper channels, namely H̃ m and H̃ e , are respectively constructed from H m and H e by collecting those column vectors which correspond to the selected antennas. For instance, H̃ m is an N r × L matrix with columns h w1,m , . . . , h wL,m . Note that although the TAS protocol S selects the strongest antennas corresponding to the main channel, it performs as a random TAS protocol for the eavesdropper, since H m and H e are statistically independent. Remark: In practice, W S in the TAS protocol can be determined either by employing a rate-limited feedback channel from the legitimate receiver to the BS or by estimating the CSI at the BS. One may note that in the former case, the rate-limited channel requires a low overhead. Moreover, in the latter case, the transmitter need not acquire the complete CSI. In fact, as W S is determined via the ordering in (3), the transmitter only needs to estimate the channel norms. This task can be done at the prior uplink stage simply by analog power estimators, and requires a significantly reduced time interval compared to the case of complete CSI estimation. This reduced interval furthermore allows for averaging over the coherence time, which can improve the power estimation; see [43] for more details. B. Achievable Secrecy Rate For the MIMOME wiretap setting specified by (1) and (2), the instantaneous achievable secrecy rate reads as in (4) [12]. In (4), R m denotes the achievable rate over the main channel and R e is the achievable rate over the eavesdropper channel, with Q M×M being the power control matrix. For simplicity, we assume uniform power allocation over the active antennas with unit average transmit power on each antenna, i.e., Q reduces to the identity matrix over the selected antennas. Consequently, the instantaneous rates R m and R e reduce to (8a) and (8b), respectively. Note that R e in (8b) is determined under the worst-case scenario in which the eavesdropper knows the indices of the antennas selected by the protocol S. Substituting into (4), the maximum achievable instantaneous secrecy rate reads as in (9), where the argument S is written to indicate the dependency of the achievable secrecy rate on the TAS protocol. Note that when the eavesdropping terminal is not capable of obtaining the indices of the selected antennas, (9) bounds the achievable instantaneous secrecy rate from below. Since the channels experience fading, R s (S) is a random variable whose statistics define different secrecy performance metrics, e.g., the ergodic secrecy rate and the secrecy outage probability. In the sequel, we evaluate the asymptotic distribution of R s (S). III. LARGE-SYSTEM SECRECY PERFORMANCE The secrecy performance in Scenarios A and B is quantified via different metrics.
In Scenario A, since the BS knows the eavesdropper's CSI, it transmits with rate R s (S) in each coherence time; thus, the secrecy performance is measured by the achievable ergodic secrecy rate. When the eavesdropper's CSI is not available at the BS, the transmitter assumes the secrecy rate to be R o . In this case, secure transmission is guaranteed as long as R m − R e > R o . Consequently, in Scenario B, the secrecy performance is properly quantified by the secrecy outage capacity; see [44] for further discussions. Based on the above discussion, the performance of the setting in both Scenarios A and B is described by the statistics of R s (S). We hence derive an accurate large-system approximation for the distribution of R s (S) in Theorem 1. Here, by the large-system limit we mean M ↑ ∞. To state Theorem 1, we define the "asymmetrically asymptotic regime of eavesdropping". Definition 1 (asymmetrically asymptotic regime of eavesdropping): The eavesdropper is said to overhear in the asymmetrically asymptotic regime of eavesdropping when the number of eavesdropper's antennas per active antenna, defined as β e := N e /L, reads either β e ≪ 1 or β e ≫ 1. In Definition 1, β e ≪ 1 describes scenarios in which the eavesdropper is a regular mobile terminal with a finite number of antennas. Moreover, β e ≫ 1 represents MIMOME settings with sophisticated eavesdropping terminals such as portable stations in cellular networks. In the sequel, we assume that the setting under study operates in the asymmetrically asymptotic regime of eavesdropping. However, our numerical investigations later show that the results are valid even when the system does not operate in this regime of eavesdropping. Theorem 1: Consider the TAS protocol S, and let for some non-negative real u which satisfies Ξ t which is given by and f Nr (·) which represents the chi-square probability density function with 2N r degrees of freedom and mean N r , i.e., for K t := U m + ρ m η t and C t := ρ m η t U m (U m − 1) /V m and the constant ψ = log e = 1.4427. Proof: The proof follows the hardening property of the main and eavesdropper channels. In fact, the results of [28] indicate that in the large-system limit, R m is approximately Gaussian with a vanishing variance. The eavesdropper channel is moreover shown to harden in the asymmetrically asymptotic regime of eavesdropping following the discussions in [45]. The detailed derivations are given in Appendix A. From (14b), one observes that the variance of the secrecy rate vanishes in the large-system limit. In fact, as M grows large, η t increases, and hence, the first term in (14b) tends to zero. Moreover, in the asymmetrically asymptotic regime of eavesdropping, U e /V e is significantly small and the two other terms are negligible. Consequently, in the large-system limit σ converges to zero. This observation could be intuitively predicted, due to the fact that both channels harden asymptotically. The mean value η, however, does not necessarily increase as M grows, since it is given as the difference of two terms which can both asymptotically grow large. The latter observation indicates that increasing the number of selected antennas for this setup does not necessarily improve the secrecy rate. We discuss this argument later in Section IV. At this point, we employ Theorem 1 to investigate the secrecy performance of the system in Scenarios A and B. Remark: Theorem 1 gives a "large-system approximation".
This means that for fixed L, N r and N e , R asy (S) accurately approximates the statistics of the instantaneous secrecy rate when M is large enough. Note that the theorem does not impose any constraint on the growth of L, N r and N e , and the approximation is valid as long as the assumptions of the theorem are fulfilled. Nevertheless, our numerical investigations show that even for M = 16, which is not so large, this approximation is highly accurate. A. Secrecy Performance in Scenario A When the BS knows the eavesdropper's CSI, the instantaneous secrecy rate is achievable in each transmission interval. Assuming that the symbols of a given codeword observe different realizations of the channel, the maximum average rate achieved by the transmitter is determined by the expectation of the instantaneous secrecy rate. This average rate is referred to as the achievable ergodic secrecy rate and is considered as an effective performance metric in this case. Using Theorem 1, the achievable ergodic secrecy rate R Erg (S) for our setup in the large-system limit is approximated as in (15), where ξ := η/σ. Using the inequality Q(x) < φ (x)/x for x > 0 and the fact that Q(−x) + Q(x) = 1, we can bound the ergodic secrecy rate as in (16) for ξ > 0. By numerical investigations, it is seen that the lower bound is tight when ξ is large enough. Fig. 1 illustrates the accuracy of the approximations, as well as the tightness of the bound. The figure has been plotted for L = 8 and M = 16 transmit antennas, which is practically small. The SNR at the eavesdropping terminal is considered to be log ρ e = −5 dB and the receiving terminals have been assumed to have N r = N e = 2 antennas. As the figure shows, the approximation is consistent with the simulations within a large range of SNRs. The lower bound in (16) moreover perfectly matches R Erg (S) except for the interval of ρ m in which η is close to zero. This observation is due to the fact that the variance in the large-system limit tends to zero rapidly, and thus, ξ = η/σ grows significantly large even for finite values of η. Consequently, one can write Q(−ξ) ≈ 1 − φ(ξ)/ξ and approximate the achievable ergodic rate with η accurately. Although the approximation in Theorem 1 is given for the large-system limit and the asymmetrically asymptotic regime of eavesdropping, one observes that the result is accurately consistent with the simulations even for not so large dimensions and β e = 1/8. B. Secrecy Performance in Scenario B In Scenario B, the eavesdropper's CSI is not known at the BS. This means that for a given realization of the channels, the instantaneous secrecy rate in (4) cannot be achieved. This is due to the fact that the transmitter achieves the secrecy rate in (4) by constructing its codewords based on the leakage rate achievable over the eavesdropper channel. For this scenario, the ǫ-outage secrecy rate is known to be the proper metric quantifying the secrecy performance. Considering a given rate R o , the secrecy outage probability is defined in (17) as P Out (R o ) := Pr {R s (S) < R o }. Consequently, the ǫ-outage achievable secrecy rate R Out (ǫ) is defined as the maximum possible rate for which P Out (R o ) ≤ ǫ. The intuition behind defining the ǫ-outage secrecy rate as the performance metric can be stated as follows: Since the BS does not know the CSI of the eavesdropper channel, it assumes that the achievable secrecy rate is at least R o in all transmission intervals.
Noting that the CSI of the main channel is known at the BS, this setting of the secrecy rate implicitly imposes the assumption on the quality of the eavesdropper channel that R e < R m − R o , in which the term R m − R o is known by the transmitter. Consequently, the secrecy outage probability in (17) determines the probability of the eavesdropper having a better channel quality than the assumed term R m − R o , or equivalently, the fraction of intervals in which the eavesdropper can decode transmit codewords at least partially. As a result, R Out (ǫ) determines the maximum achievable secrecy rate for which one can guarantee that the fraction of transmission intervals with information being leaked to the eavesdropper is less than ǫ. From Theorem 1, the outage probability is approximated as in (18). Consequently, the ǫ-outage secrecy rate is given by (19), with Q −1 (·) being the inverse of the Q-function with respect to composition. Moreover, the probability of non-zero secrecy rate P NZS , defined as P NZS := Pr {R s (S) > 0}, in the large-system limit is approximated as P NZS ≈ 1 − Q (η/σ). Fig. 2 shows the secrecy outage probability as a function of ρ m for R o = 5 considering various values of N r and L. Here, N e = 8 and log ρ e = −10 dB, and the BS is considered to be equipped with M = 128 antennas. As can be seen, the large-system approximation consistently tracks the numerical result for a large range of SNRs. Although Theorem 1 approximates the distribution of the instantaneous secrecy rate in the asymmetrically asymptotic regime of eavesdropping, one can see that the results closely match the simulations even for β e = 1. IV. SECRECY ENHANCEMENT VIA TAS In this section, we investigate the impacts of TAS on the secrecy performance in both Scenarios A and B. Let us start with Scenario A. As was discussed, the secrecy performance in this case is characterized by the ergodic secrecy rate whose large-system approximation is given in Section III-A. Considering the ergodic secrecy rate R Erg (S) as a function of L, one observes that for different choices of ρ e , ρ m , N r and N e , the ergodic secrecy rate may strictly increase with L within the interval {1, . . . , M } or have a maximum at some integer L ⋆ < M . This observation suggests that for the considered setting the secrecy performance can be enhanced in some cases via TAS. Fig. 3 illustrates this point. In this figure, the ergodic secrecy rate is plotted as a function of L, for several realizations of the setting with M = 128, considering both the large-system approximation and numerical simulations. The SNRs at the legitimate receiver and eavesdropper are set to log ρ m = 0 dB and log ρ e = −10 dB, respectively. As the figure shows, the ergodic secrecy rate in some curves meets its maximum at values of L which are significantly smaller than M . This observation shows that TAS in these scenarios not only benefits in terms of RF-cost and complexity, but also enhances the secrecy performance of the system. The intuition behind this behavior comes from the fact that the growth in the number of selected antennas improves the quality of both channels. For some cases, including those shown in Fig. 3, the improvement from the eavesdropper's point of view dominates the overall growth in the secrecy rate, if a certain number of active antennas is surpassed. This means that by setting L to be more than this given number, the quality improvement at the eavesdropping terminal starts to exceed the enhancement at the legitimate receiver.
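This interior-maximum behavior is easy to reproduce numerically. The sketch below is an illustrative Monte Carlo reconstruction rather than the authors' code: it assumes i.i.d. unit-variance Rayleigh entries, uniform per-antenna power (so the rates take a log-det form, in the spirit of (8a) and (8b)), and the norm-based selection of Section II; M is reduced from 128 to 64 purely to keep runtime small, with the SNRs of Fig. 3.

```python
# Illustrative Monte Carlo estimate of the ergodic secrecy rate vs. the
# number of selected antennas L under norm-based TAS (synthetic sketch).
import numpy as np

rng = np.random.default_rng(0)

def ergodic_secrecy_rate(L, M=64, Nr=2, Ne=2, rho_m=1.0, rho_e=0.1, trials=500):
    rates = []
    for _ in range(trials):
        # i.i.d. CN(0, 1) main and eavesdropper channels.
        Hm = (rng.standard_normal((Nr, M)) + 1j * rng.standard_normal((Nr, M))) / np.sqrt(2)
        He = (rng.standard_normal((Ne, M)) + 1j * rng.standard_normal((Ne, M))) / np.sqrt(2)
        # Select the L transmit antennas (columns) strongest on the main channel.
        idx = np.argsort(-np.linalg.norm(Hm, axis=0))[:L]
        Hm_s, He_s = Hm[:, idx], He[:, idx]
        Rm = np.log2(np.linalg.det(np.eye(Nr) + rho_m * Hm_s @ Hm_s.conj().T).real)
        Re = np.log2(np.linalg.det(np.eye(Ne) + rho_e * He_s @ He_s.conj().T).real)
        rates.append(max(Rm - Re, 0.0))  # instantaneous secrecy rate, cf. (9)
    return float(np.mean(rates))

curve = {L: ergodic_secrecy_rate(L) for L in [1, 2, 4, 8, 16, 24, 32, 48, 64]}
L_star = max(curve, key=curve.get)
print("estimated ergodic secrecy rate peaks at L =", L_star)
```

For parameter choices like these, the estimated curve typically rises, peaks well below L = M and then decays, mirroring the qualitative behavior just described.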
Considering Scenario B, a similar behavior can be observed in terms of the ǫ-outage secrecy rate R Out (ǫ). In Fig. 4, the ǫ-outage secrecy rate for ǫ = 0.01 has been plotted in terms of L for several examples considering M = 128, log ρ m = 0 dB and log ρ e = −10 dB. A. Characterization of Secrecy Enhancement Based on the latter observations, one may intuitively state that TAS plays a constructive role in the secrecy performance when the eavesdropping terminal starts to experience prevailing improvements in its channel quality, as L grows, at some L < M . The characterization of the settings in which this behavior is observed is however not trivial, since the performance metrics in general depend on several parameters. In the sequel, we invoke our large-system results to characterize these settings. For this aim, we first define the "prevalence set" for the legitimate receiver and the eavesdropper. The prevalence set for the eavesdropper 𝕊 E is then defined as the set of all tuples (ρ m , ρ e , N r , N e ) for which the eavesdropper is relatively prevailing. Definition 2 partitions the realizations of the setting into two sets. In the former set, represented by 𝕊 R , the growth in the number of active antennas always improves the communication quality over the main channel more than over the eavesdropper channel. The latter set, denoted by 𝕊 E , moreover encloses the settings in which the improvement at the eavesdropper channel starts to prevail when L exceeds some value less than M . Consequently, the secrecy performance in this case is enhanced by employing the protocol S. B. Sufficient Conditions for Prevalence Using the large-system approximation, one can determine 𝕊 R and 𝕊 E for large M analytically. The result is however of a complicated form in general. Alternatively, one may derive a set of sufficient conditions under which the prevalence of the legitimate or eavesdropping terminal is guaranteed. Theorem 2 gives a set of sufficient conditions for the legitimate receiver to be relatively prevailing, where the function F (ℓ) reads with u and η t being defined in Theorem 1, U m = min {ℓ, N r }, V m = max {ℓ, N r } and Λ(ℓ) given in (23). Moreover, f R (ℓ|ρ m , N r ) and f E (ℓ|ρ e , N e ) are given by (25a) and (25b) on the top of the next page, with E(ℓ) defined therein. Then, the legitimate receiver is relatively prevailing in both Scenarios A and B if f (ℓ|T ) > 0 for all real ℓ ∈ [1, M ]. Proof: The proof follows by bounding the first derivatives of the large-system approximations for the ergodic and ǫ-outage secrecy rates by a similar term, and is given in Appendix B. Theorem 2 intuitively indicates that the legitimate receiver is prevailing when the growth in the achievable rate over the main channel by increasing the number of active antennas always dominates the growth over the eavesdropper channel. In fact, the first two terms on the right-hand side of (21) bound the rate growth over the main channel, while f E (ℓ|ρ e , N e ) describes the improvement in the quality of the eavesdropper channel in the large-system limit. Using Theorem 2, one can discuss whether secrecy enhancement is achievable in the setting via TAS or not. One should note that this theorem states only a sufficient condition. This means that there exist tuples which do not fulfill the conditions given in Theorem 2 and still are optimal under full complexity in the sense of secrecy performance. For these cases, one may further study necessary conditions. In the following, we study some examples.
From Theorem 2, one can show that for the setting in (a) the sufficient conditions are satisfied, and thus, the legitimate receiver is relatively prevailing in both Scenarios A and B. This result agrees with the intuition that the legitimate receiver is prevailing, since both the number of receive antennas and the SNR are relatively better at this terminal. For (b), we invoke Theorem 2 and derive a set of conditions under which the legitimate receiver becomes relatively prevailing. Since M ↑ ∞, one can show that for this case Λ(ℓ) ≈ ψ/2 and E(ℓ) ≈ 1. Moreover, the function F (ℓ) in the large-system limit can be approximated in closed form with u and f Nr+1 (u) given in Theorem 1. Substituting in (21), the constraints in (27) at the top of the next page are derived. When M ↑ ∞, one concludes that N r ≥ 1 + √ 2M is sufficient for the prevalence of the legitimate receiver. Note that this constraint does not depend on the SNRs. For instance, considering M = 128, a legitimate terminal with N r = 17 antennas is relatively prevailing for any choice of ρ m and ρ e . Fig. 5 shows the achievable ergodic rate for M = 128, N e = 1 and N r = 17 considering several choices of ρ e and ρ m . As the figure depicts, the optimal choice of L in all the cases is L ⋆ = M , which agrees with the analytic result. V. OPTIMAL NUMBER OF ACTIVE ANTENNAS When the eavesdropper is relatively prevailing, the secrecy performance metric is maximized by choosing the number of active antennas optimally. We investigate this problem through some examples considering both Scenarios A and B. A. Scenario A The large-system approximation of R Erg (S) in (15) is a function of L whose maximum occurs at some L ⋆ ∈ [1 : M ] when the eavesdropper is relatively prevailing. We derive this maximum analytically for some examples in the sequel. Example 2 (Single-antenna receivers): Consider the scenario in which the receiving terminals are equipped with a single antenna, i.e., N r = N e = 1. Assume that the eavesdropper's CSI is available at the transmitter. We intend to derive the optimal number of active antennas L ⋆ which maximizes R Erg (S). To do so, we initially assume that with L ⋆ active transmit antennas the setting performs in the asymmetrically asymptotic regime of eavesdropping, i.e., L ⋆ ≫ 1. We later show that this prior assumption is true. By substituting into (14a) and (14b), η and σ 2 are determined. Under the assumption L ⋆ ≫ 1, the achievable ergodic secrecy rate for M ↑ ∞ is further approximated as R Erg (S) ≈ η. To find L ⋆ , we define the function R(·) for real ℓ as the real envelope of the ergodic secrecy rate, whose values at integer points give the ergodic secrecy rate for the given number of active antennas. In this case, one can straightforwardly show that for any choice of ρ m and ρ e ≠ 0, there exists some real ℓ ∈ [1, M ] for which R ′ (ℓ) < 0. This fact indicates that with N r = N e = 1, the eavesdropper is relatively prevailing as long as ρ e ≠ 0, and thus, L ⋆ < M . To find L ⋆ , one notes that R ′′ (ℓ) ≤ 0 for ℓ ∈ [0, M ], and thus, L ⋆ is the closest integer to the maximizer of R(·). Consequently, the optimal number of active transmit antennas is approximated as in (30). From (30), one observes that L ⋆ grows with M , and therefore, the eavesdropping regime is asymmetrically asymptotic, i.e., the initial assumption L ⋆ ≫ 1 holds. Moreover, by reducing ρ e ↓ 0 in (30), L ⋆ = M , which agrees with the fact that in the absence of eavesdroppers, the achievable ergodic secrecy rate is a monotonically increasing function of L.
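As a quick numerical cross-check of Example 2, one can grid-search the large-system mean over real ℓ. Since equation (30) is not reproduced here, the closed form below extrapolates the structure of (35a) in Example 4 to unequal SNRs, taking ℓ(1 + ln(M/ℓ)) as the selection gain on the main channel and ℓ as the random-selection gain at the eavesdropper; this form is an assumption for illustration only, not the paper's equation (30).

```python
# Grid search for the optimal number of active antennas, Nr = Ne = 1.
# eta(l) is an assumed large-system mean (see lead-in), not eq. (30).
import numpy as np

def eta(l, M=128, rho_m=1.0, rho_e=10**(-2.5)):  # 0 dB and -25 dB
    gain_main = l * (1.0 + np.log(M / l))  # norm-based selection gain
    gain_eav = l                           # random selection at the eavesdropper
    return np.log2(1.0 + rho_m * gain_main) - np.log2(1.0 + rho_e * gain_eav)

l_grid = np.linspace(1.0, 128.0, 100000)
l_star = l_grid[np.argmax(eta(l_grid))]
print("L* ~", round(float(l_star)))
```

Under this assumed form, the search returns an interior maximizer for any ρ e > 0 and recovers L ⋆ = M as ρ e ↓ 0, consistent with the closing remarks of Example 2.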
Example 3 (Multi-antenna eavesdropper): Consider a scenario with a single-antenna legitimate receiver whose channel is being overheard by a sophisticated multi-antenna terminal, i.e., N r = 1 and N e growing large. Assume that the BS knows the CSI of the eavesdropper. From Theorem 1, η and σ 2 follow directly. In contrast to Example 2, R Erg (S) in this example cannot be approximated by η, since ξ = η/σ is not necessarily large. Consequently, we employ (15) to accurately approximate the achievable ergodic rate. Define R(·) over the real axis in terms of f (ℓ) and s(ℓ). With similar lines of inference as in Example 2, one concludes that for any non-zero choices of ρ e and ρ m the eavesdropper is relatively prevailing. This result is intuitive, since the eavesdropper is more sophisticated compared to the one considered in Example 2. Consequently, L ⋆ ≈ ⌊ℓ ⋆ ⌉ where ℓ ⋆ satisfies (34) with h(ℓ) = f (ℓ)/s(ℓ). In Fig. 7, R Erg (S) is sketched versus L for N e = 16 assuming log ρ e = −25 dB, log ρ m = 0 dB and M = 128. From (34), the maximizer of the function R(ℓ) is derived as ℓ ⋆ = 13.7, which recovers L ⋆ = 14 given by simulations. B. Scenario B Considering Scenario B, a similar approach can be taken to derive the optimal number of active antennas. We investigate this case through the following example. Example 4 (Passive eavesdropping): Similar to Example 2, consider a case with N r = N e = 1. Let ρ e = ρ m = ρ, and assume that the eavesdropper's CSI is not available at the BS. The performance metric is the ǫ-outage secrecy rate, which in the large-system limit is approximated by (19) with η = log [ (1 + ρL (1 + ln (M/L))) / (1 + ρL) ] (35a) and σ given accordingly. In order to investigate the prevalence, we define the real envelope R(·) accordingly, where q 0 = Q −1 (1 − ǫ). By standard lines of derivation, one can then show that for any choice of ρ, R ′ (M ) < 0. This indicates that for all SNRs the eavesdropper is relatively prevailing. Moreover, when (38) holds, with a ǫ := −3q 0 ψ/ √ 2, the outage secrecy rate is a decreasing function of L, and therefore, L ⋆ = 1. Nevertheless, when (38) does not hold, the optimal number of active transmit antennas is given by the closest integer to the maximizer ℓ ⋆ of R(·). In Fig. 8, the ǫ-outage secrecy rate at ǫ = 0.1 for M = 128 has been plotted versus L for log ρ = −15 dB and log ρ = 15 dB. As the figure depicts, for the latter case, in which the inequality in (38) is satisfied, R Out (ǫ) is a decreasing function of L. For the case of log ρ = −15 dB, the simulations indicate that L ⋆ = 23. The analytic investigation moreover reports ℓ ⋆ = 22.97, which is consistent with the simulation results.
It has been shown in [28, Lemma 2] that the distribution of the input-output mutual information of a Gaussian MIMO channel, under some constraints, is accurately approximated in terms of the random variables Tr{J} and Tr{J 2 }, where J := H H H. Under the TAS protocol S, Tr{J} represents the sum of L first-order statistics, which in the large limit of M converges in distribution to a Gaussian random variable whose mean and variance are given by (10a) and (10b), respectively. Using some properties of random matrices, the large-system distribution of R m is then approximated as in [28, Theorem 1] with a Gaussian distribution whose mean and variance are given in terms of η t and σ 2 t . The next step is to evaluate the distribution of R e . Noting that the main and the eavesdropper channel are independent, it is concluded that the TAS protocol S performs as a random selection protocol from the eavesdropper's point of view. By considering the asymmetrically asymptotic regime of eavesdropping, one can invoke the asymptotic results for i.i.d. Gaussian fading channels in [45], and approximate the large-system distribution of R e with a Gaussian distribution whose mean and variance respectively read η e = U e log (1 + ρ e V e ) (39a) Since the main and the eavesdropper channel are independent, R ⋆ = R m − R e is the sum of two independent Gaussian random variables in the large-system limit; hence, it is Gaussian with mean and variance given in (14a) and (14b). APPENDIX B DERIVATION OF THEOREM 2 In the large-system limit, the achievable ergodic and ǫ-outage secrecy rates are accurately approximated by (15) and (19), respectively. To derive a sufficient condition, we first consider Scenario A. We define the function M A (·) on the real axis, where f (ℓ) and s(ℓ) are determined by replacing L with ℓ in the asymptotic terms given for η and σ in Theorem 1, respectively. M A (ℓ) is the real envelope of the achievable ergodic rate whose values at integer points within the interval [1, M ] recover it. We then investigate a sufficient condition for which M ′ P (ℓ) > 0. Proof: Let f ′ (ℓ) > 0. In this case, for ℓ < min {N e , N r }, one can simply show that N e ρ e < N r ρ m u N r + ρ m η t (1 + ρ e ℓ) (42) where u and η t are defined in Theorem 1. As we have assumed an asymmetrically asymptotic regime of eavesdropping, the number of eavesdropper antennas reads N e ≫ ℓ when ℓ < min {N e , N r }. This means that the inequality in (42) holds only when u takes values close to zero. By taking the first derivative of s(ℓ), it is then shown that for values of u close to zero, s ′ (ℓ) ≈ 0 for ℓ < min {N e , N r }, which concludes (a).
Street as Sustainable City Structural Element

Sustainability in architecture is nowadays of particular significance in the course of globalization and information density. The spontaneous development of the technosphere poses a threat to the sustainability of traditional urban forms, in which the street is one of the essential forming elements of the urban structure. The article proposes to consider formal compositional street features in relation to one of the traditional streets in the historic center of Ekaterinburg. The study examines the street-planning structure, the development of its skeleton elements, silhouette and fabric elevation characteristics, as well as the scale characteristics and visual complexity of objects. The study establishes the architectural and artistic aspects of street sustainability and the limits of the appropriate scale and composition consistency under which the compatibility of alternative compositional forms existing at different times is possible.

Introduction

Sustainability is an integrative concept, with all its aspects being complementary. The issue of sustainability in architecture is especially important nowadays. The living conditions in mega-cities pose a threat to values traditional for architecture. Conventional urban architectural elements like a street, a square or a neighbourhood are losing their integrity and role in the new environment.

Relevance of the research of compositional, artistic and symbolic sustainability aspects

Most studies treat sustainability as the ecological balance (environmental friendliness, efficiency, comfort) associated with the technical specifications of architecture, while missing its artistic and symbolic dimensions, which are as important for people as clean air, vegetation, warmth and water. Consideration of the sustainability of green architecture or ecology is generally limited to economic indicators, rankings, or better-selected building maintenance and technologies aimed at reducing resource consumption and maintaining life-support systems [1]. Such aspects of sustainability are under active investigation; new techniques to solve particular environmental or technical problems are being developed [2]. Nevertheless, the issues associated with transforming, deforming or demolishing classical urban forms fall outside the attention of researchers. The article considers the sustainability of streets as one of the key elements of the urban framework [3]. It analyses one of the distinctive historical streets of Ekaterinburg, Pushkin Street.

Sustainability aspects of the street being an urban structural element

An urban layout may be regarded as sustainable when most of its tangible spatial structural elements hold their original compositional context and scale during their life cycle. A street is an essential forming element in any urban system [4]. Several types of street can be identified as follows:

- the traditional type of street (house façades face the streetway; there is no or little space between houses);
- the early twentieth century type of street (end walls face the streetway; the space between houses is big);
- the late twentieth to early twenty-first century type of street (houses are moved aside from the streetway; the space between houses is big; there are trees, bushes and lawns near the streetway).

Consider the limits of the formal compositional sustainability of a traditional street (in our case, Pushkin Street is used for the demonstration of examples):

- Skeleton element sustainability.
The sustainability of skeleton elements is determined by the stability of the width and length of the streetway and its pavements, and by the preservation of buildings and structures of historic interest, which set the artistic formation and scale [5].

- Street fabric element sustainability.

The sustainability of the fabric is determined by the integrity of formal compositional patterns in the process of demolishing old buildings and constructing new ones on Pushkin Street [6].

Skeleton sustainability of Pushkin Street

When founded, Ekaterinburg was an industrial stronghold (figure 2(a)). In plan it resembled the "ideal" Renaissance cities [7]. Tackling the problem of urban defense, Renaissance craftsmen searched for the most efficient form of the city plan. Since an equilateral polygon, a circle or a square has the least perimeter, most such cities got similar outlines [8]. The following factors shaped the regular urban road network:

- the relatively quiet terrain;
- the north-south direction of the river current;
- the well-defined outline of the dam.

In 1723-1800, the mutually perpendicular axes of the river and the dam were used to guide the regular urban road network. Since then, street routing has remained unchanged, while fabric elements have actively undergone changes. By 1829 (figure 2(d)) the scale and composition of the city's historic centre had been shaped, and they have remained in general terms until now. The 1932 plan (figure 2(f)) shows that the planning structure of the historic centre had not changed since 1829. In 1930, the Ekaterininskiy Cathedral on Torgovaya Square was destroyed, thus enabling further changes. According to the analysis of the plans of different periods, the planning structure of Pushkin Street is shown to have been laid down in the XVIII century, at the time of Ekaterinburg's foundation. Such sustainability over time results from the vicinity of the street both to the river, which has maximum stability, and to the intersecting axes of the dam and the river that underlay the urban planning structure.

The sustainability of silhouette and elevation characteristics

Though the urban fabric exists up to 300 years on average, most buildings last much less. Therefore, only single objects more than 100 years old remain in the street structure. Despite reconstructions, demolitions or superstructures, the street fabric built at different times may still possess compositional integrity, provided the limits of the admissible scale and composition consistency remain untouched [10]. Figures 3 and 4 show the silhouette pattern that organizes the buildings and the fabric-forming elements of Pushkin Street. The majority of the buildings forming the street are historical and architectural monuments. Figure 3 shows the analysis of the block within the boundaries of Pushkin, Malyshev, Gorkiy and Lenin Streets. All buildings with their façades facing Pushkin Street are characteristic of their time. The buildings of the hotel "Russia" (2), Zagaynov officiary's house (3), Cherepanov's hotel "Hermitage" (4) and Cherepanov's estate (5), all constructed at the end of the XIX century, are co-scaled and have a comparable visual complexity scale. The building of the clinic of medical specialists (1), the 2nd house of the City Council (6) and the building of the Ural Regional Executive Committee (7) differ greatly in their elevation characteristics from the other buildings on Pushkin Street. The clinic of medical specialists is a building in the Modern Style; the 2nd house of the City Council represents the Constructivist Style.
These buildings differ only moderately in elevation from the nearby building of the hotel "Russia", built at the end of the XIX century: the difference is approximately 1-3 stories. The clinic of medical specialists differs from those around it by 1-2 stories. It is located at the corner of Pushkin and Malyshev Streets, where the upward growth was justified by the overall composition of the buildings at the crossroad [11]. The decrease of the visual complexity scale and the increase of the overall scale in post-revolution architecture led to a dramatic disharmony with the objects of the XVIII-XIX centuries [12]. Nevertheless, the skeleton structure as well as the type of the street remained [13]. The Ekaterininskiy Cathedral on Labour's Square, built in 1823, was demolished in 1930. The square, however, forming the skeleton of the site under study, has remained unchanged to the present day, thus ensuring the skeleton sustainability of Pushkin Street [14]. This could not but affect the look of Pushkin Street: the mansions of the XIX century fell into the shadow of Constructivist buildings superior in scale and with a less manageable visual complexity scale [15]. The elevation characteristics of the block have undergone changes: the difference with the existing historical buildings is three stories. Cherepanov's estate has been lost in the street composition, since the elevation disharmony destroyed the compositional balance.

The number of buildings of different periods is greater on the even side of Pushkin Street than on the odd one. Most of them are ribbon buildings (figure 4). Druzhilov's house (11), the doctor Assa's house (13), Uvarov's profitable house (15) and the manor of "The Association of A. Pechenkin and Co." (16) are co-scaled in elevation and similar in visual complexity scale, being constructed around the same time: at the end of the XIX century and the beginning of the XX century. The building of the city pharmacy (9) was built at the end of the XIX century, but its classical façade elements were replaced by Modern décor at the beginning of the XX century. The visual complexity scale and elevation were unaffected by this reconstruction. The City Council house (8), the house of the Uralplan (10) and the building of the Federation of Trade Unions (12) were built in the period from the end of the 1920s to the beginning of the 1930s. Their scale is different (three stories higher) and their visual complexity scale is smaller in comparison with the buildings of the XIX century. The City Council house and the house of the Uralplan are historical Constructivist monuments of regional importance. The building of the Sverdlovsk Regional Union of Industrialists and Entrepreneurs (14) was built in the middle of the 1990s. It fits with the surrounding XIX century buildings in terms of elevation characteristics but loses in terms of visual complexity scale [16]. The architects were likely to have set the task of adjusting the building to the historically established setting [17]. At the beginning of the XXI century, a bank office was constructed in the yard of the building of the Sverdlovsk Regional Union of Industrialists and Entrepreneurs; it destroyed the visual perception of the monument and now towers over the adjacent buildings, with a difference of 2-5 stories. The elevation changes and the discrepancy of the visual complexity scale of the buildings on Pushkin Street resulted in a number of cases of disharmony.
Over its two-century existence the street has had a so-called dual composition system: the first system comprises the buildings of the XIX century; the second is made up of Constructivist buildings. The surviving traditional type of the street unites these two systems, as the façades stretch along the street and the houses are built close together. The surviving historical type of the street has provided the sustainability of its architecture, and the dual heterochrony of the street has enriched its architectural images [18]. Nowadays almost all buildings with their façades facing Pushkin Street belong to the architectural heritage. The absence of buildings differing in height by more than five stories has a favourable effect on the visual perception of the monuments and on the compositional integrity of the city silhouette [19]. There are few asymmetrical Modern style buildings; the dynamism of their décor does not break the general structure but helps to place emphasis.

Scale characteristics and visual complexity of objects

The façades of the buildings of the beginning of the XXI century do not face Pushkin Street. Located in the yard area, they have a background function. One can see the façade of such a building (figure 4) between two buildings (13, 14), which causes disharmony. Most of these building façades are glazed with minimal articulation. Their large scale and elevation do not fit the street architecture.

Conclusions

The architectural and artistic sustainability of Pushkin Street ensures its skeleton sustainability. The composition principles of the skeleton and the type of Pushkin Street were established when the city was founded and have remained unchanged. The vicinity of the street to the river, whose axis underlay the planning structure of Ekaterinburg, determined the street's location in the city. The silhouette characteristics of the buildings forming the fabric of Pushkin Street do not differ extremely in elevation: the number of stories on both sides of the street ranges from 2 to 5. The absence of buildings differing by more than five stories has a favourable effect on the visual perception of the street as a whole. The dual heterochrony of the composition structure enriches the architectural images of Pushkin Street. The mirror symmetry of the façades built at different times provides additional support for compositional sustainability: each façade is closed on itself, thus strengthening the perception of each building's individuality.
Physiological Peculiarities of Lignin-Modifying Enzyme Production by the White-Rot Basidiomycete Coriolopsis gallica Strain BCC 142

Sixteen white-rot Basidiomycota isolates were screened for production of lignin-modifying enzymes (LME) in glycerol- and mandarin peel-containing media. In the synthetic medium, Cerrena unicolor strains were the only high laccase (Lac) (3.2-9.4 U/mL) and manganese peroxidase (MnP) (0.56-1.64 U/mL) producers, while one isolate, Coriolopsis gallica, was the only lignin peroxidase (LiP) (0.07 U/mL) producer. Addition of mandarin peels to the synthetic medium promoted Lac production either through an increase in fungal biomass (Funalia trogii, Trametes hirsuta, and T. versicolor) or through enhancement of enzyme production (C. unicolor, Merulius tremellosus, Phlebia radiata, Trametes ochracea). Mandarin peels favored enhanced MnP and LiP secretion by the majority of the tested fungi. The ability of C. gallica, C. unicolor, F. trogii, T. ochracea, and T. zonatus to produce LiP activity in the medium containing mandarin peels is reported for the first time. Several factors, such as supplementation of the nutrient medium with a variety of lignocellulosic materials, nitrogen source or surfactant (Tween 80, Triton X-100), significantly influenced the production of LME by the novel strain of C. gallica. Moreover, C. gallica was found to be a promising LME producer with a potential for easy scale-up cultivation in a bioreactor and high enzyme yields (Lac: 9.4 U/mL, MnP: 0.31 U/mL, LiP: 0.45 U/mL).

Besides their fundamental importance in lignin degradation, LME are utilized in various biotechnological processes and applications, ranging from the pulp and paper, food, textile, dye and cosmetics industries to the sustainable production of renewable chemicals, materials and fuels, organic synthesis and bioremediation [3,4]. With an increase in supply and demand for LME, their cost, yield and production efficiency for industrial and environmental applications need to be improved. Thus, several strategies, such as the search for and bioengineering of new LME-producing fungal species, have been pursued.

The following plant raw materials, available in large amounts in Georgia, were tested as growth substrates in order to establish their impact on C. gallica 142 enzyme activity: residue after ethanol production from wheat grains (EPR; contains 43% extractives, 7% cellulose, 34% crude protein), wheat bran (56% extractives, 8% cellulose, 6% lignin, 15% crude protein), sunflower oil cake (SOC; 25% extractives, 6% cellulose, 36% crude protein, 0.11% Cu), mandarin peels (67% extractives, 24% cellulose, 7% crude protein), walnut pericarp (52% extractives, 16% cellulose, 7% lignin, 10% crude protein), and banana peels (47% extractives, 18% cellulose, 10% lignin, 8% crude protein). All plant residues were oven-dried at 50 °C and ground to powder in a laboratory mill KM-1500 (MRC, Holon, Israel) prior to addition to the nutrient medium. The contents of water-soluble extractives were determined gravimetrically after suspension of these materials in water (10%, w/v) and autoclaving at 115 °C for 30 min. Total nitrogen was determined according to the Kjeldahl method with Nessler reagent after pre-boiling of the samples in 0.5% trichloroacetic acid for 15 min to remove the non-protein nitrogen content. True protein content was calculated as the total nitrogen multiplied by 4.38. Cellulose and lignin in the samples were determined by the method of Updegraff [17] and by the gravimetric method with 72% sulfuric acid, respectively.
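For concreteness, the composition figures quoted above follow from two simple calculations: the true-protein conversion (total Kjeldahl nitrogen × 4.38) and the gravimetric extractives determination. The helper below is our own sketch, not the authors' code; the sample masses in the example are hypothetical, chosen only to reproduce the mandarin peel values quoted above.

```python
# Small helpers (our own sketch) for the composition calculations described
# above; not the authors' code. Masses in the example are hypothetical.

def true_protein_percent(total_n_percent, factor=4.38):
    """True protein (%) from Kjeldahl total nitrogen (%), per the text."""
    return total_n_percent * factor

def extractives_percent(dry_mass_g, residue_mass_g):
    """Water-soluble extractives (%) as gravimetric mass loss on extraction."""
    return (dry_mass_g - residue_mass_g) / dry_mass_g * 100.0

# Example: ~1.6% total N corresponds to the ~7% crude protein quoted for
# mandarin peels; 10 g of peels leaving 3.3 g of residue gives 67% extractives.
print(true_protein_percent(1.6))       # -> 7.008
print(extractives_percent(10.0, 3.3))  # -> 67.0
```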
Shake-Flask Cultivation Conditions

Submerged fungal cultivation was conducted in an Innova 44 shaker (New Brunswick Scientific, Edison, NJ, USA) at 150 rpm. The cultivation temperature for P. chrysosporium was 37 °C, while the other fungi were grown at 27 °C. Homogenized mycelium (3 mL) was used to inoculate 50 mL of synthetic medium containing (per L): 10 g glycerol, 2 g peptone, 2 g yeast extract, 0.1 g CaCl2, and 0.005 g FeSO4. To study the effect of lignocellulosic materials on the production of LME, 20 g/L of the above-mentioned plant residues was added to the synthetic medium as an additional growth substrate. To evaluate the effect of aromatic compounds on enzyme production, 0.5 mM xylidine, veratryl alcohol, 2,6-dimethoxyphenol, pyrogallol, vanillic acid or guaiacol, or 0.3 mM trinitrotoluene (TNT) or hydroquinone, was added to the control basal medium containing 40 g/L mandarin peels prior to inoculation. To accelerate enzyme secretion, three known surfactants, Tween 80, polyethylene glycol (PEG) and Triton X-100, were added to the cultures. All chemicals used were of analytical grade and purchased from Sigma-Fluka-Aldrich (St. Louis, MO, USA). The pH of all media was adjusted to 5.0 prior to sterilization, and all submerged culture experiments were carried out for 14-17 days. At predetermined time intervals, 1 mL of culture was sampled and the solids were separated by centrifugation (Eppendorf 5417R, Hamburg, Germany) at 10,000× g for 5 min at 4 °C. The supernatants were analyzed for pH, reducing sugars and enzyme activities. All experiments were performed twice using three replicates at each time point. All results are expressed as the mean ± SD, with p ≤ 0.05 considered statistically significant.

Cultivation in Bioreactor

To scale up C. gallica LiP production, cultivation was performed in a 7 L fermenter LILFUS GX (Incheon, South Korea) with three Rushton impellers. The fermenter was filled with 5 L of the optimized medium (per L): 40 g mandarin peels, 5 g glycerol, 1 g KH2PO4, 2 g peptone, 2 g yeast extract, 0.5 mM pyrogallol, 0.5 g MgSO4, 0.1 g CaCl2, and 3 mL polypropylene glycol 2000; the pH was adjusted to 5.0. After 5 and 8 days of fermentation, 200 mL of distilled water was added to the fermenter to compensate for evaporation. The fermenter, equipped with pH, temperature and pO2 probes, was sterilized (121 °C, 40 min) and inoculated with 500 mL of homogenized mycelium. Fermentation was carried out without baffles at 27 °C and at a constant airflow rate of 1 v/v/min. During the fermentation process, samples were collected daily and analyzed for enzyme activity. After 10 days of fermentation, fungal biomass was separated from the culture liquid by successive filtration and centrifugation at 5400× g for 15 min. The enzyme preparation was isolated from the culture liquid by precipitation with (NH4)2SO4 at 70% saturation, and the precipitate was dissolved in 0.05 M acetate buffer (pH 5.5).

Enzyme Activity Assays

Laccase activity was determined spectrophotometrically (Camspec M501, Cambridge, UK) at 420 nm as the rate of oxidation of 0.25 mM ABTS (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonate)) in 50 mM Na-acetate buffer (pH 3.8) at room temperature [18]. MnP activity was measured at 270 nm by following the formation of the Mn3+-malonate complex [19] and by oxidation of Phenol Red [20] in the presence of 0.1 mM H2O2. LiP activity was determined spectrophotometrically at 310 nm by the rate of oxidation of 2 mM veratryl alcohol in 0.1 M sodium tartrate buffer (pH 3.0) with 0.2 mM hydrogen peroxide [21]. To establish true peroxidase activity, the activities measured in the absence of H2O2 were subtracted from the values obtained in the presence of hydrogen peroxide. One unit (U) of LME activity was defined as the amount of enzyme that oxidized 1 µmol of substrate per minute. Endoglucanase (CMCase, EC 3.2.1.4) activity was determined in accordance with the IUPAC (International Union of Pure and Applied Chemistry) recommendations, using 1% (w/v) carboxymethyl cellulose (sodium salt, low viscosity; Sigma, Schnelldorf, Germany) in 50 mM citrate buffer (pH 5.0) at 50 °C for 10 min [22]. Glucose standard curves were used to calculate the cellulase activity; the release of glucose was measured using the dinitrosalicylic acid reagent method [23]. One unit of CMCase activity was defined as the amount of enzyme releasing 1 µmol of glucose per minute.
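The unit definition above (1 U = 1 µmol of substrate oxidized per minute) translates a linear absorbance slope into a volumetric activity via the Beer-Lambert law. The sketch below is ours, not the authors'; the ABTS extinction coefficient (≈36 mM⁻¹ cm⁻¹ at 420 nm) and the assay/sample volumes are assumed values commonly used for this assay, not taken from the text.

```python
# Our own sketch of the absorbance-to-activity conversion implied by the unit
# definition above (1 U = 1 umol substrate oxidized per minute). The extinction
# coefficient and the assay/sample volumes are assumptions, not from the paper.

def activity_u_per_l(dA_per_min, epsilon_mM_cm=36.0, path_cm=1.0,
                     assay_vol_ml=1.0, sample_vol_ml=0.1):
    """Volumetric activity (U/L) from the linear absorbance slope dA/dt."""
    # Beer-Lambert: concentration change rate in mM/min
    dc_mM_per_min = dA_per_min / (epsilon_mM_cm * path_cm)
    # umol converted per minute in the cuvette (mM * mL = umol)
    umol_per_min = dc_mM_per_min * assay_vol_ml
    # scale from the supernatant aliquot to one litre of culture liquid
    return umol_per_min / sample_vol_ml * 1000.0

# Example: a slope of 0.36 A420/min measured on a 100 uL supernatant aliquot
print(activity_u_per_l(0.36))  # -> 100.0 U/L
```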
Screening of White-Rot Basidiomycetes for Lignin-Modifying Enzyme Production

Sixteen WRB strains were screened for LME production in submerged cultivation experiments, both in synthetic and in lignocellulose-containing media. When cultivated in the glycerol-based medium, all fungal strains grew in the form of small pellets, and an increase in pH (from 5.0 to 5.3-6.5) was observed (Table 1). The measurements of enzyme activity revealed a diversity of LME expression in the growth medium. As expected, no laccase activity was revealed during cultivation of the P. chrysosporium strains, while low laccase activity was detected in the culture liquids of P. radiata and T. ochracea. Among the fungi of the genus Trametes, T. zonatus 540 and T. versicolor 159 secreted 4220 and 2350 U/L of laccase, respectively. The four C. unicolor strains tested produced the highest laccase activities, although the number of enzyme units differed significantly from strain to strain, ranging from 3190 to 9410 U/L. The same C. unicolor strains were capable of producing the highest MnP activities in cultivation in the synthetic medium. The other WRB either showed very low MnP activities or failed to produce this enzyme under the same cultivation conditions. Moreover, with the exception of C. gallica 142, none of the tested WRB expressed LiP in the glycerol-containing medium.

Table 1. Screening of WRB for LME production in the synthetic glycerol-containing medium. Fungi were grown in shake flasks for 14 days at 27 °C (P. chrysosporium at 37 °C); the nutrient medium contained 10 g/L glycerol as a carbon source and 0.3 g/L veratryl alcohol as an enzyme synthesis inducer.

Subsequently, selected WRB were screened for enzyme production in the same glycerol-containing medium supplemented with mandarin peels, which promote LME production by the majority of the fungi studied by our group [5,9,24]. The cultivation process was accompanied by abundant fungal growth and changes in pH, with final values ranging from 4.8 to 6.7 (Table 2). When laccase production was compared between strains cultivated with glycerol as the sole carbon source and strains grown in the mandarin peels-based medium, the latter proved to promote higher laccase secretion. For example, C. unicolor 303 and the other tested strains of this species produced the highest laccase activities, ranging from 16,620 U/L to 38,290 U/L. High laccase activities were also determined in cultivation of T. zonatus 540, T. ochracea 1009, C. gallica 142 and P. radiata 64658; the laccase activities for these species showed a 2- to 22-fold increase in cultivation with mandarin peels.
Table 2. Screening of WRB for LME production in the glycerol + mandarin peels-containing medium. Fungi were grown in shake flasks for 14 days at 27 °C (P. chrysosporium at 37 °C); the nutrient medium contained 10 g/L glycerol and 20 g/L mandarin peels as carbon sources and 0.3 g/L veratryl alcohol as an enzyme synthesis inducer.

Production of MnP by the fungal cultures was measured using two substrates: manganese(II) ions and phenol red. The enzyme activity data listed in Tables 1 and 2 show no correlation between the activity values obtained with the two assays widely employed in MnP studies. It is possible that the isoforms of MnP produced by the tested fungi had different affinities for the substrates used in the assays. Based on Mn2+ oxidation, the highest MnP activity was detected on day 7 for the C. unicolor 301 submerged culture (2760 U/L, 3-fold higher than that in the synthetic medium). The other WRB species secreted only low or trace MnP activities. Moreover, P. chrysosporium also produced a manganese-oxidizing enzyme, but no MnP activity was detected by the phenol red oxidation assay. Minor amounts of LiP were detected in all tested fungal cultures, with the exception of M. tremellosus 206, when grown in the medium with mandarin peels. Among the fungal strains tested, C. gallica 142, C. unicolor 300 and P. chrysosporium 1309 produced the highest LiP activities: 210 U/L, 160 U/L and 150 U/L, respectively (Table 2). On the other hand, two P. chrysosporium ATCC strains, well known for LiP secretion, showed only trace amounts (<0.02 U/mL) of this enzyme activity.

Effect of Lignocellulosic Growth Substrates on LME Production

Determination of optimal cultivation conditions for a variety of industrially important fungal species is of high practical value. Therefore, the subsequent experiments focused on the optimization of cultivation conditions for maximum LiP production by the new strain C. gallica 142, recently isolated from a forest close to Tetritskaro, Georgia. A common approach in the development of fermentation technologies is the selection of appropriate plant raw materials containing significant concentrations of soluble carbohydrates and inducers for abundant growth of fungi and efficient production of LME. In this study, several plant raw materials were tested as growth substrates in order to assess the capability of C. gallica 142 to produce LME (Table 3). These residues are of great interest as growth substrates for microbial fermentation since they are rich in readily available carbohydrates, nitrogen and microelements. Moreover, walnut pericarp and mandarin peels are especially rich in a wide spectrum of aromatic compounds [25]. Lignified wheat straw (36% extractives, 36% cellulose, 18% lignin, 4% crude protein) was also tested for comparison. The first batch of experiments focused on exploring the correlation between LME production and the type of lignocellulosic growth substrate added. Submerged cultivation of C. gallica 142 in media containing different lignocellulosic materials revealed the highest laccase activities when the fungus was grown with SOC (27,280 U/L) or wheat bran (19,720 U/L) (Table 3). Unexpectedly, wheat straw, the most recalcitrant growth substrate, promoted comparatively high laccase secretion. In contrast, walnut pericarp and mandarin peels were rather poor inducers of laccase, with only 6490 and 4680 U/L, respectively, produced.
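The fold-change comparisons quoted above (e.g., the 3-fold MnP increase and the 2- to 22-fold laccase increases) are simple activity ratios against the synthetic-medium reference. The sketch below (ours, not the authors') makes the arithmetic explicit; the synthetic-medium MnP value is back-computed from the quoted "3-fold" statement, while the LiP pair is taken from the text.

```python
# Our own sketch of the fold-induction arithmetic used in this section:
# activity with mandarin peels relative to the synthetic-medium reference.
# Pairs are (synthetic, with peels) in U/L, taken or back-computed from the
# values quoted in the text.
pairs_u_per_l = {
    "C. gallica 142 LiP":  (70.0, 210.0),    # 0.07 U/mL -> 210 U/L with peels
    "C. unicolor 301 MnP": (920.0, 2760.0),  # quoted as "3-fold higher"
}

for name, (synthetic, with_peels) in pairs_u_per_l.items():
    print(f"{name}: {with_peels / synthetic:.1f}-fold induction")
```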
SOC appeared to be the least favorable growth substrate for MnP secretion, while fermentation in the presence of banana peels and wheat straw provided the highest MnP activity of 240 U/L after four and seven days of cultivation, respectively. Among the seven lignocellulosic materials tested, mandarin peels appeared to be the best growth substrate for LiP accumulation; this enzyme activity exceeded that obtained with the other substrates 2- to 5-fold. It is worth noting that fungal growth in all media was visually equal and significant levels of cellulase were produced to supply the culture with a carbon and energy source (Table 3).

Table 3. The effect of lignocellulosic growth substrates on C. gallica LME and endoglucanase activity. The fungus was cultivated in the nutrient medium containing 10 g/L glycerol and 20 g/L lignocellulosic materials as carbon sources; no inducer was added.

To determine the optimal concentration of mandarin peels for LiP production, four concentrations were tested in the growth medium. With a gradual increase of the mandarin peels content (from 10 to 40 g/L), the level of C. gallica LiP activity increased up to 6-fold, from 600 to 3610 U/L (Figure 1). Similarly, the laccase activity almost tripled, from 2410 U/L at 10 g/L of mandarin peels to 6570 U/L at 40 g/L. C. gallica 142 MnP activity increased from 720 U/L to 2470 U/L when the mandarin peel concentration was raised from 10 to 20 g/L; however, a further increase in the concentration of the growth substrate caused a statistically insignificant decrease in MnP activity. The time course of LME production also differed significantly depending on the mandarin peels concentration (Figure 2). For example, laccase production during the first five days of C. gallica cultivation (Figure 2A) was significantly enhanced (19-fold) with a parallel increase in mandarin peels concentration. Moreover, at the lower substrate concentrations laccase activity increased gradually, while two laccase activity peaks were present when 30 g/L and 40 g/L of mandarin peels were used. LiP activity increased gradually in all media, peaking on the eighth day of cultivation with 10 g/L of mandarin peels and on the fourteenth day with 30 and 40 g/L of substrate (Figure 2B).

Effect of Nitrogen Source and Surfactant Concentration on LME Production

To establish optimum growth conditions for the C. gallica 142 culture, peptone concentrations were investigated in relation to LME production.
The results show that laccase production increased from 4570 U/L to 6110 U/L when the peptone concentration was raised from 0 to 4 g/L (Figure 3), suggesting a direct correlation with higher biomass production. However, when MnP activity was analyzed, a 3-fold increase in the enzyme activity at a peptone concentration of 2 g/L was observed in comparison with the control medium, suggesting possible peptone-based induction; a further increase in nitrogen concentration (>2 g/L) inhibited MnP secretion. A similar correlation was observed for LiP production: when the peptone concentration increased from 0 to 2 g/L, the maximum enzyme activity changed from 220 to 350 U/L. In the subsequent experiments, 2 g/L of peptone was used as the nitrogen source.

LME secretion may be influenced by the presence and concentration of a variety of surfactants. Addition of Tween 80 to the C. gallica 142 culture did not significantly affect laccase, LiP or endoglucanase secretion (Table 4), but almost doubled the MnP activity when added at a concentration of 4 g/L. While PEG did not affect C. gallica enzyme secretion at any concentration tested, even low concentrations of Triton X-100 caused a 5-fold decrease in MnP activity and inhibited cellulase production. It is worth noting that, during the first five days of cultivation, Triton X-100 completely inhibited MnP and LiP production and caused an 8-fold decrease in laccase production. However, after the initial halt in LME production, increased enzyme synthesis was observed.

Table 4. The effect of addition of surfactants on LME and CMCase production by C. gallica 142 in submerged fermentation of mandarin peels.

Effect of Aromatic Compounds

Supplementation of the nutrient medium with aromatic/phenolic compounds is one of the most effective approaches to increase the production of LME by WRB. Several well-known modulators of LME synthesis were tested in this study. Among the tested compounds, pyrogallol and veratryl alcohol caused a two-fold increase in C. gallica 142 laccase activity, while supplementation of the medium with ferulic or vanillic acids showed only a slight increase in its secretion (Table 5).
None of the tested compounds affected MnP expression, and only the presence of pyrogallol produced a 30% increase in LiP activity. Thus, 0.5 mM pyrogallol was used in the C. gallica cultivation in the fermenter.

Table 5. The effect of addition of aromatic compounds on LME production by C. gallica 142 in submerged fermentation with mandarin peels.

Enzyme Production in a Laboratory Bioreactor

Evaluation of LiP production and scale-up of the C. gallica 142 culture were performed in a fermenter with the optimized mandarin peels-based medium. During the first five days, C. gallica 142 was cultivated at pH 5 to provide optimal conditions for polysaccharide hydrolysis and a steady supply of carbon source for fungal growth.
During the next five days, the medium pH was controlled at 5.7 to slow polysaccharide hydrolysis and limit the carbon source, as well as to prevent possible enzyme inactivation driven by higher pH conditions. Simultaneously, the agitation speed was decreased to 200 rpm to diminish the shear force effect on fungal hyphae and enzyme production. As shown in Figure 4, laccase activity was detected after the first day of fermentation, with a gradual increase throughout the cultivation time, reaching the maximum on day eight (9430 U/L). A low amount of MnP was released during the second day of fermentation, with the maximum activity detected on day seven (310 U/L), followed by a sharp decrease through the remainder of the cultivation time. LiP activity was detected during the fourth day of fermentation, reaching its maximum (450 U/L) after nine days of cultivation. The final isolated and concentrated (100 mL) enzyme preparation from C. gallica 142 contained 263 U/mL laccase, 2 U/mL MnP and 16 U/mL LiP.

Discussion

The ligninolytic extracellular enzyme system of WRB is well known to degrade hazardous chemicals and is utilized in a variety of industrial applications [4]. The large-scale production of these enzymes is therefore of great importance. Obviously, there is large inherent variability within fungal species with respect to the carbon and nitrogen requirements for efficient production of LME [5,8,10,14]. Thus, two substrates, glycerol and mandarin peels, which are known to promote fungal growth and accelerate enzyme secretion [5,9], were added to the basal medium and tested with the sixteen white-rot fungal strains. To enhance LiP synthesis, the basal medium was supplemented with 0.3 mM veratryl alcohol [2,3,15]. Fungal growth in the tested media resulted in the production of an array of LME concentrations across fungal species (Tables 1 and 2). The data received clearly indicate that the tested fungal strains display wide intra- and interspecies diversity in their ability to produce LME in the synthetic medium. Four C. unicolor strains, T. zonatus 540 and T. versicolor 159 secreted high laccase activity (2350 U/L to 9410 U/L) in the synthetic glycerol-containing medium (Table 1). In other studies, C. unicolor C-137 also accumulated 4000 U/L of laccase activity in a synthetic medium with glucose [26]. In contrast, only trace amounts of this enzyme were detected in cultivation of P. radiata and T. ochracea, as for Ganoderma spp., Pleurotus tuber-regium [5,14] and Pycnoporus coccineus [4]. Moreover, only C. unicolor expressed comparatively high MnP activity in the synthetic medium, which is in agreement with the observations of Hibi et al. [27], who showed that a Cerrena sp. was capable of secreting laccase and three peroxidases in submerged cultivation in a glucose-containing medium. In contrast, Michniewicz et al. [26] revealed that C. unicolor C-137 did not secrete peroxidase activity in either the glucose-containing synthetic medium or the complex tomato juice-based medium. The present study highlights the role of the lignocellulosic growth substrate in LME activity expression. Supplementation of the glycerol-containing medium with mandarin peels caused an overall increase in laccase activity for all screened WRB (Table 2). It is worth noting that, in several cultures, such as F. trogii 146, T. hirsuta 119, and T. versicolor 159, the elevated laccase activity may be explained by an increase in fungal biomass, similarly to other studies [8,14].
However, for cultures such as C. unicolor 301, M. tremellosus 206, P. radiata 64658 and T. ochracea 1009, the increase in laccase activity occurred through induction in the presence of mandarin peels. Moreover, mandarin peels stimulated MnP secretion by individual WRB and provided induction of this enzyme's synthesis by M. tremellosus 206 and several other fungi. The enhanced/induced MnP production during cultivation on this substrate may be attributed to the pool of water-soluble aromatic compounds, as well as flavonoids, present in the mandarin peels [25] and released into the nutrient medium during sterilization and substrate degradation. These observations suggest that the presence of a lignocellulosic substrate in the nutrient medium is a prerequisite for MnP production. The results of the present study are in agreement with earlier findings, which showed a lack of MnP activity during cultivation of fungi in synthetic media but significant MnP expression in media with plant materials [13,14]. This study indicates that LiP production was induced in cultures of C. unicolor, F. trogii, T. ochracea, and T. zonatus by supplementing the medium with mandarin peels. To date, this is the first evidence of LiP production by these four fungal species. On the contrary, two P. chrysosporium strains, well known for LiP secretion, showed only traces (<0.02 U/mL) of LiP activity in cultivation with mandarin peels. This observation may be explained either by an absence of chemical compounds necessary for enzyme expression in the simple medium used for cultivation or by a lack of the secondary metabolism phase induced by nitrogen and carbon starvation conditions [2,6,7,16]. Interestingly, in comprehensive research, Kinnunen et al. [26] screened 53 species of basidiomycetes for lignin-modifying enzymes cultivated in liquid mineral, soy and peptone media and on solid-state oat husk medium, and specified that relatively high LiP activities were obtained in mineral medium under low-carbon (5 g/L glucose) and low-nitrogen (2 mM) conditions. In our study, the basal medium supplemented with mandarin peels is characterized by high carbon and nitrogen content, which may inhibit or delay the secondary metabolism that triggers LiP synthesis. Therefore, the presented data show that, unlike P. chrysosporium and several other WRB, C. unicolor strains and C. gallica have the ability to synthesize high levels of LME under comparatively high carbon and high nitrogen conditions during the active phase of growth. The production of LME by C. gallica 142 and the ratio of the individual enzymes, as for other WRB [5,13,14,28], were clearly dependent on the lignocellulosic growth substrate and its concentration (Table 3, Figure 1). For example, sunflower oil cake provided the maximum laccase activity of C. gallica 142, and wheat straw promoted MnP secretion, while mandarin peels increased the culture's LiP expression. The reason why these materials specifically improve individual enzyme activities is not yet clear. It is evident that these lignocellulosic substrates have different chemical compositions and differ significantly in aromatic compound content, which may be released into the liquid medium during sterilization and fungal growth. It is also possible that new aromatic compounds appear during lignocellulose metabolism, enriching the pool of LME inducers.
There are only a few comparative studies that summarize differences in fungal secretomes produced in response to the addition of plant-residue-based natural phenolic elicitors, and little is known regarding the activation of specific LME isoenzymes by these compounds. Therefore, additional studies are required to deepen the understanding of the expression and regulation of LME genes in the presence of individual plant-derived inducers. A number of reports indicate that aromatic/phenolic compounds, especially those structurally related to lignin, play an important role in the regulation of LME production in WRB [2,3,5,13,15,29]. Some studies suggest that specific compounds contribute to the expression of LME, with the same aromatic compound playing dual roles, inducer or repressor, depending on the fungal species and the enzyme tested [6,29]. Thus, addition of hydroquinone to a T. versicolor culture caused a simultaneous three-fold increase in laccase production and a two-fold decrease in MnP secretion; on the contrary, a C. unicolor strain showed decreased laccase activity under the same cultivation conditions [29]. MnP and LiP production by P. chrysosporium [16] and several other fungi was significantly increased by the addition of veratryl alcohol; however, no effect of veratryl alcohol on the synthesis of LME was shown in the present study. The overall results of this work aid in a better understanding of LME production by the analyzed white-rot fungi, with a specific interest in LiP production by C. gallica. In particular, C. gallica enzyme secretion is modulated by individual carbon and nitrogen sources, but occurs at comparatively high nutrient concentrations and in the absence of toxic inducers. Undoubtedly, this fungus is a good candidate for scale-up fermentation and for the production of selected LME. However, more detailed information on the regulation of each individual LME's synthesis by this fungal species is required for the development of cost-effective production and application technologies.
Field-free deterministic switching of all-van der Waals spin-orbit torque system above room temperature

Two-dimensional van der Waals (vdW) magnetic materials hold promise for the development of high-density, energy-efficient spintronic devices for memory and computation. Recent breakthroughs in material discoveries and spin-orbit torque control of vdW ferromagnets have opened a path for integration of vdW magnets in commercial spintronic devices. However, a solution for field-free electric control of perpendicular magnetic anisotropy (PMA) vdW magnets at room temperature, essential for building compact and thermally stable spintronic devices, is still missing. Here, we report a solution for the field-free, deterministic, and nonvolatile switching of a PMA vdW ferromagnet, Fe3GaTe2, above room temperature (up to 320 K). We use the unconventional out-of-plane anti-damping torque from an adjacent WTe2 layer to enable such switching with a low current density of 2.23 × 10^6 A cm−2. This study exemplifies the efficacy of low-symmetry vdW materials for spin-orbit torque control of vdW ferromagnets and provides an all-vdW solution for the next generation of scalable and energy-efficient spintronic devices.

Introduction

The discovery of emergent magnetism in two-dimensional van der Waals (vdW) materials [1][2][3] has broadened the material space for developing spintronic devices for energy-efficient, non-volatile memory and computing applications [4][5][6][7][8]. These applications are particularly well served by perpendicular magnetic anisotropy (PMA) ferromagnets, which allow fabrication of nanometer-scale, high-density and thermally stable spintronic devices. vdW materials provide strong PMA alternatives [9][10][11][12] to the few optimal bulk material systems, like CoFeB/MgO [13][14][15], while providing key advantages like scalability down to monolayer thicknesses while still maintaining an atomically smooth interface and minimal intermixing with the tunnel barrier of a magnetic tunnel junction (MTJ). The ability to switch vdW PMA ferromagnets above room temperature is necessary for viable applications to harness these capabilities. Hence, recent reports on achieving current-controlled switching of vdW PMA ferromagnets at room temperature are promising [16,17]. However, existing schemes for room-temperature current control of vdW ferromagnets utilize spin-orbit torque (SOT) from heavy metals or topological insulators and require application of an in-plane magnetic field to allow deterministic switching. This poses challenges to the development of high-density, thermally stable SOT-switching devices using vdW ferromagnets. A very recent work (unpublished) has attempted to use asymmetric growth of Pt on Fe3GaTe2 (single edge-coverage) to artificially break lateral symmetry and show field-free switching of the vdW ferromagnet up to 300 K [18]. However, such a mechanism is inherently unscalable, precluding wafer-scale processing, and in addition, robust non-volatile switching remains to be achieved. Thus, the critical challenge of field-free, deterministic, and non-volatile control of PMA magnetism in vdW materials above room temperature has remained unsolved.
Here, we report the first demonstration of deterministic and non-volatile switching of a PMA vdW ferromagnet above room temperature without any external magnetic fields. We achieved this by building a bilayer SOT system combining the room-temperature PMA vdW ferromagnet Fe3GaTe2 (FGaT) with the low-symmetry vdW material WTe2, harnessing the unconventional out-of-plane anti-damping torque for SOT switching (Fig. 1A). While several approaches to enabling field-free SOT switching of PMA magnetization are possible, including STT-assisted SOT switching [19], anisotropy tilting in the ferromagnet [20,21], artificially breaking lateral symmetry [22], and utilizing intrinsically low-symmetry spin-orbit coupling layers [23][24][25][26][27], we have employed WTe2 because it is particularly interesting for the control of vdW magnets, allowing the creation of vdW heterostructures and ensuring pristine interfaces and no lattice strain. Charge current injection along the low-symmetry a-axis of WTe2 generates an unconventional, out-of-plane anti-damping SOT, τ_AD^OOP, of the form m̂ × ẑ × m̂ (m̂ and ẑ are unit vectors along the ferromagnet magnetization and the normal to the WTe2/ferromagnet interface) [28,29], and this torque can be utilized for field-free switching of PMA ferromagnets [24][25][26][27]. However, this mechanism has not previously been demonstrated for room-temperature field-free switching of vdW materials. Employing our FGaT/WTe2 heterostructure devices, we demonstrate deterministic switching using a low current density of 2.23 × 10^6 A/cm2 up to 320 K. We also show that such field-free deterministic switching is seen exclusively when the charge current is injected parallel to the low-symmetry axis of WTe2, asserting the role of crystal symmetry in enabling such field-free switching of PMA magnetism.

Results

Our heterostructure devices use exfoliated sheets of FGaT and WTe2, with patterned electrical contacts and hexagonal boron nitride (hBN) encapsulation for air stability, as illustrated schematically in Fig. 1A. The heterostructures were assembled using a dry viscoelastic transfer process [30], and electrodes were patterned using a combination of e-beam lithography and e-beam evaporation of Ti/Au (more details in Methods). The Td-phase of WTe2 used here belongs to the Pmn21 space group. As shown in Fig. 1B, the crystal structure of WTe2 preserves mirror symmetry about the bc-plane (M_bc), while it breaks the mirror symmetry about the ac-plane (M_ac), where c is the out-of-plane crystallographic axis. As a result, the spin-orbit coupling induced spin accumulation, and consequently the spin-orbit torque, differs significantly for currents flowing along the a-axis and the b-axis. These two cases are treated in detail in the following discussion, using two devices: D1 with FGaT (25.8 nm)/WTe2 (21.6 nm) and D2 with FGaT (17.9 nm)/WTe2 (23.8 nm). An optical image of device D1 is shown in Fig. 1C, with the FGaT, WTe2 and hBN flakes indicated. The crystallographic a- and b-axes of the WTe2 flakes were identified using polarized Raman spectroscopy in the backscattering geometry, with the polarization unit vector in the sample plane along the azimuthal angle defined in Fig. 1C. Fig. 1D shows a color plot of the polarized Raman spectra of the WTe2 flake in D1 (see Supplementary Fig. S2 for D2). WTe2 exhibits two types of prominent Ag peaks with two-fold symmetries, which can be used to identify its a- and b-axes [31,32]. The minima of the type-I peaks (81 cm−1 and 212 cm−1), which coincide with the maxima of the type-II peak (165 cm−1), correspond to the a-axis of the WTe2 crystal.
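The axis assignment from such angle-resolved Raman data can be automated by fitting the two-fold angular dependence of an Ag peak. The sketch below is our own illustration with synthetic data, assuming the standard (A cos²θ + B sin²θ)² angular form for Ag modes in parallel polarization; it is not the analysis used in this work.

```python
# Our own illustration (synthetic data, not this paper's analysis): fit the
# two-fold angular dependence of an Ag Raman peak, assuming the standard
# parallel-polarization form I(theta) ~ (A cos^2(t) + B sin^2(t))^2 with
# t = theta - phi, then read off the a-axis from the intensity minimum of a
# type-I peak.
import numpy as np
from scipy.optimize import curve_fit

def ag_intensity(theta_deg, A, B, phi_deg):
    t = np.radians(theta_deg - phi_deg)
    return (A * np.cos(t) ** 2 + B * np.sin(t) ** 2) ** 2

theta = np.arange(0.0, 360.0, 10.0)               # polarizer angles (deg)
rng = np.random.default_rng(0)
data = ag_intensity(theta, 1.0, 0.55, 30.0) + rng.normal(0.0, 0.02, theta.size)

(A, B, phi), _ = curve_fit(ag_intensity, theta, data, p0=[1.0, 0.5, 0.0])
# With A > B the minimum of I(theta) lies 90 deg from phi; for a type-I peak
# that minimum marks the a-axis.
a_axis = (phi + 90.0) % 180.0 if A > B else phi % 180.0
print(f"estimated a-axis at {a_axis:.1f} deg in the lab frame")  # ~120 deg
```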
Magneto-transport characterization of the FGaT/WTe2 devices using the anomalous Hall effect establishes that the inherent ferromagnetic characteristics of FGaT are well preserved in the heterostructure devices and can be effectively probed through transverse voltage monitoring in current-induced magnetization switching experiments. Figs. 2A and 2B show the anomalous Hall effect curves of device D1 for the field swept along the sample normal (H ∥ ẑ) at temperatures in the range 10 K to 340 K. The device exhibits a large coercivity at low temperatures (up to 8.25 kOe at 10 K), which diminishes with temperature (Fig. 2C) such that H_c = 210 Oe at 300 K and is near zero starting at 330 K. The anomalous Hall resistance, R_xy^AHE, also goes to zero above 320 K, marking a ferromagnet-to-paramagnet transition between 320 K and 330 K. The anomalous Hall effect curve corresponding to the field swept close to the sample plane (H ⊥ ẑ) is shown in Fig. 2D. It exhibits the characteristics of a PMA magnet, going to near-zero resistance values only at high in-plane magnetic fields, with an anisotropy field of about 35 kOe, corroborating that the strong perpendicular magnetic anisotropy of FGaT is preserved in the heterostructure device.

Fig. 3A provides a schematic representation of the spin-orbit torque mechanism at play when the applied current is parallel to the high-symmetry b-axis. In this case, the applied current has no effect on the crystal's bc mirror-plane symmetry (M_bc). In accordance with Curie's principle [33], since the causalities (crystal structure and applied current) preserve M_bc, the resultant spin current (and accumulation) must also preserve M_bc. This forbids a vertical spin-polarization (σ_z) component in the vertically flowing spin current, since the σ_z pseudovector transforms anti-symmetrically upon reflection in the bc-plane. As a result, the spin accumulation at the FGaT/WTe2 interface only has an in-plane spin polarization, similar to the case of heavy metal/ferromagnet and topological insulator/ferromagnet systems. Such an in-plane spin accumulation can only produce deterministic switching in the presence of an externally applied field along the current direction. Fig. 3B shows the response of device D1 to current pulses applied along the b-axis of WTe2 in the absence of any external field, at 300 K. As expected, the in-plane anti-damping torque from the spin accumulation at the FGaT/WTe2 interface drives the FGaT magnetization in-plane (m_z = 0), resulting in a near-zero anomalous Hall resistance, for a current magnitude of about ±4.5 mA (9.51 × 10^5 A/cm2). Upon lowering the current drive to zero, the FGaT remains effectively demagnetized, as its various domains orient randomly due to the lack of a symmetry-breaking field. The four curves in Fig. 3B verify this for all combinations of current drive (positive or negative) and initial magnetization direction (m_z = ±1 ≡ R_xy = ±1.2 Ω). The initial magnetization state is set by applying a field of ±2 kOe along the sample normal before starting the current sweeps. Contrary to the above case, driving a current of the same magnitude in the presence of a non-zero external field (H = ±500 Oe) parallel to the current axis results in deterministic, partial switching of the FGaT magnetization.
In contrast to the case discussed above, when current is applied along the low-symmetry a-axis of WTe2, the applied current breaks the bc mirror-plane symmetry (M_bc). Thus, the causes break both mirror-plane symmetries (M_ac broken by the crystal structure, M_bc broken by the applied current), and a vertical spin-polarization component in the vertical spin current is now permissible. This scenario is depicted schematically in Fig. 4B. The vertical component of the spin accumulation at the FGaT/WTe2 interface can now apply a symmetry-breaking, unconventional, out-of-plane anti-damping spin-orbit torque, τ_AD^OOP, on the FGaT magnetization. τ_AD^OOP is antisymmetric in current and hence the FGaT magnetization can be toggled deterministically between m_z = ±1 by applying positive and negative current pulses. Device D2, with current applied along the a-axis of its WTe2 flake, is used to study this scenario. Details on the device, its Raman spectra and magneto-transport data are included in Supplementary Figs. S2 and S3. Fig. 4A shows the field-free current-induced switching loops of D2 for temperatures ranging from 300 K to 325 K. At 300 K, complete switching could be induced using ±8 mA (see Supplementary Fig. S4), equivalent to a current density of 2.23 × 10^6 A/cm^2. Increasing the temperature from 300 K to 325 K resulted in a shrinking of the anomalous Hall resistance splitting, until no clear looping behavior could be observed at 330 K and beyond (Fig. 4C). This aligns with the fact that the magnetization of FGaT decreases with increasing temperature, resulting in a decreasing R_xy^AHE until it eventually vanishes beyond the Curie temperature (320 K - 330 K). Fig. 4D shows the field-free deterministic and non-volatile switching of the PMA magnetization of FGaT by a train of current pulses, 1 ms long and ±8 mA in amplitude, applied along the low-symmetry axis of WTe2 (I ∥ a) at 300 K. We could observe such deterministic switching right up to 320 K, as reported in Supplementary Fig. S6, providing the first demonstration of field-free, deterministic switching of out-of-plane magnetization in a vdW ferromagnet above room temperature.
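The write/read sequencing behind these pulse-train data (1 ms write pulses followed by low-current reads of R_xy, as described later in Methods) can be sketched as follows. The instrument interaction is mocked and the magnet response is idealized, so this is an illustrative sketch rather than the actual acquisition script; the resistance value is taken from the R_xy = ±1.2 Ω scale quoted for D1.

```python
import random

# Sketch of the write/read sequence for the switching demonstrations:
# a 1 ms write pulse, then low-amplitude reads of R_xy. Instrument I/O is
# mocked with an idealized magnet model in which, under tau_OOP, the final
# state simply follows the polarity of the last write pulse.

WRITE_MA = 8.0      # write-pulse amplitude along the a-axis (mA)
R_AHE_OHM = 1.2     # idealized |R_xy| for m_z = +/-1 (ohm, illustrative)

def write_pulse(state, amplitude_ma):
    """Apply a 1 ms write pulse; tau_OOP sets m_z to the pulse polarity."""
    state["m_z"] = +1 if amplitude_ma > 0 else -1

def read_rxy(state):
    """Read R_xy with a +/-200 uA probe current (assumed non-perturbative)."""
    return R_AHE_OHM * state["m_z"]

state = {"m_z": +1}
periodic = [(-1) ** n * WRITE_MA for n in range(8)]
randomized = [random.choice((+WRITE_MA, -WRITE_MA)) for _ in range(8)]

for label, train in (("periodic", periodic), ("randomized", randomized)):
    rxy = []
    for pulse in train:
        write_pulse(state, pulse)   # 1 ms write pulse
        rxy.append(read_rxy(state)) # low-current read of R_xy
    print(label, [f"{r:+.1f}" for r in rxy])
```

In the ideal case, R_xy toggles in lockstep with the pulse polarity for both periodic and randomized trains, which is the signature reported in Fig. 4D.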
Conclusion

We utilize the unconventional, out-of-plane anti-damping spin-orbit torque, τ_AD^OOP, generated by WTe2 upon charge current injection along its low-symmetry a-axis, to switch the magnetization of the underlying FGaT in the FGaT/WTe2 heterostructure devices. We clearly show that the τ_AD^OOP-induced field-free switching occurs exclusively for charge current injection along the WTe2 a-axis, while charge injection along the b-axis results in demagnetization of the underlying FGaT. Thus, we have reported the first demonstration of field-free magnetization switching of a perpendicular-magnetic-anisotropy vdW ferromagnet above room temperature (up to 320 K) using a low current density of 2.23 × 10^6 A/cm^2. The proposed all-vdW architecture can also provide unique advantages such as the improved interface quality needed for efficient spin-orbit torques, possibilities for gate-voltage tuning to assist SOT switching, and prospects for flexible and transparent spintronic technologies. This work asserts the role of crystal symmetry in the spin-orbit-coupling layer of an SOT switching device using a low-symmetry vdW material, and provides a new, scalable, all-vdW approach to developing energy-efficient spintronic devices.

Device fabrication

The Fe3GaTe2/WTe2 devices reported here were fabricated by heterostructure assembly of exfoliated vdW flakes. Bulk FGaT was grown using a previously reported process 17. Bulk WTe2 and hBN were commercially sourced from HQ Graphene and Ossila, respectively. FGaT flakes were exfoliated onto Si/SiO2 (280 nm) substrates using mechanical exfoliation. WTe2 flakes, exfoliated on PDMS stamps, were transferred onto selected FGaT flakes using the dry viscoelastic transfer process. Electrodes were then patterned on the FGaT/WTe2 heterostructure using a combination of e-beam lithography with the positive e-beam resist PMMA 950, and e-beam evaporation of Ti/Au (5 nm/60 nm). The devices were then encapsulated with thick exfoliated flakes of hBN, using dry viscoelastic transfer. All exfoliation and vdW transfer processes were performed inside the inert environment of a N2-filled glovebox (O2, H2O < 0.01 ppm). Thicknesses of the constituent flakes were characterized after encapsulation using a Cypher VRS AFM. Polarized Raman spectra of the WTe2 flakes were acquired using a 532 nm laser with a WITec Alpha300 Apyron confocal Raman microscope, by rotating the polarizer and analyzer while the sample remained static.

Transport measurements

All transport measurements were performed in a 9 T PPMS DynaCool system. Measurements were performed by sourcing current using a Keithley 6221 current source and measuring the transverse voltage across the devices using a Keithley 2182A nanovoltmeter. Anomalous Hall effect measurements with field sweeps were performed using a drive current of 50-200 µA. For the current-induced switching measurements, a 1 ms pulse of write current was followed by 999 ms of read pulses (±200 µA). The field could be applied in and out of the sample plane using the PPMS horizontal rotator module.
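The axis assignment from the polarized Raman acquisition just described amounts to fitting a two-fold angular modulation and reading off its phase. Below is a hedged sketch with synthetic data; the model I(θ) = A + B·cos 2(θ − θ0) is a common parameterization for such polar Raman data, not necessarily the exact fitting procedure used here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the axis-assignment step: fit the two-fold angular dependence
# of an Ag-mode intensity, I(theta) ~ A + B*cos(2*(theta - theta0)), and
# read off the crystal axis from theta0. The data here are synthetic; in
# practice I(theta) would be the fitted peak area of, e.g., the 165 cm^-1
# type-II mode at each polarizer/analyzer angle.

def twofold(theta_deg, A, B, theta0_deg):
    return A + B * np.cos(2.0 * np.radians(theta_deg - theta0_deg))

theta = np.arange(0.0, 360.0, 10.0)
rng = np.random.default_rng(0)
I_meas = twofold(theta, 1.0, 0.4, 90.0) + rng.normal(0.0, 0.02, theta.size)

popt, _ = curve_fit(twofold, theta, I_meas, p0=[1.0, 0.3, 30.0])
A, B, theta0 = popt
if B < 0:                      # -cos(2x) = cos(2*(x - 90 deg))
    B, theta0 = -B, theta0 + 90.0
# For a type-II mode the intensity maximum marks the a-axis (the type-I
# modes are minimal there), so theta0 modulo 180 gives the a-axis angle.
print(f"a-axis at theta = {theta0 % 180.0:.1f} deg (true value: 90 deg)")
```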
Fig. 3: Field-assisted (only) switching for I ∥ b. (A) Schematic illustration of the scenario where current is sourced along the high-symmetry axis, I ∥ b. Symmetry constraints allow only an in-plane component of spin accumulation along the FGaT/WTe2 interface, resulting in a non-zero in-plane anti-damping torque (τ_AD^IP ≠ 0) but a zero out-of-plane anti-damping torque (τ_AD^OOP = 0). (B) Response of the device to current pulses applied along the b-axis for zero external field at 300 K. The blue and green (yellow and red) curves correspond to current pulses swept from 0 → −4.5 mA → 0 (0 → +4.5 mA → 0 mA), for the device initialized at m_z = 1 and m_z = −1, respectively. The device undergoes complete demagnetization by 4.5 mA in all four cases. (C) Current sweeps up to 4.5 mA result in partial magnetization switching in the presence of an externally applied field, H ∥ I ∥ b, of ±500 Oe, with a change in the direction of the field resulting in chirality reversal of the current-induced switching curves. Black dashed lines in (B) and (C) correspond to m_z = ±1. (D) Field-assisted deterministic, non-volatile switching of FGaT magnetization using a train of 1 ms long current pulses, ±4.5 mA in magnitude, under a +500 Oe in-plane magnetic field, H ∥ I ∥ b.

Fig. 4: Field-free switching for I ∥ a. (A) Response of device D2 to current pulse sweeps along the a-axis, for varying temperatures, without any external field. The curve at each temperature is an average of four consecutive current pulse sweeps acquired at that temperature, with error bars indicating the standard deviation of each data point across the four sweeps (individual sweeps reported in Supplementary Fig. S5). Data offset along the y-axis for clarity. (B) Schematic illustration of the scenario where current is sourced along the low-symmetry axis, I ∥ a. Broken mirror-plane symmetries allow an out-of-plane component of spin accumulation along the FGaT/WTe2 interface, resulting in a non-zero out-of-plane anti-damping torque (τ_AD^OOP ≠ 0), antisymmetric in current direction, enabling field-free deterministic switching of the underlying FGaT's magnetization. (C) Temperature dependence of the anomalous Hall resistance splitting in the current-induced switching loops. Clear switching can be observed up to 325 K (green region), with the decreasing R_xy^AHE denoted by solid square points, while no clear switching loops could be observed starting at 330 K (red region), and hence R_xy^AHE is set to zero (hollow square points). (D) Demonstration of field-free, deterministic, non-volatile switching of the out-of-plane FGaT magnetization in the FGaT/WTe2 device using 1 ms long pulses of current, ±8 mA in amplitude, applied along the a-axis. The data are acquired at 300 K in two sets of 50 s long pulsing sequences, with periodic and randomized current pulses, respectively.

Fig. S6: Deterministic switching up to 320 K. Field-free deterministic, non-volatile switching of the OOP magnetization of FGaT in device D2, using a train of current pulses, 1 ms long and ±8 mA in magnitude (top panel), with I ∥ a, at 310 K (middle panel) and 320 K (lower panel). The data are acquired in two sets of 50 s long pulsing sequences, with periodic and randomized current pulses, respectively.

Fig. 1: Fe3GaTe2/WTe2 heterostructure device. (A) Schematic diagram of the Fe3GaTe2/WTe2 heterostructure devices used in this study. (B) Schematic model of the WTe2 crystal's ab-plane, with the a- and b-axes labelled. The crystal preserves mirror-plane symmetry in the bc-plane while breaking it in the ac-plane. (C) Optical image of device D1, with the WTe2 (21.6 nm), FGaT (25.8 nm) and hBN flakes labelled. The crystallographic axes of the WTe2 flake (determined through polarized Raman spectra) and the definition of the azimuthal angle θ in the Raman spectra are also indicated. Scale bar: 10 µm. (D) Polarized Raman spectra of the WTe2 flake in (C). The minima (maxima) in the type-I Ag modes at 81 cm^-1 and 212 cm^-1 (type-II Ag mode at 165 cm^-1) around θ = 90° correspond to the a-axis of the WTe2 flake.

Fig. 2: Magneto-transport (anomalous Hall effect) characterization of device D1 (full caption not recovered from the source extraction).
Fig. S1: Topographical data for device D1. (A) Optical image of the device, with the red box indicating the region used for AFM measurements. (B) AFM topography micrograph of the region in the red box. (C) Height profile along the red line in (B).

Fig. S2: Device D2, its Raman spectra and topography. (A) Optical image of the FGaT/WTe2 heterostructure before patterning of the electrodes. The crystallographic axes of WTe2 (determined using polarized Raman spectra) and the definition of the azimuthal angle θ in the polarized Raman measurements are indicated. (B) Polarized Raman spectra of the WTe2 flake in D2. (C) Optical image of device D2, after patterning of electrodes and encapsulation with hBN. Red boxes correspond to the areas scanned by AFM for determining the thicknesses of the constituent WTe2 (box D) and FGaT (box F) flakes. (D) AFM topography micrograph of red box D and (E) the height profile along the red line. (F) AFM topography micrograph of red box F and (G) the height profile along the red line.

Fig. S5: Current-pulsing loops across varying temperatures. (A) Four consecutive current-pulsing loops acquired for D2, with I ∥ a, without any external field (and without field-assisted initialization between consecutive loops either) at 300 K. Black dashed lines are a visual aid denoting the same loop splitting. (B-F) Similar data for temperatures 305 K - 325 K in steps of 5 K.
2024-03-17T05:10:23.817Z
2023-09-10T00:00:00.000
{ "year": 2024, "sha1": "27031b05731126a4ffe6a8c57968df27593aad88", "oa_license": "CCBY", "oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.adk8669?download=true", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "091432b8d8d05878638efb1e9d380d8c5c3c6a20", "s2fieldsofstudy": [ "Physics", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
237973903
pes2o/s2orc
v3-fos-license
Mechanical characterization of friction stir welded joint of dissimilar aluminum alloys AA6061 and AA7050

The friction stir welding (FSW) process has gained attention in recent years because of its advantages over conventional fusion welding processes. These advantages include reduced heat formation in the affected zone and the absence of large distortion, porosity, oxidation, and cracking. Experimental investigations are necessary to understand the physical behavior that produces the high tensile strength of welded joints of different metals and alloys. This paper focuses on the effect of welding parameters on the microstructure and mechanical properties of joints welded by friction stir welding. In this work, AA6061 and AA7050 were successfully joined by friction stir welding; the confidence intervals showed that tensile strength and hardness increased with increasing tool rotation. The maximum tensile strength and % strain were 269 MPa and 21.5%, respectively, at a TRS of 1000 rpm and a TS of 60 mm/min, and the maximum hardness in the SZ was 135 HV at a TRS of 750 rpm and a TS of 80 mm/min. The grain size in the SZ at the higher tool rotation (1000 rpm) was much finer than at the lower tool rotation (800 rpm). The FSWed joint at 500 rpm shows big and deep dimples, while equiaxed fine dimples were observed at a TRS of 1000 rpm.

Introduction

Friction stir welding (FSW), a technology invented at The Welding Institute, UK, in December 1991, is a solid-state joining process for aluminum alloys, used increasingly in the aerospace, transportation and car manufacturing industries [1,2]. As a solid-state joining technique, it can be utilized to produce sound joints in aluminum alloys [3,4]. Fig. 1 shows a schematic diagram of FSW. In the FSW process, the tool rotational speed, welding (travel) speed and pin geometry essentially determine the material plasticization around the pin, the weld geometry and, consequently, the mechanical properties of the joints [5]. Lately, substantial work has been reported on the FSW process for similar and dissimilar metal joining, owing to its ability to eliminate the local casting defects of conventional fusion welding techniques [6]. The joining of such dissimilar materials by fusion welding is quite challenging due to their different chemical, mechanical and thermal properties. Against this background, FSW is an effective technology for reducing defects in aluminum alloy joints. Several researchers have tried to weld dissimilar alloys using the FSW process. For example, Carlone et al. [7] examined the microstructural aspects of aluminum-copper dissimilar joining by FSW. Also, Habibnia et al. [6] investigated the effects of different operating conditions on FSW of dissimilar sheets of 5050 aluminum alloy and 304 stainless steel.
They found that the tool welding speed affected the tensile strength of the joint, with increasing welding speed increasing the tensile strength.

Figure 1: Friction stir welding

Guo et al. [8] discovered that the highest joint strength was obtained when welding was conducted at the highest welding speed. A new approach has been introduced to improve the quality of TIG welded joints: the influence of friction stir processing on TIG welded joints has been analyzed, with observations of the mechanical properties and heat transfer of TIG+FSP welded joints [9-15]. Due to their high strength-to-weight ratio, good machinability, and high resistance to corrosion [16], aluminum alloys are attractive lightweight metals for structural applications in the aerospace, automotive, and naval industries. However, the joining of Al alloys by conventional fusion welding techniques is known to be problematic [17]; some of these issues include the formation of secondary brittle phases, cracking during solidification, high distortion, and residual stresses [17]. Among aluminum alloys, the heat-treatable 6XXX Al-Mg-Si and 7XXX Al-Mg-Zn systems [16] are some of the most widely advanced and used alloys. The AA6061 class has been extensively employed in marine frames, pipelines, storage tanks and aircraft [18]. On the other hand, the AA7050 alloy is widely used in the aerospace industry and is known to have improved toughness and corrosion resistance compared to other alloys of the 7XXX series [19]. The strengthening of these alloys is achieved by producing hard nanosized Mg-rich precipitates via solution heat treatment and subsequent artificial aging [20-22]. Although the AA6061 alloy can be joined by conventional fusion welding, the AA7050 alloy is considered ''unweldable'' by these methods [23]. However, multiple studies have demonstrated the effectiveness of friction stir welding (FSW) for joining AA6061 [24-27] and AA7050 [28,29].

Materials and Methods

Butt friction stir welds were produced using 6 mm thick rolled plates of AA6061 and AA7050. The chemical compositions of both materials are summarized in Table 1. The aluminum alloys were welded at three tool rotational speeds in the range 500-1000 rpm, while the welding traverse speed was varied between 40 and 80 mm/min. The processing parameters were selected using the Design-Expert software. The FSW of the AA7050 and AA6061 Al alloys was performed using an H13 tool steel with a square pin of 6 mm diameter and 5.5 mm length, and a shoulder diameter of 19 mm. After the welding was completed, the top and bottom surfaces of the welded plates were machined down to a thickness of 3 mm.
This was done to eliminate the stress raisers produced by the flash material at the top of the weld. Flash material is produced on top of the welded plates due to the direct interaction of the tool shoulder with the underlying material, which is extruded and stirred around the pin. Specimens for microstructural and mechanical characterization were then cut perpendicular to the welding direction with a milling cutter. Microstructural characterization of the welds was carried out using optical microscopy (OM) and scanning electron microscopy (SEM). The transverse and longitudinal sections of the welds were prepared as per the ASTM E8 standard. Vickers microhardness measurements were performed on the transverse cross-section of the FSWed samples. To characterize the mechanical properties of the welds, monotonic tensile testing was performed on the FSWed coupons at room temperature.

Tensile strength

The sub-size tensile test specimens were cut with the help of a milling machine as per the ASTM E8 standard. A universal testing machine was used to perform these tests at room temperature. Three specimens were tested for each parameter set and the average value was taken, as shown in Tables 2-4. The experimental tensile strengths of the FSWed joints of AA6061 and AA7050 varied significantly as the TRS was changed from 500 to 1000 rpm. The maximum tensile strength of 269 MPa was observed at a TRS of 1000 rpm and a TS of 60 mm/min, while the minimum tensile strength of 171.2 MPa was found at a TRS of 500 rpm with a TS of 40 mm/min. The maximum joint efficiency of 78.43% was observed at a TRS of 1000 rpm and a TS of 60 mm/min, and the minimum joint efficiency of 49.91% was found at a TRS of 500 rpm with a TS of 40 mm/min, because the lower TRS (500 rpm) produces a lower temperature distribution along with poor stirring action by the square pin, resulting in inadequate consolidation of the FSWed material by the tool shoulder [30]. Hence the minimum joint efficiency, i.e. the lowest tensile strength, was observed at that condition. When the TRS increased, the heat input to the welded joint also increased; as a result, a fine and equiaxed grain structure was observed, which enhanced the mechanical properties. When the TRS increases beyond 1000 rpm, excessive stirring of the welded material on the top surface of the base plate may occur, causing microvoids in the SZ. The increase in temperature, the coarsening of grains, and cooling from higher-than-preferred temperatures may decrease the tensile strength of the FSWed joints at high TRS. Some defects were also observed in the material flow around the AS of the weldment [31]. The percentage strain of the welded joint at 500 rpm was lower than at 1000 rpm. The tensile stress-strain diagrams of welded joints at different processing parameters are shown in Fig. 2. At low TRS, the frictional heat generated between the rotating tool and the base plate does not produce adequate plasticized flow, leading to defects in the FSWed joints, whereas at low TS the frictional heat produces high temperatures in the welded region, and the resulting excess heat flow in the FSWed joint can also cause defects in the welded region. Standard deviation, standard error and 95% confidence intervals were calculated for the tensile results.
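The reported joint efficiencies can be cross-checked against the tensile results. The base-metal tensile strength is not quoted in this excerpt, so the minimal Python sketch below (ours, not part of the original analysis) back-calculates it from the reported pairs.

```python
# Joint efficiency = (UTS of the welded joint / UTS of the base metal) x 100.
# The base-metal UTS is not stated in this excerpt; it is back-calculated
# here from the reported pairs as a consistency check of the efficiencies.

pairs = [  # (UTS_weld in MPa, reported joint efficiency in %)
    (269.0, 78.43),   # TRS 1000 rpm, TS 60 mm/min
    (171.2, 49.91),   # TRS 500 rpm,  TS 40 mm/min
]

for uts_weld, eff in pairs:
    uts_base = uts_weld / (eff / 100.0)
    print(f"weld {uts_weld} MPa at {eff}% -> base-metal UTS ~ {uts_base:.0f} MPa")
# Both pairs give ~343 MPa, so the efficiencies appear to be computed
# against the same base-plate tensile strength.
```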
Microhardness

Microhardness is directly affected by the dislocation density and the phase dispersion in the microstructure. The microhardness values are less significant in affecting the overall mechanical properties of the FSWed joint [32]. Due to the cooling rate and solidification behavior of the FSWed joints, the hardness values at the bottom and the middle of the weldment showed the largest effect. The hardness number plays a significant role in recognizing the metallurgical phases. The maximum hardness value of 131 HV was found at a TRS of 1000 rpm and a TS of 80 mm/min, whereas the minimum hardness value of 110 HV was obtained at a TRS of 750 rpm and a TS of 40 mm/min, as shown in Fig. 3. The hardness test was performed from the first base metal (AA6061), across the welded region, to the second base metal (AA7050). At each point, three hardness values were measured and the mean value was taken. The hardness value in the SZ was higher than in the TMAZ because a very fine and equiaxed grain structure was observed in the SZ. Due to coarsening of grains and precipitates in the HAZ, lower hardness values were observed there, whereas due to dissolution of precipitates, a decreasing trend of hardness was observed in the TMAZ, as shown in Fig. 4 [33]. When the TRS increased, the hardness value in the SZ also increased due to low heat concentration and a fine microstructure [34]. This clearly showed that the grain size and the amount of gaps between the grains decreased with increasing TRS and TS. The increase in welding defects was predicted to be caused by insufficient plasticization due to a decrease in temperature [35]. The welding quality of the samples at a TRS of 1000 rpm with different TS was very good, with very few welding defects as the TRS and TS increased.

Figs. 6-8 show the effect of TRS (500-1000 rpm) and TS (40-80 mm/min) on the microstructure of the FSWed joints of AA6061 and AA7050. The temperatures in the SZ for all the specimens, measured by thermocouple, were in the range 395-432°C. It is reasonable to expect that the temperature in the SZ was greater than in the TMAZ and HAZ regions. At high TRS, the adequate frictional heat and extensive plastic deformation generate fine, recrystallized, equiaxed grains in the SZ [36], as shown in Fig. 8. A finer grain size was found at a TRS of 1000 rpm compared to 500 and 750 rpm. The average grain size in the SZ was 10.8 µm at 1000 rpm, whereas grain sizes of 30.5 µm and 22.6 µm were observed at TRS of 500 rpm and 750 rpm, respectively.

Fractured surface analysis

The fractured specimens were investigated at high magnification, as shown in Fig. 9. When tensile force was applied to the FSWed joints of AA6061 and AA7050, stress concentration took place in the low-strength region of the FSWed joints, and subsequently the joints failed in that region [37,38]. If the welded joints were defect-free, the joints failed on the AS instead of the RS, which means that the strength of the RS region is higher than that of the AS region [39]. Fig. 9 clearly reveals that large and deep dimples were observed at low TRS (500 rpm), while equiaxed tiny dimples were observed at high TRS (1000 rpm). This was evidence of crack nucleation and growth 4 mm away from the weld line [40].

Conclusions

The FSWed joints of AA6061 and AA7050 were successfully analyzed and the following conclusions were drawn.

- The confidence intervals showed that tensile strength and hardness increased with increasing tool rotation.
- The maximum tensile strength and % strain in the SZ were 269 MPa and 21.5%, at a TRS of 1000 rpm and a TS of 60 mm/min.
- The maximum hardness in the SZ was 135 HV, at a TRS of 750 rpm and a TS of 80 mm/min.
- The grain size in the SZ at the higher tool rotation (1000 rpm) was much finer than at the lower tool rotation (800 rpm); an illustrative Hall-Petch sketch relating the measured grain sizes to the hardness trend follows below.
- The FSWed joint at 500 rpm shows big and deep dimples, while equiaxed fine dimples were observed at a TRS of 1000 rpm.
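As a purely illustrative aside on the grain-size conclusion above: the finer-grain/higher-hardness trend is commonly rationalized with a Hall-Petch-type relation, HV ≈ H0 + k/√d. The constants below are assumed placeholders for an aluminum weld nugget, not values fitted or reported in this study.

```python
import math

# Illustrative only: a generic Hall-Petch estimate, HV ~ H0 + k/sqrt(d),
# applied to the grain sizes reported for the stir zone. H0 and k are
# assumed placeholder constants, NOT values fitted or reported here.

H0, k = 85.0, 150.0          # HV and HV*um^0.5, assumed
for trs, d_um in [(500, 30.5), (750, 22.6), (1000, 10.8)]:
    hv = H0 + k / math.sqrt(d_um)
    print(f"TRS {trs} rpm: d = {d_um} um -> HV ~ {hv:.0f}")
# The monotonic rise with decreasing grain size mirrors the reported trend
# of higher stir-zone hardness at higher tool rotational speed.
```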
2021-08-27T17:14:41.035Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "bc29c34ca32a08a4744e1f5ca999a4bf8b126535", "oa_license": null, "oa_url": "https://doi.org/10.36037/ijrei.2021.5503", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "19a92766fd1e729a5e728df77483a4a63d171a68", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
245183106
pes2o/s2orc
v3-fos-license
Molecular Characterization of Herbicide-Degrading Bacteria Isolated from the Soil Environment in Abuja

The study aimed at the molecular characterization of herbicide-degrading bacteria isolated from the soil environment in Abuja. A systemic chemical herbicide was applied to an experimental plot of land with weeds, and its effects on soil bacteria and the physicochemical properties of the soil were examined over a period of seven weeks. The chemical herbicide, glyphosate, reduced the plate count of bacteria from 120 x 10^5 cfu/g/dwt to 48 x 10^5 cfu/g/dwt some hours after application, and the reduction continued until the end of the sampling period. The isolated bacterial species were Simulium tani, Bacillus firmus, Pseudomonas tolaasii, Acinetobacter beijerinckii, Enterobacter sp., Citrobacter freundii, and Pseudomonas poae. Organisms that were eliminated following glyphosate application were Bacillus megaterium, Pseudomonas tolaasii, Proteus sp., and Simulium tani, while those that persisted throughout the experiment were Staphylococcus aureus, Pseudomonas poae, Bacillus firmus, and Enterobacter sp. It was concluded that glyphosate altered the microbial counts and had a temporary inhibitory effect on the types of bacteria present in the soil.

Introduction

Herbicides are valuable tools for the selective control of undesirable plants in crop production. However, various herbicides at recommended rates, whether applied to the foliage or soil, often persist in the soil for extended periods of time. These residues may cause serious damage to sensitive plant species grown in the season(s) following application of the herbicides. The climatic and edaphic factors, e.g., temperature, moisture, pH, soil composition, and cation exchange capacity, which affect the residual life of herbicides, are numerous and complex. Herbicides cause a range of health effects, from skin rashes to death. The pathway of attack can arise from intentional or unintentional direct consumption, improper agricultural application resulting in the herbicide coming into direct contact with people or wildlife, inhalation of aerial sprays, or food consumption prior to the labeled preharvest interval. Pesticides can enter the human body through inhalation of aerosols, dust and vapour that contain pesticides, through oral exposure by consuming food and water, and through dermal exposure by direct contact of pesticides with the skin (Cooper and Dobson 2007). Herbicides are often applied directly to soil. They may also reach the soil through application to foliage via spray drift, run-off, or wash-off. Once released into the environment, chemicals undergo various dissipation pathways, and their persistence in the environment varies widely. Among the factors affecting the local concentration of a compound are the amount of compound released, the rate of release, its persistence in the environment under various conditions, the extent of its dilution, its mobility, and the rate of biological or non-biological degradation (Ellis 2000 and Janssen et al. 2001). Herbicide biodegradation involves a wide variety of microorganisms, including bacteria and fungi, operating under dynamic anaerobic and aerobic conditions. It has been suggested that biodegradation of pesticides in soil ecosystems can only take place through the synergistic interactions of a microbial consortium, the activity of which is affected by many soil physical and chemical properties, as well as the nature and extent of the pesticide contamination.
Soil microbes make valuable contributions to soil fertility. Pesticides can have inhibitory, stimulatory, or neutral effects on soil microbes, depending on their nature and concentration as well as the strain or type of microbe (Busse et al., 2001). Herbicides will remain toxic in soil when conditions are not favorable for microbes. Degradation of the herbicide follows the population growth of the microbes: during the lag phase the microbial population increases in response to the food source, and rapid decomposition then occurs (Busse et al., 2001). The herbicide drastically reduces the microbial population when applied to a soil sample. Synthetic herbicides have the potential to influence plant disease by several mechanisms. They can enhance disease or protect plants from pathogens due to direct effects on the microbe, effects on the plant, or effects on both organisms.

Collection of soil samples

Soil samples were collected from the research farm of the National Root Crop Research Institute, Nyanya Out Station, Abuja, Nigeria (latitude 9.0765 N, longitude 7.3986 E). Soil samples were taken randomly with a soil auger from each of the experimental plots and the control plot; topsoil from 0-15 cm depth was used. Cassava and maize are the crops grown on the farm. These crops had been sprayed with glyphosate, an organophosphorus herbicide, for the previous 4-5 years. Plots of land measuring 3 x 3 m, with four replicates arranged in randomized form, were used for the experiment. Samples were collected before application of the herbicide and then weekly from the first to the seventh week. Soil samples were sieved with a 2.0 mm mesh to remove stones and plant debris. Samples were placed in sampling bags and taken immediately to the laboratory for analysis (Makut and Ifeanyi 2017). The herbicide, glyphosate, was dissolved in distilled water at the recommended rate of 50 ml per five liters of water; the mixture was then applied to the experimental plots, while distilled water was applied to the control plots for comparison.

Isolation of bacteria from soil contaminated with herbicide

One (1.0) gram of the soil sample was weighed using a weighing balance and suspended in 9 ml of sterile water. It was mixed properly and a 10-fold serial dilution was carried out across seven dilutions. The identification and characterization of bacterial isolates were based on cultural, morphological and biochemical characteristics using standard methods (Mendes et al., 2017).

Molecular identification of bacteria isolated from herbicide-contaminated soil

The molecular identification of bacteria isolated from herbicide-contaminated soil was carried out using bacterial genomic DNA extraction, DNA quantification, and 16S rRNA amplification and sequencing.

Determining the effects of temperature, pH and incubation time on biodegradation of herbicides

Biodegradation experiments were carried out at different temperatures, pH values and incubation times (herbicide at 3.0 mg/ml) using the methods of Thavasi et al. (2007). The experiment to determine the effect of temperature on herbicide biodegradation by bacteria was carried out at various temperatures for 15 days. The effect of pH on the biodegrading potential of the bacteria was determined by adjusting the pH between 4.5 and 8.5, with incubation for 15 days. The effect of incubation time on herbicide biodegradation was determined by incubating for periods ranging from one to seven weeks (Jurado et al., 2011).
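As a worked illustration of the plate-count arithmetic behind the 10-fold dilution series described under the isolation procedure above (our sketch, not the authors' protocol): CFU per gram is the colony count times the dilution factor divided by the volume plated. The 1 ml pour-plate volume below is an assumption; it is not stated in the text.

```python
# Plate-count arithmetic for a 10-fold serial dilution: 1 g soil in 9 ml
# diluent gives 10^-1, then successive 10-fold dilutions.
# CFU per gram = colonies counted x dilution factor / volume plated.
# The 1.0 ml plated volume is an illustrative assumption.

def cfu_per_gram(colonies, dilution_exponent, vol_plated_ml=1.0):
    return colonies * (10 ** dilution_exponent) / vol_plated_ml

# e.g. 120 colonies on a 10^-5 pour plate:
print(f"{cfu_per_gram(120, 5):.1e} CFU/g")   # 1.2e+07, i.e. 120 x 10^5 CFU/g
```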
Quantification of pesticide residue

This was carried out using a gas chromatography spectrophotometer on the biodegraded samples. The aqueous samples were analyzed by directly derivatizing an aliquot; the derivatizing reagent mixture was prepared fresh by mixing one volume of heptafluorobutanol with two volumes of trifluoroacetic anhydride.

III. RESULT AND DISCUSSION

The herbicide treatment was observed to have a negative effect on the microbial load. In glyphosate-treated soil, there was a gradual decrease in the bacterial population, from 120 x 10^6 cfu on the first day after application to 101 x 10^6 cfu after one week of application. By the third week of application there was a sharp decrease to 77 x 10^6 cfu, falling to 48 x 10^6 cfu by the sixth week. The bacterial counts from contaminated and non-contaminated soil are given in Table 1. The screening for survival of different bacteria in herbicide broth is given in Table 3. Pseudomonas tolaasii, Pseudomonas poae, Proteus sp., Priestia flexa, Bacillus megaterium, Bacillus firmus, Simulium tani, Acinetobacter beijerinckii and Citrobacter freundii were able to survive in the herbicide-containing broth. The effect of temperature on utilization of the herbicide is shown in Table 4. Pseudomonas tolaasii had the highest utilization at 35°C (2.19±0.26 mg/ml), followed by 30°C (2.06±0.64 mg/ml), and the lowest was at 26°C (1.23±0.11 mg/ml). The effect of pH on herbicide utilization by the bacterial isolates is shown in Table 5. Pseudomonas tolaasii had the highest utilization at pH 7.0 (3.5±0.3 mg/ml), followed by pH 6.5 (3.1±0.3 mg/ml) and pH 6.0 (2.1±0.1 mg/ml), and the lowest was at pH 5.5 (1.7±0.3 mg/ml).

Correlation between different parameters

Some correlations were also calculated from the results at the end of the experiment, when the organisms were considered to be highly metabolically active. The negative correlation observed between pesticide degradation and colony count suggests the negative impact pollutants may have on biodiversity. These relationships would be useful for the biodegradation of glyphosate and other organic contaminants in the environment (Showunmi et al., 2020).

IV. CONCLUSION

The findings of this study suggest that the organisms isolated and identified have the potential to degrade glyphosate pollutants when applied in environmentally friendly clean-up technology (bioremediation) of glyphosate-contaminated environments. Therefore, factors promoting their growth should be encouraged. Herbicides are phytotoxic chemicals used for destroying various weeds or inhibiting their growth. It is also important to know that excessive use of herbicides in agroecosystems may change the composition and diversity of weed populations. Excessive use of herbicides should be minimized in wildlands, although herbicides may increase the diversity of native species there. Threats to plant biodiversity caused by habitat loss and invasive species are far greater than threats from the use of herbicides. It is also important to properly manage lands that are treated with herbicide, as spray runoff in sandy soils may cause tree injury if application is followed soon after by irrigation or rainfall. To prevent contamination of water bodies, management plans should carefully consider the hydrology of the system being treated. Hypothesize potential runoff scenarios and take appropriate measures (such as buffer zones) to prevent them. Underground aquifers and streams should be considered as well.
2021-12-16T17:19:15.491Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "51ef2ab841c6706cc73877552ed53d3ac8c3b194", "oa_license": "CCBY", "oa_url": "https://ijeab.com/upload_document/issue_files/21IJEAB-111202120-Molecular.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ba89cd0d118f1d6477317f461165d8875b33dd81", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
73051070
pes2o/s2orc
v3-fos-license
MAPPA and mental health - 10 years of controversy

Summary: Multiagency public protection arrangements (MAPPAs) were established in England and Wales 10 years ago to oversee statutory arrangements for public protection by the identification, assessment and management of high-risk offenders. This article reviews MAPPAs' relationship with mental health services over the past decade. Despite areas of progress in the management of mentally ill offenders, inconsistent practice persists regarding issues of confidentiality and information-sharing between agencies, which calls for clearer and more consistent guidance from the Royal College of Psychiatrists, the Ministry of Justice and the Department of Health.

Declaration of interest: All the authors have some involvement in MAPPA policy.

It seems timely to appraise the impact on psychiatric practice of multiagency public protection arrangements (MAPPAs) in the management of offenders, over a decade after they came into force in 2001. Since their introduction, MAPPAs' relationship with mental health services has been characterised by controversy, raising questions as to whether their public protection function is appropriate for, or compatible with, a medical or mental health service 'duty of care'. Despite such concerns, there is evidence that progress has been made in the multiagency management of high-risk violent and sexual offenders in the community since MAPPAs were introduced, particularly improvements in the consistency of their implementation throughout the country. However, significant difficulties remain, most notably a lack of clarity regarding issues of confidentiality and information-sharing between agencies, and variations in practice between different mental health services. Coincidentally, confidentiality and the management of sex offenders are again topical issues in the public domain following the closure of the tabloid newspaper News of the World in the aftermath of the phone-hacking scandal. It is perhaps ironic that public support for the newspaper's strategy of nullifying the privacy and confidentiality of known sex offenders in their 'Sarah's law' campaign may have declined recently following the revelations that its journalists violated Sarah Payne's family's own privacy by hacking into their mobile telephone messages. 1

Background and developments

Multiagency public protection arrangements were introduced in England and Wales with the aim of minimising the risk of sexual and violent offences to the general public posed by identified high-risk individuals living in the community. Increasing social and political concern about violent and sexual offenders in the 1990s fostered closer working relationships between the police, probation and prison services, which were incorporated into legislation in the Criminal Justice and Court Services Act 2000. This legislation introduced MAPPA in each of the 42 criminal justice areas in England and Wales. The police, probation and prison services were established as the 'responsible authority' to oversee statutory arrangements for public protection by the identification of high-risk offenders, the assessment and management of their risk, and the sharing of relevant information among the agencies involved. The Criminal Justice Act 2003 further strengthened these arrangements by imposing on health and social service agencies a 'duty to cooperate' with MAPPA. This clause was intended to enhance multiagency work through the coordination of different agencies in assessing and managing risk, and to 'enable every agency, which has a legitimate interest, to contribute as fully as its existing statutory role and functions require in a way that complements the work of other agencies' (p. 196). 2 In practice, cooperating agencies, which include the National Health Service (NHS) and primary care trusts, youth offending teams, local housing authorities, local education authorities and Jobcentres Plus, are expected to attend case conferences, share information about offenders and provide advice regarding management.

Mapping MAPPAs

There are three tiers (levels) to the MAPPA management system at which risk is assessed and managed.
Level 1 (ordinary risk management) is for offenders whose risk is classified as low or medium and who can be managed by one lead agency, such as the police, probation or mental health services. Level 2 (local interagency risk management) is for offenders whose management requires the active involvement of more than one agency. Here the work is coordinated at monthly multiagency meetings where there is permanent representation of the core agencies of the police, probation and prison services, supplemented by representatives of other involved agencies where needed. Level 3 (multiagency public protection panels) is reserved for the minority of offenders who are considered to pose the most serious risk and/or to require complex risk management. These cases will be discussed at the regular monthly level 2 meetings, but also on an individual basis at emergency level 3 meetings. Overall, MAPPAs are meant to provide a strategic framework to manage high-risk offenders, enabling a focus on the small group of offenders responsible for a high proportion of crime. 3

In our work with MAPPA over the past 10 years, we have observed several positive developments. These include a shift towards adopting more stringent criteria for referral to MAPPA, enabling a more selective focus on a smaller group of high-risk cases; a more consistent and coordinated approach in MAPPA implementation and practice in different areas throughout England and Wales, with greater routine involvement of mental health services; the introduction of key performance indicators; and the inclusion of lay members on the regional MAPPA strategic management boards to provide an independent perspective. In our opinion, lay members have added a useful 'common sense' element to strategic discussions and have not, to our knowledge, been involved in breaches of confidentiality as some had predicted. It is important to note that lay members do not sit on level 2 MAPPA meetings at an operational level, although they may observe them as part of the monitoring function of strategic management boards.

Challenges

Measuring the effectiveness of such interagency collaboration, however, has proved more problematic. Despite anecdotal reports that serious further offence rates are lowered in offenders covered by the MAPPA process, hard evidence is lacking. A comprehensive review of the evidence on interagency collaborations in offender health and social care, including MAPPA, introduced by successive Labour administrations since 1997, revealed that although this subject area is awash with literature in the form of government policy, opinion and national evaluations, there is little independent research and systematic review. 4
The current evidence available confirms the presence of continued structural, procedural and cultural barriers that impede effective partnership working in interagency collaborations aimed at crime reduction. Key difficulties include conflicting targets imposed by individual agencies, and divergent ethical and professional values of the different agencies involved across the care-control divide. One of the few published audits of a forensic mental health team's involvement with MAPPA, 7 years ago, highlighted the problems they encountered. 5 These included confusion regarding the role and contribution of mental health teams; additional burden on clinical teams with no increase in financial resources; lack of protocols and guidelines; ambiguity about the meaning of 'duty to cooperate'; poor integration of criminal justice system members' views about risk with a forensic mental health perspective; and lack of cooperation of non-patient offenders with mental health teams. Despite the publication of clearer guidance on MAPPA by the National Offender Management Service, 2 many of these difficulties persist today, particularly tensions around information-sharing with health and social care agencies. Reluctance to pass on information about patients to MAPPA may arise for a range of reasons, such as a lack of awareness of the appropriate guidance, concern about the potential for criticism by professional bodies such as the General Medical Council (GMC), and concern that disclosure could have adverse consequences for therapeutic trust and engagement. It can be argued that a breach of therapeutic trust could paradoxically increase risk by interfering with treatment that has the potential to reduce risk (e.g. a disclosure arising from an out-patient sex offenders' group that results in a group member dropping out of treatment). Information-sharing may also lead to faulty risk assessment due to the sheer volume of information, which may swamp MAPPA and prevent systematic analysis and an informative, holistic risk assessment of the individual offender. 6 In our opinion, there is a risk of 'promiscuous' information-sharing due to the lack of clarity and the discrepancies in the guidance available for psychiatrists regarding communication and disclosure of information about patients within the MAPPA process. There are implicit discrepancies between documents from the Royal College of Psychiatrists 7 and the Ministry of Justice, 8 and a lack of sufficient detail that calls for urgent clarification, as argued by Buchanan & Grounds. 9 The report on confidentiality and information-sharing published by the Royal College of Psychiatrists contains a short section on MAPPA (pp. 33-34). 7 This clarifies that the duty placed on health services to cooperate with MAPPA does not extend to any statutory duty to disclose information to other agencies involved in these multiagency arrangements. It also states that the same medical duty of confidentiality applies as in normal clinical practice, so that considerations about disclosure should be on a public-interest basis. It states that requests for information from outside agencies, including the police, should be treated as all other requests, by informing the patient and seeking consent for disclosure, unless there are overriding considerations, which may include statutory obligations, and that all employing organisations should have a policy governing their relationship with MAPPA.
Most importantly, the report clarifies that although psychiatrists have a duty to cooperate with MAPPA, this does not mean an obligation to disclose. The duty to cooperate is not imposed on individual clinicians but on the mental health trust (as an agency bound by a duty to cooperate). It has been argued that in a mental health trust the information in clinical records is the property of the trust, and therefore a chief executive of a trust has the discretion but not a duty to disclose. In practice, medical staff are often relied on to make decisions about records and disclosure. However, the brief section on MAPPA within the overall Royal College of Psychiatrists guidance document on confidentiality is vague and potentially at odds with current MAPPA guidance produced by the National Offender Management Service Public Protection Unit in 2009 (which is in the process of revision and due to be re-issued later this year), 2 and with MAPPA guidance from the Ministry of Justice released in 2010. 8 This last document advocates the routine disclosure of information on 'MAPPA-eligible' mentally disordered offenders at designated points in their care pathway. For detained patients on restricted hospital orders it is recommended that MAPPA be notified about any detained patient who is a MAPPA-eligible offender when there is any planned move of the patient outside the secure perimeter, such as leave or transfer to another hospital, and also at the first care programme approach (CPA) meeting where a discharge is considered. The Ministry of Justice 'strongly recommends' that 'the [MAPPA] Co-ordinator should be informed by the care team of any occasion when the patient will be unsupervised in the community' (p. 3). 8 Given that a patient detained under a restriction order has, at the point of sentence, been deemed by a criminal court to pose a risk of 'serious harm' to the public, routine notification is, from a responsible authority's point of view, arguably justified, as the criterion of 'serious harm' risk has been met. The expectation is that most mental health cases in MAPPA will be managed at level 1 and only referred to MAPPA when the CPA process is not adequate to manage risk or there is a need for multiagency management. Confusion arises in several areas when dealing with graduated leave and discharge from long-stay forensic mental health units. Current MAPPA guidance recommends that notification should be used at the point of first (usually unescorted) leave so that the MAPPA in the discharge locality will be informed and can plan as necessary. As forensic patients may be in regional units away from their home area, initial leave may be in a different MAPPA locality from the final discharge area; thus two MAPPA panels may be involved. In addition, although the Mental Health Casework Section of the Ministry of Justice makes leave decisions for restricted cases, it delegates MAPPA notification to the discretion of the mental health team, which becomes the conduit for information between two criminal justice agencies (MAPPA and the Ministry of Justice). Furthermore, notification does not necessarily request or require MAPPA to take any action, which may allow a MAPPA level 2 panel to have information about a patient but do nothing to manage or reduce their risk.
The situation may be even more confusing for non-restricted patients, where the criterion of 'serious harm' has not been established by a court and where the Ministry of Justice may no longer be involved, even though some unrestricted cases in forensic units may be former sentenced prisoners (Mental Health Act Section 47/49 transfers whose sentences have expired) with substantial risk histories. From a mental health perspective, our experience is that routine notifications may force the clinician into an unhelpful and counterproductive monitoring role, which may increase, rather than decrease, the patient's risk to self and others by interfering with a critical therapeutic alliance. For example, patients on planned escorted home leave may receive unexpected visits by the police, which may be experienced by the patient as intrusive and may disrupt the treatment process. Further risks from the blurring of professional boundaries may occur at MAPPA meetings, where less experienced health representatives may be unprepared for the, often subtle, pressures placed on them to disclose information on patients known to them, without having the opportunity to consider the requests in detail and discuss them with the mental health team. 10 Psychiatrists are also bound by other codes of practice regarding confidentiality, notably guidelines produced by the GMC 11 and the NHS Code of Practice on Confidentiality 12 produced by the Department of Health, which provides guidance for all NHS staff. Supplementary Guidance on Public Interest Disclosures 13 was added to the NHS Code of Practice in 2010. In addition to health-specific guidance, any decision by a public authority must also be compliant with the Article 8 'right to privacy' of the Human Rights Act 1998. Although MAPPA is not explicitly mentioned in the NHS Code of Practice or the Supplementary Guidance, these documents make additional important points regarding confidentiality and disclosure that are potentially at odds with the MAPPA guidance. The NHS Code of Practice highlights the centrality of seeking patient consent for the disclosure of confidential information, whereas in the MAPPA guidelines, although it is stated that 'It is preferable that the offender is aware that disclosure is taking place and, on occasion, they may make the disclosure themselves' (p. 70), 2 the specific issue of consent is not mentioned. Furthermore, the NHS Code of Practice stresses the importance of balancing the need for disclosure not only against the duty of confidentiality towards individual patients, but also against the interest of public confidence in the NHS as a confidential service. In this respect, the disclosure of confidential information about one patient could indirectly damage the treatment of other patients, whose confidence in the service may be undermined. Finally, psychiatrists should remember that although legislation may create a 'statutory gateway' allowing information disclosure, this generally 'stops short of creating a requirement to disclose, therefore the common law obligations of confidentiality must still be satisfied' (p. 38). 12 This means that it is still the clinical decision of the doctor to judge, on a case-by-case basis, whether disclosure is necessary to prevent serious harm or abuse.
Recommendations

We fully support the College guidance that all health organisations should: (a) have policies that cover the role of psychiatrists and other members of the multidisciplinary team in the MAPPA process; (b) have representation at MAPPA meetings; (c) withhold and disclose information in accordance with good-practice guidelines; (d) conduct assessments at the request of a MAPPA meeting; and (e) be represented on a MAPPA strategic management board. 7 However, given the ambiguities in the currently available guidance documents, particularly regarding the frequency and circumstances of disclosure for detained MAPPA-eligible patients, we also recommend the publication of more explicit and detailed national guidance for psychiatrists on their involvement in the MAPPA process. Whether we like it or not, MAPPA is here to stay, and it is important that we, as mental health professionals, remain thoughtfully involved in protecting the interests of our patients, while being mindful of public protection.
2019-03-11T13:06:36.227Z
2012-06-01T00:00:00.000
{ "year": 2012, "sha1": "88ce8c7ae259a81169d5495bbbee01d7dc16ad40", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/B2D2459AAEFB701F72B4869004383816/S1758320900002237a.pdf/div-class-title-mappa-and-mental-health-10-years-of-controversy-div.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "03d42a5b3cffd40f651d76ef341c2f44c0a3bd57", "s2fieldsofstudy": [ "Psychology", "Law", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
210081578
pes2o/s2orc
v3-fos-license
“Free won’t” after a beer or two: chronic and acute effects of alcohol on neural and behavioral indices of intentional inhibition

Background: Response inhibition can be classified into stimulus-driven inhibition and intentional inhibition based on the degree of endogenous volition involved. In the past decades, abundant research efforts to study the effects of alcohol on inhibition have focused exclusively on stimulus-driven inhibition. The novel Chasing Memo task measures stimulus-driven and intentional inhibition within the same paradigm. Combined with the stop-signal task, we investigated how alcohol use affects behavioral and psychophysiological correlates of intentional inhibition, as well as stimulus-driven inhibition.

Methods: Experiment I focused on intentional inhibition and stimulus-driven inhibition in relation to past-year alcohol use. The Chasing Memo task, the stop-signal task, and questionnaires related to substance use and impulsivity were administered to 60 undergraduate students (18-25 years old). Experiment II focused on the behavioral and neural correlates of acute alcohol use on performance on the Chasing Memo task by means of electroencephalography (EEG). Sixteen young male adults (21-28 years old) performed the Chasing Memo task once under placebo and once under the influence of alcohol (blood alcohol concentration around 0.05%), while EEG was recorded.

Results: In Experiment I, the AUDIT (Alcohol Use Disorder Identification Test) total score did not significantly predict stimulus-driven inhibition or intentional inhibition performance. In Experiment II, the placebo condition and the alcohol condition were comparable in terms of behavioral indices of stimulus-driven inhibition and intentional inhibition as well as task-related EEG patterns. Interestingly, a slow negative readiness potential (RP) was observed with an onset of about 1.2 s, exclusively before participants stopped intentionally.

Conclusions: These findings suggest that both past-year increases in risky alcohol consumption and moderate acute alcohol use have limited effects on stimulus-driven inhibition and intentional inhibition. These conclusions cannot be generalized to alcohol use disorder and high intoxication levels. The RP might reflect processes involved in the formation of an intention in general.

Background

Imagine having cocktails with friends at a bar during happy hour and experiencing a strong urge to order one more. But then you realize that you need to prepare for an important meeting the next morning, and you decide to refrain from having another drink. In examples like this, there is no external cue signaling a brake, yet you voluntarily suppress your urge for the sake of other priorities. Here, we refer to this type of cognitive control as intentional inhibition. In the current study, we will investigate how intentional inhibition 1) is associated with typical alcohol use and 2) is affected by acute alcohol consumption.

Alcohol use and inhibition

Inhibitory control is defined as the ability to control one's attention, behavior, thoughts, and/or emotions, and instead do what is more appropriate or needed [1]. This ability enables us to override strong internal predispositions or external lures. Long-term alcohol use has been associated with structural as well as functional neural deficits that are related to inhibition [2].
For instance, alcohol-dependent patients show selective deficits in prefrontal gray and white matter volume [3]; compared to light drinkers, heavy drinkers were slower to stop inappropriate responses and showed deviant amplitudes of the P3 (a brain potential that correlates with the efficiency of response inhibition) [4]. Despite relatively robust neurological evidence for inhibition deficits, alcohol use severity is not consistently associated with impaired behavioral performance on response inhibition tasks [5][6][7]. Acute alcohol use (moderate to high dosage), by contrast, was more consistently related to inhibition deficits [8,9] and reduced amplitudes of inhibition-related brain potentials [10]. Intentional inhibition Theoretically, motor inhibition can be classified into stimulus-driven inhibition and intentional inhibition based on the degree of endogenous volition involved [11]. A daily-life example of stimulus-driven inhibition is stopping at a traffic light that suddenly turns red. The past decades have seen abundant research efforts directed exclusively at the effects of alcohol on stimulus-driven inhibition (see reviews: [12][13][14]). However, rather than relying on external cues, deciding independently when and/or whether to abort an action plays an even more important role in daily life [15]. Intentional inhibition refers to the capacity to voluntarily suspend or inhibit an about-to-be-executed action at the last moment [16]. In terms of drinking, the priming dose effect of alcohol, i.e., loss of control over further consumption after a priming dosage, reflects the insufficiency of intentional inhibition rather than stimulus-driven inhibition [17]. There have been several attempts to study intentional inhibition using varieties of the Libet task [18], the Marble Task [19], and the modified go/no-go task [20,21]. To investigate intentional inhibition, these tasks usually included a free-choice condition, where participants were encouraged to act/inhibit voluntarily and roughly equally often across all the trials. For instance, in the Marble task, participants view a white marble rolling down a ramp. In 50% of the trials, the marble turns green and participants have to stop it from crashing as fast as possible by pressing a button. If the marble remains white, the participants can choose between performing the prepared action (i.e., stopping the marble) and executing intentional inhibition (i.e., not stopping the marble). Such a "free choice" design is suboptimal in at least three ways regarding the concept of intentional inhibition. First, the choice between acting and withholding is relatively arbitrary; little (if anything) really hinges on whether the participant decides to act or not on any particular trial. Accordingly, participants might behave in a way that they believe will satisfy the experimenters' definition of volition. Second, participants are subject to substantial time pressure, which may prevent the time-consuming development of spontaneous intentions. Third, participants may pre-decide whether and when to inhibit ahead of time (even before the start of the trial) rather than on the fly [22], even when it is emphasized that this should be avoided. Thus, the study of intentional inhibition may be augmented by using more ecologically valid tasks.
The present study To address these points, a novel task was developed, in which stimulus-driven and intentional inhibition can be measured under comparable conditions that are ecologically more representative (Rigoni, Brass, van den Wildenberg, & Ridderinkhof, unpublished manuscript). In the current study, we will investigate if and how alcohol use affects intentional inhibition in two complementary ways. Experiment I focuses on prolonged (i.e., past-year) alcohol use in relation to intentional versus stimulus-driven inhibition with a relatively large sample. The Chasing Memo task, as well as the classic stop-signal task (SST), was administered. Experiment II investigates the behavioral and neural effects of acute alcohol use on Chasing Memo task performance. Electroencephalographic (EEG) activity was recorded in a smaller sample, with a double-blind, placebo-controlled, within-subject design. Introduction The aim of Experiment I was to test whether past-year typical alcohol use influenced stimulus-driven as well as intentional inhibition. Extensive research into the effects of long-term alcohol use on stimulus-driven inhibition has been documented, but the conclusions are inconsistent. Some researchers found that, compared to controls, heavy drinkers showed impaired stopping performance, signified by either longer stop-signal reaction time (SSRT) on the SST [4] or higher commission error rates in the go/no-go task (GNG) [23,24]. These findings, however, conflict with a series of other studies. For instance, a meta-analysis of differences between heavy drinkers and controls reported null-effects with respect to inhibitory impairments in 9 out of 12 GNG studies and in 7 out of 9 studies using the SST [13]. Similarly, in a recent retrospective epidemiological study among 2230 adolescents, longitudinal analyses showed that 4 years of weekly heavy drinking did not result in impairments in basic executive function, including inhibitory control [25]. In the literature, two types of impulsivity have been discerned that may trigger failures of inhibitory control: 'stopping impulsivity' and 'waiting impulsivity', which rest on largely distinct neural circuits [26,27]. 'Stopping impulsivity' refers to impairments in the ability to interrupt an already initiated action, whereas 'waiting impulsivity' refers to impairments in the ability to refrain from responding until sufficient information has been gathered or a waiting interval has elapsed. Stopping and waiting impulsivity have typically been tested in the SST and in the delay discounting task, respectively [28]. In the Chasing Memo task (Rigoni et al., unpublished manuscript), participants were asked to use the computer mouse to move the cursor and chase a small fish, called Memo, as it moves across the screen ("swimming" against a nautical background picture). Participants disengaged from visuomotor tracking in response to either an external stop cue (i.e., stimulus-driven inhibition) or at will (i.e., intentional inhibition). Meanwhile, to supplement and validate the stimulus-driven inhibition component of the new task, the conventional SST was also administered [29]. In addition to laboratory-based tasks, two sets of questionnaires were also administered. The Barratt Impulsiveness Scale (BIS-11) [30] and Dickman's Impulsivity Inventory (DII) [31] were used to test impulsivity.
Substance use was tested by the AUDIT (Alcohol Use Disorder Identification Test) [32], the mFTQ (modified version of the Fagerström tolerance questionnaire) [33], the CUDIT-R (cannabis use disorder identification test revised) [34], and the CORE (the core alcohol and drug survey) [35]. The current study focuses on college students, for whom alcohol is one of the most frequently used substances, giving rise to unsafe drinking-and-driving behavior and the consumption of other substances [36]. Although prior work (as reviewed above) has not yielded consistent results, we tested the hypothesis that higher AUDIT scores (i.e., more risky alcohol use within the past 12 months) were associated with prolonged SSRTs (analogous to longer disengage latencies in the cued version of the Chasing Memo task). For intentional inhibition in the Chasing Memo task, we conceived of two opposing scenarios: analogous to stimulus-driven inhibition, past-year alcohol use induces 'stopping impulsivity' and delays intentional disengagement; alternatively, it induces 'waiting impulsivity' and results in faster disengagement times [27]. Although the lack of existing studies on alcohol and intentional inhibition prevents us from inferring strong theory-based hypotheses, the present task set-up will allow us to empirically distinguish between them. Methods Participants Eighty-six undergraduate students (10 males) were recruited (age: Mean = 20.77, SD = 1.86). Inclusion criteria were: 1) between 18 and 25 years old; 2) no report of head injuries, colorblindness or seizures; 3) no prior or current diagnosis of depression; 4) proper mastery of Dutch, as all task instructions and questionnaires were presented in Dutch. Due to incorrect settings of refresh rates on some test computers, the Chasing Memo data from a subset of 26 participants could not be used. Thus, the analyses of the Chasing Memo task were based on the remaining 60 subjects (6 males, 20.75 ± 2.01 years old). Questionnaires The BIS-11 is a 30-item questionnaire designed to assess the personality/behavioral construct of impulsiveness [30]. The DII includes two subscales: functional impulsivity (11 items) and dysfunctional impulsivity (12 items). The AUDIT is a 10-item survey used as a screening instrument for excessive or hazardous alcohol use [32]. It covers the domains of recent alcohol consumption (items 1-3), alcohol dependence symptoms (items 4-7), and alcohol-related problems (items 8-10). The mFTQ assesses the level of nicotine dependence among adolescents [33]. The CUDIT-R was used to identify individuals who have used cannabis in problematic or harmful ways during the preceding 6 months [34]. The CORE was originally designed to examine the use, scope, and consequences of alcohol and other drugs in college settings [35]. In the current research, participants were asked to indicate how often within the last year and month they had used each of the 11 types of drugs. Reliability of these questionnaires can be found in Additional file 1. Behavioral tasks Chasing memo task In this task, an animated fish called Memo is moving ('swimming') at 360 pixels/sec against the background of the bottom of an ocean, changing directions at random angles between 0 and 115 degrees, at intervals between 556 and 1250 ms. The participants' main task was to track the fish by keeping a yellow dot (operated through the computer mouse) within close proximity of Memo (i.e., within a green zone of 2 cm radius surrounding it).
Points were earned per second during successful tracking (i.e., as long as the cursor is within this green zone) and accumulated points were displayed in the bottom right corner of the screen (tracking points). These points accumulated faster as a linear function of time spent within the green proximity zone. The accumulation rate was indicated to the subject by a red/green bar, which turned from red to green as a function of accurate tracking (see Fig. 1). Upon failures to chase Memo (i.e., failing to keep the yellow dot within the green zone), accumulation rates were reset, and accumulation of points would again start slowly as soon as the participant resumed successful tracking and then rise as a function of accurate tracking time. Participants were told that tracking points were converted to real money, which could yield up to 5 euro extra at the end of the experiment. Thus, participants had a strong immediate incentive to continue accurate tracking. A circle at the top left corner of the green zone served as the external signal to start and stop tracking. At the beginning of the trial, the circle was colored orange; after a variable delay (between 3 and 6 s) it turned blue (go signal), indicating that participants could start tracking the target. The specific instructions differed depending on the experimental condition. In the cued condition, participants were instructed to start tracking as fast as possible when the go signal appeared (cued engagement) and to stop as soon as possible if the circle turned orange again, i.e., the stop signal (cued disengagement). Participants were asked to disengage by leaving the mouse completely still in its end position. The trial ended 2 s after tracking disengagement. Within the colored circle, there was a counter with a serial display of digits constituting a number (between 100 and 999). Every 100 ms, that number incremented by 1 until the value of 999 was reached, after which the counter was reset to 100. Participants had to remember the number displayed when the stop cue appeared, type it in at the end of the trial, and indicate how confident they were about their answer (from 1 to 7). This served as the timing accuracy index. In the free condition, participants could freely decide when to start tracking after the go signal appeared. After uninterrupted successful tracking for 2 s, a bonus signal, signified by a yellow star, was displayed next to the red/green meter (Fig. 1). Its appearance signaled the beginning of a 20 s temporal window (participants did not know its length) within which participants were to continue tracking until they felt the urge to stop. Disengagement meant foregoing the immediate reward (increase in normal points) in favor of the future reward (bonus points). The number of bonus points varied between 2 and 50 and was determined by the disengagement moment. Participants were instructed that some variability in their tracking latency (within the margins of not stopping too soon nor too late) would yield an optimal amount of bonus points. Unbeknownst to the participants, the time at which the star was lost was determined stochastically by drawing randomly from a normal distribution, such that the optimum waiting time was 10 s on average; prolonged tracking would be highly beneficial on some trials but highly detrimental on others.
[Fig. 1 The Chasing Memo Task. a Background display for the motor tracking task. Participants were instructed to track the fish Memo around the screen by keeping the mouse within the green zone surrounding the target. On each trial, a counter displayed on the bottom right of the screen showed the points earned during successful tracking; b when the circle turned from orange to blue, participants started tracking either at will (intentional condition) or as quickly as possible (cued condition); c during successful tracking, the half-circle red bar gradually turned green, signaling that the participant started to earn points; d in the cued condition, the circle switched back to orange to signal that the participant had to stop tracking as quickly as possible; e in the intentional condition, the appearance of a star indicated the beginning of a time window in which the participant could earn additional bonus points. In these trials, participants could decide voluntarily when to disengage from motor tracking in order to collect the bonus points.]
Within each block of the free condition, bonus points were accumulated across trials and converted into extra time (1 second per earned bonus point) for tracking in a later bonus trial. In a bonus trial, participants could earn tracking points four times as fast as in a regular trial. Thus, more bonus points result in a higher total of tracking points (and hence in greater earnings). In order to prevent undesirable response tendencies, participants were instructed and trained to follow their urge rather than preplan their time of disengagement or use external cues (such as spatial position or counter value) to determine the time of disengagement. As in the cued condition, participants now had to register and report the number on this counter at the time they first felt the urge (or conscious intention) to disengage, i.e., the W-moment [38]. Detailed instructions were provided at the beginning of the experiment, and participants performed a guided practice session to familiarize themselves with the task. The entire experimental session consisted of 6 cued and 6 free blocks of 10 trials each. Cued and free blocks were presented in alternating order and every free block was followed by a bonus trial. SST Similar to the task used by van den Wildenberg et al. (2006), participants were required to respond quickly and accurately with the corresponding index finger to the direction of a right- or a left-pointing green arrow (go trials). Arrow presentation was response-terminated. The green arrow changed to red on 25% of the trials (stop trials), upon which the go response had to be aborted. Intervals between subsequent go signals varied randomly but equiprobably from 1750 to 2250 ms in steps of 50 ms, during which a black fixation point (10 × 10 pixels) was presented. A staircase-tracking procedure dynamically adjusted the delay between the onset of the go signal and the onset of the stop signal (the stop-signal delay, SSD) for each hand separately to control inhibition probability [39]. SSD started at 100 ms, increased by 50 ms after a successful inhibition, and decreased by 50 ms after a failed inhibition. The SST consisted of five blocks of 60 trials, the first of which served as a practice block to obtain stable performance [29]. The SST measures both the efficiency of response execution (mean reaction time to correct go signals, go RT) and the latency of stimulus-driven inhibitory control (SSRT), where longer SSRT reflects a general slowing of inhibitory processes [40].
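The staircase procedure just described lends itself to a compact illustration. The sketch below is not the authors' code; it simply restates, in Python, the 50 ms up/down rule that keeps the probability of successful stopping near 50%. All names are hypothetical.

```python
# Illustrative sketch of the SSD staircase: the delay rises 50 ms after a
# successful stop (making stopping harder) and falls 50 ms after a failed
# stop (making it easier), converging on ~50% inhibition.
def update_ssd(ssd_ms: int, inhibited: bool,
               step_ms: int = 50, floor_ms: int = 0) -> int:
    """Return the stop-signal delay for the next stop trial."""
    if inhibited:
        return ssd_ms + step_ms              # stop succeeded -> harder
    return max(floor_ms, ssd_ms - step_ms)   # stop failed -> easier

# Example: a short simulated run starting at the task's initial 100 ms SSD.
ssd = 100
for outcome in [True, True, False, True, False, False]:
    ssd = update_ssd(ssd, outcome)
print(ssd)  # -> 100 (three successes and three failures cancel out)
```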
The integration method was used for SSRT calculation [41,42]. Procedure All participants signed informed consent prior to the laboratory session. They performed two computer tasks in a counterbalanced sequence, with a series of questionnaires in between; the behavioral tasks were administered using Presentation® software [43]. The procedures were approved by the local ethics committee and complied with institutional guidelines and the declaration of Helsinki. Participants were rewarded either €15 or 1.5 credit points upon completion. Data preparation and statistical analysis Chasing memo task Although Disengage RT was our measurement of primary interest, Engage RT was also analyzed to verify whether chronic alcohol use affected basic response speed. Engage RT (the time from the engage color change until the start of tracking) was calculated for both cued and free conditions. Engage RTs less than 100 ms were discarded from the analysis, resulting in 3360 (93.3%) out of 3600 trials for the cued condition and 3381 (93.9%) for the free condition. Disengage RT in the cued condition was calculated by subtracting the time of the disengage color change from the time at which tracking was completely halted. For the free condition, Disengage RT is the time from the appearance of the bonus star until the time of arrested tracking. Before analysis, 376 (10.4%) trials in the free condition were removed as intentional inhibition failures, i.e., participants did not stop tracking within the provided time window (20 s). The W-interval in the free condition was computed as the interval between the reported W-moment and the time of the actual stopping. In the cued condition, timing accuracy was the difference between the reported and the actual appearance moment of the stop signal. For all RT-related dependent variables, the median rather than the mean value was used for further analysis, as RT distributions were not normally distributed for all of the participants (skewed to the left for some participants and to the right for others). Engage RT and Disengage RT were analyzed using multiple linear regressions with AUDIT sum score (the AUDIT sum was nearly normally distributed, with skewness of 0.06 and kurtosis of − 0.68) and Inhibition Category (free vs. cued) as predictors, controlling for gender. Participants were not dichotomized into light and heavy drinkers during the recruitment and data analysis stages, as there was individual variance of alcohol consumption within these broad groups and artificial dichotomization reduces the power to detect subtle individual differences [44]; in addition, we replicated these analyses by replacing the AUDIT total score with the AUDIT-C (the first three items of the AUDIT), which is not limited to the past 12 months. Other substance use was not added as a covariate, as it was highly correlated with the AUDIT score (see Table 2). The possible association between past-year alcohol use and timing accuracy was examined by Pearson correlation. W-interval was analyzed with AUDIT score as a predictor, controlling for timing accuracy. These analyses were performed using SPSS 24.0 [45]. SST The successful inhibition percentages on inhibition trials ranged from 28.3 to 63.3% (M = 49.6%, SD = 4.67%), which meets the requirements of the integration method for SSRT calculation [41]. To compute go RT, only correct responses were taken into account.
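For readers unfamiliar with the integration method cited above [41,42], the following sketch shows the standard computation: the stop process is assumed to finish at the p-th quantile of the go RT distribution, where p is the probability of responding on stop trials, and SSRT is that quantile minus the mean SSD. This is an illustrative implementation with fabricated numbers, not the authors' analysis script, and it omits refinements such as the handling of go omissions.

```python
import numpy as np

def ssrt_integration(go_rts_ms, p_respond_given_stop, mean_ssd_ms):
    """Integration method: SSRT = (p-th quantile of sorted go RTs) - mean SSD,
    where p is the probability of responding on stop trials."""
    go_rts = np.sort(np.asarray(go_rts_ms))
    nth = int(np.ceil(p_respond_given_stop * len(go_rts))) - 1
    return go_rts[nth] - mean_ssd_ms

# Toy example with simulated data, for illustration only.
rng = np.random.default_rng(0)
go_rts = rng.normal(450, 60, size=200)        # simulated go RTs (ms)
print(ssrt_integration(go_rts, 0.50, 220.0))  # ~ median go RT minus mean SSD
```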
Afterward, regression analyses similar to those for the Chasing Memo task were performed for SSRT and go RT separately, without the factor Inhibition Category. We analyzed the data once with all participants (N = 86) and once with those who also had Chasing Memo task performance (N = 60). In addition, two correlation matrices were built: 1) correlations between the use of different substances; 2) correlations between different measures of impulsivity (Disengage RT in the free condition, SSRT, BIS-11 score, and DII score). Combination of conventional and Bayesian-based analysis To quantify the strength of our findings beyond standard significance testing and to remedy the relatively small sample size caused by the technical failure, the main hypotheses were also examined by calculating a Bayes Factor using Bayesian Information Criteria [46][47][48][49]. The Bayes factor provides the odds ratio (BF 01 ) for the null versus the alternative hypotheses given a particular data set (BF 10 is simply the inverse of BF 01 ). A value of 1 means that the null and alternative hypotheses are equally likely; values larger than 1 suggest that the data are in favor of the null hypothesis, and values smaller than 1 indicate that the data are in favor of the alternative hypothesis. A BF 01 between 1 and 3 indicates anecdotal evidence for the null compared to an alternative hypothesis, 3-10 indicates moderate evidence, and 10-30 indicates strong evidence [50,51]. The BFs were calculated with JASP 0.9.2.0, an open-source statistical package [52]. Sample characteristics Descriptive statistics (i.e., mean, standard deviation, minimum and maximum values) of the tested variables (demographics, substance use, task performance, and trait impulsivity) can be found in Table 1. Chasing memo task Task difficulty was assessed by the number of times one lost the star. Out of the 120 trials, on average participants lost the star 31 times (SD = 21), ranging from 6 to 145. This indicates that most of the participants had a good mastery of the task and should have been able to allocate attention to their behavioral intentions. Variables used in the regression analyses were checked for multicollinearity using variance inflation factors (VIF) before being entered into the multivariate analyses; VIFs for all variables were below 2 for the following regression models. The linear regression model for Engage RT was not significant (F (3, 116) = 0.99, p = 0.39), with a R 2 of 0.025. None of the explanatory variables significantly predicted Engage RT (AUDIT: β = 0.10, p = 0.29; Inhibition Category: β = − 0.02, p = 0.84; gender: β = − 0.12, p = 0.19). Bayesian linear regression showed that the null model provided a fit that was 2.2 times better than the model that added the factor gender, 3.0 times better than the model that added AUDIT, and 5.1 times better than the model that added Inhibition Category. The linear regression model for Disengage RT was significant (F (3, 116) = 94.48, p < 0.01), with a R 2 of 0.71. Inhibition Category significantly predicted Disengage RT (β = 0.84, p < 0.01). Disengage RT was much longer in the free condition than in the cued condition (8662 ms vs. 749 ms). Neither AUDIT (β = − 0.06, p = 0.27) nor gender (β = 0.06, p = 0.27) predicted Disengage RT. Bayes factor analysis confirmed this by showing that the model with the factor Inhibition Category provided a fit that was 7.0 times and 7.2 times better than the models that further added the factors Gender and AUDIT, respectively.
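The Bayes factors reported here were computed in JASP; for reference, the BIC-based approximation mentioned above can be written in a few lines. A common form (following Wagenmakers, 2007) is BF01 ≈ exp((BIC_alternative − BIC_null)/2). The sketch below assumes two already-fitted models and is illustrative only.

```python
import numpy as np

def bf01_from_bic(bic_null: float, bic_alt: float) -> float:
    """Approximate Bayes factor in favor of the null model from the BICs of
    two fitted models: BF01 ~= exp((BIC_alt - BIC_null) / 2)."""
    return float(np.exp((bic_alt - bic_null) / 2.0))

# Example: an alternative model whose BIC is 2.4 points worse than the null
# yields BF01 ~= 3.3, i.e., moderate evidence for the null.
print(round(bf01_from_bic(1000.0, 1002.4), 1))  # -> 3.3
```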
Past-year risky alcohol consumption was not associated with alteration in timing accuracy (r = − 0.21, p = 0.10, BF 01 = 1.66). The linear regression model for W-interval was not significant (F (2, 57) = 0.14, p = 0.87), with a R 2 of 0.005. None of the explanatory variables significantly predicted W-interval (AUDIT: β = − 0.007, p = 0.96; timing accuracy: β = − 0.071, p = 0.60). Bayes factor analysis confirmed this by showing that the null model provided a fit that was 3.4 times and 3.8 times better than the models that added the factors Timing Accuracy and AUDIT, respectively. SST There were no qualitative differences between the outcomes for the different sample sizes (86 vs. 60). We report the results for the smaller sample (the same as for the Chasing Memo task) below, and those for the larger sample in Additional file 1. The linear regression model for SSRT was not significant (F (2, 57) = 0.47, p = 0.63), with a R 2 of 0.02. None of the explanatory variables significantly predicted SSRT (AUDIT: β = 0.11, p = 0.43; gender: β = 0.07, p = 0.58). Bayes factor analysis confirmed this by showing that the null model provided a fit that was 2.9 times and 3.4 times better than the models that added the factors AUDIT and Gender, respectively. The linear regression model for go RT was not significant either (F (2, 57) = 2.40, p = 0.10), with a R 2 of 0.078. AUDIT was a significant predictor of go RT (β = − 2.68, p = 0.04), indicating that the higher the AUDIT score, the shorter the go RT. Gender was not a strong predictor of go RT (β = − 0.08, p = 0.52). Bayes factor analysis indicated anecdotal evidence for the effect of AUDIT, i.e., adding it to the model was just 1.6 times better than the null model, and the fit of the null model was 3.3 times better than that of the model adding the factor Gender. Results were very similar when the AUDIT-C was used (see Additional file 1). Correlation matrix As shown in Table 2, alcohol use and other substance use (e.g., cigarette and cannabis use) were highly correlated, which can be expected. In Table 3, the correlation matrix revealed three significant correlations between different impulsivity measures. SSRT correlated negatively with the attentional subscale of the BIS-11 (r = − 0.20, p = 0.03, BF 10 = 1275), and positively with the motor subscale of the BIS-11 (r = 0.22, p = 0.01, BF 10 = 2122). In addition, the motor subscale of the BIS-11 and the dysfunctional subscale of the DII were negatively correlated (r = − 0.21, p = 0.02, BF 10 = 1395). Subscales of impulsivity, whether measured by the BIS-11 or the DII, were not correlated with Chasing Memo task performance (note that we expected only a small to moderate relationship between SSRT and Disengage RT, as intentional inhibition engages additional neural activation beyond the inhibition network it shares with stimulus-driven inhibition [22]). Discussion In the first experiment, a past-year increase in risky drinking showed no relationship with any of the inhibition-related tasks and questionnaires. In the SST, alcohol use slightly speeded response latency but had no influence on the inhibition process. In the Chasing Memo task, typical alcohol use hardly had any effect on Engage RT and Disengage RT, nor did it influence the W-interval. The correlation analysis confirmed the existence of polysubstance use and the multidimensional nature of impulsivity (i.e., the impulsivity measures were largely uncorrelated). Stimulus-driven inhibition Our findings on stimulus-driven inhibition were comparable between the Chasing Memo task and the standard SST. For stimulus-driven inhibition as tested by the SST, the present null findings of past-year alcohol use replicate some recent studies [25,53] but conflict with some others [13].
Against the backdrop of this fairly inconsistent literature, it is time to re-assess the connection between recreational moderate alcohol use and stimulus-driven inhibition impairment. In the current study, alcohol use was treated as a continuous variable, which allowed drawing conclusions from a relatively complete population. Relatedly, in our recent individual-level mega-analysis, very limited evidence supporting such a deteriorating relationship was found across a broad range of substances [54]. As only a small proportion of the participants were diagnosed with Substance Use Disorder (SUD), it is still unclear whether these conclusions would also apply to SUD. By contrast, so-called extreme group designs are frequently used in this field, e.g., comparing light/non-drinkers versus people with alcohol use disorder (AUD) [55]. Studies with such designs yielded more positive findings [56,57]. Seemingly, people located at the very right end of the continuum, i.e., those diagnosed with alcohol use disorder, indeed have difficulties in inhibition. But it does not necessarily mean these findings can be generalized readily to the majority who drink alcohol on a regular/non-hazardous basis, at least on the behavioral level [58]. Intentional inhibition Given that this was the first attempt, we did not have firm a priori predictions on the presence and direction of effects of alcohol use on intentional inhibition. At least in the current context, there was no clear effect of alcohol use on intentional inhibition. The latency of intentional inhibition was expressed by the Disengage RT in the free condition. Its histogram for each individual showed either a rectangular or an approximately normal (with a mean near 10 s) distribution, which confirms the validity of the manipulation, in the sense that strategies other than 'following one's urge' (such as counting or waiting strategies) would have resulted in heavily peaked and/or skewed distributions. Though in the free condition participants appeared to start tracking as soon as possible, this did not invalidate the operationalization: engagement was less of a focus here, so we did not emphasize 'free will' as much as for disengagement, and no consequences were associated with the engage response pattern. For the W-interval, participants reported consciously feeling the urge to stop about half a second before the actual disengagement. The W-interval was similar for both groups. In the Libet task, the W-moment was reported 200 ms before intentional action [38]. This difference in timing might be due to the dissimilarity between voluntary action and voluntary inhibition, as well as specific task features, which will require further investigation. Although some limitations may apply, the consistency of effects and the robustness of the evidence in favor of the null hypotheses (as confirmed by Bayesian analyses) appear to justify the conclusion that a limited period (i.e., 1 year or a bit longer) of heavy drinking does not affect intentional or stimulus-driven inhibition (at least not in university students). However, before accepting such a conclusion, we seek further evidence by adopting a manipulation that in past research has proven more potent in inducing alcohol-related effects on stimulus-driven inhibition.
Alcohol use may increase maladaptive behaviors either because of lasting sequelae of chronic use or through its direct, acute effects [59]. Acutely, alcohol may impair cue-based inhibition and result in an increased likelihood of engaging in risky behaviors, such as driving while intoxicated. In addition, alcohol-induced impairments may also affect the likelihood of further unplanned consumption of alcohol [60]. Several laboratory studies showed that a moderate acute dosage of alcohol leads to impaired inhibition on the GNG and SST [61,62]. Therefore, as a next step, we explored whether alcohol intoxication affects stimulus-driven and intentional inhibition. In addition to behavioral measures, we also used EEG to record neural activity. This may reveal acute effects of alcohol on information processing that remain hidden when focusing on behavioral outcomes. For example, EEG highlighted the nature of the effects of alcohol consumption (vs. placebo) on performance monitoring and error correction [63]. Likewise, EEG signals have reflected differences between alcohol effects in light versus heavy drinkers in the absence of differences in behavior [10,64,65]. Introduction The aim of Experiment II was to test whether and how acute alcohol use influences intentional inhibition. Compared to chronic alcohol use, acute alcohol administration has been more consistently related to impaired stimulus-driven inhibition [66][67][68][69][70][71]. By analogy, acute alcohol administration might also be more likely to influence intentional inhibition than chronic alcohol use. Loss of control over drinking refers to the phenomenon that a small to moderate amount of alcohol induces physical demand/craving for further drinking and promotes alcohol-seeking behavior [17,72,73]. In this way, people are likely to fail in intentional inhibition and drink more than planned on a typical drinking occasion. If alcohol affects intentional inhibition, it may affect not only the time of overt disengagement but also the temporal unfolding of that intention. With its unique temporal resolution, EEG may provide a useful tool for studying this. The EEG component we are interested in is the readiness potential (RP) or Bereitschaftspotential. It was first recorded by Kornhuber and Deecke (1964) and attracted broad attention after Libet and colleagues' striking work in 1983 [38,74]. In their experiment, participants were instructed to press a response button whenever they became aware of the intention to do so and to report the time of this urge (the W-moment). They found that the W-moment occurred some 200 ms prior to the actual action and about 500 ms after the RP onset [38]. This finding was interpreted as indicating that the brain decides to initiate certain actions prior to any reportable subjective awareness, which raised perhaps unprecedented discussion in the literature. It was recently claimed that the RP might give rise neither to the W-moment (conscious intention) nor to the voluntary movement, as the RP occurs 1) before a motor act even without consciousness of commanding it; 2) in situations that do not involve movement, such as decision-making in mental arithmetic [75]; and 3) in externally triggered action [76]. Our concern here is not so much with the interpretation but with the development and time course of the processes associated with intentional inhibition. Only a few studies have investigated the neural mechanisms of intentional inhibition using EEG [20, 21, 77-80].
Tasks in those studies were suboptimal in that 1) the choice between acting and withholding was relatively arbitrary; 2) pre-decision on whether and when to inhibit could not be excluded; and 3) they perhaps tapped into selective choice rather than inhibition, especially when equiprobable go and no-go trials were used [77,78]. Thus, the underlying mechanism might entail not only intentional inhibition but be confounded by other components. The Chasing Memo task remedies these limitations, at least to some extent. A further departure from some previous studies was that components closely related to stimulus-driven inhibition, such as the N2/P3 [81], were not analyzed. First, for intentional inhibition we focused on neural activities preceding rather than following intentional inhibition, as 1) this can help predict when intentional inhibition is likely to happen; 2) for voluntarily chosen action/inhibition, nearly all cognitive processes happen before execution of the action; and 3) there is no external stop signal to be time-locked to, which makes the comparison with cued inhibition on the N2/P3 less relevant. Second, the N2/P3 comprises a complex of well-known EEG components that is typically associated with cued inhibition. Since the focus here is not on replicating previous findings of cued inhibition but on exploring the neural activities relevant to intentional inhibition as compared to cued inhibition, and since no N2/P3 could be expected (or indeed was observed) for intentional inhibition, our focus was on the RP rather than the N2/P3 complex. In Experiment II, we adopted a double-blind, within-subject cross-over design, with participants tested once under alcohol and once under placebo. Brain activity was recorded with EEG while they performed the Chasing Memo task. We hypothesized that the RP appears only in the intentional inhibition condition and not in the stimulus-driven inhibition condition. Second, in line with Experiment I, acute alcohol use may incur either stopping impulsivity or waiting impulsivity in disengaging from the action. The finding reported by Libet and colleagues (1983) suggests that the RP is positively associated with cognitive engagement and effort with respect to the impending movement [38]. The more the participant thinks about the action, the earlier and larger is the RP [82]. Thus, in the case of stopping impulsivity, the activation required to implement and set off the disengagement from action may take longer to build up and may require higher criterion levels of such activation; hence, acute alcohol should result in an earlier onset of the RP and a larger area between onset and peak (area under the curve, AUC). Likewise, in the case of alcohol-induced waiting impulsivity, an RP onset that occurs at a relatively brief interval relative to the time of disengagement and a smaller AUC of the RP should be expected. As exploratory measures of secondary interest, we also computed peak amplitudes and the RP interval (from onset latency to peak latency). Methods Participants Twenty right-handed male adults, independent from Experiment I, participated in this study, with an age range of 21 to 28 years old (M = 24.6, SD = 2.3). Participants were psychology students recruited from the local campus. According to self-report, they had normal or corrected-to-normal vision, were subjectively in good health, and had no history of head injuries or neurological or psychiatric disorders, including obesity and anorexia.
Although all participants were light to moderate drinkers in daily life, they did not engage in excessive consumption of alcohol or drugs and were not addicted to alcohol or other drugs. The study was approved by the local ethics committee and complied with the declaration of Helsinki, relevant laws, and institutional guidelines. Alcohol administration Drinks were orange juice mixed with either 40% alcoholic vodka or water. The amount of vodka was calculated based on the participant's body weight to obtain a blood alcohol concentration (BAC) of 0.05%. The mixture was divided into three equal portions. Two of the drinks were served 5 min apart, prior to commencing the task. Up to 3 min was allowed for drinking each unit, followed by 2 min of mouth-wash to remove the residual alcohol in the mouth. About 40 minutes after the second drink, a third booster drink was served to reduce noise due to measuring during the ascending versus descending limbs of the blood alcohol curve [83]. To enhance the alcohol taste, all the drinks had a lemon soaked in vodka, and the glass in which drinks were served was sprayed with vodka beforehand. To mask the alcohol taste, all drinks contained three drops of Tabasco sauce (McIIhenny Co., USA) [84]. Thus, in either condition, participants were unable to distinguish alcohol from placebo on the basis of smell or taste. Procedure Each participant performed the experiment twice, with 2 to 7 days in between. They were informed that they would receive a low dose and a high dose of alcohol over the two sessions. This assured the presence of expectancy effects in both sessions. In one test session they received alcoholic drinks; in the other session they were actually given placebo drinks. Sessions took place between 12:00 and 6:00 p.m., at fixed times across conditions per individual. The order of experimental conditions was randomized in a double-blind cross-over design. Breath alcohol concentration (BrAC) was measured using the Lion alcolmeter® SD-400 and registered four times during each session (i.e., at baseline, after the first two drinks, pre and post the third drink, and by the end of the computer task). BrAC was measured by a second experimenter, who also prepared the beverages; the primary experimenter always remained blind to alcohol conditions and BrAC. A short manipulation check interview was performed at the end of each session to assess participants' beliefs about the alcohol content of the drinks. Participants provided informed consent prior to participation and were compensated with 20 euro for participation, plus a maximum of 5 euro extra depending on their performance. They were allowed to leave the lab only when their BrAC value was below 0.02% in the drink session. Chasing memo task Task details were identical to those reported in Experiment I, except for a color adjustment (the circle that turned from orange to blue and vice-versa in Experiment I turned from red to green and vice-versa in Experiment II), to better mimic traffic light-related associations with stopping and going. A practice stage and a test stage containing three free blocks and three cued blocks were included. EEG data recording and preprocessing Continuous EEG data were recorded using the BioSemi ActiveTwo system [85] and sampled at 2048 Hz. Recordings were taken from 64 scalp electrodes placed on the basis of the 10/20 system, and two additional electrodes were placed on the left and right mastoids.
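Stepping back to the dosing procedure described above: the paper states only that the vodka volume was adjusted to body weight to target a 0.05% BAC. One standard way to do such a calculation is the Widmark formula; the sketch below is a hypothetical reconstruction under that assumption, not the authors' actual procedure (the distribution ratio r ≈ 0.68 for men and the density of ethanol are textbook constants).

```python
# Hypothetical weight-based dose calculation via the Widmark formula.
# This is an assumption for illustration; the study does not report its
# exact dosing formula.
def vodka_ml_for_target_bac(weight_kg: float, target_bac_pct: float = 0.05,
                            r: float = 0.68, abv: float = 0.40,
                            ethanol_density_g_per_ml: float = 0.789) -> float:
    """Widmark: grams of ethanol = target BAC (in g/kg, i.e., pct * 10)
    * r * body weight; convert grams to ml of 40% ABV vodka."""
    grams_ethanol = target_bac_pct * 10.0 * r * weight_kg
    return grams_ethanol / (abv * ethanol_density_g_per_ml)

print(round(vodka_ml_for_target_bac(75.0), 1))  # ~80.8 ml of 40% vodka
```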
In addition, four electrodes were used to measure horizontal and vertical eye movements. In the BioSemi system, the ground electrode is formed by the Common Mode Sense active electrode and the Driven Right Leg passive electrode. All EEG data were preprocessed and analyzed with EEGLAB v.13.5.4b [86], an open-source toolbox for Matlab, and Brain Vision Analyzer 2.0. Four participants were excluded from the analysis. One participant always disengaged when the star was presented on the screen (contrary to instructions). Three other participants had to be discarded due to technical malfunctions. Therefore, data analyses were based on the remaining 16 participants. Data were imported into EEGLAB with the average of the mastoids as the reference, then downsampled to 512 Hz and digitally filtered using a FIR filter (high-pass 0.016 Hz and low-pass 70 Hz, with an additional 50 Hz notch filter). The EEG traces were then segmented into epochs ranging from − 3000 to 1000 ms (− 3000 to − 2500 ms was used for baseline correction), time-locked to the last disengagement moment before the completion of a trial. Before artifact removal, trials in the free condition without a valid voluntary disengagement (i.e., disengagement occurring within 2 s following the bonus star, after which the trial ended automatically) were discarded, as intentional inhibition cannot be verified in these cases. Subsequently, artifact removal was accomplished in two steps. The first step consisted of visual inspection of the epochs to remove those containing non-stereotyped artifacts such as head or muscle movements, on the basis of manual and semiautomatic artifact detection (50 μV/ms maximal allowed voltage step, 150 μV maximal allowed difference of values in the epoch). This resulted in averages (SD) of 45.06 (7.30), 44.56 (9.37), 53.0 (7.47), and 52.94 (7.45) trials for the alcohol/free, placebo/free, alcohol/cued, and placebo/cued conditions, respectively. The number of epochs removed never exceeded 25%. Secondly, an independent component analysis (ICA) was performed using the 'runica' algorithm available in EEGLAB [87]. The extended option was used, which implements a version of the infomax ICA algorithm [88] resulting in better detection of sources with sub-Gaussian distributions, such as line current artifacts and slow activity. We then applied the algorithm ADJUST, which automatically identifies artefactual independent components by combining stereotyped artifact-specific spatial and temporal features [89]. ADJUST is optimized to capture blinks, eye movements, and generic discontinuities and has been validated on real data. After exclusion of artefactual components, the data were reconstructed based on an average (SD) of 55.57 (3.72), 57.69 (2.91), 56.75 (3.15), and 58.75 (3.21) ICA components in the alcohol/free, placebo/free, alcohol/cued, and placebo/cued conditions, respectively. The number of independent components removed did not exceed 14% of the total in any of the conditions. Afterward, the data were re-referenced using the current source density (CSD) transformation [90] as implemented in Brain Vision Analyzer [91] (with the parameters degree of spline = 4; maximum degree of the Legendre polynomial = 15). The CSD transformation uses surface Laplacian computation to provide a reference-free estimate of the local radial current density rather than of distant/deep (neural) sources [92,93]. A major advantage is that CSD leads to enhanced spatial precision of the recorded EEG activity [94,95] and thus acts as a spatial filter.
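The preprocessing chain above was implemented in EEGLAB and Brain Vision Analyzer. As a rough orientation for readers working in Python, an analogous pipeline can be sketched in MNE-Python; the file name, event code, and montage below are assumptions, ADJUST has no direct MNE equivalent, and details such as the exact filter design will not match the original.

```python
import mne

# Analogous pipeline sketched in MNE-Python; the study itself used EEGLAB
# and Brain Vision Analyzer. Names and event codes are hypothetical.
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)   # BioSemi recording
raw.set_montage("biosemi64", on_missing="ignore")
raw.resample(512)                                          # downsample to 512 Hz
raw.filter(l_freq=0.016, h_freq=70.0)                      # FIR band-pass
raw.notch_filter(50.0)                                     # 50 Hz line noise

events = mne.find_events(raw)                              # assumes a trigger channel
epochs = mne.Epochs(raw, events, event_id={"disengage": 1},
                    tmin=-3.0, tmax=1.0, baseline=(-3.0, -2.5), preload=True)

# Extended infomax ICA, analogous to the EEGLAB 'runica' step described above.
ica = mne.preprocessing.ICA(method="infomax",
                            fit_params=dict(extended=True), random_state=0)
ica.fit(epochs)
# ...inspect and mark blink/eye-movement components here (no ADJUST port)...
epochs_clean = ica.apply(epochs.copy())

# Surface-Laplacian (CSD) transform applied to the condition average.
evoked = epochs_clean.average()
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
```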
Finally, epochs were averaged for each participant and experimental condition for further statistical analysis. Previous literature indicates that the supplementary motor areas contribute considerably to the generation of the RP. Although some studies have analyzed the RP based on a pool of electrodes surrounding FCz, several studies suggest that the activity of these regions is best captured by electrode FCz [96,97], especially after CSD transformation. This was confirmed by visual inspection for each participant. Statistical analyses were therefore conducted only on this electrode. Data preparation and statistical analysis Task performance The calculations for median Engage RT, Disengage RT, and W-interval were the same as in Experiment I. Engage RTs of less than 100 ms were removed, resulting in 916 (95%), 885 (92%), 892 (93%), and 931 (97%) trials for the alcohol/free, placebo/free, alcohol/cued, and placebo/cued conditions, respectively. For Disengage RT in the free condition, if the participant did not voluntarily disengage within the provided time, that trial was removed. This resulted in 788 (82%) trials for the alcohol condition and 836 (87%) trials for the placebo condition. Paired t-tests (this was a within-subject design) were performed to compare performance under placebo and alcohol conditions for each of these dependent variables. EEG Four indices extracted from the ERP topographic plots were analyzed: RP onset latency, RP peak amplitude, AUC, and RP build-up interval (from onset latency to peak latency). For RP onset latency, since automated algorithms failed to yield consistent and robust latencies for most participants, three authors (YL, GFG, & RR) independently judged the EEG time courses for each individual trial, while remaining blind to Inhibition Category. The raters hand-picked (through computer-aided scrolling procedures) the RP onset as the moment in time (in ms) when the signal began to deviate and showed a steady shift in the negative direction. The inter-rater reliability calculated by intraclass correlation was 0.96, which indicated high consistency among raters. AUC was quantified as the total surface in the time window between onset latency and peak latency, using the R package 'stats' (version 3.3.0) [98]. A two-way within-subject repeated-measures ANOVA was implemented with Alcohol (alcohol/placebo) and Inhibition Category (free/cued) as factors. Conventional and Bayesian-based analysis As in Experiment I, we performed both conventional and Bayesian paired t-test and repeated-measures ANOVA analyses for the main dependent variables. Bayesian repeated-measures ANOVA compares all models against the null model. A BF was provided every time a main factor or interaction was added to the model, allowing us to establish how each main factor and the interaction contributed to the model. BrAC The descriptive values at each reading can be found in Additional file 1. In brief, BrAC peaked after the third drink, with a mean value of 0.06% and a standard deviation of 0.10. Task performance In brief, acute alcohol use did not exert meaningful effects on Engage RT/Disengage RT in either the cued or the free condition. Similarly, alcohol did not influence timing accuracy or the W-interval. More detailed information can be found in Additional file 1. The interaction between Alcohol and Inhibition Category was not significant (F (1, 15) = 0.29, p = 0.60).
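The AUC measure defined above (the total surface between RP onset and peak) is a simple numerical integral. The study computed it with the R 'stats' package; the equivalent sketch below uses trapezoidal integration in Python, with fabricated toy data for illustration.

```python
import numpy as np

def rp_auc(times_ms, amplitudes, onset_ms, peak_ms):
    """Area between RP onset and peak via trapezoidal integration
    (an equivalent of the R 'stats' computation described above)."""
    mask = (times_ms >= onset_ms) & (times_ms <= peak_ms)
    x, y = times_ms[mask], amplitudes[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Toy example: a negativity growing linearly from -1200 ms to 0 ms.
t = np.arange(-3000, 1001, 2)                       # ms, ~512 Hz sampling
amp = np.where(t >= -1200, (t + 1200) * -0.004, 0.0)
print(rp_auc(t, amp, -1200, 0))                     # -> -2880.0 (uV*ms)
```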
Bayesian repeated-measures ANOVA showed that a model containing only Inhibition Category provided a fit that was 2.3 times better than the model that added the factor Alcohol and 5.8 times better than a model that further added the interaction effect. Together, these results confirmed the significant main effect of Inhibition Category in the absence of main and interaction effects of Alcohol. Summary of EEG results Since the results of the analyses on RP peak amplitude and build-up interval were highly redundant with those of the AUC, these results can be found in Additional file 1. In general, the four ERP indices provided a consistent pattern: the RP was influenced considerably by the factor Inhibition Category but was not influenced by the factor Alcohol. Under free inhibition, the RP began to develop almost 1000 ms earlier than under cued inhibition. Also, under free inhibition, the RP reached higher peak amplitudes than under cued inhibition. Accordingly, the AUC was larger for free than for cued inhibition. Generally speaking, only under the free inhibition condition was there a clear RP before disengagement. These effects were not impacted by the acute effects of alcohol. Discussion In this experiment, we tested how moderate acute alcohol use influences intentional inhibition and stimulus-driven inhibition, at the behavioral as well as the neural level. The RP developed over the frontocentral cortex about 1200 ms before intentional inhibition was effectuated, but not before stimulus-driven inhibition. It turned out that alcohol administration had hardly any effect, either behaviorally or on neural correlates of intentional inhibition and stimulus-driven inhibition. These null findings were corroborated by Bayesian analyses that confirmed there was stronger evidence for the null hypothesis than for the alternative hypothesis. Stimulus-driven inhibition In contrast to previous findings on impaired stimulus-driven inhibition after alcohol intake [67-71, 99, 100], no alcohol effects were observed on stimulus-driven inhibition as measured in the Chasing Memo task. Since the present study did not include an SST or a GNG task, we cannot tell whether the lack of effects is specific to the Chasing Memo task or pertains to our alcohol manipulation in the present sample. A number of potential reasons may explain the discrepancy between the present and previous findings in the literature. First, the doses of alcohol administered in the present study may have been too low to produce manifest alcohol effects. Previous studies have demonstrated effects on ERP components under comparable alcohol doses and sample sizes [101]. However, compared with the flanker task used in those studies, disengaging from visuomotor tracking in the Chasing Memo task was relatively easy, and it has been pointed out that the easier the task, the more alcohol is needed to cause performance impairments [17]. Our conclusions cannot be generalized to the full range of acute intoxication. Second, alcohol effects may be confounded with individual differences in alcohol expectancy effects [102]. For instance, it has been observed that those who expect less alcohol-induced impairment indeed displayed less impairment, irrespective of actual consumption [103][104][105]. Without an additional control group (participants who do not get any alcohol, and who know so) in the current study, it is difficult to distinguish between expectancy and pharmacological effects of alcohol [106].
Third, although alcohol intake resulted in similar BACs across participants, there might still be non-trivial individual differences in the actual impairment instilled by alcohol [106]. Intentional inhibition Previous studies did not examine the EEG effects of alcohol on intentional inhibition. We observed no effects, from the perspective of either stopping impulsivity or waiting impulsivity. The factors discussed above as potentially playing a role in the absence of alcohol effects on stimulus-driven inhibition may also pertain to intentional inhibition, in particular individual differences in the actual impairment caused by alcohol [106]. Indeed, individual data in our study showed that roughly half of the participants had earlier RP onsets under alcohol, while the opposite pattern was observed among the other half. Furthermore, a true effect might have been missed due to low power resulting from the small sample size. Future studies may explore such individual differences more systematically and recruit a larger sample. Second, the requirement to report the W-moment might interfere with the main task at hand (continuing/disengaging tracking). This process required attention shifting (i.e., glancing at the counter) and working memory storage (i.e., keeping the number in memory). Moreover, the reliability of the reported W-moment has been questioned [107]. Therefore, future studies not focused on consciousness may consider discarding this element. General discussion Many studies have investigated the relationship between alcohol use and inhibition, but all previous studies focused on stimulus-driven inhibition, typically tested with varieties of the GNG and SST. Here, we expanded this focus by testing alcohol effects on intentional inhibition in two studies, focused on past-year risky drinking and short-term alcohol use, respectively. Both intentional inhibition and stimulus-driven inhibition were tested. We found no relationship between past-year moderate recreational alcohol use and either type of inhibition, and no differences related to moderate acute alcohol administration. The main finding was that the RP showed an earlier onset and higher peak values for intentional compared to stimulus-driven inhibition, independent of alcohol administration. Regarding stimulus-driven inhibition, its null association with past-year alcohol use is to some extent in correspondence with the literature. Presumably, a threshold effect rather than a linear relationship exists between typical alcohol use and response inhibition. That is, only when the accumulated alcohol consumption surpasses a certain threshold or a diagnosis of AUD is confirmed is long-term alcohol use accompanied by impaired inhibition [108][109][110][111]. Accordingly, our conclusions cannot be readily generalized to the population with AUD. On the other hand, our lack of effects of acute alcohol use on stimulus-driven inhibition is more at odds with previous research. A study by Marczinski et al. (2005) using a cued GNG showed impaired inhibition of a button press (i.e., a discrete motor response) under the influence of alcohol [112]. However, alcohol did not influence inhibition performance if participants had to release instead of press a button (i.e., a continuous movement). This latter response type seems to resemble the ongoing tracking movements in the Chasing Memo task. The employment of discrete go responses may explain why acute effects of alcohol are frequently reported for the GNG and SST [67,69] but not in our task.
Regarding intentional inhibition, our studies represent the first exploration of a potential link with alcohol use and misuse. Neither effects of trait drinking patterns (social/problematic) nor acute alcohol effects were observed. This negative finding coincides with a recent finding in Parkinson patients: three groups of participants (healthy controls, Parkinson patients with and without impulsive-compulsive behaviors) did not differ on intentional inhibition performance measured by the Marble Task [113]. This suggests that populations that typically show comorbid impaired reactive inhibition, such as those with Parkinson's disease, ADHD, and substance use disorder, may still keep their intentional inhibition capability intact. At the neural level, a slow negative potential appeared about 1200 ms before intentional inhibition, and exclusively so, which provides evidence that the RP also reflects the preparation of stopping a motor action. Together with the evidence that the RP also develops prior to processes irrelevant to action [114][115][116] and that its amplitude is influenced by the degree of intentionality [117][118][119], it can be concluded that the RP reflects neural processes related to intention formation rather than motor preparation [114,120,121]. This can also be interesting in relation to the current discussion on the brain disease model of addiction [122] and with respect to the question of whether long-term alcohol-dependent patients show problems in intention formation and/or execution. We acknowledge a number of limitations of our study. First, in the Chasing Memo task, participants were obliged to disengage on all free trials. The moment of disengagement was 'at will', but disengagement at any point during a free trial was mandatory rather than voluntary. If we had added the 'whether' option and let participants determine more freely if and when to disengage, alcohol might still influence decisional aspects of intentional inhibition [123]. Just like the priming effect of alcohol, preload drinking promotes loss of control over further drinking behavior [17]. In that sense, acute alcohol use should increase the probability of accepting another beer rather than affecting when you accept it. We are currently exploring intentional inhibition and effects of alcohol in a modified version of the Chasing Memo task with a 'whether' option added. Second, gender was disproportionally distributed in both experiments. In Experiment I, there were more females than males. We therefore added gender as a covariate in the main analyses and confirmed its null effect. Experiment II included only male participants, given sex differences in metabolic alcohol processing. We cannot be sure whether the current findings generalize to females. Future studies might aim at more gender-balanced samples. Third, our sample size in Experiment II was relatively small, although studies with a similar topic and study design suggest adequate power [77]. Fourth, there is room for the alcohol administration and placebo conditions to be improved: although all participants reported that they had received alcohol in the placebo condition, the reported amount was less than that in the alcohol condition, and the experimenter blind to the alcohol condition may have interacted with participants differently in the two conditions (alcohol/placebo) due to the participants' status (drunk/sober). We acknowledge this as a potential shortcoming, although these are common issues in this field and generally not considered overly detrimental to interpretation. We end by providing a few suggestions for future research in this field.
First, the target population may include heavier binge drinkers and/or alcohol-dependent patients. It has been shown that impairments in inhibitory control after a moderate dose of alcohol are more pronounced in binge drinkers than in non-binge drinkers [124]. This might help explain why, when these individuals become intoxicated, they are less able to refrain from the impulse or desire to consume more alcohol, leading to further binge drinking. Further, one might employ intravenous alcohol administration to keep the BAC at a steady level for a prolonged time [125]. This can help control for the acute tolerance effect of alcohol (reduced impairment at a given BAC on the descending limb) [126]. In addition, alcohol-related cues may be embedded in the task, as they are more salient for heavy drinkers (compared to light drinkers) and can impact inhibitory processes [127,128]. Also, it is interesting to explore whether only a subgroup of drinkers with specific drinking patterns and personalities shows intentional inhibition deficits.

Conclusion

This is the first empirical study on the role of intentional inhibition in relation to alcohol use. In two experiments, we found that neither past-year risky drinking nor moderate acute alcohol administration affected intentional inhibition, suggesting that alcohol did not moderate the ability to stop at will in the present study. Factors that might explain these null findings, such as the lifetime amount of alcohol used, alcohol administration dosage, and research paradigms, were discussed. Caution should be taken when extending these conclusions to AUD populations and higher intoxication levels (e.g., 0.08%). In addition, we found an event-related brain potential, the readiness potential (RP), that appeared 1.2 s before the intentional inhibition of action. No RP was visible before stimulus-driven inhibition. This indicates that the RP might reflect the formation of an intention in general rather than only signifying motor preparation.

Additional file 1: Experiment I: 1) reliability of the questionnaires used; 2) results of the two computer tasks when AUDIT-C was used; 3) results of the stop-signal task for the 86-participant sample. Experiment II: 1) BrAC values at each reading; 2) behavioral findings; 3) EEG results for RP peak amplitude and RP build-up interval.
2020-01-09T09:15:20.481Z
2020-01-07T00:00:00.000
{ "year": 2020, "sha1": "3d0b0e27239c8288e3ade86c5a9061314a9046aa", "oa_license": "CCBY", "oa_url": "https://bmcpsychology.biomedcentral.com/track/pdf/10.1186/s40359-019-0367-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4b73e6c2d057170fe9ce65473423f17819b0d737", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
234480566
pes2o/s2orc
v3-fos-license
TETRACYCLINE ANTIBIOTIC REMOVAL FROM AQUEOUS SOLUTION USING CLADOPHORA AND SPIRULINA ALGAE BIOMASS

Cladophora and Spirulina algae biomass have been used for the removal of tetracycline (TC) antibiotic from aqueous solution. Different operating conditions were varied in a batch process, such as initial antibiotic concentration, biomass dosage and type, contact time, agitation speed, and initial pH. The results showed that the maximum removal efficiencies using 1.25 g/100 ml Cladophora and 0.5 g/100 ml Spirulina algae biomass were 95% and 94%, respectively, at the optimum experimental conditions of temperature 25°C, initial TC concentration 50 mg/L, contact time 2.5 h, agitation speed 200 rpm, and pH 6.5. Characterization of the Cladophora and Spirulina biomass by Fourier transform infrared (FTIR) spectroscopy indicated that functional groups of different components, such as the hydroxyl group (-OH) and amides (N-H stretch), were responsible for the surface adsorption processes. The isotherm study was carried out using the Freundlich, Temkin, and Langmuir models; the data were best fitted by the Langmuir model. Finally, the pseudo-second-order kinetic model best fitted the kinetic data, with a high coefficient of determination (R2 = 0.97 and 0.99 when using Cladophora and Spirulina algae biomass, respectively). The study showed that both Cladophora and Spirulina algae are promising and economical biomass that could be used in a large-scale bioreactor.

INTRODUCTION

Pharmaceutical compounds are considered emerging environmental pollutants that have a potentially harmful effect on the environment and human health. The excessive use of antibiotics has increased their presence in water sources, affecting aquatic ecosystems and the quality of water (2). Antibiotic usage has increased rapidly all over the world. Of particular concern are antibiotic residues in the environment, which can induce antibiotic-resistant genes (ARGs) through extended exposure at relatively low concentrations (9). In these conditions, the environment encourages bacteria to evolve ways to protect themselves, giving rise to ''superbugs'', a term used to describe strains of bacteria that are resistant to the majority of antibiotics commonly used today. Worse, these bacteria, most of which are strains harmless to humans, can then share this resistance mechanism with disease-causing microbes (1). Traditional techniques for the removal of antibiotics from wastewater include evaporation, chemical precipitation, ion exchange, ozone treatment, photochemical oxidation, cation exchange membranes, ultrafiltration, nano-filtration, electro-chemical degradation, reverse osmosis, coagulation, membrane separation, and catalytic oxidation. These processes may be ineffective for large-scale removal of antibiotics (22). The adsorption process is widely used by various researchers for the removal of antibiotics from waste streams, offering significant advantages such as low cost, availability, profitability, ease of operation, and efficiency in comparison with conventional methods, especially from economic and environmental points of view (4). Algal biomass can be used for biosorption of antibiotics from wastewater. In this work, the ability of algae biomass to remove tetracycline from water is studied under different operating conditions using a batch system. Al-taweel (2019) studied the use of pure and mixed algae cultures in free and immobilized forms for removing lead and copper ions from liquid solution.
Batch experiments revealed that the most efficient initial pH value lies between 4 and 5; the required equilibrium time was attained within 60 minutes for all algae types and forms; and the data fitted the Langmuir and Freundlich isotherm models, agreeing best with the Langmuir isotherm with R2 greater than 0.99. For chlorella (CA) and mixed algae (MA), the pseudo-second-order kinetic model fitted the obtained data well, favoring a chemisorption mechanism (8).

MATERIALS AND METHODS

Sorbate (tetracycline): Chlortetracycline, doxycycline, minocycline, oxytetracycline, and tetracycline belong to the TC class (15). Fig. 1 shows the UV-Vis scan for the TC used in this study, performed at the Al-Khawarizmi University laboratory, with a maximum absorbance at a wavelength of 360 nm. Table 1 summarizes the TC characteristics. Wavelength scanning at different TC concentrations yielded the respective absorption spectra used for the determination of TC (15). Stock standard solutions were prepared by dissolving 1 g of the pure substance in distilled water to obtain a TC concentration of 1000 mg/L. The pH value of the solutions was controlled during the experiments by dropwise addition of buffer solution.

Fig. 2. The algae collection location (Cladophora).

A random sample of the collected wet mixed algae was analyzed for species and the content percentage of each type using a microscope at the laboratories of the Biology Department, Science College, University of Baghdad, as given in Table 2. The results showed that two types were found in this sample, with Cladophora algae at the highest percentage. After collection, the algae were washed many times with tap water to remove impurities, dirt, and other unwanted materials (non-vertebrate animals, small worms, crustaceans, bird feathers), then twice with distilled water to ensure cleanliness. The washed algae were left under the sun for three days to dry (20). The dried algae were cut, ground, and sieved to obtain a powder with grain size <63 µm for biosorption in batch experiments. Fig. 3 shows the Cladophora algae biomass. The second type used in this study was pure Spirulina algae biomass (supplied from Amazon), used as a biosorbent for tetracycline removal from aqueous solution as a powder of less than 63 µm particle size. Fig. 4 shows the Spirulina algae biomass.

RESULTS AND DISCUSSION

Fourier Transform Infra-Red (FTIR) Analysis: In infrared spectroscopy measurement, IR radiation is passed through the sample; some of that radiation is absorbed by the sample and some is transmitted. The resulting spectrum represents the molecular absorption and transmission, creating a molecular fingerprint of the sample, which makes FTIR useful for several types of analysis (19). The FTIR technique was used to identify the functional groups (carbonyl, carboxylic, hydroxyl, and others) involved in the biosorption process. The FTIR test can also provide excellent information on the nature of the bands present on the surface of the algae before and after the biosorption process, and it has many advantages as an analytical technique: the test is fast, non-destructive, and requires only small quantities of sample (21). Fourier transform infrared (FTIR) analysis in the region of 4000-400 cm-1 was accomplished at Al-Khawarizmi University. Fig. 5 shows the FTIR spectrum of the algae biomass before and after sorption of TC. Some peaks disappeared, shifted, or decreased in intensity after TC biosorption.
Among the active sites in Fig. 5(a), the hydroxyl group (-OH), amides (N-H stretch), and amines have been suggested to be responsible for the adsorption of TC, while the active sites in Fig. 5(b) were the hydroxyl group (-OH), amides (N-H stretch), carboxyl (C-H aldehyde stretch), and alkyl halides.

Effect of initial TC concentration

Fig. 6 shows a decrease in the uptake rate of tetracycline with rising initial TC concentration. The reduction in efficiency is explained by the saturation of the available reactive adsorption sites on the sorbent surface with increasing initial TC concentration (7). At lower concentrations, all TC molecules present in solution can interact with the available sites, and the removal efficiency was high in comparison with higher concentrations. This can be attributed to the large number of active (binding) sites available at the biosorbent dosage used in the set. Hence, an initial concentration of 50 mg/L was used for the remaining batch experiments.

Effect of Algal Biomass Dosage

One of the important parameters that strongly affects sorption capacity is the biosorbent dosage. This effect was studied by adding different dosages of biosorbent (0.05-1.25 g). The batch tests were conducted with an initial TC concentration of 50 mg/L, agitation speed of 200 rpm, and contact time of 3 h at room temperature (25°C). TC removal was studied over a wide range of algal biomass dosages. As shown in Fig. 7, the maximum dosages of Cladophora and Spirulina used were 1.25 g/100 ml and 0.5 g/100 ml, respectively, with a removal efficiency of 94% for both types. This is logical behavior, because increasing the biosorbent dosage means a greater number of biosorption sites and, consequently, higher removal of contaminant (18). There were no significant changes in the removal efficiency of TC in response to variation of the Spirulina biosorbent dosage from 0.5 to 1.2 g, and this can be attributed to reaching the maximum sorption capacity.

Effect of contact time

The effect of contact time on the removal efficiency of TC is shown in Fig. 8. It was observed that the removal efficiency increases as the contact time increases and remains constant after reaching equilibrium. This is due to the larger available surface area of the biomass at the beginning of the biosorption process; the removal rate then gradually decreased owing to the decrease in vacant sites on the surface of the biosorbent and the formation of repulsive forces (13). The maximum removal of TC was 95% after 150 minutes (2.5 h) of shaking time. Therefore, 2.5 h was taken as the equilibrium time for subsequent experiments.

Effect of pH

The effect of TC solution pH was studied in the range of pH 3-10. Fig. 9 shows clearly that TC removal increased significantly between pH 6 and 7, with the removal percentage reaching around 95%. Removal of TC was low at pH 3, owing to the high concentration of hydrogen ions. These ions can compete with the TC molecules for binding to the available sites on the algae biomass, which have a high affinity for H+ ions. Therefore, the decrease in TC removal can be caused by an increasing proton concentration in the aqueous phase. Consequently, the increase in TC removal (from 41 to 94.5%, and from 22 to 94.5%, by Cladophora and Spirulina algae biomass, respectively) as the pH increases from 3 to 6.5 can be explained on the basis of a decrease in competition between TC and hydrogen species for the binding sites (23). It is clear that the maximum removal efficiency of TC was achieved at an initial pH value of 6.5.
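A minimal sketch of the batch bookkeeping may help here: the removal efficiency and uptake values quoted throughout these experiments follow from the standard mass-balance relations R% = (C0 - Ce)/C0 * 100 and qe = (C0 - Ce) * V/m, which the text does not state explicitly. The short Python example below is illustrative only; the equilibrium concentration used is a hypothetical number, not a measured value from this study.

# Standard batch-adsorption mass balance (illustrative sketch; example
# numbers are hypothetical, not measurements from this study).

def removal_efficiency(c0, ce):
    """Percentage of TC removed from solution (c0, ce in mg/L)."""
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Uptake per gram of biomass, qe (mg/g)."""
    return (c0 - ce) * volume_l / mass_g

# Example: 100 ml of 50 mg/L TC shaken with 0.5 g Spirulina biomass,
# assuming a hypothetical equilibrium concentration of 3 mg/L.
c0, ce, v, m = 50.0, 3.0, 0.100, 0.5
print(f"Removal efficiency: {removal_efficiency(c0, ce):.1f} %")  # 94.0 %
print(f"qe: {adsorption_capacity(c0, ce, v, m):.2f} mg/g")        # 9.40 mg/g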
The pHpzc (point of zero charge) of the Cladophora and Spirulina algae biomass was determined to be 7 and 7.8, respectively (10). Conversely, at pH < pHpzc, H+ ions are transferred to the particle surface and combine with OH- groups, leading to a positively charged algae surface. Under these circumstances, the net surface charge of the algae biomass at pH below about 7 was positive (2).

Effect of agitation speed

The effect of agitation speed on removal efficiency is shown in Fig. 10. It is observed that the removal efficiency increased as the agitation speed increased. This is due to the fact that at higher agitation speed the film thickness decreases, which eliminates the film resistance (14). Indeed, high agitation speed enhances the diffusion of contaminants through the reactive medium, and suitable contact can develop between binding sites and the contaminant (7). In addition, the results indicated that shaking speeds in the range of 100 to 300 rpm were adequate for ensuring maximum TC uptake, with no considerable change beyond these values. Accordingly, the present study found that the agitation speed required for achieving maximum removal efficiency of TC onto algae biomass is 200 rpm.

Isotherm Models

The importance of the sorption isotherm comes from its representation of how antibiotic molecules distribute between the solution and biosorbents at equilibrium (12). Several isotherm models were used for this purpose, including Langmuir and Freundlich, which are considered the most common models (5). Selecting a suitable model depends on fitting the experimental data to the model equations, with the correlation coefficient indicating which models fit the data well.

1- Langmuir Isotherm

The Langmuir isotherm has been the most widely used adsorption isotherm for the adsorption of a solute from a liquid solution. The Langmuir equation relates the coverage of molecules on a solid surface at a fixed temperature. The Langmuir model assumes monolayer adsorption of sorbate onto a surface containing identical groups with homogeneous biosorption energy (10). It is represented by the equation:

qe = qmax * b * Ce / (1 + b * Ce) ... (1)

where Ce (mg/L) is the TC concentration at equilibrium, qe (mg/g) is the amount of TC adsorbed per gram of adsorbent, qmax (mg/g) is the maximum capacity for sorption of TC from the solution, and b (L/mg) is a constant that depends on the binding-site affinity.

2- Freundlich Isotherm

The Freundlich isotherm is an empirical expression (13), written as:

qe = KF * Ce^(1/n) ... (2)

where KF is the Freundlich constant, relative to the adsorption capacity ((mg/g)(L/mg)^(1/n)), and 1/n is a constant indicating the intensity of the adsorption and its favorability. Both n and KF are constants, indicative of the degree of nonlinearity between solution concentration and adsorption, and of the extent of adsorption, respectively. If n = 1, the partition between the two phases is independent of the concentration; 1/n > 1 indicates cooperative adsorption, while 1/n < 1 indicates normal biosorption (20). A Freundlich constant n (adsorption intensity) in the range 1-10 suggests that the bonding affinity between the adsorbate and the biosorbent is strong (7).

3- Temkin isotherm model (Temkin, 1940)

The Temkin isotherm takes into account the adsorbate-adsorbent interaction.
By ignoring extremely low and extremely large concentrations, the model assumes that the heat of sorption (a function of temperature) of all molecules in the layer falls linearly, rather than logarithmically, with coverage (16). Adsorption is characterized by a uniform distribution of binding energies, up to some maximum binding energy (8). The Temkin isotherm equation is given by (11):

qe = BT * ln(KT * Ce) ... (3)

BT = R * T / bT ... (4)

where KT is the equilibrium binding constant (L/g), BT is the Temkin isotherm constant, bT is a constant related to the heat of sorption (J/mole), R is the universal gas constant (8.314 J/mole K), and T is the absolute temperature (K). All experimental data were analyzed and compared using non-linear isotherm models, and the model parameters were evaluated using Microsoft Excel SOLVER software. Furthermore, the sum of squared errors (SSE) and the coefficient of determination (R2) were used to measure the goodness of fit. SSE is defined as (10):

SSE = Σ (qe,calc − qe,exp)^2 ... (5)

In the present study, Cladophora and Spirulina are characterized for their ability in TC removal, considering the parameters affecting the removal of TC from water, such as pH, algal biomass dose, time, agitation speed, and initial concentration. The adsorption of tetracycline on the adsorbents is illustrated by the isotherm study, as shown in Figs. 11 and 12 and Table 3.

Sorption kinetics models

The kinetics of tetracycline adsorption onto algae biomass (Cladophora, Spirulina) were investigated using the pseudo-first-order and pseudo-second-order models, with the experimental data at various initial concentrations. The values calculated from the application of these models are tabulated in Table 4. The values of R2 (coefficient of determination) and qe calculated from the second-order kinetic model show a good fit with the experimental data compared to the other model. The linear plot for each biosorbent did not pass through the origin; as a result, intraparticle diffusion was not the rate-limiting step (3), while the second-order kinetic model suggests that the rate-limiting step may be chemical sorption (23). The linear form of the pseudo-first-order kinetic model can be expressed by the following equation (24):

ln(qe − qt) = ln(qe) − k1 * t ... (6)

where qt and qe (mg/g), respectively, are the adsorption capacity at any time t and at equilibrium, and k1 (1/min) is the pseudo-first-order rate constant. From the kinetic model data in Table 4 and Fig. 13, it can be concluded that the data for the adsorption of TC are poorly fitted by this model. However, the second-order kinetic model (Fig. 14), which reflects a chemisorption process, relates the rate to the difference between the equilibrium vacant adsorptive sites and the occupied sites (6, 17). The second-order kinetic model can be expressed by the following linear equation:

t/qt = 1/(k2 * qe^2) + t/qe ... (7)

where qt and qe (mg/g), respectively, are the adsorption capacity at any time t and at equilibrium, and k2 (g/mg min) is the pseudo-second-order rate constant. By plotting ln(qe − qt) versus t and t/qt versus t in the previous equations (Eqs. (6) and (7)), all the adsorption kinetic parameters can be determined from the slope and the intercept. The data shown in Table 4 indicate that the adsorption of TC fits the second-order kinetic model. Furthermore, the differences between the calculated and experimental qe values are lower for the second-order kinetic model than for the first-order kinetic model for TC.
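Since the isotherm and kinetic fits above were done with Microsoft Excel SOLVER, a Python equivalent may be useful to readers reproducing the analysis. The sketch below fits Eqs. (1)-(3) by non-linear least squares and extracts the pseudo-second-order parameters from the linear form of Eq. (7); it is illustrative only, and the Ce/qe and t/qt data are made-up placeholders, not values from this study's tables.

# Illustrative re-implementation of the fitting workflow (not the authors'
# actual Excel SOLVER analysis; all data arrays are placeholders).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, b):    # Eq. (1)
    return qmax * b * ce / (1.0 + b * ce)

def freundlich(ce, kf, n):    # Eq. (2)
    return kf * ce ** (1.0 / n)

def temkin(ce, kt, bt):       # Eq. (3), with BT = RT/bT folded into bt
    return bt * np.log(kt * ce)

ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium conc., mg/L
qe = np.array([3.1, 5.8, 8.0, 9.1, 9.6])      # uptake, mg/g (placeholder)

for name, model, p0 in [("Langmuir", langmuir, (10.0, 0.1)),
                        ("Freundlich", freundlich, (2.0, 2.0)),
                        ("Temkin", temkin, (1.0, 2.0))]:
    popt, _ = curve_fit(model, ce, qe, p0=p0, maxfev=10000)
    pred = model(ce, *popt)
    sse = np.sum((pred - qe) ** 2)                      # Eq. (5)
    r2 = 1.0 - sse / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: params={np.round(popt, 3)}, SSE={sse:.3f}, R2={r2:.3f}")

# Pseudo-second-order kinetics, Eq. (7): regress t/qt on t;
# slope = 1/qe and intercept = 1/(k2 * qe^2).
t = np.array([10, 30, 60, 90, 150], dtype=float)        # min (placeholder)
qt = np.array([3.0, 6.0, 8.0, 8.8, 9.3])                # mg/g (placeholder)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_calc = 1.0 / slope
k2 = 1.0 / (intercept * qe_calc ** 2)
print(f"pseudo-2nd-order: qe = {qe_calc:.2f} mg/g, k2 = {k2:.4f} g/(mg min)")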
According to the fit of the data to the second-order kinetic model, the adsorption of TC by Cladophora and Spirulina biomass may be chemisorption.

CONCLUSION

The ability of Cladophora and Spirulina biomass to remove TC from water samples reached 94%. Moreover, TC removal reached equilibrium within a 2.5-hour contact time for both types of algae. The optimum solution pH was 6.5, agitation speed 200 rpm, and TC concentration 50 ppm. Cladophora and Spirulina biomass doses of 1.25 g/100 ml and 0.5 g/100 ml, respectively, were shown to be the optimum. The data were best fitted by the Langmuir model. According to the fit of the data to the second-order kinetic model, the adsorption of TC by Cladophora and Spirulina biomass may be chemisorption.
2021-05-13T08:58:34.870Z
2021-04-19T00:00:00.000
{ "year": 2021, "sha1": "901dea02c1b5e1d205a6bc2e8ff944a06a6957a7", "oa_license": "CCBY", "oa_url": "https://jcoagri.uobaghdad.edu.iq/index.php/intro/article/download/1295/876", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "901dea02c1b5e1d205a6bc2e8ff944a06a6957a7", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
240516100
pes2o/s2orc
v3-fos-license
Productive flight activity of bees in the active period in the conditions of Vinnytsia region

Introduction

One of the promising industries is beekeeping. In addition to Ukraine being among the top ten world leaders in honey export sales, crop productivity also depends on bees. Beekeeping products, namely honey, wax, bee pollen, perga, royal jelly, bee venom, propolis, and others, are widely used in various economic sectors (Razanova 2018). The most important ecological service of beekeeping for the planet's ecosystem is the preservation of biological diversity, due to the ability of bees to pollinate flowering plants (Skoromna and Razanova 2019).

Honey bees are the best among plant pollinators due to the ability of humans to control certain stages of the bee colony's life: to control the number of flying individuals and their activities, and to stimulate or limit colony growth (Adamchuk 2020). The bee collects nectar and, during its flight from flower to flower, transfers pollen for pollination. This process positively affects the viability of plants, increases their yields, and increases the number of plants on Earth (Hung et al 2018). When bees pollinate the flowers of crops, the yield increases by 30-60% and above, and fruit setting, preservation of the ovary, the quality of fruits and seeds, and their weight all increase (Cherhyk et al 1972). Bees are the only pollination factor that humans can control. The use of bee pollination in greenhouses makes it possible to obtain high yields of vegetables at the lowest cost and avoid laborious work on artificial pollination (Haeva 2015).

The diversity of wildlife is an indicator of the stability of ecosystems; therefore, its conservation is one of the key tasks of modernity. Changes in natural conditions and anthropogenic activity have caused a decrease in range and alterations in the functional organization of many plant species (Mudrak et al 2018; Mudrak et al 2019; Moskalets et al 2020; Pantsyreva et al 2020). The entry of various types of anthropogenic pollution into the atmosphere creates a high probability of toxic elements entering beekeeping products during the bees' active collection of nectar and pollen (Stroikov 1988).

The flight activity of bees in the colonies depends on the biological condition of the bee colonies, the presence of honey collection, the size of the nectar flow, the duration and temperature of the environment, wind speed, and precipitation (Abrol 2006). The optimum temperature for the flight of bees to collect food is from 17 to 32 °C. Flight activity in bees of the summer generation at an outdoor temperature of 32 °C is two times higher than at 21 °C. Bees that have emerged from hibernation show flight activity at quite low temperatures, but not below 8 °C. Bees' increased flight activity at 35 °C and above is due to the additional need for water (Kryvtsov and Lebedev 2019). Most species of honey plants of the Vinnytsia region begin to produce nectar at an air temperature of 14-15 °C, although cherries secrete nectar at 10 °C. With increasing temperature above this limit, there is an increase in the nectar-producing activity of plants, which reaches its maximum in the range of 25-35 °C. A further increase in air temperature leads to a decrease in the amount of nectar released by plants.
The flight activity of bees decreases to 40-45% at wind speeds of 5-8 m/s; at 12-15 m/s only 11-18% of bees fly out of the hive; and above 24 km/h flight stops. Wind also reduces the amount of nectar released by plants, especially in species with open nectaries (Losiev and Holovetskyi 2013).

Abstract

The study aimed to identify the relationship between pollen collection, nectar collection, and the seasonal dynamics of brood in colonies of the Ukrainian bee breed, in order to study their activity across the periods of the active season. We compared the collection of nectar and pollen from garden honey plants, white acacia, and sunflower. The number of bees bringing pollen and nectar to the hive was recorded. The active work of bees in collecting pollen is associated with the queen's egg-laying and the amount of brood in the hives. Bee colonies increase their flight activity for collecting pollen in early spring and summer. By the beginning of the main honey harvest, pollen collection is minimized, especially during the flowering of white acacia, and the bees switch to collecting nectar. Analysis of the results allows us to conclude that the daily dynamics of pollen collection by bees increase through spring into May and June.

Keywords: bees, flight activity, fruit trees, sunflower, white acacia

The optimum humidity for bees' flight at 20-25 °C is 20-60%. Humidity affects the amount of nectar produced by plants and determines the concentration of sugars in it. As a rule, drought leads to an increase in the sugar content of nectar and reduces its amount (Moquet et al 2017). Precipitation determines the nature of the nectar secretion of plants depending on its intensity. Short-term rains affect the intensity of nectar secretion and nectar quality, leading to increased honey collection, especially at temperatures close to 28 °C. Prolonged rains cause a decrease in the concentration of sugars in the nectar, leading bees to leave the plants they visited before rainfall.

The duration and range of bee flights also depend on many factors. The area of honey collection from which bees intensively collect nectar is within a radius of 2-2.5 km. The age of bees does not affect the flight range (Kryvtsov and Lebedev 2019). Depending on the level of honey collection and the distance from the source to the hive, the duration of bees' flights varies from 15 to 103 minutes. When collecting nectar, the flight duration is 10-60 minutes, and when collecting pollen, 6-30 minutes. During the day, a bee makes an average of 8-10 flights. On one flight, a bee brings 30-40 mg of nectar or 10-15 mg of pollen into the hive. The feed cost of the flight activity of an average bee colony during the season is 28-30 kg, and for the life and work of bees in the hive, 48-52 kg per year (Kryvtsov and Lebedev 2019).

The beginning of the active season in the central part of the Forest-Steppe falls in April, when the air temperature reaches 12 °C. The whole honey harvest season for the bee colonies is divided into five periods (Kryvtsov and Lebedev 2017; Kryvtsov and Lebedev 2019). The first lasts from early spring to mid-May. Flight activity of bees at this time is, as a rule, low. The nectar productivity of the vegetation is small, so a significant part of the bees brings water and pollen to the nest. Families accumulate the largest amount of bee bread.
The second period is the last decade of May to the first two decades of June. In favorable weather, the stocks of honey in the nests noticeably increase. The duration of the flight day rises to 9 hours and, accordingly, so does the intensity of flight. The third period is at the end of June and the beginning of July. This time is characterized by a significant amount of nectar brought by bees into the nests. The fourth period is the period of the main honey harvest, and its onset depends on weather conditions in June. This period begins in the first decade of July and can last until the last decade of August. The duration of the flight day increases to 11 hours. The fifth period is autumn, during which very little supporting nectar enters the nest. The pollen flow can sometimes last until the last autumn flight, in the second half of October.

The type of honey harvest is determined by the set of species of honey plants, their number, and the weather conditions during their flowering. According to the nature of the honey harvest, the forage conditions can be divided into two main types: those that provide a supporting honey harvest and those that provide the main honey harvest. The main honey harvest is characterized by the flowering of the maximum number of honey plants with high nectar productivity and daily hive weight gains exceeding 2 kg. During the supporting period, the daily gain of the hive does not exceed 1 kg.

The biological processes that underlie the growth and development of bee colonies and their food collection have always attracted the attention of scientists and practitioners. Flight activity is an indirect indicator of the building activity of bees, because their release of wax depends proportionally on the amount of food entering the nest. Therefore, the purpose of the research was to study the flight activity of bees in different periods of honey collection.

Materials and Methods

The research was carried out in an apiary of the Vinnytsia region. During the experiment, the efficiency of the bees' use of the honey harvest in the zone of their productive flight was determined. The condition of the bee colonies, their development, and their productivity were also determined. Research methods used: zootechnical (assessment of bee colonies, their productivity, determination of the honey supply of fodder resources), phenological (flowering time of plants), ethological (flight-harvesting activity of bees), and statistical (biometric processing of research materials) (Brovarskyi et al 2017). Three bee colonies of medium strength were used to assess the honey harvest conditions of fruit trees, white acacia, and sunflower. The count was performed every hour. Honey-producing conditions and nectar-bearing properties of plants were evaluated according to the size of the nectar flow, i.e., the amount of nectar that enters the hive in one day or over a certain flowering period of particular honey plants. The flight activity of bees was studied by counting the number of individuals arriving at the hive over 3-minute intervals. The amount of brood in the families was determined using a frame grid with 5x5 cm squares, each containing 100 cells. Biometric processing of the research data was carried out according to the method of N.A. Plokhinsky using MS Excel built-in statistical functions.
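To make the two counting procedures above concrete, the short Python sketch below converts grid-square counts into an estimated number of brood cells and summarizes 3-minute arrival counts. All numbers are hypothetical illustrations, not data from this study, and the authors' actual biometric processing used MS Excel.

# Illustrative sketch of the counting procedures (hypothetical numbers).
import statistics

# Flight activity: bees arriving at the hive entrance per 3-minute interval
# (hypothetical counts for one observation day).
arrivals_per_3min = [52, 43, 44, 31, 39, 37]
print("mean arrivals per 3 min:", round(statistics.mean(arrivals_per_3min), 1))

# Brood estimation: a frame grid of 5x5 cm squares, each square = 100 cells.
squares_with_brood = 27          # hypothetical count of squares with brood
print("estimated brood cells:", squares_with_brood * 100)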
Results and Discussion

In the conditions of the Vinnytsia region, the replacement of overwintered bees by the young generation ends in the first period, and the strength of families does not yet change. The second period is a period of intensive growth, which lasts until mid-June in strong families and much longer, until early July, in weak ones. The third period, the accumulation of young bees before the honey harvest, lasts all of July. Strong families can begin productive use of the honey harvest, while weak ones use part of the main nectar flow for their own development. At the end of July, the number of bees in the families reaches its maximum. The fourth period is the period of preparation for winter (August-September). During this period, the bulk of the older-generation bees die off, and the strength of the bee colonies continues to decrease gradually. Families that had more bees at the beginning of the season retain this advantage in the fall.

The daily flight activity of bees depends on the type of plant and the weather conditions. The largest amount of bee pollen is collected during the flowering of early spring honey plants, fruit and berry crops, and weeds. This pattern continues until the flowering of white acacia. The increased activity of bees in collecting pollen is associated with the consumption of bee bread stocks, because during this period the bees are intensively rearing brood. The number of flying bees and, accordingly, the frequency of plant visits by bees depend on the strength of the colonies (Adamchuk 2020). The largest amount of brood is reared by bees that consume fresh bee bread or pollen (Druzhbiak and Kyryliv 2010). The largest amount of brood in the bee colonies occurred in late May-early June, and accordingly, during this period the bees brought the most pollen to the nest: 271.38 g (Figure 1). Bee colonies collect significantly more pollen in the spring than in the fall, since the need for protein feed to rear the brood forces the bees to work harder at collecting pollen. During the spring period (March-May), bees brought 184.2 g of pollen to their nest; in summer, 449.91 g, which is 265.71 g more than in the spring period (P < 0.001); and in autumn, 101.33 g, which is 82.87 g less (P < 0.01).

The strength of the bee colony affects the percentage of pollen collected from different plant species. Medium-strength bee colonies are especially active in collecting pollen, with flight activity doubling (Mishchenko 2015). Due to the high need for protein feed, strong families collect it from all plant species within the productive range, while medium and weak families collect from plants growing near the apiary. Bee colonies show high pollen-collection activity in the spring, when the bees build up before honey collection. Pollen collection by bees weakens in the post-spring period of the active season, during which the necessary stock has been created in the nest; instead, much of the bees' effort switches to collecting nectar. While the percentage of bees engaged in pollen collection in the spring was 38.7% of the total number of arrivals at the hive, during the summer honey harvest this figure decreased by 20.3% (P < 0.001). Before the main honey harvest, the pollen activity of bees decreased even more: compared to the preceding period of summer honey harvest, the number of individuals decreased by 4.2% (P < 0.01), and compared with the spring period, by 24.5% (P < 0.001) (Table 1).
During daylight hours, there was a high intensity of pollen collection during the flowering of fruit trees. The largest number of bees arriving with pollen loads was observed from 10:00 to 18:00: from 52 down to 31 arrivals in 3 minutes (Figure 2). The peak of pollen collection occurred at 11:00 (52 arrivals) and at 16:00-17:00 (43-44 arrivals of bees with pollen loads). However, in the period from 13:00 to 14:00, the activity of bees in collecting nectar (39...37 arrivals) (P < 0.001) was higher than their work in collecting pollen (27...31 arrivals). Overall, the activity of bees in collecting nectar was much lower than in collecting pollen, because during this period mainly honeysuckle blooms, which has low nectar productivity and high pollen productivity (Figure 2).

During the flowering of honey plants with high nectar productivity, the work of bees in collecting pollen is minimized. During the flowering of white acacia and sunflower, bees begin to intensify their work on collecting nectar long before dawn. In the honey harvest from white acacia, the flight activity of bees in collecting pollen is significantly reduced. The redistribution of functions in the procurement of protein and carbohydrate feeds contributed to increased nectar collection. The collection of pollen from white acacia is almost uniform (Figure 3). In the evening (16:00-21:00), the number of bees arriving with pollen loads was much smaller than in the middle of the day. The peak in the collection of bee pollen occurred from 9:00 to 13:00 (33-55 bees), and from 14:00 to 20:00 the activity remained at almost the same level (26-35 arrivals). The flight activity of bees in collecting nectar gradually increased from morning until 14:00-15:00, from 11 to 86 arrivals in 3 minutes. From 16:00 until evening, nectar-collecting activity decreased, down to 31...12 arrivals at 20:00-21:00.

Bees visited sunflowers most actively from 10:00 to 16:00, although the flowers secrete nectar throughout the day (Figure 4). The dynamics of the flight activity of bees on sunflower differ radically from the previous periods. Pollen-collection activity is uneven: from 6:00 to 9:00 it increases, then decreases until 12:00, and from 12:00 to 18:00 the bees do not collect pollen; instead, they intensify their work on collecting nectar. Pollen-collection flights resume in the evening, from 18:00 to 21:00. Bees collected nectar from sunflowers from 9:00 to 19:00.

Conclusions

During the summer, the bees brought 265.71 g more pollen than in the spring period, and during the autumn, 82.87 g less. During the summer honey harvest, the percentage of bees engaged in pollen collection decreased by 20.3%, and by the main honey harvest, by 24.5%. During the flowering of honey plants with high nectar productivity, the work of bees in collecting pollen is minimized. The peak of bee pollen collection is at 10:00-11:00 for fruit trees, 9:00-13:00 for white acacia, and 6:00-9:00 for sunflower. In the honey harvest from white acacia, the flight activity of bees in collecting pollen decreases while nectar collection increases. On sunflower, from 12:00 to 18:00 bees do not collect pollen, only nectar.

Figure 1. Dependence of pollen collection on the amount of brood in the beehive.
Figure 2. Daily dynamics of flight activity of bees during the flowering of fruit trees.
Figure 3. Daily dynamics of flight activity of bees during the flowering of white acacia.
Figure 4. Daily dynamics of flight activity of bees during the flowering of sunflower.
Table 1. Pollen collection of bees in different periods of honey collection.
2021-10-20T16:30:02.862Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5d04487b10f2b9f51992d78e7c5a0f49538e7c97", "oa_license": null, "oa_url": "https://www.jabbnet.com/article/10.31893/jabb.21038/pdf/jabbnet-9-4-2138.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "32aec593f3fdc7d3fccce286adacb362aa772f3b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
235076820
pes2o/s2orc
v3-fos-license
Familial dilated cardiomyopathy with RBM20 mutation in an Indian patient: a case report

Background: Dilated cardiomyopathy (DCM) is a disease of the heart muscle characterized by ventricular dilation and a left ventricular ejection fraction of less than 40%. Unlike hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), DCM-causing mutations are present in a large number of genes. In the present study, we report a case of early-onset DCM associated with a pathogenic variant in the RBM20 gene in a patient from India.

Case presentation: A 19-year-old Indian male diagnosed with DCM was suggested for heart transplantation. His ECG showed LBBB, and echocardiography showed an ejection fraction of 14%. He had a sudden cardiac death. A detailed family history revealed it to be a case of familial DCM. Genetic screening identified the c.1900C>T variant in the RBM20 gene, which leads to a missense change of amino acid 634 (p.Arg634Trp).

Conclusion: To the best of our knowledge, the variant p.Arg634Trp has previously been reported in Western populations, but this is the first case of p.Arg634Trp in an Indian patient. The variant has been reported to be pathogenic with an early age of onset; therefore, close clinical follow-up should be done for family members carrying the variant.

Background

Dilated cardiomyopathy (DCM) is characterized by ventricular dilation, impaired systolic function, reduced myocardial contractility, and a left ventricular ejection fraction of less than 40%, with a frequency of 1:250 or greater [1]. Most DCM cases are sporadic, but approximately 30-48% have a positive family history [2] with an autosomal pattern of inheritance. Although more than 60 genes are linked with DCM [3], RBM20 is associated with familial DCM and leads to an early age of onset and high mortality [4]. In the present study, we report the heterozygous variant c.1900C>T in a severe case of DCM from India. This is the first case from India with c.1900C>T leading to sudden cardiac death.

Case presentation

A 19-year-old male was diagnosed with dilated cardiomyopathy. After 12 months, the patient was referred to AIIMS New Delhi with shortness of breath and was admitted to the AIIMS ICU. His blood pressure was 94/52 mmHg and heart rate 73 beats per minute at the time of admission. NT-proBNP, TropT, and CRP levels at the time of admission were 1881 pg/ml, 11.1 pg/ml, and 3 mg/ml, respectively. SGOT and SGPT levels were 76 and 109 units/l, respectively. The patient also had a history of celiac disease. Echocardiographic screening showed severe left ventricular systolic dysfunction with an ejection fraction of 14%, and the electrocardiogram showed left bundle branch block (LBBB) (Fig. 1). The patient had no evidence of inflammation, which was confirmed by endomyocardial biopsy and cardiac magnetic resonance imaging (Fig. 2). The patient was treated with steroids, IV immunoglobulins, and IV inotropic agents such as dopamine and dobutamine at admission, and was finally discharged after being stabilized. The patient was then put on diuretics, carvedilol, and sacubitril-valsartan and was advised to undergo next-generation sequencing, which showed the heterozygous variant c.1900C>T in the RBM20 gene, leading to a missense change of amino acid 634 (p.Arg634Trp). The patient was readmitted after a month and was listed for heart transplantation but had a sudden cardiac death after 3 months.
A detailed family history revealed that one of the patient's cousins (III 19), diagnosed with DCM, had sudden cardiac death at the age of 26 years, and his aunt (II 1), aged 60 years, was also affected with DCM (Fig. 3). Further, the family was advised to undergo echocardiography screening and Sanger sequencing for the variant c.1900C>T. Echocardiography screening revealed clinical features of DCM in the deceased cousin's father, aged 50 years (II 9), and his sister, aged 26 years (III 22). Both of them were asymptomatic and were put on beta-blockers. Sanger sequencing revealed that the patient's father (II 11), sister (III 24), brother (III 25), uncle (II 9), and cousins (III 12, III 14, III 15, III 21, III 22) were carriers of the variant c.1900C>T. The deceased cousin's 4-year-old son was also a carrier of the variant (IV 14). The wife of the deceased cousin had been remarried to his brother (II 21); she was expecting a child and was advised to undergo fetal screening for the variant, but she refused.

Discussion

RBM20 is a regulator of heart-specific alternative splicing of the Titin (TTN) gene, which encodes the largest protein in mammals. Titin protein plays an important role in generating the passive tension of cardiomyocytes; thus, regulation of alternative splicing of titin is important for normal heart functioning [5]. In an animal model study, it has been reported that deletion of RBM20 leads to the formation of an unusually large Titin protein, thus leading to DCM [6]. Mutations in RBM20 account for 2-3% of familial and sporadic dilated cardiomyopathy. Penetrance, age of onset, and severity can vary among patients with the same variant within a family, and also between identical twins. RBM20-associated DCM leads to arrhythmias, early age of onset, high mortality, and high penetrance in most cases. In most cases, patients with RBM20 variants undergo heart transplantation or implantation of an ICD [7]. The gene acts in a highly gender-specific manner and affects males more [7]. RBM20 comprises 14 exons, and the variant c.1900C>T is present in the arginine/serine (RS)-rich region in exon 9. The residue affected by the c.1900C>T variant is highly conserved among species, and the variant was reported earlier by Bruch et al. [4] among white European DCM patients; it was absent in 480 control samples [4]. In silico analysis by Mutation Taster, PolyPhen2, SIFT, and FATHMM has predicted the c.1900C>T variant to be damaging [8]. The mechanism by which RBM20 leads to DCM is unclear, but it is predicted that the RS-rich region is involved in protein-protein interaction, and any mutation in this region may affect the ability of the RBM20 protein to interact with other spliceosome proteins, thus disrupting the normal RNA splicing mechanism [9]. There is a strong physicochemical difference between arginine and tryptophan, which is likely to impact secondary protein structure. To the best of our knowledge, this is the first case among the Indian population with the c.1900C>T variant leading to severe heart failure and sudden cardiac death at a very early age. The penetrance of the variant can be observed in all generations, but the severity is higher among males. Since family members remain asymptomatic in most cases, a detailed family history and echocardiographic screening should be done along with genetic screening in cases of familial DCM. Family members who are carriers of the variant c.1900C>T should undergo echocardiography screening once annually.
Pregnant women in the family are suggested to undergo fetal screening for the variant.

Conclusion

We conclude that variants in the RBM20 gene lead to early-onset DCM, which can result in sudden cardiac death or the need for heart transplantation. Therefore, close clinical follow-up should be done in families with RBM20 variants. Male family members with RBM20 variants should undergo echocardiography screening frequently, whereas females can be managed more conservatively. RBM20 variants lead to arrhythmia; therefore, early ICD implantation and antiarrhythmic drug therapy can be options for treatment.
2021-05-22T13:41:32.503Z
2021-05-22T00:00:00.000
{ "year": 2021, "sha1": "7b747764c71ccedf04f28b3a968b116d3ae934ef", "oa_license": "CCBY", "oa_url": "https://tehj.springeropen.com/track/pdf/10.1186/s43044-021-00165-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b747764c71ccedf04f28b3a968b116d3ae934ef", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257603314
pes2o/s2orc
v3-fos-license
Integrating interferon-gamma release assay testing into provision of tuberculosis preventive therapy is feasible in a tuberculosis high burden resource-limited setting: A mixed methods study

The World Health Organization recommends the scale-up of tuberculosis preventive therapy (TPT) for persons at risk of developing active tuberculosis (TB) as a key component to end the global TB epidemic. We sought to determine the feasibility of integrating testing for latent TB infection (LTBI) using interferon-gamma release assays (IGRAs) into the provision of TPT in a resource-limited, high-TB-burden setting. We conducted a parallel convergent mixed methods study at four tertiary referral hospitals. We abstracted details of patients with bacteriologically confirmed pulmonary tuberculosis (PBC TB). We line-listed household contacts (HHCs) of these patients and carried out home visits where we collected demographic data from HHCs and tested them for both HIV and LTBI. We performed multi-level Poisson regression with robust standard errors to determine the associations between the presence of LTBI and characteristics of HHCs. Qualitative data were collected from health workers and analyzed using inductive thematic analysis. From February to December 2020 we identified 355 HHCs of 86 index TB patients. Among these HHCs, uptake of the IGRA test was 352/355 (99%) while acceptability was 337/352 (95.7%). Of the 352 HHCs tested with IGRA, the median age was 18 years (IQR 10-32), 191 (54%) were female, and 11 (3%) were HIV positive. A total of 115/352 (32.7%) had a positive IGRA result. Among HHCs who tested negative on IGRA at the initial visit, 146 were retested after 9 months, and 5 (3.4%) of these tested positive for LTBI. At multivariable analysis, being aged ≥ 45 years [PR 2.28 (95% CI 1.02, 5.08)], being employed as a casual labourer [PR 1.38 (95% CI 1.19, 1.61)], spending time with the index TB patient every day [PR 2.14 (95% CI 1.51, 3.04)], being a parent/sibling to the index TB patient [PR 1.39 (95% CI 1.21, 1.60)], and sharing the same room with the index TB patient [PR 1.98 (95% CI 1.52, 2.58)] were associated with LTBI. Implementation challenges included high levels of TB stigma and difficulties in following strict protocols for blood sample storage and transportation. Integrating home-based IGRA testing for LTBI into provision of TB preventive therapy in routine care settings was feasible and resulted in high uptake and acceptability of IGRA tests.

Background

Tuberculosis (TB) is among the top ten causes of morbidity and mortality. In 2019, an estimated 10.0 million people fell ill with TB, and approximately 1.5 million people died from the disease in the same year [1]. Further, about a quarter of the world's population (approximately 2 billion persons) is infected with latent TB [2]. Among these, 10-15% will progress to active disease in their lifetime, usually within two years following exposure [1]. The risk of disease progression is increased by certain conditions, e.g., age and immunosuppressive states like HIV, diabetes, cancer, and malnutrition [3,4].
Consequently, the World Health Organization (WHO) outlined provision of TB preventive therapy (TPT) for persons at risk of developing active TB as one of the key components in its strategy to end the global TB epidemic by 2035 [5]. In line with this provision, the WHO updated its guidelines for programmatic management of latent TB infection (LTBI) to recommend TPT for HIV-negative household contacts (HHCs) older than 5 years in whom active TB has been ruled out. The guidelines also recommend testing for LTBI using interferon-gamma release assays (IGRAs), where feasible, to identify individuals who would benefit most from TPT [6]. In Uganda, the WHO symptom screen remains the mainstay for ruling out active TB among persons in close contact with patients with confirmed TB. Although immunological tests such as the Tuberculin Skin Test (TST) have better sensitivity and specificity than the WHO symptom screen, their widespread use is limited by the need for cold chain maintenance, inter-reader variability, and low specificity due to cross-reactivity with the Bacille Calmette-Guerin (BCG) vaccine and other non-tuberculous mycobacteria. The interferon-gamma release assay (IGRA) is an alternative immunological test for the presence of LTBI which uses whole blood. This test has several advantages over the TST because its interpretation is not user dependent, and the test does not cross-react with the BCG vaccine, resulting in higher specificity [7]. We aimed to explore the feasibility of incorporating LTBI screening using an IGRA test (QuantiFERON-TB Gold Plus, QFT-Plus) into the national algorithm for management of LTBI among HHCs older than 5 years in Uganda.

Study setting

Between February and December 2020, we conducted a parallel convergent mixed methods study at four tertiary referral hospitals. To get a fair representation of urban and rural settings, we selected one national referral hospital based in the capital city Kampala (Mulago national referral hospital) and three tertiary referral hospitals (RRH) based in the East (Soroti regional referral hospital), Northwest (Arua regional referral hospital), and West (Hoima regional referral hospital) of the country (Fig 1). After accounting for the 10% of HHCs whom we assumed would have a positive symptom screen for active TB disease, the required sample size was 424 HHCs. We assumed that each index TB patient would have 4 household members [8], and thus, based on the sample size of 424, 106 index TB patients were needed to accrue this sample size. However, we attained 352 HHCs from 86 index TB patients due to the limited availability of test kits (see [9] for the sample size justification). We then used sampling proportionate to size to determine the number of patients to be selected from each hospital. For each hospital, we used systematic random sampling to select the required number of index TB patients. One index TB patient declined study participation due to non-disclosure to a new partner and was replaced with the next consecutive eligible index TB patient at the specific study site. Consequently, 86 index TB patients' homes were visited, maintaining sampling proportionate to size for each of the participating hospitals. All HHCs who were eligible for the study and provided informed consent were included in the study.

Data collection

Selection of household contacts. Data collection among HHCs was carried out between February and December 2020 after obtaining permission from index TB patients to visit their homes to carry out household contact tracing.
The study team consisted of qualified health workers who underwent 4 days of training on the study protocol and procedures prior to implementation. A team comprising a clinician (a nurse, clinical officer, or doctor), a counselor, and a laboratory technician/phlebotomist conversant with the local dialect visited the index TB patient's home on a scheduled day and requested HHCs to consent to participate in the study. Detailed information about the study was provided, and consent or assent for screening and enrolment into the study was sought. The study team line-listed all HHCs who consented to study participation, excluding those who were <5 years old, had a history of TPT within the past two years, or were currently on TB treatment. We screened HHCs using the WHO symptom screen. This involved asking the study participants if they had cough of any duration, weight loss, fevers, or night sweats. For all HHCs without signs and symptoms of TB, we collected sociodemographic data and performed home-based blood sample collection for LTBI testing using QFT-Plus (manufactured by QIAGEN, Germany) and HIV counselling and testing (if HIV status was reported as negative or unknown). The study team phlebotomist collected five milliliters (ml) of whole blood: four ml for the IGRA test and 1 ml for HIV 1 & 2 testing using the national testing algorithm. Blood samples collected in the capital city (Kampala) were transported in QFT-Plus blood collection tubes within the recommended 16 hours to the central laboratory, while blood samples collected at distant study sites were kept at room temperature for at most three hours and transported in lithium heparin tubes in ice-cold boxes maintained at 2-8 °C, reaching the central public health laboratory within 48 hours. In addition, we collected information on the duration and nature of contact with the index patient. Data were collected electronically using the Open Data Kit (ODK). All asymptomatic HHCs who tested positive on the IGRA test were initiated on six months of isoniazid preventive therapy (IPT), while those who tested negative on the initial IGRA test had a second home-based IGRA test performed after nine months. The repeat IGRA test was initially planned at 6 months to rule out LTBI; as a result of COVID-19 travel restrictions, it was performed at 9 months. Those found to be positive on the second test were initiated on TPT.

Qualitative data. Qualitative data were collected at the four tertiary hospitals through focus group discussions (FGDs) and key informant interviews (KIIs). Two FGDs were held for the participants from Mulago hospital because they had large teams that met the criteria for holding an FGD. KIIs were conducted across the other RRHs. The days for the FGDs were specially arranged, and the participants were informed of the agenda, date, and approximate duration of the meeting beforehand. The FGDs were conducted in English. All discussions were audiotaped and transcribed. Participant identifiers were not used, but individual participants provided written informed consent and were assigned codes; e.g., five group members would be assigned 01-05. Individual responses in each group were coded by item. Using a phenomenological approach, we explored the experiences of health workers, focusing on their experiences during IGRA study implementation. We purposively sampled health workers who had been involved in contact tracing and implementation of IGRA testing.
Sampling was based on purposeful maximum variation involving distinct categories of participants, such as nurses, clinicians, and laboratory technicians. The majority were laboratory staff, this being a predominantly laboratory-based test involving home-based blood draws, packaging, and transportation of blood samples to the central laboratory. Both females and males were included in the study. The interview guide consisted of five open-ended questions with probes (S1 Text) and follow-up questions to create additional depth. Interview questions were developed based on the additional information required; the questions were kept sufficiently broad to encourage new concepts to emerge and to minimize interviewer bias. Data collection and analysis were led by an independent senior behavioural scientist (AT), assisted by members of the research team. We interviewed respondents until saturation was achieved.

Study definitions. For this study, we defined a bacteriologically confirmed TB patient as one with a positive Xpert MTB/RIF test or positive sputum smear [10]; an index case of TB as the initially identified case of new or recurrent TB in a person of any age with a bacteriologically confirmed TB diagnosis; and a HHC as a person who shared the same enclosed living space as the index case for one or more nights, or for frequent or extended daytime periods, during the three months before the start of the current treatment [6]. Finally, we defined LTBI as the presence of a positive IGRA test either on the date of first testing or on the date of second testing nine months later.

Data analysis
Quantitative data. We analyzed the data in Stata version 16.1 Special Edition (StataCorp, College Station, Texas, USA). We summarized the characteristics of study participants using frequencies and percentages for categorical variables, and medians with interquartile ranges for continuous variables such as age. The study outcomes (IGRA test uptake, acceptability, and IGRA test positivity) were summarized as frequencies and proportions and compared across participants' characteristics using the Chi-square test, or Fisher's exact test where expected counts were less than 5. IGRA uptake was determined as the proportion of household contacts who took the IGRA test out of all contacts screened and eligible to take the test. A multivariable multi-level Poisson regression model with an exchangeable covariance matrix was used to examine factors associated with LTBI, with robust standard errors to correct for overdispersion. Variables were entered into the multivariable regression analysis if they had a p-value of <0.2 at unadjusted analysis. We used variance inflation factors (VIFs) to evaluate multicollinearity in fitted models, wherein VIFs >10 were indicative of severe multicollinearity. Analyses were not corrected for multiplicity given the exploratory nature of the study.
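To make the modelling step concrete, the sketch below reproduces this kind of analysis in Python with statsmodels rather than Stata: a Poisson GEE with an exchangeable working correlation clustering contacts within households, robust (sandwich) errors, exponentiated coefficients as prevalence ratios, and a VIF check. The file name and column names (ltbi, age_group, employment, contact_hours, household_id) are hypothetical placeholders, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("hhc_data.csv")  # ltbi (0/1), age_group, employment, ...

# Multi-level Poisson model with an exchangeable working correlation;
# GEE sandwich (robust) errors keep inference valid under overdispersion
# of the binary outcome modelled with a Poisson family.
model = smf.gee(
    "ltbi ~ C(age_group) + C(employment) + contact_hours",
    groups="household_id",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()

# Exponentiated coefficients are prevalence ratios (PRs) with 95% CIs.
pr = np.exp(pd.concat([res.params, res.conf_int()], axis=1))
print(pr)

# Variance inflation factors on the design matrix; VIF > 10 flags severe
# multicollinearity, matching the study's criterion.
X = pd.get_dummies(df[["age_group", "employment", "contact_hours"]],
                   drop_first=True).astype(float)
X = sm.add_constant(X)
vifs = {c: variance_inflation_factor(X.values, i) for i, c in enumerate(X.columns)}
print(vifs)
```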
Qualitative data. Qualitative interviews were coded using an inductive approach with descriptive thematic coding. Interview transcripts, recordings, and notes were reviewed for content related to the research question, and a coding frame was developed with the flexibility to accommodate new themes as coding evolved. Using the framework, each transcript was read and reread for recurrent ideas. Codes were assigned to relevant segments of the text; similar codes were aggregated to form themes that were then used to address the research questions and develop coherent narratives [11]. The initial coding framework was developed by a senior behavioral scientist (AT) experienced in qualitative research after reviewing 5% of the transcripts. Subsequent analyses of transcripts were carried out by two members of the research team (RMM and SM), who then compared and discussed their findings. Discrepancies were resolved by mutual agreement. To ensure trustworthiness, transcripts were coded independently, compared, and discussed [12].

Ethics statement. The study protocol was approved by the Mengo Hospital Research & Ethics Committee (MHREC 57/5-2019) and the Uganda National Council of Science and Technology (UNCST HS 2721). All HHCs provided written informed consent and assent (for participants younger than 18 years) before undergoing any study-related procedures. Similarly, written informed consent, including consent to audio-record interviews, was obtained from the healthcare workers who participated in the qualitative interviews.

Results
Between February and December 2020, we visited 86 households of index TB patients and identified 355 HHCs, of whom 352 (99.2%) accepted the IGRA test. The median number of contacts per index TB patient was six (interquartile range three to seven). The proportion of indeterminate IGRA test results was 1% at baseline and 11% at repeat testing on follow-up. Fig 2 shows the flow of study participants through the study. Of the 352 HHCs on whom the IGRA test was done, 54% were female, with a median age of 18 years (IQR 10-32); 61% had no employment, of whom 64% (138/214) were children of school-going age (5 to 14 years); the majority (>80%) had at least attained primary-level education; and 73% were HIV negative (Table 1).

Uptake and acceptability of IGRA test
IGRA test uptake was 99.2% (352/355) (Fig 2). Of the 352 who offered a blood sample for the IGRA test, 95.7% said their phlebotomy experience was good or excellent. The 4.3% who reported a bad phlebotomy experience were mainly in the younger age group, notably due to pain. Older age (P <0.01), level of education (P = 0.02), and health facility (P <0.01) were significantly associated with acceptability of the IGRA test among HHCs (Table 2, P values unadjusted).

Prevalence of latent TB infection
Of the 352 household contacts on whom the IGRA test was done, 115 (32.7%) had LTBI on the first IGRA test. Among the 231 who did not have LTBI, 146 (63.2%) received repeat IGRA testing at nine months, of whom 5 (3.4%) had LTBI. Therefore, the total number of HHCs with LTBI in this study was 120/352 (34.1%) (Fig 1).

Factors associated with a positive IGRA test
At multivariable analysis, being aged ≥45 years compared to age 5-14 years [prevalence ratio (PR) 2.28 (95% CI 1.02, 5.08)] and being employed as a casual labourer compared to having no employment were associated with a positive IGRA test.

Index TB patients' information
Information on 53 out of 86 index TB patients was accessed at the health facilities; the majority were male (73.6%), with a median age of 32 years, and 98% were new cases of bacteriologically confirmed pulmonary tuberculosis (Table 4).

Qualitative results
Characteristics of the qualitative arm participants. In March 2020, we carried out two focus group discussions (FGDs), each with five participants, and 14 key informant interviews (KIIs), giving a total of 24 healthcare worker participants in this study. Thirteen of these (54%) were male.
There were seven laboratory technicians, five nurses, two counsellors, three community healthcare workers, two clinical officers, two doctors, one laboratory scientist, one quantitative economist, and one physician. Several key themes emerged from the data regarding the health workers' experiences, challenges, and barriers to implementation of LTBI screening using IGRA.

Positive health worker experience during implementation of LTBI screening using IGRA. Multi-disciplinary teams, coupled with the eagerness and self-motivation of the health workers, were reported as facilitators of implementation.

Importance of IGRA and its usefulness versus the symptom screen as the current standard of care. The healthcare workers said that LTBI screening using IGRA helped them better appreciate the importance of TB preventive therapy. The exercise also helped them realize the importance of testing before treating for LTBI, so as to target the limited supplies of TPT to those who need it most and lessen the chances of toxicities.

". . . and because we know, if someone is positive for latent TB, there are chances that he can progress to active TB. So. . ., those that are positive are given some therapy." (KII_Labtechnologist_1_CPHL_10)

". . . if we continue with the current standard. . . we expose people who do not truly have latent tuberculosis to a treatment that (1) is not going to benefit them, and (2) is going to expose them to toxicity. . ." (FGD_IGRA study team_Mulago_2 _NRH_2)

Barriers to implementation of LTBI screening using IGRA. The healthcare workers reported some challenges with home-based screening with IGRA. These included access, poor household ventilation, lack of privacy, stigma, and sample storage and transportation to the central laboratory for testing.

Access. "Patients who have TB live in suburbs. . . to reach them you pass valleys and drainages, and you might actually need to park the car and get a motorcycle." (FGD_Hospital_4_Team_1_N01)

". . . the roads were quite bad; they were not accessible." (KII_Hospital_3_N05)

Stigma. Whereas index patients were welcoming and comfortable with the visiting study teams, some of their household contacts were concerned about the neighbors' perceptions as to why the study teams were visiting those particular homes in the villages. Thus, the study teams were invited to sit inside some poorly ventilated houses of the index patients, to prevent the neighbors from seeing what was going on, which could have resulted in stigma.

"The challenge that I found was stigma. . . the index TB patient was very inviting but when we reached the homes, the other parties, usually the wives, they had stigma." (FGD_IGRA study team_ Mulago _1_NRH_1)

Fear of injection. "Most people were fearing the injection. They thought it was taking off sputum. They were like, 'but for us we know TB is tested through sputum, and now you people are coming with injections. . .'" (KII_NURSING OFICER_Hoima_12)

Poor ventilation. Due to stigma, all activities had to be carried out inside the houses of the index TB patients, the majority of which had poor ventilation with no open windows. A history of recurrent TB disease was noted in some of the homes.

". . . some of them the windows were completely sealed or were not opened, so we had to educate on infection control, but we had to enter those houses to do the activity." (FGD_Hospital_4_ Team_1_NRH_1)

". . . we found about three homes which had contacts having TB recurrently; one particular home had about 3 people who had TB. . ." (FGD_Hospital_ 4_ Team_1_NRH_1)

Sample storage and transportation.
Samples had to be transported to one central laboratory in the capital city, which limited time flexibility between sample collection, incubation, and analysis. Moreover, those processes had to be done under stringent conditions to ensure accurate results. The long distance increased the turn-around time and the cost of the test, and a dedicated team was required to ensure the timelines were met. The participants also reported difficulty in transporting samples from participants' homes to the laboratory.

"Transportation of samples with this recent experience; it appeared a bit difficult, but I know with time it will be improved." (KII_Hopsital_2_N 15)

Community response to the IGRA test. The healthcare workers found that the community was very accepting of IGRA testing. Community members who were contacts of confirmed cases were anxious to know whether they were infected with TB, while even those who were not contacts of the index case requested to be tested.

". . . the demand is really created because of the confirmed TB patients that are within the community. So, everyone is anxious to know their status." (FGD_Hospital_4_Team_2 _N02)

". . . everybody was willing, and many other people wanted to take the test although they were not contacts." (KII_Hospital_3_N13)

Even among child participants, IGRA uptake was very high, and the community was receptive of needle pricks.

"I also want to comment on the phlebotomy, taking of blood. Frankly, I was impressed that even the children, nobody cried. I also fear injections; okay I do not know how N04 did it. . . somehow even the children never cried. And I think majority of index patients were very positive [about the IGRA test], and I think they did a good job in counselling the participants at home. Because the injection bit was received very well; even the children who were 5, 6, 7, they really did not cry. . ." (FGD_Hospital_4_Team _1_N01)

Preferred approach to LTBI screening using IGRA. The acceptability of the test was due, in part, to the fact that a home-based screening approach was employed, such that no transport costs were incurred in the process of receiving care.

Discussion
Using a parallel convergent mixed methods design, we determined the prevalence of latent TB and the health worker experiences of using IGRA home-based screening for LTBI. We found an LTBI prevalence of 32.7%. The risk factors associated with latent TB included being aged ≥45 years, being in formal employment or a casual labourer, longer time spent with the index case, a more intimate relationship with the index case (parents or siblings), and sharing the same bedroom as the index case. The uptake and acceptability of the IGRA test among HHCs of index TB patients were high, at 99.2% and 95.7% respectively. Further, the test was viewed as useful by the health workers in detecting LTBI and bringing to light its true burden in our setting. Our study used a door-to-door approach, which provided perspectives at the community level, and the communities were found to be receptive to the intervention. However, the challenges noted during IGRA implementation included difficult access to homes due to the poor state of roads in the slum dwellings, stigma, fear of injection, poor ventilation, challenges with sample storage and transportation, and delays in sample delivery. The uptake and acceptability of the IGRA test in this study were generally high; those who declined were mainly children aged 5-14 years. Refusal was uncommon (1%), similar to another study done among immigrants [13].
The main reason cited for refusal to take the test was pain from the needle prick. The home-based approach to LTBI testing using IGRA could explain the high acceptability rates observed in our study. Similar home-based approaches in TB HHC investigation using other techniques, such as portable molecular diagnostics (portable GeneXpert instrument) [14] and home-based sputum collection [15], have shown that home-based approaches are convenient and trustworthy and help to overcome barriers to clinic-based testing such as waiting time, distance, and transportation costs. The prevalence of LTBI determined in this study was lower than that reported by other studies in Uganda, which found prevalences ranging from 51% to 65% [8,16,17]. Several reasons could explain the observed difference. Previous studies were carried out in urban or peri-urban settings, which tend to have more crowding and poorer ventilation, both of which encourage transmission of TB infection. In addition, the study by Kizza et al. used the TST rather than IGRA [8]; the TST has a lower specificity than IGRA due to cross-reactivity with BCG antigen and environmental non-tuberculous mycobacteria. Furthermore, our study population was predominantly young, with the majority in the 5-14-year age bracket, compared to other studies where HHCs were older [8]. The factors associated with latent TB identified in our study were similar to those reported elsewhere. In India and China, LTBI was associated with increasing age and being in close contact with a case of tuberculosis [18,19]. In addition, our study found that being employed as a casual labourer was associated with a higher risk of LTBI positivity [19]. Older age increases the cumulative lifetime exposure to Mycobacterium tuberculosis, while being employed increases the risk of acquiring latent TB infection outside the household setting [19]. Similar to our findings, other studies found that proximity of contact to a TB index case was associated with LTBI positivity [20,21]. In addition, our study showed that IGRA positivity was associated with increased time spent with the index TB patient, which is similar to what was found in India [22]. Presence of a BCG scar was not found to be statistically significant in our study; however, previous studies have found BCG to be a protective factor against LTBI [23,24]. This could be due to differences in the prevalence of TB between the study settings. Despite the challenges experienced, IGRA-based latent TB screening was well received by the community, largely because it was free and delivered at home. Free home-based latent TB screening overcame two of the major barriers to IGRA testing: transport costs and the need to pay for the test. A study carried out in Uganda to assess barriers to TPT uptake found that having to attend clinic refill visits and the need to pay for the service decreased participants' willingness to initiate TPT [25]. Similar to findings in the Netherlands [26] and Brazil [27], TB stigma was a major barrier to LTBI services. Increased knowledge and awareness of LTBI led to an increase in expressed stigma [27]. This was also the case in our setting, where HHCs did not want the health worker teams to carry out any procedures outside the house, as they expressed fear of stigma from neighbors.
To overcome these challenges, there is a need to develop strategies that address stigma at the community level and help those affected to resist TB-related stigma through counselling, TB support clubs, and community dialogues [28]. Strategies to decentralize laboratory testing capability would help address the challenges of sample storage and transportation. Further, due to the COVID-19 pandemic, only 63.2% received repeat IGRA testing at 9 months; this period was characterised by index TB patients and their HHCs moving away from urban residences to rural areas for socio-economic reasons. Innovative patient-centred approaches need to be developed and evaluated, as these will become increasingly relevant [1]. The study had several strengths and some limitations. We had regional representation from different parts of the country, so the findings are likely to be representative of different settings across the country. The study combined both qualitative and quantitative methods of data collection, which elucidated different perspectives on the study variables (e.g., acceptability of the IGRA test, associated risk factors, and barriers to implementation) and enabled triangulation of methods, data sources, and researchers for a better understanding of the research questions. One limitation of our study was that the study population was heavily skewed towards children, given that children constituted the majority of HHCs in the study setting. The low sensitivity of IGRA at the extremes of age [29] was mitigated by retesting at 9 months of follow-up; in addition, this enabled those who converted later to be identified and prioritized for LTBI treatment. Furthermore, our study had no age-specific measures of acceptability; future studies should consider such measures to assess any differences in acceptability of the IGRA test among age groups. Another limitation is that the perspectives in the qualitative analysis were those of health personnel; further studies should explore the perspectives of TB household contacts. In addition, during the second study home visit, higher rates of indeterminate IGRA results were reported than at the first home visit; this may have been due to blood sample transportation delays caused by political riots during pre-election campaigns. Finally, the accrued sample size fell short of the estimated sample size due to the limited availability of test kits. However, our study's sample size was larger than that of any prior study in Uganda, and the study was spread across the country; it therefore still gives the best available estimate of the prevalence of LTBI in Uganda.

Conclusion
Integrating home-based IGRA screening for LTBI into the provision of TPT in routine care settings resulted in high uptake and acceptability and was therefore feasible in a resource-limited setting. Addressing the challenges identified will be critical to scaling up IGRA-based LTBI screening.

Recommendations
1. Targeted IGRA testing for household contacts is acceptable; national TB programs should therefore adopt IGRA-based LTBI screening, which has better specificity.
2. A home-based latent TB testing strategy should be incorporated into the national algorithm for latent TB management.
3. Laboratory capacity for IGRA testing needs to be decentralized to the subnational or point-of-care level to overcome storage and transportation challenges.
4. There is a need to evaluate the cost-effectiveness of IGRA-based LTBI testing, together with a budget impact analysis, in resource-limited settings to inform scale-up.

Supporting information
S1 Data. A dataset of household contacts of index TB patients, with socio-demographic data and other detailed information on the human subjects that participated in the IGRA study.
Failure mode and effects analysis of LFP battery module

The analysis of the charge/discharge curve helps judge the quality of cells and figure out some strange behaviors of the battery module; in particular, many unusual behaviors won't show on the appearance. It's important to establish a charge/discharge profile database for effective manufacturing and troubleshooting. In this talk, the basic charge/discharge method of Li-ion batteries, a simple equivalent circuit model of the battery, and the general charge/discharge curve of the LFP battery are introduced, followed by a case-by-case failure mode and effects analysis. Up to ten cases are discussed in the slides. Dr. Hsien-Ching Chung was invited by Dr. Jim Lee (the chairman of the Taiwan Battery Association) to give a talk on "Failure mode and effects analysis of LFP battery module" at the conference "2018 Taiwan-Japan exchange conference on battery materials and battery manufacturing technologies." The conference was held at the Center for Space and Remote Sensing Research, National Central University, Taoyuan, Taiwan, on Dec. 18, 2018. There were about 20 keynote speakers and 150 participants. It was a good opportunity to learn about new technologies in the battery industry and the future of the energy industry.

Introduction
To raise the standard of the Taiwanese battery industry and enhance its international competitiveness, the Industrial Technology Research Institute (ITRI) established the "Taiwan Battery Industry and Technology Development Union" in 1996, with more than 40 domestic battery companies covering the up-, middle-, and down-stream manufacturers. With economic changes, the development of the electronics industry, and the booming growth of the battery industry, the Taiwan Battery Association (TBA) was formally founded in April 2006 as a non-profit organization. It is devoted to fostering the cooperation and development of Taiwan's battery industry, enhancing the industry's international competitiveness, assisting in establishing the development strategy and direction of Taiwan's battery industry, and building the industry's strategy and R&D alliances.

Basic charge/discharge method
CC-CV mode charge method: the battery is charged at a constant current until the voltage reaches a set value, and then the voltage is held constant while the current decays to a cutoff current.
CC mode discharge method: the battery is discharged at a constant current until the voltage reaches a set value.

Equivalent circuit model of the battery
Rest: after a discharge (charge) process, the voltage gradually increases (decreases).
DCIR: a voltage drop at the beginning of discharge; a voltage rise at the beginning of charge.
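To make the CC-CV method and the equivalent-circuit picture concrete, here is a minimal simulation sketch of a single cell modeled as an open-circuit voltage (OCV) source in series with an internal resistance. The OCV curve and all parameter values are illustrative assumptions, not data from the talk; note how the i * r_int term produces the DCIR voltage step at the start of charge.

```python
import numpy as np

def ocv(soc):
    """Toy LFP-like open-circuit voltage: flat plateau with a steep knee
    near full charge. Purely illustrative, not measured data."""
    s = np.clip(soc, 0.0, 1.2)
    return 3.20 + 0.05 * s + 0.40 * s ** 10

def simulate_cc_cv(q_ah=10.0, r_int=0.010, i_cc=5.0, v_max=3.65,
                   i_cutoff=0.25, dt=1.0):
    """CC-CV charge of one cell modeled as OCV + series resistance.

    Terminal voltage v = ocv(soc) + i * r_int; the i * r_int term is the
    DCIR step seen at the start of charge. CC holds i constant until v
    hits v_max; CV holds v = v_max while i decays to the cutoff current.
    """
    soc, t, i, log = 0.05, 0.0, i_cc, []
    while True:
        v = ocv(soc) + i * r_int
        if v >= v_max:                        # enter / stay in CV mode
            i = max((v_max - ocv(soc)) / r_int, 0.0)
            v = v_max
        if i <= i_cutoff:                     # charge complete
            break
        soc += i * dt / 3600.0 / q_ah         # coulomb counting
        t += dt
        log.append((t, v, i, soc))
    return log

log = simulate_cc_cv()
t_end, _, _, soc_end = log[-1]
print(f"done in {t_end / 3600:.2f} h, final SOC {soc_end:.3f}")
```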
Failure mode and effects analysis: a case-by-case study (1. cell; 2. pack/module)
The following cases are picked from the R&D database.

Case 1: Bad cell (abnormal charge/discharge curve). The discharge curve isn't smooth.

Case 5: Why does the battery become unbalanced in CV mode? The 8s battery is made of new cells with high consistency. When the battery is charged in CV mode with a high voltage near the battery limit, voltage unbalance appears, although the unbalance cannot be observed from the total voltage.

Case 6: Why isn't the cutoff voltage at 2 V per cell? BMS setting: the relay is turned off once one of the cells drops under 2 V. Some people then ask why the power is cut before the total voltage reaches 16 V (this is an 8s system).

Case 7: Problems in the manufacturing process can be identified. The 8s battery is made of new cells with high consistency. Why does the 5th cell exhibit a larger DCIR than the others?

Case 8: A poor battery charged without a BMS. The voltages of bad cells rise too fast; the total voltage won't reveal the overcharging.
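Cases 5, 6, and 8 share one lesson: the pack total voltage can look normal while individual cells drift out of range. A minimal per-cell monitoring sketch makes this concrete; the limits and sample voltages are illustrative, not from the slides.

```python
# Per-cell monitoring for an 8s pack: the pack total can look healthy
# while one cell is out of range. Typical LFP per-cell limits (V):
CELL_OV, CELL_UV = 3.65, 2.00

def check_pack(cell_voltages):
    total = sum(cell_voltages)
    spread = max(cell_voltages) - min(cell_voltages)
    faults = [i for i, v in enumerate(cell_voltages)
              if not (CELL_UV <= v <= CELL_OV)]
    return {"total_V": round(total, 2), "spread_V": round(spread, 2),
            "faulty_cells": faults}

# 8s pack: total = 26.4 V (3.3 V average) looks fine, but cell index 4
# trips the overvoltage check and the 0.84 V spread flags imbalance --
# neither is visible in the total voltage alone.
pack = [3.30, 3.31, 3.29, 3.30, 3.72, 3.30, 3.30, 2.88]
print(check_pack(pack))
```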
Green IoT System Architecture for Applied Autonomous Network Cybersecurity Monitoring

Network security monitoring (NSM) is essential for any cybersecurity system, where the average cost of a cyber-attack is $1.1 million. No matter how secure a system is, it will eventually fail without proper and continuous monitoring. No wonder the cybersecurity market is expected to grow to $170.4 billion in 2022. However, the majority of legacy industries do not invest in NSM implementation until it is too late, due to the initial and operational cost and statically unutilized resources. Thus, this paper proposes a novel dynamic Internet of Things (IoT) architecture for an industrial NSM that features low installation and operation cost, low power consumption, intelligent organizational behavior, and environmentally friendly operation. As a case study, the system is implemented in a mid-range oil and gas manufacturing facility in the southern states, with more than 300 machines and servers over three remote locations and a production plant featuring challenging atmospheric conditions. The proposed system successfully shows significant savings (>65%) in power consumption, acquires one-tenth the installation cost, develops intelligent expert-system operation tools, and saves the environment from more than 500 mg of CO2 pollution per hour, promoting green IoT systems.

I. INTRODUCTION
Network security monitoring (NSM) is defined as the collection, detection, and analysis of network security data, as well as escalation of indications and warnings, to detect and respond to intrusions on computer networks [1]. Typical NSM tool features are: network-based threat detection, machine-based threat detection, proactive network queries for security data and "hunting" for suspicious behavior, integration with one or more threat feeds, and creation of security alerts [2]. Information network security terminology traditionally stems from the United States Department of Defense (US DoD), which categorizes the domains of Computer Network Defense (CND) [3]. NSM is based on the premise that prevention eventually fails: no matter how much time and how many resources are invested in statically securing a network, without a continuous monitoring operation the attackers will eventually win. By analogy, all Middle Ages castles eventually fell or surrendered due to advanced weapon technology or political events. Thus, when this happens, there should be an organized technical system able to detect and respond to the intruder's presence, so that an incident can be declared and the intruder eradicated with minimal damage done. Any NSM system essentially depends on a device that captures network traffic, detects anomalies, and performs analysis at various levels of detail; this device is called an NSM sensor. NSM sensors consist of a software suite that is very resource-hungry and relies on expensive hardware. Disk storage is the main issue with a sensor, as captured data can grow up to ∼1 TB per day in some situations, depending on the data types in use. Additionally, NSM data can grow exponentially and require regularly scheduled maintenance, backup, and means of accessibility. It is important to note that if the captured data are lost, the ability to perform retrospective analysis, which is crucial for a current investigation, is limited.
Most current technologies treat the sensors as passive devices with two interfaces: one for management and logging and the other for traffic capture. The sensor is also usually used just for reporting to a centralized point for analysis and alerting, such as the Snort® repository, as shown in Figure 1, where the green lines represent the management traffic to the central NSM HQ. The static operation of the sensors leaves them frozen in time, role, and functionality; together with the lack of feedback control, this makes the sensor look like a wasted investment, which pushes many industrial operators to refuse adding dedicated NSM systems to their networks until it is too late. This is one of the main issues with NSM implementation. However, giving "life" to these sensors via communication, collaboration, and active control in the sensor system architecture can increase their efficiency, strengthen their illusion of intelligence, and reduce the overhead of operation and maintenance cost. Thus, this work applies a novel approach by injecting dynamic Internet of Things (IoT) concepts into the NSM sensors: it reduces their size, adds a communication and control framework, and applies a messaging system to reduce the hardware requirements, lower the operational power consumption, and make detection and prevention faster for many network intrusions. Additionally, as a proof of concept, this architecture was applied to the information system network of an oil and gas production facility with more than 300 machines and servers, serving three remote branches. The proposed architecture saves more than one order of magnitude in equipment cost and more than 1.867 MWh of annual energy consumption, as well as sparing the environment more than 4000 mg of CO2 emissions per day.

II. BACKGROUND
The oil and gas industry supports 10.3 million jobs in the United States and nearly eight percent of the nation's gross domestic product, with a 32.5% market share. The industry faces unique cybersecurity challenges, given its distributed, decentralized structures and a large operational technology environment that does not fit traditional cybersecurity scenarios. Thus, the majority of manufacturers do not have a full cybersecurity implementation, due to cost, revenue, utilization, and investment constraints. The investment gap has left most heavy industrials insufficiently prepared for monitoring, detecting, and preventing threats. As a result, they are attractive targets for cybercrime: in 2018, nearly 60% of relevant surveyed organizations had experienced a breach that ended in financial loss, several of which considered adding an NSM system only after the cyberattack incidents [4]. Current technology features many advanced and sophisticated NSM systems, both open source and commercial [5]. However, no matter how complex the NSM system is, it still depends on the essential actor of the system, the NSM Sensor Platform (SP). The SP is a combination of hardware and software that performs collection (such as packet capturing (PCAP) or NetFlow [6]), detection (signature-based, reputation-based, and/or anomaly-based), and network threat analysis [7].
SPs can be classified by their functionality into three classes, Collection Only, Half-Cycle, or Full-Cycle, depending on which operations they perform: collection only; collection and detection; or collection, detection, and analysis, respectively. SPs usually require substantial hardware resources; for example, a simple Security Onion® SP requires 12 GB of memory, a four-core processor, 200 GB of disk storage, and two network interfaces [9]. The Security Onion system architecture is shown in Figure 2. Security Onion is a free and open-source Linux distribution for threat hunting, enterprise security monitoring, and log management; it includes Elasticsearch, Logstash, Kibana, Suricata, Zeek, Wazuh, Stenographer, Hive, Cortex, CyberChef, and NetworkMiner, and requires an expensive hardware configuration [8]. NetFlow is embedded instrumentation within Cisco IOS Software; it is used to characterize network operation and provide visibility into the network, an essential tool for IT and systems analysts [10]. In response to new requirements and pressures, network operators find it critical to understand how the network is behaving, including application and network usage, network productivity, anomalies, and security vulnerabilities. A sample of the NetFlow data structure record is shown in Figure 3 (NetFlow cache data sample with network behavior information [8]). The NetFlow protocol is very useful; however, it consumes high bandwidth on the network, is vendor- and version-specific, and increases network devices' processor utilization (by ∼20%) [10], while reducing cache availability, which depends heavily on network performance, especially during peak hours (in the case study, between 8:00 am and 9:30 am and between 1:30 pm and 2:30 pm) as well as during major social and/or political events (e.g., elections). SPs are added to the network via two basic methods. The first method is port mirroring, which requires some reconfiguration of network devices such as switches and routers. This is not very suitable for the majority of established industrial installations, as many network administrators would refuse to perform it without proper testing and several simulation runs of the whole networked computer system, especially in production automation facilities, where a heterogeneous network of different sensors, actuators, gauges, and automation devices is most probably installed and configured with less-than-minimal documentation and few options activated. The second method is network tapping, which can be more transparent to the network administration and management team; network taps can be implemented in hardware or virtually via a software tap [11], [12]. A software tap is usually preferred for temporary solutions and remote installations. Both types of network taps provide basic access to the wired network lines to capture the outbound (Tx) and/or inbound (Rx) data traffic. The data are basically seen as packets at the NSM level of operation in the TCP/IP stack. The essential data types the SPs process are session (flow) data, full packet capture (PCAP) data, and packet string (PSTR) data.
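As a rough illustration of the session/flow data type, the sketch below aggregates packets into flows keyed by the classic 5-tuple, in the spirit of a NetFlow cache record; the packet tuples and the field choices are simplified assumptions for the example.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets by the 5-tuple (src, dst, sport, dport, proto) and
    keep per-flow packet and byte counters, NetFlow-style."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in packets:
        key = (src, dst, sport, dport, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return flows

# Illustrative packet trace: (src_ip, dst_ip, src_port, dst_port, proto, bytes)
trace = [
    ("10.0.0.5", "10.0.0.9", 50512, 443, "tcp", 1500),
    ("10.0.0.5", "10.0.0.9", 50512, 443, "tcp", 420),
    ("10.0.0.7", "10.0.0.9", 50999, 22, "tcp", 96),
]
for key, stats in aggregate_flows(trace).items():
    print(key, stats)
```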
The Internet of Things (IoT) refers to the ever-growing network of physical objects that feature identifiers for internet connectivity and exchange data with other systems [13]. This work proposes a novel NSM architecture based on the IoT concept that converts the static NSM sensors into an active IoT sensor framework, applying the concepts of the NSM hub and the IoT cloud. The proposed sensors are built on miniature board machines. The IoT Hub is implemented on a single NSM machine with a backup (to avoid a single point of failure) and IoT cloud storage. The proposed system overcomes the cost associated with traditional NSM via reduced hardware, increased sensor utilization, and power savings from low operational energy consumption.

III. SYSTEM DESIGN AND SILENCE UNVEIL
A. Proposed Architecture
The high-level proposed system architecture is shown in Figure 4: the sensors were replaced by IoT network monitoring sensors (IoTS), an IoT Hub was added for data aggregation, and IoT cloud storage was established. The following subsections give the detailed descriptions, excluding the cloud architecture, as it is a standard implementation and out of the scope of this work.

B. The IoTS NSM Sensor
This work proposes a miniature, compact, dynamic NSM sensor (the IoTS) to replace the traditional static machines (servers acting as NSM sensors). The proposed IoTS hardware is a custom Raspberry Pi build with two Gigabit Ethernet ports and a Wi-Fi antenna. The network ports are used for software tapping and packet capture, while the Wi-Fi is the primary link for neighbor discovery and IoT Hub control signals. In the case study, the IoTS node was selected with the specs shown in Table I. Additionally, it is important to note that with this hardware configuration, the node cost does not exceed one-tenth of the cost of a traditional mid-range NSM sensor. The IoTS uses compiled NSM software for packet capture: the session capture process is done via FProbe, PCAP is performed via daemonlogger, and PSTR is based on the URLsnarf software [14]. The IoTS node features sensor functionality polymorphism: the ability to change the type of packet capture, the capturing scope, and the sensor role of the node according to the IoT Hub control signal, which acts as a transition trigger. The IoTS role can switch between collection only; collection and detection; or collection, detection, and analysis, as shown in the role state machine of Figure 5.
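A minimal sketch of the role state machine of Figure 5 follows; the trigger names and transition table are our illustrative assumptions about how Hub control signals could drive the role changes, not definitions from the paper.

```python
from enum import Enum, auto

class Role(Enum):
    COLLECT = auto()            # collection only
    COLLECT_DETECT = auto()     # collection and detection
    FULL_CYCLE = auto()         # collection, detection, and analysis

# Allowed transitions triggered by IoT Hub control signals (hypothetical
# trigger names); anything else keeps the current role.
TRANSITIONS = {
    (Role.COLLECT, "escalate"): Role.COLLECT_DETECT,
    (Role.COLLECT_DETECT, "escalate"): Role.FULL_CYCLE,
    (Role.FULL_CYCLE, "de-escalate"): Role.COLLECT_DETECT,
    (Role.COLLECT_DETECT, "de-escalate"): Role.COLLECT,
}

class IoTSNode:
    def __init__(self, node_id, role=Role.COLLECT_DETECT):
        # The paper's stated initial role is collection & detection.
        self.node_id, self.role = node_id, role

    def on_hub_signal(self, trigger: str) -> Role:
        """Apply a Hub control signal; unknown transitions keep the state."""
        self.role = TRANSITIONS.get((self.role, trigger), self.role)
        return self.role

node = IoTSNode("iots-03")
print(node.on_hub_signal("escalate"))      # -> Role.FULL_CYCLE
print(node.on_hub_signal("de-escalate"))   # -> Role.COLLECT_DETECT
```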
C. The IoT Hub Sensor
To all appearances, the IoT Hub could be seen as a regular NSM sensor machine from the hardware configuration point of view; however, the proposed implementation adds IT intelligence. The Hub manages the IoTS states based on detection warnings and alerts; keeps the power state at the minimum needed for operation; directs the data storage location when needed via the neighbor storage sharing mechanism (NSSM), which uses Dijkstra's shortest-path algorithm (with network hops and network utilization as the metric) to find the nearest available node with extra storage and maneuver the data storage task when needed; schedules the data transfer to the IoT cloud; and adds second-level filtering of detection alerts via a pooling mechanism to reduce false positives and increase the system's precision. The IoT Hub also allows an NSM HQ connection for flow monitoring and operation of an individual IoTS node or any other part of the system, including a communication-route scenario. Additionally, the proposed IoTS enriches the monitoring capability to the point that even the management flow can be monitored as an extra security precaution, with very low installation and operation cost and low network configuration overhead. The IoT Hub uses the following criteria in its heuristic information and decision-making mechanism to manage the state machine of each node in the network domain, based on the data it gathers from the node messaging system: sensor state, sensor role state, anomalous traffic detection, attack attempt detection, nearby-nodes graph, and data collection location change. These data are stored in a database on the cloud, with an instance cached on the IoT Hub for fast access. The database contains weight and probability values that are structured into a string describing the whole system as a unit and assigning a proposed list of actions to be deployed to the IoTS nodes, either changing or keeping their role states as well as predicting the NSSM triggers. The initial values of the system states are set manually on the IoT Hub; the IoTS state is initially set to coll. & detn. (i.e., global collection and detection) for all nodes, but it can be configured according to the system's operational needs, such as normal routine monitoring, warning threshold checks, anomaly detected, or attack attempt occurrence. Additionally, the system representation string database is used by a parametrized generic algorithm [15], [16] that provides a simple and primitive form of advice assistance to the network monitoring analyst team, based on a supervised learning scheme that gains experience over time and can provide smart performance in the long run. This unit is added to the system as a first step towards a more tailored and case-specific informed decision-making mechanism for a fully automated IoTS system.
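The following sketch shows how the NSSM described above could pick a storage node, using Dijkstra's algorithm with an illustrative hop-times-utilization edge weight; the topology, weights, and free-storage figures are invented for the example.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source; an edge weight here models hop
    cost scaled by link utilization, per the NSSM metric above."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def pick_storage_node(graph, source, free_storage_gb, min_free=50):
    """Nearest node with enough spare disk to take over data storage."""
    dist = dijkstra(graph, source)
    candidates = [(d, n) for n, d in dist.items()
                  if n != source and free_storage_gb.get(n, 0) >= min_free]
    return min(candidates)[1] if candidates else None

# Illustrative topology: weight = hops * (1 + utilization).
graph = {
    "iots-01": [("iots-02", 1.2), ("hub", 1.5)],
    "iots-02": [("iots-01", 1.2), ("iots-03", 1.1)],
    "iots-03": [("iots-02", 1.1), ("hub", 1.9)],
    "hub": [("iots-01", 1.5), ("iots-03", 1.9)],
}
free = {"iots-02": 12, "iots-03": 80, "hub": 500}
print(pick_storage_node(graph, "iots-01", free))  # -> "hub" (nearest, >=50 GB)
```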
The proposed system was implemented at a mid-range oil and gas production plant, with 15 IoTS nodes distributed over the production plant, the sales and vendor access locations, the server room, and three remote branches in Louisiana, Texas, and Oklahoma. The system was observed for ∼11 months, including an extremely hot summer (>100 °F) under dusty and greasy operating conditions.

IV. RESULTS AND OBSERVATION
The IoT system concept enhances the static NSM architecture with dynamic behavior and helps the network monitoring analysts through new functionalities such as dynamic sensor role change, power-saving options, and agent-like smart sensor behavior. The major enhancements, as seen from the industrial management point of view, are the lower power consumption and the reduced initialization, deployment, and operation cost, which are the main barriers delaying NSM implementation in this sector of the industry, as well as the tolerance of harsh environmental operating conditions. The initial cost was reduced by an order of magnitude. The low-power, fanless hardware boards are the winning solution for outdoor operation in an atmosphere of heavy grease and dust contamination. Additionally, the average power consumption during the eleven months of operation was reduced from 1.38 kWh to 0.48 kWh (∼65% less) at peak sensor utilization, which saves the environment from 563.4 mg of CO2 per hour, making the proposed system a green IoT solution.

V. CONCLUSION
The IoTS NSM system architecture demonstrates its efficiency and promotes an environmentally friendly solution, especially for the major polluting industries, whose mid-range sector faces many challenges in entering the next cyber information age. The proposed system showcases IoT architecture capabilities that give network security analysts a new boost and novel tools for network security monitoring operation, optimization, and functionality, together with cooperative threat detection and prevention mechanisms, requiring minimal effort from the system administration teams and zero configuration tasks for the network engineers. However, the proposed system has several limitations. First, the physical security of the nodes is a concern, as is the case with all wireless sensor networks. Second, the proposed system may suffer from packet drops during the role transition of an IoTS, which can take up to 35 seconds in the worst case; this could be solved by deploying multiple IoTSs on the same flow line as backups for the transfer. Third, the quality of the hardware plays a very important role, especially the NICs, in preventing buffer overflows and packet loss, which may jeopardize the precision of the NSM detection system. Finally, the parametrized learning algorithm needs close attention from the system analyst team, as such learning methods are prone to learning wrong decisions; this can be improved by adding a more sophisticated but lightweight learning technology, such as adaptive pattern recognition with an Artificial Immune System, which the authors are considering for future work.
A Deeper Insight into Evolutionary Patterns and Phylogenetic History of ORF Virus through the Whole Genome Sequencing of the First Italian Strains

Orf virus (ORFV) is distributed worldwide and is the causative agent of contagious ecthyma, which mainly occurs in sheep and goats. This disease was reported for the first time at the end of the 18th century in Europe, but very little is currently known about the temporal and geographic origins of the virus. In the present study, the use of new Italian whole genomes allowed for better inference of the evolutionary history of ORFV. In accordance with previous studies, two genome types (S and G) were described for infection of sheep and goats, respectively. These two well-differentiated groups of genomes originated by evolutionary convergence in the late 1800s in two different areas of the world (Europe for the S type and Asia for the G type), but it was only in the early 1900s that the effective size of ORFV increased among hosts and the virus spread across the whole European continent. The Italian strains sequenced in the present study were isolated on the Mediterranean island of Sardinia and proved to be exclusive to this geographic area. One of them is likely representative of the early European forms of ORFV which infected sheep and became extinct about one century ago. Such an ancient Sardinian strain may have reached the island simply by chance, where it quickly adapted to the new habitat.

Introduction
Orf virus (ORFV; family: Poxviridae) is the etiological agent of the zoonotic disease contagious ecthyma (CE or ORF), also known as contagious pustular dermatitis (CPD). This virus belongs to the genus Parapoxvirus (PPV) [1] that, to date, includes four further recognized PPV species based on the classification of the International Committee on Taxonomy of Viruses (ICTV): bovine papular stomatitis virus (BPSV), grey sealpox virus, pseudocowpox virus (PCPV), and red deerpox virus (RDPV). Contagious ecthyma was reported for the first time in sheep by Steeb in 1787 [2] and, by the end of the 19th century, had been described by Hansen (1879) [3] in goats and humans as contagious dermatitis, ORF, crusta labialis, carbuncle of the coronary band, or coupar angus [4]. It was in 1929 that Howarth [5] showed that the disease in lambs and kids in California (USA) was due to a virus, which was better described for specimens from Texas (USA) by Boughton and Hardy in 1932 [6]. The lesions connected to ORFV infection can span a clinical spectrum ranging from mild papular and pustular to severe proliferative. These types of ORFV lesions are generally reported in sheep and goats, but similar lesions have also been found in other animal species [7] and in humans [1,8], leading to CE being considered a worldwide-diffused zoonotic disease [9]. In ruminants, the lesions provoked by ORFV infection usually involve the mouth, muzzle, nostrils, gums, and tongue, and occasionally the feet and udders, and sporadically the gastrointestinal tract and respiratory apparatus [10]. This disease usually affects lambs and kids, causing their death when severe lesions prevent them from suckling milk from their mothers. The morbidity of this disease is very high and, although mortality is generally rare, it can reach up to 10% in lambs [11] and up to 93% in kids [12,13]. In humans, lesions are reported mainly on the hands and undergo a spontaneous benign resolution in a few weeks.
However, malignant cases have also been described, with atypical proliferative lesions, particularly in immunocompromised individuals [7,14,15]. The ORFV genome is a linear double-stranded DNA that is 135 kb long and encodes 132 genes [7,16]. The central region contains highly conserved genes involved in viral replication and in the assembly of the viral structure, while the terminal regions are more variable and include genes for virulence and immunomodulation [7,17]. The high levels of nucleotide variation in the terminal regions of ORFV may be connected to the phenomenon of reinfection typical of this virus, which also appears to be a product of the regulation exerted by viral genes on the host's innate immune response [14]. In this context, many studies have focused on the immunomodulatory action of specific ORFV genes with the aim of shedding light on the complex processes involved in viral pathogenesis [18,19]. Only two studies, performed to describe novel ORFV genomes isolated in China [20] and France [21], have provided phylogenetic inferences based on ORFV whole genomes. The first study [20] was performed on nine genomes and evidenced well-supported genetic structuring between the strains isolated from sheep and those from goats. Consistently, the results obtained in the second study [21], where twelve genomes from almost all continents were used, evidenced a genetic differentiation depending on whether the virus host was a goat or a sheep. This latter study also described, for the first time, an ORFV genome isolated from a human after infection from sheep, and a genetic affinity between genomes from human infections was also found. These authors suggested the importance of analyzing a larger number of ORFV whole genomes to confirm the possible occurrence of two types of ORFV (from sheep vs. from goats) and to shed new light on the level of genetic variation associated with the host species. In this context, in the present study, seven new ORFV whole genomes from the Mediterranean island of Sardinia (Italy) were isolated from sheep and goats and merged with all the ORFV genomes currently available in public databases to perform a high-resolution phylogenetic analysis of the virus. A similar deep phylogenetic inference was previously performed for samples from all over the world based on the ORFV gene encoding the dsRNA-binding protein (VIR) [22]. Results evidenced a high worldwide viral mutational evolutionary rate, along with a well-supported genetic divergence between the viral strains isolated from sheep and those from goats. The aim of this study is to provide new insight into the evolutionary history of ORFV based on the analysis of whole genomes, to better understand the temporal origin of its strains and their corresponding patterns of distribution, and to make inferences on the possible occurrence of two different genome types of ORFV associated with sheep and goat infections.

Sampling
Samples were collected between May 2017 and March 2021 in five different Lacaune- and Sarda-breed sheep flocks and in one Sarda goat herd (see Table 1 for details). ORFV infection evidenced by clinical signs in individuals was confirmed by virological and molecular analyses of biological samples. Clinical CE outbreaks were detected and reported by private practitioners and veterinary public health personnel involved in the present research during their diagnostic activity.
Sampling collection protocols were as reported by Coradduzza et al., 2021 [22]. Viral DNA was extracted from lesions isolated from five infected sheep and two infected goats (see Table 1 for details on the samples).

Virus Isolation
ORFV was isolated on Vero cells from homogenized tissues of ovine and caprine mild- and severe-type lesions according to the methodologies reported by Coradduzza et al., 2021 [22]. In particular, a total of 0.5 g of each tissue sample was homogenized in 5 mL (10% w/v) of DMEM medium with the following antibiotics: 400 IU/mL penicillin, 400 µg/mL streptomycin, 300 µg/mL gentamicin, and 2.5 µg/mL amphotericin B. The suspension obtained was centrifuged at 1000× g for 15 min and used for the infection of Vero cells from the 100th to 120th passage (BSCL86 ATCC, Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna, Italy). Twelve-well plates of cells in DMEM were prepared with the addition of 10% (v/v) FBS and then incubated for 1 h at 37 °C in an atmosphere of 5% CO2. After 18-24 h (80-90% confluence), the medium was removed and the cells were incubated with 0.5 mL of tissue homogenate under the same incubation conditions. Furthermore, 3 wells without the virus, used as a negative control, along with 3 wells containing the virus, were also prepared. After incubation, the cells were washed three times with 1× PBS, and new DMEM with antibiotics and fetal bovine serum was added (final concentrations: 100 IU/mL penicillin, 100 µg/mL streptomycin, and 0.5 µg/mL amphotericin B, with 2% (v/v) FBS). The cytopathic effect (CPE) was checked daily; when it was detected, or on the 5th day of culture, the material was freeze-thawed three times, collected, and centrifuged at 200× g for 10 min. The collected supernatant was stored at −80 °C for future use.

Viral DNA Extraction, Sequencing and Genome Assembly
After centrifugation at 10,000× g for 1 min at 4 °C, viral DNA was extracted from the cell culture supernatant using a QIAmp UltraSens Virus Kit (Qiagen, Hilden, Germany), as described in Fiori et al. [23], to perform whole genome sequencing on the Illumina platform.

Phylogeny, Molecular Dating and Evolutionary Rate
Seven whole genome sequences were obtained for ORFV in the present study. These sequences were edited by means of Unipro UGENE v.35 [29] and deposited in GenBank (see Table 2 for accession numbers), and the whole dataset (n = 26) was aligned using the Clustal Omega package [31] (available at https://www.ebi.ac.uk/Tools/msa/clustalo/; accessed on 15 April 2022). The Sardinian sequences were included in a large dataset containing all the ORFV whole genomes available on GenBank to date (see Table 2 for details and references). This dataset included a total of 26 sequences: 7 from the present study (Italy), 2 from Germany, 1 from France, 10 from China, 1 from India, 1 from New Zealand, and 4 from the United States of America (USA) (see Table 2 for details). To identify potential genetic clusters within the dataset and to determine the dissimilarity represented by the genetic variability among genomes, a principal co-ordinate analysis (PCoA) was performed using GenAlEx 6.5 [38]. The PCoA reconstruction was based on a pairwise genetic p-distance matrix.
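For readers who want to reproduce the PCoA step outside GenAlEx, the following sketch performs a classical principal co-ordinate analysis (metric MDS) on a pairwise p-distance matrix; the toy distance matrix is illustrative only, not the study's data.

```python
import numpy as np

def pcoa(dist, n_axes=2):
    """Principal co-ordinate analysis (classical MDS) on a pairwise
    distance matrix, as applied here to genetic p-distances."""
    d2 = np.asarray(dist, dtype=float) ** 2
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ d2 @ J                      # Gower double-centering
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]           # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    coords = eigvec[:, :n_axes] * np.sqrt(np.maximum(eigval[:n_axes], 0))
    explained = eigval[:n_axes] / eigval[eigval > 0].sum()
    return coords, explained

# Toy p-distance matrix for four genomes (two tight pairs, far apart):
D = np.array([
    [0.000, 0.002, 0.050, 0.051],
    [0.002, 0.000, 0.049, 0.052],
    [0.050, 0.049, 0.000, 0.003],
    [0.051, 0.052, 0.003, 0.000],
])
coords, explained = pcoa(D)
print(coords.round(3), (100 * explained).round(1))  # % variability per axis
```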
The jModelTest 2.1.1 software [39] was used to find the best probabilistic model of genome evolution with a maximum-likelihood-optimized search. MrBayes 3.2.7 [40] was used to carry out a Bayesian phylogenetic analysis, setting nst = 6, rates = invgamma, and ngammacat = 4 as model parameters. Two independent runs, each with 4 Metropolis-coupled Markov chain Monte Carlo (MCMC) chains (1 cold and 3 heated), were run synchronously for 5 million generations, sampling trees every 1000 generations. The first 25% of sampled trees were discarded as burn-in. The average standard deviation of split frequencies (which should approach 0) was used to verify the convergence of the chains [40], along with the potential scale reduction factor (which should be approximately 1) [41], following Scarpa et al. [42]. When available, the collection dates of the samples (at least month and year were necessary) were used for molecular dating, which was performed by means of a Bayesian approach under the MCMC algorithm as implemented in the Beast 1.10.4 software [43]. Only genomes whose sample collection date was known were included in this analysis. Both strict and uncorrelated log-normal relaxed clock models were tested with fast runs of 100 million generations to identify the best clock model for the dating analyses; selection was made by comparing Bayes factor values using Tracer 1.7 software [44]. All available demographic models (both parametric and nonparametric) were also tested. The phylogenetic time-scaled (ultrametric) trees and the evolutionary rates were co-estimated, after selection of the Bayesian skyline demographic model under the uncorrelated log-normal relaxed clock model, by running 500 million generations with sampling every 50,000 generations. The resulting log files were inspected using Tracer 1.7 software [44], and only ESS (effective sample size) values ≥200 were accepted. The maximum clade credibility tree was drawn and then visualized and annotated by means of TreeAnnotator (Beast package) and FigTree 1.4.1, respectively. The Beast software was also used to perform further runs under the coalescent Bayesian skyline demographic model [45] to estimate the evolutionary rates for the ORFV strains isolated from sheep and goats. All phylogenetic/phylodynamic runs were carried out using the CIPRES Science Gateway (Cyberinfrastructure for Phylogenetic Research, available at https://www.phylo.org/index.php/; accessed on 8 May 2022) [46], with a total effort of about 5000 CPU hours of computation.

Results
The assembly performed for the Italian ORFV strains against the reference genome (NC_005336) provided seven new whole genomes. See Tables A1 and A2, provided in Appendix A, for details regarding each new genome.

Phylogeny, Molecular Dating, and Evolutionary Rate
In the present study, a dataset including 26 ORFV whole genomes, 137,151 bp long, was analyzed. The sequences were isolated on all continents with the sole exception of Africa (see Table 2 for details). The nucleotide frequency analysis carried out on the whole dataset revealed 118,671 conserved sites (86.5%), 18,478 variable sites (13.5%), 11,801 parsimony-informative sites (8.6%), and 6677 singletons (4.9%). The phylogenetic mid-point-rooted tree analysis evidenced the occurrence of two fully supported genetic clusters (cluster S and cluster G in Figure 1). The Bayesian phylogenetic time-scaled maximum clade credibility tree (ultrametric tree), used for the molecular dating, is provided in Appendix A (as Figure A1), with the 95% highest posterior density (HPD) confidence interval (C.I.) indicated for each coalescence time estimate. Strains whose phylogenetic position was puzzling in the phylogenetic tree (GenBank # HM133903, MN648218) were excluded from this latter analysis.
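The results below report node ages as years before 2021 and convert them to approximate calendar dates. A minimal sketch of that decimal-year arithmetic follows; the exact month depends on the sampling-date convention used, so the outputs are indicative only.

```python
from datetime import datetime, timedelta

def years_before(reference: float, years: float) -> str:
    """Convert 'years before <reference>' into an approximate calendar
    date via decimal-year arithmetic."""
    decimal = reference - years
    year = int(decimal)
    days = int((decimal - year) * 365.25)
    return (datetime(year, 1, 1) + timedelta(days=days)).strftime("%Y-%m")

# Root of the tree: 257.59 years before 2021 -> 1763 under this
# convention (the paper rounds such estimates to parts of the year).
print(years_before(2021.0, 257.59))   # '1763-05'
print(years_before(2021.0, 10.39))    # '2010-08' (cf. "final part of 2010")
```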
The phylogenetic mid-point tree analysis evidenced the occurrence of two fully supported genetic clusters (cluster S and cluster G in Figure 1). The Bayesian phylogenetic time-scaled maximum clade credibility tree (ultrametric tree), used for the molecular dating, is provided in Appendix A (as Figure A1) with the confidence interval (C.I.) at 95% of the highest posterior density (HPD) indicated for each coalescence time estimate. Strains whose phylogenetic position was puzzling in the phylogenetic tree (GenBank # HM133903, MN648218) were excluded from this latter analysis. Furthermore, only genomes whose sample collection date was available were used to perform the molecular dating (two genomes were removed from the dataset used for molecular dating: GenBank # HM133903, DQ184476). Results obtained from the ultrametric tree (Figure A1) are consistent with those obtained from the phylogenetic tree (Figure 1), with a few discrepancies due to the lack of three genomes in the dataset analyzed, as reported above. The coalescence time at each node of the phylogenetic tree was inferred according to the molecular dating estimates obtained with the ultrametric tree. The common ancestor of all the ORFV strains present in the analyzed dataset, which corresponds to the root of the tree, dates back to 257.59 years before 2021 (i.e., the end of the year 1763). Cluster G of the tree includes only ORFV genomes isolated from goats, with a single exception. Indeed, within this group of sequences, a genome isolated from a sheep in Germany (GenBank # HM133903) is also present; it sets in a basal position, as an ancestral and quite divergent strain, outside the ingroup of genomes from Italy, India, China, and the USA. For this strain, the date of collection is unknown (probably before the year 2010), and for this reason it was not possible to estimate its divergence time (indeed, this genome was not included within the dataset used to construct the ultrametric tree; see Figure A1). Within the ingroup, which dates back to 255.93 years before 2021 (i.e., the middle of the year 1765), the genomes from Sardinia (Italy) (two kids) that were isolated in 2019 and 2020 set as an external sub-cluster that originates 10.39 years before 2021 (i.e., the final part of the year 2010). The other genomes of cluster G were grouped in a sub-cluster. The second sister clade of cluster S dates back to 70.62 years before 2021 (i.e., the end of the year 1950) and includes only sequences from Sardinia (Italy) collected between 2017 and 2020. In particular, within this Sardinian clade, a sequence isolated from an adult in the northern part of the island in 2020 sets in a basal position, while all the other strains (also isolated in the north of the island), collected from a lamb and two adults between 2017 and 2020, grouped together in a sub-cluster that originates 57.62 years before 2021 (i.e., the end of the year 1963). Within this sub-cluster, the two sequences collected in 2019 and 2020 belong to a group that dates back to 19.19 years before 2021 (i.e., the end of the year 2001). The PCoA (Figure 2), performed to evidence the occurrence of genetic clusters among the genomes analyzed, was able to explain 61.14% of the variability (PCoA1/X axis: 41.21%, PCoA2/Y axis: 19.93%).
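For readers unfamiliar with the technique, a PCoA of this kind can be sketched in a few lines: classical metric scaling of a pairwise distance matrix via double centering and eigendecomposition. This is a generic illustration, not the exact pipeline used in the study; the 3 × 3 distance matrix in the example is hypothetical.

```python
import numpy as np

def pcoa(D, k=2):
    """Classical PCoA (metric MDS) on a pairwise distance matrix D.
    Returns the coordinates on the first k axes and the fraction of
    variability explained by each axis."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gower matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]          # eigenvalues in descending order
    evals, evecs = evals[order], evecs[:, order]
    explained = evals[:k] / evals[evals > 0].sum()
    coords = evecs[:, :k] * np.sqrt(np.maximum(evals[:k], 0.0))
    return coords, explained

# Hypothetical p-distance matrix for three genomes:
D = np.array([[0.00, 0.10, 0.40],
              [0.10, 0.00, 0.38],
              [0.40, 0.38, 0.00]])
coords, explained = pcoa(D)
print(explained)  # analogous to the 41.21% / 19.93% reported for Figure 2
```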
Results were consistent with those obtained from the phylogenetic tree analysis, thus evidencing the occurrence of divergence between ORFV strains isolated from sheep and humans and ORFV strains isolated from goats (cluster S and cluster G, respectively, in Figure 2). Furthermore, it is interesting to note that in the PCoA graphic, the genome isolated from a sheep in Germany, which sets in a basal position of cluster G (goats) in the phylogenetic tree, is included as an outlier within the variability of strains isolated from sheep. In accordance with the phylogenetic tree analysis, the genomes from Sardinia (Italy) isolated in goats set as outliers within the variability of strains isolated from goats. The two genomes from China isolated from a sheep and a goat, which are included as a highly divergent group within cluster S of the phylogenetic tree, could be considered as divergent strains outside the variability of the two groups of genomes evidenced by PCoA. Under the Bayesian skyline log-normal uncorrelated relaxed clock model, the viral evolutionary mean rate calculated for the ORFV whole-genome dataset was estimated to be 1.324 × 10⁻⁶ substitutions/site/year, with a C.I. 95% HPD of 7.488 × 10⁻¹⁰–5.439 × 10⁻⁶. The viral evolutionary mean rate was also calculated for clusters S and G of the phylogenetic tree and principal co-ordinates analyses; the strains whose sample collection dates were unknown, along with those whose phylogenetic position was puzzling (GenBank # HM133903, MN648218), were excluded from the datasets. The two values were estimated to be 4.
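To put the whole-dataset rate into perspective, a quick back-of-envelope calculation (our arithmetic, using only the mean estimate reported above) gives the expected number of substitutions per genome per year:

```python
rate = 1.324e-6        # mean rate, substitutions/site/year (this study)
genome_len = 137_151   # bp, length of the aligned whole-genome dataset

subs_per_genome_year = rate * genome_len
print(f"{subs_per_genome_year:.2f} substitutions/genome/year")      # ~0.18
print(f"~1 substitution every {1 / subs_per_genome_year:.1f} years")  # ~5.5
```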
Discussion
The present study allowed for a 37% increase in the number of whole genomes available for the ORF virus worldwide. In particular, we provided the first Italian sequences for this virus, thus expanding the number of European countries for which ORFV genomes are known. In agreement with Chi et al. [20], who were the first to report a great genetic divergence between goat ORFVs and sheep ORFVs, and Andreani et al. [21], who suggested that more in-depth research was needed to understand the worldwide ORFV distribution, the present research expands the knowledge on the evolutionary history of ORFV, providing the first hints of the temporal and geographic origins of the virus and evidencing the occurrence of two highly differentiated types of genomes: the lineages of ORFV associated with the infection of goats (type G genomes) and sheep (type S genomes). The strong genetic divergence found between these two genome types could be related to the general lack of transmission of the ORFV infection between sheep and goats, which would have prevented the occurrence of viral recombination and genetic similarity among type S and type G strains. Indeed, although this virus can be transmitted among hosts by direct contact with tissue lesions or fomites, thus infecting animals through skin cuts and abrasions, in general it is reported to be transmitted from wild to domestic goats, but not to domestic sheep [47]. Furthermore, in accordance with Chi et al. [20], who found that ORFV genomes isolated from goats are more similar to each other than those isolated from sheep, the evolutionary rate calculated for type G genomes in the present study was lower than that calculated for type S genomes, suggesting a stronger selective pressure acting on the viral strains that infect sheep, along with a higher capability of type S genomes to adapt quickly to environmental changes. A possible explanation for this finding may involve the immune response of sheep, which could be highly reactive towards ORFV as a consequence of extensive vaccination campaigns, thus accelerating the evolutionary rate of the virus through frequent viral recombination among type S genomes. Furthermore, the evolutionary mean rate calculated for the ORFV type G genomes was similar to the value obtained in the present study when the strains from all hosts (goats, sheep, and humans) were considered together. However, the evolutionary mean rate calculated for the ORFV type S genomes was higher and more similar to the value obtained by Coradduzza et al. [22] for the ORFV VIR gene isolated in hosts from all continents. This latter finding suggests a possible connection (which must be supported by further specific genetic studies) between the occurrence of contagious ecthyma in sheep and the expression of the gene encoding the dsRNA-binding protein in the viral strains that infect these animals. In accordance with Andreani et al. [21], in the present study, the different ORFV strains isolated from humans infected by sheep were closely related to one another and belonged to an external, heterogeneous genetic sub-group within the type S genomes clade. Interestingly, considering that human-to-human transmission has never been reported [48], with only a few exceptions involving direct contact with human lesions or fomites [49], and in line with Andreani et al.
[21], who suggested the use of many genomes to evidence the occurrence of ORFV variants related to the strains isolated from humans that may be able to cross species barriers again, in the present study an ORFV strain (GenBank # MN454854) from a cell culture infected with contagious ecthyma vaccine in Texas (USA) in 2019 might represent a viral strain that can easily produce new spillovers between sheep and humans. Such events are frequent throughout the evolutionary history of zoonoses and represent the main reason why infections originating in wildlife require constant monitoring (see [50] and references therein). In such a context, although contagious ecthyma is considered an occupational disease in "human risk populations", such as shepherds, sheep shearers, butchers, and veterinary surgeons, studies on a large number of ORFV genomes isolated from animals and humans are needed to better infer the evolutionary patterns occurring among type S lineages and to better describe the characteristics of the viral strains that infect sheep and can be easily transmitted to humans under specific conditions (i.e., direct or indirect contact with infected animals). In fact, considering that the ORFV strains isolated in the present study from farmers infected by sheep proved genetically divergent from those that are common among sheep, there is a strong possibility that only specific viral strains, with high levels of virulence, are able to infect humans even when environmental conditions make the possibility of ORFV transmission extremely low. In the phylogenetic tree (Figure 1) of the present study, an ORFV strain isolated from a sheep in Germany (GenBank # HM133903) before 2010, which was considered a puzzling outgroup genome by Andreani et al. [21], was included within the G (goats) cluster, thus confirming the peculiar nature of this viral allelic variant (as evidenced in previous studies), which could be representative of an ancestral form of ORFV. Notably, in the PCoA (Figure 2) of our research, this same viral strain clustered within the type S genomes isolated from sheep, evidencing that this latter analytical approach could provide a more representative description of the genetic structuring among viral genomes. Indeed, PCoA, which was reconstructed on a p-distance matrix (without information on evolutionary models), is able to depict divergences or similarities based on differences in the genomic nucleotide composition of the strains. The common ancestor of all viral strains included in the present study (see also Figure 1 for the molecular dating of the tree clusters), which could be considered a proto-ORFV form not yet differentiated into the present type S and type G ORFV, dates back to the end of 1763. This molecular dating is consistent with the first description of contagious ecthyma in European sheep some years later, by Steeb in 1787 [2], and in European goats and humans in 1879 [4]. In the present study, the type S and type G ORFV genome groups are almost coeval (1796 and 1765, respectively), and the rise of these two well-differentiated and contemporary genetic clades may represent a typical case of multiregional geographic origin by evolutionary convergence of viral strains in different host species (indeed, identical pathological phenotypes/patterns of clinical lesions are reported for sheep and goats) (see, e.g., Sackman et al., 2017 and references therein) [51].
In particular, the type G strains may represent the first group of lineages to differentiate, suggesting that the early form of ORFV associated with the development of contagious ecthyma originated from viruses infecting goats. Their geographic origin could be tentatively placed in Asia, which currently represents the continent where the highest and oldest (year 1831) genetic variability was evidenced in our analyses. Furthermore, considering that contagious ecthyma was described for the first time in goats from European farms in 1879 [4], ORFV type G strains may have taken about one century (from 1765 to 1879) to spread from Asia toward Europe, increasing their population size among hosts. This virus may have reached Europe not only transported by live hosts involved in the animal trade, but also with goods of animal origin along the commercial routes between the two continents. Indeed, ORFV remains viable on the wool of animals (infected and recovered) and on contaminated materials for long periods, enabling indirect transmission to new susceptible hosts [52,53]. In general, this virus remains viable on the host's wool and survives for about one month after the lesions have healed, but it can also be carried by clinically normal sheep. Furthermore, ORFV was reported to survive in laboratory lesion tissue samples for up to 12 years [48], and for up to 17 years in natural environments with a dry climate [14], such as the commercial trade routes between Asia and Europe likely were in the 1800s. Within the type G clade, two sub-lineages are exclusive to the USA and Italy (Sardinia). In particular, the Italian allelic variants isolated in Sardinia may be considered representative of a group of lineages exclusive to the island that originated in loco in 2011. Indeed, contagious ecthyma was first reported on this island in the early 1990s, according to a booklet edited by the Zooprophylactic Institute of Sardinia (Italy) as part of a health education project, and these recently differentiated strains might have diverged from the first founders under the selective pressure of the new island habitat. The molecular dating (2011) provided for the Sardinian type G ORFV genomes cluster is consistent with the values (years 2009 and 2012) obtained by Coradduzza et al. [22] for strains isolated from Sardinian goats on the basis of the ORFV VIR gene. Our results suggest that the ORFV type S genomes clade is 31 years younger than type G and differentiated in the year 1796. This finding is consistent with the first report of contagious ecthyma in European (maybe Danish) sheep by Steeb in 1787 [2], suggesting that the early form of ORFV associated with the development of contagious ecthyma in sheep originated in northern Europe. However, with the only exception being a Sardinian, likely ancestral strain (GenBank # ON691522), all the type S genomes included in the present study share a common ancestor which dates back to 1828. This latter molecular dating may correspond to the beginning of the ORFV expansion across Europe and Asia, and it matches the first detailed description of contagious ecthyma in European sheep provided as early as 1890 by Wallay [9]. In this case, considering the young age of the Asian (Chinese) cluster (year 1953) of type S genomes, ORFV may have taken about a century (or even less) to spread from Europe toward Asia.
This evolutionary scenario is consistent with the extinction of most of the early European ORFV strains: only those strains that adapted best to the habitat are the ancestors of the genomes that expanded at the end of the 19th century and, later in the 1920s, spread across the whole continent, as suggested by several reports of contagious ecthyma in many countries [52][53][54][55]. The Sardinian ancestral type S strain (GenBank # ON691522) was isolated in 2021 from a lamb belonging to the Sarda sheep breed that lived on a farm in the southern part of the island. In particular, according to personal communications obtained by these authors from the farm's owners, new animals were never introduced into their livestock, and parent animals always either lived on the same farm or came from neighboring areas. In such a context, the Sardinian Sarda sheep breed was selected on the island in 1927/28 (in particular, in 1928 the first rams and the first ewes were registered in the Herd Book; they were owned by the itinerant Chair of Agriculture of Cagliari directed by Professor Francesco Passino) from animals arriving from the European mainland. Therefore, the ancestral ORFV Sardinian strain isolated in the present study could be a direct descendant of the early, ancient ORFV strains that probably originated in northern Europe at the end of the 18th century and then became extinct. This likely uncommon form of ORFV may be present in Sardinia simply by chance, through a typical effect of genetic drift, and may have survived on the island because, after its arrival, it became well adapted to the new environment. In such a context, this ORFV strain might be considered a relic of the first ORFV variants that originated early on the European mainland, and it is worthy of proper investigation to better understand the origins of this virus. Conversely, the other Sardinian ORFV strains included within the type S genomes clade of the present study were isolated in the northern part of the island, and they belong to a divergent genetic group that dates back to 1950 and includes viral allelic variants that originated in 1963 and 2001. The strains that differentiated in the 21st century may be exclusive to the island, specifically to its northern area. Another point of interest is that the molecular dating provided by the present research for the common ancestor of the ORFV genome strains isolated from sheep in Sardinia (year 1950) is quite consistent with the estimate reported by Coradduzza et al. [22] based on the ORFV VIR gene (year 1925). The small discrepancy could be related to the different molecular datasets used to perform the molecular dating (whole genome, 137,151 bp, vs. VIR gene, 382 bp). In general, the results obtained in the present study for ORFV strains isolated from animals living in northern and southern Sardinia suggest that different strains of this virus arrived on this Mediterranean island during distinct periods of the 20th century. These highly divergent allelic variants were introduced into different areas of the island, where various kinds of selective pressure might have originated new private and endemic ORFV variants. As a consequence, and as previously reported by Coradduzza et al. [22], the enclosed nature of Sardinia may have promoted strong genetic differentiation among viral strains from different geographic areas.
In the future, a higher number of ORFV whole genomes isolated from Sardinian hosts could help to shed further light on the evolutionary history of ORFV on the island and to provide predictive evolutionary models of the distribution of this virus for areas of the world with similar environmental and selective conditions.
Conclusions
In conclusion, further analyses on many ORFV whole genomes are needed to corroborate the multiregional geographic origin in the late 18th century of the ORFV strains infecting sheep (type S) and goats (type G), in Europe and Asia, respectively, as suggested in the present study.
Institutional Review Board Statement: Ethical review and approval were waived for this study, as it did not involve any animal experiments. Samples were collected from sheep and goats using standard procedures by the Sardinian Veterinary Services A.T.S. and submitted to the Experimental Zooprophylactic Institute of Sardinia for ORFV testing. Special authorization for the sampling activities was not necessary; this action is regulated by the Italian Ministry of Health and performed in the case of infectious diseases.
Informed Consent Statement: Written informed consent was obtained from the patient(s) to perform molecular analyses on biological tissue samples.
Data Availability Statement: The sequences of the ORFV whole genomes obtained during the present study are openly available in the GenBank nucleotide sequence database, accession numbers ON691519-ON691525. See Table A2 in Appendix A for more information on data accessibility.
Conflicts of Interest: The authors declare no conflict of interest.
* The contigs information row refers to the de novo assembly. ** The percentage of identity was inferred using the BLAST tool implemented in the NCBI Virus portal.
Table A2 (excerpt). Accession identifiers of the new ORFV genomes:
GenBank   BioSample      SRA run       Sample
ON691519  SAMN28405493   SRR19732916   S10
ON691520  SAMN28405494   SRR19732915   S15
ON691521  SAMN28405495   SRR19732914   S19
ON691522  SAMN28405496   SRR19732913
Mercury Biogeochemical Cycle in Yanwuping Hg Mine and Source Apportionment by Hg Isotopes
Although mercury (Hg) mining activities in the Wanshan area have ceased, mine wastes remain the primary source of Hg pollution in the local environment. To prevent and control Hg pollution, it is crucial to estimate the contribution of Hg contamination from mine wastes. This study aimed to investigate Hg pollution in the mine wastes, river water, air, and paddy fields around the Yanwuping Mine and to quantify the pollution sources using the Hg isotopes approach. The Hg contamination at the study site was still severe, and the total Hg concentrations in the mine wastes ranged from 1.60 to 358 mg/kg. The binary mixing model showed that the relative contributions of the mine wastes to the river water were 48.6% for dissolved Hg and 90.5% for particulate Hg. The mine wastes directly contributed 89.3% of the river water Hg contamination and were thus the main Hg pollution source in the surface water. The ternary mixing model showed that the contribution to paddy soil was highest from the river water, with a mean contribution of 46.3%. In addition to mine wastes, paddy soil is also impacted by domestic sources, with a boundary about 5.5 km from the river source. This study demonstrated that Hg isotopes can be used as an effective tool for tracing environmental Hg contamination in typical Hg-polluted areas.
Introduction
Mercury (Hg) is a highly toxic heavy metal that can travel long distances in the atmosphere and is therefore considered a global pollutant [1]. The toxicity of Hg depends on its chemical form. Elevated levels of Hg in the air are mostly attributed to industrial emissions, such as coal burning, Hg mining, gold mining, waste incineration, and cement production [2]. Methylmercury (MeHg) is neurotoxic, and it bioaccumulates and is ultimately biomagnified in the food web. Humans are exposed to MeHg mainly through the consumption of food [3][4][5]. The Minamata Convention went into effect in August 2017 to reduce the effects of Hg exposure on human health [6,7]. The Wanshan Hg Mine is considered the "capital of Hg" in China. Since 2002, mining activities have been banned at the site due to the depletion of Hg resources and the environmental implications [8,9]. However, long-term Hg mining activities have produced a large amount of mine wastes, which are an important source of Hg pollution in the surrounding atmosphere and surface water. Most Hg calcine piles are distributed at the source of the river. Under external forces, such as rainwater leaching, surface runoff, and wind erosion, the Hg from the mine wastes is released and enters the downstream water system [10][11][12]. Therefore, evaluating the ecological risks caused by Hg mines is crucial for local ecological restoration. The mine wastes from Hg mines can diffuse into the surrounding environment through water and atmospheric transportation. Paddy soils in Hg mining areas are more heavily contaminated by Hg than those in other areas [13]. Among the crops grown in the area, rice is of particular concern, because flooded paddy fields favor Hg methylation and the accumulation of MeHg.
Study Area
The Wanshan Hg Mine is located in Guizhou Province, southwest China (Figure 1a). Mineralization at the Wanshan Hg Mine is primarily associated with thin-layered, laminated, fine-grained dolomite or limestone beds of mid-Cambrian age. The wall rocks are intensively altered by silicification, dolomitization, calcification, subordinate bituminization, and pyritization [32].
The primary ore mineral in the Hg deposits is cinnabar, with lesser metacinnabar [33]. The Yanwuping Hg Mine (YMM) is one of the largest Hg mines in the Wanshan area. The Yanwuping Hg Mine area is hilly and karstic, and it is located at an altitude of 340-1010 m. The climate is subtropical humid, with an annual rainfall of 1200-1400 mm and an annual mean temperature of 15 °C [34]. The Yanwuping Hg Mine's historic Hg extraction facility and about 3.1 × 10⁵ m³ of mine wastes are located on the upper Wengman River [35]. In 2011, the government renovated the YMM and its tailing dams, but 1.3 × 10⁴ m² of the calcine deposits remained. The Wengman River (Figure 1b) originates in the YMM zone and belongs to the Yangtze River basin; it has an average summer depth of 1 m and is directly affected by the upstream mine wastes [36].
Figure 1. Mine wastes, surface-layer soils, and deep-layer soils were collected at the Yanwuping Mercury Mine. W1-16 and S1-6 were the water and paddy soil sampling sites, respectively. The total gaseous Hg sampling sites were the same as those for the mine wastes, soils, and paddy soils.
Sample Collection
Water and atmospheric samples were collected and monitored twice, in December 2021 and August 2022, due to the high seasonal variability of the various indicators in the river water and atmosphere. The interannual variability in the soil Hg is not significant; thus, soil and mine wastes samples were collected only once, in December 2021. There are two main types of mine wastes: calcines, the residues of Hg ore after high-temperature calcination, and waste rock, which is lower-grade surrounding rock [37]. Because most of the site has been restored, a total of 75 samples were collected from the surface layer and from below 30 cm, and the difference between the restored area and the bare area was compared and evaluated. Among them, 42 samples comprised surface soil, calcines, and waste rock, and 33 samples comprised deep soil, calcines, and waste rock. During the same period, soil samples from the paddy fields downstream of the YMM were collected. For each site, a final sample composed of 3-5 subsamples was collected using the diagonal sampling method (15 paddy soil samples; Figure 1b). The collected soil, calcine, and waste rock samples were kept in clean polyethylene bags, air-dried, ground, and passed through a 200-mesh sieve, followed by total Hg (THg) and THg isotopic analysis.
The YMM downstream rainwater and the surface water of the Wengman River were sampled for unfiltered THg, filtered dissolved Hg (DHg), DHg isotopes, particulate Hg (PHg) isotopes, anions, and cations. The THg and DHg isotope samples were acidified with ultrapure hydrochloric acid, the cation samples with distilled nitric acid, and the anion samples were left without acid. The water samples were sealed in double-layer polyethylene bags, sent back to the laboratory, protected from light, and stored in a refrigerator at 4 °C. The analytical tests were completed within 28 days. The total gaseous Hg (TGM) concentrations at the YMM and the downstream paddy field sites were monitored 48 times using a portable RA-915+ Zeeman Hg Analyzer (Lumex, Saint Petersburg, Russia). The Lumex instrument's detection limit was 0.5 ng/m³. The instrument displays the instantaneous TGM concentration every second, and each sampling point dataset represents an average monitoring time of at least 5 min in the field [37].
Analytical Methods
Approximately 0.1 g of the mine wastes and soil samples (dry weight) was digested with a mixture of HNO₃ and HCl (v:v = 1:3) for 2 h in a water bath at 95 °C. BrCl was added to the samples, which were then stored for 24 h to convert all forms of Hg to Hg²⁺, followed by the addition of acidic SnCl₂ to the solution to reduce the Hg ions to Hg⁰. The samples were analyzed using cold-vapor atomic absorption spectrometry (CVAAS, F732-S, Shanghai Huaguang Instrument Factory, Shanghai, China). The detection limit of this method was 0.1 µg/L. To determine the concentrations of THg and DHg in a water sample, BrCl was added to the sample and allowed to oxidize for 24 h. The Hg ions in the solution were then reduced to Hg⁰ using acidic SnCl₂. The samples were preconcentrated onto gold tubes and later tested using a cold-vapor atomic fluorescence spectrophotometer (CVAFS, Tekran 2500, Tekran, Toronto, Ontario, Canada). The detection limit of this method was 0.1 µg/L. The THg in the water passing through a 0.45 µm filter is defined as DHg; subtracting the DHg from the THg yields the concentration of PHg in the water [34]. The anions and cations were analyzed by automated Dionex ICS-90 ion chromatography (Dionex, Sunnyvale, CA, USA) and an inductively coupled plasma optical emission spectrometer (ICP-OES, Varian, Palo Alto, CA, USA), respectively [38]. The Hg isotopic composition was analyzed using a Neptune Plus MC-ICP-MS (Thermo Fisher Scientific, Waltham, MA, USA) at the State Key Laboratory of Environmental Geochemistry, Institute of Geochemistry, Chinese Academy of Sciences, following the method described by Yin et al. [39]. The total soluble Hg (TSHg) of the Hg mine wastes was extracted using a leaching experiment, and its Hg isotopes were tested along with the digested soil samples [37].
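The THg/DHg/PHg bookkeeping defined above is simple enough to express directly; the following sketch applies the definition PHg = THg − DHg, with the example pair chosen to be close to the River Water No. 1 values reported in the Results (it is not a measured sample).

```python
def particulate_hg(thg_ng_l: float, dhg_ng_l: float):
    """PHg = THg - DHg (ng/L), with DHg measured on the 0.45 um filtrate.
    Returns the PHg concentration and the PHg:THg proportion."""
    phg = thg_ng_l - dhg_ng_l
    return phg, phg / thg_ng_l

phg, frac = particulate_hg(1250.0, 37.5)  # illustrative values only
print(f"PHg = {phg:.0f} ng/L ({frac:.0%} of THg)")  # PHg = 1212 ng/L (97% of THg)
```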
To ensure the minimum Hg concentration required for the DHg isotope analysis of the aqueous samples, each filtered water sample was pre-enriched into 5 mL of a 40.0% aqua regia absorbent solution (v:v, HNO₃:HCl = 2:1), following the method established by Li et al. [40]. For the Hg isotopes of the PHg in the water samples, 2-5 L of water was filtered through a high-temperature purified Teflon membrane and freeze-dried. The Hg in the membrane was extracted into 5 mL of a 40.0% reverse aqua regia absorbent solution using a tubular muffle furnace [41].
Hg Isotopes Analysis
The Hg isotopic composition was reported using the notation presented by Blum and Bergquist (2007). Mass-dependent fractionation (MDF) is expressed as delta (δ), calculated as follows:
δxxxHg (‰) = [(xxxHg/198Hg)sample/(xxxHg/198Hg)NIST-3133 − 1] × 1000 (1)
where xxx is 199, 200, 201, 202, or 204. Mass-independent fractionation (MIF) is expressed as "Δ", and it was calculated using the following equations:
Δ199Hg = δ199Hg − 0.2520 × δ202Hg (2)
Δ200Hg = δ200Hg − 0.5024 × δ202Hg (3)
Δ201Hg = δ201Hg − 0.7520 × δ202Hg (4)
In this study, a binary mixing model was used to calculate the two sources of the DHg of River Water No. 1. The calculations were performed using Equations (5) and (6) [1,42,43]:
δ202Hg₃ = δ202Hg₁ × F₁ + δ202Hg₂ × F₂ (5)
1 = F₁ + F₂ (6)
where F represents the percentage of the pollution source, subscript 1 represents the TSHg, subscript 2 represents the mountain spring water DHg, and subscript 3 represents the River Water No. 1 DHg. When calculating the two sources of the River Water No. 1 PHg using the binary mixing model, subscript 1 represents the Hg mine wastes, subscript 2 represents the mountain spring water PHg, and subscript 3 represents the River Water No. 1 PHg. The fractions of Hg in the paddy soil derived from rainwater sources, river water sources, and geological background sources were calculated using a three-end-member mixing model as follows:
δ202Hgsoil = δ202Hgrain × Frain + δ202Hgriver × Friver + δ202Hgnat × Fnat (7)
Δ199Hgsoil = Δ199Hgrain × Frain + Δ199Hgriver × Friver + Δ199Hgnat × Fnat (8)
1 = Frain + Friver + Fnat (9)
where the subscripts rain, river, and nat represent rainwater sources, river water sources, and geological background sources, respectively, and Frain, Friver, and Fnat represent the percentages of the rainwater, river water, and geological background sources, respectively.
Data Analysis
Statistical analysis of the data, including means, standard deviations, and t-tests, was performed using IBM SPSS Statistics 26.0 (IBM, Armonk, NY, USA) and Microsoft Excel 2019 (Microsoft, Redmond, WA, USA) (statistical significance: p < 0.05). Origin 2021 (OriginLab, Northampton, MA, USA) was used for the graphical presentation of the data, and ArcMap 10.7 (ESRI, Redlands, CA, USA) was used to plot the spatial distributions by inverse distance weighting.
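Equations (5)-(9) amount to small linear systems, so the source fractions can be computed directly. Below is a minimal sketch (not the authors' code) of both solvers; the worked check uses the rounded δ202Hg values reported in the Results and therefore reproduces the published DHg contribution only to within rounding.

```python
import numpy as np

def binary_mixing(d202_1, d202_2, d202_3):
    """Eqs. (5)-(6): fraction F1 of end member 1 in a two-source mixture,
    using d202Hg as the single tracer; F2 = 1 - F1."""
    f1 = (d202_3 - d202_2) / (d202_1 - d202_2)
    return f1, 1.0 - f1

def ternary_mixing(d202_ends, D199_ends, d202_mix, D199_mix):
    """Eqs. (7)-(9): fractions of three end members (e.g., rain, river,
    geological background) from the d202Hg and D199Hg of the mixture."""
    A = np.array([d202_ends, D199_ends, [1.0, 1.0, 1.0]])
    b = np.array([d202_mix, D199_mix, 1.0])
    return np.linalg.solve(A, b)  # F_rain, F_river, F_nat

# Worked check with the rounded values from the Results (permil):
# mine-waste TSHg = -0.90, spring-water DHg = -1.57, River Water No. 1 DHg = -1.25
f_waste, f_spring = binary_mixing(-0.90, -1.57, -1.25)
print(f"mine wastes: {f_waste:.1%}, spring water: {f_spring:.1%}")  # ~47.8% / ~52.2%
```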
Hg Pollution in Mine Wastes
Considerable variation was observed in the THg concentrations of the YMM mine wastes. Except for the highest Hg concentration of 1.98 × 10⁴ mg/kg, found in the surface mine wastes, the THg concentrations in the remaining surface samples showed a geometric mean of 38.4 mg/kg, with a range of 1.60-358 mg/kg. For the deep mine wastes, the THg concentrations showed a geometric mean of 46.8 mg/kg, with a range of 14.5-1.07 × 10³ mg/kg. The considerable variation in the calcine THg concentrations may be attributed to the different retort furnaces used at the YMM. As the early smelting methods were not very advanced, inadequate ore burning resulted in low Hg recovery and a high Hg concentration in the calcines. The advancement of smelting technology led to adequate ore roasting, increased Hg recovery (≥95.0%), and a lower Hg concentration in the calcines [47]. The THg concentrations in the deep mine wastes analyzed in this study were higher than that of the local bedrock (0.35 mg/kg) [32] and similar to those of a previous study also conducted at the Wanshan Hg Mine (geometric mean of THg concentrations: 49.0 mg/kg; THg concentration range: 4.15-825 mg/kg) [33].
The THg concentrations in 38.5% (15/39) of the YMM mine wastes samples exceeded the soil pollution risk screening value for the second category of construction land (38.0 mg/kg) [48], and in 17.9% (7/39) of the samples, the THg concentrations exceeded the corresponding soil pollution risk control value (82.0 mg/kg) [48]. This demonstrates that a significant proportion of Hg persists even after the high-temperature smelting of Hg ore [29]. The THg concentrations in the soils covered during restoration showed a geometric mean of 7.70 mg/kg, with a range of 1.68-139 mg/kg. The THg concentrations in the surface samples and the deep samples correlated significantly (p < 0.05), indicating that the surface soil has been polluted by the calcines in the lower layer. The soil THg concentrations were much higher than the agricultural land soil pollution risk control value (4.00 mg/kg, 6.5 < pH ≤ 7.5) [49], and the geometric mean is 70 times higher than the Guizhou Province soil background value of 0.110 mg/kg [50]. A comparison of the calcine area before and after the YMM restoration is shown in Figure 2a,b. The distribution of the Hg pollution in the surface and deep layers reveals that the most serious Hg pollution occurs in the exposed calcine areas. The mine wastes at the YMM are still the primary source of Hg pollution in the surrounding ecosystem. The exposed calcines seriously impact the local ecological environment by continually releasing Hg into the atmosphere, entering surface water bodies, and leaching into the downstream farming soil [8,33].
Atmospheric Hg
The spatial distribution of the TGM at the YMM showed significant variations (Figure 3a,b). In wintertime, the TGM concentrations averaged 24.1 ± 6.90 ng/m³, with a range of 10.1-45.0 ng/m³. In summertime, the TGM concentrations averaged 153 ± 129 ng/m³, with a range of 43.3-700 ng/m³.
The TGM concentrations in the exposed calcine areas were found to be the highest in both winter and summer, and the summer values were much higher than the winter values (winter: 45.0 ng/m³; summer: 700 ng/m³), while the TGM concentrations at the restored sampling point were much lower (winter: 23.6 ng/m³; summer: 153 ng/m³). The results show that mine wastes are still an important emission source of atmospheric Hg pollution and that remediation measures can effectively reduce the Hg emission flux at the interface between the mine wastes and the air [51]. Compared with other Hg mining areas in China, the TGM concentrations at the YMM (43.3-700 ng/m³) were much higher than those in the Xunyang (7.40-410 ng/m³) and Wanshan (13.5-309 ng/m³) Hg mining areas [33,52]. However, the concentrations were much lower than those in the Xiushan Hg mining area (29.0-4.21 × 10⁴ ng/m³) [53]. The mean summer concentration of TGM was three times the air quality reference standard of 50.0 ng/m³ set by the Ministry of Environmental Protection of China, and it may pose a potential risk to local residents [54]. Therefore, the Hg emissions from mine wastes should be strictly controlled to reduce the environmental risks. Previous studies have reported that the TGM released from pollution sources can settle into paddy soil after migration [55,56]. The TGM monitoring in the downstream paddy fields revealed the following trends (Figure 3c). The TGM concentrations gradually decreased within 5.5 km of the YMM in both winter and summer (winter: down to 5.79 ng/m³; summer: down to 21.4 ng/m³). However, the TGM concentrations were still higher than the global background (1.50-1.60 ng/m³) [33,57]. The TGM concentrations increased considerably at around 5.5 km, with the highest value of 68.7 ng/m³ in summer, and then gradually decreased with increasing distance. The increase in the TGM at around 5.5 km coincides with the densely populated Liulongshan Township and points to domestic Hg sources, as discussed below [37,60].
River Water Hg Pollution
The highest Hg concentration was found at the W1 site near the YMM (Figure 4a,b). The THg concentration in the winter surface water at this site exceeded the limit of 1000 ng/L stipulated by China's Class V surface water environmental quality standard [61], indicating the direct influence of the Hg mine wastes. In order to reduce the impact of the upstream calcine leachate on the downstream water system, Xu et al. [36] designed and built a weir 1.5 km away from the YMM, which can intercept 40.4% of the THg per year and significantly reduce the THg concentrations in the river. The upstream calcine leachate is mainly composed of PHg. In this study, the proportion of PHg to THg in the water flowing through the weir decreased by 76.5% in winter and 52.9% in summer, indicating removal ratios higher than those presented by Xu et al. [36]. The difference between the two studies might be attributed to the flow of the river water, as previous studies have reported that water flow is the main factor that influences the transport and migration of Hg [53,62].
In this study, the average THg concentration in the summer samples (34.4 ng/L) was lower than that in the winter samples (101 ng/L). This might be because the summer samples were collected after a heavy rain event, when the erosive and leaching effects of the rainwater were no longer significant; moreover, the river flows in summer were higher than those in winter, so that dilution resulted in lower Hg concentrations in the summer samples. However, the highest concentration still exceeded the threshold of 100 ng/L set by China's Class III surface water environmental quality standard [61]. The above results prove that the weir can indeed cause the particulate matter to settle, because the water flow slows down and the suspension time increases, thereby reducing the Hg pollution downstream. However, Xu et al. [36] only monitored before and after the weir and did not set sampling points farther downstream on the Wengman River. The results of this study show that at 4-5 km downstream, the THg concentrations decreased to 10.3-11.3 ng/L, as a large amount of the PHg had settled. The proportion of DHg to THg increased to 55.4-98.9%, confirming the PHg sedimentation effect. The Hg concentrations at 4-5 km were close to the mean concentration of 7.09 ng/L in the tributaries, which indicates the baseline concentration of the surface water in this area. Except for W14, the water Hg concentrations in the tributaries in this study were similar to those of Qiu et al. [35] (tributaries: 3.00-17.0 ng/L). After 5.5 km, the Hg concentration in the river water gradually increased, but it did not exceed the limit of 100 ng/L stipulated by China's Class III surface water environmental quality standard [61]. The Hg concentration at 6.5 km in the tributary (sample W14) in wintertime reached 85.0 ng/L, exceeding the 50.0 ng/L limit stipulated by China's Class II surface water environmental quality standard [61]. This indicated that the surface water after 5.5 km may be impacted by other, external sources of Hg.
In order to further elucidate the contribution of both to the excess Na + , this study used the molar ratio bivariate plots of the Na + -normalized Ca 2+ and Mg 2+ and Na + -normalized Ca 2+ and HCO3 − distributions ( Figure S1) to identify the contribution of rock weathering to the ion source in the river [67,68]. As shown in Figure S1, the ionic composition of the river water in the study area was mainly located near the weathered end element of the carbonate rock, with a trend towards the weathered end element of the silicate rock. This indicates that the contribution of silicate rocks to river water ions is small. Therefore, this study concluded that the increase in Na + concentration relative to the Cl − concentration after 5.5 km from the Wengman River may be caused by domestic pollution [69,70]. This domestic pollution may come from domestic This study analyzed the anions and cations of the Wengman River, and it found that the Na + and Cl − concentrations increased significantly after 5.5 km in both winter and summer. The increase in the Na + concentration was much higher than that of Cl − (Figure 4c). Figure 4d shows that the ratio of Cl − :Na + in the sample at 1 km was 1:1, indicating that it was mainly derived from the dissolution of evaporite rock. However, the ratio of the water Cl − :Na + downstream gradually fell below the 1:1 ratio line. The ratio of Cl − and Na + ranged from 0.232 to 0.876, which indicates the contribution of sources other than the dissolution of evaporite rocks [63,64]. Because Liulongshan Township is 5.5 km away, with a dense population, domestic activities have significant impacts on the water chemistry [65]. Cl − is not affected by physical, chemical, or biological processes, and it is a good indicator of anthropogenic activities, such as the use of agricultural fertilizers, animal manure, and domestic sewage [66]. The Wanshan area belongs to the karst landform, and the main rock type is carbonate rock. The Na + concentration in the surface water is relatively low. The increase in Na + relative to Cl − may be due to silicate weathering (e.g., plagioclase) and the effect of the input from domestic pollution sources. In order to further elucidate the contribution of both to the excess Na + , this study used the molar ratio bivariate plots of the Na + -normalized Ca 2+ and Mg 2+ and Na + -normalized Ca 2+ and HCO 3 − distributions ( Figure S1) to identify the contribution of rock weathering to the ion source in the river [67,68]. As shown in Figure S1, the ionic composition of the river water in the study area was mainly located near the weathered end element of the carbonate rock, with a trend towards the weathered end element of the silicate rock. This indicates that the contribution of silicate rocks to river water ions is small. Therefore, this study concluded that the increase in Na + concentration relative to the Cl − concentration after 5.5 km from the Wengman River may be caused by domestic pollution [69,70]. This domestic pollution may come from domestic sewage, domestic waste (such as batteries, thermometers, pigment and paint residues, fluorescent lamps, and so on), etc. [59,71]. Source Apportionment by Hg Isotopes The river water THg increased significantly when it flowed through the Hg mining area (Figure 4a,b), which is consistent with previous literature [9,72]. 
Source Apportionment by Hg Isotopes
The river water THg increased significantly when the river flowed through the Hg mining area (Figure 4a,b), which is consistent with the previous literature [9,72]. In winter, the δ202Hg and Δ199Hg values in the downstream river water samples averaged −0.29‰ ± 0.30‰ (−0.71 to 0.11‰, n = 7) and −0.02‰ ± 0.07‰ (−0.12 to 0.06‰, n = 7), respectively (Table S1). Mercury isotopes were used to trace the source of Hg in the upstream water of the Wengman River. The results showed that the main contributing sources included the Hg mine wastes and the mountain spring water. This study assumes that the Hg from the mine wastes and the mountain spring water enters the River Water No. 1 sampling site in a rapid mixing process and that no significant MDF or MIF occurs during this process. The pollution sources of the DHg in the River Water No. 1 sample mainly include the TSHg from the Hg mine wastes and the DHg from the mountain spring water. For the TSHg of the Hg mine wastes, the DHg of the mountain spring water, and the DHg of River Water No. 1, the δ202Hg values were −0.90‰, −1.57‰, and −1.25‰, respectively, and the Δ199Hg values were all close to zero. Many previous studies have demonstrated the usefulness of end-member mixing models for Hg source tracking in water environments [73,74]. In this study, the relative contributions of the different sources to the River Water No. 1 DHg were calculated using the binary mixing model. The relative contribution of the mine waste TSHg to the DHg was 48.6%, and that of the mountain spring water was 51.4%. The pollution sources of the PHg in River Water No. 1 mainly included the Hg mine wastes and the mountain spring water. The δ202Hg values of the Hg mine wastes, the PHg of the mountain spring water, and the River Water No. 1 PHg were −0.35‰, −1.78‰, and −0.48‰, respectively, and the Δ199Hg values were all close to zero. The observed Δ199Hg values in the Hg mine wastes are consistent with previous studies [29,75,76]. In this study, the relative contributions of the two pollution sources to the River Water No. 1 PHg were calculated using the binary mixing model. The relative contribution of the Hg mine wastes was 90.5%, and that of the mountain spring water was 9.50%. The THg in River Water No. 1 was 1.25 × 10³ ng/L, of which DHg accounted for 3.00% and PHg for 97.0%. It was calculated that the Hg mine wastes directly contributed 89.3% of the river Hg pollution at the No. 1 site, indicating that the erosion of Hg mine wastes by runoff is the main process behind the Hg pollution in the rivers near the Hg mine. This shows that the upstream water of the Wengman River is seriously polluted with Hg. The government needs to remediate the mine wastes left at the site and reinforce the tailings dam, which would reduce the Hg pollution in the downstream river and the health risk to local residents.
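As a consistency check (our arithmetic, using the rounded percentages given above), the overall contribution of the mine wastes to the River Water No. 1 THg follows by weighting the two binary-model results by the DHg and PHg shares of the THg:

F(wastes, THg) = 0.030 × 48.6% + 0.970 × 90.5% ≈ 1.5% + 87.8% ≈ 89.3%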
Paddy Soil Hg Pollution

The THg concentrations in the paddy soil downstream of the YMM averaged 3.58 ± 1.82 mg/kg, with a range of 1.49-8.51 mg/kg. Compared with other Hg mining areas, the paddy soil THg concentrations at the YMM were lower than those in China's Xunyang (1.30-750 mg/kg), Xiushan (0.45-68.0 mg/kg), and Wanshan (0.50-188 mg/kg) Hg mining areas [52,53,77]. However, they were still much higher than the agricultural soil pollution risk screening value (0.60 mg/kg, 6.5 < pH ≤ 7.5) and the Guizhou soil Hg background value (0.110 mg/kg) [49,50]. These results show that the downstream paddy soils are still seriously polluted by THg, indicating serious ecological risks. It is therefore critical to identify the sources and contributions of the Hg pollution in the paddy soil so that preventive measures can be taken to control the Hg release. The obtained results provide a reliable theoretical and scientific basis for the treatment and safe utilization of Hg-contaminated soil. The trends of the soil THg in the downstream paddy fields in this study were not consistent with those presented by Xu et al. [53], who reported that the soil THg concentrations tended to decrease with increasing distance from the Hg mining area. In this study, a different trend was observed: the THg concentration decreased up to the 5.5 km distance, but beyond 5.5 km the concentrations at sites S4 and S5 increased significantly. The variation in the THg concentration in the downstream paddy soil is consistent with that of the TGM (Figure S2); moreover, a significant correlation was found between the TGM and the paddy soil THg (p < 0.05). This indicates that atmospheric dry and wet deposition play a vital role in the Hg pollution of the paddy soil [35]. The paddy fields are located along the banks of the Wengman River, and the local people have long been using the Hg-contaminated river water for irrigation, which could also be a key source of the Hg pollution in the paddy soil [18,78].

Source Apportionment by Hg Isotopes

Previous studies have shown that soils can preserve the isotopic fingerprints of Hg pollution sources [42,79,80]. The paddy fields downstream of the YMM are located on both sides of the Wengman River. There was a significant correlation between the TGM and the THg in the paddy soil (p < 0.05), indicating that dry and wet atmospheric deposition is an important source of Hg pollution in the paddy soil; the contribution of wet deposition is of key importance, as shown in Figure 5a,b. Pribil et al. [47] stated that the Hg in the soils to the north and east of a mining area may be the result of atmospheric deposition, the geological background, and gaseous Hg emissions from calcine piles during Hg processing. The irrigation of paddy fields with Hg-contaminated river water is also one of the key sources [18]. The Rain Water No. 1 sample was collected 2 km away from the mining area; due to this proximity, it is strongly affected by the Hg mine wastes.
As shown in Figure 5a, its Hg isotope composition was δ202Hg = −0.51‰ ± 0.05‰ and Δ199Hg = −0.10‰ ± 0.04‰. The Rain Water No. 2 sample was collected at a distance of 6 km from the mining area and was less affected by the Hg mine wastes. The Δ199Hg in this rainwater was positive, similar to the Hg isotopic values of rainwater in Guiyang (δ202Hg: −0.44~−4.27‰, Δ199Hg: 0.19~1.16‰) [81]. Its δ202Hg (−0.34‰ ± 0.05‰) and Δ199Hg (0.30‰ ± 0.04‰) values are presented in Figure 5b. The mean values of the δ202Hg and Δ199Hg in the paddy soil were −0.73‰ ± 0.11‰ (−0.91~−0.56‰, n = 12) and 0.03‰ ± 0.05‰ (−0.05~0.10‰, n = 12), respectively. According to Song et al. [6], the mean values of the δ202Hg and Δ199Hg in the paddy soils of the Wanshan area were −1.26‰ ± 0.06‰ (−1.30~−1.21‰, n = 2) and −0.07‰ ± 0.10‰ (−0.14~0.00‰, n = 2), which were used as the background values in this study (Table S2). As shown in Figure 5a,b, the combined characteristics of the Δ199Hg and δ202Hg indicate that the paddy soil is a ternary mixture of different sources: rainwater, river water, and the geological background. Therefore, the δ202Hg and Δ199Hg of the corresponding point samples were used to trace the sources of the Hg pollution in the paddy soil, and the relative contributions of the three sources were calculated using a ternary mixing model, as shown in Figure 5c. The results showed that the exogenous input of Hg pollution to the paddy soil can be divided into two parts. The first part is the area contaminated by Hg mining activities. Within 5.5 km of the YMM, the river water is mainly polluted by Hg mine wastes, and the paddy soil pollution at 2 km is mostly attributed to river water, reaching 86.0%. The contribution of river water ranged over 28.0~86.0%, that of rainwater over 7.00~25.0%, and that of the geological background over 7.00~60.0%. With increasing distance, the Hg contribution of the river water gradually decreased, while the contribution ratios of the rainwater and geological background sources gradually increased. The paddy fields in this range were mainly contaminated by the Hg mine. The second part is the domestic Hg-polluted area. Beyond 5.5 km from the YMM, the contribution of the river water to the paddy soil increased again, to 62.0% at 6 km, which is consistent with the increase in the Hg concentrations in the river water and indicates the influence of domestic pollution sources. In this part, the contribution ranges were 30.0~62.0% for river water, 4.00~17.0% for rainwater, and 34.0~53.0% for the geological background. With further increasing distance, the contribution ratio of the river water gradually decreased and those of the rainwater and geological background sources increased again, indicating that the paddy soil pollution in this range was mainly attributed to domestic pollution sources.
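The ternary apportionment reduces to a 3×3 linear system: two isotope balance equations (δ202Hg and Δ199Hg) plus mass closure. Below is a minimal numpy sketch; the end-member and soil values are the section means quoted above, so the resulting fractions are illustrative only and differ from the per-site results reported in Figure 5c.

```python
import numpy as np

# End-member signatures (d202Hg, D199Hg, both in permil), mean values from the
# text: downstream river water, Rain Water No. 2, and the geological background.
river, rain, geo = (-0.29, -0.02), (-0.34, 0.30), (-1.26, -0.07)
soil = (-0.73, 0.03)  # mean paddy-soil signature

# Rows: d202Hg balance, D199Hg balance, mass closure f_river + f_rain + f_geo = 1.
A = np.array([[river[0], rain[0], geo[0]],
              [river[1], rain[1], geo[1]],
              [1.0,      1.0,     1.0]])
b = np.array([soil[0], soil[1], 1.0])

f_river, f_rain, f_geo = np.linalg.solve(A, b)
print(f"river {f_river:.0%}, rain {f_rain:.0%}, background {f_geo:.0%}")
# With these mean inputs: roughly river 33%, rain 23%, background 44%;
# the per-site values in the paper span 28-86%, 7-25%, and 7-60%, respectively.
```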
Conclusions

The YMM is still the primary source of Hg pollution in the surrounding ecosystem, especially through the significant atmospheric Hg emissions from the exposed calcines area. Peak THg concentrations were observed in the upstream river, and the source apportionment of the DHg and PHg in the river water using a binary mixing model demonstrated that mine wastes are the main source of Hg in the surface water. The TGM concentrations downstream showed distinct spatial distribution characteristics, indicating large Hg emissions from the Hg mine wastes and from unidentified domestic pollution sources. This study calculated the contributions of the river water, atmospheric wet deposition, and the geological background to the paddy soil Hg pollution using Hg isotopes, and these contributions were corroborated by the spatial distributions of the river water Hg, the river water anions and cations, the TGM, and the paddy soil Hg. The study shows that the paddy soil within 5.5 km of the mine is mainly polluted by the Hg mine, while domestic sources are the main contributors beyond 5.5 km. This study provides an important scientific basis for the source control of Hg in the surface water and paddy fields of Hg mining areas. Such control of the Hg emissions from mine wastes to the river water and atmosphere would ultimately reduce the Hg bioaccumulation in agricultural crops and the associated human health risks in Hg-polluted areas.

Conflicts of Interest: The authors declare no conflict of interest.
Changes of temporomandibular joint position after surgery first orthognathic treatment concept

Orthognathic surgery treatment (OGS) after orthodontic treatment of dentofacial deformities is a widely performed procedure, often involving a bilateral sagittal split osteotomy (BSSO). Positioning of the condyle during this procedure is a crucial step for achieving optimal functional and anatomical results: intraoperatively malpositioned condyles can compromise the postoperative result and the patient's well-being. Changes of the condylar position during OGS procedures and their effects on the temporomandibular joint are the subject of ongoing scientific discussion. However, to date, no study has investigated the role of condyle position in the surgery first treatment concept. The aim of this study was to investigate the influence of OGS on the three-dimensional position of the condyle in the joint in a surgery first treatment concept without a positioning device, and to record the change in position quantitatively and qualitatively. Analysis of our data indicated that OGS in the surgery first treatment concept has no significant effect on the position of the condyle or the anatomy of the temporomandibular joint.

This study aims to clarify whether there are significant changes in condyle position after free-hand repositioning of the condyle during OGS, since such changes may affect the function of the TMJ and impair the masticatory function of patients after surgery. We postulate that free-hand repositioning does not alter the condyle position after surgery.

Materials and Methods

This prospective study was conducted after approval by the Ethics Committee of the Medical University of Vienna, Austria (1449/2013) and in compliance with the Declaration of Helsinki. Written informed consent was obtained from all patients included in the study. The study was carried out on CT scans of patients >18 years of age suffering from anterior open bite. The patients were operated on between June 2013 and August 2014 at the Department of Oral and Maxillofacial Surgery of the Medical University of Vienna. All patients underwent orthognathic surgery before orthodontic treatment, and all operations were conducted by experienced surgeons. Maxillary treatment was performed as a Le Fort I osteotomy, using four L-shaped mini-plates with four screws each for fixation. Mandibular treatment was done by the standard bilateral sagittal split osteotomy (BSSO) technique according to Hunsuck and Epker [18,19]. All condyle-bearing segments were positioned free-hand without positioning devices, using three bicortical set screws with lengths from 12 mm to 16 mm for rigid fixation.

Data Collection. A CT scan of the skull (Philips Brilliance 64, Amsterdam, Holland; technical data in Table 1) was performed before surgery according to a standardized protocol, with the patient in natural head position, gently biting in centric relation, with the lips resting in a relaxed position. All data were saved in the Vienna General Hospital (Allgemeines Krankenhaus Wien, AKH) Picture Archiving and Communication System (PACS) and later transferred to CD-ROM for data analysis. Each operation was performed according to a standardized protocol by two experienced surgeons using a traditionally fabricated surgical splint for guidance. Six months after surgery, another CT scan was performed according to the protocol described in Table 1.
The condylar position was evaluated bilaterally and compared between the pre- and postoperative CT scans with the program 3D Slicer (v4.4.0, http://www.slicer.org/) [20], reconstructing three-dimensional datasets for quantitative morphometric analysis. Only bone kernel data visible to the human eye (Table 2) were used for reconstructing a raw data set of the mandible. Data were thresholded such that Hounsfield unit (HU) values below 60 were projected in white and values above 80 HU in black. Alterations of the TMJ were quantified by determining preoperative-to-postoperative differences between previously defined anatomical landmarks on the right and left condyles (F1-F1′ and F3-F3′, the most medial and most lateral points of the condyles). In addition, the longitudinal axes of both condyles were determined, the distance between them was calculated, and the angle at their intersection (intercondylar angle) was measured. All distances are given in millimeters (mm) and all angles in degrees (see Fig. 1). CT scans obtained before and after surgery were analyzed according to the steps described above.

Statistical Analysis. Descriptive statistics were used to summarize the clinical characteristics of the study cohort. Assessed variables were the intercondylar distances (F1 to F1′; F3 to F3′) and the position of the condyles before and after surgery by means of an angle measured between the left (Co si.) and the right condyle (Co dext.). All data were exported from 3D Slicer 4.4.0 into MS Excel 15.6 for Mac for statistical calculation. We postulated that there is no significant difference in condyle position after surgery. The position and the anatomical shape of the condyle were analyzed before and after surgery in a three-dimensional manner as described earlier; changes in condylar position and shape after surgery were then compared statistically. First, we performed a Shapiro-Wilk test to assess whether the differences in condyle position before and after surgery were normally distributed. If so, we performed a paired t-test; if not, a paired Wilcoxon signed-rank test. Statistical significance was defined as a two-sided p-value < 0.05. Statistical analysis of the data was performed using the open source statistical programming environment R 3.1.1 [21].
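The original analysis was performed in R 3.1.1; the sketch below mirrors the same normality-gated paired comparison in Python/SciPy. The paired distances are hypothetical stand-ins for the exported landmark measurements, and the axis vectors passed to the angle helper are likewise invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post intercondylar distances (mm) for illustration only.
pre = np.array([79.2, 77.5, 80.1, 81.0, 78.3, 79.9, 80.4, 77.8])
post = np.array([80.0, 78.1, 80.9, 81.5, 79.0, 80.6, 81.0, 78.2])

diff = post - pre
_, p_normal = stats.shapiro(diff)  # Shapiro-Wilk normality test on the differences

if p_normal >= 0.05:
    # Differences compatible with normality -> paired t-test.
    stat, p = stats.ttest_rel(pre, post)
    test = "paired t-test"
else:
    # Otherwise a paired non-parametric alternative (Wilcoxon signed-rank).
    stat, p = stats.wilcoxon(pre, post)
    test = "Wilcoxon signed-rank test"

print(f"{test}: statistic = {stat:.3f}, two-sided p = {p:.4f}")

# Intercondylar angle: angle at the intersection of the two condylar
# longitudinal axes, given as direction vectors in the CT coordinate frame.
def intercondylar_angle(u, v):
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(intercondylar_angle(np.array([1.0, 0.3, 0.0]), np.array([-1.0, 0.3, 0.0])))
```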
Results

A total of 16 patients with malocclusion who underwent orthognathic surgery met the eligibility criteria for this study. All 16 patients were Caucasian, aged 17-35 years (mean age 26.1 ± 5.1 years). Eight were women (aged 24.8 ± 5.3 years, range 17.0 to 34.0 years) and eight were men (aged 27.3 ± 4.7 years, range 22.2 to 35.7 years). There was no difference in age between men and women (Welch two-sample t-test: t = −0.985, df = 13.723, p = 0.342). Analysis revealed no significant differences in the inter-condylar angle between bimaxillary and Le Fort I osteotomy patients: bimaxillary osteotomy patients showed an inter-condylar angle change of 3.3° ± 7.7°, whereas Le Fort I osteotomy patients showed a change of 2.8° ± 3.6°. Le Fort I osteotomy in comparison to bimaxillary osteotomy, and mandibular set-back in comparison to mandibular advancement, did not account for significant differences in inter-condylar angle change (see Table 3).

Inter- and extra-condylar distance. The distance between the most medial points of both condyles was measured before and after surgery to evaluate whether the inter-condylar distance was affected by the operation. The mean intercondylar distance was 79.0 mm before and 80.4 mm after surgery (see Fig. 2). We performed the same calculation for the most lateral points of the condyles; the mean distance was 114.3 mm before and 115.6 mm after surgery (see Fig. 3). The Shapiro-Wilk test revealed that the distances were not normally distributed (p = 0.0022). The paired t-test did not confirm significant changes in the lateral condylar distance (p = 0.2114). The angle between the condyles was measured at the intersection of the lines along the longitudinal axes of the condyles. The mean angle was 146.0 ± 14.8° before and 142.7 ± 16.0° after surgery (see Fig. 4). Since the inter-condylar angle did not follow a normal distribution (Shapiro-Wilk normality test: W = 0.90758, p = 0.0097), the angular change was compared pairwise before and after surgery using a paired Wilcoxon signed-rank test, which did not confirm significant changes in the angle between the condyles before and after osteotomy (V = 93, p = 0.2114).

Discussion

The mechanisms underlying pre- to postoperative condylar position changes have not yet been clarified. Hence, the aim of the present study was to examine the positional changes of the condyles in a group of 16 uniformly treated dentofacial deformity (DFD) patients with a follow-up of six months. Numerous studies have shown that the amount of change in condyle position varies between individuals and is influenced by numerous factors, such as the surgical procedure, the experience of the surgeon, and patient-specific factors. In the present study, however, no significant changes in the position of the condyle were measured six months after surgery. Our findings are supported by Tabrizi et al., who observed that the distance of the condyle to the superior point of the external auditory meatus in the coronal plane underwent significant movements compared to the preoperative position on both sides. Furthermore, they found that the condyle was displaced inferiorly in the sagittal plane one month after mandibular advancement with maxillary superior repositioning; it then moved superiorly to approximately its initial position. In the second measurement, a month after surgery, the condyle was displaced laterally in the sagittal plane and repositioned to its original position after nine months. In the third measurement, the condyle was displaced anteriorly one month after surgery and was then positioned more posteriorly than its initial position nine months after surgery. Changes in the sagittal plane were controlled by evaluating condylar changes relative to the superior point of the external auditory meatus in the coronal plane [23]. Our findings are also in line with a study by Chen et al., who found that the condyles tended to assume a concentric position in relation to the glenoid fossa three months after surgery and remained stable during the 1-year follow-up [24]. Contrarily, Harris et al. found that 8 weeks after SSO and mandibular advancement most cases showed displacement of the condyle medially, posteriorly, and superiorly, with medial angulation [25]. Although these findings are contrary to the results of our study, they might be explained by the fact that their measurements were carried out 4 months earlier than ours. Also contrary to our results, some studies demonstrated inward rotation of the condyle after BSSO [26-28]. These contrary results might be explained by different surgical techniques used during the operation, the use of different fixation systems after BSSO (rigid vs.
semi-rigid vs. non-rigid fixation), or by different parameters measured at different time points after surgery. Since our measurements were performed six months after surgery, we might not have detected positional changes of the condyle, as physiologic adaptive bone remodeling induced by the recovery of masticatory function might already have taken place [29]. Experimental and clinical studies have demonstrated a close relation between the condylar position in patients suffering from DFD and TMJ disorders [1-4,30-34]. Although OGS is a standardized and safe procedure, a correlation between OGS and TMJ symptoms has been shown in 2% of patients after surgery, especially in those already affected by TMJ disorders prior to surgery [7,34]. To prevent changes in TMJ position during surgery, several mechanical and computational devices have been developed that keep the condyles from moving away from their original position, addressing the correlation between OGS and TMJ symptoms [35,36]. In this study no condylar positioning device was used; instead, surgery was carried out by two experienced surgeons to prevent extended condylar movement during the procedure and to preserve the anatomical relation, thereby reducing the influence on condylar anatomy and TMJ function [37,38]. Our 3D analysis of condylar positional and volumetric changes shows that there is no significant change in condylar position or anatomy after surgery. This finding is supported by the current literature, which suggests that patients do not benefit from the use of condylar positioning devices [9,10,14,15,39]. In accordance with recent research, we can postulate that there is only little change in mandibular structure and function after OGS [32,33]. This is supported by other studies revealing that small positional changes of the condyle are not associated with early skeletal relapse [11]. A unique aspect of surgery before orthodontic treatment is the possibility to evaluate the preoperative position of the condyle and the TMJ in its natural relation, before any orthodontic action. Although all measurements were done in a three-dimensional manner, this study is still subject to inherent limitations. All CT scans are prone to evaluation errors of condylar changes after orthognathic surgery; these errors are related to slice thickness, window level and width, matrix size, and rendering technique [40]. Another limitation is the small sample size; moreover, in the surgery first treatment concept, orthodontic treatment after surgery might itself have an impact on the position of the condyle. A further limitation of our study is that this method of measurement showed changes of the intercondylar angle of 2.8 ± 3.6° in the Le Fort I-only patients, although their mandibles underwent no relevant changes; consequently, this has to be interpreted as measurement error or as an effect of orthodontic treatment.

Conclusion

The present study assessed the clinical significance of changes to the TMJ after OGS in patients with DFD. We were able to demonstrate that free-hand condyle positioning during orthognathic surgery has little effect on the natural condyle position in patients without prior orthodontic treatment. Further studies are needed to confirm the findings of this study.
Cross-layer performance control of wireless channels using active local profiles

To optimize the performance of applications running over wireless channels, state-of-the-art wireless access technologies incorporate a number of channel adaptation mechanisms. While these mechanisms are expected to operate jointly, providing the best possible performance for the current wireless channel and traffic conditions, their joint effect is often difficult to predict. To control the functionality of the various channel adaptation mechanisms, a new cross-layer performance optimization system is sought. This system should be responsible for the exchange of control information between different layers and the further optimization of wireless channel performance. In this paper, the design of a cross-layer performance control system for wireless access technologies with dynamic adaptation of protocol parameters at different layers of the protocol stack is proposed. The functionalities of the components of the system are isolated and described in detail. To determine the range of protocol parameters providing the best possible performance for a wide range of channel and arrival statistics, the proposed system is analyzed analytically. In particular, probability distribution functions of the number of lost frames and of the delay of a frame are derived as functions of first- and second-order wireless channel and arrival statistics, automatic repeat request and forward error correction functionality, and the protocol data unit size at different layers. Numerical examples illustrating the performance of the whole system and its elements are provided. The obtained results demonstrate that the proposed system provides significant performance gains compared to a static configuration of protocols.

I. INTRODUCTION

To optimize the performance of applications in wired networks it is often sufficient to control the performance degradation caused by packet forwarding procedures. Even though this is not a trivial task, when dealing with wireless networks we also have to take into account the performance degradation caused by incorrect reception of channel symbols at the air interface. These errors propagate to higher layers, often contributing substantially to end-to-end performance degradation. As a result, the air interface could be a 'weak point' in any end-to-end performance assurance model that might ever be proposed for IP-based wireless networks. To improve the performance of wireless channels, state-of-the-art wireless access technologies incorporate a number of advanced features, including multiple-in multiple-out (MIMO) antenna design, adaptive modulation and coding (AMC) schemes, different automatic repeat request (ARQ) and forward error correction (FEC) procedures, transport layer error concealment functionality, etc. Although implemented at different layers of the protocol stack, all these features aim at improving the performance of information transmission over wireless channels. To decide which protocol parameters provide the best possible performance for given traffic and channel conditions, wireless access technologies call for a novel design of the protocol stack that should now include cross-layer performance optimization capabilities. To optimize the performance of applications running over wireless channels, different layers of the protocol stack should be allowed to communicate with each other, exchanging control information.
This information should be used by a certain performance control entity to determine the set of protocol parameters providing optimized performance at any given instant of time. Depending on the state of the wireless channel and the traffic characteristics of an application, it should be possible to dynamically change protocol parameters at different layers to obtain the best possible performance for the given wireless channel and traffic at any given instant of time. To achieve this aim, a performance control system is needed. In this paper we propose a reactive performance control system responsible for the dynamic adaptation of protocol parameters at different layers. The functionality of each component of the system is discussed in detail. The protocol parameters include the error concealment capability of the FEC code at the physical layer; the ARQ scheme, the size of frames, and the buffer space at the data-link layer; and the rate of the active application. Depending on the current state of the wireless channel and the application in terms of first- and second-order bit error and frame arrival statistics, the proposed system determines the set of protocol parameters that results in the best possible performance. To determine the range of protocol parameters providing optimized performance for a wide range of channel and arrival statistics, the proposed system is analyzed analytically for the probability distribution functions of the number of lost frames and the delay of a frame. Numerical examples illustrating the performance of the system are provided. Our results demonstrate that dynamic error concealment techniques may provide significant performance gains compared to a static configuration of protocols. Using the proposed system, the quality of the wireless channel can be dynamically regulated, providing a truly best effort service over the air interface. The rest of the paper is organized as follows. The need for cross-layer interactions between protocols at the air interface is explained in Section II. The structure of the cross-layer performance control system for wireless channels is proposed in Section III, where the elements of the system are described in detail. Section IV provides the mathematical foundation of the proposed performance control system. Numerical examples illustrating the performance of the proposed system are provided in Section V. Conclusions are given in the last section.

A. Cross-layer interactions in the protocol stack

Both the ITU-T OSI abstract protocol model and the TCP/IP protocol model separate and isolate the functionalities of each layer of the protocol stack. In these models each layer is responsible for a certain set of functions, communicates directly with the same layer of a peer communication entity, and is usually unaware of the specific functions of other layers. Neither architecture allows direct communication of any kind between non-adjacent layers. Communications within the protocol stack are only allowed between adjacent layers using the so-called request-response primitives defined for service access points (SAPs). Higher layers use functions provided by adjacent lower layers. Although the layered design of the protocol stack has proven itself to be efficient in wired networks, it is often inappropriate for wireless networks. To optimize the performance of applications running over wireless channels, state-of-the-art wireless access technologies require a novel organization of the protocol stack at the air interface. Although interfaces between adjacent layers are still preferable, there is a need for direct interactions between non-adjacent layers.
In fact, the network layer and layers above it often need direct interfaces to the data-link layer for handover support. Another example concerns transmission parameters, including the transmission mode, channel coding, and data-link layer retransmissions, which must be related to application characteristics (e.g., type of information, source coding, etc.), network characteristics, user preferences, and context of use. In order to take decisions on traffic management, data-link layer protocols should be aware of the parameters of higher layers, including the network and transport layers, and vice versa. Future wireless access technologies can thus be expected to adopt an air interface protocol architecture with interactions among different layers. We define a cross-layer design of the protocol stack as a design that violates the layered structure of communication protocols. To date there have been a number of proposals for cross-layer communication. We usually distinguish between the following types of cross-layer interactions: creation of new interfaces, merging of adjacent layers, design coupling, and vertical calibration of parameters across layers [1]. In order to directly exchange information between non-adjacent layers at runtime, new interfaces can be created; the information can be exchanged in the upward and downward directions. Merging of adjacent layers refers to the joint definition and implementation of two or more adjacent protocols in the protocol stack. This technique avoids new interfaces at the expense of a more complicated implementation. Note that this approach is not inherently cross-layer but still violates the layered architecture of the protocol stack. With design coupling, no information is exchanged between non-adjacent layers at runtime; instead, two protocols are simply made aware of each other's operational parameters at the design phase. Vertical calibration of parameters across layers refers to the case when the parameters of protocols at different layers are adjusted at runtime such that a certain performance metric is controlled and optimized. This approach also requires new interfaces between non-adjacent layers. The common aim of all the abovementioned cross-layer communication schemes is to explicitly or implicitly exchange information between the layers of the protocol stack, whether at runtime or at the design phase. A detailed review of cross-layer design approaches can be found in [1]. Several examples of cross-layer design methodologies are discussed in [2].

B. Cross-layer signalling schemes

In order for non-adjacent layers to communicate with each other, a cross-layer signalling scheme is needed. This scheme should be responsible for the exchange of control information between different layers using appropriate interfaces. To date, a number of schemes for cross-layer signalling in the protocol stack have been proposed. We distinguish between in-band and out-of-band signalling. In order to communicate between TCP and the radio link protocol (RLP) in wireless IP-enabled networks, the authors in [3] proposed to use the wireless extension header (WEH) of the IPv6 protocol. The advantage of this method is that it makes use of IP data packets as in-band signalling for information exchange between the transport and the data-link layer. Another method was proposed in [4], where the authors proposed to use ICMP messages for communication between different layers of the protocol stack; a new message is generated whenever a certain parameter changes.
The common shortcoming of the two abovementioned approaches is that only a few layers can actually exchange information. A different 'network' approach was proposed in [5]: a special network service is introduced that gathers, stores, manages, and distributes information about the current parameters used at mobile hosts. Those protocols that are interested in a certain parameter can access this network service. This approach provides the cross-layer functionality via a 'third party' service. The usage of local profiles instead of remote network profiles was proposed in [6]. The concept is similar to the one proposed in [5]; the only difference is that the information is stored locally and there is no need to access it via the network, which results in low overhead and low delay. In [7] the concept of active local profiles is proposed. The principal difference compared to [6] is that active local profiles do not only store protocol parameters but also implement control procedures to optimize the performance of applications running over wireless channels. In [8] the authors proposed a dedicated cross-layer signalling protocol for communication between layers in the protocol stack. The major advantage of this protocol is that non-neighboring layers can exchange control information directly, without processing at intermediate layers. This approach, however, requires additional complexity to be introduced directly into the protocol stack. The authors in [8] provided a comparison between the abovementioned signalling schemes and advocated out-of-band signalling. They argue that the signalling propagation path across the protocol stack is not efficient due to the unnecessary processing of messages at intermediate layers. Additionally, the message formats provided by in-band signalling schemes are either not flexible enough for signalling in both the upward and downward directions or not optimized for the wireless environment, where the need may arise for a new parameter to be exchanged between non-adjacent layers. Finally, we note that a signalling scheme by itself does not provide any advantages for a communicating entity: the ultimate goal of all cross-layer signalling schemes is the optimization of protocol parameters at different layers to provide the best possible performance at any instant of time for any wireless channel and traffic conditions. Considering cross-layer signalling and the optimization of application performance jointly, one has to choose between distributed and centralized control of wireless channel performance. The in-band cross-layer signalling proposals [3], [4], [8] imply a distributed performance control strategy. According to those proposals, layers exchange their information, and this information should then be used by performance control entities implemented at each layer that participates in the information exchange and allows its parameters to be dynamically controlled. Such an approach requires modifications to be introduced at each layer of the protocol stack. It was pointed out in [9] that there are a number of problems associated with the distributed control strategy. Firstly, when the decision regarding parameter changes is taken independently at each layer, the resulting joint effect may not be straightforward. Secondly, the delay associated with information exchange between non-adjacent layers can be unacceptable.
Finally, to take appropriate decisions on changes of protocol parameters, a performance optimization subsystem must be implemented at each layer of the protocol stack that participates in performance control. According to the out-of-band signalling proposals [5], [6], layers export their current operational parameters to a certain external performance control entity via a predefined set of interfaces. This external entity not only stores the information of all layers but also optimizes performance using the controllable parameters of the various protocols and then distributes information on which protocol parameters should be used at the air interface. Thus, it constitutes an external intelligent cross-layer performance optimization system that incorporates the features of an out-of-band cross-layer signalling system. This method is centralized in nature and generally in accordance with the signalling schemes proposed in [6], [7]. Note that distributed control is also possible with an out-of-band signalling scheme.

C. The layered architecture and the cross-layer design

Any cross-layer design of the protocol stack violates the modular design concept, so we have to keep in mind the problems it may bring. As pointed out in [9], the layered structure of communication protocols has proven to be easily manageable in wired networks. In particular, it provides a modular system design that is important for understanding the operation of the whole system. At the development phase, system designers must specify the number of layers, the functionality each layer should provide, and the interfaces to adjacent layers. Moreover, the layered design of a system significantly simplifies its implementation and further manufacturing, allowing, for example, the reuse of components (e.g., protocols, interfaces). Indeed, the protocols of the system can be developed in isolation, assuming a certain set of services that a given protocol receives from the lower layer and provides to the higher layer. Cross-layer protocol design may result in a significant increase of complexity at the development and implementation phases. The functionality of the whole system may not be clearly understood due to a number of multi-layer loops. Additionally, since the modular design is no longer feasible, implementation and manufacturing costs can be high. A modification to any component of the system not only changes its own behavior but may also affect the performance of the whole system, and the results of this influence are often difficult to predict. Additional efforts are required to ensure the stability of the system. Summarizing, we conclude that there should be a rational trade-off between the layered architecture and the optimization of wireless channel performance using cross-layer design. Cross-layer performance optimization pursues short-term goals in terms of better performance for a given wireless access technology [9]; a clear and easily understandable layered architecture leads to long-term benefits, among which the low per-unit cost for a certain performance is one of the most important driving factors [9]. Thus, in dealing with cross-layer performance control we have to take architectural considerations into account, introducing as few cross-layer interactions as possible and keeping the performance control system as isolated from the protocol stack as feasible.
A. Related work

To optimize the performance of applications running over wireless channels, state-of-the-art wireless access technologies incorporate a number of advanced features, including multiple-in multiple-out (MIMO) antenna design, adaptive modulation and coding (AMC), automatic repeat request (ARQ) procedures, dynamic forward error correction (FEC), transport layer error concealment functionality, adaptive compression and coding for real-time applications, etc. These mechanisms affect the performance provided to applications differently, and their joint effect is often difficult to predict. Recently, cross-layer performance models have started to appear that evaluate the joint operation of various channel adaptation techniques. These frameworks provide a starting point for the cross-layer design of wireless channels, describing the joint performance of two or more channel adaptation mechanisms. The joint operation of AMC and ARQ was studied in [10], where the authors used a finite-state Markov chain (FSMC) to capture changes in the modulation and coding schemes. The performance of MIMO and AMC systems was studied in [11], [12], where the authors introduced the notion of the effective capacity of wireless channels. The joint operation of TCP congestion control and an ARQ protocol was considered in [13], [14]. The performance gain provided to real-time and non-real-time applications by a MIMO system was shown analytically in [15]. Liu et al. considered the performance of TCP with AMC implemented at the physical layer, a finite queue length, and truncated ARQ at the data-link layer [16]. Note that the implementation of AMC and MIMO systems is rather complex and mainly available for wide/metropolitan area networks only. To date, there have been no contributions evaluating the effect of dynamic error-concealment procedures at the data-link layer under non-stationary wireless channel and traffic characteristics. In this paper we consider real-time applications with adaptive compression and coding, and the size of the protocol data units (PDUs) at different layers is allowed to change dynamically. Using the cross-layer approach, we evaluate the performance that an application receives running over wireless channels at the IP layer, where it is standardized. In contrast to the studies cited above, we also propose the associated performance control system.

B. The structure of the system

The structure of the proposed performance control system is shown in Fig. 1, where CPOS stands for cross-layer performance optimization subsystem. The protocol stack is logically divided into three groups of protocols. The first group consists of the application itself, which falls into a certain traffic class. We assume that the network is intended to deliver four traffic classes: the conversational, streaming, interactive, and background classes. Applications in the conversational class require the time relation between the information entities of the traffic stream to be preserved and demand stringent guarantees of end-to-end delivery of the traffic entities. Examples of these applications include real-time two-way voice communications, audio multicasting, etc. Applications in the streaming class require the network to preserve the time relation between the information entities of the traffic stream but do not require strict guarantees of end-to-end delivery; the most common example is streaming video. Both the interactive and background classes expect the network to guarantee the reliable delivery of information units.
The difference between these two classes is that the interactive class includes applications operating in request-response mode, thus posing additional requirements on the end-to-end delay. Applications in the background class are usually characterized by so-called bulk transfers and do not require bounded delay in the network. Although only the functionality of the performance control system for conversational and streaming applications is considered here, the system can also be used for interactive and background applications. Considering the defined traffic classes, one may observe that there is a strict correspondence between the traffic class and the protocols at the transport and network layers. Applications with strict delay requirements usually use (RTP)UDP/IP as the combination of transport and network layer protocols, while applications that require the network to preserve the content of the transmission use TCP and IP at the transport and network layers, respectively. Nowadays, the TCP, UDP and IP protocols are well standardized, and for the sake of interoperability with existing implementations no modifications should be made to them; indeed, changes introduced to any of these protocols may require network-wide modifications. For this reason, we require that the protocols of the transport and network layers, and their parameters, not be controlled. The wireless access technology determines how the traffic is treated at the wireless channel. It defines the protocols of the data-link and physical layers. These protocols are usually specific to a given wireless access technology and may incorporate advanced features such as a dynamic choice of parameters to achieve the best possible performance for the given wireless channel conditions. Since the main performance degradation in wireless networks stems from the stochastic nature of wireless channel characteristics, these features of state-of-the-art wireless technologies provide a feasible option for performance control. In the operation of the performance control system, the application first determines the network protocol suite (TCP/IP or (RTP)UDP/IP) to be used during the active session. This decision is taken independently of the performance control system, and the mapping is strict for a given application. The application implicitly notifies the CPOS of the protocols used at the transport and network layers by providing the traffic class on its informational output. It should also provide information concerning the expected performance level to be provided at the local wireless channel; alternatively, this information can be stored in the CPOS. During the whole duration of the session, the CPOS monitors the states of the wireless channel and the application in terms of covariance-stationary stochastic processes. The current state of the application (the traffic model), the current state of the wireless channel (the wireless channel model), and the protocol parameters at the data-link and physical layers are used to determine the performance parameters that are important for a given application. These parameters may include the frame loss rate, frame delay, delay variation, etc. The CPOS should then determine which actions should be taken to provide the best possible performance for the given application at the current instant of time, that is, whether the current protocol parameters should be changed and, if yes, what changes are required. The list of actions should include changes of the protocol parameters at the data-link and physical layers.
This capability is already available in most state-of-the-art wireless access technologies. Additionally, the actions may include changing the application's parameters (e.g., the codec rate for video and audio applications), changing the buffer space at the data-link layer, and changing the PDU size at different layers. The former capability is usually available for real-time applications such as streaming video or two-way voice communications. Note that when an application does not allow the rate at which traffic is fed to the network to be changed, feedback regarding the current rate should still exist; when this capability is available, controlling inputs should be provided to the source. At the beginning of the session, the CPOS should also be made aware of the controllable protocols in the protocol stack. This can be done statically at the development phase. It is important to note that protocols can be initialized with default parameters, in which case these parameters are immediately communicated to the CPOS. Another approach is to set up a predefined set of initial parameters for each class of applications and allow the CPOS to initialize the protocol parameters. During the active session the CPOS controls the performance perceived by an application by setting protocol parameters in response to changing traffic and channel conditions.

C. The cross-layer performance optimization subsystem

The core of the proposed performance control system is the CPOS, whose structure is shown in Fig. 2. Its three major components are the real-time channel estimation module (rt-CEM), the real-time traffic estimation module (rt-TEM), and the performance evaluation and optimization module (PEOM). The rt-CEM is responsible for detecting changes in the wireless channel statistics and for estimating the channel state in terms of a mathematical model; the rt-TEM performs the same functions for traffic observations. To enable these capabilities, wireless channel and traffic statistics are observed in real time, pre-processed, and then fed to the input of the respective change-point analyzer. Note that the use of the rt-TEM is only mandatory for real-time applications with unpredictable traffic patterns, the most common example being variable bit rate (VBR) streaming video. When the traffic pattern of an application is known in advance, this block should be omitted and a predefined model should be used; examples of such applications include voice communications. The change-point analyzers test incoming observations for changes in the parameters that affect the performance of applications running over wireless channels. Recently emerged methods of measurement-based traffic modeling have made it possible to recognize the major statistical characteristics of traffic that affect its service performance in a network [17], [18]. According to Li and Hwang [17], the major impact on the performance parameters of the service process is produced by the empirical distribution of the arrival process and the structure of its autocorrelation function (ACF). Hayek and He [18] highlighted the importance of the empirical distributions of the number of arrivals, showing that the queuing response may vary for inputs with the same mean and ACF. It was also shown [17] that an accurate approximation of empirical data can be achieved when both the marginal distribution and the ACF of the model match their empirical counterparts well.
Recently, it was also shown that wireless channel statistics, including the mean frame error rate and the lag-1 autocorrelation of the frame error process, significantly affect the performance parameters of applications running over wireless channels at the data-link layer with hybrid ARQ/FEC [19]. In [20] the authors considered the effect of bit error propagation to the IP layer with FEC procedures implemented at the data-link layer. It was found that the mean bit error rate and the lag-1 autocorrelation of the bit error process affect the performance response at the IP layer in terms of the mean number of lost IP packets. Similar conclusions were reached in [21], where the authors considered the effect of bit errors on the performance of applications at the IP layer. To monitor wireless channel statistics, the SNR, bit error, or frame error processes can be used. The reason to use the bit error process is twofold. Firstly, it abstracts the functionality of the physical layer of different wireless access technologies; as a result, a single cross-layer performance control system can potentially be applied to different wireless channels. Secondly, the bit error process is binary in nature, which significantly decreases the complexity of the modeling algorithm, as shown in [22]. The SNR process may be used instead of the bit error process if the relationship between the bit error probability and the SNR value is known. Finally, the frame error process can also be used. Note that frame error statistics can be obtained directly by observing the operation of ARQ protocols at the data-link layer. However, monitoring the frame error process introduces significant delays in the detection of the channel state; in this case, the system may not react in a timely manner to changes in the wireless channel conditions. The advantage of monitoring the SNR or bit error processes is that the reaction time decreases significantly. When the relationship between the SNR value and the bit error probability is already available (e.g., obtained via field measurements), the proposed scheme can be used for SNR observations too. Direct monitoring of the bit error process of the wireless channel may provide a feasible alternative to this approach. However, in order to estimate the statistics of the bit error process in real time, the source should periodically transmit predefined information over the wireless channel such that the receiver is aware of the content of this transmission and its exact placement. This feature can be implemented using either channel equalization bits or synchronization information. However, it is still unclear how much information should be transmitted to provide a satisfactory estimator of the channel state; the authors in [23] provided some insights into this problem. The change-point analyzers must signal the points at which a change in either traffic or wireless channel statistics is detected. When a change is detected, the current wireless channel and traffic models are re-parameterized in the respective modeling blocks and then immediately fed to the input of the PEOM. The current traffic and channel models are also stored in the respective modeling blocks for further use. Note that the PEOM may be activated in response to a change in either the channel or the traffic statistics alone. Otherwise, no actions are taken except for continuous monitoring of the channel and traffic statistics. The structure of the PEOM is shown in Fig. 3.
According to the system design, the current traffic and channel models are fed to the input of the decision module. Taking the reference performance of a given application at the appropriate layer (e.g., data-link or IP) as another input, this module decides whether the current performance is satisfactory. To take this decision, the module containing the performance evaluation framework (PEOF) is activated. If the performance is satisfactory, no changes are required and the current protocol parameters are kept. Otherwise, the current wireless channel and traffic models are used to decide whether the performance can be improved and, if so, which parameters have to be changed and how. Depending on the particular protocols of the protocol stack and the type of the application, new protocol parameters resulting in the best possible performance for the given wireless channel and traffic statistics are computed in the PEOF and then fed back to the decision module. These parameters are used until the next change in the input wireless channel or traffic statistics. The PEOF may implement the performance evaluation framework directly or simply contain a set of pre-computed performance curves corresponding to a wide range of wireless channel and traffic statistics and different configurations of the protocol stack. Due to the real-time nature of the system, the latter approach is preferable. To implement the system, the following has to be developed:
• a test for detecting changes in channel and traffic statistics;
• a model for channel and traffic observations;
• a cross-layer extension for the wireless channel model;
• a performance evaluation model.
In the following sections we provide solutions to these tasks; a schematic sketch of the overall control loop is given below.
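To make the decision flow concrete, the following is a minimal, schematic Python sketch of the CPOS loop described above. All names (PARAMETER_TABLE, channel_regime, cpos_step, etc.) are hypothetical stand-ins, and the change test and parameter lookup are deliberately simplified: this illustrates the control structure only, not the analytical machinery developed in the next sections.

```python
# Schematic CPOS loop: monitor channel/traffic statistics, re-fit the models on
# a detected change, and look up pre-computed protocol parameters (the "PEOF as
# a table of performance curves" option). All names here are hypothetical.

PARAMETER_TABLE = {
    # (channel regime, traffic regime) -> data-link/PHY parameters
    ("good", "low"):  {"fec": "light", "arq_limit": 1, "frame_bits": 4096},
    ("good", "high"): {"fec": "light", "arq_limit": 2, "frame_bits": 4096},
    ("bad",  "low"):  {"fec": "heavy", "arq_limit": 3, "frame_bits": 1024},
    ("bad",  "high"): {"fec": "heavy", "arq_limit": 4, "frame_bits": 1024},
}

def channel_regime(ber_mean):          # toy channel model: mean BER only
    return "bad" if ber_mean > 1e-3 else "good"

def traffic_regime(rate_mean):         # toy traffic model: mean rate only
    return "high" if rate_mean > 1e6 else "low"

def cpos_step(state, ber_mean, rate_mean):
    """One iteration: re-evaluate the models and return new parameters on change."""
    regime = (channel_regime(ber_mean), traffic_regime(rate_mean))
    if regime != state.get("regime"):          # change-point detected
        state["regime"] = regime
        return PARAMETER_TABLE[regime]         # push new parameters to the stack
    return None                                # keep the current configuration

state = {}
print(cpos_step(state, 5e-4, 2e6))  # initial configuration
print(cpos_step(state, 5e-3, 2e6))  # channel degrades -> reconfigure
```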
obtained model can represent a given trace but is not appropriate for other traces. The resulting process is covariance stationary. However, even if the observations are truly covariance stationary, this approach is not suitable for performance control purposes: it only allows one to determine the parameters of the data-link layer resulting in the best performance in the long run, and these parameters may not be optimal at any given instant of time.

2) Wireless channel statistics: Important observations of bit error statistics were published in [24]. The authors found their GSM bit error traces to be non-stationary and proposed an algorithm to extract covariance stationary parts. They further used a doubly stochastic Markov process to model those parts separately; the modeled trace is finally obtained by concatenation. Among other conclusions, the authors suggested that a given bit error trace can be divided into a number of concatenated covariance stationary traces. Note that the bit error probability is a function of the SNR value, and the frame error probability is a function of the bit error probability; as a result, we can expect the same properties for SNR and frame error observations too.

To illustrate the time-varying nature of wireless channel statistics we consider an arbitrary 11 Mbps IEEE 802.11b bit error trace available from [25]. The whole trace contains 6.5E6 bit error observations. We divided it into 65 non-overlapping segments, each containing 1E5 bit error observations; according to the setup of the experiments, this corresponds to approximately 24 transmitted frames, each 4096 bits in length. Point estimators of the mean, variance, and lag-1 autocorrelation coefficient of these segments are shown in Fig. 4, where E[Y] is the mean, σ²[Y] is the variance, and K_Y(1) is the lag-1 autocorrelation. One may see that the bit error rate changes significantly in time. Recalling the fundamental property of covariance stationary binary stochastic processes, σ²[Y] = E[Y](1 − E[Y]), the same conclusion holds for the variance: when the mean value changes, so does the variance. In [20] we have shown that such ranges of mean and variance may correspond to completely different loss performance at the data-link and IP layers in terms of the mean number of lost packets and the mean delay experienced by a packet. Additionally, one may observe that the range of the lag-1 autocorrelation in Fig. 4 is not significant; in [20] we have shown that this range does not result in a significant difference in the mean number of lost packets or the mean packet delay. However, our statistical studies revealed that the range of the lag-1 autocorrelation coefficient can be much larger and should be taken into account.

3) Multimedia traffic statistics: Video traffic statistics have often been claimed to exhibit a high degree of variability (see [26], [27], [28] among others). Over the past decade, studies of video traffic patterns revealed that they may also experience changes in their statistical characteristics, often manifesting non-stationary behavior [29], [30], [31]. Self-similarity, long-range dependence, and non-stationarity are three major underlying reasons for a traffic pattern to exhibit high variability. Practically, high variability implies that there are large bursts in a traffic pattern; there are also long time spans during which the local average of a traffic pattern stays well below the global average.
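A short Python sketch of the segment-wise analysis behind Fig. 4 (and, analogously, Fig. 5): split a trace into non-overlapping segments and compute the point estimators E[Y], σ²[Y], and K_Y(1) for each. The synthetic i.i.d. binary trace below is an assumption used only to make the example runnable; for a binary process the identity σ²[Y] = E[Y](1 − E[Y]) ties the mean and variance together.

import random

def segment_stats(trace, seg_len):
    out = []
    for s in range(0, len(trace) - seg_len + 1, seg_len):
        y = trace[s:s + seg_len]
        m = sum(y) / seg_len
        var = sum((v - m) ** 2 for v in y) / seg_len
        cov1 = sum((y[i] - m) * (y[i + 1] - m) for i in range(seg_len - 1)) / (seg_len - 1)
        out.append((m, var, cov1 / var if var > 0 else 0.0))
    return out

random.seed(0)
trace = [int(random.random() < 0.05) for _ in range(100000)]
for m, var, k1 in segment_stats(trace, 20000):
    print(f"mean={m:.4f} var={var:.4f} (E(1-E)={m*(1-m):.4f}) lag1={k1:.3f}")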
Whatever the underlying reason for high variability, static resource allocation results in ineffective usage of resources when the load is lower than expected, or in inadequate performance when the load is higher. In this study we use video traffic traces available from the University of Berlin [32]. These traces are represented by sequences of frame sizes, where the size of a frame is measured in bits. The traces were captured for a number of coding schemes, including H.263 and MPEG-4 at different quality levels; more information concerning these traces can be found in [33]. Although we use H.263 variable bit rate (VBR) sequences, we have checked that the same conclusions remain valid for MPEG-4 sequences from the same traffic archive and for MPEG-1 sequences from [34].

To illustrate the time-varying nature of multimedia traffic observations we consider an arbitrary H.263 VBR trace. The whole trace contains 5E4 observations. We divided it into 25 non-overlapping segments, each containing 2000 observations, corresponding to approximately 80 seconds of video data. Statistical characteristics of these segments are shown in Fig. 5. The range of the segment variances is around 150% of the global variance of all segments; such a range corresponds to completely different loss performance at the data-link and IP layers in terms of the mean number of lost PDUs. Additionally, the range of the lag-1 autocorrelation is also significant, given by max K_A(1) − min K_A(1) = 0.71; this range may result in a significant difference in the mean number of lost PDUs at the data-link and IP layers. Similar observations have been made for other traces from [32] and [34].

4) Change-point statistical tests: Since the mean value of wireless channel and traffic observations may change significantly in time, we suggest that the proposed performance optimization and control system monitor the mean values of the respective observations. The whole task then reduces to the so-called on-line change-point statistical problem: time instants at which a change occurs at an unknown point must be detected using an on-line change-point statistical test. A number of change detection algorithms have been developed to date. The common approach is to use control charts, including Shewhart charts, CUSUM charts, and exponentially weighted moving average (EWMA) charts [35], [36], [37], [38]. These charts originally came from statistical process control (SPC), where they are successfully used to monitor the quality of production. The underlying idea of control charts is that all causes of deviation of observations from the target process can be classified into two groups: common causes and special causes. Deviation due to common causes is the joint effect of numerous causes affecting the process; they are an inherent part of the process. Special causes of deviation are not part of the process, occur accidentally, and affect the process significantly. Control charts signal the point at which special causes occur using two control limits: if the observations lie between them, the process is assumed to be 'in control'; if some observations fall outside, the process is considered 'out of control'. For detecting changes in wireless channel observations, the following interpretation of causes of deviation is adopted. We assume that common causes of deviation are those resulting from the multipath propagation environment at a certain separation distance from the transmitter.
Special causes are those caused by movement of a user, including changes of the distance between the transmitter and the receiver, possible shadowing of the signal by obstacles, and changes of the nomadic state of a user (e.g., stationary, pedestrian, vehicular). For traffic observations, we assume that special causes of deviation are those resulting from specific long-term characteristics of the video, including scene changes, while common causes of deviation are due to the short-term stochastic nature of video data. For both traffic and wireless channel observations the procedure is as follows: initially, a control chart is parameterized using statistical estimates of the moments; when a change occurs, the new process is considered 'in control' and the control chart is re-parameterized according to this process.

The EWMA statistic at time n, denoted by L_Y(n), is given by

L_Y(n) = γY(n) + (1 − γ)L_Y(n − 1),   (1)

where the parameter γ ∈ (0, 1) is constant. In (1), L_Y(n) extends its memory not only to the previous value but weights all previous observations according to the constant coefficient γ; this previous information is completely contained in L_Y(n − 1). The first value of the EWMA statistic, L_Y(0), is usually set to the mean of {Y(n), n = 0, 1, . . . } or, if unknown, to an estimate of the mean. As a result, an on-line real-time test always requires a certain warm-up period involving estimation of the mean. The reason to use the EWMA statistic is as follows. Although, according to (1), the most recent value always receives more weight in the computation of L_Y(n), the choice of γ determines the effect of previous observations of the process on the current value of the EWMA statistic. Indeed, when γ → 1 all weight is placed on the current observation, L_Y(n) → Y(n), and the EWMA statistic degenerates to the initial observations. Conversely, when γ → 0 the current observation gets only a little weight, and most weight is assigned to previous observations. Choosing γ well below one makes the EWMA control chart more resistant to occasional outliers, while the reactive properties of the chart decrease. Summarizing, EWMA charts give flexibility at the expense of the additional complexity of setting γ.

To parameterize the proposed EWMA control chart, two parameters have to be provided. Firstly, the parameter γ determining the decline of the weights of past observations should be set. Secondly, the control limits (E[L_Y] ± C_Y) must be provided. Unfortunately, when the form of the distribution prior to a change is not known, there are no theoretical results clarifying what the width of the control limits should be. In our experimental work we found that control limits of the form (2), C_Y = kσ[L_Y], computed for an autocorrelated process with normal marginal distribution, provide fairly accurate results; here σ[L_Y] is the standard deviation of the EWMA statistic, determined by σ[Y] and K_Y(1), the standard deviation and lag-1 autocorrelation of {Y(n), n = 0, 1, . . . }, respectively, and k is a design parameter. Note that using (2) provides a trade-off between the theoretical approach, where each time a change occurs the probability distribution function of the new in-control process must be estimated, and practical applications, where the warm-up period should be as small as possible. The values of k and γ determine the width of the control limits for a given process with certain σ²[Y] and K_Y(1). These two parameters affect the behavior of the so-called average run length (ARL) curve that is usually used to determine the efficiency of a change detection procedure. The ARL is defined as the average number of in-control observations up to the first out-of-control signal.
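A minimal Python implementation of the EWMA change detector just described, directly using the recursion L(n) = γY(n) + (1 − γ)L(n − 1). As a simplification, this sketch uses the asymptotic i.i.d. limit width σ_L = σ[Y]·sqrt(γ/(2 − γ)); the autocorrelation correction of (2) would widen these limits, so the i.i.d. form here is an assumption, not the paper's exact rule.

import math

class EwmaDetector:
    def __init__(self, gamma, k, warmup=500):
        self.g, self.k, self.warmup = gamma, k, warmup
        self.buf, self.L = [], None
        self.mean = self.sigma = None

    def update(self, y):
        # Feed one observation; return True when a change is signalled.
        if self.mean is None:                 # warm-up: estimate moments
            self.buf.append(y)
            if len(self.buf) == self.warmup:
                n = len(self.buf)
                self.mean = sum(self.buf) / n
                self.sigma = math.sqrt(sum((v - self.mean) ** 2 for v in self.buf) / n)
                self.L = self.mean
            return False
        self.L = self.g * y + (1 - self.g) * self.L
        c = self.k * self.sigma * math.sqrt(self.g / (2 - self.g))
        if abs(self.L - self.mean) > c:       # out-of-control signal
            self.mean, self.buf = None, []    # re-estimate the new regime
            return True
        return False

After a signal, the detector re-enters the warm-up phase, mirroring the re-parameterization of the chart for the new in-control process described above.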
Different values of k and γ for a given ARL, σ²[Y], and K_Y(1) are provided in [39], [40]. Finally, E[Y], σ²[Y], and K_Y(1) are not usually known in practice and must be estimated from empirical data; therefore, estimates of E[Y], σ²[Y], and K_Y(1) should be used in (2).

B. Arrival and error models

After a shift in the mean value is detected, wireless channel and traffic observations should be represented using a covariance stationary model. Note that the state of the wireless channel or application in terms of the model should be estimated using as little information as possible; as a result, there is a trade-off between the accuracy of the model and the time required to decide on its parameters. Additionally, one of the fundamental requirements for a model is to be suitable for fast on-line adaptation and refinement of parameters, resulting in another trade-off between the complexity of the fitting algorithm and the accuracy of the model. In this paper, covariance stationary segments of arrival and error processes are modeled by special cases of the discrete-time batch Markovian process (D-BMP), known as the discrete-time batch Markovian arrival process (D-BMAP, [41]) in traffic modeling and as the hidden Markov model (HMM) in signal processing.

1) Properties of D-BMP: The ACF of the mean process of D-BMP is a weighted sum of powers of the eigenvalues of the transition matrix,

K(i) = Σ_{l≥2} φ_l λ_l^i,   (4)

where φ_l = π(Σ_{k=0}^{∞} kD(k)) g_l h_l (Σ_{k=0}^{∞} kD(k)) e, λ_l is the l-th eigenvalue of D, g_l and h_l are the l-th left and right eigenvectors of D, respectively, and e is the vector of ones. Note that the ACFs of a D-BMP and of its mean process are generally different [41]. The number of terms composing the ACF of the mean process of D-BMP depends on the number of eigenvalues, which in turn is a function of the number of states of the modulating Markov chain; thus, by varying the number of states of the modulating Markov chain we vary the number of terms composing the ACF. Recall that a D-BMP is also allowed to have different probability functions for each pair of states. These properties have been used in many studies to derive models of various traffic sources with sophisticated distributional and autocorrelational properties (see [42], [43], [44] among others). In what follows we allow our D-BMP to have conditional probability functions that depend on the current state only; in this case, D(k), k = 0, 1, . . . , have the same elements in each row. This process is known as the Markov modulated batch process (MMBP). It is important that this process still has an ACF of the form (4). We also use only two states of the modulating Markov chain, and for this reason we refer to such processes as switched ones.

2) Frame arrival process: Let us denote the frame arrival process by {W_A(n), n = 0, 1, . . . }. In this paper, the terms 'frame' and 'codeword' are used interchangeably, assuming that a single frame consists of exactly one codeword; this requirement is not fundamental and can be relaxed when needed, as explained below. When the MMBP {W_A(n), n = 0, 1, . . . } is allowed to have only two states of the modulating Markov chain, S_A(n) ∈ {1, 2}, and each state is associated with a Poisson-distributed number of arrivals in a single slot, it reduces to the switched Poisson process (SPP). The marginal distribution of SPP is a weighted sum of two Poisson distributions with means G_{1,A} and G_{2,A}, where the weighting coefficients are given by the elements of the stationary distribution of the modulating Markov chain, π_{1,A} = β_A/(α_A + β_A) and π_{2,A} = α_A/(α_A + β_A), where α_A and β_A are the transition probabilities from state 1 to state 2 and from state 2 to state 1, respectively.
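Before turning to the ACF, here is a small simulation sketch of the SPP just defined, checking that the empirical mean matches E[W_A] = π_{1,A}G_{1,A} + π_{2,A}G_{2,A} with π_{1,A} = β_A/(α_A + β_A). The parameter values and the Poisson sampler are illustrative choices, not taken from the paper.

import math, random

def simulate_spp(alpha, beta, g1, g2, n, seed=0):
    rng = random.Random(seed)
    state, out = 1, []
    for _ in range(n):
        lam = g1 if state == 1 else g2
        # Poisson sample via Knuth's multiplication method (fine for small lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        out.append(k)
        u = rng.random()
        state = (2 if u < alpha else 1) if state == 1 else (1 if u < beta else 2)
    return out

alpha, beta, g1, g2 = 0.05, 0.10, 0.2, 1.5
w = simulate_spp(alpha, beta, g1, g2, 200000)
pi1 = beta / (alpha + beta)
print(sum(w) / len(w), pi1 * g1 + (1 - pi1) * g2)   # both ~0.633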
The ACF of the mean process (4) of SPP reduces to a single geometric term,

K_A(i) = φ_A λ_A^i,  λ_A = 1 − α_A − β_A,   (6)

where α_A and β_A are the transition probabilities of the modulating Markov chain from state 1 to state 2 and from state 2 to state 1, respectively. Note that E[W_A] = π_{1,A}G_{1,A} + π_{2,A}G_{2,A} is the mean of SPP. To completely parameterize the mean process of SPP we must provide four parameters (G_{1,A}, G_{2,A}, α_A, β_A). If we choose G_{1,A} as a free variable with the constraint G_{1,A} < E[W_A], then, to satisfy 0 < λ_A ≤ 1, we can determine G_{2,A}, α_A, and β_A from a set of equations (8) matching the mean and the ACF of the observations [42], [44], [18]. The reason we use SPP as an example of the frame arrival process is that its parameters are easily controllable [18]. From (8) one may note that there is a degree of freedom in choosing the mean arrival rate in state 1: G_{1,A} can take any value in (0, E[W_A]) and still match the ACF and the mean of the arrival process. As a result, the distributions of these processes differ for different choices of G_{1,A}.

3) Bit error model: Let us denote the bit error process by {W_E(n), n = 0, 1, . . . }. When the MMBP is allowed to have only two states and at most a single arrival is allowed in a slot, it reduces to the switched Bernoulli process (SBP). Since this process has only two states of the modulating Markov chain, its ACF (4) reduces to the form (6). The normalized ACF (NACF) is then K_G(i) = λ_E^i, i = 1, 2, . . . . It is clear that the NACF of the mean process of SBP exhibits geometric decay, which may give a fair approximation of empirical NACFs exhibiting nearly geometric decay at small lags. It should also be noted that the lag-1 autocorrelation coefficient completely specifies the behavior of the NACF. Since we are dealing with a covariance stationary binary process, only the mean and the lag-1 autocorrelation coefficient, E[W_E] and K_E(1), have to be captured. In our previous work [20], [22] we demonstrated that there is an SBP model exactly matching the mean and lag-1 autocorrelation of covariance stationary bit error observations; this model is given by closed-form expressions for (f_{1,E}(1), f_{2,E}(1), α_E, β_E) in terms of E[W_E] and K_E(1), where f_{1,E}(1) and f_{2,E}(1) are the probabilities of error in states 1 and 2, respectively, α_E and β_E are the transition probabilities from state 1 to state 2 and from state 2 to state 1, respectively, K_E(1) is the lag-1 autocorrelation of the bit error observations, and E[W_E] is their mean.

C. Extension to the data-link layer

The bit error model cannot be used directly for performance control of an application and should first be extended to the data-link layer. Assume that bits are transmitted consecutively over the wireless channel and that the length of frames at the data-link layer is constant and equals m bits; let D_N(k), defined by (10), denote the matrices of the process counting the number of incorrectly received bits in a frame. We only need D_N(k) for k << m. Note that computation according to (10) is still a challenging task and becomes impossible when m is sufficiently large; instead, we may use the recursive method proposed in [45] to estimate (10). The frame error indicator is then

F(l) = 1 if k ≥ F_T,  F(l) = 0 if k < F_T,   (11)

where F_T is the frame error threshold and k is the number of incorrectly received bits in the frame. Expressions (11) are interpreted as follows: if the number of incorrectly received bits in a frame is greater than or equal to the frame error threshold (k ≥ F_T), the frame is incorrectly received and F(l) = 1; otherwise (k < F_T), it is correctly received and F(l) = 0. Setting F_T = 1 yields no FEC at the data-link layer. Practically, (F_T − 1) denotes the number of bit errors that can be corrected by the FEC code.
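A worked Python sketch of the thresholding rule (11): a frame of m bits is received incorrectly iff the number of bit errors k reaches F_T, so F_T − 1 errors are FEC-correctable. For simplicity the bit errors here are assumed i.i.d., so the frame error probability is a binomial tail; under the SBP model it would instead be computed recursively as in [45].

import math

def frame_error_prob_iid(m, ft, p):
    # P(at least ft of m bits are in error) for i.i.d. bit error rate p.
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k) for k in range(ft, m + 1))

# F_T = 1 means no FEC; a code correcting l errors corresponds to F_T = l + 1.
for ft in (1, 19, 27):   # no FEC, (255,131,18), (255,87,26)
    print(ft, frame_error_prob_iid(255, ft, 0.02))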
D. Performance evaluation

1) Service process of the wireless channel: The straightforward way to represent the frame transmission process over a dedicated constant bit rate (CBR) wireless channel is to use the G_A/G_S/1/K queuing system, where G_A is the frame arrival process, G_S is the service process of the wireless channel, and K is the capacity of the system. Here, the service process is defined as the times required to successfully transmit successive frames over the wireless channel; its characteristics are determined by the frame error process and the error concealment schemes of the data-link layer. It is known that both the interarrival times of frames and the transmission times of frames until successful reception are generally not independent but autocorrelated. These properties make the analysis of the G_A/G_S/1/K queuing system a quite complex task even when the arrival and service processes are modeled by Markovian processes. Indeed, the theoretical background of queuing systems with autocorrelated arrival and service processes is not well studied; among the few available results, one should mention the BMAP/SM/1 queuing system and some modifications considered in [46], [47], [48]. Analysis of these systems is more computationally intensive than that of queuing systems with a renewal service process, and it usually involves imbedded Markov chains of high dimension. From this point of view, such a performance model does not provide significant improvements over other approaches.

2) Service process with SW-ARQ/FEC: Consider a class of preemptive-repeat priority systems with two Markovian arrival processes, both allowed to have arbitrary autocorrelation structures of homogeneous Markovian type. Assume that the first arrival process represents the frame arrival process from the traffic source. To provide an adequate representation of the unreliable transmission medium, we assume that the second arrival process is a one-to-one mapping from the frame error process: every time an error occurs, an arrival happens from this process. In what follows, we refer to this process as the 'artificial arrival process'. Under this mapping, the probabilistic properties of the stochastic model remain unchanged. By making this process the high-priority one and allowing its arrivals to interrupt the ongoing service of low-priority arrivals, we ensure that when an arrival occurs from this process it immediately seizes the server, while the ongoing service is interrupted. A frame whose service is interrupted remains in the system (if allowed) and enters the server again after the service completion of the high-priority arrival; the service provided up to the point of interruption is lost. This is interpreted as an incorrect reception of the frame from the traffic source. This priority discipline is referred to as preemptive-repeat. To emulate the behavior of the SW-ARQ protocol, we assume an infinite number of retransmission attempts. We also assume that the feedback channel is completely reliable (perfect); indeed, feedback acknowledgements are usually small in size and well protected by the FEC code. We further assume that the feedback is instantaneous. These assumptions have been tested and used in many studies and found appropriate for (relatively) high-speed wireless channels [49], [50], [51], [52]. Since we extended the wireless channel model to the data-link layer, FEC capabilities are explicitly taken into account. Note that the described model is also suitable to represent an 'ideal' SR-ARQ scheme as in [53], [54].
In SR-ARQ, frames are transmitted continuously and only incorrectly received frames are selectively requested. Under 'ideal' operation of SR-ARQ, round trip times (RTT) are assumed to be zero; in this case the SR-ARQ and SW-ARQ schemes become identical and can be represented using the proposed model. Analysis of queuing systems with the preemptive-repeat priority discipline is still a challenging task. However, a number of assumptions can be introduced to make the queuing model less complicated. In what follows, we restrict the model to the discrete-time environment and require each arrival from both arrival processes to have a service time of one slot. In such a system, arrivals occur just before the end of slots. Since there can be at most one arrival per slot from the process representing the frame error process of the wireless channel, these arrivals do not wait for service, enter the server at the beginning of the nearest slot, and, if observed in the system, are being served. To provide an adequate representation of the erroneous nature of the wireless channel, we also have to ensure that all these arrivals are accommodated by the system. Under these assumptions, the preemptive-repeat priority discipline is no longer needed: since all arrivals occur simultaneously in batches, a non-preemptive priority discipline is sufficient.

3) Queuing model: Here we take the method of the imbedded Markov chain. The time diagram of the D-BMAP/D/1/K queuing system is shown in Fig. 6. In this system, frames arrive in batches, and a batch of frames arrives just before the end of a slot. Frames are not allowed to enter service immediately; the service of any frame starts at the beginning of a slot. Frames depart from the system just after a batch arrival (if any). The state of the system is observed just after the departure (if any), and these points are the imbedded Markov points. The sojourn (service) time is counted as the number of slots spent by a frame in the system. The system can accommodate at most K frames. We assume the partial batch acceptance strategy: if a batch of R frames arrives when k frames are in the system and R > (K − k), only (K − k) frames are accommodated and (R − K + k) frames are discarded.

Observing Fig. 6, one can deduce that the arrival from the frame error process is not accepted by the system in slot (n + 1) if and only if the number of customers in the system in slot (n − 1) is zero, there is an arrival of K frames in slot n, and one frame arrives from the frame error process in slot (n + 1). Conversely, if there is at least one frame in the system in slot (n − 1), one frame departs at the boundary between slots n and (n + 1), and there is always at least one free position in the system for the next arrival; thus, the frame from the frame error process (if any) is not lost in slot (n + 1). To ensure that the frame from the frame error process is always accepted by the system, we do not allow the overall number of arrivals from both processes to exceed (K − 1). This implies that the maximum number of arrivals from the frame arrival process is (K − 2), which is usually sufficient for real applications.
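The following Python sketch simulates the slotted system just described: batch arrivals just before slot boundaries, one-slot services, capacity K with partial batch acceptance, and an "artificial" error arrival that seizes the server for a slot so the head-of-line frame is repeated (SW-ARQ with perfect instantaneous feedback). The Bernoulli arrival and error streams are illustrative stand-ins for the MMBP/SBP processes of the analytical model.

import random

def simulate(arrivals, errors, K):
    q = lost = served = 0
    for a, e in zip(arrivals, errors):
        room = K - q - e            # one position is reserved for the error arrival
        accepted = min(a, max(room, 0))
        lost += a - accepted
        q += accepted
        if e == 0 and q > 0:        # error-free slot: one data frame departs
            q -= 1
            served += 1
    return lost, served

random.seed(2)
n = 100000
arrivals = [int(random.random() < 0.4) for _ in range(n)]   # frame arrivals
errors   = [int(random.random() < 0.1) for _ in range(n)]   # frame error process
print(simulate(arrivals, errors, K=40))

Such a simulation is useful as a cross-check of the analytical loss and delay results discussed next, not as a replacement for them.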
4) Loss and delay performance: The proposed system has been completely analyzed for loss and delay performance in [45]. In particular, it was demonstrated that the probability function of the number l = 1, 2, . . . , K − 2 of lost frames in a slot from the frame arrival process can be expressed in terms of the matrices D(l, k), l = 0, 1, k = 0, 1, . . . , K − 2, the transition probability matrices of the superposed arrival process with exactly l arrivals from the frame error process and k arrivals from the frame arrival process, and the vector x_k = (x_{k1}, x_{k2}, . . . , x_{k(M_F M_A)}) containing the steady-state probabilities that there are k frames in the system.

Let the random variable Q, Q ∈ {1, 2, . . . }, denote the full delay in the system (sojourn time) experienced by an arrival from the frame arrival process, and let f_Q(q) = Pr{Q(n) = q | W_A(n) > L(n)}, q = 1, 2, . . . , be its probability function, where W_A(n) is the number of arriving frames from the frame arrival process in slot n and L(n) is the number of lost frames in slot n. In [45] the delay distribution was expressed through the vectors f_Q(q, 0), q = 1, 2, . . . , K − 1, containing the probabilities that a tagged frame arriving in slot n is at position q just after the slot boundary between slots n and (n + 1) and the superposed arrival process is in state j, given that at least one frame arriving from the frame arrival process is not lost; here e is the unit vector of appropriate size. The vectors f_Q(q, 0) are in turn expressed through the probabilities ψ_{v,i} that the tagged arrival is accommodated at place i in the system when there are v waiting positions available for arrivals from the frame arrival process, and through Pr{W_A(n) > L(n)} = Pr{W_A(n) ≥ 1} − Pr{L(n) = W_A(n) ≥ 1}, the probability that at least one arrival is not lost in slot n, where Pr{L(n) = W_A(n) ≥ 1} is the probability that all arrivals from the frame arrival process in slot n are lost. Closed-form expressions for ψ_{v,i}, for Pr{W_A(n) > L(n)}, and for the auxiliary matrices T(i, m), i = 0, 1, . . . , m, i ≤ m, are derived in [45].

V. NUMERICAL RESULTS

A. Bit error models

In this section we use a number of SBP wireless channel models with different means and lag-1 autocorrelations; in total, we constructed 90 models of the bit error process by varying these parameters.

B. Performance evaluation

We study the performance of two Bose-Chaudhuri-Hocquenghem (BCH) FEC codes denoted by the triplet (m, n, l), where m is the length of the frame in bits, n is the number of data bits in a frame, and l is the number of incorrectly received bits that can be corrected. The codes are (255, 131, 18) and (255, 87, 26), whose rates are approximately 1/2 and 1/3, respectively; according to our model, the former coding scheme delivers approximately 1.507 times more data bits in a single slot. The results are shown in Fig. 8, where n is the number of data bits in a single frame. For K_G(1) = 0.0, it can be computed that the (255, 87, 26) FEC code is only better when the bit error rate is 0.08 and the lag-1 autocorrelation is less than 0.4. The performance at the data-link layer in terms of both metrics is significantly worse when the lag-1 autocorrelation of the mean frame arrival process increases from 0.0 to 0.5. Note that the final decision on the choice of the FEC code should also take into account the mean delay performance of both FEC codes, as explained below.
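A small worked comparison of the two BCH codes under the simplifying assumption of i.i.d. bit errors (the analytical model above uses the autocorrelated SBP instead): goodput per slot = (n/m) × P(at most l of m bits in error). The ratio 131/87 ≈ 1.507 reproduces the quoted gain in delivered data bits.

import math

def goodput(m, n, l, p):
    p_ok = sum(math.comb(m, k) * p**k * (1 - p)**(m - k) for k in range(l + 1))
    return (n / m) * p_ok

for p in (0.02, 0.08):
    g1 = goodput(255, 131, 18, p)
    g2 = goodput(255, 87, 26, p)
    print(f"p={p}: (255,131,18)->{g1:.3f}  (255,87,26)->{g2:.3f}")

At p = 0.02 the rate-1/2 code wins, while at p = 0.08 the stronger rate-1/3 code does, consistent with the crossover behavior described above.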
C. Performance control system

Let us first consider the theoretical performance of the whole system. We assume that a certain channel is covariance stationary for 50% of the time with mean bit error rate E[W_E] = 0.08 and lag-1 autocorrelation K_E(1) = 0.0, and then changes to a covariance stationary process with E[W_E] = 0.02, K_E(1) = 0.0. The frame arrival process is assumed to be covariance stationary with E[W_A] = 0.5, σ²[W_A] = 0.5, K_A(1) = 0.0. Only two FEC codes were allowed, (255, 131, 18) and (255, 87, 26), and ARQ was enabled. For all experiments the buffer space was set to K = 40. Initially, the performance control system was initialized with the most powerful FEC code, (255, 87, 26). We compare the results obtained for our system with those obtained when no performance control algorithm is used on the wireless channel. Since we are dealing with multimedia applications, when a trade-off between losses and delays occurs, preference is given to delay. Performance results of the proposed control system are shown in Table I. Table II demonstrates results obtained for the same wireless channel parameters and a covariance stationary frame arrival process with E[W_A] = 0.5, σ²[W_A] = 0.5, K_A(1) = 0.5. One can notice that the autocorrelation of the frame arrival process affects the delay performance of the frame service process more severely than the throughput.

Let us now simulate the performance of the proposed control system. For this purpose, two bit error traces were generated, each consisting of two covariance stationary parts: the first 5E4 samples are observations of a covariance stationary process with E[W_E] = 0.08, K_E(1) = 0.0, and the latter 5E4 observations were generated using a process with E[W_E] = 0.02, K_E(1) = 0.0. Parameters of the EWMA control charts were always set such that the in-control ARL is kept at 400. Performance results of the proposed system with both FEC and ARQ enabled are demonstrated in Tables III and IV, where they are compared with theoretical results and with a fixed FEC code. One can see that a fixed FEC code leads to non-optimal performance of the information transmission; using the proposed performance control system we achieve near-optimal results in terms of both delay and throughput, close to the theoretical values presented in Tables I and II.

VI. CONCLUSIONS

We have proposed a performance control system for wireless access technologies with dynamic adaptation of the protocol parameters to time-varying wireless channel and traffic conditions. Controllable parameters include the strength of the FEC code, the ARQ functionality, the size of the PDU at different layers, and the rate with which traffic is generated. Under the proposed system it is still possible to implement protocols at different layers independently; the only requirement we impose is that certain protocols be controllable and export information about their current parameters via an appropriate set of interfaces. Note that there is no need for all protocols at all layers to be controllable: the proposed system can be implemented incrementally. We also highlight that the proposed performance control concept is not limited to the channel adaptation mechanisms considered in this paper, but can be extended to include MIMO and AMC functionality of the physical layer; in this case, appropriate modifications to the proposed system are required. The core of the proposed system is the change-point detection algorithm that is adapted to detect parameter changes in time-varying arrival and channel processes. The current states of both processes are then treated as covariance stationary and used to estimate the protocol parameters that provide the best possible performance at the current instant of time.
Numerical results demonstrate that the proposed system provides a significant performance gain compared to a static configuration of protocols at different layers.
Periodic flows with global sections

Let G = {h_t | t ∈ R} be a continuous flow on a connected n-manifold M. The flow G is said to be strongly reversible by an involution τ if h_{−t} = τh_tτ for all t ∈ R, and it is said to be periodic if h_s = identity for some s ∈ R*. A closed subset K of M is called a global section for G if every orbit G(x) intersects K in exactly one point. In this paper, we study how the two properties "strongly reversible" and "has a global section" are related. In particular, we show that if G is periodic and strongly reversible by a reflection, then G has a global section.

Introduction

Let X be a metric space, and let G be a group of homeomorphisms of X. For x ∈ X, the orbit of x under G is G(x) = {g(x) | g ∈ G}, and the isotropy subgroup at x is G_x = {g ∈ G | g(x) = x}. For every subset A ⊆ X, we denote by A/G the space of orbits of points of A. The interior of A is denoted by int(A), and its closure by Ā. Throughout this paper, an n-manifold means a topological manifold of dimension n (that is, a Hausdorff space with a countable basis of open sets, each homeomorphic to R^n). A map τ: X → X is called an involution if τ is a homeomorphism satisfying τ² = id. The group G is called strongly reversible if there exists an involution τ of X satisfying g^{−1} = τgτ for all g ∈ G. An orbit G(x) is called symmetric (with respect to τ) if it is a τ-invariant subset, that is, if τ(G(x)) = G(x). Reversibility has received a lot of interest in recent years; it plays a role in dynamics and is related to some problems in physics, and many interesting results have been obtained in works such as [4], [7], [8], [10], [13] and [18].

A subset K ⊂ X is called a global section (or global cross-section) for G if the following hold:
1. K is closed.
2. Every orbit G(x) intersects K in exactly one point.
A local section (or local cross-section) at a point p ∈ X is a closed subset K_p ⊂ X satisfying the following conditions:
1. Distinct points of K_p lie in distinct orbits.
2. G_p = G_q for each q ∈ K_p.
3. G(K_p) is a neighborhood of p.

The concept of sections is a fundamental problem in the theory of dynamical systems. A natural question for cross-sections is existence: given a group G of homeomorphisms, when does G have a local or global section? What are necessary and sufficient conditions for the existence of a cross-section? The existence of a section through a given point shows that local parallelizability of the system is fulfilled. This gives a very good tool for solving many problems, as it makes it possible to describe very precisely the behaviour of a system in a neighborhood of any nonstationary point. There are many works on local and global cross-sections; a brief selection is [1], [6], [9], [14], [15], [16], [17] and [19].

A group G = {h_t | t ∈ R} of homeomorphisms of X is called a continuous flow if the map φ: R × X → X, φ(t, x) = h_t(x), is continuous and satisfies φ(0, x) = x and φ(t, φ(t′, x)) = φ(t + t′, x) for all x ∈ X and t, t′ ∈ R. For t = 0, h_0 = id is the identity map of X. The flow G is called periodic if h_s = id for some s ∈ R*+. A point x ∈ X is called periodic if h_t(x) = x for some t ∈ R*+, and it is called stationary if h_t(x) = x for all t ∈ R. The existence of local sections was first proved in 1939 by M. Bebutov for flows (the Whitney-Bebutov Theorem [2]). In this paper, we would like to compare "has a global section" and "strongly reversible"; in particular, we study how the two notions are related for periodic flows on connected n-manifolds.
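As a toy illustration of these definitions (my own example, not taken from the paper), the rotation flow h_t(z) = e^{it}z on C \ {0} is periodic (h_{2π} = id) and strongly reversible by the involution τ(z) = z̄, since τh_tτ(z) = conj(e^{it}z̄) = e^{−it}z = h_{−t}(z). A short numerical check in Python:

import cmath

def h(t, z):
    return z * cmath.exp(1j * t)

def tau(z):
    return z.conjugate()

z = 1.0 + 2.0j
print(abs(h(2 * cmath.pi, z) - z))            # ~0: the flow is periodic
for t in (0.7, 2.0, -1.3):
    print(abs(tau(h(t, tau(z))) - h(-t, z)))  # ~0: tau o h_t o tau = h_{-t}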
The flow G is called parallelizable if there exists a subset K of X such that the map R × K → X, (t, x) → h_t(x), is a homeomorphism. It is easy to show that if G is parallelizable, then the following hold:
1. G has a global section (the subset K).
2. G has no periodic point.
So every periodic flow is nonparallelizable, and one can ask: under which conditions do periodic flows have global sections?

Let G be a compact abelian Lie group of homeomorphisms of an n-manifold M such that G_x = {id} for every x ∈ M. Although it is well known that such a group G has local cross-sections everywhere on M (see [16], p. 221), the existence of a global cross-section for G is not guaranteed; a simple example is the flow G = {h_t: (z_1, z_2) → (z_1e^{it}, z_2e^{it}) | t ∈ R} on C × C \ {(0, 0)}, which is a periodic flow with no global section (the Hopf fibration). However, we show the following result (Theorem 1.1). If f is a homeomorphism of X, we denote by Fix(f) the fixed point set of f. When a cross-section exists, it need not be locally (n−1)-Euclidean (see Example 1.6). Let G be a nontrivial periodic flow on a connected n-manifold; in Theorem 1.3 we establish a necessary and sufficient condition for the existence of a global section which is an (n−1)-manifold.

Theorem 1.3. Let G be a nontrivial periodic flow on a connected n-manifold M. Then the following are equivalent:
1. G has a global section K which is an (n−1)-manifold.
2. G is strongly reversible by an involution τ such that Fix(τ) \ F has two connected components A and B, and each of A ∪ F and B ∪ F is a closed (n−1)-manifold.

In the following theorem, we prove that for periodic flows on connected n-manifolds, "strongly reversible by a reflection" implies "has a global section".

Theorem 1.4. Let G be a nontrivial periodic flow on a connected n-manifold M. If G is strongly reversible by a reflection R, then G has a global section.

Remark 1.5. 1) From Theorem 1.4, we can see that the flow on C × C \ {(0, 0)} above is an example of a periodic flow that cannot be strongly reversible by a reflection, since G has no global section. However, G is strongly reversible by the involution τ: (z_1, z_2) → (z̄_1, z̄_2).
2) For n ≥ 4, a periodic flow on R^n which is strongly reversible by a reflection R need not have a global section which is a manifold (Example 1.6).

Example 1.6. Let B be Bing's dog bone space. It is well known that B is not a manifold and that B × R is homeomorphic to R^4. Identifying R² with C, we define a flow G on B × R² by h_t(b, z) = (b, e^{it}z). Clearly, G is a periodic flow on B × R², which is homeomorphic to R^5. Moreover, G is strongly reversible by the reflection R: (b, z) → (b, z̄). The stationary set F = B × {0} divides Fix(R) = B × R into two connected components B × R*+ and B × R*−, and B × R+ is a global section for G; however, B × R+ is not a manifold.

In studying periodic flows on connected manifolds, we will need the following theorem.

Theorem 1.7. A compact Lie group of homeomorphisms of a connected manifold which coincides with the identity on some nonempty open subset must be trivial.

The paper is organized as follows. In Section 2, we study compact groups with global sections and investigate their properties. In Section 3, we consider the particular case of periodic flows on connected n-manifolds; we begin by proving some general results on such flows and prove Theorem 1.1. In Subsection 3.1, we study periodic flows with global sections, and in Subsection 3.2, we study periodic flows which are strongly reversible; we then prove Theorem 1.3 in Subsection 3.3. Subsection 3.4 concerns Theorem 1.4.

Compact groups with global sections

Let G be a compact abelian group of homeomorphisms of a metric space M. In the following proposition we give a necessary condition for the existence of a global section for G.
Proposition 2.1. Let G be a compact abelian group of homeomorphisms of a metric space M. If G has a global section K, then G is strongly reversible by an involution τ.

Proof. Since K is a global section for G, every orbit G(x) intersects K in exactly one point; it follows that M ⊂ G(K), and clearly G(K) ⊂ M, so M = G(K). Define a map τ on M by τ(g(x)) = g^{−1}(x) for every g ∈ G and x ∈ K; the map τ is well defined because K is a global section. To show the continuity of τ, let (y_n)_n = (g_n(x_n))_n be a sequence in G(K), with x_n ∈ K, converging to y = g(x) ∈ G(K) with x ∈ K. Let B = {y_n, n ≥ 0} ∪ {y}. By compactness of G and B, G(B) is compact. Let (g_{n_k}^{−1}(x_{n_k}))_k be a subsequence of (g_n^{−1}(x_n))_n converging to some point b, and let (g_{n_{φ(k)}})_k be a subsequence of (g_{n_k})_k converging to some element g_0 ∈ G. It follows that x_{n_{φ(k)}} converges to g_0(b) ∈ K, since K is closed, and g_{n_{φ(k)}}(x_{n_{φ(k)}}) → g_0(g_0(b)) = g(x), which implies that g_0(b) = x since K is a global section; hence g_0^{−1}(x) = g^{−1}(x) = b. It follows that (g_n^{−1}(x_n))_n converges to g^{−1}(x). It is easy to see that τ² = id, so τ is bijective with τ^{−1} = τ continuous; thus τ is a homeomorphism and hence an involution. Finally, using the commutativity of G, one checks that f^{−1} = τfτ for every f ∈ G (the computation is spelled out after Lemma 2.4 below), so G is strongly reversible by τ.

Example 2.2. The converse implication in the above proposition is not true; that is, a strongly reversible compact abelian group need not have a global section. Consider, for example, the action of the circle group G on C × C given in Remark 1.5. Clearly, G is a compact abelian group and is strongly reversible by τ: (z_1, z_2) → (z̄_1, z̄_2); however, G has no global section.

Proposition 2.3. Let G be a compact group of homeomorphisms of a metric space M having a global section K. If G_x = {id} for every x ∈ K, then the map φ: G × K → M, φ(g, x) = g(x), is a homeomorphism.

Proof. It is easy to see that φ is well defined and continuous. Injectivity of φ follows from the fact that K is a global section and G_x = {id} for every x ∈ K: if g(x) = g_0(x_0) with x, x_0 ∈ K, then x = x_0, and then g = g_0 since G_x = {id}. A compactness argument as in the proof of Proposition 2.1 shows that if φ(g_n, x_n) → φ(g, x), then g_n → g and x_n → x; thus φ^{−1} is continuous, and φ is a homeomorphism.

In the following lemma, we show that if a group G of homeomorphisms of a metric space M has a global section K, then K is homeomorphic to the orbit space M/G.

Lemma 2.4. Let G be a group of homeomorphisms of a metric space M. If G has a global section K, then the following hold.
1. The restriction of the orbit map π: M → M/G to K, given by π|_K(x) = G(x), is a homeomorphism.
2. If M is connected, then K is connected.

Proof. (1) It is easy to see that the orbit map is continuous and open. Since K is a global section, K/G = {G(x) | x ∈ K} = M/G and the restriction π|_K is a bijection; moreover, π|_K is continuous since π is. To show the continuity of (π|_K)^{−1}, let F be a closed subset of K; we show that π(F) is closed in M/G. Since K is closed in M, F is closed in M, and hence π(F) is closed in M/G. We conclude that π|_K is a homeomorphism.
(2) Since π is continuous and M is connected, M/G = π(M) is connected, and so K is connected by (1).
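Before proceeding, here is the computation behind the last step of the proof of Proposition 2.1 (a sketch; it uses only that G is abelian and that τ(g(x)) = g^{−1}(x) on M = G(K)):

\begin{aligned}
\tau f \tau\big(g(x)\big) &= \tau f\big(g^{-1}(x)\big) = \tau\big((fg^{-1})(x)\big)\\
&= (fg^{-1})^{-1}(x) = f^{-1}g(x) \qquad (\text{since } G \text{ is abelian})\\
&= f^{-1}\big(g(x)\big),
\end{aligned}

so τfτ = f^{−1} on all of M; similarly, τ²(g(x)) = τ(g^{−1}(x)) = g(x) gives τ² = id.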
In the following proposition, we show that if G is a compact Lie group of homeomorphisms of an n-manifold M such that G_x = {id} for every x, then M is covered by a finite set of G-invariant open subsets such that the restriction of G to each of them has a global section.

Proposition 2.5. Let G be a compact Lie group of homeomorphisms of an n-manifold M such that G_x = {id} for every x ∈ M. Then M is covered by finitely many G-invariant open subsets such that the restriction of G to each of them has a global section.

Proof. By Lemma 2.4.(1), the orbit map π: M → M/G is a homeomorphism on local cross-sections. From the proof of Theorem 4.2 of [15], there exists a finite collection F = {A_i}_i of closed subsets A_i of M/G whose preimages yield the required G-invariant open cover of M.

Periodic flows on connected manifolds

Let G be a continuous flow on a metric space X; recall that G is said to be periodic if h_s = id for some s ∈ R*+. If x ∈ X is periodic but nonstationary, then there is T > 0 such that T is the smallest positive real satisfying h_T(x) = x (see Theorem 2.12 of [6]). This real T is called the period of x, and for every real t > 0 satisfying h_t(x) = x, there is an integer n ≥ 1 such that t = nT. The flow G is called periodic of period s > 0 if h_s = id and h_t ≠ id for every 0 < t < s.

Proposition 3.1. Let G be a continuous flow on a metric space X. Then the following hold:
1. If t_n → t and x_n → x, then h_{t_n}(x_n) → h_t(x).
2. If (x_n)_n converges to x, t_n → 0 with t_n > 0, and h_{t_n}(x_n) = x_n for every n, then x is stationary.

In the remainder of this section, we take G = {h_t | t ∈ R} to be a nontrivial continuous flow on M which is periodic with period s ∈ R*+. We shall make use of the following notations: F denotes the set of stationary points of G, H the set of points of period s, and L the set of nonstationary points whose period is strictly smaller than s. It is easy to see that M = L ∪ H ∪ F. We denote by N = L ∪ H = M \ F the set of nonstationary points of M. Clearly the set F is closed as an intersection of closed sets (see Section 1 for the definition of a stationary point).

Lemma 3.2. Let G be a periodic flow of period s on a metric space M. Then the subset H is open.

Proof. To show that H is open, we show that L ∪ F is closed. Let (x_n)_n be a sequence in L ∪ F such that x_n → x; we show that x ∈ L ∪ F. For every x_n ∈ L ∪ F, there exists 0 < t_n < s such that h_{t_n}(x_n) = x_n. Since (t_n) ⊂ [0, s] and [0, s] is compact, we may assume that t_n → t_0 ∈ [0, s]. If t_0 = 0, then x ∈ F by Proposition 3.1.(2). If t_0 = s, then (s − t_n) → 0 and h_{s−t_n}(x_n) = x_n for every n since h_s = id, so again x ∈ F by Proposition 3.1.(2). If 0 < t_0 < s, then h_{t_0}(x) = x, and either x ∈ F or x has period 0 < T < s, so that x ∈ L. We conclude that L ∪ F is closed.

Lemma 3.3. Let V be a nonempty open subset of L. Then the following hold:
1. For every x_0 ∈ V there exists ε > 0 such that no point of B(x_0, ε) ∩ V has period smaller than the period T_0 of x_0.
2. V contains a nonempty open subset on which all points have the same period s/k for some integer k.

Proof. (1) Assume that statement (1) is not true; then there exists x_0 ∈ V such that for every ε > 0 there exists x_ε ∈ B(x_0, ε) ∩ V with T_ε < T_0, where T_ε is the period of x_ε. Then for every n ∈ N* there exists x_n ∈ B(x_0, 1/n) ∩ V such that T_n < T_0, where T_n is the period of x_n. The sequence (T_n) has a convergent subsequence, so we may assume that (T_n) converges. But (T_n) ⊂ {s/k | k ∈ N*} since G has period s; hence, as n → +∞, either T_n → 0 or T_n → s/k_0. If T_n → 0, then since x_n → x_0 and h_{T_n}(x_n) = x_n for every n, Proposition 3.1.(2) shows that x_0 is stationary, a contradiction since x_0 ∈ V ⊂ L. Thus T_n → s/k_0, and there exists a positive integer n_0 such that T_n = s/k_0 for all n ≥ n_0. Letting n → +∞, we obtain h_{s/k_0}(x_0) = x_0, which implies that s/k_0 = pT_0 for some p ∈ N*; but s/k_0 ≤ T_0, so p = 1 and s/k_0 = T_0, contradicting the fact that T_n < T_0 for all n. We conclude that (1) is true.
(2) Let x_0 ∈ V. By Item (1) there exists ε > 0 such that B(x_0, ε) ∩ V is a nonempty open subset of V on which all points have the same period s/k.

Proposition 3.4. Let G be a nontrivial periodic flow of period s on a connected n-manifold M. Then the following hold.
1. G is a compact connected Lie group of dimension 1, and N = M \ F is connected.
2. The subset H is open and everywhere dense in M.

Proof. (1) If F = ∅, then clearly N = M is connected. Now assume that F ≠ ∅; we show that N = M \ F is connected. Suppose that M \ F is not connected; then there exist two nonempty open subsets U_1 and U_2 such that U_1 ∩ U_2 = ∅ and M \ F = U_1 ∪ U_2. Since G is connected, h_t(U_1) = U_1 and h_t(U_2) = U_2 for every t ∈ R.
Let G′ = {h′_t | t ∈ R} be the flow on M defined by h′_t = h_t on U_1 and h′_t = id elsewhere. Clearly G′ is a compact Lie group since G is, and since G′ = id on U_2, Theorem 1.7 gives G′ = {id} on M; hence G = {id} on U_1, and therefore G = {id} on M, which contradicts the fact that G is nontrivial. Thus M \ F is connected.
(2) By Lemma 3.2 the subset H is open, and using Lemma 3.3 one shows that H̄ = M.

For every i, we show that the subset U_i is closed in G(U_i). Let (x_n) be a sequence in U_i converging to some point x in G(U_i); we must show that x ∈ U_i. We have x = h_t(x_0) for some real t and some x_0 ∈ U_i; on the other hand, U_i ⊂ K_i, and since K_i is closed, x ∈ K_i ∩ G(U_i). Therefore x and x_0 lie in K_i and in the same orbit, so by the definition of a local section, x = x_0 ∈ U_i. Now one easily sees that U_i is a global section for G|_{G(U_i)}, and G|_{G(U_i)} is strongly reversible by Proposition 2.1.

Periodic flows with global sections

Let G be a periodic flow of period s on a connected n-manifold M having a global section K. Recall that G is compact (Proposition 3.4). In this subsection we prove some properties of G. We begin by showing, in the following lemma, that G is strongly reversible and that the fixed point set of h_{s/2} coincides with the set F of stationary points of G.

Lemma 3.5. Let G be a periodic flow of period s on a connected n-manifold M. If G has a global section K, then the following hold.
1. K = K ∩ H̄.
2. G is strongly reversible by the involution τ, and Fix(τ) = A ∪ B ∪ F, where K = A ∪ F and B = {h_{T/2}(x) | x ∈ K \ F and x has period T}.

Proof. (1) Since K is closed, K ∩ H̄ ⊂ K. Assume that K ≠ K ∩ H̄; then there exists y ∈ K such that y ∉ K ∩ H̄. By (*), y = h_t(x) for some x ∈ K ∩ H̄ ⊂ K, so y = x (since K is a global section), a contradiction. Thus K = K ∩ H̄.
(2) The flow G is strongly reversible by the involution τ (see the proof of Proposition 2.1). It is easy to see that if F ≠ ∅ then F ⊂ K, since M = G(K). Let y = h_t(x) ∈ G(K) be such that τ(y) = y; then h_{−t}(x) = h_t(x), equivalently h_{2t}(x) = x. Then either x = y ∈ F or 2t = nT for some n ∈ N*, where T is the period of x, equivalently t = nT/2. If n = 2p is even, then t = pT, so y = h_{pT}(x) = x ∈ K. If n = 2p + 1 is odd, then t = pT + T/2 and y = h_{T/2}(x). Thus Fix(τ) ⊂ K ∪ {h_{T/2}(x) | x ∈ K \ F and x has period T}; the converse inclusion is easy to see. So Fix(τ) = A ∪ B ∪ F, where K = A ∪ F and B = {h_{T/2}(x) | x ∈ K \ F and x has period T}.

In the following proposition, "dim" means the topological dimension. The topological dimension of a topological space E is defined inductively as follows: the empty set is assigned dimension −1, and E is said to be n-dimensional at a point p if n is the least number for which there are arbitrarily small neighborhoods of p whose boundaries all have dimension < n. The space E has topological dimension n if its dimension at all of its points is ≤ n and is equal to n at at least one point.

Proposition 3.6. Let G be a nontrivial periodic flow of period s on a connected n-manifold M. If G has a global section K, then G is strongly reversible by the involution τ of Lemma 3.5, satisfying the following properties:
1. F divides Fix(τ) into two connected components A = K \ F and B = h_{s/2}(A).
2. dim(K ∩ H) = n − 1.

Proof. (1) Each of A and B is connected since A is connected; thus Fix(τ) \ F = A ∪ B has two connected components A and B, that is, F divides Fix(τ) into two connected components A = K \ F and B = h_{s/2}(A).
(2) Since H is G-invariant, H = G(K ∩ H), and K ∩ H is a global section for the restriction G|_H. Moreover, G_x = {id} for every x ∈ K ∩ H, so the map φ: G × (K ∩ H) → H is a homeomorphism by Proposition 2.3. Since H is open in M, dim H = n = dim(G × (K ∩ H)) ≤ dim G + dim(K ∩ H). On the other hand, since G is compact, it is well known that dim(G × (K ∩ H)) > max{dim G, dim(K ∩ H)} (see [11]). Combining, n ≤ 1 + dim(K ∩ H) and dim(K ∩ H) < n, whence dim(K ∩ H) = n − 1.
Strongly reversible periodic flows

Let G be a nontrivial periodic flow of period s on a connected n-manifold M, and assume that G is strongly reversible by an involution τ such that Fix(τ) is an (n−1)-manifold. In the following proposition we prove an important property of G that will be used in the rest of the paper: every orbit G(x) in N intersects Fix(τ) in exactly two points y and h_{T/2}(y), where T is the period of x.

Proposition 3.7. Under the above assumptions, the following hold.
1. N ∩ Fix(τ) ≠ ∅.
2. For every x ∈ N, G(x) is symmetric and intersects Fix(τ) in exactly two points y and h_{T/2}(y), where T is the period of x.
3. M = G(Fix(τ)) and F ⊂ Fix(τ).

Proof. (1) If N ∩ Fix(τ) = ∅, then Fix(τ) ⊂ F and G|_{Fix(τ)} = id. By compactness of G, we can find a G-invariant open subset U of M such that Fix(τ) ∩ U divides U into two connected components U_1 and U_2. Since G is connected, each of U_1 and U_2 is G-invariant, so we can extend the restriction G|_{U_1} by the identity on U_2 ∪ (Fix(τ) ∩ U). The extension group is a compact Lie group that coincides with the identity on a nonempty open subset, so by Theorem 1.7 it must be trivial; hence G|_{U_1} = {id}. It follows that G is trivial, a contradiction. Thus N ∩ Fix(τ) ≠ ∅.
(2) By Proposition 3.4.(1), N is connected. So, by Item (1) and ([3], Theorem 1.2.(2)), every orbit G(x) is symmetric; in particular, τ(x) = h_t(x) for some real t, and by reversibility of G we obtain h_{t/2}(x) = τ(h_{t/2}(x)). Hence G(x) ∩ Fix(τ) ≠ ∅. Let a ∈ G(x) ∩ Fix(τ), and assume that b is another point in G(x) ∩ Fix(τ); then b = h_t(a) for some real t. Let T denote the period of a; then 2t = nT for some integer n, and the preceding equality implies that b = h_{nT/2}(a). Thus either a = b or b = h_{T/2}(a). We conclude that G(x) ∩ Fix(τ) = {a, h_{T/2}(a)}.
(3) By Item (2), N ⊂ G(Fix(τ)). By Proposition 3.4.(2), H̄ = M; since H ⊂ N ⊂ M, we get N̄ = M. We show that F ⊂ Fix(τ). Let x ∈ F; then there exists a sequence (x_n) ⊂ N such that x_n → x. For every n, x_n = h_{t_n}(y_n) for some h_{t_n} ∈ G and y_n ∈ Fix(τ). By compactness of G, we may assume that h_{t_n} → h_t ∈ G; on the other hand, τ(x_n) → τ(x), and one obtains τ(x) = x. We deduce that F ⊂ Fix(τ), and so M = G(Fix(τ)).

Lemma 3.8. Let G be a nontrivial periodic flow of period s on a connected n-manifold M, which is strongly reversible by an involution τ such that Fix(τ) is an (n−1)-manifold, and assume that there exist three subsets A, B, and C of Fix(τ) satisfying conditions (1)–(3).

Lemma 3.9. Let G be a nontrivial periodic flow of period s on a connected n-manifold M, which is strongly reversible by an involution τ such that Fix(τ) is an (n−1)-manifold. Assume that there exists a closed subset C of Fix(τ) such that C ⊂ Fix(h_{s/2}) and Fix(τ) \ C is not connected. Then C divides Fix(τ) into two connected components A and h_{s/2}(A), and M = G(A ∪ C).

An equivalent condition for the existence of a global section

Using strong reversibility, we give an equivalent condition for the existence of a global section which is an (n−1)-manifold by proving Theorem 1.3.

Proof of Theorem 1.3. (2) ⇒ (1). Clearly, A is open in A ∪ F, so A is an (n−1)-manifold; in the same way B is an (n−1)-manifold, and hence A ∪ B is an (n−1)-manifold. By Proposition 3.4.(1), M \ F is a connected open submanifold of M, and G|_{M\F} is strongly reversible by the involution τ|_{M\F}. Since Fix(τ|_{M\F}) = Fix(τ) \ F = A ∪ B is an (n−1)-manifold and is not connected, G|_{M\F} satisfies the conditions of Lemma 3.9 with C = ∅; therefore, by Lemma 3.9, B = h_{s/2}(A), and by Proposition 3.7, M = G(Fix(τ)) = G(A ∪ F). It remains to show that every orbit intersects A ∪ F in exactly one point.
Assume that there exists y ∈ M \ F such that G(y) intersects A in two points a and b. Since B = h_{s/2}(A), we would have a, b, and h_{s/2}(a) three distinct points in G(y) ∩ Fix(τ), which contradicts the fact that every orbit G(y) in M \ F intersects Fix(τ) in exactly two points (Proposition 3.7.(2)). Therefore A ∪ F is a global section for G.

Periodic flows that are strongly reversible by reflections have global sections

Let G be a periodic flow on a connected n-manifold which is strongly reversible by a reflection. In this subsection we show that G has a global section by proving Theorem 1.4. We begin by proving two important properties of G in the following theorem.

Theorem 3.10. Let G be a nontrivial periodic flow of period s on a connected n-manifold M. If G is strongly reversible by a reflection R, then the following hold.
1. Fix(h_{s/2}) = F.
2. F divides Fix(R) into two connected components A and h_{s/2}(A).

These two properties yield Theorem 1.4: it remains to show that every orbit intersects A ∪ F in exactly one point. If not, assume that there exists y ∈ M \ F such that there exist a ≠ b ∈ G(y) ∩ A; then h_{s/2}(a) ∈ G(y) ∩ h_{s/2}(A), and h_{s/2}(a) ≠ a, h_{s/2}(a) ≠ b since A ∩ h_{s/2}(A) = ∅, which is impossible since every orbit intersects Fix(R) in exactly two points (Proposition 3.7). We conclude that A ∪ F is a global section for G.

We end the paper with an example of a nonperiodic flow G which is strongly reversible by a reflection but has no global section: one can easily see that G is strongly reversible by the reflection s: x → −x, and that it is nonperiodic and has no global section.
Spin excitations in metallic kagome lattice FeSn and CoSn

In two-dimensional (2D) metallic kagome lattice materials, destructive interference of electronic hopping pathways around the kagome bracket can produce nearly localized electrons, and thus electronic bands that are flat in momentum space. When ferromagnetic order breaks the degeneracy of the electronic bands and splits them into spin-up majority and spin-down minority bands, quasiparticle excitations between the spin-up and spin-down flat bands should form a narrow, localized spin-excitation Stoner continuum coexisting with well-defined spin waves at long wavelengths. Here we report inelastic neutron scattering studies of spin excitations in the 2D metallic kagome lattice antiferromagnet FeSn and paramagnet CoSn, where angle-resolved photoemission spectroscopy experiments found spin-polarized and non-polarized flat bands, respectively, below the Fermi level. Although our initial measurements on FeSn indeed reveal well-defined spin waves extending well above 140 meV coexisting with a flat excitation at 170 meV, subsequent experiments on CoSn indicate that the flat mode actually arises mostly from hydrocarbon scattering of the CYTOP-M commonly used to glue the samples to the aluminum holder. Our results thus establish the evolution of spin excitations in FeSn and CoSn, and identify an anomalous flat mode that has been overlooked by the neutron scattering community for the past 20 years.

In general, spin-flip excitations in a magnet can be interpreted in terms of either a quantum spin model [1,2] with local moments on each atomic site [Fig. 1(a)] or a Stoner itinerant electron model [3-6,13]. In insulating ferromagnets such as EuO, magnetic excitations can be fully described by a Heisenberg Hamiltonian [31] with spins on Eu lattice sites. In ferromagnetic metals, magnetic order breaks the degeneracy of the electronic bands, splitting them into spin-up majority and spin-down minority bands [Fig. 1(b)] [6]. For the 3D metallic ferromagnets Fe and Ni, the low-energy spin waves are strongly damped when they enter a broad Stoner continuum of band-electron spin flips that extends over several eV in energy [Fig. 1(c)] [8-12]. For a paramagnetic metal, there is no splitting of the degenerate electronic bands, and one would not expect to observe a Stoner continuum [13]. In strongly correlated materials like copper- and iron-based superconductors, the subtle balance between electron kinetic energy and short-range interactions can lead to debates concerning whether magnetism has a localized or itinerant origin [32,33].

In some 2D crystals, electrons can be confined in real space to form flat bands, for example through geometric lattice frustration [19-21]. The flat bands of magic-angle twisted bilayer graphene [23] provide one example of this route toward strong electronic correlation [24]; the kagome lattice depicted in Fig. 1 [20] provides a second. Recently, a spin-polarized flat electronic band has been identified in the AF kagome metal FeSn at an energy E = 230 ± 50 meV below the Fermi level by angle-resolved photoemission spectroscopy (ARPES) experiments [25]. FeSn is an A-type AF with antiferromagnetically coupled FM planes [34], which we will view as 2D ferromagnets. Neutrons should in principle detect the electron-hole-pair Stoner excitations from the majority-spin flat band below the Fermi level to minority-spin bands near or above the Fermi level [Fig. 1(b)] [6,13].
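To make the flat-band mechanism discussed above concrete, the following Python sketch diagonalizes a nearest-neighbor tight-binding Bloch Hamiltonian on the kagome lattice. The hopping amplitude, lattice vectors, and gauge choice are illustrative assumptions (this is not the DFT+DMFT calculation of the paper); with this convention the top band comes out k-independent at E = 2t over the whole zone.

import numpy as np

t = 1.0
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])

def bloch_h(k):
    # 3x3 Bloch Hamiltonian for the three kagome sublattices
    p1 = 1 + np.exp(-1j * np.dot(k, a1))
    p2 = 1 + np.exp(-1j * np.dot(k, a2))
    p3 = 1 + np.exp(-1j * np.dot(k, a2 - a1))
    return -t * np.array([[0, p1, p2],
                          [np.conj(p1), 0, p3],
                          [np.conj(p2), np.conj(p3), 0]])

rng = np.random.default_rng(0)
ks = rng.uniform(-np.pi, np.pi, size=(200, 2))
top = [np.linalg.eigvalsh(bloch_h(k))[-1] for k in ks]
print(min(top), max(top))   # both ~2t: the top band is flat in momentum space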
Since neutron scattering measures electron-hole-pair excitations, having a flat spin-up electronic band below the Fermi level is a necessary, but not a sufficient, condition to observe a flat Stoner continuum band. Instead, such a dispersionless narrow-energy spin excitation band also requires a flat spin-down electronic band above (or near) the Fermi level [13]. Unfortunately, ARPES measurements cannot provide any information concerning such an electronic band above the Fermi level, although density functional theory (DFT) calculations suggest its presence [25]. For comparison, although ARPES measurements have also identified a flat band at an energy E = 270 ± 50 meV below the Fermi level in CoSn [26,27], one would not expect to observe a flat Stoner continuum band because of the degenerate electronic bands and the paramagnetic nature of the system [35]. In this paper, we report inelastic neutron scattering (INS) studies of spin excitations in the 2D metallic kagome lattice antiferromagnet FeSn [34] and paramagnet CoSn [35]. For FeSn, our initial measurements reveal well-defined spin waves extending well above 140 meV and a narrow, 24 meV wide band of excitations that cannot be described by a simple spin-wave model. While these data suggest the presence of electron-hole-pair Stoner excitations from the majority-spin flat band below the Fermi level to minority-spin bands near or above the Fermi level in FeSn, subsequent experiments on paramagnetic CoSn reveal the same flat mode coexisting with the expected paramagnetic spin excitations. Through careful analysis of INS spectra under different conditions, we conclude that the observed flat mode actually arises mostly from hydrocarbon scattering of the CYTOP-M commonly used to glue the samples to aluminum holders [36]. Therefore, our results establish the evolution of spin excitations in FeSn and CoSn, and identify an anomalous flat mode that has been overlooked by the neutron scattering community for the past 20 years. We have carried out INS experiments to study spin waves and search for anomalous Stoner excitations in the AF kagome metal FeSn [34] and paramagnetic CoSn [35]. The structure of FeSn consists of 2D kagome nets of Fe separated by layers of Sn, and exhibits AF order below T_N ≈ 365 K with in-plane FM moments in each layer stacked antiferromagnetically along the c-axis [Fig. 1(d)] [34]. Since each unit cell contains three Fe atoms [Fig. 1(e)], we expect one acoustic and two optical spin-wave branches in a local moment Heisenberg Hamiltonian [34,37,38]. Figures 1(f,g) show the reciprocal spaces corresponding to the crystal structures of FeSn depicted in Figs. 1(d,e), respectively. CoSn has the same crystal structure as that of FeSn but is paramagnetic at all temperatures [35]. Our measurements reveal well-defined spin waves arising from the ferromagnetic kagome planes. We also observe an easy-axis anisotropy gap Δ_a ≈ 2 meV due to single-ion magnetic anisotropy [Fig. 2(h)] [34]. To understand these observations, we start with a local moment Heisenberg Hamiltonian, H = Σ_{i<j} J_ij S_i · S_j + A Σ_i (S_i^x)^2, where J_ij are the magnetic exchange couplings; with the ordered moments aligned along the in-plane (x) direction [39,40], we define A (< 0) to be the single-ion anisotropy. The experimental in-plane FM exchange couplings obtained from this fit are smaller than theoretical predictions, while the c-axis exchange coupling is larger by a factor of two [34].
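To see why three Fe atoms per unit cell give one acoustic and two optical branches, and why a kagome ferromagnet can host a flat magnon band, the following minimal Python sketch diagonalizes the linear spin-wave matrix for a single nearest-neighbor kagome plane. It is an illustration only, not the SpinW fit used in the paper: the energy scale SJ = 10 meV is an arbitrary assumed value, and interlayer coupling, further neighbors, DM interactions, and single-ion anisotropy are all neglected.

```python
import numpy as np

SJ = 10.0  # |J|*S in meV; arbitrary illustrative energy scale

# Kagome lattice: two primitive vectors and the three inequivalent
# half-bond vectors connecting sublattices A-B, A-C, and B-C.
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
bonds = [a1 / 2.0, a2 / 2.0, (a2 - a1) / 2.0]

def magnon_bands(k):
    """Linear spin-wave energies of a nearest-neighbor kagome ferromagnet."""
    c = [np.cos(k @ d) for d in bonds]
    C = np.array([[0.0, c[0], c[1]],
                  [c[0], 0.0, c[2]],
                  [c[1], c[2], 0.0]])
    # Each site has z = 4 neighbors: omega = SJ * (4 - 2 * eigenvalues of C)
    return np.sort(SJ * (4.0 - 2.0 * np.linalg.eigvalsh(C)))

# Scan Gamma -> K: the top branch stays at 6*SJ (flat), the lower two disperse.
for t in np.linspace(0.0, 1.0, 5):
    k = t * np.array([4.0 * np.pi / 3.0, 0.0])
    print(np.round(magnon_bands(k), 3))
```

The printout shows the top branch pinned at 6·SJ at every momentum — the flat optical band characteristic of the kagome geometry — while the lower two branches disperse; the DM interactions or further-neighbor couplings neglected here would lift this exact flatness.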
To determine whether or not the magnetic excitations of FeSn can be understood within a Heisenberg Hamiltonian with S = 1 [34], we consider the energy dependence of the local dynamic susceptibility χ''(E), obtained by integrating the imaginary part of the generalized dynamic spin susceptibility χ''(Q, E) over the first Brillouin zone [the green shaded region in Fig. 1(g)] at different energies [41], using χ''(Q, E) = π (1 − e^(−E/k_B T)) S(Q, E), where S(Q, E) is the measured magnetic scattering in absolute units, E is the neutron energy transfer, and k_B is Boltzmann's constant. Since the static ordered moment per Fe is M ≈ 1.85 µ_B at 100 K [34], the total magnetic moment M_0 of FeSn satisfies M_0^2 = M^2 + ⟨m^2⟩ per Fe, where the local fluctuating moment ⟨m^2⟩ ≈ 2 µ_B^2 is obtained by integrating χ''(E) at energies below 150 meV. In the local moment Heisenberg Hamiltonian with S = 1, the total moment sum rule implies that M_0^2 = g^2 S(S + 1), where g ≈ 2 is the Landé g-factor, requiring a fluctuating moment contribution of g^2 S = 4 µ_B^2 per Fe, which is a factor of 2 larger than the measured ⟨m^2⟩ ≈ 2 µ_B^2 per Fe. The solid and dashed lines in Fig. 3(h) are the calculated χ''(E) in absolute units assuming S = 1 and 0.5, respectively. These unusual fluctuation properties suggest that electronic itinerancy plays a role in both the static ordered moment and the spin fluctuations of FeSn [34]. We remark that flat spin-wave bands can occur in kagome lattice ferromagnets with Dzyaloshinskii-Moriya (DM) interactions [37], but these have an entirely different origin [38]. The flat mode we observe also appears in paramagnetic CoSn, suggesting that the mode may not have a magnetic origin. Since our FeSn and CoSn samples are glued onto the aluminum plates with CYTOP-M, which is an amorphous fluoropolymer but contains one hydrogen to facilitate bonding to metal surfaces [36,42], the hydrogenated amorphous carbon films formed between the samples and the aluminum plates should have C-H bending and stretching vibrational modes occurring around 150-180 meV and 350-380 meV, respectively [43,44]. Fig. 5(c) confirms that the scattering at 170 meV arises from the C-H bending mode [43,44]. To further test whether pure FeSn without CYTOP-M can also be contaminated by hydrocarbons, we prepared fresh single crystals of FeSn and carried out measurements at 5 K using unaligned single crystals on SEQUOIA. We find weak and broad excitations at 170 meV; even after shifting the incident neutron beam away from the sample using a motorized mask, the hydrocarbon contamination is still present, with a similar intensity ratio between the 170 meV and 360 meV modes (Table 1). Our careful infrared absorption spectrum analysis of the thermal shielding suggests the presence of hydrocarbon contamination, probably from a silicone oil that accidentally contaminated the vacuum system at SEQUOIA. Therefore, we conclude that the observed scattering at 170 meV in FeSn arises mostly from solid CYTOP-M with small additional contamination from hydrocarbons on the thermal shielding. DISCUSSION To account for electronic itinerancy, we calculate the electronic structure of FeSn in the paramagnetic and AF ordered states using a combination of DFT and dynamical mean field theory (DFT+DMFT) [45]. In the paramagnetic state, the mass enhancements of the Fe 3d electrons are comparable to the values in iron arsenide superconductors; we conclude that FeSn is a Hund's metal [46] with intermediate-strength correlations. Although CYTOP-M has been used by the neutron scattering community as a glue to mount small samples for over 20 years, its characteristics at high energies have not been reported [36].
This is mostly because of the difficulty of carrying out INS at energies above 100 meV at traditional reactor sources. The development of neutron time-of-flight measurements at spallation sources allows measurements at energies well above 200 meV, and the flat mode was missed in previous work [41] because of its weak intensity and its weak Q dependence. Our identification of the flat C-H bending and stretching vibrational modes should help future neutron scatterers to separate this scattering from genuine magnetic signals. Sample synthesis, structural and composition characterization. Single crystals of FeSn and CoSn were grown by the self-flux method. High-purity Fe (Co) and Sn were put into corundum crucibles and sealed into quartz tubes with a ratio of Fe (Co) : Sn = 2 : 98. The tube was heated to 1273 K and held there for 12 h, then cooled to 823 K (873 K) at a rate of 3 (2) K/h. The flux was removed by centrifugation, and shiny crystals with a typical size of about 2×2×5 mm^3 were obtained. The single-crystal X-ray diffraction (XRD) measurement was performed using a Bruker D8 X-ray diffractometer with Cu Kα radiation (λ = 0.15418 nm) at room temperature (Fig. S1). The elemental analysis was performed using energy-dispersive X-ray (EDX) spectroscopy in an FEI Nano 450 scanning electron microscope (SEM). In order to determine the composition of FeSn accurately, we carefully polished the FeSn surfaces using sandpaper and carried out EDX measurements on five FeSn crystals (Fig. S2). The average stoichiometry of each crystal was determined by examination of multiple points (5 positions). As shown in Table S1, the atomic ratio of Fe:Sn is close to 1:1. To further determine the crystalline quality and stoichiometry of the samples used in the neutron scattering experiments, we performed single-crystal X-ray diffraction experiments on two pieces of these samples at the Rigaku XtaLAB PRO diffractometer housed at the Spallation Neutron Source at Oak Ridge National Laboratory (ORNL). The measured crystals were carefully suspended in Paratone oil and mounted on a plastic loop attached to a copper pin/goniometer (Fig. S3). The single-crystal X-ray diffraction data were collected with molybdenum Kα radiation (λ = 0.71073 Å). More than 2800 Bragg diffraction peaks were collected and refined using Rietveld analysis (Table S2). We find no evidence of superlattice peaks that would indicate possible Fe vacancy order (Fig. S3). The refinement results indicate less than 1.5% possible Fe vacancy (Fig. S4), suggesting that the single crystals are essentially fully stoichiometric. To determine whether the AF phase transition in our sample is consistent with earlier work [34], we carried out temperature- and field-dependent magnetization measurements; the results are consistent with earlier work [34]. INS measurements on FeSn were carried out using the MAPS time-of-flight chopper spectrometer at the ISIS Spallation Neutron Source, the Rutherford Appleton Laboratory, UK [47]. INS measurements on CoSn and FeSn were also performed using the SEQUOIA spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory [48]. Fifty single crystals of FeSn with a total mass of 0.97 g were co-aligned on a single aluminum plate and mounted inside a He displex. Figure S6 shows that the mosaic of the aligned single crystals is about 6 degrees. The crystal structure of FeSn is hexagonal with space group P6/mmm and lattice parameters a = b = 5.529 Å and c = 4.4481 Å [34]. The lattice parameters of CoSn are a = b = 5.528 Å and c = 4.26 Å [35].
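As a quick cross-check of the reciprocal-space units used in the next paragraph, this short Python snippet converts the quoted hexagonal lattice parameters into reciprocal lattice vector lengths; the relations a* = b* = 4π/(√3·a) and c* = 2π/c are standard crystallography for a hexagonal cell, and only the lattice parameters are taken from the text above.

```python
import math

def hexagonal_reciprocal(a_angstrom, c_angstrom):
    """Reciprocal lattice vector lengths (1/Angstrom) of a hexagonal cell:
    |a*| = |b*| = 4*pi / (sqrt(3)*a), |c*| = 2*pi / c."""
    a_star = 4.0 * math.pi / (math.sqrt(3.0) * a_angstrom)
    c_star = 2.0 * math.pi / c_angstrom
    return a_star, c_star

print(hexagonal_reciprocal(5.529, 4.4481))  # FeSn: (~1.31, ~1.41) 1/Angstrom
print(hexagonal_reciprocal(5.528, 4.26))    # CoSn: (~1.31, ~1.47) 1/Angstrom
```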
We define the momentum transfer Q in 3D reciprocal space in Å^−1 as Q = Ha* + Kb* + Lc*, where H, K, and L are Miller indices and a*, b*, and c* are the reciprocal lattice vectors. The sample temperature is set at 5 K. The neutron scattering data are normalized to absolute units using a vanadium standard, which has an accuracy of approximately 30% [41]. Heisenberg model fitting to spin waves of FeSn. We use the Heisenberg model and a least-squares method to fit the spin waves of FeSn (Figs. S7-S9). The software packages used were SpinW and Horace [49]. The Heisenberg Hamiltonian is as discussed in the main text. Note that in our Heisenberg Hamiltonian fit to the spin-wave data, we only used dispersion relations from experiments and assumed S = 1, which is close to the 1.86 µ_B per Fe ordered moment [34]. The overall intensity from the SpinW fit, when considered in absolute units, is considerably higher than the experiment [Fig. 2(h)]. This suggests that the Heisenberg Hamiltonian overestimates the spin-wave intensity contribution from the ordered moment. We first determine the interlayer coupling J_c; using linear spin wave theory, we find that the spin-wave band top along the c-axis direction is set by J_c. Note that the error bars of these parameters are estimated as follows: first, calculate the least-squares error using the best-fit parameter J_0 and denote it as R_0; then determine the parameter J'_0 at which the squared error reaches 2R_0, and the error bar is given by ΔJ = |J'_0 − J_0|. This error range gives a 68% confidence interval, meaning that if the residuals of the fit have a Gaussian distribution, the calculations generated by the range [J_0 − ΔJ, J_0 + ΔJ] cover 68% (i.e., 1σ of a Gaussian distribution) of the data points. DFT+DMFT calculations. The electronic structures and spin dynamics of FeSn in the paramagnetic and magnetically ordered states are computed using the DFT+DMFT method [45]. The density functional theory part is based on the full-potential linear augmented plane wave method implemented in Wien2K [50]. The Perdew-Burke-Ernzerhof generalized gradient approximation is used for the exchange correlation functional [51]. DFT+DMFT was implemented on top of Wien2K and has been described in detail before [52]. In the DFT+DMFT calculations, the electronic charge was computed self-consistently on the DFT+DMFT density matrix. The quantum impurity problem was solved by the continuous time quantum Monte Carlo (CTQMC) method [53,54] with a Hubbard U = 4.0 eV and Hund's rule coupling J = 0.7 eV in both the paramagnetic state and the magnetically ordered state. The Bethe-Salpeter equation is used to compute the dynamic spin susceptibility, where the bare susceptibility is computed using the converged DFT+DMFT Green's function, while the two-particle vertex is directly sampled using the CTQMC method after achieving full self-consistency of the DFT+DMFT density matrix [55]. For the magnetically ordered state, the averaged Green's function of the spin-up and spin-down channels is used to compute the bare susceptibility. In the paramagnetic state, an electronic flat band of dominantly d_xz and d_yz orbital character is located a few meV above the Fermi level (Fig. 4b). In the magnetic state, the spin exchange interaction splits the spin-up and spin-down bands. Figure S10 shows orbital-resolved band structures of FeSn in the paramagnetic state and in the spin-up and spin-down channels of the magnetically ordered state. We also note that the possible ∼1.2% iron deficiency in FeSn obtained from the X-ray refinement (Table S2) is not expected to modify the band structure. The infrared absorption measurements.
To determine whether the thermal heat shielding of SEQUOIA had acquired an organic coating, we cut a small piece of the shielding right after the experiment and carried out an infrared absorption spectrum measurement on that piece. The spectrum in Fig. S11 shows absorption features consistent with hydrocarbon contamination of the shielding. The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. The codes used for the DFT+DMFT calculations in this study are available from the corresponding authors upon reasonable request. ACKNOWLEDGEMENT First and foremost, we wish to express our sincere appreciation to the anonymous referees who reviewed this paper, particularly referee 2. In the original draft of the paper, we only had data for FeSn; it was the comment of referee 2 that inspired us to carry out measurements on CoSn. In the present study, the color bars represent the vanadium-standard-normalized absolute magnetic excitation intensity in units of mbarn meV^−1 per formula unit, unless otherwise specified. The calculated spin-wave intensity in (c,e,g) is in absolute units assuming S = 1 in the SpinW+Horace program [49]. The error bars in (h) represent statistical errors of 1 standard deviation.
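The 2R_0 error-bar recipe described in the fitting methods above is simple to automate; the sketch below assumes a quadratic residual surface purely for illustration (the paper's actual residual surface comes from the SpinW fits and is not reproduced here).

```python
from scipy.optimize import brentq

def error_bar(sq_error, j_best, search_width):
    """dJ such that sq_error(j_best + dJ) = 2 * sq_error(j_best),
    i.e. the 2*R0 criterion described in the fitting methods."""
    r0 = sq_error(j_best)
    f = lambda j: sq_error(j) - 2.0 * r0
    j_prime = brentq(f, j_best, j_best + search_width)  # scan above the best fit
    return abs(j_prime - j_best)

# Illustrative residual surface: parabola with minimum R0 = 1 at J0 = 10 meV.
sq_error = lambda j: 1.0 + 0.25 * (j - 10.0) ** 2
print(error_bar(sq_error, j_best=10.0, search_width=20.0))  # dJ = 2.0 meV
```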
2021-03-25T01:15:43.923Z
2021-03-23T00:00:00.000
{ "year": 2021, "sha1": "579f3c032a315a63afd8430c447cd0291cb1a768", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42005-021-00736-8.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "579f3c032a315a63afd8430c447cd0291cb1a768", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
222357500
pes2o/s2orc
v3-fos-license
COMPARISON BETWEEN THE EFFECTS OF THYMOQUINONE OBTAINED FROM THE SEEDS OF NIGELLA SATIVA AND VERAPAMIL ON THE VOLUME AND ACIDITY OF GASTRIC ACID SECRETION Background and Objectives: The overproduction of gastric acid results in peptic ulcer. This study was done to compare the effects of thymoquinone and verapamil on the volume and acidity of carbachol-induced gastric secretion. Methods: Twenty-four rabbits weighing 1-1.5 kg were used. The rabbits were kept fasting for 48 hours. After fasting, the pylorus of each rabbit was ligated. Thymoquinone 5 mg/kg, carbachol 600 µg/kg and verapamil 10 mg/kg body weight were administered intraperitoneally. The pylorus ligation method was used to obtain gastric contents, and the titration method was used to determine acidity. Results: Verapamil has proved very effective for the treatment of many diseases; it inhibits the release of histamine, acetylcholine and gastrin, and has also been shown to reduce the secretion of gastric acid. It was found that thymoquinone reduced the volume and the free and total acidity of gastric secretion; these reductions were statistically highly significant when compared with carbachol (P=0.000), but when the results of thymoquinone were compared with those of verapamil, the difference was non-significant. Conclusions: It was concluded that thymoquinone can be used for the treatment of peptic ulcer and all other diseases that are caused by increased gastric acidity, such as dyspepsia, gastritis and reflux esophagitis. INTRODUCTION In clinical practice, peptic ulcer is one of the most common medical complaints. In the majority of patients, peptic ulcer is caused by elevated acid production from the gastric mucosa. In patients who are achlorhydric, ulcers are not found. Ulcers mostly occur in Zollinger-Ellison (Z.E.) syndrome, which is caused by excess acid secretion 1. The goal of treatment of peptic ulcer is to inhibit the overproduction of acid. Nigella sativa is a part of the botanical family Ranunculaceae. It is commonly cultivated in Europe, the Middle East and Western Asia. All over the world, it is called by many names, such as habbat al-baraka or kali jeera. In the light of the Hadith "Use this Black seed, it has a cure for every disease except death" (Sahih Bukhari), Nigella sativa (N. sativa) seeds are widely used in many Arab countries, such as Saudi Arabia and the wider Middle East, as a natural remedy for many diseases. Nigella sativa seeds contain many active ingredients, the most important of which is thymoquinone (nigellone) 2. Because of the large variety of uses of Nigella sativa, many researchers have conducted various in vitro and in vivo studies on laboratory animals and human beings in order to determine its pharmacological activities. These include anti-inflammatory 3, analgesic and anti-pyretic 4, antimicrobial 5-8, antifungal 9-11, hypoglycemic 12 and anti-tuberculosis 13 activities. Thymoquinone administration can prevent and improve murine DSS (dextran sodium sulfate)-induced colitis. It could also serve as an effective therapeutic agent for the treatment of patients with inflammatory bowel disease, and it helps to prevent colitis and diarrhea in patients 14. MATERIALS AND METHODS The active ingredient thymoquinone was obtained from Amidis chemical company PVT limited, China. Thymoquinone was extracted from the Nigella sativa plant by the company itself. Source of chemical: The chemical verapamil was purchased from VPL chemicals, private limited India. Method Twenty-four rabbits of local breed were selected for this study.
The animals were healthy and of both sexes. All the chemicals were administered through the intraperitoneal route according to the body weight of the animals. The animals were not given any food for 48 hours; only water was freely available before they were administered the drugs. The animals were separated into 3 groups of 8 animals each. Group 1 was administered carbachol at a dose of 600 µg/kg body weight, group 2 was administered thymoquinone at a dose of 5 mg/kg, and group 3 was given verapamil at a dose of 10 mg/kg body weight. After 15 minutes, carbachol at a dose of 600 µg/kg body weight was administered to groups 2 and 3. The gastric juice was obtained from all rabbits by the method of pylorus ligation, as explained by Vischer et al. 15. Anesthesia was given with ether in a large glass desiccator, and all the animals were weighed. The abdomen was opened by a midline incision and the pylorus was ligated with a silk suture. Suture clamps were used to close the abdominal wall. The inhibitory effect of the drugs could be better assessed against the stimulation produced by carbachol. When the anesthesia was withdrawn, the animals regained consciousness. After a period of 4 hours, all the animals were sacrificed, their abdomens were reopened and the cardiac end of the stomach was ligated. The stomach was cut at both ends outside the knots, and an incision was made in the stomach along the greater curvature. The gastric juice was finally collected and titrated against 0.1 N NaOH solution by the procedure explained by Varley 16. This procedure has been performed since 1954 for the calculation of all forms of acidity, i.e., free, combined and total. In this process, one ml of centrifuged gastric juice is titrated against 0.1 N NaOH using Topfer's reagent as an indicator for the calculation of free acidity and 1% phenolphthalein as an indicator for combined acidity. The acidity of the gastric juice was determined using the normality equation N1V1 = N2V2, where N1 is the normality of the unknown acid/base, N2 is the normality of the known acid/base, V1 is the volume of the unknown acid/base and V2 is the volume of the known acid/base. Total acidity was calculated as the sum of the two titrations. The data obtained were subjected to statistical analysis; the data were entered into IBM SPSS Version 19, and a P value of <0.05 was considered statistically significant. RESULTS The volume, free acidity and total acidity in group 1 were 28.125±2.031 ml, 6.225±1.188 m.Eq./dl and 7.650±1.243 m.Eq./dl, respectively. Similarly, the mean values for volume, free acidity and total acidity of gastric secretion in group 2 (thymoquinone + carbachol treated group) were 13.625±1.355 ml, 2.412±0.626 m.Eq./dl and 3.750±0.833 m.Eq./dl, respectively. There was a reduction in all the parameters, which was highly significant when compared with the carbachol group (P=0.000). All these changes are shown in Table 1. Likewise, the mean values for volume, free acidity and total acidity of gastric secretion in group 3 (verapamil + carbachol treated group) were 13.212±1.501 ml, 2.200±0.575 m.Eq./dl and 3.575±0.497 m.Eq./dl, respectively. There was a reduction in all the parameters, which was highly significant when compared with the carbachol group (P=0.000). All these changes are shown in Table 1. Similarly, when we compared the mean values for volume, free and total acidity of the verapamil + carbachol group with those of the thymoquinone + carbachol group, the p values were 0.392, 0.204 and 0.412, respectively; all these differences were non-significant, as shown in Table 2. A worked example of the acidity calculation is sketched below.
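To make the normality calculation concrete, the following minimal Python sketch applies N1V1 = N2V2 to a 1 ml gastric juice sample titrated against 0.1 N NaOH, as in the Methods; the titrant volumes in the example are hypothetical, since per-animal volumes are not reported in the paper.

```python
def acidity_meq_per_dl(naoh_normality, naoh_volume_ml, sample_volume_ml=1.0):
    """Acidity via the normality relation N1V1 = N2V2.

    N1 = N2*V2/V1 is in eq/L; since 1 eq/L = 1000 mEq/L = 100 mEq/dl,
    the result is N1 * 100 in mEq/dl."""
    n1 = naoh_normality * naoh_volume_ml / sample_volume_ml  # eq/L
    return n1 * 100.0

# Hypothetical titration volumes for a 1 ml sample of gastric juice:
free_acidity = acidity_meq_per_dl(0.1, 0.24)      # Topfer endpoint -> 2.4
combined_acidity = acidity_meq_per_dl(0.1, 0.13)  # phenolphthalein -> 1.3
total_acidity = free_acidity + combined_acidity   # sum of the two titrations
print(free_acidity, combined_acidity, total_acidity)
```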
DISCUSSION Nigella sativa seed and its constituents are widely utilized as a natural cure for many diseases, and a great deal of scientific work has been conducted to determine the pharmacological activities of this plant. Most of the research has established its importance in traditional medicine as an analgesic, anti-inflammatory, antioxidant, anti-cancer, anti-microbial, anti-parasitic and antihypertensive agent and as an immune booster. The main neurotransmitters/hormones that directly increase secretion by the gastric glands are acetylcholine, gastrin and histamine 17. The stimulatory action of these neurotransmitters depends on the influx of Ca ions 18. Intravenous administration of calcium, which results in hypercalcemia, increases gastric volume and acidity 19. An in vitro study showed that Nigella sativa successfully stopped the release of histamine from mast cells by lowering intracellular calcium and blocking protein kinase C. In a study of hypertensive rats, it was found that Nigella sativa extract produced a significant hypotensive effect when compared with that of 0.5 mg/kg/day of the oral calcium channel blocker nifedipine 20. Nigella sativa antagonized methacholine-induced contractions of the isolated guinea-pig tracheal chain 21. This study demonstrated the anticholinergic effect of Nigella sativa, which could be the reason for its inhibition of gastric acid secretion. Further, the present study shows a significant response in agreement with other investigators, who found that the calcium channel blocker verapamil significantly reduces gastric acid secretion 22,23. Calcium channel blockers block calcium influx, which causes a reduction in the volume and acidity of gastric juice. The lipoxygenase pathway, a step in the metabolism of arachidonic acid, is also blocked by calcium channel blockers. As a result, the harmful leukotrienes are not formed and all the arachidonic acid is metabolized by the cyclooxygenase pathway. This causes the production of prostaglandin, which couples with the inhibitory guanine nucleotide-binding protein (Gi protein), blocks adenylyl cyclase and thus decreases gastric secretion 24. Release of histamine from mast cells is critically dependent on external calcium ions, so blocking calcium influx can block the release of histamine. Histamine is an important factor in increasing gastric acid secretion 23. In this study, thymoquinone, which is obtained from Nigella sativa, showed a marked reduction in gastric secretion and acidity. This finding can be correlated with the study done by El-Dakhakhani et al. 25, who observed the effect of Nigella sativa oil on HCl secretion and ethanol-induced ulcers in rats. The gastroprotective and anti-secretory effects of N. sativa seed powder significantly reduced gastric secretion volume, pH and gastric acid output. The acid-reducing effect of thymoquinone was explained by its antihistaminic effect: histamine is the main stimulus for gastric hydrochloric acid secretion, and its excess leads to peptic ulcer, gastritis, etc. Thymoquinone inhibits histamine and also increases gastric mucus secretion, which is protective to the stomach 26, producing a significant increase in mucin content and glutathione level as well as a significant decrease in mucosal histamine content and ulcer formation.
From the above discussion it is clear that thymoquinone obtained from the seeds of Nigella sativa can significantly reduce gastric acidity by blocking the release of histamine, which acts on histamine H2 receptors. It also has calcium channel-blocking activity, which reduces the release of acetylcholine and histamine. CONCLUSION It is concluded that thymoquinone obtained from Nigella sativa seeds may be effectively used in patients with peptic ulcer disease and other medical conditions caused by excess secretion of HCl. For the evaluation of these effects, further experiments should be done in human subjects. The current study concluded that thymoquinone significantly decreased carbachol-stimulated acid secretion; further work on the mechanism of action is suggested.
2020-10-15T05:16:26.592Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "20764b25e89e55b02aac92d6ba6de316b894caed", "oa_license": "CCBY", "oa_url": "https://www.ujpr.org/index.php/journal/article/download/453/802", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "20764b25e89e55b02aac92d6ba6de316b894caed", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Biology" ], "extfieldsofstudy": [] }
76665444
pes2o/s2orc
v3-fos-license
A Logic-Gated Modular Nanovesicle Enables Programmable Drug Release for On-Demand Chemotherapy It remains a major challenge to achieve precise on-demand drug release. Here, we developed a modular nanomedicine integrated with a logic-gated system enabling programmable drug release for on-demand chemotherapy. Methods: We employed two different logical AND gates consisting of four interrelated moieties to construct the nanovesicles, denoted as v-A-CED2, containing oxidation-responsive nanovesicles (v), radical generators (A), and Edman-linker-conjugated prodrugs (CED2). The first AND logic gate takes mild hyperthermia (I) and acidic pH (II) as its inputs and executes NIR-laser-triggered prodrug-to-drug transformation through Edman degradation. Meanwhile, the mild hyperthermia effect triggers alkyl radical generation (III), which contributes to internal oxidation and degradation of the nanovesicles (IV). The second AND logic gate is therefore formed by the combination of I-IV to achieve programmable drug release by a single stimulus input, the NIR laser. The biodistribution of the nanovesicles was monitored by positron emission tomography (PET), photoacoustic, and fluorescence imaging. Results: The developed modular nanovesicles exhibited high tumor accumulation and effective anticancer effects both in vitro and in vivo. Conclusions: This study provides a novel paradigm of a logic-gated programmable drug release system based on a modular nanovesicle, which may shed light on the innovation of anticancer agents and strategies. Introduction Because of the numerous physical and chemical characteristics that can be engineered into nanomaterials, the field of nanomedicine has exploited these materials for sensing environmental parameters, providing images of human diseases, and providing drug delivery [1-3]. As of 2016, more than 50 nanodrugs had been approved by the U.S. Food and Drug Administration (FDA) and another 77 were undergoing clinical trials [4]. Specifically for cancer therapy, nanomedicine has shown appreciable advantages over traditional medicine, for example, through prolonging circulation time and/or shielding systemically toxic drugs by integration with stimuli-responsive release strategies [5-9]. Despite the potential, many of these strategies have not resulted in prolonged patient survival. This lack of enhanced efficacy is thought to be due, in part, to nonspecific drug release occurring in healthy tissues resulting in systemic toxicity [10,11]. Therefore, there is a need to develop more advanced strategies for engineering therapeutic nanoparticles (NPs) where drug release is explicitly controlled to maximize drug utilization and minimize systemic side effects [12]. For purposes of controlling drug release, stimuli-responsive NPs have been engineered to recognize tumor-specific internal stimuli (e.g., pH, redox state, and enzymes) and external stimuli (e.g., heat, magnetic field, light, and ultrasound) [13-18]. These strategies enable tailored drug release profiles in a spatiotemporally controllable manner [19-23]. However, a major caveat is that although materials sensitive to a single factor can facilitate therapeutic delivery to tumor sites, individual biomarkers are rarely unique to all tumor sites.
For example, acidic pH and reducing conditions are also shared by the stomach [24] and the intracellular milieu [11] of living subjects, respectively, leading to suboptimal selectivity for targeted drug delivery and drug release. To improve the site specificity of drug release, logic-gated systems that respond only when presented with multiple inducements provide a promising solution [25-30]. Although still in their infancy, logic-gated systems have been emerging as a useful platform affording programmable drug release for cancer therapy [31-36]. Polymeric nanoparticles [37], including micelles [38], nanogels [39], and vesicles, have proved to be a viable nanotechnology platform for effective drug delivery, as demonstrated by their use in a series of pharmaceutical products for more than 40 years [40]. In particular, polymeric vesicles (e.g., polymersomes) have been extensively engineered with unique properties enabling simultaneous loading of both hydrophilic and hydrophobic molecules [41,42]. Compared to the widely investigated liposomes, polymersomes have increased mechanical robustness [43,44]. Therefore, we anticipate that polymeric nanovesicles could be the preferred platform for integrating well-ordered logic-gated nanomedicine. Here we present a logic-gated drug release nanoformulation that integrates both external and internal stimuli for controlled drug release and subsequent cancer treatment (Figure 1). Controlled drug release was accomplished by incorporation of a heat-sensitive prodrug version of doxorubicin (DOX) and a vesicle structure sensitive to degradation by reactive oxygen species (ROS). The nanovesicles were manufactured by self-assembly of the ROS-responsive amphiphilic block copolymer poly(propylene sulfide)-poly(ethylene glycol) (PPS-PEG) [42,45]. The hydrophobic DOX prodrug, denoted as CED2, was prepared by conjugation of two DOX molecules onto the dye croconaine (CR780) using an Edman linker [46,47]. The prodrug and the hydrophilic free radical precursor, 2,2'-azobis[2-(2-imidazolin-2-yl)propane] dihydrochloride (AIPH) [48], were loaded into the membrane and the inner space of the nanovesicles (denoted as v-A-CED2), respectively. Drug release is controlled by two logical AND gates constructed from four interrelated units. The first AND gate requires mild hyperthermia (I, generated by the photothermal agent CR780 under external 808 nm laser stimulation) and acidic pH (II, provided by the tumor microenvironment), which converts the prodrug into DOX by Edman degradation. The second logical gate requires the heat generated by laser irradiation to generate radicals from the decomposition of AIPH (III) and subsequent radical-induced oxidation of the PPS-PEG that leads to nanovesicle degradation (IV) and release of drug into the tumor. In other words, upon NIR irradiation, the photothermal agent CR780 in the prodrug CED2 generates heat and elevates the temperature, which then activates AIPH to generate free radicals and drive radical-induced oxidation of the PPS-PEG. Overall, these processes lead to nanovesicle degradation and release of the prodrug into the tumor cells. Finally, DOX is released from the prodrug under the acidic tumor microenvironment and mild hyperthermia through Edman degradation.
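The gating logic described above can be summarized as two boolean AND conditions driven by a single NIR input; the minimal Python sketch below does exactly that. The numeric thresholds (42 °C for mild hyperthermia, pH ≤ 6.5 for tumor acidity) are illustrative assumptions, not parameters taken from the paper.

```python
def edman_gate(temp_c, ph):
    """First AND gate: Edman degradation of the prodrug requires
    mild hyperthermia (I) AND acidic pH (II)."""
    return temp_c >= 42.0 and ph <= 6.5  # illustrative thresholds

def vesicle_gate(temp_c):
    """Second AND gate: hyperthermia decomposes AIPH into radicals (III),
    and the radicals oxidize and degrade the PPS-PEG vesicle (IV)."""
    radicals = temp_c >= 42.0  # III: AIPH decomposition
    degraded = radicals        # IV: oxidation of PPS follows from III here
    return radicals and degraded

def dox_released(nir_on, tissue_ph, baseline_temp_c=37.0):
    """Single NIR input: the laser heats CR780, driving both gates."""
    temp_c = 42.0 if nir_on else baseline_temp_c
    return vesicle_gate(temp_c) and edman_gate(temp_c, tissue_ph)

print(dox_released(True, 6.5))   # True: tumor site under NIR irradiation
print(dox_released(False, 6.5))  # False: no laser, no hyperthermia
print(dox_released(True, 7.4))   # False: normal-tissue pH blocks gate I
```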
Measurement of photothermal effect of CED2 Temperature increase was evaluated by (a) irradiating 200 μL of solution (10% DMSO in PBS) containing various concentrations of CED2 with an 808 nm NIR laser at 0.5 W/cm^2 for 5 min, or (b) exposing 20 μM CED2 to different power densities (0.1-2 W/cm^2) of the NIR laser. Photothermal stability was evaluated by exposing 20 μM CED2 to an NIR laser (0.5 W/cm^2) for five cycles. The NIR laser power density was determined by a laser energy meter (Coherent Inc., CA, USA). An SC300 infrared camera was employed to record the real-time temperatures of the solutions. Edman degradation behavior of CED2 and CD2 Solutions of CED2 (1 mg/mL) were prepared at different pH values (5.0, 6.5, 7.4) in phosphate buffer (10% DMSO). The resulting solutions were heated at different temperatures (37, 42 and 50 °C) for 10 min. After cooling to room temperature, the solution was filtered and the filtrate analyzed by HPLC (A: 50 mM ammonium acetate buffer; B: CH3CN) at a flow rate of 1 mL/min according to the following gradient program: 0-2 min, 5% of B; 2-15 min, 5%-80% of B; 15-20 min, 80% of B; 20-25 min, 80%-5% of B. The peak area for DOX divided by the sum of the peak areas for DOX and CED2, multiplied by 100, is reported as the degradation percentage. For comparison, CD2 was also dissolved in phosphate buffers of different pH (5.0, 6.5 and 7.4) (10% DMSO), but all samples were heated at 50 °C for 10 min and then analyzed by HPLC. Formation of PPS-PEG based vesicles The formation of PPS-PEG based vesicles followed a general procedure of solvent exchange or the thin-film method. For the preparation of PPS-PEG-only vesicles (denoted as v), amphiphilic PPS-PEG copolymers (5 mg) were dissolved in chloroform (3 mL), the chloroform was allowed to evaporate onto a surface, and the dry samples were re-dispersed in distilled water (1 mL) by subsequent hydration and sonication (2 min). Drug loading and release Initially, CED2 or CD2 was dissolved in DMF (1 mg/mL) and AIPH was dissolved in distilled water (1 mg/mL) as stock solutions. During the self-assembly of the PPS-PEG vesicles, these stock solutions were used to prepare the different vesicle formulations. The v-A, v-A-CED2, v-A-CD2, v-CED2, and v-CD2 samples were purified with a centrifugal filter (Amicon Ultra, Millipore); the purification was repeated three times. The supernatant was collected and the concentration of residual non-encapsulated CED2 or CD2 was measured by UV absorption, according to the UV absorption standard curve of DOX at 480 nm. The drug loading content (DLC) and loading efficiency (LE) were calculated according to the following equations: mass of encapsulated drug = mass of fed drug − mass of residual drug in supernatant; DLC (%) = mass of encapsulated drug / mass of carriers and encapsulated drug × 100%; LE (%) = weight of encapsulated drug / weight of fed drug × 100%. Drug release profiles of DOX from the PPS-PEG based vesicles were measured either with or without NIR laser irradiation (0.5 W/cm^2 for 5 min) in solutions of different pH (5.0 or 7.4). After the irradiation, the sample was centrifuged to precipitate the vesicles and the released DOX concentration was measured at 480 nm. Figure 1. Schematic illustration of logic-gated drug release from the modular nanovesicles (v-A-CED2) for on-demand chemotherapy.
A) Four units: mild hyperthermia (I, generated by the photothermal agent CR780), acidic pH (II, tumor microenvironment), free radicals (III, from AIPH decomposition), and nanovesicle degradation (IV). The first AND logic (I, II) leads to prodrug-to-drug transformation through Edman degradation. The cascade of units III and IV causes oxidation and degradation of the nanovesicles. The second AND logic combining I-IV leads to drug release from the nanovesicles. B) Self-assembly of the amphiphilic polymer PPS-PEG, the hydrophobic prodrug CED2, and the hydrophilic component AIPH, forming v-A-CED2. Programmed drug release is achieved through a logic-gated mechanism driven by the external stimulus of an NIR laser. C) Schematic of NIR-triggered drug release in a cell within the tumor microenvironment. The released DOX can enter the cell nucleus and trigger cell death. Generation of ABTS+• free radicals The generation of ABTS+• was performed by taking advantage of the reaction between ABTS aqueous solution (2 mg/mL, 0.2 mL) and v-A aqueous solution (2 mg/mL, 0.2 mL). The mixture was protected from light irradiation and the reaction allowed to proceed for 0.5 h at 37, 42 or 50 °C. Then, the absorbance of the diluted ABTS+• solution (in DI water) in the range from 400 nm to 950 nm was recorded using a UV-Vis spectrometer. Cell viability assay U87MG cells were seeded in a 96-well plate at a density of 1×10^4 cells/well. After 24 h incubation at 37 °C, the various nanoparticle formulations were added to each well at different concentrations (n = 3). NIR laser irradiation was conducted at 0.5 W/cm^2 for 5 min (reaching about 42 °C) or at a higher optical power density (1 W/cm^2 for 5 min, reaching about 50 °C). After 24 h, cell viability was evaluated using the Cell Counting Kit-8 (CCK-8) method and calculated as a percentage relative to the control (with or without NIR laser, depending on the experiment). Reactive oxygen species detection in vitro U87MG cells were seeded at a density of 5×10^5 per well in 12-well plates. After incubation for 24 h, the culture medium was replaced with 1 mL of fresh medium. Freshly prepared carboxy-H2DCFDA was added to each well as loading solution at a final concentration of 2 µM and incubated for 20 min under cell culture conditions. After washing three times with PBS, cells were treated with the various nanoparticle formulations with or without laser irradiation (808 nm laser, 0.5 W/cm^2, 5 min) and allowed further incubation. Subsequently, the cells were washed with PBS and collected for flow cytometry. Green fluorescence was recorded on the FL1 channel. All experiments were performed in triplicate and independently, with a total of 10^4 cells analyzed in each experiment. Apoptosis assessment in vitro Apoptosis rates were studied using R-phycoerythrin (R-PE)-conjugated annexin V and SYTOX green (Thermo Fisher Scientific) by flow cytometry following the manufacturer's instructions. Briefly, U87MG cells were stained with annexin V conjugated to R-PE and SYTOX green for 15 min at 37 °C in a CO2 incubator, 2 h after treatment with different formulations and NIR laser irradiation (0.5 W/cm^2 for 5 min). SYTOX green fluorescence versus R-PE fluorescence was plotted and analyzed using CellQuest Pro software (BD Biosciences). All experiments were performed in triplicate and independently, with a total of 10^4 cells analyzed in each experiment.
Radiolabeling and in vitro stability studies 64CuCl2 (222 MBq) was diluted in 2 mL of 0.1 M sodium acetate buffer (pH 5.5) and mixed with 50 μL of CED2 (1 mg/mL in aqueous solution with 10% DMSO). The reaction was incubated at 50 °C for 15 min and the labeling yield was evaluated by iTLC. For the preparation of 64Cu-v-A-CED2, the method was the same as the drug loading method. To test the stability of 64Cu-v-A-CED2 in vitro, it was incubated in PBS and mouse serum at 37 °C for 24 h. MicroPET imaging About 80 µCi of 64Cu-v-A-CED2 was intravenously injected into U87MG tumor-bearing mice, which were then scanned at various time points with a microPET scanner (Siemens Inveon). The tumor uptake was calculated from 3-dimensional regions of interest (ROIs) drawn on the tumor area in decay-corrected PET images. In vivo photoacoustic and fluorescence imaging All animal experiments were performed under a protocol approved by the National Institutes of Health Clinical Center Animal Care and Use Committee (NIH CC/ACUC). The tumor model was established by subcutaneously injecting U87MG cells (2×10^6) into the right back flank of mice (athymic nude, 5 weeks old). When the tumor size reached ∼100 mm^3, 100 µL of CED2, v-A-CED2 or v-A-CD2 (0.5 mg/mL CED2 or CD2 content) was intravenously injected into the tumor-bearing mice (n = 3). Time points included one recording before injection (pre) and recordings at 1 h, 4 h, 24 h, and 48 h after injection. The PA signals were recorded with a VisualSonics Vevo 2100 LAZR system at a wavelength of 780 nm. The quantified PA intensities were obtained from regions of interest (ROIs). In vivo thermal imaging When the tumor size reached ∼60 mm^3, 100 μL of CED2, v-A-CED2 or v-A-CD2 (corresponding to 100 µM CR780) was intravenously injected into the tumor-bearing mice. Thermal imaging was recorded by an SC300 infrared camera (FLIR) while the tumors were exposed to an 808 nm laser (LASERGLOW Technologies) at a power density of 0.5 W/cm^2. In vivo tumor therapy study After the tumor size reached around 60 mm^3, mice were randomly divided into 7 groups (n = 5). The mice were intravenously injected with different formulations, including v-A-CD2, v-A-CED2, v-A, v-CED2, free DOX, and PBS (2 groups), at a normalized dose of 4.0 mg/kg DOX (or an equivalent amount of PPS-PEG where not applicable) per mouse. After 24 h, 6 of the 7 groups were treated with NIR laser irradiation (0.5 W/cm^2, 4 min): v-A-CD2 + L, v-A-CED2 + L, v-A + L, v-CED2 + L, free DOX + L and PBS + L. The tumor size and body weight were recorded every two days after treatment until 14 days post-irradiation. Mice were euthanized when any tumor dimension approached 2 cm or when body weight loss exceeded 20%. The tumor volumes were calculated by the equation V = width^2 × length/2. The survival rates were recorded until 40 days post-irradiation. Results were analyzed using GraphPad Prism 5 (La Jolla, CA). Results and Discussion Rational design and preparation of prodrug and nanovesicle to achieve a logic-gated drug release system To develop this nanomaterial, we first needed to prepare a pH- and temperature-labile prodrug. We selected CR780 as a linker between two molecules of DOX because it has photothermal properties that allow external laser irradiation to heat tissue. CR780 was conjugated to two molecules of lysine via the epsilon-amine.
The alpha-amine was converted to a phenylthiourea and the lysine carboxylic acid was conjugated with DOX (Scheme S1A) to give CED2. We refer to the phenylthiourea moiety as an Edman linker because it can be degraded in response to the dual stimuli of elevated temperature and acidic pH [49,50]. For comparison purposes, CR780-conjugated DOX (CD2), without the Edman-degradable structure, was also synthesized by a direct one-step amide condensation (Scheme S1B). Chemical analyses of these compounds are presented in Figures S1-S9. CED2 and CD2 showed absorption peaks at 480 nm similar to that of free DOX. However, the peak absorption of both CED2 and CD2 is slightly red-shifted in the NIR region compared with that of unmodified CR780 (Figure 2A). Similarly, a slight blue shift of the fluorescence emission was found after conjugation (Figure S10), with negligible change in NIR fluorescence intensity after modification (Figure S10). We also observed that CED2 exhibited excellent photothermal stability over at least five cycles of NIR laser irradiation (808 nm, 0.5 W/cm^2) at pH 7.4 (Figure S11A). Meanwhile, the photothermal effect of CED2 demonstrated a good linear dependence on its concentration and on the NIR laser power density (Figure 2B, Figure S11B-C), which allows for temperature-dependent degradation of CED2. Since the Edman degradation depends on both temperature and pH (Figure 2G), we evaluated the Edman degradation efficiency of CED2 at different temperatures (37 °C, mild hyperthermia 42 °C, and hyperthermia 50 °C) and pH values (5.0, 6.5 and 7.4). The degradation after a 10 min incubation, expressed as the percent peak area of DOX at 480 nm, is displayed in Figure 2C. The amount of degradation was linearly correlated with both temperature and pH value. CED2 showed less than 2% degradation under normal physiological conditions (37 °C, pH 7.4), indicating that even if CED2 were released from the nanovesicles in normal tissues, it may not produce severe toxicity. However, the release of DOX reached about 30% at pH 6.5 when treated with mild hyperthermia (42 °C) for 10 min. On the other hand, the control compound CD2 showed little to no degradation even after being treated with the harshest condition (50 °C, pH 5.0) for 10 min (Figure S11D). The degradation products were confirmed as DOX (MW calculated 543.52, found 544.19) and the expected side product croconaine-bis-phenylthiohydantoin (MW calculated 1018.30, found 1019.32) (Figure S12). With the prodrug in hand, we then constructed the nanomaterial to evaluate stimuli-responsive logic-gated drug release. The PPS-PEG copolymers were first synthesized according to literature procedures [45] and then self-assembled into nanovesicles with a size of around 100 nm. The transmission electron microscopy (TEM) image showed that the membrane thickness of the nanovesicles was about 6-8 nm (Figure 3A), and the overall hydrodynamic diameter was around 100 nm from dynamic light scattering (DLS) analysis (Figure S13). During self-assembly of the PPS-PEG copolymers, the hydrophilic molecule AIPH was loaded into the interior cavity and the hydrophobic prodrug CED2 was simultaneously encapsulated into the hydrophobic membrane of the nanovesicles. The obtained v-A-CED2 nanovesicles showed dark contrast on the shell in the TEM image (Figure 3B), further indicating the successful encapsulation of CED2 within the nanovesicles.
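For reference, the two quantitative definitions used here and in the Methods — the HPLC degradation percentage and the drug loading content/efficiency — are implemented below as small Python helpers; the peak areas and masses in the example calls are hypothetical values for illustration, not data from the paper.

```python
def degradation_percent(area_dox, area_ced2):
    """HPLC degradation percentage: DOX peak area over the sum of the
    DOX and CED2 peak areas, times 100 (as defined in the Methods)."""
    return 100.0 * area_dox / (area_dox + area_ced2)

def loading_metrics(mass_fed_mg, mass_residual_mg, mass_carrier_mg):
    """Drug loading content (DLC) and loading efficiency (LE), in percent,
    from the mass-balance definitions given in the Methods."""
    encapsulated = mass_fed_mg - mass_residual_mg
    dlc = 100.0 * encapsulated / (mass_carrier_mg + encapsulated)
    le = 100.0 * encapsulated / mass_fed_mg
    return dlc, le

# Hypothetical illustration: a 30/70 peak-area split, and 4 mg of drug fed
# with 1 mg left in the supernatant after loading into 10 mg of polymer.
print(degradation_percent(30.0, 70.0))  # 30.0 %
print(loading_metrics(4.0, 1.0, 10.0))  # (~23.1 % DLC, 75.0 % LE)
```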
For comparison purposes, different formulations, including AIPH-loaded nanovesicles (v-A), CED2-encapsulated nanovesicles (v-CED2), and AIPH and CD2 co-loaded nanovesicles (v-A-CD2), were also prepared (Figure S14). We then studied the drug loading content (DLC) of the nanovesicles using the weight fraction of CED2. In a typical protocol, 10 mg of PPS-PEG polymer and 4 mg of CED2 were used as starting materials for self-assembly, which yielded a DLC of about 22.4% with a loading efficiency of about 83.7% in the v-CED2 samples. The remarkably high DLC in the v-CED2 nanovesicles could be attributed to the hydrophobicity of CED2, evidenced by the significant drop in solubility compared with CR780 or DOX alone. Additionally, v-A-CED2 nanovesicles were obtained by a procedure similar to that for v-CED2 but in the presence of 0.8 mg of AIPH, which resulted in an AIPH loading content of about 3.7% and a loading efficiency of about 46.2%. The other nanovesicles, including v-A, v-CD2, and v-A-CD2, were prepared according to a similar protocol but using different starting materials. Since drug release requires decomposition of the nanomaterial, we evaluated the decomposition of AIPH and v-A and measured the formation of free radicals as a function of temperature by measuring the absorption of 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) free radicals (ABTS+•) at 500-950 nm. As shown in Figure S15, the generation of ABTS+• was temperature-dependent when ABTS was incubated with AIPH or v-A at 37, 42, and 50 °C. The amount of ABTS+• at 50 °C was considerably higher than that at 42 and 37 °C, indicating that many more free radicals were generated by AIPH at higher temperature. Notably, the generation of ABTS+• from v-A was significantly lower than that from free AIPH, which can be attributed to the presence of the PPS vesicles that consume free radicals. The TEM image of v-A after incubation at 50 °C for 0.5 h showed significant degradation of the particles (Figure 3C), which was further confirmed by DLS measurement after NIR laser irradiation (Figure S16). Evaluation of the logic-gated stimuli-responsive drug release of the nanovesicles in vitro After validating the degradation behavior of CED2 and the nanovesicles, we sought to characterize the logic-gated stimuli-responsive drug release of the nanovesicles. Four kinds of nanovesicles (v-A-CED2, v-A-CD2, v-CED2, and v-CD2) were treated with the different possible combinations of NIR laser and pH, and the cumulative drug release was measured using the DOX fluorescence of the supernatant after ultracentrifugation (Figure 3D). When these vesicles were incubated in an acidic solution (pH 5.0) and treated with the NIR laser until reaching 42 °C (0.5 W/cm^2, 5 min), v-A-CED2 and v-A-CD2 showed release of DOX within the first 1 h after NIR laser treatment, up to 25% and 10%, respectively (Figure 3D). Notably, drug release from v-A-CED2 gradually increased to 38% at 48 h after laser treatment, while the late-time drug release of v-A-CD2 was minimal (less than 20%). In contrast, v-CED2 and v-CD2 showed little drug release in response to NIR laser irradiation. The drug release from v-A-CED2 and v-A-CD2 at pH 7.4 may be attributed to free CED2 and CD2 released from the nanovesicles, respectively (Figure S17A). A negligible difference was found between the conditions at pH 5.0 and 7.4 for samples without laser irradiation (Figure S17B and C).
The prodrug CED2 and the vesicles released very little DOX (< 5%) in vitro when incubated in mouse serum at 37 °C for 48 h (Figure S18). These results support the conclusion that drug release from v-A-CED2 proceeds through a logic-gated sequence. At the first AND gate, the simultaneous presence of heating caused by NIR irradiation and low pH unmasked the drug. At the second AND gate, the free radicals released by NIR-driven decomposition of AIPH, together with the oxidation-sensitive polymer vesicle, resulted in the release of the anticancer drug into the tumor cell environment. Using confocal microscopy, we observed the localization of DOX within U87MG cells following treatment with the various vesicle formulations and laser irradiation (Figure 3E). The confocal images showed that cells treated with v-A-CED2 + L exhibited obvious accumulation of free DOX in the nucleus, owing to logic-gated release following Edman degradation and vesicle rupture after NIR irradiation. All control vesicles showed fluorescence signal emitted by CED2 (group v-CED2 + L) or CD2 (group v-A-CD2 + L) in the cytosol of the cells. Since our second logic AND gate requires the generation of free radicals to break down the vesicles and release drug, we evaluated the radical-generation properties of the particles. As expected, NIR laser irradiation of v-A-CED2 produced the highest ROS level in cells (Figure 4A and 4B). We then quantified the cytotoxicity of the different formulations, with or without NIR laser irradiation, against U87MG cells using the Cell Counting Kit-8 (CCK-8) assay. Cells were treated with v-A, v-A-CD2, v-CED2, v-A-CED2 or free DOX at various concentrations normalized to the amount of DOX (or polymer) and with NIR irradiation (0.5 W/cm^2, 5 min, T = about 42 °C), and 24 hours later were assayed for cell viability. v-A-CED2 exhibited cytotoxicity comparable to free DOX but much higher than the control groups (Figure 4C). NIR irradiation did not cause any additional cytotoxicity under this mild hyperthermia. However, when treated with NIR irradiation at a higher optical power density (1 W/cm^2 for 5 min), reaching about 50 °C, the v-A-CD2 + L and v-CED2 + L groups showed increased cytotoxicity that can be ascribed to the chemo-photothermal combination therapeutic effect (Figure S19A). Additionally, these nanovesicles, as well as the free prodrug CED2 without NIR laser irradiation, exhibited little to no cytotoxicity after 24 h of incubation (Figure S19B). These results were confirmed by calcein AM and propidium iodide (PI) staining assays (Figure S20). Furthermore, we used annexin V R-PE/SYTOX green staining to evaluate the apoptotic mechanism of the cell-killing effect of the different formulations (Figure 4D). The results illustrated that cells treated with nanovesicles and laser irradiation underwent both apoptosis and necrosis, where the v-A-CED2 + L group showed a significantly higher proportion of apoptotic (27.3%) and necrotic (47.5%) cell death (74.8% in total) under NIR laser irradiation than any of the control groups (Figure S21). Logic-gated drug release for on-demand chemotherapy guided by multimodality imaging Encouraged by the promising in vitro cytotoxicity, we set out to explore in vivo applications. It has been demonstrated that croconaine dyes can bind divalent metal ions at the carbonyl oxygens [51]; thus, we sought to radiolabel CED2 with 64Cu, which enables quantitative pharmaco-imaging to monitor drug distribution in vivo by PET imaging.
The radiochemical yield of 64Cu-CED2 was about 63%, as evaluated by instant thin-layer chromatography (iTLC) (Figure S22A). 64Cu-v-A-CED2 was then obtained by self-assembly with the PPS-PEG copolymers and AIPH; it was very stable in PBS and mouse serum (Figure S22B-22D), making it suitable for in vivo PET imaging. As shown in Figure 5A, the decay-corrected PET images displayed high tumor-to-normal tissue contrast. The quantification illustrated that the tumor accumulation of 64Cu-v-A-CED2 reached a peak (about 8% ID/g) at 24 h post-injection (Figure 5B). The concentration of 64Cu-v-A-CED2 in blood was obtained by quantifying the left ventricle in the PET images, which illustrated that the radiotracer had a relatively long blood half-life (Figure S23). After the imaging, tumors and major organs were harvested for a biodistribution study by gamma counting (Figure S24). We also evaluated photoacoustic and fluorescence imaging of v-A-CED2, v-A-CD2, and free CED2 in a subcutaneous mouse tumor model as a measure of tumor uptake. As shown in Figure 5C-5F and Figure S25, the nanovesicles (v-A-CED2) showed considerably higher tumor accumulation compared with CED2, reflecting the good passive tumor-targeting effect of the nanovesicles. In subsequent in vivo studies, our logic-gated construction was evaluated in a xenografted mouse tumor model. Using a normalized dosage of DOX (4.0 mg/kg) (n = 5/group), the tumor temperature increase was monitored by an infrared camera during NIR irradiation (0.5 W/cm^2, 4 min) 24 hours post-injection of the various formulations. As expected, the temperature in the tumors of mice treated with v-A-CED2, v-CED2 or v-A-CD2 rapidly increased to around 42 °C within 2 min. In contrast, the other groups (PBS, v-A, and free DOX) maintained a temperature of about 36 °C after laser irradiation (Figure 6A and 6B). Thus the photothermal properties of CR780 allow the temperature elevation required for the logic-gated release of the drug. In the continued evaluation of these vesicles for anti-tumor therapy (Figure 6C), favorable results were observed for the v-A-CED2 + L group, in which tumor growth was effectively inhibited (97.0%). In comparison, both the v-CED2 + L and v-A-CD2 + L groups showed moderate tumor growth inhibition (44.8% and 27.1%, respectively), which can be explained by the lower efficacy of the incomplete AND gates, causing less prodrug activation or less vesicle degradation. Moreover, the less effective tumor suppression by v-A-CD2 + L compared with v-A-CED2 + L clearly demonstrated the necessity of NIR-triggered Edman degradation for effective cancer therapy. Correspondingly, mice treated with v-A-CED2 + L exhibited the highest survival rate among all treatment groups, with all mice alive for at least 40 days after treatment (Figure 6D). Furthermore, the hematoxylin and eosin (H&E) staining results also showed greater apoptotic and necrotic tumor cell death in tumors from the v-A-CED2 + L group compared with the other groups (Figure 6E). It should be noted that the normal organ sections showed no obvious signs of damage (Figure S26). In addition, low systemic toxicity was demonstrated by the maintenance of the body weights of the mice treated with the nanovesicles (Figure S27). Conclusions In summary, we developed a novel approach to engineer modular nanomedicine with logic-gated responsiveness to environmental cues.
Conclusions

In summary, we developed a novel approach to engineer modular nanomedicine with logic-gated responsiveness to environmental cues. The drug release was programmed by two logical AND gates built from four interrelated inputs: mild hyperthermia (I), acidic pH (II), free radicals (III), and the degradation of the nanovesicles (IV). The external NIR laser stimulus acted as the single trigger activating both logical AND gates. The established logic-gated modular platform showed effective anticancer efficacy both in vitro and in vivo: the nanovesicles (v-A-CED2) significantly suppressed tumor growth in a subcutaneous xenograft model, with an inhibition rate of 97%, and prolonged the survival of the mice. We anticipate that this strategy of developing modular nanomedicine may find great utility in targeted drug delivery and programmable drug release, as well as in applications for precision medicine.
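The gating scheme summarized in this conclusion can be written down as two boolean AND conditions. The following minimal sketch is our own formalization of that description, not code from the paper; thresholds such as the pH cut-off are illustrative assumptions.

```python
def and_gate_release(nir_on: bool, ph: float, radical_generated: bool,
                     vesicle_oxidation_sensitive: bool = True) -> dict:
    """Boolean sketch of the two sequential AND gates described in the text.

    Gate 1: NIR-induced mild hyperthermia AND acidic pH -> Edman degradation
            unmasks the prodrug (CED2).
    Gate 2: AIPH-derived free radicals AND an oxidation-sensitive PPS-PEG
            vesicle -> membrane rupture and DOX release.
    """
    hyperthermia = nir_on            # NIR irradiation heats the vesicle to ~42 degC
    acidic = ph < 6.8                # assumed tumour-like pH threshold
    gate1 = hyperthermia and acidic  # prodrug unmasking

    gate2 = radical_generated and vesicle_oxidation_sensitive  # vesicle rupture

    return {"prodrug_unmasked": gate1, "dox_released": gate1 and gate2}

# Only the full v-A-CED2 + laser condition satisfies both gates:
print(and_gate_release(nir_on=True, ph=6.5, radical_generated=True))
# Controls lacking AIPH (no radicals) or carrying the non-cleavable CD2
# analogue fail one gate and release little DOX, matching the reported results.
```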
Determination of the efficacy of EVICEL™ on blood loss in orthopaedic surgery after total knee replacement: study protocol for a randomised controlled trial

Background After total knee replacement, overall blood loss is often underestimated because it exceeds the visible blood loss, owing to bleeding into the tissues or into the joint. The use of fibrin sealants during surgery has been suggested to reduce perioperative blood loss and transfusion rates and may be beneficial for patient recovery and the postoperative function of the joint. Methods/Design This will be a single-centre, single-blinded, randomised controlled trial with a parallel design, for which 68 patients undergoing total knee replacement will be recruited and followed up at 3, 6 and 12 months; 34 will be control patients who will receive the standard orthopaedic surgery treatment (electrocoagulation), and the other 34 will receive the same treatment plus 5 ml EVICEL™ applied during surgery and used according to the manufacturer's instructions. The primary objective is to test the null hypothesis that the effect of EVICEL™ for controlling haemostasis and reducing postoperative blood loss in patients undergoing total knee replacement is not superior to the use of electrocoagulation alone. The secondary objective is to show that EVICEL™ reduces the need for transfusion, increases range of motion, improves clinical outcome and wound healing, and reduces the need for analgesics. The tertiary objective is to show that EVICEL™ reduces the costs of total knee replacement treatment. Discussion So far, studies on the effect of fibrin sealants in total knee replacement have delivered inconsistent and ambivalent results, indicating that there is still a need for high-evidence studies as proposed in the presented study protocol. Trial registration German registration number DRKS00007564; date of registration: 26 November 2014.

Background

Implementing a new technology means using a new device in existing procedures with the aim of improving health care and, ultimately, patient outcomes. In orthopaedic surgery, especially following total hip and total knee arthroplasty, the overall blood loss is often underestimated as it exceeds the visible blood loss owing to bleeding into the tissues or into the joint [1]. A large loss of blood places stress on the cardiovascular system and slows the patient's recovery [2]. Some patients undergoing total knee replacement (TKR) require the transfusion of allogeneic blood products in order to avoid cardiovascular complications. In addition, a postoperative haematoma may lead to an impairment of knee range of motion (ROM) [2]. Since 1972, the supportive use of fibrin sealants in selected surgical procedures has become current practice to control haemostasis and to reduce blood loss after surgery. However, the use of fibrin sealants in orthopaedic knee and hip surgery, two procedures often associated with a considerable amount of postoperative blood loss, is not considered standard. EVICEL™ is a fibrin sealant indicated for use as a supportive treatment in surgery for the improvement of haemostasis where standard surgical techniques are insufficient.
Bearing in mind that a new orthopaedic surgery guideline was published recently recommending that acetylsalicylic acid (aspirin) or clopidogrel regimens in patients undergoing orthopaedic surgery not be reduced or interrupted [3], the use of EVICEL™ in daily clinical practice might contribute to a reduction of blood loss, especially in these patients. As a consequence the use of fibrin sealants might improve healing and reduce the impaired ROM, leading to less use of analgesics after surgery, shorter hospital stays and reduced total costs of TKR treatment. Study design This study will be a single-centre, parallel-design, randomised controlled trial (RCT) in which 68 patients undergoing TKR will be recruited according to the inclusion/exclusion criteria for RCTs, and treated within a 12-month period at the Department of Orthopaedic Surgery. It is registered in the German registry under number DRKS00007564. Ethical approval has been obtained from the Institutional Review Board of the Hannover Medical School under process number '6170 M mono'. The study will be conducted in accordance with the Helsinki Declaration. Thirty-four (34) control patients will receive the standard orthopaedic surgery treatment (electrocoagulation). Another 34 will receive the same treatment as control patients plus 5 ml EVICEL™ applied during surgery and used according to the manufacturer's instructions. The main trial period is the stay in hospital for the surgery and postoperative surveillance. Follow-up examinations of all patients will be conducted 3, 6 and 12 months after surgery. With a recruitment period of 12 months and a follow-up period of 12 months, the total length of the study period is calculated to be approximately 24 months. The study will be terminated when the necessary 34 patients per group have completed the study. Additionally, the sponsor has the right to terminate the study at any time for reasonable medical or administrative reasons. Also, the principal investigator can decide to terminate the study at any time for reasonable medical or administrative reasons. The results manuscript will follow the advice from the CONSORT guide and its extension to cluster trials. Trial objectives This randomised controlled study has three main objectives. The primary objective is to test the null hypothesis that the effect of EVICEL™ on controlling haemostasis and reducing postoperative blood loss in patients undergoing TKR is no different than with the use of standard orthopaedic surgery. The secondary objective is to show that EVICEL™ reduces the need for transfusion and increases ROM when measured 7 days after surgery and in the long term, improves wound healing and reduces the need for analgesics. In addition, two clinical outcome scores will be assessed. The tertiary objective is to determine the influence of EVICEL™ use on the overall cost of TKR treatment. Primary and secondary study endpoints The primary endpoint of this study is postoperative blood loss after TKR, measured by the difference in Hb levels at baseline (recorded in the 3 days prior to randomisation) with respect to the detected minimum in the first 7 days postoperatively and compared between the study group and the control group. During this period, blood samples will be taken regularly according to the in-house protocol, and results will be compared with control values derived from patients treated under the same conditions but without the use of EVICEL™. 
During the operation, all factors that affect the postoperative level of Hb will be documented. Secondary endpoints are the need for at least one allogeneic blood transfusion or one autologous transfusion, the ROM of the operated joint, the postoperative use of analgesics, time until wound healing, clinical outcome scores, length of hospital stay, and the overall cost of the treatment. A follow-up measurement of ROM will be conducted 3, 6 and 12 months after surgery. The ROM of the operated joint in the study and the control group will be measured using the angle of maximum flexion. Factors that might influence the ROM (for example, the use of a peripheral nerve block) will be documented and might lead to exclusion from the secondary endpoint calculation (for example, preoperative maximal flexion <90°). The need for at least one blood transfusion will be compared between the study and the control group to demonstrate whether or not EVICEL™ reduces the need for transfusions. Furthermore, it should be shown that the use of analgesics (according to the WHO pain ladder [4]) can be reduced by treatment with EVICEL™ and that the time until the wound is completely dry is shorter. In addition, it should be shown that the clinical outcome in the EVICEL™ group is better than that in the control group. Clinical outcome is assessed by two different clinical outcome scores, the clinician-completed Knee Society Score (KSS) and the patient-completed Knee Injury and Osteoarthritis Outcome Score (KOOS) [5,6]. For the overall cost of the treatment, differences in duration of hospital stay, as well as differences in the total cost of treatment between the two groups, will be compared, including the need for physiotherapy. The length of the inpatient stay depends not only on the patient's condition but also on social and organisational factors, such as the capacity of the inpatient rehabilitation centre. Therefore, a theoretical discharge date will be defined and evaluated in addition to the actual date. The theoretical discharge date is defined as the first day on which the patient achieves the following three criteria: a dry wound without signs of infection, maximum knee flexion of at least 90° and the ability to climb stairs using crutches. For the sake of safety, patients will be under permanent surveillance during the operation. Adverse events (AEs) and serious adverse events (SAEs) will be documented. Clinical signs of infections will be monitored daily, and blood parameters collected regularly.

Study population

A total of 68 men or nonpregnant women scheduled for primary unilateral TKR will be recruited and treated in the context of this RCT. Inclusion and exclusion criteria are defined in the study protocol; the final exclusion criteria are: 13. Patients receiving a prosthesis different from the Stryker Triathlon CR or Triathlon PS system owing to intraoperative circumstances (for example, bone fractures or ligament insufficiencies). 14. Intraoperative deviation from the agreed haemostatic procedure (for example, use of the tourniquet deviating from the agreement, for example in case of surgery lasting >2 h). Patients with surgery lasting >2 h will not be randomised.

Study procedure

On the day of admission eligible patients will be informed about the study protocol. If they agree to participate in the study and sign and date the informed consent form, the following procedure will apply (Table 1):

Preoperative examinations (baseline)

The preoperative examinations to establish baseline measures will include the following: 1.
Orthopaedic examination of the lower extremity, including measurement of ROM, and confirmation of the medical indication. 2. Blood samples taken and tested according to in-house protocol. Relevant parameters for the study are: INR, PTT and complete blood count (CBC) without differential, including Hb and leucocytes. The amount of blood needed for this routine procedure is about 26.5 ml. 3. Subcutaneous injection of certoparin-sodium (for example, Mono Embolex™) at a dose of 3,000 IU/day performed from the preoperative evening onwards. An equivalent drug might be used as an alternative, depending on the guidelines (for example, no preoperative administration in the case of some oral products). The following general data will be collected on the corresponding case report form (CRF): patient ID; gender and date of birth; height and weight; BMI; medical history, especially regarding thromboembolic events; smoking status; comorbidities and concomitant medications. In addition, the following specific data will be assessed: indication(s) for TKR, Hb level (g/dl), C-reactive protein (CRP) (mg/dl), leucocytes (1/μl), virology (hepatitis B/C and HIV screening), haemostatic parameters (INR and PTT), ROM (angle of maximum flexion), details of medications used as prophylaxis against thrombosis, and assessment of KSS and KOOS. Details of the operation and of EVICEL™ administration Investigators will perform the operation according to the established in-house standard and the guidelines of the prosthesis manufacturer. The operation will follow a very strict protocol in order to minimise adverse factors. The prosthesis implanted will be the Stryker Triathlon™ system. All components are always cemented with antibiotic-containing cement (Refobacine Palacos). The operation will be performed in a bloodless field: prior to the skin incision, the tourniquet is set to 250 mmHg. During the operation haemostasis will be performed conventionally by electrocoagulation. For the study group, haemostasis is extended by the use of EVICEL™ according to the manufacturer's guidelines and recommendations. The application will be performed by the surgeons themselves. The first application takes place immediately after lavage, before implanting the prosthesis: 2 ml of EVICEL™ are sprayed into the popliteal fossa, which cannot be approached subsequently. Where cancellous bone surfaces are bleeding after cementation of the prosthesis, EVICEL™ is sprayed on those areas. After hardening of the bone cement, functional tests and conventional haemostasis are performed as usual. The second administration of EVICEL™ is carried out on the meniscal and capsular blood vessels as well as in the superior recess (a total of 2 ml). A wait of 2 min without any action is necessary before starting the suture. After suturing the joint capsule, 1 ml of EVICEL™ is sprayed into the subcutaneous tissue. Suction or swabbing must be strictly avoided in the areas where EVICEL™ has been administered. In case of complications occurring during the administration of EVICEL™ (for example, intraoperative thromboembolic events or allergic reactions), the application will be stopped immediately in the subject concerned. For both groups, a sterile compression bandage is placed after skin suturing. The tourniquet is then removed. In order to ensure better comparability of the results, no wound drainage system will be used in this study as the drained volume depends on the hardly reproducible position of the drainage in situ. 
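The 'theoretical discharge date' defined under the secondary endpoints reduces to a simple first-day rule over the daily postoperative assessments. A minimal sketch follows; the record structure and field names are hypothetical, not taken from the protocol.

```python
def theoretical_discharge_day(daily_records):
    """Return the first postoperative day meeting all three protocol criteria.

    Each record is a dict with hypothetical keys: 'day' (int),
    'wound_dry_no_infection' (bool), 'max_flexion_deg' (float),
    'climbs_stairs_with_crutches' (bool).
    """
    for rec in daily_records:
        if (rec["wound_dry_no_infection"]
                and rec["max_flexion_deg"] >= 90
                and rec["climbs_stairs_with_crutches"]):
            return rec["day"]
    return None  # criteria not yet met within the observed period
```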
The tourniquet must be removed after 2 h at the latest. If the operating time from incision to suturing exceeds 2 h, the tourniquet is removed earlier in the procedure. In this situation, the patient will be withdrawn from the study. The following specific data concerning the operation are collected: duration of surgery; type of anaesthesia (spinal or general); management of fluid balance; use of tourniquet (pressure in mmHg and time in minutes); amount of EVICEL™ used (ml); intraoperative complications; use of peripheral nerve blocks; number of allogeneic and autologous blood units transfused; and documentation of autologous retransfusion of blood collected during surgery.

Postoperative procedure

The postoperative procedure is described as follows: 1. Prophylaxis against thrombosis is continued with certoparin-sodium 3,000 IU (or an alternative) once daily until the patient reaches a physiological level of activity. 2. The bandages are removed on day 1 after surgery. Physiological exercises, manual lymphatic drainage and mobilisation of the patient under full weight-bearing begin on day 1, according to the established in-house standard. Sutures are removed between days 10 and 14. 3. The standard postoperative transfusion criterion at the site is an Hb level <6 g/dl, or, in the case of values between 6 and 8 g/dl, more than mild symptoms of anaemia. However, the criteria are soft and must be adapted according to the patient's individual situation and his/her comorbidities, especially any cardiac comorbidities. The decision for or against a transfusion is therefore made by the treating anaesthetist or orthopaedic surgeon, according to their clinical experience. Thus, the unlikely possibility of transfusing patients with Hb levels between 8 and 10 g/dl cannot be completely ruled out. For each transfusion performed, the reasons will be documented precisely. 4. Collection of blood parameters, their analysis and the documentation of the secondary endpoints are carried out as described in point 6 (Assessment of Efficacy). The amount of blood taken for each of the four blood tests is about 8.2 ml. The following postoperative data are collected in the corresponding CRF (parameters marked with an asterisk are also assessed during the follow-up examinations).

Sample size estimation

The estimation assumes the following mean values and standard deviations (SD) of postoperative decreases in Hb [7]: 25 g/l (SD 10) in the study group and 37 g/l (SD 12) in the control group. The level of significance is set at P = 0.05 and a two-sided t-test is performed with equal numbers in the two groups. With a power of 90 % and an estimated SD of 15 g/l, the required number of cases is 34 per group. For this sample size estimation we assumed that the use of additional information, such as baseline Hb levels as covariates, would increase the power of the test. For a total sample size of 68, approximately 120 patients must be screened, because about 40 % of the eligible patients are expected either not to meet the inclusion criteria, not to consent to participate owing to the inconvenience of the follow-up examinations, not to be randomised owing to intraoperative circumstances, or to drop out for other reasons. Since the primary endpoint is blood loss during the first 7 days, which is calculated using routine blood tests during the inpatient stay, no drop-outs regarding the primary endpoint should be expected after surgery (Fig. 1).

Statistical methods

The statistical analysis will be carried out at the end of the study; interim analyses are not intended.
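The case number of 34 per group stated above can be reproduced with a standard two-sample t-test power calculation. The script below is ours, not part of the protocol, and uses the protocol's planning assumptions (difference 12 g/l, SD 15 g/l, two-sided alpha 0.05, power 0.90):

```python
from statsmodels.stats.power import TTestIndPower

# Planning assumptions from the protocol: Hb drop 25 vs 37 g/l, SD 15 g/l.
effect_size = (37 - 25) / 15  # Cohen's d = 0.8
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.90, ratio=1.0,
    alternative="two-sided",
)
print(round(n_per_group))  # ~34 per group, matching the protocol
```

Likewise, the primary ANCOVA described in the next paragraphs, adjusting the Hb drop for baseline Hb, could be set up along these lines; the column names and data file are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: hb_drop = baseline Hb minus 7-day minimum (g/l),
# group = 1 for EVICEL, 0 for control; hb_baseline in g/l.
df = pd.read_csv("tkr_hb.csv")  # placeholder data source

fit = smf.ols("hb_drop ~ group + hb_baseline", data=df).fit()
ci_low, ci_high = fit.conf_int().loc["group"]
print(f"Adjusted difference (EVICEL - control): {fit.params['group']:.1f} g/l, "
      f"95% CI [{ci_low:.1f}, {ci_high:.1f}]")
# Per the protocol, superiority is concluded if the upper CI limit is < 0.
```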
The analysis of the primary endpoint may, however, be carried out earlier, as soon as the data from the last patient have been assessed. For this analysis, an ANCOVA of the difference in Hb levels from baseline to the detected minimum in the first 7 postoperative days, compared between the two treatment groups with adjustment for baseline Hb levels, will be used. If the upper limit of the two-sided 95 % confidence interval (CI) of the difference in means between EVICEL™ and standard orthopaedic surgery, as estimated from the ANCOVA model, is <0, the superiority of EVICEL™ will be concluded. Missing values for Hb observations will be replaced by the last observation carried forward (LOCF) method. If there are no postoperative Hb values available for a particular patient, that patient will be recorded as having the largest decrease in Hb of the control group. AEs and SAEs will be evaluated descriptively by chi-squared tests. For the secondary endpoints, the need for at least one transfusion will be analysed by odds ratios (OR) together with 95 % CIs. The ROM and the length of hospital stay will be analysed by t-tests. The total cost per patient will be analysed using Wilcoxon's signed ranks test. Time until wound healing will be tested by log rank tests. Secondary and tertiary analyses are exploratory and will be performed descriptively. The P values will be assessed descriptively and will be deemed significant when P <0.05. The primary analysis will be conducted on the intention-to-treat (ITT) population, that is, for all randomised patients. Sensitivity analyses will be performed in the per-protocol (PP) population, including all patients who completed the study according to the protocol.

Randomisation/blinding

The process of randomisation is performed centrally by the Institute for Biometry. A randomisation list is created and assigns the patients to either the control group or the study group. When the surgical procedure arrives at the point at which EVICEL™ treatment has to be performed in the active group (after conventional haemostasis), the investigator initiates the randomisation and queries the patient's assignment at the Institute for Biometry by phone. This procedure ensures that the part of the operation preceding the application of EVICEL™ is concealed from the allocation and performed without any bias. In this RCT, the patients will be blinded but, owing to the nature of the intervention, the surgeons clearly cannot be blinded. The treating surgeons will conduct the therapy in a way that complies with the single-blinded design of the study. In the rare case of spinal anaesthesia, it will be ensured that patients have no opportunity to realise whether or not they receive the medicinal product under investigation (IMP). The medical team involved in the operation and the patient's care is responsible for keeping the treatment blinded. The nurses, physiotherapists and physicians collecting data after surgery will also be blinded to the treatment allocation.

Subject withdrawal

As the application of EVICEL™ is conducted only at a single time point in a three-step procedure, subjects cannot be withdrawn from the IMP treatment. The only exceptions are complications occurring during the operation, leading to an immediate cessation of EVICEL™ use. Patients undergoing a thromboembolic or any other event demanding anticoagulation that exceeds the standard prophylaxis measures during the first 7 days after surgery will be recorded as having the smallest Hb value before anticoagulation.
Patients receiving anticoagulation during the operation will be recorded as having the largest Hb difference of the control group. The same procedure applies to patients receiving any kind of blood product during or after the operation. Because in nearly all cases there will be a blood sample collection prior to a transfusion or to a change in anticoagulation medication, these patients are expected to account for <1 % of all patients and will be recorded separately. The following procedure applies to patients receiving any kind of blood product or an extended anticoagulation medication during or after the operation. Patients receiving blood transfusions during the operation but before randomisation will not be randomised. Patients receiving blood transfusions during the operation and after randomisation will be recorded as having the largest Hb difference of the control group. Patients receiving a blood transfusion after the operation would require measurement of Hb level before transfusion. Patients deteriorating despite transfusion will be recorded as having the smallest observed Hb level. For each transfusion, the Hb value at the time of transfusion, the reason for the transfusion (with specification of the symptoms) and the time of transfusion will be documented precisely. Patients with an intraoperative deviation from the agreed haemostatic procedure (use of EVICEL™, electrocoagulation, or of the tourniquet deviating from the agreement, e.g. in the case of surgery lasting >2 h) will not be enrolled. In most cases, it is foreseeable before randomisation that the operation will probably take longer than 2 h; therefore, those patients will not be randomised, and the percentage of subjects not included because of the duration of surgery is expected to be <1 %. Insurance Mandatory patient insurance for this trial according to AMG § 40 (3) has been obtained. Because of this, any damage to patient health during the conduct of the study will be insured, with a maximum amount of coverage of €500,000 per patient. This covers all damage that may occur to the patient either indirectly or directly as a result of the study medication or interventions in connection with the RCT. Overview of the medical product under investigation EVICEL™ is a fibrin sealant kit consisting of two human plasma-derived components, human clottable protein containing mainly fibrinogen and fibronectin (component 1), and human thrombin (component 2), both produced by Omrix Biopharmaceuticals S.A. [8]. EVICEL™ is a further development of QUIXIL, which has been approved for marketing in 14 EU countries since 2003, first in the UK in 1999. One difference between EVICEL™ and QUIXIL is the final composition of the fibrinogen component, but the thrombin component remains the same. The fibrinogen component of QUIXIL contains the synthetic antifibrinolytic agent tranexamic acid (TA), which inhibits the degradation of fibrinogen. However, because TA is potentially neurotoxic QUIXIL is contraindicated for use in neurosurgery and all procedures where contact with the cerebral spinal fluid (CSF) and dura mater might occur. By specifically removing plasminogen the need for stabilisation with TA is avoided, and EVICEL's fibrinogen is formulated without TA. Its protein concentration is 30 to 50 % higher, requiring the submission of a new application for marketing authorisation. 
EVICEL™ is indicated as an adjunct to haemostasis for use in patients undergoing surgery, when control of bleeding by standard surgical techniques (such as sutures, ligatures or cautery) is ineffective or impractical. EVICEL™ is intended for epilesional use only, and the dosage should always be oriented towards the underlying clinical needs of the patient. The manufacturer recommends a dosage of 5 ml for TKR.

Risks

The following potential risks may occur when administering fibrin sealants:

Discussion

The clinical significance of, and rationale for, the conception of this study was the fact that after TKR only the 'visible' blood loss is usually known, whereas 'hidden' blood loss is often underestimated. In a study involving 101 patients undergoing TKR, the hidden blood loss contributed 49 % to the 'true' total blood loss from bleeding into the tissues or into the joint [1]. A decrease in postoperative Hb levels may be caused by either intraoperative or postoperative blood loss. EVICEL™ is used at the end of the surgery, so that postoperative blood loss is the only fraction that might be affected by its use. Therefore, the study was designed to keep intraoperative blood loss as low as possible in order to provide better conditions for detecting the effect of the fibrin sealant and to achieve meaningful results. For this reason, all operations were planned to be conducted in a bloodless field. Several clinical studies have shown that the preceding fibrin sealant, QUIXIL, significantly reduces blood loss in patients undergoing total hip or total knee replacement [7,[9][10][11]. In contrast, another study evaluating blood loss and the number of blood transfusions in patients undergoing TKR could not demonstrate any benefit from the use of QUIXIL [12]. In a study with 165 patients undergoing TKR, the authors concluded that the use of platelet gel and fibrin sealant improves ROM, reduces hospital stay and may reduce the incidence of arthrofibrosis [2]. However, during the planning phase of the study protocol presented here, there were no data concerning the newer fibrin sealant EVICEL™. Owing to delays in obtaining approval for the study from the competent national authority, associated with product changes, by the time the study was finally ready for submission there were several published studies in the literature evaluating the effects of EVICEL™ in TKR. Randelli et al. [13] reported in their RCT that the application of EVICEL™ reduces neither perioperative blood loss nor the need for allogeneic blood transfusion. Skovgaard et al. [14] found that the drain output in knees treated with fibrin sealant and those treated with placebo was similar, and that no statistically significant differences could be seen regarding swelling, pain, strength of knee extension or ROM. In another study [15], however, the results suggested that transfusion rates in anaemic patients undergoing TKR were significantly lower than in a control group, and that the use of EVICEL™ resulted in a significant reduction of blood loss. In their meta-analysis, Liu et al. [16] reported that the use of fibrin sealants in TKR significantly reduced total blood loss, drainage blood loss and haemoglobin loss, as well as transfusion rates. In conclusion, a literature search delivers inconsistent and ambivalent results, indicating that, despite the presence of comparable studies, there is still a need for high-evidence studies clarifying the role of fibrin sealants in TKR, as proposed in the study protocol presented here.
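The 'hidden' blood loss discussed above is typically quantified by combining an estimate of total blood volume (for example, Nadler's equations) with the observed haematocrit drop (Gross's formula), then subtracting the visible loss. The sketch below illustrates that widely used approach; it is not part of this protocol, and the example values are invented.

```python
def nadler_blood_volume(height_m, weight_kg, male=True):
    """Estimated total blood volume (litres) via Nadler's equations."""
    if male:
        return 0.3669 * height_m**3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m**3 + 0.03308 * weight_kg + 0.1833

def total_blood_loss(height_m, weight_kg, hct_pre, hct_post, male=True):
    """Gross formula: loss = blood volume * (Hct_pre - Hct_post) / Hct_mean."""
    bv = nadler_blood_volume(height_m, weight_kg, male)
    return bv * (hct_pre - hct_post) / ((hct_pre + hct_post) / 2)

# Example: 1.75 m, 80 kg man whose haematocrit falls from 0.42 to 0.32 after TKR
loss = total_blood_loss(1.75, 80, 0.42, 0.32)
print(f"Estimated total loss: {loss:.2f} L")  # ~1.4 L
# Hidden loss = total loss minus measured (visible) intraoperative/drain loss.
```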
Trial status

This RCT was designed as an investigator-initiated study, with Hannover Medical School assumed to be acting as sponsor. The study was funded by Ethicon Inc., the parent company of Omrix Biopharmaceuticals, which produces the IMP. A contract was signed and initial financial support was provided according to the contract terms, which included approval by the Independent Ethics Committee. Approval by the national competent authority was not obtained following feedback from the European Medicines Agency. When the issues were resolved, the study was prepared for resubmission to the national competent authority. However, in the meantime, Ethicon changed their business strategy and further funding was withdrawn. Therefore, the study has not yet started patient recruitment.
Migration, housing and attachment in urban gold mining settlements Mining settlements are typically portrayed as either consisting of purpose-built housing constructed by mining companies to house their workers, or as temporary makeshift shelters built by miners working informally and inhabited by male migrants who live dangerously and develop little attachment to these places. This paper contributes to these debates on the social and material dynamics occurring in mining settlements, focusing on those with urban rather than rural characteristics, by highlighting how misconceived these archetypal portrayals are in the Ghanaian context. Drawing on qualitative data collected in three mining settlements, we explore who is moving to and living in the mining towns, who is building houses, and how attachments to place develop socio-temporally. Through doing so, the paper provides original insights on the heterogeneous nature of mining settlements, which are found to be home to a wide range of people engaged in diverse activities. Mining settlements and their attendant social dynamics are shown to evolve in differing ways, depending on the type of mining taking place and the length of time the mines have been in operation. Significantly, we illustrate how, contrary to popular understandings of incomers to mining settlements as nomadic opportunists, migrants often aspire to build their own houses and establish a family, which promotes their attachment to these settlements and their desire to remain. These insights further scholarship on the social and material configuration of mining settlements and feed into the revival of interest in small and intermediate urban settlements. Received December 2016; accepted August 2018 Introduction As mining and urban research tend to be conducted by separate cohorts of researchers, there has been relatively little research on the growth of mining settlements (Bryceson and MacKinnon, 2012;Gough and Yankson, 2012). Such settlements are typically portrayed as either consisting of purpose-built housing constructed by mining companies to house their workers, referred to as company towns (Marais et al., 2018), or as temporary makeshift shelters built by miners working informally and inhabited by male migrants who live dangerously and quickly move on to new mining sites, so called rush sites (Jønsson and Bryceson, 2017). Whilst such settlement types clearly exist, they are far from the full picture and little is known about who is moving into and settling in mining settlements and whether they develop an attachment to these places. This paper contributes to filling this gap by drawing on the case of gold mining in Ghana to explore the changing demography of mining settlements and the process of house building within these urban centres, generating novel insights on attachment. Whether and how residents develop an attachment to these places, and how this impacts on their future plans, is discussed. The paper thus feeds into the revival of interest in small and intermediate urban settlements, taking it in a new direction. As Satterthwaite (2016) claims, this growing interest is a reflection of the recognition that a large proportion of urban populations live in urban centres other than large cities, a concern regarding the ability of local governments in small and intermediate urban centres to adequately cater for their inhabitants, and a desire to stem the flow of migrants to the major cities. 
Ghana presents an interesting case for exploring small and intermediate urban mining settlements as the country is endowed with significant mineral wealth (Cuba et al., 2014) and has a long history of migration and urban settlement (Coe, 2011;Van der Geest, 2011). Although minerals including manganese, diamonds, bauxite, limestone, silica and salt have all long been exploited in commercial quantities, gold is by far the most important mineral mined (Akabzaa, 2009;Bloch and Owusu, 2012). The history of gold mining in Ghana is far from smooth, however, and the production of gold has waxed and waned in line with changing government policy and world market prices (Hilson, 2002;Teschner, 2012). As elsewhere in sub-Saharan Africa (Hashim and Thorsen, 2011), Ghana has a long history of population movements of varying duration, distance and frequency, often closely associated with the search for improved livelihoods. The dominant trend, however, has been from the poorer north to the wealthier south of Ghana (Awumbila et al., 2011;Yaro et al., 2011). One consequence of this migration is the increasing level of urbanisation, resulting in Ghana having just over half (51%) of the population living in urban areas (Government of Ghana, 2010), making it one of the most urbanised countries in sub-Saharan Africa. Although many urban residents are tenants (Arku et al., 2012), the dream of owning a house is shared by all Ghanaians, and becoming a homeowner is considered a key measure of success and social standing (Gough and Yankson, 2011;Yeboah, 2003). The paper starts by setting the scene, presenting the origins of the three study settlements and an account of the methodology employed. The role that mining and migration have played in instigating demographic changes in the towns is then examined, followed by an analysis of changes in the nature of housing provision over time, highlighting the similarities and differences between the three settlements. The attachment to place experienced by residents of the mining settlements, including the role housing plays in this, is then explored. The paper makes an important contribution to the literature by revealing new insights into the growth and nature of mining settlements, and furthering discussions regarding the complex links between migration, housing and attachment to place.

Setting the scene

Gold mining in Ghana is typically characterised as having two sectors: large-scale mining conducted by multi-national companies and small-scale mining, referred to locally as galamsey, which is a corrupted form of the English expression 'gather and sell'. Most of these miners, called galamseyors, work informally without permits as they do not have mining concessions and operate from sites they do not have titles to. Galamsey is widely viewed in Ghana as being illegal, which, as Hilson (2013) claims, is partly a result of government policy and is detrimental to those working in the sector. The increasing informality of galamsey is due largely to barriers associated with galamseyors' obtaining land and licences (McQuilken and Hilson, 2016). In early 2017, the government introduced a countrywide ban on all small-scale mining in Ghana, which was still in place in mid-2018, in an attempt to bring order into the sector. Formalisation of small-scale mining operations in many mineral-rich countries, however, has not been successful and there has been a tendency for informality to persist (Verbrugge, 2015).
Three mining settlements -Obuasi, Prestea and Kenyasi (Figure 1) -were selected for this study because of their differing characteristics in relation to size of population, mining types and length of time mining has been conducted. Obuasi was selected to represent an old mining centre dominated by large-scale mining but where small-scale mining is becoming increasingly important. Located in the Ashanti Region, Obuasi is the principal gold mining settlement in Ghana, with a current population of almost 150,000 (Government of Ghana 2010 census). Although gold mining has been carried out in Obuasi for centuries (Hilson, 2002), the growth of Obuasi as a mining settlement stems from the late 19th century when the British colonial powers opened a series of gold mines, the most important being operated by the Ashanti Goldfields Company (AGC). Prestea was selected to represent an old mining centre now dominated by smallscale mining but where large-scale surface mining is taking place. Located in the Western Region, Prestea is the creation of mining companies that worked the Prestea concession starting in the 1920s with the British Ariston Gold Mining Company, which established underground mining in the area. Production deteriorated during the 1980s and was halted in 1998. Employees then formed Prestea Gold Resources to run the operation, though the underground mine closed down a few years later because of unprofitability (Hilson and Yakovleva, 2007). Today surface mining is carried out by the Canadian multi-national company, Golden Star Resources (GSR), and the town's population is almost 27,000 (Government of Ghana 2010 census). Kenyasi was selected as a new mining settlement dominated by small-scale mining but where a large-scale mine has been established close by. The settlement is located in the Brong-Ahafo Region and consists of two separate towns named Kenyasi I and Kenyasi II. According to oral testimonies, some residents moved out of Kenyasi I to establish Kenyasi II following a disagreement. This study focuses on Kenyasi II (hereafter just referred to as Kenyasi), which is the larger and more rapidly growing of the two, with a population of around 11,500 (Government of Ghana 2010 census). Since the discovery of gold in the area in 2004, the multi-national company Newmont has established large-scale open cast mining in the district and many small-scale mining operations have also begun operating (Kala, 2016). This paper is based on qualitative data collected in all three mining settlements using in-depth interviews, semi-structured interviews and focus group discussions, supplemented by cultural events and observations. All of these data were collected by the authors, with the language of communication being either English or Twi depending on the preference of the interviewee. Most interviews were recorded and subsequently transcribed, though where this was not possible either because of a noisy location or the interviewee preferring not to be recorded, detailed notes were taken. In each settlement, the initial in-depth interviews were with elderly male and female residents to obtain an overview of how the settlements and mining activities have changed over time. These individuals were typically located by asking a local assembly member (elected local government representative) to select suitable long-term residents. 
The snowballing method was then used to locate further interviewees, who included residents engaged in a range of incomegenerating activities including: galamseyor in all three settlements; miners working for a large-scale mining company in Obuasi and Prestea; and male and female business owners, in particular, shop keepers. In all of these in-depth interviews, the respondents were asked about their life histories, focusing on their residential, household, housing, occupational and financial histories. Discussions also revolved around their investments, building and feeling at home, advantages and disadvantages of living in a mining settlement, and their future plans. A total of 30 interviews were conducted with residents of the mining settlements, equally divided between the three settlements. Semi-structured interviews were conducted with service providers in education, health and local government, as well as a range of officials in local government units (Municipal/District Assemblies), in particular Town Planning Officers and Environmental Health Officers. In each case the informants were asked generally about changes in the mining settlements but also more specific questions relating to their occupation, for example, teachers were asked how mining had impacted on school attendance and facilities, and health officials on how mining had affected the health of the population. In addition, officials from national government ministries, departments and agencies, along with the relevant Member of Parliament representing the electoral constituencies of the three study towns, were interviewed. In all, interviews were held with 22 policy makers from national and local levels. In order to gain the perspective of young people growing up in mining settlements, focus group discussions (one per settlement, i.e. three in total) were conducted with a mixed group of between seven and twelve males and females aged between 17 and 34 in suitable locations within the settlements, such as an empty classroom. Both interview types were taped and subsequently transcribed verbatim. They were then analysed using in vivo coding to identify categories and trends within the text material, and to build themes that connect the empirical findings to broader literature and concepts. 'Digging deeper' cultural events were also held in one school in each of the three study towns. Pupils participated through producing plays, paintings and poems about growing up in a mining town. Furthermore, while staying in the study towns to conduct the fieldwork, detailed observation of the range of neighbourhoods within the settlements, the galamsey mining sites and the open cast mining was undertaken. These extensive qualitative data are drawn on in the following analysis of migration, housing and attachment to place in gold mining settlements, supplemented with secondary data extracted from the Government of Ghana census surveys. Migration and changing demography of mining settlements Migration into gold mining settlements typically starts as soon as word spreads that gold has been found (Dickson, 1969;Nyame et al., 2009). Initial migration into Obuasi and Prestea was primarily by men looking for work in the large-scale underground mining operations. This included both unskilled and skilled miners as well as white-collar workers. Consequently, not only poorer migrants moved to the mining towns but also higher-income individuals attracted to the mining sector moved from larger cities. 
Salaries and associated benefits in large-scale mining were very attractive, resulting in people relocating, for example, from the capital city Accra to Obuasi. This type of population movement from larger to smaller urban settlements is unusual, indicating how the growth of mining settlements can differ from that of other urban centres that do not have a similar resource base. As the Municipal Chief Executive 2 of Obuasi explained in relation to miners in the early days: What they were taking home at that time, compared to the average Ghanaian worker, was far better. Then there were so many privileges attached to the fact that you work for AGC. Every month they were given food rations, provisions and a whole lot of things that were the envy of people around. This statement highlights both the financial and additional benefits that used to be associated with working for a large-scale mining company and why this stimulated migration into the mining settlements. Unlike many mining towns elsewhere, mining and the urban settlement are intertwined in Obuasi and Prestea as a result of the two developing concurrently. The arrival of a multi-national mining company does not necessarily, however, result in a major influx of population, as the inhabitants of Kenyasi have discovered. Despite Kenyasi being the closest settlement to the mine operated by Newmont, the mine employees are housed in the larger settlement of Sunyani and bussed to the mine on a daily basis. Consequently, to the frustration of the locals, the establishment of a large-scale mine a short distance away from Kenyasi has not resulted in an influx of formal sector employees who have relatively high spending power. The most recent surge in migration into the mining settlements is primarily due to the expansion of opportunities to work in galamsey in all three cases. As an assemblyman (elected local government representative) in Obuasi explained: For the past ten, fifteen years the population has shot up because of the gold and galamsey operations here. We have people from the Volta and the north -they dominate -and all parts of the country. You get all the tribes here and the population is now increasing day in and day out. . People come here every day never to return again. The only time they go back is maybe Christmas, just to visit their relatives in their hometowns for about one week and then they come back here. So gradually the population is rising up. This quote touches several issues: how many people are migrating, where they are migrating from, and whether they remain in the settlements. Table 1 shows the changing population of Obuasi, Prestea and Kenyasi. As these data show, and our interviews confirmed, all three towns have experienced quite dramatic changes in their populations linked to opportunities in mining. As the mining operations expanded in Obuasi, the population grew rapidly and it is now one of the most important intermediate sized towns in Ghana, with the country's most important gold mine. Prestea's population growth has been closely linked to the varying fortune of mining in the town, experiencing especially rapid population growth up to the 1960s, then slow growth into the 1980s, accelerating once again as small-scale mining expanded from the mid-1990s. Kenyasi was a small agriculture-based settlement of just over 5000 inhabitants in 1984. During the first decade of the new millennium, following the discovery of gold, Kenyasi's population increased by more than 50% to become a small town (Table 1). 
As interviewees reported, and the census data confirm, those migrating to the mining settlements came from all over Ghana. The largest groups came from: the Central, Western and Upper West Regions in the case of Obuasi; Central, Ashanti and Volta Regions to Prestea; and Ashanti, Upper East and Northern Regions in Kenyasi (Government of Ghana census data, 2010). The contribution of in-migration to the growth of Prestea and Obuasi, however, has fallen over the years as natural growth has become increasingly important (Table 2), showing how migrants' in situ family formation serves to quickly contribute to urban growth. Interestingly, a similar proportion of around two-thirds of the population of all three settlements was born in the same settlement, although the processes that lie behind this figure, we argue, differ. In Prestea and Obuasi, which originated as mining settlements, this ratio is caused by the expansion of the families of migrants, whereas in Kenyasi it is due to migration into the settlement to engage in galamsey being a relatively recent phenomenon. According to the interviewees, movement into mining settlements is due not only to new migrants but also to the return of indigenes to their hometowns attracted back by new opportunities in the mining sector. Especially in the case of Kenyasi, many indigenes who had left the town in search of better opportunities elsewhere have returned to engage in galamsey or in the increased trading and retail opportunities that a growing population creates. It is important to recognise, however, that the residents interviewed in the mining settlements are inevitably those who have stayed or returned; the indigenes and migrants who have left the mining settlements are not present to tell their stories. Migration in relation to mining settlements is thus complex, with people moving in and out in relation to changing perceived opportunities. Such migration is especially well highlighted by the following statement from a young man in a focus group discussion in Obuasi, which is worth quoting in full because of its complexity: What we are saying is that Obuasi is like a toll. While some are coming in, others are going out. This man went to do a practical with AGC and has finished but they didn't employ him. So if he goes to a place like Dunkwa and they give him a license for small-scale mining over there he will stay and do it. Somebody else would leave Dunkwa for Obuasi to come and do galamsey work. It is just like farming, someone from here would like to go and farm somewhere else, whereas someone from there would also like to come here and do galamsey. I am from Obuasi but I left here for Diaso-Denkyira to do galamsey there but I have told my landlord that if I get land I will stop the galamsey and cultivate cocoa. As a result, I have used a portion of the land that I bought for the galamsey work to cultivate maize so I will go there next week. This notwithstanding, someone else also wants me to bring him to Obuasi to do galamsey. This quote highlights how there is movement in and out of mining settlements to other mining towns to work or to rural areas to engage in agriculture (see also Yaro et al., 2011). Consequently, there are numerous cases of multi-spatial households where family members are living in different localities to maximise incomes and minimise risk. 
This type of mobility is commonplace in Ghana where mobility has been shown to be the norm rather than the exception (Awumbila et al., 2011;Olwig and Gough, 2013;Yaro et al., 2011), and individuals often combine a variety of occupations simultaneously (Esson et al., 2016). The interview data support claims that such mobility is closely linked to people's life stage and is especially common amongst young unmarried miners who are freer to move from place to place as word spreads regarding which are the most lucrative mines (Jønsson and Fold, 2011). A male teacher living in Prestea explained how he saw the migration of galamseyor as follows: Those galamseyor who came here after the mine collapsed, their intention was not to raise their family here. Their intention was to come here to work so that in a few months or few weeks they will go back. In those days the galamsey was a bit illegal. They were afraid that their work could be terminated at any time but now some of them are permanently stationed here, some of them have married here, some of them have brought their wives so they are raising their family here. . Those who have not married they move from one galamsey community to the other. When they hear that galamsey has proved good in an area they will go there. Similar views were expressed by other respondents, highlighting not only how mobility is linked to a miner's stage of life but also how, despite galamsey being viewed by the government as an illegal activity, miners now feel secure enough in their source of livelihood to bring their families to live with them. Significantly, this further illustrates why the narrative of mining towns being populated primarily by male migrants, who live dangerously and develop little attachment to the places where they reside, implicitly overlooks the complex array of motivations in different contexts for specific individuals, and how this in turn can influence the dynamics occurring in mining settlements. Interestingly, all three settlements have a slightly higher proportion of women (Government of Ghana 2010 census), illustrating how not only does the presence of women in mining activities tend to get overlooked, especially in small-scale mining (Lahiri-Dutt, 2012), but a wide range of other activities also take place in mining settlements, including trading and farming, which women are heavily engaged in (Kala, 2016). Youth in the focus group discussions highlighted how galamsey creates incomegenerating opportunities for women, with one young man from Prestea saying, '[galamsey] gives work to the women who didn't have work to do. Some sell water to us, some sell iced-kenkey, some sell various items to us and we also buy the items.' The role of women in maintaining the household and wider economy is particularly evident during periods of high male unemployment, which in the case of Prestea was caused by the shift from labour intensive underground mining to more machine reliant surface mining. As a 60-year-old female provision storeowner in Prestea observed, 'Because the men were not working, all the economic burdens came unto us the ladies. If you don't sit up [to work and support the family], your child or grandchild will be wayward.' Many young people now see their future being in their hometown, rather than migrating elsewhere, as was commonplace before the advent of mining. Young indigenes growing up in Obuasi, Prestea and Kenyasi expressed a strong attachment to their hometowns. 
As a young male participant in the focus group discussion in Obuasi explained, 'I was born in Obuasi here. Even if I travel outside of this town I always return.' And another added, 'Those who emigrate later on regret leaving Obuasi because some of them don't even get places to sleep, especially in Accra.' Since the expansion of galamsey, many of the young people believe that their employment prospects are greater in their hometown than elsewhere. As a young woman from Kenyasi explained in a focus group discussion, 'Those of us [who stay] here are more because if you travel it is because of work that is why you are travelling. Because the galamsey is here, young people stay here and work.' Those who have migrated within Ghana or abroad maintain strong ties with their hometown and often invest there; as a former large-scale miner from Prestea explained, 'Some leave here to work at Kenyasi, Konongo and other places and the money that they get they come and invest it back here.' These investments are often in rental properties, though owning a house subsequently acts as a draw for returning to their hometown in old age. In view of the ebb and flow of migrants in and out of the three study settlements, we now turn to the nature of housing provision and use home ownership as a lens to explore how and why some people establish roots in mining settlements. Housing in mining settlements The most common form of housing in Obuasi, Prestea and Kenyasi is compound housing inhabited by multiple households, which constitutes the 'traditional' form of housing in Ghana and is still the most common housing inhabited by low-income groups (Ardayfio-Schandorf et al., 2012;Korboe, 1992). A compound housing unit consists of a number of (sleeping) rooms that usually open to an interior courtyard and where the space and that of other facilities, for instance for bathing, cooking, storage, are shared by the resident households. Where demand for accommodation is high, compound houses have been extended to enable the renting out of additional rooms (Yankson and Gough, 2014). Renting is the most common form of tenure, accounting for almost 60% of accommodation in Obuasi and Prestea and roughly 40% in Kenyasi, reflecting the lower proportion of the non-indigenous population in the latter, who are more likely to rent. In all three settlements, the houses are remarkably similar in construction, built predominantly of cement blocks with a metal roof and concrete floor, though there are slightly more houses in Kenyasi with mud walls and floors, reflecting its more rural nature (Table 3). The type of housing found in the case study settlements thus does not fit the classic picture of either bungalows built for large-scale mining employees or makeshift shelters built by transient miners (cf. Bryceson and MacKinnon, 2012), but rather is similar in terms of structure, building materials and tenancy to the type of housing found in non-mining settlements in Ghana. As large-scale mining and galamsey impact on house construction in mining settlements in differing ways, they will be discussed separately here. Large-scale mining and housing The first miners who moved to Obuasi and Prestea to work in large-scale mining faced significant problems finding a place to stay given that the numbers of migrants far outweighed the availability of accommodation. 
A 51-year-old miner who had lived in Obuasi for 30 years described how there was an immediate influx of people as soon as the mine was set up:

People started trooping into the town after we finished sinking the shaft. As a result, people started looking for a place to lodge. When I came here securing a job wasn't difficult but where to lay your head was the problem because all of us were in that small place, where could one live?

The mining company established staff quarters elsewhere and put on transport to the mine for the workers, though this was not popular. Subsequently, the company built quarters close to the mine itself, which the miners initially occupied in shifts during the 24 hour working day. Later more substantial estate houses were built to house some miners, the size of the housing allocated depending on the rank (and hence race) of the miner. In the words of the Obuasi Municipal Chief Executive, the first housing was built for 'the whites who were living in bungalows which were within the mines setup… The first buildings which were built for the blacks were called Seven African Bungalow and were designated for the black senior officers.' The mines quarters were well serviced in comparison with the rest of the town, which had inadequate water and electricity supplies. Similar to Obuasi, the AGC in Prestea built bungalows for senior staff located next to the mine on a hill overlooking the town, and constructed compound houses for the workers. The level of subsidy for housing and services is illustrated by a former miner who, after explaining that he did not have to pay rent, went on to say that, 'even your electricity bill was borne by the company. I can remember when I came here in 1962 even your bulb when it spoils, the company has some electricians who will come and change it for you. You won't pay anything.' In Kenyasi, however, the provision of housing linked to Newmont has differed, not only because large-scale mining started at a much later date but also because of the company's policy of bussing employees from Sunyani to the mine on a daily basis. Hence, in Kenyasi, Newmont is notable for both its concurrent presence and absence. Its arrival has greatly affected the town both directly in terms of compensation paid to inhabitants (see below) and indirectly in terms of the subsequent influx of galamsey miners. But at the same time there is a sense of 'absence' because the development the inhabitants had expected would accompany the arrival of Newmont, in the form of housing and infrastructure, has not occurred. In Prestea, some of the miners who still reside in the former company housing are disputing whether they should pay rent and the case is currently with the courts. Meanwhile, GSR has gained a large concession that includes part of the town of Prestea and the former bungalows. Some of these, including the former Club House, have been demolished because of surface mining. As a male teacher and long-time resident of Prestea bemoaned, 'When this company took over they bulldozed all those structures so now those of us who are from this place we cannot tell somebody who is not from here the legacy the old mine left.' This quote reveals how mining activities are literally eating into Prestea town as government concession allocations do not protect or even take into account already existing settlements.
In the early days, although the large-scale mining companies were not able to house all their employees, miners did not venture into building houses in Obuasi and Prestea for fear of being suspected of gaining wealth for house-building by stealing gold from the company. A teacher and long-term resident of Prestea explained why miners did not construct housing: 'There was a fear that you would be sacked. The workers were closely monitored so if you tried to put up a building or if you tried to decorate your rooms sometimes you will be sacked.' These claims were substantiated by the Municipal Chief Executive for Obuasi, who summarised the challenges miners faced as follows:

They [mining officials] would sit in the committee and the question they will ask you is, what is your source of income? If you are not able to convince them, they will not give you the land. What this meant was that those who even had the money to develop the local economy and build mansions and houses, on suspicion of being gold dealers were not given access to land here. The smartest ones who had money moved to either Kumasi or…

Although building a house does not tie an individual to a place, by preventing workers, especially migrants, from building their own houses, the mining company removed one of the key means for miners to solidify attachment to the town and hence indirectly encouraged carefree ways of spending earnings. The prevention of miners in Obuasi and Prestea from building homes and other physical infrastructure thus offers an example of how inhabitants of colonial and postcolonial African towns were prevented from defining and developing urban locales on their own terms, and prohibited from making use of the urban space in ways they deemed appropriate (see Simone, 1998). An additional factor related to the early lack of housing investment was miners' belief that their wealth would continue indefinitely. According to a former mining employee in Prestea:

When the mines were vibrant we didn't think so much about the future because we thought the mines would always be there so we didn't use the money for something good but when the mines collapsed and the galamsey took over everybody is now using the money to do something to develop the town and the people are building houses so this has made the town expand in size.

As the mining companies became increasingly unable to provide enough housing for all the miners, and miners were restricted from building their own homes, many had to resort to renting. Most only rented a single room but even these were hard to come by, especially since landlords often charged several years' rent in advance in line with rental housing practices throughout Ghana (Arku et al., 2012). Given the difficulty of raising such large sums of money, a former miner claimed that the situation of fearing to build and having to rent on such terms 'went on until we could no longer bear it. It got to a time we were not able to pay the rent advance so the workers started to build their own houses even if it was just a one room building.' Obuasi's Municipal Chief Executive explained how in the central part of Obuasi the buildings put up by the workers were 'very, very small and became shanty in character.' Yet not all housing built by miners is of poor quality. With the growth of Obuasi into an important intermediate-sized city, a range of banks and lending institutions opened up branches, enabling miners with salaries to obtain loans to build.
As a miner in Obuasi explained:

If you go to a place like Gausa Extension [on the outskirts of Obuasi] it is mostly the miners who through loans and other sources like Christmas bonus have been able to put up houses there and as a result, even if you give him one of the flats by the mines to rent, he would not accept it.

This points to miners' changing preference for owning rather than renting property, even if it is on relatively favourable terms from the mining company. Whereas miners used to return to their hometowns after their work ceased, given that was where they had access to rent-free housing in their family homes or where they had constructed housing for themselves, nowadays miners are increasingly investing in houses in situ and are more likely to remain in the mining settlement. This was highlighted by an assemblyman in Obuasi, who noted that the change in attitude towards miners' building activities has influenced their decisions on where to live after retirement: 'Before when they [miners] retired they used to go back to their hometowns but now that it is allowed for them to build here, they don't leave here when they retire.' This new attitude has been propelled by improved access to mortgages, which previously were unavailable in Ghana (Abdulai and Hammond, 2010). This option, however, is only accessible to miners working in large-scale mines with regular and verifiable salaries, showing how most are excluded from such financial opportunities. The mining companies also indirectly affected house construction in mining settlements through the payment of compensation to former miners and farmers who had their land taken. Following the closure of the underground mines in Prestea, some miners used their severance pay to build homes in the town. More recently Kenyasi has seen an influx of compensation paid by Newmont to farmers who have lost their land to the mining concession, many of whom have subsequently invested this money in house construction. A male youth in the Kenyasi focus group noted that 'People just go and plant things on their lands but will not attend to it expecting that Newmont will come for it and pay compensation on it. They call it 'mehuri so' (I jump to catch it). They use the money to build houses and rent them out [to migrants].' In the eyes of an indigenous female trader in Kenyasi:

If your land is affected and you are compensated you will never be poor again. One of my father's sons whose land the gold was first discovered on, the compensation that was given to him, he has used it to buy a lot of houses in Accra, Berekum and even here.

These insights show how investments made with compensation from mining companies are not restricted to the local area and economic ties are also forged with other, often urban, areas. A play, entitled 'The first payment', presented by school children in Kenyasi, reflected the positive aspects of compensation but also a more problematic side. The children enacted how families had used their first tranche of compensation from Newmont in differing ways. Whilst some became rich from investing the compensation money in businesses, one family decided to send their son abroad to earn money. He decided to travel with a friend through the desert but died on the journey, and his friend returned home to break the sad news to the parents of the deceased.
The fact that the children chose to tell this story indicates that aspirations and investment strategies linked to compensation payments from mining companies must be managed carefully because the outcomes are far from certain.

Galamsey and housing

Contrary to what might be expected, in recent years it is galamsey rather than large-scale mining that has stimulated house building in the mining settlements. As there is no housing provision for galamsey miners, their arrival in an area results in a sudden and cumulative increase in demand for accommodation. Whilst some of this demand is satisfied by renting out existing rooms, it also stimulates construction of additional rental accommodation. Interviews in Kenyasi revealed that before galamsey started, the indigenes were investing in construction elsewhere, such as in Kumasi. Now they are concentrating their building investment in Kenyasi as they can rent out houses/rooms to the migrant miners. In Prestea and Obuasi many householders are also building additional rooms onto their existing homes to rent out and some are building entire new houses for rent. Thus, many houses in the mining settlements have shifted from being primarily housing for family occupation to also being a source of income through renting. Whilst this occurs in other urban centres in Ghana (Yankson and Gough, 2014; Yankson et al., 2017), the rapid rate at which rooms are being converted/built for renting is linked to the influx of miners to work in galamsey. This in turn has a pervasive multiplier effect in the local economy, encouraging housing construction by both the miners and other residents who have benefited from the miners' everyday purchases. The Municipal Chief Executive of Obuasi explained how:

There is capital injection in the galamsey business. They are also building. They are buying cement and the market is also doing very well. You will see stores are opening by the day, motorbikes are being sold, so the market is good now. Now unlike before traders are putting up houses.

The interviews reveal how galamsey miners are not able to access formal bank loans as they do not have a reliable income. Yet, contrary to popular perception promoted by the media, they are not just eking out a living or squandering their money but are managing to invest in housing construction. According to members of a male focus group discussion in Prestea, 'When the galamsey came to this town there has been a lot of buildings put up,' and in Obuasi an assemblyman explained how 'Our brothers in the small-scale mining industry, by dint of their hard work, are putting up buildings. If you see the kind of buildings they are putting up they are not small buildings.' There also appears to have been a cultural change in attitudes towards building linked to a previous unwillingness by miners to demonstrate their wealth. Discussions with the Obuasi Small Scale Miners Association highlighted how in the past when those engaged in small-scale mining made major investments they disguised their name because they did not want to be victimised, but now anyone can do what they want with their money without worrying about public opinion. Today, even though galamsey is a far more unstable occupation than large-scale mining was in the 'golden days', miners are moving to Obuasi, Prestea and Kenyasi on a more permanent basis and creating attachments to the towns that are more enduring than in the past.
In Obuasi, for example, the Small-Scale Miners Association has invested in the provision of public toilets for the community, which they are paying people to manage and maintain. This sort of investment in community development projects and job creation activities points towards a long-term commitment to the town. The embeddedness of galamsey migrants in Prestea is reflected in their efforts to build houses, open shops, marry locals and put their children in school. This attachment to settlements, perhaps because of narratives of migrants in mining towns as nomadic opportunists, was overlooked by the authorities when they tried, unsuccessfully, to remove miners in the 2006 so-called 'Fight against illegal mining' (Hilson et al., 2007). The data highlight how the looming spectre of being forced to move, alongside economic precarity, means that it is not uncommon for migrants to build multiple houses, finances permitting. This is illustrated well by Kwesi, a 24-year-old miner from Abura-Dunkwa who was a galamsey miner in Prestea. Kwesi's experiences not only show how attachments to mining settlements are closely linked to material and social relations, but also illustrate how dynamics occurring in mining towns speak to broader urban processes taking place, particularly the 'greater diffusion of household livelihoods geographically as a means of accessing and protecting against oscillating employment opportunities and sources of income' (Simone, 2014: 223). Kwesi's original decision to move to Prestea was influenced by the fact that he has an aunt resident there, with whom he stayed when he first arrived. Any money he was making above immediate living expenses was being put aside to buy a plot of land and put up a structure 'even if it is only one bedroom.' In addition, he had been joined by his wife who established a store in front of the home to bolster the household's finances, and his younger brother had subsequently joined him thus further consolidating his embeddedness in Prestea. Kwesi explained, however, how his attachment to Prestea is accompanied by a desire to also build a house in his hometown, which he was in the process of doing. Like many of his peers, Kwesi is aware of the potential instability of galamsey, hence he has acquired land in his hometown where he is cultivating oil palm. By creating income-generating opportunities in two separate locations he is devising a long-term strategy to overcome economic uncertainty that might arise in either location. Kwesi's example illustrates how migrants' development of attachment to place in mining settlements includes but goes beyond house-building; they forge long-term attachments to the area by bringing their families with them and placing their children in schools or by marrying locally and raising families in situ. Attachment to mining settlements emerges out of social processes and relationships (cf. Ralph and Staeheli, 2011), as well as material ones such as house building, though the two are clearly interlinked. This is illustrated well by an elderly former miner who had been living in Prestea since 1962, who explained how, 'There are so many tribes in Prestea. So evening time when you go to the streets we have Dagomba people, Sissala people, Dagarti, Senya, Fante, all with their traditional dances.' It is through these everyday experiences and the forging of social relations that migrants develop a sense of attachment to mining settlements. 
This serves to confirm our broader argument that, contrary to popular understandings of incomers to mining settlements as opportunists, these migrants often aspire to build their own houses, which, alongside a range of social relations, contributes to their attachment to the mining settlements and desire to remain.

Conclusions

As indicated at the start of this paper, little is known about migration and settlement processes in urban mining settlements and whether and how residents develop attachment to such places. Drawing on research conducted in the mining settlements of Obuasi, Prestea and Kenyasi, three key findings are highlighted here. First, we have revealed how mining settlements experience considerable in-migration coinciding with the discovery of gold but how their evolution varies depending on the type of mining and the length of time the mines have been in operation. The two older mining settlements (Obuasi and Prestea) experienced a number of waves of migrants associated initially with large-scale mining and subsequently with galamsey, whereas the younger settlement (Kenyasi) has only experienced in-migration in relation to galamsey. Yet mobility in all three mining settlements is more nuanced and complex than typically found in understandings of mining settlements, with migrants and indigenes moving in and out, at times establishing multi-spatial households, and often combining a variety of occupations simultaneously. A second key finding emerging from this paper is that housing construction in urban mining settlements is closely linked to mining types as well as migrant waves. In the early days, large-scale mining companies provided housing for some of their workers; those who were ineligible for company housing rented rooms. Mining employees did not invest in housing construction for fear of losing their jobs, because company policy was to suspect any miner who was building of having stolen gold from the company. Ironically, contrary to what might be expected, it has been the influx of galamsey miners, most of whom are working informally in insecure conditions, that has led to increased investment in house construction in mining settlements. The switch to open-cast mining has also affected house construction as the compensation paid to those who have lost their land to the new concessions is often invested in house building for habitation and for rent, often to galamsey miners. Importantly, the paper shows how the impacts of large-scale mining and galamsey on settlement development differ but are closely interlinked. Third, although having a reliable income-generating activity is paramount in the decisions of migrants living in the mining settlements to stay, many develop a close attachment to these places via social and cultural processes. These include their families joining them or new families being established in situ, as well as engaging in activities associated with their hometown or region. Such social factors are buttressed by material considerations, in particular building a house; hence, importantly, the security of having one's own dwelling is shown to increase migrants' sense of attachment to a settlement. When migrants become embedded in a locale, both physically through a property and socially through kinship and friendships, they are much more likely to remain. Moreover, their purchasing power greatly increases trading and the demand for services in the settlements.
This benefits indigenes and migrants alike who set up a wide range of primarily retail and service businesses to meet this demand. Overall, the paper shows how migrants living in mining settlements should not be viewed as temporary residents, as many endeavour to and succeed in establishing roots in such towns, contributing to the settlements' social and economic vibrancy. It is vital that policy makers attempting to address the negative side effects of informal mining recognise these trends and do not repeat mistakes of the past, such as in Prestea (Hilson and Yakovleva, 2007;Hilson et al., 2007), where trying to remove galamsey miners caused considerable social unrest. Moreover, these findings reinforce claims regarding the importance of small and intermediate urban settlements as places that migrants move and become attached to, highlighting how such settlements are part of wider economic processes that are shaping and shaped by national legislation. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: We are grateful to the Department for International Development (DfID) and the Economic and Social Research Council (ESRC RES-167-25-0488) for their financial support of the Urban Growth and Poverty in Mining Africa (UPIMA) research programme. ORCID iD Katherine V Gough https://orcid.org/0000-0002-9638-9879 Notes 1. A settlement is defined as urban in Ghana if it has a population of over 5000 inhabitants. 2. A Municipal Chief Executive is the appointed public servant who leads a municipality, with a role similar to that of an elected Mayor in other countries.
Fate of Microplastics Released by Discarded Disposable Masks

The COVID-19 pandemic has led to a surge in the production of masks. Due to the rapid propagation of COVID-19 and the long survival time of the virus on plastic surfaces, a large number of masks are discharged into the environment without treatment. In this paper, the release of microplastics (MPs) in nature was simulated by using mask samples irradiated by ultraviolet (UV) light. After 28 days of ultraviolet radiation, part of the main chains of the mask polymer were broken and a large number of transparent MPs fell off. The longer the UV irradiation time, the larger the proportion of small-particle MPs. Owing to its charge treatment, the middle layer of the surgical mask is the most resistant to releasing MPs, while for the N95 mask the inner layer is the hardest to degrade.

Introduction

The COVID-19 virus has been rampant around the world since December 2019, causing severe respiratory syndrome [1,2]. Scientists around the world have found that the COVID-19 virus can be transmitted not only through direct contact, but also through contact with contaminated surfaces/wastes, air/respiratory droplets and feces [3,4]. Today, the global infection rate is increasing geometrically, and large numbers of people have developed symptoms of infection and died. Against this background, researchers found that masks can control the spread of the COVID-19 virus, so people are recommended to wear masks in public places to reduce the risk of virus transmission [5]. The severe epidemic situation has rapidly popularized the mask, a simple anti-virus measure, in many countries. As disposable consumer goods, masks are being produced at a rapidly growing rate worldwide. It is estimated that 129 billion masks are needed worldwide every month to stop the spread of COVID-19 [6]. In March 2020, the daily output of masks in China was 10 times that in January 2020, reaching 200 million [7]. Japan's production of masks has also increased rapidly, with orders for masks reaching 600 million in April 2020 [8]. The raw materials of disposable masks are the various polymer materials used for the production of all kinds of plastic products [9]. With the COVID-19 pandemic, all kinds of plastic waste are increasing [10]. India produces 22 kg of plastic waste for every 1000 COVID-19 RT-PCR tests [11], and 1.1 tons of disposable plastic are produced for every 250 tons of medical waste [12]. Due to the rapid transmission of COVID-19 and its extended survival time on plastic surfaces, people prefer newly produced plastic to recycled plastic [13,14]. Most of the discarded masks go directly into landfills, fresh water and oceans without any treatment, and are degraded into smaller microplastics (MPs) (plastic debris and particles less than 5 mm in diameter) through natural degradation/fragmentation or decomposition [15,16]. Researchers found that once disposable masks enter the environment, a large number of MPs can be produced in a short time, with the potential to spread across the globe [17]. Large numbers of discarded masks have recently been found on the beaches of Hong Kong, in the Magdalena River in Colombia, and along the highways and drainage of Ile-Ife in Nigeria [18, 6, 19]. Many masks may be carried from beaches into the sea and go on to affect the global environment [20]. Therefore, more and more attention has been paid to the large amount of MPs produced by disposable masks entering the environment during the COVID-19 pandemic.
Untreated disposable masks enter the environment and, after exposure to wind, light, water and wear, gradually become a major source of MPs [9, 21, 22]. Researchers have confirmed this conclusion using infrared spectroscopy [19]. MPs in the environment are difficult for microorganisms to degrade, so they gradually accumulate, affecting aquatic organisms, agriculture, forestry and tourism [23], threatening human health and safety [24,25], and posing a serious threat to biodiversity [26,27]. A large number of MPs produced by disposable masks have therefore entered the environment during the COVID-19 pandemic. So far, the characteristics and rules of the MPs released from masks in the natural environment have not been systematically studied. In this study, we used ultraviolet (UV) irradiation to simulate natural light and investigated the characteristics of MPs released by N95 masks and surgical masks in water. The release of MPs from disposable masks in the natural environment was studied, and the factors influencing large releases of MPs were investigated, in order to provide a basis for the mandatory recycling and disposal of plastic waste during the epidemic.

Preparation of Masks

In this paper, we chose N95 masks (3M9501/9502, USA) and surgical masks (Winner, China) commonly used during COVID-19 to investigate the rules and characteristics of MPs release in the natural environment. Both masks were produced in 2020 to prevent the virus. First, a complete new mask was cut into 1 cm × 1 cm blocks, and each block was separated into its outer layer, middle layer and inner layer. Then the blocks were cleaned with MP-free water prepared in the laboratory to remove the MPs particles on the surface of the mask. Finally, the blocks were dried under natural conditions; in this way, samples representing masks repeatedly used during the COVID-19 period were prepared.

Experimental Design: MPs Fall-off Caused by UV Radiation

Light irradiation is the most widely used technology for aging MPs, and it has been reported that mask polymers can be significantly degraded under UV-A (320-400 nm) and UV-B (280-320 nm) radiation [28]. One 1 cm × 1 cm single-layer mask block was put into a beaker together with 100 ml of MP-free water and a quantity of glass beads to simulate the friction experienced by masks in nature. The single-layer mask block was then irradiated 24 hours a day with a UV lamp (1 W/m2, 365 nm) for 28 days. During the experiment, 12 single-layer mask blocks (4 outer layers, 4 inner layers and 4 middle layers) of each kind of mask were randomly selected for UV irradiation. Three mask samples (one inner layer, one outer layer and one middle layer) were randomly taken out every 7 days for detection, and the temperature was maintained at 22±3ºC. This experiment was repeated three times from March 2021 to May 2021.
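The bookkeeping of this design is easy to get wrong, so here is a small sketch that encodes the irradiation protocol as plain data and derives the sampling schedule. The dictionary keys and the helper function are our own naming, not the paper's; the values are taken from the text above.

```python
# Hypothetical sketch of the UV-aging design described above.
# All names are ours; the values come from the text.
UV_PROTOCOL = {
    "irradiance_w_per_m2": 1.0,     # UV lamp intensity
    "wavelength_nm": 365,           # UV-A lamp
    "hours_per_day": 24,            # continuous irradiation
    "duration_days": 28,
    "sampling_interval_days": 7,    # one inner, outer and middle layer each time
    "temperature_c": (22, 3),       # 22 +/- 3 degrees C
    "blocks_per_mask_type": 12,     # 4 outer + 4 middle + 4 inner layers
}

def sampling_days(protocol):
    """Days on which three single-layer blocks are removed for analysis."""
    step = protocol["sampling_interval_days"]
    end = protocol["duration_days"]
    return list(range(step, end + 1, step))

print(sampling_days(UV_PROTOCOL))  # [7, 14, 21, 28]
```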
Qualitative and Quantitative Analysis of MPs

After UV irradiation, each sample was taken out and cleaned ultrasonically for 15 minutes (repeatedly used mask samples without UV irradiation were used as blank samples), and the mask sample was then rinsed three times with 200 ml of MP-free water. After cleaning, the sample was lifted out with clean tweezers and left to stand for 30 seconds to drain excess liquid. The rinse water from the three washes was collected and filtered through a polycarbonate membrane (pore diameter 10 μm, Whatman). The MPs particles shed under UV light were thus trapped on the polycarbonate membrane. After filtration, the polycarbonate membrane was dried in an oven (40ºC) for later use. All the instruments involved in this experiment were cleaned with MP-free water before the experiment to prevent MPs on the instruments from distorting the results. To count the MPs on the filter membrane visible to the naked eye (N1), a stereomicroscope (SMZ1270, Nikon, Japan; 16×) with a digital camera was used to locate and identify the MPs particles. Then a micro-Fourier Transform Infrared Spectrometer (micro-FTIR) equipped with Attenuated Total Reflection (ATR) (PerkinElmer Spectrum Spotlight 200i, PerkinElmer Inc., USA) was used to count the MPs particles invisible to the naked eye, determine their color and measure their size. Finally, the measurement results were compared with an FTIR database to determine type and structure. In this process, the total number of MPs particles invisible to the naked eye on the filter membrane (N2) can be calculated by the following formula:

N2 = N20 / (S0/S) = N20 × (S/S0),

where N20 is the number of MPs particles in the area detected by micro-FTIR imaging, and S0/S is the ratio of the area observed by micro-FTIR imaging to the area of the filter membrane. The sum of N1 and N2 is the total number of MPs on the filter membrane. The specific detection methods refer to Li et al. [29].
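The extrapolation in the formula above is simple enough to spell out in code. The sketch below is ours; the function name and the illustrative numbers are assumptions, not measured values. It combines the visible count with the micro-FTIR count scaled up from the imaged area to the whole membrane.

```python
def total_mps(n1, n20, area_membrane_mm2, area_imaged_mm2):
    """Total MPs on the membrane: the visible count plus the micro-FTIR
    count scaled from the imaged area to the whole membrane.

    Implements N = N1 + N2 with N2 = N20 / (S0/S) = N20 * (S / S0).
    """
    if area_imaged_mm2 <= 0:
        raise ValueError("imaged area must be positive")
    n2 = n20 * (area_membrane_mm2 / area_imaged_mm2)
    return n1 + n2

# Illustrative only: 12 visible particles, 30 particles found in a
# 25 mm^2 imaged window of a ~491 mm^2 membrane (25 mm diameter).
print(total_mps(n1=12, n20=30, area_membrane_mm2=491.0, area_imaged_mm2=25.0))
```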
Changes in Structure of the N95 Mask and Surgical Mask under UV Radiation

In this paper, ATR-FTIR was used to analyze the structural changes of each layer of the two kinds of masks after UV irradiation (Fig. 1); the analysis showed that the mask material used in the experiment was polypropylene. After 28 days of the experiment, each layer showed obvious and broadly similar changes. Each layer of the two kinds of masks had uniform and obvious absorption bands at 2945 cm-1 and 2867 cm-1, and the peak values decreased to varying degrees after UV irradiation; these bands correspond to the stretching vibration of the methyl group [30]. After 28 days of UV irradiation, the peak values also decreased at 2920 cm-1 and 2836 cm-1, which may be caused by the stretching vibration of the methylene group [31]. In addition, the peak values at 1450 cm-1 and 1380 cm-1 decreased, which demonstrates the efficient oxidation of aliphatic hydrocarbons under UV irradiation. Similarly, C-C stretching, C-H rocking vibration, methyl group asymmetric rocking vibration and methylene group asymmetric rocking vibration also occur in hydrocarbon-related components (1164 cm-1) after UV irradiation. It can be seen that UV radiation provides enough energy for C-C and C-H bond breaking to produce alkoxy and peroxy groups. The results showed that 28 days of UV irradiation led to the fracture of part of the main chain and a decrease of molecular weight in both kinds of masks, which reduced the mechanical properties of the masks and produced a large number of MPs [32]. This is consistent with the previous conclusion that UV can make some groups of polypropylene react and produce free radical ions, resulting in the shedding of a large number of MPs [33]. The differences in the structural changes of the two kinds of repeatedly used masks after UV irradiation were mainly reflected in the masks' middle layers. The intensity of each peak in the middle layer of the surgical mask decreased slightly with increasing UV irradiation time, but some small peaks appeared between 1640-1550 cm-1. These small changes mean that part of the C-H bonds break, forming a double bond structure [34]. The middle layer of the surgical mask is charged, which is the most significant difference between it and the N95 middle layer. UV irradiation and a humid environment can easily reduce the charge content of the fibers in the middle layer of the surgical mask [35], resulting in the falling off of MPs.

Effect of UV Radiation on MPs Released by Masks

The masks after ultrasonic treatment were prone to aging under UV irradiation, and a large number of MPs particles fell off from the N95 and surgical masks (Fig. 2a). This is consistent with the conclusion that once a mask is discarded into the environment, the number of MPs increases sharply under the action of light, wind, rain, friction and so on [36]. The amount of MPs released by the two kinds of masks increased with the extension of UV irradiation time. It can be seen that the number of MPs falling off the surgical mask was significantly greater than that of the N95 from 0-7 days (Fig. 2a). That may be because the surgical mask carries more loose plastic fibers on its surface than the N95 as a result of the production process, and these fall off during short-term UV irradiation. This process is similar to the release of ultra-fine fibers from fabric during washing [37]. At 7-14 days, the total number of MPs produced by the two kinds of masks was basically the same, and the increase of MPs from day 7 to day 14 was not obvious. It can be seen that it takes a certain time for mask fibers to wear and degrade into MPs in the natural environment. During the period of 14-28 days, both kinds of masks underwent a large outbreak of MPs shedding. At this time, the number of MPs falling off the N95 was significantly higher than that of the surgical mask. It can be seen that after 14 days of UV irradiation, the mask fibers were worn and degraded, resulting in a large amount of MPs shedding. With the passage of time, the macroplastics falling off the masks in the early stage may gradually decompose into MPs, which could greatly increase the number of MPs [38,39]; some studies have found that macroplastics have a high ability to release MPs into water [40]. From 21 to 28 days, the shedding of MPs from both masks slowed; previous studies have shown that mask polymers can be crosslinked with some functional groups through short-term UV irradiation, resulting in a reduction of MPs production [41]. In general, when a mask has been in the environment for more than 14 days, it will produce explosive MPs pollution, and the amount of MPs produced by the N95 is higher than that of the surgical mask. In order to avoid a large amount of MPs being produced by masks, masks should be collected and treated within 14 days. Fig. 2b) shows the color distribution of MPs produced by the N95 mask and surgical mask after UV irradiation. Three kinds of MPs were detected among the MPs produced by the masks after 28 days of UV irradiation, and transparent MPs accounted for the vast majority, about 94%-100%. This is consistent with the conclusion of Wu et al. [42] that most of the MPs falling off different kinds of masks are transparent, mainly because the masks are mostly composed of colorless fibers, so transparent MPs dominate among the shed particles [43]. The longer the masks were in the environment, the more colored MPs were produced. After 28 days of UV irradiation, the blue and red MPs of the N95 mask reached 8.5% and 2.9% respectively, and the blue and red MPs of the surgical mask reached 6.8% and 3.0% respectively.
Other colors detected were red and blue, with blue predominating. Blue fabric usually appears in the outer layer of the mask, which makes it more vulnerable to radiation-induced shedding [44]. UV light is first absorbed by the unsaturated bonds or chromophores in the outer mask material to form polymer free radicals (unsaturated points). The free radical reactions eventually lead to chain breaking and cross-linking of the polymer. Therefore, the outer layer, with its colored material, is the easiest to degrade and produces a large number of MPs particles. It has been found that colored MPs can adsorb more harmful substances such as heavy metals and organic pollutants, thus causing more toxic effects on the organisms that preferentially ingest colored MPs [45], which increases the threat to the environment.

The Particle Size of MPs Produced by UV Irradiation of Masks

Fig. 3 shows the particle size distribution of the MPs produced by UV irradiation of the N95 and surgical masks. All MPs samples were divided into 6 groups according to particle size. During the 28 days of the reaction, the total amount of MPs of <100 um, 100-500 um and 500-1000 um accounted for more than 70% of the MPs produced by the two kinds of masks, and the highest content reached 81.83%. It can be seen that the MPs produced by the masks after UV irradiation were mainly of small particle size. Moreover, after 28 days of UV irradiation, the total amounts of MPs from the N95 mask and the surgical mask reached their highest values, 114 particles/cm2 and 87 particles/cm2, respectively. Researchers studying plastics such as polyethylene, polypropylene and polystyrene found that it takes time for pieces of plastic to break into smaller ones [46]. Because the waste cannot be recycled in time, discarded disposable masks accumulate in the environment for a long time, which may increase the time available to produce smaller MPs particles. This explains why the amount and proportion of the three kinds of small-size MPs from the two kinds of masks increase with time. Studies have found that small MPs particles are difficult to remove in sewage treatment plants and enter organisms more easily, causing biological toxicity to cells, dissolved oxygen, etc. Therefore, a large number of discarded disposable masks entering nature will inevitably lead to environmental deterioration.

It can be seen from Fig. 3 that the proportion of MPs particles <100 um from the N95 mask decreased with the extension of reaction time, from 36.94% on the 7th day to 22.24% on the 28th day. The sizes of MPs particles shed from the surgical mask differed slightly from those of the N95, with the proportion of MPs particles <100 um first increasing and then decreasing. During the whole process, the proportion of 100-500 um MPs from the two kinds of masks increased continuously: for N95 masks from 24.56% (7th day) to 38.24% (28th day), and for surgical masks from 34.78% (7th day) to 42.56% (28th day). That is to say, for the dense plastic fibers of the two kinds of masks to fall off, a longer UV irradiation time is needed, and they are shed preferentially in the form of larger particles. MPs particles larger than 1000 um account for only a small proportion of the MPs particles that eventually fall off. On the one hand, the UV irradiation time may be insufficient; on the other hand, UV irradiation alone may not be able to detach a large number of large-particle-size MPs [44,47]. The MPs of 1000-1500 um from the surgical mask fell off explosively on the 7th day and decreased significantly in the later stage. This may be due to the large particle size of the loose plastic fibers adhering to the mask from the production process.

MPs Produced by Aging of Different Layers in Mask

When a mask is exposed to UV radiation, the surface is weathered, resulting in changes in physical and chemical properties. The surface of each layer of the mask becomes rougher and more fragile, and forms MPs more easily [44,48]. It can be seen from Fig. 4a) that for the N95 mask, the concentration of MPs particles falling off the outer layer was the highest, followed by the middle layer, while the concentration of MPs particles released from the inner layer was the lowest over the whole 28 days of reaction. This result is consistent with the results of previous studies. The inner layer of the N95 mask is a nonwoven fabric, which provides a certain comfort. The middle layer is made of superfine polypropylene fiber melt-blown material, which provides better filtration performance. The outer layer consists of nonwovens and an ultra-thin polypropylene melt-blown material layer, which has a certain waterproof performance. The only difference between the outer layer and the inner layer of the N95 mask is that the outer layer contains the ultra-thin polypropylene melt-blown material layer. Therefore, after 28 days of UV irradiation, MPs fall off from the ultra-thin polypropylene melt-blown material layer. The diameter of the fibers in the middle layer is about 2 um, which is only one tenth of that of the nonwoven fabric. Therefore, the thinner middle layer sheds MPs particles more easily under UV irradiation, while the inner layer obviously sheds them more slowly. Surgical masks usually consist of three layers: the outer hydrophobic spun-bonded layer, the middle melt-blown cloth layer and the inner non-woven cloth layer. The biggest difference between the N95 and the surgical mask is that the filter layer in the middle of the surgical mask and the melt-blown fabric used to make the mask are charge treated [35]. It can be seen from Fig. 4b) that the outer layer of the surgical mask was the most likely to shed a large amount of MPs: over the 28 days of reaction, the number of MPs falling off the outer layer was always the largest among the three layers. The second largest proportion came from the inner layer, and the layer that formed MPs with the most difficulty was the middle layer. The middle layer is a charged melt-blown filter material; a humid environment accelerates the aging of the material, reducing the charge content of the fibers in the middle layer, so a certain degree of MPs shedding occurs after UV irradiation. The outer layer of the surgical mask is usually coated with polyurethane, which gives the polypropylene non-woven material better wear resistance, good tensile resistance and less deformation. But because the main culprit of polyurethane aging is UV light, the outer layer easily sheds a large number of MPs particles under UV light.
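To make percentages like those reported above reproducible from raw counts, the following sketch bins per-class particle counts and reports each class's share. The six class boundaries follow the text where stated and are otherwise our assumption; the example counts are made up for illustration and are not the measured data.

```python
# Size classes used in the text (upper bounds in micrometres); the last
# three boundaries are our assumption, since the text names only some.
SIZE_CLASSES_UM = [100, 500, 1000, 1500, 2000, 5000]

def size_class_shares(counts):
    """counts: particle counts per size class, smallest class first.
    Returns the percentage share of each class."""
    total = sum(counts)
    if total == 0:
        return [0.0] * len(counts)
    return [round(100.0 * c / total, 2) for c in counts]

# Hypothetical day-28 counts for one mask type (not measured data).
counts_day28 = [25, 43, 24, 10, 7, 5]
for upper, share in zip(SIZE_CLASSES_UM, size_class_shares(counts_day28)):
    print(f"<= {upper} um: {share}%")
```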
Suggestions on Management of Discarded Masks

Our results show that discarded masks without effective treatment may release a large amount of MPs pollution into the environment. Therefore, suggestions on waste management are needed. UV radiation leads to the breaking of part of the main chains, a reduction of molecular weight and a reduction of mechanical strength, resulting in the release of a large number of MPs, and the MPs produced by masks are mainly of small particle size. The longer the UV irradiation time, the more small-particle-size MPs are produced, which is more harmful to the environment. Therefore, waste masks should not be exposed to the sun for a long time after collection, to prevent MPs, especially small-particle-size MPs, from polluting the environment. After 14 days of UV irradiation, the number of MPs produced by masks surges. Therefore, discarded masks should be collected and disposed of within 14 days as far as possible. The outer layers of the N95 and surgical masks release MPs particles most easily, and produce more of the harmful colored MPs. Therefore, the outer layer of the mask should be collected and treated preferentially. In this paper, the causes of MPs shedding from masks under UV irradiation were studied, and the particle size, color and layer distribution of the MPs were investigated. This has guiding significance for understanding the shedding of MPs from masks in the environment. However, in the natural environment a mask experiences wind, light, heat, wear and other factors that cause MPs to shed. Therefore, the number of MPs falling off masks needs to be studied in practice. To sum up, future research on the shedding characteristics of MPs should focus on real environments or comprehensive multi-factor simulation experiments, and regulations on mask treatment should be formulated at the policy level to prevent more MPs from entering the environment.

Conclusions

Due to the rapid transmission of COVID-19 and the extended survival time of the virus on plastic surfaces, the recycling and management of masks has been inadequate. Therefore, a large number of masks enter the environment without any treatment, resulting in an explosion of MPs in the environment. The specific conclusions are as follows: (1) UV radiation caused part of the main chains of the mask to break, causing a large number of MPs to be released. (2) The release of MPs from masks explodes after 14 days in the environment; therefore, discarded masks should be collected and disposed of within 14 days. (3) The longer the UV irradiation, the more small-particle-size MPs are released. (4) During the 28 days of the experiment, the middle layer of the surgical mask and the inner layer of the N95 mask released MPs with the most difficulty.
Trace formalism for motivic cohomology

The goal of this paper is to construct trace maps for the six functor formalism of motivic cohomology after Voevodsky, Ayoub, and Cisinski-Déglise. We also construct an ∞-enhancement of such a trace formalism. In the course of the ∞-enhancement, we need to reinterpret the trace formalism in a more functorial manner. This is done by using Suslin-Voevodsky's relative cycle groups.

Introduction

Let f : X → S be a flat morphism of dimension d between schemes of finite type over a field k. Let Λ be a torsion ring in which the exponential characteristic of k is invertible. In [SGA4, Exposé XVIII, Théorème 2.9], the trace map Tr_f : Rf_!f^*Λ(d)[2d] → Λ satisfying various functorial properties is constructed. Here, the cohomological functors are taken for the étale topoi. Furthermore, the trace map is characterized by such functorialities. This trace map is fundamentally important, and for example, it is used to construct the cycle class map. In other words, we may view the trace formalism as a device to throw cycle-theoretic information into the cohomological framework. The main goal of this paper is to construct an analogous map for the motivic cohomology of Voevodsky, and its ∞-enhancement. The ∞-enhancement of the trace formalism will serve as an interface between "actual cycles" and the "∞-enhancement of motivic cohomology" in [Abe22b].

Let us explain the method to construct the trace formalism. From now on, we consider the six functor formalism of the motivic cohomology theory with coefficients in Λ := Z[1/p], where p is the characteristic of our base field k. The principle that makes the construction of the trace map work is the observation that the higher homotopies vanish. More precisely, we have

(1.1)  R^i Hom(Rf_!f^*Λ(d)[2d], Λ) = 0  for i < 0.

A benefit of this vanishing is that if we take an open subscheme j : U → X such that U_s ⊂ X_s is dense for any s ∈ S, then constructing Tr_f and constructing Tr_{f∘j} are equivalent. In [SGA4], this property is used ingeniously to reduce the construction to simpler situations. Another benefit which is more important for us is that the vanishing allows us to construct the map "locally". Namely, by the vanishing, constructing Tr_f is equivalent to constructing a morphism R^{2d}f_!f^*Λ(d) → Λ of sheaves. In the case of étale cohomology, since it admits proper descent, by de Jong's alteration theorem, the construction is reduced to the case where S is smooth. We note that we commonly use de Jong's alteration theorem to reduce proving properties to smooth cases, but reducing constructions to smooth cases needs control of higher homotopies, which requires a great amount of effort in general. In the case where S is smooth, the construction is easy because we have an isomorphism

Hom(Rf_!f^*Λ(d)[2d], Λ) ≅ Hom(Rp_{X!}p_X^*Λ(d_X)[2d_X], Λ),

where p_X is the structural morphism for X and d_X := dim(X), using the relative Poincaré duality, namely the isomorphism p_S^*(d_S)[2d_S] ≅ p_S^!.
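For the reader's convenience, here is a sketch of the adjunction computation behind the displayed isomorphism; this unwinding is ours, not spelled out in the paper at this point, and it assumes X equidimensional with d_X = d + d_S:

```latex
\begin{aligned}
\operatorname{Hom}\bigl(Rf_!f^*\Lambda(d)[2d],\ \Lambda_S\bigr)
 &\simeq \operatorname{Hom}\bigl(f^*\Lambda_S(d)[2d],\ f^!\Lambda_S\bigr)
   && \text{adjunction } (Rf_!, f^!) \\
 &\simeq \operatorname{Hom}\bigl(\Lambda_X(d)[2d],\ f^!p_S^!\Lambda(-d_S)[-2d_S]\bigr)
   && p_S^! \simeq p_S^*(d_S)[2d_S] \\
 &\simeq \operatorname{Hom}\bigl(\Lambda_X(d_X)[2d_X],\ p_X^!\Lambda\bigr)
   && p_X^! = f^!p_S^!,\quad d_X = d + d_S \\
 &\simeq \operatorname{Hom}\bigl(Rp_{X!}p_X^*\Lambda(d_X)[2d_X],\ \Lambda\bigr)
   && \text{adjunction again.}
\end{aligned}
```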
In the case of étale cohomology, in [SGA4], the relative Poincaré duality theorem is established by using the trace formalism, and the argument we explained here is somewhat circular. However, in the theory of motives, the relative Poincaré duality follows from theorems of Morel-Voevodsky, Ayoub, and Cisinski-Déglise which use completely different methods, and the above argument actually works.

Now, assume we wish to enhance the trace map ∞-categorically. The first question that immediately comes up is what is meant by "∞-enhancement" in this situation. To address the question, we need a reinterpretation of the trace map, and to motivate our reinterpretation, let us discuss a defect of the traditional formalism. Let f be a flat morphism between non-reduced schemes such that f_red is not flat. In this situation, we have the trace map Tr_f. However, since motivic or étale cohomology is insensitive to nil-immersions, Tr_f induces a similar map for f_red. This observation gives us an impression that the trace map should be associated with a "cycle" rather than a "scheme". To realize this idea, we use the relative cycle group of Suslin and Voevodsky. For a morphism f : X → S, they defined a group denoted by z_equi(X/S, d) which is a certain subgroup of the group of cycles in X equidimensional of dimension d over S (see [SV]). When f is flat of dimension d, the cycle [X] is an element of z_equi(X/S, d). Using these observations, we show that there exists a morphism

z_equi(X/S, n) → Hom(Rf_!f^*Λ(n)[2n], Λ)

for any n, such that, when f is flat of dimension d, the image of [X] ∈ z_equi(X/S, d) is the traditional trace map. The object Hom(Rf_!f^*Λ(n), Λ) is often called the Borel-Moore homology, and is denoted by H^BM(X/S, Λ(n)). Note that we are considering it as an object of the derived category (or as a spectrum). The associations of z_equi(X/S, n) and H^BM(X/S, Λ(n)) to X/S are functorial with respect to base changes of S and pushforwards along proper morphisms X′ → X over S. These functorialities yield (∞-)functors from a certain category Ar to the ∞-category of spectra Sp. The ∞-enhancement of the trace map can be formulated as a natural transform between these ∞-functors, and we will show the existence of such an ∞-functor in the last section. This ∞-enhancement of the trace map is one of the crucial ingredients in [Abe22b].

Before concluding the introduction, let us present the organization of this paper. In Section 2, we recall the six functor formalism of the theory of motives after Voevodsky, Ayoub, and Cisinski-Déglise. In Section 3, we formulate our main result. To describe the functoriality of z_equi(X/S, n) and H^BM(X/S, Λ(n)) above, it is convenient to use the language of "bivariant theory" after Fulton-MacPherson. We start by recalling such a theory, and we state our main theorem. We conclude this section by showing an analogue of (1.1) in the motivic setting. In Section 4, we construct the trace map in the case where the base scheme S is smooth. In Section 5, we construct the trace map in general and show the main result. In Section 6, we establish the ∞-enhancement. We note that, even though we use the language of ∞-categories throughout the paper for convenience and coherence, it is straightforward to formulate and prove the results of Sections 2 to 5 using the language of model categories, as in [CD15, CD19]. Using the language of ∞-categories is more essential in Section 6.
[Lur09, Definition 5.5.3.1])spanned by stable ∞-categories.We have the functor SH : Sch We may find a summary of the axioms of what this means in [Abe22a, §6.1], and also references.Among other things, we may use "six functors".In this ∞-categorical context, we can find a construction of six functor formalism in [Abe22a, §6.8], which follows the idea of [Kha16].Let X ∈ Sch /S .Then D X is a symmetric monoidal stable ∞-category.Given a morphism f : X → Y in Sch /S , the functor D induces the functor D Y → D X , which we denote by f * in accordance with the six functor formalism of Grothendieck.The functor f * admits a right adjoint, which we denote by f * .We also have the "extraordinary pushforward functor" f !: D X → D Y as well as its right adjoint f ! .We have the natural transform f !→ f * which is an isomorphism when f is proper. The orientation on HΛ yields an orientation on D in the sense of [CD19, Definition 2. Remark.If the reader feels uncomfortable with using ∞-categories, it is essentially harmless to replace P r L st by the the (2, 1)-category of triangulated categories T ri above.Then, we may regard D T as a triangulated category.The only exception might be that when we consider descents.In order to consider descents inside the traditional framework, we need to introduce the category of diagrams as in [CD19,§3].Therefore, strictly speaking, simply considering the functor Sch op /S → T ri is not enough for the theory of descent.We leave the details to the interested reader. Let Here, we view H BM as a spectrum.When the coefficient ring Λ is obvious, we abbreviate H BM (X/S, Λ(n)) by H BM (X/S, n).We write H BM m (X/S, F ) for π m H BM (X/S, F ), and call it the Borel-Moore homology.Note that π m H BM (X/S, n) coincides with (HΛ) BM m,n (X/S) in [Deg18].Assume we are given a closed subscheme Z ⊂ X and denote the complement by U .By localization sequence of 6-functor formalism, we have the long exact sequence 2.3. We introduce the pdh-topology as follows. Definition.We define pdh-topology on Sch /k to be the topology generated by the following two types of families: (1) {f : Y → X}, where f is finite surjective flat morphism of constant degree power of p; (2) cdh-covering. We call dh-topology what is called dh-topology in [CD15, §5.2].Obviously, cdh-topology is coarser than pdh-topology, and pdh-topology is coarser than dh-topology for any p.Let S be an object of Sch /k .Recall that the theorem of Temkin [T], which is a refinement of Gabber's prime-to-alteration theorem, states as follows: there exists an alteration S → S whose generic degree is some power of p and S is smooth.Without Temkin's theorem, pdh-topology might have been useless, but armed with the theorem, we can show the following statement as usual. Lemma.For any S ∈ Sch /k , there exists a pdh-covering f : T → S such that T is a smooth k-scheme.We may even take f to be proper. Proof.Even though the argument is standard, we recall a proof for the sake of completeness.We use the induction on the dimension of S. 
Using Temkin's theorem, take an alteration T₁ → S whose generic degree is a power of p and T₁ is smooth. By using Gruson-Raynaud's flattening theorem, we may take a modification S′ → S with center Z ⊂ S such that the strict transform T₂ of T₁ is flat over S′. By construction, T₂ → S′ is a finite surjective flat morphism whose degree is a power of p, and thus {T₂ → S′} is a pdh-covering. By the induction hypothesis, we may find a proper pdh-covering W′ → Z such that W′ is smooth. Because {Z, S′ → S} is a pdh-covering, {W′, T₂ → S} is also a pdh-covering. This covering factors through {W′, T₁ → S}, so the latter is a pdh-covering as well. Thus, we may simply take T := W′ ⊔ T₁.

For any S ∈ Sch_{/k}, we may find a pdh-hypercovering S_• → S such that S_i is k-smooth by standard use of the lemma above and [SGA4, Exposé V^bis, Proposition 5.1.3].

2.4. We have the following pdh-descent, which is a straightforward corollary of a dh-descent result by S. Kelly.

Lemma. Assume p⁻¹ ∈ Λ. Then any object of D_S satisfies pdh-descent. In other words, if we are given a pdh-hypercovering p_• : S_• → S and F ∈ D_S, the canonical morphism F → lim_{[n]∈Δ} p_{n*}p_n^*F is an equivalence.

Proof. Let C denote the cone of the morphism in question. We wish to show that C ≃ 0, and for this, it suffices to show that C ⊗_{Z[1/p]} Z_(ℓ) ≃ 0 for any prime ℓ ≠ p (cf. [CD15, proof of Proposition 3.13]). To show this, we must show that for any compact object G ∈ D_S, we have Hom(G, C ⊗ Z_(ℓ)) ≃ 0. We have Hom(G, C ⊗ Z_(ℓ)) ≃ Hom(G, C) ⊗ Z_(ℓ) by the compactness of G. We may further compute as

Hom(G, p_{n*}p_n^*F) ⊗ Z_(ℓ) ≃ Hom(p_n^*G, p_n^*F) ⊗ Z_(ℓ) ≃ Hom(p_n^*G, p_n^*F ⊗ Z_(ℓ)) ≃ Hom(p_n^*G, p_n^*(F ⊗ Z_(ℓ))),

where the first and second equivalences follow from the adjunction and the compactness of p_n^*G, respectively. By [CD15, Theorem 5.10], F ⊗ Z_(ℓ) admits dh-descent, in particular pdh-descent. Thus, combining with the computations above, we have C ⊗ Z_(ℓ) ≃ 0 as desired.

Now, let G ∈ D_S. Then we have Hom(G, F) ≃ lim_{[n]∈Δ} Hom(p_n^*G, p_n^*F). We write Hom^i(−, −) := π_{−i} Hom(−, −), and the limit above induces a spectral sequence

(2.1)  E_1^{p,q} = Hom^q(p_p^*G, p_p^*F) ⇒ Hom^{p+q}(G, F).

Let us recall the definition of bivariant theory after Fulton and MacPherson very briefly.

Definition. A bivariant theory T over k is an assignment to each morphism f : X → Y in Sch_{/k} of a Z-graded Abelian group T(f) equipped with three operations:
(1) (Product) For composable morphisms f : X → Y and g : Y → Z, we have a homomorphism of graded groups T(f) ⊗ T(g) → T(g ∘ f), written (α, β) ↦ α · β.
(2) (Pushforward) Assume we are given composable morphisms f and g as in (1). If, furthermore, f is proper, we have the homomorphism f_* : T(g ∘ f) → T(g).
(3) (Pullback) Assume we are given a Cartesian square with vertical morphisms f : X → Y and f′ : X′ → Y′ and lower horizontal morphism g : Y′ → Y. Then we have the homomorphism g^* : T(f) → T(f′).

These operations are subject to (more or less straightforward) compatibility conditions. Among these compatibility conditions, let us recall the projection formula for later use. We consider the Cartesian diagram

(3.1)  X′ → X over Y′ → Y, with f : X → Y, f′ : X′ → Y′ and g : Y′ → Y,

such that g is proper, and a morphism h : Y → Z. Assume we are given α ∈ T(f) and β ∈ T(h ∘ g). Then, denoting by g′ : X′ → X the base change of g, the projection formula states that g′_*(g^*(α) · β) = α · g_*(β) in T(h ∘ f). Given bivariant theories T, T′, a morphism of theories T → T′ is a collection of homomorphisms T(f) → T′(f) for any morphism f in Sch_{/k} compatible with the operations above. We refer to [FM81, §2.2] for details.(1)

(1) In our situation, "confined maps" are "proper morphisms" and any Cartesian squares are "independent squares".
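As a sanity check on the projection formula, one can record where each term lives; this verification is ours, with g′ : X′ → X denoting the base change of g, which is again proper:

```latex
\begin{aligned}
&g^*(\alpha) \in T(f'), \qquad f' : X' \to Y',\\
&g^*(\alpha)\cdot\beta \in T(h\circ g\circ f') = T(h\circ f\circ g'),
  \qquad \text{since } g\circ f' = f\circ g',\\
&g'_*\bigl(g^*(\alpha)\cdot\beta\bigr) \in T(h\circ f)
  \qquad \text{(pushforward along the proper } g'\text{)},\\
&\alpha\cdot g_*(\beta) \in T(h\circ f), \qquad g_*(\beta) \in T(h),
\end{aligned}
```

so both sides of the projection formula indeed lie in T(h ∘ f).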
Our Borel–Moore homology H^BM_a(X/S, Λ(b)) defines a bivariant theory (in an extended sense, because it is bigraded), cf. [Deg18, §1.2.8]. By associating the graded group ⊕_k H^BM_{2k}(X/S, Λ(k)) to X → S, we define the bivariant theory denoted by H^BM_{2*}(X/S, Λ(*)). This bivariant theory has a canonical orientation as follows. Let q : A¹_S → S be the projection. Then the fundamental class of q yields an element of H^BM_2(A¹_S/S, Λ(1)), and for S = Spec(k) this gives the canonical A¹-orientation.

3.4. Let us introduce another main player of this paper, z(−, −), from [SV]. Let f : X → S be a morphism, and let d ≥ 0 be an integer. Recall that Suslin and Voevodsky (2) introduced Abelian groups z_equi(f, d) and z(f, d), or z_equi(X/S, d) and z(X/S, d) if no confusion may arise. ((2) In fact, Suslin and Voevodsky used the notation z(X/S, d) for a presheaf on Sch_/S. Our z(X/S, d) is the group of global sections of it.) We do not recall the precise definition of these groups, but content ourselves with giving an idea of how they are defined. Both groups are certain subgroups of the free Abelian group Z(X) generated by the integral subschemes of X. If we are given an element w ∈ Z(X), we may consider its "support", denoted by Supp(w), in an obvious manner. Naively, we would wish to define z(X/S, d) as the subgroup of Z(X) consisting of w such that Supp(w) → S is equidimensional of dimension d over the generic points of S. However, if we defined z(X/S, d) in this way, the association of z(X_T/T, d) to T would not be functorial. In order to achieve this functoriality, Suslin and Voevodsky introduced ingenious compatibility conditions. We do not recall these compatibility conditions, but here is an illuminating example: let Z ⊂ X be a closed immersion such that the morphism Z → S is flat.

These groups assemble into a bivariant theory with A¹-orientation.

Proof. Given any morphism α : T → S, the pullback homomorphism is defined in [SV, right after Lemma 3.3.9]. Given a proper morphism X → Y, the pushforward homomorphism is defined in [SV, Corollary 3.7.5]. We may endow it with an A¹-orientation by taking η := [A¹]. The compatibility conditions for these operations have also been proven in [SV].

Our main theorem is as follows.

Theorem. Recall that the base field k is a perfect field of characteristic p > 0, and let Λ := Z[1/p]. Then, there exists a unique map of bivariant theories compatible with the A¹-orientation: τ : z(−, ∗) → H^BM_{2∗}(−, Λ(∗)). A proof of this theorem is given at the end of Section 5.

Let us introduce a notation. Let f : X → S be a flat morphism of relative dimension d. Then [X] is an element of z_equi(f, d). If we are given τ as above, we have τ([X]) ∈ H^BM_{2d}(X/S, Λ(d)). This element is denoted by Tr^τ_f.

Remark 3.6. (1) Our theorem produces trace maps only for the motivic Eilenberg–MacLane spectrum, and the reader might think that our theorem is too restrictive. However, this is not the case, since the motivic Eilenberg–MacLane spectrum is universal among "absolute SH-spectra E with orientation which are Λ-linear and whose associated formal group law is additive" by [Deg18, Remark 2.2.15]. More precisely, if we are given such an absolute SH-spectrum E, we have a unique map φ : HΛ → E. Associated to this map, we may consider the composition z_equi(−, ∗) → H^BM_{2∗}(−, Λ(∗)) → E^BM_{2∗}(−, ∗), where the last object is the Borel–Moore homology associated with E, and we get trace maps for E.

(2) Choose E to be the ℓ-adic étale absolute spectrum H_ét Q_ℓ for ℓ ≠ p. By the construction above, we have z_equi(X/S, d) → H^BM_{ét,2d}(X/S, d), where H^BM_{ét,*}(X/S, *) is the ℓ-adic Borel–Moore homology. If f is a flat morphism of dimension d, the image of [X] ∈ z_equi(X/S, d) by this morphism is denoted by Tr^ét_f. This element of H^BM_{ét,2d}(X/S, d), considered as a morphism H^*_c(X/S, Λ_S(d)[2d]) → Λ_S, coincides with the trace map defined in [SGA4, Exposé XVIII, Théorème 2.9]. Thus, the morphism τ can be seen as a generalization of the trace map of loc. cit., at least when the base field is perfect.
(3) When X → S is a g.c.i. morphism, Déglise defined a similar map in [Deg18, Theorem 1]. In fact, our map can be considered as a generalization of [Deg18] (even though we only consider the case over a field), or rather, is built upon Déglise's map. (4) The theorem also holds in the case where p = 0 and Λ = Z. Furthermore, in the case where p > 0, if we assume the existence of resolution of singularities, we may in fact take Λ = Z in the theorem. The proof works with obvious changes, and the details are left to the reader. (5) The theorem, in fact, holds for any field k, not necessarily perfect. Indeed, let l := k^perf be the perfection. The compact-support cohomology H^*_c(X/S) is compatible with arbitrary base change. Thus, by [EK20, Corollary 2.1.5], or alternatively [CD15, Proposition 8.1], the pullback homomorphism H^BM_p(X/S, Λ(q)) → H^BM_p(X_l/S_l, Λ(q)) is an isomorphism since p^{−1} ∈ Λ. Using this isomorphism, the trace map for H^BM_{2*}(X_l/S_l, Λ(*)), constructed above, induces the trace map for H^BM_{2*}(X/S, Λ(*)) as well.

3.7. Before going to the next section, let us show the most important property needed to construct the trace map, namely the vanishing of suitable higher homotopies. For a morphism f : X → S, we put dim(f) := max_y dim(f^{−1}(y)), the maximum over the points y of S.

Proposition. For a morphism f : X → S in Sch_/k and an integer d such that dim(f) ≤ d, we have H^BM_{2m+n}(X/S, Λ(m)) = 0 in one of the following cases: (1) for any m > d and any n, (2) when m = d and for any n > 0.

In general, we proceed by induction on the dimension of X. We may assume X is reduced. There exists Z ⊂ X such that X ∖ Z is smooth and dim(Z) < d, since k is assumed perfect. We have the exact sequence H^BM_{2m+n}(Z, Λ(m)) → H^BM_{2m+n}(X, Λ(m)) → H^BM_{2m+n}(X ∖ Z, Λ(m)), and the claim follows by the smooth case we have already treated. We next assume that S is smooth over k. We may assume that S is of equidimension e. Let π : S → Spec(k) be the structural morphism. Then we have H^BM_{2m+n}(X/S, Λ(m)) ≅ H^BM_{2(m+e)+n}(X, Λ(m+e)), and we get the vanishing by the S = Spec(k) case.

Finally, we treat the general case. We take a pdh-hypercovering S_• → S such that each S_i is smooth. Let F, G ∈ D(S). Then by the pdh-descent spectral sequence (2.1): if E_2^{p,q} = 0 for q < 0, then Hom^i(G, F) = 0 for i < 0. Thus, we get the claim by applying this to the objects computing H^BM_{2m+n}(X/S, Λ(m)).

Remark. Consider the case where p may not be invertible in Λ. If S is smooth, then the proposition holds. If we further assume resolution of singularities, the proposition also holds for any f.

4. Construction of the trace map when the base is smooth

Let f : X → S be a flat morphism. When S is smooth, we will construct in this section a map which is supposed to be the same as Tr^τ_f.

4.1. For a scheme Z, we often denote dim(Z) by d_Z. Let f : X → S be (any) separated morphism of finite type such that S is smooth equidimensional, and put d_f := d_X − d_S. In this case, let us construct an element t_f ∈ H^BM_{2d_f}(X/S, Λ(d_f)), which we will show to be equal to Tr^τ_f when f is flat.

Let us start to construct t_f. Considering componentwise, it suffices to construct the element when S is connected. For any separated scheme X of finite type over k, we have the canonical isomorphism CH_{d_X}(X; Λ) ≅ H^BM_{2d_X}(X, Λ(d_X)) by [Jin16, Corollary 3.9]. We also have H^BM_{2d_f}(X/S, Λ(d_f)) ≅ H^BM_{2d_X}(X, Λ(d_X)), where the isomorphism follows since g^*⟨d⟩ ≅ g^! for any equidimensional smooth morphism g of relative dimension d. Let X = ⋃_{i∈I} X_i be the decomposition into irreducible components, and let I′ ⊂ I be the subset of i such that dim(X_i) = d_X. Let ξ_i be the generic point of X_i. The element in H^BM_{2d_f}(X/S, Λ(d_f)) corresponding via the isomorphisms above to the element Σ_{i∈I′} lg(O_{X,ξ_i})·[X_{i,red}] ∈ CH_{d_X}(X; Λ) on the right-hand side is defined to be t_f.
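For the reader's convenience, the construction of t_f just given can be compressed into one display; this is a sketch under the conventions above (the isomorphisms being those of [Jin16] and the smoothness of S), not a formula quoted from the original.

```latex
% Sketch: the trace element t_f for f : X -> S with S smooth, d_f = d_X - d_S.
\[
\mathrm{CH}_{d_X}(X;\Lambda)\;\xrightarrow{\ \sim\ }\;
H^{\mathrm{BM}}_{2d_X}\bigl(X,\Lambda(d_X)\bigr)\;\xrightarrow{\ \sim\ }\;
H^{\mathrm{BM}}_{2d_f}\bigl(X/S,\Lambda(d_f)\bigr),
\qquad
t_f := \text{image of }\sum_{i\in I'} \mathrm{lg}(\mathcal{O}_{X,\xi_i})\cdot[X_{i,\mathrm{red}}].
\]
```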
Let us end this paragraph with a simple observation. Let U ⊂ X be an open dense subscheme. Then the restriction map H^BM_{2d_f}(X/S, Λ(d_f)) → H^BM_{2d_f}(U/S, Λ(d_f)) is an isomorphism. Indeed, we have H^BM_{2d_f}(X/S, Λ(d_f)) ≅ Λ^{⊕ r_X}, where r_X is the number of irreducible components of X of dimension d_X, by the computation above. Since r_X and r_U are the same, we get the claim.

4.2. By the setup of 2.1, we may apply [Deg18, Introduction, Theorem 1]. In particular, for a morphism between smooth schemes f : X → Y we have the fundamental class η_f ∈ H^BM_{2d_f}(X/Y, Λ(d_f)). When Y = Spec(k), we sometimes denote η_f by η_X. As we expect, we have the following comparison.

Lemma. Assume f : X → Y is a morphism between smooth equidimensional schemes. Then t_f = η_f.

Proof. Assume Y = Spec(k). In this case, f is smooth. Then by [Deg18, Theorem 2.5.3], the fundamental class η_f is equal to the one constructed in [Deg18, Proposition 2.3.11], which is nothing but the one we constructed above, by [Jin16, Proposition 3.12]. Let us treat the general case. For a k-scheme Z, denote by p_Z the structural morphism. Unwinding the definition, our t_f is the unique dotted map making the diagram on the right commutative. Here, t^adj_g denotes the morphism given by taking the adjoint to t_g. Equivalently, t_f is the unique dotted map making the diagram on the left commutative. Thus, it suffices to check that the diagram commutes when the dotted arrow is replaced by η_f. From what we have checked, t^adj_{p_Z} = η^adj_{p_Z} for any smooth scheme Z. Thus, the desired commutativity follows by the associativity property of the fundamental class (cf. [Deg18, Introduction, Theorem 1.2]).

Lemma 4.3. (1) Assume we are given morphisms X → Y → Z such that Y and Z are smooth and equidimensional. Let the composition be h. We have t_h = t_f • t_g. (2) Consider the Cartesian diagram (3.1). Assume further that Y and Y′ are smooth equidimensional and f is flat. Then the map g^* sends t_f to t_{f′}.

Proof. Let us check the claim (1) of the lemma. By the construction of t_f, we may assume that X is reduced. By §4.1, we may shrink X by a dense open subscheme, since H^BM_{2d_h}(X/Z, Λ(d_h)) remains the same. Thus, we may assume that X is smooth as well. In this case, we get the compatibility by Lemma 4.2. Since Y, Y′ are smooth, we may factor g into a regular immersion followed by a smooth morphism. Thus, it suffices to check separately the case where g is a regular immersion and where it is a smooth morphism. In both cases, consider the following diagram: the map g^* t_f is the unique straight dotted arrow rendering the left small square commutative, and t_{f′} is the unique bent dotted arrow rendering the outer largest diagram commutative. Since f is flat, f is transversal to g in the sense of [Deg18, Example 3.1.2]. This implies that f′^*(η_g) = η_{g′} by [Deg18, Introduction, Theorem 1.3]. By taking the adjoint, this implies that the right square is commutative. Since Y, Y′ are assumed to be smooth, we have t_g = η_g by the previous lemma. Since g, p_Y, p_{Y′} are gci morphisms, the bottom semicircular diagram is commutative by [Deg18, Introduction, Theorem 1.2]. In order to check the equality in the claim, it remains to check that the ♣-marked diagram commutes. When g is smooth, the verification is easy, so we leave it to the reader. Assume g is a regular immersion. In [Jin16, Definition 2.31], Jin defines a morphism R_f(g). By construction, this is defined as a composition; applying [Deg18, Introduction, Theorem 1.3], this is the same as p_X^! η_g. Now, since g^!([X]) = [X′] in CH_{d_{X′}}(X′) by the flatness of f, [Jin16, Proposition 3.15] implies that the diagram on the left commutes. Taking the adjunction, the verification is reduced to the commutativity of the right diagram above. This follows from a commutative diagram in which the morphisms labeled proj are induced by the projection formula (or more precisely [Deg18, (1.2.8.a)]), and we conclude the proof.

Lemma 4.4. Assume we have a morphism of bivariant theories τ as in Theorem 3.5. Then for a flat morphism f : X → S such that S is smooth and equidimensional, we must have the equality Tr^τ_f = t_f.

Proof. First, consider the case where X = S.
Since τ preserves the product structure, τ(id_S) must send the unit element 1 = [S] ∈ z_equi(S/S, 0) to 1 = id ∈ H^BM_0(S/S, 0). By [Jin16, Proposition 3.12], t_id is equal to id as well, and the claim follows in this case. When f is an open immersion, we may argue similarly.

Now, let f : X → S be a finite étale morphism such that S is smooth and equidimensional of dimension d. We may assume X and S are integral, and that the degree of f is n. By f_* : z_equi(X/S, 0) → z_equi(S/S, 0), [X] is sent to n·[S], by the definition of f_*. This implies that f_*(Tr^τ_f) = n·id, where f_* : H^BM_0(X/S, Λ(0)) → H^BM_0(S/S, Λ(0)). On the other hand, we have a commutative diagram by [Jin16, Proposition 3.11]. Since Λ = Z[1/p] is torsion-free, this implies that the left vertical map is injective, and so is the right vertical map. Thus Tr^τ_f is characterized by the property that f_* Tr^τ_f = n·1, and it suffices to check that f_*(t_f) = n·1; the commutative diagram again implies that Tr^τ_f = t_f in this case.

Consider the case where S = Spec(k). We may assume that X is integral, and we may shrink X by an open dense subscheme, since H^BM_{2d_X}(X, Λ(d_X)) does not change by §4.1. Then we may assume that f can be factored as X → A^{d_f} → Spec(k), where the first morphism is étale. By shrinking X further, we may assume we have a factorization X →g V → A^{d_f} of this first morphism, where g is finite étale. Since the trace map is assumed to preserve the A¹-orientation, we must have Tr^τ_p = t_p, where p : A¹ → Spec(k), by [Jin16, Proposition 3.12]. Thus, by Lemma 4.3-(1), we have Tr^τ_f = t_f.

Finally, let us treat the general case. Let U ⊂ X be an open dense subscheme such that U_red is smooth over k. Let e be the dimension of S. We have an isomorphism F : H^BM_{2d}(X/S, Λ(d)) ≅ H^BM_{2(d+e)}(U, Λ(d+e)), again by §4.1. By construction, this morphism sends x to η_S • x. In view of Lemma 4.2, this is equal to t_S • x. Now, we have a chain of equalities in which the 2nd equality follows by what we have already proven, the 3rd by the transitivity of the trace map, the 4th by what we have already proven, and the 5th by Lemma 4.3-(1). Thus, we conclude the proof.

5. Construction of the trace map

In this section, we prove the main result.

5.1. Let f : X → S be a morphism. To a morphism T → S, we associate H^BM(X_T/T, n), which defines a presheaf of spectra H^BM(X/S, n) on Sch_/S. We denote by H^BM_m(X/S, n) the Abelian presheaf π_m H^BM(X/S, n) on Sch_/S. Here, π_m is taken as a presheaf, and we do not consider any topology. Consider the (geometric) morphisms of sites Sch_/S,pdh →a Sch_/S,cdh →b Sch_/S; then we have (5.1).

Lemma 5.2. (1) The presheaf of spectra H^BM(X/S, n) on Sch_/S satisfies pdh-descent (in particular, it is a cdh-sheaf). (2) The presheaf π_{2d} H^BM(X/S, d) is already a pdh-sheaf on Sch_/S.

Proof. Let us show the claim (1) of the lemma. Let T_• → T be a pdh-hypercovering of q : T → S ∈ Sch_/S. We must show that the canonical map H^BM(X_T/T, n) → lim H^BM(X_{T_•}/T_•, n) is an equivalence. By applying q_*, and taking into account that q_* commutes with arbitrary limits by the existence of a left adjoint, we have an equivalence. Thus, the claim follows by definition. Let us show the claim (2) of the lemma. The Abelian sheaf π_i H^BM(X/S, d) is the pdh-sheafification of the Abelian presheaf associating π_i H^BM(X/S, d)(T) to T ∈ Sch_/S. Since π_i H^BM(X/S, d)(T) ≅ H^BM_i(X_T/T, d), this vanishes if i > 2d by Proposition 3.7. Furthermore, since lim← is left exact, (1) and the vanishing for i > 2d imply that π_{2d} H^BM(X/S, d) is already a pdh-sheaf on Sch_/S, and the claim follows.

5.3. Let X → S be a morphism. Let us recall the Abelian group Hilb(X/S, r) for an integer r ≥ 0 from [SV, §3.2]. This is the set of closed subschemes in X which are flat over S.
We denote by ΛHilb(X/S, r) the free Λ-module generated by Hilb(X/S, r). Now, assume that S is smooth. For a (flat) morphism g : Z → S in Hilb(X/S, r), we constructed t_g ∈ H^BM_{2r}(Z/S, Λ(r)) in §4.1 when S is equidimensional. Even if S is not equidimensional, by considering componentwise, we define the element t_g. By associating to Z the image of t_g via the map H^BM_{2r}(Z/S, Λ(r)) → H^BM_{2r}(X/S, Λ(r)), we have the map Hilb(X/S, r) → H^BM_{2r}(X/S, Λ(r)). This yields the map ΛHilb(X/S, r) → H^BM_{2r}(X/S, Λ(r)). Now, let I_{X/S} ⊂ ΛHilb(X/S, r) be the submodule consisting of elements Σ λ_i Z_i ∈ ΛHilb(X/S, r) such that the associated cycle Σ λ_i [Z_i] = 0 (cf. the paragraph before Theorem 4.2.11 in [SV]). (4) Since t_g only depends on the underlying subset and its length, the map constructed above factors through I_{X/S}, and defines a map T(X/S, r) : ΛHilb(X/S, r)/I_{X/S} → H^BM_{2r}(X/S, Λ(r)).

((4) In [Kel13, §2.1], Kelly pointed out a problem in the definition of the map cycl of [SV] used in the definition of I_{X/S} above. Note that we may employ Kelly's definition of cycl to define I_{X/S}, but we get the same ideal, and it does not affect our arguments.)

Lemma 5.4. Let h : T → S be a morphism between smooth k-schemes. Then we have the following commutative diagram of Abelian groups:

ΛHilb(X/S, r)   —T(X/S, r)→     H^BM_{2r}(X/S, Λ(r))
      ↓ h^*                            ↓ h^*
ΛHilb(X_T/T, r) —T(X_T/T, r)→  H^BM_{2r}(X_T/T, Λ(r))

Proof. This follows immediately from Lemma 4.3-(2).

5.5. Let f : X → S be a morphism. Let Z(X/S, r) be the presheaf of Abelian groups on Sch_/S which sends T to ΛHilb(X_T/T, r)/I_{X_T/T}, and let z(X/S, r) be the presheaf which sends T to z(X_T/T, r). Consider the comparison of these presheaves, in which the 2nd isomorphism follows by [SV, Theorem 4.2.11], and the last isomorphism follows since z(X/S, Λ(r)) is an h-sheaf by [SV, Theorem 4.2.2] and, in particular, a pdh-sheaf. Now, a pdh-hypercovering S_• → S is said to be good if S_i is smooth for any i. Let HR(S) be the (ordinary) category of pdh-hypercoverings of S (cf. [SGA4, Exposé V, §7.3.1]). Denote by HR^g(S) the full subcategory of HR(S) consisting of good pdh-hypercoverings. Recall that HR(S)^op is filtered (cf. [SGA4, Exposé V, Théorème 7.3.2]). For any S_• ∈ HR(S), we can take S′_• ∈ HR^g(S) and a morphism S′_• → S_• by [SGA4, Exposé V bis, Proposition 5.1.3] and 2.3, which implies that HR^g(S)^op is cofinal in HR(S)^op (cf. [SGA4, Exposé I, Proposition 8.1.3]). Put X_• := X ×_S S_•. Thus we have the isomorphisms z(X/S, Λ(r)) ≅ z(X/S, Λ(r))(S) ≅ …

Lemma 5.6. (1) Consider the Cartesian diagram (3.1). Assume dim(f^{−1}(y)) ≤ d for any point y of Y, in which case the same property holds for f′. Then we have g^* ∘ tr_f = tr_{f′} ∘ g^*. (2) Let g : X → Y and f : Y → T be morphisms and put f′ := f ∘ g. We assume that for any y, dim(f′^{−1}(y)) ≤ d, and that g is proper. Then we have tr_f ∘ g_* = g_* ∘ tr_{f′}.

Proof. Let us check the claim (1) of the lemma. Take a good pdh-hypercovering α : Y_• → Y. Then we are able to find a good pdh-hypercovering α′ : Y′_• → Y′ which fits into a diagram, not necessarily Cartesian, in which both external squares are commutative by the functoriality of z(−, d) and H^BM(−, d), and the middle one as well by Lemma 4.3-(2). The claim (2) follows immediately from Lemma 4.3-(3).

5.7. Proof of Theorem 3.5. First, let us construct a morphism z_equi(−, d) → H^BM_{2d}(−, d). Let f : Y → T be a morphism, and let w ∈ z_equi(Y/T, d). Let W be the support of w, and let i : W → Y be the closed immersion. Then w is the image of an element w′ ∈ z_equi(W/T, d) via the morphism i^z_* : z_equi(W/T, d) → z_equi(Y/T, d). Since w ∈ z_equi(Y/T, d), the dimension of each fiber of f ∘ i is ≤ d. Thus, we have already constructed the morphism tr_{f∘i} : z_equi(W/T, d) → H^BM_{2d}(W/T, d). We define τ_{Y/T}(w) := i^H_* ∘ tr_{f∘i}(w′), where i^H_* : H^BM_{2d}(W/T, d) → H^BM_{2d}(Y/T, d) is the pushforward. This defines a map τ_{Y/T} : z_equi(Y/T, d) → H^BM_{2d}(Y/T, d). In view of Lemma 5.6-(2), this map is in fact a homomorphism of Abelian groups. This map is compatible with base change and pushforward by Lemmas 5.6-(1) and 5.6-(2). The uniqueness of the map follows by Lemma 4.4 and the construction. It remains to show the compatibility with respect to the product structure. Let X →f Y →g Z be morphisms, and let x ∈ z_equi(X/Y, d), y ∈ z_equi(Y/Z, e). By definition, we may assume that Z is smooth, and that Supp(x) ⊂ X and Supp(y) ⊂ Y are flat over Z. By the projection formula of bivariant theories (cf. §3.1), we may assume that Y = Supp(y) (with the reduced induced scheme structure). Then, by the compatibility with pushforward, we may replace X by Supp(x). In this situation, we are allowed to shrink Z by an open dense subscheme because H^BM(X/Z, d + e) does not change by §4.1, and we may further assume that y = [Y]. Now, for an open immersion j : U ⊂ X, we have restriction morphisms z_equi(X/Z, n) → z_equi(U/Z, n) and H^BM(X/Z, n) → H^BM(U/Z, n), and we may check easily that these are compatible with τ_{X/Z}. Since f : X → Y is dominant, we may take open dense subschemes U ⊂ X and V ⊂ Y such that f(U) ⊂ V, U → V is flat, and V is smooth. The compatibility with open immersions allows us to replace X by U; it suffices to show the claim for U → V → Z, and this case we have already treated in Lemma 4.3-(1) together with Lemma 4.4.

6. ∞-enhancement of the trace map

In this section, we upgrade the trace map to the ∞-categorical setting.
6.1. Let Ar be the category of morphisms X → S in Sch_/k whose morphisms from Y → T to X → S consist of diagrams of the form (6.1), where α is proper. The composition is defined in an evident manner, and we refer to [Abe22a, §5.2] for the details. We often denote an object corresponding to X → S in Ar by X/S. For Y/T ∈ Ar, let Cov(Y/T) be the set of families {Y_{T_i}/T_i → Y/T}_{i∈I} where {T_i → T} is a cdh-covering. The category Ar does not admit pullbacks in general, but each morphism (Y_{T_i}/T_i) → (Y/T) is quarrable; in other words, for any morphism (Y′/T′) → (Y/T), the pullback (Y_{T_i}/T_i) ×_{(Y/T)} (Y′/T′) → (Y′/T′) exists. Indeed, we can check easily that (Y_{T_i}/T_i) ×_{(Y/T)} (Y′/T′) ≅ (Y′ ×_T T_i / T′ ×_T T_i). Thus, this family defines a pretopology on Ar in the sense of [SGA4, Exposé II, §1.3]. Now, fixing (Y/T) ∈ Ar, we have the functor ι_{Y/T} : Sch_/T → Ar sending T′ → T to (Y ×_T T′/T′). This functor commutes with pullbacks. Putting the cdh-topology on Sch_/T, the functor ι_{Y/T} is cocontinuous (cf. [SGA4, Exposé III, §2.1]) by [SGA4, Exposé II, §1.4]. By associating the Abelian group z(Y/T, n) to Y → T, we have a functor z^SV(n) : Ar^op → Sp^♥. Then z^SV(n) is an Abelian sheaf on Ar. Indeed, we must show Čech descent with respect to the elements of Cov(Y/T) by [SGA4, Exposé II, §2.2]. This is exactly the content of [SV, §4.2.9]. We define z(n) to be the sheafification of z^SV(n) regarded as a spectra-valued presheaf on Ar.

6.2. Now, by [ES21, Lemma C.3], we have a commutative diagram (6.2) of geometric morphisms. Note that, since local objects (with respect to a localization) are stable under taking limits by definition, (ι^s_{Y/T})_* commutes with limits by [ES21, Lemma C.3], which justifies that ι^s_{Y/T} is a geometric morphism. Moreover, by [Lur18, Proposition 20.6.1.3], the functor (ι^s_{Y/T})_* is given by composing with ι_{Y/T}. In particular, z(n) …

6.3. Assume we are given a morphism F : (Y/T) → (X/S) in Ar as in (6.1). Then we have a morphism of spectra. With this morphism, we can check easily that the association of H^BM(X/S, d)[−2d] to X/S ∈ Ar yields a functor H : Ar^op → hSp. It is natural to expect that this morphism can be lifted to a functor of ∞-categories Ar^op → Sp. We state the existence as an assumption as follows: assume we are given a functor H^BM(d) : Ar^op → Sp between ∞-categories whose induced functor between homotopy categories coincides with H above. We constructed such a functor in [Abe22a, Example 6.8], and also in [Abe22b, §C.3] using a slightly different method. Now, we have the following ∞-enhancement of the trace map.

Theorem 6.4. There exists an essentially unique morphism of spectra-valued sheaves τ† : z(d) → H^BM(d) on Ar for any d such that the composition coincides with the morphism τ of Theorem 3.5.

Proof. Let π_0 z_equi(d) ⊂ π_0 z(d) be the subsheaf whose value at X → S is z_equi(X/S, d). Note that π_0 z_equi(d) is just a notation and not π_0 of some presheaf z_equi(d). We first define the trace map for π_0 z_equi(d). Let Ar_d be the full subcategory of Ar consisting of objects f : X → S such that dim(f) ≤ d. First, let us construct the map after restricting to Ar_d. We have already constructed the map of spectra-valued presheaves

π_0 z_equi(d)|_{Ar_d} →τ π_0 H^BM(d)|_{Ar_d} ←∼ τ_{≥0} H^BM(d)|_{Ar_d} → H^BM(d)|_{Ar_d}.   (6.3)

Here, the equivalence follows by Lemma 5.2-(2), since we are restricting the functor to Ar_d. Now, let the category App be the full subcategory of Fun(∆^1, Ar) spanned by the morphisms h : (X/S) → (Y/T) in Ar such that (Y/T) belongs to Ar_d. We have functors s, t : App → Ar, where s is the evaluation at {0} ∈ ∆^1 and t is at {1}. Namely, for h above, we have s(h) = (X/S) and t(h) = (Y/T). By [Lur09, Corollary 2.4.7.12], s is a Cartesian fibration. Note that we have the natural transformation s → t, and this induces the morphism of functors φ : F ∘ t^op → F ∘ s^op for any functor F : Ar^op → Sp. From now on, we abbreviate F ∘ t^op, F ∘ s^op by F ∘ t, F ∘ s to avoid heavy notation. By (6.3), we have the map π_0 z_equi(d) ∘ t → H^BM(d) ∘ t of spectra-valued presheaves on App. Now, we have a diagram of ∞-categories where F is either π_0 z_equi(d) ∘ t or H^BM(d) ∘ t. Since s is a Cartesian fibration, s^op is a coCartesian fibration. Since the ∞-category Sp is presentable, any left Kan extension exists by [Lur09, Proposition 4.3.2.15]. We denote by LKE(F) : Ar^op → Sp a left Kan extension of the above diagram. We have a diagram of spectra-valued presheaves involving LKE(π_0 z_equi(d) ∘ t). Here, the vertical morphisms are defined by taking the adjoint to φ. We claim that the left vertical map is an equivalence. For this, it suffices to show that π_0 z_equi(d) is in fact a left Kan extension of π_0 z_equi(d) ∘ t. Let (X/S) ∈ Ar, and denote by App_{X/S} the fiber of s over (X/S). Since s^op is a coCartesian fibration, by invoking [Lur09, Proposition 4.3.3.10], it suffices to show that π_0 z_equi(d)(X/S) is a left Kan extension of π_0 z_equi(d) ∘ t|_{App^op_{X/S}} along the canonical map App^op_{X/S} → {X/S} for any X/S. This amounts to showing that the morphism of spectra (namely the colimit is the "derived colimit") lim→_{D∈App^op_{X/S}} z_equi(t(D)/S, d) → z_equi(X/S, d) is an equivalence. Let C(X/S) be the category of closed immersions Z → X such that the composition Z → S is in Ar_d. The category App^op_{X/S} is filtered and the inclusion C(X/S) → App^op_{X/S} is cofinal. Thus, the colimit is t-exact by [Lur18, Proposition 1.3.2.7], and it suffices to show that the morphism lim→_{Z∈C(X/S)} z_equi(Z/S, d) → z_equi(X/S, d) of Abelian groups is an isomorphism. This follows by definition. Thus we have the map π_0 z_equi(d) → H^BM(d) of spectra-valued presheaves.

Finally, let us extend this map to the required map. The ∞-presheaf H^BM(d) on Ar is in fact an ∞-sheaf. Indeed, let H^BM(d) → LH^BM(d) be the localization morphism. We must show that this morphism is an equivalence. Recall that for an ∞-category C, a simplicial set S, and a morphism α : F → G in Fun(S, C), α is an equivalence if and only if α(s) is an equivalence for any vertex s of S. We believe that this is well-known, but a (fairly) indirect way to see this is by applying [Lur09, Corollary 5.1.2.3] to the diagram (∆^0) → Fun(S, C) given by α. Now, let (Y/T) ∈ Ar. Since the verification is pointwise by the recalled fact, it suffices to show that H^BM(d) ∘ ι_{Y/T} → (LH^BM(d)) ∘ ι_{Y/T} is an equivalence. By (6.2), this morphism can be identified with ι^*_{Y/T} H^BM(d) → L(ι^*_{Y/T} H^BM(d)), which is an equivalence since ι^*_{Y/T} H^BM(d) ≅ H^BM(Y/T, d) is already a cdh-sheaf (cf. Lemma 5.2-(1)). Thus, by taking the sheafification of the morphism π_0 z_equi(d) → H^BM(d), we get the morphism z(d) → H^BM(d). The essential uniqueness follows by construction, and the details are left to the reader.
Low-Dimensional Nanomaterial Systems Formed by IVA Group Elements Allow Energy Conversion Materials to Flourish

In response to the exhaustion of traditional energy sources, green and efficient energy conversion has attracted growing attention. The IVA group elements, especially carbon, are widely distributed and stable in the earth's crust, and have received a lot of attention from scientists. The low-dimensional structures composed of IVA group elements have special energy band structures and electrical properties, which allow them to show excellent performance in the field of energy conversion. In recent years, the diversification of synthesis methods and the optimization of the properties of IVA group element low-dimensional nanomaterials (IVA-LD) have contributed to the flourishing development of related fields. This paper reviews the properties and synthesis methods of IVA-LD for energy conversion devices, as well as their current applications in major fields such as ion batteries, moisture electricity generation, and solar-driven evaporation. Finally, the prospects and challenges faced by IVA-LD in the field of energy conversion are discussed.

Introduction

As a guarantee for the rapid development of modern society, the important role of energy in all walks of life cannot be overstated. However, the energy crisis caused by the depletion of traditional energy sources and the environmental pollution caused by the improper use of energy are becoming increasingly prominent. Therefore, in order to tackle this global problem, an environmentally friendly and sustainable energy conversion system is urgently needed [1–3]. The IVA group elements, represented by carbon, are widely distributed and stable in the earth's crust and have received a great deal of attention from scientists [4,5]. The combination of the IVA group elements and nanotechnology has expanded their applications. Fadaly et al. broke through the limitation that silicon technology lacks a direct band gap for light-emitting devices by synthesizing a silicon–germanium alloy that achieves a direct band gap with high luminescence, enabling the integration of electronic and optoelectronic functions on a chip [6]. More importantly, the unique electronic and energy band structures [7–9] of the IVA group elements make them widely applicable to efficient energy conversion and storage. Graphene, for example, is a zero-band-gap semiconductor material with π bonds, in which the π electrons are bound weakly enough that only a small amount of energy is needed for them to leap from π to π*. The abundant conjugated π bonds allow electrons to be excited at almost all wavelengths of sunlight, giving the material its black color. The excited electrons jump from the ground-state orbital (HOMO) to high-energy orbitals (LUMO) and then jump back to the
used the concept of "Phoenix N to synthesize graphene, through reconstituted graphene nanoparticles to obtai structure of three-dimensional (3D) graphene; the obtained porosity, electrical co ity, mechanical strength, etc., increased greatly [11]. Stetson et al. found that the fo mechanism of the initial solid electrolyte mesophase on the silicon wafers with oxide and chemically etched thermal oxide coating is different, and the structural of SEI is achieved by chemical etching, which can be used to improve the servi anode materials [12]. Zhang et al. tuned the electronic properties of reduced g oxide (rGO) by TI atoms, and the Fermi level drop significantly reduced the se nection of carbon-based electrodes resistance, thus greatly improving the power sion rate of C-PSC [13]. This review first describes the different morphologies and properties of IVA lowed by an overview of the synthesis methods of these typical low-dimension materials. Finally, the excellent applications and advances in energy conversio low-dimensional nanomaterials are presented. 0D Quantum Dots Nanostructures in different dimensions, such as quantum dots, nanotube rods/wires, and nanosheets, have provided satisfactory solutions for the rapid d ment of energy storage and conversion devices, as shown in Figure 1. Quantum dots are zero-dimensional (0D) semiconductor particles, only a few nanometers in size, sometimes referred to as atoms. Like a naturally occurring atom or molecule, it has bound discrete electron states. As a carbon nanomaterial, 0D carbon quantum dots (CQDs), have attracted increasing attention in recent years because of their low cost, non-toxicity, large surface area, high electrical conductivity, and abundant outstanding properties. In addition, CQDs have excellent electrochemical reaction performance due to their abundant quantity, low price, unique electron transfer capability, and large specific surface area. More importantly, CQDs can be doped with heteroatoms to change properties. For example, the fluorescence properties of CQDs can be changed by doping with heteroatoms. A facile and high-output strategy to fabricate selenium-doped carbon quantum dots (SeCQDs) [15] with green fluorescence was developed by the hydrothermal treatment of selenocystine under mild conditions. The selenium heteroatom imparts redoxdependent reversible fluorescence to Se-CQDs. Once Se-CQDs are internalized into cells, harmful high levels of reactive oxygen species (ROS) in the cells are reduced. With their fast electron transfer and large surface area, CQDs are also promising functional materials. Similarly, silicon and germanium nanostructures as high refractive index materials have been extensively studied as a new type of photoresonance structure. It is shown that silicon quantum dots (SQDs) can increase the internal potential of graphene/Si Schottky junctions and reduce the light reflection of photodetectors. Ting Yu et al. [16] achieved a faster response of photodetectors by coupling graphene with SQDs, and could further improve the performance of photodetectors by changing the size of silicon quantum dots and the number of graphene layers. Their excellent transmission and optical properties have potential applications in semiconductor lasers, amplifiers, and biosensors. Currently, the main method for the synthesis of quantum dots is the colloidal method. 
Colloidal synthesis involves heating the solution at a high temperature and decomposing the precursor solution to form monomers, followed by nucleation and the formation of nanocrystals. This method can be used to synthesize quantum dots in large quantities.

1D Nanowires/Rods/Tubes

Nanowires are one-dimensional (1D) structural materials that are laterally confined to less than 100 nm. Compared to conventional bulk materials, nanowires tend to exhibit better photoelectric properties for macroscopic applications. For example, nanowires can naturally concentrate solar energy into a very small area of the crystal, concentrating light 15 times more intensely than ordinary light. This has important implications for the development of solar cells and the use of solar energy [17,18], because the resonance of the light intensity within and around the nanowire crystal helps to increase the conversion efficiency of solar energy. Silver nanowire electrodes exhibit easily adjustable photoelectric and mechanical properties. Atomic-level chemical welding of silver nanowire electrodes [19] can be used to construct flexible organic solar cells with high efficiency. A Si/InP core–shell nanowire-based solar cell using etched Si nanowires [20] confirms the formation of radial nanowire heterostructures. In this cell, more photons can be absorbed; compared to traditional solar cells, the performance is greatly improved.

CNTs are among the hardest and strongest synthetic carbon materials. In CNTs, the C–C bonds are mainly sp2 hybridized, and the hexagonal mesh structure is bent to a certain extent, forming a spatial topology in which certain sp3 hybridized bonds can form. sp2 hybridized C–C bonds are strong chemical bonds, which gives CNTs very high mechanical strength; for an ideal single-walled CNT, the tensile strength is about 800 GPa. CNTs are also very flexible and can be stretched. The factor that usually determines the strength of such a material is the aspect ratio, the ratio of length to diameter; if the aspect ratio reaches 20, it is an ideal flexible material. CNTs are flexible materials with high thermal conductivity because their aspect ratio can reach more than 1000 and their heat-exchange performance along the length direction is very high. CNTs are divided into single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs). The geometric structure of a SWCNT can be regarded as a single layer of graphene rolled up, with excellent electronic and mechanical properties. MWCNTs are made of layers of graphene seamlessly coiled into concentric tubes; compared with SWCNTs, their elasticity and tensile strength are slightly inferior. Nanofibers composed of a single polymer often have poor electrical conductivity and weak mechanical properties, so their applications are limited. Therefore, CNTs are often used as reinforcing fillers to prepare nanofibers after compounding with other polymers, which can effectively improve the properties of the nanofibers. Fe2O3/C/CNT composites [21] prepared by ultrasonic spraying can be used as an anode for lithium-ion batteries; in these composites, the high conductivity of the CNTs makes charge transfer faster, which improves the performance of the lithium-ion batteries. The structure along the CNT axis is the same as the sheet structure of graphite, so it also has very good electrical properties.
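The size dependence noted above for silicon quantum dots can be made quantitative with the Brus effective-mass approximation, E(R) = E_g,bulk + (ħ²π²/2R²)(1/m_e* + 1/m_h*) − 1.786e²/(4πε_rε_0R). The sketch below evaluates it for silicon; the bulk gap, effective masses, and dielectric constant are commonly quoted literature values and should be treated as assumptions of this illustration, not data from the works reviewed here.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C
M0 = 9.1093837015e-31       # electron rest mass, kg
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def brus_gap_eV(radius_nm, eg_bulk_eV=1.12, me=0.26, mh=0.39, eps_r=11.7):
    """Brus-model estimate of a quantum dot's optical gap (silicon defaults assumed)."""
    r = radius_nm * 1e-9
    # Quantum-confinement term grows as 1/R^2
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / (me * M0) + 1 / (mh * M0))
    # Electron-hole Coulomb attraction reduces the gap, scaling as 1/R
    coulomb = 1.786 * E_CHARGE**2 / (4 * math.pi * eps_r * EPS0 * r)
    return eg_bulk_eV + (confinement - coulomb) / E_CHARGE

for r in (1.0, 2.0, 3.0, 5.0):
    print(f"R = {r:.1f} nm -> E_gap ~ {brus_gap_eV(r):.2f} eV")
```

The widening gap at small radii is the quantum size effect that allows absorption and emission to be tuned simply by changing particle size, as exploited in the photodetector work discussed above.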
Moreover, CNTs have also shown excellent thermal and optical characteristics, so they have very good prospects in batteries, sensing, medical treatment, and other areas. Similarly, silicon and germanium nanotubes are suitable as anodes for lithium-ion batteries because of their cycling stability. In recent years, further progress has been made on the crystal-phase transformation of anode materials during battery charging and discharging, which is expected to improve the performance of lithium-ion batteries. It is worth mentioning that Chen et al. [22] deposited gold (Au), platinum (Pt), nickel (Ni), and indium tin oxide (ITO) onto the surface of tin dioxide nanotubes to prepare different kinds of electrodes. Highly sensitive detection of hydrogen and benzene was achieved, with a power consumption of only 1% of that of commercial sensors.

Monolayer

Because of their excellent electronic, optical, and mechanical properties, the two-dimensional (2D) nanomaterials graphene, silicene, and germanene have attracted wide interest. Graphene is a material formed by a single layer of carbon atoms and is also the basic building block of other carbon-based nanomaterials, such as fullerenes, CNTs, and graphite. Graphene can be rolled into 0D fullerenes and 1D CNTs, or it can be stacked in a certain way to form bulk graphite. Graphene also exhibits remarkable physical properties due to its unique internal structure:
1. Graphene has excellent electronic properties [23]. It can carry a considerable amount of charge, so it is commonly used as a basic raw material for batteries and electrical equipment.
2. Graphene is so flexible that it can bend and fold to a certain extent with little change in its properties. Therefore, graphene has very good prospects in the research field of flexible wearable electronic devices [24].
However, the use of graphene is limited by its lack of a semiconductor band gap. Therefore, how to open an energy gap is a difficult research problem. At present, two main methods are commonly used. The first is to increase the inherent defects of graphene, exposing more active sites; a quantum size effect on the electronic structure can be achieved by changing the morphology of graphene, for example into 1D graphene nanoribbons and 0D graphene quantum dots. The second is chemical modification, which promotes the redistribution of charge on the surface by changing the number and type of heteroatoms incorporated into the graphene. Chemical modification includes surface modification and substitutional doping. Graphene surface modification is achieved by hybrid adsorption of gaseous metals or organic molecules on the graphene surface. Substitutional doping is the introduction of heteroatoms into the carbon lattice of graphene. At present, this modification method is very mature, and various elements have been widely introduced into graphene. Monoatomic derivatives of graphene can be obtained by adding halogen atoms to the graphene skeleton. Graphene derivatives exhibit different properties due to the different electronegativities of the heteroatoms. Among them, fluorographene has a large negative magnetoresistance, high optical transparency, and high reactivity, and it readily yields many derivatives, such as graphene acid and cyanographene. Graphene acid is a novel graphene platform whose carboxylic acid groups are selectively and uniformly located on the surface of the carbon network.
This structure enables graphene acid to have more uniform functionalization and stronger electron conduction. Such good performance also underlies the excellent catalytic activity of graphene acid [25]. Furthermore, the selectivity towards different oxidation products can be precisely modulated by adjusting the structure of the graphene acid; it is widely used in selective electrochemical sensing and catalysis [26]. Cyanographene is another graphene derivative, capable of complex 2D chemistry and high-yield covalent functionalization of graphene [27]. Since graphene was discovered in 2004, researchers have proposed silicene, germanene, and stanene with graphene-like honeycomb structures. Silicene has since been designed as a cathode to develop zinc-ion hybrid capacitors with enhanced capacitance and high cycling stability. As the research progressed, researchers designed a hybrid honeycomb silicene, combining the electronic band gap of silicon with the high electron mobility of honeycomb silicene. Zhao et al. confirmed that germanene is a potentially high-energy-density anode material: they prepared few-layer germanene nanosheets by the liquid-phase exfoliation method and measured their cycling stability after mixing with rGO [28]. As mentioned above, we have reviewed the different morphologies and properties of IVA-LD. In fact, in research and application they are not applied alone, but are often used in combination with other low-dimensional or bulk materials, which also shows the advantage of low-dimensional materials being easy to combine into composites. Composites of IVA-LD not only combine the excellent properties of each component material, but also form heterojunctions at the composite interface. Among these, the van der Waals heterojunctions (vdWHs) formed by stacking 2D materials first attracted researchers' attention. Some studies have shown that vdWHs can provide the largest area for the separation and transfer of carriers, showing application potential in photoelectric detection. Dhungana et al. [29] introduced the concept of Xene heterostructures based on an epitaxial combination of silicene and stanene on Ag(111), promising to optimize the responsivity and speed of photodetectors.

Hydrothermal Synthesis

Hydrothermal synthesis is a simple synthesis method in which a prepared and stirred solution is put into an autoclave and reacted for a period at a certain temperature and pressure, using a hydrothermal system and a high-temperature, high-pressure closed environment to obtain IVA-LD. Many parameters, such as the surfactant, solution pH value, reaction temperature, and reaction time, need to be controlled in the experiment, which makes the product sensitive to environmental changes. The advantages of the hydrothermal synthesis process are high concentration, good dispersion, easy control of particle size, simple operation, large output, low cost, and mild, safe conditions; the sample reacts uniformly in an aqueous solution at a high rate under high pressure. Hydrothermal synthesis has shown great versatility and high efficiency in the preparation of holey materials. Xu et al.
[30] synthesized porous graphene oxide (GO) frameworks with abundant in-plane nanopores through a solvothermal reaction involving a mild defect-etching process. A homogeneous aqueous mixture of GO and hydrogen peroxide was stirred and heated at 100 °C for 4 h to prepare a solution of holey graphene oxide (HGO) (Figure 2a). The authors concluded that the oxidative etching reaction initiates and propagates mainly in oxygen-deficient regions, preferentially removing oxygen-containing carbon atoms, generating carbon vacancies, and eventually forming nanopores in the GO nanosheets. A simple metal–organic framework production strategy was proposed by Zhu et al. [31]: layered porous CoMoO4–CoO/S@rGO nanopolyhedra were synthesized by hydrothermal S-doping, as shown in Figure 2b. The preparation process can be divided into two parts. The first step synthesizes dodecahedral ZIF-67 crystals as the initial precursor. Secondly, layered porous CoMoO4–Co(OH)2 NPs were formed by injecting a sodium molybdate solution into the suspension of ZIF-67 crystals via the etching–ion-exchange effect. Then, TAA and GO solutions were added to the suspension of the above intermediates to give CoMoO4–CoO/S@rGO NPs, and the final products were obtained by a low-temperature hydrothermal process and heat treatment.
Two transition metal (TM) cations are mixed with GO and then anchored on surfaces of the rGO template during solution-phase reaction. After removal of the rGO template during post-calcina- Template-Directed Synthesis Template-directed synthesis is an efficient method for preparing multifunctional nanomaterials with multiple morphologies and structures because it allows direct tuning of the morphology and size of the nanomaterials by adjusting the preparation conditions and selecting a suitable template. A typical template growth process involves depositing or synthesizing a precursor on a substrate (template) and then removing the template through an etching process to produce porous nanosheet products [32]. For example, porous and polygonal magnesium oxide layers can be used as templates to obtain monolayer and bilayer porous graphene nanosheets. The resulting graphene nanosheets have a porous nanonet structure with a pore size distribution of 6-10 nm and an SSA of up to 1654 m 2 g −1 [33]. Sacrificial template guidance is an extension of the conventional template method and is a more effective way to prepare porous materials by applying a template as a precursor system. Graphene and its derivatives are important sacrificial templates for the synthesis of various ultra-thin, porous 2D nanosheets. Recently, Peng et al. proposed a general method for in-situ synthesis of 2D porous transition metal oxide (TMO) nanosheets [34]. Figure 3 shows that GO was first employed as a template to grow various TMO precursors on its surface, and then the TMO precursors are transformed into 2D porous TMO nanosheets after heat treatment due to the synergistic effect of chemical interconnection and GOs-controlled decomposition of TMO nanoparticles. Two transition metal (TM) cations are mixed with GO and then anchored on surfaces of the rGO template during solution-phase reaction. After removal of the rGO template during post-calcination, 2D porous MTMO nanosheets consisting of interconnected MTMO nanocrystals were formed. Reprinted with permission from Ref. [33]. Copyright 2012 American Association for the Advanced Energy Materials. Liquid Phase Stripping Nanosheet Method Based on the weak interlayer van der Waals interactions in layered compounds, a top-down synthesis method has been developed to overcome interlayer forces and to prepare 2D nanosheets by direct physical or chemical stripping from their bulk layered nanomaterials. High-quality micro-scale-width films have been obtained from bulk crystals using the sellotape method [35]. However, the process is time-consuming and difficult to control and is not suitable for the mass production of 2D nanosheets. To achieve highquality large-scale synthesis of 2D nanosheets, as shown in Figure 4, liquid phase stripping of layered materials is usually carried out using the following three main methods: ion intercalation, ion exchange, and solvent ultrasonic treatment. Firstly, ion intercalation refers to the adsorption of guest molecules into the gap between layers, and this method is widely used in certain layered materials [36]. Ion intercalation usually increases the interval between layers, weakens the interlayer adhesion, and reduces energy, which is usually a disadvantage of ion intercalation methods. Also, another disadvantage of the ion intercalation method is that they are sensitive to environmental conditions [37]. 
However, ion intercalation methods are still under development and a large number of intercalation methods and intercalating agents are emerging. As scientific research continues, ion intercalation will play a greater role in the use of nanosheets. Liquid Phase Stripping Nanosheet Method Based on the weak interlayer van der Waals interactions in layered compounds, a top-down synthesis method has been developed to overcome interlayer forces and to prepare 2D nanosheets by direct physical or chemical stripping from their bulk layered nanomaterials. High-quality micro-scale-width films have been obtained from bulk crystals using the sellotape method [35]. However, the process is time-consuming and difficult to control and is not suitable for the mass production of 2D nanosheets. To achieve highquality large-scale synthesis of 2D nanosheets, as shown in Figure 4, liquid phase stripping of layered materials is usually carried out using the following three main methods: ion intercalation, ion exchange, and solvent ultrasonic treatment. Firstly, ion intercalation refers to the adsorption of guest molecules into the gap between layers, and this method is widely used in certain layered materials [36]. Ion intercalation usually increases the interval between layers, weakens the interlayer adhesion, and reduces energy, which is usually a disadvantage of ion intercalation methods. Also, another disadvantage of the ion intercalation method is that they are sensitive to environmental conditions [37]. However, ion intercalation methods are still under development and a large number of intercalation methods and intercalating agents are emerging. As scientific research continues, ion intercalation will play a greater role in the use of nanosheets. Ion-exchange methods refer to the process of displacing ions between insoluble solid layered materials, such as Montmorillonite (MMT) and hydrotalcite, which normally carry exchangeable ions, and ions of the same charge in solution. In suspensions like MMT, for example, this layered structure combined with the unique and convenient migration of water molecules between layers allows ions to exchange with ions in body solution. The ion exchange between MMT and cations can peel off their layered structure, thus opening up a new avenue for novel 2D nanosheets [38]. The method lays the foundation for a general route to prepare large area monolayer nanosheets and the basic properties of 2D nanomaterials and develops a number of potential applications. The final presentation is an ultrasound-assisted liquid phase exfoliation strategy, which is also a popular method that is widely used due to its high yield. Ultrasonic generates cavitation bubbles or shear forces that separate layered material into monolayer to multilayer nanosheets. However, it also has many disadvantages such as poor structural integrity, size limitations, and low monolayer yield. This treatment destroys the layered microcrystalline structure and produces stripped nanosheets. The stability of ultrasound-treated nanosheets depends on various parameters and the choice of solvent is very important. As with MMT, it is difficult to separate monolayer nanosheets with 2D structures. Ion-exchange methods refer to the process of displacing ions between insoluble solid layered materials, such as Montmorillonite (MMT) and hydrotalcite, which normally carry exchangeable ions, and ions of the same charge in solution. 
In suspensions like MMT, for example, this layered structure combined with the unique and convenient migration of water molecules between layers allows ions to exchange with ions in body solution. The ion exchange between MMT and cations can peel off their layered structure, thus opening up a new avenue for novel 2D nanosheets [38]. The method lays the foundation for a general route to prepare large area monolayer nanosheets and the basic properties of 2D nanomaterials and develops a number of potential applications. The final presentation is an ultrasound-assisted liquid phase exfoliation strategy, which is also a popular method that is widely used due to its high yield. Ultrasonic generates cavitation bubbles or shear forces that separate layered material into monolayer to multilayer nanosheets. However, it also has many disadvantages such as poor structural integrity, size limitations, and low monolayer yield. This treatment destroys the layered microcrystalline structure and produces stripped nanosheets. The stability of ultrasoundtreated nanosheets depends on various parameters and the choice of solvent is very important. As with MMT, it is difficult to separate monolayer nanosheets with 2D structures. In addition, peeling materials with low reduction potentials, such as graphene, by adding hydroxyl and epoxy groups on its surface, produces hydrophilic properties that allow solvent water to be embedded and large-scale peeling, with dispersed sheets mainly in single layers, often spanning hundreds of nanometers stripping by extending the interlayer spacing. In addition to multi-step embedding, in-situ reactions of the embedding agent can be used to overcome interlayer forces and enable exfoliation. At a later stage, improved methods have been proposed to efficiently obtain high-quality graphene. In re- In addition, peeling materials with low reduction potentials, such as graphene, by adding hydroxyl and epoxy groups on its surface, produces hydrophilic properties that allow solvent water to be embedded and large-scale peeling, with dispersed sheets mainly in single layers, often spanning hundreds of nanometers stripping by extending the interlayer spacing. In addition to multi-step embedding, in-situ reactions of the embedding agent can be used to overcome interlayer forces and enable exfoliation. At a later stage, improved methods have been proposed to efficiently obtain high-quality graphene. In recent years, the controllability of the stripping process and the function of the product have been further developed based on the method of liquid phase stripping methods for the synthesis and practical application of 2D nanosheets of controllable quality. Battery Classical graphene nanomaterials are good electronic conductors, with a zero-band gap structure and excellent electron transport capabilities making them good electrode materials. However, in many cases, graphene needs to be compounded with different materials to achieve fast electron and ion transport effects. Many examples have been reported of the design of hybrid structures of graphene with many oxides (Nb 2 O 5 , TiO 2 , MoO 3 , etc.) to achieve the mentioned functions [39][40][41]. Various carbon carriers such as nanotubes, graphene-based materials, and porous carbon not only act as electron channels, but also form heterojunctions between oxide and carbon atoms, thus influencing the electronic properties of both materials. 
Another way to achieve fast electron and ion transport is to construct 2D heterostructures, which combine highly conductive and high-energy-density 2D materials. Since at least one material in the hybrid structure must have good electrical conductivity, graphene is often the primary material of choice in this role [42]. To date, this approach has been applied quite commonly, and a large number of metallic conductors and active materials are available. A class of 2D transition metal nitrides and carbides (MXenes) has been widely reported as a promising paradigm in the field of energy conversion and storage. Both MXene and graphene can be produced from their parent materials (MAX phases or graphite) using "top-down" stripping techniques. This stripping approach allows large-scale fabrication of ultra-thin 2D nanomaterials, down to a single atomic layer or a few atomic layers, resulting in a variety of unique chemical and physical properties. Second, ideal MXene and graphene materials have large specific surface areas and high electrical conductivity, making them excellent candidates for a variety of energy conversion and storage applications. In addition, heteroatoms (metallic or non-metallic) can be used to dope or modify the surface microstructure to improve performance. For example, a freestanding, ultra-lightweight, additive- and binder-free Ti3C2Tx MXene aerogel was recently prepared by Olgani et al. [43]. It was shown that the Ti3C2Tx MXene aerogel could be aligned along a temperature gradient in the sub-millimeter region with a strain tolerance of up to 50%. The MXene aerogel has an excellent electrochemical response, excellent rate performance, high specific capacity, and high cycle stability. This study shows that preventing re-stacking of MXene flakes during aerogel manufacturing eliminates the need for electrochemical cycling to achieve maximum volumetric capacity. The excellent electromechanical properties of MXene aerogels result from the directional assembly of the 2D sheets in their structure, which makes them high-quality strain sensors. Zhang et al. [44] prepared a composite containing MXene and SnS by a hydrothermal method; the introduction of SnS increases the interlayer spacing and enhances the reversibility and electrical conductivity of the composite. A pristine MoS2 electrode, by contrast, exhibits rapid capacity fade and poor rate performance. Huang and coworkers [45] therefore prepared a 2D composite by a hydrothermal technique in which MoS2 nanosheets were introduced into the interlayers of Ti3C2Tx MXene. In short, the MXene@SnS and MXene@MoS2 composites exhibit outstanding electrochemical characteristics and promising application prospects, owing to the synergy of SnS and MoS2 (high theoretical capacity) with Ti3C2Tx (superior electrical conductivity).

Lithium-ion batteries (LIBs) dominate the power supply market for a wide range of devices, from electronics and new energy vehicles to networking applications [46]. The structure of IVA-LD plays an important role in improving the electrochemical performance of LIBs, such as power/energy density and cycling stability. Nanostructured electrodes that improve the overall performance of LIBs feature ultra-thin, well-defined 2D nanomaterials, shortened lithium-ion transport channels, and abundant surface area for lithium-ion storage activity [47].
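The benefit of "shortened lithium-ion transport channels" can be made concrete with a simple scaling argument: for diffusive transport, the characteristic time grows as the square of the diffusion length. The following minimal Python sketch (not from the source; the diffusivity is an assumed, order-of-magnitude value for a solid-state electrode) illustrates why nanometer-thin 2D sheets respond orders of magnitude faster than micron-scale particles.

```python
# Characteristic 1-D diffusion time scales as t ~ L^2 / (2*D).
# D below is an assumed, order-of-magnitude Li-ion diffusivity; real
# values vary by material over several orders of magnitude.

D = 1e-12  # cm^2/s, assumed solid-state Li-ion diffusivity

def diffusion_time_s(length_cm: float, diffusivity: float = D) -> float:
    """Characteristic time for ions to traverse a diffusion length."""
    return length_cm ** 2 / (2.0 * diffusivity)

for label, L in [("10 um particle", 1e-3),
                 ("100 nm particle", 1e-5),
                 ("5 nm nanosheet", 5e-7)]:
    print(f"{label}: ~{diffusion_time_s(L):.3g} s")
# -> ~5e5 s, ~50 s, ~0.125 s: thinning the transport path by 2000x
#    shortens the diffusion time by roughly a factor of 4e6.
```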
Despite the advantages of 2D nanomaterials in LIB applications, the self-restacking of 2D nanomaterials during electrode manufacturing has impeded their practical use. During material processing or electrode fabrication, 2D nanomaterials easily re-agglomerate into dense structures through the weak van der Waals forces between them [48], severely hindering electrolyte and ion penetration into the internal structure of the electrode and thus leading to rapid capacity decay. Chen et al. designed a horizontally aligned, high-tortuosity porous rGO and used it as an efficient sulfur host [48]. Sulfur species can be firmly encapsulated in the sandwiches of the 2D carbon material, which acts as a barrier suppressing shuttle effects thanks to its inherent high conductivity and laminar confinement. The neatly aligned rGO nanosheets form sandwiches that limit the diffusion of dissolved lithium polysulfides (LiPSs). The experimental results show that the tortuosity of the rGO affects the inhibition of LiPS diffusion and dissolution: higher electrode tortuosity lengthens the outward mass-transfer paths and inhibits LiPS diffusion from the cathode, as shown in Figure 5a,b. Based on these advantages, the cell achieved an ultra-high areal cathode capacity of 21 mAh cm−2 with a capacity retention of 98.1% after 160 cycles. Following this idea, the core concept of confining sulfur in a conductive matrix can be further applied to the design of 3D framework hosts. rGO also couples well with other materials: Lei et al. [49] designed a mono-dispersed molecular cluster catalyst composite comprising the polyoxometalate framework [Co4(PW9O34)2]10−, shown in Figure 5e, and multilayer rGO. The composite demonstrates efficient polysulfide adsorption and a reduced activation energy for polysulfide conversion, owing to interfacial charge transfer and the exposure of unsaturated cobalt sites, making it highly advantageous as a bifunctional electrocatalyst. Figure 5c,d shows a significant increase in reaction polarization, to 267 mV and 442 mV, for the rGO/S cathode, and only a slight increase in overpotential for the Co4W18/rGO/S cathode. Furthermore, the activation barrier for Li2S is reduced on the Co4W18/rGO/S electrode compared with the rGO electrode, further demonstrating the faster oxidation kinetics of Li2S on the Co4W18/rGO/S cathode.

To improve the performance of LIBs, the key is to develop novel electrode materials that improve the energy density, extend the power capability, and prolong the cycle life [47,50]. Graphene/CNT composites display great merits in the preparation of anode materials for LIBs, and many successes have been achieved by incorporating them. Chen et al. [51] and Li et al. [52] synthesized graphene/CNT composites using CVD-based methods and prepared LIBs based on these composites; the batteries were shown to have greater capacity, cyclability, and rate capability. The results also show that the composite achieves the highest thermal conductivity and temperature-rise inhibition at a graphene/CNT mass ratio of 7/3 [49], indicating great potential in the thermal management of lithium-ion power batteries. The bonding behavior between graphene and carbon nanotubes is of great significance to the electrical properties of the composites.
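As a small worked example of how such cycling numbers are usually interpreted, the sketch below converts the 98.1%-after-160-cycles retention quoted above into an average per-cycle fade rate, assuming (purely for illustration) that the fade is uniform across cycles.

```python
# Convert an end-of-test retention into an average per-cycle retention,
# assuming uniform (geometric) fade -- an illustrative simplification.

def per_cycle_retention(final_retention: float, cycles: int) -> float:
    """Geometric-mean capacity retention per cycle."""
    return final_retention ** (1.0 / cycles)

r = per_cycle_retention(0.981, 160)  # 98.1% after 160 cycles, from the text
print(f"average retention per cycle: {r:.6f}")            # ~0.999880
print(f"average fade per cycle: {(1.0 - r) * 100:.4f}%")  # ~0.0120%
# Under the same uniform-fade assumption, 500 cycles would project to:
print(f"projected retention after 500 cycles: {r ** 500:.1%}")  # ~94.2%
```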
To further improve battery performance, various metal and metal oxide nanoparticles, such as Ni [53], Ge [54], V2O5 [37], SnO2 [55,56], Co3O4 [57], TiO2 [58], and MoS2 [59][60][61], have also been added to graphene/CNT composites to produce anode materials. These nanoparticle/graphene/CNT composites can be used to make anodes for LIBs with enhanced performance. Graphene/CNT-based Si nanocomposites can also improve the performance of the active materials in LIBs. However, these candidates still suffer severe capacity fade due to electrical disconnection and fracture caused by large volume changes over long cycling. Tian et al. [62] therefore designed a novel 3D cross-linked graphene and SWCNT structure to encapsulate Si nanoparticles. The synthesized 3D structure arises from the excellent self-assembly of CNTs with GO and a heat treatment at 900 °C. This special structure provides sufficient void space for the Si nanoparticles to expand and provides channels for ion and electron diffusion. In addition, the cross-linking of graphene and SWCNTs enhances the stability of the structure. As a result, the volume expansion of the Si nanoparticles is constrained, and the specific capacity remains at 1450 mAh g−1 after 100 cycles at 200 mA g−1. This well-defined 3D structure achieves superior capacity and cycle stability compared with mechanically mixed composite electrodes of bare silicon, graphene, single-walled carbon nanotubes, and silicon nanoparticles. In the same year, a porous Si/rGO/CNT composite was developed by facile chemical etching with a self-encapsulating process as an anode material for full-cell LIBs [63]. Moreover, a SnO2@carbon nanotube/reduced graphene oxide (SnO2@CNT/RGO) composite was rationally designed and fabricated [64], in which SnO2 nanoparticles (NPs, 6 nm) are anchored onto a 3D conductive CNT/RGO skeleton by first assembling SnO2 onto CNTs and then entangling the SnO2@CNT nanofibers in 3D graphene networks. The synergistic effect of CNTs and RGO significantly improves the conductivity and prevents aggregation of the active substance. In addition, the mesoporous structures constructed by the CNTs and rGO can accommodate the volume changes of the SnO2 NPs and form more stable SEI layers during repeated discharge/charge processes. Graphene/CNT composites can also promote the cathode function of LIBs [65][66][67][68] and inhibit dendrite formation at the lithium anode [69], providing more opportunities for large increases in battery capacity and energy density. The addition of graphene/CNT composites helps to solve existing problems such as slow reaction kinetics, polysulfide diffusion caused by insulating sulfur, and severe capacity loss, and further promotes the development of LIBs for next-generation energy storage systems.

Silicon is also commonly used as a battery anode material, with a record capacity (about 4000 mAh g−1) more than ten times higher than that of the graphite used in commercial batteries [70]. As a result, silicon has attracted considerable interest in recent years as an anode material for lithium-ion batteries.
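The capacity figures quoted here for silicon (and, below, the 992 mAh g−1 for tin) follow directly from Faraday's law given the lithiated stoichiometry. A short sketch reproducing them; the stoichiometries Li4.4Si, Li4.4Sn, and LiC6 are standard textbook values, not taken from this review:

```python
F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mAh_per_g(n_li: float, molar_mass: float) -> float:
    """Gravimetric capacity = n*F / (3.6*M), in mAh per gram of host."""
    return n_li * F / (3.6 * molar_mass)

# Host stoichiometries (assumed textbook values):
hosts = {
    "graphite (LiC6, per 6 C)": (1.0, 6 * 12.011),
    "Si (Li4.4Si)":             (4.4, 28.086),
    "Sn (Li4.4Sn)":             (4.4, 118.71),
}
for name, (n, M) in hosts.items():
    print(f"{name}: {theoretical_capacity_mAh_per_g(n, M):.0f} mAh/g")
# graphite ~372, Si ~4199, Sn ~993 -- matching the ~4000 and 992 mAh/g
# figures cited in the text.
```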
However, the application of silicon is severely limited by its rapid degradation as an electrode: lithiation/delithiation can cause volume expansion effects of up to 300%, in addition to numerous drawbacks at the electrode/electrolyte interface, such as interfacial instability and the low electrical conductivity of the material. To address these issues, scientists have explored many approaches, most attempting practical solutions through innovative electrode structures or silicon-based composites. These usually require silicon at the nanoscale (nanoparticles, core-shell structures, yolk-shell structures, nanoporous structures, nanowires, nanotubes, nanofibers, films, etc.) [71][72][73][74]. Nanostructured Si-based materials allow for high loading and cycling stability but remain a processing and engineering challenge. Haon et al. [75] designed a Si-nanowires-grown-on-graphite composite (Gt−SiNW) via a simple, scalable one-pot route, as shown in Figure 5f. The uniform distribution of the SiNWs and the ordered arrangement of the graphite flakes prevent electrode pulverization and accommodate volume expansion during cycling, resulting in very low electrode swelling. As shown in Figure 5g, the Gt−SiNW anodes perform well in terms of initial coulombic efficiency (ICE, 72%), cyclability (900 mAh g−1 and 72% capacity retention at 300 cycles), rate capability (1145 mAh g−1 at 2 C), and extended cycling at high rate (629 mAh g−1 after 250 cycles at 2 C). The design overcomes the technical hurdle of severe volume change in Si-rich anodes and exhibits an acceptable 20% electrode expansion after 50 cycles. The study found that graphite plays a key role in maintaining high energy density: it facilitates rapid electron transport, accommodates the volume change during cycling, and thereby improves long-term mechanical integrity.

Metallic tin-based materials have been a promising substitute due to their high specific capacity of up to 992 mAh g−1, suitable lithium-insertion potential, abundant natural resources, low price, non-toxicity, and environmental friendliness. However, tin-based anodes suffer an extreme volume expansion of up to 300% during lithiation (formation of Li4.4Sn). The severe volume change causes significant structural collapse and an unstable SEI, resulting in substantial deterioration of the cycling performance [76]. Similar to the silicon anode, the massive volume change of the Sn anode during lithium insertion and extraction leads to pulverization of the electrode and loss of active material [77]. Li et al. [78] proposed a shell-to-yolks evolution strategy to synthesize a novel structured Sn-based composite that addresses this issue. The as-prepared composite, with multiple Sn cores embedded in one hollow nitrogen-doped carbon sphere, is called multiple-yolks-shelled Sn@nitrogen-doped carbon (MYS@Sn@NxC). The appropriate voids between the Sn particles inside the sphere accommodate the volume change during cycling, while the robust NxC shell maintains a stable electrode structure. Moreover, through the metal-Sn synergistic effect, the volume expansion and rapid capacity decay in LIB applications can be effectively alleviated. Wang et al. [79] rationally designed a Cu-Sn (e.g., Cu3Sn) intermetallic coating layer (ICL) to stabilize Sn through a structural reconstruction mechanism.
The low activity of the Cu-Sn ICL against lithiation/delithiation enables the gradual separation of the metallic Cu phase from the Cu-Sn ICL, which provides a regulatable and appropriate distribution of Cu to buffer the volume change of the Sn anode. The proposed structural reconstruction mechanism is expected to open a new avenue for electrode stabilization in high-performance rechargeable batteries and beyond, and more metal synergies remain to be developed.

Transducer

The burning of fossil energy sources not only harms the environment but also faces exhaustion, so the development of new, green, non-polluting, clean energy sources is key to solving this problem. Water is ubiquitous in the atmosphere and carries a remarkable amount of energy, and with the development of science and technology, obtaining clean energy from water has aroused great interest. Water can interact directly with many functional materials to generate power. However, in the absence of chemisorption, the interaction between water vapor and a solid surface is very weak. To enhance this interaction, it is necessary to change the composition and structure of the surface or to enlarge the effective interaction area. Nanomaterials are not only very small in size but also contain many tiny pore structures, which greatly increase their specific surface area and promote interaction with moisture. Moreover, nanomaterials are highly sensitive to external stimuli and can therefore be functionalized by doping with other elements, changing the surface functional groups, or coupling to an external substrate. Nanomaterials therefore stand out among the materials used for generating electricity from water. The materials typically used for such power generation are mainly carbon nanostructured materials. The most prominent carbon nanomaterial in the field of moist-electric generation (MEG) is graphene, a monolayer of graphite with a honeycomb lattice.
The chemical state of the graphene surface can be modified by oxygen-related functional groups [80]: the graphite is oxidized and exfoliated to give GO, as shown in Figure 6a. In contrast to graphene, GO is decorated with several oxygen-containing functional groups, such as -OH and -COOH. These functional groups increase the reactivity of graphene and greatly improve its hygroscopic and desorption properties on contact with moisture [81]. On hydration, protons (H+) are released from the oxygen-containing functional groups, creating an ion gradient; the protons move from high to low concentration, producing a stable voltage. There are therefore three main strategies to improve the output performance.

Figure 6. Electricity generation from graphene materials with instantaneous output. (a) Synthesis of GO from graphite through Hummers' method; there are many oxygen functional groups on the GO sheets due to the strong oxidization of graphite. Reprinted with permission from Ref. [80]. Copyright 2018, Journal of Physics D: Applied Physics. (b) A large-scale, rollable HEG integration. (c) Schematic drawing of GHEG preparation. The GO film, rGO interdigital electrodes, and circuits were reduced in situ by direct laser writing. A pair of gold electrodes was physically pressed onto the rGO electrodes of the GHEG and a 6 V bias was applied under a high-humidity environment. The oxygen-containing group distribution between the electrodes after the polarization process shows a concentration difference. Reprinted with permission from Ref. [82]. Copyright 2019, Advanced Materials.
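Before turning to these strategies, the origin of the moisture-induced voltage can be illustrated with an idealized concentration-cell (Nernst-type) model. This is only a sketch for intuition, not the authors' model: real MEG output also depends on functional-group dissociation, interface effects, and device geometry.

```python
import math

R, T, F = 8.314, 298.15, 96485.0  # J/(mol K), K, C/mol

def gradient_voltage(c_high: float, c_low: float, z: int = 1) -> float:
    """Idealized Nernst potential (V) for an ion of charge z moving
    down a concentration gradient from c_high to c_low."""
    return (R * T) / (z * F) * math.log(c_high / c_low)

# A 100:1 proton gradient gives ~0.118 V in this idealized picture;
# measured MEG voltages can be larger because real devices are not
# simple concentration cells.
print(f"{gradient_voltage(100.0, 1.0):.3f} V")
```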
(1) Asymmetric treatment of the material structure. As suggested by Yang et al., a potassium hydroxide (KOH) solution is added to the GO solution to introduce a large ionic gradient [81]. Most of the oxygen-containing functional groups of GO are destroyed by reaction with KOH; the GO structure is disrupted, leaving potassium ions (K+) between the lamellae and forming rGO. The GO and rGO are then brought into contact by overlapping. When exposed to moisture, an ionic solution forms within the layered structure, so the K+ ions are distributed asymmetrically throughout the system and move spontaneously from the rGO side to the GO side, generating a stable voltage and current. A graphene hygroelectric generator has also been prepared by laser treatment of graphene; such generators can be folded [82], stretched, or even shaped in three dimensions. The rGO is formed by engraving the GO film using direct laser writing, as in Figure 6b,c. On this basis, the gradient distribution of oxygen-containing groups between the positive and negative poles is set by a moisture-electric annealing polarization process. When the device encounters moisture, the free hydrogen ions released from its oxygen-containing groups form an ion gradient and create an internal concentration difference.

(2) Treatment of functional groups. GO can be not only reduced but also acidified. After acidification, the density of functional groups on GO is adjusted so that the functional groups dissociate more easily, resulting in a larger proton gradient between the upper and lower surfaces of the GO films. In Zhu et al.'s experiment [83], GO/PVA treated with 32% HCl produced a voltage of 0.85 V; evidently, acidification can greatly increase the voltage output, as shown in Figure 7a.

Figure 7 (in part). The GO composite has substantial micropores facilitating water-molecule absorption and an abundant cross-linked network providing ion channels for fast carrier migration. Reprinted with permission from Ref. [84]. Copyright 2019, Energy & Environmental Science.

(3) Composite with other materials.
GO is also frequently combined with other materials to enhance its MEG function. Huang et al. proposed a moisture electric generator based on a porous GO and PAAS composite [84], as shown in Figure 7b. In this material, the large specific surface area and hydrophilic groups work together to enhance water absorption, which substantially promotes ion dissociation and efficient transport. In addition, the heterogeneous structure of the material and the asymmetric metal electrodes allow the system to form a Schottky contact, which facilitates unidirectional ion transport and significantly improves the device performance. Carbon nanotubes can also be combined with GO: a MEG has been fabricated by the end-to-end connection of two GO/CNT composite films with equal but asymmetric regions in a sandwich structure [85]. Proper addition of CNTs helps to create continuous CNT network channels, and a voltage is generated as water flows over the CNT surfaces, improving the output performance. This generator uses exhaled moisture to generate electricity, so electricity can be generated continuously as a person breathes.

Water Evaporation

In recent years, photothermal conversion to obtain drinkable fresh water from abundant seawater has attracted great attention because its energy source, solar energy, is inexhaustible, greener, and more sustainable than other options. Currently, the main factor limiting the application of photothermal conversion to drinking water is the low water evaporation rate. Photothermal materials are the core of such devices, and the factors affecting their performance are light absorption, photothermal conversion efficiency, thermal insulation, and water transport [86,87]. IVA-LD are mainly carbon and semiconductor materials, and both classes have received increasing attention from researchers as excellent photothermal materials. Whereas semiconductor materials require various doping strategies for broadband absorption of sunlight, carbon-based materials with π-π conjugated structures have excellent intrinsic absorption, are inexpensive, and thus hold great application advantages in this regard. Semiconductor materials absorb rapidly when exposed to sunlight, and the absorption of photons leads to electron transitions and relaxation (Figure 8a). During this process, much of the energy is released into the lattice and converted into phonons, which become heat for photothermal conversion. For example, monolithic tin monoselenide (SnSe), with its strong light-matter coupling, wide absorption wavelength range, and outstanding quantum confinement effect, has great potential to harvest solar radiation and convert it into heat [88]. Although silicon has been used in solar power generation owing to its excellent photoelectric properties, it is not well suited as a photothermal material because of its poor absorption of long-wavelength sunlight. Initially, photothermal materials were modified by loading silicon nanoparticles as a modifier [89], achieving a transformation from hydrophilic to hydrophobic, as shown in Figure 8b-d, which improved the absorption of sunlight (Figure 8e).
Then, by doping silicon with gold, silver, and other noble metals [90,91], researchers realized absorption across the full solar band, and these materials were studied as photothermal materials for solar-thermal steam generation. Compared with silicon nanocrystals, germanium nanocrystals have received less attention but offer better photothermal efficiency (Figure 8f). Sun et al. [92] prepared GeO by thermally induced dehydration of Ge(OH)2 to obtain size-controlled Ge nanocrystals (ncGe), verified the superior photothermal performance of Ge nanocrystals over silicon nanocrystals, and extended the work to photothermal water evaporation and seawater desalination.

The natural black color of carbon-based materials allows them to absorb sunlight over a wide wavelength range. The π-π conjugated structure is widespread in carbon-based materials, allowing excited electrons to jump from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO) after absorbing photons and then return to the ground state by releasing heat, as shown in Figure 9a. Compared with semiconductor materials, carbon-based materials have significant advantages such as excellent light absorption and photothermal conversion, easy processing, and low price, and they have a wide range of promising applications in photothermal conversion, evaporation, and desalination of water. In recent years, researchers have conducted extensive and in-depth research on carbon-based materials of different dimensions, such as carbon quantum dots (CQDs), carbon nanotubes, and graphene nanosheets. As an excellent dopant, CQDs were first doped into permeable membranes [93,94], which showed excellent decontamination and purification ability and increased pure-water flux, drawing the attention of researchers. Researchers then introduced CQDs into photothermal devices, improving the photothermal conversion performance of the materials while addressing desalination and decontamination. Chao et al. [95] loaded carbon quantum dots prepared by a hydrothermal method (LCQDs) onto delignified wood (DW) substrates; the sunlight absorption and photothermal performance of the resulting material were improved, as shown in Figure 9b,c. The anisotropic longitudinal and transverse thermal conductivity of macroscopic carbon nanotube arrays [96] makes them natural photothermal materials with excellent thermal management, combining external insulation with internal heat conduction for water evaporation. Chen et al. [97] pioneered the coating of CNTs on chemically treated wood substrates to realize a photothermal device integrating sunlight absorption, heat management, and water transport. Using the unique unidirectional water permeability of an all-fiber structure, Zhu et al. [98] loaded CNTs onto the substrate to control water transport and thereby increase the water evaporation rate, as shown in Figure 9d. Later, Zhao et al. [99] integrated CNTs with a shape-memory polymer (SMP) to achieve flexible, foldable photothermal devices with thermal management of the absorbed sunlight. IVA-LD thus have significant advantages in material integration for improving performance.
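Returning to the absorption-edge point above (why bare silicon leaves part of the solar spectrum unused and why Ge and SnSe reach further into the infrared): the cutoff wavelength follows from the band gap as λ = hc/Eg. A minimal sketch with assumed textbook band gaps; these values are not taken from this review:

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def absorption_edge_nm(band_gap_eV: float) -> float:
    """Longest wavelength a semiconductor can absorb across its band gap."""
    return HC_EV_NM / band_gap_eV

# Assumed, approximate room-temperature band gaps:
for name, eg in [("Si", 1.12), ("Ge", 0.67), ("SnSe", 0.90)]:
    print(f"{name} (Eg ~ {eg} eV): absorbs below ~{absorption_edge_nm(eg):.0f} nm")
# Si's ~1100 nm edge misses much of the solar infrared, which motivates
# the noble-metal doping described above for full-band absorption.
```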
Graphene is an excellent photothermal material owing to its excellent mechanical properties (such as high toughness), thermal conductivity better than that of CNTs, excellent optical properties, and ease of modification and assembly. By introducing thermally responsive PNIPAm into microporous graphene frames, the biomimetic materials prepared by Zhang et al. [100] achieved reversible regulation of pore size and hydrophilicity under different lighting conditions, realizing self-regulation of water supply for evaporation, as shown in Figure 9e. Cui et al. combined graphene photothermal materials with solar cells to exploit a combined photoelectric-thermal effect, and the water evaporation rate reached 2.01-2.61 kg m−2 h−1 at 1 sun, the highest reported at that time. More importantly, the integrated utilization of the photoelectric-thermal effect broadened the application potential of graphene as a photothermal material. In particular, Lu et al. [101] used graphene materials for membrane distillation and successfully developed a catalytic pyrolysis process for preparing ultrathin NG membranes from solid carbon sources. The prepared graphene, with high porosity at atomic-layer thickness combined with its natural hydrophobicity, enables highly selective, high-flux vapor transport through the membrane with a salt rejection of >99.8%. Meanwhile, the photothermal properties of graphene were used to establish a temperature difference of 65/25 °C across the membrane under solar irradiation, achieving a high water-purification flux, much higher than the membrane fluxes (<80 LMH) reported so far in natural mode.

(Figure 8, in part: reprinted with permission from Ref. [89].)

Figure 9. (b) UV-Vis absorption of wood, DW, LCQDs, and LCQDs-DW; the standard AM 1.5 solar spectrum is set as the background, and the visible wavelengths from 380 nm to 780 nm are marked by a green dashed line. Reprinted with permission from Ref. [95]. Copyright 2021, Chemical Engineering Journal. (c) Temperature rise curves for pure water, wood, DW, and LCQDs-DW. Reprinted with permission from Ref. [95]. Copyright 2021, Chemical Engineering Journal. (d) Photographs and SEM images of the nonwoven fabric with MWCNTs (inset: cross-section SEM image of PP/PE fiber). Reprinted with permission from Ref. [98]. Copyright 2021, Advanced Science. (e) Under different intensities of solar irradiation, the water transport channels can be autonomously turned on and off by the opening and closing of microstructures. Reprinted with permission from Ref. [100]. Copyright 2018, Angewandte Chemie International Edition.
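As a rough plausibility check on the evaporation rates quoted above, the sketch below converts a rate into the fraction of 1-sun input carried away as latent heat. The assumptions (latent heat ~2450 kJ kg−1, 1 sun = 1000 W m−2, sensible heating neglected) are illustrative and not from the source.

```python
# Energy-balance check: what fraction of 1-sun input does a given
# evaporation rate represent if all of it goes into latent heat?

H_LV = 2.45e6     # J/kg, assumed latent heat of water near ~45 C
ONE_SUN = 1000.0  # W/m^2

def latent_heat_fraction(rate_kg_m2_h: float) -> float:
    """Latent-heat power of the evaporation rate, relative to 1 sun."""
    return (rate_kg_m2_h / 3600.0) * H_LV / ONE_SUN

for rate in (1.47, 2.01, 2.61):
    print(f"{rate} kg m^-2 h^-1 -> {latent_heat_fraction(rate):.0%} of 1 sun")
# ~1.47 kg m^-2 h^-1 already saturates 1 sun with latent heat alone, so
# the 2.01-2.61 figures imply additional energy input, consistent with
# the combined photoelectric-thermal design described above.
```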
Summary and Prospects

In this review, we systematically introduce IVA-LD, mainly carbon-based and semiconductor nanomaterials, together with their different morphologies and unique properties. The last few years have witnessed a wide range of applications of IVA-LD in energy conversion owing to their unique mechanical, optical, electrochemical, and thermal properties. In addition, recent advances in the synthesis and preparation methods of IVA-LD are reviewed; these new methods have, to a certain extent, promoted the application of IVA-LD in energy conversion. We present in detail the progress of applications in battery energy storage, MEG, and photothermal evaporation.

Group IVA elements with similar valence electron structures exhibit efficient solar-thermal conversion properties. Carbon-based materials in particular have been widely used in batteries, MEG, and photothermal evaporation in recent years thanks to their diverse morphologies and low price, and they are expected to be commercialized in a green way. For batteries, the application of carbon and silicon materials as electrode materials has been explored; doping with certain transition metals significantly improves their performance, and, starting with the cathode materials, inhibiting the dissolution of active species is a direction for future research. In MEG, the voltage and current arise from the directional motion of charged particles driven by concentration gradients; methods to improve the power-generation performance, such as increasing the specific surface area and modifying the surfaces, are also discussed. As the understanding and study of these materials have improved, the voltage output has increased while becoming more stable, and MEG have been widely used in sensors and self-powered electronic devices. For efficient photothermal evaporation, it is important to realize broadband solar absorption, heat insulation management, and water transport management simultaneously in one photothermal material. IVA-LD are easy to process and load, and easy to integrate with other materials while retaining those materials' properties, thereby combining broadband solar absorption, thermal insulation management, and water transport management in a single photothermal material. These characteristics enable biomimetic materials, foldable flexible materials, and devices integrated with solar cells, with potential applications in solar-thermal evaporation.

Although good results have been achieved in batteries, MEG, and photothermal evaporation, many aspects are still imperfect and many problems remain to be solved; here we summarize what can be worked on and the challenges ahead. Increasing the cycle stability and safety, power output, and water yield remains a top priority. While the cycle stability and safety, voltage output, and water yield have been improved, they are nowhere near sufficient for production at scale and still do not meet the most pressing needs. On the one hand, we should further explore the properties of IVA-LD, where combining the pore structures of 3D porous materials into intelligent, integrated materials will be a future direction. On the other hand, it is also important to find new materials for energy conversion: nature offers a wide variety of materials, and many potential resources remain untapped.
The discovery of new materials may lead to new directions for battery energy storage, MEG, and photothermal evaporation.

Conflicts of Interest: The authors declare no conflict of interest.
Identification of Fusarium virguliforme FvTox1-Interacting Synthetic Peptides for Enhancing Foliar Sudden Death Syndrome Resistance in Soybean

Soybean is one of the most important crops grown across the globe. In the United States, approximately 15% of the soybean yield is lost to attacks by various pathogens and pests. Sudden death syndrome (SDS) is an emerging fungal disease caused by Fusarium virguliforme. Although growing SDS-resistant soybean cultivars has been the main method of controlling this disease, SDS resistance is partial and controlled by a large number of quantitative trait loci (QTL). A proteinaceous toxin, FvTox1, produced by the pathogen, causes foliar SDS. Earlier, we demonstrated that expression of an anti-FvTox1 single chain variable fragment antibody resulted in reduced foliar SDS development in transgenic soybean plants. Here, we investigated whether synthetic FvTox1-interacting peptides, displayed on M13 phage particles, can be identified for enhancing foliar SDS resistance in soybean. We screened three phage-display peptide libraries and discovered four classes of M13 phage clones displaying FvTox1-interacting peptides. In vitro pull-down assays and in vivo interaction assays in yeast were conducted to confirm the interaction of FvTox1 with these four synthetic peptides and their fusion-combinations. One of these peptides was able to partially neutralize the toxic effect of FvTox1 in vitro. Possible application of the synthetic peptides in engineering SDS-resistant soybean cultivars is discussed.

Introduction

Sudden death syndrome (SDS) is an emerging disease caused by the fungal pathogen Fusarium virguliforme. Between 1999 and 2004, the average annual yield suppression due to SDS was estimated to be $190 million [1]. The disease was first recorded in Arkansas in 1971 [2]. The pathogen has now been detected in all soybean-growing states of North America [3]. The disease has two components: (i) foliar SDS and (ii) root necrosis, with the major crop losses arising from foliar SDS. F. virguliforme is a soil-borne pathogen. It over-winters in crop residues or soil in the form of chlamydospores that initiate root infection in subsequent years. The pathogen has never been detected in the aboveground diseased tissues. The application of fungicides in furrow during planting or as seed treatments has had little success in controlling this fungal pathogen; similarly, foliar application of fungicides has had little success in controlling the disease, because the foliar symptoms are caused by toxins produced by the pathogen in infected roots [4][5][6][7][8]. F. virguliforme can be maintained in culture media. Earlier, a 17 kDa protein that causes necrosis on detached wounded soybean cotyledons was purified from the F. virguliforme culture filtrate [5]. The pathogen releases a large number of proteins into the culture medium [6]. One of these proteins, FvTox1, has been shown to cause foliar SDS [7]. Investigation of knockout fvtox1 mutants established that FvTox1 is the major toxin for foliar SDS development in soybean [8]. The toxin requires light to cause foliar SDS symptoms [7,9]. Expression of an anti-FvTox1 single-chain variable fragment antibody reduced foliar SDS development in transgenic soybean plants [10]. Growing SDS-resistant soybean cultivars has been the main method of controlling this disease. Unfortunately, the SDS resistance is partial and encoded by a large number of quantitative trait loci (QTL), each conditioning a small effect.
Thus, breeding SDS-resistant soybean cultivars is very challenging, and the creation and application of alternative SDS resistance mechanisms is becoming urgent to complement the partial SDS resistance of soybean cultivars. As foliar SDS is the most important component of the disease, generation of an anti-FvTox1 antibody to neutralize the toxicity of FvTox1 could improve foliar SDS resistance by complementing the partial resistance of soybean cultivars. Unfortunately, the anti-FvTox1 plant antibody designed earlier to enhance foliar SDS resistance in transgenic soybean plants [10] was developed based on mRNAs extracted from a mammalian hybrid cell line; soybeans from such transgenic plants are therefore unsuitable for human consumption.

Like single-chain variable fragment plant antibodies created based on mammalian mRNA molecules, linear peptides also have the ability to bind specifically to target proteins and alter their functions. Compared to macromolecular antibodies, interacting peptides possess several attractive features. For example, they bear high structural compatibility and recognition specificity toward their target proteins. Furthermore, their small size allows peptides to cross cell membranes into intracellular compartments [11]. High structural compatibility and small size make peptides attractive tools for altering the functions of target proteins [11,12]. In vivo and in vitro studies have shown that peptides can block the functions of proteins, including toxins, and inhibit microbial infections [13][14][15][16][17]. A peptide with antibacterial activity has been identified from a phage display library [18]. Peptides can also be used as molecular diagnostic tools based on their binding affinity to certain target proteins or molecules [19][20][21].

Phage display is an extremely powerful strategy for isolating synthetic peptides that specifically bind to target proteins. In this technology, a library of synthetic oligonucleotides is fused to a coat protein gene so that a library of recombinant fusion peptides is displayed on the surface of the engineered bacteriophage for in vitro interaction with the target proteins. For example, in bacteriophage M13, displayed peptides are N-terminal fusions to the minor coat protein pIII, which is involved in adhesion to the bacterial F pilus during infection [22]. Over 50 peptide-based products generated through phage display systems have been approved for clinical use [11]. There are a few examples of peptide discovery to inhibit or monitor plant pathogens, including viruses, bacteria, fungi, and nematodes [19]. Phakopsora pachyrhizi causes Asian soybean rust, a devastating disease in many soybean-growing countries including Brazil and Argentina. Peptides isolated from a phage display library were able to inhibit the growth of P. pachyrhizi germ tubes when mixed with germinating spores [14]. For enhancing the resistance of potatoes to nematodes, a chemoreception-disruptive peptide has been expressed in transgenic plants; these peptides have been shown to suppress nematode parasitism by up to 61% as compared to the non-transgenic control [23]. In vitro, peptides binding to zoospores of the fungal pathogen Phytophthora capsici caused premature encystment of the zoospores [24]. One of the drawbacks of peptides discovered using phage display technology is that they often bind to their targets with low affinities [16]. In animals, peptides are attached to a synthetic scaffold to enhance the potency of peptide binding [25]. In an earlier study, peptides that bound to P.
capsici zoospores were fused to maize cytokinin oxidase/dehydrogenase as a display scaffold. When the peptides were secreted from transgenic tomato roots into the rhizosphere as fusion proteins with the maize cytokinin oxidase/dehydrogenase, they were effective in inducing premature zoospore encystment, deterring zoospores from landing on the root surfaces and resulting in enhanced root resistance to zoospore-mediated infection [26]. Here we investigated whether small, linear synthetic FvTox1-interacting peptides with the ability to neutralize the toxicity of FvTox1 can be identified from M13 phage display libraries for enhancing foliar SDS resistance in soybean. Our study revealed that one FvTox1-interacting peptide was able to partially suppress the toxic effect of FvTox1 in vitro.

Western Blotting

We developed an assay based on western blotting to identify the FvTox1 (His-tagged at the C-terminus)-interacting phage particles, which is described in detail in S1 File.

Expression and Purification of His-Tagged Proteins in E. coli

Single-strand DNA sequences of the four isolated peptides, along with nucleotides encoding the GGGSGGGS linker, were synthesized by Integrated DNA Technologies (Coralville, IA). For constructing peptide fusion genes, the PCR products of the desired single peptides carrying complementary cohesive ends were ligated into the expression vector pRSET (Life Technologies, Carlsbad, CA). The constructed plasmids were sequenced to rule out any mutations. To clone each synthetic gene into the protein expression vector pET41, we designed two primers carrying either a BamHI or an XhoI restriction site (S1 Table). For protein expression, the constructed plasmids were transformed into E. coli BL21(DE3) pLysS cells. When the E. coli BL21(DE3) pLysS cultures reached OD600 0.6, expression was induced with 1 mM IPTG at room temperature overnight. We purified soluble His-tagged proteins using Ni-NTA agarose (Qiagen, Valencia, CA). Individual purified protein samples were filtered through Amicon Ultra-0.5 centrifugal filters with a 3 kDa pore size (EMD Millipore, Billerica, MA). Ten units of thrombin were added to the recombinant protein, which was then filtered through an Amicon Ultra-0.5 centrifugal filter unit with a 30 kDa pore size to obtain GST-tag-free proteins. Purified protein samples were separated on SDS-PAGE gels to determine their purity. Protein concentrations were quantified using protein assay dye reagent concentrate (Bio-Rad Laboratories, Inc., Hercules, CA).

Pull Down Assay

The pull down assay was conducted as suggested earlier [27], with some modifications, and is presented in detail in S1 File.

Yeast Two Hybrid and β-Galactosidase Activity Assay

The synthetic FvTox1-interacting peptide genes were PCR amplified and cloned into the pB42AD vector. FvTox1 was cloned into the pLexA vector. The FvTox1-pLexA plasmid was co-transformed with each synthetic FvTox1-interacting peptide gene in pB42AD into the yeast EGY48 [pSH18-34] isolate, which carries two reporters, LacZ and LEU2. The transformed cells were plated on minimal agar plates (SD/-His/-Trp/-Ura) to select colonies containing both plasmids. To test the activation of both reporter genes (LacZ and LEU2), 5 clones from each transformation were selected to individually inoculate 3 ml of SD/Glucose/-His/-Trp/-Ura liquid medium and grown overnight. Details of the screening protocol and analyses of the clones are presented in S1 File.

Stem Cutting Assay

The stem cutting assay was conducted according to the standard procedure [7].
Details of the stem cutting assays and analyses of the treated plants are presented in S1 File.

Identification of M13 Phage Clones Displaying Putative FvTox1-Interacting Peptides

The target FvTox1 was expressed in and purified from an insect cell line by following a protocol described earlier [7] and stored at -20°C (S1A Fig). The stem-cutting assay was performed to confirm that the purified FvTox1 was functional and could cause the typical interveinal chlorosis symptom. Leaves of the soybean cultivar 'Williams 82' fed with FvTox1 showed the typical interveinal chlorosis (S1B Fig). Three M13 phage display peptide libraries were mixed in equal proportions for panning using FvTox1 immobilized on a 12-well microtiter plate surface (Fig 1A). In order to improve the stringency of panning, the amount of FvTox1 coated onto the plate wells in the second and third rounds was reduced significantly (Table 1). At the same time, the binding time was reduced from 60 to 30 min and the concentration of Tween-20 in the washing buffer was increased from 0.1% to 0.5% (Table 1). The number of eluted phage particles in the third round of panning was increased 1,000-fold compared to that in the second round, suggesting enrichment in FvTox1-interacting M13 phage particles. We conducted western blot analysis to identify the M13 phage clones displaying putative FvTox1-interacting peptides. Over 160 M13 clones were identified in the first round of western blotting. Selected clones were amplified by infecting E. coli ER2738 and plated onto LB agar amended with X-gal/IPTG for the second round of western blotting. Thirty-nine M13-positive clones were chosen from the second round of western blotting for sequencing (Fig 1C-1F).

Classification of the Putative FvTox1-Interacting Phage-Displayed Peptides

Based on the sequences of the displayed peptides, we classified the 35 M13 phage clones identified through western blotting (Fig 1) into four classes. Class I contains 26 M13 phage clones, Class II seven phage clones, and Classes III and IV a single phage clone each (Table 2).

Generation and Expression of Synthetic FvTox1-Interacting Genes

In order to improve the interaction of the four classes of putative FvTox1-interacting peptides (Table 2) with FvTox1, we applied a PCR-based cloning approach to generate nine distinct fusion genes (Fig 2A; Table 3). DNA sequences encoding GGGS linkers were added to the DNA sequences encoding the four classes of peptides (Table 2; S2 Table). At the initial stage, we used the pRSET expression vector, which carries the His and Xpress tags. Expression of five fusion peptides containing two or more of the four M13-displayed peptides was successful in this plasmid vector. We used the pET41 plasmid vector carrying the GST tag to express the four M13-displayed peptides (single peptides). The GST tag was removed from the recombinant proteins through thrombin digestion (Fig 2B; S3 Table).

Interaction of Nine Fusion Peptides with FvTox1

To determine the strength of the interactions of the nine fusion peptides with FvTox1 (Fig 2), we applied two approaches: (i) in vitro pull-down assays and (ii) in vivo interaction in yeast. In the in vitro pull down assays, similar amounts of purified His-tagged peptide/fusion proteins were mixed with GST-tagged FvTox1, and the protein complexes were pulled down using glutathione resin.
Western blot analysis of the pulled-down protein complexes with the anti-His antibody revealed that fusion proteins generated by fusing individual FvTox1-interacting phage-displayed peptides showed improved interaction with FvTox1 as compared to the individual single peptides (Fig 3A; Table 4). In vitro interaction provides only an indication of possible in vivo interactions between two proteins. To gain a better insight into the possible in planta interaction of the FvTox1-interacting peptides with FvTox1, we conducted in vivo interaction studies in yeast. Yeast two-hybrid assays were conducted using the LexA two-hybrid system. FvTox1 was expressed in the pLexA vector as a fusion to the DNA-binding domain of the prokaryotic LexA transcription factor, while the nine peptides (Pep1 through Pep9; S4 Table) were fused individually to the activation domain in the pB42AD plasmid. The interaction of FvTox1 with each of the nine peptides was observed in yeast (Fig 3B). However, quantitative β-galactosidase activity assays indicated that the five fusion proteins generated from the four phage-displayed peptides were not better than their individual progenitor single peptides for interaction with FvTox1 (Table 4). Addition of a cysteine residue on each side of the individual peptides to improve the structures of the FvTox1-interacting peptides also did not improve the interaction strength of the nine peptides with FvTox1 (Table 4).

Biological Activity of the Putative FvTox1-Interacting M13 Phage Displayed Peptides

To investigate whether any of the four single peptides identified through screening of the phage display libraries can suppress foliar SDS symptom development, eight peptides, with and without a His-tag added at the C-terminus of each putative FvTox1-interacting single peptide (PEP1 through PEP4), were examined.

Fig 2. (A) Nine synthetic fusion genes (Table 3). L represents the linker. P1, P2, P3, and P4 are the four peptides PEP1, PEP2, PEP3, and PEP4, respectively, identified from the four classes of phages, Classes I, II, III, and IV, respectively (Table 2). L, linker sequence GGGSGGGSGGGS. (B) The nine purified putative FvTox1-interacting proteins expressed from the nine synthetic genes (A) in E. coli (S2 Table). Arrows show the respective proteins. doi:10.1371/journal.pone.0145156.g002

Individual single peptides were preincubated with the F. virguliforme culture filtrate, which causes foliar SDS in cut soybean seedlings [28]. Pre-incubation of PEP1 with cell-free F. virguliforme culture filtrates significantly reduced foliar SDS symptom development as compared to feeding cut soybean seedlings with the cell-free F. virguliforme culture filtrates alone (Fig 4B and 4C, S3 Fig).

Discussion

Currently, there is no soybean cultivar that is completely resistant to SDS. Expression of plant antibodies against pathogen proteins, designed based on genetic information from mammals, has been shown to protect plants from invading pathogens [29]. We have demonstrated earlier that expression of an anti-FvTox1 single-chain variable fragment antibody neutralizes the toxic effect of FvTox1 and enhances foliar SDS resistance [10].

Fig 3. (A) In vitro pull-down of the nine putative FvTox1-interacting fusion peptides (Fig 2) by FvTox1, which was immobilized on the GST column. The FvTox1-interacting peptides pulled down by FvTox1 were detected with an anti-His antibody. The strengths of the interactions between individual synthetic peptides and FvTox1 are presented in Table 4. (B) In vivo interactions of the nine putative FvTox1-interacting fusion peptides with FvTox1 in a yeast two-hybrid system.
The nine synthetic genes shown in Fig 3A were cloned as fusion genes with the DNA activation domain of the pB42AD plasmid. In nine additional constructs, two cysteine residues were added, one on each side of the nine FvTox1-interacting peptides. β-Galactosidase activities showing the extent of interaction of the individual FvTox1-interacting peptides with FvTox1 are presented in Table 4.

Although our study established that expression of a plant antibody against a pathogen toxin could be a suitable strategy for enhancing resistance against toxin-induced plant diseases, the mouse-based anti-FvTox1 antibody expressed in soybean is not suitable for human consumption. We therefore investigated whether artificial genes encoding FvTox1-interacting peptides can be created to neutralize FvTox1 for enhancing foliar SDS resistance in transgenic soybean lines. Peptides can interact very specifically with proteins [30]. In plants, peptides have been shown to be useful in inhibiting pathogen infection or adhesion to the host [14,21,23,26]. Phage display peptide screening is an ideal approach to identify peptides that bind to a target protein. We screened three M13 phage display peptide libraries (New England Lab, Woburn, MA) and discovered four classes of M13 phage clones encoding FvTox1-interacting peptides. Three classes of clones carry peptides of 12 amino acid (aa) residues; only one class, with a single clone, carries a peptide of seven aa residues. This may suggest that long peptides, rather than short ones, have an advantage in binding to the target, FvTox1. Of the 33 positive M13 clones sequenced, 25 carry the Class I peptide, suggesting that PEP1 probably interacts with FvTox1 more strongly than the other three peptides under the conditions of library screening. Alternatively, it could be due to decreased viability or growth of the M13 clones carrying PEP2, PEP3, and PEP4 as compared to those carrying PEP1 (Table 2). Expression of some displayed peptides as fusion proteins with the M13 pIII protein, which is involved in adhesion to the bacterial F pilus required for host infection, could decrease the infectivity of the M13 phage particles [22].

Fig 4. (A) Chlorotic and necrotic leaf symptoms were recorded on day 8 following feeding of cut soybean seedlings with cell-free Fv culture filtrates that were pre-adsorbed with individual M13 phage-displayed peptides (Table 5). (B) Reduced foliar SDS symptoms were induced in seedlings fed with cell-free Fv culture filtrates pre-adsorbed with PEP1 as compared to cell-free Fv culture filtrates (CF) alone, or CF pre-adsorbed with any of the other three peptides, PEP2, PEP3, or PEP4. (C) Reduced chlorophyll contents in all treatments except the water control and CF pre-adsorbed with PEP1. (D) In vitro pull-down assays of FvTox1 from CF using His-tagged FvTox1-interacting peptides (Table 5). FvTox1 was detected using the anti-FvTox1 antibody [7]. Error bars indicate the standard errors calculated from the means of three biological replications.

The four FvTox1-interacting peptides did not show any conserved residues, suggesting that the peptides may interact with different epitopes of FvTox1 and that fusing the peptides could improve the binding affinity to FvTox1, which was apparent from the pull-down assays of at least a few peptides generated by fusing two or more of the phage-displayed peptides (Fig 3A; Table 4). We applied multiple approaches to confirm the binding affinity of the peptides to FvTox1.
Both the in vitro and in vivo protein-protein interaction studies established that the four isolated peptides are suitable candidates for determining their possible role in neutralizing the FvTox1 toxin to enhance foliar SDS resistance in transgenic soybean plants. Binding of the FvTox1 in cell-free F. virguliforme culture filtrates to each of the four phage-displayed peptides indicated that at least one peptide (PEP1) was able to neutralize the toxic effect of FvTox1 and reduce foliar SDS development to some extent in stem-cutting assays, as compared to the control (Fig 4). We added two cysteine residues, one on each side of the nine synthetic peptides, to improve the binding affinities of the nine fusion peptides to FvTox1 (Fig 3). It was expected that formation of a disulfide bridge between the two added flanking Cys residues could improve the binding affinities of the FvTox1-interacting peptides to FvTox1. When the two flanking Cys residues were added, the in vivo binding affinity of FvTox1 with two (Pep5 and Pep8) of the five fusion peptides in yeast was improved as compared to their corresponding original forms with no Cys residues at their flanking sites (Table 4). On the contrary, reduced interaction strength of FvTox1 with Pep7 carrying the two flanking Cys residues was observed, as compared to that of FvTox1 with Pep7 without them (Fig 3; Table 4). These results suggest that expression of one or more of the 18 FvTox1-interacting peptides could neutralize FvTox1 in planta and enhance foliar SDS resistance in transgenic soybean plants. In mammals, antibodies bind their targets using six complementarity-determining regions [31]; single peptides have weaker affinity than an antibody. In animals, many copies of a peptide are attached to a synthetic scaffold such as a liposome to increase the binding affinity of single peptides to target proteins [16,25]. In plants, peptides can be fused to selected proteins as display scaffolds for delivery to the correct cellular or extracellular spaces [26]. FvTox1 is the major toxin that induces foliar SDS. The toxin protein has been localized to chloroplasts (H.K. Brar and M.K. Bhattacharyya, unpublished), and light is essential for FvTox1-induced foliar SDS symptoms [7,9]. A thioredoxin protein, GmTRX2, localized to chloroplasts, has been identified as the candidate FvTox1-interacting target soybean protein (R.N. Pudake and M.K. Bhattacharyya, unpublished). Targeting the FvTox1-interacting peptides to chloroplasts, using a suitable chloroplast protein such as the FvTox1-interacting GmTRX2 as a display scaffold, could compete with the endogenous FvTox1-interacting protein for FvTox1 binding and thereby suppress foliar SDS development in transgenic soybean plants. If successful, this could be a suitable biotechnological approach for enhancing SDS resistance in soybean.

[Supporting figure legend: FvTox1 (100 ng/μl) was placed on each strip of nitrocellulose membrane and air-dried. C1, a membrane hybridized to M13 phage particles (1×10^14 pfu) in PBS buffer; after overnight incubation of the strip with the M13 phage particles at 4°C, the strip was hybridized to the primary anti-M13 monoclonal antibody and then to an anti-mouse secondary antibody. C2, a strip first hybridized to the anti-FvTox1 monoclonal antibody [7] and then to an anti-mouse secondary antibody (New England Lab, Woburn, MA). P1, M13 phage (#29) containing Pep1; P2, M13 phage (#26) containing Pep2; P3, M13 phage (#24) containing Pep3; P4, M13 phage (#31) containing Pep4 (Table 2).]
[Legend, continued: For hybridization of FvTox1 with M13 phage particles, each strip was immersed in an individual tube containing a selected phage clone at a final concentration of 1×10^14 pfu in PBS buffer. After overnight incubation of the strips with the individual phage particles at 4°C, the strips were hybridized to the primary anti-M13 monoclonal antibody and subsequently to an anti-mouse secondary antibody. (PPTX)]

[S3 Fig. Foliar SDS symptom development by cell-free Fv culture filtrates preincubated with the FvTox1-interacting peptides. Chlorotic and necrotic leaf symptoms were recorded on day 8 following feeding of cut soybean seedlings with cell-free Fv culture filtrates that were pre-adsorbed with individual M13 phage-displayed peptides with no His tags.]
The power analysis technique in determining sample size for military equipment test and evaluation

Test and evaluation is a statutory activity for verifying that a given design of military equipment has met established operational requirements. The determination of its sample size directly affects the scientific rigor and impartiality of test identification. This paper summarizes the basic theory of power analysis, discusses an algorithm for carrying it out, and gives suggestions for setting the parameters of power analysis in different applications.

Introduction
In statistics, power refers to the probability of correctly rejecting the null hypothesis at significance level α when the alternative hypothesis is true, and is usually written as (1 − β) [1], where α and β are the probabilities of type I and type II errors, respectively. Simply put, α, β and (1 − β) are the probabilities of a "wrong judgment", a "missed judgment" and a "right judgment". Power analysis is a statistical technique for studying the relationship between α, β, the sample size N, and the effect size ES [2]. The effect size ES is the actual difference between the true value of the tested index and the comparison value. Although the true value cannot be known before the test, ES does exist and can be estimated from historical or simulation data, so it can be regarded as fixed. Power analysis therefore has three typical applications [3]: first, specify α and β before the test to obtain the required N; second, specify α and N before the test to obtain the actual value of β; third, fix N before the test, state the relative importance of α and β, generally expressed as the ratio q = β/α, and then determine appropriate values of α and β. Because power analysis provides a theoretical basis for determining the sample size, it effectively enhances the scientific rigor and impartiality of test and evaluation. In addition, test and evaluation, as an acceptance procedure for the results of weapon-equipment design, necessarily involves a large amount of hypothesis testing on indicators. Studying the application of power analysis in test and evaluation is therefore of great theoretical and practical significance. In this paper, the right-tailed one-sample t-test [4] is taken as an example to discuss the algorithmic implementation of power analysis, to study the relationships among its parameters, and to give suggestions for practice.

Algorithm implementation
Taking the right-tailed t-test as an example: when the null hypothesis is true, the sampling distribution is the central t-distribution [5], the probability density curve on the left of Fig. 1 (the test statistic t centered at the comparison value); otherwise it is the non-central t-distribution [6], the probability density curve on the right. If α and N are given, a critical value t_c can be determined from the central t-distribution in Fig. 1, and β can then be calculated as the integral of the probability density function of the non-central t-distribution below t_c.
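To make this computation concrete, here is a minimal sketch in Python with SciPy (the original paper gives flowcharts rather than code, so the function name is ours): the critical value t_c comes from the central t-distribution at level α, and β is the mass of the non-central t-distribution that falls below t_c.

```python
from scipy import stats
import numpy as np

def power_right_tailed_t(alpha: float, n: int, effect_size: float) -> float:
    """Power (1 - beta) of a right-tailed one-sample t test.

    effect_size is Cohen's d = (mu1 - mu0) / sigma, so the non-centrality
    parameter of the alternative distribution is d * sqrt(n).
    """
    df = n - 1
    t_c = stats.t.ppf(1.0 - alpha, df)      # critical value under H0
    nc = effect_size * np.sqrt(n)           # non-centrality under H1
    beta = stats.nct.cdf(t_c, df, nc)       # H1 mass left of t_c = P(missed judgment)
    return 1.0 - beta

print(power_right_tailed_t(0.05, 30, 0.5))  # ~0.85, cf. 0.8483 in Table 1
print(power_right_tailed_t(0.05, 60, 0.5))  # ~0.99, cf. 0.9855 in Table 1
```

The two printed values match the Table 1 entries quoted below, which suggests this reading of the algorithm is faithful.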
The symbols in the flowcharts have the following meanings: μ0, mean under the null hypothesis; μ1, mean under the alternative hypothesis; σ, standard deviation of the sample; N, sample size; q, the ratio β/α; tol, the error tolerance of the program; step, the step size of α; pdf, probability density function. "Change sign" in the flowchart means that the tested value changes from negative to positive or from positive to negative.

Sample size N and statistical power (1 − β)
Cohen's d [7] is used as the measure of ES, i.e., d = (μ1 − μ0)/σ. When ES and α remain unchanged, the statistical power (1 − β) increases with the sample size N, as the calculation results in Table 1 show (ES = 0.5, α = 0.05). For example, relaxing the requirement on (1 − β) from 0.9855 to 0.8483 reduces the sample size from 60 to 30. The main reason is that a smaller N gives a smaller non-centrality parameter [6], bringing the t-distribution and the non-central t-distribution closer together, and a smaller N also gives a larger sample standard deviation, i.e., wider probability density curves; both effects enlarge the overlap between the two distribution curves. If α remains constant, this inevitably leads to an increase in β, i.e., a reduction in statistical power (1 − β). Following statistical practice, (1 − β) is generally set to 0.8. Table 1 shows that a value between 15 and 30 is then appropriate for the sample size N (the accurate value is N = 27). However, α cannot be loosened indefinitely. As shown in Fig. 5, when N < 38, α would exceed 0.1, which means that if the design under test does not achieve its intended purpose, more than one of 10 such tests would still probably be "passed". In the procurement of military equipment this risk is often unbearable, because it bears on the success or failure of front-line operations and the lives of soldiers.

Sample size N and effect size ES
As the calculation results in Fig. 6 show, when there is no reason to relax the significance level α and the statistical power (1 − β) is not to be sacrificed, but there is evidence that the effect size ES is larger than previously thought, the required sample size N can also be reduced. For example, when ES changes from 0.5 (medium effect size) to 0.8 (large effect size), the sample size requirement decreases from 27 (rounded up from 26.14) to 12 (rounded up from 11.14). The statistical principle behind this trend is that a larger ES produces a greater separation between the t-distribution and the non-central t-distribution, which reduces their overlap, so a smaller N can satisfy the requirements on α and (1 − β). Taking the graph in Fig. 1 as an example, when N is smaller the curves of the t-distribution and the non-central t-distribution move closer together and become wider, but a larger ES pushes them further apart, offsetting the influence of the reduced N.
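The first application, solving for N given α and a target power, can be sketched by simply increasing N until the power reaches the target; this is the search the flowcharts perform with a tolerance, and it reproduces the values quoted above (N = 27 for ES = 0.5 and N = 12 for ES = 0.8, both at α = 0.05 and power 0.8). The helper power_right_tailed_t from the previous sketch is reused.

```python
def required_sample_size(alpha: float, target_power: float,
                         effect_size: float, n_max: int = 100_000) -> int:
    """Smallest N whose power reaches the target (first application)."""
    for n in range(2, n_max):
        if power_right_tailed_t(alpha, n, effect_size) >= target_power:
            return n
    raise ValueError("no N below n_max reaches the target power")

print(required_sample_size(0.05, 0.80, 0.5))  # 27, the "accurate value" above
print(required_sample_size(0.05, 0.80, 0.8))  # 12, the large-effect-size case
```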
To guarantee the "right judgment" Fix β, increase α For major scientific and technological innovation projects or projects with forward guidance, if the design really meets the needs, and the risk of wrong judgment is tolerable, try to select it. Both Fix both For projects with relatively mature technology or extensive front-line military applications, under the premise of strict quality control, there is evidence to prove that the degree of performance better than the index requirements is greater than previously expected, so the sample size requirements can be reduced without sacrificing the probability of correct selection. Conclusion Taking t test as an example, this paper expounds the basic principle, calculation method and basic application of statistical power analysis in the test of military equipment design, the basic way of adjusting test sample size by using power analysis and the precondition that should be paid attention to are given, which can better serve the related work of equipment design and acceptance。 When analyzing the power of F test, chi-square test and other hypothesis test methods, the basic principle is consistent with the content of this paper, and the solution method, application premise and applicable scenarios are basically similar, therefore, in practical work, the content elaborated in this paper can be adjusted appropriately according to the type of hypothesis test to meet the actual needs.
A Geometric Perspective towards Neural Calibration via Sensitivity Decomposition

It is well known that vision classification models suffer from poor calibration in the face of data distribution shifts. In this paper, we take a geometric approach to this problem. We propose Geometric Sensitivity Decomposition (GSD), which decomposes the norm of a sample feature embedding and the angular similarity to a target classifier into an instance-dependent and an instance-independent component. The instance-dependent component captures the sensitive information about changes in the input while the instance-independent component represents the insensitive information serving solely to minimize the loss on the training dataset. Inspired by the decomposition, we analytically derive a simple extension to current softmax-linear models, which learns to disentangle the two components during training. On several common vision models, the disentangled model outperforms other calibration methods on standard calibration metrics in the face of out-of-distribution (OOD) data and corruption, with significantly less complexity. Specifically, we surpass the current state of the art by a 30.8% relative improvement in Expected Calibration Error on corrupted CIFAR100. Code available at https://github.com/GT-RIPL/Geometric-Sensitivity-Decomposition.git.

Introduction
During development, deep learning models are trained and validated on data from the same distribution. However, in the real world sensors degrade and weather conditions change. Similarly, subtle changes in image acquisition and processing can also lead to distribution shift of the input data. This is often known as covariate shift and will typically decrease performance (e.g., classification accuracy). However, it has been empirically found that the model's confidence remains high even when accuracy has degraded [1]. The process of aligning confidence to empirical accuracy is called model calibration. Calibrated probability provides valuable uncertainty information for decision making. For example, knowing when a decision cannot be trusted and more data is needed is important for safety and efficiency in real-world applications such as self-driving [2] and active learning [3]. A comprehensive comparison of calibration methods has been conducted for in-distribution (IND) data [4]; however, these methods lead to unsatisfactory performance under distribution shift [5]. To resolve the problem, high-quality uncertainty estimation [6,5] is required. Principled Bayesian methods [7] model uncertainty directly but are computationally heavy. Recent deterministic methods [8,9] propose to improve a model's sensitivity to input changes by regularizing the model's intermediate layers. In this context, sensitivity is defined as preserving the distance between two different input samples through the layers of the model. We would like to utilize the improved sensitivity to better detect Out-of-Distribution (OOD) data. However, these methods introduce added architecture changes and a large combinatorics of hyperparameters. Unlike existing works, we propose to study sensitivity from a geometric perspective. The last linear layer in a softmax-linear model can be decomposed into the multiplication of a norm and a cosine similarity term [10,11,12,13]. Geometrically, the angular similarity dictates the class membership of an input, and the norm only affects the confidence in a softmax-linear model.
Counter-intuitively, the norm of a sample's feature embedding exhibits little correlation to the hardness of the input [11]. Based on this observation, we explore two questions: 1) why is a model's confidence insensitive to distribution shift? 2) how do we improve model sensitivity and calibration? We hypothesize that an insensitive norm is in part responsible for bad calibration, especially on shifted data. We observe that the sensitivity of the angular similarity increases with training whereas the sensitivity of the norm remains low. More importantly, calibration worsens during the period when the norm increases while the angular similarity changes slowly. This is a concrete example of the inability of the norm to adapt when accuracy has dropped. Intuitively, training on clean datasets encourages neural networks to output ever larger feature norms to continuously minimize the training loss. Because the probability of the prevalent class of an input is proportional to its norm, larger norms lead to smaller training loss once most training data have been classified correctly (see Sec. 3.1). This renders the norm insensitive to input differences, because the model is trained to always output features with large norms on clean data. While we have put forth that the norm is poorly calibrated, we must emphasize that it can still play an important role in model calibration (see Sec. 4.1). To encourage sensitivity, we propose to decompose the norm of a sample's feature embedding and the angular similarity into two components: instance-dependent and instance-independent. The instance-dependent component captures the sensitive information about the input while the instance-independent component represents the insensitive information serving solely to minimize the loss on the training dataset. Inspired by the decomposition, we analytically derive a simple extension to the current softmax-linear model, which learns to disentangle the two components during training. We show that our model outperforms other deterministic methods (despite their significant complexity) and is comparable to multi-pass methods with fewer training hyperparameters in Sec. 4.1. In summary, our contributions are fourfold:
• We study the problem of calibration geometrically and identify that the insensitive norm is responsible for bad calibration under distribution shift.
• We derive a principled but simple geometric decomposition that decomposes the norm into an instance-dependent and an instance-independent component.
• Based on the decomposition, we propose a simple training and inference scheme to encourage the norm to reflect distribution changes.
• We achieve state-of-the-art results on calibration metrics in the face of corruptions while having arguably the simplest calibration method to implement.

Related Work
Methods dedicated to strengthening calibration can be divided into two camps: multi-pass models and single-pass deterministic models. The current state-of-the-art multi-pass models are Bayesian Monte Carlo Dropout (MCDO) [7] and Deep Ensembles [14]. Bayesian methods are the most principled way to model uncertainty. Instead of optimizing the maximum likelihood for a single set of parameters, Bayesian methods obtain a posterior distribution over possible parameters given a prior distribution over parameters and the data likelihood computed under some assumed process noise. The posterior distribution over parameters captures epistemic uncertainty, the uncertainty due to the limits of what the model knows.
The final predictive distribution is obtained by marginalizing out the model parameters. While Bayesian methods are theoretically sound, they are intractable in practice. Deep Ensembles averages multiple models trained from different random initializations so that they learn different classification functions. A recent trend is to use a single-pass deterministic non-Bayesian model to improve uncertainty estimation. Two recent works, DUQ [8] and SNGP [9], propose to improve the uncertainty-awareness of deterministic networks by improving the networks' sensitivity to input changes. Intuitively, a sensitive model should map samples further from the training data as they become more out-of-distribution. This can be achieved at two levels: the feature level and the output level. At the feature level, both methods regularize the feature extractor (CNN) to prevent feature collapse, the mapping of two different data points to the same embedded vector. This is ensured by input distance awareness, which is equivalent to ensuring bi-Lipschitz continuity [15]. To achieve this, DUQ [8] uses a two-sided gradient penalty [16] and SNGP [9] uses bounded spectral normalization [15]. The output level needs to reflect the changes in feature space, which can be done by adopting distance-aware classifiers: DUQ [8] uses an RBF network with learned centroids for each class and SNGP [9] uses an approximate Gaussian Process layer. We were also inspired by temperature scaling [4], another method for improving calibration, which however fails under distribution shift [5]. Our method does not require input distance awareness and instead leverages geometric intuitions about the output layer, specifically properties of the norm of the input embedding.

Method
Following our hypothesis that the insensitivity of the norm is responsible for bad calibration on distribution-shifted data, we propose geometric sensitivity decomposition (GSD) for the norm. We first introduce the geometric perspective of the last linear layer in Sec. 3.1 and then derive GSD in Sec. 3.2. To improve the sensitivity of the norm and model calibration on shifted data, we propose a GSD-inspired training and inference procedure in Sec. 3.3 and Sec. 3.4.

Norm and Similarity
The output layer of a neural network can be written as a dot product ⟨x, w_y⟩, where x is the embedded input and w_y is the weight vector associated with class y. Though seemingly simple, strong geometric and calibration-related intuitions can be drawn from this. Several prior works [10,12,11] have studied the effects that a decomposition of the last linear layer in a softmax model can have on classification. The output layer can be decomposed into an angular similarity cos φ_y and norms:

⟨x, w_y⟩ = ‖w_y‖₂ ‖x‖₂ cos φ_y, (Eq. 1)

where ‖w_y‖₂ is the norm of a specific classifier in the linear layer. We will use this geometric view of the linear layer instead of the dot-product representation. Based on this perspective, we build on the following observations from prior works [10,12,11]: 1) the probability/confidence of the prevalent class of an input is proportional to its norm [12]; 2) while the norm of a feature strongly scales the predictive probability, due to its unregularized nature the norm is not sensitive to the hardness of the input [11]. In other words, the norm could be the reason for the bad sensitivity of the confidence to input distribution shift. Consequently, the insensitive norm can be causally related to bad calibration.
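As a concrete illustration of the decomposition in Eq. 1, here is a minimal NumPy sketch (variable names are ours, not the authors' code): the logit factors exactly into ‖w_y‖₂ ‖x‖₂ cos φ_y, so the cosine determines the ranking across classes while the embedding norm only scales confidence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=512)                 # feature embedding of one sample
W = rng.normal(size=(10, 512))           # rows w_y: one classifier per class

logits = W @ x                           # <x, w_y> for every class y
w_norm = np.linalg.norm(W, axis=1)       # ||w_y||_2
x_norm = np.linalg.norm(x)               # ||x||_2
cos_phi = logits / (w_norm * x_norm)     # cos(phi_y)

# The factorization is exact: cos_phi fixes the class ranking up to the
# per-class weight norms, while x_norm only sharpens or flattens the softmax.
assert np.allclose(logits, w_norm * x_norm * cos_phi)
```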
We will examine a strong correlation between the quality of calibration and the magnitude of norm in Sec. 4.2. Geometric Sensitivity Decomposition of Norm and Angular Similarity To motivate the subsequent geometric decomposition, we can revisit the softmax model, P (y|x) ∝ exp ( w y 2 x 2 cos φ y ). There are three terms contributing to the magnitude of the exponential function, w y 2 , x 2 and cos φ y . Due to weight regularizations, w y 2 is most likely very small, while cos φ y ∈ [−1, 1]. Therefore, the only way to obtain a high probability/confidence on training data and minimize cross-entropy loss is to 1) push the norm x 2 to a large value and 2) keep cos |φ y | of the ground truth class close to one, i.e., |φ y | close to zero. This is further supported by [17], where it was shown that logits of the ground truth class must diverge to infinity in order to minimize cross-entropy loss under gradient descent. In this process, models tend towards large norms and small angles for all training samples. Therefore, we propose to decompose the norms of features into two components: an instanceindependent scalar offset and an instance-dependent variance factor, which we define in Eq. 2. The role of the instance-independent offset C x is to minimize the loss on the entire training set and the instance-dependent component ∆x accounts for differences in samples. Therefore, if we can disentangle the instance-independent component from the instance-dependent component, we can obtain a norm that is sensitive to the hardness of data. Following this logic, we decompose the norm into two components. Similarly, we relax the angles such that the predicted angular similarity does not need to be close to one on the training data, i.e., making the angles larger. To achieve this, we introduce an instanceindependent relaxation angle C φ and an instance-dependent angle ∆φ y . Analogous to the norm decomposition, the scalar C φ serves solely to minimize the training loss while the instance-dependent ∆φ y accounts for differences in samples. Because we need to account for the sign of the angle, we put an absolute value on it. The ∆x 2 , |∆φ y | are the instance-dependent components and C x , |C φ | are the instance-independent components. We can rewrite the pre-softmax logits in Eq. 1 with the decomposed norm and angular similarity. (Detailed derivation in Sec. A.1 in the Appendix.) We can simplify the equation by assuming cos |φ y | is close to one, which means |φ y | is small. This is due to the fact that |φ y | is the angle between the correct class weight and x, which means as training ensues, the angle converges to 0 and thus the cosine similarity converges to 1. (Please see Sec. A.2 for empirical support.) cos |C φ | sin |∆φ y | sin |C φ | cos |∆φ y | = sin (|∆φ y | + |C φ |) + sin |φ y | sin (|∆φ y | + |C φ |) − sin |φ y | ≈ 1 Therefore, Eq. 4, omitting the absolute value on angles because cos is an even function, simplifies: Because cos C φ and C x are instance-independent, we denote them as α and β respectively. This geometric decomposition of norm and cosine similarity inspires us to include α and β as free trainable parameters in a new network and the network can learn to predict the more inputsensitive ∆x 2 and ∆φ y instead of the original x 2 and φ y . While both the angle and norm can be decomposed we direct the focus to the norm as the angle is already calibrated to accuracy [11]. In other words, angles have been shown to be sensitive to input changes in [11]. 
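Before turning to training, here is a sketch of how a GSD-style head following Eq. 6 could be written (PyTorch, our naming, not the authors' released code). Two assumptions are flagged in comments: the penalty on α is written as (α − 1)², since the paper only states that α is kept close to one, and the classifier weights are unit-normalized, absorbing ‖w_y‖₂, which the paper's Tab. 4 suggests contributes little.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GSDHead(nn.Module):
    """Sketch of a disentangled head: logits = ((1/a)||dx|| + b/a) * cos(dphi)."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.alpha = nn.Parameter(torch.ones(1))    # encodes cos(C_phi)
        self.beta = nn.Parameter(torch.zeros(1))    # encodes the offset C_x

    def forward(self, dx: torch.Tensor) -> torch.Tensor:
        # cos(dphi_y): weights are unit-normalized here (simplifying assumption)
        cos_dphi = F.normalize(dx, dim=1) @ F.normalize(self.weight, dim=1).T
        eff_norm = dx.norm(dim=1, keepdim=True) / self.alpha + self.beta / self.alpha
        return eff_norm * cos_dphi

    def alpha_penalty(self) -> torch.Tensor:
        # assumed quadratic form; the paper only says alpha stays close to one
        return (self.alpha - 1.0).pow(2).sum()

head = GSDHead(512, 10)
logits = head(torch.randn(8, 512))
loss = F.cross_entropy(logits, torch.randint(0, 10, (8,))) + 1e-2 * head.alpha_penalty()
```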
Disentangled Training
Following the derivation of Eq. 6, we replace the norm ‖x‖₂ in Eq. 1 by (1/α)‖Δx‖₂ + β/α and φ_y by Δφ_y; ‖Δx‖₂ and Δφ_y are now learned outputs of the new network:

⟨x, w_y⟩ = ‖w_y‖₂ ((1/α)‖Δx‖₂ + β/α) cos Δφ_y. (Eq. 7)

The new model can be trained using the same procedure as the vanilla network, without additional hyperparameter tuning, architecture changes or extended training time. Even though the outputs of the new network, ‖Δx‖₂ and Δφ_y, only approximate the original geometric relationships through Eq. 6, the effect of α and β reflects the decomposition in Eq. 2 and Eq. 3:
• β encodes the instance-independent scalar C_x of the norm; a larger β corresponds to a smaller instance-dependent component ‖Δx‖₂.
• α encodes the cosine of the relaxation angle C_φ; a larger arccos α corresponds to a larger C_φ and therefore a larger Δφ_y.
Because β absorbs the instance-independent component, the new feature norm ‖Δx‖₂ becomes sensitive to input changes and maps OOD data to lower norms than IND data, as seen in Fig. 3a and 3b. We regularize α such that the instance-independent component C_φ stays small; specifically, we penalize deviations of α from one. We empirically found that a larger relaxation angle C_φ deteriorates performance, because the angular similarity already correlates well with the difficulty of the data [11] and a large relaxation need not be encouraged. Sec. 4.3 will verify this empirically.

Disentangled Inference
The decomposition theory in Sec. 3.2 provides a geometric perspective on the sensitivity of the norm and the angular similarity to input changes and inspires the disentangled model in Sec. 3.3. The new model uses a learnable affine transformation of the norm ‖Δx‖₂; we denote the affine-transformed norm as the effective norm N(Δx) := (1/α)‖Δx‖₂ + β/α. However, training only separates the sensitive components of the norm and angular similarity; the model can still be overconfident due to the remaining insensitive components. We can therefore improve calibration by modifying the insensitive components, e.g., β in our case. We propose a two-step calibration procedure that combines in-distribution calibration (Fig. 1b) and out-of-distribution detection (Fig. 1c), based on two observations: 1) overconfident IND data can easily be calibrated on a validation set, similar to temperature scaling [4]; 2) for OOD data, without access to an OOD calibration set, the best strategy is to map them far away from the IND data, given that the model clearly distinguishes them. The first step is calibrating the model on an IND validation set (note that our method does not rely on OOD validation data), similar to temperature calibration [4]. However, instead of tuning a temperature parameter as shown in Fig. 1a, we simply tune the offset parameter β on the validation set in one of two ways: 1) grid search minimizing Expected Calibration Error (see Sec. 4), or 2) SGD optimization minimizing Negative Log Likelihood [4]. Because these are post-training procedures, both methods are very efficient. We denote the new parameter as β′. As shown in Fig. 1b, by changing the offset we decrease the magnitude of the norms after the affine transformation. Formally,

N′(Δx) = (1/α)‖Δx‖₂ + β′/α. (Eq. 8)

The second step approximates the calibrated affine mapping of Eq. 8 by a non-linear function that covers a wider range of the effective norm, as shown in Eq. 9, and maps OOD data further away from IND data. Intuitively, when a sample is more likely IND, the non-linear function maps it closer to the calibrated transformation.
When a sample is OOD, the non-linear function maps it more aggressively to a smaller magnitude, exponentially away from the IND samples:

N̂(Δx) = (1 − e^(−c‖Δx‖₂)) N′(Δx), (Eq. 9)

where c is a hyperparameter that can be calculated as in Eq. 10. The non-linear function grows exponentially close to the calibrated affine mapping of Eq. 8, dictated by the factor 1 − e^(−c‖Δx‖₂), as shown in Fig. 1c. Therefore, e^(−c‖Δx‖₂) can be viewed as an error term quantifying how close the non-linear function is to the calibrated affine function of Eq. 8. Let μ_x and σ_x denote the mean and standard deviation of the distribution of the norms of IND sample embeddings, calculated on the validation set. We use the heuristic that, evaluated one standard deviation below the mean, i.e., at ‖Δx‖₂ = μ_x − σ_x, the approximation error satisfies e^(−c(μ_x − σ_x)) = 0.1, which gives

c = −ln(0.1) / (μ_x − σ_x). (Eq. 10)

Even though the error threshold is a hyperparameter, an error of 0.1 led to state-of-the-art results across all models tested. In summary, the sensitive norm ‖Δx‖₂ is used both as a soft threshold for OOD detection and as a criterion for calibration. While similar post-processing calibration procedures exist, such as temperature scaling [4] (illustrated in Fig. 1a and further introduced in A.9), temperature scaling only provides good calibration on IND data and offers no mechanism to improve calibration on shifted data [5]. Our calibration procedure can improve calibration on both IND and OOD data, without access to OOD data, because the training method extracts the sensitive component in a principled manner. Just as with temperature scaling, the non-linear mapping needs to be calculated only once and adds no computation at inference.

Experiments
Calibration is evaluated with Negative Log Likelihood (NLL [18]), Brier [19] and Expected Calibration Error (ECE [20]) scores; the goal is for our model to produce values close to 0 in these metrics, which corresponds to maximal calibration. Please refer to Sec. A.3 (Appendix) for a more detailed discussion of these metrics. Following prior works [9,8,5], we use CIFAR10 and CIFAR100 as the in-distribution training and testing datasets, and apply the image corruption library provided by [1] to benchmark calibration performance under distribution shift. The library provides 16 types of noise with 5 severity scales. In this section, we show that our model outperforms other deterministic methods (despite their significant complexity) and is comparable to multi-pass methods.

Compared Methods
We compare to several popular state-of-the-art models, including stochastic Bayesian (multi-pass) methods, Deep Ensemble [14] and MC dropout [7], and recent deterministic (single-pass) methods, SNGP [9] and DUQ [8].

Experiments on Calibration
Results: In Tab. 1 and 2, we compare our model to the most recent state-of-the-art deterministic methods, SNGP and DUQ, using Wide ResNet as the model backbone, with each model evaluated as the average over 10 seeds. We report accuracy, ECE and NLL on clean and corrupted CIFAR10/100 datasets [1]. Our method outperforms all single-pass methods on calibration when data is corrupted, and even surpasses ensembles on error metrics for corrupted data. We evaluated two versions of our model: Grid Searched, which grid-searches β on the validation set to minimize ECE, and Optimized, which optimizes β on the validation set via gradient descent to minimize NLL for 10 epochs, similar to temperature scaling. We report additional results with ResNet18 in Sec. A.4 and Sec. A.5 (Appendix), with image noise and rotation respectively.

Generalizability: We explored how generalizable our method (Grid Searched) is by applying it to 12 different models and 4 different datasets in Tab. 3.
We can see consistently that our model had stronger calibration across all models and metrics, including models known to be well calibrated, like LeNet [22]. All models were tested on the CIFAR10C and CIFAR100C datasets offered by [1], in which the original CIFAR10 and CIFAR100 are pre-corrupted; these were used for consistent corruption benchmarking across all models. All non-CIFAR datasets were corrupted via rotation from 0 to 350 degrees in 10-degree steps, and the average calibration and accuracy were taken across all degrees of rotation.

[Figure 2: Accuracy (a), ECE (b), norm (c) and cosine similarity (d) on the CIFAR100 validation set with clean and Gaussian noise, for a vanilla ResNet during training. In the shaded region, the increase in norm is responsible for the increase in ECE because the cosine similarity is relatively flat. Throughout training, the sensitivity of the cosine similarity improves while that of the norm remains low.]

Our models included DenseNet [23], LeNet [22] and 6 varying sizes of ResNet, as described in [24]. The datasets we experimented on were CIFAR10 [25], CIFAR100 [25], MNIST [26], SVHN [27], CIFAR10C [1] and CIFAR100C [1]. We report Optimized results in Tab. 14 in A.7 (Appendix); both tuning methods yield similar performance.

Qualitative Comparison: The current state-of-the-art single-pass models for inference on OOD data, without training on OOD data, are SNGP [9] and DUQ [8]. Their primary disadvantages are: 1) Hyperparameter combinatorics: both DUQ and SNGP require many hyperparameters, as shown in Tab. 13 in A.6 (Appendix). Our model has only one hyperparameter, tuned post-training, which is quicker and less costly than the other methods that require pre-training tuning. 2) Extended training time: DUQ requires a centroid-embedding update every epoch, while SNGP requires sampling potentially high-dimensional embeddings of training points, increasing training time; our model trains in the same amount of time as the model it is applied to. Bayesian MCDO [7] and Deep Ensemble [14] are considered the current state-of-the-art methods for multi-pass calibration. Bayesian MCDO requires multiple passes with dropout during inference. Deep Ensembles requires N times the number of parameters of the single model it ensembles, where N is the number of models ensembled. The main disadvantage of multi-pass models is high inference complexity, while our model adds no overhead computation at inference.

Importance of the Norm: While we have shown and conjectured that the norm of x is uncalibrated to OOD data and not always well calibrated to IND data, one might suggest simply removing the norm. Tab. 4 shows that, though the norm is uncalibrated, it is still important for inference. We trained ResNet18 on CIFAR10 and then ran inference with ResNet18 modified in the following ways: dividing out the norms of the weights for each class, dividing out the norm of the input, and dividing out both. The weight norm contributes minimally to inference, as accuracy decreased by only 0.03% without it, and, as previous work has shown, the angle dominates classification. With ‖x‖₂ removed, the entropy is at its highest while calibration is very poor, implying the distribution is much more uniform when it should be peaked (a larger entropy implies a more uniform distribution). Thus the root of the issue lies not in the existence of the norm, but in its lack of sensitivity.
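To make the two-step inference calibration of Sec. 3.4 concrete, here is a sketch under our reading of Eq. 8-10 (NumPy; function names are ours): fit_c implements the e^(−c(μ − σ)) = 0.1 heuristic, and the non-linear mapping damps small norms toward zero while recovering the calibrated affine mapping near the IND norm range.

```python
import numpy as np

def fit_c(ind_norms: np.ndarray, err: float = 0.1) -> float:
    """Solve exp(-c * (mu - sigma)) = err on IND validation norms (Eq. 10)."""
    mu, sigma = ind_norms.mean(), ind_norms.std()
    return -np.log(err) / (mu - sigma)

def calibrated_effective_norm(dx_norm: np.ndarray, alpha: float,
                              beta_prime: float, c: float) -> np.ndarray:
    affine = dx_norm / alpha + beta_prime / alpha       # Eq. 8, with tuned beta'
    return (1.0 - np.exp(-c * dx_norm)) * affine        # Eq. 9

# Small norms (likely OOD) are pushed exponentially toward zero, flattening
# the softmax; norms near the IND mean recover the calibrated affine value.
```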
Reasons for Bad Calibration under Distribution Shift
To identify the cause of bad calibration, we record the accuracy, ECE, norm and cosine similarity of a vanilla ResNet model during training. Specifically, we record the evaluation statistics on clean data and on data corrupted with Gaussian noise on CIFAR100. Fig. 2a and 2b show the accuracy and ECE respectively. We observe that evaluation on Gaussian-noise-corrupted data yields lower accuracy and higher ECE compared to evaluation on clean data. This demonstrates that the model's confidence fails to adapt to the decreasing accuracy. Fig. 2c and 2d show the change of the average norm and average cosine similarity throughout training; the difference between Gaussian-noised data and clean data is also reported. We observe that the norms of clean data and Gaussian-noised data are close and their difference remains constantly low, whereas the cosine similarities of the two diverge with training. This indicates that the sensitivity of the cosine similarity increases whereas the sensitivity of the norm remains low with training. In the shaded region of Fig. 2b-2d, where ECE increases the most, we observe that the norm also increases but the cosine similarity increases only slowly. The observation also holds for other noises and architectures. We further present the Pearson correlation between ECE and the cosine similarity or norm for 4 models and 3 noises in Tab. 5. A large correlation coefficient indicates a higher positive correlation. The norm is consistently positively correlated with ECE whereas the similarity is either negatively or not correlated with ECE. This shows that the worsening of ECE (large ECE) is correlated with the increasing norm. Based on supporting literature [12,11] and this correlation, the observation supports the conjecture that the insensitivity of the norm is responsible for bad calibration. In the first set of experiments, we show that α and β reflect the effects of the geometric decomposition claimed in Sec. 3.2, using different α-β configurations. From Fig. 4a-4d, we observe that the norm decreases linearly with β for fixed α. From Fig. 4e-4h, we observe that the angle increases linearly with arccos(α). The observations are consistent with the original geometric motivation: β encodes an instance-independent portion, C_x, of the norm, so as β increases, C_x increases and the magnitude of the dependent component ‖Δx‖₂ decreases linearly; α encodes the cosine of the relaxation angle C_φ, so as arccos(α) increases, the resulting angle Δφ increases linearly due to the increased relaxation angle encoded by α.

Empirical Support for the Disentangled Training
In the second set of experiments, we show that the new model effectively increases the sensitivity of both the norm and the angle to input distribution shift, as claimed in Sec. 3.3. Specifically, we measure the OOD detection performance of the models using both the norm and the cosine similarity with the Area Under the Receiver Operating Characteristic (AUROC) curve metric. We use CIFAR10/100 as the IND data and SVHN [27] as the OOD data. In Tab. 6a and 6b we show two configurations of models in addition to vanilla ResNet18: α-regularized, where we regularize α such that it stays close to one as described in Sec. 3.3, and α-unregularized, where we optimize both α and β freely without constraints. Compared to vanilla ResNet, the norms predicted by our models achieve significant improvement in separating IND data from OOD data.
Additionally, we visualize the distribution of norms in Fig. 3a and 3b. The separation between IND and OOD data increases significantly compared to vanilla ResNet18. However, a large α (see α-unregularized in Tab. 6a and 6b) leads to only marginal improvement in cosine-similarity sensitivity on CIFAR10 and CIFAR100. This indirectly confirms our observations in Sec. 4.2 and in prior works [11] that cosine similarity already correlates well with distribution shift; introducing further angle relaxation might not always be beneficial. While we mainly focus on calibration, our method also strengthens its base model's ability for OOD detection. The assumption that OOD data have smaller norms is based on the expectation that a model should be less confident on OOD data. Practically, the norm acts as a temperature in the softmax, as shown in Eq. 1. Intuitively, a larger ‖x‖₂ always yields more peaked/confident predictions, and a smaller ‖x‖₂ always yields flatter predictive distributions. Therefore, we expect less confident data, such as OOD data, to have smaller ‖x‖₂, because we expect the output distribution to be flatter. The assumption is supported by the following empirical evidence: in Tab. 7 we show the norms of in-distribution and out-of-distribution data on CIFAR10 using ResNet50-GSD (ours), with the OOD data produced by the 15 corruptions used in the paper. OOD data have consistently smaller norms, and accuracy decreases with decreasing norm, with a Pearson correlation of 0.9, as an indicator of being more out-of-distribution.

Conclusion
In this paper, we studied the geometry of the last linear decision layer and identified the insensitivity of the norm as the culprit of bad calibration under distribution shift. To encourage sensitivity, we derived a general theory to decompose the norm and angular similarity. Inspired by the theory, we proposed a simple yet very effective training and inference scheme that encourages the norm to reflect distribution changes. The model outperforms other deterministic single-pass methods on calibration metrics with far fewer hyperparameters. We also demonstrated its superior generalizability on a variety of popular neural networks. Note that our problem and method have positive societal impact, as calibration under shift improves the overall confidence and robustness of these models.

A.1 Extended Derivation for Equation 4
In the main paper, we proposed to decompose the norm and angular similarity into instance-independent and instance-dependent components: ‖Δx‖₂ and |Δφ_y| are the instance-dependent components and C_x, |C_φ| are the instance-independent components. We can rewrite the pre-softmax logits in Eq. 1 with the decomposed norm and angular similarity. We can simplify the equation by assuming cos|φ_y| is close to one, which means |φ_y| is small; this is because |φ_y| is the angle between the correct class weight and x, which converges to 0 as training ensues, so the cosine similarity converges to 1 (see Sec. A.2 for empirical support). The key step uses the ratio

(sin(|Δφ_y| + |C_φ|) + sin|φ_y|) / (sin(|Δφ_y| + |C_φ|) − sin|φ_y|) ≈ 1, with sin(|Δφ_y| + |C_φ|) = cos|C_φ| sin|Δφ_y| + sin|C_φ| cos|Δφ_y|,

which holds because sin|φ_y| ≈ 0.

A.2 Small Angle Assumption in Equation 5
One reason for the small-angle assumption in Eq. 5 is the observation that high-capacity models tend to be more miscalibrated [4], and our method is especially more effective in this case.
When a model is sufficiently high-capacity compared to the diversity of the dataset, the small-angle assumption is empirically more valid and the method can provide more significant improvement. All ResNet models are high-capacity deep models, and the corresponding cosine similarity to the true class is close to one during training, as assumed in Sec. 3.2. Tab. 8 shows the average cosine similarity to the ground-truth class on the training data.

A.3 Definitions of Metrics
The problem tackled in this paper is supervised image classification in the face of noise. Assume data points X_i ∈ X, i ∈ [1, N], each associated with a label Y_i ∈ Y = {1, ..., K}. Our model M produces M(X_i) = (Ŷ_i, P̂_i), where Ŷ_i is the class prediction and P̂_i is the probability/confidence given by the model, which should be as close as possible to the ground-truth distribution P(Y_i|X_i). Ideally, P̂_i is well calibrated, meaning it represents the likelihood of the true event Ŷ_i = Y_i. Perfect calibration [4] can be defined as:

P(Ŷ = Y | P̂ = p) = p, for all p ∈ [0, 1].

Ways of evaluating calibration are as follows.

A.3.1 Expected Calibration Error (ECE)
Expected Calibration Error [20] evaluates calibration by calculating the difference in expectation between the confidence and the accuracy:

E_P̂ [ |P(Ŷ = Y | P̂ = p) − p| ].

This can also be computed as the weighted average of the bins' accuracy/confidence differences:

ECE = Σ_m (|B_m| / n) · |acc(B_m) − conf(B_m)|,

where n is the total number of samples and B_m is the set of samples whose confidence falls into bin m. Perfect calibration is achieved when confidence equals accuracy in every bin, and then ECE = 0.

A.3.2 Negative Log Likelihood (NLL)
A way to measure a model's probabilistic quality is Negative Log Likelihood [18]. Given a probabilistic model P̂(Y|X) and N samples, it is defined as:

NLL = − Σ_{i=1}^{N} log P̂(Y_i | X_i),

where P̂ is the predicted distribution of the ground truth P and Y_i is the true label for input X_i. NLL belongs to the class of strictly proper scoring rules [28]; a scoring rule is strictly proper if it is uniquely optimized by the true distribution. NLL is the negative logarithm of the probability of the true outcome: if the true class is assigned a probability of 1, NLL attains its minimum value of 0.

A.3.3 Brier
The Brier score [19] measures the accuracy of probabilistic predictions. Across all N predicted items, the Brier score is the mean squared difference between the predicted probability assigned to each possible outcome and the actual outcome:

BS = (1/N) Σ_{t=1}^{N} Σ_{i=1}^{R} (f_ti − o_ti)²,

where R is the number of possible classes, N is the overall number of instances across all classes, and f_ti is the forecast probability for the actual outcome o_ti in one-hot encoding. The Brier score can be intuitively decomposed into three components (uncertainty, reliability and resolution [29]) and it is also a proper scoring rule.
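As a minimal sketch of the binned ECE of A.3.1 (our implementation, using equal-width confidence bins; the bin count is a free choice):

```python
import numpy as np

def expected_calibration_error(conf: np.ndarray, correct: np.ndarray,
                               n_bins: int = 15) -> float:
    """conf: max softmax probability per sample; correct: 1 if prediction == label."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.sum() / n * gap       # weighted |acc(B_m) - conf(B_m)|
    return ece
```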
A.4 Calibration in the Face of Differing Levels of Noise
We report additional ECE, NLL and Brier calibration results in the face of different levels of corruption using ResNet18 in Tab. 9, 10 and 11 respectively. The CIFAR10 and CIFAR100 validation sets were corrupted using a library of common corruptions [1] with 5 levels of severity. Across all levels of corruption, our model consistently had the stronger Brier score on CIFAR100 and much stronger ECE and NLL on CIFAR10.

A.5 Calibration in the Face of Rotation
In Tab. 12a and 12b, we rotated the CIFAR10 and CIFAR100 validation sets by 0 to 350 degrees in 10-degree steps; the calibration metrics and accuracy were then averaged. For each model, 5 seeds were trained; for MCDO, 5 passes were done per model at inference with a dropout rate of 50%, as suggested in the original paper; and 5 models were ensembled for Deep Ensemble. β for our models was 4 on CIFAR10 and 10 on CIFAR100.

A.6 Qualitative Comparison: Extended Discussion
GSD vs. Single-Pass Models: The current state-of-the-art single-pass models for inference on OOD data, without training on OOD data, are SNGP [9] and DUQ [8]. Their primary disadvantages are: 1) Hyperparameter combinatorics: both DUQ and SNGP require many hyperparameters, as shown in Tab. 13; SNGP requires the most of all the single-pass models. The large combinatoric scale, in addition to the fact that these hyperparameters must be tuned via pre-training grid search, makes these methods costly, as a full training run of multiple epochs is required before calibration can be evaluated. Our model has only one hyperparameter, tuned post-training with 1 epoch on the validation set. 2) Extended training time: DUQ requires a centroid-embedding update every epoch, while SNGP requires sampling potentially high-dimensional embeddings of training points for generating the covariance matrix, as well as updates to the bounded spectral norm on each training step, thus increasing training time; our model trains in the same amount of time as the model it is applied to.
GSD vs. Multi-Pass Models: Bayesian MCDO [7] and Deep Ensemble [14] are considered the current state-of-the-art methods for multi-pass calibration. Bayesian MCDO requires multiple passes with dropout during training and inference in order to achieve stronger calibration. Deep Ensembles requires training and storing N times the parameters of the single model being ensembled.

A.7 Generalizability: Extended Table
We explored how generalizable our method is by applying it to 12 different models and 4 different datasets in Tab. 14. We report results for both variants of our model: Grid Searched, which grid-searches β on the validation set to minimize ECE, and Optimized, which optimizes β on the validation set via gradient descent to minimize NLL for 10 epochs, similar to temperature scaling. We can see consistently that our model had stronger calibration across all models and metrics, including models known to be well calibrated, like LeNet [22]. All models were tested on the CIFAR10C and CIFAR100C datasets offered by [1], in which the original CIFAR10 and CIFAR100 are pre-corrupted; these were used for consistent corruption benchmarking across all models. All non-CIFAR datasets were corrupted via rotation from 0 to 350 degrees in 10-degree steps, and the average calibration and accuracy were taken across all degrees of rotation. Our models included DenseNet [23], LeNet [22] and 6 varying sizes of ResNet, as described in [24]. The datasets we experimented on were CIFAR10 [25], CIFAR100 [25], MNIST [26], SVHN [27], CIFAR10C [1] and CIFAR100C [1].

A.8 Training Parameters and Dataset License
We train all our models using stochastic gradient descent for 200 epochs with a batch size of 128 on RTX 2080 GPUs. We use a starting learning rate of 0.1 and a weight decay of 5.0e-4. For ResNet18 experiments, we use a cosine learning-rate scheduler. For Wide ResNet-20-10 experiments, we use a step scheduler that multiplies the learning rate by 0.2 at epochs 60, 120 and 160.

A.9 Introduction to Temperature Scaling
Temperature scaling is a simple form of Platt scaling [30]. It uses a scalar T to adjust the confidence of the softmax probabilities in a classification model.
Following the notation from the main paper, let l denote the logits. The temperature scalar is applied to all classes as follows:

P̂ = softmax(l / T), i.e., P̂_i = exp(l_i / T) / Σ_j exp(l_j / T).

As described in Fig. 1a, the temperature effectively changes the slope of ‖x‖₂ from 1 to 1/T. The temperature parameter is optimized by minimizing negative log likelihood on a validation set while freezing all other model parameters [4]. Temperature scaling calibrates a model's confidence on IND data and does not change accuracy. However, it does not provide any mechanism to improve calibration on a shifted distribution and is inferior to other uncertainty estimation methods in terms of calibration [5].
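A sketch of how the temperature described above is typically fit (PyTorch with LBFGS on validation NLL; variable names are ours): a single scalar T is optimized on held-out logits, with all network weights frozen.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> float:
    log_t = torch.zeros(1, requires_grad=True)    # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return float(log_t.exp())

# At test time: probs = softmax(logits / T). Accuracy is unchanged because
# dividing by T > 0 preserves the argmax; only the confidence is rescaled.
```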
Predicting In-hospital Mortality in Iranian Patients with Spontaneous Intracerebral Hemorrhage

Background: Intracerebral hemorrhage (ICH) is the most fatal subtype of stroke. Despite limited effective therapy, there is no accepted clinical grading scale to predict in-hospital mortality, especially in developing nations. The purpose of this study was to assess the predictors of in-hospital mortality in a sample of Iranian patients with spontaneous ICH for use at the time of the first evaluation. Methods: Demographic, clinical, and laboratory data of ICH patients were collected. Hematoma volume and perihematoma edema (PHE) were measured on brain computed tomography (CT) scans of the sacroiliac-free head using the ABC/2 formula. Logistic regression analysis was performed to determine independent variables contributing to in-hospital mortality. Results: Of a total of 167 consecutive ICH patients, 98 met the inclusion criteria. The mean ± standard deviation age of the patients was 70.16 ± 12.52 years. After multivariate analysis, five variables remained as independent predictors of in-hospital mortality: age (OR = 1.12), diabetes mellitus (OR = 10.86), NIHSS score (OR = 1.41), hematoma volume (OR = 1.10), and PHE (OR = 0.75). Conclusion: Our results indicate that older age, diabetes mellitus, higher NIHSS, larger hematoma volume, and smaller PHE on admission are important predictors of in-hospital mortality in our ICH patients.

Introduction
Spontaneous intracerebral hemorrhage (ICH) is the most fatal stroke subtype worldwide, caused by spontaneous vascular rupture due to hypertension or amyloid angiopathy. [1-3] The overall 30-day mortality rate for patients with intracranial hemorrhage is estimated to be 30-55%, of which about half die within the first 48 hours. [4-7] ICH accounts for 10-20% of all strokes in the United States and between 20% and 30% in Asia. [8] Predicting the risk of mortality following ICH is useful for determining prognosis, standardizing assessments, and selecting the optimal treatment option. [9,10] Moreover, accurate prediction of ICH outcome would assist both families and physicians in deciding whether patients need to be transferred to an extended-care facility. [11] Although several grading scales have been proposed to predict mortality and outcome after ICH, an optimal scale for predicting in-hospital mortality that has been widely incorporated into clinical practice is lacking. [9,11-14] The absence of such a standard grading scale has led to the inclusion of heterogeneous patient samples in randomized clinical trials and to inconsistency in the clinical management of ICH patients. [9,10] Although several studies on prognostic scales for ICH outcome have been conducted in western countries, [9,11-14] fewer studies have evaluated the effect of prognostic indicators on in-hospital mortality. Moreover, few studies have attempted to investigate factors related to in-hospital mortality in developing countries, where socioeconomic, geographic, and ethnic/racial characteristics are different. [15,16] Therefore, this study was performed to assess easily identifiable predictors of in-hospital mortality in a sample of Iranian patients with spontaneous ICH for use at the time of the first evaluation.

Materials and Methods
This prospective study was conducted at Poursina Teaching Hospital in Rasht, Iran, which serves as the main tertiary referral center for stroke in the province, from January 2010 to the end of January 2011. The study was approved by the Human Research Committee at the Guilan University of Medical Sciences (GUMS) (Rasht, Iran).
All patients presenting to the emergency department (ED) within the first 24 h of acute onset of a focal neurological deficit, whose brain computed tomography (CT) scan on admission was compatible with a diagnosis of ICH, were included in the study. Exclusion criteria were: (1) evidence of head trauma; (2) concomitant epidural or subdural hematoma; (3) history of stroke, bleeding-tendency disorders, dementia, cancer, or any other severe concomitant illness; (4) secondary ICH (e.g., vascular malformations, aneurysm, tumor, trauma, vasculitis, etc.); (5) neurosurgical intervention; and (6) transfer to another facility. All patients were initially managed in the ED according to the American Heart Association/American Stroke Association guideline for the initial management of cerebrovascular accident (CVA). [17] A non-contrast spiral brain CT scan was performed within the first 6 h of admission using 7-10 mm slice thickness in the supratentorial regions and 5 mm slice thickness in the posterior fossa, scanned without gap. The aim of the study was explained to the patients or their relatives, and written informed consent was obtained before participation. Demographic and clinical information, including age, sex, smoking, and history of hypertension, diabetes mellitus, coronary artery disease, medication use, and other concomitant major illnesses, was collected upon admission. Level of consciousness and severity of stroke were evaluated on arrival at the ED using the Glasgow Coma Scale (GCS) and the National Institutes of Health Stroke Scale (NIHSS), respectively. Two neurologists who were blinded to the radiological data collected the patients' demographic and clinical data. Baseline hematoma volume and perihematoma edema (PHE) volume, measured using the ABC/2 method, [18] location of ICH, presence of intraventricular hemorrhage (IVH), and subarachnoid extension of the hematoma were recorded. The outcome was defined as in-hospital mortality. All images were reviewed by a neurologist and an expert radiologist who were blinded to the patients' identity and clinical status; they measured the hematoma volume and PHE volume, and the consensus measurements were used. All statistical analyses were performed using SPSS for Windows 17.0 (SPSS Inc., Chicago, IL, USA). The independent variables used in the analysis of ICH outcome were chosen from the medical literature. [9] The Kolmogorov-Smirnov test was applied to assess normality of the data distribution. The chi-square test was used to compare categorical variables. Student's t-test was used when data were normally distributed; otherwise, the Mann-Whitney U test was employed. Spearman's test was used to determine the correlation between hematoma volume and PHE. Variables with a P-value less than 0.100 in the univariate analysis were considered eligible for inclusion in the final multivariate model. Multiple logistic regression analysis using a backward likelihood-ratio method was performed to create a prediction model of outcome. The goodness of fit of the model was evaluated by the Hosmer-Lemeshow test. The level of significance was set at P < 0.050.
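For readers unfamiliar with it, the ABC/2 method cited above approximates the hematoma as an ellipsoid on CT: A is the largest hematoma diameter, B the diameter perpendicular to A on the same slice, and C the vertical extent (number of slices showing hematoma times slice thickness). A tiny sketch (measurements in cm, volume in cc; the example values are hypothetical):

```python
def abc_over_2(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Ellipsoid approximation of hematoma volume in cc (cm^3)."""
    return a_cm * b_cm * c_cm / 2.0

# e.g. a 5 x 4 x 3 cm bleed is estimated at 30 cc,
# close to the cohort mean reported in the results below.
print(abc_over_2(5.0, 4.0, 3.0))  # 30.0
```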
Of these, 6 patients were excluded from the study because they lacked complete baseline data, 5 patients underwent neurosurgical intervention, 21 presented to the ED more than 24 h after symptom onset, 12 had concomitant major abnormal findings other than ICH on brain CT scan, and 14 had a positive history of diseases consistent with the exclusion criteria. In addition, 9 patients were excluded during the in-hospital follow-up period, of whom 5 were diagnosed as having secondary ICH, 2 had radiologic evidence of head trauma, and 3 were discharged against medical advice and continued their treatment in a private hospital. Finally, the remaining 98 patients constituted the population of our study. All patients were treated conservatively. Conservative management included airway protection, stabilization of vital signs, control of hypertension, and treatment of complications such as increased intracranial pressure. The demographic and clinical characteristics, as well as radiologic findings of the patients and the association of each variable with in-hospital mortality, are given in table 1. Mean age of patients was 70.16 ± 12.52 years, the average time between the onset of symptoms and admission was 7.05 ± 4.75 h, mean GCS was 11.00 ± 4.04, and mean NIHSS score was 16.44 ± 9.75. The male to female ratio was 1.08/1.00. The prevalence of diabetes mellitus, hypertension, and coronary artery disease among ICH patients was 27.6%, 63.3%, and 21.4%, respectively. IVH was seen in 42.9% of patients. Mean hematoma volume and mean volume of PHE were 29.38 ± 23.09 and 23.29 ± 7.60 cc, respectively. The most frequent locations of hematoma were the thalamus (34.7%), basal ganglia (32.7%), cerebral hemispheres (20.4%), cerebellum (8.2%), and brainstem (4.1%). The overall in-hospital mortality rate in this study was 30.6%. Of these deaths, 40.0% occurred in the first 2 days of hospitalization. On univariate analysis, deceased patients were older (P < 0.001) and had significantly lower GCS (P < 0.001), higher NIHSS score (P < 0.001), and larger PHE (P = 0.006) and hematoma volume (P < 0.001) than those who survived. Moreover, diabetes mellitus [OR = 2.9, 95% confidence interval (CI) = 1.16-7.48, P = 0.020], IVH (OR = 3.16, 95% CI = 1.29-7.74, P = 0.010), and location of hematoma (P = 0.001) were significantly associated with increased risk of in-hospital mortality (Table 1). Spearman's rank test showed that hematoma volume was significantly correlated with PHE (P < 0.001, r = 0.736). The results of multivariate analysis using a logistic regression model are summarized in table 2. The whole model was significant (P < 0.001) and the overall accuracy of the model was 92.9%. The Hosmer-Lemeshow test demonstrated a very good fit of the model (P = 0.999). Presence of IVH and GCS no longer remained significant after adjustment for the other confounding variables. Furthermore, location of the hematoma was not associated with increased risk of mortality, even after exclusion of four patients with brainstem hemorrhage and redefinition of the supratentorial hemorrhages as both hemispheric and thalamic hemorrhages in reanalysis. 
After adjustment for potential confounding factors, five variables remained as significant predictors of in-hospital mortality: diabetes mellitus (OR = 10.86, 95% CI = 1.08-109.24, P = 0.009), NIHSS score (OR = 1.41, 95% CI = 1.08-1.68, P ≤ 0.001), volume of hematoma (OR = 1.10, 95% CI = 1.03-1.17, P = 0.003), PHE (OR = 0.75, 95% CI = 0.60-0.93, P = 0.010), and age (OR = 1.12, 95% CI = 1.03-1.23, P = 0.009). Thus, patients with diabetes mellitus were 10.86 times more likely to die than those without diabetes mellitus, and each 10-year increase in age increased the odds of death 3.3-fold. Moreover, for every 10-point increase in NIHSS score, the odds of death increased nearly 32-fold. Finally, each 10 cc increase in hematoma volume increased the odds of death 2.7-fold, while for every 10 cc decline in PHE the odds of death increased 16.7-fold. Discussion Developing a standard clinical grading scale has an essential role in the triage, assessment, and treatment of patients with ICH and in designing clinical trials. 9 To date, numerous clinical studies have sought to determine the prognostic factors of outcome after ICH and have proposed several grading scales in different populations. 9,11-14,19 However, few studies have attempted to investigate factors related to in-hospital mortality, especially in developing countries. The goal of our study was to investigate independent prognostic factors of in-hospital mortality in Iranian patients with spontaneous ICH for use at the time of the first evaluation in the ED. In contrast to most previous studies, 11-14 which did not consider the time of CT scanning, we only included patients who presented to our hospital within 24 h of symptom onset and whose CT scan was performed within 6 h of admission. It is, therefore, likely that our results would generalize to more typical cases of ICH. We found that older age, diabetes mellitus, neurological impairment according to NIHSS score, larger hematoma volume, and smaller PHE at admission are five important predictors of in-hospital mortality in our patients with ICH. However, we did not identify any association between in-hospital mortality and other variables, including history of hypertension, gender, and location of hemorrhage. Even after exclusion of brainstem hemorrhages and redefinition of the supratentorial hemorrhages as hemispheric and thalamic hemorrhages, location of the hematoma did not remain significant in the final regression model. Previous studies have proposed several predictors for mortality of ICH patients. 12,14-16 Consistent with most previous studies, 12,14-16 older age was an important predictor of in-hospital mortality in our study on both univariate and multivariate analysis. The results of the current study also support the finding by others 15,20 that diabetes mellitus is independently associated with a high mortality rate in ICH patients, albeit with a rather wide CI, so this association needs to be interpreted with caution. The results of our study also support the findings of previous studies 21,22 that greater neurological severity, as assessed by NIHSS, is associated with poor outcome. However, contrary to the findings of Ruiz-Sandoval et al., 14 GCS was not associated with in-hospital mortality in our study when adjusted for other potential variables. 
The advantage of NIHSS over GCS in predicting outcome following ICH could be explained by the fact that NIHSS not only has a wider spectrum than GCS for assessing neurological dysfunction but is also able to measure the level of consciousness. 23,24 By contrast, GCS, which is a reliable tool for assessing severity and level of consciousness, especially following traumatic brain injury, is not an indicator of overall neurological status. 21,22 This study confirmed previous research findings 12,14,15,25 indicating that increased hematoma volume is associated with mortality in ICH patients. However, contrary to most prior studies, 10,14,15 IVH was not an independent predictor of mortality when adjusted for other factors. The prognostic significance of early PHE on the clinical outcome of ICH patients is controversial in the literature. 26-28 Arima et al. 27 showed that neither absolute nor relative increase in PHE volume during the first 72 h was associated with death or 90-day functional outcome when adjusted for age, sex, randomized treatment, and baseline hematoma volume. Surprisingly, although PHE was directly associated with in-hospital mortality on univariate analysis, we found that decreased PHE volume was significantly associated with higher in-hospital mortality of ICH patients on multivariate analysis. This difference might reflect the confounding influence of other variables. Similar to previous studies, 27 This study had some limitations: (1) Patients with secondary ICH and pre-existing disability were excluded, so we cannot generalize our results to all ICH patients. (2) In contrast to most previous studies, we considered in-hospital mortality as our outcome measure, in order to evaluate the performance of the hospital stroke team and to eliminate the potential effects of socioeconomic status, as well as other patient- and family-related factors, on later clinical outcomes. (3) Because our study was performed in a teaching hospital that serves as a tertiary referral center for stroke patients, we might have selected patients with lower socioeconomic status at presentation and also during the follow-up period, as we lost three patients who decided to leave our hospital to continue treatment in a private hospital. Therefore, potential referral and selection bias is the third limitation of this study. (4) We lacked data on some other factors, such as the use of anti-platelets and anti-coagulants, baseline laboratory values, as well as CT evidence of midline shift, hydrocephalus, and herniation, which may restrict the generalizability of our findings. (5) Although the ABC/2 method has been validated for the estimation of ICH volume, recent studies have shown that this technique is not accurate enough in measuring the volume of hematoma and PHE. (6) Our sample size was relatively small, which may have limited our statistical power to detect all of the variables associated with in-hospital mortality. Conclusion Our results indicate that age, diabetes mellitus, NIHSS score, hematoma volume, and PHE can predict the risk of in-hospital mortality at presentation in patients with spontaneous ICH.
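The fold-change statements in the Results follow from raising each per-unit odds ratio to the power of the increment size. A minimal check in Python, using the rounded ORs reported above (small discrepancies from the published folds reflect rounding of the ORs):

```python
# Fold change in odds for a k-unit increase, given a per-unit odds ratio.
def fold_change(or_per_unit: float, k: float) -> float:
    return or_per_unit ** k

print(round(fold_change(1.12, 10), 1))        # age, +10 years   -> 3.1 (reported 3.3)
print(round(fold_change(1.41, 10), 1))        # NIHSS, +10 points -> 31.1 (reported ~32)
print(round(fold_change(1.10, 10), 1))        # hematoma, +10 cc -> 2.6 (reported 2.7)
print(round(1.0 / fold_change(0.75, 10), 1))  # PHE, -10 cc      -> 17.8 (reported 16.7)
```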
2018-05-08T17:49:29.292Z
0001-01-01T00:00:00.000
{ "year": 2014, "sha1": "fd0e80a96529984384dd63d3aeafbea2c12c440a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "fd0e80a96529984384dd63d3aeafbea2c12c440a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237214122
pes2o/s2orc
v3-fos-license
Nintedanib-Induced Renal Thrombotic Microangiopathy Nintedanib is a unique tyrosine kinase inhibitor used to suppress fibrosis in patients with idiopathic pulmonary fibrosis (IPF). Nintedanib has been shown to suppress multiple processes of fibrosis, thereby reducing the rate of lung function decline in patients with IPF. Since vascular endothelial growth factor is one of this agent's targets, nephrotoxicity, including renal thrombotic microangiopathy (TMA), is a possible major adverse effect. However, only 2 previous cases of nintedanib-induced renal TMA have been published. Our patient was an 83-year-old man with IPF. As adverse effects including liver enzyme level elevation, diarrhoea, anorexia, and nephrotoxicity developed, the nintedanib dosage was reduced after 9 months. The digestive symptoms resolved promptly, but the proteinuria and reduced kidney function remained. Although the kidney injury had improved to some extent, we performed a percutaneous renal biopsy. The biopsy revealed typical TMA findings such as microaneurysms filled with pale material, segmental double contours of glomerular basement membranes, and intracapillary foam cells. After discontinuation of nintedanib, the patient's nephrotoxicity improved. Nintedanib-induced renal TMA is reversible and is possibly dose-dependent. Here, we report the clinical course of our case and review the characteristics of nintedanib-induced renal TMA. Introduction Idiopathic pulmonary fibrosis (IPF) is an interstitial lung disease characterised by the progressive loss of pulmonary function and the resultant degradation of quality of life. Nintedanib inhibits vascular endothelial growth factor receptors (VEGFR), platelet-derived growth factor receptors, and fibroblast growth factor receptors and has been approved for the treatment of IPF since 2014 [9]. Nintedanib was initially developed for treating cancer, but the attenuation of fibrosis in a rat model of bleomycin-induced lung fibrosis altered its indications [10]. This agent has shown efficacy in reducing the rate of lung function decline in patients with IPF by suppressing multiple processes of fibrosis. Only 2 previous cases of nintedanib-induced renal TMA have been reported [11,12]. Here, we report the clinical course of our case and review the characteristics of nintedanib-induced renal TMA. Case Report An 83-year-old Japanese man with proteinuria, hypoalbuminemia, and kidney dysfunction was referred to our hospital. The detailed laboratory data upon presentation are shown in Table 1. Approximately 1 year and 3 months before the current evaluation, he had experienced a dry cough and was diagnosed with IPF. One year and 2 months before, nintedanib 150 mg twice daily had been initiated, and a urine test strip showed urinary protein (±). Although mild liver enzyme elevation occurred 4 days after nintedanib treatment was initiated, it returned to the normal range within a few days of treatment with ursodeoxycholic acid. Diarrhoea and anorexia occurred 3 months later and continued for 6 months, although the patient's signs and symptoms of IPF had improved. During this period, the serum albumin decreased gradually (shown in Fig. 1); however, the urinary protein was not evaluated. The patient's diarrhoea and anorexia resolved during a 10-day discontinuation of nintedanib, so nintedanib was then restarted at 100 mg twice daily. Since the hypoalbuminemia had gradually deteriorated, urinary protein was checked 1 month later, at which time the urine protein/creatinine (Cr) ratio was 7.90 g/g Cr (normal range: <0.15 g/g Cr). 
However, subsequent ratios were approximately 2 g/g Cr. Additional medications administered to the patient included doxazosin mesylate 2 mg twice daily, nifedipine controlled-release 20 mg twice daily, telmisartan 80 mg, bisoprolol 2.5 mg, furosemide 40 mg, and trichlormethiazide 1 mg. We performed a percutaneous renal biopsy, and periodic acid-Schiff staining showed evidence of renal TMA (shown in Fig. 2a): typical microaneurysms filled with pale material, segmental double contours of glomerular basement membranes, and intracapillary foam cells [13]. Direct immunofluorescence identified non-specific deposits of immunoglobulin (Ig) A, IgM, and C1q, and no deposits of IgG or C3. Electron microscopy revealed microaneurysmal dilatation of a capillary loop with marked widening of the subendothelial spaces (Fig. 2b). Because these findings strongly indicated nintedanib-induced TMA, we consulted a pulmonologist and elected to discontinue the treatment. Although the nephrotic syndrome resolved within 1 year, the renal dysfunction persisted (Fig. 1). The patient's hypertension also deteriorated on nintedanib and subsequently improved without it. Discussion This case indicates that nintedanib-induced TMA might be reversible and dose-dependent. Hence, regular urine tests to monitor urinary protein are important. In this case, nintedanib induced hypertension, proteinuria, nephrotic syndrome, and eventually renal TMA. In many reports of other tyrosine kinase inhibitors, the drug discontinuation criterion was proteinuria ≥ grade 2 (2+ or 3+ on urine test strips; 1.0-3.5 g/day). We discontinued the agent because of the evidence of nephrotic syndrome and renal TMA in the patient. The patients in both previously published cases of nintedanib-induced TMA recovered with discontinuation of the drug [11,12]; however, the discontinuation of nintedanib must be carefully discussed with a pulmonologist. Although diarrhoea and anorexia continued during the course of nintedanib 150 mg twice daily, these digestive symptoms improved after reduction to 100 mg twice daily. The hypoalbuminemia also improved with this reduction. This course indicates that the adverse effects of nintedanib are potentially dose-dependent. Therefore, the agent could have been
2021-08-20T05:22:32.757Z
2021-07-22T00:00:00.000
{ "year": 2021, "sha1": "ad6063d6b2d8a2e920e9360c7c413c9bfa946048", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/517692", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ad6063d6b2d8a2e920e9360c7c413c9bfa946048", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
17077054
pes2o/s2orc
v3-fos-license
Patient Outcomes Are Better for Unruptured Cerebral Aneurysms Treated at Centers That Preferentially Treat with Endovascular Coiling: A Study of the National Inpatient Sample 2001–2007 BACKGROUND AND PURPOSE: Practice patterns vary widely among centers with regard to the treatment of unruptured aneurysms. The purpose of the current study was to correlate outcome data with practice patterns, specifically the proportion of unruptured aneurysms treated with neurosurgical clipping versus endovascular coiling. MATERIALS AND METHODS: Using the NIS, we evaluated outcomes of patients treated for unruptured aneurysms in the United States from 2001 to 2007. Hospitalizations for clipping or coiling of unruptured cerebral aneurysms were identified by cross-matching ICD codes for diagnosis of unruptured aneurysm with procedure codes for clipping or coiling of cerebral aneurysms. Mortality and morbidity, measured as “discharge to long-term facility,” were evaluated in relation to the fraction of cases treated with coiling versus clipping as well as the annual number of unruptured aneurysms treated by individual hospitals and individual physicians. RESULTS: Markedly lower morbidity (P < .0001) and mortality (P = .0015) were noted in centers that coiled a higher percentage of aneurysms compared with the proportion of aneurysms clipped. Multivariate analysis showed that greater annual numbers of aneurysms treated by individual practitioners were significantly related to decreased morbidity (OR = 0.98, P < .0001), while the association between morbidity and the annual number of aneurysms treated by hospitals was not significant (OR = 1.00, P = .89). CONCLUSIONS: Centers that treated a higher percentage of unruptured aneurysms with coiling compared with clipping achieved markedly lower rates of morbidity and mortality. Our results also confirm that treatment by high-volume practitioners is associated with decreased morbidity. Studies of outcomes for unruptured aneurysms treated in the United States between 1996 and 2000 demonstrated that patients treated by high-volume hospitals and physicians had significantly lower morbidity and modestly lower mortality than those treated by low-volume hospitals and physicians. 1,2 During the time studied by these previous publications, coiling was not as widely used as it is today. It is, therefore, important to reassess the relative risks of coiling and surgery at high- and low-volume centers with the latest data available to understand recent trends. It would also be important to know if centers that have avidly adopted endovascular coiling have different outcomes from those that have not. The purpose of the current study was to correlate outcome data available from the NIS database (2001-2007) with practice patterns, specifically the proportion of unruptured aneurysms treated with open clipping versus endovascular coiling. Patients We purchased the NIS hospital discharge database for 2001-2007 from the HCUP of the Agency for Healthcare Research and Quality, Rockville, Maryland. The NIS is a hospital discharge database that represents 20% of all inpatient admissions to nonfederal hospitals in the United States. Stratification of Hospital Volume Hospital codes for each patient were available, so we were able to determine the number of unruptured aneurysms treated at each institution in a given year. 
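A minimal sketch of that per-hospital, per-year counting with pandas, followed by assignment to the four hospital-volume groups defined in the passage that follows. The column names (hospital_id, year, procedure) are illustrative; the real NIS files use their own variable names:

```python
import pandas as pd

# Illustrative patient-level records, one row per hospitalization.
df = pd.DataFrame({
    "hospital_id": [101, 101, 101, 202, 202, 303],
    "year":        [2001, 2001, 2001, 2001, 2001, 2001],
    "procedure":   ["clip", "coil", "coil", "clip", "clip", "coil"],
})

# Annual volume per hospital, kept separate for clipping and coiling.
volume = (df.groupby(["hospital_id", "year", "procedure"])
            .size()
            .rename("n_cases")
            .reset_index())

def hospital_stratum(n: int) -> str:
    # Four-group hospital stratification described in the text below.
    if n <= 5:
        return "<=5"
    if n <= 20:
        return "6-20"
    if n <= 44:
        return "21-44"
    return ">44"

volume["stratum"] = volume["n_cases"].map(hospital_stratum)
print(volume)
```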
We stratified hospital volume on the basis of the number of unruptured aneurysms clipped and the number of unruptured aneurysms coiled per year. For each year between 2001 and 2007, we stratified the hospitals included in the NIS database into 4 groups based on annual volume: 1) hospitals treating ≤5 unruptured aneurysms, 2) hospitals treating 6-20 unruptured aneurysms, 3) hospitals treating 21-44 unruptured aneurysms, and 4) hospitals treating >44 unruptured aneurysms. Hospitals were assigned a separate stratification for the number of aneurysms clipped and the number of aneurysms coiled. Stratification of Physician Volume Physician identifiers for each patient were available, thus allowing us to determine the number of unruptured aneurysms treated by each interventionalist or neurosurgeon at each institution in a given year. For each year between 2001 and 2007, we stratified the physicians included in the NIS database into 4 groups based on annual volume: 1) physicians treating ≤5 unruptured aneurysms, 2) physicians treating 6-10 unruptured aneurysms, 3) physicians treating 11-20 unruptured aneurysms, and 4) physicians treating >20 unruptured aneurysms. Physicians were assigned a separate stratification for the number of aneurysms clipped and the number of aneurysms coiled. Data Collection The major demographic factors we collected were age, race, and sex. The 2 major end points examined in this study were 1) discharge to long-term facility, which we use to define "morbidity" in the context of the current study; and 2) in-hospital mortality. Discharge to long-term facility was studied by using the HCUP variable name "DISPUNIFORM." In-hospital mortality was studied by using the binary HCUP variable name "DIED" and calculating the number of patients who had died during their hospital stay. Statistical Analysis For the purposes of statistical analysis, we summed the data from 2001 to 2007 according to stratification. Chi-square tests were used to compare categorical variables, and 1-way analysis of variance was used to compare continuous variables. For determining predictors of death and discharge to other than home, we performed a multivariate logistic regression analysis by using the variables of age, sex, race, treatment technique, hospital volume, and physician volume. ORs are presented as unit ORs for continuous variables such as age, hospital volume, and physician volume (ie, the OR is presented as per change in regressor during each year in the case of age and each patient treated in the case of physician/hospital volume). The weights provided in the NIS database were not applied to statistical analysis; thus, our study represents data only from hospitals that participated in the NIS database during this time period. All statistical analysis was performed by using the SAS-based statistical package JMP (www.jmp.com). Patients Between 2001 and 2007, a total of 10,644 patients in the NIS database underwent treatment for unruptured aneurysms. Of these patients, hospital volume data were available for 10,624, with 5219 (49%) patients undergoing surgical clipping and 5405 (51%) undergoing endovascular coiling. The average age of the patients was 54.7 ± 12.6 years; 7942/10,580 (75%) of the patients were women. Race information was available for 7168 patients: 5535 were white. Patients undergoing coiling were significantly older than those undergoing clipping (56.1 ± 13.2 versus 53.5 ± 11.6 years, P < .0001). 
There was no significant difference in the race and sex distributions between the 2 groups. Data on demographics and the number of patients treated for given hospital and physician volumes are provided in Table 1. Relative Coiling Volume and Outcomes A strong relationship existed between the proportion of aneurysms coiled at an institution and both morbidity and mortality. As the proportion of aneurysms coiled increased, the rate of discharge to long-term facilities decreased significantly (r = 0.14, P < .0001). At centers that coiled >75%-99% of unruptured aneurysms, the rate of discharge to long-term facility was 5.9% (128/2158), compared with 16.8% (303/1195) at centers that coiled 0% of aneurysms. Patient mortality also decreased as the proportion of aneurysms coiled at an institution increased (r = 0.11, P < .0001). Centers that coiled 0% of aneurysms had a mortality rate of 2.0% (21/1195), while centers that coiled 75%-99% of aneurysms had a mortality rate of only 0.5% (11/2158) (P = .0015). Centers that coiled 100% of unruptured aneurysms had the best outcomes, with only 4.3% (27/627) of patients discharged to long-term facilities and a mortality rate of 0.5% (2/627). These trends are demonstrated in Fig 1. Hospital Volume and Distribution of Procedures A majority of patients (3233/5219, 62%) who underwent clipping of their unruptured aneurysms did so at centers treating ≤20 cases/year, whereas only 39% (2131/5405) of patients who underwent coiling did so at centers treating ≤20 cases/year. Overall, patients who were coiled tended to be treated at higher volume coiling centers, whereas patients who were clipped tended to be treated at lower volume clipping centers (P < .0001). These data are summarized in Table 1. Between 2001 and 2007, there was a large decline in the proportion of patients being coiled at low-volume centers (≤20 cases per year) compared with the proportion treated at high-volume centers (Fig 2). In 2001, 53% of coiled patients were treated at low-volume centers (≤20 cases per year), whereas in 2007, 6.0% were coiled at low-volume centers. In 2001, 29% of clipped patients were being clipped at low-volume centers, whereas in 2007, only 18% were being clipped at low-volume centers. When assessing the distribution of clipping and coiling in relation to center volume, we found that for centers that practice both clipping and coiling, there was no significant difference in the distribution of clipping and coiling volumes (P = .08). These data are summarized in Fig 3. Annual Number of Unruptured Aneurysms Treated versus Death Rate and Discharge Status of Hospitals For patients being clipped, a total of 63/5202 (1.2%) patients died during their hospitalization. There was no significant association between the number of patients who died and the hospital clipping volume (P = .14). For patients being coiled, a total of 43/5417 (0.8%) patients died during their hospitalization. No significant association was found between the number of patients who died and hospital coiling volume. Patients who were clipped were significantly more likely to die during their hospitalization than patients who were coiled (P = .03). For patients being clipped, a total of 14.1% (735/5202) of patients were discharged to long-term facilities. There was a significant association between hospital clipping volume and discharge to long-term facilities (P < .0001) because larger volume centers had a lower proportion of patients discharged to long-term facilities than low-volume centers. 
Hospitals clipping ≤5 unruptured aneurysms per year discharged 19.2% (237/879) of patients to long-term facilities, while centers clipping ≥45 unruptured aneurysms per year discharged 11.1% (116/1042) of patients to these facilities. For patients being coiled at all centers, 5.0% (270/5417) were discharged to long-term facilities. Again, a significant association was noted between hospital volume and the proportion of patients not being discharged to home. Centers that coiled ≤5 unruptured aneurysms per year discharged 7.4% (37/499) of patients to long-term facilities, and centers coiling ≥45 unruptured aneurysms per year discharged 4.3% (78/1826) of patients to these facilities (P < .0001). Patients who were coiled were significantly less likely to be discharged to long-term facilities than those who were clipped (P < .0001). These data are summarized in Table 2. Physician's Annual Number of Unruptured Aneurysms Treated versus Death Rate and Discharge Status For patients being clipped, no significant association existed between the physician's volume of clipped unruptured aneurysms per year and the death rate (P = .27). There was a significant association between the physician's volume of clipped unruptured aneurysms per year and patient discharge status because physicians with the lowest volumes (≤5 cases/year) had 16.8% (248/1074) of patients discharged to long-term facilities, whereas physicians with the highest volumes (>20 cases per year) had only 11.1% (74/669) of patients discharged to long-term facilities (P = .001). For patients being coiled, there was a significant association between the death rate and the physician's annual volume of coiled unruptured aneurysms. Practitioners who coiled ≤5 unruptured aneurysms per year had a death rate of 1.5% (19/1239), compared with physicians who coiled >20 unruptured aneurysms per year, who had a death rate of 0.3% (3/1121) (P = .0005). There was also a significant association between the physician's volume and the proportion of patients being discharged home because patients treated by physicians coiling ≤5 unruptured aneurysms per year had a discharge to long-term facility rate of 6.6% (82/1239), while those who were treated by the highest volume physicians had a discharge to long-term facility rate of 3.3% (37/1124) (P < .0001). These data are summarized in Table 3. Predictors of Discharge to Other than Home and Predictors of Death Our multivariate logistic regression analysis demonstrated that independent factors associated with discharge to long-term facilities were increased age (P < .0001), sex (male > female, P = .04), being clipped rather than coiled (P < .0001), and the number of cases per year for the practitioner (P < .0001). Total hospital volume (clipping + coiling) was not associated with discharge status based on this model. We found the independent factors associated with in-hospital death were increased age (P = .0002), sex (male > female, P = .03), and being clipped rather than coiled (P = .01). In this model, physician volume and hospital volume were not associated with in-hospital death. These data are summarized in Table 4. Discussion This study shows significantly lower morbidity and mortality rates among patients treated for unruptured aneurysms at centers that treated a higher percentage of patients with coiling than with clipping. 
Some have recommended guidelines for the treatment of unruptured intracranial aneurysms to include "microsurgical clipping rather than endovascular coiling as the first treatment choice in low-risk cases." 3 In keeping with these guidelines, it would be reasonable to expect that all patients treated with clipping in the NIS were offered clipping because the surgeon thought that it was a reasonably low-risk procedure relative to coiling. The outcomes in the NIS database, however, suggest that outcomes of surgical clipping were less favorable than those of endovascular coiling, even when we compared hospitals and physicians with high-volume clipping against those with low-volume coiling. This finding would indicate that the guidelines recommending microsurgical clipping as the first treatment choice should be reconsidered. We also found that outcomes of surgical clipping and endovascular coiling of unruptured intracranial aneurysms were significantly better when treatment was performed by higher volume physicians. Multivariate analysis showed that hospital volume was not an independent predictor of outcome; thus, hospital volume appears to be associated with good outcome largely because high-volume physicians work at high-volume hospitals. We have also found that treatment of unruptured aneurysms with endovascular coiling is associated with improved discharge status. This was true across all volumes. Thus, these data suggest that the best outcomes for the treatment of unruptured intracranial aneurysms are seen in those patients who are treated by high-volume physicians with endovascular coiling. With regard to surgical clipping, lower rates of adverse outcomes have been reported by high-volume surgeons 4 and high-volume hospitals. 5-7 Prior studies used the NIS to assess the effect of hospital volume on morbidity and mortality for unruptured aneurysm treatment from 1996-2000 for surgical clipping 2 and coiling. 1 For that time period, it was found that patients with unruptured aneurysms treated with both surgical clipping 2 and coiling 1 at high-volume centers had lower rates of discharge to sites other than home. A study of the NIS database from 1995 to 1999 recommended that patients with cerebral aneurysms be referred to high-volume centers to improve outcomes. 8 A later study using state hospital databases from 1998 to 2000 found that outcomes of patients admitted for subarachnoid hemorrhage were significantly better at high-volume centers compared with lower volume centers. 9 Centers that are highly experienced in both clipping and coiling might offer improved outcomes because they are best suited to select the optimal treatment technique for each patient. Barker et al 2 noted that the availability or frequent use of endovascular therapy at the same hospital had no effect on surgical outcome after adjustment for volume of surgical care. However, Berman et al 10 and Johnston 6 showed a relationship between improved outcome and endovascular availability. Johnston also showed that the availability of endovascular procedures was associated with a reduction of in-hospital death. Our study has taken a different approach, by showing that centers that preferentially offer endovascular therapy tend to have less morbidity and mortality than centers that preferentially offer surgical clipping. The use of coiling was less widespread from 1996 to 2000 than between 2001 and 2007. 
Indeed, there were only 421 cases treated with coiling in the NIS during 1996-2000 1 compared with 5420 cases during 2001-2007. An interesting trend observed in our study is the proportion of aneurysms treated with clipping and coiling at high- and low-volume centers. From 2001 to 2007, 58% of cases clipped and 75% of cases coiled were treated in high-volume centers (>20 cases per year) versus 27% of cases clipped 2 and 23% of cases coiled 1 from 1996 to 2000. In our study period, 18% of clipped aneurysms were treated at low-volume hospitals (≤5 cases per year), whereas only 4% of coiled unruptured aneurysms were treated at low-volume hospitals. There is no formal process of regionalization in the United States leading to referral to higher volume centers, but it appears to be occurring without an organized effort. It is difficult to understand why it is occurring, but factors might include malpractice concerns and an overall shortage of neurosurgeons. The reason that surgical clipping appears to be the treatment mode of choice at these very low-volume centers may be that surgical clipping expertise is available but endovascular expertise is not. Limitations Many of the limitations of this study are intrinsic to the use of administrative databases. 11 We acknowledge that coding inaccuracies undoubtedly occur, which affect the retrospective evaluation of an administrative database. Because our center is not included in the NIS database, we were unable to perform an audit of our own cases to determine the degree of error in coding. Reasons that patients were discharged to long-term facilities are not collected as part of the NIS. Due to the lack of a specific code for iatrogenic subarachnoid hemorrhage, we were unable to report on unruptured aneurysms that ruptured during treatment. In addition, we were unable to determine whether discharge to a long-term facility was related to important factors such as anesthesia and pre- and postoperative care. Another limitation associated with using an administrative database is the retrospective nature of the data. Patients in this study were not treated in a randomized manner. Therefore, there is significant potential for selection bias that might affect outcomes of clipping or coiling. For example, the NIS does not provide data on aneurysm size and location, which can affect outcomes of treatment. In addition, there is no means of determining treatment efficacy (eg, degree of angiographic occlusion). It is possible that data on treatment efficacy may favor surgical clipping because this is considered definitive in the treatment of intracranial aneurysms. This study does not intend to provide a threshold for the number or proportion of aneurysms that should be treated with clipping or coiling to optimize outcomes, but rather it provides readers with an understanding of a general trend in outcomes during the study period. In our study, ruptured aneurysms were not included as part of physician and hospital volume. Thus, the volume of unruptured aneurysms may not accurately represent the total volume of aneurysms that a center may treat because some physicians and medical centers may treat significantly more ruptured aneurysms than unruptured ones. Conclusions Centers that treated a higher percentage of unruptured aneurysms with coiling versus clipping have less morbidity and mortality. Our results also confirm that treatment of cerebral aneurysms by higher volume practitioners is associated with decreased morbidity. 
Aneurysms treated with endovascular coiling have significantly better outcomes than those treated with clipping regardless of hospital or physician volume. Finally, there does appear to be an element of regionalization in the treatment of unruptured intracranial aneurysms, because the proportion of aneurysms coiled at higher volume centers is significantly greater than that at very-low-volume centers.
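As an illustration of the kind of comparison behind the P values reported above, the mortality counts quoted in the Results (21/1195 deaths at centers coiling 0% of aneurysms versus 11/2158 at centers coiling 75%-99%) can be arranged as a 2 x 2 table and tested with a chi-square statistic. A minimal sketch; the paper's exact test specification and grouping may differ, so the resulting P value need not match the published .0015 exactly:

```python
from scipy.stats import chi2_contingency

# Rows: centers coiling 0% vs. 75%-99% of aneurysms; columns: died, survived.
table = [[21, 1195 - 21],
         [11, 2158 - 11]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # a strongly significant difference
```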
2017-04-10T08:08:33.950Z
2011-06-01T00:00:00.000
{ "year": 2011, "sha1": "053b3874928825cbbc1aa972144bf51b713e46d8", "oa_license": "CCBY", "oa_url": "http://www.ajnr.org/content/ajnr/32/6/1065.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "758b660e6576f5d9b8a5847bb1b89cd335b71117", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253998815
pes2o/s2orc
v3-fos-license
RUSSIAN PRESIDENTIAL ACADEMY OF NATIONAL ECONOMY AND PUBLIC ADMINISTRATION (RANEPA) The experts of the Working Group on Families and Children of the Open Government have calculated several basic scenarios of demographic development of Russia, including inertial, optimistic, and pessimistic scenarios and a scenario corresponding to the goals stated in the "Concept of Demographic Policy of the Russian Federation," up through the year 2050. The inertial scenario shows that if no new measures are taken to support the birth rate and to reduce excessive mortality, the population will decrease to 140 million people by 2020 and to 113 million by 2050; the working-age population will decrease by 8.7 million by 2020, and by more than 26 million people by 2050. In the worst scenario, Russia's population could shrink to 100 million people by the early 2040s. In the optimistic scenario, a combination of effective measures to support the birth rate and reduce excessive mortality will help to bring Russia's population to nearly 155 million by 2040. Thus, the price of today's decisions on demographic policy could be as high as the lives of more than 50 million of our fellow citizens, that is, more than one third of the population. At the moment, demographic policy in Russia is represented in a number of legal acts, the most important of which is the Concept of Demographic Policy of the Russian Federation for the period up to 2025 (hereinafter, the Concept of Demographic Policy). The Concept has played an important role in the development of Russian demographic policy, and consequently, in improving the demographic situation in Russia. However, it has been a while since the Concept was adopted, and the demographic indicators have changed, which implies the need for a new set of demographic policy measures. Also, the target indicators on fertility, mortality, and migration as stated in this Concept will not be sufficient to secure population stabilization (let alone growth) in Russia, given the forthcoming demographic dip, and therefore they should be revised. 
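As a back-of-envelope check on the scale of these scenarios, the quoted endpoints imply modest but compounding average annual growth rates. A minimal sketch, assuming geometric change and a base of roughly 143 million around 2014 (Section I below cites this population level; the base year itself is our assumption):

```python
def implied_annual_rate(p0: float, p1: float, years: int) -> float:
    """Average annual growth rate implied by geometric change p0 -> p1."""
    return (p1 / p0) ** (1.0 / years) - 1.0

# Assumed base: ~143 million around 2014 (see Section I below).
print(f"{implied_annual_rate(143, 113, 2050 - 2014):+.2%}")  # inertial, to 2050 -> about -0.65%/yr
print(f"{implied_annual_rate(143, 155, 2040 - 2014):+.2%}")  # optimistic, to 2040 -> about +0.31%/yr
```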
Hence there is a need for new work to develop concrete, evidence-based, and updated targets for demographic policy that are cognizant of the huge scale of the projected demographic dip, and which will balance efforts to influence fertility, mortality, and migration in favorable directions. In the current report we present a set of measures tried in various countries and studied in international sociology, which are very likely to have a positive impact on Russia's population. However, the time frame for effective action is limited. The next decade is absolutely crucial, as the proportion of people in the prime childbearing years (aged 20-40) will remain high for only another 5-7 years, after which it will be increasingly affected by the echo of the 1990s demographic collapse. The priority objectives of population policy in the next two decades should be to raise the birth rate to replacement level (about 2.1 children per woman), and to reduce mortality to levels congruent with Russia's overall level of economic development, especially targeting the extremely high yet preventable mortality of working-age males. The data show that the reproductive attitudes of Russians are not static, and depend on their socio-economic situation. For a long time, sociologists and demographers of the European countries with successful population policies and high birth rates have analyzed the empirical data and had vigorous debates about exactly which social policies exert the greatest effect on fertility. Notably, the potential for growth in the birth rate in Russia is much higher than in most European countries, and measures to support families with children in Russia may give better results at a lower cost than in the OECD countries that implement large-scale family policies. Russia rates higher on both desired family size and adherence to traditional family values than most European countries, including some countries with higher fertility. Evidence shows that when countries apply truly effective measures of family policy, spending no less than 2% (and sometimes even 3-4%) of GDP for these purposes, they can achieve a sustained fertility increase which is not limited to a bare 2-3 years' time span. In general, the birth rate in the developed world is substantially higher in countries with higher spending on family policy. European countries with higher fertility levels have reached a level of 1.8-2.0 children per woman at a cost of spending 3-4% of GDP on family policies, provided that these funds are used effectively. Spending on family policy in Russia (calculated according to the OECD method), including the maternity capital, was 1.5% of GDP in 2010, well below the 3-4% required to reach the level of 1.8-2 children per woman. The volume of payments to families with children in Russia (excluding maternity capital) in 2010 amounted to approximately 0.58% of GDP, which is lower than in those countries with the most successful family policies, such as France or Sweden. In terms of payments to families with children, Russia lags behind nearly all OECD countries. One factor that may make more efficient family policy possible in Russia is that poverty in Russia is unusually high compared with OECD countries. There is a concentration of poverty among families with children, especially among large families and single-parent families. All of the most effective measures to support the birth rate would significantly reduce poverty levels among families with children. 
According to experience and research, the most effective way to raise fertility levels is to provide a combination of cash allowances and tax benefits for families with children, together with government programs and laws to support women in combining work and childbearing (access to the services of kindergartens and nannies, flexible work schedules for mothers). France, for example, which has one of the most broad-based programs of family support in Europe, has enjoyed steadily rising fertility for nearly two decades. Allowances and tax benefits for families with children are considered, according to the research, the most effective measures for elevating fertility. However, in terms of payments to families with children, Russia lags behind nearly all OECD countries. Moreover, in most developed countries, child-payment systems alone are not sufficient to reach the highest levels of fertility. Combining work and motherhood is the key to a successful population policy in the modern world. As a rule, in the demographically successful developed countries, mothers with children under the age of 3 go to work more often than mothers with young children in developed countries with low birth rates. An effective system of early childhood care (kindergartens, nursery schools, etc.) is therefore an essential part of an effective policy to support the birth rate. Among all types of expenditure on family policy in OECD countries, it is the cost of childcare services that best correlates with the level of fertility. Developing the childcare system for children under 3 is particularly important. All demographically successful countries in Europe have achieved high coverage of children under 3 with a free or subsidized childcare system. However, in Russia there is insufficient access to such services due to the lack of places in state kindergartens and high fees in non-governmental ones, which makes the latter unaffordable for most families. In 2009, the coverage of children up to 6 years old by pre-school education in Russia was only 58% (compared with about 90% in France). Significant support for fertility increases can also result from housing-related policies, such as securing families with 3 or more children priority rights for socially supported housing, the right to acquire housing at cost on interest-free mortgages, etc. Rapid growth in life expectancy is possible in Russia, as is shown by the examples of historically close countries such as Estonia and Poland, as well as other countries of Central and Eastern Europe in the post-Soviet period. An analysis of gender and age differences in mortality from various causes in Russia and these countries shows that mortality can be reduced significantly by limiting the availability of strong alcoholic beverages (especially illegal spirits) and tobacco. In recent years, Russia has adopted legislation to implement most of the key recommendations of the World Health Organization to reduce the harmful use of alcohol. At the moment, priority should be given to securing the practical implementation of these laws, as well as to combating the illegal manufacture and trade of alcohol. With regard to tobacco control, it is necessary not only to enforce the Act for Clean Air in public places (adopted in 2013) and other restrictions already adopted, but also to legislate a total ban on tobacco advertising without any exception, and to increase excise taxes to the level of Eastern European countries. 
Modernization of the health care system is also a potentially large-scale resource for mortality reduction in Russia, especially for middle and older age groups. One of the major barriers to the development of the Russian health care system is its lack of financing. In the more developed European countries (with markedly higher levels of GDP per capita), the share of health expenditure in GDP is approximately two times higher than in Russia. Thus, a real reduction of this gap requires a significant increase in the share of spending on health care in the Russian GDP. However, not all improvements require greater funding; there is also room for gains through greater efficiency in medical spending. Modern health care systems gain large cost savings from greater reliance on outpatient treatment as opposed to hospital care, and from a greater role for general practitioners and nurse practitioners in the treatment of patients. The most important area for improvement is to accelerate the implementation of evidence-based effective practices through protocols and clinical practice guidelines in Russia, e.g. through harmonization with those in Europe, the USA, Australia, Canada, etc., as well as by motivating health personnel to follow them, including the motivation to abandon inefficient methods of diagnosis, prevention, and treatment of diseases. The quality and availability of emergency medical care is also vital to reducing mortality. Mortality from cardiovascular diseases can undoubtedly be reduced by increased availability of emergency medical assistance, especially for acute cardiovascular events (heart attacks, strokes). The number of such emergency care centers in most regions is not nearly sufficient. In the Russian context, with its vast geographic territory, it is important to preserve access to health care (including emergency care) in rural and sparsely populated areas. This will require the preservation of obstetric units and the expansion of the authority of nursing staff. In terms of reducing mortality from cancer, the most efficient and financially viable approaches (in addition to anti-smoking measures) include mass screening for colorectal cancer and mass vaccination of girls under 16 years old against human papillomavirus (to reduce the incidence of cervical cancer). Other effective evidence-based approaches include reducing and enforcing traffic speed limits and automatic speed control, campaigns to control drunken driving, the use of helmets, seat-belts, and child restraints, bringing the road transport infrastructure in line with international safety standards, establishment of modern safety requirements for vehicles produced in and imported into the territory of the Russian Federation, and provision of timely and high-quality emergency care for those involved in road accidents. Russia's migration policy should aim both to eliminate push factors and thus reduce emigration, and to promote and streamline processes of immigration, as well as to stimulate internal migration towards the eastern parts of the country. Migration policies should selectively attract the necessary categories of immigrants on the basis of cultural and qualification parameters, and maintain annual net migration at a target level of 300 thousand people, as defined in the Concept of Demographic Policy. Our calculations show that without maintaining net migration at this level, the severe negative scenarios for Russia's demographic future cannot be avoided. 
Reducing the emigration outflow is possible only through a radical change in the conditions under which the private sector rewards entrepreneurship, and by raising the incomes of professionals and skilled workers to compete with opportunities in Europe. This will require reducing bureaucratic barriers to business development, eliminating corruption pressures on people, creating jobs and opportunities for self-fulfillment in the professional and skilled labor markets, and improving the investment climate. The recognition of dual citizenship and the simplification of procedures for the preservation of Russian citizenship for emigrants and their descendants could also strengthen Russia's ties with compatriots, as well as attract an additional number of compatriots to Russia. It would be helpful to improve the current program of repatriation enacted in Russia in 2007 by providing improved access to Russian citizenship, housing for the participants, simplification of the procedures needed to provide land for housing and agriculture, and tax deductions for opening businesses, especially in geopolitically important (particularly border) territories. This report focuses on fertility, mortality, and migration at the national level in the Russian Federation. However, demographic change is highly sensitive to local social contexts. Factors such as rural/urban differences in base fertility and mortality levels and in responses to population policies, trends in internal urbanization and migration, and regional differences in demographic behavior due to culture, religion, and local conditions can all affect Russia's future population trajectory. This report touches briefly on these matters in the appendices. They will be further discussed in future revisions of this report and in further research at the RANEPA Research Laboratory in Political Demography and Social Macro-Dynamics. In sum, there are a large number of policies that could raise fertility, reduce mortality, and optimize migration in Russia. Some are a matter of making more efficient use of resources or changing laws; others involve significant increases in social spending, though these costs would be offset by keeping more men over 50 and women with children in the active labor force. However, unless a broad-based strategy of diverse policies to boost fertility and reduce mortality is undertaken quickly, Russia's population will likely be reduced by several tens of millions over the next three decades. Only prioritized and urgent implementation of new and effective demographic policy measures can allow for retaining the successful achievements of the last few years and preventing a very significant population loss due to the demographic dip of the 1990s. This opportunity will be irreversibly lost in 10 years. SECTION I. CURRENT DEMOGRAPHIC SITUATION The demographic situation has improved notably in Russia since 2005, largely through the implementation of population policies, anti-alcohol measures, and healthcare system improvements. Natural population decrease slowed from 687,000 in 2006 to a period low of 2,500 people in 2012. Preliminary data for 2013 show the first natural increase since 1991. The nationwide population stabilized at 143 million people, a level that the Concept of Demographic Policy in Russia until 2025 (hereinafter referred to as the Concept of Demographic Policy) previously had targeted for 2015. 
For the first time since 1992, when net migration is included, Russia finally saw a notable population increase in 2011 and 2012. 1 Birthrates Over 2006-2012, Russia posted the strongest gains in Europe and the second fastest growth globally in total fertility rate (TFR): from 1.3 to 1.697 births per woman, 2 or up by 30%. As a result, Russia jumped from 35th to 12th in Europe in terms of its TFR. In absolute figures, live births reached 1,902 thousand children in 2012, which is 416 thousand births above the level of 2006 (up 28%). The crude birth rate over this period rose to 13.3 per 1,000 people from 10.3. 4 Back in 2006, age structure determined nearly 50% of the TFR change, with the other half driven by the increase in birthrates; but since 2009 TFR growth has been totally attributable to increases in births per woman. Statistical analysis shows that the increased fertility rates came precisely from second and subsequent childbirths. 5 Following some slowdown in 2008-2011, an upsurge in fertility observed in 2012 confirmed the efficiency of the newly introduced measures. Unfortunately, this phenomenon has so far remained understudied by sociologists and demographers. A tentative explanation links this resumed growth, which continued into 2013, with some regional measures such as land allocations after a third childbirth, introduction of the regional maternity capital, and allowances to families with three or more children. Attributing this fertility growth to a so-called "timing shift" became quite popular in recent years; it seemed reasonable to suppose that many women, offered the maternity capital benefit for having a second child, were simply acting to move up births that they would have had anyway in the following years. However, statistics on birth intervals available for 35 Russian regions convincingly demonstrate the weakness of this reasoning. If the fertility increase had occurred only due to timing shifts, we would expect shorter intervals between the first and second childbirths. However, this interval actually widened considerably. 7 In addition, timing shifts would presumably move fertility to younger age groups, but fertility, quite the opposite, rose more among older women. Based on statistics available for 35 Russian regions, the average mother's age at childbirth of any order increased year-on-year, and to an even greater degree in 2007-2011 than in 2006 relative to 2005. 8 Second and third births have increased most notably in Russia. 9 This is evidenced by considerable TFR gains for second childbirths in 2007 (for third births the absolute increase of the indicator was less profound, 0.027 vs. 0.071, but the relative increase was even higher, 27.6% vs. 17.6%), while fertility rates for first births showed zero growth. We can assume with a fair degree of confidence that fertility rate changes for second and subsequent births in 2007-2011 largely resulted from the new population policies implemented in Russia since 2007. 
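The TFR figures discussed above are sums of age-specific fertility rates, and the order-specific decomposition works the same way (overall TFR splits into first-, second-, and higher-order components). A minimal sketch with made-up age-specific rates (synthetic, not Rosstat data), plus a check of the overall 1.3 to 1.697 change cited above:

```python
# TFR = sum over age groups of (age-specific fertility rate x interval width).
def tfr(asfr_per_woman_per_year, width_years=5):
    return sum(asfr_per_woman_per_year) * width_years

# Illustrative ASFRs for the seven 5-year groups 15-19 ... 45-49 (synthetic):
asfr = [0.027, 0.090, 0.095, 0.065, 0.030, 0.008, 0.001]
print(round(tfr(asfr), 2))  # -> 1.58 births per woman (synthetic data)

# Relative change of the overall TFR cited above:
print(round((1.697 - 1.3) / 1.3 * 100, 1))  # -> 30.5, i.e. the "up by 30%" figure
```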
Over the full period 2007-2011, TFR for second and subsequent births increased by 0.247 - from 0.543 in 2006 to 0.790 in 2011 - a remarkable gain of 45%. Thus, the support given to families with second and third children evidently resulted in a substantial increase in the birthrate after 2006, rather than merely a short-lived forward shift in birth-giving. We arrive at similar results if we consider the monthly fertility changes observed in Russia over the period after 2000 (as illustrated by Figure 1).

To assess the influence of the newly introduced maternity capital on birthrate changes in 2007, it makes sense to determine the month when this measure should have had a visible effect on birth dynamics. At first glance, it would seem logical to expect the initial signs of its influence on the number of second births only after September-October 2007, taking into account the law's effective date (January 1, 2007). However, we have good grounds to state that the influence of maternity capital could and should have manifested itself somewhat earlier. Indeed, we can reasonably suppose that, in the first place, the introduction of maternity capital influenced not so much the decision of some families to have a second child as the refusal of some women in their second pregnancy to have an abortion. Since most women make abortion decisions in the first two months of pregnancy, the introduction of maternity capital should have made its first strong impact on women who had become pregnant in November-December 2006. Hence, maternity capital should have influenced the number of births starting in July-August 2007 10.

In fact, the abortions-to-live-births ratio fell by 14% in 2007 (a record fall in all modern Russian history), serving as the first proof of the idea that the introduction of maternity capital benefits in 2007 affected birthrate dynamics mainly through promoting decisions against abortion. The fact that just a year earlier, in 2006, abortions had exceeded live births (106 to 100) 11 points to the massive potential of the abortion rate decline in 2007 to affect the live birth rate. That same year, also for the first time in modern Russian history, live births outnumbered abortions (100 to 92) 12. We now move to discussing [...] limited to 130 thousand births. We note that the latter figure was still well above the average numbers of the period between 2003 and 2006.

Rather often, we come across claims that maternity capital had no effect at all on the fertility increase in Russia. The argument is that during the second half of the 2000s fertility was on the rise not only in Russia, but also in virtually all European countries with low or extremely low birthrates in the late 1990s. Moreover, "the lower was the TFR downfall, the more considerable was the subsequent bounce" 14, with no maternity capital initiatives in other countries. This leads to the conclusion that "country-specific fertility dynamics over the past decade do not show any significant relations, which could allow one to definitely attribute the changes to economic successes or socio-economic state policies pursued" 15. Considering such statements, we need to note that the introduction of maternity capital resulted in a fertility increase more than comparable in scale to the gains observed in other European countries over the same period, as shown by Figure 1 [...]. It would also make sense to compare the birthrate dynamics in Russia after 1999 with those of the Western European countries that had the lowest birthrates back in 1999 (see Figure 1.5).
The diagram shows that Russia's TFR in 1999 was below even the lowest rates in Western Europe. Indeed, fertility rates climbed in 1999-2006 in all of these countries. The five countries under consideration moved into the range of 1.3-1.4 births per woman by 2006, and fertility rates subsequently stabilized within the limits of 1.35-1.45. Apparently, such countries can reach this range - but hardly more - through an improving economic situation alone, and the interval of 1.35-1.45 presents some kind of attractor. The same Figure demonstrates that [...].

Nevertheless, Russia's TFR remains substantially below both the level of simple population replacement (2.1 births per woman) and its fertility rate of 1990 (1.89). According to sample surveys of reproductive life plans 17, the expected number of children (1.92 for both men and women) also falls short of the level needed for replacement of the population and of the target fertility rate set by the Concept of Demographic Policy (1.95). Single-child families still prevail in Russian society and account for almost 2/3 of all households with children; this means an inadequately fulfilled potential for second, third, and subsequent order births.

Imminent demographic dip

Despite the current upward fertility trends, the demographic crisis is not yet over and Russia is facing new challenges. The major problem is that in the coming years Russia will face the consequences of the catastrophic birthrate collapse of the late 1980s and early 1990s (i.e., the consequences of the so-called demographic dip of the 1990s) 18. We have to emphasize the unprecedented scale of the upcoming demographic dip - even more significant, because it will be sustained much longer, than the post-WWII demographic crisis (see Figure 1.6). In other words, the number of Russians who were not born due to the fertility collapse of the late 1980s and early 1990s is several times higher than the number of Russians who were not born as the result of WWII. The young people born in the early 1990s - the least numerous generation of the postwar period - are now entering their childbearing years. In Russia today, the number of 15-year-olds is only half the number of 25-year-olds. The number of women in their active childbearing years (ages 20-29), who account for almost 2/3 of total births nationwide, will nearly halve within a decade; this will inevitably lead to a marked reduction in births.

Mortality

Russia's demographic crisis has two parts. The first is a low number of births; as we have just noted, while there has been a recent uptick in fertility, the imminent sharp decline in the number of young women of childbearing age means that the 'birth dearth' will likely return. The second part is extraordinarily high mortality for an industrialized middle-income country. Russia has a rather high death rate by global standards, and the primary problem lies not only in an ageing population but in extremely high mortality rates among working-age men. Men aged 30-70 years account for approximately one third of excess deaths 19. The quality of the medical services system has also made progress, as evidenced, for example, by the fact that infant mortality dropped by almost 15% over the period 2006-2012.

18 A demographic dip usually means a decrease in births due to smaller cohorts entering childbearing age compared to preceding generations.
19 Calculations by Justislav Bogevolnov based on the Federal State Statistics Service data.
Available data point to the leading contribution of circulatory system diseases and external causes to Russia's high male mortality. According to the data available, at least 38% of male deaths in Russia are preventable, including 18% of total deaths from cardiovascular diseases, 12.7% of deaths due to external causes, and 2% of deaths from diseases of the respiratory system.

The substantial contribution of alcohol to the tragically high mortality listed as due to external causes (homicide, suicide, accidents, drowning and submersion, etc.) in Russia is well known, and has been substantiated by analysis of correlates of mortality dynamics 26, results of forensic autopsies 27, and case-specific retrospective longitudinal analysis of mortality 28. Therefore, despite a certain decline since 2006 in deaths listed as alcohol-related, statistics suggest that alcohol remains a huge contributor to Russia's high male mortality. Various studies also confirm that mortality rates correlate not so much with alcohol consumption in general as with consumption of spirits (hard liquor), both legal and illegal 29. People consuming spirits usually take in considerably more alcohol per occasion than consumers of weaker beverages such as beer or wine. Consumption of large alcohol doses per occasion boosts the probability of death due to heart disease, hypertension, cerebral hemorrhage, accidents, assaults, and so on. At the same time, the toxicity of illegal spirits is comparable to that of legal liquor 30.

The three major reasons behind excess mortality from diseases of the circulatory system are high levels of spirits consumption, one of the highest levels of tobacco consumption/exposure globally, and poor management of cardiovascular diseases (including prevention, diagnosis, and treatment) that fails to conform to best international practices. Moreover, we can attribute the elevated mortality from diseases of the respiratory system among men aged 40-60 to the smoking epidemic in Russia as well.

In Russia, excess mortality among women, compared to that in Germany, amounts to only 16%, with external causes playing a far less significant role. Excess deaths from diseases of the circulatory system are the main difference, and these deaths are concentrated in older age groups as compared to men. This suggests that the major potential for reducing mortality among Russian women lies in the area of medical care improvements.

Migration

Russia is a net migration recipient country, as immigration outnumbers emigration, even excluding temporary (including labor) migrants, many of whom de facto end up residing in Russia. Temporary labor migration represents the single largest international migration flow into the Russian Federation. Russia's labor market remains attractive for able-bodied workers from the CIS member countries. In 2012, Russia granted 1.6 million work permits to foreign citizens. Four former Soviet Union (FSU) countries currently provide the largest number of foreign workers - Uzbekistan (c. 40%), Tajikistan, Ukraine, and Kyrgyzstan - totaling about 70% of all work permits. Notable flows also come from China, Turkey, Vietnam, and North Korea among non-FSU countries 31. As for the sex-age structure, male immigrants dominate in Russia (about 90%), with the overwhelming majority of workers being between 18 and 39 years old; these account for c. 80% of all male immigrants.
In recent years, labor migrants to Russia have tended to "grow younger": since 2007, the 18-29 age group has prevailed over the 30-39 category. Education and skill levels have been declining, too. Although Russia has eased entry regulations for highly qualified labor migrants, the occupational makeup remains largely unchanged thus far. Temporary labor immigration to Russia remains a mainly low-skilled flow, with as few as 44 thousand out of 1.4 million work permits issued to high-skilled specialists. Corrupt practices penetrate labor migration as new entrants seek legalization; meanwhile, employers lack economic incentives to hire native labor due to the availability of cheap foreign workers with few or no labor rights or protections 32.

Foreign labor immigrants are distributed unevenly across Russia, with Moscow, Moscow Oblast, Saint Petersburg, and Leningrad Oblast being the unquestionable leaders. These regions combined account for about 58% of all foreign workers in Russia. Additional sizeable portions of migrants concentrate in the oil-rich Okrugs - Yamalo-Nenets and Khanty-Mansi. The Far East of the country hosts some 10% of labor immigrants, coming mostly from China, North Korea, Central Asia, and Vietnam.

In July 2010, Russian authorities de facto legalized foreign workers hired by individuals by introducing licenses - special work permits for citizens from visa-free countries who work for private persons. Based on formal statistics of the Federal Migration Service, over 2 million people obtained such licenses in 2010-2012 33. Formally, the share of international labor in-migrants among people employed on the Russian labor market remains relatively low, at about 5%. However, this share is rather substantial in some sectors, such as construction, where it reaches almost 19% by official estimates. With non-formal workers factored in, Sergey Riazantsev estimates that this share could run up to 50-60% in such sectors of the economy as construction, utilities, transport, trade, and services.

There is a material gap between official data and the real scale of labor migration. The number of undocumented labor immigrants (estimates are rather approximate) exceeds the officially reported headcount several times over. The census of 2002 gave more or less realistic statistics, finding about 2 million people in Russia not accounted for in earlier counts. The census of 2010 "added" another 1 million people to the country's population, and temporary labor immigrants were presumably responsible for this addition. Calculations based on estimates of the primary categories of undocumented foreign workers in Russia suggest their headcount could total some 5 million persons. Citizens of other CIS countries form the vast majority, as they have the right of visa-free entry to Russia but then fail to register as temporary residents or obtain work permits as stipulated by the legislation. Many newcomers reside in Russia for several years or pay visits to their homelands now and then 34.

Student migration

Despite its considerable potential in the area of educational services, Russia attracts just 80-90 thousand foreign students annually and holds a mere 3% share of the global education market. Students mainly come to Russia under government programs or along well-trodden routes - either their parents had studied in Russia, or they are ethnic Russians whose parents intend to relocate to Russia in due course. The top source countries include Kazakhstan, China, India, Ukraine, Vietnam, and Uzbekistan.
The Russian language, the similarity of educational systems, and family ties attract students from the CIS, while non-FSU students find Russian tertiary education institutions cheap compared to Western ones. Apparently, Russia's policy with regard to student migration is far from active, and the country does not seek to bring in crowds of foreign students. In addition, unreasonable barriers preclude foreign students from working more than a certain number of hours in Russia. Some institutions of higher education feature poor amenities and studying conditions. Moreover, information resources for international promotion and effective tools to form Russia-oriented student flows are nonexistent. In many cases, Russian institutions take uncoordinated actions to attract foreign students; at times, they compete against each other. Finally, graduates of local institutions of higher education face a rather complicated and time-consuming naturalization procedure, despite de jure relaxed regulations granting them Russian citizenship. Nor does Russia have a proactive state strategy of inviting foreign postgraduates for graduate, doctoral, internship, or professional development programs 35.

Emigration

Over 1.2 million people have left Russia for permanent residence in non-FSU countries since the USSR's breakup. Germany, Israel, and the USA have traditionally been and remain the main destinations for Russian expatriates. Among the newer destinations of Russian emigration we can highlight European countries (Finland, Spain, and the United Kingdom in the first place), Canada, Australia, New Zealand, and China. In addition, Russia has become a rather large exporter of labor to international markets, as 45-70 thousand Russians leave their homeland annually under work contracts alone. The largest portion of temporary labor migrants heads for the United States and Europe. In recent years, Russians have had an increasingly marked footprint on the labor markets of Asia and Australia. Major employing countries include the USA, Cyprus, Malta, the Netherlands, Germany, and Greece. CIS countries look far less attractive to Russians compared to these traditional destinations, although they host some small shares of labor migrants from Russia. Although legally hired, many Russians obviously have not notified state agencies of their overseas employment.

Overall, reducing emigration - which consists mostly of well-educated and qualified specialists and of young and active persons, and which is accompanied, among other things, by an outflow of businesses and capital - presents a fairly important demographic reserve for Russia; realizing it requires improved living and working standards in Russia for the corresponding population cohorts. Although reducing emigration will do little to offset the impact of Russia's mortality and low fertility on the overall population, its importance is magnified by the fact that much emigration is of high-skilled workers, while very few immigrants are in the high-skilled category 36.

Prospects of smaller migration gains for Russia

It is critical to note that any hopes of overcoming the population crisis in Russia by means of migration alone are groundless. It is next to impossible to make up for population losses due to Russia's extremely high adult mortality and low fertility even through aggressive encouragement of migration, as all CIS countries (Russia's principal "demographic donors") face demographic dips of their own, related to the abrupt downturn in their births in the 1990s.
As a result, ever smaller age cohorts will be entering the CIS labor markets in the years to come. This will greatly reduce the surplus labor force in CIS countries, restraining Russia's potential migration gains 37. Russians residing abroad can potentially contribute to immigration to a certain extent. However, their role in forming migration flows should not be overestimated; they could provide a compensatory component at best. We will discuss the opportunities related to our expatriate fellow countrymen in the context of addressing Russia's demographic issues in Section 3.

36 Ryazantsev S., Pis'mennaya E. Emigration of scientists from Russia and the Russian scientific diaspora: "circulation" or "brain drain" [Emigratsiya uchenykh iz Rossii i rossiyskaya nauchnaya diaspora: "tsirkulyatsiya" ili "utechka" umov].

SECTION II. SCENARIOS OF RUSSIA'S DEMOGRAPHIC DEVELOPMENT

Experts from the Open Government's working group on family and children have analyzed the main scenarios of Russia's demographic development 38, including no-action (inertial) and best-case scenarios, as well as the scenario envisaged by the Concept of Demographic Policy 39. Table 2.1 outlines the targets as set by the Concept. However, according to our calculations, achieving these objectives would not suffice to halt Russia's depopulation. The fertility, mortality, and migration target rates set by Russia's Demographic Policy cannot assure subsequent long-term population growth (see Figure 2.1), and population decline will resume as early as 2025.

Under the no-action scenario, Russia's population will diminish to 140 and 113 million people by 2020 and 2050, respectively 43, unless additional measures to support births and prevent deaths are implemented. Analysis shows that at the current fertility rate (notably below the level needed for population replacement) and mortality rate (very high by international standards), for all the improvements attained, Russia's population will contract rapidly in the decades to come - to 138.5 and 112.4 million people by 2020 and 2050, respectively.

If the inertial scenario unfolds, the coming depopulation and changes in Russia's age structure will likely affect all aspects of socioeconomic development:

• Labor and economic potential. Unless Russia takes immediate and meaningful measures aimed at total elimination of its excess mortality and at boosting fertility, the country will face a dramatic contraction of its working-age population: by 7-8 million people by 2020 and by over 26 million people by 2050 (see Figure 2.2). The age structure of the economically active population will become a great deal more mature, endangering projected economic growth, investment appeal, and structural modernization of the economy.

• Human resources. The spheres of the economy directly associated with modernization prospects, such as industry and engineering, will suffer the most from the aging of the workforce, as they will soon start to lose their senior personnel. Despite the innovation-based economic growth that is desperately needed, the oil and gas industry, as well as the financial sector, will likely continue to "prosper" amid looming personnel shortages, since they offer higher salaries and can attract sought-after educated young people.

• Healthcare and public welfare. Increasing numbers of people at much higher ages will result in higher healthcare costs for the state, as senior citizens consume significantly more medical services per capita than the average.
In addition, the rapid escalation of demand for specialized medical services for seniors will require changes in medical specialties and doctors' training. The need for emergency medical services and integrated social security centers for the elderly will increase substantially.

• Education. Shrinking cohorts of Russian students will result in fewer institutions of occupational education if not compensated by educational and educational-labor migration. Ageing workers will require a new system of lifelong learning intended for re-education and conversion training to keep them productive. The demand for initial professional and vocational secondary schools will fall.

• Pension system. The national pension system will also face challenges, as the ratio of the working-age population to non-working-age citizens will drop from the current 2.7 to below 2.0 by 2035, and further to 1.6 by 2050 44. Assuming no change in taxation or in the pension age, the share of retirees' income replaced by pensions will drop from 36% last year to 26% in 2030. With the demographic situation unchanged and in the absence of pension reforms, maintaining current replacement rates will entail additional expenditures of about 0.2% of GDP annually 45.

• Defense capabilities. By 2020, the draft-age (18-27 years) male population will fall by 3.8 million men (more than one third), and by 4.5 million (more than 40%) by 2050, which will pose a problem in terms of manning the armed forces.

• Politics. Political stability depends directly upon the state's ability to fulfill its social obligations. Destabilization and loss of faith in the government can in turn contribute not only to deterioration of the socioeconomic situation, but also to an intensified demographic crisis, similar to the disaster seen in the 1990s, with negative trends gaining momentum. The population of the Far Eastern Federal District could shrink to fewer than 4 million people (by almost 40%) by 2050 owing to low fertility, elevated mortality, and migration outflow. Such developments would also endanger the territorial integrity of Russia, the world's largest state by area.

In view of its demographic challenges, Russia runs medium-term risks of losing economic growth momentum and competitiveness, while its social, political, and geopolitical stability might come under pressure in the longer term, unless additional measures aimed at mitigating the consequences of the 1990s demographic dip are taken today.

A WORST-CASE SCENARIO

At the same time, the inertial scenario is obviously not the worst case. In fact, this scenario assumes that life expectancy in Russia through 2050 will remain at its 2010 level and the total fertility rate at its 2011 level. Yet 2010 and 2011 were hardly among the worst years in post-Soviet Russian history - actually, they turned out to be among the most favorable in terms of birth and death rates. Unfortunately, there are not sufficient grounds to exclude the possibility of a deteriorating situation in Russia with regard to fertility or mortality. In Russia's recent history, we have seen fertility rates and life expectancies rise, but then collapse to levels below those preceding the upturn (see Figure 2.3). Our worst-case scenario reflects Russia's demographic future in the case of a victory of the alcohol and tobacco lobbies and reduced financing of support for families with children, which would lead to a regression of mortality and birth rates to the lows of the 1990s.
It also incorporates an economic crisis producing a dramatic upsurge in unemployment, with migration gains subsequently decreasing to zero by 2022. While this scenario may seem unduly dismal, recent proposals - including ending maternity capital payments for higher-order births, cancelling full payments for public nurseries, allowing a 150% increase in kindergarten payments for middle-class parents with two children, and freezing or even decreasing the excise taxes on vodka and cigarettes - make this gloomy scenario ever more realistic. Figure 2.4 summarizes the results of our calculations for the worst-case scenario, compared to the inertial trajectory. In the worst outcome, Russia's population may shrink to 100 million people as early as the beginning of the 2040s.

Other European countries that have attained stable fertility rates closer to replacement have invested heavily in family-support policies. To examine the effects of adopting similar policies in Russia, we modeled the effect of effectively investing 3% of GDP in such state policies by smoothly (over a ten-year period) bringing Russia's age-specific fertility rates of 2020 to the level of Iceland in 2005 (which corresponds to a total fertility rate of 2.05 births per woman), while leaving age-specific mortality rates intact at the 2010 level. Figure 2.5 presents the projected change in the population of the Russian Federation compared to the inertial scenario. Under this scenario, Russia's population will decrease to 133.5 million people by 2040, rather than to 122 million as under the no-action scenario. Measures to support fertility alone could thus produce a very strong effect on the long-term population trend (adding 11.5 and 17.6 million human lives by 2040 and 2050, respectively), but these measures alone will not suffice to prevent Russia from still experiencing population decline.

[Figure 2.5. Fertility support scenario vs. inertial scenario, population in millions]

Potential effects of stronger anti-alcohol policies

To find other ways to improve Russia's demographic prospects, we examined the potential effects of strong anti-alcohol policies. Our estimates show that the long-term demographic potential of a vigorous anti-alcohol policy remains rather high in the current situation (Figure 2.6 and Table 2.2).

[Figure 2.6. Elimination of alcohol-related excess mortality vs. inertial scenario, population in millions]

The above data clearly point to the huge demographic potential that could be unlocked through the implementation of standard World Health Organization recommendations 47. If implemented, these measures - not just low-cost, but quite the opposite, outright beneficial for the state budget - could save more than twelve million Russian lives by 2040. The measures should include real (i.e., severalfold rather than a few percent) hikes in excise taxes on spirits or the introduction of a government monopoly on retail liquor sales, among other things. Therefore, in the short to medium term, rigorous anti-alcohol policies offer even larger demographic potential than fertility support measures, and at far lower cost (on the other hand, encouraging birth-giving has greater long-term potential, as discussed below). Interestingly, the potential demographic effect through 2040 of a full-blown anti-alcohol policy has now somewhat contracted - from 16.6 to 12.4 million people - as compared to a previous similar projection, which started from the higher baseline of the age- and sex-specific mortality rates of 2007 rather than 2010 48.
Generally, this is a positive and welcome development, as it means that even the compromise measures to curb alcohol affordability that have been implemented in this country in recent years should save the lives of more than four million of our compatriots in the decades to come (provided these measures stay in place, of course). The same figures show how little we have done in this regard compared to what could be achieved, and how far we have to go.

Strong effects of the complete elimination of excess mortality

Complete elimination of excess mortality in Russia can produce a particularly strong long-term effect on demographics. In addition to a vigorous anti-alcohol campaign, it would require a full-scale anti-smoking policy and major improvements in the national healthcare system, with at least 10% of GDP allocated to these purposes. We modeled the effects of these policies by bringing Russia's age-specific mortality rates in 2020 to the level of Norway in 2009. (Note that this scenario does not suggest that Russia will catch up with Norway by 2020, as Norway will likely further reduce its mortality in the coming decade. Rather, it assumes that Russia will be able to narrow the gap - that is, Russia in 2020 will reach Norway's 2009 level - although even this scenario is somewhat optimistic.) Under this scenario, if complete elimination of Russia's excess mortality could be achieved, the Russian population would grow to 142.7 million people by 2040, rather than drop to 117 million as under the no-action scenario. To put it differently, Russia in 2040 would return to the current level of about 143 million inhabitants.

In the short to medium term, therefore, the complete elimination of Russia's excess mortality would have a particularly strong demographic effect (20.7 million saved lives by 2040), notably stronger than childbirth support measures. However, the elimination of excess mortality would have its main impact over the coming generation; in the longer term it would not fully counter the effect of smaller youth cohorts and low fertility over several generations. Therefore, despite its large near-term impact, eliminating excess mortality alone will not prevent Russia's population from eventually returning to decline. As shown in Figure 2.7, this change will suffice to halt population loss by the mid-2010s and even ensure a certain population growth through the late 2020s. However, the Russian government should also adopt a fertility-boosting package of policies to maintain current birthrates; otherwise, Russia's population will start to decrease from the early 2030s, with this contraction gaining momentum over the following years.

Combination of measures to prevent depopulation: the best-case scenario

We highlight that, given the severe demographic dip of the 1990s and current adverse trends, only a combination of effective measures to support fertility and eliminate excess mortality can prevent Russia from eventually dying out. We include this combination as the best-case scenario in our analysis (Figure 2.8). We should note the huge spread between the lowermost (worst-case) and uppermost (best-case) scenarios. Indeed, should Russia develop under the worst-case scenario, its population will total less than 102 million people in 2040, while the best-case scenario suggests almost 155 million. Therefore, the cost of the decisions made now potentially equals more than 50 million human lives of our compatriots, or more than one third of today's nationwide population.
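Projections of the kind compared in this section are conventionally produced with the cohort-component method: the population is carried forward in five-year age groups, applying survival probabilities and age-specific fertility rates, and each scenario supplies its own rate schedules. The following toy one-sex sketch in Python, with invented flat rates standing in for the report's actual inputs, illustrates the mechanics behind an "inertial versus fertility-support" comparison:

```python
# Toy cohort-component projection (one-sex, invented rates); it shows the
# mechanics behind scenario comparisons, not the report's actual model.

AGE_GROUPS = 18   # 5-year groups: 0-4, 5-9, ..., 85+
STEP = 5          # years per projection step

def project(pop, survival, asfr, steps):
    """pop[i]: persons (millions) in age group i; survival[i]: 5-year
    survival probability; asfr[i]: annual births per person in group i."""
    for _ in range(steps):
        births = sum(pop[i] * asfr[i] * STEP for i in range(AGE_GROUPS))
        new = [0.0] * AGE_GROUPS
        new[0] = births * survival[0]             # surviving newborns
        for i in range(1, AGE_GROUPS):
            new[i] = pop[i - 1] * survival[i]     # age everyone 5 years
        new[-1] += pop[-1] * survival[-1]         # open-ended 85+ group
        pop = new
    return pop

# Invented inputs: flat age pyramid, stylized survival, fertility at
# ages 15-49 (groups 3-9). 0.023 births/person/year corresponds to a
# TFR near 1.6; scaling by 1.25 gives roughly 2.0, mimicking the
# fertility-support scenario described above.
pop0 = [8.0] * AGE_GROUPS
survival = [0.995] * 12 + [0.97, 0.93, 0.85, 0.72, 0.55, 0.35]
inertial = [0.0] * 3 + [0.023] * 7 + [0.0] * 8
boosted = [r * 1.25 for r in inertial]

for name, asfr in (("inertial", inertial), ("fertility support", boosted)):
    final = project(list(pop0), survival, asfr, steps=6)   # 30 years
    print(f"{name}: population after 30 years = {sum(final):.1f}M")
```

A real projection would add net migration by age, sex-specific rates, and time-varying schedules (for example, phasing the fertility boost in over ten years, as in the scenario above), but the accounting identity - births in at the bottom, survival between groups, attrition at the top - is the same.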
It is worth special mention that, in the early 2040s, Russia will start to experience the consequences of the demographic dip of the 1990s even under the best-case scenario, as the children of the small cohort of mothers born in the 1990s enter their prime childbearing age. Nevertheless, under the optimum scenario, in the latter half of the century Russia's population will finally stabilize slightly above its current level (see Figure 2.9).

As we have noted, in the near term (the next 30 years) the greatest impact on demographic trends will come from eliminating excess mortality. However, our forecast analysis through 2100 suggests that, for the period after 2040, the greatest long-term demographic improvements come from fertility support measures. Indeed, without a fertility boost, even with full elimination of excess mortality, Russia's population will experience an accelerating decline after 2040 that parallels the projections of the inertial or pessimistic scenarios (see Figure 2.10). As we see, only a marked rise in births can prevent Russia's population from experiencing a long-term decline; unaccompanied by the elimination of excess mortality, however, this is achievable only in the second half of this century. Only the combination of fertility-boosting measures and the elimination of excess mortality can both prevent immediate population decline and stabilize the long-term population at current or higher levels. If the fertility boost is achieved, the range of future population projections is as follows: Russia will have fewer than 132 million people in 2100 if the problem of potential depopulation is addressed through fertility-boosting measures only, while the best-case scenario, encompassing both a rise in births and the elimination of excess mortality, would produce a population of more than 158 million. Hence, at least 26 million lives are at stake.

In sum, according to our estimates, achieving the target level of 145 million people by 2025 will primarily require:
• Life expectancy of no less than 79.9 years by 2025 (77.6 for men and 82.2 for women, respectively);
• A fertility rate of 2.05 births per woman by 2025;
• Actions to maintain migration at the levels of recent years (c. 300,000 migrants per year), with improved quality of migration gains.

SECTION III. DEMOGRAPHIC POLICY MEASURES

Top governmental officials of the Russian Federation are already aware of the destructive consequences of the inertial development scenario, with its threat of large-scale depopulation, and recognize the necessity of taking action aimed at stabilizing the size of the population. In particular, Vladimir Putin, the current President of the Russian Federation, stated during his term as Prime Minister that the main priority of the state is to save the nation 49: "Unless Russia implements a long-term comprehensive agenda for demographic development to build up its human potential and develop its territories, it risks turning into a geopolitical "void," whose fate would be decided by other powers. Today, Russia's population is 143 million. Experts forecast that in case of an "inertia scenario" - that is, with no new measures introduced, and with all the present trends still in place - by 2050 Russia will only be some 107-million strong. But if we manage to formulate and implement an efficient, comprehensive policy for population saving, then Russia's population may increase up to 154 million. The historic cost at stake in choosing between action and inertia is therefore some 50 million lives within the next 40 years" 50.
At the statutory level, this priority has been fixed primarily in the Concept of Demographic Policy for the period up to 2025 51, which came into force in 2007 (the second stage of the Concept is currently being implemented), and in Presidential Executive Order No. 606 of May 7, 2012, "On Measures to Implement the Demographic Policy of the Russian Federation" 52.

Threats to Russia's demographic development are serious. However, thorough research into international best practices makes it possible to identify approaches and state policies that may positively affect demographic indicators. The window of opportunity, though, is limited for a number of indicators. Russia now has a unique resource which enables it to reach the optimistic scenario of demographic development: one of the world's highest shares of population in the active reproductive and working ages (15-60 years), including a high percentage of people in the prime working and parenting ages (20-40). This resource will be available over the next 5-7 years; after that, the effect of the demographic dip of the 1990s will become more pronounced with each passing year. Meanwhile, these 5-7 years may suffice to take Russia to the optimistic demographic scenario - provided that a large-scale, effective, "concentrated" demographic policy is implemented.

Russian President Vladimir Putin emphasized in his State-of-the-Nation Address the necessity of using the resource of the young population groups: "Today, the share of the young, active, working population aged 20 to 40 years in Russia is one of the highest among the developed countries. But in just 20 years, this age group could be reduced by half. If nothing is done, this trend will continue. Either right now we can open up a lifelong outlook for the young generation to secure good, interesting jobs, to create their own businesses, to buy housing, to build large and strong families and bring up many children, to be happy in their own country, or in just a few decades, Russia will become a poor, hopelessly aged (in the literal sense of the word) country, unable to preserve its independence and even its territory". V.V. Putin

The current young generation should resolve two critical tasks at once. Priority goals of the demographic policy over the next two decades should include an increase in birthrates to the population replacement level (about 2.1 childbirths per woman) and a reduction in mortality, especially the elimination of the excessively high mortality of working-age males. If each of these strategic areas is supported by nationwide implementation of effective, evidence-based state policies, the optimistic scenario of our country's demographic future becomes a reality. However, state policy per se does not suffice to achieve the optimum demographic scenario. Active participation and involvement of business, the private sector, the mass media and, finally, society itself is necessary as well.

President Putin stressed two main spheres of action in his State-of-the-Nation Address. In order to end the population loss and ensure that Russia's population fully overcomes the consequences of the demographic dip of the 1990s (i.e., approaches the pre-dip population number of 1990), the following policies need to be implemented over the next ten years: "Demographers say that the decision to have a second child is a potential decision to have a third. It is important that more families take this step.
And, despite some experts' doubts (with all due respect), I still believe that families with three children should become the standard in Russia. But a great deal must be done to make this a reality". V.V. Putin, State-of-the-Nation Address, 12 December 2012

"In the past four years life expectancy in Russia has grown by almost 2.5 years (this is a good indicator) and has exceeded 70 years. However, the mortality rate remains very high, especially among middle-aged men. Together we must fight the frankly irresponsible attitude in society towards healthy living. Along with the development of public healthcare, more attention should be paid to preventive care. Naturally, this does not mean that we should focus less attention on improving healthcare and increasing its accessibility - not at all. However, it is not enough to limit our efforts to medicine. The Government should introduce programs for replacing jobs with hazardous conditions and improving road safety. Only smoking (we know this well as we have discussed this many times already), alcohol and drug addiction cause hundreds of thousands of premature deaths in our country every year". V.V. Putin, State-of-the-Nation Address, 12 December 2012

Achieving these goals is hard, but still possible. Russia has significant potential for both increasing birthrates and reducing mortality rates, and this potential can be activated with effective demographic policies.

The adoption of the Concept reversed the trend of state policy in Russia, positioning the state strongly towards supporting demographic growth. However, the targets set in the Concept are insufficient to overcome the demographic crisis in Russia because of the imminent echo of the demographic dip (as shown in Section 2 of this report). Achieving the fertility, mortality, and migration target values set in the Concept will not ensure subsequent long-term growth of the Russian population. This means that even more aggressive goals and policies than those implied by the Concept will be necessary if Russia is to overcome the echo of the demographic dip. In addition, the policies proposed in the Concept and some other population policy documents lack the detailed description necessary for their practical implementation.

CURRENT DEMOGRAPHIC POLICY IN RUSSIA

At the same time, international experience in the public management of social policy shows that effective policy measures must be specific in order to work: they are applied within certain thresholds and in particular circumstances. Such policy measures are usually identified based on research into international, regional, and national practices of managing a given social policy issue. The key policy directions of the Concept are in line with the policy recommendations derived from sociological research on these matters. However, the set of policy measures in the Concept can be implemented either effectively or ineffectively. Therefore, this Report contains a set of specific, evidence-based measures, grounded in sociological research, which are highly likely to have a significant positive effect on demographic indicators.

Order of the President of the Russian Federation No. 606 of 7 May 2012 "On Measures to Implement the Demographic Policy of the Russian Federation" sets new targets for demographic development and contains a number of quite effective measures, but it is not a systemic policy document.
Resolution of the Russian Federation Government No. 1142 of 3 November 2012 "On Measures to Implement Order of the President of the Russian Federation No. 1199 of 21 August 2012 'On the Assessment of Performance of Executive Bodies in the Constituents of the Russian Federation'", although it introduced a number of demographic indicators to guide the performance of governors in Russia's constituent entities, does not propose any policies that would definitely facilitate the achievement of these performance indicators. Additionally, this document does not take into consideration the specific features of the various regions, such as the variations in their social and demographic trends, and the resources in place that could be used to improve each particular region's demographic situation.

The State Program of the Russian Federation "Healthcare Development" contains a number of measures which may produce a tangible impact on mortality reduction. Nonetheless, the resources allotted to the Program are clearly insufficient to achieve the targeted crude death rate of 11.4 per 1,000 by 2020, nor are the measures of the Program detailed enough to evaluate their possible effectiveness.

Migration, as we have shown in Section 2 above, will play a critical role in sustaining Russia's population size. However, the Concept of Migration Policy of the Russian Federation up to 2025, approved by the President of the Russian Federation on 13 June 2012, does not contain any quantitative indicators. The State Program stimulating the return of compatriots to Russia, approved by [...], is directly linked to demography. However, according to the Audit Chamber, 8,800 people moved to Russia by 2012 as part of this Program, or only 13.5% of the target number for these years 53, which clearly points to the currently low effectiveness of this Program.

Thus, Russia's demographic policy needs to be revised with a view to making its measures more efficient, based on analysis of international and Russian experience of demographic policy and its components, and taking into consideration the massive scale of threats from the approaching demographic 'dip' in the generation born in the 1990s.

Measures to support fertility

Russia needs a "concentrated" demographic policy: over a limited period of time (given the approaching echo of the demographic dip), it is necessary to implement the most effective policies to increase fertility. Family policy should be focused on removing the existing obstacles that keep families from having their desired number of children. The desired number of children can itself be influenced by state policies of support for families with children. According to a nationwide Russian survey, women estimate the probability of having their 2nd and 3rd births over the next 3 years to be 40% and 66% higher, respectively, if the state offers additional support for families beyond the current policies 54. It should be noted that people who grew up in two- and three-child families are currently in their active reproductive age, which considerably increases the likelihood of second, third, and subsequent childbirths in their families.

It is advisable to actively use the experience of developed countries which have managed to raise their fertility rates to the population replacement level or have maintained this level for a long period of time. Such examples do exist in the developed world, even though the Western decline in fertility once seemed irreversible. The last decade showed that this trend can be reversed.
Many Western and Eastern European countries are experiencing strong fertility growth. Targeted family policy measures were the main driver behind a significant rise in fertility in recent years, in particular in Great Britain (from 1.63 childbirths per woman in 2001 to 1.94 in 2008) and Slovenia (from 1.2 childbirths in 2003 to 1.53 in 2008). Countries such as Belgium, Norway, Finland, Iceland, the Netherlands, Australia, Latvia, Spain, and Bulgaria have also managed to significantly raise their birthrates.

There is widespread skepticism regarding the effectiveness of family policy measures intended to stimulate fertility, since they allegedly result only in a short-term rise in fertility (for 2-3 years) due to a shift in the birth calendar, with birthrates subsequently declining again. However, actual data show that countries which implement truly effective family policy measures and spend at least 2% (sometimes 3-4%) of their GDP for these purposes manage to achieve consistent fertility growth, rather than effects lasting only 2-3 years (see [...]). According to surveys, despite a widespread stereotype, immigrants played a relatively insignificant role in this fertility growth 56.

For a long time, sociologists and demographers of European countries with successful demographic policies and high fertility rates have actively discussed, based on empirical data, which precise social policy measures have proved the most productive for increasing birthrates 57. Data from the Organisation for Economic Co-operation and Development (OECD) are most commonly used for such calculations. OECD members include most post-socialist countries of Eastern Europe; therefore, a survey based on a sample of OECD countries is quite valuable for analyzing potential trends in Russian society. According to these surveys, it is possible to gain an increase in fertility of 0.[...]. In [...], measures to support families with children helped to bring total fertility from 1.6 to 2.07 childbirths per woman between 1994 and 2010, restoring fertility to the population replacement level. In Sweden, such measures raised fertility from 1.5 to 1.98 childbirths per woman over 1999-2010. Such fertility gains may be sufficient (provided, needless to say, that Russia's excess mortality is eliminated) to prevent the depopulation of our country 58.

International and Russian practice shows that the most effective measures of support for families with children, in terms of effects on fertility, are as follows:
• sufficient levels of family policy spending;
• increased payments and allowances to families with children and tax refunds for parents;
• accessibility of child care services, especially for children under three years;
• flexible working hours for mothers;
• housing for families with children.

We believe the most important policy actions in regard to fertility were mentioned in the State-of-the-Nation Address by Russian President Vladimir Putin (12 December 2012): to create favorable conditions for combining motherhood and professional activity, to develop the childcare and pre-school education system, and to provide housing support to families with children. Below we consider each of these areas, including existing successful international experience and opportunities for its adaptation to Russian conditions.
THE HIGH IMPORTANCE OF FAMILY VALUES IN RUSSIA

Before going into detail on specific policies to raise fertility, we should note one exceptionally favorable factor for future fertility growth: in recent years the commitment of Russians to family values has surged. Inasmuch as Russians already desire more children than they actually have, the potential for policy measures to increase fertility is in some ways much stronger in Russia than in other European countries. Therefore, measures to support families with children may yield good results in Russia with less spending than in some OECD countries pursuing large-scale family policies.

In terms of commitment to traditional family values, Russia is better positioned than most European countries, including countries with higher fertility rates (France, Finland). According to numerous surveys, family is a top priority for the Russian people and the main value for the absolute majority of the population. Families want more children: more than 50% of families would like to have two children, and more than 25% three children. The desired number of births per family (2.33) is higher than the replacement requirement, and this value is set to rise as the total fertility rate grows.

According to the latest wave of the World Values Survey, 90% of polled Russians said that family is very important to them. This indicator is average compared to other countries worldwide: Russia lags behind countries such as Georgia, Egypt, and the USA, but outpaces most Western European countries, including Finland, Germany, Switzerland, and the Netherlands. Moreover, the share of Russians saying "family is very important for me" has been rising steadily: from 79% in 1990 to 84% in 1999 and 90% in 2008, or up 11 percentage points in under two decades (see Figure 3.2). The fact that family is valued more highly in Russia than in some European countries with stronger fertility (France, Finland) suggests that Russia's potential for further stimulation of fertility rates by family support measures is quite large.

The high importance of family is also shown in a recent Russian survey of life priorities. When asked "What targets would you like to achieve in your life?", the most common response from respondents in all age groups was "Create a happy family and bring up good children" (93% of all people polled). The next three most popular answers were "Have reliable friends" (91%), "Live my life honestly" (90%), and "Have an interesting job" (86%) 60.

Another measure of how important families are for the Russian people is the unprecedented growth of trust in the family, which has recently doubled, rising from the lowest level worldwide into the top ten. In fact, the share of respondents who fully trust their families has climbed to a record high in Russia - from 46% in 1990 (the lowest globally) to 91% in 2007 - ranking 10th among 53 countries. With that, Russia has notably outstripped such developed countries as the USA, France, Switzerland, and Germany, and many post-socialist countries, such as Ukraine, Poland, Romania, and Moldova (Figure 3.3).

3.1.2. FAMILY POLICY SPENDING

By and large, OECD countries with higher family policy spending have higher fertility 62. Fertility rates clearly correlate with public family spending. As shown by the chart (Figure 3.4), European countries reach fertility rates of 1.8-2 births with family policy spending at 3-4% of GDP, provided that these financial resources are spent effectively.
It is true that several OECD countries have spent that much or more on family policies without raising fertility. However, this Central European cluster is an example of ineffective family policy spending, based on a model of stay-at-home married mothers raising their children. These countries' family policies virtually ignore women who work and single mothers, and thus fit poorly with the realities of child-bearing and family structures in modern industrialized societies. These countries have erred by allocating money only to bonuses for families with children, but not to the more important and effective policies of supporting working women with children through funding for child care and preschools 63.

Russia's family policy spending (calculated using the OECD methodology), including maternity capital, amounted to 1.5% of GDP in 2010. According to the Audit Chamber, public financing to support families, women, and children amounted to 0.79% of GDP in 2010, not including regional spending 64. Payments to families with children in Russia (not including maternity capital) in 2010 were about 0.58% of GDP, much lower than in countries with successful family policies, such as France or Sweden. Tax refunds in 2010 amounted to 0.044% of GDP, which is quite low compared to other countries. However, there is no reason to believe that this type of support to families is more important than the others. Russian public spending on children's services (child day-care centers, nurseries) is slightly below the average for the OECD countries, but far below the spending of those OECD countries with near-replacement fertility. Obviously, in the future the amount of family policy spending should increase, and these allocations should become more effective.

The experience of OECD countries successful in raising fertility to near-replacement levels shows that what is necessary is an integrated and diverse set of family policies that provide both material support for families with children (income or housing subsidies, tax refunds, bonuses) and institutional support that enables women with children to continue working (day-care centers and pre-schools, protected maternity leaves, job security). It is by making children neither an excessive financial burden nor a hindrance to work and career that successful family policies have promoted higher fertility rates. However, this combination has usually required an allotment of over 2.7% of GDP to the full range of family-support policies.

62 For comparative purposes, the Organisation for Economic Co-operation and Development (OECD) has developed its standard indicator of family policy spending. This indicator includes expenses on children's benefits; birth and maternity leave payments; baby care service fees (child day care centres, nurseries, childminders), including payments to parents for these purposes; and tax refunds for families with children.
63 A 200 Billion Euro Waste: Why Germany is Failing to Boost its Birth Rate. Der Spiegel, February 5th, 2013. URL: http://www.spiegel.de/international/germany/study-shows-germany-wasting-billions-on-failed-familypolicy-a-881637.html.
64 Analysis of the efficiency of public spending. Report [...].

ADDRESSING CHILD AND FAMILY POVERTY

"We have to end the situation where the birth of a child causes a family financial difficulties or pushes them to the edge of poverty" 65. Vladimir Putin.
Address at the meeting in Naberezhnye Chelny on the implementation of demographic policy and regional programs targeting healthcare system progress, 15.02.2012

The policy to address poverty, including family and child poverty, depends on how the poverty line is determined. Russia uses an absolute poverty level, measured as the share of the population with money income below the minimum subsistence level (MSL), which is calculated based on the consumer goods basket. In Russia, this metric declined from 33.5% in 1992 to 12.7% in 2011 66. However, in this case the key statistical characteristic is a function of the approved composition of the consumer goods basket. OECD countries use the relative poverty approach, where the poverty line is determined as 60% of median income (the EU methodology). The level of relative poverty in Russia, calculated using this method, was in the range of 26-33% in 2010, much higher than the average for the EU countries (16.4%); this is a direct consequence of the extremely high level of inequality in Russia 67. For purposes of social discussion, the USA widely uses the Self-Sufficiency Standard, the level of income at which a family can meet its basic needs, including food, housing, child care services, healthcare, transport, and other necessary expenses 68. It is noteworthy that the minimum subsistence level in 4Q 2012 was RUB 6,705 per month; 60% of median income in 2011 was RUB 9,690 per month; and the self-sufficiency standard threshold in 2012 was RUB 12,400-14,100 per month, depending on family composition. Thus, 12.7% of the population was considered poor in terms of income below the minimum subsistence level in 2011, while 25.5% of Russians had income below 60% of median income 69.

The high level of inequality in Russia (by European standards) has a strong impact on children. Relative child poverty in Russia is 29.3%, while it ranges from 6% to 8% in Europe. At present, the risk of falling below the poverty line increases for a family with each subsequent birth. In 2011, the share of low-income households (with per capita income below the minimum subsistence level) was 18% among single-child families, 26% among families with two children, and 46% among full families with three or more children 70.

The level of poverty in Russia is extremely high compared to OECD countries and is concentrated among families with children, especially large and single-parent families. Households with children, and children below 16 years of age, have the maximum exposure to poverty; poverty among children below 16 in 2011 was 75% higher than the Russian average. The share of families with children among the low-income population increased even against the backdrop of positive GDP and consumer income growth. 40% of large families experience significant problems with housing (old, damp housing in urgent need of overhaul) and with supplying children with seasonal clothing and footwear; one third of such families are unable to purchase all the medicines prescribed by doctors and have to underfeed themselves; and children in 25% of larger families are unable to obtain secondary education, since they have to earn their living (versus only 4% for families with one or two children). These extreme difficulties of large families currently act as effective anti-advertising against large families and high fertility rates 71.

Single-parent families with children, about 19% of all families with children below 18 years old, represent a particularly vulnerable group.
Single-parent families with children, who account for about 19% of all families with children below 18 years old, are a particularly vulnerable group. Children in single-parent families are at high risk of falling into poverty due to widespread non-payment of alimony: regular payments are made in only 30% of all cases, and in 50% of broken partnerships no alimony is paid at all. Moreover, in 50% of cases alimony payments amount to less than half of a child's MSL 72. Even among two-parent single-child families, 16% were unable to overcome some characteristics of poverty, and nearly 30% of families with two children had income below the minimum subsistence level in 2009. Poverty among single-child families cannot be considered a norm, since it means that the average salary of one or both parents is below 1.5 MSL 73.

The reason behind the high levels of family and child poverty (both in absolute and in relative terms) is that sufficient public measures have not been taken to address poverty and inequality among households with children. Societies with a high level of inequality (like Russia) will inevitably have a high level of child poverty, since children are among the most economically vulnerable categories of the population, especially where no special measures have been taken to support families with children. Measures to address poverty in society as a whole, including mechanisms to redistribute income among various categories of citizens and among regions and to target benefits at the population with the lowest income, require separate consideration. As for the current family support system, including benefits, it does not provide sufficient social support to families with initially low income and a high risk of poverty, and it offers almost no opportunities for low-income families to escape poverty. For most types of families, the cumulative "children's" package per child remains lower than the minimum subsistence level per child. Meanwhile, in European countries family policy measures have a significant impact on child poverty; in France, for example, they have more than halved the poverty level in large (three or more children) families. All of the most effective measures to support birth rates have a considerable positive impact on poverty among households with children. Benefits to families with children are a strong instrument for addressing child and family poverty. In addition, measures to support mothers on the labor market, such as affordable child day-care centers, public nurseries and flexible working hours, are also an important means of reducing family and child poverty, since they increase the number of salary earners in the family and contribute to the earlier and better employment of mothers after maternity leave.

BENEFITS AND TAX REFUNDS FOR FAMILIES WITH CHILDREN

According to various surveys, benefits and tax refunds for families with children rank among the most effective measures for positively impacting the fertility rate 74. However, it is precisely in payments to families with children as a share of GDP that Russia lags behind OECD countries. Analysis of international practice shows that the following types of benefits are used for families with children 75:

A universal child benefit is paid until a child comes of age or completes a university degree. Such benefits contribute to income redistribution in society in favor of families with children, since children by definition have no income of their own. The amount of the benefit may vary depending on the child's birth order and the family's material standing.
A universal child benefit is paid in the following countries: Austria, Belgium, United Kingdom, Denmark, Germany, Ireland, Italy (for the third and subsequent childbirths), Luxembourg, Mexico, the Netherlands, Norway, Poland (since 2004), Portugal, Slovakia, Finland, France (for the second and subsequent births), Switzerland, Sweden. Russia has no universal federal child benefit, which largely explains its relatively low budget spending on payments to families with children as compared to OECD countries. Some regions allocate such payments to large families; however, as a rule, the amount of these payments is low.

Child benefits for low-income families with children are paid, unlike the universal benefit, only to low-income families with children. In 2013, Russia introduced a monthly benefit for poor families with three or more children in regions with unfavorable demographic situations, in an amount equal to the minimum subsistence level determined in the region. The benefit is paid until the child turns three years old 76.

Maternity payments are paid to mothers and, in some countries, to fathers, to take care of their babies from childbirth up to an age ranging from 2 months to 3 years. The benefit amount may be equal to a certain percentage of the mother's or father's salary. In Russia, such benefits amount to 40% of the average salary of the woman (or the man, where the benefit is paid to a man) over the previous two years, but no more than RUB 16,241.14 per month. Federal Law No. 255-FZ of 29 December 2006 "On Mandatory Social Insurance in Case of Temporary Disability and due to Maternity" provides for a minimum monthly child care allowance; from 1 January 2013 this allowance for non-working women was set at RUB 2,425 per month for the first child and RUB 4,907.85 per month for the second and subsequent children 77. The benefit is paid until a child turns 1.5 years old. It should be noted that the allowance has a low compensation ratio, which makes it less effective as a fertility-support measure, especially for women with high incomes. We would also point out the extremely low level of support for non-working Russian women.

International research shows positive 78, statistically insignificant 79 and negative 80 impacts of maternity leave duration on fertility rates. Thus, it is not clear whether longer maternity leave increases or decreases fertility, but in any case the effect is small.

Baby bonuses are intended to compensate the expenses which a family faces directly after a baby is born. Some countries where such bonuses were introduced soon saw a notable increase in birth rates, including Spain (EUR 2,500), Australia, Singapore and Canada 81. In Russia, a lump-sum maternity benefit of RUB 8,000 is paid (as provided by Article 12 of Federal Law No. 81-FZ of 19 May 1995 "On State Benefits to Citizens with Children"). Also, Federal Law No. 256-FZ of 29 December 2006 "On Additional Measures of State Support to Families with Children" provides for maternity capital for a family at the second childbirth. Once the child reaches three years of age, this capital may be spent to improve housing conditions, fund education or increase the mother's pension accruals. A small part of maternity capital may be paid to parents in cash.
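The child care allowance rule described earlier in this section amounts to a simple clamp: 40% of the average salary, bounded below by the statutory minimum and above by the cap. The following minimal sketch shows this with the 2013 figures quoted in the text; the function name, rounding, and the simplification of applying the minimum as a general floor are our own assumptions.

```python
# A minimal sketch of the monthly child care allowance formula described above:
# 40% of the average monthly salary over the previous two years, capped at
# RUB 16,241.14, with statutory minimums of RUB 2,425 (first child) and
# RUB 4,907.85 (subsequent children), 2013 figures. Simplification (our
# assumption): the statutory minimum is applied as a general floor.

MAX_ALLOWANCE = 16241.14
MIN_FIRST, MIN_NEXT = 2425.00, 4907.85

def childcare_allowance(avg_monthly_salary: float, first_child: bool) -> float:
    base = 0.40 * avg_monthly_salary          # 40% of average salary
    floor = MIN_FIRST if first_child else MIN_NEXT
    return round(min(max(base, floor), MAX_ALLOWANCE), 2)

# Examples: a low earner hits the floor, a high earner hits the cap.
print(childcare_allowance(5000, first_child=True))    # 2425.0   (floor)
print(childcare_allowance(25000, first_child=False))  # 10000.0  (40% of salary)
print(childcare_allowance(60000, first_child=True))   # 16241.14 (cap)
```

The cap is what gives the allowance its low compensation ratio for high earners: at a salary of RUB 60,000 the nominal 40% would be RUB 24,000, but the payment stops at RUB 16,241.14.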
Judging by the increase in fertility rates in Russia after maternity capital was introduced, an increase that appeared to be the strongest among European countries, maternity capital has proved to be a successful innovation that may also be implemented in other countries looking to increase fertility. Meanwhile, the administration of maternity capital needs adjustment and wider application. The issue is especially important for rural regions, where the low-income problem is much more pressing than housing, education or pensions. This is indirectly underscored by fraud cases, which are mainly caused by poverty and the impossibility of financing children's current needs. International family policy experts recommend supporting families with small children as a top priority. Therefore, it makes sense to increase the share of maternity capital that may be paid in cash at childbirth and unconditionally. Regions should be granted an opportunity to participate in decision-making on the wider application and uses of maternity capital and, concurrently, a right to control how the funds are used.

Tax refunds are provided to working parents. Such measures are considered more effective than benefits in terms of encouraging parents' employment, while benefits appear to be more effective in terms of supporting fertility rates. According to Article 218 of the Russian Tax Code, since 1 January 2012 the standard tax refund in Russia is RUB 1,400 at the first childbirth, RUB 1,400 at the second childbirth and RUB 3,000 at the third and subsequent childbirths. This refund is provided to persons with an annual salary of no more than RUB 280,000.

Possible solutions in regard to material benefits to support higher fertility:
• increase consolidated budget spending on family policy from 1.5% to 3% of GDP;
• develop family economic security standards and introduce them in regions as an additional factor for poverty assessment purposes;
• provide targeted support to low-income families on a social contract basis;
• introduce a universal child benefit;
• increase the minimum and maximum amounts of maternity benefits;
• in addition to the benefit, introduce at childbirth a certificate (voucher) for a minimum package of children's goods, such as a bed, baby carriage, clothing, etc.;
• allow wider application of part of maternity capital for current needs on a social contract basis and, in the case of rural families, for setting up farms, family businesses and car acquisition;
• introduce a minimum amount of alimony payment and the possibility of paying it, in cases where a parent avoids payment, through a specialized fund with subsequent collection from the non-payer;
• co-finance payment of regional maternity capital for third and subsequent childbirths up to the level of the federal payment in demographically depressed areas;
• increase tax benefits and refunds for parents with large families to a level at least equal to the child's minimum subsistence level.

COMBINING MOTHERHOOD AND CAREER

"We need to create favorable conditions primarily for women, so that they do not fear that having a second and third child will close the path to a career and good jobs and force them to limit themselves to housekeeping. What we have started to do (resolving the problem of waiting lists for child day-care centers, professional retraining programs for women with children, and support for flexible employment) will directly impact a family's choice in favor of a second and third child." V.V.
Putin, State-of-the-Nation Address, 12.12.2012

The opportunity to combine work and parenthood, including motherhood, is a key to successful demographic policy in the modern world. International experience in developed countries shows that fertility is currently higher where the percentage of working mothers is higher, where the level of women's education is higher and where the unemployment rate is lower (whereas in the late 1970s the correlation was the opposite) 82. For instance, in countries such as Greece, Spain, Italy, Slovakia and Hungary, only 50-60% of women with children have paid employment, and fertility rates in these countries are quite low, much lower than the replacement level (1.25-1.5 childbirths per woman). Meanwhile, economic activity among their counterparts with no children is 5-10% higher, i.e. childbirth prevents women from participating in the labor market. By contrast, in more demographically successful developed countries, such as Iceland, France, Sweden, Finland and Denmark, where the fertility rate ranges between 1.9 and 2.2 childbirths per woman, 75-85% of all mothers aged 25-54 have paid employment and the gap between the employment rates of mothers and childless women is minimal 83. As a rule, mothers with children under three years old go back to work more often in demographically successful countries than in countries with low fertility. For instance, about 60% of women with children under three years old work in France, and more than 70% in Sweden and Denmark, whereas in the Czech Republic and Hungary only 15-18% of such women are working.

It is extremely important to give women with a high level of education the opportunity to combine motherhood with a career. Taking into consideration that about 83% of young Russians of the relevant age now receive higher education, it is hard to overestimate the importance of measures that facilitate combining motherhood and careers. Creating favorable conditions for employees to combine job and parenting duties is not a burden for employers either, nor does it undermine their effectiveness or profitability. Expenses to create a "family-friendly attitude" are rewarded by additional motivation of employees, fewer sick leaves, lower staff turnover, and improved productivity and employee satisfaction. The introduction of family-friendly jobs is extremely effective for highly qualified professionals who are hard to replace and for positions with flexible working hours. It is noteworthy that firms with high-quality management have begun to introduce such practices, as have firms where women have a strong presence in management 84.

Russia-specific features

As noted above, the current generation of young working-age people in Russia is called upon to resolve two tasks, economic and demographic, at the same time. Taking into consideration the upcoming massive loss of the working-age population due to the demographic hole, the value of each employable Russian for the national economy will increase. The country cannot allow a large number of working women (including highly qualified specialists) to "fall out" of the labor market for several years because they have to stay at home with their children solely for lack of conditions enabling them to combine parenthood and professional activity. On the other hand, constant competition for the best jobs and social positions in modern market societies leads to the postponement, or outright forgoing, of childbirth.
Therefore, a woman is more inclined to decide in favor of a second and third child in societies where motherhood does not pose a strong obstacle to her income and career. Creating favorable conditions for combining motherhood and career is thus a strategic priority for supporting fertility and families with children. According to surveys, an absolute majority of Russian women would choose a combination of work and motherhood as their life strategy; for instance, according to a 2008 survey, more than 80% of Moscow women named this as their preferred life strategy 85. One of the top priorities in supporting working mothers is to run an affordable childcare system providing diverse services of high quality. This sphere is considered in detail in the next section.

AN AFFORDABLE AND DIVERSE SYSTEM OF CHILDCARE SERVICES (NURSERIES, CHILD DAY-CARE CENTRES, CHILDMINDERS, ETC.)

Development of an effective childcare system (child day-care centers, child-minders, nurseries) is one of the most effective measures of any fertility support policy. As shown in Figure 3.6, among all types of family policy spending in OECD countries, spending on the childcare system (child day-care centers, nurseries, child-minders) correlates most strongly with the fertility rate. Most countries fall into two quite clearly defined groups:
• countries with low fertility rates and low public spending on their childcare systems (including many Southern and Central European countries and a number of former socialist countries);
• countries spending a considerable part of GDP (0.75-1.3%) to run a comprehensive childcare system and having fertility rates close to the population replacement level (France, Great Britain, the Scandinavian countries).

In addition, it is extremely important to develop within the childcare system not only institutions for children above three years old, but also a range of services for the youngest children (below three years old). According to our analysis, all demographically successful European countries have ensured high coverage of under-3 children within their childcare systems. For instance, 40% of under-3 children attended various childcare institutions in France and Great Britain, more than 50% in Norway and Iceland, and 66% in Denmark. By comparison, countries with lower fertility rates have much weaker coverage: only 2-3% in the Czech Republic and Slovakia and 18% in Germany. However, this in no way means that family policy priorities should be confined to the under-3 childcare system, leaving older preschool children without attention. Demographically successful countries actively develop all types of services for children of all preschool ages, ensuring very high coverage both of children below three years (see above) and of children from three years until they go to school. For instance, this coverage was above 90% of children in Great Britain and 99% in France. Such measures result in a substantial increase in mothers' participation in the labor market, which considerably reduces child and family poverty (see below). As a matter of fact, the risk of child poverty is lowest in families where both parents work.
The Situation in Russia

Russian families have unequal access to child day-care centers, since those centers charge fees (and, therefore, are hardly accessible for vulnerable households) and are generally insufficient in number to provide full coverage of children of the relevant ages 86. In 2009, 58% of children below six years were covered by preschool education (compared with about 90% in France) 87. With the reduction in the number of child day-care centers, more than 1.9 million children are on the waiting list to be assigned to preschool educational institutions. In 2000-2009, the number of families on the waiting list grew nearly sevenfold 88. Creating the necessary capacity by building new child day-care centers is an extremely expensive measure (construction costs may reach RUB 1 million per new place). An effective solution may lie in the active development of the private sector of preschool education and childcare services, which is currently constrained by excessive statutory regulation.

The question of childcare provision is most acute for the youngest children, with only 16% of children below three years provided with institutional care services (compared with 31% on average for the OECD and 48% in France). During the Soviet period, this issue was resolved through a system of nurseries. However, judging by the 2005 data, nurseries no longer function as a key preschool institution. Meanwhile, for many low-income and single mothers, a nursery is almost the only institution that enables them to return to work and retain the level of family income, especially given that payment of the child allowance stops when a child turns 1.5 years old. Nurseries are also an important means of supporting women who wish (or even have) to go back to work as soon as possible. Therefore, the need for nurseries is high, and restoration of this childcare institution for young children should become a priority 89. Russian Prime Minister Dmitry Medvedev said on 29 May 2013 that RUB 50 billion would be allocated to develop children's preschool institutions 90. However, only preschool institutions for children from three to seven years old were at issue.

International experience

The EU Summit in Barcelona in 2002 announced the goal of achieving full employment and accordingly set targets to remove barriers to women's participation in the labor market. The Summit set targets of covering 33% of under-3 children and at least 90% of preschool children with daycare and preschool places by 2010 91. Only some countries managed to achieve this target; however, those have been by far the most demographically successful (their fertility is mostly close to the replacement level).

Structure of childcare services: France's experience 92

It is instructive to consider the childcare system established in France. Such services cover 48% of children below three years and nearly 100% of children from three years to school age (almost all children of this age category attend the ecole maternelle). In 2009, the Government set a target of establishing capacity for 200,000 children, and it has been 70-80% achieved, which is considered a very good result. In families with both parents working, various forms of childcare institutions cover 64% of children below three years.
Of these children, 37% are cared for by certified child-minders, 18% attend collective nurseries, 5% attend children's development care centers and other development centers, and personal child-minders visit 4% of children at their homes. Often, even where both parents work, they manage to take care of children below three years themselves (27%), if one parent works at home or if the parents have different working hours. Grandparents or other relatives help to take care of about 9% of children. If one parent does not work, only 63% of under-3 children stay at home. The other children either use nurseries (10%), are taken to their child-minder's home (18%), or a child-minder comes to the child's family (2%). Needless to say, relatives also offer their help (4%).

Services for children below three years:

Certified child-minders accepting children at home

Most children below three years old (37% if both parents work) are looked after by specially trained tutors who accept children in their homes. Today, 300,000 such child-minders work in France and provide services for more than 1 million children. On average, one tutor cares for three children. Currently, such home nurseries are the most widespread and accessible form of care for very young children.

Collective nurseries

These are the main type of care for the 18% of children below three years old in families with both parents working. Nurseries work from 8:00 to 16:00 and may be combined with additional child care for 2-3 hours, if parents so desire. There are 10,500 such institutions (municipal, corporate, interdepartmental) with a total capacity of about 400,000 children. The average capacity of each nursery ranges from 20 to 60 children. The permitted age for attendance is 0-6 years, with most children being under three years old, since most later move to "mother schools" (similar to child day-care centers, for ages three to six) 93. In addition, smaller collective nurseries known as "micro-nurseries" have been developing actively since 2007, at first on an experimental basis; since 2010 this form has been approved at the statutory level for wide application. These nurseries best meet families' needs and can adjust to the working hours of parents. Their maximum capacity is 10 children. These nurseries have fewer staff and cost less. They are most often established by private firms jointly with local authorities. Firms buy places for the children of their employees for 2-3 years, thereby ensuring stable financing of these children's institutions. In addition, a child's stay may be financed from other sources: through the Family Allowances Fund as a "single source benefit" or through public aid to families.

Child-minders attending children at parents' homes

A less widespread option (covering 4% of all children below three years) is the employment of child-minders who come to parents' homes and take care of about 1-2 children at a time. Today, almost 45,000 child-minders offer such services, though this form is more widespread in Paris and remains underdeveloped outside the capital and major cities.
Organization, financing and quality control of childcare services

Organization and control of services provided by home tutors

General Councils (regional parliaments) approve specialized healthcare institutions (PMI) which, jointly with the Family Allowances Funds, arrange the entire process: they perform selection, training and certification of child-minders and ensure control over their activities. The state guarantees the qualifications of the tutors and of the manager of an institution. Tutors have no diplomas, but there is a statutory norm whereby child-minders have to complete a special 160-hour course. The same organization (PMI) issues a permit to accept children at home in accordance with established criteria: total housing area, presence and age of one's own children, pets, good command of French, etc. Even the most minor norms related to children's safety and development are approved at the statutory level. Statutory norms provide for one tutor per five non-walking children and one tutor per eight walking children. Quality control is performed about once a year and also at parents' request.

Organization and operational control of nurseries

The Family Allowances Fund finances or co-finances nearly all childcare institutions. The mandatory requirement is to establish differentiated payment depending on family income. Municipal or corporate nurseries receive cash directly, while certified child-minders are paid by parents, who receive an allowance. Professional tutors are considered employees of an individual. Families declare their expenses, which are sent to the regional Family Allowances Fund (CAF), where the amount to be paid to the family is calculated. Depending on the parents' income, the state reimburses part of a family's childcare expenses; prior to 2004, parents had to pay EUR 1,800 per child in private nurseries, while now, with the government's assistance, this amount equals only EUR 350. The Family Allowances Fund encourages the construction of new nurseries. The target for 2009-2012 was to create capacity for 30,000 children in partnership with local authorities or enterprises. An investment fund has been established to achieve this target, from which EUR 7,400-14,000 (about RUB 300,000-560,000) is allocated per place, while the full cost of creating capacity is about EUR 20,000 per child. This means the Family Allowances Fund co-finances more than half of the cost. If municipal authorities, firms or organizations wish to establish nurseries, they apply to the PMI for permission and then to the Family Allowances Funds for co-financing.

Financing for nurseries

Since 2004, private firms setting up nurseries may obtain public co-financing. However, they are obliged to charge the same childcare tariff as municipal nurseries, as this is an obligation to the state. 25% of the service fee per child is covered by parents, 25% by the Family Allowances Fund and 50% by the firm. The firm's contribution reduces its income tax by 33%; as a result, a firm effectively pays EUR 199 per month per child. The profitability of private nurseries is in the range of 10-15%. Advantages of private nurseries for employers include retaining highly qualified employees with young children. In addition to investment, the Family Allowances Fund participates in financing the current operations of preschool education institutions. The hourly service tariff is EUR 8 per child. Payments to parents, as reimbursement of their nursery fees, depend on the number of children and the family budget.
With the current cost distribution system, families only have to pay 20% of the EUR 8, and the Family Allowances Fund pays 45%. The balance is financed by local self-government bodies and, more often, directly by firms, which is an example of social solidarity.

Services for children from three years to school age:

From the age of three, all children in France are entitled to attend the ecole maternelle, and almost 90% of children do so. These "schools" are fully free of charge for parents, except for food (food costs are, however, fully subsidized for low-income families). There are also child day-care centers with different working hours and fees 94.

However, it would be extremely expensive to reproduce a similar system exactly in Russia, since it would require building and running a great deal of new capacity in child day-care centers with almost 100% public financing. Moreover, a 100% public infrastructure of preschool education and care for children above three years can hardly be flexible enough to adjust to the needs of parents and children. It is necessary to involve the private sector in providing childcare services. Norway has a successful track record of resolving the problem of access to child day-care centers through public financing of both private and public centers. About 50% of child day-care centers there are private. The fee for caring for a child in a day-care center, whether public or private, is about 50% covered by the state, 30% by municipal authorities and no more than 20% by parents. The numbers of public and private child day-care centers in Norway are almost equal; however, the ratio of children in them is about 60:40, since public centers generally have a bigger capacity than private ones 95. Active involvement of the private sector in childcare provision has helped Norway to cover more than 50% of children below three years old and nearly 95% of children over three years old.

"A special focus should be on preschool institutions, including support to private institutions of this kind. The Government has already removed many barriers hindering their development. My request is to fully complete this cleanup as early as the first half of next year, and regional authorities are requested to actively use the new opportunities. We need to let people work normally, to open home and small child day-care centers and school groups with extended hours everywhere, and thus to give parents an opportunity to choose a preschool institution without putting them on waiting lists or getting on their nerves." V.V. Putin, State-of-the-Nation Address, 12 December 2012

Possible Solutions in Regard to Child Care

In order to resolve the problem of waiting lists for child day-care centers and to develop various forms of preschool education, including private ones, the Ministry of Education and Science of the Russian Federation has formulated proposals to improve the sanitary and epidemiological requirements for establishing, operating and organizing various forms of preschool education. Based on analysis of the successful experience of other developed countries, it was proposed that the sanitary and epidemiological requirements provide for an invariant component (which primarily ensures children's safety) and a variable component. In addition, a number of specific proposals were made to modify norms on the number of floors and ceiling height in buildings, to make it easier to convert existing spaces into new capacity.
Standards were set for developing playgrounds and sunshades, arranging catering services, and ensuring hot water supply, hand-washing facilities, toilets, etc. Special attention in the list of proposals has been paid to amending those norms which currently hinder the wider expansion of family preschool groups (home child day-care centers). An extremely important step toward increasing preschool education coverage is the decision adopted by Prime Minister Dmitry Medvedev in spring 2013 to allocate to Russian constituent entities a total of RUB 50 billion in subsidies to develop the system of children's preschool education institutions.

In this context, it is worth pointing to the experience of some Russian regions which have developed various models for organizing preschool education and care for young children. Especially interesting is the practice of developing home child day-care centers (i.e. certified home tutors), which is very popular among parents with young children and is in line with France's experience. This practice is being successfully pursued in the Belgorod and Lipetsk Regions. To develop home child day-care centers, it is important to set up a system of training and certification of home tutors. The system must be run on a co-financing basis, i.e. the home tutor's fees are partly paid by parents and partly subsidized to them (or allocated directly to the tutor) by the government. Based on the experience of regions pursuing this model, the cost breakdown is as follows: the government spends RUB 50,000 to train one tutor, and per child the tutor is paid 50% by parents (RUB 5,000 per month) and 50% by the government (also RUB 5,000 per month) 96.

Introducing this model nationwide will make it possible to significantly increase the share of children covered by childcare services and to reduce the waiting list for child day-care centers. The most significant improvement will be in the segment of care for children below three years (the waiting list for child day-care centers consists mostly of young children aged 1.5-3 years). In addition, resolving the waiting-list problem by establishing home child day-care centers is almost 10 times less expensive for the budget than constructing new child day-care centers. Eliminating the waiting list by constructing day-care centers would cost the Russian budget about RUB 1 trillion, and construction is a long-term project. Eliminating the waiting list by developing home child day-care centers would cost less than RUB 100 billion. Moreover, this scenario would bring an additional economic effect and ensure full payback for the budget. Full-fledged implementation of the program would bring about one million employable mothers (aged 20-40), who currently have to stay at home with their children, back to work; they would start contributing to GDP (more than RUB 500 billion) and paying taxes to the budget (about RUB 150 billion). Increased employment among mothers would also help increase family incomes and reduce the share of low-income households. Furthermore, about 300,000 new jobs (certified home tutors) would be created, and these new employees would contribute to GDP and pay taxes to the budget.

95 Appendix 1. An Overview of ECEC Systems in the Participating Countries. Norway. URL: http://www.oecd.org/edu/preschoolandschool/1942347.pdf. Cited on 09.08.2013.
96 However, this would not be affordable for families in poverty; they would require complete subsidies.
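A rough reconstruction of this back-of-envelope estimate is sketched below. All inputs are the figures quoted above (300,000 tutors, three children per tutor, RUB 50,000 one-off training, RUB 5,000 monthly state subsidy per child, RUB 500 billion GDP and RUB 150 billion tax gains); the arithmetic tying them together is our own reading of the text, not an official calculation.

```python
# Back-of-envelope reconstruction of the home day-care cost/payback estimate.
# All figures come from the text; the linking arithmetic is ours.

TUTORS        = 300_000   # new certified home tutors
CHILDREN_PER  = 3         # children per tutor ("one tutor releases three mothers")
TRAIN_COST    = 50_000    # RUB, one-off training cost per tutor
STATE_SUBSIDY = 5_000     # RUB/month per child, paid by the state

mothers_to_work = TUTORS * CHILDREN_PER                       # ~900,000 mothers
training_bill   = TUTORS * TRAIN_COST                         # one-off
subsidy_bill    = TUTORS * CHILDREN_PER * STATE_SUBSIDY * 12  # per year

gdp_gain = 500e9  # RUB/year, contribution of ~1M working mothers (from the text)
tax_gain = 150e9  # RUB/year, taxes out of that contribution (from the text)

print(f"mothers returned to work: {mothers_to_work:,}")
print(f"one-off training cost:    RUB {training_bill / 1e9:.0f} bn")
print(f"annual subsidy cost:      RUB {subsidy_bill / 1e9:.0f} bn")
print(f"annual tax revenue gain:  RUB {tax_gain / 1e9:.0f} bn")
# Annual subsidies (~RUB 54 bn) plus one-off training (~RUB 15 bn) stay well
# under the ~RUB 1 trillion price tag of building new day-care centers,
# consistent with the "<RUB 100 bn" claim in the text.
```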
Pursuing this model throughout Russia could ensure the following:
• for economically stronger regions (90 million inhabitants): RUB 5,000/RUB 5,000 per child per month from parents/the state budget;
• one tutor releases three young women for work;
• a million employed women increase GDP by RUB 500 billion per year;
• more than RUB 150 billion out of these RUB 500 billion are tax payments to the budget.

Economically stronger regions:
• RUB 50 billion per year to implement the program;
• RUB 100 billion per year of budget revenue;
• 90,000 additional births per year.

Depressed regions:
• RUB 25 billion per year to implement the program;
• RUB 20 billion per year of budget revenue;
• 40,000 additional births per year.

FLEXIBLE WORKING HOURS FOR WORKING MOTHERS

Flexible working hours for working mothers are another effective measure to support the fertility rate 97. International experience reveals the following mechanisms for encouraging flexible employment for parents:
• a statutory right for a parent to transfer to a part-time job after the birth of a child, given the need to care for young children;
• a statutory right for parents who transferred to a part-time job to resume their full-time job when the need to care for their young children expires;
• statutory protection of the equal rights of full-time and part-time employees;
• the right of an employee with young children to set the time they start and finish work, on a permanent or temporary basis;
• statutory protection for remote employees and removal of statutory barriers to remote employment;
• encouragement of part-time employment;
• encouraging employers to permit employees with children to independently regulate the time they start and finish work, on a permanent or temporary basis;
• encouraging employers to let employees take time off in lieu and leave their work stations during certain hours (either unpaid or to be compensated afterwards), if necessary.

As practice shows, it is primarily mothers of young children who use such opportunities in societies where flexible employment is offered. These measures help bring into the labor market women who would not be able to combine family duties and work under less favorable circumstances. Current Russian employment law provides for such measures as a mother's right to request a part-time job due to the need to take care of a child. However, such measures (already pursued in other countries) as a mother's right to return to her full-time job after the child care period expires have not been implemented in full. According to a recent survey, Russian employers quite often use part-time jobs but seldom use measures such as flexible working hours. Flexible working hours and the opportunity to devote certain periods during working hours to family duties can be an important mechanism for combining family duties with a job 98.

97 OECD. Doing better for families. Paris: OECD, 2011. P. 149-158. URL: http://www.oecd-ilibrary.org/socialissues-migration-health/doing-better-for-families_9789264098732-en.
98 OECD. Babies and Bosses: Reconciling Work and Family Life: A Synthesis of Findings for OECD countries. Paris: OECD, 2007. URL: www.oecd.org/els/social/family.

HOUSING MEASURES

"Therefore, now, at a new stage, we need to resolve the housing issue for wider categories of our citizens: young families ... take measures to increase the volume of affordable budget housing commissioned and significantly expand housing rental opportunities." V.V.
Putin, State-of-the-Nation Address, 12 December 2012

Low family incomes limit families' opportunities to acquire and improve housing: 40% of families with children live in premises without hot water, 33% in premises without central heating, and 15% in premises without running water 99. There are special federal programs addressing the housing problems of large families. For participants in the Housing for Young Families program within the federal target program "Housing" for 2011-2015, the average wait at the current level of financing is about 8-10 years. Meanwhile, according to surveys, low housing accessibility is a strong factor blocking fertility growth; making housing more accessible may therefore have a significant positive impact on fertility 100. Survey data also show that living in one's own house has a notable positive effect on fertility 101. At the same time, it should be noted that there are as yet no internationally demonstrated housing measures with proven effectiveness in raising fertility 102 (though the Russian maternity capital may well be regarded as such a measure: on the one hand, it has increased fertility in Russia very significantly [see above], and on the other, it has predominantly been used precisely to improve families' accommodation). This does not mean, of course, that such measures are unnecessary, especially since Russia suffers from greater housing deficits than most other industrialized countries. Maternity capital itself had not been proven internationally before it was introduced, yet it proved highly effective. Surveys performed in Russia suggest that housing measures may have a strong positive impact on fertility. However, the lack of adequately proven evidence means it is advisable to begin pursuing most of these measures as pilot projects, starting with the most demographically depressed regions. If particular measures prove effective in individual regions, they can then make their way to other regions as well.

Possible Solutions in Regard to Housing
• provide families, after a second childbirth, with the right to purchase housing at a subsidized cost and at a subsidized reduced interest rate;
• provide families, after a third childbirth, with the right to purchase housing at a subsidized cost through an interest-free mortgage;
• increase financing of the subprogram "Housing for Young Families" and expand its coverage to large families, without the 16-year age limitation for the youngest child;
• develop a subprogram "Housing for Large Families";
• introduce regional subsidies to large families for commercial hire and housing rent, and reduced rates on utility bill payments;
• develop low-rise affordable housing construction, especially units for larger families, with priority purchasing for families with at least three children.

FAMILY POLICY MANAGEMENT

Proper administration of demographic policy and full implementation of the measures and policies described above require a decent management infrastructure. An important success factor for family policy is the presence of strong institutions that ensure effective management, coordination between levels of authority and partnership between sectors. In France, these bodies are the Supreme Family and Children Council, the National Family Allowances Fund and the National Union of Family Associations.
In Russia, family policy institutions have yet to be properly developed 103. Currently, there are no bodies at the national level, or in most constituent entities of the Russian Federation, that are in charge of family policy, and coordination between the various departments and levels of authority has yet to be established. There is no long-term federal target program in the family policy area. To ensure optimum implementation of an effective demographic policy, the following management decisions are proposed 104:
1. Ensure management coordination: establish bodies responsible for family policy implementation at all management levels, and create a Family Policy Council overseen by the President of the Russian Federation, with the participation of religious leaders, as well as family policy councils under the regional governors and heads of municipal administrations.
2. Create a Family and Children Support Fund with regional branches, modeled on the National Family Allowances Fund in France (CNAF), as the senior managing partner in the family policy area. Among other resources, its budget should be augmented by excise taxes on alcohol and taxes on the gambling business. The Fund could ensure more effective administration of the maternity capital resources currently managed by the Russian Pension Fund, development of the care system for young children below three years, work with families in difficulty, etc.
3. Create support centers for families with children in each urban district and municipal area, jointly with non-profit organizations of traditional religious confessions, to provide consulting support to families, including social work with families on a social contract basis.
4. Develop a "Family and Children" public program providing for step-by-step implementation of a set of measures to support large families, starting with demographically depressed regions.
5. Train and retrain government employees in charge of demographic and family policy.
6. Arrange a system of independent social expert examination, in the Open Government format, to assess the impact of relevant decisions on the standing of families and children.
7. Expand statistical surveys of families with children; intensify family studies.
8. Develop social well-being standards for families with children. According to expert estimates, the self-sufficiency level (SSL) for families with children is 150% higher than the minimum subsistence level. The share of families with income equal to or greater than the SSL should be treated as a target and summary indicator of successful economic policy in the Russian Federation and of the effectiveness of regional and local authorities' efforts to develop human potential.

First and foremost, these measures must be implemented in demographically depressed regions. According to expert estimates, such regions are home to approximately one third of Russia's population, while large families there account for only about 1% of all families with children. This means that even the strongest measures will not be expensive; and if they do become expensive, that would itself indicate that the crisis in these regions has been overcome.

INTERNATIONAL EXPERIENCE REGARDING AN INCREASE IN LIFE EXPECTANCY

A considerable increase in the life expectancy of Russians requires analysis of international experience in this area.
Precedents of rapid growth in life expectancy have recently been registered, during the post-Soviet period, in countries culturally close to Russia, such as Estonia and Poland and other post-socialist countries of Central and Eastern Europe. Gender- and age-specific analysis of mortality from various causes in Russia and in these countries shows that mortality may be significantly reduced by limiting access to hard alcohol (including illegal alcohol) and to tobacco 106. It is noteworthy that all new EU member countries have implemented a key measure to reduce tobacco consumption: a hike in cigarette excise taxes to the EU minimum level of EUR 1.28 per pack, which has reduced tobacco consumption 107. Avoidable mortality from tobacco consumption is at least 150,000 people per year (this estimate is based on the difference between per capita cigarette consumption in Russia and in countries with an effective anti-tobacco policy). Avoidable mortality from the abuse of alcohol, including hard alcohol, most likely exceeds 200,000 people per year.

In recent years, Russia has approved legislative amendments aimed at implementing most of the key recommendations of the World Health Organization for reducing harmful alcohol consumption, including time limits on alcohol sales, limits on geographical access to alcoholic beverages (prohibiting alcohol sales in kiosks), and higher prices and excise taxes on alcohol products. At present, the focus should be on enforcing these laws and the prohibition of alcohol sales to minors. In particular, a big problem is illegal production and tax avoidance by producers of hard alcohol. To resolve this problem, it is necessary to lower the threshold at which sales of illegal and non-excise alcohol become subject to criminal prosecution, and to improve law enforcement mechanisms. However, it is critically important to continue introducing those effective measures that have not yet been introduced. As regards tobacco, of the four key measures capable of effectively reducing tobacco consumption, only two have been approved and will come into force in 2013-2014: the prohibition of smoking in public places and graphic warnings on cigarette packages. Advertising has yet to be completely banned, and excise taxes have yet to be raised to the level approved in Eastern European countries. Excise hikes are the most effective measure against smoking (especially among children and teenagers). Russia's current tobacco excise taxes are five times lower than the minimum EU rate (a rate that applies even in countries with lower per capita income than Russia, such as Bulgaria and Romania). It is this fact that explains the record-high tobacco consumption among adults and teenagers in Russia.

HEALTHCARE SYSTEM PROGRESS

Experience in the Central and Eastern European countries (Poland, Estonia, the Czech Republic and others) shows that another strong resource for reducing mortality in Russia, especially in older age categories, is modernization of the healthcare system. In the past, Soviet healthcare made considerable contributions to extending life expectancy in the USSR. However, in the 1970s it became evident that this healthcare system was lagging behind those in the West, as reflected in the relatively higher sickness rates and lower life expectancy of Soviet citizens. Especially notable differences were evident in the gains in life expectancy achieved in the West through measures to control cardiovascular diseases.
These included not only changes in lifestyle, but also massive increases in the prescription of medications to control cholesterol, blood pressure and blood sugar levels (the so-called "cardiovascular revolution"). The weakness of the Soviet medical and healthcare system, as compared to its Western counterpart, was due not only to greater Western financial support for continuous improvements in healthcare, but also to the rapid development of clinical epidemiology in the West, which improved the methodology for biomedical research and the processing of medical information. Since the 1990s, a drive to implement more rigorous evidential approaches in medical care began to contribute to improvements in clinical practice in the West. These efforts focused on implementing medical interventions whose effectiveness and safety had been demonstrated in high-quality biomedical and clinical research 108. This made it possible to identify and eliminate a number of ineffective interventions and, moreover, to identify and implement "best practices" as the standard of treatment through the system of medical guidelines. The Russian healthcare system was also involved in this process, but the language barrier and financial difficulties hindered Russia from achieving the same level of progress.

At this point, further development of the Russian healthcare system to a level comparable with Western systems requires an increase in financial resources. According to World Bank data, Russia's spending on healthcare as a percent of GDP is still quite low by world standards (131st out of 190 countries in the World Bank ranking). In addition, Russia ranks last in Europe on this indicator (along with Romania). The share of medical expenses relative to GDP in most of the more developed European countries (with notably higher GDP per capita) is roughly twice that of Russia. It is therefore obvious that this share should by no means be lower in Russia, and should preferably grow higher, to narrow the considerable gap in health with Western countries 110,111. In fact, even in many OECD countries with lower income, the share of GDP spent on healthcare considerably outstrips that spent in Russia.

The shortage of financing for the Russian healthcare system is aggravated by an ineffective distribution of resources. The modern healthcare model achieves savings through far greater use of outpatient treatment as opposed to hospital treatment, and through a bigger role for nursing staff and general practitioners in treating patients. The savings can be allocated to supplying patients with pharmaceuticals to control chronic conditions, and to paying more attractive salaries to medical staff, which reduces shadow payments and corruption in the medical sector. Some Central and Eastern European countries shifted to the most effective Western healthcare practices more rapidly during the post-Soviet period than Russia did, owing to the integration processes accompanying their admission to the European Union. In particular, the Baltic States had largely transitioned to a healthcare system centered on general practitioners by the late 1990s 112.

109 Moldova is the country with the lowest income in Europe. Therefore, its high share of healthcare expenses in GDP only partly offsets its extremely low per capita GDP. Meanwhile, in 2008 (the last year for which comparative data are available) life expectancy even in Moldova was higher than in Russia (and Moldova's higher share of healthcare expenses clearly contributed to this result).
110 For instance, the gap between men's life expectancy in Russia and Switzerland in 2008 was 18 years, while GDP per capita (in 2009) was more than seven times higher in Switzerland than in Russia. Obviously, even if Russia allocated the same share of GDP to healthcare as Switzerland does, the gap between our countries would still be huge. But our share is less than 50% of the Swiss value. As a result, the gap in per capita healthcare spending between Russia and Switzerland becomes really great: more than 15-fold!
111 The calculations were based on the following data: World Bank.

Meanwhile, since the structural changes in the healthcare systems of the post-socialist countries of the European Union differed significantly 113, and a rise in life expectancy was observed in all of them, it is most likely that the key component of improvement was not a particular organizational structure, but rather the harmonisation of medical practices with worldwide standards, in particular through the adoption of best-practice medical guidelines. In Russia, given its exceptionally high mortality from cardiovascular diseases, increased prescription of medicines to control cholesterol and arterial tension should make a significant contribution to decreasing mortality rates. This approach has been the most important component of the so-called "cardiovascular revolution" in developed countries. It is economically viable for the state to finance access to such medicines from the federal budget, since this directly affects the number of disability cases resulting from heart attacks, strokes, etc., providing savings in healthcare costs that would offset the cost of the medications.

In sum, Russia's medical and healthcare system could be dramatically improved by accelerating the introduction of the most effective practices (protocols and procedures for treating diseases), including through harmonisation with those of Europe, the USA, Australia, Canada, etc., together with systems that encourage medical staff to use these practices and motivate them to abandon ineffective methods of diagnostics, prevention and treatment. More accessible emergency medicine, especially in cases of so-called "cardiovascular catastrophes" (heart attacks and strokes), will also help reduce mortality from cardiovascular diseases. This task will require establishing inter-disciplinary medical brigades based at existing therapeutic institutions, mandatory use of computed tomography (CT) or magnetic resonance imaging (MRI) scanners in healthcare institutions providing medical aid at the early stages of cardiovascular catastrophes (less than 12 hours), and equipping such institutions with fibrinolytic medicines with proven clinical effectiveness. The number of such centres in most regions is insufficient. However, not all required changes are high-tech and high-cost. The Western best practice of administering aspirin immediately at the onset of heart attack symptoms is a low-cost way to reduce mortality, provided doctors instruct emergency staff and their patients at risk of cardiovascular events to be prepared and to act promptly. Similarly, daily aspirin therapy is now recommended to prevent heart attacks and strokes in at-risk patients. In Russia, with its vast spaces, it is important to maintain healthcare services (including emergency medicine) that are accessible in rural and other remote areas.
This will require retaining feldsher-obstetric stations, expanding training courses for paramedical personnel and extending their authority. It is also necessary to increase the economic accessibility of medicines for patients suffering from chronic and widespread diseases, including oncological diseases; this, specifically, will reduce mortality among oncological patients. Other effective and financially viable means of reducing mortality from oncological diseases (apart from addressing tobacco smoking) include screening for rectal and colon cancer (colonoscopies) and universal vaccination of girls below 16 years against human papillomavirus (to reduce cervical cancer). If the entire set of these approaches could be implemented, mortality rates should fall rapidly in the Russian Federation and could approach the levels of such countries as Estonia, the Czech Republic, Poland or Chile.

Despite the significant fall in mortality rates in 2005-2010, Russia still ranks 22nd highest worldwide in terms of mortality 114. The main reason behind this situation is the high mortality rate among employable men. At current mortality rates, one third of 15-year-old men will die before they are 60 years old 115. Every fifth death in Russia is related to alcohol (about 400,000 deaths annually) 116. Another 330,000-400,000 deaths annually are caused by tobacco-related diseases, and at least 100,000 deaths by the consequences of drug use 117. Measures to counteract alcoholism, tobacco smoking and drug addiction are a top priority for reducing the premature mortality of the Russian population.

MEASURES TO REDUCE MORTALITY FROM EXTERNAL CAUSES

Methods to reduce mortality from external causes require special consideration. The key method is to reduce national consumption of alcohol, primarily hard alcohol. However, many other preventable non-disease causes of mortality can also be addressed by better policies. According to the World Health Organisation, effective measures to prevent suicides include timely identification and treatment of depressive and other mental disorders, arranging online psychological consulting for people in difficult situations, including teenagers and young adults, support for people who have attempted suicide, and limiting access to means of suicide, such as firearms, chemicals and medicines 118.

Avoidable mortality from road accidents in Russia amounts to at least 15,000 deaths per year. Proven effective approaches include speed limits and automated speed control, control over drunk driving, use of helmets, seat belts and baby seats, bringing road infrastructure into compliance with international safety standards, setting modern safety requirements for cars manufactured in and imported into the Russian Federation, and ensuring timely, high-quality emergency aid to victims of road accidents 119.

Mortality from fires may be significantly reduced (by around 40%) not only by implementing anti-alcohol measures, but also by introducing the requirement that only cigarettes with improved combustion characteristics (fire-safe cigarettes, with fire-retardant paper) be manufactured in Russia. Such a cigarette goes out if the smoker does not inhale within several seconds. EU countries prohibited the manufacture and sale of all cigarettes except flameproof ones on 17 November 2011. The cost of this novelty is insignificant, about 0.01-0.02 euro cents per pack of cigarettes.
Such a measure is also in effect in a number of US states, as well as in Canada, Australia and South Africa.
ANTI-ALCOHOL POLICY
Introduction of a vigorous anti-alcohol policy, drawing on the experience of the Scandinavian countries, would help to reduce mortality by more than 400,000 people annually and save up to 2% of GDP per year 120. Key measures would include:
• A step-by-step increase in alcohol prices, by hiking excise taxes and minimum prices at a pace exceeding inflation over the next 3-5 years, by at least 150% to the level of the Baltic states. This would prevent death and disability for 300,000 people in Russia annually (according to the approved three-year plan); this measure alone would save 1.8 million people in Russia by 2020.
• Limitation of alcohol sales during evening and night hours at the regional level, in addition to the current federal prohibition, which will lead to an immediate fall in mortality rates (65 Russian constituent entities have already introduced this measure). Furthermore, it is necessary to expand the federal limitation window from 8 pm to 11 am and to significantly limit alcohol sales on Sundays and on Saturdays after 4 pm.
• Limitation of the geographical accessibility of alcohol to the level approved in the Scandinavian countries: no more than 1 point of sale of alcohol stronger than 4-5% per 5,000 people (current accessibility of alcohol in Russia is unprecedented: about 1 point of sale per 360 people, including non-permanent points of sale).
• Counteraction of the production and sale of alcohol on which no excise taxes are paid: tightening control and liability for illegal alcohol production and sales, lowering the threshold of criminal liability for such offences, and expanding the scope of excise taxes to liqueurs and medical ethyl alcohol.
• Encouraging a shift away from spirits, i.e. more beer and wine consumption in place of vodka and hard liquor.
COMPREHENSIVE ANTI-TOBACCO POLICY
A comprehensive anti-tobacco policy, in line with the WHO Framework Convention on Tobacco Control and the Guiding Principles for implementing its provisions, must include the following measures.
• Limitation of price accessibility. Tobacco products in Russia have unprecedentedly low prices due to extremely low excise taxes. It is necessary to considerably increase excise taxes within 3-5 years to the minimum EU level (EUR 1.28 per pack of cigarettes). This would prevent up to 100,000 deaths per year and bring the budget up to RUB 700 billion annually.
• Total prohibition of tobacco advertising. The complete prohibition of tobacco advertising, marketing promotion and any sponsor contributions from tobacco companies approved in 2013 will help promptly reduce cigarette consumption by 14% among the Russian population in general, and even more among women and teenagers 121.
• Total prohibition of smoking in indoor public places, to minimize the risks and losses related to active and passive smoking. Specifically, heart attacks fall 17% during the first year after introduction of a complete prohibition, and the effect is even stronger in subsequent years, reaching 30% from the initial level 122.
• Placement on cigarette packages of realistic graphic warnings about the harm of tobacco to health. This measure does not involve any budget spending, but makes smoking significantly less popular (a reduction of up to 17% 123), especially among teenagers.
The issue should be resolved as part of the Technical Regulation for tobacco products of the Customs Union or EurAsEC.
KEY MEASURES TO REDUCE MORTALITY AMONG THE POPULATION OF THE RUSSIAN FEDERATION
To summarize, the key measures to reduce mortality among Russian people are as follows:
• step-by-step increases of excise taxes on hard alcohol by at least 150% to the level of the Baltic states, with enforcement of limitations on time, geographical and category accessibility and tightened control over alcohol production and sales;
• prohibition of sales of alcohol with ethanol content over 15% on Sundays and on Saturdays after 16:00; this measure proved very effective in the Scandinavian countries and should be implemented in Russia as soon as possible 124;
• prohibition of sales of alcohol with ethanol content over 15% in store departments that are not isolated from other departments and lack a separate street entrance; the point is that if "a person enters a store to purchase bread and sees alcohol on the shelves, this often prompts him to purchase alcohol as well" 125;
• hiking excise taxes on cigarettes to the minimum EU level (EUR 1.28 per pack of cigarettes) 126, total prohibition of tobacco advertising and of smoking in indoor public places, and placement on cigarette packages of realistic graphic warnings about the harm of smoking to health;
• harmonisation of medical practices (clinical practice guidelines, standards and protocols to treat diseases) with those in EU countries, the USA and Canada, primarily in the prevention, diagnostics and treatment of cardiovascular and oncologic diseases;
• ensuring the geographical and economic accessibility of healthcare, including by retaining feldsher-obstetric stations, expanding training courses for paramedical personnel and extending the scope of services offered by paramedical personnel (which in turn requires revising the principles of training paramedical personnel (nurses, feldshers), with a focus on strengthening theoretical and general clinical training in therapy and general surgery);
• implementing comprehensive systems of medical aid for vascular catastrophes (strokes and heart attacks), including the formation of round-the-clock inter-disciplinary medical brigades based on existing therapeutic institutions, mandatory availability of and round-the-clock access to computed or magnetic tomography scanners in healthcare institutions providing medical assistance at the early stages of vascular catastrophes (up to 12 hours), and ensuring that these institutions have fibrinolytic medicines of proven clinical effectiveness. The main requirement for the system is that computed or magnetic tomography scanning be performed no later than four hours after the emergency medical brigade is called, and that fibrinolytic medicines be administered to patients with an ischemic stroke no later than six hours after the first indications of a stroke;
• improving the effectiveness of the system for prevention and treatment of cardiovascular diseases (including "cardiovascular catastrophes") through application of methods with proven effectiveness and safety (including early diagnostics and pharmacological control of cholesterol, blood pressure and blood sugar levels), with reimbursement to citizens of the cost of such medicines;
• ensuring co-payment or complete reimbursement to outpatients of expenses to purchase medicines.
Develop a subsidized or free pharmaceutical supply system for patients suffering from severe chronic and socially significant illnesses, including oncologic diseases;
• reducing mortality from road accidents by introducing speed limits, stronger enforcement against drunk driving, control over the use of seat belts (including on rear seats) and baby seats, bringing road infrastructure and domestically produced cars into compliance with international safety standards, and ensuring timely assistance to victims of road accidents;
• including vaccination against human papillomavirus in the National Schedule of Preventive Shots in Russia, to considerably reduce sickness and mortality from cervical cancer;
• extensive and accessible communication to the general population and care providers about the early indications of potentially lethal crises (stroke, heart attack, hypertensive crisis, etc.) and the basic rules of first aid at the onset of such crises.
[Figure: "Demographic maneuver". A further substantial rise of the excise duties on tobacco and alcohol could save up to 300,000 human lives per year and bring up to 800 billion rubles to the state budget; these funds can be used to support family policy and to secure 500,000 additional births per year. Russia is able to substantially increase fertility and substantially reduce mortality without raising budget expenses.]
MEASURES TO OPTIMISE MIGRATION GROWTH
Given the possibly dubious social and cultural consequences of large-scale "replacement" immigration, as well as its inadequacy to compensate for current excess mortality and low fertility, immigration should be considered exclusively as an additional component of Russia's demographic policy. The main thrust of Russia's migration policy should be to reduce or eliminate the "push-out" factors that lead to emigration, to encourage migration flexibility among Russian citizens, and to refocus domestic migration flows towards the eastern regions of Russia. Policy should also include measures for the selective attraction of necessary categories of immigrants based on cultural and qualification parameters, and for maintaining migration gains at the target rate determined by the Concept of Demographic Policy of the Russian Federation for the period up to 2025 (300,000 people per year), since, according to most calculations, it is impossible to fully resolve the problem of the future reduction of Russia's population without maintaining migration gains at this level. It is important to note that a considerable migration reserve lies in curbing emigration from Russia, which drains educated and qualified specialists and young and active people, and is accompanied by business and capital flight. It is possible to reduce this emigration flow only by a significant increase in salaries in relatively low-paid government sectors (science, education, culture, art), minimising bureaucratic barriers to business development, eliminating corruption pressure on people, creating jobs and opportunities for professional self-realization on the labour market, and providing greater legal, security and property protections to improve the business and investment climate. In addition, it is necessary to engage more actively with Russian-speaking communities abroad. According to preliminary estimates, the number of representatives of these communities may be in the range of 25-30 million, and they have significant social, economic and demographic potential.
On the one hand, Russian-speaking communities may be "conductors" and "support points" for Russian business, education and culture abroad; on the other hand, they represent a certain demographic potential for return migration to Russia. Recognition of dual citizenship, simplified procedures to retain Russian citizenship for emigrants and their descendants, and significant benefits provided to them when they enter Russian higher education institutions may strengthen Russia's links with compatriots and attract additional compatriots to Russia. Domestic population migration represents a significant resource for the social and economic development of some Russian regions. Domestic population mobility should be developed by supporting the market for low-cost rental housing, developing a national information base of labour market vacancies, and establishing a system of preferences for professionals prepared to relocate and work in regions important for the country. These measures may help increase and rejuvenate the population of the Far East and some border and geopolitically important areas, relieve demographic pressure on economically depressed regions and settlements with high unemployment, and provide a workforce to regions and settlements experiencing labour shortages. It is advisable to improve the State program to encourage the return of compatriots to Russia, in effect since 2007, by providing its participants with Russian citizenship before they arrive in Russia; financing housing construction in the regions for local inhabitants and compatriots on a parity basis from the federal budget; simplifying procedures to provide land plots for construction and agricultural production; and providing tax benefits for opening and doing business in geopolitically important regions. It is necessary to make more active use of the integration potential of compatriots in the areas where they live, as people with experience of living in other social and cultural conditions who know the customs and traditions of other peoples. Compatriots may be involved in self-governance, social projects and cultural events in their areas of residence. It is extremely important not to limit participants of the State Program to Return Compatriots to Russia in selecting the region to live in. This approach will be more effective than the proposed relocation allowances and jobs in rural areas, which are not even in demand among local inhabitants. Attempts under the previous program to allocate returning compatriots to regions abandoned by the local population proved unsuccessful. Approximately the same problems have arisen in subprogram No. 3 "Providing Assistance to Voluntary Migration to the Russian Federation of Compatriots Living Abroad", approved in April 2013 as part of the State Program of the Russian Federation "Regional Policy and Federative Relations" (see Appendix 3). The experience of the 2007 program to encourage compatriots to return clearly shows that attempts to resolve at the same time the task of returning compatriots and the demographic problems of "priority settlement territories" result in neither task being resolved. These tasks should be resolved independently, even if they are coordinated.
On the one hand, Russia's appeal in the eyes of the required categories of immigrants will depend on the migration potential of the CIS countries, which is shrinking (by approximately 5-6 million people) or will gradually shift towards Europe, America, Asia and Australia. Russia should more actively develop its immigration potential, in the necessary scope and parameters, in "traditional" (CIS, Vietnam) and "new" geopolitically promising partner countries by disseminating the Russian language and promoting Russian literature, education and science. Russian cultural influence should increase through the Russian language, Russian literature, the mass media, and cultural, educational and scientific events. It is necessary to develop a special state program to attract educational (student) migrants to Russia from the CIS, Europe, the Middle East, South East Asia and Latin America. Special attention should be given to attracting children of compatriots living abroad to study in Russia. In addition to the above measures, this program should include financing of exchange programs, scientific and research projects, and grants for young people to visit and study in Russia. Development of this program may bring demographic, social, economic and geopolitical benefits to Russia. On the other hand, the appeal of migration to Russia depends on removing administrative and bureaucratic "barriers" to obtaining work permits, temporary residence permits, registration certificates and Russian citizenship for the necessary categories of immigrants: foreign students, postgraduate students, qualified workers, researchers, highly qualified specialists and those in rare professions, top managers, businessmen, investors. In addition, the Russian Federation has a significant reserve in immigrants who already live in the country but for various reasons have no legal status or opportunity to obtain it due to bureaucratic procedures (according to preliminary estimates, 2-3 million people, maybe more). It is possible to hold a special campaign to legalise immigrants who have not violated Russian laws, have worked in the country for several years, are integrated into Russian society and have property, but have had no opportunity to become Russian citizens. Based on the above analysis, priority measures to optimise migration gains may be formulated as follows:
1) In terms of improving the State program to encourage compatriots to return:
1.1) Ensure that all compatriots wishing to move to Russia go through a simplified procedure to obtain Russian citizenship in the countries where they live, before they arrive in the Russian Federation.
1.2) Do not limit the opportunities of participants of the State Program to Return Compatriots to Russia with regard to selecting the region to live in. The program will be more effective if it does not push returning compatriots into rural areas which are not even in demand among local inhabitants.
1.3) Expand opportunities for applicants from Russian-speaking families to obtain professional education in the Russian Federation, with adaptation courses that take into account the differences in educational programs between Russia and their countries of residence. This will enable Russia to better attract young compatriots from abroad and make it easier for them to adapt to the Russian social environment.
Allocate special educational grants from the Russian Federation budget to children from compatriot families for entering Russian higher education institutions. This will ensure an inflow into Russia of human resources that are especially valuable in demographic terms and, at the same time, financial support for the most effective higher education institutions. Such programs may be launched through a pilot project focused on one of the most developed European countries (for instance, Germany), with subsequent expansion to other countries if successful.
1.4) Ensure effective financing and promotion of state support for currently operating Russian language and cultural programs in the countries where compatriots live, so as to facilitate their early adaptation to Russian conditions in case of relocation, promote Russian culture and expand Russia's influence in these countries.
1.5) Recognise dual citizenship, and simplify procedures to retain Russian citizenship for emigrants and their descendants.
2) In terms of developing and implementing state programs to attract educational (student) migrants to Russia from abroad:
2.1) Approve the state program to attract educational (student) migrants to Russia, including exchange programs, language courses, and grants for trips and secondments.
2.2) Permit the employment of foreign students and postgraduate students who study in Russian higher education institutions, subject to certain limits on hours.
2.3) Ensure that migrants who have obtained secondary or higher vocational education in Russia, or have studied for a certain period of time (for instance, at least 10 years), automatically become Russian citizens.
3) In terms of facilitating the adaptation of labour migrants and the integration of part of them into Russian society:
3.1) Remove bureaucratic barriers to receiving a work permit, a temporary residence permit, a registration certificate and Russian citizenship for the necessary categories of immigrants (students, postgraduate students, qualified workers, researchers, highly qualified specialists and those in rare professions, top managers, businessmen, investors).
3.2) Ensure an opportunity to obtain a registration certificate and citizenship for migrants who have been staying in Russia for a long time and are integrated into the labour market, provided that they are prepared to integrate into the receiving community. Hold a campaign to legalise immigrants who have not violated Russian laws, have worked in the country for several years and are integrated into Russian society.
3.3) Ensure that programs are drafted to integrate into Russian society those migrants who are legally staying in the country.
4.6) Develop a system to attract graduates of higher education institutions to the eastern and border regions of Russia by developing "circular migration" and by providing housing and land as property.
5) In terms of reducing "push-out" factors and the emigration of professionals and researchers from Russia:
5.1) Dramatically increase salaries in currently lower-paid government sectors (science, education, culture, art), including base pay and academic degree supplements for researchers and lecturers at higher education institutions and research institutions.
5.2) Increase financing of the Russian Foundation for Basic Research and the Russian Foundation for Humanities, including support for Russian and international research centres and programs for secondments of foreign researchers and postgraduate students in Russia and of Russian researchers and postgraduate students abroad, and restore grants for Russian researchers for secondments and conference attendance abroad.
5.3) Provide researchers with the opportunity to spend resources on surveys, conferences and business trips without bureaucratic limitations, based on actual costs.
5.4) Develop scientific exchange programs, invite foreign researchers to Russian research centres and provide opportunities for Russian researchers to have secondments in foreign research centres sponsored by the state.
5.5) Establish a system of direct financing, on a tender basis, of effectively working research teams and centres at the expense of grants and budget allocations.
CONCLUSION
Today, the share of the working-age population in Russia's total population is one of the highest among all large developed countries. This offers an undoubted advantage compared to other countries and a historic chance: a wonderful opportunity to overcome the demographic hole and make a breakthrough in economic development. However, this exceptional situation will soon change forever, unless urgent measures are taken now. According to expert estimates, in just 20 years the age group of 20-40 years in Russia could be reduced by half. In a decade, the number of people aged 20-30 will also almost halve. People of these generations have the greatest potential for childbirth and active work, and their number is declining steeply. Today, we have half as many 15-year-olds as 25-year-olds! The conclusion is evident: Russia has just two or three years to strengthen the family, raise fertility and improve productivity in order to restore positive demographic momentum. Either the best conditions are created for these young people to give birth to and raise children, increase fertility and become a highly productive labour force, or in several decades Russia will become a hopelessly aged and poorer country, at risk of being unable to preserve its territory and its heritage. If we miss this window, we will lose our historic chance at revival. The upcoming decline in births due to the dramatic fall in the number of young women, made worse by the progressive loss of our employable population at a rate of more than a million annually, by increasingly widespread illness among families and children, by hundreds of thousands of deaths caused by alcohol, drugs and smoking, and by numerous excluded and deviant categories of the population: together these factors represent a definite threat to national security, capable of causing a population decline comparable only to the large-scale application of hostile military power on our territory. Needless to say, such a situation is extraordinary and requires decisive and urgent measures. Our duty is to do our best to ensure that the potential of the younger generations is fulfilled as much as possible, to prevent these negative phenomena from progressing further, and not to allow the quantitative decline and qualitative degradation of our people and the destruction of the national potential of our great country.
A response to this extraordinary challenge requires special efforts to coordinate the development and implementation of pro-family policy measures, to make Russian society more attractive to both high-skill and moderate/low-skill workers as a place to live and raise families and as a target for migration, and to interact with traditional religious confessions and other organisations having the potential to undertake pro-family education and relevant social work. The current generation, while still numerous, is called to resolve two tasks which are generally hard to resolve at the same time: to give birth to a large number of children and to build a new, modern economy. This will require special measures aimed at creating opportunities for parents to combine work and childbearing without limitations on their careers or the welfare of their families. The models of the American "baby boom" of the 1960s, when widespread prosperity and opportunities for jobs and housing led families to commonly have more children than previous generations, and of France and the Scandinavian countries, where extensive policies for family support and child care have produced the highest fertility levels in Europe, are worth trying to emulate. It is widely believed that the problem of Russia's declining population may be resolved through immigration. However, this is not the case, since all former USSR countries, without exception, are going through their own demographic crises. We need to multiply the quantitative and qualitative potential of our people. It is necessary to consistently pursue a policy of promoting traditional family and ethical values. A family with many children, which today is all too often regarded negatively, should become the goal of national life! The efforts of the state alone are not enough here. It is important to consolidate and direct civil society, the media, business, science and education to resolve this task. The religious factor may be the most important here. Today, more than ever, failure to act means that the deterioration in the numbers and health of our population will continue, while active, professional managerial action will ensure that our national potential is preserved and developed and our country is placed on a path to a stable and more prosperous population.
The projection model uses the following notation (the symbols were garbled in extraction and are reconstructed here from the verbal definitions): $N_{\tau,t}$ is the number of men (women) aged from $\tau$ to $\tau+1$ years at the moment of time $t$; $f_{\tau,t}$ are age-specific birth rates for women aged from $\tau$ to $\tau+4$ (i.e. by 5-year groups) at time $t$; $m_{\tau,t}$ is the age-specific mortality rate for ages $\tau$ to $\tau+1$ at time $t$; $M_{\tau,t}$ is the number of migrants (arrivals in the country), which may in general be negative in case of population outflow; and $s_{0,t}$ is the infant survival function at time $t$. Equation (1) describes the shift of the age structure by one year (due to mortality and migration); equations (2F) and (2M) describe the "source", i.e. the number of babies, with $\sigma$ denoting the share of girls among newborns and the sum taken over the age groups of women:

(1) $N_{\tau+1,\,t+1} = N_{\tau,t}\,(1 - m_{\tau,t}) + M_{\tau,t}$

(2F) $N^{F}_{0,\,t+1} = \sigma\, s_{0,t} \sum_{\tau} f_{\tau,t}\, N^{F}_{\tau,t}$

(2M) $N^{M}_{0,\,t+1} = (1 - \sigma)\, s_{0,t} \sum_{\tau} f_{\tau,t}\, N^{F}_{\tau,t}$

A minimal code sketch of this projection step is given after the opening paragraph of Appendix 2 below.
Appendix 2. On using external migration gains as the main source of resolving Russian demographic problems
Generally, we view it as extremely risky to draft plans to resolve Russia's demographic problems through migration gains (rather than by stimulating birth rates and eliminating Russia's excessively high mortality).
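As promised above, here is a minimal runnable sketch of one step of the projection, equations (1), (2F) and (2M). All names here are illustrative; the birth rates are assumed to be already expanded to single years of age, the open-ended oldest age group is dropped for brevity, and the share of girls among newborns is an indicative value:

```python
import numpy as np

def project_one_year(pop_f, pop_m, asfr, mort_f, mort_m,
                     mig_f, mig_m, s0, sigma=0.488):
    """One step of the cohort-component projection.

    pop_f, pop_m   -- women / men by single year of age
    asfr           -- age-specific birth rates per woman (same length as pop_f)
    mort_f, mort_m -- age-specific mortality rates
    mig_f, mig_m   -- net migrants by age (negative in case of outflow)
    s0             -- infant survival at time t
    sigma          -- share of girls among newborns (indicative value)
    """
    new_f = np.zeros(len(pop_f))
    new_m = np.zeros(len(pop_m))

    # Equation (1): each cohort ages by one year, losing the dead
    # and gaining (or losing) net migrants.
    new_f[1:] = pop_f[:-1] * (1.0 - mort_f[:-1]) + mig_f[:-1]
    new_m[1:] = pop_m[:-1] * (1.0 - mort_m[:-1]) + mig_m[:-1]

    # Equations (2F) and (2M): the "source" -- babies born to women
    # of each age, discounted by infant survival and split by sex.
    births = s0 * float(np.sum(asfr * pop_f))
    new_f[0] = sigma * births
    new_m[0] = (1.0 - sigma) * births
    return new_f, new_m
```

Iterating this step over successive years makes the mechanics behind the conclusion visible: with today's small youth cohorts flowing through equation (1), the sharp contraction of the 20-40 age group follows directly from the current age structure.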
The point is that all CIS countries (the main demographic donors for Russia) have faced their own demographic dips related to the steep decline in fertility rates in the 1990s (in Ukraine it was even steeper than in Russia; see Figure A2.1). The steepest decline in birthrates was registered in Central Asia, although from a very high level. These countries are therefore not yet facing the depopulation problem, but they are already experiencing a significant slowdown in labour force gains (see Figure A2.2). As a result, over the next years the labour markets of the CIS countries will see smaller and smaller cohorts in the age groups most likely to emigrate (Figures A2.3-A2.4), which will significantly reduce the local surplus labour force and act as the main driver decreasing migration gains for the Russian population.
Appendix 3. Subprogram No. 3 "Providing Assistance to Voluntary Migration to the Russian Federation of Compatriots Living Abroad"
Constituent entities participating in the subprogram should approve regional relocation programs, which will be endorsed at the federal level and receive co-financing from the federal budget. Also, a list of territories for priority settlement will be approved at the federal level (areas strategically important for Russia and characterized by population outflow and a falling number of employable people). The following advantages are provided for compatriots participating in the program: payment of a relocation allowance, compensation of transport expenses and document preparation fees, and payment of a monthly allowance in the absence of income from labour, business or other activity. In order to ensure jobs for relocated workers, the program provides for the possibility of coordinating an invitation for relocation with a future employer. As a result of the subprogram, 35,000 compatriots are expected to be relocated. However, the focus on "priority settlement areas" may prove to be a barrier to full implementation of the subprogram. As a matter of fact, a considerable migration outflow, especially of the employable population, may point to the comparative unattractiveness of living conditions (including employment opportunities) in such a region as compared to other regions (and to conditions in the countries where compatriots live). An attempt to attract compatriots under these conditions may prove less than successful, even taking into consideration the financial advantages provided under the program.
Appendix 4. Religious factor of fertility growth
As regards modern Russia, we may say that religiosity is a factor increasing fertility in the country. However, this impact is significant only among people involved in religious practices (ordinances, ceremonies) on a regular basis and participating in the life of religious communities. According to the all-Russian survey OrthodoxMonitor (2011-2012) 131, the share of large families is higher among churched Orthodox believers than the average for Russia, and the share of childless families is lower. The share of large families among representatives of other confessions is also high (15%). Among Orthodox respondents who take communion at least once a month, 16% have three children, while among people who may be regarded as the core of the community (in terms of self-identification and involvement in the social life of their parish), this indicator rises to a quarter of those polled (24%). Speaking of Russian people who regard themselves as Orthodox and participate in church ordinances, we need to consider more specific differences.
In terms of the fertility problem, an important factor is whether a person participates in the out-of-church activities of an Orthodox community and belongs to a developed community. Also, communities, as compared to the average Russian, have a considerably higher share of women planning to have a child (another child) over the next three years: 29% of women in communities, against a total of 7% in the all-Russian sample, answered "definitely yes" when asked whether they intend to do so (see Table A4.3). A.B. Sinelnikov, V.M. Medkov and A.I. Antonov, based on the sociological survey "Religion, Family, Children" and an analysis of the impact of religion in Russia on family life and the demographic behaviour of the population, found that not very religious people have an "average expected number of children of less than two", while even very religious Christians have this indicator at 2.53 134. If religiously active people (by frequency of prayer) are singled out from this group, their indicators of average actual, expected, desired and ideal number of children are considerably higher. For "quite religious Christians who pray at least three times a day, the average expected number of children is 2.82", which is "higher than the generation replacement line" 135. And "if we add to this additional parameters of religious activity, for instance the frequency of confessions and communions, the indicators ... may be even higher". However, the higher the degree of religious activity, the smaller the number of such people. In their article "Differentiation of childbirth factors for various social and economic categories of Russian women", Ya. M. Roschina and A. G. Cherkasova (using data from the Russian monitoring of the economic position and health of the population for 2000-2006) concluded that "religious women are more likely to have a child" 136 (the analysis was based on a sample of women aged 16 to 39). It is noteworthy that, according to the "Analytical report based on a selective survey of reproductive plans of the population" performed by the Russian Federal State Statistical Service in 2012, "both the desired and expected number of children, on average, for women and men is higher among people who consider themselves religious" 137 (see Table A4.4). These results are affirmed by surveys held both globally and in individual foreign countries. Specifically, data from the World Values international program consistently show that religious families tend to have many more children than non-religious ones (see Figure A4.1). Community support networks may compensate for the lack of a developed system of state support to families (for instance, in Bulgaria), while in other countries such networks may also impact the childbirth decision (Italy). In some countries, religious communities are characterised by more developed networks of this kind 141. Various aspects of the impact of religion on fertility have also been registered for other countries 142. Caroline Berghammer, in her survey of quantitative data 143, estimated the contribution of religiosity and religious socialisation to third childbirth among women in the Netherlands. According to an analysis of panel data (2002-2004), two factors impact third childbirth: church attendance by the woman and religious socialisation of the father. Religious socialisation makes a difference even if the child's mother has stopped attending church. The effects of the religious factor vary across groups.
Moreover, the religious characteristics of grandmothers and grandfathers (the parents of respondents) have a significant influence on third childbirth in a family. In his survey, Guido Heineck 144 (using quantitative data from the Austrian Family and Fertility Survey) studied the links between religion and fertility among families in their first or only marriage. According to the results, a woman's religiosity has a positive impact on the number of children in the family. Thomas Baudin, also on the basis of quantitative data, stated in his surveys 145 about France 146 that while confessional affiliation and self-identification as a religious person produced no impact on fertility, participation in practices ("practising religious people") has a tangible positive impact both on fertility as a whole and on the number of children. All in all, most empirical surveys in this area, performed both in Russia and abroad, confirm a tangible impact of participation in religious practices on fertility, and an even stronger impact of religious socialisation on the part of parents and grandparents. The results obtained suggest that religion contains a certain set of attitudes, norms and values which are transferred (acquired) in the course of socialisation, including the "large family norm".
Possible measures to strengthen the effect of the religious factor in the area of fertility growth
Based on the preceding data, we may suggest that the availability of Orthodox (and other traditional religious) communities with developed church and community activities may be instrumental in improving fertility in the country. Therefore, it is advisable to take measures that allow all religious denominations and their communities to flourish:
1. Develop the system of financing initiatives of religious organizations in social and charitable activities, for instance by creating grant tenders (under the presidential administration and related ministries and departments) at the federal level as well as in regions and municipal formations.
2. Create a system of basic economic, legal and informational education for potential participants in grant tenders. Develop training sessions for participants to build skills in preparing tender documentation and project reports, and share the knowledge and skills required for preparing such documentation.
3. Provide premises for the social activities of churches and religious organizations, especially activities aimed at Russian people of reproductive age and at children (Sunday schools, youth groups, mother and baby homes, recreation centres for parents and children, etc.).
4. Support the websites and media of all traditional religious confessions (especially those devoted to family, motherhood and childhood), and ensure they have access to airtime on federal channels.
5. Establish a State Foundation to support large families, with the participation of the Russian Orthodox Church.
6. Create social family support centres and crisis pregnancy centres in urban areas and municipal districts jointly with the Russian Orthodox Church and other faiths, and ensure their budget financing.
7. Provide assistance to the lay education centres of the Russian Orthodox Church and other faiths which offer, among other services, recreation, sports and educational opportunities for children and their parents. Models of this kind in the U.S. include the YMCA (Young Men's Christian Association), Newman Centers (Catholic) and JCCs (Jewish Community Centers).
In the US, such "faith-based initiatives", which involve funneling state funds through religious organizations to provide community services (without discriminating among faiths), have often been more effective than direct provision of government programs.
8. Remove legislative and administrative barriers to the participation of priests with higher education in secondary schools as teachers of mandatory or elective subjects.
9. Provide an opportunity to hold group counselling events with the participation of priests of the Russian Orthodox Church and clergy of other faiths in secondary schools, if children and/or their parents so desire (on issues of interest to pupils and selected by them).
For many decades, the Soviet Union discouraged public and community religious activities. The recent support of the Russian Federation for the traditional confessions, including funds to restore monasteries and churches, has been a welcome change in policy. However, for promoting fertility among the Russian people, much more important than restoring buildings is promoting the free expression of religion by people of all faiths, to build strong pro-family religious communities that will encourage child-bearing and support larger families.
Appendix 5. Regional differences in natural population movement and regional demographic policy
Russian regions vary significantly in terms of natural population movement. On the one hand, some regions (primarily some republics of the North Caucasus and Siberia) have a significant natural population increase; on the other hand, some regions (primarily in the Central Federal District) have a natural population decline exceeding 0.5% per year, even after the notable improvement of the demographic situation in recent years. In 2011, 29 regions recorded a natural population increase. The biggest gains were registered in the Chechen Republic and the Republic of Ingushetia, where they exceeded 2%, reaching 2.4% and 2.3%, respectively. Four other regions had this ratio above 1%: the Republic of Tyva, the Republic of Dagestan, Altay and the Yamalo-Nenets Autonomous District. Four more regions had a natural increase ranging from 0.5% to 1% (the Khanty-Mansiysk Autonomous District - Yugra, the Kabardino-Balkar Republic, the Republic of Sakha (Yakutia) and the Tyumen region). All other regions had a natural population decline. The biggest losses were recorded in the Pskov and Tula regions (0.9% and 0.8% in 2011, respectively). Total natural population loss was more than 0.7% in the Novgorod, Tambov and Tver regions, and more than 0.5% in the Bryansk, Vladimir, Voronezh, Ivanovo, Kursk, Leningrad, Nizhny Novgorod, Orel, Penza, Ryazan and Smolensk regions and in the Republic of Mordovia. The total fertility rate is lowest in the Central Federal District and in some regions of the North-West and Volga federal districts (especially in the Leningrad and Tula regions, in the Republic of Mordovia and in Moscow). The strongest birthrates were registered in some republics of the North Caucasus, Siberia and the Far East. Only four regions (the Republic of Altay, the Republic of Ingushetia, the Republic of Tyva and the Chechen Republic) have birthrates higher than required to ensure population replacement. In all regions (except for the Chukotka Autonomous District), the total fertility rate was higher in 2011 than in 2005. Its increase in 2011 vs.
2005 is attributable to differences in birthrates across the regions: regions with higher birthrates saw, on average, a bigger increase, whereas regions with lower birthrates saw a smaller one. Among the regions with the weakest increase in the total fertility rate, four (the Republic of Mordovia, the Leningrad and Tambov regions, and Moscow) are in the group of regions with the lowest birthrates in 2011. In the group of regions with the biggest increase in the total fertility rate, half are regions with the strongest birthrates (the Republic of Altay, the Republic of Ingushetia, the Republic of North Ossetia-Alania, the Republic of Tyva and the Chechen Republic). Most likely, the populations of these regions have a higher demand for children, and state support for second and subsequent childbirths, perceived there as improved conditions for realising the existing need for children, had a stronger impact on reproductive behaviour. Analysis of data on birthrates by birth order suggests that the likelihood of second childbirth is relatively low, first and foremost, in the Republic of Karelia, the Republic of Komi, the Khabarovsk, Vladimir, Voronezh, Ivanovo, Kirov, Kostroma, Kursk, Lipetsk, Moscow, Novgorod, Orel, Penza, Pskov, Samara, Saratov, Sakhalin, Smolensk, Tambov, Tula and Yaroslavl regions, and in Saint Petersburg 147 (147 Only those regions for which data on birth order among childbirths are available for 2011 are considered.). Most of these regions have long been characterised by a large number of one-child families rather than just small families. These regions need to focus on support for second childbirths and provide for significant differentiation in the various types of allowances and benefits for families with children, so that two-child families enjoy much more favourable conditions than one-child families. The stronger growth of the total fertility rate for second and subsequent children in 2007-2011 may indicate that the population in these regions is more inclined (as compared to people in other Russian constituent entities) to respond with their reproductive behaviour to similar measures in the future. This means that it is advisable in these regions to develop further the measures that have already been implemented there. Such regions, for example, may see a stronger effect from maternity (family) capital; most likely, families there would respond relatively more actively to various forms of financial support. First and foremost, such regions include the Republic of Kalmykia, the Mari El Republic, the Republic of Tatarstan, the Republic of Udmurtia, the Republic of Khakassia, the Chuvash Republic, and the Kostroma, Omsk and Chelyabinsk regions 148. On the contrary, the weaker growth of the total fertility rate for second and subsequent births in 2007-2011 in most other regions may suggest that such measures are clearly insufficient to ensure any notable fertility growth. This trend is primarily observed in the Republic of Mordovia, the Primorsk, Leningrad, Moscow, Murmansk, Penza and Tula regions, and Saint Petersburg.
Regional demographic policy measures regarding birthrates may be divided into three groups: measures taken to complement and expand federal measures (a regional one-time maternity grant, including amounts differentiated by birth order; an increase in the monthly benefit for children under 1.5 years at the expense of the regional budget for certain categories of families; regional maternity (family) capital); new measures proposed by the federal centre and implemented by regions (a monthly payment for third and subsequent children under three years old in the amount of the child's minimum subsistence level, co-financed by the federal budget in demographically weak regions; providing land plots to large families to build a house or a summer cottage); and measures initiated at the regional level. The latter include, for instance, measures to support low-income families with children and pregnant and nursing mothers (for instance, the Republic of Buryatia and the Kamchatka, Irkutsk, Kaluga and Kirov regions). A monthly benefit for children from three to six years, higher than the monthly benefit for children under 16 years, is paid in Saint Petersburg, the Yamalo-Nenets Autonomous District, the Republic of Komi and the Republic of Sakha (Yakutia).
Regions with an insignificant increase in life expectancy show similar diversity. On the one hand, they include North Caucasus regions with formally high life expectancy, the reliability of which is doubtful due to the low quality of the statistical service (the Chechen Republic, the Dagestan Republic, the North Ossetia Republic, the Karachay-Cherkessia Republic); on the other hand, this group also includes regions with average (the Samara and Orenburg regions, the Republic of Bashkortostan and the Republic of Mordovia) and even high (the Kamchatka region, the Republic of Sakha (Yakutia), the Magadan region) mortality. In most regions, life expectancy growth for men and women closely correlates with mortality declines in the middle and older working-age categories. The age profile of life expectancy growth differs significantly depending on the levels achieved and the scale of the increase over the last five years. In the group of strong regions, the increase is spread across all categories of the adult population, from the young to the elderly, which points to a wide range of measures reducing mortality in these categories. In regions with life expectancy above the Russian average, the slower growth over the last five years was driven mostly by gains in the middle and older working-age categories. In the group of regions with life expectancy below the Russian average, the growth was achieved through mortality decreases in the middle and young ages for men, while for women it depended on the middle and older age categories. In the group of regions with relatively high mortality and a life expectancy increase above the Russian average, this increase strongly correlates with a reduction in mortality in all categories of the working-age population from 15 to 60 years old, and does not correlate with mortality trends in the elderly and old age categories. This agrees well with the explanation that in most Russian regions the recent increase in life expectancy occurred to a large extent due to a decrease in harmful alcohol consumption 149.
Analysis of the 2012 regional data
In 2007, the strongest gains in the total fertility rate were recorded in the Republic of Tyva 150. Almost all of these regions (except for the Kabardino-Balkar Republic) were characterised by relatively high growth rates in previous years. We may assume that the populations of these regions still have a relatively higher need for children, so that state support for childbirths, perceived as improved conditions for realising the existing need for children, had a stronger impact on the decision to have a child. The year 2012 was marked by different regional patterns in total fertility rate gains. Regions that registered growth much stronger than in Russia as a whole (0.11) include the Nenets Autonomous District. Nearly all of these regions with a strong increase in the total fertility rate saw a more significant increase in 2012 than in 2007 (except for the Republic of Udmurtia, the Republic of Khakassia, the Chuvash Republic, and the Altay and Orenburg regions). If we assume that the stronger increase in fertility rates in 2012 was, to a certain extent, driven by new regional demographic policy measures, the regional differentiation of this impact should be assessed primarily by fertility rates at third and subsequent childbirths, since most regions grant regional maternity (family) capital at third and subsequent childbirths, and land plots for residential construction are granted to families with three or more children. The strongest total fertility rate gains for third and subsequent childbirths in 2012 (among regions for which birth order data are available for 2011 and 2012) were recorded in the Yamalo-Nenets Autonomous District. 149 Four other regions where maternity (family) capital also amounts to RUB 100,000 should be added to this list. However, they need special consideration. The Novgorod region also grants regional maternity (family) capital in the amount of RUB 100,000. However, there are two circumstances which notably improve its demographic efficiency. Firstly, as in the Voronezh and Pskov regions, under the Law "On Additional Measures of Social Support for Large Families Living in the Novgorod Region for 2011-2014" (Article 3), families become eligible for this capital at the third and each (rather than "or") subsequent childbirth (adoption). Secondly, according to the same article of the Law, the amount of the regional "Family" capital increases to RUB 200,000 provided that RUB 100,000 is allocated to improving housing conditions. The Rostov region grants RUB 100,000 of regional maternity (family) capital at third or subsequent childbirth (adoption) only to low-income families with average per capita income not exceeding the minimum subsistence level; the Tomsk region set this threshold at two minimum subsistence levels rather than one. RUB 100,000 of maternity (family) capital is also paid to large families in the Tambov region at childbirth, but only to families which have not received one-time payments to improve housing conditions, a one-time monetary payment to acquire housing, or a subsidy for loans raised to acquire construction materials and build a house. Maternity (family) capital in the Kursk region amounts to RUB 75,000.
The following 21 regions set this capital at RUB 50,000: the Republic of Adygeya, the Republic of Altay, the Republic of Kalmykia, the Republic of Mari El, the Republic of North Ossetia-Alania, the Republic of Tyva, and the Altay, Zabaikal, Arkhangelsk, Astrakhan, Belgorod, Bryansk, Vladimir, Ivanovo, Kaluga, Lipetsk, Ryazan, Tver, Tula, Chelyabinsk and Yaroslavl regions. The Republic of Altay and the Republic of Mari El provide regional (family) capital at fourth or subsequent childbirth, and the Republic of Tyva grants it at fifth and subsequent childbirths. Five other regions have maternity (family) capital below RUB 50,000: RUB 40,789 in the Volgograd region, RUB 30,000 in each of the Primorsk and Tyumen regions, and RUB 25,000 in each of the Kurgan and Nizhny Novgorod regions. Regions where the amount of regional maternity (family) capital differs by birth order require special consideration. The Ulyanovsk region provides regional maternity (family) capital of RUB 50,000 at second childbirth (adoption), RUB 100,000 at third, RUB 150,000 at fourth, RUB 200,000 at fifth, RUB 250,000 at sixth and RUB 700,000 at seventh and subsequent childbirth (adoption). The Kamchatka region grants regional maternity (family) capital at third or subsequent childbirth (adoption) as follows: RUB 100,000 at third childbirth (adoption), RUB 150,000 at fourth, RUB 200,000 at fifth and RUB 250,000 at sixth or subsequent childbirth (adoption). The Republic of Mordovia set regional maternity (family) capital at RUB 100,000 for third childbirth (adoption), RUB 120,000 for fourth and RUB 150,000 for fifth and subsequent childbirth. In the Kaliningrad region, the amount is RUB 100,000 for third or fourth childbirth (adoption) and RUB 200,000 for fifth or subsequent childbirth (adoption); however, this payment is only intended for families with average per capita income not exceeding 3.5 minimum subsistence levels. The Kirov region provides regional maternity (family) capital of RUB 75,000 at third childbirth (adoption), RUB 125,000 at fourth and RUB 200,000 at fifth and subsequent childbirth (adoption), granted in the form of a one-time payment. It also exists as a one-time payment in the Republic of Adygeya and the Republic of Mari El, in the Zabaikal, Arkhangelsk, Vologda, Ivanovo, Kaluga, Kirov, Kurgan, Lipetsk, Samara, Tyumen and Yaroslavl regions, and in the Chukotka Autonomous District. The Republic of Buryatia also grants maternity (family) capital as a one-time cash payment; in addition, it is specifically intended for acquiring housing (based on the price of 11 sq m per child). Almost all regions provide for the improvement of housing conditions and the education of a child (children) as possible ways of spending maternity (family) capital. The third way of spending federal maternity (family) capital (forming the funded part of a mother's labour pension) is significantly less common in the regions (the Republic of Mordovia and the Krasnoyarsk, Bryansk, Moscow, Novosibirsk, Omsk and Orenburg regions).
Apart from using maternity (family) capital to improve housing conditions, many Russian constituent entities provide for additional ways of spending these funds related to housing improvement, such as repair works (the Republic of Sakha (Yakutia) and the Perm, Belgorod, Vladimir, Kaliningrad, Magadan, Nizhny Novgorod, Ryazan, Samara and Ulyanovsk regions), gasification (the Perm, Vladimir, Leningrad and Nizhny Novgorod regions), engineering communications (the Ryazan region), and water supply, water disposal and heating equipment installation (the Novgorod region). The Leningrad region requires families with a confirmed need to improve their housing conditions to allocate maternity (family) capital to housing improvement. Maternity (family) capital may be used for the medical treatment (including health resort treatment) of a child (children) in the following regions: the Republic of Kalmykia, the Karachay-Cherkessia Republic, the Komi Republic, the Republic of Sakha (Yakutia) and the Republic of Khakassia; the Perm, Primorsk, Voronezh, Leningrad, Magadan, Nizhny Novgorod, Rostov, Saratov, Tomsk, Tula and Ulyanovsk regions; the Nenets Autonomous District and the Jewish Autonomous Region. The Kaliningrad, Samara, Sakhalin, Chelyabinsk and Khabarovsk regions, the Khanty-Mansiysk Autonomous District - Yugra and the Yamalo-Nenets Autonomous District permit the use of maternity (family) capital for the medical treatment of both the child and his/her parents. The Novgorod region also provides for spending maternity (family) capital on paid medical services; however, it is unclear whether this relates to the child (children) only or to the parents as well. The Stavropol, Orenburg and Samara regions and the Khanty-Mansiysk Autonomous District - Yugra provide for parents using maternity (family) capital to raise their own education level; the Kaliningrad, Leningrad (in the case of five or more children or a disabled child), Murmansk, Novosibirsk, Rostov, Samara and Krasnoyarsk regions and the Republic of Sakha (Yakutia) allow vehicle acquisition; the Kaliningrad and Murmansk regions allow acquiring durable goods; the Republic of Kalmykia and the Leningrad region provide for buying land plots; Saint Petersburg allows summer cottage construction; the Republic of Sakha (Yakutia) provides for the development of personal subsidiary farming; the Krasnoyarsk and Perm regions provide for supplying children with technical rehabilitation aids; the Samara region allows purchasing items required for baby care and development; and the Amur region provides for repayment of the principal amount and interest under consumer loans (except for fines, fees and penalties). Some regions allow receiving a one-time payment in the amount of part of the regional maternity (family) capital: the Komi Republic (RUB 25,000 annually), and the Krasnoyarsk (up to RUB 12,000 annually), Vladimir, Magadan (up to RUB 40,000 annually), Orenburg (RUB 10,000) and Saratov (25% of the capital amount, for consumer needs) regions.
[Table: Average increase in the total fertility rate at third and subsequent childbirths in 2012 vs 2011, by groups of regions with various amounts of regional maternity (family) capital.]
Where maternity (family) capital at third or subsequent childbirth is RUB 150,000, the increase in the fertility rate at third and subsequent childbirths significantly exceeds Russia's average level (0.059). In the Sakhalin region, maternity (family) capital is also equal to RUB 150,000.
It is provided at second or subsequent childbirths rather than at third or subsequent childbirths. In this regard, it is interesting to note that the increase in total fertility rate not only at third but also at second childbirths was much higher than Russia's average level (0.067 vs 0.049). The Novgorod region has a RUB 100,000 regional maternity (family) capital, as noted above; however, this amount is increased to RUB 200,000 if the capital is allocated to improving housing conditions. In addition, families who gave birth to (adopted) a third and each (rather than "or") subsequent child become eligible for this capital. Families are also encouraged not to delay childbirths, as the regional maternity (family) capital only covers childbirths in this region through the end of 2014. In 2012, the Novgorod region not only saw an increase in total fertility rate at third and subsequent childbirths that was higher than Russia's average level; for the first time in many decades, its total fertility rate for all childbirths also proved equal to Russia's average (formally, even slightly higher). Regional maternity (family) capital is RUB 100,000 in almost all the remaining regions which in 2012 saw notably higher increases in total fertility rate than Russia's average level. Since 2013, some regions have launched a monthly cash payment for a third and subsequent child under three years old in the amount of the minimum subsistence level. According to a sociological survey held in 2013 in the Kaluga, Novgorod and Perm regions, women who were pregnant or had third or subsequent childbirths in 2013 ranked the impact of this new measure, on average, higher than that of other measures (scored 2.57 out of 5). The women polled ranked regional maternity (family) capital second (scored 2.43). 154 This measure was also ranked the highest as a factor capable of influencing decision-making on a third childbirth over the next three to four years (scored 3.15 out of 5). The measure that ranked second was an opportunity to go to a kindergarten without any problems (scored 3.09); the measure that ranked third was an opportunity to obtain a land plot for residential construction (scored 3.02).

What family policy measures may be recommended based on analysis of regional experience

Based on the analysis performed, the following family policy measures may be recommended. It is necessary to provide loans and concessional loans for residential construction on the provided land plot for families with three or more children. This would help enhance the efficiency of this measure, since the main (in fact, possibly the only) negative aspect in the practice of providing families with three or more children with land plots for residential construction is that they lack the money to build a house on the plot. Furthermore, an opportunity should be considered whereby young families participating in housing support programmes could suspend bank payments during maternity leave after a second childbirth, and families could be granted an additional loan in the amount of the remaining debt under housing loans at a third childbirth. Consideration could also be given to providing families, after a second childbirth, with an opportunity to purchase housing at cost, and, after a third childbirth, to purchase housing at cost under an interest-free mortgage. In this regard, we would like to point to what Vladimir Putin said in his State-of-the-Nation Address on 12 December 2013.
"Today, housing construction must once again play a decisive part in encouraging population growth in Russia" 155 . Appendix 6. Differences between rural and urban areas In the 21st century, rural population saw stronger fertility growth rates in Russia than urban people. In 2012, total fertility rate for rural population was 0.661 higher than in 2000, and 0.452 higher than for urban population. Speaking of a relative increase in this indicator for this period, differences between rural and urban areas are next to nil, 42.5% and 41.5%, respectively. In 2012, total fertility rate in rural areas was back to the level ensuring common population replacement, at 2.215. An increase in total fertility rate has been stronger for rural population than for urban population only starting from 2007, i.e. during the period when additional measures of state support to families with children have been implemented. In 2006, as compared to the 2000 level, total fertility rate in rural areas rose 0.047 and in urban settlements it gained 0.121. The relative increase amounted to 3.0% and 11.1%, respectively. Stronger growth of birth rates in urban areas during this period has reduced the gap between total fertility rates for rural and urban population from 0.465 in 2000 to 0.391 in 2006. Since 2007, differences in trends of birth rates in urban and rural areas have changed significantly. As early as in 2007, i.e. during the first year, when measures of state support to families with children were launched, total fertility rate for rural population rose 0.197 (or 12.3%) and for urban population it gained 0.084 (or 6.9%). All in all, in 2007-2012, this indicator increased 0.614 (38.4%) in rural areas and 0.331 (27.4%) in urban areas. This suggests that the implemented measures to support families with children have a relatively stronger impact on rural birth rates than on urban birth rates. This is also underscored by a trend in birth order. Apart from a much stronger increase in fertility rates in recent years, rural population has two additional specific features as compared to urban population. Firstly, rural areas, unlike urban settlements, have had a steady increase in total fertility rate at first childbirth starting from 2008 (it was especially notable in 2011-2012). Secondly, after 2007, urban women continued to record older mothers at first childbirth (no increase in birth age in 2012 alone), while rural women have seen almost no increase in average age at second and third childbirth since 2009 (starting from 2010, the average mother age at second childbirth even slightly decreased). This could indirectly underscore that rural families more often than urban families are inclined to have shifts in their childbirth calendars as a result of implemented measures of state support to families with children. In the next few months, however, Russia risks facing a repetition of the 1990s' demographic problems once again -with a new wave of mortality increases and a new wave of fertility decline. Pressing economic issues are currently receiving much more attention from the Government; yet an effective anti-crisis strategy also requires paying attention to the seemingly "long-term" demographic problems. Several threats to recent demographic gains have arrived with the crisis. As inflation is rising, more of Russia's population is falling into poverty -and risks of impoverishment have traditionally been the highest for families with many children. 
Even with the existing social support, the proportion of households with children among households with income below the subsistence level is increasing. While in 2005 the ratio of poor households with and without children was 50/50, by 2013 it had skewed to 64/36. The share of large families among poor households has grown 2.8-fold over 10 years, reaching 9% of all poor households in 2013. In his 2012 pre-election article 'Building justice. Social policy for Russia', Vladimir Putin condemned and labeled unacceptable the situation when "childbirth brings a family to the brink of poverty. Our national goal for the next 3-4 years is to completely eliminate such a situation". This goal has not been fully achieved yet, and is further threatened by drastic budget cuts. As the resources available for families shrink, the recent upturn in fertility rates for second and third children may be reversed. Combined with the rapidly declining number of women of active reproductive age (20-29 years), this makes Russia almost certain to experience a precipitous decline in fertility. In addition, a dramatic increase in the availability of alcohol is looming, reminiscent of the late 1990s. In 1998 Russia experienced a very serious financial crisis accompanied by a jump in inflation (84%); however, the excise duty on spirits was increased only much more modestly, by 20%. As a result, within a single year the relative value of the excise duty fell by one-third, leading to a dramatic cheapening of vodka and other spirits. Throughout the early 2000s this fall remained uncompensated, and increases in vodka excise taxes frequently lagged behind the inflation rate. This caused an enormous increase in mortality in 1998-2005, when Russia "additionally" lost about two million lives. Today the recurrence of a mortality jump due to various initiatives on liberalizing the alcohol market is, unfortunately, a highly probable scenario. The Government has cancelled an earlier-planned increase in the spirits excise tax, which, given the high and rising rate of inflation, actually means its remarkable decline. The minimum price of vodka has been significantly reduced since February 1. Beer is supposed to return to sidewalk kiosks, the bans on alcohol advertising in mass media and on alcohol sales overnight are to be virtually lifted, etc. As a result, Russia may face a new round of population decline after all the recent claims of demographic victories. Even more sadly, this decline will probably be written off as a consequence of economic difficulties, while in reality a new wave of depopulation could be averted, or at least substantially mitigated, by carefully designed and well-targeted social policy interventions (many of which are purely legislative and would not put any additional strain upon the budget). A new series of calculations performed by a team of researchers from the Russian Presidential Academy of National Economy and Public Administration (RANEPA), the National Research University Higher School of Economics, the Russian Academy of Sciences, and Moscow State University demonstrates that "alcohol liberalization" coupled with the absence of a new set of effective family policies may provoke a new demographic collapse with catastrophic consequences. In order to avert this disastrous scenario, appropriate measures must be taken immediately.
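The "one-third" figure cited above for 1998 is simple real-terms arithmetic; written out with the numbers from the text (84% inflation against a 20% excise increase):

\[
\frac{1 + 0.20}{1 + 0.84} \approx 0.65,
\]

i.e. within a single year the excise duty retained only about 65% of its real value, a fall of roughly one-third.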
A7.1. The demographic situation in early 2015

The results presented below are based on a new series of forecast estimates made in early 2015 on the basis of the most recent data on mortality and fertility, applying the same method that was used for the mathematical modeling of scenarios in the main text of the Report. 157 Fig. A7.1 presents our population projections for Russia up to 2050 based on the inertial forecast scenario, i.e. with fertility and mortality rates held constant at their 2012 values, and with a stable migration inflow of 300 thousand annually (the average rate of immigration into Russia according to the results of the National Population Census 2010). If the current rates of fertility, mortality and migration remain unchanged, the Russian population is bound to decrease to 135-136 million by 2040 and to less than 130 million by 2050. At first the population decline will be relatively slow, but it will speed up after 2025, as more women of the 1990s' "demographic collapse" generation enter childbearing ages (Fig. A7.1). The inertial scenario looks even grimmer when extrapolated up to 2100 (Fig. A7.2).

Fig. A7.2. Population projection for Russia up to 2100 based on the inertial forecast scenario, millions

However, the picture is still not as bad as in our first inertial forecast scenario, which we calculated in 2009 on the basis of the mortality and fertility rates of the mid-2000s. 158 Indeed, according to that inertial forecast Russia's population was to plunge to 111.2 million by 2040 and to 99.5 million by 2050 (Fig. A7.3). Thus, the latest inertial forecast projects Russia's population to be 24.5 million higher in 2040 and 29.7 million higher in 2050 as compared to the first inertial forecast scenario. This higher […] pancreatitis, 167 and 61% of deaths from all external causes, including 67% of murders and 50% of suicides, 168 were associated with alcohol. A large proportion of deaths from pneumonia and tuberculosis are also alcohol-related, 169 because alcohol abusers are more likely to contract infectious diseases and less likely to get proper treatment. In 1998-1999, in the city of Izhevsk, 62% of males who died between the ages of 20 and 55 had a high blood alcohol content. 170 According to a large study conducted in the city of Barnaul in 1990-2004, 68% of men and 61% of women who died at the age of 15-34, as well as 60% of men and 53% of women who died at 35-69, had a high blood alcohol content. 171 It is noteworthy that the mortality decrease in Russia after 2005 is very similar in its structure to the decline during Gorbachev's anti-alcohol campaign of the 1980s. 172 In general, research demonstrates an extremely close relationship between the production of ethyl alcohol from crops and mortality in Russia. A significant increase in the production (and consumption) of alcohol leads to an immediate, significant increase in mortality, and vice versa (Fig. A7.4 and A7.5). Let us provide some statistical characteristics of the correlation depicted in the last graph. Routinely, the Pearson correlation coefficient (r) is used as a standard measure of the strength of a correlation. In this case its value is greater than 0.9, which means that we are dealing with an extremely strong relationship. It is useful to square 0.9 in order to understand how close the relation is in this case. The square of 0.9 is 0.81 (i.e. 81%), which is the coefficient of determination (R²).
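To make this computation concrete, here is a minimal sketch with made-up stand-in series (the real series are those plotted in Fig. A7.5):

```python
import numpy as np

# Hypothetical stand-in series: per-capita ethyl alcohol production (litres)
# and crude mortality (deaths per 1,000) over ten years; the actual figures
# are those behind Fig. A7.5.
alcohol   = np.array([8.1, 8.7, 9.4, 9.9, 9.5, 8.8, 8.2, 7.6, 7.1, 6.8])
mortality = np.array([13.6, 14.2, 15.0, 15.9, 15.4, 14.5, 14.1, 13.5, 13.1, 13.0])

r = np.corrcoef(alcohol, mortality)[0, 1]  # Pearson correlation coefficient
r2 = r ** 2                                # coefficient of determination R^2
print(f"r = {r:.3f}, R^2 = {r2:.3f}")      # r > 0.9 implies R^2 > 0.81
```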
In fact, the value of R² suggests that Russian mortality dynamics in recent years were predominantly determined by the alcohol factor. We thus have reason to maintain that the record mortality decline observed in Russia after 2005 was more than 80% determined by the reduction in alcohol consumption, i.e. by the effect of the measures aimed at restricting the availability of alcohol. In other words, we have strong grounds to believe that Russia's impressive success in reducing mortality after 2005 was achieved mainly due to the state policy of limiting alcohol consumption. These policies were implemented in line with the complex evidence-based anti-alcohol measures recommended by the World Health Organization, including higher prices and excise taxes on alcoholic beverages, as well as limitation of the spatial and temporal availability of alcohol. In addition, significant progress was achieved in reducing the consumption of illegal alcohol, marked by the dramatic reduction of alcohol poisonings, including lethal ones. Yet Russia may lose all these achievements in the near future if measures are not taken to prevent the looming threats engendered by the initiatives of the alcohol lobby. Hundreds of thousands of "additional" deaths may follow, especially among working-age males, if a return to the days of easy access to alcohol is not averted. Unfortunately, similar reversals have already occurred in recent Russian history: after some growth, fertility would collapse even below its pre-growth level, while a significant mortality reduction would be followed by a catastrophic upsurge (Fig. A7.6). The current situation bears a striking resemblance to the late 1990s. In the midst of an acute financial and economic crisis, the priority of demographic issues declines in favor of solving more immediately pressing financial and economic problems. Meanwhile, measures are adopted that have the effect of dramatically increasing the availability of alcohol. The situation is similar to 1998, when Russia experienced a financial crisis accompanied by a jump in inflation (84%) while the excise duty on spirits was increased much more modestly, by 20%. As a result, within a single year the relative value of the excise duty fell by one-third. In 2000 the excise tax was increased slightly above the rate of inflation; during the next several years, its annual increase hovered around the inflation rate or slightly below it, so the huge fall of 1998 was left uncompensated. This fall of the excise tax on vodka, together with the rising income and purchasing power of the population, caused a huge increase in alcohol consumption (and, hence, mortality) in 1998-2005, leading to the loss of more than a million lives in Russia. 175 By contrast, the 2008-2009 economic crisis was not accompanied in Russia by any mortality increase, as it occurred against the background of a strict anti-alcohol policy. Notably, the acute crisis of the early 1990s led to a catastrophic increase in mortality only in the post-Soviet countries where a sharp increase in alcohol consumption was observed (accompanied by all kinds of negative social phenomena, such as homicide, suicide, abandoned children, etc.), while in the countries where alcohol consumption remained flat, mortality did not increase (nor did the number of murders, suicides, abandoned children, etc.). 176
The current financial and economic crisis is occurring at a time when a set of measures aimed at increasing the availability of alcohol has been planned or already taken, so hundreds of thousands of lives are now under very serious threat. These measures include:

1. Freezing and actual reduction of excise taxes on alcoholic beverages. According to a recently passed law on changes in excise rates, 177 actual vodka prices are to be lowered in the next two years instead of the formerly planned increase. According to the previous version of the Tax Code, excise taxes were to be increased from 500 to 600 roubles per liter of anhydrous ethanol, with the increase coming into force on January 1, 2015. However, a law passed in November 2014 annulled this planned increase and kept the excise tax at the previous level. With rocketing inflation, this means a substantial reduction of the actual excise tax. We should note here that the increase in excise duties on spirits in previous years led to a significant reduction in mortality, on the one hand, and to a simultaneous increase in budget revenues, on the other (see Fig. A7.7; deaths from alcohol poisoning, per 100,000). The prospects for raising excise taxes on alcohol are further threatened by a draft "Agreement on the Principles of tax policy in the field of excise duties on alcohol and tobacco products of the Eurasian Economic Union". This draft was designed to slow down the increase of excise taxes on tobacco products, but it has also already led to a decrease in excise taxes on alcoholic beverages in Russia.

2. Reduction of the minimum retail price (MRP) of vodka. For the first time in its whole history, the MRP was decreased, not increased: the price of a 0.5-liter bottle of 40% vodka dropped from 220 rubles to 185 rubles (thus getting 16% cheaper).

3. Russia's capacity to implement an independent anti-alcohol policy is being undermined. This threat arises from the draft agreement "On regulation of the alcohol market in the framework of the Eurasian Economic Union", which implies an actual loss of Russia's sovereignty in issues related to alcohol policy regulation. It would lead to the "harmonization" of liquor prices with Belarus and Kazakhstan (where they are much lower) and, hence, to their further significant reduction and, consequently, to further growth of alcohol availability and mortality in Russia.

4. Alcohol 'liberalization' in Russian regions. Regional authorities now frequently try to sell alcohol for the longest possible hours under the pretext of combatting illegal sales. For example, last December the Moscow Region Duma passed an amendment to the law limiting the hours of retail alcohol sales, expanding them from the previous 11:00-21:00 to 08:00-23:00.

5. Lifting spatial restrictions on alcohol sales. Rosalkogolregulirovanie has put forward a draft law which permits the sale of alcohol in some educational, medical and cultural institutions. The bill is already undergoing inter-ministry coordination in the Government.

6. Lifting the ban on remote sales of alcoholic beverages. The Government is discussing lifting the ban on remote sales of alcohol, which would dramatically increase its spatial availability and may lead to mass violations in terms of alcohol sales to minors, as well as illegal alcohol sales in general.

7. Returning beer to kiosks. The Federal Antimonopoly Service (FAS) has proposed to lift the ban on selling beer in street stalls. The Ministry of Industry and Trade has created a working group to consider this proposal.
Meanwhile, the prohibition of street beer sales played a key role in the recent reduction of alcohol consumption by Russian teenagers. The implementation of the FAS initiatives would lead to a new wave of alcohol availability to Russian youth.

8. Legalization of alcohol advertising on television. The State Duma of the Russian Federation has passed laws allowing beer advertising on TV (including sports channels) and advertising of wine after 23:00, despite the fact that alcohol advertising is one of the most effective ways to accustom youths and adolescents to alcohol consumption.

PROJECTED EFFECTS OF STATE ALCOHOL POLICY RELAXATION

The calculations carried out by an expert group of the Russian Presidential Academy of National Economy and Public Administration (RANEPA), the National Research University Higher School of Economics (HSE), the Russian Academy of Sciences, and Moscow State University have shown that the forthcoming full-scale relaxation of the state anti-alcohol policy may lead to a total of 5.5 million additional deaths by 2030 (see Fig. A7.8 and Table A7.1). The number of working-age males will be particularly affected (Fig. A7.9). Thus, the changes in legislation proposed by the alcohol lobby may lead to a significant increase in alcohol consumption and thus to an increase in alcohol-related mortality, morbidity and social problems. Such consequences are extremely likely to seriously undermine Russian progress towards the goals set forth in Presidential Decree #606 of May 7, 2012 "On measures for implementation of demographic policy of the Russian Federation", particularly as regards reaching the target value of 74 years of total life expectancy by 2018. Moreover, their overall demographic consequences for our country may be disastrous, so urgent measures must be taken to avert the upsurge of population loss.

A7.3. How to prevent a demographic catastrophe

Even if the pending "pro-alcohol" legislative initiatives are simply blocked, life expectancy will not go beyond the current value of 71 years. A simple preservation of the state anti-alcohol policy will not in itself suffice to increase Russian life expectancy to 74. For this, we need additional temporal, spatial and economic limitations on the availability of alcohol. The price availability of alcohol must be seriously curbed. It would no longer suffice to return to the initially planned (starting from January 1, 2015) increase of the excise tax on spirits from 500 to 600 rubles, 178 which was derailed by the alcohol lobby. Due to the dramatic inflation jump, the new law should raise the excise not to 600 rubles but to at least 650 rubles. The ban on the sales of alcohol between 11 p.m. and 8 a.m. should be extended to the bigger time interval between 8 p.m. and 11 a.m. Banning morning alcohol sales has proved highly effective in the Nordic countries, as it blocks the opportunity to have a morning drink after a hangover (which may often lead to prolonged drinking bouts). Sales of alcoholic beverages stronger than 15% should be prohibited in department stores unless separated from other departments with a special entrance. This cuts down on spontaneous purchases: "once entering a shop to buy some bread, one is provoked to purchase some alcohol by seeing it exposed on the shelves". 179 We should not exclude the possibility of returning to a state monopoly on retail sales of strong drinks in Russia.
This measure has proven to be a very effective tool for reducing alcohol problems and mortality in Sweden, Iceland, Norway, Finland, Canada, etc. In the USA, 19 states also have some form of monopoly on the sale of liquor. In these states alcohol consumption among those aged 14-18 is 14.5% lower, and the frequency of alcohol abuse in this age group (intake of more than 70 g of ethanol at one time) is 16.7% lower, than in the states without such a monopoly. The alcohol-impaired driving death rate under age 21 is 9.3% lower in the monopoly states than in the non-monopoly states. 180 In the Scandinavian countries such a monopoly allows the sale of alcoholic beverages (usually stronger than 4.7-5%) only in state stores (except for bar service). In addition, such a monopoly helps to fill the state budget: the monopoly countries enjoy higher revenue from the sale of alcoholic beverages than non-monopoly countries at the same level of economic development. 181 A major advantage of a state monopoly on the retail sale of alcoholic beverages is that it minimizes the private interest in maximizing alcohol sales, which in this area often confronts the public interest. An employee of a state-owned store has no interest in selling alcohol to minors, because his salary does not depend on the store's revenue, while the owner of a private shop may capitalize on it. 182 International experience shows that to maximize health and longevity, national alcohol policy should be regulated by the social branch of the Government, as is done in the Scandinavian countries, not by the economic branch. The Ministry of Health, the Russian Federal Service for Surveillance on Consumer Rights Protection and Human Wellbeing (Rospotrebnadzor) and the Federal Service for Regulation of the Alcohol Market (Rosalkogolregulirovanie) must take control over this policy and fight the alcohol black market.

THE WORST-CASE (PESSIMISTIC) SCENARIO

However, it is obvious that the alcohol-pessimistic scenario is by no means the worst possible case. The worst ("pessimistic", "pessimal") demographic scenario will become reality only if a radical surge in mortality coincides with an avalanche-like collapse in fertility. Unfortunately, this scenario is not entirely improbable. First, a certain decline in crude birth rates is virtually inevitable in the forthcoming decade due to the reduction in the number of women aged 20-29, who account for more than 60% of all births in Russia. This is dictated by Russia's age structure and the very small cohorts born in the 1990s who are now entering their prime childbearing years. Second, most respondents explain their reluctance to have more children by referring to material difficulties and uncertainty about the future. 183 Rising insecurity almost inevitably leads to a decrease in birth rates; this is particularly true of financial and economic crises (Fig. A7.10).

Fig. A7.10. Birth rate slump in the United States during the Great Depression (1929-1933) 184

The stimulating role of the maternity capital policy in boosting fertility is bound to decrease, as 97% of families used to spend its benefits on improving their living conditions, which will become much harder during the current economic crisis. Strong measures are required to prevent a severe birthrate collapse. The financial and economic crisis of 2008-2009 in Russia did not depress the country's birthrate thanks to a set of strong and effective family policy measures launched before and during the crisis.
The crises of the late 1980s to early 1990s and of the late 1990s were accompanied by a decline in fertility because no such measures were taken. For example, on the eve of the 1998 crisis fertility was already very low (1.24 children per woman), but during the crisis it dropped to an unprecedented level of 1.17 children per woman. In the late 1980s, as the starting point of fertility was already fairly high, 185 the decline in response to the economic distress of the early 1990s was much steeper. In fact, it collapsed so deeply that the consequences of the "demographic hole of the 1990s" are still present (see the main text of the report above). Most likely, some decline in Russia's birth rate in 2015 is inevitable. The positive trend of recent years could have been maintained only if proper measures had been introduced in 2014. For example, there were about 100 thousand additional newborns in 2012 due to the policies of free distribution of land and allowances for the third child. If the maternity capital program is cancelled after 2016 (followed by cuts in other family support programs), the results will definitely be demographically catastrophic. The 'most pessimistic' scenario presents the population projections for a situation in which a victory of the alcohol and tobacco lobby is combined with cuts in the family support programs, leading to a retreat to the worse mortality and fertility values of the mid-2000s. The results of the calculation of this scenario are as follows (Fig. A7.11).

Fig. A7.11. Pessimistic and inertial scenarios of the Russian population dynamics for the period till 2100, millions

Thus, if no strategic priority is given to socio-demographic policy, this may well lead to the end of Russia's geopolitical career by the end of the century.

POSSIBLE DEMOGRAPHIC EFFECT OF A FULL-SCALE FAMILY POLICY CONSUMING NOT LESS THAN 3% OF GDP

It is also possible to model the effect of developing a high-priority demographic policy structure that would aim to reach West European levels of fertility, closer to the replacement rate of two children per woman. This effect was modeled as a smooth 10-year transition of Russia's age-specific fertility rates, reaching by 2020 the 2012 level of France (corresponding to TFR = 2.0), while preserving Russia's age-specific mortality at the level of 2012. According to international studies and best practice, the most effective measures to improve fertility include a combination of allowances, tax benefits, and programs and legislation supporting parents in combining parenting and employment, including access to kindergartens, nurseries, nannies and flexible schedules for employees with family responsibilities. During a crisis, measures stimulating the economic activity of parents may be more effective in boosting fertility than cash transfers. An effective system of care for children is also one of the most effective policy measures to support the birth rate. Of all the types of expenditures in the OECD, the costs of services for child care (namely kindergartens, nursery nurses and related payments) correlate best with the level of fertility. It is extremely important for the child care system to develop a network of services for the care of the youngest children (under 3 years). Comparative analysis shows that all of the most demographically successful countries in Europe have built a wide-covering system of free or subsidized services for the care of children under 3 years old.
Russia does not have enough kindergartens, and the youngest children are not a priority group: only 58% of Russian children under 6 had access to preschool education facilities, in contrast with 90% in France. A set of housing support measures, such as subsidized rental housing for young and large families, development of housing and savings cooperatives, as well as substantial subsidies of mortgage rates for families with children, may also improve fertility. The corresponding "high demographic priority" forecast of the population of the Russian Federation (as compared to the inertial scenario) is shown in Fig. А7.12. As we can see, measures to support birth rates can produce a significant long-term demographic effect (especially if the growth of mortality in our country can be prevented), but these measures alone are insufficient to prevent Russian depopulation, due to Russia's still-high levels of mortality.

POTENTIAL EFFECT OF THE ANTI-ALCOHOL POLICY

If a full-scale alcohol control policy is consistently implemented in Russia, our calculations demonstrate that such a deliberate anti-alcohol policy still has an immense demographic potential and will have a very significant long-term demographic impact (see Fig. A7.13, inertial vs alcohol-pessimistic scenarios). These estimates demonstrate the enormous demographic potential of the standard alcohol control measures recommended by the World Health Organization for the future of our country. 186 Implementation of these affordable and even profitable measures (such as increasing excise duties on spirits or the introduction of a state monopoly on retail sales of alcohol) may save up to 19 million lives by 2040. 187 Thus, in the short and medium term the alcohol control policy may have an even greater demographic impact than the policy of supporting the birth rate (though in the long run a fertility support policy is significantly more effective).

THE FORECAST EFFECT OF COMPLETE ELIMINATION OF RUSSIAN EXCESS MORTALITY

Total elimination of Russian excess mortality would have an especially significant long-term demographic effect. Such results may be achieved through policies including anti-alcohol and anti-tobacco measures, as well as a radical improvement of the Russian health care system, with the financial allocation for health care increased to at least 10% of GDP. This effect was modeled as a smooth 10-year transition of the age-specific mortality rates in Russia to the corresponding values of Norway in 2009 (this scenario does not imply that by 2020 Russia will overtake Norway; it only assumes that Russia will converge towards Norway, reaching by 2020 the Norwegian level of 2009, so the scenario is not excessively optimistic). 188

Fig. A7.14. Scenario of complete elimination of Russian excess mortality in comparison with the inertial and pessimistic scenarios of the Russian population dynamics, till 2040, millions

As we can see, the complete elimination of Russia's excess mortality may produce a more significant effect in the short and medium term than fertility support. Nonetheless, because of the small birth cohorts of the 1990s, whose effect will be magnified over time if they too give birth to small cohorts (i.e. have low fertility), the elimination of Russia's high mortality cannot, by itself, prevent an eventual return to population decline. If excess mortality is eliminated but fertility is preserved as it was in 2012, the Russian population will keep growing only until the mid-2030s.
It would then start shrinking in the late 2030s, and this decline would accelerate thereafter.

THE COMBINATION OF MEASURES THAT CAN PREVENT DEPOPULATION: THE 'MOST OPTIMISTIC' SCENARIO

Only the combination of an effective fertility support system and the elimination of Russia's excessively high mortality ("the best-case scenario") may fully avert the looming threat of depopulation. It is worth noting that even under the optimum scenario the effects of the demographic hole of the 1990s will be felt in the 2040s, as the small generation of children born to mothers themselves born in the 1990s reaches reproductive age. Nevertheless, in the most optimistic scenario future population decline would be averted, and Russia's population would eventually stabilize at a level slightly higher than today's (Fig. A7.15), with the elimination of excess mortality and with continued improvement in fertility toward full replacement rates (i.e. fertility of 2.0 or higher). For this to occur, current attitudes must change. Today the availability of alcohol is increasing instead of being curbed, and at the same time the country is facing a new crisis while no new measures of stronger fertility support are expected.

A DEMOGRAPHIC MANEUVER: ADDITIONAL REVENUES FROM ALCOHOL AND TOBACCO CAN STIMULATE THE REDUCTION IN MORTALITY AND THE GROWTH OF FERTILITY

There is a demographic maneuver that can be undertaken to reduce mortality and stimulate fertility, and at the same time reduce smoking and alcohol consumption, save 300-400 thousand lives a year and ensure the growth of budget revenues. An increase in excise duties, by itself an unpopular measure, should be linked with measures to support families with children. It is recommended to create a Trust Fund, funded by higher excise taxes on alcohol and tobacco, to support family and health. The Fund should provide financing for the following areas:
- securing the opportunity for families to purchase housing with mortgage loans at a 5% interest rate after the 2nd birth (through the Agency for Mortgage Crediting [АИЖК]);
- securing the opportunity for families to purchase housing with mortgage loans at a zero interest rate after the 3rd birth (through the Agency for Mortgage Crediting [АИЖК]);
- ensuring 100% availability of pre-school education and childcare for children from 1 to 7;
- co-financing of regional programs for prevention and reduction of cardiovascular disease in areas with a high mortality rate among the working-age population;
- co-financing of regional programs of housing rent subsidies for families with children;
- additional social support for families with children in regions with unfavorable demographic situations.
During an economic crisis, the Fund for Family and Health Support could ensure the implementation of additional measures of supportive demographic policy and contribute to sustainable population growth after the crisis. There are no 'magic bullets' that would easily solve Russia's demographic problems, which are the result of decades of economic ups and downs and shifts in policy. However, establishing a Trust Fund for family support and national health, increasing taxes on alcohol and tobacco, and using those funds for pro-fertility programs is a policy that would achieve several goals at once without imposing additional costs on the current budget. It would also focus attention on long-term planning to resolve the problems that threaten Russia's demographic future.
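For readers who want to reproduce the flavor of the scenario calculations in this appendix, the sketch below shows a heavily simplified cohort-component projection with hypothetical rates: age-specific fertility glides linearly over ten years from a "current" schedule to a "target" schedule, the device used above for the France-2012 fertility and Norway-2009 mortality scenarios, with survival held constant and migration ignored. The real calculations use full single-year life tables, both sexes, and explicit migration assumptions.

```python
import numpy as np

# Heavily simplified cohort-component sketch (hypothetical numbers): women only,
# 5-year age groups 0-4 ... 100+, no migration. Age-specific fertility rates
# (ASFR) glide linearly over a 10-year transition from the current schedule to
# a target schedule, mortality held constant.
AGES, STEP = 21, 5                      # 21 five-year groups; 5-year time step

pop  = np.full(AGES, 500.0)             # hypothetical female population, thousands
surv = np.linspace(0.995, 0.70, AGES)   # hypothetical 5-year survival probabilities

asfr_now = np.array([0.03, 0.09, 0.10, 0.06, 0.03, 0.01, 0.002])  # ages 15-49, per year
asfr_tgt = asfr_now * 1.4               # stand-in for the higher target schedule

def project(pop, years=40, transition=10):
    for t in range(0, years, STEP):
        w = min(1.0, t / transition)                      # linear glide to target
        asfr = (1 - w) * asfr_now + w * asfr_tgt
        girls = STEP * np.sum(asfr * pop[3:10]) * 0.488   # female births over the step
        pop = np.concatenate(([girls * surv[0]],          # new 0-4 group
                              pop[:-1] * surv[:-1]))      # everyone else ages 5 years
    return pop

print(f"Projected female total after 40 years: {project(pop).sum():.0f} thousand")
```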
As the earlier sections of the Report have shown, Russia enjoyed great success with its policies to promote fertility and reduce mortality in the last seven years. However, it would be a foolish and costly mistake to believe those successes had 'solved' Russia's long-term demographic problems. Quite the reverse; they were only a promising 'down payment' on the policies needed to truly put Russia's long-term demographic future on a secure course. Without continuing and expanding the present policies, that future security will dissolve. Worse yet, the policies currently being considered to boost access to alcohol will almost certainly reverse recent progress and set Russia back upon a path of inevitable demographic decay.
Thermal emission from Isolated Neutron Stars and their surface magnetic field: going quadrupolar?

In the last few years considerable observational resources have been devoted to studying the thermal emission from isolated neutron stars. Detailed XMM and Chandra observations have revealed a number of features in the X-ray pulse profile, like asymmetry, energy dependence, and possible evolution of the pulse profile over a time scale of months or years. Here we show that these characteristics may be explained by a patchy surface temperature distribution, which is expected if the magnetic field has a complex structure in which higher order multipoles contribute together with the dipole. We reconsider these effects from a theoretical point of view, and discuss their implications for the observational properties of thermally emitting neutron stars.

Introduction

The seven X-ray dim isolated neutron stars (XDINSs) discovered so far (see e.g. Treves et al. 2000, Motch 2001 for a review) offer an unprecedented opportunity to confront present models of neutron star (NS) surface emission with observations. These objects play a key role in compact object astrophysics, being the only sources in which we can have a clean view of the compact star surface. In particular, when pulsations and/or long term variations are detected, we can study the shape and evolution of the pulse profile of the thermal emission and obtain information about the thermal and magnetic map of the star surface. So far, X-ray pulsations have been detected in four XDINSs, with periods in the range 3-11 s. In each of the four cases the pulsed fraction is relatively large (~12%-35%) and, counter-intuitively, the softness ratio is maximum at the pulse maximum (Cropper et al. 2001; Haberl et al. 2003). Spectral lines have been detected in the soft X-rays, and the line parameters may change with spin phase. In addition, the X-ray light curves often appear to be slightly asymmetric, and a gradual, long term evolution in both the X-ray spectrum and the pulse profile of the second most luminous source, RX J0720.4-3125, has recently been discovered (De Vries et al. 2004). All these new findings represent a challenge for conventional atmospheric models: the properties of the observed pulse profiles (large pulsed fraction, skewness, and possibly time variations) cannot be explained by assuming that the thermal emission originates at the NS surface if the thermal distribution is induced by a simple core-centered dipolar magnetic field. On the other hand, it has been realized long ago that at least two effects can contribute to achieving a relatively large pulsed fraction (up to 20%): 1) radiative beaming (Pavlov et al. 1994) and 2) the presence of quadrupolar components in the magnetic field (Page and Sarmiento 1996). Here we present magnetized atmospheric models computed assuming a quadrupolar magnetic field geometry and show how they can account for some of the general characteristics of the observed X-ray lightcurves (see Zane and Turolla 2004, in preparation, for further details).

2 Getting to grips with the neutron star surface

Computing the light curve

In order to compute the phase-dependent spectrum emitted by a cooling NS as seen by a distant observer, we basically perform three steps. First, we assume that the star magnetic field possesses a core-centered dipole+quadrupole topology, B = B_dip + B_quad. The polar components of B_dip and of the five generating vectors B_quad^(i) are reported in Page and Sarmiento (1996).
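As an illustration of this first step, here is a minimal sketch of the field evaluation on a surface mesh, for the dipole term only (the five quadrupolar generating vectors, tabulated in Page and Sarmiento 1996, are omitted); the grid and default strength are illustrative:

```python
import numpy as np

def dipole_surface_field(theta, B_p=6.0e12):
    """Polar components (B_r, B_theta) of a core-centered dipole with polar
    surface strength B_p (gauss) at magnetic colatitude theta (radians)."""
    B_r  = B_p * np.cos(theta)          # radial component
    B_th = 0.5 * B_p * np.sin(theta)    # tangential component
    return B_r, B_th

# (theta, phi) mesh over the surface; phi matters once the quadrupolar
# components, which break axisymmetry, are added to B = B_dip + B_quad.
theta = np.linspace(0.0, np.pi, 91)
B_r, B_th = dipole_surface_field(theta)
B_mag = np.hypot(B_r, B_th)             # local field strength |B|
cos_alpha = B_r / B_mag                 # B·n/|B|: angle alpha between the field
                                        # and the local radial direction, which
                                        # enters the temperature map described next
```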
The NS surface temperature distribution is then computed using the simple expression T_s = T_p |cos α|^{1/2}, where T_p is the polar temperature and α is the angle between the field and the radial direction, cos α = B·n. Second, we compute the local spectrum emitted by each patch of the star surface by using fully ionized, magnetized hydrogen atmosphere models. 1 Besides surface gravity, this depends on both the surface temperature T_s and the magnetic field, both its strength and its orientation with respect to the local normal. We introduce a (θ, φ) mesh which divides the star surface into a given number of patches. The atmospheric structure and radiative transfer are then computed locally by approximating each atmospheric patch with a plane parallel slab. [Footnote 1: We caveat that partial ionization effects are not included in our work. Bound atoms and molecules can affect the results, changing the radiation properties at relatively low T and high B (Potekhin et al. 2004).] Third, we collect the contributions of the surface elements which are "in view" at different rotation phases (see Pechenick et al. 1983; Lloyd et al. 2003). We take the neutron star to be spherical (mass M, radius R) and rotating with constant angular velocity ω = 2π/P, where P is the period. Since XDINSs are slow rotators (P ≈ 10 s), we can describe the space-time outside the NS in terms of the Schwarzschild geometry. Under these assumptions, for a fixed polar temperature, dipolar field strength and surface gravity M/R, the computed light curve depends on seven parameters: b_i ≡ q_i/B_dip (i = 0, ..., 4), the angle χ between the line of sight and the spin axis, and the angle ξ between the magnetic dipole and the spin axis.

Studying light curves as a population and fitting the observed pulse shapes

Given this multidimensional dependence, an obvious question is whether or not we can identify some combinations of the independent parameters that are associated with particular features observed in the pulse shape. Particularly promising for a quantitative classification is a tool called principal components analysis (PCA), which is concerned with finding the minimum number of linearly independent variables z_p (called the principal components, PCs) needed to recreate a data set. In order to address this issue, we divided the phase interval (0 ≤ γ ≤ 1) into 32 equally spaced bins, and we computed a population of 78000 light curves by varying χ, ξ and b_i (i = 0, ..., 4). By applying a PCA, we found that each light curve can be reproduced by using only the ~20 most significant PCs and that the first four (three) PCs account for 85% (72%) of the total variance. However, due to the strong non-linearity, we have so far found it difficult to relate the PCs to physical variables. Nevertheless, the PCA can be regarded as a useful method to provide a link between the various light curves, since models "close" to each other in PC space have similar characteristics. From the PCA we obtain the matrix C_ij such that z_i ≡ C_ij y_j, where y_j is the observed X-ray intensity at phase γ_j and z_i is the i-th PC. Therefore, we can compute the PCs corresponding to each observed light curve and search the model population for the nearest solution in PC space (see Fig. 1, left). This in turn provides a good trial solution, which can be used as a starting point for a numerical fit.
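A minimal sketch of this PCA-based trial-solution search, with a random stand-in population in place of the 78000 atmosphere-model light curves:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_bins = 5000, 32     # 32 equally spaced phase bins, as in the text

# Stand-in model population: smooth pulse profiles with random amplitudes and
# phases (the real population is generated from the atmosphere models).
gamma = np.linspace(0.0, 1.0, n_bins, endpoint=False)
ph = rng.uniform(0, 1, (n_models, 1))
a1 = rng.uniform(0.0, 0.3, (n_models, 1))
a2 = rng.uniform(0.0, 0.1, (n_models, 1))
Y = 1.0 + a1 * np.cos(2 * np.pi * (gamma - ph)) + a2 * np.cos(4 * np.pi * (gamma - ph))

# PCA via SVD of the mean-subtracted population; the rows of C are the PCs.
mean = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
explained = np.cumsum(S**2) / np.sum(S**2)
n_pc = int(np.searchsorted(explained, 0.85)) + 1   # PCs reaching 85% of variance
C = Vt[:n_pc]

Z = (Y - mean) @ C.T            # every model in PC space: z_i = C_ij y_j

# Project an "observed" light curve and take the nearest model as trial solution.
y_obs = 1.0 + 0.2 * np.cos(2 * np.pi * (gamma - 0.3)) + 0.01 * rng.normal(size=n_bins)
z_obs = C @ (y_obs - mean)
trial = int(np.argmin(np.sum((Z - z_obs) ** 2, axis=1)))
print(f"{n_pc} PCs reach 85% of the variance; trial model index: {trial}")
```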
The quadrupolar components and viewing angles are treated as free parameters, while the polar values of T_p and B_dip are fixed and must fall in the domain spanned by our atmosphere model archive. 2 Our preliminary results are illustrated in Figs. 1, 2 and 3, and refer to B_dip = 6 × 10^12 G and log T_p (K) = 6.1-6.2.

Summary of results

As we can see from Figs. 1, 2 and 3, the broad characteristics of all the XDINS light curves observed so far are reproduced when allowing for a combination of quadrupolar magnetic field components and viewing angles. However, although in all cases a fit exists, we find that in general it may not be unique. This model does not have a "predictive" capacity in providing the exact values of the magnetic field components and viewing angles: this is why we do not fit for all the parameters in a proper sense and we do not derive parameter uncertainties or confidence levels. The goal is instead to show that there exists at least one (and probably more than one) combination of parameters that can explain the observed pulse shapes, while this is not possible assuming a pure dipole configuration. In the case of RX J0720.4-3125, preliminary results show that the pulse variation observed between rev. 78 and rev. 711 cannot be explained by a change in viewing angle only (as would be the case if NS precession is invoked; de Vries et al. 2004) or by a change in magnetic field only. Instead, a change in all quantities (quadrupolar components and viewing angles) is needed. 3 The aim of our future work is to reduce the degeneracy both by performing a more detailed statistical analysis of the model population and by refining the best-fit solutions using information from the light curves observed in different color bands and/or from the spectral line variations with spin phase. [Footnote 3: How to produce field variations on such a short timescale needs to be addressed in more detail and, at present, no definite conclusion can be drawn. For instance, a change of the magnetic field structure and strength on a timescale of years may be conceivable if the surface field is small-scale (Geppert et al. 2003). In this case, even small changes in the inclination between the line of sight and the local magnetic field axis may cause significant differences in the "observed" field strength.]