3512002
pes2o/s2orc
v3-fos-license
Gene-diet interaction effects on BMI levels in the Singapore Chinese population Background Recent genome-wide association studies (GWAS) have identified 97 body-mass index (BMI) associated loci. We aimed to evaluate if dietary intake modifies BMI associations at these loci in the Singapore Chinese population. Methods We utilized GWAS information from six data subsets from two adult Chinese populations (N = 7817). Seventy-eight genotyped or imputed index BMI single nucleotide polymorphisms (SNPs) that passed quality control procedures were available in all datasets. The Alternative Healthy Eating Index (AHEI)-2010 score and ten nutrient variables were evaluated. Linear regression analyses between z-score transformed BMI (Z-BMI) and dietary factors were performed. Interaction analyses were performed by introducing the interaction term (diet × SNP) in the same regression model. Analysis was carried out in each cohort individually and subsequently meta-analyzed using the inverse-variance weighted method. Analyses were also evaluated with a weighted gene-risk score (wGRS) constructed from BMI index SNPs reported in recent large-scale GWAS studies. Results Nominal associations between Z-BMI and AHEI-2010 and some dietary factors were identified (P = 0.047-0.010). The BMI wGRS was robustly associated with Z-BMI (P = 1.55 × 10−15) but not with any dietary variables. Dietary variables did not significantly interact with the wGRS to modify BMI associations. When interaction analyses were repeated using individual SNPs, a significant association between cholesterol intake and rs4740619 (CCDC171) was identified (β = 0.077, adjusted Pinteraction = 0.043). Conclusions The CCDC171 gene locus may interact with cholesterol intake to increase BMI in the Singaporean Chinese population; however, most known obesity risk loci were not associated with dietary intake and did not interact with diet to modify BMI levels. Electronic supplementary material The online version of this article (10.1186/s12937-018-0340-3) contains supplementary material, which is available to authorized users. Background The explosion in worldwide obesity levels is believed to be due to the modern 'obesogenic' environment, where there is easy access to highly appetizing and energy-dense food with a reduced need for physical activity and energy expenditure [1]. Nevertheless, substantial between-individual variability exists in the ease and extent of weight gain or weight loss, and not every individual exposed to this 'obesogenic' environment becomes overweight or obese. The overall effect of individual weight change is likely a composite of inherent genetic predispositions and their interaction with the environment [1][2][3]. Genome-wide association studies (GWAS) have successfully uncovered at least 97 independent loci associated with body mass index (BMI) levels [4], and the majority of these loci are known to be transferable to Asian populations [4,5]. Studies have further suggested that dietary intake may interact with some of these loci to modify BMI levels [6][7][8][9][10][11][12]. However, these dietary intake interaction analyses have been predominantly performed in European ancestry populations, and since the dietary pattern of Asians is different (higher in carbohydrates) from that of Europeans [13], it is unclear if similar gene × diet interactions affect obesity levels in Asian populations. 
In this study, using Chinese subjects living in Singapore, we aimed to evaluate if dietary intake modifies obesity associations at index BMI loci, several of which have been very recently identified and not previously evaluated in similar interaction analyses. Study population We studied 7817 participants from six data subsets from two adult Chinese populations: the Singapore Chinese Health Study (SCHS), including the SCHS coronary artery disease (SCHS-CAD) cases (N = 594) and controls (N = 1070) and the SCHS-Type 2 diabetes (SCHS-T2D) cases (N = 2004) and controls (N = 2055), and the Singapore Prospective Study Programme (SP2, N = 2094). Since the samples were genotyped on various SNP arrays and BMI is known to be associated with CAD and T2D [14,15], we performed the analysis individually in these six data subsets and combined the individual results using meta-analysis procedures. Singapore Chinese Health Study The SCHS is a population-based, long-term prospective study focused on the role of diet and nutrition in disease in Singapore [16,17]. A total of 63,257 Chinese individuals aged between 45 and 74 years (mean age at entry 56.5) and from two major dialect groups in Singapore, the Cantonese and the Hokkiens, were recruited into SCHS from April 1993 to December 1998. At recruitment, all the study subjects were interviewed in person at home by a trained interviewer with a structured questionnaire. In April 1994, a 3% random sample of study subjects was re-contacted for donation of blood specimens, and the effort was later extended to include all consenting cohort enrollees, which led to the collection of blood from 28,439 participants by 2001. The study was approved by the Institutional Review Boards (IRB) of the National University of Singapore (NUS) and the University of Minnesota (UMN), and all study subjects gave written informed consent. In this study, two case-control studies conducted within the SCHS, the SCHS-CAD and the SCHS-T2D, were included for analysis. SCHS-CAD participants provided blood and did not have a history of physician-diagnosed coronary heart disease (CHD) or stroke at the time of blood collection. Acute myocardial infarction (AMI) cases and coronary heart disease (CHD) deaths were identified and verified in SCHS from three databases [18,19]. Two controls that were alive and free of the disease at the time of the diagnosis or death of the index case were matched to each CAD case on year of recruitment, date of birth, gender, father's dialect group and the date of blood collection. In total, there were 761 incident cases and 1400 controls (N = 2161). For SCHS-T2D, individuals with prevalent diabetes, cardiovascular disease, or cancer at the baseline interview were excluded from analysis. Participants were classified as incident T2D cases if the initial diagnosis of diabetes took place after the baseline interview, and the disease states were validated as previously described [20,21]. In total, there were 2615 incident diabetes cases and 2615 controls matched on age, gender, dialect group and date of blood collection. Detailed information regarding these two subcohorts has been described elsewhere [22][23][24][25][26]. 
Singapore Prospective Study Program The SP2 is a population-based cross-sectional study of adult Singaporean Chinese, Malay and Asian-Indian subjects aged between 24 to 95 years, and it comprises four previous studies, Thyroid and Heart Study (1982)(1983)(1984) [27], National Health Survey (1992) [28], National University of Singapore Heart Study (1993)(1994)(1995) [29], and the National Health Survey (1998) [30] (N = 11,053). Individuals in these studies were sampled randomly from the Singapore population and a disproportionate sampling scheme was utilized to increase the sample sizes of Malays and Asian-Indians. In total, 7742 individuals completed the questionnaire and 5157 of them (66.6% of individuals with completed questionnaire) attended the subsequent clinical examination [31]. Only Chinese samples from the SP2 were used in the present study. The study was approved by the IRB of NUS and the Singapore General Hospital. All participants gave informed written consent before the study. Body composition and dietary data In SCHS, weight and height were self-reported via inperson interviews [32,33] and were shown to be reliable across populations [34], including Asians [35]. In SP2, a wall mounted measuring tape and a digital scale were used to measure height and weight respectively [36]. BMI was calculated as weight in kilograms (kg) divided by height in meter square (m 2 ). In SCHS, information on dietary components during the year prior to the interview was collected by using a semi-quantitative food-frequency questionnaire (FFQ) specifically developed for this population during the baseline interview. A total of 165 food items commonly consumed by Singapore Chinese subjects were assessed by the questionnaire, and the study participants provided the usual intake frequency (ranging from never or hardly ever to two or more times/d) and portion size for each of the food and beverage items. The FFQ was subsequently validated against a series of 24-h dietary recall interviews [16]. The corrected correlation coefficients for selected energy or nutrients ranged from 0.24 to 0.79 [16,20]. In SP2, a similar semi-quantitative 169-item FFQ which was used in the Singapore National Nutrition Surveys was utilized to collect dietary intake information during the month prior to the interview [36,37]. The estimation of the frequency for consuming each food based on a standard portion size specific for that food group was requested from the participants. The consumption frequency could be reported as per day, per week, per month, rarely or never. Nutrient intakes were computed by the Health Promotion Board of Singapore by use of an in-house database [38,39]. The ten dietary variables examined in this study were: total calories (kcal/day), percentage of energy from protein (%protein), percentage of energy from fat (%fat), percentage of energy from saturated fatty acid (%SFA), percentage of energy from monounsaturated fatty acid (%MUFA), percentage of energy from polyunsaturated fatty acid (%PUFA), percentage of energy from carbohydrates (%carbohydrate), percentage of energy from starch (%starch), dietary fiber (g/day) and cholesterol (mg/day). The dietary score included in this study is the Alternative Healthy Eating Index (AHEI)-2010, which is a measurement for diet quality that has been used in the Singapore Chinese population previously [40]. Detailed information about the calculation of this score has been described previously [41,42]. 
SNP selection, genotyping and imputation Large-scale GWAS has identified 97 independent BMIassociated loci in European ancestry population [4]. Among them, 78 SNPs were either genotyped or imputed in all datasets. Detailed information about these SNPs is presented in Additional file 1: Table S1. After standard GWAS quality control (QC) procedure, 719 SCHS CAD cases and 1284 SCHS controls genotyped on Illumina Omni-Zhonghua8 Array were utilized in the study [23][24][25]. For the SCHS-T2D samples, 2004 cases and 2055 controls genotyped on Affymetrix ASI (Asian) Axiom array were available for analysis [26]. A total of 4059 individuals was left for subsequent analysis after QC. After QC procedure, 1145 Chinese SP2 individuals genotyped using Human Hap 610Quad (SP2610) and 949 genotyped with Illumina 1Mduov3 (SP21m) were available for analysis [22]. Imputation in both SCHS and SP2 was performed with IMPUTE2 [43] and genotype calls were based on phase3 1000G cosmopolitan panels. Statistical analysis A weighted genetic risk score (wGRS) was calculated based on the 78 BMI-associated variants, where the number of BMI increasing alleles were weighted by their reported effect estimates from recent large-scale GWAS [4]. Intakes of protein, fat, SFA, MUFA, PUFA, carbohydrate and starch were adjusted for total energy intake by converting to nutrient densities. Cholesterol and fiber were converted to calorie-adjusted nutrient values based on the method of residuals [44]. BMI and all the nutrient variables were normalized by rank-based inverse normalization (Z-scores). Linear regression analyses between Z-BMI and dietary factors were performed and adjusted for age, sex and calorie intake. Association between SNPs and BMI/dietary components were evaluated by linear regression with adjustment for age and gender. Interaction analyses were performed by introducing the interaction term (dietary factor x SNP) with the specific dietary factor and SNP included as covariates in the same regression model. Analysis was carried out in each cohort individually and subsequently meta-analyzed using the fixed-effects inverse-variance weighted method. Cochran's Q test was used to measure between-study heterogeneity (P < 0.050) [45]. All analyses were performed using STATA (version 12.1, Statacorp, College Station, TX, USA). Bonferroni adjusted P value of < 0.05 (2 tailed) was considered statistically significant after adjusting for multiple comparison for 858 tests (78 BMI SNPs × 11 dietary variables). Result The characteristics of variables used in the study are presented in Table 1. In total, 7817 individuals (5723 from SCHS and 2094 from SP2) had data available for analysis. Association between dietary factors with BMI levels The following dietary factors, AHEI-2010, total calories, %protein, %fat, %carbohydrate %starch, and cholesterol showed nominal significance with Z-BMI (P between 0.047 and 0.010) ( Table 2). Higher calories, %protein, %fat and cholesterol were associated with increased BMI while higher AHEI, %starch and %carbohydrate were associated with lower BMI. However, none of these remained significant after corrections for multiple tests (adj p-value > 0.110, Table 2). Association between BMI index SNPs with BMI and dietary factors Linear regression analyses were used to test the association between BMI index SNPs and Z-BMI. 
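Before turning to the results, the following is a minimal Python sketch of the per-cohort analysis steps described in the Statistical analysis subsection above: the rank-based inverse normal transformation, the weighted genetic risk score and the diet × SNP interaction model. It is not the authors' code; all data-frame column names (z_bmi, age, sex, the SNP dosage columns) and the Blom-type rank offset are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' code); column names and the rank offset
# are assumptions. Each of the six data subsets would be processed separately like this.
import numpy as np
from scipy.stats import norm, rankdata
import statsmodels.formula.api as smf

def inverse_normal_transform(x):
    """Rank-based inverse normal transformation to a Z-score (Blom-type offset assumed)."""
    ranks = rankdata(x)
    return norm.ppf((ranks - 0.5) / len(x))

def weighted_grs(dosages, weights):
    """wGRS: per-sample sum of BMI-increasing allele dosages weighted by published GWAS betas."""
    return np.asarray(dosages) @ np.asarray(weights)

def interaction_test(df, snp_col, diet_col):
    """Z-BMI ~ SNP + diet + SNP x diet, adjusted for age and sex, within one cohort."""
    model = smf.ols(f"z_bmi ~ {snp_col} * {diet_col} + age + sex", data=df).fit()
    term = f"{snp_col}:{diet_col}"
    return model.params[term], model.bse[term], model.pvalues[term]

# Hypothetical usage on one cohort data frame `df`:
# df["z_bmi"]  = inverse_normal_transform(df["bmi"])
# df["z_chol"] = inverse_normal_transform(df["chol_residual"])   # calorie-adjusted cholesterol
# df["wgrs"]   = weighted_grs(df[snp_columns].values, gwas_betas)
# beta, se, p  = interaction_test(df, "rs4740619", "z_chol")
```

In this sketch, the interaction coefficient, its standard error and P value are the per-cohort quantities that would subsequently be carried into the meta-analysis.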
Among the 78 overlapping SNPs, 9 loci (TMEM18, GNPDA2, RALYL, NT5C2, OLFM4, FTO, MC4R, QPCTL and ZC3H4) were significantly associated with the outcome (P < 0.05, Additional file 1: Table S1). A wGRS was constructed using all 78 BMI index SNPs. Each unit increase in the wGRS was robustly associated with increased Z-BMI in our datasets (β = 0.018, SE = 0.002, P = 1.55 × 10 − 15 , Table 3). The aggregate BMI wGRS however did not show significant associations with any dietary factors (p > 0.190, Table 3). Gene-diet interaction Inclusion of the dietary factor x wGRS in the regression models did not significantly modify their associations with BMI in our dataset (P interaction > 0.112, Table 4). However, when analyzing at single SNP level (Table 5, Additional file 1: Tables S2-S12), we observed one significant interaction between rs4740619 (CCDC171) and cholesterol on Z-BMI even after adjusting for multiple comparisons (β = 0.077, SE = 0.019, P interaction = 5.01 × 10 − 5 , adjusted P interaction = 0.043, Table 5, Additional file 1: Table S12). As red meat and egg yolk are substantial sources of cholesterol [46,47], we further analyzed whether rs4740619 could interact with red meat intake or yolk on BMI and found significant interaction between rs4740619 and processed red meat intake as well as egg yolk consumption (Additional file 1: Table S13). Conditioning the cholesterol intake x rs4740619 effects on processed red meat intake or egg yolk intake (and vice versa) did not affect the associations detected, indicating that these interactions may be independent. Discussion Ethnic differences in dietary intakes may influence the impact of inherent genetic predispositions to obesity. In this study, we evaluated the role of dietary intake, both as individual nutrient components and as a composite score (i.e AHEI-2010), on BMI levels using East-Asian subjects. To the best of our knowledge, our study represents the first systematic investigation on gene-diet interactions in East-Asians at established BMI-susceptibility loci, several of which have been only recently identified [4]. The BMI wGRS showed robust association with BMI levels in our Singapore Chinese samples and individually, most of the BMI susceptibility SNPs, were directionally consistent with their previously reported effects, indicating that genetic predisposition to obesity is largely transferrable to the Singapore Chinese population [4]. A total of nine loci (TMEM18, GNPDA2, RALYL, NT5C2, OLFM4, FTO, MC4R, QPCTL and ZC3H4) were associated with the outcome and the most strongly associated locus was rs11191560 on NT5C2. Nominal associations observed between the AHEI-2010 score and individual dietary components with BMI levels in our study suggest that a healthier diet may reduce obesity levels. However, using an aggregate wGRS score for all known BMI genetic loci, we find little evidence that a healthier diet may modify genetic predisposition to BMI levels in the East-Asian samples evaluated. This is similar to previous larger-scale studies in European ancestry subjects [48]. While the wGRS approach allows for the evaluation of overall genetic predisposition to BMI, it might incorporate multiple heterogeneous pathways that may not associate or interact with lifestyle factors in a similar manner. Investigating individual BMI risk SNPs of the aggregate wGRS score could therefore provide better biological insights on the complex interactions between genetic risks and dietary intake [49,50]. 
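The per-cohort interaction estimates behind results such as the Bonferroni-adjusted P value above were pooled with the fixed-effects inverse-variance weighted method named in the Methods, with Cochran's Q used to check heterogeneity. A minimal sketch of that pooling step follows; the six beta/SE pairs are placeholders and are not values from the paper.

```python
# Minimal sketch of fixed-effects inverse-variance meta-analysis with Cochran's Q.
# The input betas/ses are placeholder per-cohort interaction estimates (illustrative only).
import numpy as np
from scipy.stats import norm, chi2

def ivw_meta(betas, ses):
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                        # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)    # pooled fixed-effects estimate
    se = np.sqrt(1.0 / np.sum(w))
    p = 2 * norm.sf(abs(beta / se))
    q = np.sum(w * (betas - beta)**2)       # Cochran's Q statistic
    p_het = chi2.sf(q, df=len(betas) - 1)   # between-study heterogeneity
    return beta, se, p, p_het

# Six data subsets -> six interaction estimates (illustrative numbers only)
beta, se, p, p_het = ivw_meta([0.08, 0.07, 0.09, 0.06, 0.08, 0.08],
                              [0.05, 0.04, 0.06, 0.05, 0.05, 0.04])
p_bonferroni = min(1.0, p * 858)            # 78 BMI SNPs x 11 dietary variables
```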
Previous study in adults of European ancestry showed two BMI loci, LRRN6C and MTIF3, could modify the association between dietary score and BMI levels [8]. However, in our study, none of the reported risk loci significantly interacted with AHEI-2010. Differences in sample sizes, risk allele frequencies and/or dietary consumptions may explain these discrepancies. When evaluated at the single-SNP and individual dietary components level, our interaction analyses revealed a novel significant association between cholesterol intake and rs4740619 that increased BMI levels. This interaction was independent of red meat and egg yolk intake. Rs4740619 is an intronic variant on coiled-coil domain containing 171 (CCDC171), a newly identified gene on chromosome 9. HaploReg analysis indicated that rs4740619 may affect binding affinity of peroxisome proliferator-activated receptors (PPARs), which are nuclear receptors involved in regulating multiple metabolic pathways [51]. However, precisely how rs4746019 affects PPAR function and whether these are modulated by cholesterol levels will require further replication efforts and subsequent functional assessments. Our study is not without limitations. Firstly, proxy measures of dietary intake through food-frequency questionnaire data is likely a source of random error [52] and as previously highlighted, may be amplified in the context of obesity due to the awareness between diet and corpulence [8]. Moreover, due to the relatively modest sample set evaluated in this study and likely limited statistical power, further evaluations in larger-scale and better powered East-Asian studies or specific dietary intervention studies would be necessary to confirm and better characterize the associations reported here. The AHEI-2010 score was an alternative to the HEI score, which measures the adherence to dietary recommendations among Americans. Higher AHEI was strongly associated with lower risk for a variety of chronic diseases, such as cardiovascular disease and diabetes [41,42]. Although this score served to capture several dietary components in aggregate, it should be noted that it is not specific to the East-Asian population and may not fully capture dietary differences that exists between ethnic groups. In addition, AHEI-2010 is constructed by a simple summation of several components scored on a scale ranging between 0 and 10 and therefore had assumed that each component affects health equally, which is not the case. Thus a score calculated in a more sophisticated manner might be needed for a comprehensive assessment of dietary effects on health outcomes [53]. Lastly, recent studies have indicated that there may be an association between diet and body composition and have highlighted specific interactions between central obesity associated genetic loci and healthy diet scores (for eg. at GRB4 and LYPLAL1 loci) [8]. As most of the study subjects in our datasets do not have central obesity measures (i.e waist and hip circumferences), we were however unable to perform similar analyses to interrogate the interactions between diet and central obesity. Moreover, BMI is a surrogate measure of body composition. In certain situations, it might not be a valid reflection for body fat percentage, the excess of which is considered to be cause of the comorbid conditions, such as for people with well-developed musculature [54]. 
Nevertheless, in the general population, there is a significant positive correlation between BMI and body fat percentage, as well as with clinical outcome such as AMI, CAD and CHD mortality, including the Chinese subjects. [55][56][57][58][59][60][61][62]. Conclusion In conclusion, similar to studies performed in large-scale European ancestry samples, our data indicates that, in aggregate, most known BMI risk loci do not interact with dietary intake to modify BMI levels in East-Asian subjects. However, when evaluated as individual SNPs, a specific interaction at rs4740619 (CCDC171) with cholesterol and processed red meat intake that increases BMI levels was identified in our study subjects. Additional file Additional file 1: Table S1. SNP information and Meta-analysis between 78 SNPs and BMI level. Table S2. Interaction between SNPs and AHEI-2010 dietary score on BMI. Table S3. Interaction between SNPs and total calories on BMI. Table S4. Interaction between SNPs and %protein on BMI. Table S5. Interaction between SNPs and %fat on BMI. Table S6. Interaction between SNPs and %SFA on BMI. Table S7. Interaction between SNPs and %MUFA on BMI. Table S8. Interaction between SNPs and %PUFA on BMI. Table S9. Interaction between SNPs and %carbohydrate on BMI. Table S10. Interaction between SNPs and %starch on BMI. Table S11. Interaction between SNPs and fiber on BMI. Table S12. Interaction between SNPs and cholesterol on BMI. Table S13. Interaction between GRS and dietary factors on BMI in individual datasets used in the study. (PDF 1521 kb)
2018-02-25T04:45:13.794Z
2018-02-24T00:00:00.000
{ "year": 2018, "sha1": "8eec26fec9b330d173847e45f2684777399bdbd3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12937-018-0340-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8eec26fec9b330d173847e45f2684777399bdbd3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59059622
pes2o/s2orc
v3-fos-license
Fast and Robust Maximum Power Point Tracking for Solar Photovoltaic Systems Corresponding Author: Shuhui Li Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, USA Email: sli@eng.ua.edu Abstract: Solar Photovoltaic (PV) energy is becoming an increasingly important part of the world's renewable energy. In order to develop technology for efficient energy conversion from a solar PV system, this paper studies typical Maximum Power Point Tracking (MPPT) control techniques used in solar PV industry and then proposes a close-loop and adaptive MPPT method for reliable and rapid extraction of solar PV power. The paper emphasizes especially on how the proposed and conventional adaptive MPPT methods perform under highly variable weather and solar irradiation conditions in a digital control environment. A computer simulation system is developed by using SimPowerSystems and Opal-RT real-time simulation technology which allows for fast and efficient investigations of the MPPT algorithms under high switching frequency conditions for power converters. A hardware experiment system is built to validate and compare the proposed and conventional MPPT techniques in a more practical condition. Advantages, disadvantages and properties of different MPPT methods are compared and studied, evaluated. Introduction Photovoltaic (PV) systems can be easily integrated in residential buildings, hence they will be the main responsibility of making low-voltage grid power flow bidirectional (Mastromauro et al., 2012). A gridconnected solar PV system consists of a PV generator that produces electricity from sunlight and power converters for energy extraction and grid-interface (Lorenzo et al., 1994;Carrasco et al., 2006;Nelson, 2003). The main applications of PV systems are in standalone (Joerissen et al., 2004;Masters, 2004) or gridconnected configurations (Chedid et al., 1998). In the stand-alone configuration, a PV system is disconnected from the grid and its generated power is either stored in an energy storage device or consumed by loads connected to it. In the grid-connected configuration, however, the power captured by a PV system can be both delivered to the grid and consumed by loads. A PV generation system has two major weaknesses: (1) Low energy conversion efficiency (9-17%) , particularly at a low solar irradiation level; (2) the amount of electric power captured by a PV generator varies constantly with weather conditions. The captured power of a PV system depends on the temperature and solar irradiance. Generally, there is a unique point, called the Maximum Power Point (MPP), at which the whole PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is unknown, but can be located through a searching algorithm. To maximize the output power of a PV system, continuously tracking the MPP of the PV system is essential. The primary challenges for the MPPT of a solar PV array include: (1) How to get to a MPP quickly, (2) how to stabilize at a MPP and (3) how to make a smooth transition from one MPP to another under sharply changing weather conditions. In general, a fast and reliable MPPT is critical for power generation from a solar PV system. 
In order for effective development and design of solar PV systems, it is essential to investigate and compare performance, operating principles and advantages or disadvantages of conventional MPPT techniques used in the solar PV industry and develop new competent technology for fast and reliable extraction of solar PV power. In the following sections, the paper first presents a brief analysis about PV array characteristics and how the PV array characteristics are affected by temperature and solar irradiance in section 2. Section 3 examines conventional fixed-step MPPT approaches used in solar PV industry. Section 4 presents traditional adaptive MPPT techniques and a proposed Proportional-Integral (PI) based adaptive MPPT approach for fast and reliable tracking of PV array maximum power. Section 5 gives computer simulation evaluation of the proposed and conventional MPPT methods under stable and variable weather conditions. Section 6 shows a hardware experiment evaluation of the conventional and proposed MPPT methods under actual power converter operating conditions in a dSPACE-based digital control environment. Finally, the paper concludes with the summary of main points. Extracted Power Characteristics A grid-connected solar PV system consists of three parts ( Fig. 1): An array of photovoltaic cells, power electronic converters and an integrated control system (Kobayashi et al., 2004;. The control system of a solar PV array contains two parts: One for MPPT and the other for grid interface (Wu et al., 2003;Szabado and Wu, 2008;Femia et al., 2005;Veerachary et al., 2003). Both control functions are achieved through power electronic converters. Overall, the dc/dc converter performs the MPPT function while the dc/ac converter implements the grid interface control. Figure 2 illustrates typical I-V and P-V characteristics of a PV array for two different irradiance levels. As shown by the figure, if the output voltage of the dc/dc converter applied to the PV array is low, the output current of the PV array is almost constant for a given irradiation level. As the voltage applied to the PV array goes up, the power outputted from the PV array increases. When the output power of the PV array reaches the maximum value, an increase of the applied voltage would cause the output current of the PV array to drop radically and the output power decreases. During a day, solar irradiation and temperature rise and fall over time (ATSRD, 2011), which causes the continuous alteration of the MPP of the PV array. Thus, in order to collect the maximum available power, the operating point needs to be tracked continuously using a MPPT algorithm (Mastromauro et al., 2012). Figure 3 and 4 demonstrate the impact of temperature and solar irradiation to the power production of a PV array. According to Fig. 3, as the temperature increases, the maximum power captured by the PV array drops and the MPP voltage reduces indicating that a PV array produces more power on a cold day than a hot one. Regarding solar irradiation, a change of the solar irradiation level could affect both photo-generated current and temperature of PV cells within a PV array. Figure 4 shows the derivative of PV array output power versus the voltage applied to the PV array for several constant irradiance levels. In the figure, S represents the ratio of the solar irradiance over the nominal irradiance of 1000 W/m 2 . For each constant irradiance intensity, the applied voltage to the PV array at the zero derivative is the required MPP voltage. 
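As an illustration of the P-V behaviour just described, the short sketch below evaluates a simplified, lumped PV-array model and numerically locates the zero-derivative (MPP) point for several irradiance levels. The model form and every parameter value are assumptions made purely for illustration; they are not the array parameters used in the paper.

```python
# Illustrative sketch only: a simplified lumped PV-array model (explicit in V) used to
# reproduce the qualitative I-V / P-V behaviour described above. Parameters are assumed.
import numpy as np

def pv_current(v, irradiance, i_sc=8.0, i_0=1e-9, n_vt=1.2):
    """Photo-current scales with irradiance; the diode term produces the knee of the I-V curve."""
    i_ph = i_sc * irradiance / 1000.0
    return np.clip(i_ph - i_0 * (np.exp(v / n_vt) - 1.0), 0.0, None)

def mpp(irradiance, v_max=30.0, points=3000):
    """Locate the maximum power point: the voltage where dP/dV crosses zero."""
    v = np.linspace(0.0, v_max, points)
    p = v * pv_current(v, irradiance)
    k = np.argmax(p)                  # zero-derivative point of the P-V curve
    return v[k], p[k]

for g in (400, 600, 1000):            # irradiance steps similar to those used later in the paper
    v_mpp, p_mpp = mpp(g)
    print(f"G = {g:4d} W/m^2 -> V_mpp = {v_mpp:5.2f} V, P_mpp = {p_mpp:6.1f} W")
```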
The zero derivative points represent the location of MPPs. According to Fig. 4, for each irradiance intensity, the derivative is positive before reaching the MPP and negative after the MPP. As the irradiation level changes, the zero derivative point shifts a little bit to the left or right due to the temperature impact of irradiance intensity on PV cells. Perturb and Observe Method The P&O technique is the most widely used MPPT method for PV arrays. It operates by periodically perturbing the voltage applied to a PV array and comparing the output power of the PV array with that of the previous perturbation cycle. In general, if an increase of the voltage applied to the PV array causes an increase of the output power, the P&O controller moves the operating point along that direction; otherwise the perturbation is adjusted to the reverse direction. The P&O process continues until a MPP is reached (Esram and Chapman, 1995;Abdelsalam and Massoud, 2011;. Many different P&O methods have been reported in the literature. In classic P&O methods (Al-Amoudi and Zhang, 1998), the perturbation of the voltage applied to a PV array has a fixed value. In the optimized P&O methods (Esram and Chapman, 1995;Abdelsalam and Massoud, 2011), an average of multiple samples of the array output power is used to determine the perturbation magnitude for improved MPPT. In (Femia et al., 2009), a compensation network is used to improve P&O stability. Incremental Conductance Method The IC method is developed based on the principle that at the MPP, the following equation holds (Femia et al., 2006;: Hence, the direction to perturb the MPP operating point of the PV generator can be determined by comparing the instant conductance I a /V a with the incremental conductance dI a /dV a (Fig. 5). Using the IC method, it is theoretically possible to know when to stop the perturbation process as the MPP is reached. Fast and Reliable Adaptive MPPT Techniques In a PV system, the tracking speed and accuracy are the key factors for the MPPT control. These factors directly relate to the duty ratio adjustment of the dc/dc converter. Since conventional MPPT algorithms are unable to meet those requirements (Otieno et al., 2009;Yu, 2007), adaptive MPPT approaches have been proposed recently, including fuzzy logic based MPPT (Veerachary et al., 2003;Khaehintung et al., 2004), neural networks MPPT (Hussein et al., 2002;Sun et al., 2002) and ripple correlation control MPPT (Midya et al., 1996), etc. All of them basically belong to a "discrete" adaptive MPPT technique. Traditional Adaptive MPPT Methods In conventional adaptive MPPT methods, the perturbation magnitude varies during the MPP tracking process (Femia et al., 2005;Esram and Chapman, 1995). Typical adaptive P&O techniques utilize the derivative of power vs. PV array terminal voltage to determine next perturbation action. This is based on the analysis that the derivative is positive on the left of the MPP, zero at the MPP and negative on the right of the MPP as shown by Fig. 4. Therefore, (Esram and Chapman, 1995) proposed a Scaling Factor (SF) perturbation technique as shown by (2), in which M is a constant coefficient and the duty ratio in the next perturbation cycle is determined by the multiplication of M with the derivative. Hence, the duty ratio adjustment is scalable rather than fixed. 
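Before continuing with the adaptive methods, here is a compact sketch of the two fixed-step trackers described above. The IC equation referenced in the text is not reproduced there; the rule below assumes the standard MPP condition dIa/dVa = -Ia/Va implied by the stated comparison of instantaneous and incremental conductance. Step sizes and thresholds are illustrative values.

```python
# Hedged sketch of fixed-step P&O and IC decision rules (not the cited implementations).
# The IC stopping condition assumes dI/dV = -I/V at the MPP; steps and tolerances are illustrative.

def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """Classic fixed-step P&O: keep perturbing in the direction that raised the output power."""
    if p >= p_prev:
        direction = 1.0 if v >= v_prev else -1.0
    else:
        direction = -1.0 if v >= v_prev else 1.0
    return v + direction * step          # next voltage reference

def incremental_conductance(v, i, v_prev, i_prev, step=0.5, eps=1e-3):
    """IC rule: compare dI/dV with -I/V; ideally stop perturbing once they match (dP/dV = 0)."""
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < eps:                    # voltage unchanged: react to the current change only
        return v if abs(di) < eps else v + (step if di > 0 else -step)
    g_inc, g_inst = di / dv, -i / v      # incremental vs. (negated) instantaneous conductance
    if abs(g_inc - g_inst) < eps:
        return v                         # at the MPP
    return v + (step if g_inc > g_inst else -step)   # left of MPP -> raise V, right -> lower V
```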
Similar to the IC technique, the perturbation process stops theoretically as the MPP is reached: Another conventional adaptive duty ratio strategy is based on a Proportional-Integral (PI) control mechanism (Fig. 6). The error signal to the controller is generated by comparing dP a /dV a with a zero power derivative reference value. The duty ratio of the dc/dc converter is regulated continuously until the MPP is reached, i.e., dP a /dV a = 0. This inner close-loop control structure has a much faster response speed than the open-loop regulation mechanism used in P&O and IC methods so that a rapid MPP tracking can be achieved. However, a major problem for his control structure is that for a high |dP a /dV a |, the fast close-loop regulation of the duty-ratio could reduce the MPPT efficiency and cause more oscillation in the output power. Proposed Hyperbolic-PI (H-PI) Adaptive MPPT Method The proposed adaptive MPPT strategy has adopted the advantage of the inner close-loop control mechanism for the duty-ratio regulation. But, it introduces a hyperbolic function (3) In (3), k is a constant and is tuned to meet a reliable and fast MPPT requirement for a typical solar PV array. The output of the hyperbolic function is close to 1 if |dP a /dV a | is large but reduces greatly if |dP a /dV a | is small. This hyperbolic function allows for a more stable and accurate and much faster MPP tracking properties under dynamic condition. The control diagram of the proposed H-PI method is shown by Fig. 7, the measured current and voltage are first processed by a low-pass filter. After that, the derivative of power vs. voltage passes through a hyperbolic function and the amount of the duty-ratio adjustment is determined through a PI controller which generates a new duty-ratio and applies it to the dc/dc converter for the next control cycle. One issue for the proposed MPPT is the computation associated with tanh(•) function. In general, the tanh(•) can be calculated very quickly in a digital control system. According to a large number of experiments performed over a 2GHz PC, the average computation time of tanh(•) in MatLab is about 10ns. Compared to the controller sampling time, the computation time of tanh(•) is much smaller and ignorable. For tanh(•) implementation in a DSP chip, the additional computational effort is even more insignificant. MPP Tracking Analysis of Conventional and Proposed Adaptive MPPT Methods How to process the derivative of dP a /dV a causes a big difference in MPP tracking using the conventional and proposed adaptive MPPT strategies. Actually, the derivative operation can cause a high non-linearity. For the conventional adaptive MPPT, this derivative is directly used to regulate the duty ratio, which could result in a large regulation of the PV array voltage and a high oscillation in the MPP tracking especially when a large derivative appears. However, for the proposed adaptive MPPT, although a sudden surge of solar irradiation level causes a sharp change in the derivative of dP a /dV a , the derivative is preprocessed by the hyperbolic function before it is applied to the PI controller. In general, the hyperbolic function reduces |dP a /dV a | when a large ∆P a and a small ∆V a appear but increases |dP a /dV a | when a large ∆P a and a large ∆V a are present. Hence, both the tracking speed and reliability are improved. The improvement is especially evident when there are fast random changes of solar irradiation or random measurement noises in the PV control system. 
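To make the proposed control structure concrete, the following is a hedged sketch of an H-PI style update loop. Equation (3) is not reproduced in the text above, so the tanh(k·dPa/dVa) preprocessing used here is an assumption consistent with the stated behaviour (output near ±1 for large |dPa/dVa|, much smaller for small values); the gains, the constant k, the filter coefficient and the sign convention of the duty-ratio update (which depends on the converter topology) are likewise illustrative.

```python
# Hedged sketch of an H-PI style MPPT update (not the authors' implementation).
# Equation (3) is assumed to act like tanh(k * dP/dV); all gains, k, the low-pass filter
# coefficient and the duty-ratio sign convention are illustrative assumptions.
import math

class HyperbolicPIMppt:
    def __init__(self, k=5.0, kp=0.02, ki=2.0, ts=1e-4, alpha=0.1, d0=0.5):
        self.k, self.kp, self.ki, self.ts, self.alpha = k, kp, ki, ts, alpha
        self.duty = d0                   # dc/dc converter duty ratio
        self.e_prev = 0.0
        self.v_f = self.p_f = None       # low-pass filtered voltage and power

    def _lpf(self, new, old):
        return new if old is None else old + self.alpha * (new - old)

    def step(self, v_meas, i_meas):
        """One control cycle: filter, form dP/dV, tanh-preprocess, incremental PI update."""
        v_prev, p_prev = self.v_f, self.p_f
        self.v_f = self._lpf(v_meas, self.v_f)
        self.p_f = self._lpf(v_meas * i_meas, self.p_f)
        if v_prev is None or abs(self.v_f - v_prev) < 1e-6:
            return self.duty                         # not enough information to update yet
        dpdv = (self.p_f - p_prev) / (self.v_f - v_prev)
        error = math.tanh(self.k * dpdv)             # reference dP/dV = 0, so error = processed derivative
        delta = self.kp * (error - self.e_prev) + self.ki * self.ts * error
        self.e_prev = error
        self.duty = min(max(self.duty + delta, 0.05), 0.95)   # clamp to a safe duty range
        return self.duty
```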
Comprehensive simulation and hardware experiment results demonstrate that the processing through the hyperbolic function makes it much more stabile and reliable and faster for maximum power tracking of PV power (sections 5 and 6). Computational Experiments and Analysis To evaluate and compare different MPPT approaches, a computer simulation platform of the integrated power converter and PV array system is built. The experiment system mainly includes three parts: A PV array module, a dc/dc boost converter and a dc/ac inverter. The PV array has a series-parallel configuration comprises of 10 parallel strings with each string containing 20 series panels (Fig. 8). Each PV panel has an external bypass diode in parallel with the panel (Masters, 2004). At the top of each string, a blocking diode is included (Masters, 2004). The dc/dc boost converter is regulated by the MPPT control module for MPPT control of the PV array (Fig. 8). The dc/ac inverter integrates the PV system to the grid and an LCL filter is employed to enhance the power quality in the three-phase ac system. A direct-current vector control technique is used to control the dc/ac inverter, which consists of a d-axis loop for dc-link voltage control and a q-axis loop for grid voltage support or reactive power control. Details about the direct-current vector control is available in (Li et al., 2011). Major measurements of the PV generator include terminal current, voltage and output power of the PV array, dclink voltage and voltage, current and power at the grid side. The generator sign convention is employed, i.e., power transferred to the grid is positive. The converter modules are from Opal-RT RTE-Drive toolbox and can be integrated with the RTE PWM signal generation function from Opal-RT RT-EVENTS toolbox to generate converter driving pulses for very fast and precise simulation of power converters (ORTT, 2003). The switching frequencies are 10 kHz for the dc/dc converter and 1800 Hz for the dc/ac inverter, respectively and losses of the power converters and the LCL filter are included. The development of the MPPT control module has considered digital control system natures, including digital signal processing, sample and hold and time delays (Fig. 9). The measured current and voltage signals is first processed through sample and hold blocks, which transfers measured "continuous" signals to "discrete" signals. Then, a digital filtering is utilized to eliminate high frequency components that may be caused by noises or rapid switching of power converters. A time delay block is included to simulate potential delay between digital and physical systems. The comparison focuses mainly on IC fixed step, traditional Scaling Factor (SF) adaptive and the proposed Hyperbolic-PI based (H-PI) MPPT methods. MPPT Evaluation under Step and Ramp Changes of Solar Irradiance Usually, temperature change smoothly during a day (ATSRD, 2011), but solar irradiance levels could vary rapidly from one value to another. To test and compare different MPPT algorithms under sharp changes of solar irradiance levels, a solar irradiance curve with step and ramp variations is generated (Fig. 10a). The irradiation has a step change from 400 to 1000 W/m 2 at 1.5 s, is kept at 1000 W/m 2 within 1.5 and 2.2 s and changes to 600 W/m 2 at 2.2s. At 2.9 s, there is a ramp change of the solar irradiance level until it reaches 900 W/m 2 at 3.2 s. Then, the solar irradiance stays at 900 W/m 2 for 0.6 s and reduces slowly to 700 W/m 2 at 4s. 
The PV array maximum power, along with the captured power by using IC, SF and H-PI methods under the step/ramp changes of solar irradiance levels, is presented by Fig. 10b. The sampling rate of the MPPT controller is 0.1 ms. The current and voltage waveforms of the proposed MPPT are shown by Fig. 10c. Figure 10d and 10e are the zoom-in plots of Fig. 10b. Figure 10f presents the duty-ratio adjustment during the MPPT control. Figure 10g shows, for the three MPPT methods, the power vs. voltage locus for a slope change of the solar irradiation level from 0.6 to 0.9 kW/m 2 around 3sec (Fig. 10a). For the IC method, it is quite stable under sharp and gradient solar irradiation changes. The primary issue of the IC method is a continuous perturbation in duty ratio (Fig. 10f) even when the solar irradiance level is stable, causing the oscillation of the captured power. The extent of the power oscillation relies on the perturbation step. The smaller the perturbation step, the smaller the oscillation. Nevertheless, if the perturbation step is too small, the MPPT speed will be affected. For the SF method, there is a very small oscillation when the irradiation level remains at a stable level, at which the power over the voltage derivative is close to zero. But, for changing irradiation levels, the output power of the PV array oscillates a lot as demonstrated by time-domain waveforms ( Fig. 10d and 10e) and the power vs. voltage locus plot (Fig. 10g). This results from a sharp change of dP a /dV a around the MPP (Fig. 4), causing unstable variation in duty ratio. The proposed H-PI approach shows the best performance (Fig. 10b, d, e and g). This is due to the fact that the duty ratio adjustment of the H-PI method is tuned based on the power and voltage derivative that is preprocessed through a hyperbolic function as shown by Equation 3. As it can be seen in Fig. 10f, the change in duty ratio has a smoothly continuous value during an abrupt or ramp change of solar irradiation and is around zero when the solar irradiation is stable. The PV voltage and current oscillate continuously (Fig. 10c), particularly under changing solar irradiation conditions. This causes more oscillation of the instantaneous power of the PV array. This issue is critical and must be considered in the design of the lowpass filters (Fig. 9) to assure fast and robust MPP tracking, particularly for the adaptive MPPT techniques ( Fig. 6 and 7). The power vs. voltage locus as shown by Fig. 10g illustrates more clearly how the maximum power is tracked by using three different MPPTs approaches. As it can be seen from the figure, the proposed adaptive MPPT is more reliable and efficient in tracking the MPP than conventional adaptive MPPT. The dc-link voltage is stable under the direct-current vector control technique applied to the dc/ac inverter, which is an important factor for the MPPT. The waveform of the three-phase current on the grid side is shown by Fig. 10i and the instantaneous grid power is shown by Fig. 10j. As shown by Fig. 10j, the grid power follows the captured PV power. However, due to the existence of harmonics and unbalance in the grid three-phase currents, there are oscillations in the grid power, which is similar to the instantaneous grid power in other renewable energy applications Bao et al., 2012). Sampling Rate Impact In design of a digital control system, sampling rate is generally predetermined. 
After that, the perturbation controller for each MPPT technique should be designed independently until satisfactory performance is obtained. Figure 11 shows the tracking of MPP by using the three different MPPT methods under the sampling rate of 1ms and 10 ms per sample, respectively. As shown by Fig. 11a, all the MPPT methods can track MPP when the sample time is 1ms. However, when the sample time is 10 ms, there will be a big notch in the captured power by IC and SF method (Fig. 11b). An examination of power Vs. voltage locus (Fig. 11c) reveals more detailed information about the MPP tracking using the three different MPPT approaches. The figure, consistent with Fig. 10g, demonstrates that the proposed adaptive MPPT is more reliable. Overall, the proposed method responses much faster and is more stable under different irradiation conditions. It is needed to point out that the sampling rate determines the waiting time for the next perturbation. From this point of view, the sampling rate concept is not exactly equivalent to the cutoff frequency notion normally used in the digital signal processing field. In a MPPT algorithm for a PV array, the low-pass filters shown in Fig. 9 help to remove noises while the sampling rate determines how fast to conduct the next perturbation. The impact of the sampling rate can be seen from Fig. 11. In general, as the sampling time increases, it is slower to track the maximum power. MPPT under Variable Solar Irradiance Condition In reality, solar irradiance level changes continually over time (Mills et al., 2011). Hence, it is important to compare and evaluate different MPPT methods under variable irradiation conditions. For this purpose, a variable solar irradiance curve is generated (Fig. 12a). Figure 12b compares the MPP tracking using different MPPT algorithms and the parameters of the MPPT algorithms are the same as those used in Fig. 10. As shown by the figure, among the three MPPT algorithms, the proposed H-PI method is the most effective to track the MPP. For the IC method, the fixed step perturbation disables the fast changing requirement in duty ratio to track the MPP. For the SF method, a stable adaptive adjustment based on the derivative information is hard to obtain in tracking the MPP under changing weather conditions. Laboratory Setup and Design A hardware laboratory test system of Fig. 8 is built for further investigation of the conventional and proposed MPPT algorithms. Figure 13 shows the testing system with the following setups. (1) An Agilent E4360A solar simulator is used to represent an actual PV array (KT, 2015). The solar simulator can generate real output voltage and current relation that is equivalent to a practical PV panel or array. By using the solar simulator, it is possible to repeat the same solar irradiation condition to test and compare different MPPT algorithms through a hardware experiment that is otherwise impossible. Another advantage is that the maximum output power of the simulated PV array can be calculated based on the experiment settings so that one can determine whether a MPPT algorithm is effective in a hardware experiment. Due to these reasons, solar simulators have been widely used by many researchers around the world for evaluation of a PV control system (Brito et al., 2011). (2) The dc/dc converter is built by using a LabVolt MOSFET power converter. (3) The capacitor connected to the output terminal of the simulator is formed by several LabVolt capacitors in parallel. 
(4) A smoothing inductor is employed for the dc/dc converter. (5) The solar simulator is controlled by a dSPACE digital control system (dSPACE, 2014). The control system collects the output voltage and current signals of the solar simulator and sends a control signal to the converter based on control demands generated by the different MPPT algorithms. Although the dSPACE system is not a digital device used for practical applications, it is a digital control system based on modern DSP chips (Rubaai et al., 2007). Using the dSPACE system, an MPPT digital controller can be quickly built and tested before converting it to a practical digital control device. Experiment Analysis and Comparison The rated values of the hardware experiment system (Fig. 13), including the power converter and the PV simulator, are different from those used in the computational experiment (Fig. 8). In general, the rating of the hardware experiment system is lower than the rating of a practical PV array. Therefore, the parameters of the MPPT controllers must be retuned. To ensure that the controllers work properly, the retuned MPPT algorithms for both the conventional and proposed techniques are evaluated in simulation first before the hardware experiment, where the simulation time step for the controllers is the same as the sampling time used in the dSPACE digital control system. Another major challenge, different from the simulation, is that measurement noise is more significant than expected. One strategy to reduce the noise is to increase the strength of the measured signals. Because of the noise, it is very hard to tune the MPPT parameters for the IC and SF algorithms, especially for the SF algorithm. This is due to the fact that noise can result in a sharp notch in the calculated power at the next sampling instant, causing a large variation in the power derivative and thus affecting the stability of the SF algorithm. However, for the proposed H-PI algorithm, stable MPPT operation is much easier to obtain. The test sequence is scheduled as follows, with t = 0 s as the starting point for data recording. Around t = 20 s, there is an increase of the solar irradiation. A small increase of the solar irradiation appears near t = 40 s. Close to t = 60 s, there is a large decrease of the irradiation. At about t = 80 s, the sequence repeats itself. The PV simulator voltage and current are not only collected by the dSPACE system but also monitored by oscilloscopes and/or meters. Figure 14 shows the maximum power captured by all three algorithms. Again, the proposed H-PI approach has the best performance because its power derivative is smoothly processed before it is applied to the PI controller. In addition, the PI controller can respond much faster than an open-loop scheme. Conclusion This paper proposes a fast and robust MPPT technique and compares it with typical conventional MPPT techniques used in the solar PV industry. Among the three most popular conventional MPPT methods (IC, fixed-step P&O and adaptive P&O), the IC and fixed-step P&O methods show continuous oscillation even when the solar irradiation level is constant in the power converter switching environment; the adaptive P&O technique shows small oscillation if the solar irradiation level is stable. The proposed MPPT approach has the least oscillation and the highest stability. The sampling rate influences the selection of the perturbation rate. 
This result indicates that proper selection of the sampling rate and perturbation step is important. If the sampling rate is too slow, a stable and reliable MPPT would be hard to achieve. Again, the proposed method is more stable and reliable under different sampling rate conditions. Under the variable irradiation levels, the proposed H-PI approach has better performance than conventional methods, indicating that the derivative of PV array terminal power Vs. voltage is valuable in capturing and tracking maximum power of a PV array under variable weather conditions. The comparison between the traditional and proposed adaptive methods shows that the hyperbolic processing of the derivation is important for high performance of a solar PV system. In the hardware experiment, the unexpected noises would drastically influence the power increment or power derivative calculation in the next perturbation step. Because of the noises, it is very hard to tune the MPPT parameters for IC and SF algorithms, especially for the SF algorithm. However, for the proposed H-PI approach, the power derivative is smoothly processed before it is applied to the PI controller; in addition, the PI controller can response much faster than an open-loop scheme. The comparison demonstrates that the proposed H-PI approach is much easier to tune and has the best performance.
2019-04-14T13:04:21.990Z
2016-01-17T00:00:00.000
{ "year": 2016, "sha1": "c07e66bf7e0e7b73508c658f1ac8932fb439171f", "oa_license": "CCBY", "oa_url": "http://thescipub.com/pdf/10.3844/ajeassp.2016.755.769", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "177d2263eff4611e515a33696d65c7c6f679707f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
204946780
pes2o/s2orc
v3-fos-license
1089. Implementation of a Febrile Neutropenia Management Algorithm on Antibiotic Use and Outcomes: An Interrupted Time Series Analysis Abstract Background Febrile neutropenia (FN) is a common complication of cancer therapy and often necessitates prolonged antibiotic treatment. Antibiotic de-escalation can be challenging given tenuous clinical status. Furthermore, a microbiological or clinical etiology is identified in a minority of FN patients. In 2016 we implemented several evidence-based strategies to guide antibiotic use in high-risk FN patients including specifying vancomycin use indications, minimizing carbapenem escalation in stable patients with ongoing fevers, and defining antibiotic durations regardless of neutrophil count. The study objective was to characterize and evaluate our experience implementing these strategies on antibiotic use and clinical outcomes. Methods Interrupted time series analysis of all admissions to the Malignant Hematology service at the University of California, San Francisco between June 2014 and December 2018. The primary outcome was monthly days of therapy (DOT) per 1,000 patient-days of broad-spectrum IV antibiotics (aztreonam, cefepime, piperacillin–tazobactam, meropenem, and vancomycin). Secondary outcomes included DOT/1,000 patient-days for each IV antibiotic, incidence rates of bloodstream infections (BSI) and C. difficile infections (CDI), and in-hospital all-cause mortality. A segmented regression analysis was conducted to evaluate the impact of the FN management algorithm implementation on antibiotic use and clinical outcomes. Summary statistics and time series scatter plots were used to visualize the trends and outliers. Results 2319 unique patients with 6,788 encounters were included. The median (IQR) age was 59 (46–68) years and 60% were male. Regression results and time series plots are shown in Table 1 and Figures 1–3. Conclusion Implementation of an evidence-based FN management algorithm led to decreased vancomycin and meropenem use without a statistically significant impact on overall antibiotic use, CDI rates, or mortality.While BSI rates fluctuated in the 2 months post-implementation, rates returned to baseline thereafter. A multidisciplinary effort facilitated successful implementation of this stewardship project. This collaboration remains essential to addressing future antimicrobial management strategies in this population. Disclosures All authors: No reported disclosures. Background. Despite evidence to support outpatient anti-pseudomonal fluoroquinolone (FQ) prophylaxis in neutropenic patients, limited data exist to support this for inpatients undergoing induction chemotherapy for acute myeloid leukemia (AML). At our institution, we implemented an initiative to replace FQ prophylaxis with a conditional order for an anti-pseudomonal β-lactam to be given if a fever occurred. Methods. A retrospective chart review was conducted to analyze the outcome differences between patients receiving FQ prophylaxis (pre-intervention) and those who had a conditional order for an anti-pseudomonal β-lactam in place of FQ prophylaxis (post-intervention). Patients were included if they were ≥18 years of age and were newly diagnosed with AML undergoing induction chemotherapy. The primary outcome was 90-day all-cause mortality. Secondary outcomes included the number of patients requiring ICU admission and rate of bacteremic episodes caused by any pathogen and from a Gram-negative rod (GNR). 
Additionally, ciprofloxacin susceptibility of these pathogens was analyzed. Conclusion. Replacing FQ prophylaxis with a conditional order for an anti-pseudomonal β-lactam for inpatients newly diagnosed with AML receiving induction chemotherapy is a feasible option to decrease FQ exposure. Though increased episodes of GNR bacteremia were observed, there was no difference in total bacteremic episodes or clinical outcomes, and the improved ciprofloxacin susceptibility patterns will allow for an additional treatment option in this extremely vulnerable patient population. Disclosures. All authors: No reported disclosures. Background. Hematopoietic stem cell transplant (HSCT) patients develop profound neutropenia during the transplant process and often fever, which is suggestive of infection. Antimicrobial prophylaxis (AP) during anticipated neutropenia is recommended; however, data regarding when to initiate AP is limited. A local quality improvement initiative adjusted AP initiation to target the duration of severe neutropenia, defined as ANC ≤ 500 mm 3 (ANC500), which is when patients are at the greatest risk of infection. This initiative aimed to reduce antimicrobial utilization and consequences of unnecessary antimicrobial exposure while not adversely affecting patient outcomes. Evaluating the Timing of Antimicrobial Prophylaxis in Allogeneic and Autologous Hematopoietic Stem Cell Transplant Methods. A retrospective study was conducted across two cohorts over a 2-year period. The pre-intervention cohort (November 2016-2017) called for the initiation of AP on Day -1 prior to transplant. The post-intervention cohort (November 2017-2018) called for initiation of AP when patients reached ANC500. The primary outcome was frequency of febrile occurrences (temperature ≥38°C). Secondary outcomes included days of antimicrobial exposure, positive blood cultures, all-cause mortality, length of stay, graft-vs.-host disease, and Clostridioides difficile rates. Patients were excluded if they received a haploidentical transplant or inappropriate AP for the specified cohort. Results. A total of 248 patients were included in the final analysis with 130 patients in the pre-intervention cohort and 118 patients in the post-intervention cohort. The final analysis included 40 allogeneic and 208 autologous HSCT patients. There was no difference in fever occurrences between the two groups (79% pre vs. 69% post; P = 0.078). There was a significant reduction in the mean antibacterial (10.3 vs. 4.95; P < 0.001) and antifungal (13.4 vs. 7.6; P < 0.001) prophylaxis per patient-days in the pre-and post-intervention group. No significant differences in positive blood cultures (11.5% vs. 16.9%; P = 0.222), ICU admissions, length of stay or all-cause mortality were identified. Conclusion. Delaying antimicrobial prophylaxis (AP) until severe neutropenia showed no difference in fever occurrences or other patient outcomes. This approach is associated with a drastic reduction in antimicrobial exposure. Disclosures. All authors: No reported disclosures. Implementation of a Febrile Neutropenia Management Algorithm on Antibiotic Use and Outcomes: An Interrupted Time Series Analysis Trang D. Trinh, PharmD, MPH 1 ; Luke Strnad, MD 2 ; Lloyd E. Damon, MD 1 ;
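Returning to the first abstract above, its primary analysis is an interrupted time series (segmented) regression of monthly days of therapy per 1,000 patient-days around the 2016 algorithm implementation. A minimal sketch of the standard level-and-slope-change specification is given below; the exact model the authors fitted is not described in the abstract, and the data-frame column names are hypothetical.

```python
# Illustrative segmented-regression (interrupted time series) sketch for monthly
# DOT/1,000 patient-days. The authors' exact specification is not given in the abstract;
# a standard level-and-slope-change form is assumed, with hypothetical column names.
import numpy as np
import statsmodels.formula.api as smf

def its_segmented_regression(df):
    """df columns (hypothetical): month_index, dot_per_1000pd, post (0/1 after implementation)."""
    df = df.copy()
    first_post = df.loc[df["post"] == 1, "month_index"].min()
    df["time_after"] = np.where(df["post"] == 1, df["month_index"] - first_post + 1, 0)
    # dot = b0 + b1*month_index (baseline trend) + b2*post (level change) + b3*time_after (slope change)
    return smf.ols("dot_per_1000pd ~ month_index + post + time_after", data=df).fit()

# model = its_segmented_regression(monthly_df)
# print(model.summary())   # b2 and b3 estimate the immediate and gradual effects of the algorithm
```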
Birational geometry of algebraic varieties, fibred into Fano double spaces

We develop the quadratic technique of proving birational rigidity of Fano-Mori fibre spaces over a higher-dimensional base. As an application, we prove birational rigidity of generic fibrations into Fano double spaces of dimension $M\geqslant 4$ and index one over a rationally connected base of dimension at most $\frac12 (M-2)(M-1)$. An estimate for the codimension of the subset of hypersurfaces of a given degree in the projective space with a positive-dimensional singular set is obtained, which is close to the optimal one.

Introduction

0.1. Statement of the main result. In [9] birational rigidity was shown for two large classes of higher-dimensional Fano-Mori fibre spaces: generic fibrations into double spaces of index one and dimension M ⩾ 5 when the dimension of the base does not exceed (1/2)(M − 4)(M − 1) − 1, and generic fibrations into hypersurfaces of index one and dimension M − 1 ⩾ 9 when the dimension of the base does not exceed (1/2)(M − 7)(M − 6) − 6 (in both cases under the assumption of sufficient twistedness over the base). For Fano-Mori fibre spaces over the projective line the question of birational rigidity is studied well enough, see [8, Chapter 5]. However, one should note that almost all results on birational rigidity of Fano-Mori fibre spaces over the line were obtained by means of the quadratic technique (that is, via analysis of the singularities of the self-intersection of a mobile linear system, defining the birational map), whereas the main result of [9] was obtained by means of the linear technique (that is, via direct analysis of the singularities of the linear system itself, without using the quadratic operation of taking the self-intersection). The quadratic technique requires fewer restrictions on the variety under consideration and for that reason makes it possible to embrace a considerably larger class of rationally connected varieties. In many respects it is more efficient (at least, at the present stage of the theory of birational rigidity). The aim of the present paper is to develop the quadratic technique of studying birational geometry of Fano-Mori fibre spaces over a higher-dimensional base and apply it to fibrations into Fano double spaces of index one, considerably improving the result of [9] for that class of varieties: we show birational rigidity of generic fibrations into Fano double spaces of dimension M ⩾ 4 and index one over a rationally connected base of dimension at most (1/2)(M − 2)(M − 1). This result, considerably increasing the admissible dimension of the base of the fibre space, is obtained by means of the quadratic technique of counting multiplicities (see [8, Chapter 5]), which was not used in [9]. Let us make the precise statements. We consider a Fano-Mori fibre space π: V → S, where the base S is non-singular, the variety V has at most factorial terminal singularities, the anticanonical class (−K V ) is relatively ample and Pic V = ZK V ⊕ π * Pic S. We say that a fibre F = F s = π −1 (s), s ∈ S, satisfies the condition (h), if for any irreducible subvariety Y ⊂ F of codimension 2 and any point o ∈ Y the inequality holds, where the degrees are understood in the sense of the anticanonical class, that is, and the condition (hd), if for any mobile linear system ∆ ⊂ | − nK F | and any irreducible subvariety Y ⊂ F of codimension 2 the inequality mult Y ∆ ⩽ n holds.
Further, we say that a fibre F satisfies the condition (v), if for any prime divisor Y ⊂ F and any point o ∈ F of this fibre the inequality holds. Finally, we say that the fibre space V /S satisfies the K-condition, if for any mobile family C of curves on the base S, sweeping out S, and a general curve C ∈ C the class of the algebraic cycle −N(K V · π −1 (C)) − F of dimension dim F is not effective for any N ⩾ 1, that is, it is not rationally equivalent to an effective cycle of dimension dim F, and the K 2 -condition, if for any mobile family C of curves on the base S, sweeping out S, and a general curve C ∈ C the class of the algebraic cycle N(K 2 V · π −1 (C)) − H F of dimension dim F − 1 is not effective for any N ⩾ 1, where H F = (−K V · F ) is the class of the anticanonical section of the fibre. The following claim is the main result of the present paper. Theorem 0.1. Assume that dim F ⩾ 4 and every fibre F of the projection π is a variety with at most quadratic singularities of rank at least 4, and moreover codim(Sing F ⊂ F ) ⩾ 4. Assume further that every fibre F satisfies the conditions (h), (hd) and (v), whereas the fibre space V /S satisfies the K-condition and the K 2 -condition. Then every birational map χ: V V ′ onto the total space of a rationally connected fibre space V ′ /S ′ is fibre-wise, that is, there is a rational dominant map β: S S ′ such that the following diagram commutes: (Recall that a morphism of projective algebraic varieties π ′ : V ′ → S ′ is a rationally connected fibre space if the base S ′ and the general fibre π ′ −1 (s ′ ), s ′ ∈ S ′ , are rationally connected.) Theorem 0.1 immediately implies the following claim. Corollary 0.1. In the assumptions of Theorem 0.1, on the variety V there are no structures of a rationally connected fibre space over a base of dimension higher than dim S. In particular, the variety V is non-rational. Any birational self-map of the variety V is fibre-wise and induces a birational self-map of the base S, so that there is a natural homomorphism of groups ρ: Bir V → Bir S, whose kernel Ker ρ is the group Bir F η = Bir(V /S) of birational self-maps of the generic fibre F η (over the non-closed generic point η of the base S), whereas the group Bir V is an extension of the normal subgroup Bir F η by the group Γ = ρ(Bir V ) ⊂ Bir S: Recall that in [9] the following fact was shown. Theorem 0.2. Assume that a Fano-Mori fibre space π: V → S satisfies the following conditions: (i) every fibre F s = π −1 (s), s ∈ S, is a factorial Fano variety with at most terminal singularities and the Picard group Pic F s = ZK Fs , where F s has complete intersection singularities and codim(Sing F ⊂ F ) ⩾ 4, (ii) for every effective divisor D ∈ | − nK Fs | on an arbitrary fibre F s the pair (F s , (1/n)D) is log canonical, and for any mobile linear system Σ s ⊂ | − nK Fs | the pair (F s , (1/n)D) is canonical for a general divisor D ∈ Σ s , (iii) for any mobile family C of curves on the base S, sweeping out S, and a general curve C ∈ C the class of the algebraic cycle −N(K V · π −1 (C)) − F of dimension dim F (where F is the fibre of the projection π) is not effective for any N ⩾ 1, that is, it is not rationally equivalent to an effective cycle of dimension dim F . Then any birational map χ: V V ′ onto the total space of a rationally connected fibre space V ′ /S ′ is fibre-wise, that is, there is a rational dominant map β: S S ′ such that the following diagram commutes: Let us compare the assumptions of Theorems 0.1 and 0.2.
The canonicity of the pair (F s , 1 n D) in the condition (ii) of Theorem 0.2 (that is, essentially the birational superrigidity of the fibre F s ) follows from the conditions (h) and (hd) in Theorem 0.1 (and is actually equivalent to them as the main method of proving birational superrigidity of a primitive Fano variety is the application of the 4n 2 -inequality combined with the exclusion of maximal subvarieties of codimension two, see [8,Chapter 2]). The log canonicity of the pair (F s , 1 n D) in the condition (ii) of Theorem 0.2 is replaced in Theorem 0.1 by the condition (v), which for certain classes of Fano varieties is much easier to check. Finally, in Theorem 0.1 a new global condition for the Fano-Mori fibre space V /S is added, the K 2 -condition, which is easy to check. Theorem 0.1 will be applied to fibrations into double spaces of index one, when the conditions (h) and (v) hold automatically by the equality deg F = 2. 0.2. Fibrations into double spaces of index one. We use the notations of subsection 0.2 of [9]: the symbol P stands for the projective space P M , M 4, and W = P(H 0 (P, O P (2M))) is the space of hypersurfaces of degree 2M in P. The following general fact is true. Theorem 0.3. The closed algebraic subset of homogeneous polynomials f of degree d in (N + 1) variables, such that the hypersurface {f = 0} ⊂ P N has a singular set of positive dimension, is of codimension at least (d − 2)N in the space Proof is given in §3. The following theorem is immediately implied by Theorem 0.3. Theorem 0.4. There is a Zariski open subset W reg ⊂ W, such that any hypersurface W ∈ W reg has finitely many singular points, each of which is a quadratic singularity of rank at least 3, and, moreover, the following estimate holds: Proof. Setting in Theorem 0.3 d = 2M and N = M, we obtain, that in the complement to a closed subset of codimension 2M(M − 1) in W any hypersurface W has finitely many singular points. It is easy to check that the closed set of hypersurfaces W with a quadratic singular point of rank at most 2 or with a singularity o ∈ W of multiplicity mult o W 3, is of codimension 1 2 (M − 2)(M − 1) + 1 in the space W. This proves the theorem. If F → P is a double cover, branched over a hypersurface W ∈ W reg , then F is a factorial Fano variety with terminal singularities (see [1], Subsection 2.1 in [9] and Proposition 1.4 below), satisfying the conditions (h) and (v) by the equality deg F = 2. The condition (hd) is easy to show by the standard methods (see [8,Chapter 2]; for M 5 it holds in a trivial way, because for any irreducible subvariety Y ⊂ F of codimension 2 the inequality deg Y 2 holds). Thus in order to apply Theorem 0.1, it is sufficient to require every fibre F s , s ∈ S, to be branched over a regular hypersurface W s ∈ W reg , and the fibre space V /S to satisfy the K-condition and the K 2 -condition. In the notations of Subsection 0.2 of [9] let S be a non-singular rationally connected variety of dimension dim S 1 2 (M − 2)(M − 1). Let L be a locally free sheaf of rank M + 1 on S and X = P(L) = Proj ∞ ⊕ i=0 L ⊗i the corresponding P M -bundle. We may assume that L is generated by its global sections, so that the sheaf O P(L) (1) is also generated by the global sections. Let L ∈ Pic X be the class of that sheaf, so that Pic X = ZL ⊕ π * X Pic S, where π X : X → S is the natural projection. Take a general divisor U ∈ |2(ML + π * X R)|, where R ∈ Pic S is some class. 
If that system is sufficiently mobile, then by the assumption about the dimension of the base S and by Theorem 0.4 we may assume that for any point s ∈ S the hypersurface U s = U ∩ π −1 X (s) ∈ W reg , and for that reason the double space, branched over U s , satisfies the conditions of Theorem 0.1. Let σ: V → X be the double cover branched over U. Set π = π X • σ: V → S, so that V is a fibration into Fano double spaces of index one over S. Recall that the divisor U ∈ |2(ML + π * X R)| is assumed to be sufficiently general. Theorem 0.5. Assume that the divisorial class (K S + R) is pseudo-effective. Then for the fibre space π: V → S the claims of Theorem 0.1 and Corollary 0.1 are true. In particular, is the cyclic group of order 2. Proof. Since the class L is numerically effective and it is sufficient to check the inequalities As it was noted in Subsection 0.2 in [9], the first of these inequalities up to a positive factor is the inequality ((K S + R) · C) 0, which holds because the class (K S + R) is pseudo-effective and the family of curves C is mobile and sweeps out the base S. As for the second inequality, then elementary computations show that up to a positive factor it can be written as the inequality which is the more so true because the locally free sheaf L is generated by global sections. Q.E.D. for the theorem. Remark 0.1. For fibrations into double spaces of index one the K 2 -condition follows from the K-condition. Theorem 0.5 makes Theorem 0.3 of [9] stronger in respect of the genericity conditions which should be satisfied for every fibre of the fibre space V /S: in Theorem 0.5 these conditions are weaker, and for that reason the set W reg is larger. This makes it possible to prove birational rigidity for fibre spaces over a base of higher dimension, and in particular, for fibrations onto fourdimensional double spaces. 0.3. The structure of the paper. The present paper is organized in the following way. §1 contains mostly the first part of the proof of Theorem 0.1: we construct a modification of the base S + → S such that on the pull back π + : V + → S + of the original fibre space onto S + , the centre of each maximal singularity covers a divisor on S + (this procedure is often referred to as flattening the maximal singularities). These arguments are similar to the arguments of §1 in [9], however, in contrast to [9], here they give no proof of the main theorem, but only show the existence of a supermaximal singularity (under the assumption that the claim of Theorem 0.1 does not hold). The latter concept plays an important role in the proof of birational superrigidity of fibre spaces over P 1 , see [8,Chapter 5]; here we extend it to the case of fibrations over a base of arbitrary dimension. We complete §1, studying quadratic singularities, the rank of which is bounded from below (we need this to claim factoriality and terminality of the modified fibre space π + : V + → S + ). In §2 we complete the proof of Theorem 0.1: we exclude the supermaximal singularity, the existence of which has been shown in §1, whence the claim of Theorem 0.1 follows immediately. The excluding is achieved by means of the standard technique of counting multiplicities (see [8,Chapter 5]), adjusted to the situation under consideration. In §3 we obtain an estimate for the codimension of the closed set of hypersurfaces of degree d in P N with a singular set of positive dimension, in the space of all hypersurfaces of degree d in P N . The estimate is close to the optimal one. 
This is a general and quite useful result, proved by elementary (but non-trivial) methods of algebraic geometry; as far as the author knows, this estimate was not known earlier. 0.4. Historical remarks and acknowledgements. The history of the problems connected with birational rigidity of Fano-Mori fibre spaces over a base of positive dimension, has been reviewed in the introduction to [9] in a detailed enough way, and we will not consider it here. We note, however, that the Sarkisov theorem on conic bundles [10,11] was proved by the quadratic method (the self-intersection of the mobile linear system, defining the birational map, was considered), although for that class of varieties the quadratic technique of counting multiplicities is not needed. The problem of birational rigidity for del Pezzo fibrations over a base of dimension higher than one is entirely open. However, it is clear that for that class of varieties it is the quadratic techniques that is needed, although it is possible that a combination of the linear and the quadratic method will be successful. In the direction of computing the possible values of log canonical thresholds on del Pezzo surfaces a lot of work has been recently done, see [12,13,14,15]. Finally, let us point out the recent work [4], where by means of the results of [6,7] (see also [8,Chapter 7]) the problem of existence of rationally connected varieties that are non-Fano type varieties, stated in [3], was solved. Various technical points, related to the constructions of the present paper, were discussed by the author in his talks given in 2009-2014 at Steklov Mathematical Institute. The author thanks the members of Divisions of Algebraic Geometry and of Algebra and Number Theory for the interest to his work. The author also thanks his colleagues in the Algebraic Geometry research group at the University of Liverpool for the creative atmosphere and general support. Maximal and supermaximal singularities The contents of this section is the first part of the proof of Theorem 0.1. In Subsection 1.1 we modify the fibre space V /S: this procedure is similar to §1 in [9]. As a result, we obtain a new Fano-Mori fibre space V + /S + , satisfying all assumptions of Theorem 0.1 and an additional condition: the centre on V + of any maximal singularity covers a divisor on S + . In Subsection 1.2 we consider the self-intersection of the mobile linear system Σ, related to the birational map χ, and show the existence of a supermaximal singularity. In Subsection 1.3 we make the information about quadratic singularities of a bounded rank more precise. 1.1. Modification of the fibre space V /S. In the notations of Theorem 0.1 fix a birational map χ: V V ′ . Repeating the arguments of Subsection 1.1 in [9], consider an arbitrary very ample linear system Σ ′ on S ′ . Let Σ ′ = (π ′ ) * Σ ′ be its pull back onto V ′ , so that the divisors D ′ ∈ Σ ′ are composed from the fibres of the projection π ′ , and for that reason for any curve C ⊂ V ′ that is contracted by the projection π ′ , we have (D ′ · C) = 0; the linear system Σ ′ is obviously mobile. Set to be its strict transform on V , where n ∈ Z + . Obviously, the map χ is fibre-wise if and only if n = 0. Therefore, if n = 0, then the claim of Theorem 0.1 holds. So let us assume that n ≥ 1 and show that this assumption leads to a contradiction. As was shown in [9, Lemma 1.1], for any mobile family of curves C ∈ C on S, sweeping out S, the inequality (C · Y ) 0 holds. 
Following [9], we call a prime divisor E over V a maximal singularity of the birational map χ, if its image on V ′ is a prime divisor, covering the base S ′ , and the Noether-Fano inequality holds: where a(E) is the discrepancy of E with respect to V . In [9, Proposition 1.1] it was shown that maximal singularities exist. Let M be the (finite) set of all maximal singularities. In the proof of the existence of maximal singularities an important role is played by a very mobile family C ′ of rational curves on the variety V ′ . Recall [9, Subsection 1.1], that a family of rational curves C ′ on V ′ is very mobile if the curves C ′ ∈ C ′ are contracted by the projection π ′ , sweep out a dense open subset in V ′ , do not intersect the set of indeterminancy of the map χ −1 : V ′ V , and a general curve C ′ ∈ C ′ intersects the image of each maximal singularity E ∈ M transversally at points of general position. Let us fix a very mobile family of curves on V ′ . Its strict transform on V we denote by the symbol C, and its projection π(C) on S by the symbol C. Further, the following fact is true. on V does not cover the base: π(centre(E, V )) ⊂ S is a proper closed subset of the variety S. Proof. Although the statement of this proposition repeats the statement of Proposition 1.2 in [9] word for word, a new proof is needed, since the assumptions are different. Again it is sufficient to show that the restriction Σ| F of the linear system Σ onto a fibre F = π −1 (s) of general position has no maximal singularities (in the standard, weaker sense, see [8,Chapter 2]). This follows immediately from the conditions (h) and (hd), which are satisfied for the variety V . Q.E.D. for the proposition. Now let us construct, following [9, Subsection 1.2], a modification of the base σ S : S + → S and the corresponding modification of the total space of the fibre space V /S, such that the new fibre space π + : V + → S satisfies the following conditions: • the base S + is non-singular, • for every singularity E of the birational map χ•σ: V + V ′ , which is realized on V ′ by a divisor, covering the base S ′ , its centre on V + covers a divisor on S + , that is, codim(π + (centre(E, V + )) ⊂ S + ) = 1. The modification σ S is constructed as a sequence of blow ups with non-singular centres. By the assumption about the singularities of the fibres of the original fibre space V /S, the variety V + has at most quadratic (in particular, hypersurface) singularities of rank at least 4, and moreover, codim(Sing V + ⊂ V + ) 4, so that the variety V + is factorial and terminal. Obviously, so that V + /S + is again a Fano-Mori fibre space. Let T be the set of all σ Sexceptional prime divisors on S + and T the set of all σ-exceptional prime divisors on V + . The map is a bijection between T and T , the inverse map is ZT and a similar equality is true for Pic V + . Proposition 1.2. For the Fano-Mori fibre space V + /S + the K-condition and the K 2 -condition hold. Proof. Let R be a mobile family of curves on S + , sweeping out S + , and R ∈ R a general curve. Then, obviously, σ S (R) is a mobile family of curves on S, sweeping out S, and σ S (R) is a general curve in that family. We have and, respectively, (the discrepancies of the prime divisors T and T = π −1 + (T ) with respect to S and V , are obviously equal), and moreover, a T > 0 for all T ∈ T . Let us consider the class of an algebraic cycle is satisfied, we can see from here that it is satisfied for V + /S + , too. Let us consider the K 2 -condition. 
Writing out explicitly K 2 + , we get: condition, this implies that V + /S + satisfies it as well. Q.E.D. for the proposition. Since obviously the map χ • σ: V + V is fibre-wise with respect to the projections π + , π ′ if and only if the map χ is fibre-wise, we will prove Theorem 0.1 for the Fano-Mori fibre space V + /S + . The fibres of that fibre space by construction are the fibres of the original fibre space V /S, so that for V + /S all assumptions of Theorem 0.1 are satisfied. From now on, in order to simplify the notations, we assume that V + /S + is the original Fano-Mori fibre space V /S, which now has a new property: every singularity E of the map χ (which is still not fibre-wise), the centre of which on V ′ is divisorial and covers the base S ′ , has on the variety V the centre centre(E, V ), covering a prime divisor on S. In particular, this is true for every maximal singularity E ∈ M. In order not to make the text more difficult to read using by new symbols, we will use the symbols T , T in the new sense: T is the set of such prime divisors T on the base S, that for some maximal singularity E ∈ M we have π(centre(E, V )) = T , and T is the set of preimages T = π −1 (T ) of those divisors on V . The projection π gives a one-to-one correspondence between the sets T and T . Let τ : M → T be the map, relating to a maximal singularity E ∈ M the divisor T ∈ T , containing its centre centre(E, V ), and τ = π • τ : M → T , that is to say, Remark 1.1. In the situation considered in [9], the modification of the base completes the proof of birational rigidity of the fibre space, since by the assumption about the global log canonical threshold of every fibre, no maximal singularity, the centre of which covers a divisor on the base, can exist. In this paper the assumption about the global log canonical threshold is missing, and for that reason the main part of the proof of Theorem 0.1 starts when the base is modified and the centre of every maximal singularity covers a divisor on the base. In the next subsection we carry out some preparatory work for the subsequent exclusion of maximal singularities. Supermaximal singularities. For any maximal singularity E ∈ M set where T = τ (E). By construction, T ⊃ centre(E, V ), so that t E 1. Let ϕ: V → V be a birational morphism, resolving the singularities of the map χ. Every maximal singularity E ∈ M is realized on the variety V by a prime divisor, which we will denote by the same symbol E. By the definition of the numbers t E we get: the divisor is effective and contains none of the maximal singularities E ∈ M as a component. Now let us consider the strict transform C on V of the mobile family of curves C, which was fixed in Subsection 1.1. For C ∈ C we have: Set ν E = ord E Σ and let a E 1 be the discrepancy of E with respect to V . By the symbol K we denote the canonical class K V , so that for the strict transform Σ of the linear system Σ on V we have is a subvariety of codimension at least 2, or ε(E ′ ) 0, and for that reason ( C · Ξ) 0. Therefore, the following inequality holds: Recall that by the K-condition (Y · C) 0. On the other hand, as we could see a bit earlier, the estimate holds. Now let us consider the self-intersection Z = (D 1 •D 2 ) of the mobile linear system Σ (where D 1 , D 2 ∈ Σ are general divisor which do not have common components due to the mobility). 
Let us write this effective algebraic cycle of codimension 2 in the following way: where in the sub-cycle Z h are collected all components of Z covering the base (the horizontal part of Z), in the sub-cycle Z v are collected all components of the cycle Z that are contained in the divisors T ∈ T and cover T (the vertical part of Z), and in the sub-cycle Z ∅ are collected all the other components of the cycle Z (and that part of the cycle Z is inessential for us). Obviously, we have the presentation where Z v T consists of those components of the vertical part, which are contained in the divisor T and cover T . Let F = F s = π −1 (s) be the fibre over a point of general position s ∈ T . Since holds. This definition is modelled on the definition of a supermaximal singularity for Fano fibre spaces over P 1 , see [8,Chapter 5], and plays the same role. Proposition 1.3. A supermaximal singularity exists. Proof. Since we have (Z · π −1 (C)) = n 2 (K 2 V · π −1 (C)) + 2n(Y · C)H F , as obviously (π * (Y 2 ) · π −1 (C)) = 0. On the other hand, for some λ ∅ ∈ Z + . By the K 2 -condition we get the inequality Combining the inequalities (1), (2) and (4), we get Taking into account that the set of maximal singularities M is a disjoint union of the subsets M T , T ∈ T , we see that in the last inequality every maximal singularity appears only once. Therefore, for some singularity E ∈ M T the inequality holds. Since (E · C) > 0 for all E ∈ M, this implies the inequality (3). Q.E.D. for the proposition. A remark on quadratic singularities. In [2, Theorem 4] and [9, Subsection 2.1] it was shown that the quadratic singularities of rank at least r 1 are stable with respect to blow ups. This fact can be made more precise in the following way. Proposition 1.4. Assume that an algebraic variety X has at most quadratic singularities of rank at least r, and moreover, the inequality codim(Sing X ⊂ X) ⩾ r holds. Then for any irreducible subvariety B ⊂ X there is a Zariski open subset U ⊂ X, such that U ∩ B ≠ ∅ and the blow up U → U along the subvariety B U = B∩U has at most quadratic singularities of rank at least r, and the following inequality holds: codim(Sing U ⊂ U) ⩾ r. Remark 1.2. In [2,9] the following obvious fact was used: if a variety X has at most quadratic singularities of rank at least r, then the inequality codim(Sing X ⊂ X) ⩾ r − 1 holds. Therefore, the codimension of the singular set Sing U is at least r − 1. The proposition stated above makes the results of [2,9] more precise: the property of the singular set of the variety X to have codimension at least r is also stable with respect to blow ups. Proof of Proposition 1.4. By [2, Theorem 4] and [9, Subsection 2.1] we only need to show the inequality (5). Obviously, we may assume that B ⊂ Sing X. Arguing as in Subsection 2.1 of [9], consider a Zariski open subset U ⊂ X, such that B U is a non-singular subvariety, and moreover the rank of quadratic points b ∈ B U is constant and equal to r 1 ⩾ r. Let E U ⊂ U be the exceptional divisor of the blow up ϕ B : U → U of the subvariety B U . Obviously, ϕ B | E U : E U → B U is a fibration into quadrics of rank r 1 . It is clear that the set of singular points Sing( U\E U ) is of codimension at least r. However, a quadric of rank r 1 has a singular set of codimension r 1 − 1. Therefore, Q.E.D. for the proposition. Exclusion of supermaximal singularities In this section we complete the proof of Theorem 0.1: we show that a maximal singularity cannot exist.
For that purpose, we use the technique of counting multiplicities (Subsection 2.1) in a modified form, adjusted to varieties with quadratic singularities. We prove that the multiplicities of the self-intersection of the mobile linear system Σ along the centres of the supermaximal singularity satisfy a certain quadratic inequality, which is impossible, as our computations in Subsection 2.2 show. This contradiction completes the proof of Theorem 0.1. In Subsection 2.3 we correct a small issue in [2]. 2.1. The technique of counting multiplicities. Let us fix a supermaximal singularity E and the corresponding divisor T = π −1 (T ). To simplify the notations, we write Z v instead of Z v T and λ instead of λ T : the other singularities and divisors T ′ ∈ T take no part in the subsequent arguments. Let be the resolution of the singularity E, that is, the sequence of blow ups ϕ i,i−1 : , where the last exceptional divisor E K is the supermaximal singularity E. The set of indices I = {1, . . . , K}, parameterizing the blow ups, is the disjoint union where M = dim F is the dimension of the fibre and i ∈ I k if and only if dim so that for j ∈ I M −2 ∪ I M −1 we have µ j = 1. The strict transform of a subvariety, an effective divisor or a linear system on V j we denote by adding the upper index j. For a general divisor D ∈ Σ write Let Z = (D 1 • D 2 ) be the self-intersection of the mobile system Σ. Writing in the usual way (see [8,Chapter 2]) where Z i is an effective cycle of codimension 2 with the support inside the exceptional divisor E i , we define the degree d i of the cycle Z i in the following way. If B i−1 ⊄ Sing V i−1 , then for a point p ∈ B i−1 of general position ϕ −1 i,i−1 (p) is the projective space P δ i and d i = deg(Z i | ϕ −1 i,i−1 (p) ) is the degree of an effective divisor in that projective space. If B i−1 ⊂ Sing V i−1 , then for a general point p ∈ B i−1 the fibre ϕ −1 i,i−1 (p) is an irreducible quadric in the projective space P δ i +2 and d i is the degree of the effective cycle Z i | ϕ −1 i,i−1 (p) in that projective space. In both cases δ i means the elementary discrepancy codim(B i−1 ⊂ V i−1 ) − µ i . As usual, we break the set I into the lower part Besides, we have the estimate Let Γ be the oriented graph of the resolution of the singularity E, that is, the graph with the set of vertices I and an oriented edge (arrow) joins the vertices i and j is compatible with the structure of the graph Γ. Proof. The Cartier divisor is effective, which immediately implies the claim of the proposition. Q.E.D. Now [8, Chapter 2, Proposition 2.4] gives the inequality Extending the definition of the numbers r i to i ∈ I M −2 and using the obvious fact that r i is non-increasing as a function of i, we get finally: Remark 2.1. Let p ai be the number of paths in the oriented graph Γ from the vertex a to the vertex i for a ≠ i (so that p ai = 0 for a < i); set p ii = 1 for all i ∈ I. Usually (see [8,Chapter 2]) the technique of counting multiplicities makes use of the numbers p Ki instead of r i in the inequalities of the type (6), and it is easy to see that for µ 1 = 1 the equality r i = p Ki holds. If µ 1 = 2, then r 1 ⩾ p K1 (see below). The inequality (6) remains true, if we replace r i by p Ki , however such a modification is hard to use, since it is the coefficients r i that appear both in the explicit form of the Noether-Fano inequality, and in the explicit expression for ord E ϕ * K,0 T . Set L sing = max{1 ⩽ i ⩽ L | µ i = 2}. holds. Proof.
The claim (i) is obvious, since for i 1 + L sing the exceptional divisor E i is non-singular over a general point of the subvariety B i−1 , so that and the decreasing induction gives the equality r i = p Ki . For i L sing the fibre of the exceptional divisor E i over a point of general position on B i−1 is a quadric of rank at least 4. If for j L sing , j > i, we have j → i, then, obviously, as in the non-singular case. If j → i for some j 1 + L sing , then two cases are possible: 1) B j−1 ⊂ Sing E j−1 i , and then again the equality (7) holds, 2) B j−1 ⊂ Sing E j−1 i , and then the equality holds. We emphasize that if the equality (8) holds, then j > L sing , so that For that reason, every path in the graph Γ from the top vertex K to the vertex i gives an input into the number r i , which is equal to 1 or 2, and the latter takes place if and only if the path is of the form where j k L sing , j k+1 > L sing and for the arrow j k+1 → j k the case 2), described above, is realized. Q.E.D. for the proposition. For 1 i L fibre we define the numbers γ i ∈ Z by the equalities The following equalities hold: (i) the multiplicity of the linear system Σ with respect to E satisfies the relation (ii) the multiplicity of the divisor T with respect to E satisfies the relation (iii) the discrepancy of E satisfies the relation Proof repeats the arguments in the non-singular case (see [8,Chapter 2]) word for word, just the number of paths p Ki should be replaced by the new coefficients r i . We will show the equality (9); in the other cases the arguments are similar. We use the induction on K 1. If K = 1, then the equality (9) is obvious. Let K 2. For a general divisor D ∈ Σ write: so that ϕ * K,0 D = ϕ * K,1 D 1 + ν 1 ϕ * K,1 E 1 and for that reason For D 1 the claim of the proposition holds by the induction hypothesis. The proof is complete. Set L * = min(L, L fibre ) and . . , L * . Now the left-hand side of the inequality (6) rewrites in the form The first component in this sum does not exceed since the sequence of multiplicities m h i is not increasing, and by the condition (h). The "vertical" component in the sum (12) by the condition (v) does not exceed the number (see the equality (10)), and the right hand side of the last inequality is strictly smaller than 4ne, where e = ε(E), by the definition of a supermaximal singularity (the inequality (3)). Combining these estimates, we get that the left hand side of the inequality (6) is strictly smaller than the expression Let us consider now the right hand side of the inequality (6). By the definition of the number ε(E) we have: (so that in these notations the Noether-Fano inequality takes the form of the estimate e > 0). Using the standard methods, it is easy to check that the minimum of the right hand side of the inequality (6) on the hyperplane in the space R K (ν 1 ,...,ν K ) , given by the equation (13), is attained for ν i = θ/µ i , where θ can be found from the equation (13). We introduce the following notations: In these notations the inequality (6) implies the estimate Taking into account that Σ sing + Σ non−sing = Σ l + Σ u , after easy computations we get: However, Σ non−sing Σ l + Σ u , so that the previous inequality implies the estimate which can not be true. This contradiction excludes the supermaximal singularity and completes the proof of Theorem 0.1. Birationally rigid Fano hypersurfaces. 
In the context of the arguments of this subsection, let us consider the problem of estimating the codimension of the set of non-rigid hypersurfaces of degree M in P M , which was set and solved in [2]. Working on the present paper, the author detected an incorrectness in that paper in the proof of the 4n 2 -inequality for Fano hypersurfaces with quadratic singularities of rank at least 5 for M ⩾ 5 ([2, Section 3]). In this subsection we explain what was incorrect and how it should be corrected. Note that the main claim of [2, Proposition 1] and the method of its proof are valid. Recall that in [2, Section 3] the following local fact was shown. Let X be an algebraic variety with quadratic (in particular, hypersurface) singularities of rank at least 5 (so that the set of singular points Sing X is of codimension at least 4 and for that reason the variety X is factorial), B ⊂ Sing X an irreducible subvariety, Σ a mobile linear system on X, and moreover, for some n ⩾ 1 the pair (X, (1/n)Σ) is not canonical; more precisely, it has a non-canonical singularity E with the centre at B. Then the self-intersection Z = (D 1 • D 2 ), where D i ∈ Σ are general divisors, satisfies the inequality mult B Z > 4n 2 . In fact, the assumptions can be somewhat relaxed. The following claim is true. Proposition 2.4. Let X be a variety with quadratic singularities of rank at least 4, and assume that codim(Sing X ⊂ X) ⩾ 4. Assume further that a certain divisor E over X is a non-canonical singularity of the pair (X, (1/n)Σ) with the centre B ⊂ Sing X, where Σ is a mobile linear system. Then the self-intersection Z of the system Σ satisfies the inequality Proof. We only point out what should be modified in the arguments of [2, Section 3]. It follows from Proposition 1.4 that the technique of counting multiplicities works without changes under the relaxed assumptions about the rank of quadratic singularities. Furthermore, in [2, Section 3] it is claimed erroneously that the Noether-Fano inequality has the form where p i is the number of paths in the oriented graph of the resolution of the singularity E from the top vertex to the vertex i (the meaning of all notations is exactly the same as in Subsection 2.1 of the present section). In fact, in the inequality (14) instead of p i the coefficients r i , introduced in Subsection 2.1, must be used. After the replacement of the coefficients p i by the coefficients r i all the arguments in [2, Section 3] work as they are and prove Proposition 2.4. Hypersurfaces with non-isolated singularities In this section we prove Theorem 0.3. The procedure of estimating the codimension of the set of hypersurfaces in the projective space with a singular set of positive dimension depends on the type of that singular set. In Subsection 3.1 we consider some simple cases (for instance, when the singular set is a line), where the codimension of the set of hypersurfaces with a singular set of the given type can be directly estimated or explicitly computed. In Subsection 3.2 we develop a technique that makes it possible to estimate the codimension of the set of hypersurfaces with a finite, but sufficiently large, set of singular points. In Subsection 3.3 we apply this technique and complete the proof of Theorem 0.3. whence, taking into account the dimension of the Grassmannian of lines in P N , we obtain the equality The following claim is true, which immediately implies Theorem 0.3. Theorem 3.1. The following inequality holds: Remark 3.1.
It seems that the inequality of Theorem 3.1 can be improved, replacing its right-hand side by (d − 2)N + 3, after which it would become precise. However, the proof below is insufficient for that purpose. In any case the claim of Theorem 3.1 is much stronger than what we need in this paper. Proof of Theorem 3.1. Let P which we will prove for all 1 ⩽ l ⩽ k ⩽ N and a fixed k-plane P ⊂ P N , given by the equations {x k+1 = . . . = x N = 0}. Example 3.2. Consider the case l = k = 2. In that case P is a plane, P ⊂ {f = 0} and the closed set Sing(f ) contains an irreducible plane curve C ⊂ P of degree q ⩾ 2. This gives (d + 1)(d + 2)/2 independent conditions on the coefficients of the polynomial f | P (they all vanish) and (N − 2) polynomials ∂f /∂x 3 | P , . . . , ∂f /∂x N | P vanish on the curve C. Note that the coefficients of the polynomials f | P , ∂f /∂x i | P , i = 3, . . . , N up to a non-zero integral factor are distinct coefficients of the polynomial f . We may assume that at least one of the polynomials ∂f /∂x i | P is not identically zero, say, ∂f /∂x 3 | P ≡ 0. Then the curve C is an irreducible component of the plane curve {∂f /∂x 3 | P = 0}. Fixing the polynomial ∂f /∂x 3 | P , we finally obtain independent conditions on the coefficients of the polynomial f , where 2 ⩽ q ⩽ d − 1. It is easy to see that this number satisfies the inequality (15). Example 3.3. Consider the case l = 1, k = 2. In that case Sing(f ) contains an irreducible plane curve C ⊂ P of degree q ⩾ 2, but f | P ≡ 0, so that {f | P = 0} is a reducible plane curve of degree d, containing C as a double component, so that 2q ⩽ d. An easy dimension count gives independent conditions on the coefficients of the polynomial f | P . The minimum of the last expression is attained for q = 2. Now the fact that the polynomials ∂f /∂x i | P , i = 3, . . . , N, vanish on the curve C, gives in addition at least (N − 2)(2d + 1) independent conditions on the coefficients of f . As a result, we get and it is easy to check that the inequality (15) is satisfied. Starting from this moment, we assume that k ⩾ 3. Recall the following Definition 3.1. (See [5,Section 3] or [8,Chapter 3]). A sequence of homogeneous polynomials g 1 , . . . , g m of arbitrary degrees on the projective space P e , e ⩾ m + 1, is called a good sequence, and an irreducible subvariety W ⊂ P e of codimension m is its associated subvariety, if there exists a sequence of irreducible subvarieties W j ⊂ P e , codim W j = j (in particular, W 0 = P e ), such that: • g j+1 | W j ≡ 0 for j = 0, . . . , m − 1, • W j+1 is an irreducible component of the closed algebraic set g j+1 | W j = 0. A good sequence can have more than one associated subvariety, but their number is bounded from above by a constant depending on the degrees of the polynomials g j only (see [5, Section 3]). Since the polynomials ∂f /∂x k+1 | P , . . . , ∂f /∂x N | P vanish identically on C and the curve C is an irreducible component of the set Sing(f ), from those polynomials we can choose (k − 1) ones that form a good sequence with the curve C as an associated subvariety (in particular, N − k ⩾ k − 1). Fixing these polynomials, for each of the remaining (N + 1 − 2k) polynomials we get the condition (∂f /∂x i | P )| C ≡ 0, where the curve C, as one of the associated subvarieties of the fixed good sequence, can be assumed to be fixed. In [5,Section 3] it was shown that the condition (16) defines a closed subset of codimension at least (d − 1)k + 1. Therefore, and elementary computations show that the inequality (15) holds. Example 3.5. Let us consider the case l = k − 1. This case generalizes Example 3.3.
Here the hypersurface {f | P = 0} has a multiple irreducible non-degenerate component of degree q, where 2q d, so that the coefficients of the polynomial f | P belong to a closed subset of codimension in the space P d,k . Furthermore, since the curve C is an irreducible component of the set Sing(f ), from the set of polynomials f | P , ∂f ∂x k+1 P , . . . , ∂f ∂x N P we can choose a good sequence, starting with f | P , for which the curve C will be an associated subvariety. In particular, the estimate N + 2 2k holds. Fixing the polynomials of that sequence, we may assume the curve C to be fixed. Now we argue as in Example 3.4 and obtain, in addition to the conditions on the coefficients of the polynomial f | P , also (N + 2 − 2k)((d − 1)k + 1) more independent conditions on the coefficients of the polynomial f . An elementary, although tedious, check shows that the inequality (15) is satisfied. In order to prove the inequality (15) in the case l k − 2, we need a new technique, which is developed below. 3.2. Linearly independent points. The following claim is true. Proof. We may assume that l 0 = x 0 , l 1 = x 1 , . . . , l r = x r . In order to simplify the formulas, we will prove the affine version of the proposition: set v 1 = x 1 /x 0 , . . . , v r = x r /x 0 and u i = x r+i /x 0 , i = 1, . . . , N − r. In the affine space A N ⊂ P N , A N = P N \{x 0 = 0} with coordinates (u, v) = (u 1 , . . . , u N −r , v 1 , . . . , v r ) the affine spaces A(e) = Θ(e)\Π are contained entirely: so that S(e) ⊂ A(e) for all e. Obviously, A(e) = {v 1 = λ 1,e 1 , . . . , v r = λ r,er } ⊂ A N is a (N−r)-plane, which is parallel to the coordinate (N−r)-plane (u 1 , . . . , u N −r , 0, . . . , 0). Let us write the polynomial g in terms of the affine coordinates (u, v) in the following way: (if e i = 0, then the corresponding product is assumed to be equal to 1). Here g e (u) = g e 1 ,...,er (u) is an affine polynomial in u 1 , . . . , u N −r of degree deg g e d − |e|. For the fixed λ ij this presentation is unique. By Lemma 3.1, the condition defines a linear subspace of codimension m(N − r + 1) in the space of polynomials P N −r,d . However, it is easy to see that g| A(0) = g 0,...,0 (u), since for e = 0 in the product there is at least one factor (v i − λ i0 ) = v i , which vanishes when restricted onto the (N − r)-plane A(0). Therefore, the condition S(0) Sing(g| A(0) ) imposes on the coefficients of the polynomial g 0,...,0 (u) precisely m(N − r + 1) independent linear conditions, whereas the polynomials g e (u) for e = 0 can be arbitrary. Now let us complete the proof of Proposition 3.2 by induction on |e|. More precisely, for any a ∈ Z + set ∆ a = {e 1 0, . . . , e r 0, e 1 + . . . + e r a} ⊂ R r , so that ∆ = ∆ d−3 , and let us prove the claim of Proposition 3.2 in the following form: for every a = 0, . . . , d − 3 ( * ) a the set of conditions S(e) ⊂ Sing(g| Θ(e) ), e ∈ Z r + , |e| a, defines a linear subspace of codimension m(N −r+1)|∆ a | in P N,d , where the restrictions are imposed on the coefficients of the polynomials g e (u) for e ∈ ∆ a , whereas for e ∈ ∆ a the polynomials g e (u) can be arbitrary. The case a = 0 has already been considered, so we assume that a d − 4 and the claims ( * ) j have been shown for j = 0, . . . , a. Let us show the claim ( * ) a+1 . Let e ∈ Z r + be an arbitrary multi-index, |e| = a + 1. The restriction onto the affine subspace A(e) means the substitution v 1 = λ 1,e 1 , . . ., v r = λ r,er . 
For that reason the polynomial g e(u) comes into the restriction g| A(e) with a non-zero coefficient On the other hand, for e ′ = e, |e ′ | a + 1 the product is equal to zero, as for at least one index i ∈ {1, . . . , r} we have e ′ i > e i and therefore that product contains a zero factor. So g| A(e) is the sum of the polynomial α e g e and a linear combination of the polynomials g e ′ with |e ′ | a with constant coefficients. Now, fixing the polynomials g e ′ |e ′ | a, we see that the condition S(e) ⊂ Sing(g| A(e) ) defines an affine (generally speaking, not a linear) subspace of codimension m(N − r + 1) of the space of polynomials g e (u 1 , . . . , u N −r ) of degree at most d − |e|, the corresponding linear space of which is given by the condition S(e) ⊂ Sing g e (u). Note that on the coefficients of other polynomials g e ′ with |e ′ | = a + 1 no restrictions are imposed. This completes the proof of the claim ( * ) a for all a = 0, . . . , d − 3. Q.E.D. for Proposition 3.2. such that the set Sing(f | P ) has an irreducible component Q of dimension l, containing a curve C ⊂ Sing(f ), and such that it is in general position with the subspaces from the set Θ: for all e ∈ ∆ the set Θ(e) ∩ Q contains (k − l + 1) linearly independent points. Since Q = C = P , the subset P N,d (P, Θ) imposes on the coefficients of the polynomial f | P at least (k − l + 1)|∆| independent conditions. Furthermore, from the set of (N + 1) polynomials ∂f ∂x 0 P , . . . , ∂f ∂x N P we may select a good sequence of (k − 1) polynomials, with a certain curve C, C = P , as an associated subvariety, and moreover, this can be done in such a way that the first (k − l) polynomials in that sequence are chosen among the polynomials ∂f ∂x 0 P , . . . , ∂f ∂x k P (and some subvariety Q ⊃ C, Q ⊂ P of dimension l is an associated subvariety of that subsequence), whereas the following (l − 1) polynomials are chosen among the polynomials ∂f ∂x k+1 P , . . . , ∂f ∂x N P . Fixing the polynomial f | P and the other polynomials of the good sequence, we may assume the curve C ⊂ Sing(f ) of singular points to be fixed. Now the condition ∂f /∂x i | C ≡ 0 for every i ∈ {k + 1, . . . , N}, which did not get into the good sequence, give in addition (N + 1 − k − l)((d − 1)k + 1) independent conditions on the coefficients of the polynomial f . An elementary, although tedious, check shows that the inequality (k − l + 1)|∆| + (N + 1 − k − l)((d − 1)k + 1) (d − 2)N + (k + 1)(N − k) holds for all the values k, l under consideration, which completes the proof of the inequality (15) and of Theorem 3.1, and therefore, of Theorem 0.3. Remark 3.2. It is easy to see that the worst estimate for the codimension of P (1,k;l) N,d (P ) corresponds to the case k = N and l = 1, that is, the hypersurface {f = 0} has a non-degenerate curve of singular points. In that case Proposition 3.3 yields the inequality codim(P It seems hardly probable that the presence of a non-degenerate curve of singular points imposes on the coefficients of the polynomial f less (although slightly less) independent conditions than the presence of a line consisting of singular points (when the estimate for the codimension is precise). And indeed, when we apply Proposition 3.3, we essentially replace a curve, consisting of singular points, by a finite set of singular points (although it is quite a large set). Probably, the technique used in the proof of Theorem 3.1 can be improved and for the case of a non-degenerate curve of singular points a more precise estimate could be obtained. 
This is what was meant in Remark 3.1.
The Prevalence and Functional Impact of Chronic Edema and Lymphedema in Japan: LIMPRINT Study Abstract Background: This was a part of LIMPRINT (Lymphoedema IMpact and PRevalence—INTernational), an international study aimed at capturing the size and impact of lymphedema and chronic edema in different countries and health services across the world. The purpose of this study was to clarify the prevalence and the impact of chronic edema in Japan. Methods and Results: This was a two-phase facility-based study to determine the prevalence and functional impact of chronic edema in the adult population in Japan between 2014 and 2015. The prevalence study involved a university hospital, an acute community hospital, and a long-term medical facility. The impact study involved six facilities, including two outpatient clinics in acute care hospitals (one led by a physician and the other led by a nurse), inpatient wards in two acute care hospitals, and two nursing home/long-term care facilities. Various questionnaires and clinical assessments were used to gather patient demographic data and assess the functional impact of chronic edema. The results showed that chronic edema was much more prevalent in the long-term care facility than in acute care hospitals; cellulitis episodes occurred in ∼50% of cases in the gynecologist-led outpatient clinic, even though >80.0% of patients received standard management for edema; edema was found in the trunk region, including the buttock, abdomen, and chest-breast areas, in addition to the upper and lower limbs; and subjective satisfaction with edema control was low, even though the quality-of-life scores were good. Conclusions: The prevalence of chronic edema varied according to the facility type, ranging from 5.0% to 66.1%. The edema was located in all body parts, including the trunk region. Subjective satisfaction with control of edema was poor, while general quality of life was good. This large health care issue needs more attention. Introduction B oth lymphedema and chronic edema have strong negative effects on not only patients' health statuses but also medical expenditures around the world, but the precise epidemiological data and its impact have not been fully elucidated. This study was a part of LIMPRINT (Lymphoedema IMpact and PRevalence-INTernational), an international study aimed at capturing the size and impact of lymphedema and chronic edema in different countries and health services across the world. Its focus is to provide evidence to support the development and reimbursement of lymphedema services. The project is coordinated by Professor Christine Moffatt from the International Lymphoedema Framework (ILF). The ILF is a UK charity, whose aim is to improve the management of chronic edema and related disorders worldwide through the sharing of expertise and resources and by supporting individual countries to develop a long-term strategy for the care and management of chronic edema. Further details of the LIMPRINT project can be obtained on the ILF website (www.lympho.org/limprint). This study used the multicenter data gathered between 2014 and 2016 through the ILF, Japan branch. In Japan, there is only a reimbursement system under national health insurance for lymphedema management of patients diagnosed with lymphedema after the treatment of uterine cancer, uterine adnexal cancer, prostate cancer, or breast cancer with lymph node dissection. 1 However, there is no such system for chronic edema. 
This is partly because of the lack of epidemiological studies on chronic edema to understand its impact on patients' health. Aim The purpose of this study was to clarify the prevalence and impact of chronic edema in Japan. Methods Study design. This was a facility-based study to determine the prevalence and functional impact of chronic edema in the adult population within the ILF, Japan. LIMPRINT in Japan was a two-phase project conducted between 2014 and 2015, which included a prevalence study and an impact study. Prevalence study Setting. In this study, all hospitalized patients at all appropriate wards were investigated to identify patients with chronic edema (excluding children <18 years and the Department of Psychiatry) on a specific day. The facilities were a university hospital (n = 600, 31 medical departments), an acute community hospital (n = 195, 13 medical departments), and a long-term medical facility (n = 310, 5 medical departments). Definition and assessment of chronic edema. To determine the prevalent cases of chronic edema, patients whose edema continued over 3 months based on interviews and medical chart reviews were defined as having chronic edema. First, the chief investigators in cooperation with in-charge nurses at each facility assessed chronic edema by inspection. If it was difficult to determine the presence of chronic edema by inspection, the AFTD-pitting test was used. 2 AFTD is an acronym derived from the four factors used for the test: Anatomical locations of edema assessment; Force required to pit; the amount of Time; and the Definition of edema. Analysis. The prevalence of chronic edema was calculated by dividing the number of patients with chronic edema by the total number of inpatients, and 95% confidence intervals (CIs) were also calculated. Impact study Setting. Six facilities, including two outpatient clinics in acute care hospitals (one led by a physician and the other led by a nurse), inpatient wards in two acute care hospitals, and two nursing home/long-term care facilities, participated in this study. The two outpatient clinics specialized in lymphedema management led by a gynecologist or a nurse certified as a lymphedema therapist. Two wards in a university hospital and a community hospital participated in the prevalence study. In the university hospital, the breast surgery department, gynecology department, and rehabilitation department follow up lymphedema patients with timely referral to a clinical nurse specialist in cancer nursing and a certified expert nurse in breast cancer care. In a community hospital, a clinical nurse specialist in cancer nursing with lymphedema therapist certification and a general nurse conduct their own outpatient clinic for lymphedema patients that sees patients once a week. The long-term care facilities do not have a special system for chronic edema management. The patient inclusion criteria were as follows: older than 18 years; swelling for longer than 3 months; and able to understand the study as set out in the information sheet and give informed consent. The patient exclusion criteria were as follows: unwilling or unable to participate for whatever reason; receiving end-of-life care; and not considered to be in the patient's best interest to participate, as decided by the lead clinician. Data collection. A random sample could be obtained in two facilities; the two wards from the university hospital and the community hospital with chronic edema were identified due to limited resources. 
A random permuted block design allowed for a one-third sample to be taken. In the long-term care facility, the investigators collected data from all participants. In the outpatient clinic, participants who had an appointment for the service on that day were included in the survey.

Questionnaire survey. Questionnaires developed by the ILF were translated into Japanese followed by back translation to English for validation. The core tool was used to gather patient demographic data, and module tools were used to collect data on various aspects of the patients. WHODAS 2.0, a generic assessment instrument for health and disability, was used to assess six domains of functioning, including cognition, mobility, self-care, getting along, life activities, and participation, with five possible response options (0 = none, 1 = mild difficulty, 2 = moderate difficulty, 3 = severe difficulty, and 4 = extreme difficulty or cannot do). The overall functioning score was calculated according to the guideline provided by the World Health Organization (WHO). 3 The scores for each item were summed up, and then the total score was divided by 48. A higher score indicates a more severe disability status. The EQ-5D, which is a generic health-related QOL profile instrument developed for measuring utility, was used. 4 It contains five domains: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. A single answer with three possible response options (1 = no problem, 2 = some/moderate problems, and 3 = extreme problems) was required. The EQ-5D has been found to be sensitive to the effect of lymphedema on health-related quality of life. 5 The Japanese version was validated for the Japanese general population. 6 Scores from the five domains were combined into a single utility score between -0.594 (worst possible state) and 1.000 (best possible state) based on the Japanese weighting system. 7 The perceived current health state was measured by asking respondents to indicate their current health state on a Visual Analogue Scale with endpoints labeled 0 ("Worst imaginable health state") and 100 ("Best imaginable health state"). The LYMQOL was used to determine the level of QOL related to lymphedema. 8,9 This scale was developed to assess condition-specific QOL of patients with lymphedema of the limbs. The questions cover four domains (symptoms, body image/appearance, function, and mood) with four possible response options (1 = not at all, 2 = a little, 3 = quite a bit, and 4 = a lot). Scores for each domain were calculated according to the previous article. 8 A higher LYMQOL score indicates a lower QOL. For overall QOL related to lymphedema, the responder can pick one item from 0 (= poor) to 10 (= excellent).

Analysis. Data were analyzed according to the four types of facilities involved: a gynecologist-led outpatient clinic, a lymphedema therapist nurse-led outpatient clinic, an acute care hospital ward, and a long-term care facility. Descriptive data are expressed as N (%) for categorical variables and medians (interquartile range) for continuous variables. The prevalence study determined the point prevalence in each facility. In the impact study, the data are presented according to the facility type. The facilities were classified into four groups: outpatient clinic, inpatient ward, nursing home, and long-term care facilities.

Ethical considerations. The study protocol was approved by the Medical Ethics Committee of Kanazawa University. Informed consent was obtained from each of the patients or their proxies.
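As an illustration of the point-prevalence calculation described above (patients with chronic edema divided by all surveyed inpatients, with a 95% CI), a minimal sketch follows. The case counts are back-calculated from the reported prevalences and facility sizes and are therefore only approximate, and the Wilson interval is an assumed choice of method, since the paper does not state how the CIs were computed.

# Minimal sketch of a facility-level point-prevalence estimate with a 95%
# confidence interval (cases / total inpatients on the survey day).
# The case counts are approximate back-calculations, not the study data,
# and the Wilson interval is an assumed choice of method.
from statsmodels.stats.proportion import proportion_confint

facilities = {
    # facility: (patients with chronic edema, inpatients surveyed)
    "university hospital": (30, 600),
    "acute community hospital": (15, 195),
    "long-term medical facility": (205, 310),
}

for name, (cases, total) in facilities.items():
    prevalence = cases / total
    low, high = proportion_confint(cases, total, alpha=0.05, method="wilson")
    print(f"{name}: {prevalence:.1%} (95% CI {low:.1%} to {high:.1%})")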
Ethical considerations. The study protocol was approved by the Medical Ethics Committee of Kanazawa University. Informed consent was obtained from each of the patients or their proxies. Impact of chronic edema In total, 111 patients were investigated for the impact of chronic edema, and the data were analyzed for each facility: the gynecologist-led outpatient clinic (n = 51), the lymphedema therapist nurse-led outpatient clinic (n = 20), the acute care hospital ward (n = 10), and the long-term care facility (n = 30). The median patient age was 65 years, with over 95% of outpatients being female in both facilities. Inpatients of the acute care hospital ward were 59.5 years of age, with 60% female, and those at the long-term care facility were 85 years of age, with 76.7% female. In addition, 0% of outpatients and 96.7% of long-term care facility residents were immobile (Table 1). Lymphedema conditions in each facility are shown in Table 2. Except for two cases, all patients had secondary lymphedema. Duration of edema was 5-10 years in 23 cases (46%) in the gynecologist-led clinic and 5 cases (25.0%) in the nurse-led clinic, and the duration was 3-6 months in 7 cases (70.0%) in the acute care hospital, while the duration of chronic edema ranged from 36 months to over 10 years in the patients in the long-term care medical facility. Overall, 25 cases (50%) had a history of cellulitis, with 2 cases (3.9%) having 2 episodes and 2 cases (3.9%) having 3 episodes in the gynecologist-led clinic. In each outpatient clinic, over 80% of patients received standard lymphedema care, including skin care advice, massage, multilayer garment, and exercise advice. Positive subjective opinions regarding the quality of edema control ranged from 43.3% to 50.0% in all facilities. The relevant anatomical locations of chronic edema for both sides of the whole body are summarized in Table 3. Of all the body parts, chronic edema was most common in the lower limb, foot, lower leg, and upper leg. Table 4 shows lymphedema status for outpatients in acute care hospitals. Upper limb lymphedema patients at International Society of Lymphology (ISL) stage II accounted for 69.2% of cases in the gynecologist-led and 70.0% in the nurse-led outpatient clinics. Lower limb lymphedema patients at ISL late stage II accounted for 63.2% of cases in the gynecologist-led and 40.0% in the nurse-led outpatient clinics. There were no wounds at the affected edema sites in these subjects. Table 5 shows the generic and disease-specific QOL status assessed by WHODAS 2.0, EQ-5D, and LYMQOL for the upper and lower limbs, respectively. Discussion There were four new findings in this study. First, the prevalence of chronic edema was much higher in the long-term care facility than in the acute care hospitals. Second, the prevalence of cellulitis episodes was approximately 50% in the gynecologist-led outpatient clinic, even though over 80.0% of the patients underwent standard management for edema. Third, edema could be found in the trunk region, including the buttock, abdomen, and chest-breast areas, in addition to the upper and lower limbs. Fourth, subjective satisfaction with control of edema was low, even though the QOL scores were good. The prevalence of chronic edema was much higher among patients in a long-term medical facility (66.1%) with a median age of 87.2 years than in both acute hospitals, including a university hospital (5.0%; median age 67.7 years) and a community hospital (7.7%; median age 70.2 years). According to the previous prevalence study of chronic edema, Moffatt et al.
reported that, while chronic edema/lymphedema can occur at any age, there was a clear increase in the rate with age. 10,11 Japan is already a super-aged society: the 2017 statistics showed an older adult population of approximately 28% and an average life expectancy of approximately 80.7 years in men and 87.0 years in women. 12 In Japan, more attention to edema management for elderly people is needed. In our results, the prevalence of cellulitis episodes was highest (49.0%) in the gynecologist-led outpatient clinic, compared with 15.0% to 30.0% in the other facilities. In the gynecologist-led outpatient clinic, over 80.0% of patients received standard management for lymphedema, such as skin care advice (94.1%), massage (100.0%), compression garments (96.1%), exercise advice (80.0%), and cellulitis advice (98.0%), which are known as best practices. 13 The reason why the number of cellulitis episodes was high despite standard care in the gynecologist-led clinic was the larger numbers of lower limb lymphedema patients (n = 38) and ISL late stage II patients (63.2%) than in the nurse-led clinic. Lower limb lymphedema and its severity are factors related to cellulitis. 14 Further investigation (e.g., frequency and methodology of management, and patient compliance) is needed to clarify the details that potentially prevent cellulitis episodes in these patients. This study also showed that edema can be found in the trunk region. In the gynecologist-led outpatient clinic, chronic edema was found in the buttock, abdomen, and upper chest-breast areas. Previous studies have not provided details of the regions affected by chronic edema in outpatient clinics. Further investigation of the care provided for chronic edema in those regions is needed. Generic QOL scores in lymphedema outpatients were 88.8%-100.0%, a relatively good status compared to inpatients at an acute care hospital (71.7%). It is quite interesting to note that the utility score in patients at outpatient clinics was extremely high (0.796-1.000). Professional-led clinics can offer optimal options for lymphedema management that can preserve patients' functional status, leading to a high utility score. However, subjective satisfaction with control of chronic edema was only 46.6% or 50.0% in both outpatients and inpatients. These results might be explained by the fact that health-related QOL status does not directly reflect subjective satisfaction with edema control. Further study will be needed to improve these subjective satisfaction ratings. This study has two limitations. First, this investigation was a facility-based study, not community-based. Therefore, it cannot be directly compared with the community-based studies in LIMPRINT. Second, the questionnaires related to QOL were not suitable for elderly inpatients due to cognitive dysfunction and dementia. Therefore, questionnaires for these subjects might need to be developed. Conclusion This LIMPRINT Japan branch survey investigated the prevalence of chronic edema in various care settings and its impact using a detailed questionnaire. The prevalence of chronic edema varied according to the facility type, ranging from 5.0% to 66.1%. The edema was located in all body parts, including the trunk region. Subjective satisfaction with control of edema was poor, while general QOL was good. This large health care issue needs more attention.
2019-04-19T13:02:30.785Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "b64469f81a9b061cd5722538f1027f22f5906eb0", "oa_license": "CCBY", "oa_url": "https://www.liebertpub.com/doi/pdf/10.1089/lrb.2018.0080", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2e1117c441037f027c988c48322bce903c379657", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
36250283
pes2o/s2orc
v3-fos-license
Matrix Metalloproteinase-3 Releases Active Heparin-binding EGF-like Growth Factor by Cleavage at a Specific Juxtamembrane Site* Heparin-binding epidermal growth factor-like growth factor (HB-EGF) is synthesized as a membrane-anchored precursor that is cleaved to release the soluble mature growth factor. The two forms are active as juxtacrine and paracrine/autocrine growth factors, respectively. The enzymes that process the HB-EGF transmembrane form are unknown. Accordingly, an in vitro assay was established using a fusion protein in which alkaline phosphatase (AP) replaced the transmembrane and cytoplasmic domains of HB-EGF (HB-EGF JM-AP). The fusion protein was anchored to agarose beads coated with anti-AP antibodies. Several matrix metalloproteinases (MMPs) were tested for the ability to release soluble HB-EGF in the in vitrosystem. MMP-3 released soluble 12-kDa immunoreactive and mitogenic HB-EGF within 30 min. On the other hand neither MMP-2 nor MMP-9 had any cleavage activities. A non-cleavable mutant was prepared by replacing the juxtamembrane (JM) region of HB-EGF with the JM region of CD4. The mutant HB-EGF, which in its full-length form was as active a juxtacrine growth factor as was the wild type HB-EGF in vivo, was not cleaved by MMP-3 in the in vitro assay. The C-terminal portion of the cleaved HB-EGF JM-AP that remained attached to the anti-AP beads was N-terminally sequenced and the MMP-3 cleavage site was determined to be Glu151-Asn152, a site within the JM domain. MMP-3 treatment also released soluble HB-EGFin vivo from MC2 cells expressing transmembrane HB-EGF precursor, at a level of about 2-fold above control. It was concluded that MMP-3 cleaves HB-EGF at a specific site in the JM domain and that this enzyme might regulate the conversion of HB-EGF from being a juxtacrine to a paracrine/autocrine growth factor. A number of growth factors and cytokines are synthesized as membrane-anchored precursor proteins that can be proteolytically cleaved to release soluble factors. Examples are TGF-␣, 1 tumor necrosis factor-␣, the c-kit ligand, and colony stimulating factor 1 (1). Conversion of insoluble precursor to soluble released factor by enzymatic cleavage may constitute an important post-translational modification that regulates growth factor activity and bioavailability. For example, the transmembrane precursor form of TGF-␣ is a juxtacrine growth factor that mediates signaling, proliferation, and adhesion of neighboring cells (2,3). On the other hand, soluble TGF-␣ can diffuse freely and is a potent paracrine and autocrine growth factor (4). There has been great interest in identifying the enzymes responsible for proteolytic cleavage of cytokine and growth factor precursors, one reason being that release of these factors often contributes to pathological processes. For example, interleukin-1␤-converting enzyme has been identified as an intracellular cysteine protease that releases mature interleukin-1␤, thereby contributing to inflammatory disease (5). Release of soluble TGF-␣, an oncogenic growth factor is mediated by a yet unidentified protease that cleaves at an Ala-Val site in the juxtamembrane (JM) region of the precursor (4). Apopain, the product of the pro-apoptotic gene, ced-3, is a cysteine protease related to interleukin-1␤-converting enzyme and is required for programmed cell death in Caenorhabditis elegans (6). Hydroxamic acid-based derivatives which are specific inhibitors of metalloproteinases have been used to identify precursor-cleaving proteases. 
For example, TACE, a metalloproteinase member of the mammalian adamalysin family, specifically cleaves the tumor necrosis factor ␣ precursor to release a potent proinflammatory and immunomodulatory cytokine implicated in numerous inflammatory conditions and cachexia (7)(8)(9)(10)(11). Metalloproteinases have also been implicated in the proteolytic release of TGF-␣ (12), the ␤-amyloid precursor protein (12), the FGFR1 ectodomain (13), the interleukin-6 receptor ectodomain (12,14), the human thyrotropin receptor ectodomain (15), and the lymphocyte L-selectin adhesion molecule (16 -18). Our laboratory has been analyzing the structural and biological properties of heparin-binding epidermal growth factorlike growth factor (HB-EGF) since its discovery (19 -21). HB-EGF is a member of the EGF family that is structurally homologous to EGF, TGF-␣, amphiregulin, the neuregulins, betacellulin, and epiregulin (1,(22)(23)(24). HB-EGF was first identified in the conditioned medium of human macrophages as a soluble heparin-binding potent mitogen for fibroblasts, smooth muscle cells, keratinocytes but not for endothelial cells (19,20,25). It activates HER1/erbB1 as does EGF, TGF-␣, amphiregulin, and betacellulin. However, HB-EGF also activates HER4/ erbB4, in common with neuregulin and betacellulin (26). Activation of HER4/erbB4 by HB-EGF results in chemotaxis but not proliferation, unlike activation of HER1/erbB1 which leads to both activities. The significance of HB-EGF heparin binding was demonstrated in that the chemotactic activity of HB-EGF for smooth muscle cells is mediated by interactions with cell surface heparan sulfate proteoglycans acting as low affinity receptors (21). HB-EGF has been implicated as a participant in a variety of processes such as wound healing, blastocyst implantation, smooth muscle cell hyperplasia, atherosclerosis, and tumor growth (27)(28)(29)(30)(31). The transmembrane precursor form of HB-EGF precursor has several biological properties. As a purified protein, recombinant transmembrane HB-EGF is a chemotactic and mitogenic factor (32). In co-culture, it is a juxtacrine proliferative factor for neighboring cells (33,34), and a juxtacrine adhesion factor for mouse blastocysts (29). It is also the unique receptor for diphtheria toxin (35,36). Transmembrane HB-EGF synthesis is induced during skeletal muscle cell differentiation, and its gene expression is activated in these cells by MyoD (37). Release of soluble HB-EGF has been found to occur both constitutively and in a regulated manner. Macrophages and T lymphocytes are among the cell types that release HB-EGF constitutively (19,20,38). On the other hand, in the case of adherent cells such as MDA-MB-231 (39) and Vero (34), the membrane-anchored precursor is the predominant form and processing is inducible, for example, by addition of phorbol esters. Phorbol ester treatment of these cells renders them diphtheria toxin-resistant, suggesting that the transmembrane precursor has been fully converted into the soluble form (39). The cleavage sites for HB-EGF processing and the enzymes responsible are not known. The endopeptidase, furin, has been implicated in the rapid constitutive release of the HB-EGF N-terminal propeptide (40). The enzyme(s) that release soluble HB-EGF from the cell surface may be matrix metalloproteinases (MMP) since phorbol ester-induced HB-EGF secretion is inhibited by hydroxamic-acid based derivatives that are MMP inhibitors (41). 
2 Considering that the HB-EGF cleavage enzymes might be MMP-related, we devised an in vitro assay and tested several MMPs for their ability to cleave and release HB-EGF. In this report we demonstrate that MMP-3 (stromelysin-1) but not MMP-2 nor MMP-9, cleaves HB-EGF to release soluble immunoreactive and mitogenic HB-EGF. Furthermore, cleavage occurs specifically in the juxtamembrane region at a Glu 151 -Asn 152 site. In addition, a non-cleavable HB-EGF mutant was generated to exclude artifacts of nonspecific cleavage in measuring juxtacrine activity and determining juxtamembrane cleavage sites. EXPERIMENTAL PROCEDURES Cell Culture-Cell culture reagents were purchased from Life Technologies, Inc. (Gaithersburg, MD). Mouse hematopoietic 32D cells and EP170.7 cells, which are 32D cells transfected with the EGF receptor, were kindly provided by Dr. Jackie Pierce (National Institutes of Health, Rockville, MD) and maintained in RPMI 1640 supplemented with 10% FCS, GPS, and 5% WEHI-3 cells conditioned medium (42). COS7 and Vero cells were purchased from American Type Culture Collection (ATCC, Rockville, MD) and maintained in Dulbecco's modified Eagle's medium containing 10% FCS and GPS. Chinese hamster ovary cells were obtained from ATCC and maintained in ␣-minimal essential medium containing 10% FCS and GPS. MC2 rat ventral prostate epithelial cells were provided kindly by Dr. Michael R. Freeman (Children's Hospital, Boston, MA) and maintained in T-medium (Dulbecco's modified Eagle's medium/F-12, 5 g/ml insulin, 5 g/ml transferrin, 0.244 g/ml biotin, 25 g/ml adenine, and 3 ϫ 10 Ϫ11 M triiodothyronine) containing 5% FCS and GPS. MC2 and 32D cells transfected with HB-EGF-AP were maintained in medium containing 300 g/ml G418 (Geneticin, Life Technologies, Inc.). Sf9 cells were purchased from ATCC and maintained in SF900 II serum-free medium supplemented with 10% FCS and GPS. For recombinant fusion protein production, Sf9 cells were adapted to the serum-free medium and maintained in spinner flasks (Wheaton Instruments, NJ). Zymography-Native pro-MMP-3 (greater than 98% homogeneity by SDS-PAGE electrophoresis and Western blot) was purchased from Biogenesis Ltd. (Sandown, NH). Human recombinant MMP-2 and MMP-9 were purchased from Biogenesis Ltd. and provided by Dr. R. Fridman (Wayne State University School of Medicine, Detroit, MI). The MMPs were activated prior to the cleavage experiments. To activate pro-MMP-2 and pro-MMP-9, samples were incubated with APMA at a final concentration of 1 mM for 1 h at 37°C. Pro-MMP-3 was activated in the same manner with the exception being an increased incubation time of 3 h. All zymographic reagents were obtained from Bio-Rad with the exception of APMA which was obtained from Sigma. Substrate gel electrophoresis was carried out as described previously (43). Briefly, Type I gelatin or casein was added to the standard Laemmli acrylamide polymerization mixture at a final concentration of 1 mg/ml. MMP samples were mixed with substrate sample buffer (10% SDS, 40% sucrose, 0.25 M Tris-Cl, pH 6.8, and 0.1% bromphenol blue) and loaded without boiling into wells of a 4% acrylamide Laemmli stacking gel in a mini-gel apparatus. Polyacrylamide gels were run at 15 mA/gel while stacking and at 20 mA/gel during the resolving phase at 4°C. After electrophoresis, the gels were soaked in 2.5% Triton X-100 with gentle shaking for 30 min at ambient temperature with one change of detergent solution. 
The gels were rinsed and incubated overnight at 37°C in substrate buffer (50 mM Tris-Cl buffer, pH 8, 5 mM CaCl 2 , and O.02% NaN 3 ). After incubation, gels were stained for 15-30 min in 0.5% Coomassie Blue R-250 in acetic acid:isopropyl alcohol:H 2 O (1:3:6), destained in H 2 O and photographed. Production of Full-length AP-HB-EGF Fusion Proteins with Wild Type or Mutated JM Domains-For juxtacrine co-culture and in vivo MMP-3 cleavage experiments, expression vectors for the synthesis of full-length HB-EGF with both wild type and mutated JM domains were used. A full-length HB-EGF expression vector in which the AP has been inserted between Leu 83 and Thr 85 in the N-terminal region of the mature HB-EGF domain (Fig. 1A) has been previously described and is designated here as AP-HB-EGF (29). To obtain a putative non-cleavable HB-EGF mutant, the JM domain of HB-EGF spanning from Pro 149 to Thr 160 was replaced with the amino acid sequence of the corresponding JM domain of CD4 (CD4-JM) (44). The overall strategy was to substitute the HB-EGF JM with a cloning cassette so as to be able to introduce any desired amino acid sequence such as the CD4-JM domain. To do this an HB-EGF mutant was constructed which substituted the JM domain (Ser 147 -Ile 162 ) with a spacer sequence containing two BsmBI restriction sites using polymerase chain reaction (PCR). The following oligonucleotides were used: oligo A, 5Ј-CCCAAGCTTGCCATGAAGCT-GCTGCCGTCG-3Ј containing nucleotides 1-18 of the human HB-EGF cDNA preceded by a HindIII restriction site; oligo B, 5Ј-GCTCTAGAC-GTCTCTCAGCCCATGACACCTCTC-3Ј containing nucleotides 422-441 of the human HB-EGF cDNA followed by a BsmBI site and an XbaI site; oligo C, 5Ј-GCTCTAGACGTCTCACTGGCCGTGGTGGCTG-TG-3Ј containing nucleotides 487-504 of the human HB-EGF cDNA preceded by an XbaI site and a BsmBI site; oligo D, 5Ј-CCGGAATTC-CTAGTGGGAATTAGTCAT-3Ј containing nucleotides 610 -625 of the human HB-EGF cDNA followed by an EcoRI site. Both PCR products were subcloned into a pCR3 vector (Invitrogen) and sequenced. The HindIII-XbaI fragment and the XbaI-EcoRl fragment were then released and ligated into the HindIII-EcoRI site of the expression vector pcDNA3 (Invitrogen) and the DNA sequence of the product was confirmed. The resulting construct, pHB-EGF ⌬Ser 147 -Ile 162 , facilitated substitution of Ser 147 -Ile 162 with any desired amino acid sequence. To prepare specifically the HB-EGF CD4-JM mutant (Fig. 1A), two 52-mer oligo nucleotides: 5Ј-GCTGAGCCTCAAGGTTCTGCCCACATGGTCC-ACCCCGGTCGAGCCAACCATC-3Ј and 5Ј-CCAGGATGGTTGGCTGC-ACCGGGGTGGACCATGTGGGCAGAACCTTGAGGCT-3Ј, were synthesized and annealed to each other. The resulting oligonucleotide encoded Ser 147 -Leu 148 of human HB-EGF followed by Lys 362 -Pro 373 of human CD4 (44), followed by Thr 161 -Ile 162 of human HB-EGF. This oligonucleotide was ligated to the BsmBI sites of pHB-EGF ⌬Ser 147 -Ile 162 and the DNA sequence of the resulting construct, pHB-EGF CD4-JM was confirmed. To prepare the AP-HB-EGF CD4-JM fusion protein, the 4.5-kilobase pair BsmI fragment of pHB-EGF AP (29) was ligated with the 2.5-kilobase pair BsmI fragment of pHB-EGF CD4 Lys 362 -Pro 373 to obtain a full-length expression vector in which the AP has been inserted between Leu 83 and Thr 85 in the N-terminal region of the mature HB-EGF domain (29) (Fig. 1A). 2 M. Suzuki and M. Klagsbrun, unpublished data. 
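As a small aid for following the cloning cassette strategy above, the sketch below scans a primer for the recognition sequences of the enzymes named in the text (HindIII AAGCTT, XbaI TCTAGA, EcoRI GAATTC, BsmBI CGTCTC). It checks only the written top strand and is an illustration, not part of the original procedure.

```python
# Recognition sequences of the enzymes named in the cloning strategy above.
SITES = {"HindIII": "AAGCTT", "XbaI": "TCTAGA", "EcoRI": "GAATTC", "BsmBI": "CGTCTC"}

def find_sites(oligo: str):
    """Return the 0-based start positions of each recognition site found on the written strand."""
    oligo = oligo.upper()
    return {enzyme: [i for i in range(len(oligo)) if oligo.startswith(seq, i)]
            for enzyme, seq in SITES.items() if seq in oligo}

# Oligo B from the text (5'->3'), expected to carry both an XbaI and a BsmBI site near its 5' end.
print(find_sites("GCTCTAGACGTCTCTCAGCCCATGACACCTCTC"))   # -> {'XbaI': [2], 'BsmBI': [8]}
```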
Treatment of Cells Expressing Transmembrane AP-HB-EGF with Phorbol Esters and Matrix Metalloproteinases-For analysis of soluble HB-EGF release in response to phorbol esters in vivo, wild type AP-HB-EGF plasmids and plasmids containing the AP-HB-EGF CD4-JM mutant were transiently transfected for 48-72 h into COS7, Chinese hamster ovary, Vero, and MC2 cells using LipofectAMINE (Life Technologies, Inc.). Cell surface expression levels of wild type AP-HB-EGF and the CD4-JM mutant were evaluated by measuring AP activity after fixation with 4% paraformaldehyde/phosphate-buffered saline. The cell lines were incubated with or without 1 μM phorbol myristate acetate (PMA) for 90 min. Supernatant fractions (2 ml) were bound to heparin-Sepharose beads (25 μl) and after washing the beads with 20 mM HEPES, pH 7.2, 0.2 M NaCl, bound proteins were eluted with 100 μl of 20 mM HEPES, pH 7.2, and 2.5 M NaCl. An aliquot of each eluate was assayed for AP activity using naphthol phosphate (Sigma diagnostic kit) as a substrate. After incubation at room temperature for 0.5-6 h, absorbance at 410 nm was measured in a 96-well multiwell plate reader (Dynatech MR 5000). The amount of soluble AP-HB-EGF released by each cell type was normalized to the cell surface expression level of transmembrane AP-HB-EGF. For analysis of MMP-induced release of soluble AP-HB-EGF in vivo, stably transfected clones of MC2 cells expressing full-length AP-HB-EGF were obtained by further culturing in a selection medium containing 0.3-0.6 mg/ml G418. The cells were grown to confluence in 24-well plates, washed with Dulbecco's modified Eagle's medium without phenol red and exposed to various concentrations of MMP activated with APMA. The APMA was removed before MMP addition by changing the buffer using ultrafiltration (Centricon 3, Amicon). The supernatant fractions were incubated with heparin-Sepharose beads and the bound material was released with 2.5 M NaCl and assayed for AP activity. Juxtacrine Growth Factor Activity-Parental MC2 cells and MC2 cells transfected with full-length AP-HB-EGF or the AP-HB-EGF CD4-JM mutant cDNA were plated in 96-multiwell dishes at increasing cell densities and cultured overnight. After washing with Dulbecco's modified Eagle's medium, 10% FCS, and 2 M NaCl to remove AP-HB-EGF possibly trapped by cell surface heparan sulfate proteoglycans, cells were fixed with 4% paraformaldehyde/phosphate-buffered saline and washed three times with RPMI 1640, 10% FCS. EP170.7 cells were added at a concentration of 2 × 10 4 cells/200 μl/well and after 42 h, [3H]thymidine (1 μCi/well) was added and the cells were incubated for 6 h. The EP170.7 cells, which grow in suspension, were harvested in a cell harvester (Microbeta Plus, Wallac) and DNA synthesis was measured by assaying the incorporation of [3H]thymidine into DNA (38). To compare the specific activities (DNA synthesis/amount of cell surface HB-EGF) of the wild type and CD4-JM mutant transmembrane AP-HB-EGF juxtacrine growth factors, cell surface AP expression levels were determined after fixation of the cells with 4% paraformaldehyde/phosphate-buffered saline. The AP values were 0.456 OD410/min per 3.45 × 10 6 cells and 0.95 OD410/min per 3.96 × 10 6 cells for wild type and mutant HB-EGF, respectively, equivalent to a ratio of 0.56 wild type/mutant cell surface HB-EGF.
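The normalization just described, AP activity expressed per cell and then taken as a wild type/mutant ratio, is simple arithmetic on the values quoted above; the short sketch below restates it, and the small rounding difference from the reported 0.56 is expected.

```python
def ap_rate_per_cell(od410_per_min: float, cells: float) -> float:
    """Cell-surface AP expression expressed per cell (OD410/min divided by cell number)."""
    return od410_per_min / cells

# Values quoted above for the wild type and CD4-JM mutant transfectants.
wt = ap_rate_per_cell(0.456, 3.45e6)
mut = ap_rate_per_cell(0.95, 3.96e6)
print(round(wt / mut, 2))   # -> 0.55, i.e. roughly the 0.56 wild type/mutant ratio quoted above
```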
In Vitro Assay for Cleavage and Release of Soluble HB-EGF-To establish an in vitro assay for identifying enzymes that can release soluble HB-EGF, fusion proteins were prepared containing the mature HB-EGF domain, either the HB-EGF JM domain or the CD4-JM domain, and placental AP replacing the transmembrane and cytoplasmic domains at the C terminus (Fig. 3). This construct was designated as HB-EGF-AP. To do this, the sequence corresponding to the first 149 amino acids of the human HB-EGF cDNA (19) was amplified by PCR using synthetic oligonucleotide primers, 5Ј-GCTCTAGAGCATGAAGC-TGCTGCCGTCG-3Ј and 5Ј-CGCAAGCTTGGGTTGTGTGGTCATAGG-T-3Ј. The PCR product was digested with XbaI and HindIII and ligated to XbaI and HindIII sites of pUC18. The baculovirus transfer vector pVL1392 (Invitrogen) was digested with BamHl, treated with Klenow enzyme and cut again with XbaI. In a single step the XbaI-HindIII HB-EGF fragment and the HindIII-HpaI fragment of a human placental AP coding (tag-1) plasmid (45) were ligated into the linearized plasmid pVL1392 (pVLHB-EGF-AP). A putative non-cleavable mutant was prepared by substituting the CD4-JM domain described above for the HB-EGF JM domain at position Pro 149 -Thr 160 . To generate this fusion protein, the following oligonucleotides were synthesized and annealed: 5Ј-TGAGTCTCAAGGTTCTGCCCACATGGTCCACCCCGG-TGCAGCCAACCCA-3Ј and 5Ј-CAGAGTTCCAAGACGGGTGTACCAG-GTGGGGCCACGTCGGTTGGGTTCGA-3Ј. The resulting oligonucleotide possesses overhanging ends that could be ligated to CelII and HindIII sites. The plasmid pVLHB-EGF-AP was cut by both CelII and HindIII and ligated with the synthetic oligonucleotide to obtain pV-LHB-EGF CD4-JM-AP. To obtain recombinant HB-EGF fusion pro-teins, the baculovirus expression system was used (32). Briefly, plasmids pVLHB-EGF-AP and pVLHB-EGF CD4-JM-AP were used to generate recombinant baculovirus. Sf9 cells growing in a late logarithmic phase in serum-free medium (SF900) were infected with recombinant baculovirus clones. Conditioned medium (500 ml) was harvested 96 -120 h post-infection and applied to a TSK-heparin column (25 ml, TosoHaas, Tokyo, Japan) which was washed extensively with 20 mM HEPES, pH 7.2, 0.2 M NaCl. Proteins were eluted with a 0.2-2.0 M NaCl linear gradient. The elution of the fusion proteins was monitored by measuring AP activity using a Sigma diagnostic kit with naphthol phosphate as a substrate. After incubation at room temperature for 0.5-6 h, absorbance at 410 nm was measured using a 96-multiwell plate reader. The peak fractions were pooled and diluted to 0.2 M NaCl with 20 mM Hepes, pH 7.2. A second round of TSK-heparin FPLC was performed and fractions were assayed for AP activity. SDS-PAGE with silver staining and Western blot (32) with anti-HB-EGF antibody number 197, kindly provided by Dr. Judy Abraham (Scios, Sunnyvale, CA) was used to ascertain the purity and size of the fusion proteins, which were expected to be about 72 kDa. Active purified fusion protein fractions were pooled and used for the in vitro HB-EGF cleavage assay. The in vitro cleavage assay was performed as described schematically in Fig. 3. Excess amounts of purified HB-EGF JM-AP or HB-EGF CD4-JM-AP fusion proteins were incubated with anti-human placental AP-antibody conjugated agarose beads (Sigma) at 4°C for 12 h. The beads were washed extensively with 10 mM HEPES, pH 7.2, 150 mM NaCl, and 5 mM CaCl 2 . Five l of the anti-AP coated beads bound approximate 2.5 g of fusion protein. 
Aliquots (5 l) of the beads were incubated with 5 g/ml activated MMPs at 37°C using gentle rotation and then pelleted at 1,800 ϫ g for 2 min. The supernatant fraction was collected and assayed for the ability to stimulate DNA synthesis in EP170.7 cells (38) and for HB-EGF protein content by SDS-PAGE and Western blot analysis (32). The pelleted beads were washed extensively with 10 mM Hepes, pH 7.2, 150 mM NaCl, and 5 mM CaCl 2 and boiled in 2 ϫ SDS-PAGE buffer. The dissolved proteins were analyzed by SDS-PAGE and silver staining or subjected to N-terminal sequencing. N-terminal Sequencing-The HB-EGF JM-AP fusion proteins attached to anti-AP coated beads (30 g/60 l beads) were incubated for 300 min with 5 g/ml MMP-3 at 37°C in a volume of 150 l. After incubation, the beads were washed twice with 10 mM Hepes, pH 7.2, 150 mM NaCl, and 5 mM CaCl 2 , and boiled in 2 ϫ SDS-PAGE sample buffer. After SDS-PAGE, proteins were transferred to a polyvinylidene difluoride membrane (ProBlott, Applied Biosystems). The proteins were stained with 0.1% Coomassie Brilliant Blue in 50% methanol and the major 60-kDa band was cut out and subjected to N-terminal microsequencing using an Applied Biosystems model 477A microsequenator as a service provided by Dr. William Lane, of the Harvard Microchemistry Facility (Cambridge, MA). HB-EGF Juxtacrine Growth Factor Activity-It has been shown previously that the transmembrane HB-EGF precursor is a juxtacrine growth factor (33,34). However, those studies could not rule out definitively the possibility that the juxtacrine growth stimulation was merely an artifact resulting from the processing of the membrane-anchored precursor during the co-culture and the subsequent release of active soluble paracrine HB-EGF. To address this possibility, a putative noncleavable mutant was produced that would resist artifactual proteolytic degradation (Fig. 1). This was done by altering the AP-HB-full-length fusion protein (29) so that 12 amino acids in the JM region of HB-EGF were replaced by 12 amino acids in the JM region of the CD4ϩ T cell CD4 transmembrane antigen (Fig. 1A). This construct was designated as AP-HB-EGF CD4-JM. The rationale for this substitution was that CD4ϩ cells release HB-EGF but not CD4 (38), suggesting that the enzymes that cleave HB-EGF do not target the CD4 protein. The ability of the CD4-JM mutant to resist proteolytic degradation was tested in cells treated with PMA. PMA has been shown previously to release soluble HB-EGF from its precursor in several cell types including Vero and MDA-MB 231 cells (34,39). PMA treatment also released soluble AP-HB-EGF from COS7, Chinese hamster ovary, Vero, and MC2 cells transfected with the wild type construct (Fig. 1B). However, PMA did not release AP-HB-EGF from cells transfected with the CD4-JM mutant. To test for juxtacrine growth factor activity, MC2 cells transfected with either the wild type AP-HB-EGF construct or with the non-cleavable CD4-JM mutant construct were tested for juxtacrine growth stimulation of attached EP170.7 cells which express EGF receptor (HER1/erbB1). Both MC2 transfectants stimulated EP170.7 cell DNA synthesis while parental MC2 cells did not (Fig. 2). When the juxtacrine growth factor activities were normalized for levels of cell surface wild type HB-EGF and CD4-JM as described under "Experimental Procedures," the specific activities of the two HB-EGF transmembrane juxtacrine factors were found to be similar. 
It was concluded that transmembrane HB-EGF was indeed a juxtacrine growth factor and that the non-cleavable mutant could be used as a control for studying proteolytic processing. An in Vitro Assay for Proteolytic Cleavage of Transmembrane HB-EGF-The processing of transmembrane HB-EGF, which is a juxtacrine growth factor, into soluble HB-EGF, a paracrine growth factor, represents a post-translational modification that may regulate HB-EGF function. To analyze the enzymatic processing of transmembrane HB-EGF, an in vitro assay was established (Fig. 3). HB-EGF cDNA consisting of mature, JM, transmembrane, and cytoplasmic domains (Fig. 3A) was modified such that the TM and cytoplasmic domains were replaced with placental AP (Fig. 3B). This fusion protein with AP at the C terminus was designated HB-EGF JM-AP. It was expressed in a baculovirus/insect cell system, purified by heparin affinity chromatography and immobilized onto agarose beads coated with anti-AP antibodies (Fig. 3C). To analyze for potential cleavage enzymes, samples to be tested were added to the beads for various time intervals at 37°C. The beads were pelleted and the supernatant fractions were tested for HB-EGF growth factor activity and HB-EGF immunoreactivity in a Western blot analysis. The beads containing the C-terminal remaining portion of the HB-EGF JM-AP fusion protein were subsequently analyzed by SDS-PAGE and N-terminal sequencing as will be described below. MMP-3 Cleaves HB-EGF in Vitro-Previous studies have demonstrated that phorbol esters induce proteolytic cleavage of transmembrane HB-EGF (34,39,41). Furthermore, in Vero, MC2, and U937 cells, the phorbol ester-induced cleavage is blocked by hydroxamic acid-based inhibitors of MMPs (41). 3 Accordingly, we tested several MMPs for their ability to release biologically active and immunoreactive HB-EGF from the immobilized HB-EGF JM-AP fusion protein. After activation of the enzymes with APMA, 5 μg/ml MMP-2, MMP-3, or MMP-9 was added to the substrate at 37°C and release of HB-EGF was measured by Western blot analysis (Fig. 4). No HB-EGF was released in the absence of MMPs (data not shown). However, after MMP-3 treatment, a single 12-kDa HB-EGF immunoreactive band was detected in the supernatant within 30 min (Fig. 4, lane 1) that comigrated with a recombinant HB-EGF standard (Fig. 4, lane 10). Greater levels of released immunoreactive HB-EGF were detected by 90 and 300 min (Fig. 4, lanes 2 and 3, respectively). In contrast, no immunoreactive HB-EGF was released by MMP-2 or MMP-9 at 30, 90, or 300 min (Fig. 4, lanes 4-6 and 7-9, respectively), or even at 12 h of incubation (not shown). The supernatants were also tested for HB-EGF growth factor activity (Fig. 5). MMP-3, but not MMP-2 or MMP-9, released HB-EGF that stimulated DNA synthesis in EP170.7 cells with kinetics comparable to the release of immunoreactive HB-EGF. Since all 6 cysteine residues in mature HB-EGF are required for mitogenic activity, release of biologically active HB-EGF by MMP-3 suggested strongly that the cleavage site for MMP-3 must be somewhere downstream of the sixth Cys residue as shown in Fig. 3A. The three MMPs used in these HB-EGF cleavage experiments were activated by APMA treatment in solution. To ensure that APMA did actually activate these enzymes, especially MMP-2 and MMP-9 which did not release soluble HB-EGF, zymography was carried out with gels into which gelatin was incorporated as described under "Experimental Procedures" (Fig. 6).
Prior to activation, the MMP-2 and MMP-3 preparations were primarily in zymogen forms, while the MMP-9 preparation contained both zymogen and active enzyme. Activated MMP-2 and MMP-9 readily degraded gelatin (Fig. 6A). MMP-2-induced gelatin degradation was associated with a 62-kDa band. Activated MMP-9-induced gelatin degradation was associated with an 82-kDa band. Activated MMP-3, which degraded gelatin poorly, degraded casein and this enzymatic activity was associated with a 45-kDa band (Fig. 6B). It was concluded that the APMA-treated MMP-2 and MMP-9 used in these experiments were the lower molecular mass active proteases, but that these activated proteases did not release biologically active or immunoreactive HB-EGF from the substrate. To determine whether cleavage occurred within the juxtamembrane domain, MMP-3 was assayed for the ability to cleave an HB-EGF JM-AP substrate in which the wild type JM (WT-JM) domain was replaced with the mutant CD4-JM domain (CD4-JM) (Fig. 7). While MMP-3 released soluble mitogenic HB-EGF (Fig. 7A, left) and immunoreactive HB-EGF (Fig. 7B, lane 2) from the wild type protein, it did not release mitogenic (Fig. 7A, right) or immunoreactive HB-EGF (Fig. 7B, lane 4) from the CD4-JM protein. In addition, when the proteins still associated with the anti-AP beads were analyzed by SDS-PAGE and silver staining, it was found that MMP-3 reduced the molecular mass of the 72-kDa HB-EGF JM-AP protein (Fig. 7C, lane 1, solid arrow) to about 60 kDa (Fig. 7C, lane 2, open arrow). However, MMP-3, even at a dose of 10 μg/ml, did not diminish the size of the 72-kDa HB-EGF CD4-JM-AP mutant protein (Fig. 7C, lanes 3 and 4). Taken together, these results, along with those in Fig. 1, suggest that both the MMP-3-mediated cleavage in vitro and the phorbol ester-induced cleavage in vivo occur within the JM domain. [Displaced legend to Fig. 4: immobilized HB-EGF JM-AP was incubated for 30 min (lanes 1, 4, and 7), 90 min (lanes 2, 5, and 8), and 300 min (lanes 3, 6, and 9) with 5 μg/ml of activated MMP-3 (lanes 1-3), MMP-2 (lanes 4-6), and MMP-9 (lanes 7-9); the beads were pelleted and 10 μl of the supernatant fractions were analyzed by SDS-PAGE and Western blot using an anti-HB-EGF antibody; recombinant HB-EGF (25 ng) was used as a standard (lane 10).] Identification of the MMP-3 Cleavage Site-To identify the site at which MMP-3 cleaves HB-EGF JM-AP, the anti-AP-coated beads containing the remaining C-terminal portion of the fusion protein were collected, centrifuged, and the bound HB-EGF proteins were visualized by SDS-PAGE (Fig. 8A). The time course showed conversion of 72-kDa HB-EGF JM-AP to a smaller 60-kDa species by 30 min with complete processing apparent at 300 min. The 60-kDa band generated after 300 min of MMP-3 treatment was transferred to a polyvinylidene difluoride membrane and subjected to N-terminal sequence analysis (Fig. 8C). A single sequence of the first 19 N-terminal amino acids of the cleaved product was obtained, NRLYTYDHTTQAYVRSSGI. This sequence contains amino acids in the JM domain and extends into the N-terminal region of AP. When this sequence was compared with the sequence of HB-EGF JM-AP (Fig. 8B), it was apparent that the cleavage site for MMP-3 in vitro was at Glu151-Asn152 within the JM region of the HB-EGF precursor. MMP-3 Treatment of Cells Expressing AP-HB-EGF-To determine whether MMP-3 could process transmembrane HB-EGF in vivo, MC2 cells transfected with full-length AP-HB-EGF were incubated with increasing concentrations of MMP-3 for 2 h (Fig. 9). At 5 μg/ml MMP-3, there was about a 2-fold increase in released AP-HB-EGF from the cells compared with non-MMP-3-treated controls.
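The site assignment described above, matching the new N-terminal sequence of the bead-bound fragment against the HB-EGF JM-AP sequence, amounts to a substring search. A minimal sketch follows; the context string stitches the juxtamembrane residues quoted elsewhere in the text onto the sequenced 19-mer and is only an illustration, not the full construct sequence.

```python
def map_new_n_terminus(precursor: str, fragment_n_term: str):
    """Locate the new N terminus created by cleavage: find where the sequenced fragment
    begins within the precursor and return the scissile bond as (P1 residue, P1' residue)."""
    i = precursor.find(fragment_n_term)
    if i <= 0:
        return None
    return precursor[i - 1], precursor[i]

# Ser147-Leu-Pro-Val-Glu (residues quoted in the text) stitched to the sequenced 19-mer; illustrative only.
context = "SLPVE" + "NRLYTYDHTTQAYVRSSGI"
print(map_new_n_terminus(context, "NRLYTYDHTTQAYVRSSGI"))   # -> ('E', 'N'), i.e. Glu151-Asn152
```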
On the other hand, AP-HB-EGF was not released at all from cells expressing the AP-HB-EGF fusion protein containing the CD4-JM mutant (data not shown). DISCUSSION The proteolytic processing of the HB-EGF transmembrane precursor to the mature soluble form is a key step in converting this juxtacrine growth factor into a paracrine/autocrine one. We have identified MMP-3 as one of the possible proteases responsible for this processing event based on the demonstration that MMP-3 releases soluble bioactive HB-EGF from an insoluble substrate in vitro consisting of mature HB-EGF, the HB-EGF JM domain, and AP at the C terminus replacing the transmembrane and cytoplasmic domains (HB-EGF JM-AP). Half-maximal release at 5 μg/ml MMP-3 occurs within 30 min. As might be predicted, the proteolytic cleavage occurs within the extracellular JM domain. The evidence for this is that (i) immunoreactive 12-kDa HB-EGF, the expected size, was released and was mitogenic for cells expressing EGF receptor (HER1/erbB1); proteolytic cleavage in the mature HB-EGF domain would lead to irreversible loss of growth factor activity; (ii) the HB-EGF CD4-JM mutant, in which the HB-EGF JM domain was replaced with that of CD4, was not cleaved by MMP-3, although its juxtacrine growth factor activity was comparable to wild type transmembrane HB-EGF, indicating that the CD4-JM replacement has no adverse effects on overall structure and function beyond the JM domain; and (iii) N-terminal sequencing of the fusion protein that remains anchored to the anti-AP-coated agarose beads indicated that the specific cleavage site is Glu151-Asn152, which is within the proposed JM domain of HB-EGF, 10 residues upstream of the TM domain. No other N termini were detected. [Displaced legend to Fig. 9: MC2 cells expressing full-length AP-HB-EGF, shown in Fig. 2 to have juxtacrine growth factor activity, were washed and incubated with activated MMP-3 at 0, 1.25, and 5 μg/ml at 37°C for 2 h; the supernatant fractions were assayed for AP activity; AP levels are normalized to the AP activity in the supernatant in the absence of MMP-3, which is given a value of 1.0; the data are plotted as the mean ± S.E. of duplicate samples.] Previous results, based on amino acid composition analysis of C-terminal tryptic peptides of mature HB-EGF released by phorbol ester treatment of Vero cells, suggested that the cleavage site might be at Pro149-Val150 (34). However, this analysis might have been incomplete. Alternatively, after cleavage at Glu151-Asn152 by an MMP-3-like enzyme, there may be additional enzymes available in vivo, for example, neutral endopeptidase (46), that further process mature HB-EGF by removal of C-terminal Val150 and Glu151. Cleavage by MMP-3 at Glu151-Asn152 of the HB-EGF Pro149-Val150-Glu151-Asn152 sequence is consistent with the demonstration that MMP-3 is highly effective in cleaving a Pro-Val-Glu-norvaline (Nva) synthetic fluorogenic peptide substrate at the Glu-Nva site with a kcat/Km of 218,000 s−1 M−1 (47). This substrate is among the most rapidly hydrolyzed fluorogenic MMP-3 substrates yet described. In addition, the HB-EGF cleavage substrate sequence of Leu148-Pro149-Val150-Glu151-Asn152-Arg153, with cleavage occurring at Glu-Asn, is compatible with stromelysin (MMP-3)-sensitive hexamer sequences that were generated based on bacteriophage peptide display analysis (48). One of these sequences, Ile-Pro-Phe-Glu-Gln-Arg (with Glu and Gln at the P1 and P1′ positions, respectively), resembles the HB-EGF cleavage site sequence, and has a relatively high kcat/Km for MMP-3.
Overall, the synthetic hexamers that have relatively high k cat /K m values also have predominantly a Pro residue at position P 3 , an Arg residue at position PЈ 2 and often a Glu residue at position P 1 , positions compatible with those in the HB-EGF cleavage substrate sequence. Although MMP-3 is very effective in releasing soluble HB-EGF from the anchored substrate, MMP-2 (72-kDa gelatinase A) and MMP-9 (92-kDa gelatinase B) have no effect at all, even at 5 g/ml enzyme in a 5-h digestion period and even though it can be demonstrated that these 2 gelatinases are highly active in degrading gelatin using zymography. The HB-EGF JM Glu-Asn cleavage site differs from other known MMP cleavage sites. MMP-3 has been shown to cleave cross-linked fibrin at a Gly-Ala site (49) and insulin B chain at Ala-Leu and Tyr-Leu sites (50), none of which are present in the HB-EGF JM region. In general, there are no significant homologies between the various MMP-3 cleavage sites that are found in proteoglycans, several types of collagen and protease inhibitors (47,51). Other metalloproteinase cleavage sites have also been identified of which none are present in the HB-EGF JM. For example, MMP-2, but not MMP-9, releases an active soluble ectodomain of FGFR 1 by hydrolysis at a Val-Met site which is within the FGFR1 JM domain, eight residues upstream of the TM domain (13). MMP-2 hydrolyzes the ␤-amyloid precursor at Lys-Leu, Leu-Met, and Met-Val sites (52,53). Galectin-3, a cell surface lectin involved in cell-cell and cellmatrix interactions in tumor metastases is cleaved at an Ala-Tyr site by both MMP-2 and MMP-9 (54). MMP-3 also releases soluble HB-EGF in vivo, albeit to a limited extent. MMP-3 treatment of MC2 cells expressing an AP-HB-EGF fusion protein results in about a 2-fold increase in released soluble HB-EGF compared with non-treated controls. The efficiency of release in vivo is limited, with only about 5% of cell surface AP-HB-EGF being cleaved compared with the situation in vitro in which complete release can be achieved. Possible explanations are that the cleavage site is close to the cell membrane and may not be as readily accessible as it is in the in vitro assay and/or that TIMP-like inhibitors of MMP-3 associated with the cells may be inhibiting enzymatic activity. Alternatively, MMP-3 might not be the only cleavage enzyme for HB-EGF in vivo. HB-EGF is processed constitutively by macrophages (19,20) and T cells (38) and in response to PMA treatment by many cell types including breast carcinoma cells (39). Whether these cleavage processes are mediated by the same enzymes and whether MMP-3 is an enzyme that is involved in these processes remains to be determined. For example, besides secreted proteases such as MMP-3, other enzymes that are membrane-bound might be involved in the processing of HB-EGF as has been demonstrated for tumor necrosis factor-␣ (10,11). The significance of MMP-3-induced cleavage of the HB-EGF precursor might be that this post-translational modification would increase the bioavailability of a potent growth factor that has been shown to be involved in physiological and pathological processes (reviewed in Ref. 55). Increased levels of MMPs including MMP-3, enhance the extracellular matrix degradation and remodeling that accompanies cell migration, tumor invasion, and wound healing (51,56). Previously, MMP-3 has been shown to increase the bioavailability of bFGF by releasing it from basement membrane perlecan by cleaving the core protein (57). 
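To make the subsite nomenclature used above concrete, the sketch below places the HB-EGF juxtamembrane hexamer and the quoted phage-display hexamer on a P4-P3-P2-P1 | P1′-P2′ frame (inferred from the stated P1/P1′ residues) and checks the three features the text highlights: Pro at P3, Glu at P1, and Arg at P2′. It is an illustration of the comparison, not an analysis from the original study.

```python
def subsite_features(hexamer: str):
    """Treat a 6-residue window as P4-P3-P2-P1 | P1'-P2' (cleavage after the 4th residue)
    and report the subsite preferences discussed above: Pro at P3, Glu at P1, Arg at P2'."""
    assert len(hexamer) == 6, "expected a 6-residue window"
    return {"P3 Pro": hexamer[1] == "P",
            "P1 Glu": hexamer[3] == "E",
            "P2' Arg": hexamer[5] == "R"}

# HB-EGF site Leu148-Pro-Val-Glu151 | Asn152-Arg153 and the phage-display hexamer Ile-Pro-Phe-Glu-Gln-Arg.
print(subsite_features("LPVENR"))   # -> all three features present
print(subsite_features("IPFEQR"))   # -> all three features present
```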
HB-EGF is a potent mitogenic factor for fibroblasts and keratinocytes, cells involved in wound healing (27,28) and for tumor cells (31). It is also a potent chemotactic and mitogenic factor for smooth muscle cells (19,21), activities which have been linked to smooth muscle cell hyperplasia in atherosclerosis, restenosis, and pulmonary hypertension (30,31,58). Enhanced levels of MMP-3 in the proliferating epidermis of wounds (59), in tumors (60 -62), in atherosclerotic plaques (63,64), and in aortic aneurysm and occlusive disease (65) could lead to an increase in HB-EGF paracrine growth factor activity in these tissues. Furthermore, EGF and TGF-␣ have been demonstrated to increase MMP-3 production (66, 67), suggesting the possible existence of an autocrine amplification loop wherein EGF-like growth factors, perhaps HB-EGF itself, increase MMP-3 levels in turn increasing secreted HB-EGF levels. In summary, we have demonstrated that MMP-3 cleaves HB-EGF at a specific Glu-Asn site in the HB-EGF JM domain. This finding suggests a new role for MMP-3 in enhancing growth factor bioavailability in physiological and pathological processes.
2018-04-03T03:41:07.365Z
1997-12-12T00:00:00.000
{ "year": 1997, "sha1": "70b335574ca000758dbfe87f7101e4535ed5eb4a", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/272/50/31730.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "18dfbab5aacc604ed3afe26621ffdf576f0f0d2b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
149446265
pes2o/s2orc
v3-fos-license
Functional Improvement of Upper and Lower Extremity After Decompression and Neurolysis and Nerve Transfer in a Pediatric Patient with Acute Flaccid Myelitis Patient: Female, 5 Final Diagnosis: Enterovirus infection Symptoms: Weakness in all 4 limbs Medication: — Clinical Procedure: Nerve decompression • neurolysis and nerve transfer Specialty: Neurosurgery Objective: Rare disease Background: Acute flaccid myelitis is an emerging polio-like illness mostly affecting young children, characterized by rapid onset of extremity weakness and paralysis in 1 or more limbs. Certain viruses, including enteroviruses such as EV-68, EV-71, poliovirus, and West Nile virus, can cause this disorder. The largest known outbreak of EVD68 in the United States was in the summer of 2014, causing severe respiratory illness and acute flaccid myelitis, mainly in young children. Furthermore, the US Centers for Disease Control and Prevention noted an increase in the number of patients with clinical symptoms of acute flaccid myelitis in 2018, and 134 confirmed cases by December 2018 were reported in the USA. Case Report: The patient in our present study was a 5-year-old female who had significant weakness and paralysis in all 4 extremities due to acute flaccid myelitis. EV-D68 had caused this disorder in this patient in August 2014. Conservative management had not helped her condition. Specific areas of concern were both shoulders and biceps, and the femoral and peroneal nerves in both sides. Of these, the right shoulder function was the worst, at less than grade 3. The patient also had marked atrophy and weakness of the right quadricep muscles. The patient underwent surgical treatment and had steady improvements in all 4 extremity functional movements. Conclusions: We demonstrated that decompression, neurolysis, and nerve transfer surgical procedures can be used successfully to correct the paralyzed upper and lower extremity movements in acute flaccid myelitis patients. The US Centers for Disease Control and Prevention (CDC) noted an increase in the number of patients with clinical symptoms of AFM in August 2018 [1]. There had been 134 confirmed cases of AFM by December 2018 in the USA (December 6, 2018 news report in Nature). These viral infections also cause similar limb weakness and paralysis such as transverse myelitis and Guillain-Barre syndrome [23]. Guillain-Barre syndrome is an immune-mediated condition affecting peripheral nerves and nerve roots. Both motor and sensory functions are affected in Guillain-Barre syndrome affected patients, whereas there is only a minimal or no sensory loss in AFM patients [8]. There was no sensory loss in the patient in our study. There is no specific treatment for patients with AFM. Neurologists recommend physical or occupational therapy for arm and leg functional movements. CDC and the experts in this field are working together to understand the long-term outcomes in AFM affected patients. Saltzman et al. [24] recently reported functional improvements after nerve transfer in 2 AFM patients. Here, we report the functional improvement of upper and lower extremities, after decompression and neurolysis and nerve transfer in a pediatric patient with acute flaccid myelitis. Case Report The patient in our case study was a 5-year-old female who had significant weakness and paralysis in all 4 extremities due to AFM. Conservative management did not help the patient, and there were ongoing weakness and instability of all 4 extremities. 
Specific areas of concern were both shoulders, biceps, and the femoral and peroneal nerves in both sides. Of these, the right shoulder function was the worst, at less than grade 3. The patient also had marked atrophy and weakness of the right quadricep muscles. An emerging and increased pathogenic enteroviral infection, possibly EV-D68 (non-polio enteroviruses) had caused this illness in this patient in August 2014. There was a temporal association of AFM with a nationwide outbreak of EV-D68 in the summer 2014. This was about 16 months before the patient presented to our clinic, and she had received only conservative treatment with physical therapy since that time. The patient underwent surgical treatment at our hospital and had stable improvements in all 4 extremity functional movements. History of the patient's illness and her hospital visit The patient's mother reported that the child had a mild cold on August 28, 2014. Then, the patient had severe headaches, fatigue, tremor in her hands, and woke up the next day with double vision and tongue deviated to the left. She was brought to the emergency department. Later, she was unable to move her right arm and her head fell to the right side, she could not walk, support her head, or sit up on her own. The patient was then taken to another hospital for further examinations and treatments. Nerve conduction velocity and electromyography reports Indications were cranial nerve deficits, extremity weakness, facial droop on the left upper and lower extremities, lethargy, and fever. Nerve conduction velocity and electromyography studies reported the following findings. 1) No evidence for diffuse peripheral neuropathy disorder affecting the motor or sensory nerves in the 4 extremities. 2) There was evidence of significant neurogenic axon degeneration of the right median nerve. There was also evidence of reduced amplitude of all motor responses from nerve conduction when recording distally in the extremities. 3) There was an active sign of denervation and axon degeneration in C5-C8 innervated muscles in the right upper extremity and in the L3-L4 innervated muscles in the right lower extremity. 4) Sensory functions were normal. Nerve conduction velocity and electromyography study reports suggested an unusual central and proximal peripheral demyelinating disorder that was multifocal in its distribution. There was also evidence of focal axon degeneration at variable spinal levels that may involve anterior horn cell degeneration and was compatible with an acute motor neuron disorder that was multifocal. Findings Nonspecific signal abnormality and mild swelling of the cervical spinal cord were detected on spine cervical magnetic resonance imaging. Indications for surgery Specific areas of concern were both shoulders, biceps, and the femoral and peroneal nerves in both sides. Of these, the right shoulder function was the worst, at less than grade 3, yet with some deltoid function that could support surgical reinnervation. The patient also had marked atrophy and weakness of the right quadricep muscles. Conservative management had not helped and there was ongoing weakness and instability of all 4 extremities. Based on the findings of the upper trunk neurolysis, the lead author and the operating surgeon (RKN) conferred at this point with the child's parent/guardian, who wished then to proceed with nerve transfer to the non-conducting elements of the right axillary nerve branch to the deltoid muscle. Conducting elements of that nerve were left intact. 
Surgical procedures performed on the patient Surgical procedures performed on the patient included: external and internal neurolysis of right and left anterior and posterior divisions of brachial plexus, and suprascapular nerve; right and left anterior scalene nerve release; external and internal neurolysis of right axillary and radial nerve; external and internal neurolysis of right and left femoral, deep and superficial peroneal nerves. The patient was brought to the operating room and underwent general anesthesia. The bilateral necks and the right axilla and arm were prepped and draped in the usual sterile fashion. An incision was created in the right neck, and the brachial plexus was exposed in the usual fashion. Scarring was noted over the entire upper trunk and components. The suprascapular nerve was tested, then decompressed, externally and internally neurolysed. Similarly, the posterior division of the upper trunk was externally and internally neurolysed. The anterior division of the upper trunk was externally and internally neurolysed. The anterior scalene muscle was partially released to relieve any compression over the upper trunk. Electrical testing confirmed improved conduction through all elements except the posterior division, which remained poor in conductivity although with some improvement. The skin was closed in 2 layers after antibiotic irrigation and hemostasis. A new incision was created in the right posterior axillary fold. Contracture of the latissimus dorsi muscle was released. Dissection revealed the axillary nerve at the superomedial border of the triceps tendon. The deltoid elements were externally neurolysed, then separated with intraneural dissection and further subdivided into component fascicle groups. Each group was electrically tested and those with conducting nerve fibers were excluded from further dissection. Those that did not conduct were severed to await a triceps branch of the radial nerve for nerve transfer. The radial nerve was then neurolysed in the roof of the axilla. The axial vessels were retracted to access the nerve and its triceps branches. One branch was selected through internal neurolysis, measured for length, and severed as a donor for the previously chosen axillary group fascicles. The radial nerve fibers and the axillary nerve fibers were sutured together in a circumferential fashion using 9-0 epineurial stitches in strict microsurgical fashion and under high magnification. The axillary wound was closed in 2 layers after antibiotic irrigation and hemostasis. An incision was created in the left neck and the brachial plexus was exposed in the usual fashion. Scarring was noted over the entire upper trunk and components. The suprascapular nerve was tested, then decompressed, externally and internally neurolysed. Similarly, the posterior division of the upper trunk was externally and internally neurolysed. The anterior division of the upper trunk was also externally and internally neurolysed. The anterior scalene muscle was partially released to relieve any compression over the upper trunk. Electrical testing confirmed improved the conduction through all elements. The skin was closed in 2 layers after antibiotic irrigation and hemostasis. The drapes were taken down and the lower extremities were prepped and draped in the usual sterile fashion. A vertical incision was made in the right groin inferior to the palpated inguinal ligament and medial to the palpable femoral arterial pulse. 
The distal branches of the femoral nerve were isolated and dissected. Several branches were noted to arise from the distal trunk of the nerve. This complex underwent electrical testing and then external and internal neurolysis, resulting in improved measured electrical conduction. The skin was closed in 2 layers after antibiotic irrigation and hemostasis. An incision was created in the right knee inferior to the fibular neck. The peroneal nerve was dissected free and the superficial and deep branches were isolated. The deep branch underwent external and internal neurolysis, followed by the superficial branch, which also underwent external and internal neurolysis. Electrical testing proved improvement in conduction of both branches. The skin was closed in 2 layers after antibiotic irrigation and hemostasis. An incision was then created in the left knee inferior to the fibular neck. The peroneal nerve was dissected free and the superficial and deep branches were isolated. The deep branch underwent external and internal neurolysis, followed by the superficial branch, which also underwent external and internal neurolysis. Electrical testing proved improvement in conduction of both branches. The skin was closed in 2 layers after antibiotic irrigation and hemostasis. All wounds were covered with dry dressings. No complications occurred and no specimens were sent. The patient was awake, alert and extubated in stable condition following surgery. Shoulder movements were assessed pre-operatively and post-operatively by evaluating video recordings of standardized movements according to the modified Mallet classification [25]. Ankle eversion, inversion and dorsiflexion, toe extension, flexion and plantar flexion were clinically evaluated by the lead author (RKN) [26] using the Medical Research Council (MRC) scale for muscle strength pre-operatively and 18 months after surgery. Pre-operative evaluation of upper and lower extremities The patient had severe atrophy and paralysis of the right shoulder, with MRC scale grade 2 function remaining, indicating useful muscle mass for a nerve transfer and neurolysis. There was also weakness in the entire left upper extremity, with grade 3 to 3+ function. There were antigravity ankle movements throughout, but there was a partial steppage gait in the patient's right leg. The patient's left leg also had generalized weakness but had become her dominant leg. The patient reported that she could not walk up the stairs bilaterally and therefore would place her right leg on the step, use momentum to bring her left leg to the same step, and repeat. The patient had no pain and was not on medication. Post-operative functional improvements There were stable improvements in all areas of the left and right upper extremities, except for right shoulder abduction (Table 1, Figure 1). The patient's total Mallet score for right shoulder functions improved significantly from 19 to 23 (of 30) (P < 0.03). Left shoulder and hand movements were close to normal. The patient's walking speed increased, and she was able to lift her right foot better when walking after 4 weeks of peroneal nerve decompression, micro-neurolysis and neuroplasty surgery. Functional recovery of foot movements was remarkable (Figure 2). Discussion Immunosuppressive therapy is ineffective for these patients. The CDC and experts in this field are working together to develop an enteroviral vaccine. Martin et al. 
[21] found that 3 of 12 children developed permanent extremity functional impairment in their study on outcomes of Colorado children with AFM. AFM symptoms are typically described as asymmetric motor weakness mostly affecting the upper limbs [5]. However, the patient in our case study had paralysis of both upper and lower limbs. The involvement was still asymmetric, given that her right side was more severely affected than her left extremities. Neurologists recommend physical or occupational therapy for upper and lower extremity movements. Nerve and muscle transfer procedures have historically been used for poliomyelitis-associated paralysis [5]. Recently, nerve transfer was shown to improve muscle strength and range of motion in 2 pediatric AFM patients who sustained upper extremity paralysis [24]. Similarly, another study [27] found that surgery was suitable in a patient with EV-D68-induced dysphagia; this patient had motor dysfunction but preserved sensory function. We and other investigators have previously shown that decompression, neurolysis, and nerve transfer significantly improve both upper and lower extremity functional movements in patients with brachial plexus injury (BPI), winging scapula, and foot drop [28][29][30][31][32]. We used these surgical techniques successfully for the patient in this report. Conclusions We demonstrate that decompression, neurolysis, and nerve transfer procedures can be used successfully to improve paralyzed upper and lower extremity movements in AFM patients.
Quantum corrections to the thermodynamics of rotating charged BTZ black hole in gravity's rainbow In this manuscript, we investigate the thermal properties of the rotating charged BTZ black hole under the gravity's rainbow (GR) and generalized uncertainty principle (GUP) formalism. At first, we study the GR-corrected thermal quantities according to the usual Heisenberg algebra. Then, we consider a deformed algebra that leads to a change in the Heisenberg uncertainty principle and compare the Hawking temperature, entropy, thermodynamical volume, pressure, and heat capacity functions with the previous results. Thus, we understand and interpret the quantum effects on the BTZ black hole. Introduction Black hole physics and thermodynamics is one of the most interesting and challenging topics in general relativity and modern cosmology. Historically, in the early 1970s, Bekenstein suggested that black hole entropy could be described as proportional to black hole area [1,2]. In the classical approach accepted at the time, it was thought that black holes do not emit radiation, so Bekenstein's suggestion was met with skepticism. Just a year later, Hawking confirmed that a black hole could be taken as a black body, thus might emit radiation if the quantum effects are taken into account [3]. This process, later called Hawking radiation, showed that a well-defined temperature could be defined for a black hole. Accordingly, the thermodynamic properties of a black hole, and especially its thermodynamic stability, have recently been discussed in interesting studies by many authors [4][5][6][7][8][9]. Another great challenge of theoretical physics is deriving a well-defined unified theory from the theories of gravity and quantum. In order to achieve this goal, many quantum gravity theories, such as the string theory [10], noncommutative geometry [11], loop quantum gravity [12,13] have been proposed. All these theories present some differences with respect to each other, so they have some advantages and disadvantages. But they also have some common properties, such as the existence of a minimum measurable length. However, scientists were confused by the existence of a minimum length on the Planck scale, since the Planck length scale is not a Lorentz invariant quantity. To resolve this contradiction, modification of the usual distribution relation in the special theory of relativity was proposed. The modified dispersion relation yielded a new theory called doubly special relativity (DSR), which has two observer-independent scales namely, the velocity of light c and the Planck energy E P (or the Planck length ℓ P ) [14][15][16][17][18][19]. It is worth noting that at the limit where the minimum length disappears, the DSR theory reduces to standard special relativity. In 2004, Magueijo and Smolin formulated the DSR in the curved spacetimes and called the formalism as gravity's rainbow (GR) (In the literature, sometimes it is also called rainbow gravity (RG).) [15]. The basic assumption of the GR stands on the fact that the energy of the test particle must also affect the geometry of spacetime. Therefore, the spacetime background has to be represented with the parameter-dependent family of metrics that depend on the energy of the considered particle. This parameter dependency creates the rainbow of the metric and recently a lot of papers are published on the black hole thermodynamics by considering various GR [20][21][22][23][24][25][26][27][28]. 
The existence of a minimum measurable length in quantum mechanics can also be obtained by a deformation of the usual Heisenberg algebra [29,30]. Such a deformation also affects the Heisenberg uncertainty principle (HUP), and as a result, the generalized uncertainty principle (GUP) has to be used instead of the HUP. Recently, with increasing interest, many papers have examined the effect of the GUP formalism on black hole physics and thermodynamics. One of the common features of these studies is the modification of Hawking radiation. In [54][55][56] the authors discussed whether black holes may or may not be detected in the LHC experiments and claimed that the existence of a remnant can have important phenomenological consequences for the observation of black holes at the LHC. In addition, in the above studies, they showed that the entropy of the black hole changes drastically by acquiring a logarithmic correction term with a negative sign, which is consistent with string theory and loop quantum gravity. In 1992, Bañados, Teitelboim and Zanelli (BTZ) considered a negative cosmological constant and studied the usual Einstein-Maxwell field equations in 2 + 1 dimensional spacetime. They obtained a vacuum solution and interpreted it as the BTZ black hole solution, which shows similarities to the 3 + 1 dimensional Schwarzschild and Kerr black hole solutions [57]. One year later, Achúcarro and Ortiz discussed the rotating charged BTZ black hole solution [58]. Certain aspects of the thermodynamic geometry of rotating BTZ black holes have been investigated for the uncharged and charged cases, respectively, in [59][60][61][62]. Recently, Alsaleh studied the thermodynamics of neutral BTZ black holes in the GR [63]. As mentioned above, the GUP formalism affects the thermodynamics of a black hole drastically. To this end, Iorio et al. considered the BTZ black hole in the GUP formalism and presented its impact on the usual thermodynamic quantities [64]. To the best of our knowledge, the thermodynamic features of charged BTZ black holes in the GR formalism under the GUP have not been investigated before. This fact is one of the main motivations of this manuscript. We construct the manuscript as follows: In Sec. 2, we consider a charged BTZ black hole and study its thermal quantities in the GR formalism. After analyzing the Hawking temperature, entropy, thermodynamical volume, Helmholtz free energy, pressure, internal energy and heat capacity functions, in Sec. 3 we introduce the GUP formalism and repeat a similar analysis in the GUP scenario. Finally, we give a brief conclusion in the last section. Thermodynamics of BTZ black holes under the effects of the GR We consider the Hilbert action in (2 + 1) Anti-de Sitter (AdS) spacetime with the electromagnetic field coupling [65]. Here, R and Λ represent the curvature scalar and the cosmological constant, where Λ = −1/ℓ² < 0, while ℓ is the AdS radius. The Einstein field equations, expressed with the Einstein tensor, G_ab, and the stress-energy tensor, T_ab, in the form G_ab + Λg_ab = πT_ab, give the BTZ black hole solution via the metric given in [67], with a lapse function F(r) containing three integration constants: M, J, and Q, which denote the mass, angular momentum (spin), and charge of the BTZ black hole, respectively. The roots of the lapse function are the horizons of the black hole. However, it is not easy to determine a precise expression for the horizons of the BTZ black hole. 
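For concreteness, one lapse function consistent with the extremality conditions discussed in the next paragraph is the standard Achúcarro–Ortiz charged rotating BTZ form (stated here as an assumption about the authors' conventions, not as their own equation):

$$F(r) = -M + \frac{r^{2}}{\ell^{2}} + \frac{J^{2}}{4 r^{2}} - 2 Q^{2} \ln\frac{r}{\ell}, \qquad M = \frac{r_{+}^{2}}{\ell^{2}} + \frac{J^{2}}{4 r_{+}^{2}} - 2 Q^{2} \ln\frac{r_{+}}{\ell}.$$

As a consistency check on this assumed form: for Q = 0 the extremum of F lies at $r_{\mathrm{ext}}^{2} = |J|\ell/2$, and $F(r_{\mathrm{ext}}) < 0$ gives $M > |J|/\ell$; for J = 0 it lies at $r_{\mathrm{ext}} = Q\ell$ and gives $M > Q^{2}(1 - 2\ln Q)$, matching the two conditions quoted below.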
With the help of the extremal points of the lapse function, dF(r)/dr|_(r=r_ext) = 0, one can obtain a condition, F(r_ext) < 0, that ensures real-valued event horizons [66]. In a particular case, namely for Q = 0 and J ≠ 0, it reduces to M > |J|/ℓ, while in another case, for Q ≠ 0 and J = 0, it becomes M > Q²(1 − 2 ln Q). Considering the vanishing of the lapse function at the horizon, we express the mass of the black hole in terms of the outer event horizon, r_+. To examine the GR effects, we follow the method given in [68][69][70][71] and modify the metric via dt → dt/f(E/E_P) and dx → dx/g(E/E_P), employing two arbitrary rainbow functions. In this way, we obtain the GR-modified metric. Here, E denotes the energy at which the geometry is probed, while E_P denotes the Planck energy. Then, we constrain the arbitrariness of the rainbow functions by choosing one of the most interesting forms of the rainbow functions. It is worth noting that this selection of rainbow functions has a physical background, such as its use in string theory [68,72], quantum cosmology [73], loop quantum gravity [74], and κ-Minkowski noncommutative spacetime [75]. The assumption f(E/E_P) = 1 yields a time-like Killing vector in the GR as usual, so that the local thermodynamic energy does not depend on the energy of the test particle. First, we derive the Hawking temperature of the BTZ black hole in the presence of the GR. Following [68], we employ the surface-gravity formula and straightforwardly find the Hawking temperature in the GR framework, Eq. (11). Since the Hawking temperature can only take real and positive values, we obtain two constraining conditions. One of them stems from the GR formalism and provides a relation between the horizon and the rainbow parameter, η, at the Planck energy scale. The second condition is based on the fact that the temperature must take a positive value and establishes a relationship between the charge, angular momentum, spacetime radius and horizon. Therefore, we can interpret it as an intrinsic physical condition. In fact, under the hypothesis Q = η = 0, the reality condition, M > |J|/ℓ, guarantees a positive temperature. When Q ≠ 0, Eq. (13) becomes useful. We find it interesting to perform a deeper analysis on the unification of these two conditions, since they are based on different facts. Assuming the first condition sets a smaller lower limit on the horizon than the second condition, we substitute Eq. (12) into Eq. (11) and obtain a correlation between the GR formalism and the BTZ black hole properties, valid for η > 0 (Eq. (14)). On the other hand, the black hole horizon cannot be greater than the length scale of the spacetime, ℓ > r_+. If we use this as an upper bound in Eq. (13), we get a relationship between the black hole parameters. Since ℓ has a non-zero positive value, we arrive at a new condition among the parameters (Eq. (16)). For the following two sub-cases, where the BTZ black hole is considered as non-static uncharged or as static charged, the conditions given in Eqs. (14) and (16) simplify accordingly. According to this analysis, we plot the Hawking temperature versus the horizon in Fig. 1 with parameters that obey Eqs. (14) and (16). We observe that the GR formalism leads to a lower bound on the horizon, r_min, as predicted. Although it modifies the Hawking temperature, we see that these effects are dominant only in the nonphysical region, r_min ≤ r_+ ≤ r_phys, where the horizon is smaller than 1.42 for the chosen representative parameter values. 
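To make the constraint structure above concrete, a commonly adopted rainbow-function pair of the type alluded to in the text (this specific form and the temperature relation below are an assumption, not a quotation from the paper) is

$$f\left(\frac{E}{E_{P}}\right) = 1, \qquad g\left(\frac{E}{E_{P}}\right) = \sqrt{1 - \eta\left(\frac{E}{E_{P}}\right)^{2}},$$

for which the surface-gravity relation $T_{H} = \frac{g}{4\pi f}\,F'(r_{+})$, combined with the uncertainty-based estimate $E \simeq 1/r_{+}$ for the probe energy, stays real only for

$$r_{+} \;\geq\; r_{\min} = \frac{\sqrt{\eta}}{E_{P}}.$$

With $E_{P} = 1$ this reproduces the minimal horizons quoted later in the text, $r_{\min} \approx 0.22,\ 0.39,\ 0.71$ for $\eta = 0.05,\ 0.15,\ 0.50$.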
In the range r_phys ≤ r_+ ≤ r_max, the GR-corrected Hawking temperature mimics the usual one. Next, we employ another fundamental postulate of black hole thermodynamics to derive the Bekenstein entropy, S = ∫ dM/T_H. By using Eqs. (6) and (11), we find the entropy given in Eq. (17). We see that the Bekenstein entropy depends on the rainbow parameter. In other words, Eq. (17) shows that the entropy does not obey the area law if η ≠ 0. But when η = 0, the area law for the BTZ black hole is clearly obeyed [76][77][78][79][80][81][82]. We depict the GR-corrected entropy function versus the horizon in Fig. 2. We observe a linear increase of the entropy function for η = 0. In the presence of the GR formalism, this linearity breaks down drastically, especially at small horizon values, i.e. for r_H < r_phys = 1.42, where the BTZ black hole is unstable; hence, its entropy is not physically meaningful there. In the range r_phys ≤ r_+ ≤ r_max, the GR-corrected entropy differs from the usual one by a factor involving η E². Since the entropy of the considered black hole is not proportional to the event horizon area, we calculate its volume via the standard thermodynamic formula and find the GR-corrected volume of Eq. (19). In the absence of the GR formalism, we obtain V = 4πr_+², which is the usual form of the rotating charged BTZ black hole volume. In that case, for r_+ = 0, the volume becomes zero. However, in the GR formalism, a non-zero minimal volume arises at the minimal event horizon value. We present the variation of the GR-corrected black hole volume versus the event horizon in Fig. 3. We see that for η = 0.05, η = 0.15 and η = 0.50, the minimum horizon reads 0.22, 0.39, and 0.71, and accordingly the minimum volume values are 0.94, 1.79, and 2.18. In the stable region, we observe that the GR-corrected volume is smaller than the usual one. This deviation becomes greater for larger GR parameters. Then, we investigate the Helmholtz free energy. For the derivation we use the well-known formula F = −∫ S dT. By substituting Eqs. (17) and (11), we obtain the GR-corrected Helmholtz free energy of the charged rotating BTZ black hole in terms of the horizon; its second term represents the GR contribution. To illustrate the modification of the Helmholtz free energy, we depict it in terms of the horizon in the presence and absence of the GR in Fig. 4. We observe that in the absence of the GR parameter, the Helmholtz free energy decreases monotonically. In the presence of the GR formalism, it exhibits a different characteristic behavior in the horizon range where the black hole is unstable. In contrast, we observe that when the black hole becomes stable, the GR effect does not significantly lower this thermodynamic function. Next, we examine the thermodynamic pressure of the black hole, defined as proportional to the first derivative of the Helmholtz free energy with respect to the volume. We obtain the pressure given in Eq. (23). In the usual case, where η = 0, we see that the pressure decreases monotonically while remaining positive and converges to a certain value that depends on the characteristic properties of the black hole. Since the pressure has to be real valued, we see that the GR correction enforces a lower bound on the event horizon. In order to give a detailed discussion, we plot the GR-corrected pressure versus the event horizon in Fig. 5. We see that negative pressure can arise only when the black hole is unstable. Moreover, in that range the pressure first increases rapidly and then decreases. We conclude that the GR correction is dominant only in this nonphysical range. 
On the other hand, in the stable range, GR correction does not change the pressure substantially. In greater horizon values, the pressure saturates at a non zero value. Next, we examine the internal energy using the following thermodynamic relations, Considering Eqs. (11) and (17), we obtain the internal energy independent from the effect of GR as follows: We see that the internal energy is not modified by the GR formalism. Then, we discuss the stability of the GR corrected BTZ black hole. To this end, we derive the heat capacity and analyze its behaviour. According to the following definition we express the heat capacity of the black hole with the help of Eqs. (17), (11), and (26) in the form of This corrected heat capacity function reduces to standard condition for η = 0. We depict GR corrected heat capacity versus horizon in Fig. 6. We observe that in the nonphysical range black hole is unstable, When r H > r phys , black hole becomes stable. For r H = r phys , black hole with the mass does not radiate. This mass value corresponds the minimal value of the mass function given in Fig. 7. Quantum corrections to the thermodynamics of the BTZ black hole under the rainbow gravity In this section, we consider a deformed Heisenberg algebra and examine the quantum effects on the thermodynamics of BTZ black hole in the presence of RG formalism. To this end, we start by introducing the considered deformed algebra which leads to the following GUP [83,84]: where α is a positive parameter. It is worth noting that there are other scenarios where the deformation parameter is addressed with a negative quantity [85,86]. Besides, there are also cases where the deformation parameter is considered as a dynamic variable rather than a constant, leading to a more general perspective and analysis [87,88]. We also would like to emphasize that GUP can be formulated in a completely different way, particularly in the presence of the BTZ black hole. For example, Iorio et al used the form that gives ∆P to the power of 3/2 in [64]. Now, let us continue by rewriting Eq. (29) as and Taylor expand. We get Next, we use the saturated form of the uncertainty principle E∆X ≥ 1, which follows from the saturated form of the HUP, ∆P ∆X ≥ 1 [89], in Eq. (31). We get Here, E is the energy of the tunneling particles and E GUP is the corrected energy of them. For a particle with corrected energy, the tunneling probability of crossing the event horizon is [84]: Then, we can compare Eq. (33) with the Boltzmann distribution, exp − E T , and find the quantum-corrected Hawking temperature of particles with energy E: We note that when α = 0 Eq. (34) reduces to Eq. (11). For small α values the GUP-corrected Hawking temperature reads We depict GR-GUP corrected Hawking temperature versus horizon in Fig. 8. We observe that quantum corrections increase the Hawking temperature in physical region. Next, study the entropy function. We obtain the GR-GUP corrected entropy up to the second order correction as We see that the GUP correction terms leads to a decrease in the entropy function. In the absence of GUP parameter Eq. (36) reduces to Eq. (17). We demonstrate the behaviour of the entropy function versus event horizon in Fig. 9. We observe that entropy decreases more at greater GUP-parameter value. We also notice that the quantum corrected entropy of the BTZ black hole in gravity's rainbow in tunneling formalism, is of the form of the predicted by string theory and loop quantum gravity. 
Next, we investigate the thermodynamical volume of the black hole for a given entropy. We find In the HUP limit, Eq. (37) becomes identical to Eq. (19). We present the plot of thermodynamical volume versus event horizon in Fig. 10. We observe that with greater alpha parameters, smaller volumes are formed on the same horizon. Next, we investigate the quantum corrected pressure function. We find where For α = 0, Eq. (38) reduces to Eq. (23). We show the variation of the quantum corrected pressure with respect to the event horizon in Fig. 11. We notice that the quantum effects are relatively more effective in the non-physical region. Finally, we investigate the quantum corrected heat capacity function. We employ Eq. (26) for the derivation. In this scenario we find the heat capacity in the form of where We see that in the absence of the quantum corrections, Eq. (40) reduces to Eq. (27). We notice that the numerator is the product of two terms. However, the term that arises from the quantum correction does not have a real root. Therefore, the event horizon value, where the heat capacity is equal to zero, does not change. Thus, in the GUP case, when the black hole becomes stable its mass value is the same as it is in the HUP case. In Fig. 12, we give the variation of the heat capacity with respect to the event horizon in order to interpret what quantum corrections change. We observe that when the black hole becomes stable, the heat capacity function takes smaller values for greater quantum deformation parameter values due to quantum corrections. Conclusions In the present manuscript, we consider a rotating charged BTZ black hole in gravity's rainbow (GR) under the generalized uncertainty principle (GUP) formalism. At first, in the Heisenberg uncertainty principle case, we show that the GR approach provides an extra constraint on the event horizon value. With a detailed analyze, we investigate the Hawking temperature, entropy, thermodynamical volume, Helmholtz free energy, pressure, internal energy and heat capacity functions. We find that there is a physical and nonphysical region associated with the stability of the black hole. Then, we consider a quantum deformation in the Heisenberg algebra, that leads to the GUP, and examine the same functions according to the latter scenario. We find that quantum deformations increase the Hawking temperature, while they decrease the entropy, volume and heat capacity in the physical region. We demonstrate all these effects on these functions with graphs.
Hepatosplenic Sarcoidosis: Contrast-Enhanced Ultrasound Findings and Implications for Clinical Practice Sarcoidosis is a complex granulomatous disease that affects virtually every organ and tissue, with a prevalence that varies significantly among the sites involved. The role of conventional imaging, such as computed tomography and magnetic resonance imaging, in the assessment of hepatosplenic sarcoidosis is well established by revealing organ enlargement, multiple discrete nodules, and lymphadenopathy. In this review, we aim to describe contrast-enhanced ultrasound (CEUS) findings in liver and spleen involvement by sarcoidosis, reporting evidence from the literature and cases from our experience, after a brief update on safety profile, cost-effectiveness, and clinical indications of this novel technique. Furthermore, we highlight potential advantages of CEUS in assessing hepatosplenic sarcoidosis that may be useful in the clinical practice. Introduction Sarcoidosis is a complex granulomatous disease that virtually affects every organ and tissue, with a prevalence that varies significantly among the sites involved. However, it affects most often compartments such as lungs and mediastinal lymph nodes, manifesting as a pulmonary restrictive disorder in up to 65% of patients, and with pulmonary fibrosis in 20-25% of them [1][2][3]. The value of mediastinal ultrasound in patients with sarcoidosis has been recently shown [4]. The high prevalence of pulmonary disease could be associated with the primary activation of alveolar macrophages by inhaled exogenous agents, such as inorganic particles, insecticides used at work, and those from exposition to moldy environments [1][2][3]. The formation of typical noncaseating granulomas represents the final product of an incomplete antigens degradation, associated with an exuberant macrophage and T-and B-cell activity due to prolonged antigenaemia [5,6]. Also genetic factors (both HLA and non-HLA genes) have been associated with an increased risk of sarcoidosis, and the complex interaction between endogenous and exogenous agents may reflect the great variability of clinical manifestations [7]. Organs, such as liver and spleen, are less frequently affected than lungs, and their involvement often shows a benign course but portal hypertension and loss of liver function may occur [8][9][10]. However, a correct evaluation of these organs represents an important step in patients with sarcoidosis, before starting appropriate treatment. gamma-gt, and alkaline phosphatase can be observed [8]. In a recent study that comprises 837 patients with sarcoidosis, an increase of ALT and AST was found in up to 15% of cases [11]. Hepatic sarcoidosis can manifest with constitutional symptoms such as weight loss, anorexia, fever, and night sweats [1] or, less frequently, with symptoms related to chronic intrahepatic cholestasis, such as pruritus and jaundice. In these cases, laboratory tests reveal an increase of alkaline phosphatases and total and direct bilirubin [12]. Rarely, cholestasis is associated with common bile duct compression by a mass in the pancreatic head or by enlarged perihepatic lymph nodes [13]. Only few cases complicated by cirrhosis and portal hypertension have been reported in the literature, and they present with ascites and/or bleeding from rupture of gastroesophageal varices [14][15][16]. Splenic involvement is uncommon (5-10% of cases) and usually manifests with asymptomatic and mild splenomegaly. 
Rarely, the enlargement of the spleen is more pronounced with hypersplenism and pancytopenia [8,14]. B-Mode and Color Doppler Ultrasound Findings. Hepatosplenic sarcoidosis is common in patients with systemic disease, but it is often underestimated on imaging techniques, in particular conventional ultrasound, because granulomatous inflammation of the liver and spleen can be minimal and/or manifest with nonspecific patterns. While granulomas have been found in 60-80% of liver biopsy specimen, sarcoid hepatic nodules are found on imaging in only 5% of cases [17]. The most common finding is represented by hepatomegaly with homogenous distribution of echoes and without evidence of prominent nodules. Sometimes, US can demonstrate an increase of parenchymal liver echogenicity, mimicking fatty liver disease ( Figure 1) [18,19]. A prominent parenchymal inhomogeneity with coarsening pattern can also be found, suggesting an irregular patchy infiltration of the parenchyma by confluent granulomas, associated with various degrees of fibrosis surrounding the coalescing tissue ( Figure 2) [20][21][22]. Hepatic nodules usually appear on US as multiple, discrete, and rounded hypoechoic lesions of various sizes. They may mimic liver cirrhosis or nodular regenerative hyperplasia [23]. They can also manifest as areas of increased or similar echogenicity with respect to the adjacent parenchyma, though these patterns are less frequently reported in the literature [17,24]. Isoechoic lesions can be missed on conventional US and are found on imaging such as CT or MRI. Usually, the nodules are multiple, have different sizes ranging from 1 to 2 mm to several centimeters, are not associated with mass effect on the surrounding parenchyma, and show hypovascularity on Color Doppler US ( Figure 2) [17,22]. Less frequently, single hypoechoic nodules can be observed, raising problems of differential diagnosis with other focal lesions ( Figure 1). In our experience enlarged perihepatic lymph nodes can be encountered in almost all patients with circumscribed focal sarcoid liver infiltration. Splenomegaly can be observed in either absence or presence of focal splenic lesions, and there is a different prevalence in observing discrete nodules among published studies (6-33%), perhaps reflecting ethnic differences in study populations [17]. Splenic nodules usually appear as multiple and hypoechoic focal lesions; they show different size on US (usually less than 10 mm but larger lesions may occur) and are hypovascular on Color Doppler US ( Figure 3) [10]. The nodules can also manifest as hyper or isoechoic lesions with respect to the healthy parenchyma. The different patterns can be related to a different degree of fibrosis in the granulomatous tissue [25]. Furthermore, enlarged lymph nodes have been observed in up to 76% of cases, both in hepatic and splenic sarcoidosis, and they appear as single or multiple hypoechoic masses that are located most often in periportal, celiac, paracaval, and paraaortic compartments, with sizes between 1 and 2 cm [26]. We generally observed larger perihepatic lymph nodes in advanced liver disease with a lymph node size up to 4-6 cm (Figure 4). In the context of benign diseases such large perihepatic lymph nodes have been observed only in primary biliary cirrhosis (PBC) [27]. Involved abdominal lymph nodes may show inhomogeneous echotexture, with multiple low-level echoes inside [22,28]. 
The concomitant enlargement of perihepatic and mediastinal lymph nodes is typical for sarcoidosis but has also been observed in chronic virus hepatitis C [29]. Other not commonly observed findings are represented by punctate calcifications that appear as multiple, small, hyperechoic areas of few millimeters. They can be found in both liver and spleen [20,26]. Hepatic and splenic involvement by sarcoidosis can be associated with systemic disease or can be isolated. In the latter, the diagnosis is difficult if based only on imaging studies and requires often a biopsy and a histopathological examination of the organs [25][26][27][28][29][30][31][32][33]. US can also demonstrate some atypical pattern, rarely described in the literature. Some nodules, due to their confluence tendency and irregular appearance, raise the problem of differential diagnosis with neoplastic disorders [10,34,35]. Bauones et al. have recently reported a case of incidental finding of multiple hypoechoic and nonvascular splenic nodules that were associated with a significant retraction of the overlying splenic capsule; splenomegaly was not found and no other organ involvement was documented. This atypical finding has mimicked neoplastic disease, and the patient underwent a laparoscopic splenectomy in order to exclude malignancy [25]. In these cases, histopathological examination is required to avoid a misdiagnosis that can lead to a completely different therapeutic approach. The diagnosis of sarcoidosis can be suspected on the basis of typical clinical, laboratory, and imaging features but is usually achieved with histopathological findings that confirm the presence of noncaseating granulomas and exclude other causes of granulomatous inflammation [1,36]. Effort should be made to obtain a sample to analyze from biopsy specimen [1,[37][38][39]. The Role of Computed Tomography and Magnetic Resonance Imaging. Other imaging techniques, such as contrastenhanced CT (CECT) and MRI, are reliable to evaluate the organ involvement in sarcoidosis. CT can confirm hepatosplenomegaly, and, in most cases, the liver appears homogeneous; sometimes, however, the liver appears heterogeneous and a septa-like pattern can be found after contrast agent injection [17]. CECT can be useful to confirm hepatosplenic nodules or to reveal them, for the first time, after a negative US examination. The lesions manifest as hypodense masses relative to the adjacent healthy tissue, without peripheral enhancement [17,40,41]. MRI can also serve as an adjunctive diagnostic tool to confirm the presence of both hepatic and splenic nodules that appear hypointense, relative to the adjacent parenchyma on all sequences, without substantial contrast enhancement after gadolinium administration, and appear less evident on delayed imaging, suggesting equilibration. The nodules are best visualized on T2-weighted fat-suppressed and early-phase dynamic contrast-enhanced images [42]. Furthermore, MRI can be useful to reveal nonspecific hepatic findings such as periportal hyperintensity on T2-weighted images; some authors have suggested that this sign could be associated with a greater tendency of granulomas to localize within periportal spaces [17,22]. Finally, both CT and MRI can be useful to reveal the presence of punctate calcifications and/or lymphadenopathy [26]. The Evolving Role of CEUS. In recent years, the use of ultrasound contrast agents (UCAs) has rapidly increased in the clinical practice. 
Since the first guidelines regarding the use of CEUS in the assessment of liver lesions, released by European Federation of Societies for Ultrasound in Medicine and Biology (EFSUMB) in 2004 and lastly updated in 2013 [43][44][45], new fields have been investigated with the evaluation of other organs such as spleen, pancreas, gastrointestinal tract, kidneys, and lungs. EFSUMB released an extensive update on nonhepatic use of CEUS, highlighting the wide range of clinical applications that can be carried out [46]. Comments on the guidelines have been published as well [47,48]. UCAs perform as blood pool tracers and are constituted by gas surrounded by a membrane that prolongs their half-life and provides stability. The envelope consists of organic materials such as galactose, palmitic acid, albumin, and phospholipids. After intravenous injection, enhancement patterns can be evaluated in real time with a higher temporal resolution than in other imaging techniques [44]. UCAs are generally safe and have a low incidence of side effects, without heart, liver, and renal toxicity. Incidence of life-threatening anaphylactoid reactions is very low (0.001% among the 23,000 patients examined) and it is not necessary to perform laboratory tests before starting CEUS examination [45]. CEUS in the Differentiation between Benign and Malignant Focal Hepatosplenic Lesions. CEUS has demonstrated a high overall diagnostic accuracy in the differential diagnosis of focal liver lesions, with similar values of sensitivity and specificity as compared to conventional imaging, such as CT or MRI [49][50][51][52][53][54][55]. A recent systematic review and costeffectiveness analysis found that the pooled estimates of sensitivity and specificity to detect and/or characterize malignant lesions were 95.1% and 93.8% using CEUS, and 94.6% and 93.1% using CECT, respectively [53]. Similar results were obtained by also comparing CEUS and MRI [50]. The use of CEUS is effective in the workup of patients with focal liver lesions, by identifying specific patterns and selecting those who need further diagnostic investigation [56,57]. Furthermore, several authors have demonstrated that CEUS can provide valuable information in the differential diagnosis of focal splenic lesions with high accuracy [58][59][60][61][62]; Yu et al. have found that the sensitivity, specificity, and accuracy of CEUS in the diagnosis of focal splenic lesions were 91.1%, 95.0%, and 92.0%, respectively. Lower values were obtained using conventional US (75.0%, 84.2%, and 77.3%, resp.) [59]. CEUS can also improve the differentiation between benign vascular and malignant lesions [63] and can be useful when there are no suggestive findings on benign conventional US [64]. The good safety profile, real time evaluation, and absence of radiation exposure are some of the reasons for the wide diffusion of CEUS in the last few years and for the establishment of appropriate indications for its use. CEUS Patterns of Hepatosplenic Sarcoidosis. Although there is increasing evidence regarding the usefulness and reliability of CEUS, a broad group of disorders have not been investigated so far with this technique. Actually, there is a lack of ad hoc studies in patients with hepatosplenic sarcoidosis, and most evidence derives from description of single case reports or from findings of small case series [10]. Most of the trials have been conducted with the aim to differentiate benign focal lesions from malignant focal lesions, as discussed above. 
It is reasonable to expect this lack of data, first of all because sarcoidosis is an uncommon disease, and demonstration of liver and spleen involvement on imaging is even rarer; then, in most of cases, hepatosplenic sarcoidosis appears homogenous on US without evidence of discrete nodules, and second imaging, such as CT and MRI, is preferred to assess the organ involvement in these cases. However, CEUS has documented accuracy to characterize splenic and hepatic parenchymal inhomogeneity, when found [44,46]. Even if the evidence is limited, hypoechoic hepatic lesions derived from sarcoidosis appear, after UCA injection, as variably arterial enhancing and progressively hypoenhancing nodules in the portal-venous and late phases [10,65]. Also hypoechoic lesions of the spleen appear most often as progressive hypoenhancing nodules, in arterial and parenchymal phases, compared to adjacent splenic tissue, with increasing lesion-to-parenchyma contrast diffusion while moving to parenchymal phase ( Figure 3) [58,65]. The pattern of slight enhancement can be diffusely homogenous or heterogeneous in the arterial phase and diffusely homogenous or dotted in parenchymal phase. Furthermore, some peripheral irregular vessels can be found [58]. Other authors have described a complete absence of enhancement in both arterial and parenchymal phases [66]. In one case, we observed a more rim-like enhancement in the arterial phase, followed by hypoenhancement in parenchymal and late phases ( Figure 5) [10,67]. This pattern can overlap with those observed in neoplastic disorders [57], and biopsy with histopathological examination is, therefore, required to exclude malignancy. CEUS can be useful to identify hepatic or splenic isoechoic nodules that are not otherwise evident on conventional US; these lesions appear as progressively hypoenhancing masses ( Figure 3) [24]. Sometimes, they appear as almost isoenhancing nodules in the late phase ( Figure 2). CEUS can also confirm the presence of abdominal lymph nodes enlargement with homogeneous enhancement, suggesting a benign inflammatory pattern (Figure 4) [68]. Conclusion: Implications for Clinical Practice and Future Perspectives The limited evidence regarding CEUS findings in hepatosplenic sarcoidosis raises the need for further studies that evaluate the role of CEUS in this uncommon disease. Although the most observed pattern is characterized by absence or less enhancement of the nodules with respect to the healthy parenchyma, no studies have reported CEUS findings in hyperechoic lesions derived from hepatosplenic sarcoidosis. It would be interesting to explore these patterns and to see if there is a different behavior after UCA administration; however, we expect similar findings for hyperechoic nodules on CEUS to that of hypo-and isoechoic lesions, because of their similar hypodense appearance on CECT [17,24]. CT and MRI are often preferred to evaluate organ involvement in sarcoidosis, because lesions with similar echogenicity to the healthy parenchyma cannot be found on conventional US. CEUS can overcome these limitations and reveal hepatic and splenic nodules. In our experience, we observed that conventional ultrasound may be also useful to show treatment response and a significant reduction in the size of hepatosplenic lesions after steroid therapy. Further studies should evaluate any change of contrast enhancement pattern after treatment of focal lesions and perihepatic lymphadenopathy [27]. 
CEUS could be useful to follow up the lesions over time, thus avoiding unnecessary radiation exposure associated with CT imaging and kidney damage in patients at risk, after administration of iodine contrast or gadolinium. In view of the patient history, this pattern was first suggestive of malignancy. However, other organs were normal on second imaging, and histopathological examination revealed noncaseating granulomas, suggesting the diagnosis of splenic sarcoidosis (reprinted with permission from [10]). In conclusion, hepatosplenic sarcoidosis remains so far a challenging diagnosis [69,70]. Imaging findings are often nonspecific, and, in cases of isolated abdominal organ involvement, a diagnosis of sarcoidosis can be achieved only by revealing noncaseating granulomas in tissue specimens and excluding other causes of granulomatous inflammation [36,71,72]. The role of conventional imaging, such as ultrasound, CT, and MRI, can be reserved in the staging of the disease and not for diagnostic purposes, because isolated hepatosplenic sarcoidosis can be misdiagnosed with disorders such as lymphoma or metastasis that manifest with similar findings and may raise an erroneous suspicion, especially if the patients have a history of malignancy. CEUS has potential in the assessment of hepatosplenic sarcoidosis, but there is need of prospective, controlled trials that aim to explore CEUS patterns in comparison with conventional imaging and biopsy, before drawing definite conclusions.
Managers’ work and behaviour patterns in profitable growth SMEs We investigated managers’ work and behaviour patterns in profitable growth small- and medium-sized Swedish companies, and considered how these patterns might be associated with good health outcomes. Specifically, we looked at hours worked by managers, proportion of time spent on working activities, and leadership behaviour orientation. We used a quantitative cross-sectional design and collected data via a standardized questionnaire that was answered by 133 top managers. The data were analyzed with descriptive statistics, linear regression, and compositional data analysis. Our results indicate that the managers worked long hours, which is a health risk both for them as individuals and for their organizations, but also that they engaged in work practices and leadership behaviours that were favourable for organizational health and for their employees. The managers spent a high proportion of their time in touring, which could be beneficial to organizational health, and exercised active leadership through behaviours that contribute to both employee health and company effectiveness. Comparing our results to other studies, we can observe that patterns of managers’ time use differ between small and large companies, confirming that the size of the firm is an important determinant of managerial work. performance and health, drawing on the Healthy Work Organization (HWO) perspective (Sauter et al. 1996). The topic is important since there is increasing global interest in organizational health across small businesses, which represent a large share of many economies (Legg et al. 2014;Nowrouzi et al. 2016). In Sweden, smalland medium-sized enterprises (SMEs; ≤ 250 employees) constitute 99.9% of all companies and employ around 65% of all private-sector employees (Tillväxtverket 2019). We also know that organizational health in SMEs is poorly managed, and tends to be marginalized (Legg et al. 2014). Employees in small businesses have poorer working conditions and are exposed to greater health and safety risks (Walters et al. 2018;Hasle and Limborg 2006). Daily managerial work is overlooked in the contemporary leadership discourse (Alvesson and Sveningsson 2003a;Hales 2001), management research (Mintzberg 2009;Tengblad 2012), and small business literature (O'Gorman et al. 2005). Building on the practice perspective on management (Tengblad 2012), we maintain that successful and experienced managers' daily work practices, behaviours, and activities can, in addition to the mainstream theories on leadership, be used "as the primary data for theorizing about good management" (Tengblad 2012:5). The present article therefore focuses on managerial work and behaviour patterns in the context of profitable growth SMEs. This context is important for two reasons. First, firm growth has been an important area of research in entrepreneurship both due to the significance of SMEs for the economy and because SMEs' growth has been shown to be critical for their survival, success, longevity, and financial performance (Pasanen 2007). Second, too little research attention has been paid to managerial work in small businesses compared with large companies (O'Gorman et al. 2005;Tengblad 2012). We explored the nature and extent of managerial work by means of a questionnaire answered by managers of 133 SMEs with profitable growth. 
Our results show that these managers worked long hours, which is a health risk, but also that they engaged in "pro organizational health" practices such as touring and active engagement in health-promoting leadership behaviours. Yet we also found that the extent and nature of managerial work were not determined by managerial style, and that organizational factors may matter. Our research contributes to the body of literature on the nature of managerial work by augmenting empirical evidence on managers' work and behaviour patterns in the context of profitable growth SMEs. Furthermore, by theoretically discussing the health effects of managers' work and managerial styles, we contribute to the emerging work on organizational health in the context of SMEs. To our knowledge, there is no other study that has employed the interdisciplinary approach we take here, bringing together research streams within managerial work and leadership behaviour, and discussing occupational health in the context of growing SMEs. 2 Literature review 2.1 The nature of managerial work Researchers in management and work behaviour have tried to understand management practices by asking what managers really do (Hales 1986;Tengblad 2012). Managerial practices have been discussed in terms of roles, work content and form, communication patterns, and informal work aspects (Hales 2001). Mintzberg's classic study from 1973 has traditionally served as a reference for description of managerial work. In that study, Mintzberg examined patterns of managerial activities using the categories deskwork, telephoning, scheduled meetings, unscheduled meetings, and tours in the organization; the similarities that this revealed in managerial practices led him to advocate the generic nature of managerial work (Mintzberg 1973). Other researchers have emphasized both commonalities and variations in managerial behaviours (Hales 1999;Tengblad 2012). Stewart (1976) argued that variations in managerial work may be explained by variations in managers' work demands, constraints, and choices. By studying managers' time allocation in profitable growth SMEs, we demonstrate the managerial work and behaviour patterns prevalent among this group, implying that managers' choices might be associated with higher organizational performance. According to researchers, e.g. Hales (1986;1999;2001;Martinko and Gardner 1985;O'Gorman et al. 2005), although managerial work research has provided a good account of what managers do, there is still insufficient explanation why do managers do what they do and whether it matters for employees and organizations. In particular, the criticism concerns too little attention in investigating: what factors (organizational, individual, and contextual) explain variations and commonalities in managerial work, how managers' work and behaviours relate to organizational performance outcomes, and how managers' work affect employees. By incorporating context and exploring variations within the group of profitable growth SMEs, we address some of the questions raised in this criticism. Research also suggests that the size of the organization influences the nature of managerial work and behaviours (O'Gorman et al. 2005). The few studies that have investigated managerial time use in small companies found that these managers spent less time in scheduled meetings, more time in informal communication, and more time touring compared with peers in larger companies (Choran 1969;Tell 2004, 2012). O'Gorman et al. 
(2005) studied growing small businesses and found the same pattern. Florén and Tell (2012) investigated managerial work in fast-growing versus slow-growing small businesses but found no substantial differences in time use, degree of formalization, or communication pattern that could explain the firms' growth. Managerial work from an organizational health perspective According to the HWO model, organizational performance and employee health are interrelated and together form organizational health, which in turn is influenced by managerial practices (Sauter et al. 1996). Our standpoint in this paper is that managers' working hours and time spent on touring (defined here as managers walking around the workplace and interacting with employees) are important for organizational health. Classical studies of managers' time use (Carlsson 1951;Mintzberg 1973;Stewart 1988) view touring as i n s p e c t i o n t o u r s a n d a m ed i u m f o r v i s u a l communication, collecting valuable information, and seeing if everything is going well. Stewart (1976) notes that managers can also have informal discussions with employees, creating value for the employees in terms of increased morale. Peters and Waterman's (1982) highlighted some years later management by walking around (MBWA) as a characteristic behaviour for managers in successful US companies. MBWA was thus regarded as a way of bringing managers out of office to talk to employees and clients in an informal manner. A Swedish study of successful private and public organizations with good employee health and high effectiveness also identified MBWA as a key leadership behaviour (Larsson and Vinberg 2010). In a study of effective managers, Kotter (1982) found that they often are engaged in short informal conversations. He interpreted these seemingly non-managerial, chaotic, and often not work-related activities as an efficient way of problem-solving, getting useful information, relating to employees and setting agenda. In Alvesson and Sveningsson' study (2003b), managers stressed the role of such ordinary activities as listening and informal chatting in everyday managerial work, and maintained that these activities have positive effects on employees (since employees feel they are seen, respected, and an important part of the team). Other studies show managers' being hands-on and accessible for contacts with employees to be important for employee health (Lundqvist et al. 2012;Skarholt et al. 2016;Poulsen and Ipsen 2017). According to our interpretation, the activities referred to in these studies (i.e. MBWA, short informal conversions, managers being accessible, listening to/chatting with employees) relate to the concept of touring reflecting managers' being present and available for spontaneous informal interactions with employees. Touring also provides greater opportunities for managers to engage in relation-oriented leadership behaviour, which has been found to be associated with employee well-being (Skakon et al. 2010). Touring is thus of interest both as a management technique that may be common to effective companies, and as a behaviour important for employee well-being. Researchers have suggested that managers' excessive workload might hinder their ability to handle their own working situation in the long run (Carlsson 1951;Tengblad 2006). Working long hours is associated with depression, anxiety, sleep disturbances, and coronary heart disease (Bannai and Tamakoshi 2014), thus making it a serious occupational health risk. 
Furthermore, systematic reviews of available research have reported a relationship between managers' and employees' health (Skakon et al. 2010) as well as between managers' wellbeing and their leadership behaviours (Kaluza et al. 2019). Leadership behaviour orientation The relationship between what managers do, who they are, and where they work is poorly examined, particularly regarding individual factors and leadership styles (Hales 1999). Despite the large body of research on managerial behavioural styles, to our knowledge, these styles have not been related to managers' practical work, for example, in terms of time use. A well-established typology of leadership behaviours is presented in the three-dimensional leadership theory (Yukl et al. 2002) and its twin, the CPE leadership model Arvonen 1991, 1994), categorizing managerial behaviours into three broad categories: task/production-oriented, relation/employee-oriented, and change-oriented. Despite different naming, the dimensions in both models are similar, and describe leadership in terms of how much emphasis a manager places on each dimension. Task-oriented behaviours (e.g. organizing and planning work activities, setting goals and standards, monitoring operations and performance) are aimed at maintaining effective production and task fulfilment; relation-oriented behaviours (e.g. providing support and encouragement, recognizing contributions and accomplishments, socializing to build relations) emphasize cooperation and trust; and change-oriented behaviours (e.g. providing encouragement to view problems and opportunities in a different way, developing innovative new strategies, and encouraging and facilitating innovation in the organization) point at change, growth, and adaptation to the external environment (Yukl et al. 2002). Change-oriented behaviours may be regarded as entrepreneurial, since innovativeness, risk-taking, creativity, and commitment to change and growth have been discussed as being characteristic of entrepreneurial behaviour (Sadler-Smith et al. 2003). We have selected the three dimensional model for this study due to its relevance from the HWO perspective and since it is a well-established model in our field of research. The three dimensions within the model are associated in various ways with health, job satisfaction, sickness absence, disability pension, performance, quality, and effectiveness (Arvonen 2002;Kuoppala et al. 2008;Larsson 2010;Nyberg 2008;Nyberg et al. 2005;Skakon et al. 2010). Studies show strong empirical support for the association between relation-oriented behaviours and employee health (Skakon et al. 2010). Also, other researchers (Kaluza et al. 2019) have regarded task-oriented, relation-oriented, and change-oriented behaviours to be constructive leadership behaviours in relation to their consequences for followers and the organization, as opposed to destructive (e.g. abusive and passive) behaviours. For increased understanding of leadership, it is important to address patterns of specific behaviours used by managers within each dimension and to study leadership behaviour in its context (Yukl et al. 2002). Therefore, in this study, we use the healthy and effective leadership behaviour (HEL) model (Larsson and Vinberg 2010) since it complements the threedimensional model allowing to analyze specific leadership behaviours that promote organizational health and effectiveness. 
Furthermore, it is based on the study of organizations in Sweden, making it interesting also from the HWO and contextual perspectives. The model includes nine groups of common leadership behaviours: a strategic and visionary leader role, communication and information, authority and responsibility, a learning culture, subordinate conversations, plainness and simplicity, humanity and trust, walking around, and reflective personal leadership (Larsson and Vinberg 2010). These behaviours espouse visibility, openness, communication, trust, and cooperation with the aim of promoting the development of one's subordinates. When behaviours in the HEL model are viewed through the prism of the three-dimensional model, high relationship orientation appears to be a universal component for successful leadership, while task and change orientation vary in this regard depending on situational variables (ibid.). Leadership behaviour and managers' time use reflect different but potentially interlinked aspects of managerial practices. If different leadership behaviour dimensions influence managers' time use and working activity prioritization in practice, this may explain variations in managerial work. The effects of leadership style may vary depending on the organizational and individual factors constraining managers' work. Research questions As discussed above, few studies have addressed managers' time use in growing SMEs and the type of leadership behaviours involved across managerial practices. This study adds to the empirical evidence on managers' time use in profitable growth SMEs and explores patterns of work that might be associated with personal and organizational health. We do this by looking at hours worked by managers, the nature of their work, and the prevalence of leadership behaviours associated with good health outcomes. Our approach pays special attention to time spent on touring and managers' total working hours, since these aspects might have implications for employees'/managers' health as well as organizational performance. Furthermore, we investigate whether leadership behaviours, managers' characteristics, and organizational characteristics can explain the extent and nature of managers' work. Linking managers' time use, leadership behaviour styles, and managers' background may enhance the understanding of factors influencing managers' time use in profitable growth SMEs. The research questions were as follows: a) What is the extent and nature of managerial work in profitable growth SMEs? b) Do differences in (i) leadership behaviour orientation, (ii) organizational context, and (iii) managers' background characteristics influence the extent and nature of managerial work in profitable growth SMEs? Study population and sample The data for this cross-sectional study were collected within the project "Successful Companies in Gästrikland" (SCiG). This project gives annual awards to the 50 most successful companies in Gästrikland, a province in central Sweden. The study sample was specified via a two-step sampling process: the SCiG process followed by the study sampling process (Fig. 1). The SCiG inclusion criteria are companies registered in Gästrikland, ≥5 years of operation, ≥4 employees, and ≥ 4 million Swedish Crowns in net sales. All companies fulfilling these criteria are rated by SCiG according to the dynamics of economic indicators such as net sales, number of employees, equity ratio, income, pre-tax profit margin, return on assets, and return on equity. 
Selection and rating are performed by an independent auditing firm using data on the companies' last five annual financial statements (with the last report weighing heaviest) aggregated by a European company specializing in quality-assured business and financial information. The project thus selects companies demonstrating growth with retained profitability. Annually, the 120 highest-rated companies are nominated for the award. Managers of these companies are interviewed via a standardized questionnaire on health-promoting and effective leadership. Managers of companies nominated for the award during 2015-2018 constituted the study population. In 2018, data from their paper questionnaires were screened and entered into SPSS. We used two additional criteria beyond those of SCiG: SMEs and top managers. Top managers were defined as owner-managers, executive directors, and similar-level managers; SMEs were defined as companies employing up to 250 persons. The study included responses from one manager at each company, and for companies nominated several times, only data from their first entry were included. Answers from 133 managers were included in this study (Table 1). The managers' average age was 48, 88% were male, and 31% had university education. They had worked at the company for 15 years on average (range: 1-42) and had almost as many years of managerial experience. Average company size was 21 employees (range: 4-150). All cases with any missing data were excluded from the analysis via list-wise deletion. When we compared the cases with valid responses (n = 133) with the cases with some missing data (n = 15), the groups differed in terms of age and proportion of time spent on spontaneous meetings. Thus, the internal missing data might have affected the results of the study. However, there were no significant differences regarding other variables, including the main outcomes. We also checked that there were no differences between the groups regarding the financial performance ranking that was used to select the sample. Furthermore, no differences in financial performance ranking were found between companies that participated in the survey (n = 148) and those that did not (n = 139). We were not able to make sociodemographic comparisons because sociodemographic data were not available for the non-respondent group. To test the validity of the questionnaire and ensure the quality of data collection, five pilot interviews were carried out to check that the respondents understood the questionnaire. While screening the answers, additional telephone calls were made to some of the respondents to clarify their answers. Measurements of variables The key characteristics included in our analysis were the extent and nature of managerial work, leadership behaviour orientation, managers' background characteristics, and organizational context. According to Maes et al. (2005), factors on the level of manager, company, and managerial practices are basic determinants of company performance. [Fig. 1. Sampling process: all companies in the province, n = 5891; companies matching the SCiG inclusion criteria, n ≈ 480-500; 120 companies nominated (highest rated); 50 companies awarded. Step 1: economic rating. Step 2: structured interviews with managers willing to participate. Step 3: results of steps 1 and 2 are combined to select companies for the award.] Outcomes The main outcome of the study was the extent and nature of managerial work, measured in terms of
managers' working hours and distribution of time between activities (particularly tours), in line with the categories used in previous research (Florén and Tell 2004, 2012; Kurke and Aldrich 1983; Mintzberg 1973; O'Gorman et al. 2005; Tengblad 2006). Participants were asked to specify the percentage of their working time spent in a typical week on deskwork (e.g. e-mail, general administration), telephoning, scheduled meetings, unscheduled meetings (e.g. meeting somebody in a corridor and holding a spontaneous meeting), and tours (walking around and talking to subordinates). We assessed total working hours by asking "How many hours per week do you normally work?". Independent variables Leadership behaviour, as the main exposure, was measured with four one-item questions assessing managers' overall orientation towards relations, tasks, and change according to the taxonomy in the three-dimensional model (Ekvall and Arvonen 1991; Yukl et al. 2002) and their usage of behaviours in the HEL model (Larsson and Vinberg 2010). Respondents were asked to read descriptions of core behaviours in each dimension and assess the extent to which they practiced these behaviours in their daily work on a scale ranging from 0 (I never do this) to 100 (I clearly do this and could be a role model for other managers). For example, the description for relation-oriented behaviours was: "as a manager I provide support and encouragement to employees, express conviction that an employee can accomplish a difficult task, recognize achievements, provide coaching when needed, discuss, advise, check with employees and keep them updated in decision-making processes, and handle conflicts in a constructive way". With respect to managers' characteristics, we used managers' age, sex, education, managerial experience, and tenure in line with the upper echelons perspective (Hambrick and Mason 1984), employing observable managerial background characteristics as predictors of strategic choices and organizational performance. Age, education, and management experience are commonly used as manager background characteristics in the area of small business performance research (Maes et al. 2005). Age was measured on a continuous scale. Sex was measured as male = 0 and female = 1, and education as no higher education = 0 and higher education = 1. Managerial experience was the number of years that the respondent had worked in a managerial position in current and previous organizations. Organizational tenure, measured as the number of years worked in the current organization, referred to experience and understanding of how business was done in the company-specific context. The final two variables reflected aspects of the organizational context within which the managers operated, and which might affect their work. Control span measured the number of direct subordinates, and number of employees (total number in the organization) measured company size. Statistical analysis The first step consisted of a descriptive analysis of total working hours and proportions of time spent on each activity category: deskwork, telephoning, scheduled meetings, unscheduled meetings, and tours. Measures of central tendency and spread were employed: n, mean, median, range, and standard deviation.
We also calculated arithmetic means and standard deviations for proportions of time spent on activities and total working hours corresponding to low and high levels of leadership behaviour orientation, situational characteristics, age, sex, and education. For these purposes, all the independent variables and covariates were dichotomized around the median (below median = 0, above median = 1), and differences were assessed with the Mann-Whitney U test. Correlation analysis was performed between all dependent and independent variables. The second step included univariate and multiple linear regression analyses exploring whether total working hours were related to leadership behaviour orientation, organizational context, and managers' background characteristics. First, univariate linear regression was performed to assess the association between total working hours and each of the predictors, and then multivariate regression analysis was performed to explore the relationship between working hours, leadership behaviour, and the predictors that were significantly associated with the outcome in the univariate analysis. Since the total proportion of time spent on the categories of managerial activities made up 100% of a total working day, the variables were inherently co-dependent. Conventional statistical methods may be inappropriate for finite and collinear data, where parts compose the whole and their variation depends on other components and is constrained by the constant sum (Pawlowsky-Glahn and Egozcue 2006). We therefore performed compositional data analysis (CoDA) to explore whether the proportion of time spent touring was related to leadership behaviour orientation, organizational context, and managers' background characteristics. First, the dependent variable (tours) was transformed into a proportion (i.e. bounded between 0 and 1): in this way, the effect of explanatory variables tends to be non-linear and the variance tends to decrease when the mean gets closer to one of the boundaries. In order to estimate the impact of the exposure variables on the dependent variable, marginal effects after a fractional logit model were calculated. P values and 95% CIs were also reported. To examine whether any patterns of time allocation were characteristic of managers in profitable growth SMEs, we qualitatively compared our results with studies of managers in different-sized private companies that also used Mintzberg's categories: three concerning small companies (Choran 1969; Florén and Tell 2012; O'Gorman et al. 2005), one concerning intermediate companies (Kurke and Aldrich 1983), and two concerning large organizations (Mintzberg 1973; Tengblad 2006). Ethical statement The study was approved by the Regional Ethical Review Board in Uppsala, Sweden (ref: 2016/208). All participants gave their written informed consent. Descriptive analysis The managers in these profitable growth SMEs worked an average of 52.4 h per week, spending 34% of their working time on deskwork, 17% on telephone calls, 16% in scheduled meetings, 12% in unscheduled meetings, and 19.6% on touring the organization and spontaneous interactions with employees (Table 2). Their leadership behaviour was oriented about 80% towards relationships, 80% towards change, 65% towards tasks, and about 80% towards the HEL behaviours. Table 3 compares our results with the results of previous studies of managers' time use in small, intermediate, and large organizations.
The managers in our study spent nearly 20% of their total working time on tours, as compared with 6-12% in other studies of managers in small companies (Choran 1969; Florén and Tell 2012; O'Gorman et al. 2005) and 1-3% in studies of larger companies (Kurke and Aldrich 1983; Mintzberg 1973; Tengblad 2006). Table 3 clearly shows that managers in smaller companies spend more time on touring and administrative work and less time in scheduled meetings as compared with larger companies. Managers in this study worked longer hours (52.4 h/week) than managers in slow-growing (45.5 h/week) and fast-growing (44.5 h/week) small businesses (Florén and Tell 2004). The picture is less clear when it comes to comparison with large companies, as the present managers worked longer hours than the large-company managers studied by Mintzberg (1973) (45 h/week) and Kurke and Aldrich (1983) (44 h/week), but shorter hours than the large-company managers studied by Tengblad (2006). Patterns of managerial activities and total working hours stratified by sex, education group, and all other predictor variables (dichotomized around the median) are given in Online Resource 1. There were small variations in working hours and time spent on activities depending on levels of predictors. Managers with more orientation towards relationships (≥ 81%) spent less time on deskwork, those with more task orientation (≥ 66%) spent less time in unscheduled meetings, and those working in larger companies (≥ 13 employees) spent more time on administrative work and scheduled meetings. Those with a greater span of control (≥ 13 subordinates) worked 5 h more than managers with a smaller one. Women reported more time spent on deskwork, but this should be interpreted with caution, since only 12% of respondents were female. Managers with higher education spent more time on scheduled meetings and worked 6 h less than managers without higher education. Time spent touring showed no differences when the analysis was stratified by sex, education, and other predictors. Correlations between all the variables are given in Online Resource 2. There were no significant correlations between the main outcomes (time spent touring and total working hours) and the predictors of interest (perceived degree of leadership behaviour orientation). Tours were not significantly correlated with leadership behaviour orientation, company context, or manager background characteristics. Longer working hours were correlated with a larger span of control, more time spent on telephone calls, and less time spent in unscheduled meetings. Time spent on deskwork was negatively correlated with orientation towards relationships. A higher proportion of time spent in scheduled meetings was correlated with a higher number of employees in the organization. All the categories of managerial activities were correlated with each other, possibly due to the compositional nature of the data. All four leadership orientations were also correlated with each other. Regression analysis Regression analysis was used to explore whether managers' total working hours were associated with their leadership behaviour orientation, the organizational context, and managers' background characteristics. There was no association between working hours and leadership behaviour orientation (Table 4). The univariate analysis showed associations between hours worked and age, education, and managerial experience.
However, after controlling for age, education, and managerial experience in the multivariate analysis, only education was related to the outcome (although with a larger confidence interval). The compositional analysis examining associations between proportion of time spent touring and all exposures (Table 5) showed no statistically significant associations between time spent touring and leadership behaviour orientation. Number of employees and manager's educational level were negatively related to time spent touring. The multivariate analysis confirmed the association between number of employees and time spent touring. Managers' work and behaviour patterns in profitable growth SMEs The managers in our study spent a high proportion of their working time touring; this may be a factor contributing to organizational health, as illustrated in other studies, particularly in the USA and Sweden, emphasizing that touring is a management technique common in healthy and effective organizations (Larsson and Vinberg 2010;Peter and Waterman 1982). Spending more time touring may signify a manager who is often present and available and who has greater opportunities for engaging in relation-oriented leadership behaviour, which is linked to employee well-being (Skakon et al. 2010). The managers worked 52.4 h/week, exceeding the average working week by 30%. Bannai and Tamakoshi (2014) define long working hours as ≥ 40 h/week. Although there is no general agreement on the exact thresholds for hazardous overtime work (Spurgeon et al. 1997), thresholds between 41 and 63 h have been regarded as risky in relation to different health outcomes (Bannai and Tamakoshi 2014). Thus, the managers in our study worked long hours, and this can be a health risk. As discussed earlier, managers' health is linked to leadership behaviours (Kaluza et al. 2019) and might be an important prerequisite for exercising healthy and effective leadership behaviours (Lundqvist et al. 2012). Comparing our results with other studies, we can distinguish differences between small and large companies in regard to managers' time allocation, suggesting that organization size matters for managerial work. Taken together, our results point at opposite directions in relation to two of Mintzberg's (1973) propositions on managerial work in small companies, namely that managers spend little time on tours and that they are preoccupied with scheduled meetings. Deskwork was the most time-consuming activity in smaller companies, while managers in larger companies spent most of their time in scheduled meetings. This might be because managers in small companies have fewer supporting functions within their organizations (e.g. HR, finance). Furthermore, when compared with large companies, managers in small companies spent a lower proportion of time on formal activities (deskwork and scheduled meetings) and a higher proportion on informal activities (telephoning, unscheduled meetings, and tours), indicating that the degree of formalization is lower in small companies. However, we do not see a clear pattern of differences or similarities between groups of small and large companies in relation to the extent of managerial work. We found a minor relationship between managers' leadership behaviour and manager background and organization characteristics on one hand, and time allocation to managerial activities and total working hours on the other. 
The analysis stratified by low versus high levels of predictors showed stable patterns of time use with only moderate variations. When applying Stewart's (1976) concepts, our study might indicate that constraints and individual choice have a minor influence on managers' daily work in practice. This suggests that work demands related to managerial responsibility play a bigger role in defining how work is performed. The results indicate that managers in profitable growth SMEs show a high degree of engagement in task-oriented, relationship-oriented, change-oriented (as categorized in the three-dimensional model), and HEL behaviours. As already mentioned, the dimensions in the models relate to health, effectiveness, job satisfaction, and performance (Arvonen 2002;Larsson 2010;Nyberg 2008;Nyberg et al. 2005;Skakon et al. 2010) as indicators for organizational health. Our results support Ekvall's and Arvonen's (1994) conclusion that successful managers use all three dimensions to a marked degree. The score for task dimension was somewhat lower, which indicates that the managers in this study used entrepreneurial, supportive, and dialogue-oriented leadership behaviours to a somewhat higher degree than structuring and planning behaviours. The study results regarding HEL are also in line with previous findings concerning characteristics of healthy and effective organizations (Larsson and Vinberg 2010). More research is needed to understand the specific behaviours that managers in effective SMEs commonly use. Our findings also indicate that managers in profitable growth SMEs use active leadership behaviours which contribute to organizational health. Touring and managers' working time in relation to leadership behaviours, managers' background characteristics, and organizational context The regression analysis revealed no associations between leadership behaviour orientation and time spent touring. There are several possible explanations for this. First, our results seem to confirm Mintzberg's (1973) proposition on the stability of managerial work, and may also suggest that managers have little individual choice and their practical work is predetermined by their tasks. Second, the results might have been influenced by the sampling. We selected SMEs in the top bracket of financial performance, which might have reduced variation. Moreover, the managers reported high usage of leadership behaviours in all dimensions, and so there was no large spread of values in the behaviours. Comparing companies with different levels of profitability and growth (high-low) might have shown different results. Finally, the factors may have been indirectly linked due to the compositional nature of the managerial work categories. Our analysis showed correlations between time spent on deskwork and relationship orientation (Online Resource 2) as well as between time spent on unscheduled meetings and structural orientation (Online Resource 1). In line with Stewart's (1988) suggestion that tours can be seen as residual activities that tend to be curtailed or dropped when new tasks arise, we can assume that managers with higher relationship orientation who spent less time on deskwork (possibly due to delegation and empowerment) spent more time in tours. Organization size was related to proportion of time spent touring. We demonstrated above that small and large businesses differ in this regard, but can also conclude that company size creates differences even within the group of SMEs. 
It might be easier for a manager to interact with and relate to a smaller number of employees. This contrasts with Stewart's (1988) suggestion that a small number of subordinates reduce the need for inspection tours. Our results indicate that SMEs should not be regarded as a homogenous group in relation to managerial work. Size, industry, structure, and other factors may influence managers' work situation, behaviours, and availability for contacts with employees and engaging in relations. Managers' working hours appeared to be related to their individual characteristics (education as a managerial resource), affecting their individual choice despite the common demands and characteristics of SME managerial work. However, the difference may also reflect other factors, for instance the company's field and core activity, which affect what managers do in practice. Furthermore, even though managers with more direct subordinates worked longer hours than managers with fewer subordinates, the associations with company size and control span were not significant. The multivariate regression analysis showed no association between working hours and leadership behaviour orientation. Limitations This study aimed primarily to explore managers' work and behaviour patterns in profitable growth SMEs without intending to assert associations with companies' financial performance. Company effectiveness, measured in terms of profitable growth, was a given factor in the context of our study. Survey data have both strengths and limitations. The traditional method of studying managerial work is structured observation as introduced by Mintzberg (1973), but trying a different method gave us new possibilities. Survey data on a larger number of managers, including a broader array of variables on leadership behaviours and background characteristics of managers and companies, allowed us to explore variability of managerial behaviour within the studied context as well as links with leadership behaviours. It is important to emphasize that our sample might not be representative of all growing SMEs in Sweden. Although the overall situation (legislation, organizational culture) is similar, local and regional differences might make it difficult to generalize the results. Our use of self-reported data might have influenced the accuracy and internal validity of the results. First, the answers might have reflected managers' perceptions of their work more than the actual situation. Second, the respondents might have found it hard to remember an accurate picture of a normal working day. However, some researchers maintain that perceived workload is a better predictor of psychological health than actual workload (Hobson and Beach 2000). Since our questionnaire followed mainstream studies that build on Mintzberg's (1973) categories of working activities, we were able to compare our results with the available data. However, we might have missed other relevant categories not covered by these predetermined categories (e.g. managers' operative work). We are aware of the fact that SMEs are not a homogeneous group. The companies included in our sample varied in terms of number of employees (range: 4-150) age, industry, and other characteristics which may affect managers' work and behaviours. Nevertheless, for the sake of simplifying comparisons, we have treated SMEs in a single approach. Finally, the cross-sectional design used in this study did not allow us to establish causality of the observed relationships. 
Conclusions Managers in profitable growth SMEs work long hours, which is a health risk for them as individuals and for their organizations, but they also engage in work practices and leadership behaviours that promote organizational health for their employees. They spend a high proportion of time in touring, which could be beneficial to organizational health, and they exercise active leadership by substantial use of behaviours oriented towards relationships, tasks, change, and the dimensions of the HEL model, which contribute to both employee health and company effectiveness. The extent and nature of managerial work do not seem to be associated with managerial style, but may be affected by organizational factors. A comparison of our results with those of other studies shows that patterns of managers' time use differ between small and large companies, confirming that the size of the firm is an important determinant of managerial work. Further qualitative studies are needed to better understand the content and meaning of touring in organizations. There is also a need for more research on managers' health in growing SMEs, and its role for organizational health. Acknowledgements The authors gratefully acknowledge the insightful comments of the editor and two anonymous reviewers that substantially improved the article. The authors wish to thank Amelie Carlsson for digitizing the questionnaire answers, Mirko Di Rosa for performing the CoDA analysis, Hans Högberg for statistical support, and Cormac McGrath for proofreading the revised version of the manuscript. FundingInformation Open access funding provided by University of Gävle. Compliance with ethical standards Conflict of interest The authors declare that they have no conflict of interest. Availability of data and material The datasets generated and analysed during the current study are not publicly available as they are part of a larger and still ongoing study, but are available from the corresponding author on reasonable request. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
kHz-precision wavemeter based on reconfigurable microsoliton The mode-locked microcomb offers a unique and compact solution for photonics applications, ranging from optical communications, optical clocks, optical ranging, precision spectroscopy, and novel quantum light sources to photonic artificial intelligence. However, photonic micro-structures suffer from perturbations arising from environmental thermal noise and also from laser-induced nonlinear effects, leading to frequency instability of the generated comb. Here, a universal mechanism for fully stabilizing the microcomb is proposed and experimentally verified. By incorporating two global tuning approaches and the autonomous thermal locking mechanism, the pump laser frequency and the repetition rate of the microcomb can be controlled independently in real time without interrupting the microcomb generation. The high stability and controllability of the microcomb frequency enables its application in wavelength measurement with a precision of about 1 kHz. The approach for the full control of the comb frequency could be applied in various microcomb platforms, and improve their performances in timing, spectroscopy, and sensing. However, microresonators are vulnerable to environmental thermal noise and parasitic laser-induced nonlinear effects due to their small mode volume [19][20][21][22], and the microsoliton is even more sensitive to the frequency fluctuations of the cavity modes and the pump laser because of its ultranarrow optical resonance 23. These deleterious effects can induce local perturbations of the refractive index of the dielectric material and the geometry of the microresonator, thus leading to non-uniform shifts of individual optical resonances and also giving rise to unexpected fluctuations of the FSR. To stabilize the soliton comb, the common cavity tuning approaches that rely on a global change of the dielectric refractive index (δn) or the cavity geometry (δL) are utilized based on thermo-optic and electro-optic effects, which lead to a global frequency shift δf/f ∝ (δn/n + δL/L) 21 of the resonances. Such approaches simultaneously tune the frequencies and the FSR of all modes 11,24,25, thus it is difficult to realize full control of the comb. For example, we could not keep the target pump mode wavelength fixed when tuning f_rep, because the whole cavity spectrum is simultaneously shifted by an amount several orders of magnitude larger than the FSR drift 26. Then the drive laser would be far off resonance, and the microsoliton is in a dilemma. Eventually, challenges are imposed on stabilizing or even tuning the f_rep, and the practical applications of the microsoliton are hindered. Here, we propose a universal approach to stabilize and tune the f_rep of the microsoliton, by using two global-frequency-tuning (GFT) methods simultaneously. In a practical experimental configuration, we introduce a two-temperature model in which the two temperatures are independently controlled by a pump laser and an auxiliary laser to realize the self-adaptive stabilization of the microcomb and independent tuning of the f_rep. We demonstrate fast, programmable and thorough frequency control of arbitrary comb lines, and show a frequency measurement precision at the kHz level in a proof-of-principle demonstration of a wavemeter. Compared to previous work with a precision of several MHz 16, the precision of our work achieves a three-order-of-magnitude improvement.
Our work paves the way toward low-cost and chip-integrated comb spectroscopy and optical frequency standards. Reconfigurable microsoliton Figure 1a schematically illustrates the basic GFT mechanism of a cavity, as the change of the optical round-trip path can be effectively described by inserting or removing different dielectric materials inside the cavity. In practice, the GFT of a microcavity could be induced by temperature change 26, geometry deformation 27, or the electro-optic effect 28, which is almost uniform at the scale of the cavity length. As a result of a single GFT, all optical resonances belonging to the same mode family shift in the same direction [Fig. 1b]. The resonance shifts have slightly different rates due to the material and geometric dispersions, thus the FSR also changes under a GFT, while the variation of the FSR is several orders smaller than that of an individual resonance 26. Therefore, the pump laser frequency f_p and the repetition rate f_rep of the microsoliton could not be controlled independently by a single GFT method. However, real-time control of f_rep is critical not only for the stabilization of the microsoliton, but also for applications in precision spectroscopy and optical frequency references. This challenge could be circumvented by introducing distinct GFT methods simultaneously. The tuning could be described as a modification of the effective refractive index of the cavity materials by n_eff(f) = ∑_j η_j n_j(f), with n_j and η_j denoting the contribution from the j-th GFT mechanism and the corresponding weighting factor. f is the frequency and ∂^m n_j/∂f^m is the dispersion of the corresponding GFT with respect to a reference frequency f_0. It should be noted that distinct GFT methods have different dispersions ∂^m n_j/∂f^m, thus providing the degrees of freedom η_j so that the net dispersion ∂^m n_eff/∂f^m of the cavity could be altered for m ≥ 1 under the constraint ∑_j η_j n_j = c, where c determines the frequency of an individual mode and can be chosen as a constant to match a certain optical reference. For the purpose of solely controlling the f_rep, we only need two GFT methods to control η_1 ∂^m n_1/∂f^m + η_2 ∂^m n_2/∂f^m while keeping η_1 n_1 + η_2 n_2 fixed, as shown in Fig. 1c. The experimental demonstration of the independent control of f_rep is carried out in a silica microrod cavity. By simultaneously exciting two different spatial modes, two GFT mechanisms are proposed by employing the thermo-optic effect, as shown in Fig. 1d. A drive laser stimulates the microsoliton, and an auxiliary laser is employed to pump another mode with a different field distribution. Comparing the calculated electric fields and temperature fields of the fundamental mode for the pump and the high-order mode for the auxiliary laser, the spatial temperature distribution of the auxiliary mode is more uniform than that of the pump mode. Since the thermo-optic refractive index change is proportional to the temperature, the more localized thermal field induced by the pump laser induces a stronger dispersive change of the refractive index, i.e., ∂^m n_1/∂f^m ≠ ∂^m n_2/∂f^m, and thus induces a different GFT with η_j ∝ T_j, with T_j being the temperature increase induced by the corresponding laser. It is worth noting that, under certain conditions, the frequency of the mode driven by the pump laser could be almost fixed self-adaptively due to the balance between the two temperatures [29][30][31].
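To make the decoupling argument concrete, the following minimal numerical sketch (not from the paper; the sensitivity coefficients are invented for illustration) treats the two GFT knobs as a 2 × 2 linear system, one row for the pump-mode shift and one for the FSR shift; an FSR change with zero pump-mode shift is achievable exactly because the two knobs have different dispersion, i.e., the columns are not proportional.

import numpy as np

# Hypothetical first-order sensitivities (arbitrary units):
#   rows -> [pump-mode frequency shift, FSR shift]
#   cols -> [knob 1 (pump-induced temperature), knob 2 (auxiliary-induced temperature)]
S = np.array([[1.00, 0.95],
              [1.0e-4, 0.6e-4]])

target = np.array([0.0, 1.0])        # keep the pump resonance fixed, change the FSR by one unit
dknobs = np.linalg.solve(S, target)  # solvable because the columns are not proportional
print("required knob changes:", dknobs)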
As a result, with a power- and frequency-stabilized pump laser, we could tune the FSR of the microcavity by adaptively changing T_1 and T_2 via varying the power or frequency of the auxiliary laser, while η_1 n_1 + η_2 n_2 is fixed autonomously, without external feedback control of the system (see Supplementary Information). To certify the flexible mode-frequency control via the two-temperature model, we carried out the experiments with the setup shown in Fig. 2a. A pump laser (1551.3 nm) is used to stimulate and sustain the microsoliton, with the assistance of an auxiliary laser driving the microcavity through a different spatial mode from the opposite direction, which helps suppress the strong thermal instability 32. A typical soliton spectrum (blue lines) is shown in Fig. 2b, with the red lines being the backscattering from the comb generated by the auxiliary laser. For convenience, the comb lines are labeled by the integer μ and the comb frequency is f_μ = f_p + μ f_rep, with μ = 0 corresponding to the pump mode. When the microcavity reaches the soliton state, the frequencies of the pump mode family can be effectively tuned by scanning the frequency of the auxiliary laser. The numerical results of the mode shifts are plotted in Fig. 2c, indicating that the slopes of the frequency shifts are proportional to μ (i.e., δf_μ = μ δf_rep) with the auxiliary laser frequency change, while the central mode frequency is fixed. To verify this, we characterized the cavity resonances around the soliton comb lines by a weak probe laser (~50 μW) 33, while the single soliton state is sustained and the pump laser is stabilized by a reference cavity (see "Methods"). [Fig. 1 caption fragment: The pump resonance is self-adaptively stabilized while the FSR can be tuned independently. (d) The two-temperature model in a silica microrod cavity. When the cavity is excited by lasers through different spatial modes, the temperature shows distinct spatial distributions and thus induces different tuning effects on the optical resonances, corresponding to the multiple GFT methods.] As shown in Fig. 2d, the measured frequency shifts of the optical modes (see Supplementary Information for more details) are in agreement with the theoretical prediction, and demonstrate that the FSR is indeed tuned by the auxiliary laser. The tuning of the FSR also agrees with the linear relationship between the measured shift of f_rep (δ) and the frequency of the auxiliary laser [Fig. 2e], which is different from previous tuning mechanisms based on the dispersive wave and the Raman self-frequency shift [34][35][36]. Operation of wavemeter The independent tuning of f_rep by varying the auxiliary laser detuning promises real-time control of the soliton spectrum and enables the locking of f_rep to an RF clock. When the pump is fixed, our device is a reliable frequency reference and enables a wide range of applications, since the frequencies f_μ = f_p + μ f_rep can be fully determined. As an example, we develop a high-precision wavemeter based on the controllable and fully stabilized microsoliton, which is applied to measure three laser signals, as shown in Fig. 3. The beatnotes Ω_j = |ω_j − f_μ| between these lasers ω_j and the adjacent comb lines f_μ are recorded by RF spectroscopy. The frequency of the signal is calculated by ω_j = f_p + μ f_rep ± Ω_j, which requires us to determine the value and the sign of μ, and also to resolve the ambiguity of the sign before Ω_j.
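As a minimal sketch of this bookkeeping (hypothetical numbers and a hypothetical helper name; the sign and |μ| determinations follow the AOM test and f_rep sweep detailed in the next paragraph), the unknown frequency can be recovered as follows:

import numpy as np

def recover_frequency(f_p, f_rep, omega, beat_shift, probe_shift, slope):
    """Hypothetical helper. omega is the measured beat note |w - f_mu| (Hz); beat_shift is the
    beat-note change observed when the probe is shifted by probe_shift (the AOM test); slope is
    the measured dOmega/df_rep obtained while sweeping the repetition rate."""
    s = 1 if np.sign(beat_shift) == np.sign(probe_shift) else -1   # sign in  w = f_mu + s*Omega
    mu = int(round(abs(slope))) * int(-s * np.sign(slope))         # comb-line index and its sign
    return f_p + mu * f_rep + s * omega

# Example with made-up values: pump near 193.3 THz, f_rep = 60.739 GHz, beat note 739 MHz,
# beat note follows the -5 MHz AOM shift, and the beat-note slope is -17 Hz per Hz of f_rep.
print(recover_frequency(193.3e12, 60.739e9, 739.0e6, -5e6, -5e6, -17.0))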
To discriminate the sign of Ω_j, we first introduce an acousto-optic frequency shifter (AOM) in the measurement setup; by switching the AOM frequency from 80 MHz to 75 MHz, the frequency of the probe laser is changed, and we could determine the sign by checking whether the beat note shifts by +5 or −5 MHz. Then we sweep the f_rep during the wavemeter operation, and obtain |μ| = |∂Ω_j/∂f_rep|. The sign of μ is determined by checking the parity of the signs of ∂Ω_j/∂f_rep and Ω_j (see Supplementary Information). The trace of the beatnotes under AOM shifting and f_rep tuning for the wavemeter is shown in Fig. 3a, and the enlarged trace of the signals in the RF spectra [Fig. 3a] is presented in Fig. 3c-e, with an AOM frequency switching around the time t = 6 s. The corresponding evolution of f_rep in the RF spectra is shown in Fig. 3b. For the example in Fig. 3c-e, we can deterministically derive the orders of the comb lines as μ = 18, −9, 17 for the corresponding probe lasers, respectively. Therefore, the multi-wavelength measurement is simultaneously achieved without ambiguity, which is very challenging for commercial wavemeters. At about 21 s, the repetition rate of the microcomb is tuned suddenly, and the system responds quickly, within 1 ms, while maintaining the soliton state. We estimate that the locking bandwidth of our method exceeds 1 kHz. In the above operations for realizing the wavemeter, fast tuning and switching of the f_rep are demonstrated, which is unique and could be beneficial for many applications where a dual-comb source is required. Performance of wavemeter The performance of our wavemeter is further characterized by measuring a signal with varying frequency. Figure 4a shows a measured pattern "USTC" in the frequency-time domain, obtained by switching the probe laser frequency in real time. Even though the frequency range of the pattern is as small as 1.2 MHz, the pattern can be clearly resolved by our wavemeter, which indicates the high frequency resolution of the wavemeter. Since the resolution and precision of the wavemeter depend on the frequency stability of the microcomb, the performance can be further improved by locking the f_rep to a microwave reference with feedback to the auxiliary laser. The frequency stability is characterized by the traces of the measured f_rep of the stabilized soliton and the beat note between an ultra-stable laser and the nearest comb line (f_beat), as shown in Fig. 4b, c. The f_rep (f_beat) has an uncertainty of 0.013 (0.49) kHz with a 95% confidence interval. In Fig. 4d, e, the stability of f_rep and f_beat is further tested by the Allan deviations. Comparing the free-running state (hollow orange circles) and the locked state (solid orange circles), our stabilized comb has significantly improved performance, indicating a kHz-level frequency measurement precision (17 kHz at 1 s measurement time). Therefore, the accuracy of f_p in our experiment is inferred to be at a similar level. Discussion In conclusion, a universal mechanism for the precise and thorough control of the microsoliton spectrum is proposed and realized. By introducing multiple GFT methods, the individual mode resonances and the FSR of a cavity are decoupled and can be tuned independently. In contrast with previous fully stabilized microsolitons 11,24,25, we achieve decoupling of the repetition rate and the pump laser frequency. Experimentally, the all-optical and self-adaptive control of the microsoliton is realized with a two-temperature model based on the thermo-optic effect.
By switching and stabilizing the microsoliton, a wavemeter with ultrahigh frequency measurement precision at the kHz level and the capability of simultaneous multiple-wavelength measurement is demonstrated. The mechanism demonstrated in this work is applicable to all dielectric microcavities with GFT approaches and promises the full control of the high-order dispersion of a cavity by introducing more GFT approaches, which might also be useful for comb generation based on other nonlinear processes, such as the mode-locked laser and the Pockels microcomb. For instance, our scheme could be extended to microring resonators with optomechanical or electro-optic tuning. Therefore, the demonstrated precise microsoliton control would facilitate potential applications in precision measurement, optical clocks, spectroscopy, as well as communications. Device fabrication and soliton generation In this work, the microrod resonator is fabricated from a rotating fused silica rod heated by a focused CO2 laser beam. The diameter of the microrod resonator is around 1.07 mm. The FSR of the microrod is about 60.7 GHz, which agrees well with the repetition rate. The soliton is generated by using an auxiliary-laser-assisted thermal response control. The pump laser (Toptica CTL 1550) and the auxiliary laser (Toptica CTL 1550) are coupled into the resonator through the tapered fiber from opposite directions with two circulators, and both lasers are amplified by an erbium-doped fiber amplifier (EDFA). The polarization of the pump mode is orthogonal to the polarization of the auxiliary mode. The wavelength of the pump laser is around 1551.3 nm, and that of the auxiliary laser is around 1541.72 nm. The input power of the pump laser is around 100 mW, and the power of the auxiliary laser is almost four times higher than that of the pump laser (~380 mW), according to the Q factors of the relevant optical modes. Then, the thermal effect induced by the pump laser is effectively suppressed by the auxiliary laser, which ensures the accessibility of the soliton step in our experiments. The soliton is accessed by slowly tuning the pump frequency into the pump mode from blue detuning to red detuning, while the auxiliary laser simultaneously offers the knob to realize the self-adaptive cavity tuning presented in this work. Stabilization of the soliton A Pound-Drever-Hall (PDH) frequency stabilization technique was used to lock the frequency of the pump laser relative to the optical mode of a reference cavity, which has a finesse of 250 and an FSR of about 5 GHz. The temperature of the reference cavity is stabilized by a Proportional-Integral-Derivative (PID) servo. By locking the pump laser to the reference cavity, the linewidth of the pump laser is suppressed by ~12 dB (from 1 MHz to 60 kHz). Limited by the bandwidth of our detector, the comb lines around 1550 nm are filtered and modulated by an electro-optic modulator (EOM) with a modulation frequency of 30 GHz to down-convert the beat note signal to less than 1 GHz. The measured beat note is around 739.05 MHz, corresponding to a repetition rate of 60.739 GHz. Then, the measured beat note signal is phase-locked to a reference electronic oscillator (Rohde and Schwarz) through a phase-locked loop by feedback to the control current of the auxiliary laser, corresponding to locking the repetition rate to the reference electronic oscillator. The repetition rate can then also be tuned by simply changing the reference electronic oscillator. The maximal tuning range of the repetition rate reaches ~200 kHz in our experiment.
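A brief consistency check, inferred from the numbers quoted above rather than stated explicitly in the text: if first-order EOM sidebands (±30 GHz) of adjacent comb lines beat against each other, the repetition rate follows from the down-converted beat note as f_rep ≈ 2 × f_EOM + f_beat = 2 × 30 GHz + 739.05 MHz ≈ 60.739 GHz, consistent with the quoted value; likewise, the pump linewidth narrowing from 1 MHz to 60 kHz corresponds to 10·log10(1000/60) ≈ 12 dB, matching the quoted ~12 dB suppression.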
Calibration of the absolute frequency We calibrate the absolute frequency of the microcomb by referencing a comb line around 1542 nm to an acetylene-stabilized fiber laser (stabiλlaser 1542, 194.369489384(5) THz). Based on the stabilized repetition rate and the corresponding μ, we could deduce the absolute frequency of the stabilized pump laser. Furthermore, we could calculate the absolute frequencies of the probe lasers. Allan deviation measurement The performance of the stabilized soliton microcomb is characterized by the Allan deviation of the repetition rate and of the beat note between the comb line and the acetylene-stabilized fiber laser. The RF frequency is measured in the time domain using a frequency counter (Tektronix FCA 3000) and the Allan deviation is calculated according to σ(τ) = [ (1/(2(M − 1))) ∑_{i=1}^{M−1} (ν_{i+1} − ν_i)² ]^{1/2} for different integrating times. Here τ is the averaging time, M is the number of frequency measurement samples, and ν_i is the average frequency of the signal (measured in units of Hz) in the time interval between iτ and (i + 1)τ. Data availability All data generated or analyzed during this study are available within the paper and its Supplementary Information. Further source data will be made available on reasonable request.
RNA-Sequencing Reveals Heat Shock 70-kDa Protein 6 (HSPA6) as a Novel Thymoquinone-Upregulated Gene That Inhibits Growth, Migration, and Invasion of Triple-Negative Breast Cancer Cells Objective Breast cancer has surpassed lung cancer as the most commonly diagnosed cancer and has the second highest mortality among women worldwide. Thymoquinone (TQ) is a key component of black seed oil and has anti-cancer properties in a variety of tumors, including triple-negative breast cancer (TNBC). Methods RNA-sequencing (RNA-seq) was conducted with and without TQ treatment in the TNBC cell line BT-549. Gene Ontology (GO) function classification annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses for these genes were conducted. Western blot and semi-quantitative RT-PCR were used to verify the regulated gene. Functional assays with HSPA6 overexpression or knockdown, with or without TQ treatment, were performed to assess the inhibition of growth, migration and invasion of TNBC cells. Analyses of the regulatory mechanisms of HSPA6 and of its prognostic value for breast cancer survival were conducted through bioinformatics and online databases. Results As a result, a total of 141 downregulated and 28 upregulated genes were identified, and 18 differentially expressed genes, which might be related to carcinomas, were obtained. Interestingly, GO and KEGG pathway analyses indicated roles in anti-cancer and anti-viral responses. Further analysis found that the HSPA6 gene was the most significantly upregulated gene and was shown to inhibit TNBC cell growth, migration and invasion. High expression of HSPA6 was positively correlated with long overall survival (OS) in patients with breast cancer, indicating a tumor-suppressive role for HSPA6. However, DNA methylation of HSPA6 may not be the regulatory mechanism for HSPA6 mRNA upregulation in breast cancer tissues, although the mRNA levels of HSPA6 were increased in these cancer tissues compared with normal tissues. Moreover, TQ enhanced the inhibitory effect on migration and invasion when HSPA6 was overexpressed, while when HSPA6 was knocked down, TQ attenuated the migration and invasion promoted by the knockdown, demonstrating that TQ treatment acts in a manner partially dependent on HSPA6. Conclusion We have successfully identified a novel TQ-targeted gene, HSPA6, which shows inhibitory effects on growth, migration and invasion in TNBC cells. Therefore, identification of HSPA6 not only reveals a new TQ regulatory mechanism, but also provides a novel candidate gene for the clinical management and treatment of breast cancer, particularly TNBC.
INTRODUCTION As a malignant tumor, female breast cancer has surpassed lung cancer as the most commonly diagnosed cancer and has the second highest mortality among women worldwide (1). In 2020, breast cancer was estimated to reach 2.3 million new cases (11.7%), followed by cancers of the lung (11.4%), colorectum (10.0%), prostate (7.3%), and stomach (5.6%) (1). The incidence of breast cancer in China is increasing year by year (2). The treatment of breast cancer includes radiotherapy, endocrine therapy, chemotherapy, biological targeted therapy and traditional Chinese medicine adjuvant therapy; but the efficacy still needs to be further improved to benefit patients. Thymoquinone (TQ) is a key component of black seed oil from traditional herbal medicine and has anti-cancer properties in a variety of tumors (3,4). Previous studies in our laboratory and others demonstrated that TQ has significant inhibitory effects on the migration and invasion of breast cancer cells, including triple-negative breast cancer (TNBC) (5)(6)(7)(8)(9). TNBC is the most aggressive and chemoresistant subtype of breast cancer, typically characterized by a lack of expression of the estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). The management of TNBC imposes an economic burden on society and families and represents a main challenge for both patients and clinicians. New molecular targets and therapeutic reagents are required to improve TNBC patient prognosis and survival. The global regulatory effects of TQ and its targets in TNBC cells are still unknown. Thus, it is necessary to identify novel TQ-targeted genes for breast cancer, including TNBC. Heat shock 70-kDa protein 6 (HSPA6) (OMIM: 140555), which is cytogenetically located on human chromosome 1q23.3, encodes a 70-kDa protein. HSPA6 was first identified by Leung et al. in 1990 as a stress-induced heat-shock gene (10). HSPA6 and HSPA7 were reported to share more than ninety percent nucleotide identity across their coding regions, but HSPA7 showed no protein-coding potential (11). Although HSPA6 was discovered three decades ago, its functional roles in cancer progression are unclear (12)(13)(14). Recently, HSPA6 was discovered to be dispensable for Withaferin A-mediated apoptosis/autophagy or migration inhibition of breast cancer (15).
In this study, RNA-sequencing (RNA-seq) was performed, and the TQ-targeted gene HSPA6 was successfully identified and functionally characterized as an inhibitor of TNBC cells. Reagents and Cell Culture BT-549 and MDA-MB-231 cells, both TNBC cell lines, and HeLa cells (a cervical cancer cell line) were purchased from the American Type Culture Collection (Manassas, VA, USA). RPMI1640 and DMEM were purchased from Thermo Fisher Scientific (Waltham, MA, USA). The fetal bovine serum (FBS) was purchased from Pan Biotech (Bavaria, Germany). TQ was purchased from Sigma-Aldrich and dissolved in dimethyl sulfoxide (DMSO) (Corning, Manassas, VA, USA). For BT-549 cell culture, RPMI1640 medium containing 10% FBS and 0.023 U/ml insulin was used. For MDA-MB-231 and HeLa culture, DMEM medium containing 10% FBS was used. The cells were then incubated at 37°C in a 5% CO2 atmosphere. RNA Extraction, Library Preparation, and RNA-Sequencing After BT-549 cells were treated with TQ for 6 h, total RNA was extracted with TRIzol Reagent (Invitrogen, cat. No 15596026) as described previously (16,17). DNA contamination was removed by digestion with DNase I after RNA extraction. The concentration and quality of RNA were measured by detecting A260/A280 with a Nanodrop™ spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA), and the integrity of RNA was verified with 1.5% agarose gel electrophoresis. Then Qubit 3.0 with the Qubit™ RNA Broad Range Assay Kit (Q10210, Life Technologies) was used to quantify the RNA. The stranded RNA-sequencing library was constructed with 2 μg of total RNA using the KC-Digital™ Stranded mRNA Library Prep Kit for Illumina (Catalog # DR08502, Wuhan Seqhealth Co. Ltd., China). Then, we obtained enriched and quantified library products of 200 to 500 bp in length for RNA-seq on a Novaseq 6000 sequencer (PE150 model, Illumina), according to the instructions of the NovaSeq 5000/6000 S2 Reagent Kit (cat #: 20012861, Illumina). Briefly, we first thawed the preconfigured sequencing by synthesis (SBS) reagent cartridge and the cluster generation reagent cartridge. The library and the SBS reagent cartridge were then mixed and denatured. Then, the library tubes were put into the thawed cluster generation reagent cartridge. Subsequently, we put the cluster generation reagent cartridge into the flow tank for running. Finally, we selected "sequence" in the software, set the parameters and started the run. RNA-Seq Data Analysis, GO and KEGG Analyses After RNA-seq, we used Trimmomatic (version 0.36) to filter the raw data, discarded the low-quality reads, and trimmed the reads contaminated by adaptor sequences to ensure the clean data were good enough for standard RNA-seq analysis (18). The reads were then mapped to the reference genome Homo_sapiens.GRCh38 (from ftp://ftp.ensembl.org/pub/release-87/fasta/homo_sapiens/dna/) using the STAR software. Reads mapped to the exon regions of each gene were counted with featureCounts (version 1.5.1, Bioconductor), and then Reads Per Kilobase per Million mapped reads (RPKM) values were calculated. Using the edgeR package (version 3.12.1) (19), genes differentially expressed with and without TQ treatment were identified. To judge the statistical significance of gene expression differences, a p value cutoff of 0.05 and a fold-change cutoff of 2 were used.
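As an illustration of the quantification and filtering steps just described (a minimal sketch with hypothetical counts, gene lengths, and p values; the study's actual pipeline used featureCounts and edgeR rather than this code):

import numpy as np
import pandas as pd

# Hypothetical raw exon counts per gene for the two conditions, and gene lengths in bp.
counts = pd.DataFrame({"control": [520, 13, 880], "tq": [140, 95, 860]},
                      index=["GENE_A", "HSPA6", "GENE_C"])
gene_length_bp = pd.Series([2000, 2500, 1800], index=counts.index)

# RPKM = counts / (gene length in kb) / (total mapped reads in millions)
rpkm = counts.div(gene_length_bp / 1e3, axis=0).div(counts.sum(axis=0) / 1e6, axis=1)

# Apply the cutoffs used above: p < 0.05 and fold change >= 2 (i.e. |log2FC| >= 1).
# The p values below are placeholders standing in for the edgeR output.
stats = pd.DataFrame({"log2FC": np.log2((rpkm["tq"] + 1) / (rpkm["control"] + 1)),
                      "pvalue": [0.30, 0.001, 0.80]}, index=counts.index)
degs = stats[(stats["pvalue"] < 0.05) & (stats["log2FC"].abs() >= 1)]
print(degs)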
Gene ontology (GO) enrichment and Kyoto encyclopedia of genes and genomes (KEGG) pathway analyses were applied for differentially expressed genes, implemented with software for KOBAS (version: 2.1.1) with a p value cutoff score of 0.05 (20). Analysis of mRNA Expression by Semi-Quantitative RT-PCR After extraction, 1 mg of total RNA was used to generate cDNA. The total volume of cDNA synthesis reaction system (reverse transcriptase/RT-PCR) is 10 ml, including 1 ml of dNTPs, 2 ml of 5 × RT buffer, 0.5 ml of random primer, 0.5 ml of RevTra Ace enzyme (which was purchased from TOYOBO company, China), 0.25 ml of RT-enhancer, 0.25 ml of super RI, approximate amount volume of RNase free water and 1 mg of total RNA were also added. The reactions were carried out in a Mastercycler gradient thermocyler (Eppendorf, Germany) as follows: 15 min at 37°C, 5 min at 50°C, 5 min at 98°C, final holding at 16°C. The reaction products were used as templates for semi-quantitative PCR (21). Primers 5'-tggacaaggcccag attcat-3' and 5'-atcctctccacctcctcctt-3' were used to measure HSPA6 mRNA levels. Meanwhile, the 5'-acagtcagccgcatcttctt-3' and 5'-ttgattttggagggatctcg-3' were used to measure GAPDH mRNA level, which served as an internal control to show the difference of HSPA6 mRNA level among the experimental groups. The semi-quantitative RT-PCR experiments were repeated three times. Western Blot Assays The proteins were extracted with EBC lysis buffer, separated on polyacrylamide gel electrophoresis, and transferred to nitrocellulose membrane (BioRad, USA) (22). The membrane was then kept in 5% skim milk (1 × TBST) at room temperature for 1~2 h, shaken gently in primary antibody solution at 4℃ for 8~12 h, washed thrice with 1 × TBST, and then incubated with secondary antibody (tagged with HRP) for 2~4 h at room temperature. Finally, the membrane was washed thrice with 1 × TBST buffer. After chemiluminiscence reaction, the protein bands on the membrane were visualized by using a digital imaging system from BioRad Lab (Universal Hood II, Italy). The primary antibodies were anti-HSPA6 (Santa Cruz Biotechnology, Inc., CA, USA), anti-b-actin (Cell Signaling Technology, Inc., MA, USA), and anti-Flag (Sigma-Aldrich, Inc., MO, USA). The secondary antibodies, corresponding to primary antibodies, were anti-rabbit or anti-mouse (Cell Signaling Technology, Inc., MA, USA). Assays for Real Time CelI AnaIysis (RTCA) We used a real time cell analyzer (xCELLigence RTCA DP, Roche, Germany) to analyze cell migration, invasion and growth index, which was reported previously (5,22). A CIM plate was used for cell invasion/migration assays. The matrigel (cat #: 354277, BD Biosciences) was diluted in 1 × PBS at 1:40, and then added to its upper chamber and solidified in cell incubator at 37°C. After the glue was solidified (about 1~2 h), 10% serum supplemented medium was added to the lower chamber wells to induce cell invasion, and 100 ml of cell suspensions (total number of cells 5 × 10 3 ) was added into the upper chamber. After installing the upper and lower boards, we started the experiment by setting up the program, and monitored the processes of cell invasion/migration every 15 min till the end of the experiments. About 7 h later, the experimental group was treated with TQ at a final concentration of 10 mmol/L. The cell migration test was similar to the invasion test, except that there was no matrigel in the superior chamber wells. The cell growth experiment was carried out with E-Plate. 
First, 50 ml of 10% serum supplemented medium was added to each well after the cells were digested and counted so that each 100 ml cell suspension containing 5×10 3 cells was added to each well, and the experiment began. The methods of TQ treatment were same as the invasion and migration experiments. All experiments were repeated three times. Methylation Analysis for HSPA6 Promoter The methylation status of HSPA6 promoter region in the tissues of BRCA patients from The Cancer Genome Atlas (TCGA)-BRCA was explored through the UALCAN database and the database of DNA methylation interactive visualization database (DNMIVD). The associations between the HSPA6 expression and promoter methylation of HSPA6 in the normal and BRCA tissues were conducted by the database of DNMIVD (http://119. 3.41.228/dnmivd/query_gene/?gene=HSPA6&panel= Summary&cancer=BRCA) (24)(25)(26). Prognosis Analysis The clinical data for breast cancers from GEO, EGA, or TCGA were used for an overall survival (OS) analysis (27). The two patient cohorts according to upper quantile expressions of HSPA6 were compared using a Kaplan-Meier survival plot (https://kmplot.com/ analysis/index.php?p=service) (27,28). The gene name HSPA6 was searched in the database website and the patients were split by median, with or without restriction to breast cancer subtypes. Results for Genes That Are Differentially Expressed by TQ Treatment in Breast Cancer Cells BT-549 To identify globally affected target genes by TQ, RNA-seq was performed in TNBC cells BT-549 with or without TQ treatments. After RNA-seq, we have successfully identified a total of 141 downregulated and 28 upregulated genes ( Figure 1A Then, GO enrichment and KEGG pathway analyses were performed to investigate the functions and pathways which are involved. Results for GO enrichment analysis of these differentially expressed genes in details are presented in Supplementary Figure 2 and Supplementary Tables 2, 3, mainly in regulation of nucleotide-binding oligomerization domain containing 2 signaling pathway, positive regulation of tumor necrosis factor-mediated signaling pathway, protein refolding, cellular response to heat, viral life cycle, response to oxidative stress (GO up, Supplementary Table 2), negative regulation of myosin-light-chain-phosphatase activity, sister chromatid segregation, nuclear chromosome segregation, single-organism organelle organization, cytoskeleton, cell cycle (GO down, Supplementary Table 3), etc. Results for KEGG pathway analyses of differentially expressed genes are presented in Supplementary Figure 3 and Supplementary Tables 4, 5, revealing that mainly in ribosome, longevity regulating pathway, legionellosis, estrogen signaling pathway, antigen processing and presentation (KEGG up, Supplementary The Expression of HSPA6 Is Increased by TQ Treatment in Triple-Negative Breast Cancer Cells From above differentially expressed genes, we found 18 differentially expressed genes, which might be closely related to carcinomas, either as oncogenes or tumor suppressor genes. After carefully analyzing, the HSPA6 gene, as the highly significantly upregulated gene ( Figure 1A by TQ treatment, was captured by us, and previous studies showed that this gene might be related to tumor repression (12). For further verification whether this gene had changes consistent with the results of RNA-seq in BT-549, we subsequently performed semi-quantitative RT-PCR and western blot. 
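The prognosis analysis described above splits breast cancer patients by HSPA6 expression and compares overall survival with a Kaplan-Meier estimator. A minimal sketch of that median-split comparison, using the lifelines package on a hypothetical patient table rather than the Kaplan-Meier plotter web tool, is shown below.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient table: HSPA6 expression, follow-up in months, death event flag.
df = pd.DataFrame({
    "hspa6":  [5.1, 9.8, 3.2, 12.4, 7.7, 6.0, 11.1, 4.5],
    "months": [34, 120, 22, 150, 80, 45, 132, 30],
    "event":  [1, 0, 1, 0, 0, 1, 0, 1],     # 1 = death observed, 0 = censored
})
# Split patients at the median expression value, as in the analysis above.
df["group"] = (df["hspa6"] > df["hspa6"].median()).map({True: "high", False: "low"})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["event"], label=f"HSPA6 {name}")
    print(name, "median OS (months):", kmf.median_survival_time_)

high, low = df[df["group"] == "high"], df[df["group"] == "low"]
res = logrank_test(high["months"], low["months"], high["event"], low["event"])
print("log-rank p =", res.p_value)
```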
As expected, the obviously increased expression of mRNA level in BT-549 cells ( Figure 1B) and protein level in both BT-549 and MDA-MB-231 cells ( Figure 1C) were confirmed. Thus, HSPA6 may be a novel TQ-targeted gene for our further study. HSPA6 Inhibits Cancer Cell Growth, Migration, and Invasion Based on the above experimental data, we identified HSPA6 as one of the target genes of TQ. In order to further verify the inhibitory effect of HSPA6 on cancer cell growth, we performed HSPA6 overexpression on HeLa cells with undetectable endogenous HSPA6. To do so, we transfected HSPA6 plasmid into HeLa cells and western blot was performed to check whether it was successfully expressed. Figure 2A shows that empty vector in HeLa cells did not express HSPA6, and the HSPA6 plasmid with Flag tag was successfully expressed. On the basis of this successful experiment, we further checked the effect of HSPA6 overexpression on cell growth, migration and invasion by RTCA assays. As presented in Figures 2B-D, HSPA6 did inhibit the cell growth, migration and invasion ( Figures 2B-D). On the other hand, knocking down of HSPA6 in breast cancer cells BT-549 with highly endogenous expression was performed by using three shRNA plasmids. Figure 3A shows that HSPA6 was successfully silenced by all three shRNA plasmids, indicating plasmid 531 with more efficiency. Further RTCA assays revealed that the growth curve of BT-549 cells was significantly higher than that of the control group ( Figure 3B). In addition, this inhibitory effect of HSPA6 may not be affected throughout cell cycle (Supplementary Figure 4). Then, we'd like to further ask whether HSPA6 inhibits cancer cell migration and invasion, the results by RTCA assay found that HSPA6 inhibited the migration ( Figure 4A, red line vs. green line) and invasion ( Figure 4B, red line vs. green line) when HSPA6 was overexpressed; while knocking down of HSPA6 promoted the migration ( Figure 5A, red line vs. green line) and invasion ( Figure 5B, red line vs. green line) in TNBC BT-549 cells. Taken together, these studies strongly demonstrated the inhibitory effects of HSPA6 on tumor cell growth, migration and invasion. TQ Enhances the Inhibitory Effects of Cell Migration and Invasion When HSPA6 Was Overexpressed, While Knocking Down Attenuates the Effects It has been reported that TQ inhibits breast cancer cell migration and invasion (5,8), and further study here reveals that TQ upregulates HSPA6 expression. With these regards, by overexpression or knocking down of HSPA6 and then assays of cell migration and invasion were performed by RTCA. And the results found that TQ enhanced the inhibitory effect of cancer cell migration ( Figure 4A, blue line vs. pink line) and invasion ( Figure 4B, blue line vs. pink line) when HSPA6 was overexpressed; when knocking down HSPA6, TQ attenuated the inhibitory effects of cell migration ( Figure 5A, red line vs. green line) and invasion ( Figure 5B, red line vs. green line) of HSPA6-promoted, thus demonstrating a partially dependent manner through HSPA6 by TQ treatment. The Mechanism for Regulation of HSPA6 Expression in Breast Cancer Tissues To further investigate the HSPA6 expressions and its clinical significance in breast cancer patients, we thus utilized the data from CPTAC, and results showed that the HSPA6 protein expressions were decreased in breast cancer tissues compared with normal tissues ( Figure 6A). 
However, the mRNA levels of HSPA6 were increased in breast cancer tissues compared with normal tissues (data not shown). The mechanistic study by HSPA6 promoter analysis indicated that the promoter regions of HSPA6 in BRCA samples were increased in cancer tissues compared with matched normal tissues ( Figure 6B), indicating that DNA methylation of HSPA6 may not be the regulatory mechanism for HSPA6 mRNA upregulation in those breast cancer tissues. And promoter methylation and HSPA6 expression in BRCA were also positively correlated ( Figures 6C, D). DISCUSSION In order to identify target genes/pathways globally affected by TQ, RNA-seq was performed in TNBC cells BT-549, a total of 141 downregulated and 28 upregulated genes were found. GO function classification annotation showed mainly in protein refolding, cellular response to heat, nuclear chromosome segregation, sister chromatid segregation, microtubule cytoskeleton, chromosome segregation, single-organism organelle organization, cell cycle, viral life cycle, response to oxidative stress, etc.; KEGG pathway revealed mainly in Fanconi anemia pathway, Salmonella infection, pathways in cancer, or ribosome, longevity regulating pathway, legionellosis, estrogen signaling pathway, antigen processing and presentation, etc. Genes demonstrating in pathways of cancer and in Figure 3A. "shHSPA6" indicates knocking down of HSPA6 for clone 531, and "shVector" indicates the empty vector as a control without knocking down. viral life cycle indicate that TQ has roles for both anti-cancers and anti-viruses. Interestingly, recent studies found that TQ might have inhibitory potential against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) protease (29), particularly for cancer patients (30 From differentially expressed genes, we found the HSPA6 gene was the high significantly upregulated gene by TQ treatment in BT-549 TNBC cells, and showed that HSPA6 inhibited TNBC cell growth, migration and invasion via overexpression and knocking down assays. Through analyzing the clinical data of breast cancer by Kaplan-Meier Plotter, we found that high expression of HSPA6 was positively correlated with long OS in patients with both all subtypes of breast cancer and TNBC, indicating the tumorsuppressive roles for HSPA6. Thus, the data through bioinformatics analysis of multiple databases support the inhibitory effect of HSPA6 on breast cancer. Then, further mechanistic study showed that, although the mRNA levels of HSPA6 were increased in breast cancer tissues compared with matched normal tissues, the promoter regions of HSPA6 in BRCA samples were increased in cancer tissues compared with matched normal tissues, indicating that DNA methylation of HSPA6 may not be the regulatory mechanism for HSPA6 mRNA upregulation in those breast cancer tissues. And correlation for promoter methylation and HSPA6 expression in BRCA was positively related. These data suggest that, in addition to heat stress, other mechanisms, such as small molecules for example TQ, should be involved in HSPA6 upregulation. Thus, these studies strongly demonstrated the inhibitory effects of HSPA6 on tumor cell growth, migration and invasion. TQ has been reported to inhibit breast cancer cell migration and invasion and epithelial-mesenchymal transition (EMT) markers (5,9,34), and our RNA-seq data further revealed that TQ upregulates HSPA6 expression. 
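The positive relationship reported above between HSPA6 promoter methylation and expression was obtained from the DNMIVD resource; computationally it amounts to a rank correlation across samples. A brief sketch with made-up beta values and expression levels:

```python
from scipy.stats import spearmanr

# Hypothetical paired values per sample: promoter methylation (beta value) and HSPA6 expression.
beta_values = [0.21, 0.35, 0.40, 0.52, 0.61, 0.68]
expression  = [2.1, 3.0, 2.8, 4.2, 4.9, 5.3]

rho, p = spearmanr(beta_values, expression)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```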
With these regards, by overexpression or knocking down of HSPA6, the inhibitory roles of cell migration and invasion by TQ were performed, and we found that TQ enhanced the inhibitory effects of cancer cell migration and invasion when HSPA6 was overexpressed; while knocking down, TQ attenuated the inhibitory effect of growth, migration and invasion of HSPA6-promoted, thus demonstrating a partially dependent manner through HSPA6 by TQ. Altogether, identification of TQ-targeted HSPA6 not only reveals a new TQ regulatory mechanism, but also provides a novel candidate target for clinical management and treatment of breast cancer, particularly for TNBC upon TQ. CONCLUSIONS By RNA-seq, we have successfully identified a novel TQ-targeted gene HSPA6, which showed the inhibitory effects on growth, migration and invasion in TNBC cells. The HSPA6 promoter DNA methylation may not be the cause for HSPA6 mRNA upregulation; other mechanism should be involved. Overexpression or knocking down of HSPA6 demonstrates a partially dependent manner through HSPA6 by TQ for HSPA6 inhibitory effects on TNBC cell growth, migration and invasion. Altogether, identification of HSPA6 will provide a novel candidate target for clinical management and treatment of breast cancer, particularly for TNBC on TQ. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
2021-05-04T13:25:32.675Z
2021-05-04T00:00:00.000
{ "year": 2021, "sha1": "5f1993d7e6b46a79493225ef61d31f1d757a70f9", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.667995/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5f1993d7e6b46a79493225ef61d31f1d757a70f9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
249714588
pes2o/s2orc
v3-fos-license
JAK2 gene knockout inhibits corneal allograft rejection in mice by regulating dendritic cell-induced T cell immune tolerance Corneal allograft rejection can be seen in some patients after corneal transplantation. The present study intends to investigate whether JAK2 gene knockout affects corneal allograft rejection through regulation of dendritic cells (DCs)-induced T cell immune tolerance. In order to identify the target gene related to corneal allograft rejection, high-throughput mRNA sequencing and bioinformatics analysis were performed. JAK2 knockout mice were constructed and subjected to corneal allograft transplantation. The incidence of immune rejection was observed, the percentage of CD4+ T cells was detected, and the expression of Th1 cytokine interferon γ (IFN-γ) was determined. Flow cytometry and ELISA were performed to analyze the effects of JAK2 gene knockout on bone marrow-derived DCs (BMDCs). JAK2 was the target gene related to corneal allograft rejection. JAK2 gene knockout contributed to significantly prolonged survival time of corneal grafts in mice and inhibited corneal allograft rejection. The in vitro cell experiment further confirmed that JAK2 gene knockout contributed to the inactivation of CD4+ T cells and induced IFN-γ expression, accompanied by inhibition of DC immune function, development, maturation, and secretion of inflammatory cytokines. Collectively, JAK2 gene knockout inactivates CD4+ T cells to decrease IFN-γ expression, as well as inhibits DC development, maturation, and secretion of inflammatory cytokines, thereby reducing corneal allograft rejection. INTRODUCTION Corneal transplantation is regarded as a prevalent solid organ transplantation that may encounter a failure due to T cellmediated rejection [1]. Patients with corneal neovascularization or edema, and large donor graft buttons may be more vulnerable to treatment failure due to corneal allograft rejection [2]. Currently, the widely used immunosuppressive agents for preventing corneal graft rejection mainly include steroids [3]. Importantly, it has been reported that immunomodulatory therapies targeting dendritic cells (DCs), an important player of the immune system, can improve the survival of corneal grafts [4]. Strikingly, the potential of gene therapy has been highlighted in cornea transplantation through modification of allografts ex vivo before transplantation [5]. Against such a backdrop, it is of significance to search target genes for control of corneal allograft rejection. Janus kinase 2 (JAK2) is identified as a cytoplasmic tyrosine kinase that plays an important role in cytokine signaling [6]. Intriguingly, it is known that the increase of JAK expression is related to the immune rejection of allografts as well as the inflammation in autoimmune diseases [7]. As previously reported, JAK2, as a key modulator of the immune response, is involved in the occurrence of graft-versus-host disease, a contributor to transplant-related mortality following allogeneic hematopoietic cell transplantation [8]. Activated JAK2 by treatment of cryptotanshinone could regulate CD4 + T cell cytotoxicity in lung tumor [9]. Of note, it was found that JAK2 could modulate DC differentiation and that the inhibition of JAK2 could suppress inflammatory dendritic epidermal cell development and function in atopic dermatitis [10]. The abnormal activation of JAK2 signaling in immature myeloid DCs was shown to participate in the regulation of immune tolerance [11]. 
Besides, it was revealed that knockout of JAK2 selectively inhibited DCsmediated innate immunity in a mouse model of lipopolysaccharide (LPS)-induced septic shock [12]. Inhibition of JAK2 could bring about long-term tolerance to alloantigen by DCs-induced T cells [13]. Interferon γ (IFN-γ) production plays a crucial part in the process of corneal allograft rejection [14], and activated JAK2 could induce the expression of IFN-γ in tuberculosis [15]. Considering all the above findings, we proposed the hypothesis in this study that JAK2 gene knockout could affect corneal allograft rejection, which involved with the regulation of DCs-induced T cell immune tolerance. RESULTS Twenty-four target genes related to corneal allograft rejection were screened through high-throughput sequencing In order to study the target genes related to the occurrence of corneal allograft rejection, we first used a high-throughput sequencing method for transcriptome analysis on normal control mice and model mice receiving corneal allografts. The results screened out a total of 177 DEmRNAs (90 upregulated DEmRNAs and 87 downregulated DEmRNAs) ( Fig. 1A and Table S1). At the same time, 24 differential genes were obtained by intersection with the DEmRNAs with 48 genes related to corneal allograft rejection found in the CTD database (Fig. 1B). Furthermore, GO and KEGG analyses were performed on 24 differentially expressed genes. The results of GO functional analysis showed that the target genes were mainly enriched in the items of negative regulation of the neural apoptotic process, negative regulation of the apoptotic process, oocyte development, and so on (Fig. 1C). In the cell component, they were mainly enriched in the cytoplasm, cytoskeleton, membrane, and other items (Fig. 1D). In molecular function, they were mainly enriched in protein binding, motor activity, and identical protein binding (Fig. 1E). KEGG pathway analysis revealed that the target genes were mainly enriched in items such as cholinergic synapse, longterm depression, colorectal cancer, prostate cancer, toxoplasmosis, and neurotrophin signaling pathway (Fig. 1F). These results suggest that candidate target genes mainly play a role in biological processes such as apoptosis and development, and are enriched in structures such as cytoplasm and cytoskeleton. The molecular function of candidate target genes is mainly involved in protein binding. A total of 24 candidate target genes were introduced into the String database (species: mice) to obtain the protein-protein interaction (PPI) relationship. The PPI network was constructed using the Cytoscape software, involving 15 nodes and 15 edges (PPI enrichment p value < 1.0e −16 ). Subsequently, MCC network topology algorithm in the cytoHubba software was used to predict the top 10 hub genes from the PPI network, in which JAK2 was found to be the top 1 hub gene (Fig. 1G). Overall, JAK2 plays an important role in the regulation of corneal allograft rejection, so we chose JAK2 as the target gene for further study. JAK2 gene knockout significantly prolonged the survival time of corneal grafts In order to verify the important role of JAK2 in early corneal allograft rejection, we successfully established the corneal allograft model in JAK2 knockout mice and WT mice, and compared the eyeballs of the mice before operation with those of the mice after modeling ( Fig. 2A). 
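In the screening described above, hub genes were ranked with the MCC algorithm in cytoHubba. As a simplified stand-in for that step, the sketch below ranks nodes of a small protein-protein interaction network by degree using networkx; the edges are placeholders for illustration and are not the STRING network used in the study.

```python
import networkx as nx

# Placeholder PPI edges among a few illustrative genes (not the study's 24-gene network).
edges = [("Jak2", "Stat3"), ("Jak2", "Epor"), ("Jak2", "Il6ra"),
         ("Stat3", "Il6ra"), ("Epor", "Csf2rb"), ("Jak2", "Csf2rb")]

g = nx.Graph(edges)
# Degree centrality is a simpler stand-in for the MCC topology score used by cytoHubba.
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
for gene, degree in hubs:
    print(gene, degree)
```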
Postoperative observation under a slit lamp showed that in WT mice, transient edema and opacity occurred due to inflammatory reaction 28 days after operation; two cases of corneal grafts were turbid 7 days after the operation, and the pupil was not visible (Fig. 2B). After removal of the corneal suture, the edema and opacity aggravated; 14 days after the operation, four cases of corneas showed edema, opacity, and completely opaque (Fig. 2C). Up to 21 days after the operation, 10 cases had corneal edema and opacity, and the pupil was not visible. The average survival time was (20.16) days; four cases of the cornea had long-term survival. In JAK2 knockout mice, transient corneal edema and opacity occurred due to inflammatory reaction. The corneal grafts were transparent 7 days after the operation (Fig. 2D). Only two cases had mild corneal epithelial edema. After the removal of the corneal suture, the cornea became transparent gradually. All corneal grafts were still transparent 14 days after the operation (Fig. 2E). Up to 21 days after the operation, only four cases had corneal opacity. The average survival time of the JAK2 knockout mice was (25.58) days; nine cases of corneal long-term survival. There was a significant difference in median survival time between WT mice and JAK2 knockout mice (p < 0.05). WT mice and JAK2 knockout mice received donor corneas from BALB/c mice. The survival time of corneal grafts in JAK2 knockout mice was significantly prolonged (p = 0.0377). Forty days after the operation, the survival rate of corneal grafts in JAK2 knockout mice was 75.00%, while that in WT mice was only 33.33% (Fig. 2F). These results suggest that JAK2 gene knockout can significantly prolong the survival time of corneal grafts, and JAK2 may be a key gene involved in the occurrence of corneal allograft rejection. A Differential expression of mRNAs in corneal allograft rejection and normal samples. The green dots indicate downregulation, the red dots indicate upregulation, and the gray dots indicate no significant difference. The X-axis represents the logarithm (log 2 ) of the FoldChange between different groups, i.e., log 2 (FC). The Y-axis represents the logarithm (−log 10 ) of the p value, namely −log 10 (p value). B The Venn diagram for the retrieval results of the two databases. C GO functional analysis of DEmRNA at the biological process level. D The GO function of DEmRNA was analyzed at the cellular component level. E GO functional analysis of DEmRNA at the molecular function level. F The size of dots indicates the number of selected genes, and the color represents the p value of enrichment analysis. G Network for candidate target gene interaction (node represents protein, edge represents protein association, and colors and shapes represent degree value and Combine Score value. JAK2 gene knockout prevented local corneal allograft rejection Immune rejection after corneal transplantation is the critical reason for the failure of corneal grafts to survive [16]. After model establishment, the differences in terms of the corneal graft and inflammatory infiltrating cells, CD4 + T cells, and IFN-γ expression were compared, in order to investigate the relationship between JAK2 and local corneal allograft rejection. RT-qPCR and Western blot revealed that the mRNA and protein expression of JAK2 in the corneal allografts of WT mice was significantly higher than that in JAK2 knockout mice (Fig. 3A, B). 
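The 40-day survival rates reported above (75.00% versus 33.33%) correspond to 9 of 12 surviving grafts in JAK2 knockout mice and 4 of 12 in WT mice. A quick cross-check of such a two-group proportion is Fisher's exact test, sketched below; note that the paper's p = 0.0377 comes from a time-to-event comparison of the survival curves, which is generally more powerful than a single-time-point test.

```python
from scipy.stats import fisher_exact

# 40-day graft outcome: rows = (surviving, rejected), columns = (JAK2 knockout, WT).
# Counts inferred from the reported long-term survivors (9 vs. 4) out of 12 grafts per group.
table = [[9, 4],
         [3, 8]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p:.3f}")
```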
Based on the results of HE staining of corneal allografts 14 days after the operation, the WT mice showed rejected corneal grafts with edema, thickening, structural disorder of the stromal layer (normally showing clear-cut epithelial cells and a regular arrangement of collagen fibers), neovascular lumina and a large number of infiltrating inflammatory cells. Corneal allografts in JAK2 knockout mice showed only mild edema, along with little inflammatory cell infiltration (Fig. 3C). Detection of the percentage of CD4+ T cells in the ipsilateral cervical lymph nodes of the operated eyes showed that, compared with JAK2 knockout mice, WT mice had a notable increase in the number of CD4+ T cells in the ipsilateral cervical lymph nodes (Fig. 3D, E). In addition, determination of IFN-γ expression in corneal allografts by RT-qPCR and Western blot 14 days after the operation suggested that the expression of IFN-γ in WT mice was significantly higher than that in JAK2 knockout mice (Fig. 3F, G). Taken together, the occurrence of local corneal allograft rejection can be prevented through JAK2 knockout. JAK2 gene knockout inhibited the development, maturation, and secretion of inflammatory cytokines of DCs We isolated BMDCs from the bone marrow of JAK2 knockout mice and normal control mice, and treated the BMDCs with GM-CSF and IL-4. We found that JAK2 gene knockout markedly reduced the number of DCs compared with that in normal control mice (Fig. 4A). In addition, we further examined the number of DCs in the spleen tissue of JAK2 gene knockout mice and normal control mice and found that the spleen weight of JAK2 knockout mice was markedly lower than that of normal control mice (Fig. 4B). We also found that the total cell number in the spleen tissue of JAK2 knockout mice was notably lower than that in normal control mice (Fig. 4C). On top of the decreased total cell number in spleen tissue, the proportion of DCs in the spleen tissue of JAK2 knockout mice was also notably lower than that of normal control mice (Fig. 4D). These results suggest that JAK2 gene knockout is able to significantly inhibit the development of DCs. Similarly, after treatment with GM-CSF and IL-4 for 9 days, part of the DCs was taken and stimulated with 500 ng/mL LPS for 24 h. On the 10th day, the cells were collected to detect the surface markers. Flow cytometry showed that the purity of DCs was more than 85% in both JAK2 knockout mice and normal control mice. By analyzing the phenotype of CD11c+ cells, we found that JAK2 gene knockout significantly reduced the response of DCs to induction of maturation. In the absence of stimulation, the expression of MHC class II molecules and costimulatory molecules (such as CD80, CD86, and CD54) on the surface of BMDCs from JAK2 knockout mice was significantly lower than that on the surface of controls (Fig. 4E). Under LPS stimulation, most BMDCs from normal control mice entered a mature state, showing high expression of MHC class II molecules and costimulatory molecules (CD80, CD86, and CD54). In contrast, only a small number of BMDCs from JAK2 knockout mice entered a mature state under LPS stimulation (Fig. 4F). To conclude, JAK2 gene knockout can significantly inhibit the maturation of BMDCs. One of the important functions of DCs is to secrete cytokines to assist the differentiation and function of other cells [17].
According to the results from ELISA, before stimulation, IL-2, IL-6, IL-10, and IL-12 could not be detected but a low level of TNF-α in BMDC supernatant of JAK2 knockout mice and normal control mice. TNFα secreted by BMDCs from JAK2 knockout mice was only about 1/ 10 of that from the WT mice (Fig. 4G). After LPS stimulation, BMDCs secreted a lot of inflammatory cytokines such as TNF-α, IL-6, and IL-12. Moreover, TNF-α, IL-6, and IL-12 secreted by BMDCs from JAK2 knockout mice were significantly lower than those from normal control mice (Fig. 4H). Collectively, JAK2 deficiency affects the secretion of inflammatory cell molecules by DCs. In conclusion, JAK2 knockout is capable of inhibiting the innate immune function of DCs such as maturation and secretion of inflammatory cytokines. DISCUSSION Immunologic graft rejection is considered to be the major contributor to graft failure in corneal transplantation [3]. In the present study, we set out to explore the role of JAK2 gene knockout in the regulation of corneal allograft rejection, the results of which found that JAK2 gene knockout could suppress corneal allograft rejection by affecting DCs-induced T cell immune tolerance. First of all, our transcriptome high-throughput sequencing screened out 24 target genes related to corneal allograft rejection, among which JAK2 was found to be the top 1 key gene in the PPI network. Moreover, our study further demonstrated that JAK2 gene knockout could significantly prolong the survival time of corneal grafts while also preventing the occurrence of local corneal allograft rejection. JAK2 was revealed to be implicated in the secretion of the chemokine IL-8 in human corneal fibroblasts [18]. JAK pathway is believed to be able to regulate myeloidderived suppressor cells, which can affect immune tolerance, graft survival, and rejection [19]. It was reported that the use of AG490, an inhibitor of JAKs including JAK2, could regulate CD4 + CD25 + T cell development as well as a Th2 shift of CD4 + T cells, thereby exerting prevention in acute lung allograft rejection in a rat model [20]. Of note, a previous study demonstrated that JAK2 was a potential biologic target for control of allograft rejection, for suppression of JAK2 could result in tolerance to alloantigen by human DCs-triggered T cells, involved with the regulation of memory T cells and responder Th1 and Th17 cells [13]. Besides, targeting JAK2 could decrease graft-versus-host disease as well as xenograft rejection by mediating T cell differentiation and is of potential to modulate donor alloreactivity following allogeneic hematopoietic cell transplantation or solid organ transplantation [8]. Pharmacologic suppression of JAK2 could diminish graftversus-host disease and retain the graft-versus-leukemia effect in allogeneic hematopoietic stem cell transplantation, and was thus suggested to be applied in diseases including organ transplant rejection [21]. In addition, the recipient and donor JAK2 46/1 haplotypes were unfolded to be accountable for acute graftversus-host disease after allogeneic hematopoietic stem cell transplantation [22]. The above findings can support our result regarding the regulation of corneal allograft rejection by JAK2 gene knockout. Furthermore, the current study revealed that JAK2 gene knockout could inhibit DC development, maturation, and secretion of inflammatory cytokines. To our acknowledge, an increasing number of studies have unfolded the regulatory role of JAK2 in CD4 + T cells and DCs. 
JAK2-STAT3 activation was found in splenic CD4 + T cells and could affect cell differentiation, thereby aiding in modulating the inflammation [23]. It was unveiled that the lack of JAK2 selectively repressed DCs-regulated innate immunity in mice with LPS-induced septic shock by playing an important role in the development and maturation of DCs while also regulating DC secretion of proinflammatory cytokines [12]. In addition, the activation of the JAK2 signaling pathway by leptin could accelerate the migration and maturation of DCs in a mouse model [24]. Interestingly, the regulatory function of JAK2 on IFN-γ has been previously reported. For instance, silencing of JAK2 was unfolded to result in suppression of IFN-γ-induced activation of peripheral blood mononuclear cells from patients with STAT4 risk [25], and downregulation of JAK2 by miR-21 could repress IFN-γinduced STAT1 pathway in macrophages [26]. Therefore, the function of JAK2 gene knockout in corneal allograft rejection was achieved by affecting DCs-induced T cell immune tolerance. In summary, JAK2 gene knockout is able to inhibit the activation of CD4 + T cells to diminish the expression of IFN-γ, which contributes to suppression of the innate immune functions of DCs such as development, maturation and secretion of inflammatory cytokines, thereby reducing corneal allograft rejection (Fig. 5). This finding may provide a theoretical basis for further understanding the mechanism of corneal allograft rejection and provide new ideas and theoretical basis for the development of new and more effective corneal transplantation anti-rejection strategies. MATERIALS AND METHODS Establishment of corneal allograft model in mice The experimental mice (purchased from Hunan SJA laboratory animal co., ltd, Changsha, Hunan) were clean mice in specific-pathogen-free grade (donor: C57BL/6; receptor: BALB/c mice; 6-8 weeks old; three corneal allograft model mice and three normal control mice) (the method of model construction was described in the following method) were selected for high-throughput sequencing. The operation steps were as follows: (1) Anesthesia: 5% sodium pentobarbital was diluted to 0.5% sodium pentobarbital. The body weight of each mouse was measured using an electronic balance before the operation, and the dose used for each mouse was estimated based on the body weight. Next, 0.4 mL/g 0.5% sodium pentobarbital was injected intraperitoneally into mice. (2) Preoperative preparation: 30 min before the operation, the eyes of donor mice and the right eyes of the recipient mice were given 0.5% compound tropicamide eye drops 2-3 times to fully dilate the pupil. The eyelids, eyelashes and periocular fur of the eyes of donor mice and the right eyes of the recipient mice were cut off. Subsequently, 0.1 mL 0.5% tetracaine eye drops were dropped into the eyes of donor mice and the right eyes of the recipient mice for topical anesthesia and the conjunctival sac was washed with normal saline until there was no residual foreign body. During the operation, the eyes were kept up and the respiratory tract unobstructed. The skin around the eyes was disinfected twice with Anil iodine, and 0.1 mL of thalidol eye drop was used to prevent infection. Disinfectant towels were laid routinely for operation. (3) The operation procedures of corneal allograft transplantation: the corneal graft imprint of donor mice were drilled with a trephine in 2.0-mm diameter. 
The corneal puncture knife was used to puncture into the anterior chamber at the 3 o'clock direction of the cornea at a puncture angle not to damage the lens. After successful puncture, the puncture site was expanded with Venus scissors to a size that a syringe needle could get in. In order to maintain anterior chamber depth, sodium hyaluronate viscoelastic agent was injected into the anterior chamber in time, with prevention of anterior chamber collapse caused by the excessive outflow of aqueous humor. Venus scissors were used to cut the corneal grafts clockwise according to the corneal imprints and the grafts were put in the culture dish with normal saline. After the right eye corneal graft (diameter: 1.5 mm) of recipient mice was made by the same method, the prepared donor graft was gently moved to the center of the graft bed of recipient mice, and 8-10 stitches were intermittently sutured using 11-0 suture. (4) Postoperative nursing: after the operation, the conjunctival sac of the mice was smeared with thalidol eye ointment, and the eyelids were sutured two stitches with 10.0 suture. After 3 days of single cage feeding, the eyelids were opened by removing the suture. After operation, the mice were given thalidol eye drops, 0.1 mL/time, once a day, to prevent infection. The corneal sutures were removed 7 days after the operation. The operation was performed by the same operator. Intraoperative anterior chamber hemorrhage, postoperative iris synechia, and cataract were considered as surgical failure, and those mice were not included in the experimental group; in that case, experimental animals were supplemented in time. Transcriptome high-throughput sequencing Trizol kits (Invitrogen, Carlsbad, CA, USA) were used to extract total RNA from corneal tissue of normal mice and corneal allograft model mice (three biological replicates in each group). The purity, concentration, and integrity of RNA samples were detected in time after digestion of total RNA to ensure the use of high-quality RNA for transcriptome sequencing. Computer testing was performed on the Illumina Next CN500 highthroughput sequencing platform, and the set value of sequencing was read using PEl50. The FASTQ software was used to control the quality of the obtained data, that was, clean data were obtained after the removal of the connectors and low-quality sequences in raw data. The sequences were aligned to the mouse reference genome using the Hisat2 software, and then the gene expression was quantified using R software package to obtain the gene expression matrix. Construction of JAK2 knockout mice Cre +/+ -JAK2 fl/fl mice were selected from the hybrid offspring of Cre-ERT2 transgenic mice and JAK2 fl/fl mice. The mice were reared under 12-h light/dark cycles. The male Cre +/+ -JAK2 f1/f1 mice that had undergone genotyping were injected subcutaneously with tamoxifen for 5 days at 8-week old. Tamoxifen was freshly prepared with corn oil before injection. Two days after the last injection, mice were used for the experiment. The JAK2 fl/fl mice with conditional JAK2 gene knockout were first constructed by Krempler etc. [27]. Through the Cre-lox system, JAK2 gene can be knocked out in adult mice, and the technology can effectively avoid the death caused by the knockout of JAK2 in the embryo stage. In this mouse genome, the upstream and downstream of the first exon of JAK2 were introduced into loxP sequence, the recognition site of Cre recombinase. 
Under the action of Cre recombinase, the gene fragment between the two loxP sequences will be removed, leading to the failure of JAK2 to express functional protein. Cre-ERT2 transgenic mice were formed by the fusion of Cre recombinase and mutant estrogen receptor protein. It could be activated by tamoxifen but not by estrogen. The fusion protein is driven by the human ubiquitin C promoter. Cre recombinase was activated by tamoxifen injection, which could remove JAK2 from Flox. Fig. 4 JAK2 gene knockout inhibits the development, maturation, and secretion of inflammatory cytokines of DCs. A After two weeks of induction of gene knockout, BMDCs were induced by GM-CSF and IL-4. BMDCs were collected and counted on the 10th day of culture. B Two weeks after JAK2 gene knockout, mouse spleens were isolated and weighed. C Spleens were isolated from JAK2 knockout mice and normal control mice, single-cell suspension was prepared, and the total number of spleen cells was counted after red blood cells were lysed with 1 × ACK red blood cell lysate. D The cell suspension of spleens from JAK2 knockout mice and normal control mice. CD11c staining was performed, followed by flow cytometry. E BMDCs were collected after 2 weeks of induction of gene knockout. GM-CSF and IL-4 were used to induce BMDCs, which were collected at the 10th day of culture. After staining with CD11c, MHC-II, and costimulatory molecules (CD80, CD86, and CD54) antibodies, flow cytometry was performed. The left side shows the representative flow chart, and the right side is the statistical chart. F BMDCs were collected after 2 weeks of induction of gene knockout. BMDCs were stimulated with LPS for 24 h on the 9th day of culture and then collected for flow cytometry. The left side is the representative flow chart, and the right side is the statistical chart. G BMDCs from JAK2 knockout mice and normal control mice were seeded into 96 well plates at the same density on the 9 th day of culture. After 24 h, the culture supernatant was collected. ELISA was performed to determine the expression of TNF-α secreted by BMDCs from JAK2 knockout mice. H BMDCs from JAK2 knockout mice and normal control mice were seeded into 96 well plates at the same density on the 9th day of culture, followed by LPS stimulation for 24 h. The culture supernatant was collected, and ELISA was performed to determine the expression of TNF-α, IL-6, and IL-12 secreted by BMDCs from JAK2 knockout mice. *p < 0.05 vs. compared with normal control mice. Intervention of JAK2 knockout mice The study enrolled 18 JAK2 knockout mice (C57BL/6 strain; 6-8 weeks old, specific-pathogen-free grade) and 18 BALB/c mice (6-8 weeks old, specific-pathogen-free grade). The right eyes of the recipient mice were used as the operated eyes, and for the donor mice, both eyes were used. In the wild-type (WT) group, BALB/c mice were used as donors; the C57BL/6 mice Fig. 5 The molecular mechanism plot for the role of JAK2 gene knockout in regulating corneal allograft rejection. JAK2 gene knockout contributes to inactivation of CD4 + T cells to decrease IFN-γ expression and suppress DC development, maturation and secretion of inflammatory cytokines, which leads to inhibition of corneal allograft rejection. were used as the recipients and the right eyes were subjected to corneal allograft transplantation. In JAK2 gene knockout group, BALB/c mice were used as donors; the JAK2 knockout mice were used as the recipients and right eyes were subjected to corneal allograft transplantation. 
Observation of corneal allograft rejection The transparency of the corneal grafts was observed for consecutive 8 weeks (twice a week) by the same operator under a slit lamp microscope at 72 h after operation. The rejection was scored according to the following criteria: [28] 0 point: the corneal graft was transparent; 1 point: slight epithelial turbid; 2 points: mild matrix turbidity, pupil margin, and iris vessels were visible; 3 points: only some pupil margin was found, with medium matrix turbidity; 4 points: the anterior chamber was only visible, with deep matrix layer turbidity; 5 points: the anterior chamber was not visible and the matrix layer was completely turbid. According to the standard, the degree of rejection was evaluated quantitatively: at 2 weeks and 2 weeks ago, a corneal score ≥3 indicated rejection; after 2 weeks, a corneal scor ≥2 was considered as rejection. Cornea without rejection for consecutive 56 days was considered to permanently survive. Hematoxylin and eosin (HE) staining On the 14th day after the operation, 6 mice in each group were used for histopathological examination. In brief, after intraperitoneal injection of a lethal dose of 0.5% sodium pentobarbital to anesthetize the mice, the mouse eyeballs were immediately collected and fixed in an Eppendorf tube containing paraformaldehyde solution. The collected eyeballs were then dehydrated, cleared, and paraffin-embedded. The wax blocks were sliced into 4-μm sections using a slicer, immersed in warm water, and completely dried in an oven, and immediately dewaxed in xylene, followed by rehydration, staining, dehydration, clearing, and sealing. Immunohistochemical staining On the 14th day after the operation, six mice in each group were used for immunohistochemical staining. In brief, after intraperitoneal injection of a lethal dose of 0.5% sodium pentobarbital to anesthetize the mice, the mouse eyeballs were immediately collected and immersed in an Eppendorf tube containing paraformaldehyde solution for fixation. The collected eyeballs were then dehydrated, cleared, and paraffin-embedded. The wax blocks were cut into slices with a thickness of about 4-μm using a slicer and then placed in warm water. Subsequently, the slices were completely dried in an oven and immediately put into xylene to dissolve and dewax, followed by rehydration, and blockade of endogenous enzymes. After antigen repair, the slices were incubated with primary and secondary antibodies, stained with DAB, counterstained, dehydrated and sealed. Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) Six mice in each group were randomly euthanized 14 days after the operation, and corneas were collected and put into a 1.5 mL centrifuge tube. The total RNA was extracted using Trizol kits (Invitrogen) and then reverse-transcribed into complementary DNA (cDNA) according to the instructions of TaqMan microRNA Assays Reverse Transcription Primer (4427975, Applied Biosystems, Carlsbad, CA, USA). Next, 5 μL of cDNA products were obtained as a template for PCR amplification. β-Actin was used as an internal parameter for mRNAs. The relative difference of gene expression was calculated using 2 −ΔΔCT method. The primer sequence is shown in Table S2 (primer design was carried out using the primer design function provided by NCBI). Western blot The total protein of mouse corneas was extracted with radioimmunoprecipitation assay lysate containing phenylmethylsulfonyl fluoride (R0010, Solarbio, Beijing, China). 
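The relative expression calculation named above, the 2^-ΔΔCT method with β-actin as the internal reference, can be written out as a small function; the Ct values below are hypothetical examples, not measurements from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCt: target gene normalized to the reference gene and to the control group."""
    delta_ct_sample = ct_target - ct_ref              # ΔCt in the sample of interest
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the calibrator/control sample
    return 2.0 ** -(delta_ct_sample - delta_ct_control)

# Hypothetical Ct values: target gene in a treated sample vs. a control sample,
# with β-actin as the internal reference in both.
fold = relative_expression(ct_target=24.1, ct_ref=17.0,
                           ct_target_ctrl=27.6, ct_ref_ctrl=17.2)
print(f"relative expression (fold change vs. control) = {fold:.2f}")
```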
The total protein was incubated on ice for 30 min at 12000 r/minute, followed by centrifugation at 4°C for 10 min to obtain the supernatant. BCA kits (23225, Pierce, Rockford, IL, USA) were used to determine the protein concentration of each sample, which was further adjusted with deionized water. Next, 50 µg protein samples were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis gel (P0012A, Beyotime, Shanghai, China) at 80 V for 2 h. The protein samples were then transferred to a polyvinylidene fluoride membrane (ISEQ00010, Millipore, Billerica, MA, USA) with wet transfer method, which was blocked with Tris-buffered saline with 0.5% Tween 20 buffer containing 5% skimmed milk powder for 2 h. The primary antibody was added to the membrane for overnight incubation at 4°C. Subsequently, the membrane was incubated with goat anti-rabbit against immunoglobulin G (IgG) antibody labeled with horseradish peroxidase (HRP) (Beijing Zhongshan Biotechnology Co. Ltd., Beijing, China, diluted at 1:5000). Afterwards, the membrane was developed by the enhanced chemiluminescence test kit (BB-3501, Amersham Biosciences, Piscataway, NJ, USA) and exposed in a gel imager. The relative protein expression was expressed by the ratio of the gray value of the corresponding protein band to that of the glyceraldehyde-3-phosphate dehydrogenase protein band. Isolation of DCs cells from mouse bone marrow The mice were euthanized through cervical dislocation, and the femur and tibia were stripped out under aseptic conditions. The bone marrow cells were washed with Hank's solution and phosphate-buffered saline (PBS), and the red blood cells were lysed with 0.83% Tris-NH 4 Cl, followed by two rinses with 1640 culture solution. The cell concentration was adjusted to 1 × 10 6 cells/mL, and then the cells were added with recombinant murine granulocyte-macrophage colony-stimulating factor (rmGM-CSF) (1000 U/ mL) and RMLL-4 (500 U/mL) in a 24-well culture plate for incubation in an incubator at 37°C with 5% CO 2 . On the 3rd day, the plate was shaken gently, and most of the suspended cells were absorbed off. The same amount of culture medium was added to the cells, and the cytokines were replenished. Half medium was changed every other day and the suspended or adherent cells were collected on the 7th day. Enzyme-linked immunosorbent assay (ELISA) The culture supernatant of bone marrow-derived DCs (BMDCs) was obtained. The inflammatory factors interleukin (IL)-2, IL-6, IL-10, IL-12 and tumor necrosis factor-α (TNF-α) were determined according to the instructions of ELISA kits. The standard curve was drawn and the content of inflammatory factors was calculated. Flow cytometry On the 14th day after the operation, six mice with different treatments were selected, and the proportion of immune cells in ipsilateral cervical lymph nodes of mice was detected by flow cytometry. After the mice were anesthetized through intraperitoneal injection of 0.5% sodium pentobarbital, the ipsilateral cervical lymph nodes (about four available lymph nodes in each mouse) were collected immediately. (1) The mice were euthanized through cervical dislocation. After 75% alcohol was sprayed on the surface, the limbs were fixed on the dissecting table with pins. The mouse neck skin was pinched with forceps, the tissue was cut open, and the skin was stripped out. The neck lymph nodes of mice were carefully searched and the lymph nodes were collected through blunt separation. 
The lymph nodes were placed in Eppendorf tubes containing PBS balanced buffer. (2) Preparation of lymph node single-cell suspension: the lymph nodes collected in the above steps were transferred to a 200-mesh sieve, which was placed in a clean glass plate. Next, 5 mL PBS was dripped on the lymph nodes, which were gently ground with a 5 mL syringe (In this process, physiological saline was continuously drawn from the plate to wash the sieve to ensure that no cells were left out). The sieve was removed and the cell suspension in the plate was transferred into a 1.5 mL Eppendorf tube, followed by centrifugation at room temperature at 3000 rpm for 3 min. (3) Detection of cell surface molecules: 1 μL of the above single-cell suspension was taken out and diluted, followed by cell counting. The concentration of cell suspension was recorded, and centrifugation was performed at 4°C for 5 min to screen the myeloid cells. The cells were resuspended with flow cytometry staining buffer at a concentration of 10 6-7 /mL. Subsequently, 100 μL of the suspension was collected, and incubated with an appropriate amount of surface flow cytometry antibody at room temperature in darkness for 15 min. Finally, resuspension was performed using 150 μL cytometry staining buffer, followed by detection. Statistical analysis All data were processed utilizing SPSS 21.0 statistical software (SPSS, IBM, Armonk, NY, USA). The measurement data from three independent experiments were expressed by mean ± standard deviation. Comparisons between two groups with normal distribution and homogeneous variance were performed using an unpaired t-test. Data between multiple groups were compared using one-way analysis of variance (ANOVA), followed by the Tukey post hoc test, and those at different time points were compared by repeated measures of ANOVA. p < 0.05 indicated the difference was statistically significant. DATA AVAILABILITY The data that support the findings of this study are available on request from the corresponding author.
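The statistical analysis described above uses one-way ANOVA followed by Duncan's multiple-range test in SPSS. The sketch below reproduces the ANOVA step in Python and, because Duncan's test is not available in scipy or statsmodels, substitutes Tukey's HSD as an illustrative post hoc comparison; the group values are placeholders.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements (e.g. a graft-rejection readout) for three groups of n = 6 mice.
wt      = [4.1, 4.5, 3.9, 4.8, 4.2, 4.6]
ko      = [2.0, 2.4, 1.8, 2.2, 2.6, 2.1]
control = [1.9, 2.1, 2.3, 1.7, 2.0, 2.2]

f_stat, p = f_oneway(wt, ko, control)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3g}")

# Tukey's HSD as a stand-in post hoc test (Duncan's multiple-range test is not in these libraries).
values = np.concatenate([wt, ko, control])
labels = ["WT"] * 6 + ["KO"] * 6 + ["control"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```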
2022-06-17T13:36:59.679Z
2022-06-16T00:00:00.000
{ "year": 2022, "sha1": "6759f55a460a64ec7991c76bc29b21e5db78f83b", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e0e5b90e7b8cb54be89b5a60f4a6af007bb4f18d", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17488333
pes2o/s2orc
v3-fos-license
Recombinant Lactobacillus plantarum expressing and secreting heterologous oxalate decarboxylase prevents renal calcium oxalate stone deposition in experimental rats Background Calcium oxalate (CaOx) is the major constituent of about 75% of all urinary stone and the secondary hyperoxaluria is a primary risk factor. Current treatment options for the patients with hyperoxaluria and CaOx stone diseases are limited. Oxalate degrading bacteria might have beneficial effects on urinary oxalate excretion resulting from decreased intestinal oxalate concentration and absorption. Thus, the aim of the present study is to examine the in vivo oxalate degrading ability of genetically engineered Lactobacillus plantarum (L. plantarum) that constitutively expressing and secreting heterologous oxalate decarboxylase (OxdC) for prevention of CaOx stone formation in rats. The recombinants strain of L. plantarum that constitutively secreting (WCFS1OxdC) and non-secreting (NC8OxdC) OxdC has been developed by using expression vector pSIP401. The in vivo oxalate degradation ability for this recombinants strain was carried out in a male wistar albino rats. The group I control; groups II, III, IV and V rats were fed with 5% potassium oxalate diet and 14th day onwards group II, III, IV and V were received esophageal gavage of L. plantarum WCFS1, WCFS1OxdC and NC8OxdC respectively for 2-week period. The urinary and serum biochemistry and histopathology of the kidney were carried out. The experimental data were analyzed using one-way ANOVA followed by Duncan’s multiple-range test. Results Recombinants L. plantarum constitutively express and secretes the functional OxdC and could degrade the oxalate up to 70–77% under in vitro. The recombinant bacterial treated rats in groups IV and V showed significant reduction of urinary oxalate, calcium, uric acid, creatinine and serum uric acid, BUN/creatinine ratio compared to group II and III rats (P < 0.05). Oxalate levels in kidney homogenate of groups IV and V were showed significant reduction than group II and III rats (P < 0.05). Microscopic observations revealed a high score (4+) of CaOx crystal in kidneys of groups II and III, whereas no crystal in group IV and a lower score (1+) in group V. Conclusion The present results indicate that artificial colonization of recombinant strain, WCFS1OxdC and NC8OxdC, capable of reduce urinary oxalate excretion and CaOx crystal deposition by increased intestinal oxalate degradation. Electronic supplementary material The online version of this article (doi:10.1186/s12929-014-0086-y) contains supplementary material, which is available to authorized users. Background The lifetime risk for kidney stone disease currently exceeds 6-12% in the general population, and its prevalence appears to increase steadily in both sexes [1]. Calcium oxalate (CaOx) is the major constituent of about 75% of all urinary stones population [2]. Secondary hyperoxaluria either based on intestinal hyperabsorption of oxalate or high intake of oxalate is considered a crucial risk factor in the pathogenesis of CaOx stone formation [3]. Urinary oxalate (UOx) is predominantly derived from endogenous production of oxalate from ingested or metabolically generated precursors and from the diet. It has been suggested that dietary contribution to UOx excretion is up to 50% [4]. Some foods, particularly vegetables such as spinach, wheat bran, and cereals contain high amounts of oxalic acid [5]. 
An increased absorption of oxalate has been demonstrated in 46% of patients with CaOx kidney stone [6]. Existing treatments for patients with CaOx urolithiasis are limited and do not always lead to sufficient reduction in UOx excretion. Even though, the invasive technologies (shockwave lithotripsy, ureteroscopy, percutaneous stone extractions) exist, these techniques have its own disadvantages like renal injury, recurrent stone formation with a prevalence of 50% over 10 years. Another possible approach to prevent renal stone recurrence is to reduce the consumption of oxalate rich foods. Although, such dietary restriction is commonly advised to reduce stone recurrence, its long-term effectiveness is uncertain and would probably lead to deficiency in essential nutrients [7]. Thus, other methods meant to reduce intestinal oxalate absorption are required. Among them, the microbiological approach has received increasing attention in recent years. Oxalate degrading bacteria is being considered for degrading intestinal oxalate to prevent CaOx stone formation. Starting in 1980 with the discovery of an oxalotropic gut-resident bacterium Oxalobacter formigenes (O. formigenes) leading to a new research direction for the management of CaOx urolithiasis. O. formigenes is an anaerobic bacterium that naturally colonizes the colon of vertebrates, including humans, and utilizes oxalic acid as its sole source of energy [8]. The use of O. formigenes in reduction of oxalate excretion in urine and prevention of renal stone recurrence was elaborately studied [9,10]. However, endogenously derived oxalate supplement was needed to colonize the bacterium in the gut. Hence, usage of this bacterium raises some concern and the other side Oxalobacter strains are not considered mainstream therapy primarily due to lack of sufficient clinical data supporting their use. Earlier, reports have shown that lactic acid bacteria (LAB) have no influence on reduction of hyperoxaluria [11]. The discovery of oxalate decarboxylase (oxdC) gene in Bacillus subtilis (B. subtilis), which breaks down the oxalate in to formate and CO 2 raise a new hope to mitigate hyperoxaluria [12]. In subsequent years various research groups have demonstrated the use of oxalate decarboxylase (OxdC) protein in degradation of oxalate by in vitro and in vivo experiment for the treatment of hyperoxaluria [13][14][15]. Hence, we designed a strategy to engineer LAB component of intestinal microflora by heterologous expression of oxdC gene from B. subtilis origin. Artificial colonization with this recombinant strain may decrease the intestinal oxalate absorption and renal excretion by degrading dietary oxalate. In the present work, in vivo oxalate degrading potency of two recombinants Lactobacillus plantarum (L. plantarum) strains such as OxdC-secretory WCFS1OxdC [16] and non-secretory NC8OxdC [17] was investigated in rats fed with oxalate-rich diet. Chemicals and reagents Primers used were synthesized and procured from Sigma Aldrich (USA) [Additional file 1]. The experimental diet containing 5% potassium oxalate was procured from National Institute of Nutrition (NIN, Hyderabad, India). Hyperoxaluria and calcium oxalate crystal were induced in a rat model as described elsewhere [18]. Urinary and serum biochemical parameters were measured in semi automated photometer 5010 V5 + (Robert Riele GmbH, Germany) using commercially available kits [Additional file 2]. 
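Because OxdC converts each molecule of oxalate into one formate and one CO2, a measured drop in oxalate translates directly into the expected mass of products. The short calculation below illustrates this; the starting amount of oxalate is an arbitrary example, not a value from this study.

```python
# OxdC reaction: one oxalate is converted into one formate and one CO2.
M_OXALATE = 88.02   # g/mol, oxalate ion (C2O4^2-)
M_FORMATE = 45.02   # g/mol, formate ion (HCOO-)
M_CO2 = 44.01       # g/mol

def products_from_degraded_oxalate(oxalate_mg):
    """Expected formate and CO2 (mg) produced from a given mass of degraded oxalate."""
    mmol = oxalate_mg / M_OXALATE
    return mmol * M_FORMATE, mmol * M_CO2

formate_mg, co2_mg = products_from_degraded_oxalate(10.0)   # 10 mg is an arbitrary example
print(f"10 mg oxalate -> {formate_mg:.2f} mg formate + {co2_mg:.2f} mg CO2")
```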
Bacterial strains, media and growth conditions The bacterial strains and plasmids used in this study are listed in Table 1. L. plantarum was grown in deMan-Rogosa-Sharpe (MRS) medium at 30°C without shaking. Erythromycin was added to the MRS at a final concentration of 5 μg/mL for the growth of recombinant L. plantarum. Manipulation of recombinant Lactobacillus plantarum The genetically engineered OxdC-secreting L. plantarum WCFS1OxdC was developed previously [16], and the construction of the non-secreting L. plantarum NC8OxdC has been described [17]; both recombinants and the non-recombinant L. plantarum WCFS1 strain were used to evaluate in vivo oxalate degradation in a rat model. Preparation of live bacterial inocula The recombinant WCFS1OxdC and NC8OxdC strains and the non-recombinant L. plantarum WCFS1 strain were grown in MRS medium. The bacterial number per milliliter of culture was estimated from spectrophotometric measurements (OD600), and cell pellets were harvested by centrifugation at 5000 rpm. The pellet was washed and resuspended in sterile phosphate buffered saline (PBS) at 5 × 10^10 CFU mL−1 [10]. Animals and study design Male Wistar albino rats (130-140 g) were used in this study, and the experimental procedure was approved by the Internal Research and Review Board, Ethical Clearance, Biosafety and Animal Welfare Committee of Madurai Kamaraj University. The rats were divided into five groups (n = 6/group) and kept at 27 ± 2°C with a 12 h light/dark cycle. Group I control rats received standard rat chow, and the experimental rats (groups II, III, IV and V) received chow mixed with 5% potassium oxalate (weight/weight, oxalate/chow) to induce hyperoxaluria [18]. From day 14 onwards, group II rats received an esophageal gavage of 1 mL PBS per day, group III rats received the non-recombinant L. plantarum WCFS1, and group IV and V rats received the recombinant L. plantarum harboring plasmids pLdhl0373OxdC and pLdhlOxdC, respectively, at 5 × 10^10 CFU mL−1 day−1 [10]. At the end of the fourth week, the animals were sacrificed and serum samples were separated. Kidney tissues were processed for localization of crystals and for biochemical and other morphological analyses. Urine collection and analysis On days 0, 7, 14, 21 and 28 the rats were placed in metabolic cages and 24 h urine was collected in the presence of 0.02% sodium azide to prevent bacterial growth. After determining urinary volume and pH, urine was aliquoted for the various assays. Urinary oxalate, calcium, uric acid, creatinine and urea were determined with commercial kits in a semiautomatic photometer according to the manufacturer's protocol. Each week, one-hour urine samples were collected and examined by polarized light microscopy for the presence of CaOx crystalluria, scored on a scale of 0-3+ [20]. Determination of recombinant L. plantarum in feces Determination of recombinant L. plantarum in feces was carried out by culture methods as well as by PCR as described elsewhere [10]. Serum parameters analysis Serum parameters such as creatinine, calcium, urea, uric acid, protein and C-reactive protein (CRP) were measured using the respective kits as suggested by the manufacturer (Additional file 2). (Table 2 legend: data are expressed as mean ± SD, n = 6 rats per group; urinary oxalate is given in µmol/24 h urine; a*, b* and c* indicate that the mean value is significantly different at p < 0.05 from group I (control), group II (lithiatic control) and group III (non-recombinant strain), respectively.) Analysis of oxalate and calcium in kidney homogenate A pair of kidneys from each group of rats was removed and a section of kidney was used for the analysis of oxalate and calcium. Kidney tissue was rinsed with ice-cold saline (0.9% w/v sodium chloride), repeatedly washed with 0.15 M KCl, weighed, homogenized in 10% HCl and centrifuged at 2500 rpm for 3 min. The supernatant was used to determine oxalate and calcium. Oxalate concentration was determined manually by a colorimetric method described elsewhere [21]. RNA isolation and semi-quantitative RT-PCR The mRNA levels of glyceraldehyde-3-phosphate dehydrogenase (GAPDH), OPN, renin, and ACE in the kidney were quantified by semi-quantitative reverse transcriptase-polymerase chain reaction (RT-PCR) [Additional file 3]. Analysis of histopathology and CaOx crystal in kidney The kidney tissue from each group was fixed in 10% neutral buffered formalin, trimmed, processed, and embedded in paraffin. Sections from each kidney were stained with hematoxylin and eosin and examined under a light microscope for pathological analysis and under a polarized light microscope for visualizing CaOx crystals. The presence of CaOx crystals was scored on a scale of 0-5+ [22]. CaOx crystals present in each kidney tissue were also examined by the Pizzolato staining method [23]. Pathological analysis was performed with the help of a qualified pathologist. Statistical analysis Data were expressed as mean ± SD. The statistical significance between subgroups was analyzed with one-way ANOVA followed by Duncan's multiple-range test using SPSS software. Results were considered significant if the P value was < 0.05. Engineered LAB efficiently degraded oxalate in vitro The recombinant OxdC-secretory L. plantarum WCFS1OxdC, harboring the 4.7 kb recombinant vector pLdhl0373OxdC, and the non-secretory L. plantarum NC8OxdC, harboring the recombinant plasmid without the signal peptide sequence (pLdhlOxdC), were used to analyze in vivo oxalate degradation in the rat model. A schematic representation of the expression cassettes of the recombinant plasmids used for secretion and expression of OxdC in L. plantarum is shown in Figure 1. The OxdC-secreting WCFS1OxdC strain, harboring plasmid pLdhl0373OxdC, carries the constitutive promoter (PldhL) and signal peptide (Lp_0373) sequences; as a result, the WCFS1OxdC strain secretes functional OxdC extracellularly and degraded 70% of extracellular oxalate (Figure 2). The specific activity of recombinant OxdC purified from the WCFS1OxdC strain was found to be 19.1 U/mg, and the secretion efficiency of the strain shows that 25% of the OxdC produced was secreted into the medium. The OxdC non-secreting NC8OxdC strain harbors the recombinant plasmid pLdhlOxdC, which contains the constitutive promoter (PldhL) but lacks the signal peptide sequence. Thus, the NC8OxdC strain expresses biologically active OxdC intracellularly and degraded 77% of oxalate under in vitro conditions (Figure 2). As expected, the wild-type L. plantarum WCFS1 was unable to degrade oxalate.
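The group comparisons reported below were evaluated, as described in the Statistical analysis section above, by one-way ANOVA followed by Duncan's multiple-range test in SPSS. A minimal Python sketch of an equivalent workflow is shown here; since Duncan's test is not available in SciPy/statsmodels, Tukey's HSD is used as a stand-in post-hoc test, and the numbers are placeholders rather than the study data.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical 24 h urinary oxalate values for the five groups (placeholders).
groups = {
    "I":   np.array([2.4, 2.6, 2.5, 2.3, 2.7, 2.5]),
    "II":  np.array([6.1, 6.5, 5.9, 6.3, 6.8, 6.0]),
    "III": np.array([6.0, 6.2, 5.8, 6.4, 6.1, 6.3]),
    "IV":  np.array([3.4, 3.6, 3.2, 3.8, 3.5, 3.3]),
    "V":   np.array([4.2, 4.5, 4.0, 4.4, 4.1, 4.3]),
}

# One-way ANOVA across the five groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons if the ANOVA is significant (alpha = 0.05).
if p_value < 0.05:
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))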
Oxalate degrading recombinant LAB improved primary health of hyperoxaluric rat Control rats (group I), received standard chow, and experimental rats (group II, III, IV and V), which received oxalate mixed food stayed healthy and gained weight. However with time, experimental rats gained significantly lesser weight than control (P < 0.05), while rats in groups IV and V receiving the recombinant L. plantarum WCFS1OxdC and NC8OxdC respectively gained more weight than groups II and III (P < 0.05, Table 2). Urinary pH was seen lower in experimental rats than control (P < 0.05, Table 2) and pH of group IV and V showed increased level than group II and III (P < 0.05). Urinary excretion of creatinine increased with time in all animals but it was significantly higher in experimental group than control (P < 0.05). However, at the end of experiment (Day 28), mean value of creatinine in groups IV and V showed significantly lower (P < 0.05) against group II and III rats ( Table 2). Excretion of uric acid in groups II and III rats showed significant increase (P < 0.05) when compared to group I, IV and V ( Table 2). Rats artificially colonized by recombinant LAB reduced urinary oxalate excretion Compared to baseline values of urinary oxalate (UOx), the excretion was significantly increased in all groups (P < 0.05). By days 7, 14, 21 and 28, excretion of urinary oxalate in groups II, III and V showed significantly increased level than group I (P < 0.05). On the other hand, the excretion of oxalate in group IV rats showed significant variations on day 7, 14 and 21 when compared to group I (P < 0.05), whereas, on 28 th day no significant variation was observed ( Figure 3A). When the comparisons were made between group II and treated groups (III, IV and V) the UOx excretion on day 21 and 28, groups IV and V rats showed significant reduction than group II (P < 0.05). Similarly, when compared to non-recombinant bacterial treated group III, significant decrease of UOx excretion was seen in groups IV and V (P < 0.05), at the end of experiment ( Figure 3A). Urinary calcium on baseline does not show any significant change in all groups. Compared to the group I rats calcium level was increased significantly in all groups during the experimental days (P < 0.05). While compared to group II and III, the urinary calcium level dropped significantly in group IV on 21 st and 28 th day (P < 0.05), and group V shows significantly lower level against group II and III rats at 28 th day (P < 0.05, Figure 3B). Urea level of all groups at baseline, 7 th , 14 th and 21 st day did not show any significant difference against group I, whereas on 28 th day the group II and III showed significantly increased level than group I rats (P < 0.05). On the other hand, significantly decreased level of urea was observed in groups IV and V against groups II and III (P < 0.05, Figure 3C). Recombinant L. plantarum survived in rat intestine The colony forming units (CFU) method and PCR was used to detect the presence of live recombinant and nonrecombinant L. plantarum in the intestine of treated rats. Mean colony forming units (CFU) per gram of feces in group III, IV and V was 6.00 ± 0.13 (L. plantarum WCFS1), 6.24 ± 0.12 (WCFS1OxdC) and 6.10 ± 0.10 (NC8OxdC) respectively ( Figure 4A). Whereas, no strains were detected in the feces of groups I and II. PCR confirmed that the fecal DNA in group IV and V rats alone produces the amplicon corresponding to OxdC gene (1.2 kb) ( Figure 4B). 
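The fecal counts quoted above (about 6 log units per gram of feces) come from standard dilution plating. The sketch below shows the underlying arithmetic; the suspension volume, dilution factor and colony count used here are hypothetical and are not taken from the study.

import math

def log_cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                     suspension_volume_ml, sample_mass_g):
    """log10 CFU per gram of feces from a plate count.

    colonies             - colonies counted on the plate
    dilution_factor      - e.g. 1e3 for a 10^-3 dilution
    plated_volume_ml     - volume spread on the plate (mL)
    suspension_volume_ml - volume of the initial fecal suspension (mL)
    sample_mass_g        - mass of feces used to make the suspension (g)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    cfu_per_g = cfu_per_ml * suspension_volume_ml / sample_mass_g
    return math.log10(cfu_per_g)

# Hypothetical plate: 105 colonies from 0.1 mL of a 10^-3 dilution of 1 g of
# feces suspended in 10 mL PBS (all values illustrative only).
print(f"log10 CFU/g = {log_cfu_per_gram(105, 1e3, 0.1, 10.0, 1.0):.2f}")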
Prevention of crystalluria in recombinant treated rats All experimental rats were examined for the presence of CaOx crystal in urine after the administration of nonrecombinant and recombinant L. plantarum. Group I control rats urine was devoid of any CaOx crystal throughout experimental period. By day 28, rats in groups II and III showed high score (2+) of CaOx crystal, while group V urine shows low score (1+). The group IV rats did not show any CaOx crystal ( Figure 5). Recombinant L. plantarum maintained normal serum parameters in hyperoxaluric rats Blood urea nitrogen and creatinine ratio (BUN/Creatinine) was calculated to predict the renal function. The mean value of BUN/Creatinine ratio in groups II and III rats was 41.04 ± 1.68 and 40.04 ± 0.54 respectively, against group I (37.52 ± 1.30). Whereas groups IV and V showed 34.61 ± 1.46 and 36.35 ± 1.19, which clearly reveal the significant difference in group II and III (P < 0.05) than group I. The uric acid was predicted to be increased in groups II and III against group I (P < 0.05). However, no significant difference was observed in groups IV and V against group I (Table 3). In order to predict the inflammation, C-reactive protein (CRP) level was measured in the serum sample of all groups. When compared to control group, significantly Table 3). Recombinant L. plantarum administered rats reduced oxalate level in kidney Oxalate concentration in kidney tissue homogenate of groups II, III and V showed significant increase (P < 0.05) when compared to groups I and IV rats. However, the recombinant L. plantarum administered groups IV and V showed significantly decreased level of oxalate compared to group II and III (P < 0.05, Figure 6A). The concentration of calcium level significantly increased in groups II and III against groups I, IV and V rats (P < 0.05, Figure 6B). Gene expression analysis and renal histopathology revealed reversal of kidney stone-induced damage in hyperoxaluric rats Renal function was examined by using semi-quantitative PCR for renin, ACE and OPN expression. The upregulation of renin mRNA was observed in groups II and III when compared to group I rats. While the recombinant bacterial treated group IV and V shows significant reduction in mRNA level compared to group II and III. The down regulations of ACE, OPN mRNA were seen in groups II, III, IV and V rats ( Figure 7A, B). Histopathological examination of kidney sections of group I rats showed normal histological structures. Group II and III rats showed a reduced number of glomeruli and large areas of red blood cell casts with dialated tubules. Stroma showed hemorrhage and blood vessels were congested and thickened. Sections obtained from rats in the group IV administered with WCFS1OxdC revealed normal glomeruli with no red blood cast, but slight tubular necrosis. Examination of stroma shows areas of hemorrhage. Similarly, group V rats that received NC8OxdC showed normal glomeruli, but high tubular necrosis and congested blood vessels. The CaOx crystals were examined by pizzolato staining and also by using polarized microscopy. It revealed no incidence of CaOx crystal deposition in group I whereas as high score (4+) of CaOx crystals in groups II and III rats. However, group IV showed no identifiable crystal deposits in the kidneys and group V showed significantly lower score (1+) (Figure 8). Discussion Dietary oxalate is a major contributor to urinary oxalate (UOx) excretion in humans [4]. 
The identification of intestinal oxalate degrading bacteria provided a new direction for the reduction of UOx [24]. The present study is to examine the efficacy of heterologous OxdC expressing and secreting recombinant L. plantarum to degrade the intestinal oxalate thereby preventing hyperoxaluria and CaOx urolithiasis in rats. Previously, we reported in vitro degradation of oxalate by recombinant L. plantarum expressing heterologous OxdC at intracellular level [17]. Since the expression was intracellular, we made an attempt to express OxdC extracellularly to increase the oxalate degradation efficiency. Sasikumar et al. [25] analyzed the two homologous signal peptide (SP) such as Lp_0373 and Lp_3050 of L. plantarum for the extracellular expression OxdC under inducible condition and results shown that the SP (Lp_0373) efficiently secrete the OxdC than the SP (Lp_3050). Later on, by using previously characterized homologous promoter (P ldhL ) and signal peptide (Lp_0373) sequences, the genetically modified constitutively OxdC-secretory WCFS1OxdC strain was developed [16]. The resulting L. plantarum strain found to be very efficient for secretion of OxdC and degradation of extracellular oxalate. Here, the intragastric oxalate degrading efficiency of intracellular and extracellular OxdC expressing recombinant L. plantarum was evaluated in rats. Results of plasmid segregation analysis reveal daily administration of recombinant L. plantarum is vital since the L. plantarum lost almost 70-90% of erythromycin-based plasmid [16]. Hence, artificial intestinal colonization and oxalate degradation in rat was established via the daily load, as a result the expression of OxdC was retained. In future, the plasmid can be stabilized by constructing mutants lacking essential genes like alr (alanine racemase), which can be complimented by adding back via the plasmid [26]. O. formigenes is efficient in oxalate degradation and had been proposed for its application for degrading intestinal oxalate [10,27,28]. Numerous studies have linked the absence of O. formigenes to higher UOx excretion [29,30]. Reports revealed no significant difference in UOx excretion between patients who tested positive or negative for O. formigenes [31]. In addition, colonization of O. formigenes in the gut require oral oxalate supplements [9]. Sidhu et al. [27] demonstrated that when oxalate is removed from the diet, artificially colonized rats lose colonization within 5 days. Since the uses of O. formigenes in mitigation of intestinal oxalate have difficulty, here we tried alternatively by using recombinant L. plantarum secreting OxdC protein extracellular level for degradation of intestinal oxalate. The significant reduction of urinary oxalate excretion in group IV and V rats clearly illustrates the degradation of dietary oxalate by the presence of recombinant L. plantarum WCFS1OxdC and NC8OxdC. Hyperoxaluric conditions were observed in the absence of recombinant strain in group II and III rats. Even though, groups IV and V rats showed significant reduction in UOx excretion, the higher reduction was seen in group IV (43%) than in group V (30%) which suggested that intestinal oxalate in group IV is better degraded than in group V rats. When compared to group II, 40% and 25% of total oxalate concentration was reduced in the kidney tissue of group IV and V rats and 45% and 30% of oxalate reduction when compared to wild type L. plantarum treated group III rats respectively. 
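The percentage reductions quoted in this paragraph follow directly from the group means. A minimal sketch of that arithmetic, with placeholder means rather than the measured values, is given below.

def percent_reduction(reference_mean, treated_mean):
    """Percent reduction of a treated-group mean relative to a reference mean."""
    return 100.0 * (reference_mean - treated_mean) / reference_mean

# Hypothetical group means for urinary oxalate (µmol/24 h); only the layout of
# the comparison follows the text, the numbers themselves are placeholders.
group_means = {"II": 6.3, "III": 6.1, "IV": 3.6, "V": 4.4}

for treated in ("IV", "V"):
    for reference in ("II", "III"):
        red = percent_reduction(group_means[reference], group_means[treated])
        print(f"Group {treated} vs group {reference}: {red:.0f}% reduction")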
The higher reduction of oxalate in kidney tissue of group IV rats administrated with recombinant WCFS1OxdC strain was associated with the secretion of OxdC, which prevented hyperoxaluria effectively compared to non-secretory NC8OxdC strain treated rats (group V) by promoting higher degradation of intestinal oxalate. Increase in calcium and oxalate content in the renal tissue of group II and III were associated with oxalate supplemented diet. Orally administered Escherichia coli (E. coli) expressed recombinant B. subtilis OxdC has substantially declined the UOx level in experimental rat [13]. Oral therapy with crystalline, cross-linked formulation of the OxdC in mice diminishes symptoms of hyperoxaluria and urolithiasis [14]. Furthermore, orally given formulation of B. subtilis OxdC, was shown to be safe in rats and dogs during short-term toxicity tests [15]. Although, the use of OxdC enzyme to decompose intestinal oxalate was broadly demonstrated, this approach to treat hyperoxaluria can be very expensive and daily load of OxdC was also required. The recombinant L. plantarum developed in this study was degrading intestinal oxalate by simply colonizing bacterium in the gut. However, improvement in strategy of artificial colonization of the strain for its use as probiotics is majorly required. The significantly lower excretion of urinary urea, uric acid, creatinine and serum BUN/Creatinine ratio, uric acid in recombinant strain administered rats in group IV and V reveals the oxalate mediated renal damage was protected in rats group by degrading intestinal oxalate and thereby preventing oxalate toxicity. Increased level of urinary creatinine and serum BUN/Creatinine ratio in group II and III rats associated with renal tissue damage and functional abnormalities by the oxalate induced toxicity. The changes in the urinary pH of rats in group II and III might be associated with the distal tubular dysfunction. A significant increase in the expression of renin mRNA in kidneys of groups II and III rats suggesting higher oxalate stress in kidney due to the oxalate diet. While, reversed expression of renin mRNA in group IV and V indicating that oxalate stress in the kidney was reduced due to the degradation of oxalate in intestine by the administered recombinant L. plantarum. Similarly, the increase in renin mRNA expression is associated with hyperoxaluria and CaOx crystal deposition [32]. Microscopic examination of urinary sediments of oxalate-diet fed rats in groups II and III showed a high score of CaOx crystal than rats in groups IV and V at the end of experimental period. Earlier reports also suggested that administration of oxalate supplemented diet induced CaOx crystal in urine [33]. Polarized microscopic examination of paraffin kidney sections revealed no significant CaOx crystal in group IV rats that received OxdC-secreting strain (WCFS1OxdC), whereas, group V rats administered with non-secretory strain (NC8OxdC) showed lower CaOx crystal deposition. This observation reveal that kidney of group IV rats was better protected from oxalate toxicity compared to group V. But, group III rats receiving wild type L. plantarum showed higher crystal score, suggesting that the wild type strain does not degrade the intestinal oxalate that lead to higher crystal aggregation. Similar results were also observed in pizzolato stained kidney sections of experimental rat groups (I, II, III, IV and V). 
Histopathology observation of kidney tissue of groups II and III rats showed kidney damage, while the group IV and V rats kidney showed normal glomeruli with moderate and high necrosis respectively. The increased level of CRP in the serum of group II and III rats was associated with the renal inflammation and renal function abnormalities, which was also clearly observed in histological studies. However, the significantly decreased CRP levels were observed in groups IV and V compared to groups II and III rats, that indicates renal damage was protected due to the reduction of oxalate toxicity by the recombinant L. plantarum. The present study showed the artificial colonization of L. plantarum harboring the plasmid pLdhl0373OxdC and pLdhlOxdC containing oxalate degrading gene (oxdC) decrease urinary oxalate excretion and CaOx crystal deposition in rats due to the degradation of dietary oxalate in intestine by OxdC expressing and secreting recombinant L. plantarum. However, using them as a probiotic require improvement by stabilizing the plasmid by constructing mutant strain lacking essential genes (eg., thyA or alr).
Safety evaluation of the food enzyme pectinesterase from the genetically modified Trichoderma reesei strain RF6201 Abstract The food enzyme pectinesterase (pectin pectylhydrolase; EC 3.1.1.11) is produced with the genetically modified Trichoderma reesei strain RF6201 by AB Enzymes GmbH. The genetic modifications do not give rise to safety concerns. The food enzyme was considered free from viable cells of the production organism and its DNA. It is intended to be used in five food manufacturing processes: fruit and vegetable processing for juice production, fruit and vegetable processing for products other than juices, production of wine and wine vinegar, coffee demucilation and production of plant extracts as flavouring preparations. Since residual amounts of the total organic solids (TOS) are removed during the coffee demucilation and the production of flavouring extracts, dietary exposure was calculated only for the remaining three food processes. It was estimated to be up to 0.532 mg TOS/kg body weight (bw) per day in European populations. Genotoxicity tests did not indicate a safety concern. The systemic toxicity was assessed by means of a repeated dose 90‐day oral toxicity study in rats. The Panel identified a no observed adverse effect level of 1,000 mg TOS/kg bw per day, the highest dose tested, which, when compared with the estimated dietary exposure, resulted in a margin of exposure of at least 1,880. A search for the similarity of the amino acid sequence of the food enzyme to known allergens was made and two matches were found with pollen allergens. The Panel considered that, under the intended conditions of use, the risk of allergic reactions upon dietary exposure, particularly in individuals sensitised to pollen allergens, cannot be excluded. Based on the data provided, the Panel concluded that this food enzyme does not give rise to safety concerns under the intended conditions of use. Introduction Article 3 of the Regulation (EC) No 1332/2008 1 provides definition for 'food enzyme' and 'food enzyme preparation'. 'Food enzyme' means a product obtained from plants, animals or microorganisms or products thereof including a product obtained by a fermentation process using microorganisms: (i) containing one or more enzymes capable of catalysing a specific biochemical reaction; and (ii) added to food for a technological purpose at any stage of the manufacturing, processing, preparation, treatment, packaging, transport or storage of foods. 'Food enzyme preparation' means a formulation consisting of one or more food enzymes in which substances such as food additives and/or other food ingredients are incorporated to facilitate their storage, sale, standardisation, dilution or dissolution. Before January 2009, food enzymes other than those used as food additives were not regulated or were regulated as processing aids under the legislation of the Member States. On 20 January 2009, Regulation (EC) No 1332/2008 on food enzymes came into force. This Regulation applies to enzymes that are added to food to perform a technological function in the manufacture, processing, preparation, treatment, packaging, transport or storage of such food, including enzymes used as processing aids. Regulation (EC) No 1331/2008 2 established the European Union (EU) procedures for the safety assessment and the authorisation procedure of food additives, food enzymes and food flavourings. 
The use of a food enzyme shall be authorised only if it is demonstrated that: • it does not pose a safety concern to the health of the consumer at the level of use proposed; • there is a reasonable technological need; • its use does not mislead the consumer. All food enzymes currently on the EU market and intended to remain on that market, as well as all new food enzymes, shall be subjected to a safety evaluation by the European Food Safety Authority (EFSA) and approval via an EU Community list. The 'Guidance on submission of a dossier on food enzymes for safety evaluation' (EFSA, 2009a) lays down the administrative, technical and toxicological data required. 1.1. Background and terms of Reference as provided by the requestor 1.1.1. Background as provided by the European Commission Only food enzymes included in the EU Community list may be placed on the market as such and used in foods, in accordance with the specifications and conditions of use provided for in Article 7 (2) of Regulation (EC) No 1332/2008 on food enzymes. Four applications have been submitted by the companies "Novozymes A/S" and "AB Enzymes GmbH" for the authorisation of the food enzymes Alpha-amylase from a genetically modified strain of Bacillus licheniformis (strain NZYM-AV), Beta-glucanase, Xylanase and Cellulase produced by a strain of Humicola insolens (strain NZYM-ST), Polygalacturonase from a genetically modified strain of Trichoderma reesei (strain RF6197) and Pectin esterase from a genetically modified strain of Trichoderma reesei (strain RF6201). Following the requirements of Article 12.1 of Commission Regulation (EC) No 234/2011 3 implementing Regulation (EC) No 1331/2008, the Commission has verified that the four applications fall within the scope of the food enzyme Regulation and contain all the elements required under Chapter II of that Regulation. Terms of Reference The European Commission requests the European Food Safety Authority to carry out the safety assessment on the food enzymes Alpha-amylase from a genetically modified strain of Bacillus licheniformis (strain NZYM-AV); Beta-glucanase, Xylanase and Cellulase produced by a strain of Humicola insolens (strain NZYM-ST); Polygalacturonase from a genetically modified strain of Trichoderma reesei (strain RF6197) and Pectin esterase from a genetically modified strain of Trichoderma reesei (strain RF6201) in accordance with Article 17.3 of Regulation (EC) No 1332/2008 on food enzymes. 1.2. Interpretation of the terms of Reference The present scientific opinion addresses the European Commission's request to carry out the safety assessment of food enzyme pectinesterase from the genetically modified Trichoderma reesei strain RF6201. 2. Data and methodologies Data The applicant has submitted a dossier in support of the application for authorisation of the food enzyme pectinesterase from a genetically modified Trichoderma reesei (strain RF6201). Additional information was requested from the applicant during the assessment process on 25 February 2022 and was consequently provided (see 'Documentation provided to EFSA'). Methodologies The assessment was conducted in line with the principles described in the EFSA 'Guidance on transparency in the scientific aspects of risk assessment' (EFSA, 2009b) and following the relevant guidance documents of the EFSA Scientific Committee. 
The 'Guidance on the submission of a dossier on food enzymes for safety evaluation' (EFSA, 2009a) as well as the 'Statement on characterisation of microorganisms used for the production of food enzymes' (EFSA CEP Panel, 2019) have been followed for the evaluation of the application with the exception of the exposure assessment, which was carried out in accordance with the updated 'Scientific Guidance for the submission of dossiers on food enzymes' (EFSA CEP Panel, 2021a). Pectinesterases catalyse the de-esterification of pectin, resulting in the generation of pectic acid and methanol. The food enzyme under assessment is intended to be used in five food manufacturing processes: fruit and vegetable processing for juice production, fruit and vegetable processing for products other than juices, production of wine and wine vinegar, coffee demucilation and production of plant extracts as flavouring preparations. 3.1. Source of the food enzyme The pectinesterase is produced with T. reesei strain RF6201, which is deposited in the Westerdijk Fungal Biodiversity Institute culture collection (CBS, the Netherlands) with the deposit number . 4 The production strain was identified as T. reesei . 5 Characteristics of the parental and recipient microorganisms The parental strain was . 6 . 7 Characteristics of introduced sequences The sequence encoding the pectinesterase ( . 8 Description of the genetic modification process The purpose of genetic modification was to enable the production strain to synthesise pectinesterase . 9 10 3.1.4. Safety aspects of the genetic modification The technical dossier contains all necessary information on the recipient microorganism, the donor organism and the genetic modification process. The production strain T. reesei RF6201 differs from the recipient strain in its capacity to produce the pectinesterase . 11 No issues of concern arising from the genetic modifications were identified by the Panel. Production of the food enzyme The food enzyme is manufactured according to the Food Hygiene Regulation (EC) No 852/2004 12 , with food safety procedures based on Hazard Analysis and Critical Control Points, and in accordance with current good manufacturing practice. 13 The production strain is grown as a pure culture using a typical industrial medium in a submerged, fermentation system with conventional process controls in place. After completion of the fermentation, the solid biomass is removed from the fermentation broth by filtration. The filtrate containing the enzyme is then further purified and concentrated, including an ultrafiltration step in which enzyme protein is retained, while most of the low molecular mass material passes the filtration membrane and is discarded. 14 The applicant provided information on the identity of the substances used to control the fermentation and in the subsequent downstream processing of the food enzyme. 15 The Panel considered that sufficient information has been provided on the manufacturing process and the quality assurance system implemented by the applicant to exclude issues of concern. 3.3. Characteristics of the food enzyme Properties of the food enzyme The pectinesterase is a single polypeptide chain of amino acids. 16 The molecular mass of the mature protein, calculated from the amino acid sequence, is around kDa. The food enzyme was analysed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). 17 A consistent protein pattern was observed across all batches. 
The gel showed three major protein bands of about kDa, corresponding to differently glycosylated forms of the enzyme. The food enzyme was tested for b-glucanase, cellulase and xylanase activities and all were detected. 18 No other enzyme activities were reported. The in-house determination of pectinesterase activity is based on the hydrolysis of citrus pectin (reaction conditions: pH 4.5, 30°C, 8 min). The enzymatic activity is determined by measuring the released free carboxylic groups that are titrated with sodium hydroxide. Pectinesterase activity is expressed in pectin esterase units (PE)/g. One unit is defined as the amount of enzyme that will release 1 lmol of acid groups per minute under the conditions of the assay. 19 The food enzyme has a temperature optimum around 40°C (pH 4.5) and a pH optimum around pH 4.5 (30°C). Thermostability was tested after a pre-incubation of the food enzyme at 85°C for different time periods (pH 4.5). No activity was detected after 2 min pre-incubation. 20 Chemical parameters Data on the chemical parameters of the food enzyme were provided for three batches used for commercialisation and one batch produced for the toxicological tests (Table 1). 21 The mean total organic solids (TOS) of the three food enzyme batches for commercialisation was 25.8% and the mean enzyme activity/TOS ratio was 404 PE/mg TOS. Purity The lead content in the three commercial batches and in the batch used for toxicological studies was below 5 mg/kg, 22,23 which complies with the specification for lead as laid down in the general specifications for enzymes used in food processing (FAO/WHO, 2006). The food enzyme preparation complies with the microbiological criteria for total coliforms, Escherichia coli and Salmonella, as laid down in the general specifications for enzymes used in food processing (FAO/WHO, 2006). 22 No antimicrobial activity was detected in any of the tested batches. 22 Strains of Trichoderma, in common with most filamentous fungi, have the capacity to produce a range of secondary metabolites (Frisvad et al., 2018). The presence of T2-toxin and HT2-toxin was examined in the four food enzyme batches and all were below the limit of quantification (LoQ) of the applied methods. 22,24 Adverse effects caused by the possible presence of other secondary metabolites are addressed by the toxicological examination of the food enzyme-TOS. The Panel considered that the information provided on the purity of the food enzyme is sufficient. Viable cells and DNA of the production strain The absence of viable cells of the production strain in the food enzyme was demonstrated . 25 . No colonies of the production strain were detected. A positive control was included. The absence of recombinant DNA in the food enzyme was demonstrated . 26 Toxicological data A battery of toxicological tests, including a bacterial reverse mutation test (Ames test), an in vitro mammalian chromosomal aberration test and a repeated dose 90-day oral toxicity study in rats, was provided. The batch 4 (Table 1) used in these studies has a comparable activity/TOS value to those of the commercial batches and was considered suitable as a test item. A bacterial reverse mutation assay (Ames test) was performed according to Organisation for Economic Co-operation and Development (OECD) Test Guideline 471 (OECD, 1997a) and following good laboratory practice (GLP). 
27 Five strains of Salmonella Typhimurium (TA98, TA100, TA102, TA1535 and TA1537) were used in the presence or absence of metabolic activation, applying the standard plate incorporation method (experiment I) and the preincubation method (experiment II). The first experiment used eight concentrations of the food enzyme (3, 10, 33, 100, 333, 1,000, 2,500 and 5,000 µg TOS/plate) and the second experiment six concentrations of the food enzyme (33, 100, 333, 1,000, 2,500 and 5,000 µg TOS/plate). No cytotoxicity was observed at any concentration tested. Upon treatment with the food enzyme, there was no significant increase in revertant colony numbers above the control values in any strain with or without S9-mix. The Panel concluded that the food enzyme pectinesterase did not induce gene mutations under the test conditions employed in this study. In vitro mammalian chromosomal aberration test The in vitro mammalian chromosomal aberration test was carried out in Chinese hamster V79 lung cells according to OECD Test Guideline 473 (OECD, 1997b) and following GLP. 28 The dose-finding study was performed at concentrations ranging from 331.9 to 5,310 µg/mL, and no inhibition of cell growth of 50% or more was observed. Based on these results, the cells were exposed to the food enzyme at 1,327, 2,655 and 5,310 µg/mL (corresponding to 1,250, 2,500 and 5,000 µg TOS/mL) in a short-term treatment (4 h followed by 14 h recovery period) with and without metabolic activation (S9-mix), and in a continuous treatment (18 h) in the absence of S9-mix. No cytotoxicity was observed at any concentration tested. The frequency of structural and numerical chromosomal aberrations in treated cultures was comparable to the values detected in negative controls and within the range of the laboratory historical control data. The Panel concluded that the food enzyme pectinesterase does not induce chromosome aberrations under the test conditions employed for this study. Repeated dose 90-day oral toxicity study in rodents The repeated dose 90-day oral toxicity study was performed in accordance with OECD Test Guideline 408 (OECD, 1998) and following GLP. 29 Groups of 10 male and 10 female RccHan™:WIST (SPF) rats received by gavage the food enzyme in doses equivalent to 100, 300 and 1,000 mg TOS/kg bw per day. Controls received the vehicle (bidistilled water). One low-dose male was found dead on day 67 of treatment. The necropsy findings indicated misdosing as the cause of death. The body weight was statistically significantly increased on day 15 (+8%) in mid-dose females and on days 15 (+9%) and 22 (+7%) in low-dose females when compared with controls. The body weight gain was statistically significantly decreased from day 8 onwards, with statistical significance on days 15, 22 and 29 (−15%, −18%, −16%, respectively) of administration in high-dose males. The body weight gain was statistically significantly increased (+58%, +58%, +58%, respectively) on day 15 of administration in all treated females. The Panel considered the changes as not toxicologically relevant, as they were only recorded sporadically and no statistically significant changes in the final body weight and body weight gain were reported. Functional observational battery tests revealed that locomotor activity was statistically significantly increased in mid-dose males during 0-10 min (+30%) and decreased in high-dose males from 50 to 60 min (−64%) when compared with the controls.
The Panel considered the changes as not toxicologically relevant as they were only recorded sporadically, they were only seen in one sex and there was no dose-response relationship (the first interval). The haematological investigation revealed in high-dose males a statistically significant decrease in relative reticulocyte counts (−17%), in mean high-fluorescence reticulocytes (−41%) and a statistically significant increase in mean low-fluorescence reticulocytes (+11%). In mid-dose males, a higher methaemoglobin level was noted (+13%). In high-dose females, a statistically significant increase in the relative monocyte counts (+63%) was reported. In mid-dose females, reduced white blood cell count (WBC; −19%) and absolute basophil count (−50%) were noted. Reduced lymphocyte counts were reported in mid- (−25%) and high-dose females (−17%). The Panel considered the changes as not toxicologically relevant as they were only observed in one sex (all parameters), there was no dose-response relationship (methaemoglobin, WBC, absolute basophil and lymphocyte counts), the magnitude of the changes was small (absolute basophil and relative monocyte count), there were no changes in other relevant parameters (for lymphocytes in a total white blood cell count), the changes were within the historical control values and there were no changes in the bone marrow (reticulocytes). The clinical chemistry investigation revealed a statistically significant decrease in total bilirubin (−21%, −25%, −25%, respectively) in all treated males. A statistically significant increase in sodium (+1%) was reported in high-dose males and in chloride levels in mid- (+1%) and high-dose (+2%) males. In high-dose females, a statistically significant decrease in lactate dehydrogenase (LDH) activity (−33%) and an increase in calcium (+3%) were reported. A statistically significant increase in sodium (+1%, +2%, respectively) and chloride (+3%, +2%, respectively) were observed in mid- and high-dose females. In mid-dose females, a statistically significant decrease in phosphorus (−17%) was noted. The Panel considered the changes as not toxicologically relevant as they were only observed in one sex (bilirubin, LDH, calcium, phosphorus), there was no dose-response relationship (chloride in females, phosphorus) and the changes were within the historical control values (with the exception of the sodium levels in females, which were slightly outside the historical control values, i.e. 148.8 and 149.4 mmol/L vs. 137.8-147.8 mmol/L in the historical controls). Statistically significant changes in organ weights included an increase in absolute heart weights (+15%), heart-to-body weight ratio (+10%) and heart-to-brain weight ratio (+12%) in high-dose females. In low-dose females, a statistically significant increase in absolute heart weight (+12%) and absolute liver weight (+15%) and a decrease in ovary-to-body weight ratio (−19%) were noted. The Panel considered the changes as not toxicologically relevant as they were only observed in one sex, the changes were small (heart, liver), there was no dose-response relationship (absolute liver weight, relative ovary weight), and there were no histopathological changes in the organs. No other statistically significant or biologically relevant differences to controls were reported. The Panel identified the no observed adverse effect level (NOAEL) of 1,000 mg TOS/kg bw per day, the highest dose tested.
Allergenicity The allergenicity assessment considered only the food enzyme and not any carrier or other excipient that may be used in the final formulation. The potential allergenicity of the pectinesterase produced with the genetically modified T. reesei strain RF6201 was assessed by comparing its amino acid sequence with those of known allergens according to the 'Scientific opinion on the assessment of allergenicity of GM plants and microorganisms and derived food and feed of the Scientific Panel on Genetically Modified Organisms' (EFSA GMO Panel, 2010). Using higher than 35% identity in a sliding window of 80 amino acids as the criterion, two matches were found. The matching allergens were pectin methylesterase from Russian thistle (Salsola kali) and Ole e 11 pectinesterase from olive tree (Olea europaea), known as respiratory allergens. 30 No information is available on oral and respiratory sensitisation or elicitation reactions of this pectinesterase. Pectinesterases present in plant tissues and pollen are reported for their role in allergenicity: the allergen Ole e 11, a pectinesterase from Olive tree (Olea europaea), was identified as a source of allergy (Salamanca et al., 2010), as well as Sal k 1, a pectinesterase from Russian thistle (Salsola kali) (Barderas et al., 2007). The Panel noted that the oral allergy syndrome, i.e. allergic reactions mainly in the mouth and seldomly leading to anaphylaxis, is associated with sensitisation to olive and Russian thistle pollen. The Panel considered that, under the intended conditions of use, the risk of allergic reactions upon dietary exposure to this food enzyme, particularly in individuals sensitised to pollen allergens, cannot be excluded. 3.5. Dietary exposure 3.5.1. Intended use of the food enzyme The food enzyme is intended to be used in five food processes at the recommended use levels summarised in Table 2. In fruit and vegetable processing, the function of pectinesterase is to aid the depolymerisation of pectin in different raw materials at various points in the production process. For juice production, the food enzyme can be added during the peeling and crushing; to the crush mash of fruits/vegetables (with or without peels) and/or to the pressed juice before clarification and filtration. 31 The disruption of the gel structure reduces the viscosity, thus improving the pressing ability of the pulp and consequently increasing the yield of fruit juices. The enzymatic treatment can reduce haze and enhance colour and aroma. The food enzyme-TOS remains in the juices. In puree production, the pectinesterase is added to the crushed pulp before pasteurisation. 32 The enzymatic treatment reduces viscosity and improves the consistency of the puree. Treatment with pectinesterase can also improve the firmness of jams, canned and frozen fruit and vegetables products. 34 The food enzyme-TOS remains in these products. In wine and wine vinegar production, the pectinesterase is often added together with other cell wall hydrolytic enzymes during crushing. It can be added also during maceration and clarification steps. Such enzymatic treatment aids pressing and facilitates the extraction of aromatic compounds. 33 The food enzyme-TOS may remain in wine and wine vinegar. In coffee bean demucilation, the pectinesterase is added to green coffee cherries during pulping and fermentation to degrade the mucilage. 34 The food enzyme-TOS is removed during the subsequent washing steps (EFSA CEP Panel, 2021b). 
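Returning to the allergenicity screen described at the beginning of this section, the criterion applied was an identity above 35% within a sliding window of 80 amino acids. The sketch below illustrates that windowed comparison in a deliberately simplified, ungapped form; the sequences are toy placeholders, and the actual assessments rely on alignment searches against curated allergen databases rather than this naive scan.

def sliding_window_hits(query, allergen, window=80, threshold=0.35):
    """Ungapped scan: report window pairs whose identity exceeds the threshold.

    Simplified illustration of the '>35% identity over 80 amino acids'
    criterion; real assessments use alignment tools against allergen databases.
    """
    hits = []
    for i in range(len(query) - window + 1):
        q = query[i:i + window]
        for j in range(len(allergen) - window + 1):
            a = allergen[j:j + window]
            identity = sum(x == y for x, y in zip(q, a)) / window
            if identity > threshold:
                hits.append((i, j, identity))
    return hits

# Toy sequences (not the real pectinesterase or pollen allergen sequences).
query_seq    = "MKTAYLLVAGSLAALAAP" * 6   # 108 aa dummy fragment
allergen_seq = "MKTAYLLVAGSLAALAAP" * 6   # identical dummy, so hits are found

hits = sliding_window_hits(query_seq, allergen_seq)
print(f"{len(hits)} window pairs above 35% identity")
if hits:
    qpos, apos, ident = max(hits, key=lambda h: h[2])
    print(f"best: query {qpos}-{qpos+79} vs allergen {apos}-{apos+79}, {ident:.0%} identity")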
The food enzyme is used to obtain aroma concentrates or essential oils for use as flavouring preparations. To produce essential oils, fruit components rich in oil are treated with the pectinesterase to assist the release of aromatic compounds from the raw material. It is expected that the food enzyme-TOS partitions with the water phase. Therefore, they are not carried into the oil phase. 32 The aroma concentrates are primarily used in the reconstitution of juices. Samples of the apple aroma concentrate and orange aroma oil, as well as samples obtained by trichloroacetic acid precipitation were separated by SDS-PAGE and stained with Coomassie Blue. 35 No proteins of the food enzyme were detected by liquid chromatography tandem mass spectrometry. 36 The Panel accepted this evidence as sufficient to support the lack of TOS transfer into the essential oils. Based on data provided on thermostability (see Section 3.3.1), the pectinesterase is expected to be inactivated by heat in most of the food processes, but may remain active in wine and wine vinegar, and in juices, depending on the pasteurisation conditions. Dietary exposure estimation In accordance with the guidance document (EFSA CEP Panel, 2021a), dietary exposure was calculated only for food manufacturing processes where the food enzyme-TOS remains in the final foods: fruit and vegetable processing for juice production, fruit and vegetable processing for products other than juice and production of wine and wine vinegar. Chronic exposure to the food enzyme-TOS was calculated by combining the maximum recommended use level with individual consumption data (EFSA CEP Panel, 2021a). The estimation involved selection of relevant food categories and application of technical conversion factors (EFSA CEP Panel, 2021b). Exposure from all FoodEx categories was subsequently summed up, averaged over the total survey period (days) and normalised for body weight. This was done for all individuals across all surveys, resulting in distributions of individual average exposure. Based on these distributions, the mean and 95th percentile exposures were calculated per survey for the total population and per age class. Surveys with only 1 day per subject were excluded and high-level exposure/intake was calculated for only those population groups in which the sample size was sufficiently large to allow calculation of the 95th percentile (EFSA, 2011). Table 3 provides an overview of the derived exposure estimates across all surveys. Detailed mean and 95th percentile exposure to the food enzyme-TOS per age class, country and survey, as well as contribution from each FoodEx category to the total dietary exposure are reported in Appendix A -Tables 1 and 2. For the present assessment, food consumption data were available from 43 different dietary surveys (covering infants, toddlers, children, adolescents, adults and the elderly), carried out in 22 European countries (Appendix B). The highest dietary exposure to the food enzyme-TOS was estimated to be 0.532 mg TOS/kg bw per day in infants. Uncertainty analysis In accordance with the 'guidance provided in the EFSA opinion related to uncertainties in dietary exposure assessment' (EFSA, 2006), the following sources of uncertainties have been considered and are summarised in Table 4. The conservative approach applied to estimate the dietary exposure to the food enzyme-TOS, in particular assumptions made on the occurrence and use levels of this specific food enzyme, is likely to have led to overestimation of the exposure. 
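The exposure calculation outlined above can be illustrated with a small numerical sketch: per-subject exposure is obtained by combining daily consumption of the relevant food categories with the use level expressed as mg TOS per gram of food, averaging over the survey days and normalising by body weight, after which the mean and 95th percentile are taken across subjects. All intakes, use levels and body weights below are invented placeholders; the real assessment uses the EFSA consumption surveys, FoodEx categories and technical conversion factors.

import numpy as np

def individual_exposure(daily_records, body_weight_kg, tos_mg_per_g_food):
    """Average daily exposure (mg TOS/kg bw per day) for one subject.

    daily_records: list of dicts mapping a food category -> grams eaten that day.
    tos_mg_per_g_food: category -> mg TOS per gram of food (assumed use level
    already combined with any technical conversion factor).
    """
    per_day = []
    for day in daily_records:
        mg_tos = sum(grams * tos_mg_per_g_food.get(cat, 0.0) for cat, grams in day.items())
        per_day.append(mg_tos / body_weight_kg)
    return float(np.mean(per_day))  # averaged over the survey days

# Toy survey of three subjects (use levels and intakes are illustrative only).
tos_levels = {"fruit juice": 0.004, "wine": 0.002}          # mg TOS per g food (assumed)
subjects = [
    ([{"fruit juice": 250}, {"fruit juice": 300, "wine": 100}], 70.0),
    ([{"fruit juice": 150}, {}],                               60.0),
    ([{"wine": 200}, {"fruit juice": 500}],                    80.0),
]

exposures = [individual_exposure(days, bw, tos_levels) for days, bw in subjects]
print(f"mean exposure: {np.mean(exposures):.4f} mg TOS/kg bw per day")
print(f"95th percentile: {np.percentile(exposures, 95):.4f} mg TOS/kg bw per day")

# Margin of exposure against the NOAEL from the 90-day study (1,000 mg TOS/kg bw per day).
NOAEL = 1000.0
print(f"MoE at the 95th percentile: {NOAEL / np.percentile(exposures, 95):.0f}")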
The exclusion of two food manufacturing processes from the exposure assessment was based on > 99% of TOS removal during these processes and is not expected to have an impact on the overall estimate derived. 3.6. Margin of exposure A comparison of the NOAEL (1,000 mg TOS/kg bw per day) from the 90-day rat study with the derived exposure estimates of 0.003-0.316 mg TOS/kg bw per day at the mean and from 0.013-0.532 mg TOS/kg bw per day at the 95th percentile resulted in a margin of exposure (MoE) of at least 1,880. Conclusions Based on the data provided, the removal of TOS during coffee demucilation and the production of flavouring extracts, and the derived margin of exposure for the remaining three food manufacturing processes, the Panel concluded that the food enzyme pectinesterase produced with the genetically modified Trichoderma reesei strain RF6201 does not give rise to safety concerns under the intended conditions of use. The CEP Panel considered the food enzyme free from viable cells of the production organism and recombinant DNA. 5. Documentation as provided to EFSA Use of data from food consumption surveys of a few days to estimate long-term (chronic) exposure for high percentiles (95th percentile) + Possible national differences in categorisation and classification of food +/À Model assumptions and factors Exposure to food enzyme-TOS was always calculated based on the recommended maximum use level + Although two different used levels were given for puree and firming, only the higher one was used in the calculation. + Selection of broad FoodEx categories for the exposure assessment + Use of recipe fractions to disaggregate FoodEx categories +/À Use of technical factors in the exposure model +/À Exclusion of other processes from the exposure assessment -Production of plant extract as flavouring preparations -Coffee demucilation details Appendix A can be found in the online version of this output (in the 'Supporting information' section). The file contains two sheets, corresponding to two tables.
Some aspects of the CI engine modification aimed at operation on LPG with the application of spark ignition A lot of investigation on modification of the compression ignition engine aimed at operation on LPG with the application of spark ignition has been carried out in the Laboratory of Vehicles and Combustion Engines at Kazimierz Pulaski University of Technology and Humanities in Radom. This paper presents results of investigation on establishment of the proper ignition advance angle in the modified engine. Within the framework of this investigation it was assessed the effect of this regulation on basic engine operating parameters, exhaust emission as well as basic combustion parameters. Introduction LPG enjoys great popularity in Poland due to its low price comparing to gasoline and diesel oil. It has also a well-developed distribution infrastructure. In recent years, it has been observed a trend toward modification of older generation CI engines aimed at operation on LPG. This modification consisted in reduction of the compression ratio and spark ignition application. Such kinds of modifications in CI engine constructions started in the last years of the XXth century. Description and discussion of investigations undertaken in this area are presented, among others, in [2]. Comparing to the investigations on dual-fuel engines operating on diesel oil and LPG, the scale of these investigations is much smaller. Very often the presented results do not refer to the effects resulting from the applied modifications, mainly to the following changes: − operating engine parameters (effective power, fuel consumption and engine overall efficiency), − parameters influencing natural environment (exhaust emissions), − combustion parameters (that determine engine durability). Investigation on the above-mentioned topics were carried out in the Laboratory of Vehicles and Combustion Engines at Kazimierz Pulaski University of Technology and Humanities in Radom. The first stage of the investigation was focused on the establishment of the compression ratio of the modified engine. The obtained results were presented at the conference KONMOT 2014 and published in [1]. A further investigation concerned the establishment of the proper ignition advance angle of the modified engine. Within the framework of this investigation it was assessed the effect of this regulation on basic engine operating parameters, exhaust emission and combustion parameters. Description of the test bed The investigations were carried out with the use of a single cylinder 1HC102 research engine. In the standard version, this is a compression ignition engine fuelled with diesel oil. In the modified version, this is a spark ignition engine with a compression ratio of 9 -the value which was established earlier. This engine was coupled with an electrorotational brake Vibrometr 3 WB 15. The test bed was equipped with systems providing measurements of: − engine torque − engine speed − hourly fuel consumption − exhaust emissions − combustion pressures. General views of the test bed and the engine control room are presented in (figure 1 and 2). Investigation procedure In the first stage of the investigation, regulation characteristics of maximum load Momax [Nm] versus the ignition advance angle α (C.A. BTDC) were prepared and engine overall efficiency obtained in these conditions. Characteristics were prepared at three values of the engine speed: n1=1200 rpm, n2=1700 rpm, n3=2200 rpm. 
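The engine overall efficiency evaluated in this investigation can be derived from the quantities measured on the test bed (torque, engine speed and hourly fuel consumption). A short sketch of that calculation is given below; the operating point and the lower heating value are illustrative assumptions, not measured data from the 1HC102 engine.

import math

def brake_power_kw(torque_nm, speed_rpm):
    """Brake power from measured torque and engine speed: P = 2*pi*n*M / 60."""
    return 2.0 * math.pi * speed_rpm * torque_nm / 60.0 / 1000.0

def overall_efficiency(torque_nm, speed_rpm, fuel_kg_per_h, lhv_mj_per_kg):
    """Overall (fuel-to-work) efficiency from torque, speed and hourly fuel use."""
    power_kw = brake_power_kw(torque_nm, speed_rpm)
    fuel_power_kw = fuel_kg_per_h * lhv_mj_per_kg * 1000.0 / 3600.0
    return power_kw / fuel_power_kw

# Illustrative operating point at n = 2200 rpm; the torque and fuel flow below
# are placeholders, and 46 MJ/kg is a typical lower heating value for LPG.
eta = overall_efficiency(torque_nm=45.0, speed_rpm=2200, fuel_kg_per_h=2.6, lhv_mj_per_kg=46.0)
print(f"Brake power: {brake_power_kw(45.0, 2200):.1f} kW, overall efficiency: {eta:.2%}")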
The obtained results provided a basis for further investigations taking into consideration the most beneficial value of the ignition advance angle at each of the selected engine speeds. The following stage of the investigation consisted in the preparation of load characteristics of the main exhaust emissions from the modified engine at each of the selected engine speeds. The final stage of the investigation consisted in the preparation of load characteristics of the basic combustion parameters (maximum pressure Pmax, mean rate of pressure rise (Δp/Δα)mean and maximum rate of pressure rise (Δp/Δα)max). The present paper presents results obtained at the engine speed of 2200 rpm only. The relationships obtained for the modified engine were compared with those obtained for the standard engine operation. Results of the investigation and comparisons are presented in chapter 5. It follows from these characteristics that, taking into consideration the developed torque Mo [Nm], the most beneficial setting of the ignition advance angle is α = 25 C.A. BTDC (at the engine speed n = 2200 rpm). This ignition advance angle was set in the further investigation of the main exhaust emissions and combustion parameters. Comparison of basic operating parameters, main exhaust emissions and combustion parameters for the modified and the standard engines In order to carry out the necessary comparisons, load characteristics of the above-mentioned parameters were prepared. In the case of the engine operating on diesel oil, the standard ignition advance angle α = 20 C.A. BTDC was maintained. In the modified engine, the previously established ignition advance angle of 25 C.A. BTDC was applied. Analysis of the characteristics presented in figure 5 shows that the modified engine delivers higher overall efficiency compared to the standard operating engine, especially at loads close to the maximum ratings. It is worth mentioning that both engine versions deliver the same maximum load rating of Mo = 50 [Nm]. Achievement of this result was possible, to a large degree, due to the proper setting of the ignition advance angle α for the modified engine. Comparison of characteristics relating to the main exhaust emissions The engine exhaust emissions were measured under load characteristics for both engine versions. The obtained characteristics are presented in figures 6, 7 and 8. It should be noted that, during the investigation of the modified engine, the air-LPG mixture was kept at the stoichiometric ratio (λ≅1). It follows from these characteristics (obtained at the engine speed n = 2200 rpm and the ignition advance angle setting α = 25 C.A. BTDC) that: − CO emission from the modified engine is lower than from the standard operating engine, particularly at loads close to the maximum ratings, − HC emission from the modified engine is also lower than from the standard operating engine, particularly at loads close to the maximum ratings, − NOx emission from the modified engine is higher than from the standard operating engine. According to the authors, the positive effect of reduced CO and HC emissions from the modified engine is due to the following facts: − the standard CI engine was equipped with an old-style injection pump (of the inline type) operating at low injection pressure (ca. 17 MPa), − the modified engine operated on a gaseous fuel (LPG). Fuel of this type, as a rule, has beneficial combustion characteristics.
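The air-fuel equivalence ratio λ mentioned above (kept at λ≅1 for the LPG-fuelled engine) is the ratio of the actual to the stoichiometric air-fuel ratio and can be computed from the measured air and fuel mass flows. The sketch below uses typical stoichiometric values and invented flow rates purely for illustration.

def lambda_value(air_mass_flow_kg_h, fuel_mass_flow_kg_h, afr_stoich):
    """Air-fuel equivalence ratio: lambda = (actual A/F) / (stoichiometric A/F)."""
    afr_actual = air_mass_flow_kg_h / fuel_mass_flow_kg_h
    return afr_actual / afr_stoich

# Typical stoichiometric air-fuel ratios (approximate textbook values).
AFR_STOICH_LPG = 15.5
AFR_STOICH_DIESEL = 14.5

# Illustrative flows (placeholders): near-stoichiometric LPG operation vs a
# lightly loaded diesel point with a large excess of air.
print(f"LPG point:    lambda = {lambda_value(40.0, 2.6, AFR_STOICH_LPG):.2f}")
print(f"Diesel point: lambda = {lambda_value(180.0, 1.4, AFR_STOICH_DIESEL):.2f}")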
The air–fuel equivalence ratio of the modified engine was kept at the stoichiometric condition for an SI engine, λ ≈ 1. In the standard CI engine, this ratio varied from λ ≈ 8–10 (at the minimum load) to λ ≈ 1.3 at the maximum load. At this stage of the investigation, it is difficult to give a clear answer to the question of why a higher NOx emission is observed in the modified engine. However, it should be mentioned that the investigated engine was not fitted with a catalytic converter. It is expected that the application of a catalytic converter would result in lower emissions, especially given that the engine operates at stoichiometric conditions (λ = 1). Comparison of characteristics relating to the basic combustion parameters The presented characteristics of the basic combustion parameters (plotted for LPG at n = 2200 rpm, ε = 9 and for diesel fuel at n = 2200 rpm, ε = 17) reveal that, at the established engine speed n = 2200 rpm and at the established ignition advance angle setting α = 25° C.A. BTDC, the investigated combustion parameters (maximum combustion pressure Pmax, mean rate of pressure rise (Δp/Δα)mean and maximum rate of pressure rise (Δp/Δα)max) of the modified engine are lower, over the full range of load, than the corresponding parameters of the engine in standard operation. It should be mentioned that this feature is beneficial with regard to engine durability. Summary and conclusions Modification of a compression ignition engine should be preceded by an investigation aimed at establishing the compression ratio ε and the ignition advance angle α. Proper establishment of the above-mentioned parameters made it possible to obtain a higher overall efficiency ηo of the 1HC102 engine while maintaining the developed torque Momax. Operation of the modified 1HC102 engine resulted in lower CO and HC emissions and a higher NOx emission. It should be noted that fitting the engine with a catalytic converter would enable a further decrease of these emissions. The combustion process in the modified engine, in comparison with the engine in standard operation, is characterized by lower values of the following parameters: -maximum combustion pressure Pmax, -mean rate of pressure rise (Δp/Δα)mean, -maximum rate of pressure rise (Δp/Δα)max. This feature is beneficial with regard to engine durability. The above observations support the statement that the analysed modification of older-generation CI engines would bring a number of benefits (increased engine performance, higher engine overall efficiency, lower exhaust emissions) provided that the compression ratio ε and the ignition advance angle α are established properly. Further improvement of the analysed operating parameters of the modified engine may be obtained through an accurate design of the combustion chamber.
2019-04-15T13:06:59.934Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "13ea28962ea09e54e3e52c866010efa7f437e857", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/148/1/012072", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "21391ef9b4c1aacbaaa9fdbd2dcfd1ca2363996f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
1940307
pes2o/s2orc
v3-fos-license
Stress-first single photon emission computed tomography myocardial perfusion imaging. BACKGROUND Myocardial perfusion imaging (MPI) with single photon emission tomography (SPET) is widely used in coronary artery disease evaluation. Recently, major dosimetric concerns have arisen. The aim of this study was to evaluate whether a pre-test scoring system could predict the results of stress SPET MPI, thus avoiding two radionuclide injections. METHODS All consecutive patients (n=309) undergoing SPET MPI during the first 6 months of 2014 constituted the study group. The scoring system is based on these characteristics: age >65 years (1 point), diabetes (2 points), typical chest pain (2 points), congestive heart failure (3 points), abnormal ECG (4 points), male gender (4 points), and documented previous CAD (5 points). The patients were divided on the basis of the prediction score into 3 classes of risk for an abnormal stress-first protocol. RESULTS An abnormal stress SPET MPI was present in 7/31 patients (23%) with a low risk score, in 24/90 (27%) with an intermediate risk score, and in 124/188 (66%) with a high risk score. ROC curve analysis showed good prediction of abnormal stress MPI. CONCLUSIONS Our results suggest that a pre-test clinical prediction formula for abnormal stress MPI can be used appropriately in a routine clinical setting. INTRODUCTION Single photon emission tomography myocardial perfusion imaging (SPET-MPI) is one of the most widely used and most accurate non-invasive methods for the evaluation of patients with coronary artery disease. In the last 20 years, however, a significant reduction of abnormal findings on SPET-MPI has been observed. Indeed, Rozanski et al. reported a gradual decline in the frequency of abnormal perfusion studies from 41% in 1991 to 9% in 2009 1 . Thus, concerns have arisen about overutilization of SPET-MPI, particularly in low-risk patients 2 . It should be noted that the acquisition protocols still in use were developed several years ago. Since then, major concerns about radiation exposure have emerged: in the last few years a 6-fold increase in background radiation from medical imaging has been observed 3 ; moreover, health care costs have risen. The routinely used SPET-MPI protocol is based on two administrations of the radiotracer: one at rest and one during stress. Since few abnormal studies are expected to be found in routine applications, a reasonable way to reduce both radiation dose and costs could be to avoid the rest injection of the radiotracer, and thus the rest SPET-MPI acquisition, if stress SPET-MPI shows normal myocardial perfusion. A strategy of stress-first SPET-MPI, leading to stress-only imaging if the images are normal, was proposed over two decades ago 4 , and many authors as well as scientific societies have endorsed it because of the reduced radiation exposure and costs together with improved laboratory efficiency [5][6][7][8][9][10][11] . A stress-only approach would reduce radiation dose to less than 30%-60%, and costs would be decreased because of the reduction in examination time (<90 minutes instead of 3-5 hours), leading to reduced use of the medical equipment and an increase in the number of patients examined daily 12,13 . However, not all patients can be tested with the stress-first technique. The main eligibility criteria are: presence of symptoms in a patient with a low likelihood of ischemia, no history of documented myocardial infarction and/or revascularization (PCI and/or CABG), and a recent normal functional or anatomic study 14,15.
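The scoring system summarized above is a weighted sum of clinical variables with fixed class cut-offs, so it can be computed directly from the patient record. The following Python sketch illustrates the calculation; the point values and thresholds are those quoted in the text, while the function and field names are illustrative.

```python
# Minimal sketch of the pre-test score described above (Duvall-style scoring):
# points are summed over clinical variables and mapped to a risk class.
# Field names are illustrative; point values and cut-offs follow the text.

POINTS = {
    "age_over_65": 1,
    "diabetes": 2,
    "typical_chest_pain": 2,
    "congestive_heart_failure": 3,
    "abnormal_ecg": 4,
    "male_gender": 4,
    "documented_cad": 5,
}

def pretest_score(patient: dict) -> int:
    """Sum the points of every variable that is present (truthy) for the patient."""
    return sum(points for key, points in POINTS.items() if patient.get(key))

def risk_class(score: int) -> str:
    """Risk of an abnormal stress-first study: low (<5), intermediate (5-9), high (>=10)."""
    if score < 5:
        return "low"
    if score < 10:
        return "intermediate"
    return "high"

if __name__ == "__main__":
    example = {"age_over_65": True, "male_gender": True, "abnormal_ecg": False}
    s = pretest_score(example)
    print(s, risk_class(s))  # -> 5 intermediate
```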
Recently, Duvall et al 14 proposed a pre-test scoring system based on clinical variables to accurately identify patients who can successfully undergo a stress-first imaging protocol without the need for rest imaging. Thus, the aim of this study was to evaluate, in a routine setting, whether the pre-test scoring proposed by Duvall et al 14 could predict an abnormal stress SPET-MPI. METHODS All consecutive patients (n=309) undergoing SPET-MPI during the first 6 months of 2014 in the Nuclear Medicine Department of San Giovanni di Dio e Ruggi D'Aragona University Hospital constituted the study group. None of the patients was in the Emergency Department and none of them had an available recent (i.e. < 3 months) coronary angiography. Demographic and stress test variables at the time of SPET-MPI were collected for all patients (Table I). Demographic variables recorded were age, gender, height, and weight. Clinical variables collected were chest pain, shortness of breath, diabetes, hypertension, hyperlipidemia, smoking, family history of CAD, peripheral vascular disease, cerebrovascular disease, congestive heart failure, documented CAD (which included known CAD by diagnostic testing or patient history, history of myocardial infarction, and history of revascularization), abnormal ECG, previous normal stress MPI, previous normal coronary angiography, pulmonary hypertension, and stressor used. The scoring system is based on the following parameters, each assigned a specific score: age >65 years (1 point), diabetes (2 points), typical chest pain (2 points), congestive heart failure (3 points), abnormal ECG (4 points), male gender (4 points), and documented CAD (5 points) 14 . According to the proposed scoring model 14 , all the patients were divided into 3 classes of risk for an abnormal stress SPET-MPI: low risk (<5), intermediate risk (≥5 and <10) and high risk (≥10). SPET-MPI was performed according to the standard imaging protocol endorsed by ASNC 16,17 . A rest-stress or stress-rest imaging sequence was employed using Tc-99m sestamibi. All patients underwent physical exercise stress testing. SPET-MPI was performed using a dual-head camera (CardioMD, Philips) equipped with a high-resolution collimator, with stop-and-shoot acquisition over 64 steps, a 180° arc from right anterior oblique to left anterior oblique, a 64 x 64 x 16 matrix, and an iterative reconstruction algorithm (Astonish). Image acquisition began 30-60 minutes after radiotracer injection. A 17-segment model was applied for semi-quantitative visual analysis of SPET-MPI images. For each myocardial segment a 5-point scoring system was used: 0 = normal perfusion, 1 = mild reduction in counts (not definitely abnormal), 2 = moderate reduction in counts (definitely abnormal), 3 = severe reduction in counts, 4 = absent uptake. In addition to the individual scores, summed scores were calculated. A summed stress score (SSS) was obtained by adding together the stress scores of all the segments, and the summed rest score (SRS) by adding together the resting scores of all the segments. Stress SPET-MPI was considered abnormal with an SSS >3. Previously unpublished data obtained in our laboratory in 95 patients showed an ICC = 0.98 for intraobserver reproducibility and an ICC = 0.97 for interobserver reproducibility (p<0.001 for both) of visual analysis. MedCalc Statistical Software version 13.1.2 was used for statistical analysis (MedCalc Software bvba, Ostend, Belgium; http://www.medcalc.org; 2014).
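Following the 17-segment scoring just described, the summed stress score is simply the sum of the per-segment stress scores, and a study is flagged abnormal when it exceeds 3. The short sketch below illustrates the calculation with made-up segment scores.

```python
# Summed stress score (SSS) from the 17-segment, 0-4 semi-quantitative scoring
# described above; a study is flagged abnormal when SSS > 3. Segment scores
# below are made up for illustration.

def summed_score(segment_scores):
    """Sum per-segment scores (17 segments, each 0-4)."""
    assert len(segment_scores) == 17, "expected a 17-segment model"
    assert all(0 <= s <= 4 for s in segment_scores), "scores must be in 0..4"
    return sum(segment_scores)

def is_abnormal_stress(sss: int) -> bool:
    return sss > 3

if __name__ == "__main__":
    stress_segments = [0] * 13 + [1, 2, 2, 0]   # hypothetical stress scores
    sss = summed_score(stress_segments)
    print(sss, is_abnormal_stress(sss))          # -> 5 True
```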
All data are expressed as mean ± 1 standard deviation or as percentages, as appropriate. Receiver operating characteristic (ROC) curve analysis was used to assess the accuracy of the predictive model and to determine the optimal cutoff by using the Youden index 18 . A p value < 0.05 was considered significant. RESULTS ROC curve analysis showed good prediction of abnormal stress SPET-MPI (Figure 3), with an area under the ROC curve of 0.75. Using the optimal cutoff selected by the ROC curve analysis, sensitivity was 80% and specificity was 58%. These findings suggest that rest SPET-MPI is probably redundant in many patients, since a normal stress study obviates the need for rest imaging, as stated by the European MPI guidelines 22 . DISCUSSION The routine procedure adopted in many clinical nuclear medicine centers is based on two separate radiotracer injections (stress and rest) and, consequently, two SPET-MPI acquisitions. The two injections can be performed on the same day, 2-3 hours apart, or on two separate days. The procedure requires 3 to 5 hours when a single-day protocol is adopted, or 1 to 2 hours for each day when a 2-day protocol is scheduled. Of course, two radiotracer administrations lead to a higher radiation exposure, which is often unnecessary 12,13 . A stress-first SPET-MPI protocol can decrease both procedure time and radiation dose by avoiding the rest scan if the stress scan is normal. All these advantages are relevant to the health care system 12,13,19,23 . Moreover, avoiding the rest SPET-MPI when a normal stress SPET-MPI is found would not affect the clinical relevance of the study, since a low cardiac event rate is associated with a normal stress-only study, with an annualized cardiac event rate < 0.7% 19,24 . Recently, new diagnostic imaging techniques for CAD patients have been introduced with excellent results, namely Cardiac Computed Tomography, which has been proposed as an alternative to SPET-MPI. SPET-MPI in low-intermediate risk CAD patients, optimized with stress-only imaging, is similar to Cardiac Computed Tomography in time to diagnosis, length of hospital stay, and cost, with improved prognostic accuracy and less radiation exposure 25 . The efficacy of the stress-only protocol has been evaluated in several studies including a variety of subjects: in-patients, outpatients, and patients in the emergency department 12,13,20,21,26 . Effective use of the stress-first SPET-MPI protocol requires an appropriate selection of the patients to be studied. Criteria for selecting patients for a stress-first imaging protocol can be: no symptoms suggestive of ischemia and a low to intermediate pre-test probability, no history of documented myocardial infarction and/or coronary revascularization, and a history of a recent normal functional or anatomic study. A key point in stress-first protocols is the presence of the physician, who should select the protocol for each patient, check for any perfusion abnormality on stress SPET-MPI and thus decide whether to perform the rest scan. A way to limit the number of abnormal stress-first studies to be analyzed would be to perform rest-stress studies only in patients with a history of CAD or myocardial infarction who are considered "high risk". However, defining exactly which patients are "high risk" could be difficult. On the other hand, a predictive scoring system could help in the selection of patients with a high probability of a normal stress SPET-MPI, i.e. low risk patients. Duvall et al.
12 , in particular, analyzed a large cohort of patients, identifying a 92% success rate for the low risk group with a stress-first protocol and an area under the ROC curve of 0.82. The pre-test scoring tool we used in the present study identifies, on the basis of their level of risk, patients who have a high likelihood of successfully completing a stress-first imaging protocol without the need for rest imaging. Indeed, while 77% of patients with low-intermediate risk had normal myocardial perfusion at stress SPET-MPI, 66% of those with high risk showed abnormal myocardial perfusion. Thus, it would be conceivable to perform a stress-first SPET-MPI protocol in patients in the low or intermediate pre-test risk classes. The finding of a similar prevalence of abnormal results in low and intermediate risk patients clearly indicates that the model is not able to discriminate between these two classes of risk. This result differs from that reported by Duvall et al 12 , and could be due to differences in the populations studied, as we did not have patients from the Emergency Department, or to differences in acquisition methods, since we did not have attenuation correction. However, it should be noted that, using the best cutoff selected by the ROC curve analysis, we obtained good results in selecting patients suitable for stress-only myocardial perfusion imaging. The present study has some limitations. The retrospective collection of data and the relatively low number of patients may limit the generality of our conclusions. The benefits of a prediction formula would of course be more relevant in a larger cohort. Moreover, the camera used in our study does not allow attenuation correction. However, the good results we obtained without attenuation correction indicate that the proposed model is quite robust and can be used in routine practice. Finally, no gated MPI was performed. Although it is true that gated acquisition is important, the finding of normal wall motion in a myocardial segment showing a perfusion abnormality on the stress image without attenuation correction does not change the perceived need for a rest study or the interpretation certainty, because the stress perfusion abnormality may represent either ischemia or an attenuation artifact 26 . Applying a stress-first protocol in a routine clinical setting leads to some logistic and dosimetric considerations. The clinical and demographic characteristics of the patient must be known before data acquisition in order to decide whether a stress-first acquisition is appropriate for each patient. It should be noted that all the parameters used for the score can be easily obtained from the clinical history and/or the medical record of each patient. Furthermore, decisions may be taken in advance or upon arrival of the patient in the Nuclear Medicine laboratory, even by different members of the staff. A key point is the need to analyze the stress images as soon as possible. This implies that the nuclear medicine physician in charge must be present in the processing room and read the MPI data immediately at the end of the data acquisition. From a dosimetric point of view, besides the dose reduction for the patients, the radiation burden is also reduced for the staff. Indeed, the clinical data collection takes place before the administration of the radiotracer, and avoiding the rest injection of the radiotracer in selected patients would spare the staff member in charge of the injection a second irradiation.
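The optimal cutoff referred to above is the score threshold that maximizes the Youden index (sensitivity + specificity − 1) along the ROC curve. The following sketch shows how such a cutoff can be selected; the scores and outcomes are simulated for illustration and are not the study data.

```python
# Choosing a score cut-off with the Youden index (J = sensitivity + specificity - 1),
# as done via ROC analysis in the text. The scores and outcomes below are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical pre-test scores (0-21) and abnormal-study labels for 300 patients
scores = rng.integers(0, 22, size=300)
labels = (rng.random(300) < (0.1 + 0.035 * scores)).astype(int)

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr
best = np.argmax(youden_j)

print(f"AUC = {roc_auc_score(labels, scores):.2f}")
print(f"optimal cut-off = {thresholds[best]}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```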
In conclusion, the results of the present study support the appropriate use of a pre-test clinical prediction formula for abnormal stress MPI in a routine clinical setting.
2018-04-03T03:16:02.452Z
2016-11-01T00:00:00.000
{ "year": 2016, "sha1": "0a033bdebdaa0bd4945bae5cedc6ccb4855dee22", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0a033bdebdaa0bd4945bae5cedc6ccb4855dee22", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119305602
pes2o/s2orc
v3-fos-license
Charge conjugation from space-time inversion in QED: discrete and continuous groups We show that the CPT groups of QED emerge naturally from the PT and P (or T) subgroups of the Lorentz group. We also find relationships between these discrete groups and continuous groups, like the connected Lorentz and Poincar\'e groups and their universal coverings. Introduction It was shown in [1] that the CPT group, Gθ(ψ) (θ =Ĉ * P * T ), of the Dirac quantum field is a non abelian group with sixteen elements isomorphic to the direct product of the quaternion group, Q, and the cyclic group, Z 2 : Unlike Gθ(ψ) [1,2,3], the CPT group, Gθ(Â µ ), of the electromagnetic field is an abelian group of eight elements with three generators [2]: As the CPT transformation properties of the interactingψ −Â µ fields are the same as for the free fields [4], the complete CPT group for QED, GΘ(QED), is the direct product of the above mentioned two groups, GΘ(ψ) and GΘ(Â µ ), i.e., C from PT It was shown in [3] that Q becomes isomorphic to a subgroup H of SU(2), being λ the isomorphism: where ι, γ, κ are the three imaginary units of the quaternion group and σ k (k = 1, 2, 3) are the Pauli matrices; and taking also into account that Z 2 is isomorphic to the center of SU(2): {I, −I}, then: Since SU(2) is the universal covering group of SO(3): then Φ(H) has 4 elements and, for that reason, the unique candidates are groups isomorphic to C 4 and D 2 ∼ = Z 2 × Z 2 , the Klein group. A simple application of Φ to the elements of H led to: with R x (π), R y (π), R z (π) the rotations in π around the axes x, y and z, respectively, and I, the unit matrix in SO(3). It was then immediately verified that the multiplication table of Φ(H) < SO (3) is the same as for D 2 . Then, we have: Within the Lorentz group O(3, 1), the transformations of parity P and time reversal T , together with their product PT and the 4×4 unit matrix E, lead to the subgroup of the Lorentz group, called the PT -group, which is also isomorphic to D 2 . On the other hand, P or T separately, together with the unit 4×4 matrix E, give rise to the group Z 2 . Then, we obtain the desired result for the Dirac field: while, for the electromagnetic field, we have: The above result suggests that the Minkowskian space-time structure of special relativity, in particular the unconnected component of its symmetry group, the real Lorentz group O(3, 1), implies the existence of the CPT group as a whole, and therefore the existence of the charge conjugation transformation, and thus the proper existence of antiparticles. Taking into account diagrams (13) and (26), the group homomorphisms: make commutative the following diagram: making explicit the close and possibly deep relationship between these discrete and continuous groups. Discussion In summary, we have that Gθ(ψ) and GΘ(Â µ ), which are groups acting at the quantum field level that include the charge conjugation operator, emerge in a natural way from the PT -group and its P (or T ) subgroups. That is, from matrices acting on Minkowski classical space-time. It is important to note that G P T generates GΘ(Â µ ), the CPT group of the electromagnetic field, without passing through SU (2). That is, without the need of using spinors; while the group SU(2) is needed in order to generate Gθ(ψ), the CPT group of the Dirac field. 
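The chain Q ≅ H < SU(2) → SO(3) used above can be checked numerically: the set {±I, ±iσ1, ±iσ2, ±iσ3} is closed under multiplication, and its image under the covering map, R(U)jk = ½ tr(σj U σk U†), consists of the identity and the three rotations by π, i.e. the Klein group D2. The sketch below performs this check; it is an illustration of the group-theoretic statement, not code from the paper.

```python
# Numerical check that H = {+-I, +-i*sigma_k} < SU(2) is a copy of the quaternion
# group and that its image under the SU(2) -> SO(3) covering map is the Klein
# group {I, Rx(pi), Ry(pi), Rz(pi)}. Illustrative only.
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

H = [s * m for s in (1, -1) for m in [I2] + [1j * sk for sk in sigma]]  # 8 elements

def in_H(u):
    return any(np.allclose(u, h) for h in H)

# closure under multiplication -> subgroup of SU(2) of order 8 (the quaternion group)
assert all(in_H(a @ b) for a in H for b in H)

def covering_map(u):
    """R(U)_{jk} = 0.5 * tr(sigma_j U sigma_k U^dagger), the adjoint action."""
    return np.real(np.array([[0.5 * np.trace(sigma[j] @ u @ sigma[k] @ u.conj().T)
                              for k in range(3)] for j in range(3)]))

image = []
for u in H:
    R = covering_map(u)
    if not any(np.allclose(R, r) for r in image):
        image.append(R)

print(len(image))          # -> 4 elements: the Klein group D2 inside SO(3)
for R in image:
    print(np.round(R).astype(int))
```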
Finally, another important finding is the relationship between discrete groups, like GΘ(Â µ ) and Gθ(ψ), and continuous groups, like the connected Poincaré group (P c 4 ) and its universal covering. This is shown in diagram (31).
2010-09-03T21:28:18.000Z
2010-09-03T00:00:00.000
{ "year": 2010, "sha1": "4c4e5d8f50776f46762a0e3532a0829bf9fe899d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1009.0774", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4c4e5d8f50776f46762a0e3532a0829bf9fe899d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
232335381
pes2o/s2orc
v3-fos-license
Failure-Tolerant Contract-Based Design of an Automated Valet Parking System using a Directive-Response Architecture Increased complexity in cyber-physical systems calls for modular system design methodologies that guarantee correct and reliable behavior, both in normal operations and in the presence of failures. This paper aims to extend the contract-based design approach using a directive-response architecture to enable reactivity to failure scenarios. The architecture is demonstrated on a modular automated valet parking (AVP) system. The contracts for the different components in the AVP system are explicitly defined, implemented, and validated against a Python implementation. I. INTRODUCTION Formally guaranteeing safe and reliable behavior for modern cyber-physical systems is becoming challenging as standard practices do not scale [1]. Managing these highly complex architectures requires a design process that explicitly defines the dependencies and interconnections of system components to enable guaranteed safe behavior of the implemented system [2]. A leading design methodology to develop component-based software is contract-based design, which formalizes the design process in view of component hierarchy and composition [3], [4], [5]. Contract-based design reduces the complexity of the design and verification process by decomposing the system tasks into smaller tasks for the components to satisfy. From the composition of these components, overall system properties can be inferred or proved. This contract-based architecture has been demonstrated for several applications [6], [7], [8], [9]. Our goal here is to adapt and extend this framework to model a directiveresponse architecture on an automated valet parking system with the following features: 1) Discrete and continuous decision making components, which have to interact with one another. 2) Different components have different temporal requirements. 3) A natural hierarchy between the different components in our system that may be thought of as different layers of abstraction. 4) The system involves both human and non-human agents, the number of which is allowed to change over time. 5) Industry interest in such a system. One example of industry efforts to commercialize such a system is the automated valet parking system developed by Bosch in collaboration with Mercedes-Benz, which has been demonstrated in the Mercedes-Benz Museum parking garage in Stuttgart, Germany. Bosch and Daimler also later announced in 2020 that they would set up a commercially operating AVP at the Stuttgart airport [10]. Another commercial AVP system is supposed to be set up by Bosch in downtown Detroit as a collaboration with Bedrock and Ford [11]. Other examples include efforts by Siemens [12] and DENSO [13]. The contributions of this paper include the formulation of a formal contract structure for an automated valet parking system with multiple layers of abstraction with a directiveresponse architecture for failure-handling. By implementing this system in Python, we aim to bridge the large gap between abstract contract metatheory and such non-trivial engineering applications. In addition, we incorporate error handling into the contracts and demonstrate the use of this architecture and approach towards writing specifications in the context of the automated valet parking example. 
Finally, we prove that the composed implementation satisfies the composite contract, adding this example of a larger scale control system, involving a dynamic set of agents that are allowed to fail, to the small and slowly growing list of examples of formal assume-guarantee contract-based design. A. Contract Theory Background Contract-based design is a formal modular design methodology originally developed for component-based software systems [14]. A component's behavior can be specified in terms of a guarantee that must be provided when its environment satisfies a certain assumption. This pairing of an assumption with a guarantee provides the basis for defining a contract. A contract algebra can be developed in which different contract operations can be defined which enable comparison between and combinations of contracts, formalizing modularity, reusability, hierarchy etc. [15]. A comprehensive meta-theory of contracts is presented in [1]. In the following, we will introduce a variant of assumeguarantee contracts that incorporates a directive-response architecture. B. Directive-Response Architecture In a centralized approach for contingency management, recovery from failures is achieved by communicating with nearly every module in the system from a central module, hence increasing the system's complexity and potentially making it more error-prone [16]. The Mission Data system (MDS), developed by JPL as a multi-mission information and control architecture for robotic exploration spacecraft, was an approach to unify the space system software design architecture. MDS includes failure handling as an integral part of the design [17], [18]. It is based on the state analysis framework, a system engineering methodology that relies on a statebased control architecture and explicit models of the system behavior. Fault detection in MDS is executed at the level of the modules, which report if they cannot reach the active goal and possible recovery strategies. Resolving failures is one of the tasks the system was designed to be capable of and not an unexpected situation [17], [19]. Another architecture based on the state analysis framework is the Canonical Software Architecture (CSA) used on the autonomous vehicle Alice by the Caltech team in the DARPA Urban Challenge in 2007. The CSA enables decomposition of the planning system into a hierarchical framework, respecting the different levels of abstraction at which the modules are reasoning and the communication between the modules is via a directiveresponse framework [20]. This framework enables the system to detect and react to unexpected failure scenarios, which might arise from changes in the environment or hardware and software failures in the system [16]. In this paper we are trying to capture the MDS and CSA approaches by incorporating directive-response techniques into a contract framework. C. Directive-Response Contract Framework In this paper, we propose a contract-based design framework incorporating a directive-response architecture to enable reactivity to failures in the system. System components can be abstracted as black boxes constrained by assumeguarantee contracts that specify the behavior of the integrated system. Components communicate with one another by exchanging directives and responses, potentially acting according to a contingency plan that specifies how to react to possible failures. 
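The assume-guarantee machinery sketched above can be made concrete on finite sets of behaviors. The toy implementation below uses the standard saturation, refinement and composition operations from the contract metatheory literature; the behavior universe and the example contracts are invented for illustration and are unrelated to the AVP code.

```python
# Toy model of saturated assume-guarantee contracts over finite behavior sets,
# using the standard operations from the contract metatheory literature; this is
# an illustration of the algebra, not code from the AVP implementation.
from dataclasses import dataclass

UNIVERSE = frozenset(range(8))   # pretend these 8 symbols are all possible behaviors

@dataclass(frozen=True)
class Contract:
    A: frozenset   # assumption
    G: frozenset   # guarantee

    def saturate(self) -> "Contract":
        """Replace G by (not A) or G; environments/implementations are unchanged."""
        return Contract(self.A, (UNIVERSE - self.A) | self.G)

    def refines(self, other: "Contract") -> bool:
        """C refines C' iff it accepts more environments and promises more."""
        c, o = self.saturate(), other.saturate()
        return o.A <= c.A and c.G <= o.G

    def compose(self, other: "Contract") -> "Contract":
        c, o = self.saturate(), other.saturate()
        G = c.G & o.G
        A = (c.A & o.A) | (UNIVERSE - G)
        return Contract(A, G)

if __name__ == "__main__":
    planner = Contract(A=frozenset({0, 1, 2, 3}), G=frozenset({0, 1, 4}))
    tracker = Contract(A=frozenset({0, 1, 4, 5}), G=frozenset({0, 1, 2}))
    system = planner.compose(tracker)
    print(sorted(system.A), sorted(system.G))
    print(planner.refines(planner))   # refinement is reflexive -> True
```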
The higher module sends a directive, and the lower module chooses its responses according to its status in achieving the directive's intended goal. The system components are composed to satisfy the overall system requirements while interacting with the environment, such as safety and liveness specifications. III. MOTIVATING EXAMPLE The motivating example that we are developing in this paper is automated valet parking (AVP), as introduced in the previous section. The goal of this system is to automate the parking and retrieving process for multiple cars concurrently, while providing efficient operations in a safe environment. A. Overall Specification To be a successful operation, the AVP system needs to provide guarantees to customers regarding their safety and that their car will eventually be returned. These specifications can be written in linear temporal logic (LTL) [21]. For a detailed discussion on LTL, see [22]. The symbol represents the "always" operator and the ♦ represents "eventually". These are operators on predicates or traces. An example of the specification is the following: Property 1 (Safety): ¬collision (Always no collision.) and Property 2 (Liveness): healthy ⇒ ♦Returned (Healthy car will eventually be returned.) where the predicate collision is True if more than one car or pedestrian occupy the same space, and healthy and Returned are predicates which correspond to the status of the car, where healthy is True if the car does not have a failure and Returned is True once the control of the car has been given back to the customer. These specifications have to be satisfied for any implementation of the system and we will show this in our proof of the correctness of the composed system. IV. MATHEMATICAL FORMULATION To provide a formal description of the contracts and the components, we will introduce the mathematical background in this section. We will provide definitions regarding the geometry of the path planning, introduce the variables of our AVP world, and define the directive response framework and components. A. Geometry Definition 1 (Path): A path is a continuous map p : be such that p h (s) is the heading angle measured in degrees from the abscissa to p (s), the derivative vector of p with respect to s. For t ∈ [0, 1], letp(t) denote the element p(t) × p h (t) of R 3 . We will denote the set of all paths by P and, by abuse of notation, we will also use p to denote p([0, 1]), the image of where Γ δ (p, s) := {(x, y, θ) ∈ R 3 | δ(p, s, (x, y, θ)) = True} such that Γ δ (p, s) is open and containsp(s) then we say Γ δ (p) is a δ-corridor for p. B. AVP World Building Blocks: In this section we will introduce naming symbols for objects that exist in the AVP world. Definition 4 (AVP World): The AVP world consists of the following 1) A distinguished set of indexing symbols T := {t, t , t , ...} denoting time. 2) A set of typed variables U to denote actions, states, channels, etc. 3) The following set of constants: C, G where a) C, a set of symbols, is called the customer set. b) G, a set of symbols, is called the garage set containing the following constant values i) G.drivable area ⊆ R 3 , the set of configurations that vehicles are allowed to be in. ii) G.walkable area ⊆ R 2 , the area that pedestrians are allowed to walk on. iii) G.entry configurations ⊆ R 3 , a set of configurations that the customers can deposit their car in. iv) G.return configurations ⊆ R 3 , a set of configurations that the car should be returned in. 
v) G.parking spots ∈ N, the number of parking spots available in the parking lot. vi) G.interior ⊆ R 2 , the area inside the parking garage. Directive-Response Message Types: Each channel in the system is associated with a unique message type. The following are all the message types in our AVP system. For each type T we will denote byT the product type T×C which will be used to associate a message of type T with a specific customer in C. In addition, we will use Id to denote the set of message IDs. Behavior: For each variable u ∈ U, we denote by type(u) the type of u, namely, the set of values that it can take. The types of elements of T are taken to be R ≥0 . Definition 5 (Behavior): Let Z be an ordered subset of variables in U. A Z-behavior is an element of B(Z) := ( z∈Z type(z)) R ≥0 . Given σ Z ∈ B(Z) and τ ∈ T, we will call σ Z (τ ) the valuation of Z at time τ . If z ∈ Z, we will also denote by z(τ ) the value of z at time τ . Note that each behavior in Z ⊆ U can be "lifted" to a set of behaviors in U by letting variables that are not contained in Z assume all possible values in their domains. Additionally, the set of behaviors B(Z) can be lifted to a set of behaviors in B(U) in a similar way. To ease notational burden for the reader, we will take the liberty of not explicitly making any reference to the "lifting" operation in this paper when they are in use unless there is any ambiguity that may result from doing so. Definition 6 (Constraint): A constraint k on a set of variables Z is a function that maps each behavior of Z to an element of B, the Boolean domain. In other words, k ∈ B B(Z) . Note that by "lifting", a constraint on a set of variables Z is also a constraint on U. Definition 7 (Channel variables): For each component X and another component Y , we can define two types of channel variables: • X ←Y , denoting an incoming information flow from Y to X. • X →Y , denoting an outgoing information flow from X to Y . In this work, we assume that X →Y is always identical to Y ←X . Each channel variable must have a well-defined message type and each message m has an ID denoted by id(m) ∈ Id. If the message has value v, then we will denote it by [v, id(m)], but we will often refer to it as [v] whereby we omit the ID part to simplify the presentation. Intuitively, given a behavior, a channel variable x is a function that maps each time step to the message the associated channel is broadcasting at that time step. Definition 8 (System): A system M consists of a set of each of the following Directive-response: Before introducing directive-response systems, for any predicates A and B, we define the following syntax: ("leads to") A B := ∀t :: B(t) ⇒ ∃t ≤ t :: A(t ). ("always from t") starts at(A, t) := A(t) ∧ ∀t < t :: If M is a set-valued variable, then we define: Definition 9 (Directive-response system): A directive-response system M is a system such that for each output (resp., input) channel variable chan there is an internal variable send chan (resp., receive chan ) whose domain is a collection of sets of messages that are of the type associated with chan. If chan is an output channel variable, there is a causality constraint k chan ∈ con M defined by: That is, a message must be sent before it shows in the channel. Otherwise if chan is an input channel variable: (4) Namely, a message cannot be received before it is broadcasted. 
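The causality constraints above state, informally, that a message appears on an output channel only after it has been placed in the corresponding send set, and is received only after it has appeared on the channel. The finite-trace sketch below checks these two conditions; the trace layout, variable names and messages are invented for illustration and are not part of the formal definitions.

```python
# Finite-trace check of the directive-response causality constraints sketched in
# the definitions above: a message shows on a channel only after being sent, and
# is received only after being broadcast. Data layout is invented for illustration.

def first_time(trace, var, msg):
    """Earliest step at which `msg` is in trace[t][var], or None if it never is."""
    for t, valuation in enumerate(trace):
        if msg in valuation.get(var, set()):
            return t
    return None

def satisfies_causality(trace, chan, send_var=None, receive_var=None):
    ok = True
    msgs = set().union(*(v.get(chan, set()) for v in trace))
    for m in msgs:
        on_chan = first_time(trace, chan, m)
        if send_var is not None:                      # output channel constraint
            sent = first_time(trace, send_var, m)
            ok &= sent is not None and sent <= on_chan
        if receive_var is not None:                   # input channel constraint
            rcv = first_time(trace, receive_var, m)
            ok &= rcv is None or rcv >= on_chan
    return ok

# Hypothetical three-step behavior of one Supervisor -> Planner channel.
trace = [
    {"send_sup_to_planner": {"park#1"}, "sup_to_planner": set(), "receive_planner": set()},
    {"send_sup_to_planner": {"park#1"}, "sup_to_planner": {"park#1"}, "receive_planner": set()},
    {"send_sup_to_planner": {"park#1"}, "sup_to_planner": {"park#1"}, "receive_planner": {"park#1"}},
]
print(satisfies_causality(trace, "sup_to_planner",
                          send_var="send_sup_to_planner",
                          receive_var="receive_planner"))   # -> True
```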
Definition 10 (Lossless directive-response system): A lossless directive-response system is a directive-response system such that if chan is an output channel then and if chan is an input channel persistent(receive chan ) ∧ (m = chan m ∈ receive chan ). (6) Definition 11 (Assume-guarantee contracts): An assumeguarantee contract C for a directive-response system M consists of a pair of behaviors A, G of M and denoted by C = (A, G). An environment for C is any set of all behaviors that are contained in A while an implementation of C is any set of behaviors that is contained in A ⇒ G. C is said to be saturated if the guarantee part satisfies G = (¬A ∨ G) = (A ⇒ G). Note that any contract can be converted to the saturated form without changing its sets of environments and implementations. The saturated form is useful in making contract algebra less cumbersome in general. If M is a system, then we say M Definition 12 (Customer): A customer is an element of C. Corresponding to each c ∈ C is a set of U variables var(c) that include c.x, c.y (the coordinates of the customer him/herself), c.car.x, c.car.y, c.car.θ (the coordinates and heading of the customer's car), c.car.healthy, whether the car is healthy, c.controls.v, c.controls.ϕ (the velocity and steering inputs to the vehicle), c.car. (the length of the car), c.car.towed (whether the car is being towed). We will use the shorthand c.car.state to mean the 3-tuple (c.car.x, c.car.y, c.car.θ). For each behavior in B(U), we require each c ∈ C for which c.car.towed is False to satisfy the following constraints that An input channel of typeÃ(Tracker). Constraints con M Vehicle dynamics See (7) Car and pedestrian limits (8) and (9). describe the Dubins car model: C. AVP System By treating the CustomerInterface as an external component, the AVP system consists of three internal components: Supervisor, Planner and Tracker. These systems are described below. 1) CustomerInterface: The environment in which the system shall operate consists of the customers and the pedestrians, which we will call a CustomerInterface. A customer drops off the car at the drop-off location and is assumed to make a request for the parked car back from the garage eventually. The pedestrians are also controlled by the environment. When a pedestrian was generated by the environment, they start walking on the crosswalks. Pedestrians are confined to the pedestrian path, meaning they will not leave the crosswalk and walkway areas and their dynamics are continuous, meaning no sudden jumps. The cars move according to their specified dynamics. This includes a breaking distance depending on their velocity and maximum allowed curvature. For a formal description, refer to Table I. Below are some constraints we impose on this module. 2) Supervisor: A Supervisor component is responsible for the high level decision making in the process. It receives the CustomerInterface requests and processes them by sending the appropriate directives to the Planner to fulfill a task. A Supervisor determines whether a car can be accepted into the garage or rejected. It also receives responses from the Planner. A Supervisor is to be aware of the reachability, the vacancy, and occupied spaces in the lot, as well as the parking lot layout. Formally, a Supervisor is a lossless directive-response system described by Table II. 3) Planner: A Planner system receives directives from the Supervisor to make a car reach a specific location in the parking lot. 
A Planner system has access to a planning graph determined from the parking lot layout, and thus can generate executable trajectories for the cars to follow. The Planner is aware of the locations of the agents and the obstacles in the parking lot from the camera system. A Planner is a lossless directive-response system described by Table III. 4) Tracker: A Tracker system is responsible for the safe control of cars that are accepted into the garage by a Supervisor. It receives directives from a Planner consisting of executable paths to track and send responses based on the task status to a Planner. See Table IV. V. AVP CONTRACTS In this section we will define the contracts for each of the modules in our system. These contracts are the guidelines for the implementation, and will be used to verify each of the components, as well as the composed system. In Figure 2 the green arrows represent directive-response assume-guarantee contracts, solid black arrows represent communication, and dashed black arrows represent passive -If the car is healthy and accepted by the garage, it will be returned after being summoned: • Guarantees -When the request is accepted, the CustomerInterface should not tamper with the car controls until the car is returned (i.e., control signals should match the directive) : ∀c ∈ C :: ∀t :: ∀(v, ϕ) ∈ I :: -When the CustomerInterface is not receiving any new input signal, then it keeps the control inputs at zero: ∀c ∈ C :: ∀t :: -From sending a request until receiving a response, the car must stay in the deposit area: -After the car is deposited, the customer will eventually summon it: ∀c ∈ C :: [Accepted, c] ∈ receive CustomerInterface ←Supervisor [Retrieve, c] ∈ send CustomerInterface ←Supervisor . -Sending a Retrieve message must always be preceded by receiving an Accepted message from the Supervisor: -If a customer receives Rejected or Returned from the Supervisor, then they must leave the lot forever: Contract 2 (C Supervisor ): The contract for the Supervisor is as follows. -Cars making requests are deposited correctly by the customer: -If a car is healthy and summoned, then it will eventually appear at the return area and the Planner will send a Completed signal to the Supervisor: ∀c ∈ C :: ( ≥0 c.car.healthy ∧ [Retrieve, c] ∈ receiveSupervisor ←Planner ([Completed, c] ∈ receiveSupervisor ←Planner ∧ c.car.state ∈ G.return configurations)). (24) • Guarantees -All requests from customers will be replied: -If a car is healthy and a Retrieve message is received, then the last thing sent to the Planner should be a directive to the return area (the second configuration should be one of the return configurations). -If the car is healthy and if it is ever summoned, then the Supervisor will send a Returned message to its owner: ∀c ∈ C :: ( ≥0 c.car.healthy ⇒ [Retrieve, c] ∈ receiveSupervisor ←CustomerInterface [Returned, c] ∈ sendSupervisor →CustomerInterface ). -If there is a not-yet-responded-to Park request and the parking lot capacity is not yet reached, then the Supervisor should accept the request: ∈ sendSupervisor →CustomerInterface (t ). 
-For every Accepted to or Retrieve from the CustomerInterface or Blocked from the Planner, the Supervisor sends a pair of configurations to the Planner, the first of which is the current configuration of the car and such that there exists a path of allowable curvature : Contract 3 (C Planner ): The contract for the Planner is as follows: • Assumes -When the Tracker completes its task according to the corridor map δ, it should send a report to the Planner: ∀c ∈ C :: ∃p ∈ P :: ∀p ∈ P :: • Guarantees -When receiving a pair of configurations from the Supervisor, the Planner should send a path to the Tracker such that the starting and ending configurations of the path match the received configurations or if this is not possible, send Blocked to the Supervisor: ∀c ∈ C :: ∃(p0, p1) ∈ R 6 :: [(p0, p1), c] ∈ receive Planner ←Supervisor (∃p ∈ P ::p(0) = p0 ∧p(1) = p1 ∧ [p, c] ∈ send Planner →Tracker ∨ [Blocked, c] ∈ send Planner →Supervisor ). (39) -Tracking command inputs are compatible with cars: -Never drive into a dynamic obstacle (customer or car): A. Simulation Environment and Implementation The proposed design framework was demonstrated via simulation of an automated valet parking (AVP) system [23]. It consists of the layout of a parking lot (Fig. 1), as well as multiple cars that arrive at the drop off location of the parking lot and are parked in one of the vacant spots by the AVP system. Once the customer requests their car, it is returned to the pick-up location. The asynchronicity is captured by modeling each component as a concurrent process using Python async library Trio [24]. The communication between the layers is implemented using Trio's memory channel. In particular, each channel is a first-in-first-out queue which ensures losslessness. The architecture is described in Figure 2. In this setup, the cars may experience failures and report them to the Tracker module. The failures considered in this demonstration are a blocked path, a blocked parking spot, and a total engine failure resulting in immobilization. The benefit of the directive-response architecture becomes apparent when failures are introduced into the system. Upon experiencing a failure, a component that is higher in the hierarchy will be alerted through the response it receives. If possible, the failure will be resolved, e.g., through the re-planning of the path or assigning a different spot. Every layer has access to its contingency plan, consisting of several predetermined actions according to the possible failure scenarios and corresponding responses it receives. In some cases (e.g., complete blockage of a car), when no action can resolve the issue, the cars have to wait until the obstruction is removed. We assume that only broken cars can be towed, and when a car breaks down, it will take a specified amount of time until it is towed. B. CustomerInterface Modeling In our simulation, customers are responsible for driving their cars into the parking garage and depositing them at the drop-off area with an admissible configuration before sending a Park directive to the Supervisor and stay there until they get a response. This is satisfied as long as the customer drops off their vehicle behind the green line such that the heading of the vehicle is within the angle bounds α and α as shown in Figure 4 with the projection w of the vehicle onto the green edge of the blown-up entrance box shown in Figure 5. Therefore, CustomerInterface satisfies G (14) . 
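As noted in the implementation description above, inter-layer communication is realized with Trio memory channels, which act as lossless FIFO queues. The stand-alone fragment below sketches one directive-response round trip over such a channel pair; the message values are invented and the snippet is not an excerpt of the AVP implementation.

```python
# Stand-alone sketch of a directive/response round trip over Trio memory channels,
# the FIFO queues used for inter-layer communication in the implementation above.
# Message contents are invented for illustration.
import trio

async def supervisor(directive_tx, response_rx):
    await directive_tx.send(("Park", "customer_1"))       # issue a directive
    status, who = await response_rx.receive()             # wait for the response
    print(f"supervisor: {who} -> {status}")

async def planner(directive_rx, response_tx):
    task, who = await directive_rx.receive()
    # ... plan a path here; report completion or failure upward ...
    await response_tx.send(("Completed", who))

async def main():
    directive_tx, directive_rx = trio.open_memory_channel(8)
    response_tx, response_rx = trio.open_memory_channel(8)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(supervisor, directive_tx, response_rx)
        nursery.start_soon(planner, directive_rx, response_tx)

if __name__ == "__main__":
    trio.run(main)
```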
If the Park directive is Rejected by the Supervisor, the customer is assumed to be able to leave the garage safely (satisfying G (20) ). If the car is Accepted then the customer will leave the control of the car to the Tracker (satisfying G (12) and G (13) ). The customer is assumed to always eventually send a Retrieve directive to the Supervisor, after their car is Accepted (satisfying G (15) and G (19) ). Once the vehicle is Returned, the customer is assumed to be able to pick it up and drive safely away. All pedestrians in the parking lot are customers, and they are constrained to only walk on the walkable area and never stay on a crosswalk forever (thus satisfying G (16) and G (17) ). When a car fails, it becomes immobilized until it is towed (G (18) ). From this, it follows that CustomerInterface satisfies C CustomerInterface . C. Supervisor Implementation At any time, the Supervisor knows the total number of cars that have been accepted into the garage, which is represented by the variable num active customers, and is designed to accept new cars when this number is strictly less than the total number of parking spots G.parking spots. This implies G (29) is satisfied. Overall, this ensures all directives will get a response, yielding G (25) . Whenever the Supervisor receives a Completed signal, it will check if the car is the return area. If it is, then the Supervisor will send a Returned signal to the CustomerInterface in compliance with G (26) . If the Supervisor ever accepts a new car, or receives a Blocked signal from the Planner, or a Retrieve request it will send a start configuration compatible with the car's current state as well as an end configuration to one of the parking spaces in the former case and to a place in the return area in the latter. This guarantees G (30) . Proposition 1: M Supervisor satisfies C Supervisor . Proof: Let M denote our implementation of the Supervisor and σ ∈ M . We want to show that From the description of the Supervisor implementation, we conclude σ ∈ G (25) ∧ G (26) ∧ G (27) ∧ G (29) ∧ G (30) . Since σ ∈ A (24) and because in our implementation whenever the Supervisor receives a Completed signal it will alert the customer of the corresponding status, our implementation satisfies G (28) . D. Planner Implementation The Planner computes paths that cover the parking spots, as well as the entry and exit areas of the parking garage, which are κ-feasible for a car that satisfies (7) such that the corresponding δ-corridor is on G.drivable area. Given a maximum allowable curvature, a grid discretization scheme is based on a planning grid whose size is computed to provide full lot coverage and satisfy the curvature bounds, as depicted in Figure 3. For every specified grid size, the algorithm will check if the planning graph is appropriate by determining how well the parking lot is covered. Only a grid size that provides full coverage of the lot is chosen for path planning. The dynamical system specified in (7) is differentially flat [25]. In particular, it is possible to compute all states and inputs to the system, given the outputs x, y, and their (in this case, up to second order) derivatives. Specifically, the steering input is given by where κ(t) is the curvature of the path traced by the midpoint of the rear axle at time t given by The task of tracking a given path can be shown to depend only on how ϕ(t) is constrained. For practical purposes, let us assume |ϕ(t)| ≤ B for some B > 0. 
Then by Equation (46), tracking feasibility depends on whether the maximum curvature of that path exceeds tan(B) . For our implementation this is assumed to be 0.2 m −1 . This problem has been studied in [26] in the context of rectangular cell planning. We apply the algorithm described therein for a Type 1 path (CBTA-S1) to a rectangular cell while constraining the exit configuration to a heading difference of ±5 • and a deviation of ±0.5 m from the nominal path. The setup and the resulting initial configuration, for which traversal is guaranteed, are shown in Figure 4 and Figure 5. The initial car configuration can be anywhere on the grid segment entry edge, as long as it is between the lower bound α and the upper bound α. By passing through this initial funnel segment, the car will transition itself onto the planning grid. Therefore, it remains to be verified that each path generated from the grid is guaranteed to have a maximum curvature that is smaller than κ. An example path and its curvature are provided in Figure 5. Combining the parking lot coverage, initial grid segment traversability, and the curvature analysis, a grid size is determined to be 3.0 m for the path planner, according to Figure 3. The synthesized grid size and path smoothing technique used in our Planner guarantee that all trajectories generated meet this maximum curvature requirement. In addition to satisfying G (35) , any execution of the Planner also satisfies G (33) and G (34) because either the Planner can generate a feasible path or it will send a Blocked signal to the Supervisor. When the Planner receives a Blocked signal from the Tracker it will either attempt to find a different path on the planning graph or report this to the Supervisor. This satisfies G (36) . E. Tracker Implementation The Tracker receives directives from the Planner consisting of trackable paths and sends responses according to the task status to the Planner. The Tracker sees all agents in G.interior and guarantees no collisions by sending a brake signal when necessary to ensure a minimum safe distance is maintained at all times. The tracking algorithm that we use is an off-the-shelf MPC algorithm from [27]. To ensure that the vehicles stay in the δ-corridors, given knowledge of the vehicle's dynamics, we can synthesize motion primitives that are robust to a certain disturbance set ∆ Car (see Figure 2). Algorithms for achieving this have been proposed and implemented, for example, in [28] for nonlinear, continuous-time systems and for affine, discretetime systems in [29]. In our implementation we used a simplified approach, which ensures that a backup controller for the car gets activated if the car approaches the boundary of the δ-corridor and ensures that the car will merge onto the path again. Once it reaches the original path, the tracking of the remaining path will continue. By A (37) , any new path command [p, c] sent down from the Planner module is assumed to be κ-feasible and have a drivable δ-corridor, the initial portion of which contains c.car at that time. In our implementation, we ensure that every time this happens, c.car is stationary. And under this condition, we were able to confirm by testing that a car controlled by the MPC algorithm can track the corresponding δ-corridors of a diverse enough set of paths, thus satisfying G (39) and G (43) . The MPC algorithm is configured to output properly bounded control inputs, thus satisfies G (40) . 
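The curvature-based feasibility check discussed in this section can be sketched numerically: compute the curvature κ = (x'y'' − y'x'') / (x'^2 + y'^2)^(3/2) of a sampled path and compare its maximum against the bound implied by the steering limit (for the kinematic car, κmax = tan(B)/ℓ; the text assumes 0.2 m⁻¹). In the sketch below the path, wheelbase and bound are illustrative values.

```python
# Curvature-based feasibility check for a sampled path: a path is trackable by the
# kinematic car only if its maximum curvature stays below tan(B)/l (0.2 1/m is the
# bound quoted in the text). The path, wheelbase and steering limit are illustrative.
import numpy as np

def curvature(x, y):
    """Signed curvature of a planar path given sampled coordinates."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)

def steering_angle(kappa, wheelbase):
    """Steering input recovered from curvature via differential flatness."""
    return np.arctan(wheelbase * kappa)

if __name__ == "__main__":
    s = np.linspace(0.0, 1.0, 200)
    x, y = 30.0 * s, 2.0 * np.sin(2.0 * np.pi * s)        # made-up smooth path
    kappa = curvature(x, y)
    kappa_max = np.max(np.abs(kappa))
    print(f"max curvature = {kappa_max:.3f} 1/m, feasible = {kappa_max <= 0.2}")
    phi_peak = np.max(np.abs(steering_angle(kappa, 2.5)))
    print(f"peak steering angle = {np.degrees(phi_peak):.1f} deg")
```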
In addition, our implementation satisfies G (42) , G (44) and G (45) by construction. And finally, we can guarantee G (41) by Property 49. VII. CORRECTNESS OF THE COMPOSED SYSTEM In this section, we will show that our implementation of the AVP is correct and satisfies the overall system specification by leveraging the modularity provided by the contract based design. We start by composing the AVP components, namely the Supervisor, the Planner, and the Tracker and then computing the quotient of the overall specification and the composed contract. Then we will show that our contract for the CustomerInterface is a refinement of this quotient. A. Contract Composition As part of the final verification step, we will be taking the composition of the component contracts and showing that our overall system implementation satisfies this composition. This will imply that the composition is consistent. Given two saturated contracts C 1 and C 2 , their composition C 1 ⊗ C 2 = (A, G) given by [1]:    A nice property of the composed contract is that if M 1 satisfies C 1 and M 2 satisfies C 2 then M 1 × M 2 satisfies C 1 ⊗ C 2 . Using the fact that the composition operator ⊗ is associative and commutative, a straightforward calculation yields the following more explicit form for the composition of N saturated contracts If A = ∅, then the composed contract is compatible. The contract is consistent if there exists an implementation for it, namely G = ∅ if it is saturated. For our AVP system, we will show that our composed implementation also satisfies the composed contract in a non-vacuous way, meaning it satisfies all guarantees of the component contracts simultaneously. In the composition, an acceptable behavior satisfies the following properties, namely the operation of the car inside the garage: 1) The Supervisor rejects the car due to the lack of reachable, vacant spots. The car will not enter the garage. 2) A car which was dropped off correctly in the deposit area is accepted by the Supervisor by G (28) . a) Accepted, no contingency: The Tracker by A (38) takes over control. After this, the Supervisor must send a directive in the form of a pair of configurations to the Planner G (30) , which in turn must send to the Tracker a safe and feasible path (satisfying A (37) ) such that the starting and ending configurations of the path match the received configurations (G (33) , G (34) ). Upon receiving the path from the Planner, the Tracker ensures that the car stays in the corridor of the path G (39) and ensures that it will make progress on that path (this satisfies G (43) ). It will accomplish this while sending compatible inputs to the customer's car G (40) and not driving it into people and other cars G (41) . When the CustomerInterface sends a Retrieve command, the above process repeats with the Supervisor, which ensures that the last configuration is in the return area, thus satisfying G (27) . If this is the last sent path, then upon reaching the end of the path, it should notify the Planner module that it has completed the task by G (44) which satisfies A (24) , A (29) , and A (31) . The Supervisor alerts the CustomerInterface of the completed return by G (26) . 3) Accepted, with problems: If the car is accepted and at any time during the above process: a) The car fails (hence, cannot move by G (18) ), the Tracker will send a Failed message to the Planner by G (42) satisfying A (32) and by G (35) this will be forwarded to the Supervisor. 
This satisfies A (22) , which together with A (21) , will imply that the failed car will eventually be towed. b) The car is Blocked, the Tracker will report to the Planner by G (45) , which will try to resolve or alert the Supervisor satisfying G (36) . B. Contract Quotient For saturated contracts C and C 1 , the quotient is defined in [15] as follows: Quotienting out the composed specification of the components from the overall system specification should yield the the required customer behavior. The composed system was computed to be with the assertions G i and A i of the Supervisor, Planner and Tracker contract C i in saturated form. With the contract for the overall system defined as: The contract for the overall system is as follows: • Assumes -Any circumstances. -Always healthy cars will eventually be returned (liveness). When computing the quotient of the overall system specification and the composed system, the resulting assumptions and guarantees are the following. Assuming that the AVP components work correctly (e.g. provide their respective guarantees), the customer must guarantee that all assumptions that the AVP components make on the customer are valid, while ensuring safety and progress. Meaning the customer need to provide the following guarantees: • Guarantees: -The customer will drop off the car correctly satisfying A (23) . -The customer will not interfere with the car controls after the drop-off satisfying A (38) . -The customer needs to ensure progress by not blocking the path forever, and eventually requesting and picking up the car. -The customer will not take any action towards collision ensuring safety. Our customer contract refines the contract with the above mentioned guarantees. A (23) and A (38) are satisfied by G (14) and G (12) . The safety property is guaranteed by the customer staying in the walkable area by G (16) . Progress is ensured by G (15) , G (17) , and G (20) . Our CustomerInterface contract includes the guarantees generated from the quotient and thus is a refinement of this contract. We will now show specifically that the composed system satisfies the safety and progress properties (G (41) and G (28) x, c 2 .y) ≥ ε min,people )). (49) Proof: (Sketch) For each vehicle in the parking lot, the following invariance is maintained. There will be no collisions, as the Tracker checks the spatial region in front of the car and brings it to a full stop in case the path is blocked by another agent (car or pedestrian). The minimum distance to an obstacle is determined by a minimum braking distance. Furthermore, the environment does not take actions, which will lead to an inevitable collision due to the constraints on the pedestrian dynamics 8. Proof: (Sketch) Consider the parking lot topology shown in Figure 1. Let c ∈ C and c.car.healthy. Assume that c sends a Retrieve message to the Supervisor . For each t, let us define f (t) to be the number cars between c.car and its destination. Clearly, f (t) ≥ 0 for any t and f (t) is well-defined because for the topology being considered, we can trace out a line that starts from the entrance area, going to any one of the parking spots and ending at the return area without having to retrace our steps at any time. We will show that there exists a t ≥ t such that f (t ) = 0, implying that there is no longer any obstacle between c and its destination. Next, we claim that ∀t, t :: t > t :: f (t) ≥ f (t ). This is true because: • The parking lot topology and the safety measures do not allow for overtaking. 
• The area reservation strategy implemented in the Supervisor prevents an increase in f upon re-routing to avoid a failed car. A notable detail is that if c.car is trying to back out of a parking spot, a stream of cars passing by can potentially block it forever. This is resolved by having c.car reserve the required area so that once any other car has cleared this area, c.car is the only one that has the right to enter it. Finally, we will show that ∀t :: ∃t′ :: t′ > t :: f(t) > f(t′). Let c′ be such that c′.car is between c.car and its destination. By the dynamical constraint on pedestrians and by assumptions A(16) and A(17), they will not block cars forever. Our algorithm guarantees that one of the following will happen at some time t′ > t: 1) c′.car is picked up by c′. 2) c′.car is parked and c.car drives past it. 3) c′.car drives past c.car's destination. 4) c′.car breaks down and by A(21) is eventually towed. It is easy to see that each of these events implies that f(t) > f(t′). Since f is an integer and cannot drop below 0, the result follows.

VIII. SUMMARY AND FUTURE WORK

We have formalized an assume-guarantee contract variant with communication via a directive-response framework. We then used it to write specifications and verified the correctness of an AVP system implementation [23]. This was done separately for each module and for the complete composed system. The application of this framework in the AVP can be extended to more agent types, for example, human-driven cars and pedestrians that do not necessarily follow traffic rules at all times. A contract between the valet-driven cars and the human-driven cars will be needed to ensure the safe operation of the parking lot, and in the event that a human-driven car violates the contract, cars controlled by the system need to be able to react to this situation safely. More failure scenarios, such as communication errors (message loss, cyber-physical attacks, etc.), may also be included.
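To make the contract algebra used in Sections VII-A and VII-B concrete, the following is a minimal sketch of composition and quotient for saturated contracts represented as predicates over behaviors. The `Contract` class, the dictionary-based behaviors, and the toy example are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: saturated assume-guarantee contracts as predicates
# over behaviors, with composition and quotient following the formulas quoted
# in Sections VII-A and VII-B.
from dataclasses import dataclass
from typing import Callable, Dict

Behavior = Dict[str, float]            # a behavior is a bag of named signals here
Pred = Callable[[Behavior], bool]

@dataclass
class Contract:
    A: Pred                            # assumptions (saturated form)
    G: Pred                            # guarantees (saturated form)

    def compose(self, other: "Contract") -> "Contract":
        # C1 (x) C2 = ((A1 and A2) or not (G1 and G2), G1 and G2)
        G = lambda b: self.G(b) and other.G(b)
        A = lambda b: (self.A(b) and other.A(b)) or not (self.G(b) and other.G(b))
        return Contract(A, G)

    def quotient(self, part: "Contract") -> "Contract":
        # C / C1 = (A and G1, (G and A1) or not (A and G1))
        A = lambda b: self.A(b) and part.G(b)
        G = lambda b: (self.G(b) and part.A(b)) or not (self.A(b) and part.G(b))
        return Contract(A, G)

# Toy usage: one component bounds the speed, a second one relies on that bound.
C1 = Contract(A=lambda b: True, G=lambda b: b.get("speed", 0) <= 10)
C2 = Contract(A=lambda b: b.get("speed", 0) <= 10, G=lambda b: b.get("dist", 1) > 0)
system = C1.compose(C2)
print(system.G({"speed": 5, "dist": 2}))   # True: both component guarantees hold
```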
Application of 1D ResNet for Multivariate Fault Detection on Semiconductor Manufacturing Equipment Amid the ongoing emphasis on reducing manufacturing costs and enhancing productivity, one of the crucial objectives when manufacturing is to maintain process tools in optimal operating conditions. With advancements in sensing technologies, large amounts of data are collected during manufacturing processes, and the challenge today is to utilize these massive data efficiently. Some of these data are used for fault detection and classification (FDC) to evaluate the general condition of production machinery. The distinctive characteristics of semiconductor manufacturing, such as interdependent parameters, fluctuating behaviors over time, and frequently changing operating conditions, pose a major challenge in identifying defective wafers during the manufacturing process. To address this challenge, a multivariate fault detection method based on a 1D ResNet algorithm is introduced in this study. The aim is to identify anomalous wafers by analyzing the raw time-series data collected from multiple sensors throughout the semiconductor manufacturing process. To achieve this objective, a set of features is chosen from specified tools in the process chain to characterize the status of the wafers. Tests on the available data confirm that the gradient vanishing problem faced by very deep networks starts to occur with the plain 1D Convolutional Neural Network (CNN)-based method when the size of the network is deeper than 11 layers. To address this, a 1D Residual Network (ResNet)-based method is used. The experimental results show that the proposed method works more effectively and accurately compared to techniques using a plain 1D CNN and can thus be used for detecting abnormal wafers in the semiconductor manufacturing industry. Introduction Semiconductor manufacturing is a batch multi-step process, where silicon wafers undergo a sequence of complex and lengthy processing operations involving a large number of recipes and equipment types, during which electronic circuits are gradually crafted to create functional integrated circuits.Products are organized into batches of 25 silicon wafers throughout equipment production.Finalized wafers are obtained after several months of extensive processing cycles, representing hundreds of operations.The semiconductor manufacturing process is nonlinear and can be disrupted by various factors, such as equipment aging, cleaning, and repairs; the state of the wafers and wafer transfer; and preprocess chambers and chamber warm-up.As a result, there is process variability within a wafer (intrawafer variability), between wafers (interwafer variability), within a batch (intrabatch variability), and between different batches (interbatch variability).The equipment's data, which are automatically collected by numerous sensors located on the process equipment, provide direct information about the process conditions, such as temperature, pressure, gas flow, power, capacitance, etc.This results in a vast amount of sensor data that are routinely collected and stored on appropriate media. 
Modern manufacturing industries use cutting-edge big data technologies and innovative machine learning techniques to reduce manufacturing costs and improve production quality by extracting insightful knowledge from the collected data to enhance process automation, predictive analyses, and effective equipment monitoring [1].As for equipment monitoring, its main purpose is to identify abnormalities and faults in manufacturing process operations.In manufacturing industries, equipment monitoring can be segmented into four main parts: fault detection, fault identification and diagnosis, estimation of fault magnitudes, and product quality monitoring and control [2].The methods used to implement this monitoring are divided into three main categories: qualitative model-based, quantitative model-based, and data-driven methods [3].In order to use model-based monitoring methods, the structure and behavior of the monitored system and all of its components must be thoroughly known and understood.Model-based monitoring methods are very reliable, but they suffer from numerous flaws, as the detailed analytical descriptions needed for their implementation are either unavailable for complex industrial processes or greatly time-consuming to obtain due to the need for extensive human intervention.Unlike model-based methods, data-driven methods do not require any a priori knowledge about the system.The models are constructed by relying solely on available process data, through which the characteristics of the system are extracted. To guarantee consistent, continuous, and reproducible production quality, the sensor data collected from hundreds of equipment variables are utilized for equipment monitoring purposes, such as fault detection, fault diagnosis, prognosis, equipment health management, predictive maintenance, and virtual metrology.The early detection and precise classification of faulty wafers that result from abnormal processing are crucial for controlling operations, minimizing yield losses, and preventing defective wafers from progressing to the subsequent stages for each equipment.This paper, in particular, emphasizes the use of sensor data for fault detection and classification (FDC) in the semiconductor industry. As time goes on, the strong technological push provides improved data storage and data analysis capabilities, resulting in the collected sensor data being significantly larger as the number of data samples and dimensionality jointly increase.With this increase in sensor data availability, the collected data disclose many subtleties, such as incompleteness, high dimensionality, infrequent labeling, and severely unbalanced samples.This paper focuses on high dimensionality and severely unbalanced data. 
Firstly, the intricate nonlinear interactions between the signals (multiple intervariable correlations) in the high-dimensional sensor data make the detection of abnormal measurements exceedingly challenging.For data-driven tasks, it is crucial to extract solely the pertinent information, especially when dealing with multidimensional data [4].Various feature extraction and dimensionality reduction methods have been developed to extract relevant features by performing nonlinear mappings of input data into an embedded representation [5].The learned embedded representation contains useful features, which can be used to perform fault detection with statistical control charts or machine learning methods, resulting in improved reliability.To detect faults through feature extraction and dimensionality reduction, several unsupervised machine learning methods have been proposed based on factor analysis embedding, locally linear embedding, and singular value decomposition (SVD) embedding. Secondly, industrial faults rarely occur, resulting in severely unbalanced data samples, where faulty samples are scarce.The rare occurrence of faults makes it difficult to constitute a dataset sufficiently balanced for effective supervised machine learning.While numerous feature extraction and classification approaches for fault detection and analysis have been presented, the fault classification accuracy remains unsatisfactory due to the severely unbalanced data samples [6].This imposes a great limitation on the usage of supervised learning methods for fault detection.The scarcity of faulty samples has led to the widespread use of self-supervised learning methods based on Principal Component Analysis (PCA) [7], Independent Component Analysis (ICA) [8], and Partial Least Squares (PLS) [9], which can be combined with supervised learning methods such as support vector machine (SVM) [10] and k-Nearest Neighbors (k-NN) [11] for fault identification. The high volume of data poses a challenge for machine learning methods that require extensive data preprocessing, leading to performance limitations [12].To address this challenge in the semiconductor industry, deep learning algorithms that can handle large volumes of data without extensive preprocessing have been explored for fault detection.Additionally, deep learning algorithms can adapt and learn from new data, making them suitable for dynamic environments, where data patterns may change over time.Deep learning approaches have performed very well across a wide range of applications, effectively transforming high-dimensional information into new embedded representations with robust and meaningful characteristics.Self-supervised deep learning methods, such as stacked [13], denoising [14], convolutional [15][16][17][18], and recurrent autoencoders [19,20], have been used to enable this efficient translation of input data to embedded characteristics.Deep learning methods achieve equally good fault detection performance when working on unbalanced datasets with supervised learning methods based on Convolutional Neural Networks (CNN) [21][22][23].In their study, Hsu et al. [22] notably used data augmentation with a sliding window to generate numerous subsequences from multiple time series, which helped avoid overfitting on the unbalanced datasets. 
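As a concrete illustration of the sliding-window augmentation mentioned above (after Hsu et al. [22]), the sketch below cuts one multivariate trace into overlapping subsequences; the window length and stride are arbitrary choices for the example, not the published settings.

```python
# Illustrative sliding-window augmentation: cut a multivariate trace of shape
# (T, M) into overlapping subsequences to enlarge a scarce (faulty) class.
import numpy as np

def sliding_windows(trace: np.ndarray, window: int = 64, stride: int = 8) -> np.ndarray:
    T, _ = trace.shape
    starts = range(0, T - window + 1, stride)
    return np.stack([trace[s:s + window] for s in starts])

faulty_trace = np.random.rand(150, 11)        # one faulty wafer: 150 s, 11 SVIDs
augmented = sliding_windows(faulty_trace)     # many subsequences from one trace
print(augmented.shape)                        # (11, 64, 11) with these settings
```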
With the increase in dataset sizes for complex data characteristics, such as those found in multivariate time series, deeper models are needed.Deep learning models provide more accurate results as the number of layers increases.In order to achieve the most accurate models on very large datasets, the depth of the models must be continuously increased to cope with the increase in dataset sizes.However, despite being the primary method with state-of-the-art performance, deep learning techniques face the issue of vanishing/exploding gradients when the network becomes very deep.As a result, shallow counterparts may outperform deep networks [24,25].He et al. [24] proposed residual networks (ResNet) to efficiently overcome vanishing gradients.To perform bearing fault detection, Qian et al. [26] used a ResNet classifier with model-based data augmentation to cope with the requirement of large amounts of data.This paper addresses the gradient vanishing problem in a plain 1D CNN-based fault detection method trained with a substantial amount of multivariate time series data from a semiconductor manufacturing process.To overcome this observed issue with vanishing gradients, this paper introduces a novel ResNet architecture for fault detection on multidimensional time series.The proposed architecture uses 1D convolutions, which capture both the temporal dynamics and spatial correlations in the multivariate time-series data.The approach's effectiveness is demonstrated by analyzing two datasets and comparing them to the state-of-the-art methods.This study is an extended analysis of a work previously presented at a conference [27].providing new and interesting insights into gradient analysis, detailed data, and fault-type description, as well as discussing detection performance for each fault type. The remainder of this paper is organized as follows.Section 2 introduces the representative deep learning methods used in fault detection.Section 3 exposes the gradient vanishing problem and describes the proposed ResNet model.Section 4 presents the experimental setup, and Section 5 discusses the detection performance on real and simulated data from a semiconductor manufacturer.Finally, Section 6 concludes the paper and discusses future studies. Deep Learning Methods for Fault Detection This section introduces the nature of the sensor data and briefly presents the neural network approaches used for the experimental analysis.The gradient vanishing problem on deep CNNs is formalized, and the theory behind residual connections is explained. Multivariate Time Series A multivariate time series, also known as multidimensional time series, is a sequence of vectors that involves multiple variables recorded over a period of time, with each vector representing the state of a monitored variable at a specific time point.In other words, it is a collection of time series, where each time series corresponds to a different feature or dimension.A multivariate time series S with T time steps and M variables is represented as S = [S 1 , S 2 , . . ., S T ], where S k = (s 1,k , s 2,k , . . ., s M,k ) is an M-dimensional vector that represents the values of the M variables at time k.In contrast to a univariate time series, which involves only a single variable, a multivariate time series can capture the relationships and interactions between multiple variables. 
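The notation above can be illustrated with a small array: a series S with T time steps and M variables stored so that the k-th row is the vector S_k. The sizes below are arbitrary toy values.

```python
# Illustration of the notation S = [S_1, ..., S_T], where each row
# S_k = (s_{1,k}, ..., s_{M,k}) collects the M variable values at time step k.
import numpy as np

T, M = 5, 3                       # toy sizes
S = np.random.rand(T, M)          # row k holds the M-dimensional vector S_{k+1}
S_2 = S[1]                        # values of all M variables at time step 2
print(S.shape, S_2.shape)         # (5, 3) (3,)
```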
In the semiconductor industry, equipment sensor data are collected at a given frequency, and this can vary from one equipment to another.Sensor data variables, also referred to as status variable identification (SVID), can be collected every 1 s, 0.5 s, 0.2 s, and so on, and this value is fixed for specific equipment and never changes.Semiconductor manufacturing is a batch-processing industry, and the equipment sensor data are collected as three-dimensional data.They constitute a multivariate time series, which can be represented in a 3D matrix form, i.e., wafer number, SVID, and processing time, as shown in Figure 1. For each SVID, all the wafers are recorded for different durations due to variations in the processing time for different recipes, as well as the time-varying behaviors inherent in semiconductor manufacturing.This leads to a non-stationary dynamic in the multivariate time series.Consequently, all the durations need to be synchronized and preprocessed to a fixed length prior to fault detection.Given the various operating conditions, there are differences in the statistical characteristics of the collected time series between one wafer and another, and one batch and another. Supervised Deep Learning for Fault Detection Long Short-Term Memory (LSTM) is a type of recurrent neural network (RNN) that is capable of processing sequential data such as time-series data by preserving information over a longer period of time compared to traditional RNNs, which suffer from the vanishing gradient problem.LSTM [28] is introduced as a solution to the vanishing gradient problem in RNNs.Instead of a single hidden state, LSTM uses a cell state and three gates (input gate, forget gate, and output gate) to control the flow of information.The cell state acts as a memory unit that can store information over longer periods of time.The gates regulate how much information is allowed to flow into or out of the cell state at each time step, allowing the LSTM to selectively forget or remember information from the past.These gating mechanisms allow LSTM to selectively remember or forget information over long periods of time, making it well suited for modeling time-series data. 
LSTM is well suited for tasks such as time series classification and anomaly detection because it can learn complex temporal patterns and capture long-term dependencies and multivariate correlations in the data.For anomaly detection, the LSTM model is then trained using normal system data to learn the normal behavior of the system.Once the model is trained, it is used to detect anomalies in the system data.Anomalies are detected by comparing the output of the LSTM model for a given input with the expected output based on the model training data and computing a corresponding anomaly score.LSTM can be trained using backpropagation over time [29], which allows it to learn from past data and make predictions about future data.In [30], the authors proposed LSTM-AD, a self-supervised anomaly detection method based on stacked LSTMs.By leveraging the power of stacked LSTMs, LSTM-AD captures complex temporal dependencies in the normal time-series data.This enables it to effectively learn and predict expected behavior, making it robust against variations and anomalies in the analyzed time series.The utilization of prediction errors and thresholds allows LSTM-AD to accurately identify and flag any deviations from the learned normal patterns, providing a reliable anomaly detection mechanism.The same main author later proposed EncDec-AD in [19], an LSTMbased encoder-decoder approach for multi-sensor time-series anomaly detection.EncDec-AD reconstructs time series in reverse, uses the reconstruction error to compute anomaly scores, and sets a decision boundary threshold using the mean and standard deviation.This threshold helps classify the time-series data as either normal or anomalous.The encoder-decoder architecture of EncDec-AD is derived from a particular type of neural network: autoencoder. Autoencoders (AEs) are a type of neural network used for unsupervised feature learning [13].They can be used for a variety of tasks, such as data compression, image denoising, and anomaly detection.By rebuilding the input at the output, AEs approximate the identity function by reconstructing the input data as accurately as possible.They can capture complex patterns and relationships from many data types.AEs handle highdimensional data efficiently, making them suitable for multivariate time-series analysis.An autoencoder consists of two main components: an encoder and a decoder.The encoder maps the input data to a lower-dimensional representation, whereas the decoder maps the lower-dimensional representation back to the original input.During training, the autoencoder is optimized to minimize the reconstruction error, which is the difference between the input and the output of the decoder.The denoising autoencoder (DAE) is a variant of AEs that is specifically designed to remove noise from input data.It works by training the AE to reconstruct clean versions of corrupted input data, thereby learning to extract meaningful features and patterns from noisy data, making it more resilient to noise and improving its generalization capabilities.The DAE's robustness to input noise makes it valuable in applications where noise is prevalent. 
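As a compact illustration of the reconstruction-error-plus-threshold idea used by EncDec-AD, the sketch below trains a small dense autoencoder on normal samples and flags new samples whose error exceeds mean + 3·std of the training errors. The layer sizes, toy data, and the 3-sigma rule are illustrative assumptions, not the published configurations.

```python
# Sketch of reconstruction-error anomaly detection with a small dense
# autoencoder; sizes and the mean + 3*std threshold are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_autoencoder(n_features: int) -> Model:
    inp = layers.Input(shape=(n_features,))
    z = layers.Dense(16, activation="relu")(inp)
    z = layers.Dense(8, activation="relu")(z)        # bottleneck representation
    z = layers.Dense(16, activation="relu")(z)
    out = layers.Dense(n_features, activation="linear")(z)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

x_normal = np.random.rand(512, 25)                   # flattened normal wafers (toy data)
ae = build_autoencoder(25)
ae.fit(x_normal, x_normal, epochs=5, batch_size=32, verbose=0)

train_err = np.mean((ae.predict(x_normal, verbose=0) - x_normal) ** 2, axis=1)
threshold = train_err.mean() + 3 * train_err.std()   # decision boundary on errors

x_new = np.random.rand(4, 25)
new_err = np.mean((ae.predict(x_new, verbose=0) - x_new) ** 2, axis=1)
print(new_err > threshold)                           # True marks a suspected fault
```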
Time-series classification using autoencoders involves training an autoencoder on a set of time-series data and then using the learned representation for classification.The encoder of the autoencoder can be thought of as a feature extractor, which maps the time-series data to a lower-dimensional feature space.The extracted features can then be used as input to a classifier, such as a support vector machine (SVM) or a random forest, to perform classification.Anomaly detection using autoencoders involves training an autoencoder on a set of normal time-series data and then using the learned representation to detect anomalies in new time-series data.Anomalies are detected by comparing the reconstruction error of the autoencoder for a given time-series data point with a threshold value.If the reconstruction error is above the threshold, the data point is considered to be an anomaly.In [15], the authors proposed using convolutional sparse autoencoders (CSAE-AD) and the corresponding convolutional denoising sparse autoencoders (CDSAE-AD) to create a self-supervised FDC approach.With the use of convolutional kernels and the addition of a sparsity penalty [31] based on the Kullback-Leibler divergence [32] in the cost function, convolutional sparse autoencoders differ considerably from basic autoencoders.CSAE-AD allows the model to learn hierarchical features from the input data and encourages the activation of only a few neurons, resulting in more efficient and robust representations.CDSAE-AD, the denoising component, further enhances performance by training the model to reconstruct clean data from noisy inputs, improving its ability to handle realworld data with noise.Later, the authors of [14] introduced an FDC approach based on stacked denoising autoencoders to extract noise-resistant features and accurately classify semiconductor data. However, like any machine learning technique, their performance is highly dependent on the quality of the data and the specific problem being solved.Self-supervised learning methods based on LSTMs or AEs for fault detection on semiconductor time-series data perform worse than supervised learning methods, as shown in [33].In [33], CNN-based fault detection methods exhibited the best performances. Convolutional Neural Networks (CNNs) are a type of deep learning model commonly used in computer vision applications, but they can also be applied to time-series data.CNNs [34] are composed of multiple layers, including convolutional layers, pooling layers, and fully connected layers.Convolutional layers are the core building blocks of CNNs and consist of multiple filters that slide over the input data to extract features.The pooling layers downsample the output of the convolutional layers, reducing the dimensionality of the data.Finally, the fully connected layers are used to classify the input data.CNNs have been shown to be effective for fault detection in time-series data.The approach involves using the 1D convolutional layer to learn relevant features from the time-series data.The convolutional layer slides a kernel over the input data to extract local features, which can then be combined to form global features that are used for classification.Lee et al. 
[21] introduced FDC-CNN, a supervised anomaly detection approach that demonstrated good classification performance in fault detection on a Chemical Vapor Deposition (CVD) process dataset, which consisted of multivariate time series.FDC-CNN utilizes convolutional kernels to sweep the time axis of the two-dimensional input and extract both the temporal and spatial relationships between variables during feature extraction.Subsequently, Kim et al. [35] presented a modified version of FDC-CNN, which incorporates a self-attention mechanism into a CNN to improve the fault detection accuracy on an etch-process dataset.The self-attention mechanism [36] assigns attention weights via a probability distribution to different time steps, enabling the detection method to disregard irrelevant parts and concentrate on the relevant parts of a sequence, thus enhancing its ability to detect subtle anomalies. Deep neural networks are used to enhance performance on big datasets rather than on shallow ones.Although the CNN-based methods proposed by [21,35] achieved some great results on our small datasets, they faced the vanishing gradient problem when the networks became very deep.The vanishing gradient problem is a well-known issue that can occur when training deep neural networks, including CNNs and RNNs.The vanishing gradient problem occurs when the gradients used to update the weights of a neural network during training become very small, making it difficult for the network to learn.This can happen in deep neural networks with many layers, where the gradients must pass through multiple layers during backpropagation.The gradients can become small, as they are multiplied by the weight matrices in each layer, leading to a problem where the early layers of the network learn much more slowly compared to the later layers.In time-series fault detection with CNNs, the vanishing gradient problem can occur because the input data are high-dimensional and have complex temporal dependencies.The CNN model must learn to extract relevant features from the data, and these features can be spread across multiple layers of the network.If the gradients become very small as they pass through the layers, the early layers of the network may not be able to learn the relevant features, leading to poor performance.In our case, as seen in Figure 2, there was a gradual decrease in the training and test errors as the number of layers in the CNN-based fault detection model increased from two to nine layers.From 11 layers and beyond, the training and test errors increased as the number of layers increased, resulting in a drop in the detection performance as the number of layers in the network increased.This highlights the necessity of proposing a method capable of addressing the vanishing gradient problem. Residual Connections in Deep Neural Networks He et al. 
[24] brought attention to the problem of performance degradation observed when CNNs deepen. As the network depth increases, the network performance begins to saturate and finally degrades. This phenomenon is caused by the vanishing gradient of deep neural networks rather than overfitting [25]. This can make it difficult for the network to learn from the training data, as the updates to the parameters based on the gradient can become insignificant. Thus, slow convergence or even complete failure to converge can be observed during the training of the network. Several network designs, including ResNet [24], Highway Network [37], and DenseNet [38], have been proposed to address this issue. All these networks share the same design principle, commonly referred to as shortcut, skip, or residual connections. Shortcut connections are a technique used in deep neural networks to directly address the vanishing gradient problem. They allow the gradient to be directly propagated from one layer to another, bypassing intermediate layers that may cause the gradient to become small. This helps to alleviate the vanishing gradient problem and allows the network to learn more efficiently, even when it contains many layers.

In the ResNet architecture, the shortcut connections are mainly used in two ways: they can either perform identity mapping, such as in the identity block in Figure 3a, or execute a linear projection, as in the convolution block in Figure 3b. The output of the identity blocks is combined with the output of the stacked layers, which does not add any extra parameters or computational complexity to the network. Consequently, they have the same number of parameters, depth, and width, making them simple to compare to the corresponding plain networks. For an input x, their output y is defined as y = σ(F(x, {W_i}) + x), where F(x, {W_i}) is the residual mapping to be learned and σ is the ReLU activation function. The function F(x, {W_i}) denotes several convolutional, normalization, and activation layers, where element-wise addition is executed on two feature maps, channel by channel. In convolution blocks, the shortcut connections conduct a linear projection to align the dimensions between the input x and the residual mapping F(x, {W_i}). This linear projection is achieved by using a 1×1 convolutional layer with appropriate filters. By doing so, the dimensions of the input and the residual mapping are made compatible, allowing for element-wise addition. This technique helps preserve important information while enabling the network to learn more complex representations. The output of this block is y = σ(F(x, {W_i}) + W_s x), where W_s is a weight matrix performing the linear projection of x. The linear projection is employed when a modification in dimension arises in the stacked layers of a block. The structure of the residual blocks is adaptable, as depicted in Figure 3, where the blocks contain two convolutional layers. However, it is possible to have additional layers and diverse configurations. The shortcut connection allows the gradient to flow directly from the output of the residual block to the input, bypassing the convolutional layers. This helps prevent the gradient from vanishing as it propagates through the network, making it easier to train deeper models. By adding the input to the output, the network is able to learn residual functions that represent the difference between the input and the output. This makes it easier for the network to learn the underlying function being modeled, especially when the function has many complex features.
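The two block types can be sketched for 1D signals as follows; filter counts and kernel sizes are placeholders for illustration, not the exact configuration reported later in the paper.

```python
# Illustrative sketch of the two 1D residual block types described above:
# identity shortcut (y = sigma(F(x) + x)) and 1x1-convolution projection
# shortcut (y = sigma(F(x) + W_s x)).
import tensorflow as tf
from tensorflow.keras import layers

def identity_block(x, filters, kernel_size=3):
    shortcut = x                                          # unchanged input is added back
    y = layers.Conv1D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)

def conv_block(x, filters, kernel_size=3, stride=2):
    # 1x1 convolution on the shortcut aligns channel count and length
    shortcut = layers.Conv1D(filters, 1, strides=stride, padding="same")(x)
    shortcut = layers.BatchNormalization()(shortcut)
    y = layers.Conv1D(filters, kernel_size, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)
```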
Shortcut connections have been shown to be effective in a variety of deep neural network architectures [39]. Their ability to address the vanishing gradient problem and improve training efficiency has made them an essential tool for building deep neural networks. They have helped advance the state of the art in tasks such as image classification, object detection, natural language processing, and semantic segmentation.

Proposed Method for Fault Detection

In the semiconductor industry, ResNet architectures have recently been used for wafer defect detection and classification [40,41]. They aim to sort defective chips by analyzing images of wafer surfaces. In the literature, no works have addressed fault detection on multivariate time series using residual networks. This section discusses a fault detection method based on a ResNet architecture that uses 1D convolutions to process raw multivariate time series from semiconductor manufacturing equipment.

In this paper, we implement standard CNNs and CNNs with shortcut connections to convey the advantages of adding residual connections to improve the feature learning capability of deep convolutional networks on time series. Also, a ResNet-type architecture is suggested for fault detection. ResNet, an enhanced version of the standard convolutional network, is utilized to minimize training difficulty by efficiently using shortcut connections to prevent the gradient from vanishing as it propagates through the deep network. The ResNet architecture consists of a succession of residual blocks for feature extraction, followed by fully connected layers for classification.

In standard CNNs, the receptive field is a square matrix of weights that links the input layer to the convolutional layer. With a size smaller than the input data, the receptive field moves across its horizontal and vertical axes with a predetermined stride to perform convolutions. For an input x of size M × K, the output of a convolution operation with no padding is stored in a node and can be expressed as follows: y_{i,j} = σ( Σ_{m=1}^{F} Σ_{n=1}^{F} w_{m,n} · x_{(m+iS)(n+jS)} + b ), where F represents the size of the square receptive field; S is the stride; x_{(m+iS)(n+jS)} is the input element at position (m+iS, n+jS); w_{m,n} and b are the weights at position (m, n) and the bias, respectively; and σ is a nonlinear activation function, typically a rectified linear unit (ReLU). The receptive field or filter used to create a feature map contains a single weight matrix, which means all the nodes in a feature map share the same weights. This allows the receptive field to search for a common characteristic (such as a single intervariable correlation in multivariate sensor signals) across the entire input data [21]. However, the conventional square receptive field of CNNs is not ideal for extracting intervariable and temporal correlations among all the SVIDs, which is crucial for fault detection in multivariate time-series data. To address this, the proposed architecture utilizes a rectangular receptive field that moves only along the time axis. One-dimensional (1D) convolution layers are tailored to implement this feature, operating along a single axis. For an input wafer x of size M × K, which represents M SVIDs and K time steps, the output of the first convolution operation with no padding, immediately after the input layer, is given for a node by y_j = σ( Σ_{m=1}^{M} Σ_{n=1}^{F} w_{m,n} · x_{m,(n+jS)} + b ), where F and S are the size along the time axis and the stride length of the receptive field, respectively. The proposed approach for fault detection combines a feature extractor based on a ResNet for feature learning with a fully connected layer. The
ResNet-based architecture proposed in this study includes both identity blocks (Res-block a) and convolution blocks (Res-block b) to enhance the feature extraction process, as shown in Figure 4.The entire architecture, as illustrated in Figure 4, has some specificities.The batch normalization layer is utilized to reduce the computational complexity of the training process.The spatial dropout layer [42] is implemented to regularize the network weights and prevent overfitting.Residual blocks are employed to mitigate the degradation problem and extract distinctive features from the dataset, with two types of blocks: identity and convolution.The convolution layers in the blocks follow two design rules: (i) when the feature map size is the same, the layers have the same number of filters, and (ii) when the feature map size is halved, the number of filters per layer is doubled.The halving or downsampling is accomplished using convolution layers with a stride of 2. The pooling layer is used to reduce the dimension of the intermediary algebraic elements, which are then flattened to obtain the 1D dimension required by the fully connected layers. The fully connected layers, also known as dense layers, form a multi-layer perceptron, which takes a one-dimensional array obtained from the output of the feature extractor. The fully connected layers are responsible for learning the complex relationships between the features extracted by the previous layers and generating the final output probabilities for each class.In this fault detection approach, the fully connected layers perform binary classification to determine if an input sample is normal or faulty.The complexity of a control system is similar to that of a controlled system.To alleviate the complexity of the ResNet model, the number of sensors employed at each stage of a process is chosen by experts on the basis of domain knowledge.Reducing the number and quality of sensors thus helps mitigate the complexity of the monitoring algorithm.Another way of reducing the complexity of the monitoring algorithm is by using raw sensor data.Raw sensor data have no signal processing or filtering applied to them before ingestion by the ResNet, as it is not essential for its operations.Hence, signal processing can be totally skipped with no impact on the performance of the ResNet. When using a considerably complex algorithm like ResNet for monitoring a multivariate multi-stage process, special effort has to be made during the design and training processes to ensure proper working of the algorithm owing to its complexity.The complexity of the ResNet lies in the structure of the residual blocks and the depth of the overall model.He et al. [24] proposed a set of simple rules for the design of efficient residual blocks, as described above.The suitable depth needed depends on the size of the training dataset and is determined through empirical experimentation.One model is designed, trained, and implemented according to the production recipe of a given equipment.The design changes between two models for two production recipes occur mainly in the modulation of the input layer so as to accommodate the length of the time series and the number of sensors. 
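A minimal sketch of how these pieces could be assembled for an input of K time steps × M SVIDs is given below; it uses a single compact residual-block helper for brevity, and all layer sizes are placeholders rather than the tuned configuration reported in the Neural Network Configurations subsection.

```python
# Illustrative assembly of the described feature extractor + classifier for an
# input of shape (K time steps, M SVIDs). In Conv1D the SVIDs are the channels,
# so the kernel naturally slides along the time axis only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def res_block(x, filters, stride=1):
    y = layers.Conv1D(filters, 3, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # identity shortcut when shapes match, 1x1 projection otherwise
    if stride != 1 or x.shape[-1] != filters:
        x = layers.Conv1D(filters, 1, strides=stride, padding="same")(x)
    return layers.ReLU()(layers.Add()([y, x]))

def build_fault_detector(K, M):
    inputs = layers.Input(shape=(K, M))
    x = layers.Conv1D(64, 3, padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.SpatialDropout1D(0.1)(x)          # regularization on feature maps
    x = res_block(x, 64)
    x = res_block(x, 64)
    x = res_block(x, 128, stride=2)              # feature map halved, filters doubled
    x = res_block(x, 128)
    x = layers.AveragePooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dropout(0.1)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # normal vs. faulty
    return Model(inputs, outputs)

model = build_fault_detector(K=150, M=11)        # e.g., simulated-dataset dimensions
model.summary()
```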
Experimental Setup This section reports a comprehensive empirical study for fault detection in multivariate time series.First, the datasets used for experimental evaluation are introduced.Then, the experimental setup and architecture details of the networks are described.Finally, the metrics are defined to analyze and discern the results obtained.The results of the proposed method are compared to the most recent findings in the literature for fault detection in the semiconductor industry. Data Preparation This paper examines the effectiveness of the proposed model using two datasets provided by STMicroelectronics Rousset 8 fab.To simplify the analysis, we focus on one equipment and one recipe for each dataset.The raw data consist of time series with three-dimensional information (wafer, variables, and time) but are represented as a twodimensional matrix with processing time and SVID axes only.Sensor data are collected every second for both datasets.Equipment faults are rare in semiconductor manufacturing, and the number of faulty samples available for a given production recipe is very low compared to the number of normal samples.This leads to a situation of imbalanced data, as there are more normal samples than faulty samples in training and testing datasets.The ideal composition for supervised classification is an equal number of samples for each class.Encountering unbalanced datasets is a real challenge and adds complexity to the fault detection methods. The first dataset was obtained from a process simulator developed by STMicroelectronics that mimics the dynamics of real variables, such as the gas flow, pressure, and temperature of an etch tool.For a single recipe lasting an average of 150 s, 11 variables are monitored for a total of 7000 wafers, including 5000 normal samples and 2000 faulty samples.This results in a ratio of 28.6% faulty data, which is considered good given the rarity of faulty data in the semiconductor industry.STMicroelectronics has identified five recurrent fault types in its manufacturing processes.The first dataset comprises five distinct fault types, each with an equal distribution of 400 samples.For each fault type, faults are introduced in one process step, and they occur on at least two different variables but not concurrently.The step in which the fault occurs is randomly selected for each fault type, ensuring that faults do not occur systematically in the same step.Figure 5 portrays these five common fault types that occur during wafer manufacturing.Fault 1 represents a breakage point, creating a deviant cycle with an amplitude range ranging between 30 and 50% of the time-series maximum value for at least 10 time steps.Fault 2 represents a temporary change in value with a return to a regular level after several time steps.For fault 2, the amplitude ranges between 10 and 30% of the time-series maximum value for at least 8 time steps.Faults 3 and 4 are analogous to additive noise and sinusoidal disturbances, respectively, acting as innovational outliers that induce a trend change.Fault 3 has an amplitude ranging from 1 to 10% and occurs for at least 5 time steps.For fault 4, the amplitude ranges from 1 to 5% with damping and phase shift factors and occurs for at least 8 time steps.Fault 5 represents a peripheral point, which is an independent data point that is notably outlying, resulting from a sudden rise in value (a peak), with an amplitude between 40 and 60% of the time-series maximum value.The primary objective is to identify all types of faults, and the 
current study does not examine the classification of the detected faults. The second dataset is from a plasma etching tool. The production recipe consists of a series of nine steps and lasts 130 s on average. Due to operating conditions, the time series does not have the same length from one wafer to another. The input data need to have a fixed length to be processed by neural networks. In order to have a fixed length for the dataset, the time series are all padded to a fixed length of 140 s. Among all the collected SVIDs, domain engineers selected a set of 25 for fault detection and classification. For one month of production, this represents 516 wafers, with 423 normal samples and 93 faulty samples. The ratio of faulty wafers is 18.0%, showing a class imbalance. This second dataset has only one fault type, the temporary change.

Neural Network Configurations

Two versions of the ResNet are proposed as fault detection methods: a ResNet with average pooling (ResNet-1) and a ResNet with spatial pyramid pooling [43] (ResNet-2). Two different pooling methods are used here for performance optimization. For comparison purposes, six neural network models are considered as benchmarks: two CNN-based, two LSTM-based, and two autoencoder-based, with different sequence encoding methods. These architectures have achieved consistent results when used for fault detection in the semiconductor industry [14,15,21,35]. The baseline methods used are stacked autoencoders (SAE-1), convolutional autoencoders (SAE-2), standard CNN (CNN-1), CNN with self-attention (CNN-2), stacked LSTM (LSTM-1), and LSTM with self-attention (LSTM-2). CNN-2 and LSTM-2 correspond, respectively, to CNN and LSTM architectures with a self-attention layer replacing the final pooling layer. SAE-1 corresponds to a stacked autoencoder, which is composed of two symmetrical artificial neural networks in a bottleneck form. SAE-2 corresponds to convolutional autoencoders composed of a symmetrical convolutional encoder and deconvolutional decoder.

To optimize the models, various configurations were evaluated for each of the previously presented models, and only the best parameters were retained to produce the final results. The neural network architectures proposed for the experimental setting were implemented using TensorFlow, version 2.10.
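The fixed-length preprocessing step described above can be sketched as follows; padding with trailing zeros is an assumption, since the paper does not state the padding value or side.

```python
# Illustrative padding of variable-length wafer traces to a common length of
# 140 time steps (1 s sampling), as described for the plasma etch dataset.
import numpy as np

def pad_wafer(trace: np.ndarray, target_len: int = 140) -> np.ndarray:
    """trace has shape (T, M): T time steps, M SVIDs. Returns (target_len, M)."""
    T, M = trace.shape
    if T >= target_len:
        return trace[:target_len]                      # truncate longer runs
    pad = np.zeros((target_len - T, M), dtype=trace.dtype)
    return np.vstack([trace, pad])                     # zero-pad shorter runs

# Example: three wafers with 128, 130 and 135 recorded seconds, 25 SVIDs each
wafers = [np.random.rand(t, 25) for t in (128, 130, 135)]
batch = np.stack([pad_wafer(w) for w in wafers])       # shape (3, 140, 25)
print(batch.shape)
```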
The best ResNet-1 architecture comprises one convolutional layer (with 64 filters, a kernel size of 3, and a stride of 1), one spatial dropout layer (with a rate of 10%), four residual blocks (two identity blocks with 64 filters, followed by one convolution block and one identity block with 128 filters), one average pooling layer (the pool size being fixed to 2), one dense layer (with 100 units), and one dropout layer (with a rate of 10%). For ResNet-2, the architecture is the same as that of ResNet-1 with the average pooling layer replaced with a spatial pyramid pooling layer (with 32, 16, 8, and 1 bins). Spatial pyramid pooling [43] maintains the spatial information in the local spatial bins. The number of bins and their size are fixed, thus generating a fixed-length representation regardless of input size. SAE-1 is a fully connected layer-based model, comprising an encoder and a decoder network composed of dense layers, with the decoder being the mirrored version of the encoder. The encoder has three hidden layers with 22, 15, and 10 nodes. SAE-2 is a convolutional-based model, comprising an encoder, a decoder, and one dense layer (with 100 units) for classification. The decoder is a mirrored version of the encoder with deconvolutions. The encoder has three hidden layers with 44, 30, and 20 filters. The ReLU function is used as the activation function. CNN-1 is configured as follows: 11 convolutional layers with 64, 64, 64, 64, 64, 128, 128, 128, 128, 256, and 256 filters coupled with batch normalization; ReLU activation and spatial dropout (rate: 10%) layers; one dense layer (with 100 units); and one dropout layer (rate: 10%). For CNN-2, a Luong-style self-attention mechanism is used. For the LSTM architecture, two layers with 128 LSTM cells each are used. In addition, one dropout layer (the rate being 10%) and one dense layer (with 100 units) are used for classification. For LSTM-2, Luong-style attention is applied. In terms of the activation function, the underlying nonlinearity in the data is enforced through the sigmoid function for the LSTM-based models.

The neural network configurations are summarized in Table 1. Model training and evaluation are performed through stratified fivefold cross-validation partitioning in order to avoid biased results. In terms of implementation, the partitioning is carried out using Scikit-learn.

• Weight initialization: The initial weights are defined using the Glorot uniform distribution. No layer-weight constraints are set on the weight matrices for the learning process.

• Weight optimization: The Adam optimizer is used for the training, with the learning rate fixed at 0.0005 for all models. After numerous optimization tests, the batch sizes are, respectively, fixed at 32 for the ResNet-based, CNN-based, and autoencoder-based models and at 16 for the LSTM-based models. For all of the models, the number of epochs is fixed at 300 with early stopping, and the cost function is the binary cross-entropy.

Evaluation Metrics

The evaluation metrics are the F-scores, for assessing model efficiency, and the computational complexity. The F-score is a function of the Precision and the Recall. In this specific framework, the Precision (see (5)) is the ratio of actual faults among the total detected faults, and the Recall, as detailed in (6), is the ratio of correctly detected faults among all actual faults.
Given (5) and (6), the F-score is expressed in (7). F_weighted, as expressed in (8), is used as the main score; it is a weighted sum of F_0 and F_1 that takes into account the imbalanced dataset. It follows:

Precision = TP / (TP + FP)   (5)
Recall = TP / (TP + FN)   (6)
F = 2 · Precision · Recall / (Precision + Recall)   (7)
F_weighted = (A_N · F_0 + A_P · F_1) / (A_N + A_P)   (8)

where TP, FP, FN, A_P, and A_N represent true positive, false positive, false negative, actual positive, and actual negative, respectively, and F_0 and F_1 are the per-class F-scores for the normal and faulty classes. The efficiency of a given model is an increasing function of the score, i.e., the model is considered very precise when the score is high (close to 1, which is the upper bound).

Remark: It is worth highlighting that accuracy, which is the most intuitive way to evaluate classification models, is not a convenient efficiency measure for an imbalanced dataset [44]. This is why the F_weighted score is proposed as the evaluation metric.

Results and Discussion

This section presents and discusses the results obtained on both a simulated and a real dataset from semiconductor manufacturing.

Gradient Analysis

In Figure 6, the training and validation errors during the training of shallow and deeper networks for both plain and ResNet architectures are compared. In Figure 6a, the gradual decrease in the training error from the 7-layer to the 11-layer plain network and the sudden degradation of the 13-layer plain network, which had higher training errors throughout the training process, can be observed. The same cannot be said for the validation error of the plain networks, as shown in Figure 6b. It is hypothesized that the deep plain networks may have encountered optimization difficulties and thus exponentially low convergence rates, which affected how well the training error was reduced. On the other hand, it can be seen that despite the increase in depth, the residual networks exhibited equal training errors from 7- to 13-layer networks, indicating high convergence rates. This implies that the degradation problem was adequately handled. There were gains in the detection capability from the increased depth until the 13-layer network, where degradation can be observed in both plain and ResNet architectures. This suggests that even for residual networks, there is a maximal depth beyond which performance starts to degrade. Table 2 shows the detection performance achieved for layer depths varying from 7 to 13 layers for plain and ResNet architectures. It can be seen that when comparing networks with equal depth, residual networks always demonstrated better capabilities than their plain counterparts. Subsequently, the best plain and ResNet architecture (11 layers) was retained for comparison with other deep learning-based fault detection methods. For the dataset used, training was carried out on 7 to 13 layers because networks with fewer than 7 layers are not relevant in a ResNet architecture, and all networks with more than 13 layers suffer from the vanishing gradient problem.
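The class-support-weighted F-score in (8) can be computed as sketched below; the use of scikit-learn is consistent with the paper's tooling, but the snippet and its toy labels are an illustration, not the authors' evaluation code.

```python
# Illustration of the weighted F-score: combine F_0 and F_1 weighted by the
# numbers of actual negatives and positives. Labels: 0 = normal, 1 = faulty.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])     # imbalanced toy labels
y_pred = np.array([0, 0, 0, 1, 0, 0, 1, 1, 0, 0])

f0 = f1_score(y_true, y_pred, pos_label=0)             # F-score on the normal class
f1 = f1_score(y_true, y_pred, pos_label=1)             # F-score on the faulty class
f_weighted = f1_score(y_true, y_pred, average="weighted")

# Same quantity computed explicitly from (8): weights are the class supports
a_n, a_p = np.sum(y_true == 0), np.sum(y_true == 1)
print(f_weighted, (a_n * f0 + a_p * f1) / (a_n + a_p))  # identical values
```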
For both datasets in Tables 3 and 4, the proposed ResNet-based approaches outperformed the other baseline methods by a significant margin and exhibited the best performance with the highest F weighted scores (0.9389 and 0.9708).The LSTM-based methods achieved the worst scores and significantly underperformed compared to the other methods.Both models achieved null values for F 1 and F weighted due to the inability of the models to converge on both datasets.The standard CNN-based method (CNN-1) exhibited the second-best performance on the simulated dataset, closely followed by the convolutional autoencoder (SAE-2), and both methods outperformed the fully connected-based stacked autoencoder (SAE-1) model and the attention CNN (CNN-2), which achieved the worst results among the converging methods.Nonetheless, on the real dataset, the convolutional autoencoder exhibited the second-best performance with only a slightly better F weighted score (<1%) compared to the standard CNN (CNN-1).On the real dataset, the standard CNN (CNN-1) and the convolutional autoencoder (SAE-2) performed better than the attention CNN (CNN-2) and the fully connected-based stacked autoencoder (SAE-1) by a rather large margin (>10%).In addition, the margin was very tight on the simulated dataset (<1%).For all models, the F 0 -score was always better than the F 1 -score by a significant margin, which provides insights into the detection capacities of the models.The F 0 -score measures the ability of the models to correctly identify normal samples, whereas the F 1 -score evaluates their ability to identify faults.The results demonstrate that all models were generally effective in identifying normal samples, with F 0 -scores consistently above 0.8.The proposed model achieved the highest scores of 0.9600 and 0.9825 on the simulated and real datasets, respectively.Even though it is important to correctly identify normal samples, fault identification is the critical factor, and the F 1 -score is a more informative performance metric.The LSTM-based models failed to encode lengthy sequences over time, resulting in null scores for F 1 on both datasets. With our task being fault identification, we focus on the F 1 -score.For all models, the F 1 -score was lower than the F 0 -score, with the proposed model achieving the best F 1 -scores of 0.8865 and 0.9189 on the simulated and real datasets, respectively.This suggests that the models struggled more to identify faults than normal samples and the degree of difficulty varied among the models, as indicated by the difference between the two scores.This difference was significant (>10%) for all models, except for the proposed ResNet models, highlighting their superiority and establishing them as a reliable FDC method.All models struggled to identify faults because of the unbalanced dataset used for training the models, with fewer faulty samples than normal ones. 
Table 5 presents the results of the best ResNet method (ResNet-1) and the best CNNbased fault detection method (CNN-1) for each fault type on the simulated dataset.The results here focus on the F 1 -score only to determine how well the methods identified the different fault types as faults.In Table 3, it can be seen that the overall F 1 -scores were 0.8315 for CNN-1 and 0.8865 for ResNet-1, which do not provide information on how these methods performed in detecting each fault type.The results in Table 5 show that the best proposed ResNet-1 performed better than the best CNN-based method for each fault type, with some remarkable performance gaps.For faults 3 and 4, it can be seen that CNN-1 performed poorly compared to the other fault types.Faults 3 and 4, which are illustrated in Figure 5, refer to noise and sinusoidal disturbances, respectively.For fault 3, when looking at the noise distribution (Gaussian normal distribution centered on 0), a large number of samples were close to zero most of the time.Moreover, the amplitudes of the noise faults were quite small (see Section 4.1), which made them more difficult to detect because they appeared as recurrent industrial disturbances rather than faults.With the data being raw time series, differentiating fault types 3 and 4 from simple industrial noise disturbances was more difficult for the two models.This was even more true for fault type 3, where even the proposed ResNet-1 struggled, although it exhibited better performances compared to the plain CNN-1.Regarding the nature of the noise disturbances, despite being less pronounced compared to those of the other four fault types, the detection results obtained using the proposed ResNet-1 were quite good.One of the reasons for implementing deeper networks, as in the proposed ResNet-1, is to craft a method capable of effectively detecting all fault types. Conclusions This paper proposes a ResNet-based fault detection method for semiconductor process monitoring using multivariate sensor signals.The proposed model redesigns the first convolutional layer to consider the structural characteristics of raw multidimensional sensor data and extract meaningful correlation and temporal information.The use of residual blocks with shortcut connections improves training and mitigates the degradation problem of deep neural networks, resulting in better fault detection performance.The proposed model is evaluated using both simulated and real data from the semiconductor industry, outperforming state-of-the-art and baseline models for fault detection.All five fault types addressed in this study are successfully detected, with the proposed method achieving the best detection performance for each.This study also demonstrates that residual networks outperform their plain counterparts with equal layer depths.The small size of the real dataset used for training and testing does not significantly impact the generalizability of the conclusions. Future work will focus on adapting the model to work with variable-length sensor data and providing insights for fault diagnosis.Research will also be conducted to enable the model to detect faults in multiple recipes with a single model and classify detected faults based on their nature, proposing relevant elements for equipment root-cause diagnosis. Figure 1 . Figure 1.Multidimensional sensor data representation.A labeled 3D data matrix for N wafers, M SVIDs, and with varying process times n i per wafer. Figure 2 . Figure 2. 
Training errors (left) and test errors (right) of plain CNNs with 2, 5, 7, 9, 11, and 13 layers on a multivariate time-series dataset. The training and test errors gradually decreased as the networks deepened but started to increase from the 11-layer network, confirming the vanishing gradient problem on deeper networks.
Figure 3. The two residual block structures (with shortcut connection) behind the ResNet architecture proposed in [24].
Figure 4. ResNet-based feature extraction. Res-blocks x-a are the identity blocks shown in Figure 3a, and Res-blocks x-b are the convolution blocks shown in Figure 3b.
Figure 5. Description of the 5 common fault types (in red). These anomalies transpire across different variables and can be either atomic or aggregate in nature. Atomic anomalies involve abnormal values for a single variable, whereas aggregate anomalies arise from groups of variables deviating collectively.
Figure 6. Training and validation errors on the simulated dataset presented in Section 4. In these plots, the residual networks have no extra parameters compared to their plain counterparts.
Table 1. Summary of the neural network configurations of the various methods used for fault detection on the simulated dataset.
Table 2. Fault detection performance of residual vs. plain networks with the same layer depth on the simulated dataset.
Table 3. Fault detection performance on the simulated dataset.
Table 4. Fault detection performance on the real dataset.
Table 5. F1-scores for fault detection performance for each fault on the simulated dataset.
2023-11-15T16:47:31.996Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "019527951a27e0ad2b64d8b22d8b7e7c56881ab1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/22/9099/pdf?version=1699609441", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7fca8e8e2b46b11446a55863fc40762db10c3431", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
220665319
pes2o/s2orc
v3-fos-license
Risk of depression, suicidal ideation, suicide and psychosis with hydroxychloroquine treatment for rheumatoid arthritis: a multi-national network cohort study Objectives Concern has been raised in the rheumatological community regarding recent regulatory warnings that hydroxychloroquine used in the COVID-19 pandemic could cause acute psychiatric events. We aimed to study whether there is risk of incident depression, suicidal ideation, or psychosis associated with hydroxychloroquine as used for rheumatoid arthritis (RA). Methods New user cohort study using claims and electronic medical records from 10 sources and 3 countries (Germany, UK and US). RA patients aged 18+ and initiating hydroxychloroquine were compared to those initiating sulfasalazine (active comparator) and followed up in the short (30-day) and long term (on treatment). Study outcomes included depression, suicide/suicidal ideation, and hospitalization for psychosis. Propensity score stratification and calibration using negative control outcomes were used to address confounding. Cox models were fitted to estimate database-specific calibrated hazard ratios (HR), with estimates pooled where I 2 <40%. Results 918,144 and 290,383 users of hydroxychloroquine and sulfasalazine, respectively, were included. No consistent risk of psychiatric events was observed with short-term hydroxychloroquine (compared to sulfasalazine) use, with meta-analytic HRs of 0.96 [0.79-1.16] for depression, 0.94 [0.49-1.77] for suicide/suicidal ideation, and 1.03 [0.66-1.60] for psychosis. No consistent long-term risk was seen, with meta-analytic HRs 0.94 [0.71-1.26] for depression, 0.77 [0.56-1.07] for suicide/suicidal ideation, and 0.99 [0.72-1.35] for psychosis. Conclusions Hydroxychloroquine as used to treat RA WHAT IS ALREADY KNOWN ON THIS TOPIC • Recent regulatory warnings have raised concerns of potential psychiatric side effects of hydroxychloroquine at the doses used to treat COVID-19, generating concern in the rheumatological community • Serious psychiatric adverse events such as suicide, acute psychosis, and depressive episodes have been identified by the US Food and Drug Administration (FDA) adverse events reporting system and at case report level WHAT THIS STUDY ADDS • This is the largest study on the neuro-psychiatric safety of hydroxychloroquine to date, including >900,000 users treated for their RA in country-level or private health care systems in Germany, the UK, and the US • We find no association between the use of hydroxychloroquine and the risk of depression, suicide/suicidal ideation, or severe psychosis compared to sulfasalazine HOW MIGHT THIS IMPACT ON CLINICAL PRACTICE • Our data shows no association between hydroxychloroquine treatment for RA and risk of depression, suicide or psychosis compared to sulfasalazine.These findings do not support stopping or switching hydroxychloroquine treatment as used for RA due to recent concerns based on COVID-19 treated patients. . 
INTRODUCTION Hydroxychloroquine (HCQ) has received much scientific and public attention during the COVID-19 pandemic as a leading therapeutic and prophylactic target.[1,2] Commonly used for autoimmune disorders (e.g., systemic lupus erythematosus) and inflammatory arthritis, HCQ was released for emergency use for COVID-19 due to its postulated antiviral efficacy in cellular studies.[3][4][5][6][7][8][9] HCQ is currently being used in over 217 registered ongoing clinical trials for the treatment of SARS-Cov-2 as of 12 th June 2020.[10,11] Results to date have been conflicting, with emerging data suggesting a lack of clinical efficacy against COVID-19 [12][13][14][15][16][17][18].Potential side effects described in the use of HCQ include neuropsychiatric side effects such as psychosis, depression, and suicidal behaviour.[19][20][21] Regulatory authorities have received reports of new onset psychiatric symptoms associated with the increased use of high dose HCQ during the pandemic.[22] New reports of serious side effects associated with HCQ used in COVID-19 are concerning to the rheumatology community, leading to confusion and anxiety for patients who are taking HCQ for autoimmune conditions.We performed a review of the literature to determine what was already known about the potential risks of psychosis, depression, and suicide associated with HCQ use from literature database inception until 14/05/2020 (Supplementary Appendix Section 1).Interrogation of adverse event registers have identified potential associations between HCQ and psychiatric disorders. [11] Case reports and case series describing new onset psychosis, bipolar disorder, seizures and depression associated with HCQ and chloroquine use for rheumatological disorders and malaria prophylaxis can be found as early as 1964.[19,[23][24][25][26][27][28][29][30][31] No clinical trial or observational study was found that had investigated the incidence of new onset neuropsychiatric symptoms associated with HCQ use. Considering the wide-scale use of HCQ in rheumatology, we therefore aimed to determine if there is an association between incident HCQ use for rheumatoid arthritis (RA) (the most common indication for the drug) and the onset of acute psychiatric events, including depression, suicide, and psychosis compared to sulfasalazine. . Study design A new user cohort, active-comparator design was used, as recommended by methodological guidelines for observational drug safety research.[32] The study protocol is registered in the EU PAS Register as EUPAS34497.[33] Sulfasalazine (SSZ) was used as the active comparator for HCQ, IQVIA OpenClaims (OpenClaims).In addition, data were obtained and analysed from electronic primary care data from the Netherlands (IPCI database) and Spain (SIDIAP), and from Japanese claims (JMDC) but none of these analyses were deemed appropriate due to low/no event counts in at least one of the cohorts.A more detailed description of all these data sources is available in Appendix Section 2. 
Follow-up Participants were followed up from the date of initiation (first dispensing or prescription) of HCQ or sulfasalazine (SSZ) (index date) as described in detail in Appendix Section 3.1.Sulfasalazine was proposed as an active comparator as it shares a similar indication as a second-line conventional synthetic DMARD for RA.Two different follow-up periods were pre-specified to look at short-and long-term effects, respectively.First, a fixed 30-day time window from index date was used to study short-term effects, where follow-up included from day 1 post-index until the earliest of: loss to follow-up/death, outcome of interest, or 30 days from therapy initiation, regardless of compliance/persistence with the study drug/s.Second, in a long-term (on treatment) analysis, follow-up went from day 1 post-index until the earliest of: therapy discontinuation (with a 14-day additional washout), outcome of interest, or loss to follow-up/death.Continued treatment episodes were constructed based on dispensing/prescription records, with a 90-day refill gap allowed to account for stockpiling. Participants All subjects registered in any of the contributing data sources for at least 365 days prior to index date, aged 18 years or older, with a history of RA (as defined by a recorded diagnosis any time before or on the same day as therapy initiation), and starting either HCQ or SSZ during the study period, were included.Potential participant counts and age-, sex-and calendar year-specific incidence per database were produced for transparency and reviewed to check for data inconsistencies and face validity, and are available for inspection at https://data.ohdsi.org/Covid19CohortEvaluationExposures/,labelled as "New users of hydroxychloroquine with previous rheumatoid arthritis" and "New users of sulfasalazine with previous rheumatoid arthritis". Outcomes and confounders Code lists for the identification of the study population, for the study exposures and for the relevant outcomes were created by clinicians with experience in the management of RA and by clinical epidemiologists using ATLAS, an open science analytics platform that provides a unified interface for researchers to work within.[34] Exposures and outcomes were reviewed by experts in OMOP vocabulary and in the use of the proposed data sources.A total of three outcomes were analysed: depression, suicide or suicidal ideation, and hospital admission for psychosis.Detailed outcome definitions with links to code lists are fully detailed in Appendix Section 3.2.[35] [36] Cohort counts for each of the outcomes in the entire source database, and age-sex and calendar-time specific incidence rates were explored for each of the contributing databases, and reviewed to check for data inconsistencies and face validity.These are available for inspection at https://data.ohdsi.org/Covid19CohortEvaluationSafetyOutcomes/A list of negative control outcomes was generated for which there is no biologically plausible or known causal relationship with the use of HCQ or SSZ.These outcomes were identified based on previous literature, clinical knowledge (reviewed by two clinicians), product labels, and spontaneous reports, and confirmed by manual review by two clinicians.[37] The full list of codes used to identify negative control outcomes can be found in Appendix Section 4. 
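To make the episode construction described under Follow-up concrete, the sketch below collapses a patient's dispensing dates into continuous treatment episodes, allowing up to a 90-day gap before an episode is closed. The 30-day default days' supply, the input format, and the function name are illustrative assumptions rather than the study's actual OHDSI implementation.

```python
from datetime import date, timedelta

def build_episodes(dispensings, days_supply=30, refill_gap=90):
    """Collapse dispensing dates into continuous treatment episodes.
    dispensings: sorted list of fill dates; days_supply: assumed days covered
    per fill (illustrative); refill_gap: maximum allowed gap in days."""
    episodes = []
    if not dispensings:
        return episodes
    start = dispensings[0]
    covered_until = start + timedelta(days=days_supply)
    for fill in dispensings[1:]:
        if (fill - covered_until).days > refill_gap:
            episodes.append((start, covered_until))          # close this episode
            start = fill
            covered_until = fill + timedelta(days=days_supply)
        else:
            covered_until = max(covered_until, fill + timedelta(days=days_supply))
    episodes.append((start, covered_until))
    return episodes

fills = [date(2015, 1, 1), date(2015, 2, 5), date(2015, 8, 1)]
for start, end in build_episodes(fills):
    # In the 'on treatment' analysis, follow-up ends 14 days after the episode closes.
    print(start, "->", end, "(washout ends", end + timedelta(days=14), ")")
```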
Statistical methods All analytical source code is available for inspection and reproducibility at https://github.com/ohdsistudies/Covid19EstimationHydroxychloroquine2.All study diagnostics and the steps described below are available for review at https://data.ohdsi.org/Covid19EstimationHydroxychloroquine2/.The following steps were followed for each analysis: 1.Propensity score estimation Propensity score (PS) stratification was used to minimise confounding.All baseline characteristics recorded in the participants' records/health claims were constructed for inclusion as potential confounders (including demographics, past medical history, procedures and medication prescription within 30 and within 365 days prior to drug initiation) [35].Covariate construction details are available in Appendix Section 5. Lasso regression models were fitted to estimate propensity scores (PS) as the probability of hydroxychloroquine versus sulfasalazine use based on patient demographics and medical history including previous conditions, procedures, healthcare resource use, and treatments.The full resulting PS models are available for inspection by clicking on 'Propensity model' after selecting a database in the results app. 2.Study diagnostics Study diagnostics were explored for each database-specific analysis before progressing to outcome modelling, and included checks for power, observed confounding, and potential residual (unobserved) confounding.Only database-outcome analyses that passed all diagnostics below were then conducted and reported, with all others marked as 'NA' in the accompanying results app. Positivity and power were assessed by looking at the number of participants in each treatment arm, and the number with the outcome (see the 'Power' tab after clicking on a database in the results app).Small cell counts less than five (and resulting estimates) are reported as "<5" to minimise risk of secondary disclosure of data with patient identification.PS overlap was also plotted to visualize positivity issues and can be seen by clicking on 'Propensity Scores'. Observed confounding was explored by plotting standardized differences before (X axis) vs after (Y) PS stratification, with standardized differences > 0.1 in the Y axis indicating the presence of unresolved confounding [36]: see by clicking on 'Covariate balance' in the results app.Finally, negative control outcome analyses were assessed to identify systematic error due to residual (unobserved) confounding.The results for these are available in the 'Systematic error' tab of the .results app.The resulting information was used to calibrate the outcome models using empirical calibration [37,38]. 
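As a rough illustration of the propensity-score workflow described above, the sketch below fits an L1-penalised (lasso) logistic regression, stratifies on the estimated scores, and checks covariate balance with standardized mean differences against the 0.1 threshold. Scikit-learn is used here as a stand-in for the OHDSI tooling referenced above, and the toy data, the number of strata, and the regularisation strength are assumptions made only for the example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: 20 baseline covariates, treat = 1 for HCQ, 0 for SSZ (made-up)
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(5000, 20)), columns=[f"cov{i}" for i in range(20)])
p_treat = 1.0 / (1.0 + np.exp(-0.4 * X["cov0"].to_numpy()))
treat = rng.binomial(1, p_treat)

# 1. Lasso (L1-penalised) logistic regression propensity model
ps = (LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
      .fit(X, treat).predict_proba(X)[:, 1])

# 2. Propensity-score stratification (quintiles used here for illustration)
strata = pd.qcut(ps, q=5, labels=False, duplicates="drop")

# 3. Balance check: stratum-weighted standardized mean difference per covariate
def smd(x, t):
    pooled_sd = np.sqrt((x[t == 1].var(ddof=1) + x[t == 0].var(ddof=1)) / 2)
    return 0.0 if pooled_sd == 0 else (x[t == 1].mean() - x[t == 0].mean()) / pooled_sd

stratum_weights = pd.Series(strata).value_counts(normalize=True).sort_index()
for cov in ["cov0", "cov1"]:
    pooled = sum(w * abs(smd(X.loc[strata == s, cov], treat[strata == s]))
                 for s, w in stratum_weights.items())
    status = "unresolved confounding" if pooled > 0.1 else "balanced"
    print(f"{cov}: weighted |SMD| = {pooled:.3f} ({status})")
```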
3.Outcome modelling Cox proportional hazards models conditioned on the PS strata were fitted to estimate Hazard Ratios (HR) for each psychological outcome in new users of HCQ (vs SSZ).Empirical calibration based on the previously described negative control outcomes was used to minimise any potential residual confounding with calibrated HRs and 95% confidence intervals (CI) estimated [38,39].All analyses were conducted for each database separately, with estimates combined in random-effects metaanalysis methods where I 2 ≤40%.[40] The standard errors of the database-specific estimates were adjusted to incorporate estimate variation across databases, where the across-database variance was estimated by comparing each database-specific result to that of an inverse-variance, fixedeffects meta-analysis.No meta-analysis was conducted where I 2 for a given drug-outcome pair was >40%. Data Sharing Open Science is a guiding principle within OHDSI.As such, we provide unfettered access to all opensource analysis tools employed in this study via https://github.com/OHDSI/,as well as all data and results artefacts that do not include patient-level health information via http://data.ohdsi.org/Covid19EstimationHydroxychloroquine2.Data partners contributing to this study remain custodians of their individual patient-level health information and hold either IRB exemption or approval for participation. . RESULTS A total of 918,144 HCQ and 290,383 SSZ users were identified.Participant counts in each data source are provided in Appendix Section 6.Before PS stratification, users of HCQ were (compared to SSZ users) more likely female (for example, 82.0% vs 74.3% in CCAE database) and less likely to have certain comorbidities such as Crohn's disease (0.6% vs 1.8% in CCAE) or psoriasis (3.0% vs 8.9% in CCAE).Prevalence of systemic lupus erythematous was higher in HCQ users as expected (1.5% vs 0.5% in CCAE), whilst use of systemic glucocorticoids was similar (46.1% vs 47.2% in the previous month in CCAE).The prevalence of depressive disorder was similar in both groups (13.4% vs 13.5% in CCAE) and so was the history of use of antidepressants in the previous year (36.4% vs 36.4% in CCAE).Average baseline dose of HCQ was homogeneous, with >97% in CCAE using an average dose of 420mg daily, and only <3% taking an estimate dose >500 mg.All the observed differences between groups were minimised to an acceptable degree (<0.Database-specific and overall counts and rates of the three study outcomes in the short-(30-day) and long-term ('on treatment') analyses are reported in detail in Table 2. Depression was the most common of the three study outcomes, with rates in the 'on treatment' analysis ranging from 1.99/1,000 person-years amongst HCQ users in CPRD to 17.74/1,000 amongst HCQ users in AmbEMR.Suicide/suicidal ideation was the least common outcome, with rates ranging from 0.32/1,000 (HCQ users in AmbEMR and SSZ users in IMRD) to 14.08/1,000 in SSZ users in MDCD.Database-specific counts and incidence rates (IR) for all three outcomes stratified by drug use are detailed in full in Appendix Section 9. . 
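The pooling rule described in the outcome-modelling step above (database-specific estimates combined only when I² ≤ 40%) can be illustrated with a small inverse-variance meta-analysis sketch. The hazard ratios below are made-up numbers rather than study results, and the DerSimonian-Laird estimator is shown as one common random-effects choice rather than necessarily the exact implementation used.

```python
import numpy as np

def pool_hazard_ratios(hr, ci_low, ci_high, i2_threshold=0.40):
    """Inverse-variance pooling of database-specific hazard ratios with a
    DerSimonian-Laird random-effects step, reported only when I^2 <= threshold."""
    y = np.log(hr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2
    y_fixed = np.sum(w * y) / np.sum(w)                  # fixed-effect estimate
    q = np.sum(w * (y - y_fixed) ** 2)                   # Cochran's Q
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    if i2 > i2_threshold:
        return None, i2                                  # too heterogeneous to pool
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                          # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return (np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)), i2

# Made-up database-specific HRs and 95% CIs (not study results)
hr_db = np.array([0.92, 1.05, 0.88, 1.10])
lo_db = np.array([0.70, 0.80, 0.60, 0.75])
hi_db = np.array([1.21, 1.38, 1.29, 1.61])
pooled, i2 = pool_hazard_ratios(hr_db, lo_db, hi_db)
if pooled is None:
    print(f"I^2 = {i2:.2f}: heterogeneity above 40%, no pooled estimate reported")
else:
    print(f"I^2 = {i2:.2f}, pooled HR = {pooled[0]:.2f} [{pooled[1]:.2f}-{pooled[2]:.2f}]")
```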
Principal findings This large observational study shows that in routine healthcare treatment of RA, there is no association with the use of HCQ with acute psychosis, depression, or suicide as compared to SSZ.These results are seen both in the short-term and long-term risk analyses.Whilst an excess of psychiatric events have been reported during the COVID pandemic in those prescribed HCQ, this risk does not appear to be associated with HCQ prescribed in RA compared to those prescribed SSZ.This study uses data from three countries, with a variety of healthcare systems and modes of routine healthcare data included, enabling the study to produce more generalisable results. Comparison with other studies The bulk of the evidence prior to this study consisted of isolated case reports and case series, making it difficult to draw demographic comparisons with previous work.Sato et al. reported that neuropsychiatric adverse events found in the FDA adverse event reporting system associated with chloroquine use were predominantly in females in the sixth decade of life.[20]Increase in reporting of acute psychiatric disease during the COVID-19 pandemic may be multifactorial, with an increase in external stressors such as social isolation, financial uncertainty, and increased misuse of drugs and alcohol.[42][43][44] Considering that we find no association for HCQ use compared to SSZ with acute psychiatric outcomes in the RA population, evidence points towards external stressors being more likely involved in the aetiology of psychiatric events seen during this pandemic. Strengths and weaknesses of the study This study is based on new users of HCQ for RA and therefore, the results of this study are most directly relevant to the risk of neuropsychiatric side effects seen in the rheumatological population.The regulatory warnings of possibly increased acute psychiatric events associated with HCQ warrant investigation in all available datasets to prevent harm in both rheumatological patients and those taking for emergency use, especially as very few clinical trials include acute psychiatric outcomes.Whilst the general population presenting with COVID-19 may differ from those with RA, within the context of emergency authorisation or off label use of HCQ, all available evidence must be taken into account when considering the risks associated. 
Several considerations must be taken into account when interpreting these results.Firstly, the doses used to treat RA are lower than those suggested in current clinical trials for the treatment of SARS-CoV2, and therefore adverse events seen in the treatment and prophylaxis of COVID-19 may be greater if dose dependent, as is the case with cardiac adverse effects.[45,46] Secondly, this study could be affected by outcome misclassification.Only acute psychiatric events presenting to medical services will be captured, and this is especially important for the outcome of suicide.Suicide may not be fully recorded if patients do not reach medical care or cause-of-death information is not linked to the datasource, and therefore the true incidence of suicide may be under-recorded.[47] Similarly, this study only focused on acute psychosis and depression severe enough to be identified in medical consultation in patients with no history of either condition.Whilst we generated phenotypes that underwent full cohort diagnostics, and phenotypes were constructed using a multidisciplinary team of clinicians and bioinformaticians to ensure face validity, it should be noted that no formal validation was undertaken.We took all reasonable steps to ensure the validity of the phenotypes, whilst considering the risk-benefit tradeoff of what could be undertaken within the time frame used to respond to the serious questions raised by regulatory bodies following the HCQ use in COVID-19. This study can highlight the association for patients without a prior history of psychosis or depression, but cannot inform of the risk of acute deterioration after beginning HCQ treatment for those already known to psychiatric services. Thirdly, depression and hallucinations are listed as potential undesirable effects of sulfasalazine treatment, which may underestimate the true risk, if any, from HCQ. [48] However, the frequency of depression (described as changes in affect in the summary of product characteristics for HCQ) is reported to be common (≥1/100 to < 1/10) whilst for sulfasalazine depression is listed as being uncommon (≥1/1000 to < 1/100).Therefore, it is potentially reassuring for patients that we observed no difference compared to sulfasalazine for which there is a paucity of published evidence suggesting causailty.[49] Propensity score stratification and matching, as well as a comprehensive examination of potential sources of systematic error, were undertaken prior to blinding of results to identify and reduce the risk of confounding.Baseline characteristics after PS stratification were adequately balanced; of note, the incidence of systemic lupus erythematosus (SLE) was balanced between treatment groups.Identifying the balance of SLE between treatment groups was undertaken prior to unblinding due to the potential neuropsychiatric sequelae of the condition aside from the potential side effects of pharmacological treatment.This study could also be limited by the fact that patients may overlap and exist in more than one dataset within the US.The meta-analysis assumes populations to be independent, and therefore the obtained estimates may slightly underestimate variance. 
Future research For rheumatological disorders, future work could expand into investigating the occurrence of acute psychiatric events in patients in SLE.This would enable greater understanding of whether neuropsychiatric conditions are related to disease activity or due to pharmacological treatment.Similarly, in the emergency use of HCQ in COVID-19, there is already concern about the potential heightened risk of acute psychiatric disorder due to elevated number of psychosocial stressors present during a pandemic and high dose use.[50] Future work should consider including acute psychiatric outcomes in order to differentiate between psychiatric conditions generated by the impact of a global pandemic compared to iatrogenic events due to pharmaceutical therapies used. Meaning of the Study Exponential growth in research into the best treatment of SARS-CoV2 infection is generating rapidly evolving evidence for the relative efficacy of pharmaceutical agents.For the rheumatological community, media attention previously surrounded HCQ as a strong forerunner of COVID-19 prophylaxis and treatment.The results of the RECOVERY trial identifying dexamethasone reduced mortality in intensive care patients has now overtaken HCQ as the leading rheumatological drug for the pandemic, but the concerns regarding HCQ safety remain for those who take the drug for conventional indications.[17,51] Cardiovascular safety, and reports that it might lack efficacy for both treatment and prophylaxis, have halted major HCQ clinical trials.[45,[52][53][54][55] The identification of acute psychiatric events associated with HCQ use has raised the need to clarify the risk within general rheumatological use.Our study identifies no increased risk in RA patients when compared with sulfasalazine, and provides evidence to users and clinicians alike that the reports presented during the pandemic are likely to be related to further causes aside from HCQ. FOOTNOTES conclusions contained in this study are those of the author/s alone.The protocol for this study ( 20_059R) was approved by the Independent Scientific Advisory Committee (ISAC).DA Germany This is a retrospective database study on de-identified data and is deemed not human subject research.Approval is provided for OHDSI community studies. IMRD The present study is filed and under review for Scientific Review Committee for institutional adjudication.Due to the public health imperative of information related to these data, approval is provided for this publication. 
IPCI The present study was approved by the Scientific and Ethical Advisory Board of the IPCI project (project number: 4/2020).JMDC New England Institutional Review Board (IRB) and was determined to be exempt from broad IRB approval, as this research project did not involve human subject research.MDCD New England Institutional Review Board (IRB) and was determined to be exempt from broad IRB approval, as this research project did not involve human subject research.MDCD New England Institutional Review Board (IRB) and was determined to be exempt from broad IRB approval, as this research project did not involve human subject research.Open Claims This is a retrospective database study on de-identified data and is deemed not human subject research.Approval is provided for OHDSI community studies.Clinformatics New England Institutional Review Board (IRB) and was determined to be exempt from broad IRB approval, as this research project did not involve human subject research.Optum EHR New England Institutional Review Board (IRB) and was determined to be exempt from broad IRB approval, as this research project did not involve human subject research. FIGURE LEGENDS Figure 1 . FIGURE LEGENDSFigure1.Forest plot of the association between short-(top) and long-term (bottom) use of HCQ (vs SSZ) and risk of depression, by database and in meta-analysis. Figure 2 . Figure 2. Forest plot of the association between short-(top) and long-term (bottom) use HCQ (vs SSZ) and risk of suicidal ideation or suicide, by database and in meta-analysis. Figure 1 . Figure 1.Forest plot of the association between short-(top) and long-term (bottom) use of Hydroxychloroquine versus Sulfasalazine and risk of depression, by database and in meta-analysis. Figure 2 . Figure 2. Forest plot of the association between short-(top) and long-term (bottom) use of Hydroxychloroquine versus Sulfasalazine and risk of suicidal ideation or suicide, by database and in meta-analysis. Electronic health records (EHR) and administrative claims data from the UK and US were used, previously mapped to the Observational Medical Outcomes Partnership (OMOP) common data model (CDM).The study period covered from September 2000 until the latest data available at the time of extraction in each database.Data from 10 data sources were analysed in a federated manner using a distributed network strategy in collaboration with the Observational Health Data Science and Informatics (OHDSI) and European Health Data and Evidence Network (EHDEN) communities.The data used included primary care electronic medical records from the UK (Clinical Practice Research Datalink, CPRD; and IQVIA Medical Research Data, IMRD); specialist ambulatory care electronic health records from Germany (IQVIA Database Analyzer Germany; DAGermany); electronic health records in a sample of US inpatient and outpatient facilities the Optum® de-identified Electronic Health Record dataset (Optum EHR, and IQVIA US Ambulatory EMR;AmbEMR); and US claims data from the IBM MarketScan® Commercial Claims Database (CCAE), Optum® de-identified Clinformatics® Data Mart Database-Date of Death (Clinformatics), IBM MarketScan® Medicare Supplemental Database (MDCR), IBM MarketScan® Multi-State Medicaid Database (MDCD), and Table 1 . 
1 standardised mean differences) after propensity score stratification: in CCAE, the most imbalanced variable was use of glucocorticoids on index date, with prevalence 36.1% vs 35.8%.Detailed baseline characteristics for the two pairs of treatment groups after PS stratification in CCAE are shown in Table 1 as an example, with similar tables and a more extensive list of features provided in Appendix Section 7. Study diagnostics including plots of propensity score distribution, covariate balance, and negative control estimate distributions are provided in Appendix Section 8. Baseline characteristics of patients with RA who are new users of hydroxychloroquine (HCQ) vs sulfasalazine (SSZ), before and after PS stratification, in the CCAE database Table 2 . Patient counts, event counts and incidence rates (IR) (/1,000 person years) of key events according to drug use 9 datasets passed cohort diagnostics and contained sufficiently robust data for inclusion into the short term analyses for depression; 6 passed for suicide and 2 passed for psychosis.A small imbalance with the incidence of a past medical history of SLE was seen in MDCD and with cutaneous lupus in DAGermany.As a result, we excluded both from the psychosis outcome but not for depression as we did not consider this was a confounder.Short-term (30-day) analyses showed no consistent association between HCQ use and the risk of depression, with database-specific HRs ranging from 0.Note only databases passing diagnostics are included within the plot and meta-analysis.Similarly, no association was seen between the use of HCQ and the risk of suicidal ideation or suicide.In the short-term, HRs ranged from 0.27 [0.06-1.29] in MDCD to 10.46 [0.51-216.29] in CPRD, with metaanalytic HR of 0.94 [0.49-1.77](Figure 2, top).Long-term effects were similar, with HRs ranging between 0.55 [0.20-1.49] in MDCR and 2.36 [0.21-26.87] in AmbEMR, and meta-analytic HR of 0.77 [0.56-1.07](Figure 2, bottom).Finally, no association was seen between the use of HCQ (compared to SSZ) and the risk of acute psychosis.Short-term analyses showed database-specific HRs of 0.44 [0.05-3.49] in OptumEHR and 1.01 [0.65-1.58] in OpenClaims, with a meta-analytic estimated HR of 1.03 [0.66-1.60].Only OpenClaims contributed to the 'on treatment' analysis of this event, with an estimated HR of 0.98 [0.73-1.33].
2020-07-22T01:01:55.373Z
2020-07-21T00:00:00.000
{ "year": 2020, "sha1": "4cfcdf47d166339e14087ecb01b8fcc6708e1b55", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/rheumatology/article-pdf/60/7/3222/38849970/keaa771.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "15f071c7b95862989fc679271f2eb27a9f6d172a", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
252992
pes2o/s2orc
v3-fos-license
Resonant Coherent Phonon Spectroscopy of Single-Walled Carbon Nanotubes Using femtosecond pump-probe spectroscopy with pulse shaping techniques, one can generate and detect coherent phonons in chirality-specific semiconducting single-walled carbon nanotubes. The signals are resonantly enhanced when the pump photon energy coincides with an interband exciton resonance, and analysis of such data provides a wealth of information on the chirality-dependence of light absorption, phonon generation, and phonon-induced band structure modulations. To explain our experimental results, we have developed a microscopic theory for the generation and detection of coherent phonons in single-walled carbon nanotubes using a tight-binding model for the electronic states and a valence force field model for the phonons. We find that the coherent phonon amplitudes satisfy a driven oscillator equation with the driving term depending on photoexcited carrier density. We compared our theoretical results with experimental results on mod 2 nanotubes and found that our model provides satisfactory overall trends in the relative strengths of the coherent phonon signal both within and between different mod 2 families. We also find that the coherent phonon intensities are considerably weaker in mod 1 nanotubes in comparison with mod~2 nanotubes, which is also in excellent agreement with experiment. I. INTRODUCTION Single-walled carbon nanotubes (SWNT) can be viewed as rolled up sheets of graphene, having a one-dimensional band structure with unique electronic, mechanical, and optical properties. Their electronic properties vary significantly, depending on their chirality indices (n,m), and can be either metallic or semiconducting. 1,2,3,4,5 Although there are currently world-wide efforts to achieve single-chirality samples, a standard for fabrication of such samples has yet to be established. Resonant Raman spectroscopy (RRS) or photoluminescence excitation spectroscopy (PLE) is usually used to study chirality-dependent electronic and vibrational properties. However, carbon nanotube samples typically contain ensembles of nanotubes with different chiralities, and the unknown relative abundances of different-chirality tubes in such samples often makes it challenging to extract reliable parameters on chiralitydependent properties from experimental results. Resonant Raman spectroscopy can be used to study chirality-dependent electron-phonon coupling in nanotubes and can be used to uniquely determine the chirality of individual tubes. 6,7,8,9,10 Raman spectroscopy is a sensitive probe of ground-state vibrations but is less suitable for studying excited state vibrational properties. Recently, excited state lattice vibrations in carbon nanotubes have been studied with coherent phonon (CP) spectroscopy. 11,12,13,14 In CP spectroscopy, coherent phonon oscillations are excited by pumping with an ultrafast pump pulse and are detected by measur-ing changes in the differential transmission using a delayed probe pulse. The CP intensity is then obtained by taking the temporal power spectrum of the differential transmission. The peaks in the power spectrum correspond to coherent phonon frequencies. Coherent phonon spectroscopy allows direct measurement of excited state phonon dynamics in the time domain including phase information and dephasing times. We have developed a technique that allows us to study chirality dependent properties of nanotubes in an ensemble. 15 This is described in Section II. 
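As an illustration of the detection step just described (taking the temporal power spectrum of the differential transmission), the short sketch below synthesizes a toy trace containing two damped oscillations at 7.07 and 6.69 THz, the RBM frequencies quoted later for the (11,3) and (10,5) tubes, and recovers them from the power spectrum. The damping time, amplitudes, scan length, and time step are illustrative assumptions rather than measured values.

```python
import numpy as np
from scipy.signal import find_peaks

# Toy differential-transmission trace: two damped RBM-like oscillations.
dt = 0.005                                   # probe-delay step in ps (assumed)
t = np.arange(0.0, 20.0, dt)                 # 20 ps scan (assumed)
trace = (np.exp(-t / 3.0) * np.cos(2 * np.pi * 7.07 * t)           # 7.07 THz, (11,3)-like
         + 0.6 * np.exp(-t / 3.0) * np.cos(2 * np.pi * 6.69 * t))  # 6.69 THz, (10,5)-like

# CP intensity = temporal power spectrum of the oscillatory part of dT/T
power = np.abs(np.fft.rfft(trace)) ** 2
freq_thz = np.fft.rfftfreq(len(trace), d=dt)   # frequencies in THz since t is in ps

peak_idx, _ = find_peaks(power, height=0.05 * power.max())
print("CP peaks recovered at (THz):", np.round(freq_thz[peak_idx], 2))
```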
By shaping the pump pulse, we incorporate quantum control techniques in CP spectroscopy. Using pre-designed trains of femtosecond optical pulses, we have selectively excited and probed coherent lattice vibrations of the radial breathing mode (RBM) of specific chirality single-walled carbon nanotubes. We are able to gain information on light absorption, coherent phonon generation, and coherent phonon-induced band structure modulations. We find that coherent RBM phonons can be selectively excited by using a train of pump pulses whose repetition rate is in resonance with the desired phonon frequency. By exciting only those phonon modes with a specific frequency, we can selectively study nanotubes with the same chirality in an ensemble of tubes. In order to explain our experimental results, we develop in Section III a microscopic theory for the generation and detection of coherent phonon lattice vibrations in carbon nanotubes by ultrafast laser pulses. We use a third nearest neighbor extended tight-binding (ETB) model to describe the electronic states over the entire Brillouin zone while the phonons are treated in a valence force field model. In treating the electrons and phonons, we exploit the screw symmetry of the nanotube to drastically simplify the problem. Equations of motion for each CP vibrational mode are obtained, using a microscopic description of the electron-phonon interaction based on direct evaluation of the three-center electron-phonon matrix elements using ab initio wavefunctions and screened atomic potentials. For each CP active mode we find that the CP amplitudes satisfy a driven oscillator equation with a coherent phonon driving function that depends on photoexcited hot carrier distributions. An ultrafast laser pulse generates electron-hole pairs and the driving function rises sharply in a step-like fashion. If the pulse duration is shorter than the phonon oscillation period, the rapid initial jump in the coherent phonon driving function gives rise to oscillating coherent phonon amplitudes. Carbon nanotubes with the same values of 2n + m are said to belong to the same family (with index 2n + m). Carbon nanotubes in a given family are metallic if mod (n − m, 3) = 0 and semiconducting otherwise. The semiconducting tubes are classified as either mod 1 or mod 2 depending on whether the value of mod (n−m, 3) is 1 or 2. In CP spectroscopy, we find that a strong signal is obtained when we pump at the allowed nanotube E ii optical transitions. We found experimentally that, for the RBM modes, the CP intensity within a mod 2 family tends to decrease with chiral angle and the decrease in CP intensity with chiral angle is found to be much more pronounced for the E 11 feature. We also found that CP intensities are considerably weaker in mod 1 families in comparison with mod 2 families. In general, the E 22 CP intensities in mod 2 families are stronger than the E 11 features while the opposite is true in mod 1 tubes. For RBM modes in mod 1 tubes, the E 11 CP intensities tend to decrease with increasing chiral angle within a given family. As the family index (2n+m) increases, the E 11 CP intensity in mod 1 tubes decreases. Finally, we compared our theoretical results with experimental CP spectra in mod 2 nanotubes and found that our theoretical model correctly predicts the experimentally observed overall trends in the relative strengths of the CP signal both within and between mod 2 families. 
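The family bookkeeping used throughout this discussion is straightforward to script. The short sketch below classifies a chirality (n, m) as metallic, mod 1, or mod 2 and reports its family index 2n + m, exactly as defined above; the example chiralities are ones mentioned in this paper.

```python
def classify(n: int, m: int):
    """Classify an (n, m) nanotube as metallic, mod 1, or mod 2 and give its family index."""
    r = (n - m) % 3
    kind = "metallic" if r == 0 else f"semiconducting mod {r}"
    return kind, 2 * n + m   # family index 2n + m

for chirality in [(11, 3), (10, 5), (10, 0), (5, 5), (8, 4)]:
    kind, family = classify(*chirality)
    print(f"({chirality[0]},{chirality[1]}): {kind}, family {family}")
```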
We found discrepancies between our theoretical predictions with regard to the peak positions and lineshapes. These discrepancies can be qualitatively attributed to Coulomb interactions which have not yet been included in the calculations. We do not consider the Coulomb interaction and excitonic effects in our theoretical model for reasons of simplicity and tractability. It has been pointed out that excitonic effects are important for understanding the optical properties of small diameter carbon nanotubes. 16 In a number of Raman scattering theories the Coulomb interaction is neglected, but nevertheless the computed RBM Raman spectra can explain many experimental measurements. 17,18,19 It is worth dwelling a little on the reason for this. Jiang et al. 10 recently undertook a study of the exciton-photon and exciton-phonon matrix elements in single-walled carbon nanotubes using a tightbinding model. These authors found that for the RBM and G-band modes, the phonon matrix elements in the exciton and free particle pictures are nearly the same. However, values for the exciton-photon matrix elements are on the order of 100 times greater than the electronphoton matrix elements computed in the free particle picture. Thus, when we discuss the photoexcitation of carriers, the actual photoexcited carrier densities will be different from what we predict. On the other hand when we discuss the dependence of the coherent phonon amplitudes on the E ii transition energies or the tube chiraility, the present discussion has a physical meaning. Thus we expect reasonable agreement between experiment and theory for relative coherent phonon amplitudes and the relative strengths of the computed CP spectra. However, the peak positions and line shapes of the CP spectra will be altered by the neglect of excitonic effects. Apart from being useful in resonant CP spectroscopy of chirality specific nanotubes, we note that laser induced coherent phonons in carbon nanotubes may also have important practical applications in the fabrication of carbon nanotube electronic devices. In recent years laser induced coherent phonon generation has been theoretically studied using molecular dynamics techniques. 20,21,22,23,24 Garcia et al. 21 have simulated laser induced coherent phonons in a mod 1 zigzag (10,0) capped carbon nanotube using a formalism that combines a nonadiabatic molecular dynamics method and a density matrix approach to describe the dynamics of the carbon ions and valence electrons. Dumitricȃ et al. 22,23 have theoretically studied the possibility of achieving selective cap opening in (10,0), (5,5) and (8,4) capped carbon nanotubes driven by laser induced coherent phonons using nonadiabatic molecular dynamics simulations based on a microscopic electronic model. It is well known that self-assembled carbon nanotubes suffer from structural imperfections that modify their electronic, optical, and mechanical properties. Such defects pose a problem for the fabrication of nanotube based electronic devices. 24 A common type of defect in carbon nanotubes is the (5-7) pair defect introduced by applying a Stone-Wales transformation to the nanotube structure. 20 In the Stone-Wales transformation, four hexagons are replaced by two pentagons and two heptagons. 25 The possibility of eliminating such defects using laser generated coherent phonons has been studied theoretically in armchair and zigzag nanotubes using nonabiabatic molecular dynamics simulations. Romero et al. 
24 have studied the response of armchair nanotubes with (5-7) pair defects to ultrafast laser pulses and found that when the fraction of photoexcited electrons exceeds a critical threshold ( around 7% ) the resulting coherent phonon oscillations cause the nanotube to undergo an inverse Stone-Wales structural transition that heals the defect. More recently Valencia et al. 20 studies to zigzag nanotubes and found similar results. Recently, Jeschke et al. 26 have theoretically studied the structural response of nanotubes of different chiralities to femtosecond laser excitation using molecular dynamics simulations. They found that carbon nanotubes may transform into more stable structures under the appropriate conditions. Such investigations may be important for technological applications. For example, nanotubes excited by lasers above a certain threshold may tear open and interact with other tubes leading to the creation of new structures. II. EXPERIMENT In this section, we demonstrate the implementation of ultrafast pulse-shaping to excite the coherent radial breathing modes of specific chiralities, providing a definitive ability to study single chirality nanotubes from an ensemble sample. Our method exploits selective excita- tion to not only extract chiral-dependent band gap modulations, but also utilizes information from the probe energy dependence of the phase and the amplitude of the coherent phonon oscillations to reconstruct excitation profiles for the E 22 transitions. In particular, our observation of probe-energy-dependent phase reversal provides direct, time-domain evidence that for coherent radial breathing modes the band gap oscillates in response to the nanotube diameter oscillations. The sample used in this study was a micelle-suspended SWNT solution, where the single-walled carbon nanotubes (HiPco batch HPR 104) were suspended as individuals with sodium cholate. The optical setup was that of standard degenerate pump-probe spectroscopy, but chirality selectivity of RBM oscillations was achieved by using multiple pulse trains, with a pulse-to-pulse interval corresponding to the period of a specific RBM mode. 15 Among different species of nanotubes, those having RBM frequencies that are matched to the repetition rate of multiple pulse trains will generate large amplitude coherent oscillations with increasing oscillatory response to each pulse, while others will have diminished coherent responses. 11,12,13 The tailoring of multiple pulse trains from femtosecond pulses was achieved using the pulseshaping technique developed by Weiner and Leaird. 27 As depicted in Fig. 1, pulse trains are incident on an ensemble of nanotubes as a pump beam, whereas coherent RBM oscillations are monitored by an unshaped, Gaussian probe beam. Real-time observation of coherent RBM oscillations is possible without pulse-shaping by employing standard femtosecond pump-probe spectroscopy. 11,12,13 domain beating profiles reflect the simultaneous generation of several RBM frequencies from nanotubes in the ensemble with different chiralities, which are clearly seen in Fig. 2(b) with the Fourier-transformation of the timedomain data. Although resonance conditions and mode frequencies lead to the assignment of chiralities to their corresponding peaks, 12 obtaining detailed information on dynamical quantities such as the phase information of phonon oscillations becomes rather challenging. Additionally, if adjacent phonon modes overlap in the spectral domain, this can lead to peak distortions. 
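The selectivity mechanism described above, in which only tubes whose RBM period matches the pulse-to-pulse interval build up a large coherent amplitude, can be captured with a toy phasor model in which each pump pulse delivers an equal impulsive kick to a damped oscillator. The number of pulses and the damping time below are illustrative assumptions, not experimental parameters.

```python
import numpy as np

def cp_buildup(f_rbm_thz, f_rep_thz, n_pulses=20, tau_ps=5.0):
    """Toy model: each pump pulse gives the RBM oscillator an equal impulsive
    kick; kicks add as phasors, damped between pulses (parameters assumed)."""
    t_k = np.arange(n_pulses) / f_rep_thz                # pulse arrival times (ps)
    decay = np.exp(-(t_k[-1] - t_k) / tau_ps)            # earlier kicks have decayed more
    phase = np.exp(1j * 2 * np.pi * f_rbm_thz * t_k)
    return np.abs(np.sum(decay * phase))

f_rep = 7.07   # pulse-to-pulse rate matched to the (11,3) RBM
for f_rbm, label in [(7.07, "(11,3) RBM"), (6.69, "(10,5) RBM")]:
    print(f"{label}: relative coherent amplitude = {cp_buildup(f_rbm, f_rep):.1f}")
```

In this toy model the kicks from successive pulses add in phase only when the oscillator period matches the pulse spacing, so the matched chirality dominates the response while the mismatched one largely cancels.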
However, by introducing pulse-shaping, multiple pulses with different repetition rates are used to excite RBM oscillations, and as shown in Figs. 3(a)-3(d), chirality selectivity was successfully obtained. With the appropriate repetition rate of the pulse trains, a single, specific chirality dominantly contributes to the signal, while other nanotubes are suppressed. For example, by choosing a pump repetition rate of 7.07 THz, we can selectively excite only the (11, 3) nanotubes, as seen in Fig. 3(a). Similarly, with a pump repetition rate of 6.69 THz, the (10, 5) nanotubes are selectively excited, as seen in Fig. 3(b). The accuracy of selectivity depends on the number of pulses in the tailored pulse train as well as the distribution of chiralities in the nanotubes ensemble. Furthermore, selective excitation of a specific chirality also requires the pump energy to be resonant with the corresponding E 22 transition for each chirality-specific nanotubes. There is a π phase shift between the 780 nm and 810 nm data. These three wavelengths (from the top to the bottom of the figure) correspond to photon energies above, at and below the energy of the second exciton resonance, respectively, of (11,3) nanotubes. The ability to excite single-chirality nanotubes allows us to perform detailed studies of excited states of singlewalled carbon nanotubes. For example, by placing a series of 10-nm band pass filters in the probe path before the detector, we can measure the wavelength-dependence of RBM-induced transmission changes in order to understand exactly how the tube diameter changes during coherent phonon RBM oscillations and how the diameter change modifies the nanotube band structure. As seen in Fig. 4, the differential transmission is shown for three cases, from top to bottom, corresponding to probe photon energies above-resonance, on-resonance, and belowresonance, respectively, for selectively-excited (11,3) carbon nanotubes. Although the transmission is strongly modulated at the RBM frequency (7.07 THz) for all three cases, the amplitude and phase of oscillations vary noticeably for varying probe wavelengths. Specifically, the amplitude of oscillations becomes minimal at resonance, and, in addition, there is clearly a π-phase shift between the above-and below-resonance traces. Because the band gap energy and diameter are inversely related to each other, and because it is the RBM frequency at which the diameter is oscillating, we can conclude from this data that the energy of the E 22 resonance is oscillating at the RBM frequency. Namely, when the band gap is decreasing, absorption above (below) resonance is decreasing (increasing), resulting in positive (negative) differential transmission. We can also look at the short response to see how the diameter changes in response to ultrafast excitation of electron-hole pairs by the pump pulse. In Fig. 5(a), we plot the differential transmission data taken at 780 nm Data near time zero for two wavelengths corresponding to energies above and below the second exciton resonance, respectively, of (11, 3) nanotubes. (1.59 eV) together with the pump pulse train, with time zero corresponding to the center of the pulse train. Here we note that an increase in the absorption corresponds to a decrease in the differential transmission. Figures 5(b) and 5(c) show data near time zero for two wavelengths corresponding to energies above and below the second exciton resonance, respectively, of (11, 3) nanotubes. 
The sign of the differential transmission oscillations in the first quarter-period, where the time delay varies from 0.0 ps to 0.07 ps, is positive (negative) for the above (below) resonance probe, indicating that there is an initial decrease (increase) in absorption for energies above (below) resonance, demonstrating that the diameter of the nanotube initially expands, taking into account the fact that the resonance energy is inversely related to the diameter. This initial expansion of the tube diameter is in agreement with our theoretical predictions for the photoexcitation of coherent phonon RBM oscillations by ultrafast laser pulses pumping near the E 22 transition energy in mod 2 nanotubes [e.g., (11,3) tubes]. III. THEORY We have developed a microscopic theory for the generation of coherent phonons in single-walled carbon nanotubes and their detection by means of coherent phonon spectroscopy experiments. Our approach is based on ob-taining equations of motion for the coherent phonon amplitudes from the Heisenberg equations of motion as described by Kuznetsov et al. in Ref. 28. In our theoretical model, we explicitly incorporate the electronic energies and wavefunctions for the π electrons, the phonon dispersion relations and the corresponding phonon modes, the electron-phonon interaction, the optical matrix elements, and the interaction of carriers with a classical ultrafast laser pulse. For simplicity, and to make the problem tractable, we neglect the many-body Coulomb interaction and interactions with the surrounding liquid medium in the micelle-suspended nanotube ensemble. We are able to treat nanotubes of arbitrary chirality by exploiting all the screw symmetry operations. This allows us to examine trends in the CP signal strength within and between nanotube families. In addition, we gain something in our conceptional understanding by deriving a simple driven oscillator equation for the coherent phonon amplitudes where the driving function depends explicitly on the time-dependent photoexcited carrier distribution functions. In the limit where we ignore Coulomb interactions, the driven oscillator equation for the coherent phonon amplitudes turns out to be exact. 28 A. Electron Hamiltonian We treat carbon nanotube π and π * electronic states in the extended tight-binding (ETB) formalism of Porezag et al. 29 In the ETB model, the tight-binding Hamiltonian and overlap matrix elements between π orbitals on different carbon atoms are functions of the interatomic distance. The position-dependent Hamiltonian and overlap matrix elements are obtained from a parametrization of density-functional (DFT) results in the local-density approximation (LDA) using a local orbital basis set as described in Ref. 29. Our computed energy dispersion relations for the bonding π and anti-bonding π * bands in graphene are plotted in Fig. 6 along high symmetry directions in the hexagonal two-dimensional Brillouin zone. For comparison, we also plot the graphene energy dispersion relations obtained from the simple tight-binding model (STB), described in Ref. 1, in which only nearest neighbor Hamiltonian and overlap matrix elements are considered. In the STB model, the values of the nearest neighbor Hamiltonian and overlap matrix elements are −3.033 eV and 0.129, respectively. 1 The ETB and STB models agree with each other near the K and K' points in the Brillouin zone and in carbon nanotubes these are the states that give rise to the low lying conduction and valence subbands that we are interested in. 
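For reference, the quoted STB parameters (nearest-neighbour Hamiltonian matrix element −3.033 eV and overlap 0.129) reproduce the familiar asymmetric π and π* bands that touch at the K point. The sketch below evaluates the closed-form two-band solution of the generalized eigenvalue problem H c = E S c; the C-C bond length of 1.42 Å is taken as an assumed standard value, and the on-site energy is set to zero.

```python
import numpy as np

t_hop, s_ov, a_cc = -3.033, 0.129, 1.42      # eV, overlap, C-C bond length in Angstrom (assumed)
deltas = a_cc * np.array([[1.0, 0.0],
                          [-0.5,  np.sqrt(3) / 2],
                          [-0.5, -np.sqrt(3) / 2]])   # three nearest-neighbour bond vectors

def stb_bands(k):
    """pi/pi* energies from the closed-form solution of the 2x2 problem H c = E S c."""
    w = np.abs(np.sum(np.exp(1j * (deltas @ k))))     # |f(k)|
    return t_hop * w / (1 + s_ov * w), -t_hop * w / (1 - s_ov * w)

K = np.array([2 * np.pi / (3 * a_cc), 2 * np.pi / (3 * np.sqrt(3) * a_cc)])
for name, k in [("Gamma", np.array([0.0, 0.0])), ("K", K)]:
    e_v, e_c = stb_bands(k)
    print(f"{name:5s}: E_pi = {e_v:+6.2f} eV, E_pi* = {e_c:+6.2f} eV")
```

The two bands coincide at K, consistent with the agreement between the STB and ETB models near the K and K' points noted above.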
In a carbon nanotube with chiral indices (n, m), a translational unit cell can be found such that the atomic structure repeats itself after translation of the tube by a translational vector T parallel to the tube axis. 1 The resulting Brillouin zone is one-dimensional with |k| ≤ π/T and the number of two-atom hexagonal cells in each translational unit cell is 1 where gcd(i, j) is the greatest common divisor of integers i and j. If we only make use of translational symmetry in formulating the electronic problem, the resulting size of the Hamiltonian and overlap matrices is 2N hex × 2N hex if we retain one π orbital per site. In practice, the size of the electronic Hamiltonian and overlap matrices obtained using the nanotube translational unit cell can become prohibitively large, especially for chiral nanotubes with n > m = 0. Fortunately, we can reduce the size of the electronic problem by further exploiting the symmetry of the nanotube. As pointed out in Ref. 17 a two-atom hexagonal unit cell in graphene, with the two carbon atoms labeled A and B, can be mapped onto the nanotube by applying two different screw operations. If we make use of the screw symmetry operations, we can block diagonalize the 2N hex × 2N hex Hamiltonian and overlap matrices into 2 × 2 subblocks which we label µ. In carbon nanotubes, the subblock index µ labels the cutting lines in the zone folding picture. For states near the Fermi energy, the cutting line numbers µ have a nice geometrical interpretation as pointed out in Ref. 30. Derivations for the Hamiltonian and overlap matrices for the electronic states in a carbon nanotube are given in Appendix A. We let s = v, c label the valence and conduction band states, and the electronic energies E sµ (k) for a given cutting line are obtained by solving the matrix eigenvalue problem in Eq. (A8). The second quantized electron Hamiltonian is simplŷ where c † sµk creates an electron in the state with energy E sµ (k). B. Phonon Hamiltonian Following Jiang et al. 31 and Lobo et al. 32 , we treat lattice dynamics in a carbon nanotube using a valence force field model. In our force field model, we include bond stretching, in-plane bond bending, out-of-plane bond bending, and bond twisting potentials. In constructing the valence force field potentials, we take care that they satisfy the force constant sum rule which requires that the force field potential energy remain invariant under rigid translations and rotations (see Ref. 33, p. 131). As pointed out in Ref. 34, a number of calculations in the literature use force field models that violate the force constant sum rule and, as a result, fail to reproduce the long wavelength flexure modes predicted by elasticity theory. Our valence force field model as described in Appendix B has seven force constants, four due to bond stretching interactions out to fourth nearest neighbor shells and one each from the remaining three interactions. To determine these seven force constants, we fit our model results for planar graphene to the model of Jishi et al. 35 Our best fit dispersion relations are shown in Fig We should point out that since our force field model contains force constants that are independent of the density of photoexcited carriers, it cannot describe phonon softening which is observed at high values of the laser fluence. However, a rather high value of the laser fluence is generally needed to generate a high density of photoexcited carriers. 
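To give a sense of the size reduction gained from the screw symmetry operations discussed above, the sketch below evaluates the number of hexagons per translational unit cell and compares the full translational-cell matrix dimensions with the symmetry-reduced 2 × 2 (electron) and 6 × 6 (phonon) blocks. The expression N_hex = 2(n² + nm + m²)/d_R with d_R = gcd(2n + m, 2m + n) is the standard zone-folding result quoted from the nanotube literature rather than taken from the text above.

```python
from math import gcd

def n_hex(n: int, m: int) -> int:
    """Number of two-atom hexagonal cells per translational unit cell, using the
    standard expression N_hex = 2(n^2 + nm + m^2) / d_R, d_R = gcd(2n + m, 2m + n)."""
    d_r = gcd(2 * n + m, 2 * m + n)
    return 2 * (n * n + n * m + m * m) // d_r

for n, m in [(11, 3), (10, 5), (10, 0)]:
    N = n_hex(n, m)
    print(f"({n},{m}): N_hex = {N}, "
          f"electron matrix {2*N}x{2*N} -> 2x2 blocks, "
          f"phonon matrix {6*N}x{6*N} -> 6x6 blocks")
```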
In the case of metallic nanotubes, the chirality dependent frequency shift of the RBM and G modes has been studied by Sasaki et al. in Refs. 36 and 37 as a function of the Fermi energy. In the case of the RBM mode they find that armchair nanotubes do not exhibit any frequency shift while zigzag nanotubes exhibit phonon softening. 36 The phonon energies and corresponding mode displacement vectors are obtained by diagonalizing the dynamical matrix. In graphene, there are two atoms per unit cell giving rise to the six phonon modes shown in Fig. 7. In a carbon nanotube, the size of the dynamical matrix is 6N hex × 6N hex . By making use of the nanoutube screw symmetry operations, we can block diagonalize the dynamical matrix into 6 × 6 subblocks which we label ν = 0 . . . N hex − 1. Again, the subblock index ν labels the cutting lines. In a carbon nanotube, the phonon energies ω βν (q) are obtained from solving the dynamical matrix eigenvalue problem in Eq. (B10) where ν is the cutting line index and β = 1 . . . 6 labels the six modes associated with each cutting line. The phonon wavevector q is defined on a one-dimensional Brillouin zone given by |q| ≤ π/T . The second quantized phonon Hamiltonian is given bŷ where b † βνq creates a phonon in a state with energy ω βν (q). C. Electron-phonon coupling The position-dependent single-electron Hamiltonian for a single-walled carbon nanotube is given by where the kinetic energy is T 0 and v c (r) is the carbon atom potential. In Eq. (4) R rJ is the equilibrium position of the r-th atom in the J-th two-atom unit cell and U rJ is the corresponding atomic displacement from equilibrium as defined in Appendix B. Expanding this Hamiltonian in a Taylor series to first order in the atomic displacements, we obtain a positiondependent electron-phonon interaction Hamiltonian Starting from the classical symmetry-adapted atomic displacement in Eq. (B7) of Appendix B, we make the transition to quantum mechanics and define a symmetryadapted second-quantized phonon displacement operator with the quantized phonon amplitudes Using the phonon displacement operator in Eq. (6) we arrive at the desired second-quantized electron-phonon Hamiltonian The first term in the electron-phonon Hamiltonian, Eq. (8), is proportional to the operator c † s ′ ,µ+ν,k+q c sµk b βνq . This term describes a phonon absorption process in which an electron with energy E sµ (k) absorbs a phonon with energy ω βν (q) and then scatters into the state with energy E s ′ ,µ+ν (k + q). Likewise, the second term in Eq. (8) describes a phonon emission process. Details concerning the evaluation of the electronphonon interaction matrix elements M µ,ν s,s ′ ,β (k, q) are given in Appendix C. In evaluating these matrix elements, we make explicit use of 2p z atomic wavefunctions and screened atomic potentials obtained from an ab initio calculation in graphene. D. Coherent phonon generation The second quantized phonon displacement operators in Eq. (6) are defined in terms of a sum over phonon modes of the second quantized operator b βνq +b † βν,−q . For each phonon mode in the nanotube we are thus motivated to define a coherent phonon amplitude given by 28 where denotes the statistical average. Equations of motion for Q βνq (t) can be obtained from the phonon and electron-phonon Hamiltonians in Eqs. (3) and (8). From the Heisenberg equation we obtain In coherent phonon spectroscopy, we assume that the optical pulse and hence the distribution of photoexcited carriers is spatially uniform over the nanotube. 
In this case the electronic density matrix is diagonal and can be expressed as is the electron distribution function in subband sµ with wavevector k. 28 The only coherent phonon modes that are excited are the ν = q = 0 modes whose amplitudes satisfy a driven oscillator equation where Q β (t) ≡ Q β00 (t) and ω β ≡ ω β,0 (q = 0). There is no damping term in Eq. (11) since anharmonic terms in the electron-phonon Hamiltonian are neglected. We solve the driven oscillator equation subject to the initial conditions Q β (0) =Q β (0) = 0. Taking the initial condition into account, the driving function S β (t) is given by where f sµ (k, t) are the time-dependent electron distribution functions, f 0 sµ (k) are the initial equilibrium electron distribution functions, and M β sµ (k) ≡ M µ0 ssβ (k, q = 0). The coherent phonon driving function S β (t) depends on the photoexcited electron distribution functions. In principle, we could solve for the time-dependent distribution functions in the Boltzmann equation formalism taking photogeneration and relaxation effects into account. In CP spectroscopy, an ultrafast laser pulse generates electron-hole pairs on a time scale short in comparison with the coherent phonon period. In our experimental work, we typically use 50 fs ultrafast laser pulses to excite RBM coherent phonons with oscillation periods of around 0.14 ps ( ω β ≈ 30 meV or 242 cm −1 ). After photoexcitation the electron-hole pairs slowly scatter and recombine. Jiang et al. 38 recently carried out a study of the electron-phonon interaction and relaxation time in graphite and found relaxation times on the order of a few picoseconds which is much slower than either the ultrafast laser pulse or the RBM coherent phonon oscillation period. The driving function S β (t) thus rises sharply in a step-like fashion and then slowly vanishes as the distribution functions f sµ (k, t) return to f 0 sµ (k). The rapid initial jump in S β (t) gives rise to an oscillatory part of the coherent phonon amplitude Q β (t) at the coherent phonon frequency ω β while the slow subsequent decay of S β (t) gives rise to a slowly varying background. Since the observed CP signal is proportional to the power spectrum of the oscillatory part of Q β (t), we choose to ignore relaxation effects and retain only the rapidly varying photogeneration term in the Boltzmann equation. Neglecting carrier relaxation will have a negligible effect on the computed CP signal since the relaxation time is much greater than the coherent phonon period. The photogeneration rate in the Boltzmann equation depends on the polarization of the incident ultrafast laser pulse. Using an effective mass model, Ajiki and Ando 39 showed that optical absorption in an isolated singlewalled carbon nanotube for polarization perpendicular to the nanotube axis is almost perfectly suppressed by photo-induced charge (the depolarization effect). Recently, Popov and Henrard 40 undertook a comparative study of the optical properties of carbon nanotubes in orthogonal and nonorthogonal tight-binding models and found that optical absorption due to light polarized parallel to the tube axis (z axis) is greater than absorption due to light polarized perpendicular to the axis by about a factor of five. Consequently we confine our attention to light polarized parallel to the tube axis only. We compute the photogeneration rate in the electric dipole approximation using Fermi's golden rule. 
In the case of parallel polarization, optical transitions can only occur between states with the same angular momentum quantum number µ. 17,40 For the photogeneration rate we find where ∆E µ ss ′ (k) = |E sµ (k)−E s ′ µ (k)| are the k dependent transition energies, ω is the pump energy, u(t) is the time-dependent energy density of the pump pulse, e is the electron charge, m 0 is the free electron mass, and n g is the index of refraction in the surrounding medium. The optical matrix element is given by where the sum over rJ is taken over fourth nearest neighbors of the atom at R r ′ 0 . In Eq. (14), C r (s, µ, k) are the expansion coefficients for the symmetry-adapted ETB wavefunctions obtained by solving the matrix eigenvalue problem in Eq. (A8) of Appendix A and φ J (k, µ) is the phase factor defined in Eq. (A7). The z components of the atomic dipole matrix elements (which can be evaluated analytically) are given by where the 2p z orbitals ϕ rJ are defined in Eq. (C3). Note that the squared optical matrix element |P µ s,s ′ (k)| 2 has units of energy. We point out that optical dipole matrix elements in the vicinity of the K point in both graphite and carbon nanotubes have been studied previously in Ref. 41. The pump energy density u(t) is related to the fluence F = dt u(t) (c/n g ). To simplify our theoretical model, it is assumed that the pump beam consists of a train of N pulse identical Gaussian pulses each with an intensity full width at half maximum (FWHM) of τ p and a Lorentzian spectral lineshape with a FWHM of Γ p . The Gaussian pulses are equally spaced in time with the time interval between pulses being T pulse . The peak intensity of the first pulse is taken to occur at t = 0. To account for spectral broadening of the laser pulses we replace the delta function in Eq. (13) with 42 From the coherent phonon amplitudes, the timedependent macroscopic displacements of each carbon atom in the nanotube can be obtained by averaging Eq. (6). Thus where A β ≡ A β,0 (0) andê β r ≡ê r (β, 0, 0). It is apparent that only four coherent phonon modes can be excited in a carbon nanotube regardless of the chirality. Since ν = q = 0 the CP active mode frequencies and polarization vectors are found by diagonalizing a single 6 × 6 dynamical matrix in Eq. (B10). Two of the six mode frequencies ω β are zero and these phonons are not excited since the driving term S β (t) vanishes as can be seen in Eq. (12). Of the remaining four modes, the one with the lowest energy is the radial breathing mode. Coherent acoustic phonon modes whose energies vanish at q = 0 cannot be excited in an infinitely long carbon nanotube under conditions of uniform illumination by the pulse laser. If, however, the electric field of the pump laser could be made to vary spatially along the nanotube axis with a periodicity given by a real space wavevector q pulse , it would be possible to generate coherent acoustic phonons which would travel along the nanotube at the acoustic sound speed. The generation of coherent acoustic phonons has been demonstrated in semiconductor superlattices where the pump laser generates carriers in the quantum wells thus giving rise to carrier distribution functions having the periodicity of the superlattice. 43,44,45,46 In the case of coherent acoustic phonons in superlattices, the coherent phonon lattice displacement satisfies a driven loaded string equation rather than a driven harmonic oscillator equation. 43 E. 
Absorption spectrum In coherent phonon spectroscopy a probe pulse is used to measure the time-varying absorption coefficient of the carbon nanotube. The time-dependent absorption coefficient is given by 42,47 α( ω, t) = ω n g c ε 2 ( ω, t) where ε 2 ( ω, t) is the imaginary part of the timedependent dielectric function evaluated at the probe photon energy ω. The imaginary part of the nanotube dielectric function is obtained from Fermi's golden rule where A t = π(d t /2) 2 is the cross-sectional area of the tube and d t is the nanotube diameter. In our model, we replace the delta function in Eq. (19) with a broadened Lorentzian spectral lineshape with a FWHM of Γ s . The distribution function f sµ (k) and bandstructure E sµ (k) are time-dependent. The time-dependence of f sµ (k) comes from the Boltzmann carrier dynamics which can include the photogeneration of electron-hole pairs as well as various carrier relaxation effects. The time-dependence of E sµ (k) arises from variations in the carbon-carbon bond lengths due to the coherent phonon induced atomic displacements U rJ (t) given in Eq. (17). This time-dependent deformation of the nanotube bond lengths alters the tight-binding Hamiltonian and overlap matrix elements in the extended tight-binding model described in Section III A. Note that to first order in the lattice displacements the energies E sµ (k) vary with time while the tight-binding wavefunctions and optical matrix elements P µ ss ′ (k) do not. F. Coherent phonon spectrum In the coherent phonon spectroscopy experiments described by Lim et al. in Refs. 12 and 13 single color pump-probe experiments are performed on an ensemble of nanotubes. The excitation of coherent phonons by the pump modulates the optical properties of the nanotubes and gives rise to a transient differential transmission signal. After subtraction of a slowly varying background component, the coherent phonon spectrum is obtained by taking the power spectrum of the time-dependent differential transmission. In our model we simulate singlecolor pump-probe experiments and take the theoretical CP signal to be proportional to the power spectrum of the transient differential transmission after background subtraction. We compute the power spectrum using the Lomb periodogram algorithm described in Ref. 48. We find that using the Lomb periodogram to evaluate the power spectrum in our theoretical results is more convenient than using fast fourier transform methods since it works well for data sets whose size is not an integer power of two or whose data points are not evenly spaced. It is worth emphasizing that in CP spectroscopy we measure the power spectrum of the time-dependent coherent phonon-modulated differential transmission. Thus, as a function of the pump energy, the CP spec- trum at the coherent phonon frequency tracks the absolute value of the first derivative of the static absorption coefficient. This is nicely illustrated in Fig. 6 of Ref. 12 where it is shown that an excitonic peak in the absorption spectrum will give rise to a symmetric double peaked structure in the CP power spectrum. IV. THEORETICAL RESULTS To illustrate our theoretical model, we will discuss in some detail simulated CP spectroscopy experiments in an undoped (11, 0) zigzag nanotube. This is a mod 2 semiconducting nanotube with mod (n − m, 3) = 2 belonging to the family of nanotubes with 2n+m = 22. We choose this example since we have performed CP spectroscopy for the (11, 0) nanotube (see Fig. 18) and because Lim et al. 
measured coherent lattice vibrations in a micelle-suspended solution of carbon nanotubes with a diameter range of 0.7 to 1.3 nm 12,13 and found strong CP signals due to excitation of coherent RBM modes in this family of nanotubes. A. Bandstructure and absorption spectra The four lowest-lying one-dimensional electronic π bands for the (11, 0) nanotube are shown in the upper panel of Fig. 8. In our model we ignore structural optimization and assume that the carbon atoms lie on the surface of the rolled up unreconstructed graphene cylinder. In an unreconstructed zigzag nanotube the length of the translational unit cell is T = √ 3 a where a = 2.49Å is the hexagonal lattice constant in graphene. 1 In Fig. 8 the conduction bands have positive energy and the valence bands have negative energy. Since the electronic problem has been reduced to solving the 2 × 2 matrix eigenvalue problem in Eq. (A8), the conduction bands in Fig. 8 with a given value of the angular quantum number µ can only mix with the valence band having the same value of µ. The four bands shown in the figure are doubly degenerate with two distinct values of µ giving rise to the same band energies. The allowed optical transitions for z-polarized light (with selection rule ∆µ = 0) are indicated by vertical arrows and are labeled E 11 . . . E 44 . The lower panel of Fig. 8 shows the square of the optical matrix elements defined in Eq. (14) for each of the transitions E 11 . . . E 44 . For the E 11 , E 22 and E 33 transitions the squared optical matrix elements are strongly peaked at the van Hove singularity at the direct band gaps. The size of this peak in the squared optical matrix elements increases as the band gap moves away from the Fermi energy at E = 0. The absorption for these three transitions are sharply peaked at the band edge due to the van Hove singularity in the joint density of states as well as the peak in the squared optical matrix elements that occurs there. For the E 44 transition, the conduction and valence bands are very flat giving rise to an enhanced van Hove singularity in the joint density of states while the squared optical matrix element is a slowly varying function of k. For this transition, the peak in the absorption spectrum is due almost entirely to the sharply peaked joint density of states. As a check on our theoretically calculated squared optical matrix elements, we compared our results with optical dipole matrix elements calculated independently by Jiang et al. in Ref. 49 for the metallic (5,5) armchair and (6,0) zigzag tubes for light polarized parallel to the tube axis. Our squared optical matrix elements are proportional to the square of the dipole matrix elements shown in Figs. 2 and 3 in Ref. 49. We found excellent agreement between our theoretical results and the corresponding results for the two tubes considered. With the electronic band structure and squared optical matrix elements shown in Fig. 8, we can obtain the absorption coefficient of the (11, 0) nanotube using Eqs. (18) and (19) and the carrier distribution functions. The computed absorption coefficient for the undoped (11, 0) nanotube in thermal equilibrium at room tem- perature is shown in the upper curve of Fig. 9 where the spectral FWHM linewidth is taken to be Γ s = 0.15 eV. Also shown in the figure are the absorption spectra of the other members of Family 22, namely the (10, 2), (9,4) and (8,6) nanotubes. B. 
Generation of coherent RBM phonons In a typical simulation, we excite coherent RBM phonons with a single 50 fs Gaussian laser pulse pumping at the peak of the broadened E 22 transitions shown in Fig. 9. The pump fluence is taken to be 10 −5 J/cm 2 , the FWHM spectral linewidth is assumed to be Γ p = 0.15 eV, and the time scale is chosen so that the pump reaches its peak intensity at t = 0. The pump energies ω at the E 22 peaks are taken to be 2.05 eV for the (11, 0) nanotube, 2.04 eV for (10, 2), 1.97 eV for (9, 4), and 1.89 eV for (8,6). The photogenerated carrier distribution functions are obtained from Eq. (13) and the photoexcited carrier densities per unit length are shown in Fig. 10 for Family 22 nanotubes. The carrier densities after photoexcitation all lie in the range from 70 to 90 cm −1 increasing as we go from (11,0) to (8,6). Using the photogenerated carrier densities f sµ (k, t) we can obtain the coherent phonon amplitudes Q β (t) by solving the equation of motion (11) with the driving function S β (t) given in Eq. (12). As we noted earlier, β labels the six coherent phonon modes for each value of µ. Coherent phonon oscillations can only be excited in the four q = 0 modes with non-zero frequency corresponding to µ = 0. The six µ = 0 phonon dispersion curves are shown in Fig. 11 for the (11, 0) nanotube. At q = 0, there are two acoustic modes with zero frequency. The one with the lower sound speed is the twisting mode (TW) in which the A and B atom sublattices move in phase in the circumferential direction. The mode with the higher sound speed is the longitudinal acoustic mode (LA) in which the A and B atom sublattices move in phase along the tube axis. The remaining four q = 0 modes are the CP active modes. The lowest CP active mode is the radial breathing mode at 37.1 meV (300 cm −1 ) in which all In our model we neglect slow carrier relaxation effects and retain only the photogeneration term in the Boltzmann equation. The net photogenerated conduction band electron distribution function f cµ (k) − f 0 cµ (k) is then equal to the net photogenerated hole distribution function for each value of k. In this case, we can obtain a simplified expression for the coherent phonon driving function which only involves the photogenerated conduction band electron distributions. We find The driving function kernel S β µ (k) is given by where M β sµ (k) (s = v, c) are the same matrix elements appearing in Eq. (12). Each value of µ in the impulsive excitation model corresponds to a specific optical transition E ii . The k dependence of S µ (k) for the RBM phonon in the (11, 0) nanotube is shown in Fig. 12 for the first four optical transitions. For the RBM mode, we chose the unit mode polarization vector to point radially outward so that positive values of S(k) contribute to a radially outward directed driving term. As can be seen in Fig. 12 both positive and negative values of S(k) are possible. If, for example, we were to pump near the E 11 band edge, the electron distribution functions would be localized near k = 0 and we would get negative values for S(t). The signs of the driving function kernels near k = 0 for the E ii transitions are in agreement with other results reported in the literature. 19 The sign of S β µ (k) is the negative of the sign of the electron-phonon matrix element M β cµ (k) − M β vµ (k) appearing in Eq. (21). This matrix element has been obtained by Machón et al. in an ab initio calculation reported in Ref. 
19 for the lowest four optical transitions with light polarized parallel to the tube axis. In Table I of Machón et al. 19 the sign of the band edge electron-phonon matrix element for E 11 in the (11, 0) tube has a sign opposite to that of the higher lying transitions in agreement with the results shown in Fig 12. In Fig. 13 we plot the photoexcited carrier density n(t), the coherent phonon driving function S(t), and the coherent phonon amplitude Q(t) for RBM coherent phonons in (11, 0) tubes for 50 fs z-polarized laser pulses with photoexcitation energies of 1.07 eV and 2.05 eV. These correspond to the E 11 and E 22 absorption peaks seen in Fig. 9 respectively. The absorption peaks are comparable for E 11 and E 22 . However, for constant fluence, the number of photoexcited carriers n ∝ α/ ω (see Ref. 42, p. 341) and so the number of photoexcited carriers is larger for the E 11 transition primarily as a result of the smaller transition energy. As expected from Fig. 12, the coherent phonon driving functions S(t) and amplitudes Q(t) have different signs in the two cases. This means that for photoexcitation at the E 11 transition in mod 2 nanotubes the tube diameter decreases and oscillates about a smaller equilibrium diameter while the opposite is true for photoexcitation at the E 22 transition energy. As an aside, we note that in mod 1 tubes the predicted E 22 and E 11 photoexcited diameter oscillations have the opposite initial phase relative to the corresponding mod 2 oscillations as will be show in Section IV E. If we pump at the broadened E 22 absorption peaks with 50 fs pulses of the same spectral width for each nanotube in Family 22, we obtain the driving functions shown in the upper panel of Fig. 14. In all cases, the driving functions are positive which implies that all the nanotubes in Family 22 initially move radially outward in response to photoexcitation by an ultrafast pump at the E 22 transition energy. The dimensionless coherent phonon amplitudes Q(t) for the radial breathing mode obtained by solving the driven oscillator equation (11) are shown in the lower panel of Fig. 14 where the curves are offset for clarity. As expected, the coherent phonon amplitudes oscillate at the RBM frequencies about a new positive equilibrium point. We note that for the radial breathing mode the coherent phonon amplitude is proportional to the differential change in the tube diameter. In Fig. 14, we see that the magnitude of the coherent phonon oscillations depends on chirality. The size of the jump in the driving function S(t) due to photoexcitation, and therefore the magnitude of the oscillations in Q(t), should be roughly proportional to the product of the electron-photon and electron-phonon matrix elements. The electron-phonon interaction matrix element for E 22 transitions in mod 2 tubes is largest in the zigzag nanotube limit 18 while the corresponding electron-photon interaction matrix element is largest in the armchair limit. 49 Thus we expect S(t) and Q(t) to have maxima somewhere between these two limits. C. Resonant excitation of coherent phonons In our experimental work, we resonantly excite coherent RBM phonons in specific chirality nanotubes in a micelle suspended sample by using a train of pump pulses with a repetition rate equal to the RBM period. 
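Before turning to those full simulations, the resonance condition itself can be illustrated with a few lines of numerics. The sketch below is a deliberately simplified stand-in for Eqs. (11) and (12): the microscopic driving function built from the photoexcited distribution functions and electron-phonon matrix elements is replaced by a sum of smooth steps, one per pump pulse, with the total step height held fixed to mimic constant fluence, and the driven oscillator is integrated with Q(0) = Q'(0) = 0. Spacing the pulses by one RBM period reinforces the oscillation, while spacing them so that successive kicks arrive half a period out of step largely cancels it. The step height and rise time are illustrative assumptions, not outputs of the full model.

```python
import numpy as np

HBAR_OMEGA_RBM = 37.1e-3             # RBM phonon energy of the (11, 0) tube in eV (from the text)
OMEGA = HBAR_OMEGA_RBM / 6.582e-16   # angular frequency in rad/s (hbar = 6.582e-16 eV s)
T_RBM = 2 * np.pi / OMEGA            # RBM period, roughly 0.11 ps

def step_drive(t, t_pulses, s0=1.0, rise=50e-15):
    """Toy driving function: each pump pulse adds a smooth step of height s0/len(t_pulses)."""
    s = np.zeros_like(t)
    for tp in t_pulses:
        s += (s0 / len(t_pulses)) * 0.5 * (1.0 + np.tanh((t - tp) / rise))
    return s

def coherent_amplitude(t_pulses, t_max=2e-12, n_steps=20000):
    """Integrate Q'' + omega^2 Q = S(t) with Q(0) = Q'(0) = 0 (velocity Verlet)."""
    t = np.linspace(0.0, t_max, n_steps)
    dt = t[1] - t[0]
    s = step_drive(t, t_pulses)
    q = np.zeros_like(t)
    v = 0.0
    for i in range(1, n_steps):
        a0 = s[i - 1] - OMEGA**2 * q[i - 1]
        q[i] = q[i - 1] + v * dt + 0.5 * a0 * dt**2
        a1 = s[i] - OMEGA**2 * q[i]
        v = v + 0.5 * (a0 + a1) * dt
    return t, q

# Six pulses spaced by one RBM period (in phase) versus one-and-a-half periods (out of phase).
t_in, q_in = coherent_amplitude([k * T_RBM for k in range(6)])
t_out, q_out = coherent_amplitude([k * 1.5 * T_RBM for k in range(6)])
print("oscillation amplitude, in phase    :", np.ptp(q_in[-4000:]) / 2)
print("oscillation amplitude, out of phase:", np.ptp(q_out[-4000:]) / 2)
```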
To illustrate resonant excitation of coherent phonons in nanotubes, we repeated our simulations for the (11, 0) nanotube using a train of six Gaussian pulses pumping at the E 22 transition energy (2.05 eV) where the pulse repetition period was in phase and 180 degrees out of phase with the period of the RBM phonon. The results are illustrated in Fig. 15. The upper panel of Fig. 15 shows the pump laser intensity as a function of time for the in-phase and out-of-phase cases. For comparison, we also plot the pump laser intensity for the single 50 fs Gaussian pulse used in our earlier simulation. In all three cases the fluence is taken to be 10 −5 J/cm 2 and hence the final density of photogenerated carriers were the same. We note that in the figure, the intensities of the two pulse trains are multiplied by a factor of six since the intensities of the Gaussian pulses scale inversely with the number of pulses if the fluence is held constant. The corresponding coherent phonon amplitudes for the RBM phonons are shown in the lower panel of Fig. 15. For the single Gaussian pulse, the coherent phonon amplitude is the same as that shown in Fig. 14 for the (11,0) nanotube. For the in-phase case, the coherent phonon amplitude is magnified after each Gaussian pulse and at long times is identical to the coherent phonon amplitude obtained using the single Gaussian pulse. Since it is the long time behavior of the oscillating part of the coherent phonon amplitude that determines the CP signal, the two resulting CP spectra are also identical. When the Gaussian pulses are 180 degrees out of phase, the oscil- lating part of the coherent phonon amplitude and hence the resulting CP spectra are completely suppressed. In Fig. 15 the predicted coherent phonon amplitudes at long times oscillate rapidly about the steady state valuē Q β = S β (t → ∞)/ω 2 β . In real nanotubes the driving function S β (t) will slowly vanish as the carriers recombine. In addition the coherent phonon amplitudes will slowly decay on a time scale of tens of picoseconds as evidenced in Fig. 3. D. Coherent phonon detection in mod 2 tubes The generation of coherent phonons results in periodic oscillations of the carbon atoms which in turn modulate the optical properties of the nanotube. These coherent phonon oscillations can be detected by measuring transient optical properties in pump-probe experiments. We simulate single color pump-probe measurements in which we pump with light linearly polarized along the tube axis and measure the transient differential gain as a function of probe delay for a probe pulse having the same energy and polarization as the pump. The coherent phonon spectrum is obtained by scanning the pump-probe energy and in our simulations, the pump fluence, duration, and FWHM spectral linewidth are assumed to be constant as we vary the pump-probe energy. The coherent phonon (CP) spectrum for mod 2 (11, 0) nanotubes is shown in Fig. 16. The bottom panel shows the absorption spectrum for light linearly polarized along the tube axis assuming a FWHM linewidth of 0.15 eV. The upper panel shows a contour map of the coherent phonon power spectrum as a function of pump-probe energy and photon energy. The CP intensity is proportional to the power spectrum and as we scan in photon energy two large peaks are observed at the E 11 and E 22 transitions at a phonon energy near 37.1 meV (300 cm −1 ) which corresponds to the RBM frequency of the (11,0) nanotube. Comparing the upper and lower panels of Fig. 
16, we can verify our earlier assertion that as we scan in pump energy the CP intensity at the RBM coherent phonon frequency is proportional to the absolute value of the first derivative of the absorption coefficient. We should point out that our theoretical model does not include many-body Coulomb effects and so the position and shape of the CP signal is due to modulation of the free carrier E 11 and E 22 transitions. Thus in our free carrier model, we see an asymmetric double peak at each transition with the stronger peak at low pump energy and the weaker peak at higher energy. In our simulations with 50 fs laser pulses, we excite coherent RBM phonon modes since the pulse duration is much less than the RBM phonon oscillation period. To excite the higher lying coherent phonon modes it is nec- To examine this case, we simulated CP spectroscopy in (11, 0) nanotubes using short 5 fs laser pulses. In qualitative agreement with the measurements of Gambetta et al., we find that the two strongest modes are the RBM and LO modes while the strengths of the oTO and iTO modes are found to be negligible. Our result for the iTO mode is consistent with the chirality dependent Raman G-band intensity in which the iTO signal is absent in zigzag nanotubes. 50 The CP spectra for the RBM and LO modes are shown in Fig. 17 where the differential gain power spectra at the RBM and LO energies (37.1 and 198 meV, respectively) are plotted as a function of pump-probe energy. Two strong features are seen near the E 11 and E 22 transition energies. In this example, the two curves have similar shapes but the LO CP spectrum (multiplied by a factor of 1000 in the figure) is much weaker than the corresponding RBM spectrum. We note that shortening the duration of the laser pulse from 50 fs to 5 fs enhances the strength of the E 22 peak relative to the E 11 peak. This can best be seen by comparing the bottom curves in Fig. 17 with the bottom curve in Fig. 21. It is useful to examine trends in the CP spectra within and between mod 2 semiconducting nanotube families by plotting the theoretical CP intensity at the RBM phonon frequency as a function of pump-probe energy. This is done in the left panel of Fig. 18 where we plot our theoretical CP intensity at the RBM frequency as a function of pump-probe energy for all nanotubes in Families 22 and 25. The curves for each nanotube are labeled with the nanotube chirality (n, m) and the RBM phonon energy in meV. In each nanotube, we see peaks in the CP spectra corresponding to E 22 transitions. Within a given family, the CP intensity tends to decrease as the chiral angle increases, i.e., as the chirality goes from (n, 0) zigzag tubes to (n, n) armchair tubes. From Fig. 18 we can also see that the theoretical CP intensity increases as we go from Family 22 to Family 25. The right panel of Fig. 18 shows the corresponding experimental CP spectra for the nanotubes in Families 22 and 25. Comparing experimental and theoretical curves in Fig. 18, we see that our theory correctly predicts the overall trends in the CP intensities. Since we are using pump probe methods to study an ensemble of micellesuspended nanotubes, the relative agreement between the theoretically calculated and experimentally measured CP intensities suggests that nanotubes of different chi-ralities in Families 22 and 25 in the micelle-suspended sample studied are equally probable and that the measured CP signal strengths are an intrinsic property of the tubes. 
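The derivative-like relation between the CP excitation profile and the absorption coefficient noted above is easy to visualize with a toy lineshape. In the sketch below a single Lorentzian of FWHM 0.15 eV centered at 2.05 eV stands in for a broadened E 22 absorption peak (an illustrative assumption, not the computed nanotube spectrum): the magnitude of its energy derivative vanishes at the transition energy and peaks symmetrically on either side, which is the symmetric double-peaked CP structure referred to in the discussion of Fig. 16.

```python
import numpy as np
from scipy.signal import find_peaks

def lorentzian(e, e0=2.05, gamma=0.15):
    """Toy absorption peak: Lorentzian centered at e0 (eV) with FWHM gamma (eV)."""
    return (gamma / 2) ** 2 / ((e - e0) ** 2 + (gamma / 2) ** 2)

energy = np.linspace(1.5, 2.6, 1101)
alpha = lorentzian(energy)                       # stand-in for the absorption coefficient
cp_profile = np.abs(np.gradient(alpha, energy))  # CP excitation profile ~ |d(alpha)/dE|

idx, _ = find_peaks(cp_profile)
print("absorption maximum at :", energy[np.argmax(alpha)], "eV")
print("CP profile maxima near:", energy[idx], "eV  (symmetric about the absorption peak)")
```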
There are discrepancies in the predicted pump-probe energies of the peaks on the order of 0.4 eV or less. A discrepancy of this size is expected since we have not included many-body Coulomb interactions in our theoretical model. It is well established that both the excitonic red shift and the self-energy blue shift are very large in nanotubes, with the latter exceeding the former. 9,51,52,53,54,55 We also note that the dielectric function of the surrounding medium also influences the excitonic transition energies. 56 There are also differences between the theoretical and experimental CP lineshapes in Fig. 18. If we compare theoretical and experimental CP spectra for the (12,1) nanotube, we see that both exhibit a double peaked structure. However, the lower energy theoretical peak is much stronger than the higher energy peak whereas the two experimental peaks have comparable strength. This discrepancy can be attributed to strong excitonic modification of the shape of the nanotube absorption spectrum whose time-dependent modulation of the probe pulse gives rise to the shape of the CP signal. The free carrier absorption edge is highly asymmetric while the excitonic absorption spectrum exhibits a symmetric peak at the exciton transition energy, thus accounting for the discrepancy. Our theory qualitativley agrees with experiment, but to obtain quantitative agreement, one must include details of the Coulomb interaction. Our experimental and theoretical results are also in qualitative agreement with the results of CP spectroscopy measurements previously reported by Lim et al. in Ref. 12. As these authors note, the tendency of the CP intensity to increase with family index is in contrast to the situation in resonant Raman scattering where the strength of the resonant Raman signal is observed to decrease as the family index increases. E. Coherent phonon detection in mod 1 tubes It is useful to perform a comparison between mod 1 and mod 2 semiconducting nanotubes with the same chiral angle. To this end, we compare the Family 22 (11, 0) mod 2 nanotubes with Family 26 (13, 0) mod 1 tubes. In both cases, the pump lasers have the same fluence (10 −5 J/cm 2 ), pulse duration (50 fs), and spectral linewidth (0.15 eV), while the pump energies correspond to the maxima in the broadened E 22 absorption features. For the (11, 0) tube we pump at 2.05 eV and for the (13, 0) tube we pump at 1.84 eV. The time-dependent photogenerated carrier densities per unit length are shown in the upper panel of Fig. 19. The middle and lower panels show the coherent phonon driving functions and corresponding coherent phonon amplitudes for RBM coherent phonons in the two cases. For (11, 0) mod 2 tubes, the coherent phonon driving function is positive while for (13, 0) mod 1 tubes, the driving function is found to be negative. Some insight into this behavior can be obtained by examining the k-dependent driving function kernel for coherent RBM phonons in (13, 0) nanotubes shown in Fig. 20. Comparing Fig. 20 with Fig. 12, we see that the driving function kernels for E 11 and E 22 transitions near k = 0 have opposite signs which accounts for the sign change. These results are supported by other studies reported in the literature. 18 The difference in the sign of the E 22 driving function kernels at k = 0 for the zigzag mod 2 (11,0) tube shown in Fig. 12 and the zigzag mod 1 (13,0) tube shown in Fig. 20 is due to a change in sign of the electron-phonon matrix element. 
This sign change in the electron-phonon matrix element for tubes with different mod numbers was also obtained independently by Jiang et al. in Ref. 18. Figure 1(a) in Jiang et al. 18 shows the electron-phonon matrix element for coherent RBM phonons excited at the E 22 transition as a function of the chiral angle θ for mod 1 and mod 2 tubes (the SII and SI curves in Jiang's Fig. 1(a)). For zigzag tubes, θ = 0 and from Jiang's Fig. 1(a) we can see that the sign of the electron-phonon matrix element for the mod 2 (SI) tube is positive while the sign of the electron-phonon matrix element for the mod 1 (SII) tube is negative. This is consistent with the theoretical results shown in our Figs. 12 and 20. We find that the CP intensity is very sensitive to the nanotube mod number. In general, the CP intensity in mod 2 semiconducting nanotubes is much larger than the CP intensity in mod 1 semiconducting tubes. This is illustrated in Fig. 21 where we plot the CP power spectra as a function of pump-probe energy at the RBM frequencies for the zigzag (13, 0) mod 1 and (11,0) mod 2 semiconducting nanotubes. For the (13, 0) and (11,0) tubes the RBM phonon energies are 31.5 (254 cm −1 ) and 37.1 meV (300 cm −1 ), respectively. Note that in Fig. 21 the mod 1 curve is multiplied by a factor of 5. In general, we find that CP intensities in mod 2 tubes are considerably larger than CP intensities in mod 1 tubes. This is consistent with the experimental results of Lim et al. as reported in Refs. 12 and 13. We also find that in mod 1 tubes, the E 11 feature is more pronounced than the E 22 feature in contrast to what is seen in the mod 2 case. The CP intensities as a function of chirality for nanotubes in two mod 1 families (Families 26 and 29) are shown in Fig. 22. We find that in all cases the E 11 features in the mod 1 tubes are much stronger than the E 22 features. Within a mod 1 family, the CP intensity of the E 11 feature is found to decrease with increasing chiral angle. We also find that the CP intensities decrease as the mod 1 family index increases. V. SUMMARY Using femtosecond pump-probe spectroscopy with pulse shaping techniques, we have generated and detected coherent phonons in chirality-specific semiconducting single-walled carbon nanotubes. The signals were resonantly enhanced when the pump photon energy coincides with an interband exciton resonance, and analysis of such data provided a wealth of information on the chirality-dependence of light absorption, phonon generation, and phonon-induced band structure modulations. To explain our experimental results qualitatively and quantitatively, we have developed a microscopic theory for the generation and detection of coherent phonons in semiconducting single-walled carbon nanotubes via coherent phonon spectroscopy. For extremely short laser pulses, we find that the two strongest coherent phonon modes are the RBM and LO modes. The CP spectrum of the LO mode is similar in shape to that of the RBM mode but is found to be much weaker. For the RBM modes, the CP intensity within a mod 2 family tends to decrease with chiral angle, and the de- crease in CP intensity with chiral angle is found to be much more pronounced for the E 11 feature. We also find that CP intensities are considerably weaker in mod 1 families in comparison with mod 2 families. In general, the E 22 CP intensities in mod 2 families are stronger than the E 11 features. 
For RBM modes in mod 1 tubes, the E 11 intensities are stronger than the E 22 intensities and tend to decrease with increasing chiral angle within a given family. As the family index increases, the E 11 CP intensity in mod 1 tubes decreases. For mod 2 nanotubes, we predict that the tube diameter will initially increase for E 22 photoexcitation and decrease for E 11 photoexcitation. In mod 1 nanotubes, the opposite is predicted to be the case, i.e., the tube diameter will initially decrease for E 22 photoexcitation and increase for E 11 photoexcitation. We compare our theoretical results with experimental CP spectra in mod 2 nanotubes and find that our theoretical model correctly predicts the overall trends in the relative strengths of the CP signal both within and between mod 2 families. We find discrepancies between our theoretical predictions with regard to the peak positions and lineshapes, which we attribute to Coulomb interaction effects that are not included in our calculations. For (11, 3) mod 2 nanotubes, we experimentally verified our theoretical prediction that the diameter of E 22 photoexcited nanotubes initially increases. However, we were unable to get a good sample for verifying our related prediction that the diameter of E 22 photoexcited mod 1 nanotubes initially decreases. This will be one of the goals of our future studies.

APPENDIX A: ELECTRONIC STATES

The tight-binding wavefunctions for the electronic states in carbon nanotubes are expanded in terms of the symmetry-adapted basis functions

|k, \mu\rangle = \sum_r C_r(k, \mu) \, |k, \mu, r\rangle   (A1)

where r = A, B labels the atoms in the two-atom unit cell, C_r(k, \mu) are expansion coefficients, and |k, \mu, r\rangle are symmetry-adapted basis functions. The symmetry-adapted basis functions are linear combinations of localized atomic π orbitals

|k, \mu, r\rangle = \frac{1}{\sqrt{N}} \sum_J e^{i \hat{k}(k,\mu) \cdot R_J} \, |J, r\rangle   (A2)

where N is the number of two-atom unit cells in the system and |J, r\rangle is a localized atomic π orbital on atom r in the two-atom unit cell at R_J. The positions of the two-atom unit cells (in unrolled graphene xy coordinates) are

R_J = j_1 a_1 + j_2 a_2   (A3)

where J = (j_1, j_2) and a_1 = (\sqrt{3}a/2, a/2) and a_2 = (\sqrt{3}a/2, -a/2) are the graphene basis vectors. The two-dimensional wavevector \hat{k}(k, \mu) appearing in the symmetry-adapted basis function expansion in Eq. (A2) is determined by imposing translational and rotational boundary conditions on the nanotube. Imposing the translational boundary condition we have

\hat{k}(k, \mu) \cdot T = kT   (A4)

where T = t_1 a_1 + t_2 a_2 is the nanotube translational vector which is parallel to the tube axis and has the length of the translational unit cell. Explicit expressions for t_1 and t_2 in terms of the chiral indices n and m can be found in Ref. 1 and are given by t_1 = (2m + n)/d_R and t_2 = -(2n + m)/d_R, where d_R = gcd(2n + m, 2m + n). The rotational boundary condition is

\hat{k}(k, \mu) \cdot C_h = 2\pi\mu   (A5)

where C_h = n a_1 + m a_2 is the chiral vector in unrolled graphene coordinates and \mu = 0, ..., N_hex - 1 is an angular momentum quantum number that labels the cutting lines in the simple zone folding picture. 1 From Eqs.
(A1)-(A5), we arrive at the symmetry-adapted tight-binding wavefunction

|k, \mu\rangle = \frac{1}{\sqrt{N}} \sum_{r,J} C_r(k, \mu) \, e^{i \phi_J(k,\mu)} \, |J, r\rangle   (A6)

with the phase factor \phi_J(k, \mu) \equiv \hat{k}(k, \mu) \cdot R_J given by

\phi_J(k, \mu) = \frac{\pi \mu \left[ (2n + m) j_1 + (2m + n) j_2 \right]}{n^2 + nm + m^2} + \frac{\sqrt{3}\, a k}{2} \, \frac{m j_1 - n j_2}{\sqrt{n^2 + nm + m^2}}   (A7)

Substituting the symmetry-adapted tight-binding wavefunction (A6) into the Schrödinger equation, we obtain, for each value of \mu, a 2 × 2 matrix eigenvalue equation for the electronic energies E_{s\mu}(k) and the expansion coefficients C_r(s, \mu, k), namely

\sum_{r'} H_{r,r'}(k, \mu) \, C_{r'}(s, \mu, k) = E_{s\mu}(k) \sum_{r'} S_{r,r'}(k, \mu) \, C_{r'}(s, \mu, k)   (A8)

where s = v, c labels the valence and conduction band states. The 2 × 2 Hamiltonian and overlap matrices are given by

H_{r,r'} = \sum_{J'} e^{i \phi_{J'}(k,\mu)} \, \langle 0, r | H | J', r' \rangle   (A9)

and

S_{r,r'} = \sum_{J'} e^{i \phi_{J'}(k,\mu)} \, \langle 0, r | J', r' \rangle   (A10)

In the sum over J' = (j'_1, j'_2) in Eqs. (A9) and (A10) we keep only the on-site and third nearest neighbor contributions since the parameterized matrix elements vanish at distances beyond the third nearest neighbor distance in graphene. Values of (j_1, j_2) for the first to fourth nearest neighbors of the A and B atoms in the two-atom unit cell J = (0, 0) are easy to work out and can be found in Table 2 of Ref. 57.
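To make the structure of Eqs. (A7) and (A8) concrete, the fragment below assembles and solves the 2 × 2 generalized eigenvalue problem for a single cutting line. The phase factor follows Eq. (A7) directly; the neighbor lists and the tight-binding and overlap matrix elements are left as user-supplied placeholders (the parameterized ETB values of Ref. 57 are not reproduced here), so this is a structural sketch rather than a working band-structure code.

```python
import numpy as np
from math import sqrt, pi
from scipy.linalg import eigh

def phase(j1, j2, k, mu, n, m, a=2.49):
    """Phase factor phi_J(k, mu) of Eq. (A7); k in 1/Angstrom if a is in Angstrom."""
    den = n * n + n * m + m * m
    return (pi * mu * ((2 * n + m) * j1 + (2 * m + n) * j2) / den
            + sqrt(3.0) * a * k / 2.0 * (m * j1 - n * j2) / sqrt(den))

def subband_energies(k, mu, n, m, neighbors, hamil, overlap):
    """Assemble and solve the 2x2 generalized eigenvalue problem H C = E S C of Eq. (A8).

    neighbors[(r, rp)]                : list of cells (j1, j2) contributing to the (r, rp) block
    hamil[(r, rp, j1, j2)]            : placeholder tight-binding matrix elements
    overlap[(r, rp, j1, j2)]          : placeholder overlap matrix elements
    The entries must be supplied so that the assembled H and S come out Hermitian
    (and S positive definite), as required by the generalized eigensolver.
    """
    H = np.zeros((2, 2), dtype=complex)
    S = np.zeros((2, 2), dtype=complex)
    for r in range(2):            # r, rp = 0 (A atom), 1 (B atom)
        for rp in range(2):
            for (j1, j2) in neighbors[(r, rp)]:
                ph = np.exp(1j * phase(j1, j2, k, mu, n, m))
                H[r, rp] += ph * hamil[(r, rp, j1, j2)]
                S[r, rp] += ph * overlap[(r, rp, j1, j2)]
    energies, coeffs = eigh(H, S)     # generalized Hermitian eigenproblem
    return energies, coeffs           # energies[0]: valence-like, energies[1]: conduction-like
```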
An optimized RNN-LSTM approach for parkinson’s disease early detection using speech features Received Nov 28, 2020 Revised Mar 2, 2021 Accepted Aug 19, 2021 Parkinson's disease (PD) is the second most common neurodegenerative disorder disease right after Alzheimer's and the most common movement disorder for elderly people. It is characterized as a progressive loss of muscle control, which leads to trembling characterized by uncontrollable shaking, or (tremors) in different parts of the body. In recent years, deep learning (DL) models achieved significant progress in automatic speech recognition, however, limited studies addressed the problem of distinguishing people with PD for further clinical diagnosis. In this paper, an approach for the early detection of patients with PD using speech features was proposed, a recurrent neural network (RNN) with long short-term memory (LSTM) is applied with the batch normalization layer and adaptive moment estimation (ADAM) optimization algorithm used after the network hidden layers to improve the classification performance. The proposed approach is applied with 2 benchmark datasets of speech features for patients with PD and healthy control subjects. The proposed approach achieved an accuracy of 95.8% and MCC=92.04% for the testing dataset. In future work, we aim to increase the voice features that will be worked on and consider using handwriting kinematic features. INTRODUCTION Parkinson's disease (PD) is a complex neurological illness that, being classified as a degenerative, chronic, and progressive disease that affects a person's movements [1], [2]. Most people are diagnosed during their 70s, although 15% of cases occur among people who are under 50 years of age. Its expansion rate is estimated to be 1.5% approximately for people aged over 65 years [3]. The Clinic pathological studies show that up to 25% of the patients with PD are diagnosed incorrectly [4], The accuracy of clinical diagnosis can reach approximately 90% within a period of 2 years and 9 months [5]. Diagnosing PD is rather difficult, up till now there is no blood test that can reveal whether a person has a PD or not. Such illness is usually diagnosed through clinical exams and brain scans. These methods are quite costly, sometimes erroneous, and need an elevated level of professional expertise. Machine learning (ML) is a technique for analyzing data, it automatically learns the information and attitudes of a system and perceives the complexity of patterns with ease [6]. Deep learning (DL) is considered a great evolution of machine learning. It is inspired by brain operationality; it uses a programmable neural [7] that authorizes the machines to make accurate decisions without needing interference from humans. A neural model with appropriate generalization can provide precise answers even when testing it with inputs that have never been experienced before in the training set [8], also DL offer high prediction performance compared to other ML methods such as support vector machine (SVM) and random forest (RF) [9]. In recurrent neural networks (RNN) with long short-term memory (LSTM), the impermanent correlations of the input data can be learned [10], which consists of blocks of memory that allows retaining input information for a long period [9]. The optimizer is a method to adjust the varied parameters of the model. optimizing the neural network is very beneficial for increasing the accuracy and reducing the loss. 
Instead of mapping inputs to outputs alone, the RNN-LSTM network has the capability of learning a mapping function from inputs to outputs over time. An explicit set of observations need not be pre-specified. The main contributions of this paper are:  Proposing an enhanced approach based on deep learning through using RNN-LSTM for early detection of PD using voice features.  Applying the proposed RNN-LSTM approach with a batch normalization layer after each hidden layer to standardize the outputs of the hidden layers.  Applying the adaptive moment estimation (ADAM) optimization algorithm for training the network by updating the weights of the network iteratively based on the training data while training. The rest of this paper is organized as; section 2 presents state-of-the-art studies for PD detection, section 3 describes the phases of the proposed approach, section 4 presents and discusses the obtained experimental results, section 5 presents conclusions and future work. RELATED WORK Classification techniques based on ML and DL would be a convenient tool for an accurate diagnosis to differentiate healthy people from individuals with PD. Zham et al. [11] used a naïve bayes (NB) algorithm on handwriting tasks and spiral drawing, different measures were used for each task. The fourth task has achieved the best classification accuracy with 83.2%. Taleb et al. [12] used a feature selection technique on handwriting tasks based on statistical tests and the SVM classifier. The feature giving the highest classification performance is picked up firstly. Features were provided separately one by one as an input to the SVM classifier. The highest classification accuracy obtained of a solitary feature was 87.5%. Then, features were fed continuously one after another until they get 86 features. The best classification accuracy of a group of features was 96.875% for N=12 features. Drotár et al. [4] compared three different classifiers: Knearest neighbors (K-NN), ensemble AdaBoost classifier, and SVM on parkinson's disease handwriting based on pressure and kinematic features using (PaHaW) dataset. SVM obtained the best result of all three classifiers with an accuracy of 81.3%. Also, Drotár et al. [13] used SVM on handwriting features to classify the PD patients, the accuracy was 88.1% for 162 handwriting features. Moreover, in Drotár et al. [14] they used SVM classifier for measuring the in-air and on-surface kinematic variables of the handwriting features of the PD patients. The achieved accuracies were 84% for inair movement, 78% for on-surface movement, and 85% for both in the air + on surface movement. On the other hand, in [15]. Afonso et al used the optimum-path forest (OPF), deep-hierarchical OPF (dOPF), and kmeans algorithms for the identification of parkinson's disease on the handwriting of spiral and meander features, the best result was for the K-means algorithm with an accuracy=84.17%. Pereira et al. [16] applied a convolutional neural network (CNN) on spiral and meander hand drawing features of PD patients, the accuracy for 128*128 meander images was 87.14% and the accuracy for 128*128 spiral images was 77.92%. Also, Pereira et al. [17] used three classifiers NB, OPF, and SVM on the handwriting of spiral drawing, the NB classifier obtained the best result with accuracy=78.9%. Heremans et al. [18] used handwriting features to estimate the quality of writing in PD patients with and without freezing of gait (FOG). The writing qualities were severely affected by patients with FOG. 
Grover et al. [19] in this survey used deep neural network (DNN) on UCI's voice dataset with three layers: input, hidden and output layer. The classification accuracy was 94.4% for training and 62.7% for testing. Saikia et al. [20] used an artificial neural network to classify PD patients from healthy controls in addition to providing the different progression stages of the disease based on the Electroencephalogram and the Electromyogram features. In [21] proposed a model for detecting the PD disease via smell signature using two sensors to analyze the sweat components and comparing these components between the PD and non-PD individuals. In [22] compared the classification accuracies of five different classifiers, the SVM, NB, KNN, DT, and the LDA, relying on gait dynamics. The average accuracy of the first three classifiers was 96.8% and 93.5% for the last two classifiers. Shinde et al. [23] used the rate of eye blinking per minute to determine parkinsonism, where if the rate is higher than ten blinks per minute the individual is considered as having PD. In order to enhance the 2505 detection of patients with PD, in this paper, we proposed a RNN with LSTM and ADAM optimizer based on different voice features. Despite that LSTM requires some memory, RNN with LSTM can deal with large datasets without increasing the size of the model. Also, LSTM is more effective in comparison to the traditional time series models as it learns long-term dependencies that use former time proceedings to inform the next ones, so it allows information to persist and achieves best results. The proposed model overcomes the disadvantage of existing models with respect to the limited dataset and features that seriously affect the accuracy of PD prediction. In addition to emphasizing the benefit of accumulation, as traditional neural networks applying direct feedforward appears shortcoming, meanwhile, RNN with LSTM is considered as a loop network that learns long-term dependencies, which enhance the prediction. Different measures were used to validate the model. RESEARCH METHOD The proposed model embraces three main phases listed is being as; preprocessing phase, optimization phase, and classification phase. The framework of the proposed model for diagnosing parkinson's diseases based on speech features is illustrated in Figure 1. The proposed model structure consists of seven layers (input layer, 5 hidden layers, and the output layer). LSTM input layer contains 27 neurons a neuron for each feature, five LSTM hidden layers, a 27 neurons dense layer followed by a two-neuron dense layer as an output layer. Each LSTM layer is appended by a dropout and a batch normalization layer. The dropout regularizes the input and the recurrent connections to the LSTM units by excluding some inputs from activation (drops them out) based on statistical calculations. The batch normalization layer standardizes the outputs of the hidden layer by normalizing the values coming from the previous layer. The batch normalization layer reduces the overfitting as it has a slight regularization effect which improves the performance of the model. Finally, a 27 neuron dense layer followed by a fully connected dense layer, where all neurons in the previous layer are connected to that layer, the last dense layer works as the output layer. The following subsections illustrate the details of each phase. 
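A minimal Keras sketch of the network just described is given below. The dropout rate (0.2), the Adam learning rate (0.001), and the sparse_categorical_crossentropy loss are the values reported later in the text; the number of units in each LSTM layer, the relu activation on the penultimate dense layer, and the treatment of each sample as a length-one sequence of 27 features are assumptions made for illustration.

```python
from tensorflow.keras import layers, models, optimizers

N_FEATURES = 27   # one neuron per voice feature (22 when the second dataset is used)

def build_model(n_features=N_FEATURES):
    model = models.Sequential()
    model.add(layers.Input(shape=(1, n_features)))    # each sample: a length-1 sequence of features
    for _ in range(5):                                 # five LSTM hidden layers
        model.add(layers.LSTM(n_features, return_sequences=True))
        model.add(layers.Dropout(0.2))                 # dropout regularization (rate 0.2)
        model.add(layers.BatchNormalization())         # standardize the hidden-layer outputs
    model.add(layers.Flatten())
    model.add(layers.Dense(n_features, activation="relu"))   # 27-neuron dense layer
    model.add(layers.Dense(2, activation="softmax"))          # healthy control vs PD
    # The paper additionally applies a learning-rate decay of 1e-4 (the legacy `decay`
    # argument of older Keras optimizers), which is omitted here for portability.
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```

Training would then proceed along the lines of model.fit(x_train.reshape(-1, 1, 27), y_train, batch_size=104, epochs=10), matching the batch size and epoch count quoted below.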
Preprocessing phase
This phase worked to collect and prepare the data for the following phases to improve the results and suppress the effect of outliers. Min-max normalization was applied so that every feature has the same range of values and each feature is equally important. This is done via (1):

x' = (x - x_min) / (x_max - x_min)   (1)

This process helps to obtain small standard deviations, which suppresses the effect of outliers.

Optimization phase
The main goal of deep learning and machine learning is to reduce the difference between the actual output and the predicted output, which is quantified by the cost (loss) function. To ensure adequate generalization of an algorithm and to minimize the cost function by finding optimized values of the weights, an optimization algorithm is used while training the neural network. This yields better predictions for data that were not seen before. In the proposed model two different optimizers were used: the commonly known SGD optimizer and the most widely used optimizer for deep learning models, the ADAM optimizer. The ADAM optimizer achieved the best performance, and this is shown in subsection 3.2.4. The ADAM optimizer [24], [25] is one of the most recommended optimization techniques; it essentially combines the advantages of the stochastic gradient descent (SGD) with momentum algorithm and root mean square propagation (RMSProp). The advantages of ADAM can be summarized as follows:
- The ADAM algorithm does not have high memory requirements.
- The ADAM algorithm makes use of the average of the second moments of the gradients, not only adapting the learning rates based on the average of the first moments. The first moment is the mean, and the second moment is the uncentered variance.
- The ADAM algorithm works very well even with little tuning of its hyperparameters.
The ADAM optimizer works according to the following steps: a. Initialize the first moment m_0 = 0, the second moment v_0 = 0, and the time step t = 0. b. Update the biased first and second moment estimates, as shown in (2) and (3):

m_t = β_1 m_{t-1} + (1 - β_1) g_t   (2)
v_t = β_2 v_{t-1} + (1 - β_2) g_t^2   (3)

where g_t is the gradient at step t, and β_1 and β_2 are hyperparameters with default values of 0.9 and 0.999 respectively. ε is the learning rate, ε = 10^-3. The ADAM optimizer is shown in Figure 2.

Classification phase
The proposed model applied RNN with LSTM for classifying healthy individuals from PD patients and used the ADAM optimizer to update the weights of the network iteratively; this is illustrated in more detail in the next subsections.

Recurrent neural networks
RNN is a generalization of a feedforward neural network that contains an internal memory. In an RNN the output for the current input relies on the prior computation. After the output is obtained, it is copied and fed back into the recurrent network. To make a decision, RNNs use the internal memory to operate on a series of inputs in which all the inputs are associated with each other.

Long short-term memory
LSTM uses back-propagation for training. An LSTM network has mainly three gates: the input gate, the forget gate, and the output gate. The input gate uses a sigmoid function to decide which values from the input shall be activated and modify the memory. The forget gate determines which details from the previous state can be discarded from the block. Finally, the output gate controls the output.

Regularization with dropout
In general, the most common problem that neural network models suffer from is overfitting.
Overfitting could be explained as that the model has a good performance with the training dataset but does not perform very well with the test dataset. To overcome this problem, the proposed model applied the dropout regularization technique. The dropout is carried out on both the training and testing states. The dropout parameter value used was 0.2. The recurrent neural networks model with adam optimizer The RNN model comprises an Input layer, then passed to five LSTM hidden layers, and the last layer is the output layer. Now, elaborating on the application of the ADAM optimizer on the proposed Recurrent Neural Networks model in more detail. The dataset is loaded and all the data is normalized into values between 0 and 1. The training data is processed for a batch size of 104 sample records and 10 epochs. The training data is compiled with the ADAM optimizer which updates the weights of the network iteratively, using sparse_categorical_crossentropy loss function with learning rate=0.001 and decay=1e-4. The network structure of the proposed model is shown in Table 1. Table 2 shows the proposed model performance with ADAM optimizer and the performance of the typical RNN "RNN with stochastic gradient descent (SGD) optimizer". From Table 2 the ADAM optimizer has improved the accuracy of the proposed model by approximately 15.6% more than the typical RNN. EXPERIMENTAL RESULTS AND DISCUSSION In this section, we discuss the optained results through presenting the used datasets with brief details about the features of each dataset, the experimental settings, the measures used to validate the model performance. Also, we present a comparison between the proposed model and the model presented by Grover et al. [19] that addresses the same problem based on the accuracy performance and the structure of the two models. Moreover, we examine the accuracies and some validation measures of the different ML algorithms such as RNN with ADAM optimizer, RNN with SGD, SVM, and K-NN that we applied on the two datasets in order to highlight the best model for detecting PD. Finally, we show a performance comparison between the proposed model and other related works. Datasets and experimental setting In our experiment, we work with Python programming language along with TensorFlow and Keras libraries. The proposed model implemented a RNN with LSTM along with ADAM optimizer and a sparse_categorical_crossentropy loss function. We also consider the presented model of [19] that used a feedforward neural network with three hidden layers. Two benchmark datasets of speech features are used in this study. The first PD dataset (DS1) is the parkinson's telemonitoring voice dataset from the UCI public repository of datasets [26]. This dataset consists of 1040 samples for training and 168 samples for testing with 27 voice features. The second dataset (DS2) is created by Max Little of the University of Oxford, in collaboration with the National Centre for Voice and Speech, this dataset contains 195 samples 130 samples for training, and 65 samples for testing with 22 voice features [27]. When applying the second dataset, we modified the number of neurons in the hidden layers of the network to be 22 neurons according to the number of the voice features and kept the same network structure. Details of the features of both dataset's are listed in Table 3. Results We used different measures to validate our model, these measures are accuracy, recall, precision, and F-score. 
Where true positive (TP), true negatives (TN), false positive (FP), and false negatives (FN) as shown in (8) The accuracy of a model is a method to measure how the model correctly classifies the data. It is the ratio between the correctly predicted samples to the whole number of the prediction samples. Precision is the ratio of the rightfully predicted as positive by the model to all positives, in other words, precision clarifies how many predicted PD patients are actually PD. Recall measures how correctly the model identifies true positives, in the proposed model the recall shows how many PD patients are correctly predicted. F-score is the average of the recall and precision. The obtained classification accuracy of our model on the first dataset was 95.8%, in comparison to the proposed methodology by Grover et al. [19], which was 62.7%. This shows that our proposed model has the discrimination of 33.1% for the classification accuracy over the 2509 methodology presented in [19]. Table 4 presents a brief comparison between the structure of the two models and the accuracy performance of each model. Table 4 the proposed approach had a higher accuracy than the approach of Grover et al. [19] by approximately 33%. Different ML algorithms were applied to find out the best model for predicting the possibility of having parkinson's disease, these algorithms are the RNN with ADAM optimizer, RNN with Figure 3 shows that the RNN model with ADAM optimizer on the first dataset (DS1) increased the accuracy of the classification by 15.6% in comparison to the RNN with SGD, achieved better classification accuracy by 5.8% than the SVM algorithm, and improved the accuracy by 1.9% than the K-NN. Also, Figure 3 illustrates that the RNN model with ADAM optimizer has maintained the best accuracy performance on the second dataset (DS2) with a difference of 9.7%, 7.4%, and 10.7% versus the RNN with SGD, SVM, and the KNN models respectively. These results have shown that the RNN model with ADAM optimizer has achieved the best classification result on both voice datasets. Table 5 shows the performance of these models on the two datasets based on the recall, precision, and the F-score. The achieved result of the different models applied on the second dataset (DS2) could have lower performance due to the small number of samples in comparison to the first dataset (DS1). Table 6 compares the validation performance between previous surveyed studies with different models and datasets with the performance of the proposed approach for detecting PD. Moreover, the matthews correlation coefficient (MCC) of the proposed model with the first dataset (DS1) was calculated, and it gives 92.04%. MCC considers all the TP, FP, TN, and FN values, and the high value of the MCC (near to 1) means that the two classes were properly predicted, even in case one of the two classes is disproportionately represented. MCC can be calculated from (12). The elapsed time for the whole process was 20 minutes with 104 epochs. Each epoch takes approximately 11 seconds. CONCLUSION In this paper, we presented a model with the aim to diagnose parkinson's disease with less human interference and in a much cheaper and more efficient way. A RNN with LSTM and ADAM optimizer was used with sparse_categorical_crossentropy loss function and the SoftMax activation function. The model was applied in two different voice datasets, and multiple measures were computed to evaluate the model performance. 
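For reference, each of these measures follows directly from the confusion-matrix counts (TP, TN, FP, FN) defined in the results section; the short sketch below collects the standard formulas, including the MCC reported for the first dataset. It is a generic helper written for illustration, not code from the paper.

```python
import numpy as np

def classification_measures(tp, tn, fp, fn):
    """Standard validation measures computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)                  # how many predicted PD cases are truly PD
    recall = tp / (tp + fn)                     # how many true PD cases are detected
    f_score = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score, "mcc": mcc}
```

The same values can also be obtained from scikit-learn's accuracy_score, precision_score, recall_score, f1_score, and matthews_corrcoef.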
The achieved accuracy on the first dataset is 95.8%, the recall is 100%, the precision is 92.3%, and the F-score is 96%. For the second dataset, the proposed approach obtained an accuracy of 82.2%, a recall of 99%, a precision of 82.2%, and an F-score of 90.24%. In future work, we plan to consider more voice features along with other kinematic features, such as handwriting features.
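For completeness, the validation measures reported above (together with the MCC) can be computed directly from confusion-matrix counts, as in the short sketch below; the counts in the example call are illustrative placeholders, not the paper's confusion matrix.

```python
import math

def classification_measures(tp, tn, fp, fn):
    # Accuracy, precision, recall, F-score (harmonic mean), and MCC.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score, "mcc": mcc}

# Illustrative counts only:
print(classification_measures(tp=60, tn=100, fp=5, fn=3))
```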
Outbreak of pandemic influenza A/H1N1 2009 in Nepal Background The 2009 flu pandemic is a global outbreak of a new strain of H1N1 influenza virus. Pandemic influenza A (H1N1) 2009 has posed a serious public health challenge world-wide. Nepal has started Laboratory diagnosis of Pandemic influenza A/H1N1 from mid June 2009 though active screening of febrile travellers with respiratory symptoms was started from April 27, 2009. Results Out of 609 collected samples, 302 (49.6%) were Universal Influenza A positive. Among the influenza A positive samples, 172(28.3%) were positive for Pandemic influenza A/H1N1 and 130 (21.3%) were Seasonal influenza A. Most of the pandemic cases (53%) were found among young people with ≤ 20 years. Case Fatality Ratio for Pandemic influenza A/H1N1 in Nepal was 1.74%. Upon Molecular characterization, all the isolated pandemic influenza A/H1N1 2009 virus found in Nepal were antigenically and genetically related to the novel influenza A/CALIFORNIA/07/2009-LIKE (H1N1)v type. Conclusion The Pandemic 2009 influenza virus found in Nepal were antigenically and genetically related to the novel A/CALIFORNIA/07/2009-LIKE (H1N1)v type. Background The 2009 flu pandemic is a global outbreak of a new strain of H1N1 influenza virus, often referred to as swine flu [1]. The virus is a novel strain of influenza [2]. This new pandemic H1N1 influenza strain contained genes from five different flu viruses: North American swine influenza, North American avian influenza, human influenza, and two swine influenza viruses typically found in Asia and Europe [3]. Due to the genetic mutations in its hemagglutinin (HA) protein, the influenza viruses can escape from the host defense mechanisms and thus to be able to continuously infect human and other species [4,5]. On June 11, 2009, the ongoing outbreak of Influenza A/H1N1 was officially declared by the WHO to be the first influenza pandemic of the 21 st century with new strain of Influenza A virus subtype H1N1 identified in April 2009 [6] . Till May 30, 2010 worldwide update by World Health Organization (WHO) more than 214 countries have reported laboratory confirmed cases of pandemic influenza H1N1 2009, including over 1,8,114 deaths [7]. Nepal has started screening febrile travelers with respiratory symptoms from affected countries for the Pandemic influenza A (H1N1) since April 27, 2009, and the first case was detected on June 21, 2009 and introduction of the disease to the country was declared on June 29. Community transmission of Pandemic Influenza A/H1N1 2009 was declared on 15 October onwards. This study reflects the actual outbreak situation and its severity of Pandemic Influenza A/H1N1 in Nepal. Methods This is a Laboratory based prospective cross-sectional study carried out at National Public Health Laboratory from April 2009 to May 2010. Initially during the pandemic declaration of H1N1, samples were collected from patients of all age groups from both gender meeting the criteria of case definition of influenza like illness (ILI) with history of international travel from country of confirmed Pandemic H1N1 or close contact with confirmed H1N1 infected persons or with shortness of breathing or hospital admitted patients. As the community outbreak of pandemic influenza A/H1N1 was declared, sample collection was done by random sampling method from the patients meeting case definition of ILI. The patients already on antiviral treatment were excluded from the study. 
Influenza like illness (ILI) is defined as those who has fever ≥ 38°C with at least one respiratory symptoms such as cough, rhinorrhoea or sore throat [8]. Posterior pharyngeal swabs were collected into Viral Transport Medium (VTM). For transportation from outside hospitals and outbreak area, Specimens were kept on ice box (2-8°C) with bio-safety precaution and transported to the National Public Health Laboratory within 48 h after collection. At NPHL, Laboratory diagnosis of Pandemic influenza A/H1N1 was made by one step probe-based Real Time-Polymerase Chain Reaction (PCR) on Posterior pharyngeal swabs. Initially, samples were tested for Universal influenza A, Once Universal influenza A was positive, it was further tested for Swine specific A followed by Swine specific H1. According to CDC guideline, Presumptive positive for Pandemic influenza A/H1N1 was declared if RT-PCR give positive results from either swine A or swine/H1 or positive from both tests [9]. All the specimens which give presumptive negative for Pandemic influenza A/H1N1 but universal influenza A positive were tested for seasonal influenza A (H1, H3 and H5) and those specimens negative for all swine A/ swine/H1 and universal influenza A were tested for Influenza B. Quality assurance and accreditation of Molecular Laboratory five samples presumptive positive for Pandemic influenza A/H1N1 at our laboratory were retested at Microbiology Department of University of Hongkong, Hongkong and the concurrence was 100%. We are regularly participating in the WHO External Quality Assessment Programme for the detection of Influenza virus type A and type B by PCR from panel 8. Three separate primer/probe sets for Universal influenza A(Inf A), Universal swine specific (swFluA) and Swine specific H1 (swH1) were used. The internal positive control for human nucleic acid was RNase P primer/probe set which targets for human RNase P gene. The primers and probes used in the RT-PCR system were as described in Table 1. RT-PCR system was performed as per protocol provided by Center for Disease Control, USA. The following RT-PCR program was used in this study: Reverse Transcription was at 50°C for 30 min, Taq inhibitor activation at 95°C for 2 min, PCR amplification cycle was at 95°C for 15 sec and 55°C for 30 sec with 45 times repetition. Fluorescence data (FAM) was collected during the 55°C incubation step [9]. Analysis in RT-PCR When all controls i.e. Negative, Positive and human Rnase P meet the requirements, a specimen is considered positive if the reaction growth curves cross the threshold line within 40 cycles. Similarly, a specimen is considered negative if growth curves do not cross the threshold within 40 cycles [9]. Virus typing Blindly selected few positive samples of Pandemic influenza A/H1N1 from RT-PCR were sent to the WHO reference laboratory of Center for Disease Control and Prevention Division (CDC), Atlanta. All samples were isolated and antigenically characterized by Haemagglutination Inhibition Assay (HAI). Statistical tools Statistical tools like Probability were calculated by using Statistical Package for the Social Sciences (SPSS) programme. Results A total of 609 patients with suspected Pandemic influenza A/H1N1 were tested at National public health laboratory during the study period. All the samples were confirmed by Real-Time PCR. Out of these samples, 172 (28.3%) were Pandemic influenza A/H1N1 positive and 130 (21.34%) cases were seasonal influenza A as in Table 2. 
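The testing algorithm described in the Methods above is essentially a small decision flow over the RT-PCR targets. The sketch below is a schematic illustration of that flow, not part of the cited CDC protocol; the function name, argument names, and example Ct values are hypothetical, and a target is treated as detected when its growth curve crosses the threshold line within 40 cycles.

```python
CT_CUTOFF = 40  # growth curve must cross the threshold within 40 cycles

def detected(ct):
    # ct is None when the growth curve never crosses the threshold line.
    return ct is not None and ct < CT_CUTOFF

def classify_sample(ct_inf_a, ct_sw_flu_a, ct_sw_h1, ct_rnase_p):
    if not detected(ct_rnase_p):
        return "invalid: internal control (human RNase P) not detected"
    if detected(ct_inf_a):
        # Presumptive pandemic A/H1N1 if either swine-specific target is positive.
        if detected(ct_sw_flu_a) or detected(ct_sw_h1):
            return "presumptive positive for pandemic influenza A/H1N1"
        return "influenza A positive: subtype for seasonal H1, H3, H5"
    return "universal influenza A negative: test for influenza B"

# Hypothetical example values:
print(classify_sample(ct_inf_a=28.4, ct_sw_flu_a=30.1, ct_sw_h1=31.0, ct_rnase_p=25.0))
```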
Due to heavy work load and priority for identification of pandemic H1N1, all the Seasonal influenza-A positive cases were not sub-typed. A sub-set of 88 seasonal-A positive samples, till November 2009 were subtyped and found 70(79.5%) were seasonal H3, 8(9.09%) were seasonal H1 and 10(11.36%) were unsubtypeable seasonal influenza A. Among 219 negative cases for Universal Influenza A were tested for Influenza B and found only one positive case. Ten randomly selected positive cases of Pandemic influenza A/H1N1 by RT-PCR were isolated and antigenically characterized by Haemagglutination Inhibition Travelling history of pandemic influenza cases before community transmission was as shown in Figure 3. Among them, largest no of cases were found from US. After community outbreak, most of the cases of Pandemic influenza were from Kathmandu district followed by Kaski and Chitwan (Figure 4). All the confirmed cases of Pandemic influenza A/ H1N1 were in the range of age group from 1-74 with mean age 21 years. Most of the cases were found in the age group 11-20 followed by 21-30 and 0-10. Among the positive cases 119(69.18%) were male and 28 (30.82%) were female ( Figure 5 and 6). Till May 2010, three death cases were reported due to Pandemic influenza A/H1N1. All were Female patients with age 31, 29 and 23 (Figure 7). Discussion Since the middle of March 2009, infections with the new influenza A (H1N1) strain started to occur in Mexico, and the first two cases in the United States occurred in late March 2009, although they were not confirmed until April 15, 2009 [11]. The rapid global spread indicates towards the influence of international air travel on influenza [12]. According to the data from WHO till March 2010, this new influenza A (H1N1) was estimated to have a case-fatality rate (CFR) of 1.28%. In this study, by May 28 2010, a total of 609 suspected patients were tested for Pandemic influenza A/H1N1 2009. Out of the collected samples, 130 (21.34%) were seasonal influenza A positive and 172 (28.3%) were Pandemic influenza A/H1N1 positive. The mean age of patients with confirmed Pandemic influenza A/H1N1 was 21, whereas the mean age of the positive cases of Seasonal influenza A was 28. Patients with pandemic (H1N1) 2009 were significantly younger (p = 0.04, χ2 = 8.21) than patients with Seasonal influenza A (< 20 years). A characteristic feature of the H1N1 pandemic is that it disproportionately affected so far children and young adults [13]. Similar results were observed in different parts of the world: In Saudi Arabia, the age of the cases ranged between 1 and 56 years with mean (SD) of 24.2 (14.4) years [14]. One of the early studies from the USA showed that although the age of pandemic influenza A (H1N1) patients in the study ranged from 3 months to 81 years, 60% of patients were 18 years of age or younger. In most countries, the majority of Pandemic influenza A (H1N1) cases have been occurring in young people, with the median age estimated to be 12 to 17 years in Canada, the USA, Chile, Japan, and the UK. Most of the pandemic influenza cases in young age people indicate towards partial immunity to the virus in the older population [15]. The overall case fatality ratio (CFR) found in this study was 1.74% as 3 death cases were reported till May 28, 2010. All the death cases were found in female with below 35 years of age. The data found in this study is similar with the other literatures. 
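As a quick arithmetic check, the headline positivity proportions and the case fatality ratio discussed here follow directly from the counts reported above:

```python
total_tested = 609   # suspected cases tested
pandemic_pos = 172   # pandemic influenza A/H1N1 positive
seasonal_pos = 130   # seasonal influenza A positive
deaths = 3           # deaths attributed to pandemic A/H1N1 by May 2010

print(f"Pandemic A/H1N1 positivity: {100 * pandemic_pos / total_tested:.1f}%")
print(f"Seasonal influenza A positivity: {100 * seasonal_pos / total_tested:.1f}%")
print(f"Case fatality ratio: {100 * deaths / pandemic_pos:.2f}%")  # 1.74%
```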
The overall case fatality rate has been less than 0.5%, and the wide range of estimates (0.0004 to 1.47%) reflects uncertainty regarding case ascertainment and the number of infections [16][17][18]. The case fatality rate for symptomatic illness was estimated to be 0.048% in the United States [19] and 0.026% in the United Kingdom [20]. In contrast to seasonal influenza, most of the serious illnesses caused by the pandemic virus have occurred among children and nonelderly adults, and approximately 90% of deaths have occurred in those under 65 years of age [21]. Conclusion In Nepal, mostly young people were affected by this wave of pandemic influenza, and all the isolated pandemic influenza A/H1N1 2009 viruses were antigenically and genetically related to the novel A/CALIFORNIA/07/2009-LIKE (H1N1)v type.
Ophthalmic Vein Thrombosis Associated with Factor V Leiden and MTHFR Mutations Superior ophthalmic vein thrombosis (SOVT) is a rare clinical entity that may be associated with hypercoagulability status. We present a case of a 77-year-old woman who presented to the emergency department complaining of eye ptosis, chemosis and conjunctival congestion in the right eye (RE). The ophthalmological examination revealed best-corrected visual acuity (BCVA) was 0.5 for the right eye (RE) 0.5 and 0.06 for the left eye (LE). Intraocular pressure (IOP) was 25 mmHg in RE and 14 mmHg in LE. Non-contrast computed tomography (CT) of the brain and orbits revealed a hyperreflectivity at the level of the right ophthalmic vein and inferior rectus muscle hypertrophy. An extensive hypercoagulable panel was completed and we found a positive result for Factor V Leiden (heterozygous mutation) and methyl-enetetrahydrofolate reductase (MTHFR-C677T homozygous mutations). Systemic steroidal anti-inflammatory and anticoagulant treatments were started immediately. Gradual resolution of symptoms was noted during the hospitalization, and BCVA in her RE was established at 0.7 at the 10-week follow-up. Ophthalmic vein thrombosis is a rare clinical condition (with an incidence of 3-4 cases/million/year) but with particularly important effects on patients' lives; therefore, early diagnosis and treatment are critical in these cases [1,2]. Clinical, ophthalmic vein thrombosis manifests with acute orbital signs such as unilateral ptosis, chemosis, ophthalmoplegia and decreased visual acuity [1]. Superior ophthalmic vein thrombosis is a rare pathology in current practice with a multifactorial etiology [2]. Clinically, symptoms are often acute, with eye pain, chemosis, ptosis, conjunctival congestion and decreased visual acuity [6]. In these cases, risk factors can be local or systemic but usually include at least one component of the Virchow triad (hypercoagulability, hemodynamic changes (stasis, turbulence), endothelial injury/dysfunction) [7]. The diagnosis is usually based on CT or MRI results and according to the patient's symptoms. In our case, CT scan revealed a hyperreflectivity at the level of the right ophthalmic vein and inferior rectus muscle hypertrophy ( Figure 1). Figure 1. A 77-year-old woman, with hypertensive pathology, presented to the emergency department complaining of eye ptosis, chemosis and conjunctival congestion. The patient was known to have high myopia (both eyes) and amblyopia (LE). The ophthalmological examination revealed bestcorrected visual acuity (BCVA) on the right eye 0.5 and 0.06 for left eye. Intraocular pressure (IOP) was 25 mmHg in RE and 14 mmHg in LE. The anterior segment of the left eye was normal. In the RE we noticed diffuse conjunctival congestion, chemosis and eyelid ptosis. Ocular movements were normal in both eyes. Pupillary light reflex was bilaterally intact. Hertel's exophthalmometry revealed a proptosis of 20 mm in the right eye vs. 17 mm in the left eye. Non-contrast computed tomography (CT) of the brain and orbits revealed a hyperreflectivity at the level of the right ophthalmic vein (a) and inferior rectus muscle hypertrophy (b). Routine biochemical examination showed an increase of inflammatory markers (C-reactive protein, fibrinogen) and a high D-dimers value. Blood culture and a bacteriological conjunctival exam were negative. Additionally, other secondary causes were excluded (tumors, autoimmune diseases, infections). 
(a) Axial sections of non-contrast head CT scan-hyperreflectivity of the right ophthalmic vein. (b) Non-contrast head CT coronal scan-inferior rectus muscle hypertrophy. An increased homocysteine level may be associated with this condition. Methylenetetrahydrofolate reductase (MTHFR) normally catalyzes the conversion of 5,10-methyltetrahydrofolate to 5-methyltetrahydrofolate. Even with a normal level of folic acid in the body, MTHFR activity slows down when the MTHFR gene C677T polymorphism is present. As a result, hyperhomocysteinemia can produce severe damage to the vascular endothelial cells [14]. Some studies claim that hyperhomocysteinemia is independently associated with an increased risk of thrombosis [15]. Factor V Leiden mutation (FVL) is one of the most common genetic risk factors for venous thromboembolic disease. Factor V mutations are also known to potentiate the effect of MTHFR on deep vein thrombosis [16]. This mutation is a point mutation in the factor V gene in which glutamine is substituted for arginine at position 506. As a result, the risk of thrombosis rises due to the activated protein C resistance (APC-R). Functional resistance to APC-R assays and genetic testing using DNA-based techniques can both be used to identify the FVL mutation and differentiate between heterozygotes and homozygotes. In the Caucasian community, the FVL mutation was discovered to be the most prevalent inherited thrombophilic condition, accounting for up to 37% of venous thrombosis cases [17]. In our case, ophthalmic vein thrombosis may be a consequence of the synergism between the two hematological abnormalities. There is a significant risk of developing other ocular complications due to the hematological mutation such as retinal vein occlusion (RVO) or retinal artery occlusion (RAO). It has been proven that this mutation is related to venous thromboses. Systemic inflammation leads to RVO through the induction of systemic hypercoagulability. Additionally, certain studies have demonstrated that hyperhomocysteinemia, particularly in patients with the MTHFR C677T gene variant, is a significant risk factor for RVO [14]. Regarding arterial occlusions, the connection between these events and the mutations of Factor V Leiden and MTHFR are still controversial. However, the presence of coagulation disorders should be suspected especially in young people [18] and cases where their simultaneous presence was proved were documented in the specialized literature [19,20]. The onset and the symptoms in the presented case were typical for this pathology with ptosis, chemosis and conjunctival congestion. In rare cases, thrombosis of the superior ophthalmic vein can progress to thrombosis of the cavernous sinus [5]. Most of the patients present with unilateral ocular complaints, although bilateral ocular involvement has been reported as well [21]. The treatment of thrombosis of the superior ophthalmic vein depends on the etiology. Additionally, it depends on the severity of the signs and symptoms and associated systemic diseases. It is important to rule out an infectious cause (that would require antibiotic treatment). In aseptic cases, anticoagulant treatment can be initiated, but only after an adequate assessment of the associated bleeding risks [17]. Additionally, treatment with corticosteroids can be useful in reducing orbital inflammation and congestion [22]. 
In the present case, the initiation of anticoagulant treatment together with systemic corticosteroid treatment significantly improved the patient's condition, with a favorable evolution and remission of symptoms. Due to increased intraocular pressure, the patient also received ocular hypotensive treatment during hospitalization. The optical coherence tomography (OCT) of the RE shows normal RNFL thickness. Also, fundus examination was normal in RE ( Figure 2). Many orbital diseases, particularly those involving proptosis, have been associated with increased intraocular pressure. The pathophysiological mechanism is complex. Increased orbital pressure can influence the intraocular pressure both directly by increasing hydrostatic pressure around the eyeball or indirectly by compressing the episcleral pressure and orbital veins, thus increasing venous pressure [23]. hypercoagulable panel was completed and we found a positive result for Factor V Leiden (heterozygous mutation) and MTHFR (C677T homozygous mutations). Additionally, a slight increase in the level of homocysteine was noted. Rheumatoid factor, antiphospholipid antibodies and antinuclear antibody were negative. Systemic steroid anti-inflammatory (dexamethasone 4 mg/mL twice a day) and anticoagulant (Enoxaparin 60 mg/0.6 mL twice a day) treatments were initiated along with antihypertensives (Perindopril 5 mg/once a day) and neuroprotector treatment for 10 days. She also received topical antibiotics and ocular hypotensive treatment (fixed combination of 2% dorzolamide/0.5% timolol (Cosopt)). During the treatment, INR, blood table and vital signs were monitored. Gradual resolution of symptoms was noted during the hospitalization, and the vision in her RE was preserved at 0.7 at the 10-week follow-up. Thrombosis of the superior ophthalmic vein is a rare clinical entity but with an increased risk of morbidity if it is not discovered and treated in time. The etiology is vast, and differential diagnosis can be difficult; therefore, imaging examinations are essential for diagnosis. Treatment depends on the etiology and generally includes antibiotics, anti-inflammatories and anticoagulants. In our case, the evolution was favorable under treatment with restoration of visual acuity. An improvement was noticed immediately after the systemic steroid anti-inflammatory and anticoagulant therapy. This demonstrated the accuracy of the diagnosis. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
Parent Satisfaction With Outpatient Telemedicine Services During the COVID-19 Pandemic: A Repeated Cross-Sectional Study Prior to the COVID-19 pandemic, the development of hospital-based telemedicine services had been slow and circumscribed in scope due to insurance and licensure restrictions. As these restrictions were eased during the COVID-19 pandemic to facilitate ongoing patient care, the public health emergency facilitated a rapid expansion and utilization of telemedicine services across the ambulatory service sector. Objectives The current quality improvement (QI) study utilized this unprecedented opportunity to evaluate the use of telemedicine services across a variety of clinical disciplines and patient groups. Methods Caregivers of patients (ages 0–21) who received care through an outpatient specialty center provided experience ratings of telemedicine services delivered during the initial pandemic months (March–June 2020; N = 1311) or during the national “winter surge” in late 2020 (November 2020–February 2021; N = 1395). Questionnaires were distributed electronically following the clinical visits, and ANCOVA was employed (with patient age as the covariate) to determine if caregiver responses differed based on patient demographic characteristics. Results Ratings of patient satisfaction with services were very strong at both time points; greater variability in scores was noted when caregivers were asked if they would use telemedicine services again. At both time points, younger patient age (i.e., age 0–5) was associated with decreased caregiver willingness to use telemedicine services in the future. Smaller effects were seen for certain “hands on” therapies (occupational, physical, and speech) during the initial months of the pandemic and for proximity to the hospital during the “winter surge.” Conclusions These data suggest a very positive overall caregiver response to telemedicine-based services during the COVID-19 pandemic. Several areas of potential improvement/innovation were identified, including the delivery of telemedicine therapies (e.g., occupational, physical, and speech) services to young patients (i.e., aged 0–5). INTRODUCTION The onset of the COVID-19 pandemic and the resulting federal Public Health Emergency (PHE) declaration led to several notable changes in pediatric medicine, including an unprecedented and rapid shift from in-person to telemedicine services to continue to facilitate patient care while maintaining a physical distance. Telemedicine includes two-way, real-time interactive communication using audio and video equipment between the patient and practitioner at a distant site (1). The PHE led to the passage of PL 116-123, with a Section 1135 waiver, allowing for the temporary waiver of certain Medicare requirements. This established the path for Health and Human Services (HHS) to introduce several flexibilities that permitted telemedicine accessibility, e.g., expansion of eligible practitioners and covered services, removal of restrictions for patient and practitioner location, easing of technology requirements by the Health Insurance Portability and Accountability Act (HIPAA), permission to practice across state lines, and the ability to bill as if the session was conducted in person (2). These changes in restrictions helped facilitate an exponential increase in the provision of telemedicine services (3,4). 
With these changes, medical centers were quickly faced with the need to establish the infrastructure necessary to deliver, support, and evaluate their telemedicine services (5). Telemedicine can be advantageous for patients and caregivers. First, telemedicine may reduce the burden on patients and their families by reducing time away from school, missed time at work, and/or travel time associated with in-person services (6)(7)(8)(9)(10). The time-saving convenience of telemedicine can extend to registration, triage, and wait times for patients as those components are often taken care of in advance of a telemedicine appointment (6). While general satisfaction with telemedicine-based care has typically been high, recent studies have found a discrepancy between expressed satisfaction and willingness to use telemedicine-based services again (9,11,12). For instance, Leung et al. (11) found that 96% of neurosurgical patients rated their telemedicine visit as "excellent" or "good, " but only 83% of patients indicated they would choose a video visit over an in-person visit. Similarly, Schmidtberg et al. (12) documented very high satisfaction ratings (95.6%), but found that fewer (87%) patients would consider another telemedicine visit in the future. These findings raise questions concerning the small, but noteworthy, a cohort of patients and caregivers who are more hesitant about using telemedicine beyond their initial experience. Hesitancy concerning the reuse of telemedicine may derive in part from care-related variables. Tomines (13) found that telemedicine's effectiveness varied widely based upon the pediatric specialty, care delivery setting, and patient preference. Tenforde et al. (9) qualitatively identified several parents-and family members-reported limitations of rehabilitation services delivered via telemedicine, such as the inability of the therapist to receive tactile motor/muscle feedback during a telemedicine visit. In addition, younger children may have difficulty engaging with a provider or staying focused during a telemedicine appointment (9) and require assistance from a parent or caregiver throughout the telemedicine appointment (14,15). This parent/caregiver participation can result in additional burdens for caregivers who may be simultaneously working or caring for other children (16). This quality improvement (QI) project was designed to assess parent and caregiver satisfaction with telemedicine appointments and related services during the COVID-19 pandemic. The aims of this study were to: (1) examine parent satisfaction with telemedicine care at two-time points during the COVID-19 pandemic; and (2) identify parent-, child-, and service-related variables associated with parent satisfaction with telemedicine services. We hypothesized that parents/caregivers would be less interested in using telemedicine again if their children were younger (e.g., ages 0-5), were seen for "hands on" therapy services (i.e., occupational therapy and physical therapy), and/or lived near the hospital. While several studies have looked at the provision of telemedicine services in specific disciplines, this study aimed to evaluate the use of telemedicine services across multiple disciplines. METHODS This quality improvement study received acknowledgment from the hospital's overseeing institutional review board (IRB), as well as oversight by the internal Office of Human Research Administration. 
Facility This study took place in an interdisciplinary hospital specializing in treatment for children and young adults referred for medical, cognitive, and behavioral issues related to traumatic injuries or intellectual/developmental disabilities. The hospital had a pre-existing telebehavioral health program for military families that was established 4 years prior to the COVID-19 pandemic. Although this was a small program (100 patients, 21 telehealth providers, and 700 sessions), it provided the infrastructure necessary to rapidly shift to telemedicine-based services delivered in families' homes when the pandemic began. The hospital had telemedicine policies and procedures already in place, as well as established HIPAA-compliant video conferencing and electronic documentation programs. The experienced telebehavioral health providers were able to transition their efforts to expand the established systems throughout the hospital, such that by the end of March 2020, the hospital had provided over 4000 telehealth sessions. Participants Caregivers responded to a brief experience questionnaire following their child's outpatient telemedicine appointment. A repeated cross-sectional design resulted in data at two-time points ( Table 1). The utilization of two samples allowed for the examination of caregiver experience during the initial transition to telemedicine, as well as experience deeper into the pandemic when clinicians, parents/caregivers, and patients were potentially more familiarized with telemedicine-based services. Caregiver responses were included if they responded to both target questions, identified in the Questionnaire section that follows. Time 1, Early Pandemic Sample The Time 1, Early Pandemic sample was defined as the patients seen during the first several months of the COVID-19 pandemic (13 March 2020-13 May 2020). A single retrospective distribution of telemedicine experience questionnaires returned 2128 responses (24.9% completion rate) during the data collection window of June 1-June 30, 2020. Of these, 223 unfinished or duplicate responses were removed, leaving 1872 responses. To focus on caregiver reports, patient reports were removed which further reduced the sample to 1535 responses. The few (10) self-pay patients were removed as well. Finally, the sample was reduced to 1311 respondents after 214 surveys with missing or skipped responses were removed. Demographic information of the patients in the early pandemic sample is presented in Table 1. Time 2, Winter Surge Sample The Time 2, Winter Surge sample was defined as the patients seen during a portion of the national "winter surge" of COVID-19 cases (20 November 2020-26 February 2021). Surveys were distributed on an ongoing basis during the sampling window of 8 December 2020 and 2 March 2021. Of the 3,188 questionnaires received (11.9% completion rate), 1131 unfinished or duplicate surveys were removed, leaving 1918 responses. The sample was further reduced to 1,395 respondents to limit the sample to caregiver reports. Of note, parents/caregivers were not restricted from participating in both sampling windows; however, only one parent/caregiver response per patient was included per sample. Demographic information for patients in the Time 2 sample can be found in Table 1. Questionnaire A parent/caregiver experience questionnaire was developed by a QI committee, drawing from prior questionnaires and de novo items. 
The questionnaire consisted of a combination of the Likert scale (five-point; 1-strongly disagree to 5-strongly agree) and open response questions. This project focused on the responses to the following two statements: (1) "Overall, I am satisfied with the service (s) I received during the appointment."; (2) "I would use telehealth services again, even if an in-person appointment was an option." Procedure Questionnaire invitations were emailed to parents/caregivers via Qualtrics Survey Platform (17) and included appointmentspecific details extracted from the hospital's medical record system, including the provider's name, department/clinic, and date. For patients with multiple appointments in a survey window, the most recent visit was prioritized to optimize respondent recall and ensure proportional representation of clinical visit types across larger and smaller clinics. Race Of the 23 racial/ethnic categorical choices recorded in the electronic medical record, racial designations were condensed for the purposes of analysis: White, Black, Multi-racial, Unknown, and Other. Those who endorsed more than one race were classified as Multi-racial. Those for whom race data were unavailable were classified as Unknown. Other includes those who endorsed only one race other than white or black. Age Patient age was based on the difference between the date of birth and age at appointment. Patient age groups used in the supplemental analysis were as follows: ages 0-5, 6-10, 11-15, and 16-21. Service Type Appointments occurred in 83 different clinics throughout the hospital system, among four larger service type categories. Behavioral health services were split into two service categories. Behavioral Health: General includes behavioral health services (e.g., family therapy, behavioral consultation, counseling, etc.,) for children with common behavioral diagnoses, including attention-deficit/hyperactivity disorder, depression, anxiety, conduct disorder, adjustment disorder, etc. Behavioral Health: Developmental includes more specialized behavioral health services (e.g., applied behavior analysis) for children with developmental disorders such as intellectual disability, feeding disorders, and an autism spectrum disorder. The third group, Medicine, was comprised of visits primarily provided by physicians and nurses (including psychiatrists and nurse practitioners). Finally, the fourth group, Therapy Services, included occupational therapy (OT), speech therapy, physical therapy (PT), audiology, and nutrition. Insurance Type Insurance payor groups were categorized into a Commercial insurance group, a Public insurance group (i.e., Medicaid, Managed Medicaid, and Medicare), and Military insurance (i.e., a Department of Defense plan/TRICARE). Of note, patients in the military group had the most previous experience with telemedicine-based services, as TRICARE covered telemedicine in patients' homes prior to the COVID-19 pandemic. Proximity to Hospital Patient proximity to the hospital was quantified using three proximity zones. The boundaries encompassing the city proper were identified as zone 1 (i.e., Close Proximity). The counties immediately surrounding/adjacent to the city were identified as zone 2 (i.e., Adjacent/Surrounding). Finally, zone 3 (Distal) was identified as those counties (both in-and out-of-state) extending beyond zones 1 and 2. The Distal zone 3 included individuals from states adjacent to the hospitals as well as a wider national distribution. 
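The derived analysis variables described above (age groups, insurance payor groups, proximity zones, and the one-response-per-patient rule) amount to a small set of recoding steps. A hypothetical pandas sketch is shown below; the column names, county lists, and payor labels are illustrative placeholders rather than the study's actual data dictionary.

```python
import pandas as pd

def derive_analysis_variables(df, zone1_counties, zone2_counties):
    out = df.copy()
    # Age groups used in the supplemental analyses.
    out["age_group"] = pd.cut(out["age"], bins=[0, 5, 10, 15, 21],
                              labels=["0-5", "6-10", "11-15", "16-21"],
                              include_lowest=True)
    # Insurance payor groups: Commercial, Public, Military.
    payor_map = {"Medicaid": "Public", "Managed Medicaid": "Public",
                 "Medicare": "Public", "TRICARE": "Military"}
    out["insurance_group"] = out["payor"].map(payor_map).fillna("Commercial")
    # Proximity zones: 1 = city proper, 2 = adjacent counties, 3 = distal.
    def zone(county):
        if county in zone1_counties:
            return "Zone 1 (Close)"
        if county in zone2_counties:
            return "Zone 2 (Adjacent/Surrounding)"
        return "Zone 3 (Distal)"
    out["proximity_zone"] = out["county"].apply(zone)
    # One response per patient, prioritizing the most recent visit.
    return (out.sort_values("visit_date")
               .drop_duplicates(subset="patient_id", keep="last"))
```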
Statistical Analysis Plan This study made use of a convenience sample, comprised solely of caregivers who responded to the two target items noted in the Questionnaire section above. Data were initially reviewed in Microsoft Excel (18) and further analyzed in IBM SPSS Statistics version 27.0 (19). Parametric and nonparametric tests were used to test for any demographic differences between the two samples. Analysis of Covariance (ANCOVA) was then used to determine if parent responses to two key survey items, measuring service satisfaction and willingness to use telemedicine ("telehealth" in survey items) services again, differed based upon patient demographic characteristics. The age of the child was used as a covariate in the ANCOVA model. Bonferroni-corrected post-hoc pairwise comparisons further described between-group differences. Additional supplemental ANOVA was used to examine response differences among parents with children in discrete age categories (i.e., ages 0-5, 6-10, 11-15, and 16-21). For reference, partial eta squared (η p 2 ) was used to determine effect sizes: small (0.01), medium (0.06), and large (0.14). Time 1, Early Pandemic Sample The Time 1 sample had a mean age of 10.12 years (SD = 4.84), and two-thirds (67.1%) of the sample were male. Among insurance groups, the proportion of patients within race groups differed [X 2 (8) = 179.7, p < 0.001] due to an overrepresentation of white patients in the commercial insurance group and an overrepresentation of black patients in the public insurance group. Mean patient age differed across service types [F (3, 1307) = 76.3, p < 0.001], with significantly younger children in the therapy services group (M age = 6.4 years) compared to all other groups, and significantly younger children in the behavioral health: developmental (M age = 9.1 years) group compared to the behavioral health: general (M age = 11.1 years) and medicine (M age = 11.4 years) groups. See Table 1 for sample details. At Time 1, caregiver ratings resulted in an overall average satisfaction rating of 4.61 out of 5, and 95.5% of caregivers "agreed" or "strongly agreed" that they were satisfied with the telemedicine services they received. When controlling for age, ANCOVA revealed a very small, yet significant, main effect for insurance type (p = 0.032; η p 2 = 0.005), but no main effects were found for age, race, service type, or patient proximity to the hospital. A post-hoc comparison revealed that caregivers in the military insurance group provided somewhat stronger (p < 0.05) satisfaction ratings (4.79, SD = 0.45) compared to the commercial (4.61, SD = 0.73) and public (4.56, SD = 0.69) insurance groups. When asked if they would use telehealth again (even if an in-person appointment were an option), three quarters (77.1%) of caregivers agreed or strongly agreed that they would use telehealth services over a future in-person appointment, 12.5% were neutral, and a small number disagreed (7.1%) or strongly disagreed (3.3%). The ANCOVA (Table 2) revealed very small but significant effects for the insurance group (p = 0.029; η p 2 = 0.005) and type of service (p = 0.017; η p 2 = 0.008). There was no significant main effect of race or patient proximity to the hospital. Age was significant in the model and had a small effect size (p < 0.001; η p 2 = 0.011).
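A minimal sketch of the kind of ANCOVA summarized in these results is given below. The study itself used IBM SPSS; this statsmodels version is only illustrative, the dataframe and column names are hypothetical, and partial eta squared is derived from the type II sums of squares.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova_with_effect_sizes(df):
    # Outcome: willingness to use telehealth again; covariate: patient age.
    model = smf.ols(
        "use_again_rating ~ age + C(race) + C(service_type)"
        " + C(insurance_group) + C(proximity_zone)",
        data=df,
    ).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    # Partial eta squared = SS_effect / (SS_effect + SS_residual).
    ss_resid = anova.loc["Residual", "sum_sq"]
    anova["partial_eta_sq"] = anova["sum_sq"] / (anova["sum_sq"] + ss_resid)
    return anova

# Example use on a cleaned caregiver-response dataframe:
# print(ancova_with_effect_sizes(responses_df))
```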
Post-hoc comparison revealed lower ratings (all p < 0.001) for the therapy service group compared to both behavioral health groups (see Figure 1). In terms of insurance, parents of patients with commercial insurance (4.12, SD = 1.16) provided lower (p = 0.031) ratings than those with public insurance (4.26, SD = 0.97). Time 2, Winter Surge Sample As shown in Table 1, children in the Time 2 sample had a mean age of 9.75 years (SD = 4.83) and were 60.6% male. As was the case at Time 1, race at Time 2 differed among insurance types [X 2 (8) = 129.2, p < 0.001], due to an overrepresentation of white patients in the commercial insurance group and an overrepresentation of black patients in the public insurance group. Age differed among service types [F (3, 1391) = 76.6, p < 0.001], with pairwise comparison revealing significantly younger children in the therapy services group (M age = 5.2 years) compared to all other groups, and in the behavioral health: developmental (M age = 8.0 years) group compared to the behavioral health: general (M age = 10.9 years) and medicine (M age = 10.5 years) groups. At Time 2, 94.2% of caregivers "agreed" or "strongly agreed" that they were satisfied with the telemedicine services they received. ANCOVA revealed a very small effect for race (p = 0.030; η p 2 = 0.008), but not for service type, insurance type, or patient proximity to the hospital. The covariate age was significant in the model and had a very small effect size (p < 0.05; η p 2 = 0.003). Comparison of the race groups did not survive post-hoc Bonferroni correction. When asked if they would use telehealth again, 74.6% of caregivers "agreed" or "strongly agreed", 13.0% were neutral on this item, and a considerable number "disagreed" (8.1%) or "strongly disagreed" (4.3%). FIGURE 2 | COVID-19 Time 1 (Early Pandemic) Sample: mean caregiver ratings for the item "I would use telemedicine services again, even if an in-person appointment was an option," by age group and service type. When controlling for age, the ANCOVA (Table 3) revealed small but significant effects for patient proximity to the hospital (p = 0.004; η p 2 = 0.008) and type of service (p = 0.002; η p 2 = 0.011), but not for insurance type or race. Age was a significant covariate in the model, with a small to medium effect size (p < 0.001; η p 2 = 0.026). Post-hoc Bonferroni-corrected pairwise comparison revealed significantly (p = 0.004) higher ratings from caregivers in proximity zone 3 (distal counties or states) compared to zone 2 (surrounding or adjacent counties; Figure 3). Post-hoc comparison did not reveal significant differences between service types. Supplemental Analysis As described earlier, significant main effects were noted at Time 1 and Time 2 for patient age and service type when parents were asked if they would use telemedicine again in the future. Supplemental two-way ANOVA analyses were run using these two independent variables to further investigate these variables and their potential interactions. For each analysis, the age of the patient was categorized as follows: age 0-5, 6-10, 11-15, and 16-21. At Time 1 (Figure 2), the age group of the patient was significant (p < 0.001; η p 2 = 0.014) but not the service type (p = 0.310) or the age group by service type interaction (p = 0.751). At Time 2 (Figure 3), both patient age group (p < 0.001; η p 2 = 0.039) and service type (p = 0.034; η p 2 = 0.009) were significant, but not the age group by service type interaction (p = 0.177).
DISCUSSION The rapid upscaling of telemedicine-based services has created an unparalleled opportunity to evaluate the use of outpatient, hospital-based telemedicine services across a wide range of patient groups and hospital services. As caregiver attitudes about telemedicine were expected to evolve over the course of the COVID-19 pandemic, this quality improvement study used a repeated cross-sectional design and captured patient experience data at the beginning of the pandemic during the initial scaling-up of telemedicine care delivery (Time 1, Early Pandemic), and several months later in the pandemic when both patients and providers had more experience with telemedicine technology (Time 2, Winter Surge). As anticipated, caregivers of children seen for outpatient telemedicine care reported very high levels of overall satisfaction both in the initial months of the COVID-19 pandemic (95%) and 6-9 months after its onset (94%). Of note, patient satisfaction ratings at the beginning of the pandemic were marginally higher among the caregivers of patients with military insurance compared to those with commercial and public insurance. This small effect is potentially attributable to pre-pandemic, reimbursable exposure of military-insurance families to telemedicine and their subsequently more experienced providers. FIGURE 3 | COVID-19 Time 2 (Winter Surge) Sample: mean caregiver ratings for the item "I would use telemedicine services again, even if an in-person appointment was an option," by age group and service type. It is worth noting that the difference in satisfaction ratings among insurance groups was no longer evident during the Time 2 winter surge, suggesting that increased caregiver/patient exposure to and/or provider experience with telemedicine contributed to the ensuing parity in satisfaction ratings between insurance groups. There were high levels of overall satisfaction with telemedicine-based visits expressed by caregivers at both time points in this study, suggesting consistent satisfaction with telemedicine throughout the COVID-19 pandemic's first 9 months in the U.S. While there was an impressively large proportion of caregivers (77.1 and 74.6%, respectively) who agreed that they would likely use telemedicine again (even if on-site appointments were an option), there was more response variability on this item. Of note, caregivers of children between the ages of 0 and 5 provided the lowest ratings of willingness to use telemedicine again, with small and medium effect sizes noted during Time 1 and Time 2, respectively. Anecdotal reports from telemedicine-hesitant caregivers of young children suggest several areas of concern, including difficulty getting their child to pay attention and maintaining engagement with the provider. These reports are consistent with a recent survey of 271 global pediatric clinicians, in which 56.5% noted distractions at home as a barrier to the use of telemedicine services during the COVID-19 pandemic (20). Combined with similar conclusions by Tenforde et al. (9), these potential hindrances justify further exploration into the delivery of telemedicine to younger children and their families. The present study revealed that some hospital service types (at least in their current delivery models) might be a better fit for telemedicine compared to others.
Indeed, when examined in isolation, caregiver ratings of willingness to use telemedicine again were considerably lower when the patient was seen for services typically considered "hands on" (e.g., OT, PT, and speech therapy). Conversely, ratings were higher from caregivers whose children received verbally based, behavioral health services. These findings, however, are complicated by pre-existing differences in patient ages among service type groups, in that there were significantly younger children in the therapy services group compared to all other groups. At both timepoints, caregivers of older children were more enthusiastic about using telemedicine again compared to caregivers of younger children. Even after controlling for patient age, the statistical model revealed a small effect size of service type upon the willingness to use telemedicine again. Taken together, both age and therapy type appear to play an independent role in contributing to parent/caregiver satisfaction with telemedicine services and willingness to use them again. These findings have implications for hospital efforts moving forward. Innovations are needed to adapt different types of "hands-on" therapy services for telemedicine, particularly for pre-school and early school-aged children. In the meantime, these data may signal the need to prioritize younger children for onsite/in-person care where possible, particularly for those patients receiving OT, PT, and speech therapy services. This study has several notable strengths. Of note, this study explored the use of and satisfaction with telemedicine services across an entire hospital system, capturing data from multiple disciplines and a wide patient age span. This allows for the comparison of parent/caregiver telemedicine satisfaction between disciplines and age groups, which assists in the identification of relative strengths and weaknesses within the telemedicine service modality. Additionally, this study made use of a repeated, cross-sectional design, which allowed for the examination of satisfaction with telemedicine services at different time points, as well as our hospital's provision of telemedicine services, during the COVID-19 pandemic. Lastly, this study had a notable sample size, thus providing more reliable insight into parent/caregiver responses. This study is not without its limitations. The foremost limitation is the potential for selection bias due to the low and inconsistent response rate across timepoints (24.9% for Time 1, and 11.9% for Time 2) and sampling from a single hospital site. Additionally, satisfaction ratings were only received from caregivers who agreed to a telemedicine visit, and not those who declined telemedicine services, preventing us from identifying other possible barriers to accessing care. Over and above the demographic variables included in this study, there are other variables that could account for variability in caregiver ratings of telemedicine, including language and socioeconomic inequities in access to telemedicine care (21)(22)(23). There may also be provider-related variables that impacted the quality of experience for patients and/or caregivers, thus influencing willingness to use telemedicine again (6). Since pediatric selfreported ratings of experience have been found to differ from those of their caregivers (24,25) and have been found to be feasible to collect (26), future projects should consider surveying pediatric patients in addition to caregivers. 
Lastly, future studies should further examine caregiver telemedicine satisfaction among patients with complex or chronic conditions, as these families may differentially value telemedicine services given the number of appointments and/or related travel inherent to the management of these conditions. In summary, this study provides overwhelmingly positive feedback from parents/caregivers indicating satisfaction with the quality of telemedicine services received and an interest in utilizing telemedicine services in the future, even when onsite appointments are available. The current study identified several areas deserving of attention and innovative effort to adapt the conduciveness of certain therapy services to telemedicine, particularly for younger children. It is clear that telemedicine is becoming a more permanent modality of care, but as in-person appointments become safer to resume, it is critical that patient and caregiver telemedicine experiences must continue to be researched in order to best inform efforts to expand accessibility to all families, regardless of patient age or service type. DATA AVAILABILITY STATEMENT The datasets presented in this article are not readily available because the responses are directly linked to patient protected health information. Requests to access the datasets should be directed to TZ, zabela@kennedykrieger.org. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Johns Hopkins Institutional Review Boards. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Pareto optimal proxy metrics North star metrics and online experimentation play a central role in how technology companies improve their products. In many practical settings, however, evaluating experiments based on the north star metric directly can be difficult. The two most significant issues are 1) low sensitivity of the north star metric and 2) differences between the short-term and long-term impact on the north star metric. A common solution is to rely on proxy metrics rather than the north star in experiment evaluation and launch decisions. Existing literature on proxy metrics concentrates mainly on the estimation of the long-term impact from short-term experimental data. In this paper, instead, we focus on the trade-off between the estimation of the long-term impact and the sensitivity in the short term. In particular, we propose the Pareto optimal proxy metrics method, which simultaneously optimizes prediction accuracy and sensitivity. In addition, we give an efficient multi-objective optimization algorithm that outperforms standard methods. We applied our methodology to experiments from a large industrial recommendation system, and found proxy metrics that are eight times more sensitive than the north star and consistently moved in the same direction, increasing the velocity and the quality of the decisions to launch new features. Introduction North star metrics are central to the operations of technology companies like Airbnb, Uber, and Google, amongst many others [4]. Functionally, teams use north star metrics to align priorities, evaluate progress, and determine if features should be launched [18]. Although north star metrics are valuable, there are issues using north star metrics in experimentation. Figure 1: A simulated example of two cases where a proxy metric is useful. The left figure shows the case where the north star metric is positive, but is too small relative to the noise to measure accurately. The right figure shows the case where the north star metric is significantly different in the short and long term, and the proxy metric reflects the long-term impact early in the experiment. To understand the issues better, it is important to know how experimentation works at large tech companies. A standard flow is the following: a team of engineers, data scientists and product managers have an idea to improve the product; the idea is implemented, and an experiment on a small amount of traffic is run for 1-2 weeks. If the metrics are promising, the team takes the experiment to a launch review, which determines if the feature will be launched to all users. The timescale of this process is crucial - the faster one can run and evaluate experiments, the more ideas one can evaluate and integrate into the product. Two main issues arise in this context. The first is that the north star metric is often not sufficiently sensitive [6]. This means that the team will have experiment results that do not provide a clear indication of whether the idea is improving the north star metric. The second issue is that the north star metric can be different in the short and long term [14] due to novelty effects, system learning, and user learning, amongst other factors. A solution to deal with this problem is to use a proxy metric, also referred to as a surrogate metric, in place of the north star [8]. The ideal proxy metric is short-term sensitive, and an accurate predictor of the long-term impact of the north star metric.
Figure 1 visualizes the ideal proxy metric in two scenarios where it helps teams overcome the limitations of the north star metric. Existing literature on proxy metrics [9,1] has focused more on predicting the long-term effect, but has not focused on its trade-off with short-term sensitivity. In this paper, we fulfill both goals with a method that optimizes both objectives simultaneously, called Pareto optimal proxy metrics. To our knowledge, this is the first method that explicitly optimizes sensitivity. The paper is divided as follows. Section 2 discusses how to measure the objectives and their empirical trade-off. Section 3 covers our methodology and algorithms. Section 4 discusses our results, and we conclude in Section 5 with some observations on how to use proxy metrics effectively. 2 How to measure proxy metric performance The two key properties for metrics are metric sensitivity and directionality [6]. The first refers to the ability of a metric to detect a statistically significant effect, while the second measures the level of agreement between the metric and the long-term effect of the north star. This section discusses each property individually, and proposes metrics to quantify them. We conclude with our empirical observation regarding the trade-off between sensitivity and directionality, which motivated the methodology in this paper (see Figure 2). Metric sensitivity Metric sensitivity is commonly associated with statistical power. However, it can be expressed as a broader concept [17]. In simple terms, metric sensitivity measures the ability to detect a significant effect for a metric. Following [5], we can write this as Sensitivity = ∫ P(Reject H_0 | δ) dP(δ), (1) where δ is the true treatment effect, P(Reject H_0 | δ) is the statistical power, and dP(δ) is the distribution of true treatment effects in a population of related experiments. Sensitivity depends heavily on the type of experiments. This is captured in the dP(δ) term in Equation (1), and is sometimes referred to as the moveability of the metric. For example, metrics related to Search quality will be more sensitive in Search experiments, and less sensitive in experiments from other product areas (notifications, home feed recommendations, etc.). Although each experiment is unique, our analysis groups together experiments with similar treatments, and we assume that the underlying treatment effects are independent and identically distributed draws from a common distribution of treatment effects. We need to define quantities that summarize how sensitive a metric is. Our intuition is that we can estimate the probability a metric will detect a statistically significant effect by seeing how often such an effect was statistically significant in historical experiments. Suppose that there are J experiments whose outcome is recorded by M metrics. In each experiment, the population is randomly partitioned into N ≈ 100 equal groups, and within each group, users are independently assigned to a treatment and a control group. We refer to these groups as independent hash buckets [3]. Let X^{Tr}_{i,j,m} and X^{Ct}_{i,j,m}, with m = 1, ..., M and j = 1, ..., J, denote the short-term recorded values for metric m in experiment j in the treatment and in the control group, respectively, and let X_{i,j,m} = 100% × (X^{Tr}_{i,j,m} − X^{Ct}_{i,j,m}) / X^{Ct}_{i,j,m} denote their percentage differences, in hash bucket i = 1, ..., N. We refer to these metrics as auxiliary metrics, since their combination will be used to construct a proxy metric in Section 3.
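As a small illustration of the data layout just described, the sketch below forms the per-bucket percentage differences X_{i,j,m} from treatment and control values; the array shapes are assumptions, and the simulated numbers are placeholders. The test statistics and the sensitivity and directionality measures built on top of these differences are sketched after the next passage.

```python
import numpy as np

def bucket_percent_diffs(treatment, control):
    # treatment, control: arrays of shape (N_buckets, J_experiments, M_metrics)
    # holding the short-term metric values in each hash bucket.
    return 100.0 * (treatment - control) / control  # X_{i,j,m}, same shape

# Simulated placeholder data: N = 100 buckets, J = 2 experiments, M = 3 metrics.
rng = np.random.default_rng(0)
control = rng.normal(100.0, 1.0, size=(100, 2, 3))
treatment = control * (1.0 + rng.normal(0.002, 0.001, size=(100, 2, 3)))
X = bucket_percent_diffs(treatment, control)
print(X.shape)  # (100, 2, 3)
```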
The within hash bucket sample sizes are typically large enough that we can use the central limit theorem to assume that $X_{i,j,m} \overset{iid}{\sim} N(\theta_{j,m}, \sigma^2_{j,m})$ for $i = 1, \ldots, N$, where $\theta_{j,m}$ and $\sigma^2_{j,m}$ are unknown mean and variance parameters, and test $H_{0,j,m}: \theta_{j,m} = 0$ vs $H_{1,j,m}: \theta_{j,m} \neq 0$. Denoting by $\bar{X}_{j,m} = N^{-1} \sum_{i=1}^{N} X_{i,j,m}$ the mean percentage difference between the two groups and by $se_{j,m}$ its standard error, calculated at Google via the Jackknife method [3], the null hypothesis $H_{0,j,m}$ is rejected at the $\alpha$ level if the test statistic $t_{j,m} = \bar{X}_{j,m}/se_{j,m}$ is larger than a threshold $\tau_{\alpha,N-1}$ in absolute value. The common practice is to let $\alpha = 0.05$. From the above, it naturally follows that metric sensitivity should be directly related to the value of the test statistic $t_{j,m}$. We call binary sensitivity of metric $m$ the quantity

$$BS_m = \frac{1}{J} \sum_{j=1}^{J} \mathbb{1}\left(|t_{j,m}| > \tau_{\alpha,N-1}\right), \qquad m = 1, \ldots, M, \qquad (2)$$

that is, the fraction of experiments in which metric $m$ detects a statistically significant effect. An alternative is the average absolute value of the test statistic across experiments,

$$S_m = \frac{1}{J} \sum_{j=1}^{J} |t_{j,m}|, \qquad m = 1, \ldots, M. \qquad (3)$$

The latter quantity has the advantage of being continuous and thus easier to optimize, but it pays a cost in terms of interpretability and is also more susceptible to outliers. In the case of large outliers, one effective strategy is to cap the value of the t-statistic. Which measure of sensitivity to use depends on the application. When a large pool of experiments is available, we recommend using equation (2) due to its interpretation and intrinsic simplicity. Equation (3) should be invoked when optimizing over a discrete quantity yields unstable results. Directionality The second key metric property we need to quantify is called directionality. Through directionality, we want to capture the alignment between the increase (decrease) in the metric and the long-term improvement (deterioration) of the user experience. While this is the ideal, getting ground truth data for directionality can be complex. A few existing approaches either involve running degradation experiments or manually labeling experiments, as discussed in [6,7]. Both approaches are reasonable, but suffer from scalability issues. Our method measures directionality by comparing the short-term value of a metric against the long-term value of the north star. The advantage of this approach is that we can compute the measure in every experiment. The disadvantage is that the estimate of the treatment effect of the north star metric is noisy, which makes it harder to separate correlation in the noise from correlation in the treatment effects. This can be handled, however, by measuring correlation across repeated experiments. There are various ways to quantify the directionality of a metric. In this paper, we consider two measures: the first is the mean squared error, while the second is the empirical correlation. Following the setting of Section 2.1, let $Y^{Tr}_{i,j}$ and $Y^{Ct}_{i,j}$ denote the long-term value of the north star in the treatment and in the control group for every cookie bucket $i$ and experiment $j$. The resulting recorded percentage difference is $Y_{i,j} = 100\% \times (Y^{Tr}_{i,j} - Y^{Ct}_{i,j})/Y^{Ct}_{i,j}$. Then we can define the mean squared error as

$$MSE_m = \frac{1}{J} \sum_{j=1}^{J} \left(\bar{X}_{j,m} - \bar{Y}_{j}\right)^2, \qquad (4)$$

where $\bar{Y}_j = N^{-1} \sum_{i=1}^{N} Y_{i,j}$ is the long-term mean of the north star in experiment $j$. Equation (4) measures how well metric $m$ predicts the long-term north star on average. Such a measure depends on the scale of $X$ and $Y$ and may require standardization of the metrics. For a scale-free measure, one may instead adopt correlation, which is defined as

$$Cor_m = \frac{\sum_{j=1}^{J} (\bar{X}_{j,m} - \bar{X}_m)(\bar{Y}_j - \bar{Y})}{\sqrt{\sum_{j=1}^{J} (\bar{X}_{j,m} - \bar{X}_m)^2 \sum_{j=1}^{J} (\bar{Y}_j - \bar{Y})^2}}, \qquad (5)$$

where $\bar{X}_m = J^{-1} \sum_{j=1}^{J} \bar{X}_{j,m}$ and $\bar{Y} = J^{-1} \sum_{j=1}^{J} \bar{Y}_j$ are the grand means of metric $m$ and the north star across all experiments.
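The quantities above translate directly into code. The sketch below is an illustrative Python/NumPy/SciPy rendering, not the authors' implementation: the paper only states that standard errors are computed with the Jackknife, so the leave-one-bucket-out variant used here is an assumption, and all function names are hypothetical. It computes the per-experiment t-statistics, the binary sensitivity of equation (2), the average absolute t-statistic of equation (3), and the directionality measures of equations (4) and (5):

```python
import numpy as np
from scipy import stats

def t_statistics(X: np.ndarray) -> np.ndarray:
    """t_{j,m} = Xbar_{j,m} / se_{j,m}, with a leave-one-bucket-out jackknife SE.

    X has shape (N, J, M): per-bucket percentage differences."""
    N = X.shape[0]
    xbar = X.mean(axis=0)                             # (J, M) experiment-level means
    loo = (N * xbar[None, :, :] - X) / (N - 1)        # leave-one-out means, (N, J, M)
    se = np.sqrt((N - 1) / N * ((loo - loo.mean(axis=0)) ** 2).sum(axis=0))
    return xbar / se                                  # (J, M)

def binary_sensitivity(t: np.ndarray, alpha: float = 0.05, n_buckets: int = 100) -> np.ndarray:
    """Equation (2): fraction of experiments in which metric m is significant."""
    tau = stats.t.ppf(1 - alpha / 2, df=n_buckets - 1)
    return (np.abs(t) > tau).mean(axis=0)             # (M,)

def average_abs_t(t: np.ndarray) -> np.ndarray:
    """Equation (3): average absolute t-statistic (optionally cap large outliers)."""
    return np.abs(t).mean(axis=0)                     # (M,)

def directionality(X: np.ndarray, Y: np.ndarray):
    """Equations (4)-(5): MSE and correlation between short-term metric means
    Xbar_{j,m} and long-term north star means Ybar_j.

    Y has shape (N, J): per-bucket long-term north star percentage differences."""
    xbar = X.mean(axis=0)                             # (J, M)
    ybar = Y.mean(axis=0)                             # (J,)
    mse = ((xbar - ybar[:, None]) ** 2).mean(axis=0)  # (M,)
    xc = xbar - xbar.mean(axis=0)
    yc = ybar - ybar.mean()
    cor = (xc * yc[:, None]).sum(axis=0) / np.sqrt((xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return mse, cor
```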
Equations (4) and (5) quantify the agreement between a metric $m$ and the north star, and which one to use is entirely dependent on the application. Notice that equation (5) measures the linear relationship, but other measures of correlation may be employed, such as Spearman correlation. It is possible to use different measures of correlation because our methodology is agnostic to the specific measures of sensitivity and directionality, as detailed in Section 3. The trade-off between sensitivity and directionality So far, we have established two key properties for a metric: sensitivity and directionality. Empirically, we observe an inverse relationship between these two properties. This can be clearly seen from Figure 2, where we plot the value of the binary sensitivity in equation (2) and the correlation with the north star in equation (5) for over 300 experiments on a large industrial recommendation system. [Figure 2: The relationship between correlation and sensitivity for 70 auxiliary metrics across over 300 experiments. Each metric is either a gray or black dot. We highlight several auxiliary metrics that trade off between sensitivity and correlation in black. Notably, the short-term value of the north star is in the bottom right: it is the least sensitive metric, but the most correlated with the long-term impact of the north star.] As such, there is a trade-off between sensitivity and directionality: the more we increase sensitivity, the less likely our metric will be related to the north star. Thus, our methodology aims to combine auxiliary metrics into a single proxy metric that balances this trade-off in an optimal manner. Pareto optimal proxy metrics Our core idea is to use multi-objective optimization to learn the optimal trade-off between sensitivity and directionality. Our algorithm learns a set of proxy metrics with the optimal trade-off, known as the Pareto front. The proxy metrics in the Pareto front are linear combinations of auxiliary metrics. Each proxy in the Pareto front is Pareto optimal, in that we cannot increase sensitivity without decreasing correlation, and vice versa. In this section, we first describe the proxy metric problem, and we later cast the proxy metric problem into the Pareto optimal framework. Then we discuss algorithms to learn the Pareto front and compare their performance. The proxy metric problem We define a proxy metric as a linear combination of the auxiliary metrics $m = 1, \ldots, M$. Let $\omega = (\omega_1, \ldots, \omega_M)$ be a vector of weights. A proxy metric is obtained as

$$Z_{i,j}(\omega) = \sum_{m=1}^{M} \omega_m X_{i,j,m}, \qquad (6)$$

for each $i = 1, \ldots, N$ and each experiment $j = 1, \ldots, J$. Here, $\omega_m$ defines the weight that metric $m$ has on the proxy $Z_{i,j}$. For interpretability reasons, it is useful to consider a normalized version of the weights, namely imposing that $\sum_{m=1}^{M} \omega_m = 1$ with each $\omega_m \geq 0$. In doing so, we require that a positive outcome is associated with an increase in the auxiliary metrics. This means we must swap the sign of metrics whose decrease has a positive impact. These include, for example, metrics that represent bad user experiences, like abandoning the page or refining a query, and which are negatively correlated with the north star metric. Within such a formulation, the proxy metric becomes a weighted average across single metrics, where $\omega_m$ measures the importance of metric $m$. Un-normalized versions of the proxy weights can also be considered, depending on the context and the measures over which the optimization is carried out.
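A small sketch of the proxy construction in equation (6), with the sign-flipping and normalization conventions just described (the function and argument names are illustrative assumptions, not from the paper):

```python
import numpy as np

def make_proxy(X: np.ndarray, omega: np.ndarray, flip_sign=None) -> np.ndarray:
    """Equation (6): Z_{i,j}(omega) = sum_m omega_m * X_{i,j,m}.

    X: (N, J, M) auxiliary-metric percentage differences.
    omega: (M,) weights; clipped to be nonnegative and normalized to sum to one.
    flip_sign: optional boolean mask over metrics whose decrease is a good
    outcome (e.g. page abandonment); their sign is swapped before combining."""
    Xs = X.copy()
    if flip_sign is not None:
        Xs[:, :, flip_sign] *= -1.0
    w = np.clip(omega, 0.0, None)
    w = w / w.sum()
    return Xs @ w   # shape (N, J): one proxy value per bucket and experiment
```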
In general, the binary sensitivity in equation (2) and the correlation in equation (5) are invariant to the scale of $\omega_m$, which implies that they remain equal irrespective of whether the weights are normalized or not. Our goal is to find the weights that jointly maximize the two objectives,

$$\omega^* = \arg\max_{\omega} \; \big( BS(Z_{\cdot}(\omega)), \; Cor(Z_{\cdot}(\omega)) \big). \qquad (7)$$

The solution to the optimization in equation (7) is not available in an explicit analytical form, which means that we need to resort to multi-objective optimization algorithms to find $\omega^*$. We discuss these algorithms after first introducing the concept of Pareto optimality. Pareto optimality for proxy metrics A Pareto equilibrium is a situation where any action taken by an individual toward optimizing one outcome will automatically lead to a loss in the other outcomes. In this situation, there is no way to improve both outcomes simultaneously. If there were, then the current state would be Pareto dominated. In the context of our application, the natural trade-off between correlation and sensitivity implies that we cannot unilaterally maximize one dimension without incurring a loss in the other. Thus, our goal is to look for weights that are not dominated in any dimension. With reference to equation (7), we say that the set of weights $\omega$ is Pareto dominated if there exists another set of weights $\omega'$ such that $BS(Z_{\cdot}(\omega')) \geq BS(Z_{\cdot}(\omega))$ and $Cor(Z_{\cdot}(\omega')) \geq Cor(Z_{\cdot}(\omega))$ at the same time. We write $\omega \prec \omega'$ to indicate the dominance relationship. Then, the set of non-dominated points is called the Pareto set, which we indicate as $\mathcal{W}^* = \{\omega : \text{there is no } \omega' \text{ with } \omega \prec \omega'\}$. The objective values associated with the Pareto set are called the Pareto front. [Figure: The grey points represent the value of the objectives for a set of weights generated at random, while the red points are the ones in the Pareto set. The green dot is an example point that is Pareto dominated by the area highlighted in grey. It is easy to see that any point in the grey area is strictly better than the green dot.] The purpose of multi-objective optimization is to efficiently identify the Pareto front and the weights in the Pareto set. Algorithms to estimate the Pareto front are reported in the next Section. Algorithms for Pareto optimal proxies Multi-objective optimization is a well-studied problem that can be solved via a wealth of efficient algorithms. Common methods to extract the Pareto front combine Kriging techniques with expected improvement minimization [11,20], or black box methods via transfer learning [19,13]. These methods are particularly suitable for cases where the objective functions are intrinsically expensive to calculate, and therefore one wishes to limit the number of evaluations required to extract the front. In our case, however, both objective functions can be calculated with minimal computational effort. As such, we propose two algorithms to efficiently extract the front that rely on sampling strategies and nonlinear optimization routines. We then compare our algorithms against a standard Kriging-based implementation. Our first method to extract the Pareto front involves a simple randomized search, as described in Algorithm 1 below. The mechanism is relatively straightforward: at each step, we propose a candidate weight $\omega$ and calculate the associated proxy $Z_{i,j}$ for every $i = 1, \ldots, N$ and every experiment $j = 1, \ldots, J$. Then, we evaluate the desired objective functions, such as the binary sensitivity and the correlation in equations (2) and (5). These allow us to tell whether $\omega$ is dominated. If it is, we discard it; otherwise, we update the Pareto front by removing the weights that the new candidate dominates and then including the new candidate in the Pareto set.
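Following the mechanism just described, here is a hedged sketch (illustrative Python/NumPy, not the authors' implementation) of the dominance check and of the randomized search. It samples normalized weights from a flat Dirichlet, which is equivalent up to scale given the scale invariance noted above; `evaluate` is an assumed callback returning the (sensitivity, correlation) pair for a given weight vector, for example by applying the sensitivity and directionality sketches above to the proxy produced by `make_proxy` (with a trailing singleton metric axis). The dominance helper uses the usual convention of requiring a strict improvement in at least one objective.

```python
import numpy as np

def dominates(obj_a, obj_b) -> bool:
    """True if objective pair a Pareto dominates b (both objectives maximized)."""
    a, b = np.asarray(obj_a, dtype=float), np.asarray(obj_b, dtype=float)
    return bool(np.all(a >= b) and np.any(a > b))

def random_search_front(evaluate, M, n_iter=20_000, seed=0):
    """Algorithm 1 sketch: sample candidate weights, keep only the non-dominated ones."""
    rng = np.random.default_rng(seed)
    front = []  # list of (weights, objectives) pairs, i.e. the current Pareto set
    for _ in range(n_iter):
        omega = rng.dirichlet(np.ones(M))                # candidate weights on the simplex
        obj = np.asarray(evaluate(omega), dtype=float)   # (sensitivity, correlation)
        if any(dominates(o, obj) for _, o in front):
            continue                                     # dominated: discard the candidate
        front = [(w, o) for w, o in front if not dominates(obj, o)]
        front.append((omega, obj))
    return front
```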
8: If ω is not dominated by any other ω ∈ W, add ω to W. 9: Remove the weights in W that are dominated. 10: end for The advantage of Algorithm 1 is that it explores the whole space of possible weights and can be performed online with minimum storage requirements. However, such exploration is often inefficient, since the vast majority of sampled weights are not on the Pareto front. Moreover, the method may suffer from a curse of dimensionality: if the total number of auxiliary metrics M is large, then a massive number of candidate weights is required to explore the hypercube [0, 1] M exhaustively. A standard solution to such a problem relies on a more directed exploration of the space of weights via Kriging, where the weight at one iteration is sampled from normal distributions whose mean and variance are obtained by minimizing an in-fill criterion [11]. Refer to [2] for a practical overview. Since evaluating sensitivity and correlation is a relatively simple operation, we propose a more directed algorithm, which we now illustrate. Consider the bivariate optimization problem in equation (7). If we fix one dimension, say sensitivity, to a certain threshold and later optimize with respect to the other dimension in a constrained manner, then varying the threshold between 0 and 1 should equivalently extract the front. In practice, this procedure is approximated by binning the sensitivity in disjoint intervals, say [u b , u b+1 ) with b = 1, . . . , B − 1, with u 1 = 0 and u B = 1, and then solving Solve the constrained optimization in equation (8) via nlopt. 6: Add ω b to W 7: end for The optimization problem in equation (7) and Algorithm 2 can be solved via common nonlinear optimization methods such as the ones in the nlopt package. See [15] and references therein. Each algorithm produces a set of Pareto optimal proxy metrics. However, we typically rely on a single proxy metric for experiment evaluation and launch decisions. This means we need to select a proxy from the Pareto front. In practice, we use the Pareto set to reduce the space of candidate proxies, and later choose the final weights based on statistical properties and other product considerations. Algorithm performance This Section evaluates the performance of our proposed algorithms. The task is extracting the Pareto front between binary sensitivity and correlation from a set of over 300 experiments. Details on the data are described in Section 4. We test three different algorithms: 1. Randomized search (Algorithm 1). We let the algorithm run for M × 4000 iterations. 2. Constrained optimization via binning (Algorithm 2). We split sensitivity into 14 discrete bins, ranging from 0 to the maximum sensitivity of a single metric in our data set. From the nlopt package, we rely on the locally biased dividing rectangles algorithm [12]. 3. Kriging and minimization of the expected increase in hyper-volume [11], using the R package GPareto [2]. We let the algorithm run for M × 40 iterations. We estimate the Pareto front for M = 5, 10, and 15 metrics to understand how algorithm performance scales in the number of metrics. Figure 4 compares the Pareto front extracted by each algorithm. Each algorithm yields a similar Pareto front. We notice that constrained optimization detects points in high sensitivity and high correlation regions better than the other two methods, especially as the number of metrics increases. However, the middle of these extracted curves are very similar. A more direct comparison is reported in Figure 5. 
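A corresponding sketch of the binned, constrained search in the spirit of Algorithm 2 is given below. It is illustrative only: scipy.optimize's SLSQP is used as a stand-in for the nlopt routines cited above, and `evaluate_smooth` is an assumed callback returning a (sensitivity, correlation) pair for a weight vector, with the sensitivity taken as the continuous average absolute t-statistic of equation (3), since the binary version is discontinuous in the weights.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_front(evaluate_smooth, M, n_bins=14, s_max=1.0):
    """Algorithm 2 sketch: bin the sensitivity axis; within each bin, maximize
    correlation subject to a sensitivity lower bound, with weights on the simplex."""
    front = []
    for lo in np.linspace(0.0, s_max, n_bins, endpoint=False):
        constraints = [
            {"type": "ineq", "fun": lambda w, lo=lo: evaluate_smooth(w)[0] - lo},
            {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        ]
        res = minimize(
            lambda w: -evaluate_smooth(w)[1],            # maximize correlation
            x0=np.full(M, 1.0 / M),
            bounds=[(0.0, 1.0)] * M,
            constraints=constraints,
            method="SLSQP",
        )
        if res.success:
            front.append((res.x, np.asarray(evaluate_smooth(res.x), dtype=float)))
    return front
```

Any dominated solutions among the per-bin results can then be filtered out with the same dominance check used in the randomized-search sketch.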
Here, we quantify the extracted Pareto Front using the Area under the Pareto Front metric (larger values are better). We also compare the run-time of each algorithm. The clear takeaway from Figure 5 is that the choice of algorithms does not matter much for a small number of metrics (5). However, constrained optimization is the best trade-off between accuracy and speed when the number of metrics is large. Results We implemented our methodology on over 300 experiments in a large industrial recommendation system. We then evaluated the performance of the resulting proxy on over 500 related experiments that ran throughout the subsequent six months. Specifically, we compare the proxy with the short-term north star metric, since its precise goal is to improve upon the sensitivity of the short-term north star itself. As success criteria, we use Binary Sensitivity in equation (2) and the proxy score, which is a one-number statistic that evaluates proxy quality. See Appendix A for a detailed definition. Table 1 compares our short-term proxy metric against the short-term north star metric. Our proxy metric was 8.5 times more sensitive. In the cases where the long-term north star metric was statistically significant, the proxy was statistically significant 72% of the time, compared to just 40% of the time for Figure 4 the short-term north star. In this set of experiments, we did not observe any case where the proxy metric was statistically significant in the opposite direction as the long-term north star metric. We have, however, seen this occur in different analyses. But the occurrence is rare and happens in less than 1% of experiments. Finally, our proxy metric has a 50% higher proxy score than the short-term north star. Our key takeaway is that we can find proxy metrics that are dramatically more sensitive while barely sacrificing directionality. Table 1: Comparison of using the short-term north star metric and the Pareto optimal proxy metric. This table was constructed on a set of experiments that ran for six months after we implemented our proxy. The sensitivity of our proxy is 8.5X%, compared to just X% for the north star. Table 1 only evaluates the relationship between the proxy and north star metric when the north star is statistically significant. These experiments are useful because we have a clear direction from the north star metric. However, it is also important to assess the proxy metric when the long-term north star metric is neutral. For this, we can look at the magnitude of the north star metric when the long-term effect is not statistically significant, split by whether the proxy is negative, neutral, or positive. We display this in Figure 6, which shows that, although we may not get statistically significant results for the north star metric, making decisions based on the proxy will be positive for the north star on average. In practice, we are careful when rolling out these cases, and have tools to catch any launch that does not behave as expected. Figure 6: The magnitude of the long-term north star treatment effect, when the long-term treatment effect is neutral, depending on if the proxy metric is negative, neutral, or positive. Finally, it is instructive to analyze how the weights of the proxy metrics vary as we move along the Pareto front from directionality to sensitivity, as illustrated in the example in Figure 7. As expected, when we select points that emphasize correlation, our proxy metric puts more weight on the short-term north star. 
But when we choose points that emphasize sensitivity, we put much more weight on sensitive, local metrics. Discussion This paper proposes a new method to find proxy metrics that optimizes the trade-off between sensitivity and directionality. To our knowledge, this is the first approach that explicitly incorporates metric sensitivity into the objective. In our experiments, we found proxy metrics that were 6-10 times more sensitive than the short-term north star metric, and minimal cases where the proxy and the north star moved in opposite directions. Our experience developing proxy metrics with multiple teams across multiple years has spurred many thoughts on their pros, cons, and things to watch out for. These considerations go beyond the mathematical framework discussed in Figure 7: Weights of the proxy metrics in the Pareto set as a function of both objectives. We include three metrics, the short-term north star and two metrics that are more sensitive, capturing different elements of the user experience. The optimal weights are highlighted in red for both objectives. In this example, we choose the point that optimized the Area under the Pareto Curve. this paper, and we list them in the next section. We then discuss some other benefits of using proxy metrics. Finally, we'll discuss some limitations in our methodology and future areas of improvement. Considerations beyond Pareto optimality Below are other important considerations we learned from deploying proxy metrics in practice: • Make sure you need proxies before developing them. Proxies should be motivated by an insensitive north star metric, or one that is consistently different between the short and long term. It is important to validate that you have these issues before developing proxies. To assess sensitivity, you can compute the Binary Sensitivity in a set of experiments. To assess short and long-term differences, one possibility is to compare the treatment effects at the beginning and end of your experiments. • Try better experiment design before using proxies. Proxies are one way to increase sensitivity, but they are not the only way. Before you create proxy metrics, you should assess if your sensitivity problems can be solved with a better experiment design. For example, you may be able to run larger experiments, longer experiments, or narrower triggering to only include users that were actually impacted by the treatment. Solving at the design stage is ideal because it allows us to target the north star directly. • Choose proxies with common sense. The best auxiliary metrics in our proxy metric captured intuitive, critical aspects of the specific user journey targeted by that class of experiments. For example, whether a user had a satisfactory watch from the homepage is a good auxiliary metric for experiments changing the recommendations on the home feed. In fact, many of the best auxiliary metrics were already informally used by engineers, suggesting that common sense metrics have superior statistical properties. • Validate and monitor your proxies, ideally using holdbacks. It is important to remember that proxy metrics are not what we want to move. We want to move the north star, and proxies are a means to this end. The best tool we have found for validating proxies is the cumulative long-term holdback, including all launches that were made based on the same proxy metric. 
It is also helpful to regularly repeat the model fitting process on recent data, and perform out-of-sample testing, to ensure your proxy is still at an optimal point. Other benefits of proxy metrics Developing proxies had many unplanned benefits beyond their strict application as a tool for experiment evaluation. The first major benefit is the sheer educational factor: the data science team and our organizational partners developed a much deeper intuition about our metrics. We learned baseline sensitivities, how the baseline sensitives vary across different product areas, and the correlations between metrics. Another unplanned benefit is that the proxy metric development process highlighted several areas to improve the way we run experiments. We started to do better experiment design, and to collect data from experiments more systematically, now that the experiments can also be viewed as training data for proxy metrics. Finally, the most important benefit is that we uncovered several auxiliary metrics that were correlated with the north star, but not holistic enough to be included in the final proxy. We added these signals directly into our machinelearning systems, which resulted in several launches that directly improved the long-term user experience. Discussion, limitations, and future directions This methodology is an important milestone, but there are still many areas to develop, and our methodology is sure to evolve over time. The first area to explore is causality. Our approach relies on the assumption that the treatment effects of the experiments are independent draws from a common distribution of treatment effects, and that future experiments come from the same generative process. Literature from clinical trials [16,10], however, has more formal notions of causality for surrogate metrics, and we plan to explore this area and see if there's anything we can glean. Another important improvement would be a more principled approach to select the final proxy metric. Some initial work along these lines revolves around our proxy score (Appendix A) and Area under the Pareto curve ( Figure 4). We hope to have a more refined perspective on this topic in the future. We also did not explore more classic model-building improvements in detail. For example, we do not address non-linearity and feature selection. Nonlinearity is particularly important, because it helps in cases where two components of the proxy metric move in opposite directions. For feature selection, we currently hand-pick several auxiliary metrics to include in the proxy metric optimization. However, we should be able to improve upon this by either inducing sparsity when estimating the Pareto front, or adopting a more principled feature selection approach. To conclude, let's take a step back and consider the practical implications of our results. Essentially, we found that the appropriate local metrics, that are close to the experiment context, are vastly more sensitive than the north star, and rarely move in the opposite direction. The implication is that using the north star as a launch criterion is likely too conservative, and teams can learn more and faster by focusing on the relevant local metrics. Faster iteration has also opened our eyes to other mechanisms we can use to ensure that our launches are positive for the user experience. We mentioned earlier that launches using proxies should be paired with larger and longer running holdbacks. 
In fact, through such holdbacks we were able to catch small but slightly negative launches (case 1 in Figure 1, but with the opposite sign), and further refine our understanding of the differences between the short and long term impact on the north star metric (case 2 in Figure 1, but with the opposite sign). A The proxy score It is useful to have a single metric that quantifies the performance of a proxy metric. We have relied on a measure called the proxy score. The proxy score rewards the properties of an ideal proxy metric: short-term sensitivity, and moving in the same long-term direction as the north star (Figure 1). The motivation behind our specific definition comes from the contingency table visualized in Figure 8, which is generated from 1000 simulated experiments. The green cells in Figure 8 represent cases where the proxy is statistically significant in the short term, the north star is significant in the long term, and the proxy and north star move in the same direction. These are unambiguously good cases, and we refer to them as Detections. The red cells are unambiguously bad cases: both the short-term proxy and the north star are statistically significant, but they move in opposite directions. We call these Mistakes. Informally, we define the proxy score as

$$\text{Proxy Score} = \frac{\text{Detections} - \text{Mistakes}}{\text{Number of experiments where the north star is significant}}.$$

The key idea is that the proxy score rewards both sensitivity and accurate directionality. More sensitive metrics are more likely to be in the first and third rows, where they can accumulate reward. But metrics in the first and third rows can only accumulate reward if they are in the correct direction. Thus, the proxy score rewards both sensitivity and directionality. Microsoft independently developed a similar score, called Label Agreement [7]. More formally, and following the notation in Section 2, we can define the proxy score using hypothesis tests for the proxy metric and the north star metric, defined as

North Star: $H^{ns}_{0,j}: \theta^{ns}_j = 0$ vs $H^{ns}_{1,j}: \theta^{ns}_j \neq 0$,
Proxy: $H^{z}_{0,j}: \theta^{z}_j = 0$ vs $H^{z}_{1,j}: \theta^{z}_j \neq 0$.

If we let $D_j = \{\theta^{ns}_j, \sigma^{ns}_j, \theta^{z}_j, \sigma^{z}_j\}$ be the data required to compute the hypothesis tests, then the proxy score for experiment $j$ can be written as

$$PS(D_j) = \underbrace{\mathbb{1}(H^{z}_{0,j} \text{ rejected})}_{\text{Proxy significant}} \times \underbrace{\mathbb{1}(H^{ns}_{0,j} \text{ rejected})}_{\text{North star significant}} \times \Big[ \underbrace{\mathbb{1}(\theta^{ns}_j > 0 \text{ and } \theta^{z}_j > 0) + \mathbb{1}(\theta^{ns}_j < 0 \text{ and } \theta^{z}_j < 0)}_{\text{Agree}} - \underbrace{\big(\mathbb{1}(\theta^{ns}_j > 0 \text{ and } \theta^{z}_j < 0) + \mathbb{1}(\theta^{ns}_j < 0 \text{ and } \theta^{z}_j > 0)\big)}_{\text{Disagree}} \Big],$$

where $\mathbb{1}(\cdot)$ is an indicator equal to one if its argument is true, and zero otherwise. We can aggregate these values across all experiments in our data, and scale by the number of experiments where the north star is significant, to compute the final proxy score for a set of experiments. The scaling factor ensures that the proxy score is always between -1 and 1. Similar to binary sensitivity, there can be issues with the proxy score when the north star metric is rarely significant. We have explored a few ways to make the score continuous, for example by replacing the indicators with Bayesian posterior probabilities.
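A minimal sketch of the proxy score computation described above (Python/NumPy; the input format, the fixed critical value, and the handling of the case with no significant north star experiments are illustrative assumptions):

```python
import numpy as np

def proxy_score(t_proxy, t_ns, effect_proxy, effect_ns, tau=1.96):
    """Proxy score = (Detections - Mistakes) / #{experiments with significant north star}.

    t_proxy / t_ns: short-term proxy and long-term north star t-statistics, one per experiment.
    effect_proxy / effect_ns: the corresponding estimated treatment effects (only their signs matter).
    tau: two-sided critical value (1.96 corresponds to alpha = 0.05 with many buckets)."""
    t_proxy, t_ns = np.asarray(t_proxy), np.asarray(t_ns)
    effect_proxy, effect_ns = np.asarray(effect_proxy), np.asarray(effect_ns)

    proxy_sig = np.abs(t_proxy) > tau
    ns_sig = np.abs(t_ns) > tau
    both_sig = proxy_sig & ns_sig

    agree = np.sign(effect_proxy) == np.sign(effect_ns)
    detections = np.sum(both_sig & agree)    # significant and same direction
    mistakes = np.sum(both_sig & ~agree)     # significant and opposite direction

    n_ns_sig = np.sum(ns_sig)
    return (detections - mistakes) / n_ns_sig if n_ns_sig > 0 else np.nan
```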
2023-07-04T06:42:14.990Z
2023-07-03T00:00:00.000
{ "year": 2023, "sha1": "13c07ea02e796e26583d58c9a087c2732f8416c8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "13c07ea02e796e26583d58c9a087c2732f8416c8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
214365730
pes2o/s2orc
v3-fos-license
THE MAIN STRUCTURAL FACTORS THAT MAKE THE BALKANS IMPORTANT FOR TURKISH FOREIGN POLICY The Balkans, due to its geographical proximity and historical, social and cultural ties, represents one of the regions where Turkey has shown special interest. Throughout history, Turkey has played a decisive role in the Balkans for nearly 550 years, and its policies and actions have been instrumental both in shaping the region and at the same time in determining the course of history. The Balkans – which geographically constitute Turkey’s opening gate to the European continent – is also of importance for its special status in the historical process in which the Turkish nation took shape and for its potential for the future in terms of regional integration and security. As a result, the Balkans have been and remain Turkey's main strategic objective regarding the balance of power in this area and European security. Although relations between Turkey and the Balkan neighbours were severely affected during the Cold War, their historical links have continued, in various ways, to this day. From the ethnic and cultural perspective of Turkey, the Turks living in this region belong to both Turkey and the Balkans, such that their dual belonging is considered as being particularly important. In this respect, the emerging / potential emergence of crises in the region has a major significance for Turkey in terms of the sustainability and stability of peace. The Balkans is therefore much more than a neighbouring region for Turkey as a state that has always advocated stability in the Balkans and supported the state integrity. In this study, the main structural factors (historical ties, Balkan Muslims and Turkish minority, geopolitics, security) that make the Balkans important for Turkey’s Foreign Policy will be addressed. INTRODUCTION The Balkans is one of the few regions in the world where diversity is politically, culturally and geographically rare. This diversity is an advantage when strong states dominate the Balkan geography, while it becomes a disadvantage in times of the efforts of many weak states to exist. The British political leader, Winston Churchill, characterises the Balkans as saying, "The Balkans produce more history than they can consume." 1 Indeed, when we look at the historical background of the Balkans, not only has no region of Europe witnessed more attacks, invasions and occupation movements than the Balkans, but the region of which the dominance has often changed hands has been one of the most chaotic and unsettled regions on earth. As Eric J. Hobsbawm rightly refers to the past 20 th century as the "Age of Extremes" 2 or the "Age of Catastrophe", the fact that one of the two bloodiest and most chaotic regions in the world has been the Middle East, and the other one, the Balkans, makes this reality clear. Because in both regions, there have been experienced great wars, civil wars, occupations, ethnic cleansing, exiles, and refugee situations, and there have never been lacking in blood and tears in the lives of the people in these regions. When we look at the history of the Balkans, appear as dynamic and multi-layered geography where the four great civilisations (Ancient Greece, Rome, Byzantine and Ottoman empires) intersect, a wide variety of cultures interact with each other, but never dominated by a single culture. Therefore, if that is true, the Balkans have a destiny that seems immutable. 
Just in terms of not only races and religions but also it is difficult to find a homogeneous state, even a homogeneous province in this region. So to say; conflict, partition, and ethnic cleansing have been the "ill fate" of the Balkans throughout history. to the concept of "Balkanization" when describing similar regions of the world. The concept is used as a as a negation term in order to refer to some regions that possess or have the potential to contain features such as conflict, division and instability in international relations. As a result of all these facts and processes, the Balkans, which consisted of six countries (Turkey, Greece, Bulgaria, Romania, Albania, Yugoslavia) in the past; simultaneously with the collapse of the Eastern Bloc, together with the countries that emerged after the collapse of Yugoslavia, it has become a region with 12 countries. Besides this, the Balkans, surrounded by globalisation and popular culture, is an area where Orthodox, Catholic, Slavic, and Islamic cultures interact and collide. In this respect, the emerging / potential emergence of crises in the region has a critical importance for Turkey in terms of the sustainability of peace and stability. The Balkans is therefore much more than a neighbouring region for Turkey, which has always advocated stability in the Balkans and supported the establishment of countries' territorial integrities. In this study, an analysis will be made within the framework of the main structural factors (historical ties, Balkan Muslims and Turkish minorities, geopolitics, security) that make the Balkans important for Turkish foreign policy. GEOPOLITICS OF THE BALKAN PENINSULA The Balkan Peninsula is the easternmost of the three peninsulas in the south of the European continent. The geopolitical importance of the Balkans is remarkable as it is a transition point between East and West. This region, as well as being a transit area to other parts of Europe, is noteworthy because of its proximity to Asia and its location extending from Central Europe to the Mediterranean and even to Central Asia. 4 As a result of its geopolitical situation, the Balkan Peninsula, which has shown a long history and cultural unity for centuries, is bordered to the Mediterranean Sea as a historical region in the south. Here, the Aegean Sea, with its hundreds of islands, is positioned in the Balkans; it is bordered to the Adriatic Sea in the West while stretching from there to Crete and the Mediterranean Sea. As for the northern border of the Balkans, since Roman times the Danube River has been designated as the northern border of the peninsula. The Danube River forms a water barrier that is challenging to overcome, such as The Straits in the south. However, since the Danube River is the main trade route to the Black Sea, and because of the strong trade ties between the two coasts, it has never been an insurmountable obstacle. The empires that dominated this side of the Balkans have always tried to keep the territories beyond the Danube under their control. Not only is this not direct sovereignty, but also the north and south of the Danube have shown significant differences in terms of culture and social structure. The Danube has remained as the northern border of the Balkans, especially in military terms. Towards the west of the Balkans, Vidin, and in particular, Belgrade, played a crucial role, and after Belgrade, the Sava River has formed a border separating the Western Balkans from Syrmia and Hungary. 
5 Within this natural and historical border formed by rivers, 0.2% of Italy, 27% of Slovenia, and 9% of Romania from the west remain within the Balkan geography. The west of the Balkans borders the Adriatic Sea and the east borders the Black Sea. Its southern border constitutes Greece with the remaining part of Turkey in Western Thrace, which constitutes 27% of the total surface area, and gives the Balkans a feature that provides an exit to the Mediterranean via the Aegean and Ionian seas in the South. 6 TURKISH AND MUSLIM PRESENCE IN THE BALKANS The Turkish presence in the Balkans dates back to very ancient times. Even the word "Balkan" is an Old Turkish word that was meaning "mountain range / mountainous". 7 The Balkan Peninsula, starting from the 6 th century, became a Ibid., p. 52. 7 According to Mark Mazover, the name "Balkan" itself is a name given afterward to the peninsula. According to him, this naming is the name of a mountain, and today's Balkans was known as "Rumeli" or "European Balkans" between the late 19 th century and early 20 th centuries. When the Ottomans called the region "Rumeli", it was based on the fact that the region was previously the territory of the Roman Empire. In this context, Anatolian lands were called "Diyar-ı Rum" and Anatolian Seljuks were called "Greek Seljuks". Rumi was the title of the inhabitants of this region. Even in the classical period of the Ottomans, the concept of Rumi had a social content. In general, to distinguish Ottoman Turks from other Asian Turks and in particular it was used to describe an urban, educated and cultured class of the Empire. To Mazover, the widespread use of this name with negative connotations was during the Balkan Wars (1912)(1913). Therefore, the Balkans began to mean much more than a geographical term over time. Unlike its past uses, the term has strong connotations of violence, desert, and primitivism that are hard to find elsewhere. homeland where Turkish tribes came and settled and the mounted nomadic Turkish tribes who came after each other from the east, through Asia and the northern Black Sea steppe region, either mingled with the Native people from Dacian, Thracian and Slavic origin, then disappeared (such as the Pechenegs and Uz from Oghuz origin in the 11 th century), or founded powerful states in the northeastern Balkans as a military ruling class. 8 This presence continued with the Bulgarian, Oghuz, Pecheneg and Kuman migrations in later periods and reached its climax during the Ottoman Empire. Within the Balkan Peninsula, several strategic massive mountainous regions, straits, and passageways marked the stages of the Empire's founding. This geopolitical factor is of fundamental importance for understanding the stages of Ottoman expansion in the Balkans. After the Ottomans settled on the European coast of the Dardanelles, the Evros River from Edirne to Enez became the first frontier of conquest and spread. 9 Therefore, first of all, the Ottoman Empire was born and developed as a Balkan empire in the 14 th and 15 th centuries. 10 In terms of power balance, the Balkans constituted the dynamic for the establishment and rise of the Empire's power and influence to spread to Europe and to become one of the great powers of Europe. 11 With the annexation of Edirne to the Ottoman territories in 1361, the Turkish population in the Balkans began to increase, and these lands which were called "Rumeli" became one of the two main politically and culturally dominated areas of the Ottoman State, along with Anatolia. 
Until the Balkan war in 1912, it was possible to go from Istanbul, almost as far as to the Adriatic Sea, within the borders of the Ottoman Empire. All Western Thrace, Macedonia, Albania and even today's Kosovo and Sancak were under Ottoman rule. Salonika was the second-largest city in the Empire, and the majority of the populations living on the aforementioned "Rumeli" lands were either Turkish or Muslim. 12 In Western Thrace and Macedonia, a Muslim-Turkish population consisting of Turks, Muslim Pomaks and even Muslim Slavs who had migrated from Anatolia at the time was the majority. Albanians living in Albania, Kosovo, and Western Macedonia made up a significant part of this population because they were Muslims. Especially, Islamization and migration from Anatolia to the region, it has contributed to the high level of the Turkish-Islamic population in the Balkans. Therefore, the most powerful and long-term influence of the Ottoman Empire in the Balkans is the massification and institutionalisation of "Islamization" in the region. 13 The Balkans are also important in terms of being the starting point for the decline and collapse of the Ottoman Empire in the 19 th century. The late 19 th century witnessed rising Greek nationalism and the modern Greek state in 1830 was the first nation-state in the Balkans to come out of clashes between nationalism and the Ottoman Empire. 14 So to say, the fall and collapse of the Empire spreading and rising in the Balkans began tragically from the Balkans. The tragedy of the Empire in the Balkans, especially the Balkan Wars, ended the search for Ottomanism, which aimed to keep the empire together, regardless of religion or race, and turned the Committee of Union and Progress to Turkish nationalism. After the collapse of the Ottoman Empire, the Turkish and Muslim peoples living in the Balkan countries did not cut their ties with the motherland and managed to maintain their identity despite all kinds of oppression. Following the assimilation policies implemented during the Cold War, ethnic cleansings against the Turkish and Muslim presence in the Balkans emerged in the just after the Cold War. Although the region seems to have reached tranquillity today, in reality, the political and ethnic conflicts that have been the bleeding wound of the region have not been fully resolved. BALKANS AND TURKEY: INDEFEASIBLE LEGACY Moving from the historical reality above, it should be firstly mentioned that the Republic of Turkey bears the heritage of the Ottoman Empire 15 , which dominated the Balkans for about 550 years, and since its foundation, it has turned its face to the West and brought its European and Balkan identity to forefront, rather than its Asian one. Therefore, Turkey is a Balkan country not only geographically but also politically, historically and culturally. After centuries of the Ottoman Empire's sovereignty, the Balkans became one of the most depressed regions during the collapse of the Empire. Both the effects of the French Revolution and the struggles of the great powers on the Balkans as determinants of the independence movements in the region have led to a radical change in the political, human and geographical structure of the region. Following the declaration of the Republic of Turkey in 1923, Turkey's relations with the Balkan countries have shown progress on a peaceful basis. In the interwar period, Turkey's prominent status quo approach in international relations was also reflected in the Balkan policy. 
During the Cold War period, however, Turkey's Balkan policies were much more limited and it did not find much room for action in the area squeezed by the eastern and Western blocs. The main change in Turkey's Balkan policy was triggered by the end of the Cold War and entering of Yugoslavia into a break-up process in a regional sense, and thus, Turkey gained a wider room for manoeuvre in the Balkans just as it did in the Caucasus and the Middle East. The Balkans region has become significant for Turkish foreign policy due to four main factors in the period from past to present. These four main factors make the Balkans region significant for Turkey and that furthermore affect its relations with the countries and peoples of the region at all times closely. The factors in question are as follows: 1) Historical ties with the region 2) The population of Balkan-origin living in Turkey 3) Muslim and Turkish-origin communities living in the Balkans 4) The geopolitical position of the region and security Looking at the historical ties with the region, one must first state that the Ottoman Empire was born in the town of Sögüt as a beylik, but it was established 15 The Ottoman Empire's dominance over the Balkans is divided into three periods: 1) The period of progression and supremacy in the Balkans (1354-1683), 2) The period of weakening and decline of supremacy ( and flourished in the Balkans as a world state. In this context, with the conquest of Edirne (1362), we see that the Balkans were historically the main expansion area of the Ottoman Empire. In the course of the in the process of expansion of the Empire, its orientation to the West much more than the East, in fact, ensured that the empire is regarded as a Balkan empire rather than an Asian or Eastern Empire. Because a significant portion of the Ottoman human resources and economic income were also provided from the Balkans. Furthermore, the Balkans is a region that has closely influenced Ottoman-Turkish political life. It is a fact that the political ideas and movements that developed in Western Europe entered the political life of the Ottoman Empire through the Balkan lands and strengthened. Therefore, it is an undeniable fact that nationalist ideas, organisations and movements in the Balkans affected Ottoman-Turk intellectuals and political leaders in the development of the understanding of motherland and nation in the modern sense. Indeed, the fact that two of the most prominent centres of the Committee of Union and Progress, the pioneer of the 1908 Revolution (redeclaration of the Constitutional Monarchy [II. Meşrutiyet]), were located in Salonica and Manastir is meaningful in this regard. The Balkans, in the Ottoman Empire, was the region where the Devshirmeh [Devşirme] system was applied most intensively. 16 In this context, this region provided the Empire with a great number of soldiers (Janissary [Yeniçeri]) and senior executives. Considering that from the 215 grand viziers of the Empire, this figure reaches 292 with the reassignments 17 , and 62 of them are of Balkan origin, and the military and civilian cadres performing war of independence are mainly of Balkan origin; the importance of the Balkans in Turkish political life and Turkish foreign policy can probably be better understood. 
For example, considered one of the most important grand viziers of the Ottoman Empire, Sokullu Mehmet Pasha, who served as Grand Vizier for more than 14 years during the reign of Suleiman the Magnificent, Selim II, and Murat III, marked the period of the Ottoman Empire with his statecraft, projects, and personality as a devshirmeh Ottoman statesman of Serbian origin. 18 18 The provinces in the Balkans were mostly ruled by the pashas of Balkan-origin. Many of Atatürk, Republic of Turkey's founder, was born in Salonika as a member of the last generations of a Turkmen Yoruk family who migrated from Anatolia to the Balkans in the 14 th or 15 th century. 19 It is to some degree true that the Ottoman conquests caused a sudden interruption in the natural development of Balkan history. It is also true that Balkan nations have lost their national dynasties and ruling classes. With the disappearance of the Balkan states, the development of the "high culture" symbolised by their elite also halted. Nonetheless, folk culture and literature and arts connected to the church sustained its vitality and development during the Ottoman period. On the other hand, Ottoman culture had a strong influence on language, arts and daily life. It can be argued that the culture of the indigenous people thus prospered by coming into contact with Islamic culture. The most apparent document of this cultural influence is the Balkan languages. The number of cultural words taken from Turkish, even in today's Balkan languages, varies between 2000 and 5000 words depending on the region. Also, the Ottoman heritage is visible even today in clothes, folk music, eating and drinking traditions, these people were successful managers who grew up through the Devshirmeh system. There were also numerous devshirmehs among the Ottoman viziers and other Council (Divan) members. Therefore, an important element of the political institutionalization of the Ottoman Empire was the practice of devshirmeh. The practice of devshirmeh took place between the 14 th and 18 th centuries. This practice declined in the 17 th century and then ended in the 18 th century. At the time period determined by the central government, the desired number of Christian boys was gathered from the designated regions. The gathering of the children was carried out by officials authorized by the central government and the Janissary Agha. As a rule, one boy was subjected to being devshirmeh for every forty families. Children were chosen by different criteria (body structure, height, intelligence, morals, character, beauty, etc.). According to these criteria, the best children were collected. Only the single-male child family, clergies and prominent families of the region were exempted from the practice of devshirmeh. The collected children were taken to Istanbul and delivered to Janissary Agha. The best ones were taken to the palace, employed in various jobs and trained at Enderun. The most successful ones in the training process were assigned to different positions under the Sultan's command at the palace. Others were sent as rulers to different parts of the Empire. Devshirmehs, who achieved the highest success in the training and post-training posts, were rising up to senior positions in the state administration. See Caner Sancaktar, Ibid., p. 36 Nationalism has been the fundamental dynamics of development, change, integration, and disintegration in the Balkans since the 19 th century. 
Both the dominance of the Ottoman Empire over the Balkan peoples and that the Balkan peoples embarked on national struggle for independence against the Empire from the early 19 th century and the subsequent efforts to build their own "nation-state", inevitably initiated and developed a process that gave rise to the "Ottoman / Turkish / Muslim" antagonism in the Balkans. Therefore, it is worth noting that both behind the happenings experienced during the Ottoman withdrawal from the Balkans and the ethnic and religious conflicts experienced in the Balkans in 1990s, the process of being purified from the Ottoman past/heritage at both political, cultural and ethnic levels, in other words, "De-Ottomanization" process/efforts, lies. On the other hand, in the first stage of the post-Cold War era, domestic and foreign developments encouraged Neo-Ottomanist ideas and discourses in Turkish foreign policy. The rise of Neo-Ottomanist discourse in Turkish political life in the early 1990s arises from both the actualisation of changes in Turkish foreign policy that promoted the emergence of critical ideas and alternative discourses and a series of developments (the collapse of the bipolar international system, new independent Turkic republics in Central Asia, Bosnia-Herzegovina War, the rise of Kurdish ethnic nationalism in Turkey). Neo-Ottomanism emerged as a result of the political developments in the Balkans, the rise of the Islamic bourgeoisie in Turkey, the widespread religious education especially following the September 12, 1980, military coup, and the economic liberalisation efforts initiated by the Motherland Party, led by Turgut Özal. He sought a geopolitical redefinition of Turkey-West relations (in particular Turkish-American relations) in the cultural geography of a potential Turkish sphere of influence over vast geography, expanding from the Balkans to the China Sea so that the "next century would be a century of Turks." 23 In this context, in the 1990s, Turkey has "entered into more rapprochement with the Ottoman past" in this process, undoubtedly, the attacks against the Turkish and Muslim communities in the Balkans, taking into account the action and reaction principle, have historically led to the development of prejudice in the public opinion and the administrative circles of Turkey as well as against the non-Muslim nations and states of the Balkans. Therefore, in the memories of Turkish foreign policy, the Balkans have often taken their place with negative images. Because, during the interwar period, in terms of spreading area of fascism, and communism after World War II (1939)(1940)(1941)(1942)(1943)(1944)(1945), the Balkans were perceived as a threat to Turkey and a region where minority problems were endless. Considering the second factor, the "population of Balkan origin living in Turkey", that makes the Balkans important for Turkey, together with the process of the Ottoman withdrawal from the Balkans, the waves of migration from the Balkans to Anatolia during the period of the Republic continued for various reasons (oppression and war), creating a population of Balkan origin (Bosnians, Muslim Albanians, Torbesler, Pomaks, and Bulgarian Turks) which has a population of approximately 3 million among Turkey's population 24 , and even a significant number of the relatives of these people still live in the Balkans. Facing massive immigration from the Balkans to Turkey also affect Turkey's ethnic composition, it has also contributed to the construction of the Turkish nation. 
Although while migration between the Balkans and Turkey based on ethnic rather than religious-based (for example, in the Turkish-Greek population exchange 23 (1923)(1924)(1925)(1926)(1927)(1928)(1929)(1930), which was one of the most recent mass immigration, despite a religious-based exchange), these immigrants are considered as ethnic Turks in Turkey and were also considered Greek in Greece. Because, in the historical process, the fact that the most important determinant of ethnic identities in the Balkans has been primarily the religion, not the race or the language. Turkey and Greece have taken an important step towards forming their national states with the necessary exchange of people. In this context, these immigrants were after the arrival subject of intensive Turkification, which resulted in changed family names, inability to use the mother tongue, etc. 25 On the other hand, the political, social, economic and psychological losses caused by these migrations for the Muslim and Turkish communities of Balkan origin, as well as the pressures experienced by the Muslim and Turkish communities living in various Balkan countries, have become one of the most sensitive issues Turkish foreign policy in every period. In this context, Turkey's one other reason for the interest in the Balkans, increasing concerns about the new wave of mass immigration. Turkey has received large amounts of immigration from the Balkans since the period of the Ottoman Empire; one of the most important reasons for this is that when a crisis or war broke out in the region, Turkey has always been a popular destination for Balkan Muslims and Turkish minorities. In other words, in both groups, especially in times of crisis, they see Turkey as the biggest guarantee. Since the high economic and social cost of these migrations laid a burden on Turkey, Turkey carries out policies that support human rights and freedoms in the countries where minorities come from and still reside, to eliminate the need to emigrate to Turkey. Also, mass migration leads to a reduction in the number of the Turkish minority in the Balkans, and this situation is not a desirable situation for Turkey. This approach, which may also be considered as Turkey's taking an active role in the Balkans, has been perceived, from an imperialist point of view, as Turkey's return to the former Ottomans and the Balkans. This was also considered to be Turkey's foreign policy status quo as a sign of a major change. It has therefore been claimed that the new, Islamic and Ottomanist tendency foreign policy identity was now operative and on the arena. 26 third factor that makes the Balkans important for Turkey and Turkish foreign policy, This Muslim and Turkish presence in the Balkans is the result of continuous mass migration from Anatolia in the 14 th and 15 th centuries. It is known that the Ottoman Empire forcibly moved nomadic groups from Anatolia to Rumelia and settled them in certain areas along strategic routes. One reason for this migration and settlement was to secure the conquered territories and main roads and provide raiding forces at the borderlands. Another reason was the policy of sending animalbreeder nomads to the borderlands of Rumelia, who caused a disturbance in Anatolia and caused harm to the villagers. Whatever the reason is, there is no doubt that the Empire played a leading role in Turkifying Rumelia. 
Besides that, especially in the 14 th century, there seems to have been a spontaneous movement of migration from Anatolia to settle in the rich lands of Rumelia. Those who participated in this migration were mostly Turkmen tribes who came to Western Anatolia under Mongol pressure from Eastern and Central Anatolia. It seems that the occupation of Byzantine lands in Western Anatolia and the emergence of Turkmen beyliks was the result of this Turkmen migration movement. According to the 16 th -century archival records, even in the 1520s, the Yoruks in Western Anatolia accounted for one in nine of the entire population. The Turkmen nomads who migrated from East to West Anatolia continuously were causing a population pressure, and the Balkans were an appealing area for the nomads who had to search for new grasslands. 27 Today in Balkan geography, more than 1 million Turkish population (approximately 760 thousand in Bulgaria, 120 thousand in Greece, 78 thousand in Macedonia, 40 thousand in Kosovo, and 70 thousand in Romania) and more than 8 million Muslim population (approximately 2.5 million in Albania, 2 million in Bosnia and Herzegovina, 1.6 million in Kosovo, 900 thousand in Bulgaria, 670 thousand in Macedonia, 150 thousand in Greece, 120 thousand in Montenegro, 235 thousand in Serbia, 70 thousand in Romania) live. 28 In this context, Turks and Muslims living in the Balkans today, as they did yesterday, have positive thoughts and opinions towards Turkey and see Turkey as a "protective state" in almost every period (especially during times of oppression, conflict, and crisis). Likewise, Turkey's public opinion and state are also equally sensitive to these communities in the Balkans because of their historical and cultural ties and are interested in their problems. Hence, Turkey, even if other factors are ignored, Turks and the Muslims minority in the Balkans are directly affected by the developments in the region. After the Cold War, Turkey's apparent interest in the Balkans began to experience a change. In the context of Yugoslavia, with the end of ethnic conflicts in the Balkans, since the early 2000s, while soft power has become an effective key throughout the region, Turkey's foreign policy towards the Balkans in the framework of this new approach has emerged. 29 In addition to seeking the political and security aspects that Turkey's Balkan policy as well, in the last decade, quests based on soft power elements have also been very important. 30 In the period following 2009, when Turkey's foreign policy behaviour towards the Balkans is analysed, it is evident that beyond the major changes in the meaning of discourse, there is indeed an essential continuity in terms of Turkey's shaping of its relations with the region. 31 During Ahmet Davutoğlu's term as prime minister, Turkey displayed a better-structured vision in the region, endeavoured to be more proactive in the face of developments experienced in the region, and emphasised the approach of being in touch with people in the field through the use of soft power capabilities and, in particular, cultural ties. 32 Turkey sought new instruments to expand its sphere of influence, and in this context referred to the common Ottoman history of the Balkans in its discourses. It began to use its cultural and religious ties arising from Ottoman heritage, including kinship relations with the Balkan people, to position itself more strongly in Balkan politics and to balance the influence of the great powers in the region. 
In this context, it can be said that Turkey has relied heavily on soft power instruments in the region. Looking at the "geopolitical position of the Balkans and security" as the fourth factor that makes the Balkans significant for Turkey and Turkish foreign policy, it can first be noted that the main influence of Balkan geopolitics on Turkey is cultural and historical, because the Muslims of the region regarded Turkey as their homeland. The culture that dominated the region for centuries was Ottoman-Turkish culture. At this point, it should also be noted that until the German geographer August Zeune first used the term Balkan Peninsula in 1808, the region was called "European Turkey" (Turquie d'Europe) in most sources and maps. 34 The historical background and the Turks in the region are therefore an important geopolitical factor for Turkey. On the other hand, the geopolitics of this geography, especially in terms of "stability" and "security", has been significant for Turkey and Turkish foreign policy in every period. Indeed, during the period of the Ottoman Empire, the Balkans held strategic importance as an "outpost" protecting Istanbul against Europe. Turkey's special geographical position between Asia, Europe and the Middle East makes it a Balkan, Mediterranean and Middle Eastern country all at the same time. This geographical position also makes Turkey much more sensitive to developments and changes in regional or international political and military balances. 35 The Balkans is thus a very important geographical region connecting Central and Western Europe with Asia, a peninsula adjacent to the Black Sea, the Aegean and the Adriatic and extending into the middle of the Mediterranean; with this feature, the geopolitical value of the Balkans is very high both in terms of security and in terms of trade routes, transportation and stability. In other words, the Balkans serves as a road corridor to the West for Turkey. The road links between Turkey and the European countries with which it has intensive economic and political relations pass through this region. Maintaining security and stability in the Balkans is therefore extremely important both for Turkey's security and for sustaining its economic and political relations. Any prospect of conflict in this geography not only creates risks for regional security but also negatively affects Turkey's relations with the region and with Europe. Secondly, the Balkans is Turkey's gateway to Europe, so "stability" and "security" in the Balkans are of great importance both for Turkey's "national security" and for keeping the path to Europe open. The Republic of Turkey has therefore continuously sought "stability" and "security" in the Balkans since its establishment and has supported and promoted all kinds of political, military, economic and cultural formations that serve this aim. 36 Moreover, the Balkans, one of the hinterlands of Turkey's foreign policy, is also very important in Turkey's rivalry with Greece in the context of its struggle to become a "regional power" in this geography. Given Turkey's deep disagreements with Greece, building more effective relations in the Balkans than Greece, and thereby balancing Greece, is of the utmost importance for Turkey.
Thirdly, since the early 1990s Turkey's relations with the Balkans have gone through a process in which its role and discourse have been redefined. With the end of the bipolar international system, Turkey gained a wider room for manoeuvre in foreign policy, enabling it to develop more effective foreign policy initiatives. 37 In this context, throughout the 1990s Turkey's foreign policy in its immediate neighbourhood showed increased but controlled activity. The wars that took place in the Balkans in the 1990s raised serious security concerns, especially for Turkey, and Turkey therefore played an active role in seeking solutions in the crisis areas of the Balkans, particularly in the conflicts in Bosnia and Kosovo. Turkey's role in the region reached its peak during these crises. Turkey approached the issue of Yugoslavia with similar political and security concerns and tried to play an active role in solving the Yugoslav crisis. In addition, following the internal conflicts in the Balkans, Turkey played an important part in the international peacekeeping missions in the region. To ensure lasting stability in the Balkans and to strengthen the environment of cooperation, Turkey continues to play a leading role in regional affairs and organisations. 38 The new context of global politics after the Cold War and the new outlook of Turkish foreign policy, which drew the focus of regional politics towards the Middle East and the Balkans, opened up new possibilities for Turkey in international relations. 39 In the post-Cold War era, when the Justice and Development Party (JDP), which would bring major changes to Turkey's domestic and foreign policies, came to power, it not only continued the active diplomatic initiatives that had intensified since the 1990s but also had the chance to add new elements to Balkan policy thanks to its increased interaction with the European Union (EU). 40 In this respect, it is possible to say that Turkey's active and inclusive foreign policy in the Balkans since the 1990s has had a positive impact on Turkey-EU relations. Ahmet Davutoğlu, who set the main lines of Turkish foreign policy under the JDP, played a significant role in the theoretical and practical framework of Turkey's Balkan policy. He proposed a new framework for post-Cold War Turkish foreign policy and argued that Turkey should shape its policy in the Balkans above all around the region's two important Muslim peoples: the Bosnians and the Albanians. 41 For Davutoğlu, moreover, cultural cooperation with the Balkans and the protection of the Ottoman and Turkish heritage in the region are particularly important. According to Davutoğlu, strengthening the internal security of these communities, protecting their cultural assets, strengthening their socio-economic infrastructure and maintaining and increasing communication between communities, on the basis of intra-regional balances and inter-regional interdependence, will make Turkey powerful and secure as well as ease tensions and consolidate peace in the region. 42 Like Turkey, the EU also sees the Balkans not only as an area where peace and stability must be maintained but also as a sphere of influence and a test of its capabilities. 43 CONCLUSIONS Turkey is a multifaceted country by virtue of its geographical location, history, culture and the social, political and economic composition of its nation.
In this context, Turkey is at once an Eastern Mediterranean and a Black Sea country, as well as a Eurasian, Near Eastern and Balkan country. The Balkans is a very important region not only for Turkey but also for regional and international powers. As the facts discussed in this article show, the Balkans is a region that cannot be reduced to homogeneous patterns. It must therefore be thought of in terms of the many political facts and factors of a region whose countries have conflicting histories, contradictory presents and uncertain futures. Turkey's relations with the Balkans have developed within the possibilities allowed by the national conjunctures of Turkey and the regional countries and by the international conjuncture. The determining factors of this relationship have been the Republic's foreign policy understanding based on maintaining the status quo, the Balkan countries' experience of Yugoslavia, the opportunities that the multipolar international system offered these countries after the end of the Cold War, the identity crises experienced in the countries of the region and, finally, the evolution of Turkey's foreign policy since 2002 from a status quo orientation towards an active one. In this context, since the end of the Cold War Turkey has been the main, indeed sometimes the only, spokesman in the international arena for the Turks and Muslims of the Balkans. The position of Turkey in the region is therefore special and unique. No doubt this position will always carry disadvantages as well as advantages. However, relations between Turkey and the Balkan countries should not be understood as purely bilateral relations. In fact, the international organisations to which each party belongs, and the responsibilities these impose, have a direct or indirect impact on mutual political and economic relations. Thus, EU enlargement and deepening policies affect Turkey and the Balkan countries to unequal degrees. On the other hand, the EU members Bulgaria and Romania have so far expressed their support for Turkey's EU membership. This might work to their benefit with regard to the EU insofar as the EU recognises a need to enhance its presence in the Black Sea area and insofar as it remains officially favourable towards Turkish membership. 44 The Balkans is therefore of great importance for Turkey, both because of its special position and because of its future potential in the context of regional integration and the EU membership goal shared with all the countries of the region. In the post-Cold War period, the increasing effectiveness of Turkey's foreign policy, together with the linguistic, religious and cultural ties it shares with the Turks and Muslims living in the Balkans, offers Turkey a broad ground for relations in the region. In this context, Turkey holds an important position in the often crisis-prone Balkans thanks to its standing as a model country in the region, with its image of moderation in foreign policy, responsibility, respect for the international order and democratic attitude. On the other hand, the developments the Balkans underwent during the 19th and 20th centuries indicate that the region will continue to be an important arena of struggle for the great powers in the 21st century.
This is because the region occupies a strategic geographical location and has been the geopolitical basin of many historical powers. Within this framework, its special position in the historical process makes the Balkans far more than a mere geographic space for Turkey. Depending on its current national power capacity, Turkey has the potential to shape developments in the Balkans. Turkey, which strives to develop regional cooperation with all countries of the Balkans, increases its effectiveness in the region by creating different platforms and by contributing to these formations both economically and culturally through a multi-layered network of relations and dialogue that includes non-governmental and professional organisations. To be more effective in these efforts, it is essential for Turkey to understand and manage the perceptions that pose risks for the policies it will pursue in the Balkans, a region of great ethnic diversity. In this regard, it is an inescapable necessity for Turkey, whose Balkan policy in the 21st century is based on the principles of "regional ownership" and "comprehensiveness", to claim the Balkan heritage it inherited from the Ottomans, today as in the past, and to play an active role in the region, both in terms of its historical and geo-cultural responsibility and in terms of the strategic horizon of Turkish foreign policy.
Predictors of fall risk in older adults using the G-STRIDE inertial sensor: an observational multicenter case–control study Background Many tools are available for fall assessment, but none yet reliably predicts the risk of falls in the elderly. This study aims to evaluate the use of the G-STRIDE prototype in the analysis of fall risk, defining the cut-off points that predict the risk of falling and developing a predictive model that discriminates between subjects with and without fall risk and identifies those at risk of future falls. Methods An observational, multicenter case–control study was conducted with older people from two public hospitals and three nursing homes. We gathered clinical variables (Short Physical Performance Battery (SPPB), Standardized Frailty Criteria, 4-m walking speed, Falls Efficacy Scale-International (FES-I), Timed Up and Go Test, and Global Deterioration Scale (GDS)) and measured gait kinematics using an inertial measurement unit (IMU). We performed a logistic regression model using a training set of observations (70% of the participants) to predict the probability of falls. Results A total of 163 participants were included, 86 people with gait and balance disorders or falls and 77 without falls; 67.8% were female, with a mean age of 82.63 ± 6.01 years. G-STRIDE made it possible to measure gait parameters under normal living conditions. We report 46 cut-off values for conventional clinical parameters and for those estimated with the G-STRIDE solution. A logistic regression mixed model with four conventional and two kinematic variables allows us to identify people at risk of falls, showing good predictive value with an AUC of 77.6% (sensitivity 0.773 and specificity 0.780). In addition, we could predict the fallers in the test group (the 30% of observations not used to fit the model) with performance similar to conventional methods. Conclusions The G-STRIDE IMU device allows the risk of falls to be predicted using a mixed model with an accuracy of 0.776, similar to the performance of the conventional model. This approach allows greater precision, lower cost and less infrastructure for early intervention and prevention of future falls. Introduction Population aging is the result of successful health and social policies, with those over 65 being the group that has grown most in recent decades. But as the aging population increases, more individuals will be at risk of developing chronic diseases, disability and dependence. Falls are one of the most important geriatric syndromes and one of the main causes of disability; they occur at all ages but especially over 65 years, when frailty, sarcopenia and other multiple causes are more prevalent. According to the World Health Organization (WHO), 684,000 people die every year due to falls, the second leading cause of accidental death in the world, in addition to the important functional consequences they entail. The three highest-risk groups are children, workers and the aged population. However, the elderly are the group with the highest risk of complications, and falls are therefore recognized as a public health problem of the first magnitude [1].
Significant complications accompany falls in the elderly: a psychological impact, which can lead to new falls and secondary functional impairment, and physical consequences such as soft tissue injuries, rhabdomyolysis or head trauma, which occur in 10% of cases; 5% suffer fractures and 1-2% a hip fracture, the injury with the greatest functional impact, mortality and hospital costs. Secondarily, these complications lead to institutionalization, loss of function and quality of life, and direct and indirect health costs [2]. Falls typically arise from the complex interplay of various factors rather than a single cause. Intrinsic factors, such as age, sex and chronic conditions like diabetes, dementia or Parkinson's disease, in combination with the effects of medication and environmental hazards, can disrupt balance and impair an older individual's postural responses, thereby heightening the risk of falls. This risk is particularly pronounced in specific circumstances, such as during transfers or while navigating challenging terrain [3]. In young individuals, falls result from external situations such as sports or work activities. In older adults, however, a minimal external factor can lead to a fall through the combination of multiple intrinsic and extrinsic factors. A multifactorial falls risk assessment allows all these factors to be identified in order to develop individualized, tailored fall prevention plans. Identifying the subjects at risk of falls is crucial, since it allows action in especially susceptible populations and therefore reduces the incidence and prevalence of falls [4], as well as improving quality of life and participation in the community. However, to date, screening tools to detect at-risk subjects have shown variable results [5], and some of them apply exclusively to certain settings (community, acute units, surgical, rehabilitation or residential care) [6][7][8]. Finally, there are few studies in which new diagnostic tools such as posturography, mechanical sensors or inertial sensors are incorporated. Their use, however, has emerged as an approach of great interest, since they allow greater precision and richness of data, in some cases with lightweight sensors that are easy to use, portable and low cost [9][10][11][12]. Recently, the results of a study evaluating the applicability of the G-STRIDE electronic device, based on inertial sensors, in assessing subjects with and without falls have been published. The results showed that the device detects spatio-temporal gait parameters accurately and is capable of discriminating between subjects with and without falls. Furthermore, significant correlations were found between the gait parameters and the functional tests commonly used [13]. Since the precision and discriminative capacity of G-STRIDE are promising, a relevant question remained to be answered: what is the performance of G-STRIDE in predicting fall probability? The objective of this study is therefore to evaluate the use of the G-STRIDE prototype for predicting fall risk, defining the cut-off points that allow fall risk to be predicted, and to develop a predictive model that discriminates between subjects with and without falls while identifying those at risk of future falls. Methodology This is an observational, multicenter case-control study in older adults with and without fall risk. The Research Ethics Committee of the Hospital Universitario de la Paz approved the study (Registration Number: PI-4486).
For a t-test of the difference between two independent means, based on a statistical power of 0.8, an alpha error of 0.05 and an effect size of 0.8, a sample size of 84 subjects was estimated. Participants were included from the outpatient clinics of two general public hospitals and three nursing homes from September 2021 to March 2022. We adopted the World Health Organization definition of a fall [1]: "a fall is an event that results in a person coming to rest inadvertently on the ground or floor or other lower level". According to this definition, we defined the "Fallers Group" as those adults over 70 years who met one of the following criteria:
• One fall with consequences in the last year (requiring medical attention)
• Two or more falls in the same period
• Gait and balance disorder
• Fear of falling or post-fall syndrome.
These criteria were based on those proposed by the American Geriatrics Society (AGS) and the British Geriatrics Society (BGS) to identify patients with a higher risk of falls who should be offered a multifactorial assessment [14]. The participants without falls were volunteers over 70 years who gave informed consent. The exclusion criterion for the study was terminal illness with a life expectancy of less than six months. Gait analysis. The G-STRIDE system Gait analysis was performed with the G-STRIDE system after the clinical assessment in the same visit. For the walking test, the device was placed on the top of the foot (Fig. 1) and, after switching on the device, the participant was invited to walk freely for approximately 30 min. Participants from the outpatient clinics walked around the hospital and those who were institutionalized walked around the nursing home. After this time, the recorded data were stored and the device was switched off. The G-STRIDE device was presented in a previous paper [13]. It comprises an inertial sensor (IMU) and processing electronics that obtain kinematic variables (described below), store them on an SD card and connect with a user interface. During the tests, no subject had any complications or problems derived from the use of the device. The G-STRIDE is a lightweight device with dimensions of 78 × 45 × 38 mm. It comprises an IMU and an Arduino board that samples the data from the IMU during walking. It also features a Secure Digital (SD) memory card to store the data from each test conducted, as well as Wi-Fi capacity to measure and visualize walking data and system status in real time. In addition, a Raspberry Pi board allows off-line analysis of the sensor data stored on the SD card and runs the inertial navigation zero-velocity-update (INS-ZUPT) algorithms to obtain the trajectory and orientation of the foot and to derive all the walking-related variables defined by the clinicians to assess walking. These variables are then stored in a database hosted on the Raspberry Pi itself and are post-processed. The G-STRIDE device was attached to the instep by an elastic band as shown in Fig. 1.
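The article does not detail the INS-ZUPT processing itself, so the following minimal sketch only illustrates the general idea of zero-velocity-update dead reckoning for a foot-mounted IMU; the function names, thresholds, 100 Hz sampling rate and the assumption that accelerometer samples are already rotated into the navigation frame are illustrative simplifications, not the authors' implementation (which also estimates orientation and corrects drift).

```python
import numpy as np

def zupt_trajectory(acc_nav, gyro, fs=100.0, gyro_thresh=0.6, g=9.81):
    """Minimal ZUPT dead-reckoning sketch (illustrative, not the G-STRIDE code).

    acc_nav : (N, 3) accelerometer samples already rotated to the navigation
              frame, in m/s^2 (gravity still present on the z axis).
    gyro    : (N, 3) gyroscope samples in rad/s, used only for stance detection.
    Returns the foot position trajectory (N, 3) in metres and the stance mask.
    """
    dt = 1.0 / fs
    # Stance (zero-velocity) detection: low angular rate -> foot flat on the ground.
    stance = np.linalg.norm(gyro, axis=1) < gyro_thresh

    vel = np.zeros_like(acc_nav)
    pos = np.zeros_like(acc_nav)
    for k in range(1, len(acc_nav)):
        a = acc_nav[k] - np.array([0.0, 0.0, g])   # remove gravity
        vel[k] = vel[k - 1] + a * dt               # integrate acceleration
        if stance[k]:
            vel[k] = 0.0                           # zero-velocity update
        pos[k] = pos[k - 1] + vel[k] * dt          # integrate velocity
    return pos, stance

def stride_lengths(pos, stance):
    """Horizontal distance between consecutive stance phases (stride length)."""
    # Indices where the foot transitions from swing to stance (heel strike).
    strikes = np.where(np.diff(stance.astype(int)) == 1)[0]
    steps = np.diff(pos[strikes][:, :2], axis=0)   # XY displacement per stride
    return np.linalg.norm(steps, axis=1)
```

From such a trajectory, the gait variables listed in the next subsection (stride length, gait cycle time, cadence, clearance, etc.) can be derived on a step-by-step basis.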
Estimated gait parameters The variables estimated by the G-STRIDE using the IMU on the foot of each participant are:
• "Total distance (m)": The total distance travelled during the long free walk, measured in meters.
• "Total time (s)": The total time taken in the long walk, measured in seconds.
• "Total steps": The total number of steps in the free walk.
• "Gait Cycle Time-GCT": The mean Gait Cycle Time (GCT), measured in seconds; it is the time elapsed during a stride.
• "Velocity (m/s)": The mean walking speed computed over the total detected steps, measured in meters per second.
• "Cadence (steps/min)": The mean cadence, measured in steps per minute.
• The time of each cycle phase as a percentage (%) of the GCT; the phases and events are shown in Fig. 2.
• Pitch angles at the start and end of stance/swing: the angles that the foot forms with the ground during the heel-strike and toe-off events (see Fig. 3).
- "Heel strike angle (deg)": The maximum pitch angle at heel strike, measured in degrees.
- "Toe-off angle (deg)": The maximum pitch angle at toe-off, measured in degrees.
• "Clearance (m)": The clearance, or maximum height of the foot with respect to the ground during the swing phase (see Fig. 3); it is obtained as the maximum value observed in Z.
In addition to the above-mentioned parameters, which are computed as the mean on a step-by-step basis, we also estimate the standard deviation (STD), or variability among steps. These STD variables are important for registering the regularity or repetitiveness of walking, with a higher STD indicating that the gait pattern is less stable. The parameter estimation has been validated in [21], showing a stride-length mean relative error below 4%. Statistical analysis The sample complied with normality by the Kolmogorov-Smirnov test, so parametric tests were performed. We report demographic and anthropometric parameters as means and standard deviations for continuous variables (groups compared with Student's t-test) or percentages for discrete variables (groups compared with the Chi-square test). Statistical analysis was carried out with SPSS v.27 (Copyright© 2013 IBM SPSS Corp.) and the R language for statistical computing (R Foundation for Statistical Computing, Vienna, Austria) [22]. Logistic regression method for risk-of-fall cut-off and classification As the FALLS variable is binary, we used logistic regression to model and explain the variables that cause a fall and to predict future falls by classifying observations from new participants. This approach is used for cut-off point finding and also for multi-variable regression. For training we used a cross-validation approach with 50 randomly generated subsets of all observations in our database. Each training subset contained 70% of the participants in the database. The remaining random subsets, with 30% of the observations, were used for testing the 50 generated models. The confusion matrices and derived statistics were the average of the prediction results of all 50 fitted models, including the worst, the best and all other models in between (i.e. not just the best-model results). The total number of testing observations was 2445 (50 × 0.3 × 163). This systematic methodology generates stable statistics (they do not change with new iterations), so the accuracy values given in this paper are reliable estimates of the expected performance. The logistic regression models can incorporate demographic variables (such as age, gender, etc.)
to address any potential imbalances in the sample distribution, particularly for variables such as age or height, which directly influence walking speed. Logistic regression is a valuable tool for categorical prediction, providing probability scores for observations. However, it has some limitations. When multiple collinear variables are present, the stability of the coefficients during convergence to a fit may be compromised. Logistic regression also struggles when dealing with a large feature space or a substantial number of categorical variables. Nonetheless, the regression models developed in this study exhibit reasonable performance with our dataset. It should be noted, however, that adding new variables to the models can decrease performance; there is therefore a trade-off between model performance and the number of variables employed. Results Table 1 shows the baseline characteristics of the 163 study participants (86 in the fallers group). The mean age was 82.6 ± 6.2 years, with the fallers group being older, and 118 (72%) were women. Cut-off points and faller detection performance Cut-off points are able to separate non-fallers from fallers on a single-variable basis. A total of 46 cut-off values are presented in Tables 2 and 3. They were computed individually, with a specific logistic regression for each variable. This list contains classical clinical parameters (Table 2) and those estimated with the foot-mounted IMU (G-STRIDE) (Table 3). The intercept and coefficients are also included in the tables to indicate the direction of the effect. A negative coefficient means that larger values of the variable give a lower probability of falling; conversely, if the coefficient is positive, the probability of falling increases as the parameter increases. As can be seen, most variables are significant. The accuracy of fall risk estimation using just one cut-off is good but limited (68.5% for FES1, 68.9% for SPPB Total, 69.51% for SPPB equilibrium, 66.3% for FRG Total, 71.18% for 'StrideLength-SL(m)' or 72.4% for 'StepSpeed (m/s)'). Note the promising classification power (above 70%) of the last two G-STRIDE (IMU) variables. At least slightly better performance is expected when several variables are used at the same time, as shown in the next subsection using a multi-variable logistic regression approach. Next, we present the logistic regression models for three types of models: 1) using only conventional variables (clinical scales and variables), 2) using G-STRIDE kinematic data alone, and 3) a mixed model integrating the clinical scales and the kinematic variables obtained from the G-STRIDE. Logistic regression using clinical variables Using the conventional variables, we fitted on the training data the logistic regression model represented in Table 4, where the first column contains the coefficients of the model. Most variables are discriminant individually (as seen in the cut-off Table 2) but appear less discriminant when combined with others (e.g. SPPB total and "Speed 4 m walk", with p-values greater than 0.9). For this combination of variables, the accuracy is 78.4% (Table 7), which is better than using any single cut-off classification (all below 70%, as seen in the last section).
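As a rough illustration of the evaluation protocol described above (50 random 70/30 train/test splits, with the confusion matrices averaged over all fitted models rather than taking only the best one), a sketch could look as follows. The use of scikit-learn, the variable names and the stratified splitting are assumptions made for illustration; they are not the authors' actual R/SPSS code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def repeated_split_evaluation(X, y, n_splits=50, test_size=0.3, seed=0):
    """Average the confusion matrix over many random 70/30 train/test splits."""
    rng = np.random.RandomState(seed)
    total_cm = np.zeros((2, 2))
    for _ in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=rng.randint(1_000_000))
        # Note: scikit-learn applies L2 regularization by default,
        # unlike a plain (unpenalized) GLM fit.
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        y_hat = model.predict(X_te)            # default 0.5 probability cut-off
        total_cm += confusion_matrix(y_te, y_hat, labels=[0, 1])

    tn, fp, fn, tp = total_cm.ravel()
    return {
        "accuracy":    (tp + tn) / total_cm.sum(),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "mean_confusion_matrix": total_cm / n_splits,
    }
```

Calling this routine with, for example, the four conventional and two kinematic variables of the mixed model would produce the kind of averaged accuracy, sensitivity and specificity figures reported in Table 7.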
Logistic regression using G-STRIDE kinematic variables A selection of G-STRIDE kinematic variables was made using an iterative process that excluded individual parameters which did not affect model performance because of their correlation with other parameters. At the end we obtained a selection of parameters (including both means and standard deviations, STD), which were fitted in the logistic regression model represented in Table 5, where the first column contains the coefficients of the model. The most relevant variables are GCT STD and Stance-Foot Flat-time STD. The other variables appear less significant, but this is a dilution effect due to collinearity. The accuracy for this model is 68.0% (Table 7), which is not as good as expected; recall that in the cut-off analysis single parameters performed as follows: 71.18% for 'StrideLength-SL(m)' or 72.4% for 'StepSpeed (m/s)' (Table 5). We estimate that it could be possible to improve performance by reducing the number of terms, especially by selecting the parameters that are easier to estimate (those with lower estimation error, below 5%), or by improving the estimation algorithms to make estimation more reliable for the most challenging parameters. Logistic regression using mixed clinical & kinematic variables Finally, using a mixed logistic regression model with 4 conventional and 2 G-STRIDE kinematic variables, we obtained the coefficients represented in Table 6. The most relevant variables are FES1, FRG Physical activity and StepSpeed. The accuracy is 77.6% (Table 7). Comparing the three models: conventional, G-STRIDE IMU and mixed regression models The capability for classifying or predicting the probability of a fall is shown in the form of histograms, for easy interpretation, in Fig. 2. An ideal, 100% perfect classification would correspond to a full red histogram to the right of the vertical cut-off line (100% true positives) and a full green histogram to the left of the vertical cut-off line (100% true negatives). However, some histogram tails evidently cross the cut-off line and represent the false positives or false negatives (Fig. 6). The confusion matrices derived for each of the three models (Conv, IMU and Mix) are presented in Table 7. The Mix model, which includes only a few parameters (6), is able to perform as well as the complete set of conventional measures. It is also important to highlight the accuracy of other individual tests, such as the SPPB and the 4-m walk test (last two columns in Table 7), which is good, although lower than that of the conventional and mixed models, which include other complementary parameters. Discussion The objective of this study was to compare the predictive performance of clinical parameters obtained by conventional clinical evaluation and kinematic variables obtained by an electronic device based on inertial sensors (G-STRIDE) to identify fall risk in elderly subjects, defining cut-off points for the analyzed variables and regression models that allow future fallers to be predicted. The results of the study provide the cut-off points for fall risk both for the conventional clinical variables and for the kinematic ones obtained by G-STRIDE. We investigated three regression models that allow subjects at risk of future falls to be identified, with accuracies of 0.784 (conventional clinical model), 0.680 (model with G-STRIDE) and 0.776 (mixed model).
We present 8 cut-off points for the clinical-functional variables assessed during conventional fall evaluations. In particular, it is important to note that gait speed has the highest coefficient, demonstrating that the probability of falls changes most strongly as this parameter changes. Another essential point is that the proposed cut-off point (0.849 m/sec) agrees with those reported in the literature [23,24]. For other parameters, such as the SPPB, there is also agreement with what was previously published regarding the cut-off point that defines the risk of adverse events [18,24]. In the case of another variable widely used in the assessment of falls, the TUG, the cut-off point coincides with that proposed by some authors [25], although other researchers suggest a higher cut-off point to detect future falls or disability [26,27]. Regarding the cut-off points for the parameters obtained by the device, this is the first approximation of its kind, and it will make it possible to know, after evaluating a patient with the device, which aspects of gait are pathological and therefore which should be the target of tailored interventions, facilitating decision-making. In addition, it may have a future application in the development of tools or apps that facilitate the visualization of results, streamlining and simplifying the clinician's decision-making. Although there is an increasing number of studies using inertial sensors for gait analysis [11,12], only some evaluate cut-off points, suggest a specific analysis of stance sub-phases, or improve TUG performance with an "instrumented TUG" [28,29]. Regarding predictive models that allow subjects at risk of future falls to be identified, we studied three models, and while the three showed similar results, the mixed model provides more information on the clinical and biomechanical features related to falls, which gives a more comprehensive fall assessment. This screening tool for fall risk assessment could be used in both the community and residential settings where the device has been evaluated. Unfortunately, screening tools published to date have limited or insufficient ability to predict future falls [5,30]. Several reviews analyse different screening tools, and it appears that, in community settings, the TUG is the most widely recommended, with a sensitivity varying between 0.68 and 0.76 according to the studies, a specificity between 0.49 and 0.74 and an AUC between 0.72 and 0.80 [26,31,32]. These differences may be due not only to the methodology used but also to the type of patients or the cut-off points chosen; differences are also observed for other tools, so authors recommend using several complementary tools [30,32]. The mixed clinical-kinematic G-STRIDE model shows an accuracy of 0.776. This approach to fall risk assessment is novel, since there are no predictive mixed models that combine the main clinical/exploratory risk factors with those collected by sensors in the real world for fall prediction (Fig. 6: the capability for predicting the probability of fall for the three models, i.e. the conventional model, the IMU model and the Mix model). We found an interesting study by Martínez-Ramirez et al. that proposes the use of a mixed model with trunk kinematic parameters during walking, finding a more accurate frailty classification, as the model could improve the early detection of prefrail status [33].
On the contrary, there are also a limited number of studies exploring predictive models based on inertial systems or other sensor methodologies, and although there is a wide variety of technological solutions, they have been tested at different locations and with different measurements, producing heterogeneous results that are insufficient to reach firm conclusions [34]. In Fig. 2, good separability between fallers (red) and non-fallers (green) can be seen, using 0.5 as the cut-off point. The output of the logistic regression is a probability: values below 0.5 are predicted as "non-faller" and values above 0.5 as "faller". The tails on the wrong side (red tail to the left and green tail to the right) can be false positives or negatives, or could even be a warning of future fall risk for participants who have not fallen to date, or of no fall risk for previous fallers. It is necessary to test these models in future studies to confirm the results. This has strong implications, since the same results can be obtained from a reduced number of tests with complementary conventional measurements, such as the FES1, equilibrium, strength and the patient's reported physical activity, complemented by some IMU-based features. In fact, the scientific literature identifies various measuring instruments as possible predictive tools for falls in the elderly [35,36], but these retain the biases of human subjectivity or lack the precise detail that the G-STRIDE device can offer. The present study has several strengths, such as the sample size, the representation of advanced age, having been carried out in different settings (outpatient clinic, nursing homes and home) and the collection of numerous functional tests. However, it must also be pointed out that it would have been interesting to follow up the participants to track their clinical evolution, detect future falls and evaluate the proposed predictive models. We believe this could be the objective of a future study. We can therefore conclude that the G-STRIDE IMU device allows up to 17 gait parameters to be evaluated, identifying 24 cut-off points, and the risk of falls to be predicted using a mixed model with an accuracy of 0.776. In this way, the G-STRIDE IMU device contributes to improving fall evaluation in the elderly in a more flexible and agile way, under real-life conditions and with greater accuracy.
Fig. 1 G-STRIDE IMU. Left: several units used for the tests. Right: IMU attached to a participant's foot.
Additional spatial parameters estimated by the G-STRIDE:
• "Stride Length-SL(m)": The stride length (distance from one stance position to the next stance of the same foot), measured in meters; it is the distance travelled during a stride (see Fig. 4).
• "StepSpeed(m/s)": The forward speed of the foot during the swing phase only, measured in meters per second; it is calculated as the quotient of Stride Length and Gait Cycle Time.
• "2D Path(m)": The path length of the foot in the horizontal plane during a step (always equal to or larger than SL), see Fig. 5; it is calculated as the position increment in XY.
• "3D Path(m)": The path length of the foot in 3D space during a step (always equal to or larger than SL and the 2D path).
Fig. 4 Diagram of gait spatial parameters: step and stride length.
Table 2 Cut-off points for classical clinical parameters. Table 3 Cut-off points for IMU (G-STRIDE) parameters. For both tables: CutOff = the cut-off or limit value used in the analysis; intcpt = the intercept or constant of the regression model; cofhim = the coefficient associated with the specific variable; with (Z) = the relationship or association of the variable with the other variables or the outcome; p = the p-value used to evaluate the statistical significance of the coefficients (p < 0.05); itself = the value or outcome of the variable itself.
Table 4 Logistic regression model using conventional variables. Table 5 Logistic regression model using IMU (G-STRIDE) variables. Table 6 Logistic regression model using mixed variables. For Tables 4-6: OR = Odds Ratio, CI = Confidence Interval.
Table 7 Statistics comparing the three models: Conventional (Conv model), G-STRIDE IMU (IMU model) and Mix model.
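The single-variable cut-offs in Tables 2 and 3 follow directly from the fitted intercept and coefficient of each one-variable logistic regression: the predicted probability crosses 0.5 exactly where the linear predictor is zero, so the cut-off is -intercept/coefficient. A minimal illustration (the numbers below are placeholders, not values taken from the tables):

```python
def cutoff_from_logistic(intercept, coef):
    """Value of x at which a one-variable logistic model predicts p = 0.5."""
    # p = 1 / (1 + exp(-(intercept + coef * x))) = 0.5  <=>  intercept + coef * x = 0
    return -intercept / coef

# Hypothetical example: a negative coefficient means higher values lower fall risk,
# so observations below the cut-off are classified as fallers.
print(cutoff_from_logistic(intercept=4.2, coef=-5.0))   # -> 0.84
```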
Mechanical regulation of epigenetics in vascular biology and pathobiology Vascular endothelial cells (ECs) and smooth muscle cells (VSMCs) are constantly exposed to haemodynamic forces, including blood flow-induced fluid shear stress and cyclic stretch from blood pressure. These forces modulate vascular cell gene expression and function and, therefore, influence vascular physiology and pathophysiology in health and disease. Epigenetics, including DNA methylation, histone modification/chromatin remodelling and RNA-based machinery, refers to the study of heritable changes in gene expression that occur without changes in the DNA sequence. The role of haemodynamic force-induced epigenetic modifications in the regulation of vascular gene expression and function has recently been elucidated. This review provides an introduction to the epigenetic concepts that relate to vascular physiology and pathophysiology. Through the studies of gene expression, cell proliferation, angiogenesis, migration and pathophysiological states, we present a conceptual framework for understanding how mechanical force-induced epigenetic modifications work to control vascular gene expression and function and, hence, the development of vascular disorders. This research contributes to our knowledge of how the mechanical environment impacts the chromatin state of ECs and VSMCs and the consequent cellular behaviours. Introduction The human body is constantly exposed to various types of mechanical forces, such as the stretching of skeletal muscle, the compression of cartilage and bone and the haemodynamic forces on blood vessels [1]. Haemodynamic forces are generated from the pulsatile nature of normal blood pressure and blood flow which can be characterized as cyclic stretch, shear stress and hydrostatic pressure [2]. Although vascular endothelial cells (ECs) and vascular smooth muscle cells (VSMCs) are exposed to both cyclic stretch and shear stress, ECs are primarily subjected to shear stress resulting from blood flow, whereas VSMCs are subjected to cyclic stretch resulting from pulsatile blood pressure. These haemodynamic forces are sensed by mechanoreceptors, which play an initial role in sensing various mechanical stimuli as signals that are then transmitted to the interior of the cell via intracellular signalling pathways. This process is known as mechanotransduction [3]. Many putative mechanoreceptors have been proposed, including ion channels, integrins, receptors of tyrosine kinases (RTKs), G protein coupled receptors, apical glycocalyx, primary cilia and adhesion molecules. In response to various mechanical stimuli, these mechanoreceptors signal through adaptor molecules to activate upstream signalling molecules, such as Ras, which then mediate intracellular signalling through phosphorylation cascades, eventually leading to the morphological and functional changes to maintain homeostasis. These changes include the regulation of gene expression, differentiation, proliferation, angiogenesis and migration. Vascular cell dysfunction because of the impairment of these changes may lead to a pathophysiological state that contributes to the development of vascular disorders, such as atherosclerosis and hypertension [4]. Since Conrad Waddington first proposed the concept of 'epigenetics' in 1942, research has advanced from genotype to phenotype [5]. Epigenetics refers to the study of heritable changes in gene expression and phenotype (i.e. 
appearance) that occur without changes in the DNA sequence; such changes regulate the dynamics of gene expression [6]. Epigenetics offers a new perspective on gene regulation that broadens the classic cis/trans paradigm of transcriptional processes and helps to explain unresolved problems from limitations of gene expression [6]. Extensive evidence has revealed that epigenetic processes play crucial roles in the development of various diseases, including cancers and cardiovascular and neurological disorders [7]. Studies investigating the role of epigenetics in vascular biology and pathophysiology have emerged only recently. The key processes that comprise epigenetic regulation are DNA methylation, histone modification/chromatin remodelling and post-transcriptional gene regulation by RNA-based mechanisms, such as non-coding RNAs (ncRNAs) [6]. DNA methylation is the addition of a methyl group from S-adenyl methionine (SAM) to the fifth carbon of a cytosine residue to form 5-methylcytosine (5-mC) in the context of CpG dinucleotides [8]. The hypermethylation of CpG islands results in the stable silencing of gene expression. Histone proteins are modified by lysine histone acetyltransferases (HATs) or histone deacetylases (HDACs) at their N-terminal regions, a process that influences the accessibility of the DNA to the transcriptional machinery [9]. NcRNAs, such as microRNAs (miRNAs), are recently emerging endogenous, non-coding, single-stranded RNAs of 18-22 nucleotides that constitute a novel class of gene regulators. MiRNAs bind to their target genes within their 3′-untranslated regions (3′-UTRs), leading to the direct degradation of the messenger RNA (mRNA) or translational repression by a perfect or imperfect complement respectively [10]. Here, we discuss epigenetics as a complex interaction between the genome, surrounding environment and mechanical forces, such as haemodynamic forces, in vascular physiology and pathophysiology. This article gives an introduction and provides new insights into the role of mechanical force-induced epigenetic modifications in vascular cell gene expression, function and pathophysiology by presenting studies of eNOS gene expression, differentiation, angiogenesis, migration, atherosclerosis and hypertension. We also provide in vivo evidence that documents the importance of epigenetic modifications in EC and VSMC gene expression and function in response to haemodynamic force. In conclusion, we propose haemodynamic force to be a critical epigenetic manipulator in modulating vascular biology and pathophysiology in health and disease. Vascular mechanobiology Blood vessels are constantly exposed to various types of haemodynamic forces, including fluid shear stress, cyclic stretch and hydrostatic pressure, which are induced by the pulsatile nature of blood flow and pressure [2]. Fluid shear stress is the frictional force per unit area from flowing blood and acts on the ECs present on the luminal surface of the vessel [11]. Cyclic stretch arises because of blood pressure, causing circumferential stretching of the vessel wall and affects both the ECs and the VSMCs that surround the endothelium in arteries ( Fig. 1) [2,4]. Hydrostatic pressure per se might also alter cellular physiology, but it is less important than shear stress or cyclic stretch. An increasing number of studies indicate that haemodynamic forces utilize mechanotransduction to influence endothelial physiology, the morphology of the embryonic heart and blood vessels and atherosclerosis [3]. 
In this section, we discuss the cellular responses to shear stress and tensile stress in ECs and VSMCs respectively. Shear stress Shear stress modulates vascular morphogenesis The pattern of blood flow in the development of heart tissue and vessels has been shown to play a critical role in vascular morphogenesis. By analysing intracardiac flow forces in the zebrafish embryo, which is an ideal model for investigating the cellular and molecular events of cardio-vasculogenesis because of the ability to visually inspect the embryo and the accessibility of genetic modifications [12], Hove et al. demonstrated that blood flow influences the development of the heart and vascular vessels [13]. Furthermore, Lucitti et al. provided elegant evidence that fluid shear stress mediates the rearrangement of a primitive vascular plexus into a mature vascular tree in early mouse embryos [14]. The latter study suggested that fluid shear stress-induced vessel remodelling is mediated by nitric oxide synthase 3 (NOS3, endothelial NOS, eNOS). North et al. [12] and Adamo et al. [13] demonstrated that fluid shear stress is able to promote embryonic haematopoiesis, increase the expression of haematopoietic markers and induce colony formation. (Fig. 1: Schematic diagram showing the generation of shear stress (parallel to the endothelial surface), normal stress, i.e. pressure (perpendicular to the endothelial surface), and circumferential stretch because of blood flow and pressure; the panel labels the media and adventitial layers of the vessel wall; from Chiu and Chien [2].) Knockdown of eNOS decreased the ability of haematopoiesis in haematopoietic stem cells (HSCs). The finding that fluid shear stress-induced eNOS enhances haematopoiesis has been found to be conserved from fish to mammals [15,16]. Shear stress regulates physiological functions In addition to vascular morphogenesis and haematopoiesis, shear stress-induced mechanotransduction in ECs also regulates cellular functions, including cell proliferation/survival, metabolism, cytoskeletal reorganization and cell morphology [11]. For in vitro studies, a parallel-plate flow channel is created using a gasket with a rectangular cut-out that is made with a thin silicone membrane and has a uniform channel height along the flow path [11]. The parallel-plate flow channel can be used to study the effects of steady shear at 12 dyne/cm², a 'static' control with shear stress at 0.5 dyne/cm², pulsatile shear at 12 ± 4 dyne/cm², and reciprocating (oscillatory) shear at 0.5 ± 4 dyne/cm². Disturbed shear is generated in a step-flow channel [17]. Interestingly, different patterns of shear stress produce opposite effects on these functions. Under laminar (steady) shear, many events are transiently induced, including the production of reactive oxygen species (ROS), activation of GTPases and pro-inflammatory pathways, such as JUN N-terminal kinase (JNK) and NF-κB [18], and production of adhesion molecules, such as monocyte chemotactic protein-1 (MCP-1) [19]. These events eventually decrease to substantially below baseline levels compared with static controls. In contrast, these events are continuously stimulated by disturbed shear and oscillatory shear [4,20]. Cell cycle regulators, such as p53 and p21, are up-regulated by laminar shear, leading to cell cycle arrest [21]. Under disturbed shear and oscillatory shear, bromodeoxyuridine (BrdU) incorporation is markedly enhanced, resulting in increased cell proliferation [20].
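For reference, the shear stress levels quoted above for the parallel-plate flow channel are usually set through the standard plane Poiseuille relation for this geometry (a textbook formula, not stated in the original text):

τ_w = 6 μ Q / (w h²)

where τ_w is the wall shear stress, μ the viscosity of the perfusing medium, Q the volumetric flow rate, and w and h the channel width and height. With the channel geometry fixed, a target shear stress such as 12 dyne/cm² is obtained simply by adjusting Q, and the pulsatile and oscillatory waveforms correspond to time-varying Q around the stated mean values.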
Once ECs are exposed to laminar shear, their cytoskeletal fibres undergo remodelling to align the cell in the direction of the shear flow. This remodelling of cytoskeletal fibres is not observed under disturbed flow, but the cells instead appear in a random orientation, similar to that observed under static conditions [17,22]. The cdc42 GTPase and the Rho signalling pathway are involved in shear stress-induced cytoskeletal remodelling [23,24]. Shear stress is involved in the development of vascular pathologies Endothelial dysfunction may lead to a pathophysiological state that contributes to the development of vascular disorders, including atherosclerosis and thrombosis and their complications [2,4]. The possible role of haemodynamic forces in endothelial dysfunction was first suggested on the basis of the observation that the earliest atherosclerotic lesions characteristically develop at arterial branches and curvatures where the shear is low and disturbed [2]. These areas include the carotid bifurcations and the branch points of the coronary, infrarenal and femoral arteries. Recent studies indicate that disturbed shear and oscillatory shear induce sustained activation of a number of atherogenic genes in ECs to promote the development of atherosclerosis [2,4]. Disturbed shear induces EC dysfunction, resulting in the expression of adhesion molecules, such as intercellular adhesion molecule-1 (ICAM-1), vascular cell adhesion molecule-1 (VCAM-1) and E-selectin (E-sel) and chemokines, such as MCP-1. Together, these adhesion molecules and chemokines recruit leukocytes and monocytes, thereby initiating a pro-inflammatory process within the vessel wall. KLF-2 is a key shear stress-induced transcription factor that governs the expression of shear stress-induced genes in ECs [25]. When ECs are subjected to laminar shear stress, KLF-2 is induced and plays anti-inflammatory and anticoagulant roles. In contrast, disturbed shear and oscillatory shear diminish the expression of KLF-2 and cause the dysfunction of ECs [26]. Based on the above in vitro studies, disturbed/oscillatory shear and laminar shear induce differential molecular responses in ECs, leading to preferential sites of atherosclerotic lesion formation. Tensile force Unlike the well-established responses of ECs subjected to fluid shear stress, the VSMCs response to cyclic stretch is less clear; however, the two processes share many similar features. As with shear stress, the mechanoreceptors, such as integrins, RTKs and ion channels, can sense tensile force from blood pressure and transmit the stimuli into intracellular signalling pathways [27]. Several reports have described the important role of cyclic stretch on VSMC gene expression and cellular functions, such as proliferation/apoptosis, migration/alignment and differentiation (phenotypic switch) [28]. In conditions of abnormal blood pressure, such as hypertension, the vascular wall is chronically subjected to exaggerated tensile force by high blood pressure, leading to vascular remodelling, arterial stiffness and calcification [29]. Cyclic stretch modulates gene expression in VSMCs During arterial remodelling, the matrix metalloproteinases (MMPs) play a prominent role in mediating changes to the extracellular matrix (ECM). Asanuma et al. demonstrated that human cultured VSMCs subjected to physiological levels (5%) of stationary or cyclical (1 Hz) uniaxial cyclic stretch had significantly decreased protein and mRNA levels of MMP-2 and MMP-9 after 48 hrs [30]. 
This report indicates that VSMCs respond selectively to different types of cyclic stretch and that cyclic stretch-induced alterations in MMPs may be involved in the remodelling of the ECM surrounding the vasculature. Cyclic stretch regulates the functions of VSMCs Vascular smooth muscle cell hypertrophy, hyperplasia, migration and ECM remodelling are considered key processes in the development of hypertension. Watase et al. used a custom-designed plexiglass pressure chamber and subjected VSMCs to 105 or 120/90 mm Hg pressure at a frequency of 60 cycles/min. (0.5 sec. systole, 0.5 sec. diastole) [31]. Interestingly, VSMCs displayed a more elongated morphology and a significant increase in cell number when they were continuously exposed to pressure until day 9. This was the first study to reveal the role of cyclic stretch in modulating the phenotypic modification of VSMCs. A related study reported that cyclic stretch affected the proliferation of VSMCs and increased the expression of the specific contractile protein h-caldesmon in VSMCs [32]. These data suggest that cyclic mechanical stimulation has dual effects on VSMCs, modulating their proliferation and differentiation. Moreover, uniaxial cyclic stretch causes VSMCs to align in a direction perpendicular to the direction of stretch. The mechanism of cyclic stretch-induced cytoskeletal remodelling and alignment remains to be elucidated [28]. Li et al. further demonstrated that cyclic stretch (60 cycles/min.; 5, 15, or 20% elongation) enhances VSMC migration by promoting the translocation of protein kinase C-δ (PKC-δ) to the cytoskeleton [33]. PKC-δ-deficient VSMCs, which were cultured from PKC-δ-/- mice, were unable to migrate in response to cyclic stretch. This study indicates that PKC-δ is a key signal transducer in the modulation of VSMC migration. Epigenetics The original concept of epigenetics was coined by Conrad H. Waddington. In 1939, in his student handbook entitled 'An Introduction to Modern Genetics', he suggested 'the causal interactions between genes and their products, which bring the phenotype into being' [5]. More recently, epigenetics has been redefined as the study of heritable changes in gene expression or cellular phenotype that occur without alterations in the DNA sequence [6]. These changes are achieved by covalent and non-covalent modifications of DNA and by histone protein-mediated modifications of the entire chromatin structure. Epigenetic modifications can be classified into the following three main categories: DNA methylation, histone modification/chromatin remodelling, and RNA-based mechanisms. RNA-based mechanisms are a newly recognized type of epigenetic modification in which gene expression is regulated by ncRNAs. Methylation DNA methylation was first discovered in mammals and occurs via the addition of a methyl group from SAM to the fifth carbon of a cytosine residue to form 5-mC [8]. DNA methylation occurs almost exclusively in the context of CpG dinucleotides. CpG dinucleotides tend to cluster into so-called CpG islands [34], which are defined as regions of greater than 200 bases with a GC content of at least 50% and a ratio of observed to statistically expected CpG frequencies of at least 0.6. CpG dinucleotides are quite rare in mammalian genomes (~1%). In the human genome, ~60% of gene promoters are associated with CpG islands, and in normal cells these islands are generally unmethylated [7]. The methylation of CpG islands results in the stable silencing of gene expression. During early embryonic development, CpG islands undergo differential methylation [35].
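The CpG-island criteria just quoted (length greater than 200 bp, GC content of at least 50%, observed/expected CpG ratio of at least 0.6) can be stated operationally. The sketch below applies them to a DNA string; it is illustrative only and uses the commonly cited definition of the observed/expected ratio (observed CpG count divided by C-count × G-count / length), which the text itself does not spell out.

```python
def is_cpg_island(seq):
    """Apply the classic CpG-island criteria to an uppercase DNA string."""
    n = len(seq)
    if n <= 200:                               # must be longer than 200 bases
        return False
    c = seq.count("C")
    g = seq.count("G")
    cpg = seq.count("CG")                      # observed CpG dinucleotides
    gc_content = (c + g) / n
    # Expected CpG count if C and G were independently distributed: (C * G) / length
    expected = (c * g) / n if c and g else 0
    obs_exp = cpg / expected if expected else 0
    return gc_content >= 0.5 and obs_exp >= 0.6

# Hypothetical example: a CpG-rich, promoter-like stretch of sequence
print(is_cpg_island("CGGCGCGCAGCGCGGCGC" * 20))   # -> True
```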
These marks are important for early embryonic development and the establishment of totipotency or pluripotency as well as for health later in life. The methylation of CpG islands plays a crucial role in genomic imprinting. The mechanisms of DNA methylation include the following three steps: enzymes catalyse the addition of a methyl group onto cytosine (methylation), enzymes remove the methyl group (demethylation), and methylation-associated proteins recognize and bind to the methyl group to eventually influence gene expression [8]. DNA methylation is catalysed by a family of DNA methyltransferases (DNMTs), which includes DNMT1, DNMT2, DNMT3a, DNMT3b and DNMT3L; however, only DNMT1, DNMT3a and DNMT3b possess methyltransferase activity [7]. DNMT3L lacks intrinsic methyltransferase activity, but it is able to interact with DNMT3a and 3b, leading to the methylation of retrotransposons [36]. DNA methylation-associated proteins, including methyl-CpG-binding domain (MBD) proteins, ubiquitin-like PHD and RING finger domain (UHRF)-containing proteins, and zinc-finger domain proteins, can bind to 5-mC with high affinity to modulate gene transcription via cis and trans interactions. In vascular ECs, methylated CpG islands have been identified in the promoters of eNOS and VEGFR2 (vascular endothelial growth factor receptor 2), and MBD2 can bind to these methylated CpG islands and suppress gene expression. Loss of MBD2 leads to the activation of eNOS and VEGFR2 gene expression and triggers pro-angiogenic signalling pathways [37]. This evidence underscores the importance of DNA methylation in vascular function. Histone modification and chromatin remodelling In the nuclei of eukaryotic cells, genomic DNA is packaged into chromatin, which is composed of DNA and proteins. The unit of chromatin is the nucleosome, which consists of an octamer of four core histone proteins (H2A, H2B, H3 and H4) that is wrapped by ~147 base pairs of DNA in 1.64 left-handed turns [9]. There are 14 contact points between the histones and DNA per nucleosome [38]. The striking feature of histones is their N-terminal 'tails', which are unstructured. A large number of different types of modified residues are found on the histone tails, and all of the histones are subject to post-translational modifications. There are at least eight distinct types of modifications, including lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, lysine ubiquitylation, lysine sumoylation, ADP ribosylation, arginine deimination and proline isomerization [9]. In general, acetylation, methylation, phosphorylation and ubiquitylation of histones have been implicated in the activation of transcription, whereas methylation, ubiquitylation, sumoylation, deimination and proline isomerization of histones have been implicated in the repression of transcription. Among these modifications, histone acetylation is the best studied. In mammalian cells, acetylation is performed directly by histone acetyltransferases (HATs), which use acetyl-coenzyme A as the acetyl donor. HATs are divided into three families, including GNAT, MYST and CBP/p300 [9,39]. Histone tails are acetylated by HATs, resulting in the neutralization of their positive charge and the relaxation of the chromatin structure. This change in the chromatin structure increases the accessibility of transcription factors to their target genes. Spin et al. 
further demonstrated that p300 is involved in regulation of VSMC phenotypic switch and implicates the complex role of p300 in chromatin remodelling [40]. In contrast, acetyl groups can be removed from histones by histone deacetylases (HDACs). There are three distinct families of HDACs: class I (HDAC1-3 and HDAC8), class II (HDAC4-7 and HDAC9-10) and class III [NAD-dependent enzymes of the sirtuin (SIRT) family (SIRT1-7)] [41]. Class I HDACs are expressed ubiqui-tously in the nucleus and display high enzymatic activity. Class II HDACs are further subdivided into IIA and IIB. Class IIA HDACs (HDAC4-5, HDAC7 and HDAC9) have long N-terminal extensions with conserved binding sites for the transcription factor myocyte enhancer factor-2 (MEF-2) and the chaperone protein 14-3-3, which regulates nuclear-cytoplasmic shuttling [42]. Class IIA HDACs can be phosphorylated by kinases, thereby providing a mechanism for linking extracellular signals with transcription. Class IIB HDAC6 is the primary cytoplasmic deacetylase found in mammalian cells, whereas the functions of HDAC10 are less known. Class III HDACs represent the silent information regulator 2 (Sir2) family of nicotinamide adenine dinucleotide (NAD+)-dependent HDACs (i.e. SIRT1-7), which share structural and functional similarities with yeast Sir2 [43]. Histone acetylation is a dynamic process that is controlled by the antagonistic actions of two large families of enzymes [41]. The balance between these actions represents a critical regulatory mechanism for gene expression, developmental processes and disease progression. Recently, HDACs and HDAC inhibitors have been suggested as clinical therapies for several diseases, such as cancer, cardiovascular diseases, Huntington's disease and Alzheimer's disease [43][44][45]. HDAC inhibitors will likely widen the therapeutic window and possibly lead to their clinical application [46]. In addition to histone modifications, chromatin remodelling can be achieved by ATP-dependent chromatin remodelling complexes [47]. These chromatin remodelling complexes utilize ATP hydrolysis to alter the histone-DNA interaction. The consequences of chromatin remodelling lead to the transient unwrapping of the DNA from the histones, the formation of a DNA loop and the removal of the nucleosome and histone variants; each of these processes results in changes in the accessibility of nucleosomal DNA to transcription factors. These alterations in chromatin structure lead to changes in transcription in a wide variety of biological processes and provide a complex and responsive epigenetic landscape that is superimposed on the underlying genetic code. RNA-based mechanisms The newly recognized type of epigenetic modification for gene regulation involves ncRNA. NcRNA is functional RNA that is not translated into protein. The functionality of individual ncRNAs has been found in mammals, other animals, plants and fungi. Recent studies reveal that ncRNAs are involved in the regulation of various processes, such as metabolism, development, cell proliferation and oncogene induction [48]. The following five classes of non-coding RNAs have been defined: microRNAs (miRNAs), small interfering RNAs (siRNAs), piwi-interacting RNAs, small nucleolar RNAs and long non-coding RNAs (lncRNAs) [48,49]. The functions and features of lncRNAs are distinct from other small ncRNAs, such as miRNAs. Xist nuclear RNA is a 17 kb lncRNA, which is expressed exclusively from the inactive X chromosome in women [50]. 
MiRNAs are endogenous, non-coding, single-stranded RNAs of 18-22 nucleotides that constitute a novel class of gene regulators. MiRNAs bind to their target genes within their 3′-untranslated regions (3′-UTRs), leading to direct degradation of the mRNA or to translational repression, depending on whether the complementarity is perfect or imperfect, respectively [10]. In recent years, the role of miRNAs in the development of various diseases has received increasing attention. In particular, the functions of miRNAs in vascular development and diseases have been described [51,52]. Most miRNA genes are located in intronic regions, and they may be transcribed as part of the host mRNA. The primary miRNA (pri-miRNA) is transcribed by either RNA polymerase II or III from an independent gene in the nucleus. In subsequent processing, the microprocessor complex (i.e. Drosha-DGCR8) processes the pri-miRNA into a ~60-100-nucleotide precursor hairpin (pre-miRNA) [53]. The resulting pre-miRNA is exported to the cytoplasm by Exportin-5-RanGTP. In the cytoplasm, the RNase III Dicer and TRBP cleave the pre-miRNA into ~22-nucleotide miRNA/miRNA* duplexes [54]. The miRNA strand is termed the guide strand and represents the mature miRNA; the miRNA* strand is termed the passenger strand and undergoes rapid degradation [55]. The mature miRNA is incorporated into a miRNA-induced silencing complex and base-paired to its target mRNA for mRNA degradation or translational repression. Several miRNAs have been identified that play important roles in blood vessels. The dual functions of miR-126, which is the most abundant miRNA in vascular ECs, have been demonstrated in angiogenesis and anti-inflammation. During embryonic development, miR-126 has been shown to regulate angiogenic signalling and to govern the integrity of blood vessels [56]. Overexpression of miR-126 significantly represses TNF-α-induced VCAM-1 expression and leucocyte adhesion [57]. MiR-143/145 are specifically expressed in normal blood vessels and are involved in modulating VSMC fate and phenotype [58]. Haemodynamic force-induced epigenetic modifications Although extensive studies have demonstrated that haemodynamic forces modulate various vascular cell functions, reports of haemodynamic force-induced epigenetic regulation of gene expression, function and vascular pathophysiology have emerged only recently. In this section, we will discuss the mechanisms and functions of haemodynamic force-induced methylation, histone modifications and microRNAs in vascular cells (Tables 1 and 2). Methylation Constitutive eNOS expression in ECs is dependent on the basal transcriptional machinery present in its core promoter and includes positive and negative protein-protein (trans) and protein-DNA (cis) interactions [59]. The haemodynamic force-induced regulation of eNOS gene expression at the level of mRNA processing and stability has been clarified. Recent studies indicate that shear stress can modulate chromatin remodelling on histones H3 and H4, resulting in eNOS being regulated by chromatin-based epigenetic mechanisms at the transcriptional level [60,61]. Lund et al. found that DNA hypermethylation patterns occur in peripheral blood and in atherosclerotic tissue prior to the appearance of lesions [63]. The DNA methylation pattern of estrogen receptor-β, which has been identified as having an important role in atherosclerotic development, has also been examined. 
Atherosclerotic tissues showed higher methylation levels (28.7%) than normal arteries (6.7-10.1%) and venous tissues (18.2%), and the methylation of estrogen receptor-β could be diminished with a DNA methyltransferase inhibitor [64]. Histone acetylation/deacetylation (HAT/HDAC) Class I HDACs Haemodynamic force-induced histone modifications have been extensively studied in recent years (Table 1). These histone modifications are involved in the regulation of gene expression and of cellular functions, including cell proliferation, survival and migration, as well as in atherosclerosis. Zeng et al. demonstrated that laminar flow increases the activity of HDACs and the association of p53 with HDAC1, leading to the deacetylation of p53 in ECs [65]. Treating ECs with Trichostatin A (TSA), an HDAC inhibitor, abolishes the flow-induced p53 deacetylation at Lys-320 and Lys-373. Furthermore, p53 that has been deacetylated by HDACs under laminar shear stress triggers the expression of p21, whereas deletion or mutation of the p21 promoter inhibits its activation by p53. These data clearly outline the mechanisms of laminar shear stress-induced cell cycle arrest. Lee et al. utilized HDAC-specific siRNAs to show that class I HDAC1/2/3, but not class II HDAC4/7, modulate oscillatory flow-induced cell proliferation [66]. Oscillatory flow up-regulates the expression of cyclin A and down-regulates the expression of p21 through class I HDAC1/2/3, resulting in the promotion of EC proliferation. In an in vivo stenosis model in which the rat abdominal aorta was subjected to partial constriction with a U-clip, which produced a 65% constriction in diameter [26], high expression of HDAC2/3/5 and BrdU uptake were observed in the luminal ECs at post-stenotic sites, where disturbed flow with oscillatory shear occurs. In addition, when the HDAC inhibitor valproic acid (VPA) was injected into the rats infused with BrdU, the increased BrdU uptake in the ECs at the post-stenotic region was inhibited when compared with the group injected with saline. These results indicate that oscillatory flow-induced EC proliferation is mediated by class I HDACs in vivo (Fig. 2). Zampetaki et al. utilized en face staining to demonstrate increased levels of HDAC3 in aortas from apolipoprotein E (apoE)-knockout mice [67]. In addition, cultured ECs were found to up-regulate the expression of HDAC3 protein and to enhance its phosphorylation at serine/threonine residues in response to disturbed flow. Co-immunoprecipitation studies revealed that HDAC3 and Akt form a complex to promote EC survival. Knockdown of HDAC3 expression with a specific short hairpin RNA (shHDAC3) led to a dramatic decrease in cell survival accompanied by EC apoptosis. These results indicate that disturbed flow promotes the post-translational modification and stabilization of the HDAC3 protein, thereby highlighting its contribution to atherogenic processes. Zeng et al. demonstrated that laminar shear stress enhances embryonic stem cell-derived progenitor cell differentiation into ECs. This process stabilizes and activates HDAC3 through the Flk-1-PI3K-Akt pathway and deacetylates p53, resulting in p21 activation [68]. Class II HDACs Chen et al. demonstrated that p300, which is a histone acetyltransferase, cooperates with NF-κB subunits (p50 and p65) to bind to the shear stress-responsive κB element in the human eNOS promoter [69]. The shear stress-induced eNOS expression was blocked by pharmacological inhibition of p300/HAT activity with curcumin or by p300-specific siRNA. 
Chromatin immunoprecipitation assays also revealed that shear stress stimulates the acetylation of histones H3 and H4 at the eNOS promoter, corroborating the results of a previous study [70]. On the other hand, histone deacetylation has been demonstrated to play a critical role in shear stress-mediated eNOS expression. Application of laminar flow to ECs induces their production of NO, which promotes the deacetylation of histones, leading to enhanced nuclear shuttling of class II HDAC4/5 and increased HDAC activity in ECs [71]. Wang et al. found that the phosphorylation of class II HDAC5 and its nuclear export were stimulated by laminar shear stress through a calcium/calmodulin-dependent pathway [72]. Consequently, flow induced the dissociation of HDAC5 from MEF-2 and enhanced MEF-2 transcriptional activity, leading to KLF-2 and eNOS expression. In addition to the regulation of eNOS expression by class II HDACs, Wang et al. used ECs co-cultured with VSMCs to demonstrate that HDAC6 is involved in the modulation of laminar shear stress-induced migration in ECs [73]. The acetylation level of tubulin, an important cytoskeletal protein involved in the regulation of cell migration, was decreased by shear stress in these co-cultured ECs. Yan et al. found that a cyclic stretch of 1 Hz at 10% elongation significantly inhibited the migration of cultured VSMCs, and this treatment up-regulated the levels of hyperacetylated histone H3 and HDAC7 and down-regulated the levels of HDAC3/4 [74]. Class III HDACs Chen et al. demonstrated the interplay of the class III NAD-dependent enzyme SIRT1 and AMP-activated protein kinase (AMPK) in the regulation of eNOS expression. Laminar shear stress and pulsatile flow increase the SIRT1-eNOS association and eNOS deacetylation. In addition, shear stress activates AMPK; the phosphorylation of eNOS by AMPK is required for the SIRT1-mediated deacetylation of eNOS, leading to the expression of eNOS [75]. The class III HDAC SIRT1 has also been shown to play a protective role in atherosclerosis [76]. SIRT1 deacetylates RelA/p65 at lysine 310 in macrophages and suppresses its binding to naked DNA in human aortic ECs, thereby interfering with a crucial step in NF-κB signalling and reducing the expression of EC adhesion molecules, including ICAM-1 and VCAM-1. Overexpression of endothelial SIRT1 in apoE-deficient mice prevents the formation of atherosclerosis by improving vascular function. In addition, SIRT1 is also involved in the proliferation and migration of VSMCs. Increased SIRT1 activity in VSMCs leads to the suppression of p21 and to enhanced replication of senescence-resistant cells [76]. In addition, the activity of the tissue inhibitor of metalloproteinase-3 can be increased by SIRT1 overexpression. Therefore, SIRT1 plays a protective role in atherosclerosis in VSMCs by inhibiting inflammatory events and thereby preventing atherosclerotic plaque formation. Several HDAC inhibitors have been studied in the spontaneously hypertensive rat (SHR) model. Cardinale et al. demonstrated that SHRs treated with VPA for 20 weeks had significant decreases in blood pressure, in the levels of pro-inflammatory cytokines and hypertrophic markers, such as reactive oxygen species, and in the expression of the angiotensin II type 1 receptor in the heart [77]. Bogaard et al. 
also found that VPA and TSA reduce pressure overload-induced left ventricular hypertrophy and dysfunction, but the mechanisms of the effect on right ventricular adaptation to pressure overload are unknown [78]. Usui et al. analysed the expression of proteins in the aorta and mesenteric artery from SHRs and Wistar Kyoto rats (WKYs) by Western blotting [79]. The expression of HDAC4 and HDAC5 was decreased in SHRs compared with WKYs. In the mesenteric arteries from SHRs, HDAC4 was increased, whereas HDAC5 was decreased. Taken together, these findings indicate that HDACs play an important role in the development of hypertension. MicroRNA In recent studies, the functions of haemodynamic force-induced miRNAs have been clarified in vascular cells (Fig. 3). These functions include angiogenesis, inflammation, proliferation and migration. The endothelium-specific transcription factor KLF-2 is well established to participate in the regulation of eNOS gene expression [80]. A new regulatory circuit of KLF-2-mediated expression of eNOS by an RNA-based mechanism has been clarified [81]. ECs that were subjected to oscillatory flow (0 ± 4 dyne/cm²), but not pulsatile flow (12 ± 4 dyne/cm²), were triggered to express miR-92a. Bioinformatics analysis demonstrated that KLF-2 is a target gene of miR-92a, and its gene and protein expression levels are down-regulated in oscillatory shear-stimulated ECs. In addition, the KLF-2-regulated genes eNOS and thrombomodulin were repressed by the overexpression of miR-92a in ECs. Nicoli et al. used a zebrafish embryonic model to demonstrate that the angiogenic sprouting of blood vessels requires the blood flow-induced transcription factor KLF-2 [82]. KLF-2 acts upstream of miR-126 to promote fluid flow-stimulated angiogenesis through VEGF signalling. Co-injection of both morpholinos, resulting in the specific knockdown of KLF-2 and miR-126, caused a dramatic defect in the penetrance of AA5X. This implies that KLF-2 and miR-126 share a common pathway in modulating angiogenesis. This study provided new insights into how ECs respond to flow stress and integrate developmental signals with miR-126 to promote angiogenesis. On the other hand, laminar shear stress has been identified as a regulator of EC anti-proliferation as a result of miRNA modulation of cell cycle regulators. Qin et al. and Wang et al. demonstrated that laminar shear stress induces miR-19a and miR-23b, which may participate in cell cycle regulation, leading to EC arrest at G1/S [83,84]. Fang et al. showed that swine vessels exhibit decreased expression of miR-10a at athero-susceptible regions of the inner aortic arch and aorta-renal branches, which are the preferential sites for the occurrence of atherosclerosis [85]. This group further demonstrated that miR-10a is involved in the anti-inflammatory effect observed in ECs. As unusual fluid shear forces, such as disturbed flow and oscillatory flow, promote the development of atherosclerosis, the link between fluid shear stress and the development of atherosclerosis may involve regulation by miR-10a. The detailed mechanisms of this regulatory circuit should be further explored. Ni et al. described the miRNA expression profiles of cultured ECs exposed to oscillatory flow and laminar flow [86]. The overexpression of miR-663 increases monocyte adhesion in laminar flow-exposed ECs, whereas the treatment of ECs with a miR-663 antagonist inhibits oscillatory flow-induced monocyte adhesion. 
MiR-663 was thus identified as being functionally important in endothelial inflammatory responses but not in apoptosis. Furthermore, Zhou et al. demonstrated that oscillatory flow induces the expression of miR-21 at the transcriptional level in cultured ECs and eventually leads to an inflammatory response through targeting of the 3′-UTR of peroxisome proliferator-activated receptor-α (PPAR-α) [87]. These studies provide a mechanism of atherogenic flow in which oscillatory flow induces an inflammatory response at the post-transcriptional level that is mediated by miRNAs. Hastings et al. used an EC-VSMC co-culture model in response to atheroprone flow to identify the role of chromatin modifications in regulating the VSMC phenotypic switch [88]. Mohamed et al. demonstrated that cyclic stretch-induced miR-26a serves as a hypertrophic gene; the transcription factor CCAAT enhancer-binding protein directly activates miR-26a expression through the transcriptional machinery [89]. In addition, miR-26a directly targets glycogen synthase kinase-3, an anti-hypertrophic protein, to enhance hypertrophy in VSMCs. In an animal study, Wu et al. found that miR-130a correlates with vascular remodelling in SHRs [90]. MiR-130a was up-regulated in the thoracic aorta and mesenteric arteries of SHRs. In addition, the mRNA and protein levels of growth arrest-specific homeobox were down-regulated by miR-130a. MiR-130a mimics at 25 or 50 nmol/l significantly enhanced the proliferation of VSMCs. Yu et al. investigated the miRNA expression profile in isolated VSMCs from SHRs and WKYs [91]. The let-7d miRNA was significantly down-regulated in VSMCs from SHRs, and the role of let-7d is thought to be related to the proliferation of VSMCs. In cellular assays, overexpression of let-7d directly targeted K-ras, an oncogene that participates in the modulation of the cell cycle and cell proliferation, leading to the inhibition of VSMC proliferation. This study implicates let-7d in the mechanism of VSMC proliferation in hypertensive rats. Conclusions and future perspectives Haemodynamic forces, such as fluid shear stress and cyclic stretch, can modulate EC and VSMC gene expression, cellular function and pathophysiology in health and disease. Although extensive studies have been performed on the molecular mechanisms by which haemodynamic forces regulate intracellular signals that ultimately modulate downstream gene expression, studies investigating the role of haemodynamic force-induced epigenetic pathways have emerged only recently. In this review, we summarize the current state of the in vitro and in vivo studies on haemodynamic force-induced DNA methylation, histone modification/remodelling and miRNA expression in the regulation of EC gene expression, cellular function and pathophysiology. Studies assessing eNOS gene expression, proliferation, angiogenesis, migration and vascular disorders, such as atherosclerosis and hypertension, are discussed. Shear stress-induced eNOS gene expression is regulated by epigenetic mechanisms, including the acetylation of histones H3 and H4, the interaction of p300 and NF-κB with the eNOS promoter, and regulation by HDAC5 and by miR-92a, the latter of which influences eNOS gene expression by binding directly to the 3′-UTR of KLF-2. These studies clearly demonstrate the complex regulation of eNOS gene expression by shear stress-induced epigenetic modifications at the transcriptional and post-transcriptional levels. 
HDACs are critical molecules that participate in multiple aspects of the EC response to shear stress. In particular, shear stress-induced HDAC3, acting via the Flk-1-PI3K-Akt pathway, controls EC differentiation, survival and proliferation. HDAC5 retards the disturbed flow-induced inflammatory response, leading to a decrease in adhesion molecules. In addition to HDACs, shear stress-modulated miRNAs, including miR-126, miR-19a, miR-23b, miR-92a, miR-10a, miR-21 and miR-663, play a crucial role in EC angiogenesis, proliferation and atherosclerosis. The role of shear stress-induced epigenetic modifications in vascular physiology and pathophysiology is demonstrated by their regulatory roles in proliferation, angiogenesis, migration and atherosclerosis. In the SHR model, HDAC inhibitors, such as VPA and TSA, decrease blood pressure and hypertensive markers. The aberrant expression of let-7d, miR-130a and miR-26a may contribute to the VSMC proliferation observed during the development of hypertension. Although extensive studies have revealed that shear stress-induced epigenetic modifications influence EC function, the role of other important mechanical forces, such as tensile forces from ECM remodelling, in the regulation of EC and VSMC functions remains unclear. During the development of atherosclerosis and hypertension, matrix remodelling is a critical feature, and a growing number of studies have revealed that changes in matrix components and proteases, such as MMPs, alter the mechanical properties of blood vessels [92]. Haemodynamic force-induced epigenetic modifications are tightly correlated with physiological maintenance and pathophysiology. Similarly, mechanical forces derived from ECM remodelling may have functions in epigenetic modifications in vascular cells and should be investigated further. The study of epigenetic modifications will contribute to our understanding of the transcriptional and post-transcriptional control machinery in vascular disease that is stimulated by unusual mechanical forces, such as disturbed flow and oscillatory shear. Such studies will likely provide new insights into the mechanisms by which the dynamic environment of the blood vessel influences vascular cells during the development of vascular diseases.
2018-04-03T05:43:56.169Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "bfe342e3537b2368afba86311a76b81f8b598524", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.12031", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bfe342e3537b2368afba86311a76b81f8b598524", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119216942
pes2o/s2orc
v3-fos-license
Spectral Index Studies of the Diffuse Radio Emission in Abell 2256: Implications to Merger Activity We present a multi-wavelength analysis of the merging rich cluster of galaxies Abell 2256. We have observed A2256 at 150 MHz using the Giant Metrewave Radio Telescope and successfully detected the diffuse radio halo and the relic emission over an extent $\sim1.2$ Mpc$^2$. Using this 150 MHz image and the images made using archival observations from the VLA (1369 MHz) and the WSRT (350 MHz), we have produced spectral index images of the diffuse radio emission in A2256. These spectral index images show a distribution of flat spectral index (S$\propto\nu^\alpha$, $\alpha$ in the range -0.7 to -0.9) plasma in the NW of the cluster centre. Regions showing steep spectral indices ($\alpha$ in the range -1.0 to -2.3) are toward the SE of the cluster centre. These spectral indices indicate synchrotron life times for the relativistic plasmas in the range 0.08 - 0.4 Gyr. We interpret this spectral behaviour as resulting from a merger event along the direction SE to NW within the last 0.5 Gyr or so. A shock may be responsible for the NW relic in A2256 and the Mpc scale radio halo towards the SE is likely to be generated by the turbulence injected by mergers. Furthermore, the diffuse radio emission shows spectral steepening toward lower frequencies. This low frequency spectral steepening is consistent with a combination of spectra from two populations of relativistic electrons created at two epochs (two mergers) within the last $\sim$0.5 Gyr. Earlier interpretations of the X-ray and the optical data also suggested that there were two mergers in Abell 2256 in the last 0.5 Gyr, consistent with the current findings. Also highlighted in this study is the futility of correlating the average temperatures of thermal gas and the average spectral indices of diffuse radio emission in respective clusters. Introduction The intra-cluster medium (ICM), which pervades the space between the galaxies in galaxy clusters is known to consist of hot thermal gas (∼ 10 8 K), magnetic fields (∼ 1µG) and relativistic particles. The thermal gas emits mainly in X-rays by thermal Bremsstrahlung mechanism. A direct evidence for magnetic fields and relativistic electrons in the ICM are the extended diffuse synchrotron sources associated with the ICM detected in a fraction of clusters (Ferrari et al 2008). These are classified broadly as radio halos (centrally located in clusters and unpolarized ( 5%)) and radio relics (filamentary/ arc-like, located at cluster peripheries and polarized (∼ 20 − 30%)) (Ferrari et al 2008). So far, such sources have been detected in merging clusters of galaxies; thus a strong connection between their origin and merger is favoured (Ferrari et al 2008). The primary models predict that shocks and/or turbulence induced during merging events reaccelerate the electrons in the ICM: shocks reaccelerate by Fermi I process (Ensslin et al 1998) and/or adiabatic compression of fossil radio plasma (Ensslin & Gopal-Krishna 2001); turbulence reaccelerate via stochastic, Fermi II or MHD waves Cassano & Brunetti 2005). The secondary models regard the synchrotron emission to come from electrons generated in hadronic collisions (Dolag & Ensslin 2000 and references therein); this model awaits observational evidence and is not discussed here. The shocks and/or turbulence in the ICM which are responsible for accelerating charged particles leave signatures in the spectra of accelerated particles. 
A shock can leave a trail of accelerated charged particles in its wake; turbulence can lead to patchy distribution of spectral index in the source (Feretti et al 2004 and references therein). Synchrotron spectral index steepens with time as a result of energy losses proportional to E 2 . In a simplistic approach, the spectral index can be used to estimate the time since the relativistic plasma was accelerated-the spectral age. If the break frequency of the spectrum can be identified, the spectral age can be estimated; if it cannot be identified, at least a limit on the spectral age can be obtained. Spectral age distribution across the extents of radio halos and relics can then be connected to the geometry of shock passage or the sites of efficient/inefficient turbulent acceleration. Due to the complex nature of the reacceleration by turbulence, this may not be the best estimate of the time since the acceleration took place. Consistency with the merger geometries proposed based on X-ray surface brightness and optical galaxy distributions can be verified. Such studies require good quality multi-frequency maps of radio halos and relics. Radio halos and relics being extended (∼ 5 to 30 arcminutes for clusters in redshift range 0.2 − 0.02), low surface brightness (∼ 1 mJy arcmin −2 at 1.4 GHz) sources are difficult to image at multiple frequencies with comparable sensitivities. Spectral index maps have been constructed only in a few clusters so far (for example, Coma (Giovannini et al 1993), A665, A2163 (Feretti et al 2004), A3562 (Giacintucci et al 2005), A2744 and A2219 (Orru et al 2007) and A2255 (Pizzo & Bruyn 2009)). These have shown features like radial spectral steepening and patchy distribution of spectral indices and have been interpreted as variations in the magnetic field in the cluster and turbulence by the authors. Various geometries of merger have also been discussed. It is important to realise that the synchrotron spectrum is curved by nature (energy losses proportional to E 2 ). To understand the curvature in the spectra of different regions and identification of the break frequencies, spectral index maps between multiple frequencies are necessary. So far a study with two spectral index maps has been carried out for the cluster A2255 by Pizzo & Bruyn (2009). Radio halos and relics are rare sources and thus only a few clusters where such detailed study can be carried out are available. One such cluster is Abell 2256 (hereafter A2256) which is a host to a radio halo and a radio relic and shows clear signatures of merger. Using two spectral index maps (150-350 MHz and 350-1369 MHz), a study of the complex dynamics in the cluster A2556 is presented here. A2256 is a rich, X-ray luminous (L X[0.1−2.4keV ] ∼ 3.8 × 10 44 erg s −1 , Ebeling et al 1996) galaxy cluster at a redshift of 0.0581 (Struble & Rood 1999). The X-ray surface brightness is elongated in the east-west direction and shows substructures (Briel et al 1991;Sun et al 2002); the radial velocity distribution of galaxies shows the presence of three distinct groups of galaxies (Berrington et al 2002). These have been interpreted to be indicators of ongoing merger in A2256. Recent temperature maps of A2256 with Chandra (Sun et al 2002) and XMM Newton (Bourdin & Mazzotta 2008) show variation between 4 -10 keV in ∼ 0.6 Mpc region around the cluster centre. Apart from several head-tail radio galaxies, A2256 hosts diffuse radio emission in the north-west of the cluster centre (the radio relic) and at the centre (the radio halo). 
It has been studied in radio wavelengths for the past three decades (Bridle & Fomalont 1976;Bridle et al 1979;Rottgering et al 1994 (hereafter R94); Clarke & Ensslin 2006 (hereafter CE06);Brentjens 2008 (hereafter B08)). Polarization of ∼ 20 − 40% was detected at 1400 MHz in the relic by CE06; at 350 MHz it is unpolarized (< 1%) (B08). A2256 also hosts the peculiar steep spectrum source 'F'; the optical identification of it is still being debated (B08). New steep spectrum sources have been detected in the 330 MHz images of A2256 made using the GMRT by van Weeren et al (2009); these are at the periphery of the cluster and are unrelated to the halo and the relic that will be discussed in this paper. Study of the integrated spectrum (including the radio galaxies, the compact radio sources as well as the diffuse radio emission) of A2256 has shown that it steepens at low frequencies (B08); a property unique to this cluster. The properties of the relic in A2256 are inconsistent with the scenario of acceleration in structure formation accretion shocks (Ensslin et al 1998). They propose a shock radius of ∼ 1 Mpc for the geometry of the relic in A2256, which implies an origin in shocks interior to the cluster -the merger shocks. Based on the simulations of Roettiger et al (1995), merger between clusters having mass ratios 2:1, in the direction northwest to southeast, such that the smaller cluster is moving towards the observer has been discussed in the case of A2256. This model was proposed to explain the X-ray properties and temperature distribution in A2256 as estimated by ROSAT (Briel & Henry 1994); further sensitive X-ray measurements (ASCA, Chandra, XMM Newton) do not confirm the measurements of temperature by the ROSAT (Sun et al 2002;Bourdin & Mazzotta 2008); rendering the model unusable. Based on X-ray substructure and identification of three distinct groups of galaxies in A2256, two mergers have been discussed (Sun et al 2002;Berrington et al 2002;Miller et al 2003). Berrington et al (2002) propose two mergers -one between two comparable mass subclusters and another between a group and the primary cluster. Miller et al (2003) have favoured the possibility of the group being responsible for the radio relic whereas CE06 favour the merger of two subclusters. The case of A2256 is complex. The spectral distribution across the diffuse radio emission in A2256 is presented in this paper and its implications to the mergers are discussed. This paper is organized as follows. In Section 2, radio observations of A2256 using the Giant Metrewave Radio Telescope (GMRT) and the data reduction are described. The radio image at 150 MHz and the spectral index maps between 150-350 MHz and 350-1369 MHz are presented in Section 3. Discussion of these results in the contexts of temperature of the ICM and the dynamics in the cluster are presented in Section 4. The conclusions are presented in Section 5. Radio Observations and Data Reduction The observations of A2256 were carried out with the GMRT (Oct '07) at 150 MHz for a duration of ∼ 7 hr with a bandwidth of 6 MHz. The Astronomical Image Processing System (AIPS) was used to analyse this data. The GMRT data at 150 MHz are affected by radio frequency interference (RFI). Using the data visualization and editing tasks in AIPS, data affected by RFI were identified and removed. About 35% of the data were excised. Standard procedures of absolute flux density, gain and bandpass calibrations were carried out. 
This calibrated data was imaged and several self-calibration iterations were performed to obtain the best images. Use of uniform and natural weighting of the visibilities resulted in images at 150 MHz at resolutions of 20 ′′ × 20 ′′ and 33 ′′ × 33 ′′ with rms sensitivities of ∼ 2.2 and ∼ 2.5 mJy beam −1 , respectively. Data in the form of visibilities at 1400 MHz (project code AC522) were obtained from the archives of the Very Large Array (VLA). This data contained observations for a duration of 6 hr with the VLA in the D configuration. A bandwidth of 25 MHz at each of the 2 IFs, namely, 1369 and 1417 MHz, was used in these observations. Images at each of 1369 and 1417 MHz are published in CE06 (Fig. 1, top 2 panels). CE06 have detected the largest extent of the diffuse emission in the image at 1369 MHz and thus this frequency was chosen to obtain an image for our purpose. Visibilities at 1369 MHz were edited, calibrated and imaged using AIPS. Natural weighting of the visibilities resulted in an image at 1369 MHz with a resolution of 67 ′′ × 67 ′′ and an rms of 0.07 mJy beam −1 . This image is similar in terms of rms and detection of extended sources to that in Fig. 1 in CE06 (rms ∼ 0.06 mJy beam −1 ; beam ∼ 52 ′′ × 45 ′′ ). Visibilities recorded in frequency bands around 350 MHz containing observations of A2256 were obtained from the Westerbrok Synthesis Radio Telescope (WSRT) archives. This data contained ∼ 11 hr observations of A2256 with the WSRT in a configuration with the shortest baseline of ∼ 72 m. These visibilities were recorded in 8 frequency bands, between 310-390 MHz, each having a bandwidth of 10 MHz. Natural weighting of the visibilities resulted in a synthesized beam of ∼ 62 ′′ × 62 ′′ . For easy comparison with the 1369 MHz image, a beam with FWHM of 67 ′′ was chosen for imaging and an rms of 0.6 mJy beam −1 was achieved using visibilities in all the frequency bands. This image is similar in quality to that in Fig. 2 in B08 produced from the same dataset. For spectral comparison, an image with a beam of 67 ′′ × 67 ′′ was produced at 150 MHz from the GMRT data. The flux density calibration errors are ∼ 15% at 150 MHz and < 10% at 350 and 1369 MHz. Primary beam corrections to images at each of the frequencies were applied using the task 'PBCOR' in AIPS. At 150 MHz and 1369 MHz, the coefficients of the polynomials representing the respective primary beams reported on the GMRT and the VLA websites were used. The primary beam of the WSRT is approximated by the function cos 6 (θ), where θ is a function of the frequency and the angular distance from the pointing centre. Using this function, parameters for the task 'PBCOR' appropriate for the WSRT were calculated and used. These primary beam corrected images were used for obtaining spectral index images between these frequencies. Radio Images The central portion of the GMRT 150 MHz image of the A2256 region at a resolution of 67 ′′ × 67 ′′ is presented in Fig. 1. This image shows sources with extents ranging between ∼ 67 ′′ and ∼ 15 ′ . The sources are labelled according to the convention in Bridle et al (1979) and R94 for easy reference. The sources A, B, C, D and F have extents ranging between ∼ 67 ′′ and ∼ 6 ′ . The radio sources A, B, C and D are radio galaxies with optical counterparts in A2256 (R94; Miller et al 2003). The source F is an extended ultra-steep spectrum source (Masson & Mayer 1978;Bridle et al. 1979;R94). The extent of source F at 150 MHz is ∼ 4 ′ which is ∼ 280kpc if it is at the redshift of A2256. 
The identification of source F as a radio tail of the optically identified galaxy 122 of Fabricant et al (1989) is still being debated (B08). The sources G and H are extended (~10′) sources and are together termed the "radio relic" in A2256 (R94). High resolution (~1.4″ × 1.2″) 20 cm maps made with the VLA in the A configuration by R94 resolve out G and H completely, thus confirming their diffuse nature. No unique optical counterpart can be associated with these sources (Miller et al 2003). The radio relic seen at 150 MHz covers a region of ~1 × 0.4 Mpc². CE06 have reported an extent of ~1.125 × 0.52 Mpc² for the relic. The larger extent reported by CE06, and also detected in our 1369 MHz image, covers a region of extent ~100 kpc toward the northwest and north of the boundaries of the relic shown in Fig. 1. The surface brightness of this ~100 kpc region is ~0.4 mJy arcmin⁻² at 1400 MHz (CE06). This implies a surface brightness of 2.4 mJy arcmin⁻² at 150 MHz, assuming a spectral index of −0.8. This is ~6 times lower than the sensitivity of the 150 MHz image presented here. The diffuse emission pervading the central region around source D is the "radio halo" (CE06). The radio halo and the relic emission overlap and thus it is difficult to determine the exact extents of each. The region towards the south-east that is detected at 1369 MHz by CE06 (~0.2 mJy arcmin⁻²) and at 350 MHz by B08 is beyond the SE boundary shown in Fig. 1 and is not detected at 150 MHz due to its low surface brightness (~1.2 mJy arcmin⁻² at 150 MHz). Integrated spectrum The total flux densities inclusive of the radio halo, the radio relic and the discrete sources are 10.0 ± 1.5, 3.6 ± 0.1 and 1.47 ± 0.07 Jy at 150, 350 and 1369 MHz, respectively. The total flux densities were measured using the same area at all three frequencies. In Table 2 of B08, total flux density measurements of A2256 at frequencies ranging from 22.25 to 2695 MHz have been reported. Our estimates at 350 and 1369 MHz are within ~1σ errors of the values reported by B08. The total flux density recovered at 150 MHz is ~25% higher than that reported by B08 (8.1 ± 0.8 Jy at 151 MHz from Masson & Mayer (1978)). These total flux density estimates imply spectral indices of α^{350}_{150} = −1.20 ± 0.13 and α^{1369}_{350} = −0.65 ± 0.01 for the integrated spectrum of A2256. Total flux densities of 17 ± 2 Jy at 81.5 MHz (Branson 1967) and 3.51 ± 0.06 Jy at 351 MHz (B08) imply a spectral index of α^{351}_{81.5} = −1.08 ± 0.06. Our estimate of the spectral index between 150 and 350 MHz is consistent within 1σ error with this low frequency spectral index. The total flux density of A2256 at 1369 MHz is not available in CE06. Using the total flux density estimate at 351 MHz from B08 and our estimate at 1369 MHz, the spectral index of A2256 is α^{1369}_{351} = −0.63 ± 0.01; our estimate (α^{1369}_{350}) is consistent within 1σ error. Spectral index maps The (u,v)-coverages at 150, 350 and 1369 MHz were comparable. The (u,v)-plane was sampled sufficiently down to short baselines of ~0.1 kλ to image extents of ~20′ at the three frequencies. With the use of the VLA-D configuration at 1369 MHz, the WSRT at 350 MHz and the GMRT at 150 MHz, such a (u,v)-coverage could be achieved. Total flux densities were recovered at all the frequencies and thus these images were considered suitable for producing spectral index maps. 
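The two-point spectral indices quoted above follow directly from the total flux densities via α = log(S₂/S₁)/log(ν₂/ν₁). The short Python sketch below is only an illustrative check of that arithmetic (it is not the AIPS-based procedure used for the actual analysis, and it ignores the flux calibration uncertainties); applied pixel by pixel to a pair of primary beam corrected images, the same relation yields the spectral index maps described next.

import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, with S proportional to nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# Integrated flux densities of A2256 quoted in the text (Jy)
print(spectral_index(10.0, 150.0, 3.6, 350.0))    # ~ -1.20 between 150 and 350 MHz
print(spectral_index(3.6, 350.0, 1.47, 1369.0))   # ~ -0.65 between 350 and 1369 MHz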
The spectral index maps of A2256 were obtained using the primary beam gain corrected images produced at 150, 350 and 1369 MHz having resolutions of 67 ′′ × 67 ′′ . Pixels having flux densities less than 5σ level in the respective images were blanked before making spectral index maps to minimise uncertainties. Thus, pixels having flux densities less than 18 mJy beam −1 at 150 MHz, 3 mJy beam −1 at 350 MHz and 0.35 mJy beam −1 at 1369 MHz were blanked. To obtain the spectral index map of the radio halo and the relic, subtraction of the radio sources (A, B, C, D and F) contaminating the diffuse emission was attempted. Visibilities of short baselines were excluded to obtain images of only the discrete sources. Such images were subtracted from the respective images made using all the visibilities at each of the frequencies. This procedure resulted in either a partial subtraction of the relic or an incorrect subtraction of the discrete sources. As an alternative, models including upto 4 Gaussian components were used to fit the discrete sources. Those were subtracted from the images. This subtraction also resulted in artefacts in the images. The discrete sources have complex structures and require sophisticated modeling. To minimise the uncertainties, no discrete sources were subtracted from the final maps that were used in making the spectral index maps. The positions of discrete sources will be kept in mind while discussing the spectra of diffuse sources. We present the spectral index maps of A2256 in Fig. 2 and the corresponding error maps in Fig. 3. The extent of the spectral index map between 150 and 350 MHz (Fig. 2, right) is limited by the emission detected at 150 MHz. As can be seen in Fig. 3, the typical error in each of the spectral index maps is < 0.10. The error is uniform except at the edges of the radio emission where it is between 0.2 and 0.3 in the 350-1369 MHz spectral index map and between 0.2 and 0.5 in the 150-350 MHz spectral index map. The contours plotted in Fig. 3 are of the images of A2256 at 350 MHz (left panel) and at 1369 MHz (right panel) obtained using the archival data as described in Sec. 2. See Sec. 2 for the comparison of these with the images published by CE06 and B08. These images were used in making the spectral index map presented in Fig. 2 (left). Before studying the spectra of the diffuse radio emission, we examined the spectral behaviour of the discrete sources for consistency with earlier measurements. The discrete sources, except F, have spectral indices ∼ −0.7 ± 0.1 which are typical of radio galaxies and can be seen in Fig. 2. The components of the source F, namely F1, F2 and F3, discussed in detail by B08 are marked in the spectral index maps (Fig. 2) . The source F2 has spectral indices of α 1369 350 = −1.80 ± 0.05 and α 350 150 = −1.10 ± 0.05. These values of spectral indices of F2 are consistent with the values α 1446 610 = −1.71 ± 0.08 and α 350 150 = −1.20 ± 0.05 obtained by B08. The tail of source C extends in the north-west direction and is visible in the spectral index map (Fig. 2, left) as a linear feature. The spectral index, α 1369 350 , gradually steepens from ∼ −0.8 ± 0.1 near the head (marked C) to ∼ −1.20 ± 0.05 in the tail. Such a spectral steepening from the head towards the lobes has been seen in radio galaxies and is believed to be due to spectral ageing. Bridle et al (1979) term the tail of C as source "C(ii)" and report a spectral index of −1.04 ± 0.13 between 610 and 1415 MHz. 
Our estimate is consistent with that of Bridle et al (1979). Note that the discrete source O has a spectral index ∼ −0.7 over the range of 150 to 1369 MHz (Fig. 2) as expected. Source L is not detected at 150 MHz. While discussing the spectral indices of diffuse emission, the regions having large errors (∼ 0.4 − 0.5, see Fig. 3 for error maps) are not considered. A complex distribution of spectral indices is seen over the extent of the diffuse radio emission (Fig. 2). Occurrence of flat spectral indices (α 1369 350 ∼ −0.7 to −0.9 and α 350 150 ∼ −0.7 to −1.1) is noticed in the northwest (NW) regions marked G and H of the diffuse emission. Towards southeast (SE) of the discrete source D the spectra are steeper (α 1369 350 ∼ −1.2 to −2.3 and α 350 150 ∼ −1.5 to −2.5). To establish the significance of this trend, six slices across each of the spectral index maps in the direction NW-SE (approximately 30 • clockwise from the north) separated by 2 ′ from each other in the perpendicular direction were taken. Plots of spectral index versus distance from the SE edge of the diffuse radio emission were produced. Towards the NW, the diffuse radio emission shows average spectral indices of α 1369 350 ∼ −0.8±0.05 and α 350 150 ∼ −0.9±0.2. Toward SE of the source D, the average spectral indices are α 1369 350 ∼ −1.4 ±0.1 and α 350 150 ∼ −2.3 ±0.2. As noted in earlier works the integrated spectrum of A2256 which included the diffuse as well as discrete sources showed steepening at lower frequencies (B08). To find out whether such a steepening occurs in the diffuse radio emission, independent regions, each having a size of the synthesised beam, were chosen along the SE-NW direction (broken line in Fig. 2, right). The regions affected by the discrete sources were avoided. Flux densities of the chosen regions at each of 150, 350 and 1369 MHz were estimated from the images and plotted. These spectra are presented in Fig. 4. The flux densities have been scaled and shifted along the y-axis such that the topmost spectrum is of the region extreme NW and the spectrum at the bottom is of the region extreme SE. It was found that each of these independent patches of diffuse radio emission have steeper spectral indices between 150 and 350 MHz as compared to that between 350 and 1369 MHz; except for the extreme NW where the spectrum is straight. From the NW to the SE, α 350 150 steepens from −0.60 to −2.30. The value of α 1369 350 shows mild variation in the first 3 curves from the top (Fig. 4) but steepens to −1.23 further towards SE edge. It was also noted that the difference, |α 350 150 − α 1369 350 |, increases from NW to SE (Fig. 4). Discussion Most galaxy clusters with radio halos and relics also show signatures of recent or ongoing merger activities (Ferrari et al 2008). The cluster A2256 is a complex case involving more than one merger (Berrington et al 2002;Sun et al 2002). The implications to the spectral index trends in the diffuse radio emission in A2256 from the proposed origins in merger shocks and/or in turbulence are explored further here. The diffuse radio emission in A2256 has been referred to as the relic (G & H in Fig. 1) and the halo (the region S and SE of the source D in Fig. 1) in earlier works. Spectral Index and ICM Temperature A schematic representation of the temperature map of A2256 (Bourdin & Mazzotta 2008) is shown by the black (white in online version) contours in Fig. 2 (left). 
In A2256, the spectral index of the diffuse emission varies over the range -0.7 to -2.5 and the temperature of X-ray emitting gas varies over the range 4 -10 keV. It is found that the steep spectrum (< −1.5) region towards the SE is co-spatial with the hottest (∼ 10 keV) region in the cluster. The NW region having a flat spectrum (∼ −0.8) has temperatures ∼ 4 − 7 keV. The cluster A2256 does not show any correlation between the hot X-ray and the flat radio spectrum regions; in fact the SE region is a clear anti-correlation. Cooling times of thermal gas in merging clusters are typically more than the lifetime of the cluster (Hubble time) (Buote 2002). Synchrotron cooling times are at least 10-100 times shorter than a Gyr for break frequencies in the GHz range and typical intra-cluster magnetic fields. The thermal plasma will essentially be at the same temperature while the synchrotron spectrum ages (steepens) and finally fades. Earlier studies have explored possible correlations between the temperature of the thermal gas and the spectral indices of radio halos confined in them. Feretti et al (2004) report the absence of one to one correspondence between the high temperature and the flat (∼ −0.8) synchrotron spectrum regions in the radio halo in A665 and only a mild correspondence in A2163. This can be easily understood by comparing the cooling time of the thermal plasma and that of the synchrotron plasma. Recently, Giovannini et al (2009) have reported a mild correlation between the average temperatures of the ICM and the average spectral indices of the radio halos in the respective clusters. As seen in A2256, the temperature varies over a range of 4 -10 keV and the spectral index varies over a range of −0.7 to −2.5. Similar variations have been found in other clusters (Feretti et al 2004;Govoni et al 2002;Bourdin & Mazzotta 2008) too. Therefore comparing the average values of ICM temperatures and of spectral indices of radio halos in respective clusters can be misleading. Moreover, the estimates of temperatures in clusters and the co-spatial occurrence of radio emission and hot gas are affected by projection effects. Spectral index and cluster dynamics The implications of the properties of the complex spectral index distribution in A2256 to the geometries and timescales of mergers are discussed here. The diffuse radio emission in A2256 shows the presence of two regions in spectral index maps. A region NW of the discrete sources A, B and D with flat spectral indices and another SE of these sources having steep spectral indices (Fig. 2). In order to estimate spectral age of radio emission a knowledge of the magnetic field in that region and of the break frequency is required. In A2256, the estimates of magnetic field based on the depolarization properties of the filament G in the NW are in the range 0.02 -2 µG (B08) and those of the SE region based on the classical minimum energy, equipartition and hadronic minimum energy conditions range between 1.5 − 8µG (CE06). For simplicity, a typical value of 1µG for cluster magnetic field (Carilli & Taylor 2002) is used. Synchrotron spectrum is curved and requires measurements at several frequencies, over the entire range from a few MHz to tens of GHz, to identify the break frequency. Flux density estimates of regions within the diffuse radio emission in A2256 are available from the images at 150, 350 and 1369 MHz as presented in this paper; the break frequencies cannot be identified using these. 
However, from the spectral indices, upper and lower limits on the break frequencies of different regions in the diffuse radio emission can be estimated. The values of spectral indices in the NW and the SE regions discussed in Sec. 3.3 imply that 1369 and 150 MHz can be considered as the lower and the upper limits on the break frequencies of these regions, respectively. Using these break frequencies, the upper and lower limits on the spectral ages of the NW and of the SE regions of the diffuse radio emission are ∼ 0.08 and 0.4 Gyr, respectively. The spectral ages imply that the radio emission in the NW relic region is young and the acceleration is very efficient. The relic could be the present location of a shock front. One possibility is that a cluster merger in the direction SE to NW drove shocks and injected turbulence in the ICM along its way. Cluster merger shocks can accelerate particles to relativistic energies (Ensslin et al 1998;Hoeft & Bruggen 2007). But the distances to which the shock accelerated particles can diffuse within their radiative lifetimes are short (∼ 200 kpc, see Brunetti et al 2008). Thus shock acceleration alone cannot be responsible for the Mpc scale radio emission as seen in A2256. However, the relic region having a sharp edge at the NW and a projected width of ∼ 300 kpc towards the SE could be the result of shock acceleration. The reasons for the possibility of shock acceleration in the relic region are as follows. An average linear polarization fraction of ∼ 20% has been detected across the relic at 1.4 GHz with the averaged B-vector oriented with an angle 10 • (measured east of north) (CE06). In the region of shock, the magnetic fields become alligned with the shock plane. Based on the polarized fraction in the relic, CE06 propose that the shock plane is alligned at an angle of ∼ 45 • with the plane of the sky. This orientation is consistent with the SE to NW direction in which the shock is likely to have propagated based on the spectral ages discussed above. At the location of the shock, which is the site of reacceleration, flat spectral indices are expected and gradually in the wake of the shock, the spectra would be steeper. The relic has a flat spectral index of α ∼ −0.7 between 150 and 1363 MHz but shows a gradual steepening from NW to SE in the frequency range 1363 -1700 MHz (CE06). It should be noted that there is no evidence for a shock in A2256 from the X-ray observations with Chandra and XMM Newton (Sun et al 2002;Bourdin & Mazzotta 2008). Viewing angle can be a reason for the non-detection of shock in X-rays. Nevertheless there is evidence for mergers in A2256. The distributions of the optical galaxies and of the Xray emission in A2256 show substructures which are believed to be due to 2 merger events (Berrington et al 2003;Sun et al 2002). Based on the optical substructure, Berrington et al (2003) propose two mergers in A2256. One is a merger between the primary cluster (PC) and the sub-cluster (SC) and another between a group (Gr) and the combined system of PC and SC or the PC. The symbols box, circle and star (Fig. 2, left) indicate the positions of the optical centroids of the PC, the SC and the Gr, respectively. The estimates of mass for the PC, the SC and the Gr are 1.6×10 15 M ⊙ , 0.51×10 15 M ⊙ and 0.17 ×10 15 M ⊙ , respectively. Of the two mergers one is major merger (PC+SC, mass ratio ∼ 3) and another is a minor merger (PC+Gr or (PC+SC)+Gr, mass ratio ∼ 10). 
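Returning briefly to the spectral ages quoted earlier in this section, a rough numerical sketch of how such limits follow from a standard synchrotron ageing relation is given below. It assumes B = 1 µG and a commonly used prefactor, so it is only an order-of-magnitude illustration rather than the calculation actually performed here.

import numpy as np

def spectral_age_gyr(b_ug, nu_break_ghz, z):
    """Approximate radiative age (Gyr) for a break frequency nu_break_ghz (GHz)
    in a field b_ug (microgauss); includes inverse-Compton losses off the CMB."""
    b_ic = 3.25 * (1.0 + z) ** 2                     # equivalent CMB field in microgauss
    t_myr = 1590.0 * np.sqrt(b_ug) / ((b_ug ** 2 + b_ic ** 2)
                                      * np.sqrt(nu_break_ghz * (1.0 + z)))
    return t_myr / 1000.0

z = 0.0581
print(spectral_age_gyr(1.0, 1.369, z))   # ~0.09 Gyr for a break at 1369 MHz (NW region)
print(spectral_age_gyr(1.0, 0.150, z))   # ~0.28 Gyr for a break at 150 MHz (SE region)

These values are of the same order as the ~0.08 and ~0.4 Gyr limits quoted above; the residual differences simply reflect the choice of prefactor and assumed field strength.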
Numerical simulations have shown that mergers with mass ratios of ∼ 3 − 10 can give rise to shocks with Mach numbers of ∼ 1.5 − 3, but these are not sufficient to accelerate the particles responsible for the observed synchrotron emission (Gabici & Blasi 2003). Thus, even in the region of the relic in A2256, mechanisms other than shock acceleration may be active. Turbulent reacceleration is a mechanism by which Mpc-scale radio emission can be generated in clusters of galaxies (Roettiger et al 1999; Fujita et al 2003; Brunetti et al 2004; Brunetti & Lazarian 2007). The first step for this mechanism is the injection of fluid turbulence in the ICM. Mergers of clusters are one of the most favoured routes for the injection of fluid turbulence on scales of 0.5-1 Mpc in the ICM. Numerical simulations and observations have provided evidence for the existence of fluid turbulence in the ICM on scales of 0.1-1 Mpc (Sunyaev et al 2003; Schuecker et al 2004; Churazov et al 2004; Gastaldello & Molendi 2004; Dolag et al 2005; Vazza et al 2006). According to the simulations of cluster mergers by Cassano & Brunetti (2005), mergers with mass ratios in the range 3-10, as is the case for A2256, inject energy equivalent to 5 − 8% of that of the thermal gas into large-scale fluid turbulence. The efficiency of turbulent reacceleration depends on many factors, such as the timescale for cascading and the wave-particle coupling, which in turn depend on physical quantities that are unknown. For example, the spectrum of the magnetosonic (MS) waves and the structure of the magnetic fields are not known. However, Cassano & Brunetti (2005) have shown that MS waves with scales of ∼ 100 kpc can efficiently accelerate fast electrons in the ICM to energies sufficient for producing the synchrotron emission detected in the radio bands. The decay time of the MHD turbulence at the injection length scale (L_inj ∼ 1 Mpc) can be estimated as τ_kk ≈ L_inj/(√η_t v_i), where v_i is the relative velocity of impact of the merging clusters and η_t is the fraction of the energy in turbulence that is in MS waves (Cassano & Brunetti 2005). The value of η_t has been constrained by requiring that the accelerated electrons can produce synchrotron emission with spectral index ∼ 1.1 − 1.5 between 327 and 1400 MHz (Cassano & Brunetti 2005). The spectral index of the radio halo in A2256 is ∼ 1.6 and the relative velocity between the merging sub-clusters (PC+SC) is ∼ 2000 km s^-1 (Berrington et al 2002). The timescale for the decay of turbulence (τ_kk ∼ Gyr) is therefore comparable to the crossing time of the merging sub-clusters. If the mergers in A2256 have occurred over the last Gyr, then the presence of the Mpc-scale radio halo is consistent with the timescale over which the turbulence decays. Further, we compare the spectral trends with the geometries of the mergers in A2256. The mean radial velocity of the Gr is ∼ 2000 km s^-1 higher than that of the PC, and the Gr is moving along the line of sight into the PC (Miller et al 2003). Such a merger event could lead to a shock travelling from the SE to the NW and could also inject turbulence in the swept-up ICM if the direction of the merger is inclined to the line of sight. The shock and turbulence in the NW region and the turbulence in the region that was swept by the passage of the Gr can generate diffuse radio emission. Moreover, the major merger between the PC and the SC must also result in the injection of turbulence in the ICM. The spectral index trend in the diffuse radio emission (steepening from NW to SE) is consistent with the geometry of the merger of the Gr with the PC.
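A rough numerical check of the turbulence decay timescale follows; it assumes the scaling τ_kk ≈ L_inj/(√η_t v_i) quoted above, with L_inj = 1 Mpc and v_i = 2000 km s^-1, and treats η_t as a free parameter since its value is only loosely constrained.

```python
# Order-of-magnitude decay time of injected turbulence (assumed scaling, see text).
MPC_KM = 3.086e19   # 1 Mpc in km
GYR_S = 3.156e16    # 1 Gyr in s

def tau_kk_gyr(l_inj_mpc, v_i_kms, eta_t):
    """Decay time in Gyr, assuming tau_kk ~ L_inj / (sqrt(eta_t) * v_i)."""
    return l_inj_mpc * MPC_KM / (eta_t ** 0.5 * v_i_kms) / GYR_S

for eta_t in (0.2, 0.26, 0.3):
    # ~1 Gyr for plausible eta_t, comparable to the sub-cluster crossing time.
    print(eta_t, round(tau_kk_gyr(1.0, 2000.0, eta_t), 2))
```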
Miller et al (2003) point out the frequent occurrence of radio sources associated with the star forming galaxies in the Gr and interpret it as an effect of merger. Thus there is evidence for the Gr to have undergone merger in a direction into the plane of the sky; the exact orientation is not clear. Miller et al (2003) have argued that the merger of the Gr viewed 0.3 Gyr after the core passage is responsible for the diffuse radio emission in the NW of the cluster. The co-spatial occurrence of the brightest diffuse radio emission and the Gr and the direction of spectral index steepening being consistent with the proposed direction of the Gr -PC merger, support the picture of Miller et al (2003). The radiative lifetimes discussed earlier also support the passage of the Gr within the last 0.4 Gyr. If the merger between PC and SC has been in progress for the past Gyr it will contribute to enhance the radio emission but its role in producing the spectral index distribution is not clear. Another possibility is that the amount of energy that was injected as fluid turbulence in different regions is different and thus has resulted in flat spectrum emission in one region and steep spectrum emission in another. The dependence of acceleration by turbulence on the many unknown parameters mentioned earlier make further progress in testing this possibility difficult. The group Gr, being less massive and not being part of a major merger in A2256 is not favoured by CE06 to have created the radio emission in the NW region. Instead, CE06 have proposed two scenarios involving the major merger between the PC and the SC (see Fig. 11 a and 11 b in CE06); in both the scenarios the SC approaches the PC from the NW in projection on the plane of the sky. In the first scenario the SC approaches the PC from the NW and is proposed to be in the early stages of merger. In this picture the merger shock crosses from the NW to the SE or is along the line of sight towards the observer (see Fig. 11a in CE06). This implies steep spectra in the NW edge and flatter spectra towards SE or a uniform distribution of spectral indices as seen by the observer, respectively. The diffuse radio emission in the SE (radio halo) is considered a remnant of an older merger (CE06). This picture is contradictory to the observed spectral steepening from the NW to the SE and does not account for the radio emission in the SE region (radio halo). In the second scenario of CE06, the SC is in an advanced stage of merging. The outgoing merger shocks are proposed to create the radio emission. One of the shock waves is along the line of sight towards the observer (Fig. 11b) and thus might result into complex spectral index distribution; it cannot explain the spectral index variation from the NW to the SE. Apart from the trend of spectral steepening from NW to SE, the spectrum of the diffuse radio emission steepens at lower frequencies (Fig. 4). Since a synchrotron spectrum is expected to steepen at higher frequencies due to energy losses, this low frequency steepening in the diffuse emission cannot be explained by a single population of emitting particles. Superposition of at least two spectra having unequal amount of steepening can give rise to a spectrum that steepens at low frequencies. It is possible that the two superposing spectra are due to populations of electrons accelerated at different epochs. 
The two merger events in A2256 could be the two epochs at which the electrons were accelerated; the merger of SC with PC and that of the Gr with the PC and the SC. According to the models proposed to explain the X-ray substructure, the merger between the PC and the SC is viewed 0.2 Gyr prior to the core passage and the Gr merger is viewed 0.3 Gyr after the core passage (Miller et al. 2003;Roettiger et al 1995). These timescales are consistent with the timescale of ∼ 0.08 − 0.4 Gyr over which the radio emission remains detectable in the frequency range of 150 -1400 MHz. Due to lack of detailed simulations, the chronology between the two merger events and the exact mechanism of acceleration of electrons to relativistic energies cannot be established with confidence. Hydro/ N body simulations reproducing the optical and X-ray substructure and the temperature distribution are required to obtain the detailed geometries and the chronological order of the two mergers in A2256. An alternative possibility is that a complex distribution of turbulence and magnetic fields in the cluster may result in regions with flat and steep spectra. Such regions seen projected along the line of sight could also produce a low frequency steepening in the integrated spectrum. A peculiar low frequency steepening trend seen in the integrated spectra of A2256 and of the diffuse radio emission in it have been discussed above. It should be noted that at high frequencies, the synchrotron spectra are expected to steepen due to energy losses. The synchrotron emission due to acceleration mechanisms such as turbulence or shocks have a cutoff due to the maximum energy that is available in turbulence or the strength of the shock. In the case of A2256, a spectral index map of the NW region (marked G and H in the Fig. 2 of this paper and referred to as the relic by CE06) between 1369 and 1703 MHz (Fig. 4 in CE06) is available. This spectral index map in CE06 is affected by the loss of sensitivity to extended structure at 1703 MHz due to instrumental limitations and larger noise level. Nevertheless, the spectral indices in the high surface brightness relic region can be used for comparison with our spectral index maps. The spectral index map in CE06 shows a flat spectrum (α ∼ −0.9) emission at the NW edge and a steeper spectrum (α ∼ −1.6) emission in the SE region of the relic. The same relic region (marked G and H) in our spectral index map between 350 and 1369 MHz (Fig. 2, left) shows flatter spectral indices ∼ −0.7 to −1.0. The steeper spectral indices in the 1369-1703 MHz spectral index map in CE06 could be the effect of ageing of the synchrotron spectrum due to energy losses. Thus putting together the spectral information across the frequency range of 150 to 1703 MHz for the NW region a picture consistent with a high frequency steepening due to the energy losses and a low frequency steepening due to a second component of electron population emerges. Further deeper observations at high frequencies (> 1.4 GHz) which can image the entire extent of the diffuse emission in A2256 are required to confirm the steepening due to energy losses. Conclusions We have carried out a multi-wavelength analysis of the merging rich cluster of galaxies A2256. Using new radio observations at 150 MHz from the GMRT and archival observations from the VLA (1369 MHz) and the WSRT (350 MHz), we have produced spectral index images of the diffuse radio emission in A2256 over the range 150-1369 MHz. 
These spectral index images show regions of the diffuse radio emission with flat spectral indices in the NW and steep spectral indices in the SE of the cluster centre. The implied synchrotron lifetimes for the relativistic plasma are in the range 0.08-0.4 Gyr. Such a distribution of spectral indices is interpreted as resulting from a merger through the cluster from the SE to the NW in the last 0.5 Gyr or so. Acceleration due to shocks can explain the emission only in the NW relic region. The generation of the diffuse radio halo emission is likely to be due to the turbulence injected into the ICM by the mergers. The injection of fluid turbulence by mergers in the last Gyr and the timescales of ∼ Gyr required for the cascade of MHD turbulence are consistent with the spectral ages of ∼ 0.4 Gyr. Furthermore, the diffuse radio emission shows spectral steepening toward lower frequencies. This low-frequency spectral steepening is consistent with a combination of spectra from two populations of relativistic electrons created at two epochs (two mergers) within the last ∼ 0.5 Gyr. Earlier interpretations of X-ray and optical data suggested that there have been two mergers in Abell 2256 in the last ∼ 0.5 Gyr, consistent with the current findings. Also highlighted is the futility of correlating the average temperatures of the thermal gas with the average spectral indices of the diffuse radio emission in the respective clusters. Spectral index imaging of diffuse radio emission in galaxy clusters is found to be a powerful tool to study cluster mergers, dynamics and the history of cluster formation. We thank the staff of the GMRT who have made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. We thank the anonymous referee for valuable suggestions regarding shock acceleration and turbulence. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Westerbork Synthesis Radio Telescope is operated by ASTRON (Netherlands Institute for Radio Astronomy) with support from the Netherlands Foundation for Scientific Research (NWO).
Figure caption: Contour levels are -18, 18, 36, 54, 72, 90, 126, 180, 240, 360, 480, 600 mJy beam^-1. The DSS R band image is shown in greyscale. The linear scale is marked for an assumed redshift of 0.058. The labels A, B, C, D, F, O, G and H for the radio sources follow the convention in Bridle et al (1979) and R94. The synthesized beam is 67″ × 67″. The symbols square, circle and star (left panel) represent the centroids of the primary cluster, the sub-cluster and the group, respectively (Berrington et al 2002). In the left panel, the black (white in the online version) contours are a schematic representation of regions at different temperatures (Bourdin & Mazzotta 2008). Contours enclose regions with temperatures of 4-6 keV (innermost), 6-8 keV (the region surrounding the innermost region) and 8-10 keV (the region toward the SE), respectively.
2010-05-20T06:46:54.000Z
2010-05-20T00:00:00.000
{ "year": 2010, "sha1": "03631ac50841777d33b326c2b1c5bd5b9e7641d3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1005.3604", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "03631ac50841777d33b326c2b1c5bd5b9e7641d3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
149392853
pes2o/s2orc
v3-fos-license
The Development of Instructional Materials E-Learning Based On Blended Learning The use of e-learning is becoming the global issue now. In an educational field, there are many institutions already use it. The study very important aimed to test the feasibility and effectiveness the development of instructional materials e-learning based on the blended learning in audio/radio media development course. The background laid behind the problem is the experience the students had that is having difficulties in developing the audio/radio media manuscript. This caused by the limited time lecturer had to explain the screenwriting material, and duration the students had to write the audio/radio script, so it affected the lack of students’ understanding of the course material also to the students’ scriptwriting result which is not feasible to produce yet. Standard mastery of the subject specified within 6 (six) weeks in maximum, and the students should have been able to develop the script established on the rules of production. Thus, the outcome of this research would be the e-learning-based instructional materials based on blended learning, Semester Lesson Plan (SLP) audio/radio media. This inquiry aims at the improvement of the quality of the work of the audio/radio manuscript. This study implements the Research and Development methodology which is based on the steps generally refers to the opinion of Borg and Gall. The R & D steps was done with modified to simplify it into three main stages, namely the introduction, the development, and testing. Data obtained from the learning experts get the percentage of 91.67%, the course professionals and media specialists each earn a percentage of 100%. Based on these data, the model of blended learning instructional materials for the development of audio/radio media course that have been developed could be the solution of the research question stated that blended learning models of instructional materials that have been evolved are practical for use in learning instructional activity. Introduction Every effort and a conscious determination in the learning process is nothing but to improve the quality of learning.Various models have been established starting from the beginning of learning process with the direct face-to-face meet up with the students up to the state where there is no meet up needed, but still would be facilitated by kind of online classes.The development of the Technology of Information and Communication is so swiftly changed affecting the learning activities of outside-guided to be self-guided centered.It also completed because the ease of internet access which makes e-learning is increasingly being popular in education.According to Miarso (2004), the use of e-learning will never separate from the use of internet access.The information obtained from the sites is rather complete so that it affects lecturer's task in the instructional learning process.It used to be this situation where the lecturer's control almost all the input, then the learning environment did develop where the books and lecturer both dominating the input, and will come the time where both books, lecturer, and technology would all dominating the input on the learning process. 
In globalization era, through the enhancement of technology surely growth hastily, allowing the students even able to learn more effectively and efficiently.One of the ways is to making use of the internet access as one of the alternatives to get the learning sources.The use of e-learning based on blending learning for this one is not only applicable for long-distance class but also could be developed into a good use for the conventional education system.The development of audio/radio media is one of course subjects which is must be taken by the students of the undergraduate program of Technology of Education in the Faculty of Education of State University of Surabaya, Indonesia.One proficiency must be master by the students in this course includes developing audio program script. A good script will determine the program's quality also it could be a guild for both directors and another production team while the recording happened.Audio media script contains the sounds queue, both human's lines, music, and other sound effects which could support the creation of any mood on the program that recorded.Script development competency is having a high difficulty that it needs strong determination and higher creativity in the creation.The understanding about the use of audio language, parties on music and sound effects, also the scriptwriting technique must be mastered by the students before jump into the process of the creation of learning the audio script. Based on the observation and interview results also to look in the state of the course, especially when it comes to the part of creating the script, there are some problems which seem need to be solved immediately, and those problems happened in each year when the students have to take the course.In the practical case, most of the students happened to struggle in developing the audio/radio media script.The case happened due to the time limitation the lecturer had in explaining the lesson related to scriptwriting and the time limitation the students had to do their task so that it cause less understanding by the students on this course.Also there are many of the practice results are not feasible to established.The standard time set to master the course limited in six weeks in maximum, and the students should have been able to develop the script in a result where it would be fit to the standard production. 
The number of constraints that occurred during the implementation of the learning process, such as lecture hours which are sorely lacking come along with kind of materials or instructional content that is too much is one of the factors the underlying idea to develop an e-learning-based blended learning.Within this model of blended learning instructional materials, we expect to be able to overcome all the obstacles that exist; also it has Semester Lesson Plan (SLP) at each meeting for related course.So the learning will be done following the pattern of the learning media that have been created.Only for some periods, meet up the class dismissed and the class would run online so that the class would be still available by using online learning system without decreasing any understanding in the course materials.Based on the need to develop the quality of instructional learning process on this course, it would be better to set the focus on the lack of time duration the class had.To exclude the students directly on the materials delivery should be taught that is by developing e-learning activity based on blended learning which supports the conventional learning system could be a solution. Based on the facts have been stated above, the researcher feels the urge to develop e-learning based on blended learning that could be studied independently by the students outside the class hours which is well-matched both with the materials and students' characteristic.The problem would likely happen in the process of the media development is the nippy way to groups on, executes, and renewing a learning and practice program (Sadiman, 2016:2).Referring to Anderson's statement, it seems that to be a researcher means must be really understanding on the characters of media, materials, and the students.Just so the media could be used to enhance the students' learning quality. Based on the need analysis done, it could be concluded that there will be e-learning development based on blended learning required to do for audio/radio media development course.The reason is that the characteristic of the utensil of blended learning based e-learning is deemed to be the most suitable relating on the problems happened.In the other hand, it also could be learned independently, students-oriented.Based on the description stated above, the conclusion for the research question would be "Is blended-learning instructional models for audio/radio media development course is practicable for the students of Curriculum and Technology of Education, Faculty of Education, State University of Surabaya, Indonesia?" Context and Review of Literature Blended-learning is defined by etymologic from the word blended and learning, blended means a mixture combination alignment (Oxford English Dictionary) in (Heinze & Procter, 2004:236).While learning explained as the act to understand or to comprehend something, etymologically it could be concluded as the combination of the practical from instructional learning activity.Blended-learning is a flexible approach where its program includes a combination of various places and times that could be used to study.Based on Semler (2005): "Blended Learning combines the best aspects of online learning, structured face-to-face activities, and real-world practice.The aspects such as Online learning systems, classroom training, and on-the-job experience hold the biggest part as an obstacle.The Blended Learning approach uses the strengths of each to retaliate the others' deficiency." 
According to Rovai and Jordan (2004, p. 3) the instructional models of blended learning is a combination of excellence in learning done by face to face (face-to-face learning) and the virtual (e-learning) ones.The instructional material of blended-learning is a model that combines learning system face-to-face and virtual learning system.This combination, according to Whitelock and Jelfs (2003), gives three terms to Blended Learning, namely: a.The integrated mix of traditional learning with web-based online approaches (drawing on the work of Harrison); b.The blending of media and tools employed in an e-Learning environment; c.The combination of some pedagogic approaches, irrespective of learning technology use (drawing on the work of Driscoll).There are three of them according to Whitelock and Jelfs (2003), namely: a) fusion/integration of traditional learning approaches to web-based on-line (drawing on the work of Harrison); b) a combination of media and equipment used in e-learning environment and; c ) a combination of a number of pedagogical approaches regardless of the technology used (drawing on the work of Driscoll). According to Sharpe, et.al (2006) Blended e-learning characteristics includes: Statutes resources to supplement learning programs related to most of the traditional line, through institutional support virtual learning environment; Transformative level instructional practices supported by up-depth learning design; A holistic view of technology to support learning.From the characteristics above, the distinctive of blended learning are the source of the supplement, with the traditional approach which also supports virtual learning environment through an institution, learning design focusing at the time of grade changes instructional practices and views about all of the technology used to support learning (Riyana, 2010).The characteristics of blended learning based on the previous statement is; first about the instructional materials, blended learning is an additional source for the conventional learning supported by the virtual happened, changes in the practice of the implementation of learning that includes specially designed, and the use of technology that is thorough in its implementation. Approach to the learning theory underlying the application of the instructional materials of blended-learning models aimed at keeping the goals settled could be achieved outstandingly.Based on the understanding of the instructional materials of blended-learning models, means it is combinations of learning models so that learning theory used also vary according to the circumstances of learners and the course environment.To identify about the learning theory used, it is needed to know in the components included in the instructional materials in blended-learning.The fundamental of the instructional materials of blended-learning are face-to-face learning, web-based learning, online learning, computer-based learning, internet based learning, and e-learning. The model for the instructional materials is composed of by the experts based on the principal of instructional learning activity, psychological theory, sociology, system analysis, or other theories supporting (Joyce & Weil, 2000).According to Rusman (2015, p. 133), the course model is a plan or pattern which can be used to from a curriculum (a long-session lesson plan), crafting for instructional materials, also to guide the instructional process in the class, etc.Based on Joyce and Weil (2000, p. 
13), it is stated that course model is a description of the learning environment which portraying the curriculum plan, classes, instructional designs, class materials, textbooks, workbooks, and instructional aids through a computer.This model is an instructional design which in its practical provides a media for the students.Sagala (2012, p. 176) argued that this learning model be a conceptual framework that describes a systematic procedure in organizing the learning experience of students to achieve specific learning objectives, and serves as a guide for learning designer and lecturer in planning and implementing learning activities. Methods This study was applying the design of research and development (Research and Development).It is related to the main purpose of this research which is to determine on what kind of Blended Learning model that could be developed at Department of Curriculum and Educational Technology, State University of Surabaya, Indonesia, also the potential support that can be strived to develop a blended learning model of learning in Department of Curriculum and Educational Technology, State University of Surabaya, Indonesia.As described by Borg and Gall (1989, p. 772), "Educational research and development (R & D) is a process used to develop and validate educational products.The steps of this process are usually referred to the R & D cycle that subsists of studying research findings application to the product to be evolve, advancing the product based on the discovery, field testing the product in the framework where it will be used ultimately, and revising it to correct the weaknesses found in the field of testing stage.In indicating that the product meets its behaviorally defined objectives". Which means research and educational development (R & D ) is a process used to develop and validate a product in education.The steps in this process are generally known as the cycle of R & D which consists of an assessment of the results of previous studies related to the validity of the components in the product to be developed, to develop it into a product, to test the brand designed and to review also correcting it based on trial.It was an indication that the findings of product development activities undertaken have objectivity.Product education (educational product) within the term has a broad meaning and includes not only material beings such as textbooks, instructional videos and such but also to the development of processes and procedures, such as the advancement of a method or model.With that in mind, the approach to research and development is seen to have the procedural steps making it easier for developers to be able to go through each step, and this model has a methodical sequence so as to produce a viable media for use in the learning process.The measures contained in this model basically has two (2) goal is to develop products and test the effectiveness of the product to achieve goals. 
Findings The instructional materials for blended learning were developed following the steps of the Research & Development (R & D) model. The development process is described below. Research and Information Collecting The first step taken by the researchers in developing the blended learning instructional materials for the audio/radio media development course, for students majoring in Curriculum and Educational Technology, Faculty of Education, State University of Surabaya, Indonesia, was to conduct research and gather information. Based on the observations conducted, the following real conditions in the learning process were identified: the audio media development course, especially the scriptwriting material, had problems that needed to be solved immediately, and these problems recurred every year the course was offered. In practice, most students found it difficult to develop an audio/radio media script; this was reflected in a weak understanding of the lesson and in outcomes that did not meet production standards. The standard time allotted to master the material is six weeks at most, within which students should be able to develop a script that meets production standards. The audio/radio media development course is compulsory for students in the undergraduate program of Technology of Education, Faculty of Education, State University of Surabaya, Indonesia, and one of the competencies students must master in this course is developing an audio script. Planning After identifying the shortcomings of the course, the next step was to gather the information needed to plan a product that could resolve the problems. Based on the observed situation, the researchers developed blended learning instructional materials suited to the audio/radio media development course; this model combines face-to-face learning and online learning. In planning the product, the researchers prepared a draft Semester Lesson Plan (SLP) and a web flowchart. To ensure the process ran smoothly, the researchers also collected data on the facilities available to the students for running the prepared model. Class observation of the 47 students recorded 39 laptops, 35 portable wifi hotspots, 34 gadgets and six (6) personal computers.
Develop Preliminary Form of Product The stages in developing the preliminary form of the product are described below. Semester Lesson Plan (SLP) modification: in this phase, the researchers modified the Semester Lesson Plan (SLP), originally created for the conventional model, into a model that uses two styles of learning, namely face-to-face learning and online learning. The previous Semester Lesson Plan (SLP) was made by the lecturer of the audio/radio media development course. The goal of this modification is to match the learning environment to the instructional activity and to achieve the course goals as fully as possible. Materials framework: in this stage the researchers composed, together with the subject-matter expert, the materials to be used in the instructional activity. The objective of this phase is to establish which materials must be mastered by the students so that they can achieve the objectives of the course. The materials delivered in class should engage the students taking the course so that they comprehend the lesson completely and with ease; this material development was carried out in sessions with the team of lecturers responsible for delivering the course. Pre-production: before producing the e-learning media to be used as the instructional materials for blended learning, a web design is needed. The trial of the e-learning design served as the benchmark of success in developing the prototype, so that the media would be applicable in instructional learning activity. The trial was carried out in sessions with experts on both the materials and the media to be delivered and developed. From this review, all the aspects assessed by the instructional design experts received an overall score of 91.67%. That percentage is considered good, so the learning model is practicable for use in the audio/radio media development course in the Curriculum and Technology of Education Program, Faculty of Education, State University of Surabaya, Indonesia. Material Experts' Validation The experts validated the model by observing how the trial of the blended learning model was conducted in the audio/radio media development course. In this phase, the experts were two lecturers of the Curriculum and Educational Technology Program, Faculty of Education, State University of Surabaya, Indonesia. Criteria: Yes = 1, No = 0. As can be seen in the table above, the data analysis gives an average of 100%, which can be regarded as a well-delivered result. Media Experts' Validation The validation was carried out by two experts. The web media used in this blended learning model was reviewed by two lecturers from the Curriculum and Technology of Education Program, Faculty of Education, State University of Surabaya, Indonesia. Criteria: Yes = 1, No = 0. Based on the table above, the analyzed data give an average of 100%. The conclusion is that the web media used in the blended learning model is acknowledged as good media. Main Product Revision Instructional Design Experts' Revision: after receiving the validation results from the experts, there was no suggestion to change the blended learning media developed for the audio/radio media development course; in short, no revision was needed. Material Experts' Revision: after the review by the experts, no suggestions were received concerning the blended learning model developed for the audio/radio media development course.
In conclusion, there is no revision needed. Media Experts' Revision: after receiving the validation results from the experts, there was no suggestion to change the blended learning media developed for the audio/radio media development course; in short, no revision was needed. The first stage up to the fifth stage described above are the steps the researchers went through to develop the media. The sixth stage to the tenth stage were not carried out because of limited funds, and because this research aimed only to develop the blended learning instructional materials for the audio/radio media development course for the Curriculum and Technology of Education Program, Faculty of Education, State University of Surabaya, Indonesia. The results of the research show a positive impact of the blended learning syllabus on the learning process. Blended learning works as an alternative solution to the shortcomings of the conventional (lecturing) method. By implementing blended learning, the learning process makes a stronger impression on the students, is more engaging, motivates the students to learn, and encourages them to pay more attention to the materials delivered by their lecturer. In addition, students can also learn by themselves outside the scheduled learning period, because the materials can be accessed online, complete with various quizzes prepared by the lecturer to deepen their understanding. Therefore, using blended learning can improve students' learning outcomes. In conclusion, the use of blended learning is effective for the audio/radio media development course in the Curriculum and Technology of Education Program, Faculty of Education, State University of Surabaya, Indonesia. Discussion This development resulted in an instructional media product, namely the blended learning instructional model, which can be used in instructional activity for the audio/radio media development course. After going through several stages and trials, the blended learning model was finally acknowledged as a feasible medium for use in the course. The trials and the revisions obtained are discussed below. Data obtained from both instructional design experts, across all the aspects they reviewed, give a high percentage of 91.67%, which, following the criteria of Arikunto (2015:31), is categorized as a good result. In this context, blended learning is the act of combining face-to-face learning activity and e-learning activity. The interesting aspects of this study are: 1. All learning activities, both face-to-face and e-learning, are arranged in the syllabus; 2. There is a new facility in the e-learning, namely video conferencing, so the lecturer and the students can interact directly, besides using the other common features of e-learning. State University of Surabaya is one of the representative colleges in developing e-learning. If this program works, it could be applied to all universities in Indonesia. Therefore, this study contributes to the advancement of education in Indonesia.
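For concreteness, the expert-review percentages quoted in this study (91.67% and 100%) can be reproduced from the Yes = 1 / No = 0 checklist scoring described above, as in the sketch below; the numbers of checklist items shown are illustrative assumptions, not the actual validation instrument.

```python
def validation_percentage(scores, max_per_item=1):
    """Percentage agreement: obtained score divided by the maximum possible score."""
    return 100.0 * sum(scores) / (len(scores) * max_per_item)

# Illustrative: 11 of 12 items rated 'Yes' by the instructional design experts.
print(round(validation_percentage([1] * 11 + [0]), 2))  # 91.67
# All items rated 'Yes' by the material experts and by the media experts.
print(validation_percentage([1] * 10))                   # 100.0
```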
Conclusion Based on the results of the development and the analysis of the data, it is concluded that the blended learning instructional design developed for the audio/radio media development course is applicable for use with students majoring in Curriculum and Educational Technology, Faculty of Education, State University of Surabaya, Indonesia. Based on the overall results of the research and the discussion of this development, several suggestions are given to increase the usefulness of the results. The recommendations are as follows. Operating Suggestion: in using the blended learning instructional design developed for the audio/radio media development course, the lecturer is expected to note the following. The teacher is not the only learning source; the tutor's role is to act as a facilitator who encourages students to follow the learning activities that have been created, so that the course is interesting and the students are more motivated and active. Variation in learning activities, created using particular media and instructional designs, is an effort to maximize the resources of the class and create an enjoyable learning environment. The use of the blended learning model should be accompanied by the materials and lesson plan already developed by the lecturer; with these supporting materials, it is easier for the tutor to deliver the course. Dissemination Suggestion: this development produced the instructional design of a blended learning model for the audio/radio media development course. If the media is used by another educational institution, the needs analysis, learning environment, student characteristics, school facilities and so on should be re-examined, because every institution has its own distinct circumstances. Development Suggestion: future researchers working on the same topic are advised to create other blended learning media with varied materials, including interactive quizzes and video conferencing, and with greater flexibility, so that the implemented learning activities remain interesting.
Table 2. Result of the review of material experts I and II.
Table 3. Result of the review of media experts I and II.
Based on Table 1, which reports the instructional design validation by the two expert respondents, the percentage obtained is acknowledged as an excellent result, so the learning model is feasible to implement for the audio/radio media development course, Department of Curriculum and Technology of Education, Faculty of Education, State University of Surabaya, Indonesia. Data obtained from the two material experts, across all aspects reviewed, give a percentage of 100%, so the learning model is applicable for use in the audio/radio media development class, Curriculum and Technology of Education, Faculty of Education, State University of Surabaya, Indonesia. Data obtained from the two media experts, across all aspects reviewed, likewise give a percentage of 100%, so the learning model is applicable for use in the audio/radio media development class, Curriculum and Technology of Education, Faculty of Education, State University of Surabaya, Indonesia. Based on these data, the instructional design of the blended learning model developed for the audio/radio media development course answers the research question stated earlier, namely that the blended learning instructional design that has been developed is applicable in the class. The development of a blended learning syllabus for the audio/radio media development course is a new contribution in the Curriculum and Technology of Education major, Faculty of Education, State University of Surabaya, Indonesia.
2018-12-12T03:50:58.989Z
2017-06-27T00:00:00.000
{ "year": 2017, "sha1": "44eb907176713debd8a8bc53efda263238fc741a", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/ies/article/download/66040/37588", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "44eb907176713debd8a8bc53efda263238fc741a", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Psychology" ] }
58949965
pes2o/s2orc
v3-fos-license
Light chain cardiac amyloidosis-a rare cause of heart failure in a young adult Cardiac amyloidosis is an infiltrative cardiomyopathy, resulting from amyloid deposition within the myocardium. In primary systemic (AL-type) amyloidosis, the amyloid protein is composed of light chains resulting from plasma-cell dyscrasia, and cardiac involvement occurs in up to 50% of the patients We present a case of a 43-year-old man, with complaints of periodical swollen tongue and xerostomia, bleeding gums and haematuria for two months. His blood results showed normocytic anaemia, thrombocytopenia and a high spontaneous INR, therefore he was referred to the Internal Medicine clinic. In the first visit, he showed signs and symptoms of overt congestive heart failure and was referred to the emergency department. The electrocardiogram showed sinus tachycardia and low voltage criteria. Echocardiography showed biventricular hypertrophy with preserved ejection fraction, restrictive physiology with elevated filling pressures, thickened interatrial septum and atrioventricular valves, small pericardial effusion and relative “apical sparing” on 2D longitudinal strain. Cardiac MRI showed diffuse subendocardial late enhancement. Serum protein electrophoresis was inconclusive, however urine analysis revealed nephrotic range proteinuria, positive Bence Jones protein and an immunofixation test with a monoclonal lambda protein band. Abdominal fat biopsy was negative for Congo red stain, nevertheless a bone marrow biopsy was performed, revealing lambda protein monoclonal plasmocytosis, confirming the diagnosis of primary systemic amyloidosis. This case represents a rare cause of heart failure in a young adult. Low-voltage QRS complexes and typical echocardiography features should raise the suspicion for cardiac amyloidosis. Prognosis is dictated by the level of cardiac involvement; therefore, early diagnosis and treatment are crucial. INTRODUCTION Restrictive cardiomyopathies are a heterogeneous group of myocardial diseases, whose hallmark echocardiographic finding is severe diastolic dysfunction.They are not common in daily practice, and their initial presentation is diverse.Several clinical, electrocardiographic and echocardiographic features may help in the diagnosis. CASE We present a case of a 43-year-old man, with history of schizophrenia, previously evaluated by different physicians because of swollen tongue and xerostomia, with an inconclusive work-up.He presented to his general practitioner with fatigue, bleeding gums and haematuria in the past two months.His blood test results showed normocytic anaemia, VIDEO KEY: Echocardiogram with parasternal long and short axis and apical four chamber views and 2D speckle tracking longitudinal strain bull's eye.Typical cardiac amyloidosis features are present: left ventricular hypertrophy with granular appearance, dilated atria, thickened atrioventricular valves and interatrial septum, small pericardial effusion and "apical sparing" pattern. 
thrombocytopenia and a high spontaneous INR, therefore he was referred to the Internal Medicine clinic. At the first visit, he showed signs and symptoms of overt congestive heart failure and was referred to the emergency department. Chest radiography revealed interstitial oedema and bilateral pleural effusion, and the electrocardiogram (ECG) showed sinus tachycardia and low voltage criteria (Figure 1). His hemogram and INR were similar to the previous results, with elevated alkaline phosphatase and gamma glutamyl transferase, high BNP and slight elevation of troponin and creatinine. Echocardiography showed biventricular hypertrophy with preserved ejection fraction, restrictive physiology with elevated filling pressures and relative "apical sparing" on 2D longitudinal strain. He also had a thickened interatrial septum, thickened atrioventricular valves and a small pericardial effusion (Figure 2; Video). He was admitted to the Cardiology Department for medical treatment, with progressive clinical improvement. Diagnostic work-up showed hypoalbuminemia, low levels of factors V and X, antithrombin III and protein C, and elevated b2-microglobulin, ferritin and erythrocyte sedimentation rate, while the serum protein electrophoresis was inconclusive. Alpha-galactosidase levels were borderline low. Abdominal echography was not suggestive of chronic liver disease. Cardiac MRI showed diffuse subendocardial late enhancement. A 24-hour urine collection revealed nephrotic range proteinuria, positive Bence Jones protein and an immunofixation test with a monoclonal lambda protein band. Abdominal fat biopsy was negative for Congo red stain and the genetic study was also negative for mutations in the TTR and GLA genes. Nonetheless, a bone marrow biopsy was performed and it was notable for lambda protein monoclonal plasmocytosis, accounting for 80% of the total cellularity, confirming the diagnosis of primary systemic (AL-type) amyloidosis. He was considered a poor candidate for autologous stem-cell transplantation, due to advanced cardiac involvement, and was started on chemotherapy (CyBORD protocol). Seven cycles were administered, with excellent tolerability and response. The patient is currently in NYHA class II, two years after the initial admission, under regular haematology and cardiology follow-up. DISCUSSION This case represents a rare cause of heart failure in a young adult. Cardiac amyloidosis is an infiltrative cardiomyopathy, resulting from amyloid deposition within the myocardium 1,2 . In primary systemic (AL-type) amyloidosis, the amyloid protein is composed of light chains resulting from plasma-cell dyscrasia 2 , and cardiac involvement occurs in up to 50% of the patients 3 . Macroglossia, nephrotic syndrome, bleeding and hepatomegaly are common clues to the diagnosis 4 . Low-voltage QRS complexes and typical echocardiography features should in turn raise the suspicion for cardiac amyloidosis 2 . The abdominal fat biopsy has a sensitivity of around 80% 2,4,5 , therefore other histologic studies should be performed if the result is negative 2 . Prognosis is dictated by the level of cardiac involvement (survival < 6 months after the onset of heart failure without treatment), therefore early diagnosis and treatment are crucial 1,2,4 . Conflict of interests: None to declare. Funding: None. FIGURE 1 - Electrocardiogram: sinus tachycardia, low voltage criteria in the limb leads and diffuse nonspecific repolarization abnormalities.
FIGURE 2 - Echocardiogram: A - Parasternal long axis view (still image): thickened left ventricular walls, dilated left atrium and small pericardial effusion. B - Speckle tracking longitudinal strain traces and bull's eye, with typical "apical sparing" pattern. C - Transmitral flow pulsed wave Doppler showing a restrictive pattern. D - Tissue Doppler with low e' and high E/e' ratio, suggestive of high left atrial pressures.
2019-01-23T21:23:09.299Z
2018-01-24T00:00:00.000
{ "year": 2018, "sha1": "3541f7d81178a87bf7efdf2e043d052009216fce", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/ramb/v64n9/1806-9282-ramb-64-9-0787.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3a9aea3d423dfd2b9a1e25757dc99f18818ffcff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53113126
pes2o/s2orc
v3-fos-license
A Substrate-Independent Framework to Characterise Reservoir Computers The Reservoir Computing (RC) framework states that any non-linear, input-driven dynamical system (the reservoir) exhibiting properties such as a fading memory and input separability can be trained to perform computational tasks. This broad inclusion of systems has led to many new physical substrates for RC. Properties essential for reservoirs to compute are tuned through reconfiguration of the substrate, such as change in virtual topology or physical morphology. As a result, each substrate possesses a unique"quality"--- obtained through reconfiguration --- to realise different reservoirs for different tasks. Here we describe an experimental framework that can be used to characterise the quality of any substrate for RC. Our framework reveals that a definition of quality is not only useful to compare substrates, but can also help map the non-trivial relationship between properties and task performance. And through quality, we may even be able to predict the performance of similarly behaved substrates. Applying the framework, we can explain why a previously investigated carbon nanotube/polymer composite performs modestly on tasks, due to a poor quality. In the wider context, the framework offers a greater understanding to what makes a dynamical system compute, helping improve the design of future substrates for RC. Introduction Reservoir computing (RC) first emerged as an alternative method for constructing and training recurrent neural networks [1,2]. The method primarily involved constructing a random fixed recurrent network of neurons, and training only a single linear readout layer. It was found that random networks constructed with certain dynamical traits could produce state-of-the-art performance without the laborious process of training individual internal connections. The concept later expanded to encompass any high-dimensional, input-driven dynamical system that could operate within specific dynamical regimes, leading to an explosion in new RC substrates. 1 The ability to perform useful information processing is an almost universal characteristic of dynamical systems, provided a fading memory and linearly independent internal variables are present [3]. However, each dynamical system will tend to suit different tasks, with only some performing well across a range of tasks. In recent years, RC has been applied to a variety of physical systems such as optoelectronic and photonic [4,5], quantum [6][7][8], disordered and self-organizing [9,10], magnetic [11,12] and memristor-based [13] computing systems. The way in which each substrate realizes a reservoir computer varies. However, each tends to implement, physically or virtually, a network of coupled processing units. In each implementation, the concept is to use and exploit the underlying physics of the substrate, to embrace intrinsic properties that can improve performance, efficiency and/or computational power. Each substrate is configured, controlled and tuned to perform a desired functionality, typically requiring the careful tuning of parameters in order to produce a working, or optimal, physical reservoir computer for ad hoc problems. Despite the recent advances of new physical reservoir systems, basic questions for RC are still unanswered. These open problems are summarized and explained in [14]. Relevant questions include: What class of problems can RC solve efficiently? What is the role of heterogeneous structure in RC? 
What are the limits and benefits of a given physical system for RC? What are the benefits of a physical implementation? To answer these questions, and for the field to move forward, a greater understanding is required about the computational expressiveness of reservoirs and the substrates they are implemented on, if not to at least determine what tasks, for what substrates, are realistically solvable. In the terminology used here, a reservoir represents the resulting abstract system and its dynamics instantiated by (typically, but not limited to) a single, typically static, configuration of the substrate. For an artificial recurrent neural network, implemented in silico, configuration may refer to a set of trained connection weights, defined neuron types and topology. For another substrate, configuration may refer to the physical morphology, physical state, external control signals or complexification of the driving input signal. The number of possible reservoir systems realizable by a substrate depends upon the number of free parameters, and the distinct dynamical behaviours resulting from those parameters. For unconstrained substrates, limited only by the laws of physics, this number may be vast. Yet this does not imply that every such configuration and corresponding reservoir is practical or useful. This also does not imply that each new configuration leads to a different reservoir system; the same or similar dynamical behaviour may be produced by different configurations. The mapping between substrate configuration and instantiated reservoir may be complex. Here we present a practical framework to measure the computational expressiveness of physical or virtual substrates, providing a method to characterize and measure the RC quality of substrates. A higher quality substrate is one that can realize more distinct reservoirs through configuration, giving it greater expressiveness and higher dynamical freedom, and so a greater capacity to tackle very different tasks. Quality is quantified and measured here as the number of distinct reservoirs, or dynamical behaviours, a single substrate can exhibit. The number of reservoirs, rather than configurations, is what is important. This does not imply that substrates with fewer available configuration degrees of freedom perform poorly at every task; they may perform very well at specific tasks within their dynamical range, but are likely to perform poorly when evaluated across a broad range of tasks. To characterize the quality of different substrates, we present the CHARC (CHAracterization of Reservoir Computers) framework. The framework has a basic underlying structure, which can be extended if needed. To demonstrate the framework, it is applied to three different substrates: echo state networks (ESNs) [15], simulated delay-based reservoirs (DRs) [4,16] and a physical carbon nanotube (CNT) composite [9]. The definitions, techniques and substrate-independence of the framework are evaluated using a number of common benchmark tasks. The rest of the paper describes the framework and the techniques used within it, beginning with a description of the workflow, the task-independent properties and search procedure used to characterize the substrate. Framework outline The basic structure and flow of the framework is presented in figure 1. The complete characterization process is divided into a series of phases and levels. In phase one (P1), a reference substrate is evaluated, forming the basis against which to compare quality values. 
In phase two (P2), the test substrate is assessed and compared to the reference. (a) Basic levels The three basic levels required for each phase are definition, exploration and evaluation. Additional levels may be added, providing further functions that can be used to manipulate, model and learn from the data produced by the characterization process. Here, an additional level is used to validate and determine the reliability and substrate-independence of the overall framework; others are also possible, see §2b. In general, each level requires the results from the previous level. Techniques applied at each level are flexible, and may be substituted with alternative approaches. The techniques and measures used here are simple, and provide a good foundation to demonstrate the framework's concept. The definition level (P1.1, P2.1) defines the reservoir behaviour space to be explored. The behaviour space represents the abstract behaviour of the configured substrate relative to measures of dynamical properties, and is the space in which quality is measured. The framework uses n measures (see example in figure 2) to define the axes of the n-dimensional behaviour space. See §2c for the measures used here. The exploration level (P1.2, P2.2) measures the quality, by determining how much of the behaviour space is realizable through substrate configurations. An exhaustive search of the substrate's parameter space is infeasible. Instead, the use of diversity search algorithms [17] is recommended. These exploration techniques, based on evolutionary algorithms, can characterize the behaviour range and dynamical freedom of the substrate. The evaluation level (P1.3, P2.3) estimates quality, by using the behaviours discovered from the exploration level. The behaviour space is divided into discrete voxels; the total number of voxels occupied by discovered behaviours provides the final quality value of the substrate. In P2.3, the quality of the test substrate is compared with the quality of the reference substrate from P1.3. (b) Additional levels providing further functions Additional levels can be added to the framework to extract further features about the substrate and the behaviour space representation. They need not necessarily relate to the evaluation level (the quality value), and may work independently of it. Example additional levels include: modelling the relationships between the behaviour space and task-specific performances; modelling the relationships between the behaviour space and configuration space. Such relationships can be modelled and learnt using machine learning techniques. Here, one additional level is created: a learning level (P1. 4, P2.4). The learning level is used here to evaluate whether the framework is reliable (that the behaviour metrics capture the underlying reservoir properties) and substrate-independent (that behaviours learned in one substrate can be transferred to a different substrate). To achieve independence, the reliability of the behaviour space representation should be high. In reality, due to noise and translation between the simulated and physical domain, we require reliability above some acceptable threshold. Further levels building on the exploration and learning levels are also possible. For example, the discovered behaviours can provide a reduced search space from which to rank and find optimal reservoirs for a particular task. 
As the number of tasks increases, this reduced search space decreases the required time to find good task-specific reservoirs without having to repeatedly search over the full configuration space. (c) Task-independent properties In order to form a suitable behaviour space, we need to define each dimension of the space carefully. Some potentially interesting properties are difficult, if not impossible, to transfer across all substrates. For example, measures that require access to the system's internal workings will not transfer to black-box systems. Measures used with the framework must represent only the observable behaviour of the system, independent of its implementation. In general, the behaviour space should be created using as many uncorrelated measures as possible, representing different computational and dynamical properties. This will improve the reliability of the framework, but result in a larger space to explore, requiring more evaluations to build a useful characterization. In the work here, three common property measures are taken from the RC literature to form the behaviour space. These measures are the kernel rank (KR), generalization rank and memory capacity. KR is a measure of the reservoir's ability to separate distinct input patterns [18]. It measures a reservoir's ability to produce a rich nonlinear representation of the input u and its history u(t − 1), u(t − 2), . . .. This is closely linked to the linear separation property, measuring how different input signals map onto different reservoir states. As many practical tasks are linearly inseparable, reservoirs typically require some nonlinear transformation of the input. KR is a measure of the complexity and diversity of these nonlinear operations performed by the reservoir. GR is a measure of the reservoir's capability to generalize given similar input streams [18]. It attempts to quantify the generalization capability of a learning device in terms of its estimated VC-dimension [19], i.e. how well the learned nonlinear function generalizes to new inputs. In general, a low GR symbolizes a robust ability to map similar inputs to similar reservoir states, rather than overfitting noise. Reservoirs in ordered dynamical regimes typically have low ranking values in both KR and GR, and in chaotic regimes have both high. A rule-of-thumb is that good reservoirs possess a high KR and a low GR [20]. In terms of matching reservoir dynamics to tasks, the precise balance will vary. A unique trait that physical and unconventional substrates are likely to possess is the ability to feature multiple time-scales and possess tunable time scales through reconfiguration, unlike their more conventional reservoir counterparts. Another important property for RC is memory, as reservoirs are typically configured to solve temporal problems. (A substrate without memory may still be computationally interesting for solving non-temporal problems.) A simple measure for reservoir memory is the linear short-term memory capacity (MC). This was first outlined in [21] to quantify the echo state property. For the echo state property to hold, the dynamics of the input-driven reservoir must asymptotically wash out any information resulting from initial conditions. This property therefore implies a fading memory exists, characterized by the short-term memory capacity. A full understanding of a reservoir's MC, however, cannot be encapsulated through a linear memory measure alone, as a reservoir will possess some nonlinear memory. 
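Before turning to further memory measures, a brief illustration of how the two rank-based measures above can be estimated in practice may be useful: drive the reservoir with m distinct input streams (or, for GR, noisy copies of a single stream), collect one resulting state vector per stream into a matrix, and count the singular values above a threshold. The sketch below, in Python, assumes a generic run_reservoir callback that returns a state vector for a given input stream; the function names, stream lengths, noise level and singular-value threshold are illustrative choices, not the settings used in this paper.

import numpy as np

def effective_rank(states, tol=1e-3):
    # states: (n_state_variables x n_streams) matrix, one final state vector per column
    s = np.linalg.svd(states, compute_uv=False)
    return int(np.sum(s > tol * s.max()))

def kernel_rank(run_reservoir, n_streams=100, length=200, seed=0):
    rng = np.random.default_rng(seed)
    cols = [run_reservoir(rng.uniform(-1, 1, length)) for _ in range(n_streams)]
    return effective_rank(np.column_stack(cols))        # separation of distinct inputs

def generalization_rank(run_reservoir, n_streams=100, length=200, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    base = rng.uniform(-1, 1, length)                   # one base stream
    cols = [run_reservoir(base + noise * rng.standard_normal(length))
            for _ in range(n_streams)]
    return effective_rank(np.column_stack(cols))        # robustness to similar inputs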
Other memory measures proposed in the literature quantify other aspects of memory, such as the quadratic and cross-memory capacities, and the total memory of reservoirs using the Fisher memory curve [3,22]. The linear measure is used here to demonstrate the framework; additional measures can be added as needed. (d) Behaviour space exploration To characterize the reservoir behaviour space, the search must explore without optimizing towards any particular property values. A balance between properties is essential to match reservoir dynamics to tasks. However, determining the right balance is challenging. During the characterization process, the exact balance required for specific tasks is irrelevant. Instead, the focus is to explore and map the space of possible trade-offs the substrate can exhibit, and use this to determine substrate quality. For the framework to function, the mapped reservoir behaviour space requires substrate-independence, so the exploration cannot be conducted, or measured, in the substrate-specific parameter space. Also, the exploration must be able to function without prior knowledge of how to construct reservoirs far apart from each other in the behaviour space, as diversity in observed dynamics is not easily related to diversity in substrate-specific parameters. Here, exploration is performed using the open-ended novelty search (NS) algorithm [23][24][25], one of several possible diversity algorithms [17]. NS increases the selection pressure of an underlying evolutionary algorithm towards novel behaviours far apart in the behaviour space. The full details of our NS implementation are given in appendix A. Phase one: reference substrate Phase one establishes a suitable reference substrate to compare against. Here, we use recurrent neural networks (RNNs) that closely resemble ESNs [21] as the reference. These are well established state-of-the-art reservoir 'substrates'. RNNs are flexible, universal approximators of dynamical systems [26] producing a vast range of dynamics when reconfigured. For a standard ESN, the reservoir state update equation is x(t) = f(W_in u(t) + W x(t − 1) + W_fb y(t − 1)), where f is the neuron activation function (typically a sigmoid), u(t) is the input, and the weight matrices W_in, W and W_fb are matrices of connection weights to inputs (W_in), internal neurons (W) and from the output to internal neurons (W_fb). In many cases, the feedback weights W_fb are unused and the other weight matrices are selected from a random distribution, then scaled globally. The final trained output y(t) is given when the reservoir states x(t) are combined with the trained readout layer W_out: y(t) = W_out x(t). Training of the readout is typically carried out in a supervised way using one-shot linear regression with a teacher signal. A practical guide to creating and training ESNs is given in [27]. A minimal code sketch of this update equation and readout training is given after the novelty-search comparison below. (a) Demonstrating and validating the framework (b) Novelty versus random search Here we apply the exploration process P1.2, and evaluate the use of NS by comparing it to random search, determining its usefulness for characterizing a substrate. If NS performs well, that is, if it discovers a greater variation in behaviours than random search within the same time across network sizes, we argue it will continue to be advantageous for different substrates. First, we compare NS and random search visually. The hypothesis here is that NS can cover a greater volume of the behaviour space within the same number of search evaluations. The results of this experiment show that for every network size, NS expands further in all dimensions of the behaviour space.
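To make the reference substrate concrete, the following is a minimal echo state network sketch following the update and readout equations above. The tanh activation, the spectral-radius and input-scaling values, and the use of a pseudo-inverse for the one-shot readout training are illustrative assumptions rather than the exact settings used in the experiments.

import numpy as np

class SimpleESN:
    def __init__(self, n_nodes=50, n_in=1, spectral_radius=0.9, input_scale=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = input_scale * rng.uniform(-1, 1, (n_nodes, n_in))
        W = rng.uniform(-0.5, 0.5, (n_nodes, n_nodes))
        # scale the internal weights globally to the chosen spectral radius
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.W_out = None

    def run(self, U):
        # U: (timesteps, n_in) input sequence; returns state matrix X: (timesteps, n_nodes)
        x = np.zeros(self.W.shape[0])
        X = np.zeros((len(U), len(x)))
        for t, u in enumerate(U):
            x = np.tanh(self.W_in @ u + self.W @ x)    # x(t) = f(W_in u(t) + W x(t-1))
            X[t] = x
        return X

    def train_readout(self, U, Y_target, washout=50):
        # one-shot linear regression of the readout: y(t) = W_out x(t)
        X = self.run(U)[washout:]
        self.W_out = np.linalg.pinv(X) @ Y_target[washout:]
        return X @ self.W_out                          # predictions on the training data

A random search over such a substrate simply redraws the weight matrices and global scaling parameters from uniform distributions; novelty search instead mutates them under selection pressure for behavioural novelty.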
In figure 3, the explored spaces of the 50 and 200 node ESNs using both search techniques are plotted. In total, approximately 20 000 configurations from 10 separate runs are displayed. Random search (in black, bottom row), which selects weights and scaling parameters from uniform random distributions, appears to produce similar patterns in the behaviour space across all network sizes. These patterns show sparse regions that are difficult to discover, and dense areas that are frequently visited despite different configuration parameters. As network size increases, random search tends to find it more challenging to uniformly cover the behaviour space, suggesting it becomes less effective as substrate complexity increases. NS (in red, top row) covers the behaviour space more uniformly, filling sparse regions and expanding into areas undiscovered by the random search. It does this within the same number of network evaluations, showing itself to be a more effective search technique than simply sampling the configuration space from a random uniform distribution. (c) Quality measure Here we perform the evaluation process P1.3 on the behaviours discovered by NS above, in order to evaluate the voxel-based quality measure proposed to quantify the coverage of the behaviour space, and thus quality. To measure quality and coverage of the behaviour space, standard statistical measures of dispersion such as standard deviation, mean absolute deviation and interquartile range are not suitable by themselves: they downplay outliers, whereas the aim is to measure both the volume and the boundaries of the region explored. For this reason, a voxel-based measure is adopted here. Discovered behaviour instances occupying the same voxel are counted once, thereby grouping similarly behaved reservoirs as a single behaviour voxel. In our three-dimensional example, the discovered behaviours define the bounds of the measurable behaviour space: a large cube. The space is then discretized and partitioned into smaller voxel cubes. The smallest possible voxel size is 1 × 1 × 1: the smallest discretized value of the KR and GR property measures. Voxel size needs to be chosen carefully in order to accurately compare substrates. If the voxel size is too small, every occupied voxel will contain exactly one explored reservoir behaviour, and the quality measure will merely record the number of search points evaluated. If the voxel size is too large, the quality measure will be too coarse grained to make distinctions. Experiments to investigate the effect of voxel size are given in appendix E. These lead us to choose a voxel size of V size = 10 × 10 × 10 for the rest of this paper. The quality of a tested substrate is equal to the final coverage value. As voxel size and total number of evaluations both affect this value, the reference and test substrate should be compared using the same framework parameters. (d) Reliability of the behaviour space In the last part of phase one addressed here, P1.4, the reliability of the behaviour space is measured, to demonstrate that the framework produces usable results. The outcome of this measure is also used to determine that the behaviour space is independent of the substrate implementation, P2.4, §4d. If the reliability is poor, independence is difficult to measure and interpret. To assess reliability and independence, concepts such as the representation relation and commuting diagrams from Abstraction/Representation (A/R) theory [28] are adapted to form a testable hypothesis. 
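Returning briefly to the voxel-based quality measure of the previous subsection before the reliability analysis, the coverage computation itself is straightforward once behaviours have been collected: discretize each (KR, GR, MC) triple into a voxel index and count the unique occupied voxels. The sketch below assumes behaviours are stored as rows of a NumPy array; the flooring convention and argument names are illustrative, while the 10 x 10 x 10 voxel size follows the value chosen in the text.

import numpy as np

def quality(behaviours, voxel_size=(10, 10, 10)):
    # behaviours: (n_discovered, 3) array of (KR, GR, MC) values found during exploration
    b = np.asarray(behaviours, dtype=float)
    voxels = np.floor(b / np.asarray(voxel_size, dtype=float)).astype(int)
    # behaviours falling into the same voxel are counted once
    return len({tuple(v) for v in voxels})

Comparing substrates then reduces to comparing coverage values obtained with the same voxel size and the same number of search evaluations.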
In A/R theory [28], a framework is proposed to define when a physical system computes. Using those concepts, one can assess whether an abstract computational model reliably represents computation performed by a physical system. Our hypothesis for the framework is that if the abstract reservoir space is truly representative of system dynamics, and independent of its implementation, it should hold that similar behaviours across substrates produce similar task performances. This hypothesis was conceived using A/R commuting diagrams as a template: if the computational model faithfully represents the computation of the physical system, one can predict how the physical system states will evolve. To test the hypothesis, first the relationship between task performance and reservoir behaviour is modelled. The reliability of this model, measured as the prediction error of the model, indicates how well the behaviour space captures the computation occurring within the substrate. As explained in [14], relating property measures to expected performance across many tasks is a non-trivial problem, as good properties for one task are often detrimental to another. Therefore, no single set of measured values will lead to high performance across all tasks. However, the relationship between behaviour measure values and a single task is often simpler to determine; these are the relationships to be modelled. To create the prediction model, four common RC benchmark tasks are selected: the nonlinear autoregressive moving average (NARMA) task with a 10th and a 30th order time-lag; the Santa Fe laser time-series prediction task; the nonlinear channel equalization (NCE) task. Each task requires a different set of reservoir properties. Full details of the tasks are provided in appendix B. The modelling process of matching behaviours to task performances is framed as a regression problem. The model is created using standard feed-forward neural networks (FFNNs) and trained using a sample of the behaviours discovered in the exploration process, and their evaluated task performances. The inputs of the FFNNs are MC (continuous-valued), KR and GR (discrete values). The output of the FFNNs is the predicted task performance (continuous-valued) of each behaviour, measured as the normalized mean squared error (NMSE). The prediction error of the FFNNs is measured on the test sample, as the root mean squared error (RMSE) between the predicted NMSE and the behaviour's actual evaluated NMSE for a given task. That is, the prediction error is PE = sqrt((1/N) Σ_i (ptp_i − atp_i)^2), where N is the size of the test set, ptp is the predicted task performance NMSE, and atp is the actual task performance NMSE. In the experiment, multiple FFNNs of the same size are trained per task, and per substrate network size (see appendix F for experimental details). If the behaviour space provides a reliable representation, the mean prediction error of the trained FFNNs should be low, since reliability implies a strong relationship is present, that is not too difficult to model, and that is similar when network size changes. Some difference in prediction error is present between models trained with different network sizes. This is due to different behavioural ranges, resulting in an increase or decrease in complexity of the modelled relationships. For example, reservoirs in the behaviour space around KR = GR = MC ≤ 25 tend to have similar poor performances for the NARMA-30 task because they do not meet a minimum requirement (MC ≥ 30). An illustrative sketch of this modelling step is given below.
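The modelling step just described can be reproduced with any off-the-shelf regressor. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the feed-forward networks, mapping (KR, GR, MC) triples to one task's NMSE and reporting the prediction error as the RMSE between predicted and actual NMSE on a held-out split; the hidden-layer size, the split and the other hyperparameters are illustrative assumptions rather than the settings of appendix F.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def fit_performance_model(behaviours, task_nmse, hidden=(10,), seed=0):
    # behaviours: (n, 3) array of (KR, GR, MC); task_nmse: (n,) evaluated task errors
    X_tr, X_te, y_tr, y_te = train_test_split(behaviours, task_nmse,
                                              test_size=0.2, random_state=seed)
    model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000,
                         random_state=seed).fit(X_tr, y_tr)
    prediction_error = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))  # RMSE
    return model, prediction_error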
This minimum memory requirement means the NARMA-30 task is easier to model for small networks, as performance tends to be equally poor for all behaviours. Similarly, when larger ESNs are used to model the relationship, prediction error will likely increase as the distribution of errors increases and the complexity of the model increases. Patterns such as this are task-dependent, adding another level of complexity to the modelling process. For some tasks, reliably modelling the relationship requires a greater variety of behaviours than smaller ESNs can provide. Therefore, FFNNs trained on the behaviour space of a 200 node network perform better than ones provided by the smaller networks, despite the apparent increase in complexity. The mean prediction errors of the FFNNs, for each task and substrate size, are shown in figure 4. Overall, the prediction errors are low, with typical values of less than 0.16. The prediction error for task 3 (Santa Fe laser) and task 4 (nonlinear channel equalization) decreases with substrate size, suggesting the model improves when trained using a larger variety of behaviours. However, these two tasks are particularly challenging to model (with a typical RMSE > 0.1) because of outliers in the training data coming from poor (high task error) and very good (low task error) reservoirs, the latter typically with an NMSE well below 0.1. For the NARMA tasks, task 1 (NARMA-10) and task 2 (NARMA-30), the prediction error increases as the network size increases. Prediction accuracy of the model therefore tends to degrade when trained with larger behaviour spaces, in contrast with tasks 3 and 4. However, this increase in error happens as the variation in task performances increases, mirroring the same modelling problem seen for tasks 3 and 4. The lowest task errors for NARMA-10 drop from an NMSE ≈ 0.13 to an NMSE ≈ 0.01 as size increases. The same also occurs for the NARMA-30 task, with the lowest errors decreasing from an NMSE ≈ 0.48 to an NMSE ≈ 0.14. From these results, a strong correlation emerges between the variance in task performance (NMSE) of each behaviour space and the prediction error (RMSE). This suggests refocusing the learning process: instead of trying to reliably model all behaviours, including the poor reservoirs, try to reliably model and predict only the best performing behaviours. The additional experiments in appendix F show the effect of this refocusing. The RMSE is significantly reduced when modelling behaviours below a task performance error threshold, rather than all behaviours. The results show the behaviour representation and model are most reliable when representing only the better task performing behaviours. Overall, the results of this evaluation step suggest the behaviour space provides a sufficiently reliable representation of the substrate's computational capabilities. However, given that the provided behaviour measures are known not to capture all the interesting dynamics of the system, there is room to improve the behaviour representation and the modelling process. To demonstrate and evaluate the framework, two test substrates are characterized here: a simulated delay-based reservoir, and a physical CNT-based system. Each chosen substrate poses a unique challenge for the framework. These include differences in implementation (simulated or physical), structure (spatial or temporal) and levels of noise in each system. (a) Delay-based reservoir The first substrate to be characterized is based on the delay-line reservoir (DR) system [4,16,29], using a single nonlinear node and a delay line.
This particular reservoir system mimics the structure of a recurrent network of coupled processing nodes in the time domain rather than spatially. By applying time multiplexing and nonlinear mixing to the input signal, a virtual network of processing nodes is created. To date, DRs have produced excellent performances across different RC benchmarks [4,30,31]. Delay-feedback dynamical systems possess high-dimensional state spaces and tunable memory making them ideal candidates for RC. The dynamics of these systems are typically modelled using delay differential equations of the type: where t is time, τ is the delay time, f is the nonlinear function and J(t) is the weighted and time multiplexed input signal u(t). The DR technique is popular for optical and optoelectronic dynamical systems as it enables the exploitation of properties unique to these systems. It also provides a simple structure to overcome technical hardware challenges. These include exploiting high bandwidth and ultrahigh speeds, and removing the demanding requirement of large complex physical networks. The technique however is not limited to these systems. It also offers a novel approach to implement networks efficiently on other hardware platforms. This is particularly useful when few inputs and outputs are available, creating the required state and network complexity in the time-domain to solve tasks. Examples include electronic circuits [4,32], Boolean nodes on a field-programmable gate array (FPGA) [33], a nonlinear mechanical oscillator [34] and spintorque nano-oscillators [8]. However, the DR technique also has potential shortcomings including a serialized input, pre-processing required on the input and limits determined by the length of the delay line. To overcome some of these shortcomings, more complex architectures of multiple timedelay nodes have been proposed, leading to improved performances compared to single-node architectures [35]. The DR system characterized here consists of a simulated Mackey-Glass oscillator and a delay line, inspired by Appeltant et al. [4]. This same system was also realized physically using an analogue electronic circuit in [4]. Details on the implementation of the Mackey-Glass system and the time-multiplexing procedure are provided in appendix Cc. (b) Physical carbon nanotube-based reservoir The second substrate to be characterized is a physical material deposited on a micro-electrode array. The substrate is electrically stimulated and observed using voltage signals and configured through the selection of input and output locations on the array. The material is a mixed CNT-polymer composite, forming random networks of semi-conducting nanotubes suspended in a insulating polymer. The material has been applied to, and performs well on, several computational problems including function approximation, the travelling salesman problem and robot control [36,37]. However, the material has so far produced only modest performances on challenging RC benchmark tasks [9]. As a reservoir, the material has been shown to perform well on simple tasks, but struggles to exhibit strong nonlinearity and sufficient memory for more complex tasks [38,39]. In previous work [9,38,39], a small level of characterization was carried out on different CNTbased reservoirs, showing even the best fabricated material (a 1% concentration of CNTs w.r.t. weight mixed with poly-butyl-methacrylate) typically exhibits low MC, despite different biasing and stimulation methods for configuration. 
The right concentration and arrangement of CNTs, and method for stimulating and observing the material, is still an open question. So far, the methods and materials used have led to overall modest performances on benchmark tasks such as NARMA-10 [9] and the Santa Fe laser timeseries prediction task [38], but encouraging when the number of inputs and outputs are taken into account. Characterizing a black-box material like the CNT composite is challenging because of its disordered structure and stochastic fabrication process, making it impractical (or even impossible for the general case) to model its exact internal workings. Originally, the CNT-polymer composite was proposed as a sandpit material to discover whether computer-controlled evolution could exploit a rich partially constrained source of physical complexity to solve computational problems [40]. Because of its physicality, with somewhat unknown computing properties, it provides a challenging substrate for the CHARC framework to characterize. Further details about the CNT-based substrate and its parameters are provided in appendix Cb. (c) Quality of test substrates A visualization of exploration level P2.2 and the results of the evaluation level P2.3 for each substrate are presented here. Similar to phase one, the quality of each substrate is calculated as the total number of voxels occupied after 2000 search generations. Figure 5 shows the total number of occupied voxels after every 200 generations, with error bars displaying the min-max values for different evolutionary runs. The differences in behavioural freedom between the DR, CNT and ESN substrates are significant. Using the voxel measure, we can determine which of the reference substrates are close equivalents in terms of quality to the test substrates. At the beginning of the search process, the DR appears similar in quality to an ESN with 100 nodes, while the CNT has a considerably smaller quality than the ESN of 25 nodes. As the number of search generations increases, the DR's coverage increases rapidly, reaching a final value close to an ESN with 200 nodes, yet the CNT struggles to increase its coverage. The rate at which behaviours are discovered for the DR and CNT are very telling, suggesting it is much harder to discover new behaviours for the CNT than the DR. This increased difficulty could imply the bounds of the substrate's behaviour space have almost been met: as the discovery rate of new novel behaviours decreases, either the search is stuck exploiting a niche area, or it has reached the boundaries of the whole search space. A visual inspection of the covered behaviour spaces provides a more detailed understanding of the final quality values. The discovered behaviours for both substrates are shown in figure 6. In each subplot, the behaviours for each test substrate (DR in figure 6a and CNT in figure 6b) are presented in the foreground and reference substrates with the most similar quality (200 node ESN in figure 6a and 25 node ESN in figure 6b) are placed in the background. In figure 6a, the DR behaviours extend into regions that the 200 node ESN cannot reach, and, as a consequence, only sparsely occupies regions occupied by the ESN. Given more search generations, these sparse regions would likely be filled, as similar behaviours are already discovered. The DR struggles to exceed the MC of the 200 node ESNs, or exhibit a KR or GR beyond 300, despite having 400 virtual nodes. 
This could indicate that increasing the number of virtual nodes does not necessarily lead to greater memory or dynamical variation, a feature more typical of ESNs (figure 11d). However, the virtual network size is not an isolated parameter; the timescale and nonlinearity of the single node, and the separation between virtual nodes, all play an important role in reservoir dynamics. In figure 6b, the CNT exploration process struggles to find behaviours with MC > 5, reaching what appears to be an MC limit. The highest discovered KR and GR values are also small, tending to be lower than (almost half) their possible maximum values, i.e. the total number of electrodes used as outputs. This suggests the substrate struggles to exhibit enough (stable) internal activity to create a strong nonlinear projection, and to effectively store recent input and state information, agreeing with previous results [9,38,39]. The results here also highlight why only a limited range of tasks are suitable for the substrate, and why small ESNs tend to be good models of the substrate. These results show the CNT substrate in its current form features a limited set of behaviour, explaining its usefulness to only a small class of problems. The DR system features greater dynamical freedom, implying it can perform well across a larger set of problems. The coverage of this particular Mackey-Glass system is similar to large ESNs, explaining why they can closely match the performance of ESNs across the same class of problems [4,41]. (d) Prediction transfer The final level here, P2.4, evaluates the substrate-independence of the framework. To do this, we evaluate the transferability of the learnt relationships (FFNNs) from level P1.4 by measuring their prediction accuracy when tasked to predict a different substrate. We evaluate how well the trained models (FFNNs) of the reference substrates predict the performance of the other reference substrates, i.e. predict the task performance of different ESN sizes. Figure 7 shows the mean prediction error (RMSE) of all FFNNs for every predicted substrate. Each dashed line represents the predicted substrate. The x-axis represents the FFNNs trained on different reference network sizes; four sizes are shown for each task, being FFNNs trained using the ESN sizes 25, 50, 100 and 200 nodes. The y-axis is the prediction error (RMSE) of each model for each substrate. The results show that the models trained with smaller network sizes tend to poorly predict the task performance of larger networks across all tasks. This intuitively makes sense; the smaller network models are trained without any data examples beyond their own behavioural limits, and thus cannot make an accurate prediction for larger networks. Figure 8. Difference ( ) between best (self-)prediction and test prediction for CNT and DR substrates. The models trained with larger network sizes tend to predict the smaller networks fairly well. The best predictions occur when the model is trained and tested using the same network size. Considering the variation in task performance as size increases, and fewer training examples within specific areas occupied by smaller network sizes, prediction appears to be reasonably robust when using the largest explored reference substrate. The model of the largest network (200 node) tends to better predict the DR, on average resulting in the lowest prediction errors. For the CNT, models of all network sizes result in low prediction errors for most tasks, except the nonlinear channel equalization task. 
Prediction error for this task, however, continues to improve as network size increases. Given these results, we argue that a reference substrate with high quality will tend to provide a good prediction of lower quality substrates. Figure 8 summarizes the results of the substrate-independence experiment. It plots the difference (Δ) between the best prediction error and the test substrate's prediction error. When the overall prediction error is low and the difference (Δ) is close to zero, the relationship between behaviour and task performance is strong, and thus the abstract behaviour space reliably represents underlying computational properties, independent of the substrate's implementation. Figure 8 plots Δ for the two test substrates on all four benchmark tasks. Each bar signifies the difference between the best prediction error (from the model trained and tested with the same network size) and the prediction error of the trained model used to predict the test substrate. The results show that, on average, the CNT tends to provide the smallest Δ with models of smaller networks. For the DR, the model of the largest network tends to provide Δ values closest to zero. Overall, the low and similar prediction errors across substrates indicate that the CHARC framework has a good level of substrate independence. The results also highlight the non-trivial nature of modelling the task-property relationship, with some tasks being more difficult to model and predict than others. Although not the original purpose of this level, this demonstrates that one could roughly predict the task performances of newly characterized substrates, or potentially even test new tasks using a trained model, without having to evaluate the test substrate directly. This feature of the framework is potentially beneficial to hardware systems where training can be time and resource intensive. A short usage sketch of this prediction-transfer step, reusing the performance model introduced earlier, follows below.
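Reusing the illustrative fit_performance_model helper sketched earlier, the transfer step reduces to evaluating a model trained on the reference substrate's behaviours against the measured behaviours of a test substrate. The arrays below are random placeholders standing in for data gathered during exploration and task evaluation; they are not results from the experiments.

import numpy as np

rng = np.random.default_rng(0)
esn_behaviours = rng.uniform(0, 50, (500, 3))        # placeholder (KR, GR, MC) triples
esn_task_nmse = rng.uniform(0, 1, 500)               # placeholder evaluated NMSEs
cnt_behaviours = rng.uniform(0, 10, (100, 3))
cnt_task_nmse = rng.uniform(0, 1, 100)

# model trained on the reference ESN's behaviours for one task
model, _ = fit_performance_model(esn_behaviours, esn_task_nmse)

# predict the same task's NMSE for behaviours measured on a different substrate,
# e.g. the CNT composite, and compare with its actually evaluated errors
predicted = model.predict(cnt_behaviours)
transfer_rmse = np.sqrt(np.mean((predicted - cnt_task_nmse) ** 2))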
The characterization process of CHARC has many potential future applications, for example assessing the effect structure, topology and complexity has on dynamical freedom; using quality to guide, understand and explore substrate design; and, eventually, the design of suitable computational models. Ultimately, this can open the door for the co-design of both computational model and substrate to build better, more efficient unconventional computers. Competing interests. We declare we have no competing interests. Funding. This work was part-funded by a Defence Science and Technology Laboratory (DSTL) PhD studentship, and part-funded by the SpInsired project, EPSRC grant no. EP/R032823/1. Appendix A. Novelty search In the presented framework, an open-ended evolutionary algorithm called NS [23][24][25] is used. NS is used to characterize the substrate's behaviour space, i.e. the dynamical freedom of the substrate, by sampling its range of dynamical behaviours. In contrast to objective-based techniques, a search guided by novelty has no explicit task-objective other than to maximize novelty. NS directly rewards divergence from prior behaviours, instead of rewarding progress towards some objective goal. NS explores the behaviour space by promoting configurations that exhibit novel behaviours. Novelty of any individual is computed with respect to its distance from others in the behaviour space. To track novel solutions, an archive is created holding previously explored behaviours. Contrary to objective-based searches, novelty takes into account the set of all behaviours previously encountered, not only the current population. This enables the search to keep track of (and map) lineages and niches that have been previously explored. To promote further exploration, the archive is dynamically updated with respect to two parameters, ρ min and an update interval. The ρ min parameter defines a minimum threshold of novelty that has to be exceeded to enter the archive. The update interval is the frequency at which ρ min is updated. Initially, ρ min should be low, and then raised or lowered if too many or too few individuals are added to the archive in an update interval. Typically in other implementations, a small random chance of any individual being added to the archive is also set. In the presented implementation, a small initial ρ min is selected relative to the behaviour space being explored and updated after a few hundred generations. ρ min is dynamically raised by 20% if more than 10 individuals are added and ρ min is lowered by 5% if no new individuals are added; these values were guided by the literature [23]. To maximize novelty, a selection pressure rewards individuals occupying sparsely populated regions in the behaviour space. To measure local sparsity, the average distance between an individual and its k-nearest neighbours is used. A region that is densely populated results in a small value of the average distance, and in a sparse region, a larger value. The sparseness ρ at point x is given by where ξ i are the k-nearest neighbours of x. The search process is guided by the archive contents and the current behaviours in the population, but the archive does not provide a complete picture of all the behaviours explored. Throughout the search, the population tends to meander around existing behaviours until a new novel solution exceeding the novelty threshold is discovered. To take advantage of this local search, all the explored behaviours are stored in a separate database D. 
This database stores all the information used to characterize the substrate's later quality and has no influence on the search, which uses only the archive. (a) Novelty search implementation In the literature, NS is frequently combined with the Neural Evolution of Augmented Topologies (NEAT) [25,42] representation; this neuro-evolutionary method focuses on adapting network topology and complexifying a definable structure. For the CHARC framework, a more generic implementation is needed, featuring a minimalistic implementation not based on any specific structure or representation. For this reason, an adaptation of the steady-state Microbial Genetic Algorithm (MGA) [43] combined with NS is used. The MGA is a genetic algorithm reduced to its basics, featuring horizontal gene transfer (through bacterial conjugation) and asynchronous changes in population where individuals can survive long periods. To apply the MGA to the problem a number of adaptations are required. Caching fitness values in the standard steady-state fashion is not possible, as fitness is relative to other solutions found and stored in the growing archive. In this implementation, no individual fitnesses are stored across generations; however, the same steady-state population dynamics are kept, i.e. individuals are not culled, and may persist across many generations. An overview of the evolutionary loop is given in figure 9. The complete process is also outlined in pseudo-code in algorithm 1. At the beginning of the search process, a random population is created. In the population, both the substrate configurations and the resulting behaviours B are stored. This initial population is then added to the archive A and database D. At step 1, tournament selection with a tournament size of two is used. To ensure speciation, the first parent is picked at random and the second is chosen within some proximity to the other determined by the MGA parameter deme size. In this step, the fitness values (novelty) of both behaviours are calculated relative to population P and archive A. The individual with the larger distance, that is occupying the less dense region of the behaviour space, is adjudged the winner. This elicits the selection pressure towards novel solutions. The microbial GA differs from other conventional GAs as the weaker (here, less novel) individual becomes 'infected' by the stronger (more novel) one, replacing its original self in the population. At step 2, the configurations of both behaviours are retrieved and manipulated. This constitutes the infection and mutation phase. In the infection phase, the weaker parent undergoes At the last step 4c, the fitness/novelty of the offspring B Child is compared to both the current population P and archive A. If the novelty of the offspring exceeds the novelty threshold ρ min , the behaviour B Child (configuration is not needed) is added to the archive A. Overall, three fitness values are calculated at each generation. Two fitness evaluations occur in the selection phase and a third fitness evaluation is carried out on the offspring, in order to update the archive. The computational complexity of the fitness function is O(nd + kn) using an exhaustive k-nearest neighbour search. As the dimension d of the archive/behaviour space is small (d = 3 property measures), the number of k-neighbours (here k = 15) has the dominant effect. This value of k is chosen experimentally; larger k-values improve accuracy but increase run time. 
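The novelty score and the archive-threshold update described above are simple to state in code. The sketch below measures sparseness as the mean Euclidean distance to the k nearest neighbours in the behaviour space (k = 15, as above) and applies the 20%/5% threshold adjustment rule; the function boundaries and names are illustrative.

import numpy as np

def sparseness(x, others, k=15):
    # mean distance from behaviour x to its k nearest neighbours among the
    # current population plus the archive
    d = np.sort(np.linalg.norm(np.asarray(others) - np.asarray(x), axis=1))
    return d[:k].mean()

def update_rho_min(rho_min, added_in_last_interval):
    # raise the novelty threshold if the archive grows too quickly,
    # lower it slightly if nothing was added during the update interval
    if added_in_last_interval > 10:
        return rho_min * 1.2
    if added_in_last_interval == 0:
        return rho_min * 0.95
    return rho_min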
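One generation of the novelty-driven microbial GA can then be sketched as follows, assuming the sparseness helper above and substrate-specific recombine, mutate and evaluate functions (evaluate runs the substrate and returns its measured (KR, GR, MC) behaviour). The deme-restricted choice of the second parent, the replacement of the less novel parent and the archive update follow the description above; the bookkeeping is simplified for illustration and is not the authors' implementation.

import numpy as np

def ns_generation(population, behaviours, archive, database,
                  evaluate, recombine, mutate, rho_min, deme=40, seed=None):
    rng = np.random.default_rng(seed)
    n = len(population)
    i = int(rng.integers(n))
    j = (i + 1 + int(rng.integers(deme))) % n          # second parent from the local deme

    pool = list(behaviours) + list(archive)
    fit_i = sparseness(behaviours[i], pool)
    fit_j = sparseness(behaviours[j], pool)
    win, lose = (i, j) if fit_i > fit_j else (j, i)

    # the less novel parent is 'infected' by the winner, then mutated,
    # replacing itself in the steady-state population
    child = mutate(recombine(population[win], population[lose]))
    child_behaviour = evaluate(child)
    population[lose], behaviours[lose] = child, child_behaviour
    database.append(child_behaviour)                   # every explored behaviour is kept

    if sparseness(child_behaviour, pool) > rho_min:    # third fitness evaluation
        archive.append(child_behaviour)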
As the archive size increases, run time increases proportional to archive size n. To reduce complexity, Lehman & Stanley [25] describe a method to bound the archive using a limited stack size. They find that removing the earliest explored behaviours, which may result in some limited backtracking, often results in minimal loss to exploration performance. The same NS parameters are applied to every substrate. These are generations limited to 2000; population size = 200; deme = 40; recombination rate = 0.5; mutation rate = 0.1; ρ min = 3 and ρ min update = 200 generations. Five evolutionary runs are conducted for the CNT and delay-based reservoir, as the time to train increases significantly, and 10 runs for the ESN substrates. Appendix B. Benchmark tasks for prediction phase The NARMA task evaluates a reservoir's ability to model an nth order highly nonlinear dynamical system where the system state depends on the driving input and state history. The task contains both nonlinearity and long-term dependencies created by the nth-order time-lag. An nth ordered NARMA task predicts the output y(n + 1) given by equation (B 1) when supplied with u(n) from a uniform distribution of interval [0, 0.5]. For the 10th-order system parameters are: α = 0.3, β = 0.05, δ = 0.1 and γ = 1; for the 30th-order system: α = 0.2, β = 0.004, δ = 0.001 and γ = 1. The laser time-series prediction task predicts the next value of the Santa Fe time-series Competition Data (dataset A). 2 The dataset holds original source data recorded from a Far-Infrared-Laser in a chaotic state. The nonlinear Channel Equalization task introduced in [45] has benchmarked both simulated and physical reservoir systems [31]. The task reconstructs the original i.i.d signal d(n) of a noisy nonlinear wireless communication channel, given the output u(n) of the channel. To construct reservoir input u(n) (see equation (B 3)), d(n) is randomly generated from −3, −1, +1, +3 and placed through equation (B 2): Following [45], the input u(n) signal is shifted +30 and the desired task output is d(t − 2). Appendix C. Substrate parameters (a) Echo state networks In phase one, regardless of network size the same restrictions are placed on global parameter ranges and weights, applying the same weight initiation processes each time. (b) Carbon nanotube-polymer The training and evaluation of the carbon-based substrate is conducted on a digital computer. Inputs and representative reservoir states are supplied as voltage signals. The adaptable parameters for evolution are the number of input-outputs, input signal gain (equivalent to input weights), a set of static configuration voltages (values and location) and location of any ground connections. Configuration voltages act as local or global biases, perturbing the substrate into a dynamical state that conditions the task input signal. In this work, a 1% CNT poly-butyl-methacrylate (CNT/PBMA) mixture substrate is investigated. The substrate was mixed and drop cast onto a micro-electrode array using the same process in [9,38,39]. The electrode array comprises 64 electrodes (contact sizes of 100 µm and spacings of 600 µm between contacts) deposited onto a FR-4 PCB using a chemical process that places nickel and then a layer of gold (figure 10). Two National Instruments DAQ cards perform measurements and output analogue voltages; a PCI-6225 (16-Bit, 250 KS/s, with 80 analogue inputs), and PCI-6723 (13-Bit, 800 KS/s, with 32 analogue outputs). 
Both cards communicate to a desktop PC through a session-based interface in Matlab. The PCI-6723 supplies an additional 8 digital I/O lines to a custom routing board to programme on-board switches and synchronize the cards. (c) Delay-based reservoir To generate N virtual nodes and collapse them into a usable state observation, time-multiplexing is used. The input signal u(t) is sampled and held for the period τ (the length of the delay line) and mixed with a random binary mask M, perturbing the node away from the relaxed steady state. For an interval defined by the node separation θ = τ/N, the mask is applied as a piecewise constant, forming the input to the nonlinear node J(t) = I(t) * M. The state of the ith virtual node is obtained after every τ , as: The model of the Mackey-Glass dynamical system is described aṡ where X represents the state,Ẋ its derivative with respect to time, and τ is the delay of the feedback loop. The parameters η and γ are the feedback strength and input scaling. The exponent p controls the nonlinearity of the node. The parameter T, typically omitted from equation (C 1), represents the characteristic time-scale of the nonlinear node. In order to couple, the virtual nodes and create the network structure, T ≥ θ is required. Together, all these parameters determine the dynamical regime the system operates within. The parameters of the delay-based reservoir in this work are fixed at T = 1, θ = 0.2, τ = 80 and N = 400, based on values given in [4]. During the exploration process, evolution can alter the mask, flipping between the binary values [−0.1, 0.1] and manipulate all of the Mackey-Glass parameters: 0 < η < 1, 0 < γ < 1 and the exponent 0 < p < 20. Appendix D. Property measures (a) Kernel and generalization rank The kernel measure is performed by computing the rank r of an n × m matrix M, outlined in [20]. To create the matrix M, m distinct input streams u i , . . . , u m are supplied to the reservoir, resulting in n reservoir states x u i . Place the states x u i in each column of the matrix M and repeat m times. The rank r of M is computed using singular value decomposition (SVD) and is equal to the number of non-zero diagonal entries in the unitary matrix. The maximum value of r is always equal to the smallest dimension of M. To calculate the effective rank, and better capture the information content, remove small singular values using some high threshold value. To produce an accurate measure of KR m should be sufficiently large, as accuracy will tend to increase with m until it eventually converges. The GR is a measure of the reservoir's capability to generalize given similar input streams. It is calculated using the same rank measure as kernel quality, however each input stream u i+1 , . . . , u m is a noisy version of the original u i . A low generalization rank symbolizes a robust ability to map similar inputs to similar reservoir states. (b) Memory capacity A simple measure for the linear short-term MC of a reservoir was first outlined in [21] to quantify the echo state property. For the echo state property to hold, the dynamics of the input-driven reservoir must asymptotically wash out any information resulting from initial conditions. This property therefore implies a fading memory exists, characterized by the short-term memory capacity. To evaluate memory capacity of an N node reservoir, we measure how many delayed versions k of the input u the outputs y can recall, or recover with precision. 
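Before detailing the memory-capacity measure, here is a simulation sketch of the time-multiplexed delay-based reservoir described in appendix C(c) above. It uses simple Euler integration of one commonly used form of the input-driven Mackey-Glass delay system, T dX/dt = −X(t) + η [X(t − τ) + γ J(t)] / (1 + [X(t − τ) + γ J(t)]^p); since equation (C 1) is not reproduced in the text, this exact form, the integration step, the buffer handling and the η, γ, p defaults should be read as illustrative assumptions, while N = 400, θ = 0.2 and τ = Nθ = 80 follow the fixed values above.

import numpy as np

def delay_reservoir(u, mask, N=400, theta=0.2, T=1.0, eta=0.5, gamma=0.05, p=1, dt=0.01):
    # u: input samples; mask: length-N mask with values in {-0.1, 0.1}
    steps_per_node = int(round(theta / dt))
    buffer_len = N * steps_per_node              # one full delay period tau = N * theta
    X = np.zeros(buffer_len)                     # circular buffer holding X over one tau
    states = np.zeros((len(u), N))               # virtual node states, one row per sample
    idx = 0
    for t, sample in enumerate(u):
        J = sample * mask                        # sample-and-hold plus masking (time multiplexing)
        for node in range(N):
            for _ in range(steps_per_node):
                drive = X[idx] + gamma * J[node]          # X(t - tau) + gamma * J(t)
                dX = (-X[idx - 1] + eta * drive / (1.0 + drive ** p)) / T
                X[idx] = X[idx - 1] + dt * dX             # Euler step overwrites the oldest value
                idx = (idx + 1) % buffer_len
            states[t, node] = X[idx - 1]          # read each virtual node at the end of its slot
    return states

mask = np.random.default_rng(1).choice([-0.1, 0.1], size=400)

The plain Python loops are slow and purely illustrative; an efficient implementation would vectorize the integration or use a compiled backend.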
Memory capacity MC is measured by how much variance of the delayed input u(t − k) is recovered at y_k(t), summed over all delays: MC = Σ_k MC_k, with MC_k = cov^2(u(t − k), y_k(t)) / (σ^2(u(t)) σ^2(y_k(t))). (D 1) A typical input consists of t samples randomly chosen from a uniform distribution between [0, 1]. Jaeger [21] demonstrates that ESNs driven by an i.i.d. signal can possess only MC ≤ N. A full understanding of a reservoir's memory capacity cannot be encapsulated through a linear measure alone, as a reservoir will possess some nonlinear capacity. Other memory capacity measures proposed in the literature quantify the nonlinear, quadratic and cross-memory capacities of reservoirs [3]. Determining the reliability of the behaviour space representation is challenging. Selecting suitable data for this modelling process is difficult as some behaviours perform particularly poorly on tasks, reducing the overall prediction accuracy of the model. Poor task performing reservoirs tend to increase noise in the model's training data, as some appear to be randomly scattered across the behaviour space. To reduce the problem, different thresholds were placed on the training data to show how well the relationship of the highest performing reservoirs can be modelled. Applying each threshold, reservoirs with task performance (NMSE) above the threshold are removed from the training and test data. A low prediction error (RMSE) of the model with low thresholds indicates greater ability to predict high performing reservoirs. At higher thresholds, more training data are available but include reservoirs that perform poorly on the task. The mean prediction errors of 10 feed-forward networks, trained with each threshold, on each task, using the behaviours from the 200 node ESNs, are shown in figure 12. Across all tasks, the accuracy of the model improves when smaller thresholds are used, i.e. error is smallest when predicting only the highest performing reservoirs, suggesting a strong relationship between behaviour space and task performance. To visualize how well the relationship is modelled for task performances of NMSE < 1, we plot the predicted NMSE versus the evaluated NMSE in figure 13. Here, the outputs of four FFNNs, one trained for each task, are given. We see the laser and nonlinear channel equalization tasks are harder to model, typically resulting in an overestimation, as the actual task performances of most behaviours tend to be low, generally with an NMSE < 0.2. We also calculate Spearman's ρ (called R here), a non-parametric test measuring the strength of association between predicted and actual values. A value of −1 indicates perfect negative correlation, while a value of +1 indicates perfect positive correlation. A value of 0 indicates no correlation between predicted and actual values. The p-value for each measure is also provided. If the p-value is less than the significance level of 0.05, it indicates a rejection of the null hypothesis that no correlation exists, at the 95% confidence level. The high values of R and essentially zero p-values suggest the models predict task performance very well.
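Finally, the linear memory capacity defined in equation (D 1) above can be measured as sketched below: drive the reservoir with an i.i.d. uniform input, train an independent linear readout for each delay k to reconstruct u(t − k), and sum the recovered variance fractions over all delays. The maximum delay considered, the washout length and the pseudo-inverse readout are illustrative choices rather than the authors' exact procedure.

import numpy as np

def memory_capacity(states, u, max_delay=None):
    # states: (timesteps, n_nodes) reservoir states produced by input u (i.i.d. in [0, 1])
    n_nodes = states.shape[1]
    max_delay = max_delay or 2 * n_nodes
    washout = max(100, max_delay)                 # discard initial transients
    X = states[washout:]
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:len(u) - k]        # u(t - k) aligned with the rows of X
        w = np.linalg.pinv(X) @ target            # linear readout y_k(t) for this delay
        y = X @ w
        cov = np.cov(y, target)[0, 1]
        mc += cov ** 2 / (np.var(y, ddof=1) * np.var(target, ddof=1))
    return mc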
2018-10-16T16:59:37.000Z
2018-10-16T00:00:00.000
{ "year": 2018, "sha1": "66b2215d864b71532e5b50efdcc405fc0eedfb4b", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc6598063?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "66b2215d864b71532e5b50efdcc405fc0eedfb4b", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
485776
pes2o/s2orc
v3-fos-license
Expression of Partitioning Defective 3 (Par-3) for Predicting Extrahepatic Metastasis and Survival with Hepatocellular Carcinoma Partitioning defective 3 (Par-3), a crucial component of partitioning-defective complex proteins, controls cell polarity and contributes to cell migration and cancer cell epithelial-to-mesenchymal transition. However, the clinical relevance of Par-3 in tumor progression and metastasis has not been well elucidated. In this study, we investigated the impact and association of Par-3 expression and clinical outcomes with hepatocellular carcinoma (HCC). We first confirmed that Par-3 was abundantly expressed in HCC cell lines by Western blot analysis. We used immunohistochemistry to analyze the association of Par-3 expression and clinicopathological characteristics in primary and subsequent metastatic tumors of patients with HCC. Par-3 was overexpressed in 47 of 111 (42.3%) primary tumors. Increased expression of Par-3 in primary tumors predicted an increased five-year cumulative incidence of extrahepatic metastasis. In addition, multivariate analysis revealed that Par-3 overexpression was an independent risk factor of extrahepatic metastasis. Increased Par-3 expression in primary tumors was associated with poor five-year overall survival rates and was an independent prognostic factor on Cox regression analysis. In conclusion, we show for the first time that increased Par-3 expression is associated with distant metastasis and poor survival rates in patients with HCC. Par-3 may be a novel prognostic biomarker and therapeutic target for HCC. revealed that Par-3 overexpression was an independent risk factor of extrahepatic metastasis. Increased Par-3 expression in primary tumors was associated with poor five-year overall survival rates and was an independent prognostic factor on Cox regression analysis. In conclusion, we show for the first time that increased Par-3 expression is associated with distant metastasis and poor survival rates in patients with HCC. Par-3 may be a novel prognostic biomarker and therapeutic target for HCC. Keywords: hepatocellular carcinoma; metastasis; Par-3; survival Introduction Hepatocellular carcinoma (HCC) is a serious malignancy and public health problem in endemic areas of hepatitis B or C virus infection, including Africa and Southeast Asia [1]. Despite the aggressive surgical and non-surgical approaches used to treat and improve the outcome of HCC [2], local recurrence and distant metastasis remain major causes of treatment failure [3,4]. Investigating accurate prognostic biomarkers for early detection and prediction of recurrence and metastasis is critical for developing novel therapeutic strategies to improve outcome and survival for HCC patients. Cell polarity is a fundamental property of all eukaryotic cells and is essential for the cell development of various organisms. Dysfunction of polarity leads to distinct diseases, including cancer progression [5]. The partitioning defective (Par) complex comprises several proteins, including Par-3, Par-6 and atypical protein kinase C (aPKC), which regulate cell polarity and migration by regulating protein-protein interaction with several GTP-bound regulators [6][7][8]. In mammalian epithelial cells, the Par complex localizes to the apical junction region and plays a critical role in establishing apical-basal polarity and tight junctions [9][10][11][12]. 
Thus, the dynamic balance and regulation of the polarity-related proteins containing Par complex members are extremely important to modulate cancer cell migration and epithelial-to-mesenchymal transition. Dissolution of cell-cell junctions with loss of Par-3 or Par-6 expression promotes cancer cell migration and invasion [8,13]. Conversely, amplification and increased expression of Par-6 and aPKC induced cell proliferation, more aggressive tumors and poor outcomes in breast cancer [14], ovarian cancer [15] and non-small-cell lung cancer [16]. Par-3 expression and regulation are considered largely involved in cancer cell migration, and a few studies have suggested defective expression or amplified PARD3 gene in prostate cancer cells [17], esophageal squamous cell carcinoma [18], neoplastic retinal pigment epithelial cells [19] and HCC [20]. Thus, Par-3 may play an important role in tumor development and cancer cell progression. However, the clinical significance of Par-3 expression in tumor metastasis and survival has never been elucidated. Therefore, in this study, we investigated Par-3 expression by immunohistochemistry in a cohort of patients with HCC. We evaluate the association of Par-3 expression with clinicopathological characteristics and survival rates. Par-3 overexpression was significantly associated with extrahepatic metastasis in HCC, and increased Par-3 expression was associated with worse overall survival with HCC. Our results suggest Par-3 as a potential biomarker and therapeutic target of HCC. Increased Par-3 Protein Expression in Primary and Metastatic HCC Tissues and Association with HCC Extrahepatic Metastasis We examined the expression of Par-3 in paraffin-embedded primary HCC tumors with surrounding non-cancerous parenchyma from 111 patients and 31 matched extrahepatic metastatic tumors by immunohistochemical staining. Negative control slides were negatively unstained with Par-3 ( Figure 2A). The expression of Par-3 was increased in 47 (42.3%) of 111 primary HCC tumors and not in non-cancerous cells adjacent to tumors ( Figure 2B and Table 1). Moreover, Par-3 was overexpressed in 31 matched metastatic HCC specimens, as illustrated in brain ( Figure 2C) and rectum ( Figure 2D). Expression of Par-3 was not significantly related to most clinicopathological characteristics, but was associated with tumor multiplicity (p = 0.002), Alpha-fetoprotein level (p = 0.046) and subsequent extrahepatic metastasis (p = 0.037) ( Table 1) Multivariate analysis confirmed Par-3 expression as a predictor of distant HCC metastasis (p = 0.037) ( Table 2). The cumulative rate of developing extrahepatic metastasis within five years with primary HCC was significantly higher with positive rather than negative Par-3 expression (40.2% ± 8.0% vs. 23.4% ± 6.0%, p = 0.047) ( Figure 3). Furthermore, the expression of Par-3 was significantly increased in metastatic HCC samples than in their primary tumors (21 with increased Q-score > 2, and 10 with no difference in Q-score, p < 0.001). These observations suggest a strong association of Par-3 expression and extrahepatic metastasis of HCC. vs. 23.4% ± 6.0%, p = 0.047). Overexpression of Par-3 and HCC Patient Survival After a mean follow-up of 52.0 ± 28.4 months after surgery, 27 patients (24.3%) remained free of HCC, 54 patients (48.6%) had died because of their disease and 30 patients (27.0%) were still alive with disease recurrence and/or distant metastasis. 
Survival analysis revealed a significantly better overall five-year survival with negative rather than positive Par-3 expression in primary HCC tumors (59.6% ± 6.3% vs. 41.7% ± 7.3%, p = 0.047) (Figure 4). The increased expression of Par-3 in primary tumors had no significant effect on progression-free survival in these patients (data not shown). In addition, Cox proportional-hazard regression models revealed that Par-3 overexpression was significantly associated with poor overall survival (hazard ratio 2.049, 95% confidence interval 1.082-3.884, p = 0.028), but not with progression-free survival (Table 3). Thus, overexpression of Par-3 in primary tumors is an important predictor of poor overall survival with HCC.

Discussion
Cell polarity is a basic and fundamental property regulating multiple cellular functions, including cancer cell migration and epithelial-to-mesenchymal transition. The Par-3/Par-6/aPKC complex is an essential regulator controlling cell polarity via interaction with various proteins. Human Par-3 (PARD3) is a single-copy gene consisting of 26 exons and localized on chromosome 10 [20]. At least five PARD3 variants, derived from alternative splicing and polyadenylation, have been identified in a human liver cDNA library [20]. Furthermore, multiple-splice PARD3 gene variants [21][22][23] and variants with three main molecular weights (180, 150 and 100 kDa) have been reported [18,24], although their specific roles remain unclear. However, Par-3 expression in some tumors has been controversial. For instance, Par-3 protein or RNA expression was downregulated in esophageal squamous cell carcinoma and HCC [18,20], but the PARD3 gene was found to be mutationally inactivated in prostate cancer cells [17]. In contrast, gene amplification of the aPKC-binding Par-3 protein was reported in transformed neoplastic retinal pigment epithelial cells [19]. Par-3 was reported to localize to and regulate epithelial tight junction assembly, which was promoted by epidermal growth factor receptor (EGFR) [24] and TGF-β [25] signaling. Moreover, overexpression of Par-3 suppresses contact-mediated inhibition of cell migration [26]. Thus, Par-3 may be a "double-edged sword" in regulating cell migration and epithelial-mesenchymal transition (EMT), depending on the cell type or tissue. Also, the diverse role of Par-3 may be attributed to the distinct variants with different molecular weights, which may explain why Par-3 protein expression in the most migratory and poorly differentiated HCC cell line, SK-Hep-1, differed from that of the other cell lines (Figure 1). Nevertheless, little is known about whether and how Par-3 variants regulate cell polarity and contribute to cancer cell migration or EMT. Results from this study indicate that increased Par-3 expression participates in promoting distant metastasis and reducing the survival rate of HCC patients. Elevated Par-3 expression in primary tumors is associated with risk of extrahepatic metastasis and poor overall survival with HCC. Thus, Par-3 alone or in combination with 14-3-3 proteins may be a biological marker identifying HCC patients at high risk of metastasis and poor survival. Therapeutic strategies or drugs aimed at Par-3 or 14-3-3 proteins might be developed for these patients.
Patients and Clinical Specimens
We retrospectively enrolled (from January 1999 to December 2001) and obtained tissue from 111 HCC patients who underwent surgery for tumor resection at Taichung Veterans General Hospital. The mean follow-up was 52.0 ± 28.4 months. In total, tissue from 31 patients (27.9%) showed metastasis 5 to 88 months after the surgery for primary HCC. The metastasis sites included bone, abdominal and chest wall, brain, mesentery, peritoneum, adrenal gland and retroperitoneum. The paraffin-embedded surgical specimens, comprising the primary tumors with surrounding non-cancerous liver parenchyma and the metastatic tumors, underwent pathological examination. We examined pathological features, including Barcelona-Clinic Liver Cancer (BCLC) staging [31], and clinical outcomes. This study was approved by the Institutional Review Board of Taichung Veterans General Hospital.

Statistical Analysis
One-way ANOVA was used to analyze differences in clinicopathological variables. Multivariate logistic regression was used to determine factors predicting extrahepatic metastasis. The Wilcoxon signed-rank test was used to analyze the differences in Par-3 staining density between primary tumors and matched metastatic tissues. Kaplan-Meier curves were plotted, and the log-rank test was used to analyze time-related probabilities of metastasis, overall survival and progression-free survival. Cox proportional hazards regression models were used to evaluate the impact of prognostic factors on survival. p < 0.05 was considered statistically significant and p = 0.05 to 0.10 marginally significant.

Conclusions
In this study, we show for the first time that expression of Par-3 is increased and significantly associated with poor prognostic outcomes in HCC patients. Further investigation of the molecular mechanisms by which Par-3 regulates HCC tumors will benefit the diagnosis and treatment of HCC. Thus, Par-3 alone or combined with 14-3-3ε or related interacting components may serve as potential markers or therapeutic targets for HCC.
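The statistical analysis described above combines Kaplan-Meier estimation with log-rank testing and Cox proportional hazards regression. As a hedged illustration only, and not the authors' actual code, the same workflow can be sketched in Python with the lifelines package; the data frame below is hypothetical, with assumed columns time_months, death and par3_positive.

```python
# Minimal sketch of Kaplan-Meier + log-rank + Cox analysis (hypothetical data).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time to event (months), event indicator, Par-3 status.
df = pd.DataFrame({
    "time_months":   [12, 34, 60, 8, 45, 60, 22, 60],
    "death":         [1,  1,  0,  1, 1,  0,  1,  0],   # 1 = died, 0 = censored
    "par3_positive": [1,  1,  0,  1, 0,  0,  1,  0],
})

# Kaplan-Meier curve for Par-3 positive cases and a log-rank comparison of groups.
pos, neg = df[df.par3_positive == 1], df[df.par3_positive == 0]
km = KaplanMeierFitter()
km.fit(pos["time_months"], pos["death"], label="Par-3 positive")
print(km.survival_function_)
result = logrank_test(pos["time_months"], neg["time_months"],
                      event_observed_A=pos["death"], event_observed_B=neg["death"])
print("log-rank p =", result.p_value)

# Cox proportional hazards model: hazard ratio for Par-3 positivity.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="death")
cph.print_summary()  # reports exp(coef) as the hazard ratio with its 95% CI
```

In a real analysis, the Cox model would also include the other clinicopathological covariates examined in the study alongside Par-3 status.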
2014-10-01T00:00:00.000Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "e058c8cbd9d9100ff32f0c5c49f8bc6713778b28", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/14/1/1684/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e058c8cbd9d9100ff32f0c5c49f8bc6713778b28", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
5342721
pes2o/s2orc
v3-fos-license
Recurrent Syncope Associated with Lung Cancer

Syncope is an important problem in clinical practice with many possible causes that might be misdiagnosed. We present an unusual case of syncope in a patient with a normal chest X-ray. Exercise ECG and coronary angiography results confirmed the existence of serious coronary heart disease. The patient was treated with coronary stent implantation. However, syncope occurred again, and the elevated tumor markers cytokeratin-19-fragment and neuron-specific enolase revealed a bronchogenic carcinoma, which was confirmed by enhanced CT examination. Treatment of the carcinoma by chemotherapy was indeed sufficient for prompt elimination of the syncope symptoms.

Introduction
Syncope is an important problem in clinical practice, with a frequency between 15% and 39%; however, even with advanced technology, in nearly one-third of reported cases no certain cause of syncope is found. The causes of syncope include cardiac, neurologic-cerebrovascular, vascular, psychogenic and metabolic-miscellaneous, but some uncommon causes should be kept in mind to avoid misdiagnosis [1]. We present an unusual case of syncope, which was firstly attributed to coronary heart disease and treated with coronary stent implantation. The chest X-ray of this patient was normal; however, the causative factor, bronchogenic carcinoma, was an unanticipated diagnosis reached with the hint of the elevated tumor markers cytokeratin-19-fragment (CYFRA21-1) and neuron-specific enolase (NSE).

Case Report
A 64-year-old man, a 30-year-long cigarette smoker (30 cigarettes per day), was admitted with syncope twice in one day to our hospital emergency department on Oct 17, 2013. The patient had a two-year history of high blood pressure without any medical treatment. He had no history of diabetes mellitus. In the past two months, he had sometimes felt chest discomfort. On the morning of Oct 14, 2013, he suddenly lost consciousness with incontinence while walking in the park. He felt precordial discomfort before losing consciousness. After regaining consciousness several minutes later, he felt dyspnoeic and sweaty. In the afternoon, syncope occurred again while he was watching TV. His electrocardiogram showed a slight sinus bradycardia of 47 bpm. A 24-hour Holter indicated an average heart rate of 66 bpm, a slowest heart rate of 47 bpm, a fastest heart rate of 129 bpm, and a longest R-R interval of 1.6 seconds. Cranial CT produced no unusual findings (Figure 1(a)). Moderate ST-segment depression was determined in leads II and aVF on treadmill exercise ECG. Ultrasound cardiography (UCG) indicated normal function with a left ventricular ejection fraction of 69.4%. No unusual finding was seen on chest X-ray. Blood myocardial enzyme, renal function, electrolyte, and D-dimer values were all within normal ranges. Five days later, the patient underwent coronary angiography examination. Coronary angiography revealed 80% stenosis of the proximal segment and 85% stenosis of the distal segment of the left anterior descending (LAD) coronary artery, 90% stenosis of the middle and distal segments of the left circumflex (LCX) coronary artery, and 50% stenosis of the proximal segment and 80% stenosis of the middle segment of the right coronary artery (RCA). The patient refused to have coronary artery bypass grafting surgery, so we decided to perform percutaneous coronary intervention (PCI). After balloon predilatation, a 3.0 × 24 mm EXCEL rapamycin-eluting stent (Jiwei, Shandong, China) was implanted in the RCA.
A 2.5 × 14 mm EXCEL rapamycin-eluting stent and a 3.0 × 21 mm Partner sirolimus-eluting stent (Raisedragon, Peking, China) were implanted in the distal and proximal segments of the LAD, respectively. Further PCI of the LCX was scheduled for one week later. Unexpectedly, syncope occurred again. Two days later, in the afternoon while he was sitting, symptoms of dyspnea and palpitations accompanied by hypotension occurred. The electrocardiogram monitor indicated serious sinus bradycardia with a heart rate of nearly 30 bpm. Blood pressure was below 80/40 mmHg. The symptoms disappeared after atropine and dopamine were infused. Considering that the patient was a 64-year-old man with a 30-year cigarette smoking history, we performed a serum tumor marker screen. Unexpectedly, the serum tumor marker results indicated that carbohydrate antigen 19-9, carbohydrate antigen 72-4, and alpha-fetoprotein were all within normal ranges, whereas cytokeratin-19-fragment (CYFRA21-1) and neuron-specific enolase (NSE) values were slightly elevated (CYFRA21-1 3.75 ng/mL, normal <3.3 ng/mL; NSE 16.62 ng/mL, normal <16.3 ng/mL). Examination with enhanced chest CT led to suspicion of central lung cancer (Figure 1(b)). PET-CT confirmed a central-type lung cancer (4.2 × 3.7 × 2.6 cm) of the left lung with multiple lymph node metastases and tumor infiltration of the adjacent thoracic aorta and the left pulmonary artery and vein. No brain metastasis or heart metastasis was found on PET-CT examination. Immunohistochemical examination of the specimens collected with bronchoscopy revealed the following: CKpan(+), CKL(+), Syn(+), CgA(+), CD50(+), CD30(−), CK20(−), CK7(−), CK5/6(−), P63(−), and Ki-67(+). The diagnosis of small cell bronchogenic carcinoma was made; this tumor was clinically staged as T4N2M0, stage IIIB. Chemotherapy was promptly initiated with a combination of carboplatin and topotecan. Shortly after the first chemotherapeutic infusion, the patient reported feeling much better. At follow-up a year later, an enhanced CT scan showed that the tumour volume had decreased, and the patient did not experience syncope anymore.

Discussion
Syncope is the reason for one to three percent of visits to emergency departments and admissions to hospital, affecting about three to six out of every thousand people each year. The risk of a bad outcome, however, depends very much on the underlying cause [2]. However, some uncommon causes should be kept in mind to avoid misdiagnosis. Accounting for 10% to 20% of cases of syncope, a cardiac cause is the main concern in patients presenting with syncope, as cardiac syncope predicts an increased risk of death and may herald sudden cardiac death [2]. It often occurs suddenly without any warning signs, in which case it is called malignant syncope [3]. Unlike what occurs in neurally mediated syncope, the post-recovery period is not usually marked by lingering malaise. Cardiac syncope is often due to structural heart disease with cardiac obstruction, ventricular tachycardia, or bradyarrhythmias. The interest of the present clinical case, especially for the cardiologic community, lies in the fact that the patient initially presented with a medical history typical of cardiac syncope. The patient had a two-month history of chest discomfort before syncope occurred. At the same time, the results of the treadmill exercise ECG were suggestive of coronary artery disease, and coronary angiography examination demonstrated multiple coronary lesions. Therefore, coronary heart disease was initially the main consideration in the diagnosis of the syncope.
However, after coronary stent implantation in the RCA and LAD coronary arteries, syncope occurred again. Lung cancer is a disease of global geographic reach that is nondiscriminatory in attacking all people, regardless of age, sex, ethnicity, or socioeconomic background [4]. This disease is often suspected on the basis of presenting symptoms and signs. The most frequent symptoms are cough, wheeze, dyspnoea, and haemoptysis. However, the presentation of lung cancer varies considerably. Most people have no signs at first; the disease is often not noticed and diagnosed until the end stage, when it is too late to treat. For this reason, adults aged 55 to 80 years who have a 30 pack-year smoking history and currently smoke or have quit within the past 15 years are recommended to undergo screening with low-dose computed tomography [5]. In our patient, the chest X-ray was normal and there was no sign of respiratory disease; therefore, we did not at first consider lung disease as a possible cause of the syncope. Very interestingly, the tumor markers CYFRA21-1 and NSE were unexpectedly slightly elevated, which gave us the hint that a tumor could not be excluded. Serum tumor markers are biological indicators detected in the serum or plasma of patients with suspected tumors; although their sensitivity and specificity are limited [6], they possess the advantages of easy detection, noninvasive sampling, and cost-effectiveness [6,7]. For years, the oncologic literature has suggested that serum CYFRA21-1 and NSE can be potential serologic biomarkers in the evaluation of lung cancer, especially small cell lung cancer (SCLC) [8]. Although the chest X-ray was normal, we performed further imaging examinations with enhanced chest CT and PET-CT, which confirmed the suspected diagnosis of lung cancer. Further immunohistochemical examination of the specimens collected with bronchoscopy indicated the diagnosis of SCLC. In the past decades, many types of cancer-associated syncope have been reported [9][10][11], especially when a tumor mass invades the baroreceptor within the carotid sinus or when it disrupts the afferent nerve fibers of the glossopharyngeal nerve. However, only a few cases of syncope induced by lung cancer have been reported [12]. To the best of our knowledge, this is the first report of lung cancer-associated syncope diagnosed with the hint of elevated tumor markers. On PET-CT examination, no evidence of brain metastasis or heart metastasis was found. Therefore, we think that paraneoplastic symptoms might be the cause of this patient's syncope, although we did not test for anti-Hu, anti-Yo, or other antibodies. Paraneoplastic syndromes are rare disorders that are triggered by an altered immune system response to a neoplasm, and they might be the first or most prominent manifestation of a cancer, specifically small cell carcinomas of the lung [13]. In a broad sense, these syndromes are collections of symptoms that result from substances produced by the tumor, and they occur remotely from the tumor itself. The symptoms may be cardiovascular, endocrine, neuromuscular or musculoskeletal, cutaneous, hematologic, gastrointestinal, renal, or miscellaneous in nature. The relationship between small cell lung cancer and the neurocardiogenic syncope in our case is suggested by the close temporal sequence and the almost immediate improvement after chemotherapy, which is similar to the findings in the case previously reported [14].
In view of such experience, we believe that an atypical medical history of newly developed syncope in an elderly patient should alert the cardiologist to the possibility of lung cancer. Syncope is an important problem in clinical practice. Obtaining a detailed history and combining all clinical and laboratory findings is very important, particularly in elderly patients who exhibit multiple risk factors for several diseases. Although syncope occurs commonly, lung cancer presenting with syncope, especially in the context of simultaneously occurring serious coronary heart disease, is extremely rare. Such misleading manifestations require a high index of suspicion on behalf of the physician, so that an underlying malignancy is not missed and a final diagnosis combining all clinical and laboratory findings is reached. In turn, in rare cases common tumor markers such as CYFRA21-1 and NSE can be used as a useful tool driving further management. The association of abnormal coronary motility, neurocardiogenic syndrome, and lung carcinoma seems to be new in the literature and may suggest a wider spectrum of the paraneoplastic neuropathies. Possibly, some peculiar clinical features, and certainly a unique treatment (chemotherapy), seem to characterize and cease the cancer-associated neurocardiogenic syncope.
2018-04-03T04:33:33.547Z
2015-05-12T00:00:00.000
{ "year": 2015, "sha1": "98c83a463c89b44ec1f93249c2fc74690f258874", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crim/2015/309784.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "81abe059052c59ed33e702550ba9e4edc1cda866", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10826719
pes2o/s2orc
v3-fos-license
A TWO-YEAR REVIEW OF UTERINE RUPTURE IN A REGIONAL HOSPITAL

Background: Uterine rupture causes high maternal and neonatal mortality in many rural settings in the world. Further studies might provide specific interventions to reduce the high prevalence.
Objective: To determine the frequency, causes, clinical presentation, management and outcome of uterine rupture.
Setting: Department of Obstetrics and Gynaecology, Upper West Regional Hospital, Wa, Ghana.
Design: Retrospective descriptive study.
Method: A descriptive study of cases of ruptured uterus in the Regional Hospital from 1st January 2007 to 31st December 2008 was done. A structured questionnaire was developed to collate data from various registers for analysis.
Results: Total deliveries were 5085, with 4172 (82%) spontaneous vaginal deliveries and 911 (17.9%) caesarean sections. Uterine rupture occurred in 41 cases, for a ratio of 1:124. Grand multiparas with five or more deliveries represented 41.5%, while those with two prior successful deliveries represented 31.7%. The mean parity was 3.8 (SD 2.3). Regarding antenatal care, 85.4% had at least four visits. Severe anaemia 28 (68.3%) and abdominal tenderness 27 (65.8%) were the most frequent clinical presentations, while the use of local herbal concoction with suspected uterotonic activity 24 (58.5%), fetopelvic disproportion 4 (9.8%) and malpresentation 5 (12.1%) were the most significant causes. Major complications were: neonatal deaths 34 (82.9%), maternal mortality 4 (9.8%) and wound infections 15 (36.6%). Subtotal hysterectomy 10 (24.4%) and total hysterectomy 18 (43.8%) were preferred to uterine repair 12 (23.3%), and 87.8% required at least two units of blood transfusion.
Conclusion: Skilled attendance with accessible emergency obstetric care and focused antenatal care are key elements for the prevention and management of uterine rupture.

INTRODUCTION
Since time immemorial, indigenous people in Africa and other parts of the world have relied on prayers, rituals and sacrifices to gods to control obstetric accidents and maternal mortality. Attempts to imitate the success story of western countries in lowering maternal mortality drastically have failed in Africa because the poor majority still do not have access to basic life-saving techniques, hence the persistently unacceptably high maternal mortality ratio. In 2005, an estimated 536,000 women died from causes related to childbirth in the world, and 95% were from Africa and Asia.1 Ghana, similar to many sub-Saharan countries, is still burdened with a maternal mortality ratio between 214 and 820 per 100,000 live births,2 mostly from preventable causes. No single monument has been erected on our landscape in memory of these numerous victims, yet almost all adult Ghanaians have had relatives or friends who were direct victims. Uterine rupture stands as a single obstetric accident that exposes the flaws and inequities of health systems and society at large due to the degree of neglect that it entails. Again, it has the unique potential to impact negatively on Millennium Development Goals 4 and 5. Uterine rupture is a potentially catastrophic event during childbirth in which the integrity of the myometrial wall is breached.3 In a complete rupture there is full-thickness separation of the uterine wall with the expulsion of the foetus and/or placenta into the abdominal cavity, whereas the overlying serosa or peritoneum is spared in an incomplete rupture.3,4
This obstetric accident is closely associated with maternal and/or foetal mortality and morbidity such as bladder rupture, vesicovaginal and rectovaginal fistula, foot drop and psychological trauma. Trial of labour on a scarred uterus and the use of uterotonics during labour are the most frequent causes in the developed world, while neglected and obstructed labour5 stand as the principal factors in developing countries, where multiparity and the use of local concoctions with suspected uterotonic activity serve as aggravating factors. Despite the increasing public concern and support, the most vulnerable, the poor illiterate women from rural communities and their babies, hardly get the needed attention. Lack of financial and human resources in remote areas paves the way for unskilled delivery, obstructed labour and subsequently uterine rupture. Those who manage to survive the acute haemorrhage might not survive the infection-related complications. The occurrence of obstetric fistula and psychological trauma marks the end of a happy womanhood and the very beginning of seclusion and desolation in a victim's life. Considering the suffering and agony pregnant women go through, it is an indictment on society if proactive interventions are not put in place to make pregnancy and delivery safe. It is against this background that we conducted this study to review the cases of ruptured uterus in the Upper West Regional Hospital from 1st January 2007 to 31st December 2008, with the objective of determining its frequency, causes, clinical presentation and outcome to improve prevention and management.

METHOD
The Upper West Regional Hospital is located in the centre of the capital, Wa. It serves as a Regional and District Hospital and also as a referral centre for the Sawla-Tuna-Kalba and Bole Districts in the Northern Region. It has a total bed capacity of 189, with 37 beds for obstetrics and gynaecology. A retrospective study of all cases of ruptured uterus in the Regional Hospital from January 1st 2007 to December 31st 2008 was done, in which a questionnaire was developed to collect data from the delivery registry, patients' folders, theatre registry and admission/discharge registry. The relevant data collected included age, parity, district, clinical presentation, operation findings, maternal/foetal outcome, complications and management. The data were entered and analyzed with SPSS 11.5 for Windows.

RESULTS
In the specified period under study, a total of 5085 deliveries were conducted in the Regional Hospital, with 4174 spontaneous vaginal deliveries (82%) and 911 caesarean sections (17.9%). A total of 41 uterine ruptures were recorded, making an incidence of 1:124. The ages of clients ranged between 18 and 45 years, with a mean of 31.1 (SD 7.1) years. As many as 26 (63.4%) patients were between 30 and 45 years. Fourteen (41.5%) were grand multiparas (women with five or more previous deliveries), while 13 (31.7%) had had two prior successful deliveries. The mean parity was 3.8 (SD 2.3). On antenatal attendance, only 6 clients (14.6%) were non-attendants and therefore did not have an antenatal card. The remaining 35 clients (85.4%) had antenatal cards and visited a health facility at least four times during the pregnancy. On the other hand, 20 (80%) were not registered with the National Health Insurance Scheme (NHIS). A few clients had a previous caesarean section or a previous surgery on the uterus, 7 (17.1%). The majority, 34 (82.9%), had never had any surgery on the uterus. The main clinical features on presentation are shown in Table 1.
DISCUSSION
For a total of 5085 deliveries, 41 cases of uterine rupture, making an incidence of 1:124, is on the high side. Gardeil et al, in a study from Ireland, showed that the rate of unscarred uterine rupture during pregnancy was 1 per 30,764 deliveries (0.0033%).6 A meta-analysis of data from industrialized countries suggests that the modern rate of unscarred uterine rupture during pregnancy is 0.013% (1 in 7440). In developing countries, however, an incidence of 0.11% (1 of 920) has been recorded. Adanu et al report 1:425 from the Korle-Bu Teaching Hospital in Ghana.7 The difference between the frequencies in Wa and Korle-Bu simply illustrates the socio-economic dichotomy even within the same country. The frequency of uterine rupture increases with maternal age. As many as 63.4% of clients with uterine rupture were between 31 and 45 years of age. Similar work was presented by Shipp et al, who had 1.4% of uterine rupture in women older than 30 years as against 0.5% in younger women.8 Multiparity has long been associated with uterine rupture. A mean parity of 3.8 (SD 2.3), with 41% of cases being grand multiparas, is similar to the findings of Schrinsky and Benson, who found 32% of unscarred uterine ruptures in women with parity greater than four.9 However, Gardeil et al found only 2 (0.005%) women with uterine rupture among 39,529 multigravidas who had no previous uterine scar.6 That can be explained by the society in which they conducted their study, the socioeconomic status and the development of their health system. Only 16% of clients were non-attendants at the antenatal clinic. The remaining 85.4% had at least four antenatal visits as recommended by the World Health Organization, yet they faced one of the worst preventable obstetric complications. The need to adopt focused antenatal care and upgrade the quality of care at the clinics cannot be over-emphasised. The low patronage of the National Health Insurance Scheme (20%) raises concerns about financial barriers as a contributing factor to the delay in accessing health care. The free delivery package introduced by the Government of Ghana in 2008 attempts to address the issue of financial barriers, but the high cost of transportation and the poor state of our roads significantly reduce the effectiveness of this intervention in many remote areas. Unlike in developed countries, where foetal heart rate abnormalities are the first identified manifestations of uterine rupture,10 late signs like abdominal tenderness (65.8%), severe anaemia (68.3%) and vaginal bleeding (43.9%) are the most common ones here. Severe anaemia presenting in 68% of cases is consistent with the 60% reported by Cowan11 but in sharp contrast with the 25% from Shipp et al. The fact that Shipp's cases were trials of labour after caesarean section could explain the comparatively lower percentage of anaemia. As high as 87.8% of women with uterine rupture in our study required blood transfusion. That is probably due to the poor haemodynamic state in which our clients arrived and the high prevalence of anaemia in pregnancy. In a study by Kieser and Baskett, 44% of cases required blood transfusion,12 whereas Leung et al report a much smaller number of cases (29%) requiring blood transfusion.13 Hysterectomy was performed in 68.2% of cases (24.4% subtotal and 43.8% total), as against 23.3% uterine repair. Murta et al found no significant difference in terms of outcome between total and subtotal hysterectomy.14
Repair is an option for younger clients with a simple transverse rupture without signs of infection, but in multiparous clients arriving late with overt signs of infection, hysterectomy might yet be a better option. In the absence of a skilled person to perform hysterectomy, repair might be safer. Admassu from Debre Markos Hospital in Ethiopia reports 81% hysterectomy as against 19% repairs.15 In a South African study, 78% of the cases had hysterectomy,16 while Flamm et al report only 8% of cases requiring hysterectomy.17 A variant of local herbal concoction (called "kalguteem" or "MASUGE" in local parlance) suspected to have uterotonic activity was used by 24 clients (58.5%) to hasten labour. Oxytocin and misoprostol were each used on 2.4% of clients. Contrary to references from the western world, where oxytocin and prostaglandins are seriously implicated in the genesis of uterine rupture,18 the situation is quite different here. This herbal concoction with suspected uterotonic activity has been used by the indigenous people in the three Northern Regions of Ghana for several centuries to hasten labour. Precipitate labour induced by this concoction is well appreciated by the locals and is therefore a great incentive to continue its use. However, in the midst of a degree of feto-pelvic disproportion, foetal distress and uterine rupture may ensue. Perinatal fatality of 82% is slightly higher than the reports by Adanu (74.3%) and Elkady (73.1%)19 and lower than that of Sameera (91.2%) from Pakistan.20 Maternal case fatality of 9.8% is consistent with the 5-10% reported by Mokgokong from South Africa16 and contrasts with the 1% by Adanu from the Korle-Bu Teaching Hospital in Ghana. On the other hand, Kwame Aryee et al from the same Hospital looked at peripartum hysterectomy, where ruptured uterus was cited as the indication for hysterectomy in 89 (48.9%) cases, out of which 6 (27.3%) died, making a case fatality of 6.7%.21 Golan and Elkady reported 15% and 21.4% respectively in their reviews.19,22 Bladder injury (7%) is much lower than the 18.5% reported by Gessessew from Ethiopia.23 In a comparative analysis of the statistics for the years 2007 and 2008, though total deliveries increased by 22.3%, caesarean sections increased as well, by 28.3%. Uterine rupture incidence dropped from 1:91 to 1:174, whereas uterine rupture-related maternal mortality dropped from 16% to zero and perinatal mortality decreased by 45.4%. Good antenatal care, access to skilled delivery and emergency obstetric care are key elements to reduce uterine rupture and its associated complications. The antenatal care coverage in the country is quite acceptable (96% in the Ghana Maternal Health Survey 2007), though the quality of care offered will definitely need to be upgraded. Poor knowledge of pregnant women about danger signs, poor road networks and inadequate skilled attendants in the health facilities are some of the contributing factors to the delay. Emergency community transport systems, equitable distribution of health staff and the creation of maternity waiting homes could help address the problem. Availability of blood banks with a sustainable blood supply is indispensable. The use of herbal concoctions to hasten labour should be looked at carefully. Perhaps scientific research to determine the properties of the active ingredient will help put it to proper use in obstetrics.

Table 1: Clinical Presentation of Patients with Uterine Rupture
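The incidence and case-fatality figures quoted in this review are simple ratios of case counts to totals. As an illustrative check only, using the counts reported above and nothing else, the headline numbers can be reproduced in a few lines of Python:

```python
# Reproducing the headline ratios reported in this review (illustrative check only).
total_deliveries = 5085
ruptures = 41

# Incidence expressed as "1 in N" deliveries (reported as 1:124).
print(f"Uterine rupture incidence: 1 in {total_deliveries / ruptures:.0f}")

# Proportions quoted in the results (reported as 17.9%, 82.9% and 9.8%).
print(f"Caesarean section rate:  {100 * 911 / total_deliveries:.1f}%")
print(f"Perinatal deaths:        {100 * 34 / ruptures:.1f}%")
print(f"Maternal case fatality:  {100 * 4 / ruptures:.1f}%")
```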
2018-04-03T00:17:23.546Z
2011-08-22T00:00:00.000
{ "year": 2010, "sha1": "ad176de71e4f2a96cfde5d330683a6fe962a39d3", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/gmj/article/download/68892/56958", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ad176de71e4f2a96cfde5d330683a6fe962a39d3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
25535172
pes2o/s2orc
v3-fos-license
Lack of correlation between microbial penetration method and electro chemical technique for assessment of leakage through the root canal fillings

Aims: The purpose of this study was to compare the microbial penetration method and the electrochemical technique for evaluation of the apical seal.
Materials and Methods: A total of 28 teeth were prepared using the step-back technique to ISO size 40 master apical files. The specimens were randomly divided into an experimental group, two positive and two negative controls. Root canals in the experimental and negative control groups were filled with gutta-percha (GP) and sealer, using the lateral condensation technique. In the positive control group, canals were filled with GP without any sealer. The external surface of each tooth was coated with two layers of nail varnish, except for the access opening and the apical foramen. In the negative control group, the teeth were completely covered with nail varnish. The apical seal was recorded with two methods, each successively used on the same teeth: an electrochemical method and a bacterial penetration method.
Statistical Analysis Used: The correlation of the electrochemical readings with the results obtained from the bacterial microleakage test was evaluated by Pearson's correlation coefficient.
Results: The correlation coefficient of the measurements obtained from the two evaluation methods was 0.23 (r = 0.23), so the correlation was not statistically significant (P = 0.275).
Conclusions: This study shows that evaluation by several different methods is necessary before drawing conclusions about marginal leakage.

Introduction
Cleansing and shaping of the root canal system, followed by adequate obturation, are the major objectives of endodontic treatment [1]. Root canal obturation provides a seal that prevents microleakage and subsequent reinfection of the canal and the periradicular tissues [2]. For this reason, different endodontic materials and obturation techniques have been applied to decrease microleakage and improve the seal [3,4]. Along with these improvements, various test methods have been described to assess the sealing ability of these materials and techniques [5]. The in vitro methods used can generally be divided into two categories: methods that use a tracer agent penetrating the filled canal and those performed without a tracer. Tracers such as dyes, radioisotopes, bacteria and their products (i.e. endotoxins) are commonly used in microleakage studies. Other methods include the fluid filtration technique, the electrochemical technique, etc. Assessment of bacterial leakage might be more biologically relevant than that of dye or radioisotope penetration, but this method has two limitations: depending on the bacterial species used, conclusions might vary, and maintaining aseptic conditions throughout all steps of the experiment can be difficult [6,7]. Jacobson and von Fraunhofer described the electrochemical microleakage method for the first time [8]. In this technique the tooth is immersed in an ionic solution (i.e. NaCl solution). A stainless steel wire (working electrode) is placed into the coronal access cavity, which is filled with the same ionic solution, and another stainless steel wire (counter electrode) is submerged into the solution. The two electrodes are attached to a constant power supply with a multimeter. As leakage occurs, the solution penetrates through the apical seal. It is assumed that the magnitude of the current detected will indicate the degree of penetration [9].
The aim of this study was to compare the results of bacterial and electrochemical microleakage tests and to evaluate any correlation between these tests.

Preparation of samples
A total of 28 freshly extracted human maxillary and mandibular anterior teeth with a single, straight root canal were selected for this study. The exclusion criteria included existing cracks, large carious lesions, open apices and resorptive defects. After removal of bony debris, calculus and soft tissues on the root surface, the teeth were stored in saline solution. All preparation and obturation procedures were carried out by a trained operator as described below. The coronal fragments of all teeth were removed with diamond disks, leaving roots 15 mm in length [6]. A diamond bur was used to gain straight-line access to the root canal. Following pulp extirpation, a size 15 K-file (Dentsply, Maillefer, Ballaigues) was inserted into the canal until it was seen at the apical foramen. The working length was determined by subtracting 1 mm from this length. A size 15 file was passed through the apical foramen before and after the root canal preparation to maintain apical patency. The root canals were instrumented using the step-back technique to an ISO size 40 master apical file within 1 mm of the apex. A volume of 2 ml of saline solution was used for irrigation between each file size. After the completion of the preparation procedure, the teeth were randomly divided into one experimental group consisting of 24 samples, two positive and two negative control groups. The canals were dried using paper points (Sina Dent, Iran). The root canals were filled using the lateral compaction technique as described below. The ZOE sealer (Gholchai, Iran) was mixed according to the manufacturer's instructions and introduced into the canal using a size 30 file with counter-clockwise rotation. A size 40 master gutta-percha (GP) cone (Diadent, Korea), lightly coated with sealer, was then placed to the full working length. Lateral compaction was achieved using size 25 accessory GP cones and a size B finger spreader (Dentsply Maillefer) that initially reached to within 1 mm of the working length. The two positive control teeth were obturated in the same manner as the experimental teeth but without any sealer. The two negative control teeth were obturated with GP cones and ZOE sealer. After the obturation procedures were completed, all roots were stored at 100% humidity for 24 h in order to allow complete setting of the sealer cement. Before the evaluation of the microleakage, the excess coronal GP was removed with a Peeso reamer ISO size 3 (approximately 5 mm of GP remained in each root canal). The apical sealing ability of the obturated canals was then assessed using the electrochemical and bacterial microleakage tests.

Electro-chemical micro leakage test
The exterior surfaces of the teeth were completely covered with two coats of nail varnish except for the access opening and the apical foramen. The root surfaces in the negative controls were entirely covered with two coats of nail varnish. The roots were mounted with silicone through the bottom of plastic cylinders, leaving the access opening open inside the cylinder. The cylinders were filled with saline as electrolyte. The cylinders with teeth were mounted in Petri dishes filled with saline electrolyte. Only 2 mm of the root endings were immersed in the solution.
For measurement, a #70 K-file (Dentsply, Maillefer, Ballaigues) was placed in each upper chamber and a stainless steel wire was inserted into the Petri dish. The electrode in each upper chamber was separately connected to the electrode in the lower chamber through an electric circuit with an 8-V DC power supply (Z-IC, 8V1A, Siehe ECA). The electrical current in this circuit was measured in μA with a multimeter (Case, Japan) for each root.

Bacterial microleakage test
The teeth were inserted into plastic Eppendorf test tubes with screw caps and then fixed through them. A volume of 2 ml of sterile culture medium (Trypticase Soy Broth (TSB); Merck, Darmstadt, Germany) was added to sterile glass test tubes. The Eppendorf-root assembly was mounted inside the test tube with the root tip contacting the culture medium. A standard Enterococcus faecalis strain was cultured on Trypticase Soy Agar, and a microbial suspension with 0.5 McFarland turbidity was then prepared in TSB. Using a sterile micropipette, the microbial suspension was placed into the Eppendorf test tubes in contact with the coronal access opening of the filled roots. The test assemblies were incubated at 37°C. The turbidity of the culture medium in the lower chamber was monitored daily for 20 days. When turbidity was observed, microbial samples were cultured again to confirm the presence of E. faecalis. The data were analyzed using SPSS software (SPSS ver. 11, Statistical Package for Social Science, IBM Corporation, NY, USA). Statistical significance was set at the 0.05 level. The correlation of the electrochemical readings with the results obtained from the bacterial microleakage test was evaluated by Pearson's correlation coefficient.

Electro chemical method
The teeth in the negative control group showed no flow of electrical current. In the positive control group the maximum current flow was recorded (90 μA). The results in the experimental group ranged from 10 μA to 60 μA.

Bacterial infiltration method
As expected, the negative controls showed no significant infiltration during the experimental period. Samples in the positive control group showed infiltration after 1 day of contamination. In the experimental group, infiltration occurred between day 4 and day 17. The results of leakage obtained by the electrochemical method and the bacterial infiltration method were compared using Pearson's correlation coefficient [Figure 1]. Since the correlation coefficient of the measurements obtained from the two aforementioned methods was 0.23 (r = 0.23), the correlation was not statistically significant (P = 0.275).

Discussion
[12][13][14] The previous studies that measured and compared different methods for the evaluation of leakage mostly failed to show any correlation [10,14]. Barthel et al. [15] applied the dye leakage test after the bacterial test on the same teeth and found no correlation between these two tests. Pommel et al. [14] also compared fluid filtration, electrochemical and dye leakage tests for assessing the sealing ability of two obturation techniques, using the same teeth. They found no correlation among the tests. They noted that this result was not surprising because the leakage phenomena depend on different factors. In the study by Modaresi et al.
[16], the electrochemical method was compared with the dye penetration method. No correlation was found between the two techniques. Delivanis and Chapman [9] compared the electrochemical method to the dye penetration or the radioisotope method. They found a correlation, but only at the two ends of the electric score range. Martell and Chandler [17] compared three root-end restorative materials using the electrochemical and dye penetration methods and found a correlation between the two methods. A study by Wu et al. [18] compared bacterial penetration to fluid transport along root canal fillings. They found no correlation between the two methods. In the present study, the quantitative measurements recorded by the electrochemical method and the bacterial penetration method gave contradictory results. This may be due to the differences in the working principles of the various test methods. The electrochemical method is based on the diffusion of ions through very narrow spaces, and the outcome of this method likely depends on electrical laws [14]. It is assumed that the magnitude of the electrical current produced by ion diffusion between the two electrodes is directly proportional to the degree of leakage. Any change in ion concentration can affect the results. Seidler [19] has emphasized that all sealers undergo dimensional changes; these changes occur upon setting and dissolution in fluids. Dissolution of the inorganic salts used in sealer formulations may affect the ionic concentration. Another parameter that could be measured in the electrochemical test is electrical resistance. Resistance and leakage are inversely related to each other: as the leakage increases, the electrical resistance value declines. Jacobson and von Fraunhofer [8] applied two different types of metal as electrodes (stainless steel and copper). This procedure may lead to an electrical potential created between the two electrodes, which affects the measurements. The results of electrochemical microleakage tests have varied considerably. This may be partly because of differences in the composition of the electrolyte, the electrode type, the distance between the two electrodes, the electrode thickness and the electrical conductivity of the ionic solution. We used saline solution as the electrolyte in our study because the osmotic pressure and ionic composition of saline solution are relatively similar to those of interstitial fluid and may not interfere with the results obtained by electrochemical tests, as opposed to NaOCl solution. The ionic concentration in hypochlorite solutions is very high, especially in concentrated ones. This may impact the electrical conductivity measured by the electrochemical test. Since the electrochemical method does not destroy the tooth structure, we could assess leakage in one sample repeatedly. According to Timpawat et al. [7], the use of bacteria for assessing leakage (mainly coronal) is considered to be of greater clinical and biological relevance than the dye penetration method. Many different strains of bacteria have been used to detect marginal leakage, and this has led to contradictory results, because the methods depend on the type of bacteria used [22][23]. E. faecalis is also one of the most commonly isolated microbes from the root canal [7]. In this study, E. faecalis was selected due to the ease of arrangement and interpretation of the data.
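The inverse relationship between leakage-driven current and electrical resistance noted above follows directly from Ohm's law at the fixed 8 V supply used in this setup. As a hedged illustration only, taking the group extremes reported in the results (no resistance values were actually reported in the study), the implied resistances can be computed as follows:

```python
# Illustrative Ohm's-law conversion: at a fixed supply voltage, a higher leakage
# current implies a lower resistance of the path through the root canal filling.
SUPPLY_VOLTAGE = 8.0  # volts, as used in the described circuit

def resistance_kohm(current_microamp: float) -> float:
    """Return resistance in kilo-ohms for a measured current in microamps."""
    current_amp = current_microamp * 1e-6
    return (SUPPLY_VOLTAGE / current_amp) / 1e3

# Experimental-group currents ranged from 10 to 60 uA; 90 uA was the positive-control maximum.
for i in (10, 60, 90):
    print(f"{i:>3} uA  ->  approx. {resistance_kohm(i):,.0f} kOhm")
```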
According to the results of this study, there was no significant correlation between the electrochemical and bacterial penetration tests for the evaluation of leakage. Thus, the clinical relevance of in vitro leakage evaluation may be questioned. Moreover, the lack of correlation between the two methods applied in this study is likely related to the differences in their criteria. It is therefore proposed that several methods of evaluation should be used, to obtain several sets of data before drawing any conclusion.

Figure 1: Scattering diagram of microbial penetration results and electrochemical leakage evaluation (r = 0.23)
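The correlation reported above (r = 0.23, P = 0.275) is a standard Pearson test between the per-tooth current readings and the times to bacterial penetration. As a hedged sketch only, with hypothetical placeholder arrays rather than the study's raw measurements, the same calculation could be run in Python instead of SPSS:

```python
# Minimal sketch of the Pearson correlation between the two leakage measures
# (hypothetical per-tooth values; the study's raw data are not reproduced here).
from scipy import stats

current_microamp  = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 22]  # electrochemical readings
days_to_turbidity = [17,  9, 14,  6, 12, 16,  5, 11,  8, 13,  4, 10]  # bacterial penetration times

r, p_value = stats.pearsonr(current_microamp, days_to_turbidity)
print(f"r = {r:.2f}, P = {p_value:.3f}")
# A non-significant P (as in the study, P = 0.275) means the two methods do not
# rank the same teeth as "leaky" in a consistent way.
```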
2017-08-27T10:09:25.567Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "f011e600355a567474d188238dd05114c487e58d", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0976-237x.128670", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "f011e600355a567474d188238dd05114c487e58d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
248809943
pes2o/s2orc
v3-fos-license
Caregivers' perception of risk for malaria, helminth infection and malaria-helminth co-infection among children living in urban and rural settings of Senegal: A qualitative study

The parasites causing malaria, soil-transmitted helminthiasis and schistosomiasis frequently co-exist in children living in low- and middle-income countries, where existing vertical control programmes for the control of these diseases are not operating at optimal levels. This gap necessitates the development and implementation of strategic interventions to achieve effective control and eventual elimination of these co-infections. Central to the successful implementation of any intervention is its acceptance and uptake by caregivers, whose perception of the risk for malaria-helminth co-infection has been little documented. Therefore, we conducted a qualitative study to understand the caregivers' perspectives about the risk, as well as the behavioural and social risk factors promoting malaria-helminth co-infection, among pre-school and school-age children living in endemic rural and urban communities in Senegal. In June and December 2021, we conducted individual and group interviews, and participant observations, among 100 primary caregivers of children recruited from Saraya villages in southeast Senegal and among leaders and teachers of Koranic schools in Diourbel, western Senegal. Our findings showed that a majority of the study participants in the two settings demonstrated a high level of perception of risk for malaria and acceptable awareness about handwashing practices, but had misconceptions that malaria-helminth co-infection was due to a combination of excessive consumption of sugary food and mosquito bites. Our observations revealed many factors in the house structures, toilet practices and handwashing with ashes and sands, which the caregivers did not consider as risks for malaria-helminth co-infections. These findings underscore the need to promote caregivers' awareness about the existence and risk of malaria-helminth co-infection in children. This approach would assist in addressing the caregivers' misconceptions about the occurrence of the co-infection and could enhance their uptake of the strategic interventions targeted at achieving control and subsequent elimination of malaria and helminth co-infection.

Introduction
In this study we explored caregivers' perception of risk, as well as the behavioural and social risk factors (such as house structure, toilet facilities and hand-washing practices) promoting malaria-helminth co-infection among pre-school and school-age children living in the endemic rural and urban communities in Senegal. Although our primary objective was to ascertain what the caregivers perceived about malaria-helminth co-infection, to ensure clarity of this objective we needed first to systematically consider what the caregivers perceived to be the risks for malaria and helminths before exploring their perceptions of the co-infection involving malaria and helminths.

Ethics statement
This study was approved by the Research Ethics Committees of the London School of Hygiene & Tropical Medicine and the Conseil national de Recherche en Santé (CNERS), Senegal. Given that the study was non-interventional and its implementation posed little or no risk to the study participants, verbal consent was considered sufficient for this study [20]. The Ethics Committees approved the verbal consent procedure for this study, and we obtained verbal consent from all potential caregivers before the interviews were conducted.
The detailed information about the study was provided to a potential study participant by a trained research assistant, after the participant's concerns and questions were satisfactorily addressed, they signified consent to participate in the study orally. This process of verbal consenting was documented by the research assistant in a dedicated consent register. We also obtained permissions from the health authorities at the national, regional and district levels in Senegal, prior to the implementation of the study. Participation in the study was voluntary and participant confidentiality was maintained. Study settings We conducted this study as a qualitative component of two prospective population surveys designed to address the lack of information regarding the burden of malaria-helminth coinfections and its associated risk factors among pre-school and school-aged children in Diourbel and Saraya districts of Senegal. Diourbel in the western region of Senegal is mainly urban while Saraya in the south-eastern region of the country is mainly rural. Diourbel and Saraya are about 134 km east of and 740 km south of Dakar, the capital of Senegal, respectively ( Fig 1). The two communities share similar epidemiological profiles. Diourbel and Saraya have a tropical Sudano-Sahelian climate with well-defined dry and rainy seasons that result from northeast winter winds and southwest summer winds; the dry season lasts from December to May and the rainy season from June to November. Diourbel and Saraya are parts of the communities most affected by malaria [21] and helminths [22] in Senegal. The main ethnic groups in Saraya are Malinke and Diakhanke, while Serer and Wolof are the main ethnic groups in Dioubel. Study population The study population consisted of primary caregivers of children aged 1-14 years who were vulnerable to malaria-helminth co-infection in Diourbel and Saraya districts of Senegal. In Saraya site, the primary caregivers were the parents and guardians of these children. In Diourbel site, pre-school and school aged children lived as full boarders in Koranic schools (also called 'dahras') where they received Arabic and Islamic teachings. The primary caregivers of the children learning in these schools were Koranic teachers, heads of Koranic schools and their wives. Older Koranic students (Supervising Talibé) were also appointed by the heads of the Koranic schools to supervise and serve as caregivers to the younger students in the schools. Because of the differences in the settings of the two study sites, we adopted a multiple case study approach for Saraya and a single case study for Diourbel. We used a multiple case study with a level of nesting [23] which categorised Saraya district into urban zones and rural zones. These approaches are relevant to research that focuses on understanding (how?) and explaining (why?) complex human behaviours and phenomena occur [23]. The case studies were undertaken in one town (Saraya) and two villages (Sekoto Dantila and Sabadola) in Saraya district and in multiple Koranic schools in the urban district of Diourbel. Sampling strategy Given that caregivers' perceptions were previously shown to be related to gender, age, prior experience and place of domicile [15,17], we purposely selected respondents with different backgrounds. This sampling strategy led to the diversity of gender, age, prior experience and place of domicile, whether in a rural or an urban setting. 
Purposive sampling was directed towards achieving maximum variation in age and gender, using a snowball approach: 'a nonprobabilistic form of sampling in which persons initially chosen for the sample are used as informants to locate other persons having necessary characteristics making them eligible for the sample [24]. A purposive sampling method was used to recruit 50 participants across the three villages in Saraya district. These included 30 mothers, 10 fathers, 10 grandmothers who were primary caregivers of pre-school and school aged children. Similarly, we randomly selected 50 participants who were Koranic teachers and leaders of the Koranic schools (ouztaz) and were also primary caregivers for the Koranic students (talibés) in these schools (dahras). Sample size was estimated to be sufficient based on the principle of theoretical saturation [24] and based on our previous experience with this methodology, we expected to reach saturation with approximately 50 interviews per group. Data collection methods We used several methods to collect qualitative data from the study participants; these included individual and group interviews and participant observations. The final number of individual and group interviews was determined based on the saturation of the data that we obtained during the collection phase. Face-to-face interviews were facilitated by trained research assistants, using a purposed-designed interview guide (S1 Table). The interview guide explored the risk perception of factors that increased vulnerability to developing a combined infection of malaria and helminths in children. The study participants were asked about the sleeping habits of their children and whether they used a treated bed net, the structure of their house (whether the walls of the house were made of mud and poles, cement blocks, corrugated iron, concrete, mud bricks, burnt bricks, wood, and whether the roof was made of corrugated iron or grass), and toilet facilities. We visited the houses and inspected the surroundings and toilets. We explored a measure of the sanitation level of the households by assessing the toilet facilities and handwashing practices through direct observations of the study participants. The participants were asked to show where they and other members of the households most often washed their hands at any time, i.e. before a meal, before cooking or feeding a child, and after using a toilet. When no handwashing place was shown, the respondents were probed for reasons why they did not have the facility or practised handwashing. We observed whether water and soap were available at a designated handwashing location within the households. The households were also observed for the presence of soaps, or other cleansing agents within an arm's reach of the place for handwashing. The respondents were not asked to fetch soap, as this did not reflect the soap's accessibility. In-depth interviews were also held with the primary caregivers of the pre-school and school age children in the households and Koranic schools to gain an in-depth understanding of their perceptions on risk of their children/wards developing mixed infections of malaria and helminthiasis. The respondents were asked about recent illnesses with malaria and helminths, and what they thought might make the children developed these diseases. They were asked about hypothetical illnesses to understand how health-seeking behaviour may change by illness type. 
The interviews were conducted in the local languages preferred by the participants and these were recorded using digital dictaphones. Data processing and analysis The transcriptions of the interviews were carried out by trained research assistants who were native speakers of the local languages in Senegal. We verified the meaning and interpretations of the transcribed texts and confirmed consistency with the original texts in the local languages. The entire coding phase followed the inductive coding methodology. We organised the data around evocative themes with regard to the comments and views expressed by the study participants. We used thematic analysis through an objective and systematic analysis of the contents of oral discourse. We extracted the different units of analysis of the discourse and performed horizontal analysis of the theoretical framework in relation to the study objectives. We created themes that emerged from the analysis of the codified data, and validated the meaning by triangulation of the sources and methods of gathering the data. We analysed the data using NVivo 12 software and presented the results along the emerging themes. Study trustworthiness In line with trustworthiness criteria created by Lincoln and Guba [25], we ensured credibility of the study by prolonged engagements, persistent observations, data collection triangulations, and researcher triangulations. Through an iterative process of listening, discussing, and re-listening, the research team identified and consensually validated emerging themes and appended segments of dialogues supporting the proposed themes. We stopped interviews when saturation was reached (i.e. when no new themes were identified). The team systematically reviewed the themes and sorted them into content domains. We used an analytic matrix to identify patterns and connections amongst the domains. Two of us not involved in the qualitative coding process (MA and IAM) audited the analytic matrix, choice of quotes, and thematic analysis. The team members checked the descriptions of the key phenomena and themes which emerged from the data analysis with the study participants and requested them to verify the accuracy or consistency with their perspectives. These activities supported the validity and transferability of our study findings. We kept robust records of the raw data, field notes, transcripts, and reflexive journals from this study and this helped us to seamlessly systemize, relate, and cross reference data. We maintained an audit trail that documented evidence of the decisions and choices that we made regarding the theoretical and methodological issues throughout the study period. These steps supported the dependability of our study [26]. To achieve confirmability [26], we highlighted the rationales for theoretical, methodological, and analytical choices throughout the entire study and this supported that the interpretations and findings were derived from the data collected from the study participants. Reflexivity [26] was maintained by the research team through the analysis and writing by recording, discussing and challenging established assumptions. In addition, MOA, NMS and AD kept reflexive diaries. The first author observed the interviews and discussion groups. He was not known to the participants of this research prior to undertaking the study as a medical doctor who had first-hand personal experience in the subject of discussion. 
Whilst it was useful to 'know' (from his own background) what the participants were talking about medically (and in terms of detecting items of significance), as a researcher he made conscious efforts not to accept potentially common assumptions at face value. Results This study took place in June and December 2021 in Saraya and Diourbel districts, respectively. A total of 50 study participants were recruited in each district. The socio-demographic characteristics of the study participants are summarised in Table 1 below. Findings from the direct observations of the households The majority of the 'dahras' visited had their bungalow houses built with blocks and roofed with corrugated zinc, while slab roofs, cement blocks and iron doors were used for the two-storey buildings (Fig 2). In other 'dahras', the rooms which served as a place of study for the children were made of straw, wood or zinc, with zinc roofs. In almost all the 'dahras', there was a large hall which served as a place of study for the students, without a fence but with a zinc roof and a cement or tiled floor; sometimes the floor was only sand. None of the 'dahras' had mosquito netting fitted on the windows or doors. These observations were corroborated by comments from a senior student, a leader of the Koranic school and a wife of a school leader: "Yes, the whole building is solid, it's a new building and the whole floor has been tiled. The doors and windows are iron. There are no mosquito nets on the doors or on the windows. » Supervising Talibé "All the other rooms are made of zinc, the walls, the roof, the doors and windows. They are six in number (6), this is where the Talibés sleep. The floor is cement except for one of the bedrooms, and that's because the work hasn't finished yet. » Head of Dahra Similarly, in most dahras, the children (talibés) lived in adjoining rooms, some of which had no doors. We also observed multiple holes in the roofs, which implied possible leakage of water through the roofs during rainy seasons. A senior student confirmed that they often put containers in each corner of the rooms to collect the rainwater and to prevent flooding of the rooms or getting their clothes wet. "Yes, in the room where the children sleep, the water enters there during rain. Either they sleep with that, or they put pots to prevent it from touching them. » Supervising Talibé Bedding organisation In some 'dahras', talibés slept according to age groups. The younger children slept on mats laid on the floors while the older ones slept on mattresses. Occasionally, the older ones slept under a mosquito net, installed by one of them. Most talibés preferred to sleep in the courtyards of the 'dahras' because of the intense heat during hot seasons, while others slept in the open spaces in front of the houses for the same reason. "We have mats at our disposal which constitute beds for the learners. So these mats are given to the learners, according to their age groups, and the rooms, which are five in number. Those who are between 7 and 12 years old share the same room and those who are between 10 and 15 years old also share the same room, and finally, those who are between 15 and 20 years old are in the same room. » Head of Dahra. The household structure and bedding arrangements in the 'dahras' differed significantly from those found in Saraya villages. The majority of the houses in Saraya villages were made of mud walls and thatched roofs (Fig 3).
Few had corrugated zinc roofs, while none had mosquito netting on the windows or doors. Nevertheless, almost all caregivers in Saraya confirmed that they and their children had slept under a mosquito net during the night preceding the interviews. Hand washing practices All 'dahra' leaders and Koranic teachers reminded us of the sacred dimension of hygiene in the Islamic religion. The ablutions performed by Muslims, who wash different parts of the body with water before observing the mandatory five daily prayers, were cited as concrete examples of the importance given to hygiene and hand washing in Islam. They also affirmed that the concept of general body cleanliness and hand washing occupied an important place during the daily discussion/awareness sessions organised for the students in the 'dahras'. "I will answer you by Islamic jurisprudence because the prophet of Islam said that each person must wash himself every morning before doing anything because he does not know where his hands spent the night. So even before washing the body or doing ablutions, you have to wash your hands, all the more so when you decide to cook. Cleanliness is all-encompassing in Islam, everything that must enter the body and the belly must be clean" Head of Dahra The advent of the Covid-19 outbreak facilitated the training of many 'dahra' managers on good hygienic practices. In addition to the training workshops, the district health authorities also provided the managers with products and accessories to perform hand hygiene. We also observed many pictorial flyers written in local languages, demonstrating the steps of handwashing, displayed throughout different sections of the 'dahras'. Although knowledge of the importance of good handwashing practices was well ingrained among the caregivers, including the health risks that could arise from poor hygiene, most of the interviewees acknowledged the difficulties they faced in enforcing strict compliance with these good practices among the children. Limited financial capacity and human resources were also cited as barriers to enforcing compliance with handwashing among the talibés. For example, when the talibés left the toilets, some washed their hands routinely with water but rarely with soap, mainly because soap was not always available. In the absence of soap or detergent, a few talibés used sand to clean their hands after visiting the toilets. To the question "Is there 'madar' (detergent) placed in front of the toilets?", the response from many of the teachers was "No, there isn't." Asked what the children then washed with, one teacher answered: "Usually with water and sand". "We taught them that (repeated many times), but there is no one who is there all the time to remind them, if we see them leaving the toilet without washing, we can call them to order, but there is no one who reminds them of these incessantly. » Wife of a dahra teacher. The descriptions obtained in Diourbel about handwashing were similar to the responses provided by the caregivers in Saraya villages, where the majority considered handwashing useful for preventing many diseases. Like their counterparts in Diourbel, the Saraya participants also linked the importance of handwashing and body cleanliness to Islamic injunctions. The participants also highlighted how the public health messages shared as part of the preventive measures for the Covid-19 pandemic reinforced their hand washing practices. "Handwashing helps prevent disease.
Here, almost all of us are used to washing our hands, adults and children alike, before eating everyone washes their hands on a pot with soap" Mother, 29 years old, Sabadola "...it's because you won't be comfortable in your activities when you haven't washed your hands. It's not as pretty to see stepping out of the toilet and not washing your hands" Mother, 28 years old, Saraya As observed among the Diourbel students, most participants in Saraya villages washed their hands with water or with a natural product such as ashes or sand. According to them, ashes can replace soap. "As soon as you leave the toilet, you have to wash your hands, before eating, wash your hands also if there is no soap. If this is not the case, use the ashes" Father, 32 years old, Sékoto Dantila "When we have money we can buy soap and when we don't have money we only use water like that to wash ourselves and even when leaving the toilet and this constitutes a risk of catching diseases » Mother, 50, Sekoto Dantila Toilet hygiene and toilet use practices Most of the 'dahras' visited had modern toilets built with Turkish chairs, cement walls, a zinc roof and a septic tank as the stool disposal system. Some 'dahras' had several toilets, both shower toilets and stool toilets (Fig 4). In a few 'dahras', separate toilets were built for the Koranic teachers and the talibés. Traditional toilets built by digging pits, with a seat made from cement, a straw fence and no evacuation system, were also found in some 'dahras' located in the remote/bushy areas of Diourbel. These were similar to the toilet facilities found in all households visited in Saraya villages. When these pit latrines were filled with stools, they were covered with sand and other pits were dug to serve as new toilets (Fig 5). In some households in Saraya, there were no toilets. In these cases, members of the households used neighbours' toilets or practised open defecation. "Here we have a traditional toilet fenced in with solid straw with bricks around it to act as a rampart. The water from the toilets pours behind the concessions or over holes and when it's full we close and dig another one, that's how we do it here" Mother, age unknown, Sekoto Dantila "We don't even have a toilet here, we go to the bush to do open defecation. » Grandmother, 60 years old, Sabadola Most children aged under five years in Saraya villages used potties to pass stools, following which the mothers or caregivers dumped the stools in the home-made toilets or nearby bushes. The mothers cleaned these children with their bare hands, which were later washed with water alone or with water and soap, depending on the availability of the latter. "Children who are under 5 years old use the pots... As soon as the child wants to relieve himself, we give him a potty, if he finishes we clean it and the potty too, to throw the stool in the toilet". Practices related to swimming and barefoot walking All participants interviewed in Diourbel and Saraya expressed concerns about, and difficulty in, getting children to wear shoes. Owing to the itinerant nature of the Koranic students in Diourbel, the shoes got lost each time the Koranic leaders procured new ones for the students. In Saraya villages, lack of money to buy new shoes for children was cited as the major reason for children walking barefooted.
"(Laughs) it is very difficult, if not impossible, to get children to wear the shoes. Even, yesterday there was a man who asked if it is necessary to buy shoes for the children since they don't wear them, and I told him to buy them and then we'll keep them for them. They wear when going out but do not fit with the shoes. Head of Dahra "They very often wear plastic shoes, often it is the over-15s who wear them the most and small children are often used to being barefoot" Head of Dahra Existence of swimming places Our interviews showed that very few rivers and streams were available in Diourbel where children could bathe or swim. These rivers were very far from the 'dahras' and the students were strictly forbidden from going to the rivers for bathing or swimming, because of two episodes of drowning that occurred there. This situation contrasts sharply with Saraya villages, which had many rivers and streams where adults and children bathed, swam and washed their dirty clothes. Most frequent diseases Malaria was cited by the majority of respondents in Diourbel and Saraya districts as the most common illness among the Koranic students and children. The students were also reported to complain a lot of stomach aches. Mosquito bites were mentioned by most respondents in both sites as the main cause of the recurrence and persistence of malaria, especially given that most Koranic students did not sleep under mosquito nets. "Malaria is the most recurrent. Apart from this pathology, children often suffer from stomach aches, and finally from colds, because you see children who cough, coughs from colds, it happens to them. » Head of Dahra "Malaria is more common here and also stomach aches. The stomach aches are due to the bad food they consume once they go out. As they are talibés, they eat all the food they receive on the street. And these foods may not be good for them. » Head of Dahra "It's the mosquitoes I think. But there is also the fact of not sleeping under a mosquito net, because there is no standing water here, so you can only think of not sleeping under mosquito nets as the origin. Recently I saw multiple cases of malaria in one of the houses, and they told me they didn't have mosquito nets, I think that's why. I spoke to MB about it and he told me that he will find a solution. » Mother, unknown age, Saraya Management of malaria cases Most of the Koranic masters and some of their wives were trained in the diagnosis of malaria using a rapid diagnostic test (RDT) and case management of malaria by the health authorities in Diourbel. The Koranic leaders were also provided with RDTs and Artemisinin-based combined therapy (ACT) for malaria treatment. These was no similar arrangement among the caregivers in Saraya villages, most of whom demonstrated good health-seeking behaviour and attributed the decline in malaria among the children to the yearly mass drug administration with seasonal malaria chemoprevention (SMC). "We have ACTs and malaria tests, so if we do the test and it comes back positive, we prescribe them the ACTs, respecting the dose, and the paracetamols. We order paracetamols from the main pharmacy. Now, if the test is negative, we give them paracetamol, because we will consider the signs like the flu. "Daughter of a Koranic master. "Yes there were medicines that are given every month for three days. And after that, we saw that the malaria rate had dropped a lot, so I think that yes, certain drugs can help with malaria. 
But for the worms I don't really know maybe the marabout can answer you on that. » Mother, 37 years, Sekoto Dantilla Knowledge and perceptions about worms Unlike malaria which many dahra leaders could detect and manage at household level, the knowledge and perception about symptoms of worms and treatments were minimal. Only stomach ache was mentioned as the symptom of worm infestations by the Koranic teachers and they confirmed that this was rarely a reason for seeking treatments for the students in health facilities. In addition, some mentioned the students' poor diet and lack of hygiene as potential sources of worms, given that the children begged and received different kinds of food from the public. Amongst caregivers in Saraya, worm infestations were said to be caused by eating sugary food which could lead to having stomach ache and passing worms in stools. "We also think that children can have worms but it's not a lot, we have never had complaints about worms coming from children, there is no one who has come and that we consult him and that we realize that he has worms. For the worms we are the ones who assume that what the children eat is too sweet so they can have worms. » Head of Dahra "They sometimes vomit or suffer from stomach aches. I remember my child was vomiting a lot, we went to the hospital, after the tests they did not see anything, they assumed it was worms and they prescribed medicine. » Manager's wife "You see kids eating sugar cubes like that all the time, it can give them worms. In fact, not long ago, we took a child to the hospital and we were told that it was because of the sugar; his face was a little swollen. » Grandmother, 58 years, Saraya Worm treatment In Diourbel and Saraya, medications for the treatment of worms were given at a health post or during the mass deworming campaigns. Some reported that the drugs were not effective and caused side effects such as diarrhoea in children, without eliminating the worms. This led some to prefer to use home remedies. "Well, I don't know too much, but I know that the drugs we are given are to be given to the children every fortnight, that's all I know. For the most part, they recover but I admit that if he does not stop consuming certain foods, they will come back. » Head of Dahra "The worms, we know it's complicated to treat, we know that we only have sedatives because at some point they come back (the worms). We know that killing them until they come completely out of the belly will be complicated. And these are things that also happen here. » Head of Dahra "The worm medication works for some and not for others and for example last year we were given medication for this and it caused diarrhoea in some children. When he had diarrhoea, it was found that no worms came out. » Mother, 29 years, Saraya Risk perception for malaria-worm co-infection Whilst caregivers in Diourbel and Saraya demonstrated fair knowledge about the risk factors for malaria and worms as a separate disease entity, the possibility of a combined infection involving malaria and worms was considered to be a rare occurrence. The respondents in both sites emphasised that the symptoms and signs of malaria-worm co-infection were complex and difficult to recognise among the children in Saraya and Koranic students in Diourbel. "I never had the knowledge that a child can have malaria and worms together at the same time. 
We don't know if the child has worms, but if he has malaria, we know" Supervising Talibé Some caregivers were of the opinion that children could suffer from malaria and worms at the same time, especially if children consumed too much sugary food and were later bitten by mosquitoes. There was also a consensus of opinion by the caregivers in Diourbel and Saraya that the only way to know if a child has worms was to see the presence of worms in the child's stools. "Yes a child can have worms and malaria. When that happens, we have to go to the doctors, it is he who has the capacity to treat them to pass the worms in their stools. Having these two can be caused by excess sugar intake and the bite of a mosquito. » Supervising Talibé "I have never seen a child have malaria and worms, but I think a child can have both diseases at the same time. Worms are caused by the excessive consumption of sugar in children if it is added by mosquito bites they can have both diseases at the same time. » Mother, 28 years, Sabadola. Discussion We assessed caregivers' perception about the risk for malaria and helminthiasis as a separate disease entity and malaria-helminth co-infection among pre-school and school-aged children living in the urban and rural communities that are endemic for both diseases. Our findings showed a very high level of awareness about the risk for malaria but almost all the study participants demonstrated poor knowledge of the risk and causative factors for worms in children. More importantly, most caregivers in Diourbel and Saraya did not perceive the possibility of the co-existence of infections involving both malaria and worms in their children. The relatively better perception of the risk and factors associated with malaria and its consequences demonstrated by a majority of the study participants is similar to findings reported in previous studies [13,14]. Also, greater investments on community messaging associated with effective implementation of malaria control programmes in Senegal is likely to have contributed to the high perception and increased awareness about the risk and health-seeking behaviour for malaria, demonstrated by the study participants. The training and support provided by the Senegal health authorities to the Koranic school teachers to diagnose and treat malaria at household levels were impressive, demonstrating the commitment of Senegal to achieving malaria elimination. An optimal, if not the same level of awareness about the risk of worm infestations would have been demonstrated by the participants, if similar support was given to NTD control. Our study communities were characterised by socio-economic vulnerability and social stratifications on gender norms in relation to hygiene and sanitation. The management of hygiene of the households and that of the children rested largely on the shoulders of women and older students at the Koranic schools. In Saraya villages and in similar settings in African communities, women were responsible for collecting stools, cleaning up younger children and disposing their wastes while men were mainly responsible for providing financial support to procure materials such as soap to ensure hygiene within the household. Given the setting in 'dahras', older children who were students at the Koranic schools, were responsible for their own toilet management which was confirmed by their caregivers as not being hygienically sound. 
Despite being aware of these unhygienic practices, we expected the leaders and teachers at the 'dahras' to perceive this as a major risk for worm infestations and similar diseases which manifested as abdominal pain in the students. Situating the apparently low perception of the risk of worm infection demonstrated by the caregivers within the Health Belief Model showed that the perceived susceptibility of the children to worms was shaped by the caregivers' perception that worms were less harmful as a cause of serious illness in children than malaria. This perception may also downplay the need to take appropriate cues that may culminate in acceptable health-seeking behaviours [14]. Also, the caregivers' widely held opinions that excessive consumption of sugary food was responsible for worm infestations in the children reinforces the relevance of the Health Belief Model in shaping the perception of risk and health-seeking behaviours. Given the urban and rural locations of our two study sites, the household structures in Diourbel and Saraya were different not only in term of the materials used to build the walls and roofs, the bedding and sleeping arrangements, toilet types and toilet practices were also significantly different. We opined that if the caregivers had adequate knowledge and/or awareness about malaria-helminth co-infection, they could have probably perceived these factors as risks for the co-infection and mentioned them during the interviews. In the same vein, considerable risk posed by the practice of open defecation, dumping of stools in the bushes and use of traditional toilets without hygienic evacuation system [27,28], were not highlighted or perceived by the caregivers as risks during the interviews. Nevertheless, handwashing practices were well perceived by the caregivers across the two sites. The good knowledge of handwashing practices demonstrated by the caregivers was most likely influenced by the doctrines of Islamic religion [29,30] which almost all the caregivers in Diourbel and Saraya professed. Whilst all study participants recognised the importance of handwashing with water and soap, non-availability of soap made the majority of them resort to using ashes and sands as alternatives to soap. Although the use of ashes and sand may sound practical because it was readily available at no cost, the eggs of the helminths are known to reside within sands and could promote the transmission of worms to the children, thereby increasing their risk and vulnerability to soil-transmitted helminths [28,31]. Almost all study participants demonstrated poor perception about the risk for malaria-helminth co-infection, as they attributed the risk of the co-infection to a combination of excessive consumption of sugary food and mosquito bites. Studies have documented similar misconceptions about the risk and causes of common childhood diseases in Africa and how this had negatively impacted on health-seeking behaviour [32,33]. Given that caregivers who perceived that their children could be susceptible to a disease were more likely to seek treatment compared to caregivers who had low perception about susceptibility to the disease [13,34], there is a need to enhance the caregivers' knowledge on malaria-helminth co-infection and its associated complications in children. Our study had a few limitations. Our findings were largely based on self-reports provided during the interviews by the caregivers. 
The respondents might wish to impress the interviewers, but the inclusion of participant observations reduced this bias. Also, the study populations in Diourbel and Saraya were not homogenously similar, hence, it was difficult to draw conclusions on some comparative findings obtained from the two diverse communities. Nevertheless, the selection of the two sites provided the opportunity to explore and understand the perspectives of the caregivers on risk factors for malaria-helminth co-infection in the paediatric populations living in different contexts within the same country. The use of multiple qualitative methods also enabled us to generate findings that reflect the diverse settings in a typical African community. In conclusion, a majority of the caregivers in our study demonstrated a high perception of risks as well as acceptable health-seeking behaviour for malaria, but low risk perception and misconceptions about the causative factors of worms and malaria-helminth co-infection. The findings of this study underscore the need to promote awareness about the risk and complications of malaria-helminth co-infections in children. This step would assist in addressing the caregivers' misconceptions about the co-infection and may improve their uptake of the strategic interventions developed to achieve control and elimination of malaria and helminth coinfection. Supporting information S1 Table. Interview guide used to collect qualitative data from the caregivers. (DOCX)
2022-05-17T01:08:21.851Z
2022-05-16T00:00:00.000
{ "year": 2022, "sha1": "3bee69429720ea8ee9f4c909597a5b9d32afdcb7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/globalpublichealth/article/file?id=10.1371/journal.pgph.0000525&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "122b04f45f3519ec59ab5a10c412adfd9dc54b2d", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
11638026
pes2o/s2orc
v3-fos-license
HST images and properties of the most distant radio galaxies We present Hubble Space Telescope images of 11 high redshift radio galaxies (between z = 2.3 and z = 3.6). The galaxies were observed with the WFPC2 camera in a broad band filter (F606W or F702W, roughly equivalent to V or R-band), for 2 orbits each. We find that on the scale of the HST observations there is a wide variety of morphological structures among the host galaxies: most objects have a clumpy, irregular appearance, consisting of a bright nucleus and a number of smaller components, suggestive of merging systems. Some observed structures could be due (at least partly) to the presence of dust distributed through the galaxies. The UV continuum emission is generally elongated and aligned with the axis of the radio sources; however, the characteristics of the "alignment effect" differ from case to case, suggesting that the phenomenon cannot be explained by a single physical mechanism. We compare the properties of our radio galaxies with those of the UV dropout galaxies and conclude that (i) the most massive radio galaxies may well evolve from an aggregate of UV dropout galaxies and (ii) high redshift radio galaxies probably evolve into present-day brightest cluster galaxies. Introduction Studying the optical morphology of high redshift (z > 2) radio galaxies (HZRGs) can contribute substantially to our understanding of galaxy formation and evolution in the early universe (for a recent review see McCarthy 1993). Although the recent development of new techniques (e.g. U and B band dropouts, Steidel et al. 1996) has led to the discovery of a large population of high redshift galaxies, radio galaxies remain of exceptional interest, because they pinpoint the most massive systems at high redshift and are potential signposts for finding high-redshift clusters of galaxies. It has been shown that high luminosity radio sources associated with quasars and radio galaxies at redshift ∼0.5 are located in rich clusters (e.g. Hill & Lilly 1991). At z ∼ 1 several possible X-ray clusters have now been discovered around powerful radio galaxies, such as 3C324 at z=1.2 (Dickinson et al. 1998), 3C356 and 3C280 (Crawford & Fabian 1996). At z > 2 the existence of clusters around HZRGs has not been established. However, there is an increasing number of important observational indications that HZRGs might be in clusters, including (i) the detection of possibly extended X-ray emission from the radio galaxy PKS 1138-262 at z=2.156, most probably coming from a hot cluster atmosphere (Carilli et al. 1998); (ii) strong Faraday rotation and depolarization of the radio emission of some HZRGs, which might be due to dense gaseous halos; (iii) a possible excess of companion galaxies detected along the axes of the radio sources; (iv) a possible excess of Lyman break selected galaxies in the fields of several powerful radio sources (e.g. Lacy & Rawlings 1996); and (v) an excess of candidate companion galaxies (with two objects spectroscopically confirmed) in the vicinity of MRC 0316-257, at z=3.14 (Le Fevre et al. 1996). The hosts of powerful low redshift radio sources have long been identified with giant elliptical galaxies containing old stellar populations. The surprising continuity of the K-z relation between the high redshift radio galaxies and the low redshift brightest cluster galaxies, which shows little scatter up to a redshift of ∼4 (although the scatter increases beyond redshift 2, e.g. Eales et al.
1997), might indicate that the hosts of powerful radio sources are the most massive galaxies known at high redshift. Moreover, since HZRGs are probably located in forming clusters of galaxies, they could be the ancestors of brightest cluster galaxies. We have previously presented and discussed HST images of the radio galaxy PKS 1138-262 at z=2.156, which shows the clumpiest optical morphology of all the HZRGs imaged with the HST. Our conclusion was that PKS 1138-262 is a giant elliptical galaxy at the center of a protocluster in the late stages of its formation. In this paper we present HST-WFPC2 images for 9 powerful radio galaxies having redshifts between z=2.3 and z=3.6. We also present deep HST archive images of 2 HZRGs observed with WFPC2. We compare the HST images with VLA maps of the associated radio sources having similar resolution. After discussing the sample selection (Sect. 2), we describe the HST imaging and reduction procedures (Sect. 3), the radio imaging and the problem of the relative astrometry between the radio and HST data (Sect. 4). In Sect. 5 we briefly discuss the most important characteristics of each object, also referring to previous results that are relevant to the interpretation of the new data. Finally in Sect. 6 we discuss some statistical trends of the properties of these high redshift radio sources, giving a qualitative interpretation. We then summarize our main results and present our conclusions. We also include in the Appendix new radio images of the radio galaxies TX 1707+105 and MRC 2104-242. Throughout this paper we assume a Hubble constant of H0 = 50 km s−1 Mpc−1 and a deceleration parameter of q0 = 0.5. Sample selection The radio galaxies were initially selected from the more than 60 HZRGs which were known at the commencement of the project (1995) (e.g. van Ojik 1995 and references therein). Most of these distant radio galaxies were found by observing ultra steep spectrum (USS) radio sources (α < −1.0, where α is the radio spectral index). Objects were selected according to the following criteria: (i) bright in the R band (R < 24, i.e. sufficiently bright to be mappable in a reasonable time with the HST); (ii) amongst the brightest line emitters (Lyα flux > 10−15 erg s−1 cm−2). Because of its high redshift, we also included the radio galaxy MG 2141+192 (z=3.594) in the sample. Finally we obtained unpublished HST/WFPC2 images of the radio sources B2 0902+343, at z=3.395, and TX 0828+193, at z=2.572, from the HST archive. For a statistical study of the properties of HZRGs it is important to enlarge the sample of objects with HST images: we therefore included in our analysis the other radio galaxies that have been imaged with the HST. These include the radio galaxy 4C 41.17 at z=3.8, one of the best studied HZRGs (van Breugel et al. 1998); the radio galaxy PKS 1138-262 at z=2.156, which was studied by our group (Pentericci et al. 1997); MRC 0406-242 at z=2.44, which was the object of a multi-frequency study, including WFPC imaging in different color bands, by Rush et al. (1997); and 4C 23.58 at z=2.95 (Chambers et al. 1996a and 1996b). The first two objects were imaged with the WFPC2 camera, while the last two were imaged with the pre-refurbishment HST/WFPC. Details of the observations can be found in the mentioned papers. In this way the final sample available for the statistical analysis of the properties of HZRGs consists of 15 galaxies.
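For reference, the linear sizes quoted throughout the paper follow from the angular-diameter distance in the adopted cosmology. Below is a minimal sketch of the angular-to-physical conversion, assuming a flat, matter-dominated universe (Ω = 1, Λ = 0, i.e. q0 = 0.5) with H0 = 50 km s−1 Mpc−1; the redshifts are illustrative examples from the sample, and the exact sizes quoted later depend on the measured angular extents of the sources.

```python
# Sketch: physical scale (kpc per arcsec) in an Einstein-de Sitter universe
# (Omega_M = 1, Lambda = 0, i.e. q0 = 0.5), for the H0 adopted in this paper.
import math

C_KM_S = 299792.458                      # speed of light [km/s]
H0 = 50.0                                # Hubble constant [km/s/Mpc]
ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)

def angular_diameter_distance_mpc(z):
    """D_A = (2c/H0) * [1 - (1+z)**(-1/2)] / (1+z) for an EdS cosmology."""
    return (2.0 * C_KM_S / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z)) / (1.0 + z)

def kpc_per_arcsec(z):
    """Physical length subtended by one arcsecond at redshift z."""
    return angular_diameter_distance_mpc(z) * 1000.0 * ARCSEC_IN_RAD

for z in (2.336, 2.879, 3.570):          # example redshifts from the sample
    print(f"z = {z}: {kpc_per_arcsec(z):.1f} kpc/arcsec")
```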
Because the sample also includes radio galaxies that were imaged with the pre-refurbishment HST and/or for which the total integration times are considerably different (e.g. the observations of 4C 41.17 are much deeper than those of the other objects), the quality of the images varies within the sample. However, given the relatively small number of radio galaxies observed, it is important to increase the statistics. In addition to the HST images, all the radio galaxies in the final sample have been imaged with the VLA at several frequencies to study their radio-polarimetric properties, and have Lyα profiles taken with a resolution of < 100 km s−1, thus allowing a detailed study of the morphology and kinematics of the ionized gas. For some of the radio galaxies, ground-based narrow-band images of the Lyα emitting gas and broad-band images in various color bands (mostly R-band and K-band) are also available (see references for individual objects in Sect. 5). Table 1 summarizes the observations. Nine radio galaxies were imaged with the Planetary Camera (PC) of WFPC2 during Cycle 5 and/or Cycle 6. The PC utilizes an 800 × 800 pixel Loral CCD as detector, with a pixel size of 0.0455′′ (Burrows 1995). The typical exposure time was 5300 sec (2 orbits) for each galaxy. The total observing time was split between two exposures to facilitate removal of cosmic ray events. The filters used for the observations were chosen to avoid contamination from the strong Lyα emission line at 1216Å and to have the rest-frame wavelengths sampled as similarly as possible throughout the sample. For the radio galaxies at redshift z > 2.9 the filter used was the broad-band F702W filter (centered at λ0 = 6868Å and with a FWHM of ∆λ = 1382Å), similar to the Cousins R band; for the lower redshift galaxies we used the broad-band F606W filter (λ0 = 5934Å and ∆λ = 1498Å), which is similar to the V band. The radio galaxy TX 0828+193 was observed during Cycle 4 with the WFPC2 by Chambers et al., using the filter F675W, which is centered at λ0 = 6756Å and has a FWHM of ∆λ = 865Å. The observations were done in polarimetric mode. The galaxy was observed using the WF3 section of WFPC2, which utilizes an 800 × 800 pixel Loral CCD as detector with a pixel size of 0.1′′ (Burrows 1995). The total exposure time of 10000 s was split between ten observations. The radio galaxy B2 0902+343 was observed during Cycle 4 by Eisenhardt and Dickinson, using the PC of WFPC2 with the filter F622W, which is centered at λ0 = 6189.9Å and has a FWHM of ∆λ = 916Å. The total exposure time of 21600 s was split between nine observations. In the redshift range observed, the continuum emission may include a contribution from the faint emission lines of HeII, CIII] and CIV. For most of the radio galaxies we could estimate the total contamination using the line fluxes measured from low-resolution spectra of the objects. The detected lines are listed in Table 1, as well as the total contribution of the line emission to the measured flux, which ranges from 0 to 13.7%, with the highest contribution for the radio galaxy TX 0211-122. We expect that for the 4 sources for which no such data are available, the line contribution will be in the same range. Therefore we can assume that the images represent, to a good approximation, the continuum emission from the galaxies. Data Processing The data were reduced according to the standard Space Telescope Science Institute pipeline (Lauer 1989).
Further processing was performed using the NOAO Image Reduction and Analysis Facility (IRAF) software package and the Space Telescope Science Data Analysis System (STSDAS), and involved cosmic ray removal and registering of the images. The shifts were measured from the peak positions of a non-saturated star present in both the PC images. The different frames were then added, and background subtraction was performed using the average flux contained within 4 or more apertures placed on blank areas of the sky, as close as possible to the source and at different positions, to avoid introducing errors from residual gradients in the background flux. The resulting image was flux calibrated according to the precepts described in the "HST Data Handbook" (1995 edition), using the photometric parameters from the standard HST calibration included in the file header. The images were then rotated to superimpose them on the VLA radio maps (see Sect. 4.1). The magnitudes were computed from the unrotated images (which have less smoothing) within a fixed aperture of diameter 4′′. In most cases this aperture is large enough to enclose all the light from the galaxies. The magnitudes were computed as m = −2.5 log10(F) + M(0), where F is the measured flux and M(0) = 21.1 is the zero point for the HST magnitude scale normalized to Vega. The results are presented in Table 1. A number of different effects contribute to the errors in the photometric magnitudes: (i) the Poisson noise of the detected counts; (ii) a ∼2% uncertainty in the determination of the zero point (Burrows 1995); (iii) a ∼4% systematic error due to the problem of charge transfer efficiency in the Loral CCD (Holtzman et al. 1995), for which we did not correct; (iv) the accuracy of the subtraction of the mean sky background; and (v) sky noise within the source aperture. The last two are usually the predominant effects. We estimate that the total uncertainty in the magnitudes is 0.1 mag or less for all galaxies. First-order transformations from the F606W and F702W ST magnitudes to the standard magnitude system were derived by applying the precepts described by Holtzman et al. (1995). In Table 1 we list for each galaxy the WFPC magnitude m, the emission lines that have been detected within the filter band, and the total line contribution to the continuum flux. Radio imaging All the radio galaxies, with the exception of B2 0902+343, TX 1707+105 and MRC 2104-242, were imaged with the VLA as part of a high resolution, multi-frequency radio polarimetric study carried out on a large sample of HZRGs by Carilli et al. (1997). A full description of the observations and the reduction procedure can be found in that paper. The radio map of B2 0902+343 that we use in this paper is a high resolution (0.15′′) radio continuum image of total intensity at 1.65 GHz obtained by Carilli by combining data from the VLA and MERLIN (see Carilli 1995 for details). The radio observations of TX 1707+105 and MRC 2104−242 were performed with the VLA in B array. Details of the observations and reduction for both sources can be found in Appendices A and B. Relative astrometry The coordinate frame for the WFPC2 images determined from the image header information has uncertainties of the order of 1′′ (Burrows 1995).
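Before turning to the registration of the radio and optical frames, the aperture-magnitude calculation and error budget described above can be sketched as follows. This is only an illustration: the flux value and the Poisson and sky error terms below are placeholders, while the zero point (21.1) and the 2% and 4% systematic terms are the values quoted in the text.

```python
# Sketch of the aperture magnitude m = -2.5 log10(F) + 21.1 and of the
# photometric error budget, with independent fractional errors combined
# in quadrature and converted to magnitudes.
import math

ZEROPOINT = 21.1                          # Vega-normalised zero point quoted above

def aperture_magnitude(flux):
    """Magnitude of the background-subtracted flux F measured in the 4-arcsec aperture."""
    return -2.5 * math.log10(flux) + ZEROPOINT

def magnitude_error(fractional_errors):
    """Quadrature sum of fractional flux errors, converted to a magnitude error."""
    total = math.sqrt(sum(e * e for e in fractional_errors))
    return 2.5 / math.log(10.0) * total   # ~1.086 * (dF/F)

flux = 3.2e-2                             # placeholder flux in the calibrated header units
errors = [0.03,                           # placeholder: Poisson noise of the counts
          0.02,                           # zero-point uncertainty (quoted)
          0.04,                           # charge-transfer-efficiency systematic (quoted)
          0.05]                           # placeholder: sky subtraction + sky noise
print(f"m = {aperture_magnitude(flux):.2f} +/- {magnitude_error(errors):.2f}")
```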
Since the optical galaxies are generally clumpy on a scale smaller than 1′′, it is important to obtain the best possible registration between the radio and the optical images, to allow a detailed intercomparison between the emissions. In overlaying the HST images with the VLA radio images we made the following assumptions: for those sources showing a clear detection of the radio nucleus and for which good K-band (or Ks-band) images existed (McCarthy, private communication), we assumed that the peak position of the infrared image would be a better indicator of the true location of the center of the host galaxy than the peak of the HST image, since the UV continuum might be affected by dust extinction (e.g. de Koff et al. 1996). We therefore identified the position of the radio core in the VLA image with the peak position of the K-band image. Finally, we registered the HST frame and the infrared frame using the weighted positions of several stars which were present in both fields; this can be achieved with an accuracy of 0.1′′, which is then the total final uncertainty in the relative astrometry. This procedure was possible for the radio galaxies TX 0211-122, 4C 1243+036, MRC 2025−218 and MRC 2104−242. For those objects which had a clearly detected radio core but no K-band images, we associated the peak position of the HST image with the peak position of the radio emission. We followed this procedure for the radio galaxies 4C 1345+245, 4C 1410-001 and TX 0828+193 (for this last object see the remarks made in the individual source description): these objects have a relatively simple morphology, hence it is reasonable to assume that the peak of the UV continuum represents the true nucleus of the galaxy; the final uncertainty of the relative astrometry is then within a pixel, i.e. ∼0.05′′. For those objects which have no detected radio core (MRC 0943−242, TX 1707+105 and MG 2141+192) we used the HST absolute astrometry, and then checked the peak positions of several stars which were present on the WFPC2 frames against the positions given in the APM catalog; with this method we achieved an accuracy of ∼0.8′′. Finally, for B2 0902+343, which has a radio core but no clear optical nucleus, we kept the natural HST astrometry: in this way the radio core falls in between the two optical peaks. This is consistent with what was found e.g. by Carilli (1995). Individual source description Grey scale HST WFPC2 images (smoothed with a Gaussian function of FWHM equal to 2 pixels) with VLA radio contours superimposed are shown in Figs. 1-10. For every source we also show a contour map of the continuum emission to better delineate the morphology. We do not show such maps for B2 0902+343 and MG 2141+192 because they have very low surface brightness, and a contour map would add no information. For the very large radio galaxies (namely TX 0211-122, TX 1707+105 and 4C 1410-001) we also present a third image showing the complete field of the radio source. The objects are presented in order of increasing radio size, since it has been shown (e.g. van Ojik 1995) that several properties of HZRGs tend to change with increasing radio size. We shall now give brief descriptions of the ultraviolet morphology of each radio galaxy, with special emphasis on any peculiar characteristics (such as distortions, jet-like features, etc.), and compare these with relevant previous results. 4C 1345+245 This radio source at z=2.879 (Chambers et al.
1996a) is the smallest in the sample, being only 2′′ in extent (corresponding to 17 kpc in the adopted cosmology). The radio structure has been extensively studied with the VLA at several frequencies by Carilli et al. (1997), who classified it as a "compact steep spectrum" (CSS) source, and by Chambers et al. (1996b). The radio emission shows two lobes of roughly equivalent brightness, with a one-sided feature extending from the core towards the eastern side, which has been identified as a jet. Optical and infrared ground-based observations show a compact object, with the emission extended along the radio axis, and one faint component or companion object along the radio axis to the southwest but beyond the radio lobe (Chambers et al. 1996a). The new HST image shows that the UV continuum emission has a bright compact nucleus. On the eastern side of this component there is a jet-like feature that follows remarkably well the small curvature of the radio jet: this suggests that we might be observing the optical counterpart of the radio jet. However, the radio-to-optical spectral index derived from the flux of the component (0.7) is completely different from the high-frequency radio spectral index (-1.2). Such flattening of spectral indices into the optical is contrary to what is found for sources with observed optical synchrotron radiation (e.g. Meisenheimer et al. 1989). Therefore we discard this possibility. A more likely interpretation is that star formation is taking place in that region, triggered by the passage of the radio jet. Other possible mechanisms to enhance the emission along the radio jet path have been proposed by Bremer et al. (1997). In Sect. 6.1 we will discuss the alignment effect more extensively, and how the various models that have been proposed to explain it apply to our sample of radio galaxies. The object along the radio axis detected by Chambers et al. (see above) is also detected in our HST image (it is outside the field shown in Fig. 1); its morphology indicates that it is most probably an edge-on spiral (hence a foreground object). MRC 0943-242 This radio source at z=2.923 is only 29 kpc in extent and has a simple double morphology, with no nucleus detected in the present VLA images. The HST image shows a bright elongated main component, plus a number of smaller clumps embedded in a halo of lower surface brightness emission with a peculiar overall curved morphology. The inner region of the UV emission shows a remarkably good alignment (within 10°) with the radio axis. For comparison, the Keck K-band image taken by van Breugel et al. (1998) shows a somewhat rounder and more centrally concentrated morphology. High resolution spectroscopy of the Lyα line shows spatially resolved absorption by associated neutral hydrogen, with the absorber covering the entire extended Lyα emission. B2 0902+343 This radio galaxy was identified by Lilly (1988) and is one of the most extensively studied high redshift radio galaxies. It is 32 kpc in extent.
The radio emission has a bizarre structure, showing a bright knotty jet with a sharp bend of almost 90° at its northern end, and two southern components whose common orientation is perpendicular to the rest of the source (Carilli et al. 1994). Further multi-frequency radio studies led Carilli (1995) to conclude that most of the peculiarities of the radio galaxy can be explained by assuming that the source is oriented at a substantial angle (between 45 and 60 degrees) with respect to the plane of the sky, with the northern regions of the source approaching, and that the central region of the galaxy is obscured by a substantial amount of dust. From extensive studies Eisenhardt & Dickinson (1992) found that B2 0902+343 has a flat optical spectral energy distribution (SED) and an unusually low surface brightness distribution at optical and IR wavelengths; this led to the suggestion that B2 0902+343 might be a proto-galaxy undergoing a first major burst of star formation (Eales et al. 1993, Eisenhardt & Dickinson 1992). The presence of associated 21 cm neutral hydrogen in absorption against the radio continuum source was first detected by Uson et al. (1991) and confirmed by others (e.g. de Bruyn 1996). However, no strong absorption in the Lyα emission line has been detected (Martin-Mirones et al. 1995). The optical morphology, as imaged by the HST, confirms the unusually low surface brightness distribution and shows that the galaxy consists of two regions of approximately the same flux with a void in between, plus an extended fuzzy emission region to the north-east of them. The source does not exhibit the radio-optical alignment effect; the UV emission is almost perpendicular to the radio axis. With the present astrometry the radio core is situated in a valley between the optical peaks; this morphology could be explained by the presence of large amounts of dust. However, the uncertainties in the astrometry are such that the radio core could be coincident with either of the two optical components. MRC 2025-218 The galaxy associated with this USS radio source at z=2.630 (38 kpc in extent) was first identified by McCarthy et al. (1990). Deep multi-frequency radio imaging shows a double radio source with a jet on the southern side of the core, which has an extremely sharp bend towards the west, making an angle of ∼90°. The northern lobe has a faint extension in the direction of the core which could be a counter-jet. Ground-based near-infrared imaging shows a compact object (van Breugel et al. 1998), while the Lyα emission extends for more than 5′′ along the radio axis and is distributed bimodally. The total SED of the galaxy is well fit by a main stellar population aged 1.5 Gyr, combined with a young starburst contributing 20% of the total light at 5000Å (McCarthy et al. 1992). Cimatti et al. (1993) find that the rest-frame UV continuum emission is linearly polarized (P = 8.3 ± 2.3%), with the electric vector oriented perpendicular to the UV emission axis. The HST image shows that the host galaxy has a compact morphology, consisting of a bright nucleus, two smaller components and extended low surface brightness emission, which is elongated and well aligned with the radio axis. The angle between the inner radio axis and the extended UV emission is only ∼5° ± 3°.
There is no direct oneto-one relation between the radio components and the UV emission, unlike 4C 1345+245; however if we draw a cone of opening angle ∼ 30 • along the radio axis, all the UV emission on both sides of the radio core is then constrained within this cone. Such a morphology, reminiscent of an ionization cone, is expected in models where the aligned UV continuum emission is scattered light of a buried quasar, and is supported by the polarization measurements by Cimatti et al. (1993). The present HST image reveals little UV emission near the bend: however high resolution spectroscopic observations of the Lyα emission line show that the galaxy is embedded in a very large halo of ionized gas, extended well beyond the radio source (more than 60 kpc i.e. double the size of the radio source); therefore the most likely explanation for the bend is that interaction between the radio plasma and the surrounding gas deflects the jet, as observed in other cases (e.g. Pentericci et al. 1997). 4C 1243+036 This radio galaxy at z=3.570 which has an extension of 50 kpc, was identified and extensively studied by van Ojik et al. (1996). The radio source is double with a sharp bent structure on the southern side. Strong depolarization of the radio emission indicates that the source is embedded in a magneto-ionic medium. High resolution spectroscopy and narrow band imaging of the Lyα emission line have detected the presence of a giant (100 kpc) halo of ionized gas showing ordered motion, possibly due to rotation of a proto-galactic gas disk, out of which the galaxy associated with 4C 1243+036 is forming. Furthermore the Lyα emission shows a secondary peak at the location of the bending of the radio jet, consistent with a gas cloud being responsible for the deflection of the radio jet (Ojik et al. 1996). The morphology of the galaxy as imaged by the HST consists of a nucleus from where a narrow and elongated structure departs, which then bends to the south. There is also a smaller component, about 1 ′′ beyond the northern radio hot-spot, which could belong to the system, since narrowband Lyα imaging shows that there is Lyα emission at this location (van Ojik et al. 1996). The most remarkable characteristic of 4C 1243+036 is that the UV light follows closely the direction of the radio source, both in the inner 2 ′′ region where the light is aligned with the radio axis to within 15 degrees, but especially at the location of the bend: here both the UV emission and the radio jet bend rather sharply to the south, suggesting a direct relation between the radio jet and the UV component. This is similar to the case of the radio galaxy 4C 1345+245. Note that recent K-band Keck imaging of 4C 1243+036 by van Breugel et al. (1998), although at a different resolution, indicate that also the K-band continuum emission is elongated and follows the bend of the radio jet. MG 2141+192 This galaxy at z=3.594 (60 kpc in extent) was identified by Spinrad et al. (1992) and since then has been extensively studied by various groups. The radio source has a simple double morphology, with no nucleus detected in the present images. Eales & Rawlings (1996) who imaged this object in the infrared, report the detection of a relatively brighter component half way between the radio hot-spots and a second fainter one, 4 ′′ north, approximately coincident with the northern radio hot-spot. Recently van Breugel et al. 
(1998) re-imaged the object in the near-infrared with the Keck telescope, finding additional extended low surface brightness emission south of the nucleus. Armus et al. (1998) imaged the [OIII] emission line nebula associated with the galaxy, which has an extent of more than 70 kpc (equal to the separation between the radio lobes), is extremely narrow and is aligned with the radio axis. By comparing fluxes of the different emission lines they also find indications for the presence of large amounts of dust. Finally Maxfield et al. (1997) find that the emission nebulae of Lyα, CIV and HeII are not only spatially extended but also have remarkable velocity structure, with multi-component velocity displacements up to 1900 km s−1, which are most consistent with a shock ionization picture. The HST image shows that the host galaxy is very faint in the rest-frame UV, and consists of a nucleus with a faint filamentary extension and a small clump to the west. In the HST image some fuzzy emission (at a 3σ level) is present near the position of the radio hot-spot, where the second infrared component is located. We also detect similar emission very close to the position of the southern radio component. Overall the UV rest-frame emission is extremely faint, consistent with the presence of large amounts of dust. Deeper images are needed to delineate the morphology of this galaxy in more detail.

TX 0828+193
This large radio source (98 kpc in extent) at z=2.572 (van Ojik 1995) has a double morphology with a jet extending from the core towards the northern hot-spot (most probably the approaching side). The end of the jet contains multiple hot-spots and has a 90-degree bend. The southern part of the radio source consists only of a single hot-spot. The HST image shows a small galaxy consisting of several clumps arranged in a triangular shape. We choose to identify the radio core with the brightest optical component (the same procedure we followed for other galaxies, see Sect. 4); however another possible registration would place the radio core at the vertex of the triangle. At this position, an ionization cone on both sides of the radio core would encompass all the UV emission. The morphology of TX 0828+193, like that of MRC 2025−218, strongly suggests that a large fraction of the UV light might be scattered light from a buried AGN. The axis of this "scattering cone" is aligned with the radio axis to within a few degrees (7° ± 3°). There is another object located along the radio axis which could be associated with the radio source (a companion galaxy): it is bright in the UV continuum but shows no line emission, so it could as well be an intervening system at a different redshift. The Lyα emission from this radio galaxy has a spectacular shape, with the entire blue wing of the emission line profile absorbed by neutral gas associated with the galaxy. If the companion object is at the same redshift as TX 0828+193, then it is possible that a neutral gaseous halo associated with it is responsible for the absorption. Since the absorption is very steep and broad, it is probably due to a combination of absorbing systems, each at a slightly different velocity with respect to the Lyα peak. Also in the red wing of the Lyα profile a broad shoulder is observed that may be due to multiple HI absorption systems or to intrinsic velocity structure in the ionized gas.

TX 0211-122
This large radio source (134 kpc) at z=2.336 (van Ojik et al. 1994) has a simple double morphology.
A jet feature extends from the core towards the south, curves, and reaches the eastern lobe; this structure suggests that the radio axis might be precessing. The galaxy, as shown in the HST image, consists of a bright nucleus and a much smaller clump, both embedded in lower surface brightness emission distributed in an irregular way. The contour image of the central component shows that it consists of two "tails", one of which points in the direction of the inner radio jet. The optical spectrum of this source is peculiar, with the Lyα emission being anomalously weak when compared to higher ionization lines: the flux ratio of Lyα to NV is a factor of 30 smaller than that of typical HZRGs, while the large NV/CIV ratio indicates that the line-emitting gas is overabundant in nitrogen (van Ojik et al. 1994). Van Ojik et al. consider various mechanisms that could produce these features, and conclude that the galaxy is likely to be undergoing a massive star-burst in the central region, possibly as the result of the passage of the radio jet. The starburst would produce large amounts of dust, which, when mixed through the emission line gas, partly absorbs the Lyα emission, giving it a very patchy morphology, while the enhancement of nitrogen emission could be produced either by shocks or by photo-ionization.

TX 1707+105
This radio source at z=2.349 (van Ojik 1995), which is 173 kpc in extent, is one of the most peculiar systems in our sample: it consists of two galaxies (labeled A and B in Fig. 9), both showing strong and extended Lyα emission at the same redshift. The two objects lie almost exactly along the radio axis and they are both clumpy and elongated in a direction which is almost perpendicular to it. In particular galaxy 1707A (the brightest one) is comprised of a series of knots of approximately the same brightness, which form a sort of string, while galaxy 1707B consists of only two clumps. There is a further emission component, indicated as C in Fig. 9, that lies in between the two galaxies and could be part of the system. It does not show line emission, although probably when the high resolution spectrum was taken this object fell outside our 2′′ wide slit. With the present data it is not possible to determine exactly which galaxy is associated with the radio emission. Given the large extension of the source, we expect, for symmetry reasons, that the radio source is associated with the galaxy closest to the center, i.e. galaxy 1707A. If this is the case then 1707B, and possibly 1707C, would be companion galaxies located along the radio axis. There are many cases of companion galaxies of high and low redshift radio galaxies. The best known case is Minkowski's object: this dwarf galaxy is located at the end of the radio jet emanating from the radio galaxy PKS 0123−016 at z = 0.0181, suggesting that its origin is due to jet-induced star formation (van Breugel et al. 1985). A similar star forming region associated with the nearby powerful radio galaxy 3C285 has also been reported (van Breugel & Dey 1993). The most recent example is the radio source 3C34 (at z=0.69), which shows a clumpy emission feature along the radio axis, oriented towards a radio hot-spot. Also in this case, the emission has been associated with a region of massive star formation triggered by the passage of the radio jet (Best et al. 1997).
Finally, it has been found that companion galaxies of radio sources tend to be distributed along the direction of the radio axis, which could be due to the luminosity of merging dwarf galaxies being enhanced by scattering and/or jet-induced star formation.

MRC 2104-242
This radio galaxy at z=2.491 is 177 kpc in extent and was first identified by McCarthy et al. (1990). It has a simple double morphology and a relatively bright nucleus. The Lyα emission is spectacular: narrow-band imaging shows two large gas clumps, extending for more than 12′′ along the radio axis. Spectroscopy of the line showed that both components have very large velocity distributions (∼1000-1500 km s−1) and large equivalent widths, with a net velocity difference of about 500 km s−1. Each component contains multiple velocity peaks, and kinematic data at various position angles indicate that there is no overall ordered motion (McCarthy et al. 1990, Koekemoer et al. 1996). A detailed study of the Lyα emission line showed that a model based on shocks from direct interaction between the radio plasma and the gas can explain both the kinematics and the morphology of the gas (Koekemoer et al. 1996). The HST image is remarkable: the host galaxy is one of the clumpiest of our sample, consisting of a number of knots of similar brightness and size, located around the radio core. Unfortunately some of the components are confused with the residuals from a spike of an extremely bright nearby star. Furthermore there is a filamentary component that is more than 2′′ long and extremely narrow. This last component is aligned with the radio axis to within a few degrees. The overall extension of the host galaxy is almost 7′′, making it the largest optical galaxy in our sample.

4C 1410-001
This radio galaxy at z=2.363 is, with 189 kpc, the largest radio source in the sample. The host galaxy is highly elongated (≃5′′). It consists of a compact nucleus, a second bright component and extended, clumpy, lower surface brightness emission. The galaxy and the radio source are strongly misaligned: the angle between the optical and radio axes is nearly 45 degrees. However, the northern component of the radio source is curved, suggesting the radio axis might be precessing, in which case the elongated optical emission could be located along the previous path of the radio jet. The galaxy has extended (∼80 kpc) bright Lyα emission, exhibiting a velocity shear that could be due to rotation of the gas. The amplitude of this shear is almost equal to the overall velocity width of the line.

Radio optical alignment
The UV-optical continuum emission from HZRGs is generally aligned with the main axis of the radio emission; several models have been proposed to explain the nature of the optical continuum emission and of this alignment effect (for a review see McCarthy 1993 and references therein). The most viable ones are: (i) star formation stimulated by the radio jets as they propagate outward from the nucleus (Chambers et al. 1987, McCarthy et al. 1987, de Young 1989, Daly 1990); (ii) scattering of light from an obscured nucleus by dust or free electrons (di Serego Alighieri et al. 1989; Scarrott et al. 1990; Tadhunter et al. 1992; Cimatti et al. 1993; di Serego Alighieri et al. 1994); (iii) nebular continuum emission from warm line-emitting clouds excited by the obscured nucleus (Dickson et al. 1995).
So far the only HZRG for which there is direct spectroscopic evidence that the UV continuum clumps are star forming regions, not dominated by scattered light, is 4C41.17: the spectrum of this galaxy shows absorption lines and P-Cygni profiles similar to those found in the spectra of high redshift star forming galaxies. Until recently, polarization measurements were possible only for z∼1 radio galaxies and showed that in most cases a large fraction of the UV continuum emission could be explained as scattered light. Recently though, observations of z ≥ 2 radio galaxies have led to quite contradictory results: while some objects show considerable amounts of polarization (e.g. Cimatti et al. 1997), others show a complete absence of polarization. The HST data, with their high resolution, provide information about the inner regions of HZRGs, and confirm that the radio/optical alignment is still present at scales of less than an arcsecond. In Fig. 12 we plot the distribution of position angle differences (∆Θ) for our sample.

Fig. 12. Distribution of the differences in position angles between the radio emission and the UV continuum emission, measured in the inner 3′′ region of the radio galaxies. The histogram includes data from the enlarged sample (see text).

To determine the optical position angle (PA), for the galaxies with a more regular morphology we smoothed the HST images with a Gaussian function having a FWHM of 1′′ and then fit the inner 3′′ region with ellipses, using the IRAF package ISOPHOTE, which also gives the orientation of the major axis of the ellipse. For the galaxies with irregular morphologies, the fits gave meaningless results, so we selected as optical axis the line passing through the two brightest peaks on the images. The position angle of the radio emission is given by the line joining the radio core to the nearest hot-spots (or the line joining the hot-spots, if the core is not detected). Despite the fact that 13 out of 15 radio galaxies in our sample have ∆Θ ≤ 45°, we notice that the properties of the alignment effect vary considerably from object to object. We can distinguish various groups: (i) Radio galaxies that show a remarkable one-to-one relation between radio emission and UV continuum light: this includes 4C 1345+245, which has an optical jet-like feature, and 4C 1243+036, where the UV light follows the bending of the radio jet. These structures can be easily explained by the jet-induced star formation models (see references above). Alternatively Bremer et al. (1997) proposed a mechanism by which, when the radio jet passes through the gas clouds, it breaks them apart, thus increasing the surface area of cool gas exposed to the ionizing beam. Consequently the material along the jet path becomes a far more efficient scatterer of nuclear radiation and the UV emission is enhanced in a very narrow region. (ii) Radio galaxies in which the UV continuum emission has a triangular-shaped morphology, reminiscent of an ionization cone. This category includes TX 0828+193, MRC 2025−218 and MRC 0406-242. Such morphologies are expected in models that consider the aligned optical continuum as being scattered light of a central buried quasar. (iii) Radio galaxies where the alignment between the optical morphology and the radio axis is good but there is no one-to-one relation between radio and UV components. This group includes MRC 0943−242, MRC 2104−242, 4C 1410−001, TX 0211−122, MG 2141+192, PKS 1138−262, 4C28.58 and 4C41.17.
The degree to which the two components are aligned varies strongly even within this group: for example in the radio galaxy 4C 1410-001 the difference in position angles between the radio and the UV emission is 45°; however the radio map indicates that the radio jets probably had a different direction in the past, corresponding to the direction along which the UV light is elongated. (iv) Finally, galaxies that show total misalignment between radio and optical emission. There are two such cases in our sample. First the radio galaxy B2 0902+343 (Fig. 3), where dust could play an important role in obscuring the central regions (Eales et al. 1993, Eisenhardt & Dickinson 1992), thus "masking" the alignment effect. Second, the extremely peculiar and complex system TX 1707+105 (Fig. 9), which is comprised of two (possibly three) separate galaxies with similarly strong Lyα emission: the galaxies are located along the radio axis, but they are clumpy and extended almost perpendicularly to the radio axis. This unusual morphology would be hard to explain just by invoking the presence of dust, since the dust would need an extremely complicated distribution in multiple lanes parallel to the radio axis. In summary the new data confirm that there is no single model that can satisfactorily explain the optical morphology of all HZRGs and the nature of the aligned optical continuum emission. At the same time, none of the proposed models can be ruled out by the present data. Therefore it seems likely that all three mechanisms contribute to the aligned light, but their relative importance varies greatly from object to object.

Clumpiness of the optical emission
A striking feature of the HST images of the radio galaxies is the widespread clumpiness of the optical continuum emission. Most galaxies are comprised of several components, regardless of whether they are aligned with the radio axis; the clumps are resolved and their typical sizes are in the range 2-10 kpc. To give a consistent definition of "clumpiness" we proceeded in the following way: since the size of our sample is small, and for the faintest galaxies it is difficult to delineate the structures, we first normalized the total observed flux of each galaxy (within a fixed aperture), taking the faintest and most distant galaxy MG 2141+192 as reference. We then defined the parameter n as the number of components which have at least one contour at a flux level of 4.4 × 10−19 (1 + z)−4 erg cm−2 s−1 Å−1. This value was chosen so that the radio galaxy MG 2141+192 had 3 clumps. Note that, despite the difference in rest-frame frequencies sampled by the observations (see Table 1), this is a good approximation because the spectral energy distribution of HZRGs in the UV wavelength range (1300-2000 Å) is generally flat. In Fig. 13 we present a plot showing how n, our measure of clumpiness, varies with radio size for all the radio galaxies in the sample.

Fig. 13. Number of optical clumps of the galaxies versus total radio source size (in kpc).

Clearly there is a tendency for the larger radio sources to have a clumpier optical continuum: the sources with radio sizes greater than ∼80 kpc have on average more than twice as many clumps as the smaller radio galaxies. A Spearman rank correlation test gives a significance level of 95% for this correlation.
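For illustration, the clump-counting threshold and the rank correlation quoted above can be reproduced with a short script. This is a minimal sketch: the helper names and the size/clump arrays are placeholder values, not the measured data behind Fig. 13.

```python
import numpy as np
from scipy.stats import spearmanr

# Flux threshold used to count UV clumps: 4.4e-19 * (1+z)^-4 erg cm^-2 s^-1 A^-1,
# i.e. a fixed rest-frame level corrected for (1+z)^4 cosmological dimming.
def clump_threshold(z, f0=4.4e-19):
    return f0 * (1.0 + z) ** -4

def count_clumps(component_fluxes, z):
    """Number of optical components with at least one contour above the threshold."""
    return int(np.sum(np.asarray(component_fluxes) >= clump_threshold(z)))

# Example with made-up component fluxes for a z=2.572 source:
print(count_clumps([1e-18, 3e-19, 5e-20], z=2.572))

# Placeholder data (NOT the measured values): radio sizes in kpc and clump counts n.
radio_size_kpc = np.array([38, 50, 60, 98, 134, 173, 177, 189])
n_clumps = np.array([2, 3, 3, 5, 4, 8, 9, 6])

rho, p_value = spearmanr(radio_size_kpc, n_clumps)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # p of ~0.05 corresponds to the quoted 95% level
```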
A possible explanation for this trend is that the medium around the hosts of powerful AGN is dense and clumpy on a scale of more than 100 kpc; as the radio sources expand through the gas, they light up more and more material, either by triggering star formation in the gas clouds or by enhancing the scattering properties of the material in the vicinity of the jets. This result is contrary to that found by Best et al. (1996) for a complete sample of z ≃ 1 3CR radio sources which have been imaged with the HST: they found that smaller radio sources tend to be comprised of a string of several knots, while larger radio galaxies generally consist of only two optical components. However note that the range of radio sizes of the z ≃ 1 3CR sample is three times as large as that of our sample.

Morphological evolution
Our sample covers a redshift range from z=2 to z=3.8, which corresponds to look-back times from 80% to 90% of the total age of the universe (for Ω = 1). This epoch is close to the epoch of formation of these HZRGs, therefore it is interesting to search for any evolution in the properties of the radio galaxies with increasing cosmic time. We follow a similar approach to that used by van Breugel et al. (1998) for a sample of powerful HZRGs observed with Keck in the near infrared, which corresponds to the rest-frame optical emission (> 4000 Å). Their sample is similar to ours, being comprised of a similar number of sources with the same radio power, but has a higher average redshift (z_av = 3.2 versus z_av = 2.8) and more galaxies having z ≥ 3. There are 6 radio galaxies common to both samples. In Fig. 14 we present the results for our sample of HZRGs: the left plot shows how the radio/optical size ratio varies with redshift; the radio sizes are measured as the distances between the most distant hot-spots on either side of the nucleus, while the optical lengths are defined to be the maximum extension of the optical emission in the direction of the radio source. In cases of multiple systems, such as PKS 1138-262 and TX 1707+105, all the optical components were considered, so that the radio/optical size ratio gives an indication of how much emission there is within the radio source extension. The plot indicates that there is no significant evolution in the ratio of radio to optical size. If we divide the sample into two redshift ranges, the average radio/optical size ratio is 3.2 for the highest redshift bin (z ≥ 2.9) and 3.4 for the lowest redshift radio galaxies, so the difference is negligible. This is different from the result of van Breugel et al. for the infrared emission: they present marginal evidence that the hosts of z ≥ 3 radio galaxies are comparable in size with the radio sources, while the z ≤ 3 radio sources appear systematically larger than the host galaxies. In the right plot of Fig. 14 we show how the strength of the radio-optical alignment, represented by the difference in position angle between the optical and the radio emission, ∆Θ (see Sect. above for the definition), varies with z. Again there is no significant difference between the lowest redshift radio galaxies, which have an average PA difference of 20° ± 7° (see footnote 1), and the highest redshift sources, which have an average of 18° ± 8°. On the contrary van Breugel et al. find a strong evolution in the alignment of the host galaxies from z ≥ 3 to z ≤ 3: specifically the infrared morphologies become smoother and less elongated at z ≤ 3 and the infrared/radio alignment strength decreases.
The best interpretation is that, while for the lower redshift sources in their sample the near-IR emission is dominated by the most evolved stellar population (which is less affected by the presence of the radio jets), for the very high redshift galaxies the observed near-IR emission starts to be dominated by young stars, probably formed following the passage of the radio jets. On the other hand, our HST observations sample the UV rest-frame emission, which is thought to be dominated by the younger stellar populations in all cases, regardless of redshift. These young hot stars are formed in subsequent small bursts, induced either by the interaction of the jets with the medium or by mergers of smaller subunits. Such events may involve only small amounts of mass, but can still produce remarkable UV morphologies (e.g. 1138-262; Pentericci et al. 1998), and their frequency is not expected to change from redshift 4 to 2. Therefore we do not expect any strong evolution in the UV rest-frame properties of the radio galaxies in this redshift interval. (Footnote 1: We preferred not to include the radio source TX 1707+105 in calculating the average PA of the low redshift group, because for this source the PA of the single galaxy 1707+105A (∼69°) is extremely different from the PA of the whole system of three galaxies, PA ∼ 13°.)

6.4. The formation of brightest cluster galaxies?
It is interesting to compare the morphologies of our high redshift radio galaxies with those of the high redshift galaxies which have been recently discovered with UV dropout techniques and extensively studied by various groups, also with the HST (see for example Steidel et al. 1996, Williams et al. 1996). As pointed out in Sect. 6.1, a fraction of the UV continuum emission of HZRGs is directly connected, through various possible mechanisms, to the presence of the AGN. Also some of the features that we see in the galaxies can be easily explained by a direct correlation with the radio jets, for example the narrow elongated structures seen in 4C 1243+036 and MRC 2104−242 and the jet-like feature observed in 4C 1345+245. However in other cases there is a striking similarity between the individual components of the radio galaxies and the population of UV dropout galaxies, which clearly favors a stellar origin for the emission coming from those clumps. Particularly in some of the clumpiest and most extended radio galaxies, such as TX 1707+105 and MRC 2104−242, there are components that have a compact and regular morphology, with sizes of the order of a few kpc, resembling those of the high redshift radio-quiet galaxies. In a previous paper we made a detailed inter-comparison between the clumps that are observed around the radio galaxy 1138−262 and the UV dropout galaxies: the conclusion was that those components had characteristics similar to the UV dropout galaxies, such as absolute magnitudes, surface brightness profiles, half-light radii (∼2 kpc) and inferred star formation rates (5-10 M⊙ yr−1 per clump; Pentericci et al. 1998). Also 4C41.17 has a similar very clumpy morphology with compelling evidence that the clumps are star forming regions. However we must note that it is not yet possible to determine the masses of either of the two classes of objects (UV dropouts and radio galaxy clumps); so it could well be that they are intrinsically different objects, with a similar amount of star-bursting activity that makes them look similar in the UV continuum emission.
It seems that at least some of the high redshift radio galaxies consist of a central large galaxy that hosts the AGN, and a number of small star forming subunits resembling the UV dropout galaxies, which are located in a region as large as ∼50−100 kpc around the radio source. Powerful radio sources would then pinpoint regions in which the density of star forming units is higher than average. The central host galaxies of radio sources may well have formed through merging of these small sub-galactic stellar systems. Note that the mergers of these gas-rich subunits with the host galaxies could have triggered (or re-triggered) the radio emission by providing fuel for the central engine of the AGNs, as is seen in many cases at low z (e.g. Osterbrock 1993). Our observations provide some qualitative support for hierarchical galaxy evolution models, which predict that the morphological appearance of galaxies during their formation period should be highly irregular and clumpy (e.g. Baron & White 1989). In particular semi-analytical models predict that one of the ways in which massive elliptical galaxies accrete their mass is through multiple merging of smaller subunits (Aragon-Salamanca et al. 1998, and references therein). A possible problem arises from the fact that in standard hierarchical cold dark matter models such massive systems are thought to form relatively late (Cole et al. 1994, Kauffmann et al. 1993), i.e. at much lower redshift, and in the majority of galaxies the main population of stars is formed more recently (after z = 1; Heyl et al. 1995). However, White & Frenk (1991) argue that a mechanism that could explain the formation of massive elliptical galaxies at an earlier epoch is over-merging of star-burst galaxies, and indeed, as we have reviewed in the introduction, there is now increasing evidence that high redshift radio galaxies are probably located in the over-dense regions of the early universe. Therefore we conclude that high redshift radio galaxies may be formed from aggregates of sub-galactic units, similar to the UV dropout galaxies, and will probably evolve into present day brightest cluster galaxies.

Summary and concluding remarks
In this paper we have presented new HST/WFPC2 images of 11 high redshift radio galaxies, all complemented with VLA radio maps of comparable resolution. The images reveal a wide variety in the morphology of the host galaxies of these high redshift radio sources: in particular most objects have a clumpy, irregular appearance, consisting of a bright nucleus and a number of smaller components. The number of clumps seems to increase with increasing radio size. The UV continuum emission is generally elongated and aligned with the axis of the radio sources; however the characteristics of the "alignment effect" differ greatly from case to case. The new data confirm that none of the proposed models can satisfactorily explain the phenomenon and that most probably the aligned continuum emission is a mixture of star light, scattered light, and nebular continuum emission. Our data show no significant evolution in the morphological properties over the redshift interval. Finally, we compare the properties of our radio galaxies with those of the UV dropout galaxies and conclude that high redshift radio galaxies might be forming from aggregates of sub-clumps similar to the UV dropout galaxies and that they will probably evolve into present day brightest cluster galaxies.
In a future paper we will present complementary HST/NICMOS data for an enlarged sample of high redshift radio galaxies. The new infrared observations will provide constraints on the age of the older stellar population of the host galaxies. With the high resolution we will be able to determine whether the older stellar population also shows significant clumpy sub-structure and to what extent the forming brightest cluster ellipticals are already assembled and relaxed.

A. Radio images of TX 1707+105
We present here multi-frequency maps of the radio galaxy TX 1707+105 obtained with the VLA in B array. Observations were made at 4.5 and 8.2 GHz, using two frequency channels each having a 50 MHz bandwidth, for total integration times of 700 s and 1020 s respectively. Data processing was performed using the Astronomical Image Processing System (AIPS) in the standard way. The system gains were calibrated with respect to the standard source 3C286. Phase calibration was performed using the nearby calibrator 1658+076. The antenna polarization response terms were determined using multiple scans of the calibrator 1850+284 over a large range in parallactic angle. Absolute linear polarization position angles were measured using a scan of 3C286. The calibrated data were then edited and self-calibrated using standard procedures to improve the dynamic range. Images of the three Stokes polarization parameters, I, Q and U, were synthesized, and all images were CLEANed down to a level of approximately 3 times the theoretical rms noise using the AIPS task IMAGR. The observations at the different frequencies were added in the image plane to produce the final maps of total and polarized flux. In Fig. 1 we show the maps at 4.5 GHz and 8.2 GHz (with resolutions of 1.2′′ and 0.7′′ respectively) of the total flux (left panels) and polarized flux (right panels). In all images contours are spaced in a geometric progression with a factor of 2, with the first contour level equal to 3σ, where σ is the off-source rms, which is 0.12 mJy for the 4.5 GHz map, 0.1 mJy for the 8.2 GHz map and 0.17 mJy for the polarized flux maps. The radio galaxy has a simple double morphology with no radio core detected in the present images. The two lobes are nearly symmetric in total radio brightness, but the northern hot-spot is totally depolarized at both frequencies, while the southern one is polarized.

B. Radio images of MRC 2104-242
In Fig. 2 we present maps of the radio galaxy MRC 2104−242 obtained with the VLA in B array at three different frequencies: 1.4 GHz, 4.5 GHz, and 8.2 GHz, with resolutions of 3.9′′, 1.2′′ and 0.7′′ respectively. In all images contours are spaced in a geometric progression with a factor of 2, with the first contour level equal to 3σ, where σ is the off-source rms, which is 1.74 mJy for the 1.5 GHz map, 0.19 mJy for the 4.5 GHz map and 0.05 mJy for the 8.2 GHz map. The radio source is a double showing fainter diffuse emission between the hot-spots and the core. The northern hot-spot is elongated in a direction that is different from the radio axis.

Fig 1. Images of total (left) and polarized (right) intensity for the radio galaxy 1707+105, at 8.2 GHz (upper panels) and 4.5 GHz (lower panels). The intensity contour levels are a geometric progression in 2^(1/2), which implies a factor 2 change in surface brightness every two contours. The surface brightness of the first level is, respectively, 0.14 mJy and 0.17 mJy for the 4.5 GHz maps, and 0.12 mJy and 0.17 mJy for the 8.2 GHz maps.
Fig 2. Images of total intensity for the radio source 2104-242, at 8.2 GHz (left), 4.5 GHz (center) and 1.4 GHz (right). The intensity contour levels are a geometric progression in 2^(1/2), which implies a factor 2 change in surface brightness every two contours. The surface brightness of the first level is, respectively, 0.05 mJy, 0.19 mJy and 1.7 mJy for the 8.2, 4.5 and 1.4 GHz maps.
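As a small illustration of the contouring scheme used in the radio maps above (a minimal sketch; the function name and the number of levels are ours, not an AIPS task or a prescription from the paper):

```python
import numpy as np

def contour_levels(sigma, factor=2.0, n_levels=8, first=3.0):
    """Contour levels in a geometric progression starting at `first` * sigma.

    factor=2 doubles the surface brightness at every contour (appendix text);
    factor=sqrt(2) doubles it every two contours (figure captions).
    """
    return first * sigma * factor ** np.arange(n_levels)

# Example: 4.5 GHz total-intensity map of TX 1707+105, off-source rms 0.12 mJy/beam
print(contour_levels(0.12))              # factor-2 spacing
print(contour_levels(0.12, np.sqrt(2)))  # sqrt(2) spacing
```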
Thermomechanical behavior of graphene nanoplatelets and bamboo micro filler incorporated epoxy hybrid composites

The present study is focused on the development of a micro bamboo filler/epoxy hybrid composite with the incorporation of varied weight percentages of graphene nanoplatelets (GNPs). To assess the effect of the inclusion of dual fillers, structural analysis by x-ray diffraction (XRD), morphological analysis by scanning electron microscopy (SEM) and thermomechanical analysis (TMA) are carried out. The micro bamboo and GNPs fillers are incorporated in the epoxy polymer to eradicate the problems of fiber alignment, delamination and anisotropy associated with natural and synthetic fibers in thermoset composite materials. Results revealed that the inclusion of graphene nanoplatelets together with bamboo filler in the epoxy composite improves the synergetic effect, which in turn increases the tensile strength, flexural strength, loss modulus and storage modulus of the developed hybrid composite material. SEM analysis confirmed the proper distribution of the fillers and XRD analysis confirmed their presence. All fabricated hybrid composites displayed improved thermal conductivity values and a marginal increase in the corrosion rate. The overall results show that the improvement is considerably better compared to the neat or solo bamboo filler based epoxy composite. The improvement is ascribed to the proper interfacial bonding or cross-linking between the micro bamboo filler and the epoxy polymer with the addition of GNPs. The developed filler based hybrid composite may be utilized for applications such as thermal interface materials, circuit boards, electronic packaging, etc.

Introduction
There is a constantly growing demand for innovative materials with improved properties to fulfill new challenging requirements. The inclusion of fiber/filler materials in a polymer matrix is the usual practice for developing composite materials. Polymers have found wider application areas in different branches of industry due to their light weight, nominal price and excellent corrosion resistance [1,2]. Among various thermoset and thermoplastic polymers, thermoset epoxy resin possesses high mechanical properties and excellent thermal and dimensional stability. However, as epoxy polymers solidify, they become more brittle in nature [3,4]. High brittleness is a serious problem which can be minimized by incorporating natural or inorganic fiber/filler. The good performance of continuous natural and synthetic fibers reinforced in a polymer matrix is well known and established. However, these long fiber-based polymer composites have some drawbacks, such as delamination, which can be avoided by the inclusion of micro/nanofillers instead of fibers. These disadvantages often limit their area of application and create the requirement to improve polymer composite materials. Also, the growing awareness of and concern for a greener, eco-friendly environment has developed a deep interest in the utilization of natural fillers and fibers [5]. The use of different recyclable and renewable reinforcement materials, such as plant leaves, crop residues, husk, jute, flax, kenaf, wood dust and bamboo dust, is encouraged; however, in spite of many efforts towards applying renewable and biodegradable polymers for industrial use, serious limitations have been identified in their thermomechanical properties. Over the last few years, most composite materials have been developed from natural and synthetic fibers [6,7].
The major benefits of natural fiber over synthetic fiber are that it is eco-friendly, biodegradable and plentifully available. Meanwhile, a few shortfalls are found, such as a highly hydrophilic nature due to the existence of hydroxyl groups, poor compatibility with the epoxy matrix, and difficulty in achieving a homogeneous distribution. But the low price of natural filler/fiber, the performance-to-weight ratio and the environmentally friendly nature are the major factors behind the wide acceptance of natural fibers in research and material development areas such as automobiles, decorative items and building construction [8]. Natural fibers such as jute, hemp, banana, bamboo, kenaf, etc, are introduced as reinforcement in the polymer matrix, and these natural fiber-based composites have many advantages such as higher stiffness, high modulus and improved strength. Amid the various natural fibers/fillers, bamboo is very versatile and an adequate amount is available in the northeast regions of India. Bamboo is a renewable natural resource due to its fast growth and availability in many other countries, especially in South East Asia, with a total bamboo forest area in the world of approximately 22 million hectares. Bamboo is one of the fastest-growing natural plants, with a life cycle of nearly 3 to 4 years, and it is also low in price compared to other wood resources [9]. The marketing of bamboo-based products and their manufacturing and development have increased rapidly. Hence, the present work concentrates on the effective utilization of unwanted bamboo particulate for developing a micro bamboo filler based polymer composite, which appears as an opportunity to replace natural fiber for enhancing the properties of the materials [10]. Bamboo possesses wonderful mechanical properties relative to its mass due to its unique natural physical structure [11]. Bamboo filler is therefore one of the finest natural fillers for inclusion in the epoxy matrix [12]. It is also observed that composites developed from natural fillers and epoxy matrix exhibit certain drawbacks, like poor binding between epoxy and natural filler, agglomeration problems at higher filler weight percentages and a rise in the moisture content. The main constituents of bamboo are cellulose, hemicellulose, lignin and other residues, which possess a strong hydrophilic nature. However, for developing a stronger bond, hydrophobic properties of the filler are most suitable for the epoxy matrix. Also, to improve their reinforcing effect in epoxy matrix composites, chemical treatment of raw bamboo filler is essential. Chemical treatments such as alkali, silane, benzoylation, acrylation, isocyanate, permanganate and acid based treatments are mostly preferred for natural fillers and fibers [13,14]. Polymer composites with natural filler inclusion do not show major enhancements in thermal and electrical properties. In order to accomplish better thermomechanical properties in polymer-based composites, various types of conductive fillers, such as metal powder, carbon black, carbon nanotubes and natural graphite, are used as inclusion fillers in an epoxy matrix. In the present work, GNPs are selected as potential alternative fillers compared to other conductive fillers since GNPs are well known for their superior strength and thermal properties.
Developing a hybrid composite with both natural micro bamboo filler and highly conductive GNPs filler is characterized here as an initial attempt. This approach will help to produce a cost-effective and lightweight polymer hybrid composite. The improved thermomechanical properties of the hybrid composite may fulfill the mechanical strength requirement along with improved heat transfer capability. Suitable application areas are thermal interface materials, electronic chips, light-emitting diodes and conductive packaging. The literature study makes it clear that no study has been conducted on developing a hybrid composite with micro-sized bamboo filler reinforced in an epoxy composite with varied wt% of GNPs as a conductive filler. In this study, an effort has been made to fabricate a nano-sized conductive filler blended with natural filler in the epoxy polymer. The conductive hybrid composite is prepared by changing the GNPs wt% over 0.1, 0.2, 0.4, 0.6, 0.8 and 1, with the micro bamboo filler fixed at 5 wt%. The effect of GNPs filler inclusion in the bamboo/epoxy composite on the structural, morphological and thermomechanical properties is examined through XRD, SEM and TMA; corrosion and conductivity tests are also included. In order to open a fresh platform, GNPs and micro bamboo filler are added into epoxy for developing the hybrid composite.

Material and methods
The hybrid composite was developed using waste bamboo powder extracted from Bambusa balcooa, collected from the Bamboo and Cane Development Institute (BCDI), India, and GNPs obtained from Platonic Nanotech Private Limited with purity >99%. The standard epoxy matrix has two parts: the resin diglycidyl ether of bisphenol A (DGEBA) and the hardener triethylenetetramine (TETA). The bamboo particulate filler was thoroughly cleaned with distilled water to remove external impurities and then dried in an oven at a temperature of 50 °C for 10 h. This specific drying temperature and duration for the bamboo filler were standardized on the basis of consecutive experimentation. The processed bamboo dust was crushed into small microparticles by ball milling to obtain a particle size of less than 75 μm. These small bamboo particles were passed through 75 to 63 micron test sieves.

Chemical reaction
It is well identified that natural bamboo filler contains cellulose, hemicellulose, lignin, wax and pentose structural rings with attached hydroxyl groups. Chemical treatment was carried out to accomplish proper bonding between the epoxy resin and bamboo filler, to remove the oils and wax from the surface and to depolymerize the cellulose filler for enhancing the thermomechanical properties. The mercerization reaction of the bamboo particulates was carried out in 6 wt% sodium hydroxide solution in a cylindrical container. The container was placed on a magnetic stirrer for 8 h at 47 °C and 850 rpm. Once the reaction was finished, the bamboo particulate solution was washed multiple times with distilled water and acetone till pH 7 was reached. Later the surface-modified bamboo particulates were filtered and dried in a muffle furnace at 60 °C for 6 h. The treated filler particulates were then stored in an air-sealed polythene pouch containing silica gel to reduce the moisture present. The schematic diagram of the chemical reaction between NaOH and bamboo filler is shown in figure 1. The chemical treatment eliminates a definite amount of the hemicellulose, lignin, wax and oils covering the outer layer of the filler cell wall.
The basic chemical reaction starts with the breakdown of the ionic bond of NaOH molecules into Na+ cations and OH− anions, as a result of their opposite polarity with H2O molecules. Subsequently, Na+ ions are attached to the bamboo filler particulates through covalent bonding. The treatment improves the surface topography, and the highly polarised filler surface enhances the filler-matrix binding [15].

Fabrication of hybrid composite and test specimens
The micro bamboo and GNPs filler based hybrid composite specimens were developed by the hand lay-up technique using silicone rubber moulds with the sample dimensions required for the different kinds of testing. Silicone oil was used as a releasing agent for easy removal of the final hybrid composite sample from the mould cavity. The measured quantities of epoxy resin, hardener, treated bamboo and GNPs were weighed using an electronic balance. The epoxy resin diglycidyl ether of bisphenol A and the hardener triethylenetetramine were blended in a ratio of 10:1 by weight and continuously mixed for 3 to 4 min. The NH2 groups present in the TETA are responsible for the cross-linking mechanism, whereby covalent bonds are established with the carbon atoms of DGEBA, the graphene nanoplatelets and the cellulose bamboo filler [16,17]. Then, measured quantities of GNPs and treated bamboo filler were added to the resin, followed by continuous mechanical stirring at 200 rpm for 5 to 8 min and sonication for 1 h at 75 W power. The prepared homogeneous mixture was then placed inside a round vacuum desiccator for degassing at 0.1 Torr vacuum pressure. After degassing, the hybrid mixture was poured into the silicone rubber mould, and trapped air and excess material were removed using a hand roller with mild pressure. The mixture was then left to cure at room temperature as recommended by the manufacturer. Afterwards, the samples were removed from the mould cavity by applying a little pressure and subjected to the different mechanical and thermal tests. The specimens for the tensile, flexural, DMA, conductivity and corrosion tests were prepared using GNPs filler weight fractions of 0.2%, 0.4%, 0.6%, 0.8% and 1%, with the treated micro bamboo filler kept constant at 5 wt% for all the prepared specimens. Neat epoxy samples were also prepared without any filler addition. Hereafter the neat sample is named BG0; BG0.2 denotes a sample with 0.2 wt% GNPs and 5 wt% micro bamboo filler; similarly BG0.4, BG0.6, BG0.8 and BG1 represent the corresponding GNPs wt% at a constant micro bamboo filler inclusion of 5 wt%.

Characterization
The crystallinity and amorphous phases of the chemically treated and untreated natural bamboo filler were characterized using an advanced XRD setup. The diffraction intensities were recorded on a 2θ scale from 5° to 100° with a step size of 0.008° to identify the spectra of the chemically treated and untreated filler. The wavelength of the x-ray source was maintained at 1.5406 Å (CuKα radiation), operated at 40 mA and 45 kV. The void estimation was conducted as per ASTM D 2734 to examine the percentage of voids present, for commercial acceptance. The tensile strength of all prepared samples was measured using a universal testing machine with a grip capacity of 50 kN. The tensile test was conducted as per ASTM standard D-638 type V, with a typical tensile specimen measuring 63.5 mm × 10 mm × 3.2 mm in dumbbell shape with a gauge length of 7.65 mm.
The flexural test was performed in three-point bending mode as per ASTM standard D 790-03, with specimens measuring 65 mm × 12.7 mm × 6.25 mm and a support span length of 50 mm. The hybrid composite tensile and flexure samples were tested at three different crosshead speeds of 1, 2 and 3 mm min−1 to examine the results at variable strain rates and to identify the technical viability of the developed hybrid composite. Dynamic mechanical analysis of the hybrid composite was carried out to measure temperature-dependent properties such as storage modulus, loss modulus and tan delta. A double cantilever clamp bending fixture with a load of 150 N was used for the analysis. Specimens measuring 63.5 mm × 12.7 mm × 3 mm were tested under a nitrogen environment from room temperature to 180 °C at a heating rate of 3 °C min−1, at a constant frequency of 1 Hz, with static and dynamic strain rates of 0.2% and 0.1% respectively. The thermal conductivity of the developed hybrid composite was determined using a third-generation C-Therm TCi thermal conductivity analyser, which has a broad testing capability from 0 to 90 W mK−1 and is based on the modified transient plane source principle. The conductivity samples were of cylindrical shape with a thickness of 5 mm and a diameter of 22 mm. Equipment errors may originate from variations in the current and environment temperature; in most practical cases the residual error is less than 0.1%. The electrochemical behaviour of the hybrid composite material was characterized using a potentiostat/galvanostat Ivium workstation and electrochemical impedance spectroscopy (EIS) in 3.5 wt% NaCl and NaOH aqueous salt solutions. The configuration was a three-electrode cell consisting of a carbon steel working electrode, a saturated calomel reference electrode and a platinum plate counter electrode. The tested specimen, with a contact area of approximately 1 cm2, made contact with the working electrode. EIS tests were conducted in the frequency range of 0.01 to 100 Hz with an alternating current amplitude of 10 mV. The potentiodynamic polarization curve was recorded over a range of −250 mV to +250 mV with respect to the open circuit potential (OCP) at a scan rate of 1 mV s−1. The fractured surface morphology of the hybrid composites was examined using an SEM with an operating voltage of 20 kV. Before the examination, the composite samples were sputter-coated with a thin layer of gold particles to avoid charging under the electron beam. The filler zones were viewed perpendicularly prior to the examination.

Results and discussion
4.1. XRD characterization of chemically treated micro bamboo filler
The physical and mechanical characteristics of the hybrid composite reinforced with natural micro bamboo filler and GNPs are influenced by the development of stronger bonding in the interface zone between the reinforcement material and the epoxy matrix. The surface modification of natural filler improves the magnitude of matrix-filler binding. This is achieved by enhancing the hydrophobicity, mobility and adherence of the filler, and by minimizing the agglomeration rate. Chemical treatment with NaOH is preferred over other chemical treatments, a choice established on the basis of acceptable results for natural fibers and fillers; this treatment is also cost-economical. The extraction of non-cellulosic constituents from the natural filler improves the surface adhesion between filler and matrix and helps to strengthen the filler-matrix bonding [18].
The XRD measurements are carried out to confirm the purity and quality of the filler material used in the epoxy matrix and also to evaluate the overall structural information. Figure 2 shows the XRD patterns of NaOH treated and untreated bamboo filler. GNPs filler is used as reinforcement material for achieving a synergetic effect between the virgin matrix and the micro bamboo filler. The chemical treatment of raw bamboo filler improves the crystallinity index of the treated bamboo filler. The crystallinity index of cellulose (Ic) is calculated based on the Segal empirical approach [19]:

Ic (%) = [(I002 − Iam)/I002] × 100    (1)

where I002 is the peak intensity corresponding to the cellulose crystalline plane with Miller indices (002), and Iam is the minimum peak intensity corresponding to the amorphous plane (110). The improvement in the crystallinity of the amorphous micro bamboo filler is obtained as a result of the partial extraction of wax, lignin and hemicellulose content after treatment. Two prominent, well defined peaks are obtained at 2θ = 16.4° and 22.4°. The higher peak intensity is exhibited at the (002) plane because of the α-cellulose. The overall increment in the crystallinity index after chemical treatment is around 23.76%. This is owing to the elimination of protective material and a possible redistribution of stress in the cellulose chains as a consequence of the removal of amorphous material and pectin from the untreated micro bamboo filler [20].

Void analysis
The presence of a higher percentage of voids raises the amount of water absorption inside the hybrid composite material. To calculate the void percentage, both the experimental density (De) and the theoretical density (Dt) of the composite are determined. The densities of GNPs, bamboo and resin are 0.17 g cm−3, 0.35 g cm−3 and 1.15 g cm−3 respectively. The experimental density of the composite is determined using a water immersion density measurement kit. The theoretical density of the composite is calculated using equation (2).

Dynamic mechanical properties
The variation of graphene weight percent in the bamboo epoxy composite under frequency and temperature is ascertained using the DMA experiment. The results for storage modulus G′ (MPa), loss modulus G′′ (MPa) and tan delta are evaluated. The effect of temperature variation for neat epoxy as well as for the BG0.2, BG0.4, BG0.6, BG0.8 and BG1 hybrid composite samples is depicted in figures 4(a)-(c). The storage modulus value provides valuable insight into the stiffness and molecular relaxation of all the prepared composite samples. Figure 4(a) shows the significant variation of the G′ value of neat epoxy and the developed hybrid composites with treated bamboo micro filler and GNPs. On examination, a variation of the G′ value with temperature is observed for the micro bamboo filler and GNPs based hybrid composites. In all cases the G′ values fall continuously with the increase in temperature. Also, from the exhibited results it is clear that G′ remains broad in the glassy zone; as the hybrid composition is tightly packed in the low temperature region, the G′ value remains in a higher range before entering the glass transition zone. However, in the temperature range from 65 °C to 95 °C the modulus curve falls quickly, representing the transition from the leathery to the rubbery zone. As the temperature increases, the molecular movement of the hybrid composite takes place rapidly; this might be due to the breakdown of molecular linkages. Then the G′ value slowly decreases in the rubbery zone.
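Relating back to the void analysis above, a minimal numerical sketch is given here. The inverse rule of mixtures assumed for the theoretical density of equation (2), the example weight fractions and the measured density are illustrative assumptions in the spirit of ASTM D 2734, not values quoted from the study.

```python
# Void-content sketch (assumed inverse rule of mixtures, as commonly used with ASTM D 2734).
# Constituent densities are taken from the text; the measured density below is a placeholder.
RHO = {"epoxy": 1.15, "bamboo": 0.35, "gnp": 0.17}  # g/cm^3

def theoretical_density(w):
    """w: weight fractions summing to 1, e.g. {'gnp': 0.008, 'bamboo': 0.05, 'epoxy': 0.942}."""
    return 1.0 / sum(w[k] / RHO[k] for k in w)

def void_percent(d_exp, w):
    d_th = theoretical_density(w)
    return (d_th - d_exp) / d_th * 100.0

w_bg08 = {"gnp": 0.008, "bamboo": 0.05, "epoxy": 0.942}  # nominal BG0.8 composition by weight
print(f"Theoretical density: {theoretical_density(w_bg08):.3f} g/cm^3")
print(f"Void content: {void_percent(0.97, w_bg08):.1f} %  (with an assumed measured density of 0.97 g/cm^3)")
```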
However, no significant change in the rubbery zone is observed for the developed hybrid composites. The storage modulus curve exhibits the influence of a higher percentage of GNPs in the micro bamboo filler based polymer composite. As the wt% of GNPs decreases, the storage modulus value decreases; this trend is attributed to the GNPs content. The inclusion of GNPs enhances the interfacial attachment between the micro bamboo filler and the epoxy matrix [22]. The G′ for neat epoxy is ∼1341 MPa, which improves to a maximum value of ∼1708 MPa for the BG1 specimen, demonstrating a significant improvement of G′ of around 28% with increasing graphene weight percentage. The higher G′ value of the epoxy hybrid composite also reflects relatively better thermomechanical behavior compared to the neat epoxy sample. The loss modulus (G′′) values from the DMA versus temperature for the developed hybrid composites are depicted in figure 4(b). The graph shows a trend similar to that of the storage modulus for the different filler weight percentages. The loss modulus value increases with the rise in the GNPs filler weight percentage. All the loss modulus curves attain a maximum at the point of highest dissipation of mechanical energy and then reduce with increasing temperature, because of the free molecular motion of the polymer chains. Interestingly, the G′′ value for the maximum wt% of GNPs is higher than that of the neat polymer composite. This behavior is attributed to the intensification of internal friction, which escalates energy dissipation [23]. The neat epoxy polymer and the 0.2 wt% GNPs sample displayed almost the same loss modulus value; however, the modulus curve falls at a maximum temperature of around 90 °C. The hybrid composite with 1 wt% of GNPs exhibits the maximum loss modulus value, but its modulus curve begins to fall when the temperature rises above 75 °C. In both cases the storage and loss moduli improved with the addition of higher GNPs filler wt%. The damping factor, or tan delta, for the micro bamboo filler and GNPs hybrid composite is shown in figure 4(c). BG0.2 exhibited the highest tan delta value and the BG1 specimen displayed the lowest tan delta among all the developed hybrid composite samples. The tan delta value increases with the rise in temperature and reaches its highest level in the transition zone, followed by a decrease in the rubbery zone for all the composites. As the wt% of GNPs increases, the damping factor value keeps dropping. Thus, with the addition of GNPs filler, the tan delta peak becomes wider; the wider peak represents a longer relaxation time of the molecules because of restricted polymer chain movement. A significantly higher crosslinking density is developed for the hybrid composite due to the better interfacial connection. The exhibited results are in line with various other DMA research findings [24-26].

Thermal conductivity of the hybrid composite
The experimental thermal conductivity values for the micro bamboo and GNPs filler incorporated hybrid composites are depicted in figure 5. The results clearly represent the improvement in the thermal conductivity of the hybrid composite materials. The improvement in the thermal conductivity value suggests the formation of a conductive path inside the hybrid composite material.
The results show that with the inclusion of GNPs filler the thermal conductivity value of the hybrid composite continuously increased. The maximum thermal conductivity value obtained in the present study is 1.21 W mK−1 with the inclusion of 1 wt% of GNPs filler in the micro bamboo epoxy composite. The improvement in the thermal conductivity value is almost four times that of the neat epoxy sample. These results are achieved because of the more extensive thermally conductive network, and also because the crystallinity of the epoxy composite is increased with filler addition [27].

Mechanical behavior of hybrid composite
The mechanical behavior of the developed hybrid composite is determined, and the results for uniaxial tensile strength and elastic modulus at different crosshead speeds are presented in figures 6(a)-(c). The ultimate tensile strength and elastic modulus of a material are widely accepted as providing the basic structural design information. The values of tensile strength and elastic modulus are assessed from the stress-strain graph developed during the uniaxial tensile test. The variation of GNPs filler content and the increase in crosshead speed help to examine the differences in the results of the prepared hybrid composites. The inclusion of GNPs in the micro bamboo epoxy composite significantly improves the mechanical properties. With the reinforcement of varying GNPs wt% and a fixed amount of micro bamboo filler in the epoxy composite, the ultimate tensile strength value also improved, up to 0.8 wt% of GNPs. As the GNPs percentage increases further, a downfall in the tensile result is observed. The enhancement up to BG0.8 might be attributed to improved stress transfer potential in the epoxy matrix. But at 1 wt% GNPs a decrease in the result is exhibited due to increased filler agglomeration. Agglomeration inside the composite minimizes the stress transfer from the epoxy matrix to the filler materials and generates higher stress concentration zones [28]. The maximum tensile strength, obtained for the BG0.8 specimen, is 52.48 MPa with an elastic modulus of 1.28 GPa. From the results, it is seen that variation of the crosshead speed from 1 to 3 mm min−1 does not contribute much change to the results. The inclusion of hybrid filler in the epoxy matrix enhances the ultimate tensile strength and elastic modulus, which helps to raise the deflection resistance of the material. Also, an increase in the crystallinity percentage of the developed hybrid composite with the addition of micro bamboo filler and GNPs reduces the amorphous zone. As the amorphous region of the composite is minimized, the elastic modulus of the polymer composite is significantly improved [29]. The flexural behavior of the micro bamboo and GNPs filler epoxy-based hybrid composite specimens was tested in three-point bending mode. The results for flexural strength and flexural modulus at different crosshead speeds from 1 to 3 mm min−1 are shown in figures 7(a)-(c). It is clear from the results that the addition of filler improves the flexural strength and flexural modulus of the hybrid composite. These trends of improvement in the flexural properties can be well understood through the interaction of the micro bamboo and GNPs fillers, which develop a good bond. The stronger bonding interaction plays the major role in improving the load transfer capability between the epoxy matrix and the hybrid filler [30].
Both the tensile and flexural properties of the hybrid composite increase with the incorporation of the dual filler, with the maximum improvement observed for GNPs contents up to 0.8 wt%. A similar trend in tensile and flexural properties has also been reported for natural filler blended epoxy composites. The maximum flexural strength and flexural modulus of the developed hybrid composite are 56.8 MPa and 5.19 GPa respectively. These results are considerably better than those of polymer composites based on an individual natural filler. With filler inclusion, the flexural modulus and flexural strength of the epoxy hybrid composites increase by around 43% and 13% respectively. Corrosion analysis Corrosion analysis of a material is essential because corrosion slowly damages a product through chemical reaction with the environment. Products made from metals such as iron, aluminium and copper corrode quickly when exposed to a moist environment. Epoxy polymer is commonly used to mitigate corrosion, but it is still necessary to understand how the corrosion behavior of the polymer composite changes after filler inclusion. In the present study, the corrosion rate of the hybrid composite is measured after immersion in 3.5 wt% NaCl and NaOH solutions for 36 h. The corrosion rate of the developed hybrid composite as a function of filler wt% is shown in figure 8. The lowest corrosion rates are exhibited by the neat epoxy specimen, approximately 0.00035 and 0.00186 mm/year in NaOH and NaCl respectively. As the filler percentage in the epoxy composite increases, the corrosion rate rises slowly; the maximum corrosion rates obtained in the present work are 0.062 and 0.057 mm/year in the NaOH and NaCl aqueous solutions. The slight rise in the corrosion rate of the hybrid composite may be due to an increase in voids and to chemical reaction with the GNPs fillers [31]. Micro bamboo and GNPs filler surface morphology The morphology of the micro bamboo filler and GNPs is depicted in figure 9 at lower and higher magnification. A large amount of bamboo filler is distributed randomly in the epoxy matrix, and the GNPs nano-fillers occupy the gaps between the micro bamboo filler particles, enhancing the bonding strength and improving the properties of the developed composite material [32–35]. The shape and size of the bamboo filler and GNPs are clearly visible in the SEM analysis of the fractured surface of the hybrid sample. The higher weight percentage of bamboo filler is examined by optical micrograph; figure 10 shows the micro-sized bamboo filler in the composite material at 100 μm. The main objective of this examination is to assess void formation and the bond developed between the filler and the matrix material. Conclusion The present study is focused on utilizing waste natural fillers that are abundantly available, with the aim of minimizing the problems associated with the use of natural fiber in epoxy polymer. Micro-sized bamboo filler and GNPs are used here to develop a new hybrid composite material with enhanced thermomechanical behavior. After a thorough examination of the developed hybrid composite, the conclusions are summarized below. • Micro bamboo filler and GNPs reinforced epoxy hybrid composite exhibited enhanced thermal and mechanical properties.
Chemical modification supported the formation of a stronger bond by altering the physical structure of the filler, while the GNPs filler helped to develop a conductive path inside the polymer composite. • The presence of the nanofiller played a major role in minimizing crack generation and reduced the brittle character of the hybrid composite material. • The hybrid filler produced a synergetic effect in the polymer composite, which helps to raise the storage modulus, loss modulus and damping factor of the composite and also extends the usable temperature range of the polymer material. • The thermal conductivity of the hybrid composite improved from 0.25 W mK−1 up to 1.21 W mK−1 with increasing GNPs filler content. • Mechanical properties such as tensile strength, flexural strength, elastic modulus and flexural modulus showed maximum values for the developed GB0.8 hybrid composite. These conclusions suggest that higher GNPs weight percentages could be attempted in further work to examine the thermomechanical properties, and that modification of the natural filler is essential for improving the crystallinity and morphological behavior.
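The relative improvements quoted in these conclusions follow directly from the reported values; as a quick arithmetic check (using only the numbers stated above, nothing measured independently), the percentage gain in storage modulus and the factor of improvement in thermal conductivity can be recomputed as follows.

```python
# Sanity check on the improvement figures reported for the hybrid composite.
def relative_increase(initial, final):
    """Fractional increase of `final` over `initial`."""
    return (final - initial) / initial

# Storage modulus: neat epoxy ~1341 MPa, BG1 ~1708 MPa.
g_prime_gain = relative_increase(1341.0, 1708.0)
print(f"Storage modulus gain: {g_prime_gain:.1%}")  # ~27.4%, i.e. "around 28%"

# Thermal conductivity: neat epoxy 0.25 W/mK, 1 wt% GNPs 1.21 W/mK.
k_gain = relative_increase(0.25, 1.21)
print(f"Thermal conductivity improvement: {k_gain:.2f}x the neat value")  # ~3.84x, "almost four times"
```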
2020-01-09T09:14:22.327Z
2020-01-20T00:00:00.000
{ "year": 2020, "sha1": "8e374613d0c0e4ff7b5393ff061538fbfd78e7c2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ab67f8", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "16ecf19a15a149ebdbdbeed729fbb8e32e29b0b0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
244057009
pes2o/s2orc
v3-fos-license
Decolorization of azo dyes by newly isolated Citrobacter sp. strain EBT-2 and effect of various parameters on decolourization Azo dyes constitute around 70% of the total dyes in the world. Almost 10%–15% of dye is released in wastewater during manufacture of the dye and its application, and is a prime source of pollution. The various physicochemical methods available for their decolorization each have disadvantages such as cost or time inefficiency. Hence, bacterial decolorization has been studied as a cheap and efficient alternative. In this study, Citrobacter sp. strain EBT-2 was isolated from a textile industry dumping site and used to optimize dye decolorization conditions for three azo dyes: methyl orange (MO), congo red (CR), and Eriochrome Black T (EBT). Decolorization was measured by UV–Vis spectroscopy analysis. The strain showed 100% decolorization for all three dyes up to 100 mg/l concentration in 96–120 hours. It was able to decolorize up to 300, 500, and 500 mg/l of dye concentration for MO, CR, and EBT, respectively. Decolorization efficiency was independent of initial dye concentration. The optimum pH for decolorization was 7, 7, and 9 for MO, CR, and EBT, respectively. The effect of agitation on decolorization was studied under static and agitated (200 rpm) conditions: about 90% decolorization was observed under static conditions and about 20% under agitated conditions for all three dyes in 96 hours. Complete decolorization was obtained for MO and EBT at 35°C and 45°C, respectively. CR showed complete decolorization only at 35°C. The results indicate that Citrobacter sp. can be used for successful decolorization of azo dyes, primarily MO, CR, and EBT, under optimum physicochemical conditions. INTRODUCTION Azo dyes constitute around 60%–70% of dyes synthesized globally [1,2]. They are characterized by the presence of one or more azo groups, −N=N− [3], and further consist of phenyl and naphthyl groups modified with various functional groups [3]. Such complex modifications in their structure make them xenobiotic in nature and highly resistant to breakdown [4]. They are resistant to light, washing, and chemical and microbial attack, and are easy to synthesize with low energy and cost [4–6]. They are extensively used in the textile, paper, food, leather, cosmetics, and pharmaceutical industries [7]. Almost 10%–15% of dye is released in wastewater during its manufacture and application in various industries [8]. Contaminants generated by azo dyes consist of dye particles and their breakdown products, primarily amines, which are proven to be toxic and mutagenic [9,10]. They also degrade water quality by increasing its biological oxygen demand and chemical oxygen demand [11,12]. Many reports indicate that textile dyes and effluents have toxic effects on the germination rates and biomass of several plant species [11,13]. Azo dyes like tartrazine and carmoisine affect the functioning of vital organs such as the kidney and liver in rats and also induce the formation of free radicals, leading to oxidative stress [14]. Various physicochemical methods like membrane filtration, adsorption on activated carbon, flocculation, electrocoagulation, ozonation, froth flotation, reverse osmosis, and ion exchange are currently employed for azo dye decolorization. However, these methods have limitations like high cost, high energy input, and sludge generation [15].
Recently, microorganisms have been shown to decolorize azo dyes in a cost-effective and environmentally friendly manner [16]. Bacteria, algae, fungi, and yeast have all demonstrated cost-effective and eco-friendly degradation of textile dyes [11]. Furthermore, the versatility of microbial systems makes it easy to exploit them for decolorization of most dyes [17]. Dye decolorization by higher-order organisms like fungi is a slow process because of the slow growth of the organism [18]. Bacteria, on the other hand, have emerged as a very promising category for dye decolorization [19]. They proliferate rapidly under both aerobic and anaerobic conditions and can thus achieve dye degradation at a faster rate [20]. They primarily decolorize dyes under anaerobic conditions [21], although a few bacteria have been reported to decolorize azo dyes under aerobic conditions as well [21]. It has also been shown that under a combination of anaerobic and aerobic conditions, complete degradation of some dyes can be achieved [22]. Textile wastewaters often have high metal content and salinity; hence, it is important to utilize microorganisms tolerant to extreme environmental conditions. Bacterial species like Enterococcus faecalis and C. bufermentas have been identified as highly halotolerant azo dye degrading microorganisms [23]. This study was carried out to isolate and identify a bacterial strain capable of decolorizing three azo dyes: methyl orange (MO), congo red (CR), and Eriochrome Black T (EBT). MO is an acid-class single azo dye and was found to be mutagenic in a Salmonella/microsome assay [24]. CR is a benzidine-based anionic diazo dye and has been shown to metabolize to benzidine, a known human carcinogen [25]. EBT is a single azo, naphthol-derived dye and is known to be recalcitrant to oxidative biodegradation [26]. Various physicochemical parameters like temperature and pH were also optimized for efficient decolorization of the dyes. Isolation, Screening, and Identification A soil sample was collected in a sterile glass container from a dumping site of the textile industry located in the Bada Bagh Industrial area, New Delhi. Samples were serially diluted 10⁵-fold and enriched in Luria broth (LB) amended separately with 100 mg/l of MO, CR, and Eriochrome Black T at 37°C for 48 hours. Subsequent transfers of 1 ml of the culture were carried out in fresh media amended with dye (100 mg/l) until decolorization of the media was observed. Decolorized samples were serially diluted 10⁷-fold and spread on petri plates with 100 mg/l dye concentration. Plates were incubated at 37°C for 48 hours. Morphologically distinct colonies with a zone of decolorization were screened for their decolorizing ability in media supplied with dye (100 mg/l). Out of all the colonies observed on the plates, the EBT-2 strain showed the best decolorization zone and was therefore selected for further experiments. The EBT-2 strain was identified using 16S rDNA sequencing, and a similarity search was conducted against the database through BLASTn for the 16S rDNA. MEGA 4.0 software was used to conduct the evolutionary analysis by the Neighbour Joining method [27]. Biochemical characteristics of the EBT-2 strain were also assessed according to Bergey's Manual of Determinative Bacteriology [28].
Effect of Initial Dye Concentration on Dye Decolorization The effect of initial dye concentration on the dye decolorizing potential of the EBT-2 strain was measured by studying decolorization at different initial concentrations of MO, CR, and Eriochrome Black T (10–500 mg/l). LB media was amended with dye concentrations ranging from 10 to 500 mg/l and inoculated with 1% EBT-2 isolate (OD 0.8–1). It was incubated at 37°C for 16 hours under static conditions. Samples were centrifuged at every 24-hour interval at 10,000 rpm for 10 minutes. The supernatant was collected and absorbance was measured at the absorbance peak of each dye, i.e., 480, 492, and 535 nm for MO, CR, and EBT respectively. Decolorization efficiency was measured over a period of 120 hours in terms of percentage decolorization [29]: % Decolorization = [(Initial absorbance − Observed absorbance)/Initial absorbance] × 100. The experiment was done in triplicate, and data are represented as the mean ± standard error of the mean. Abiotic controls were always included. Effect of pH on Dye Decolorization The decolorization efficiency of the EBT-2 strain was studied at different pH values. LB containing MO, CR, and EBT at a concentration of 100 mg/l was inoculated with 1% inoculum and the pH was set from 5 to 11 (in increments of 1 pH unit), adjusted with 0.1 N HCl or 0.1 N NaOH. The test was conducted under static conditions at 37°C for a period of 96 hours and decolorization was measured at every 24-hour interval. Effect of Agitation on Dye Decolorization The effect of agitation on decolorization was checked at 37°C under static and agitated (200 rpm) conditions for MO, CR, and EBT. The dye concentration was 100 mg/l, and the study was carried out for 96 hours with the decolorization percentage measured at every 24-hour interval. Effect of Temperature on Dye Decolorization LB media containing 100 mg/l dye was incubated at 25°C, 35°C, 45°C, and 55°C at pH 7 to study the effect of temperature on dye decolorization. The test was conducted under static conditions and decolorization was measured at every 24-hour interval over a 96-hour duration. A diagrammatic representation of the overall methodology is provided as a figure. Isolation, Screening, and Identification The EBT-2 strain was identified as Citrobacter sp. on the basis of morphological and biochemical characteristics (Table 1) and 16S rDNA sequencing (Fig. 1). It matched with 86% identity to Citrobacter sp. and fell in the cluster of Klebsiella sp. Effect of Initial Dye Concentration on Dye Decolorization The isolate was able to decolorize MO up to 300 mg/l (Fig. 2a–c) and both CR and EBT up to 500 mg/l dye concentration (Figs. 3a–c and 4a–c). Contrary to results shown in previous works [18,30], dye decolorization did not show a gradually decreasing pattern with increase in initial concentration. It remained independent of dye concentration and then fell sharply to ~50% for 300–500 mg/l dye concentration. It has also been shown that Acid Red dye decolorization by Acinetobacter radioresistens is independent of its concentration [31]. A probable reason for this observation is extracellular reduction of the dye, the rate of which is independent of its concentration [32]. The decrease in decolorization efficiency beyond 100 mg/l may occur for several reasons, such as toxicity of the dye to the bacteria and/or insufficient biomass for the uptake of higher concentrations of dye [33]. Effect of Agitation on Dye Decolorization MO showed 98% decolorization in 96 hours under static conditions but achieved only 10% decolorization under agitated conditions over the same period (Fig. 5a).
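The percentage values quoted throughout these results are obtained with the decolorization formula given in the methods above. A minimal sketch of that calculation is shown below; the absorbance readings are hypothetical placeholders, not data from this study.

```python
def percent_decolorization(initial_abs, observed_abs):
    """% Decolorization = (initial - observed) / initial * 100."""
    return (initial_abs - observed_abs) / initial_abs * 100.0

# Hypothetical absorbance readings for one dye at its peak wavelength,
# taken at 24-hour intervals (index 0 is the initial/abiotic reading).
readings = [1.52, 1.10, 0.63, 0.28, 0.05]
initial = readings[0]

for hours, a in zip(range(0, 24 * len(readings), 24), readings):
    print(f"{hours:>3} h: {percent_decolorization(initial, a):5.1f} % decolorized")
```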
A similar trend was observed for CR, which showed 95% decolorization under static conditions and 18% under shaking conditions in 96 hours (Fig. 5b). Eriochrome Black T showed 90% decolorization under static conditions and 35% under shaking conditions over the same period (Fig. 5c). Dye decolorization was thus reduced significantly under shaking compared with static conditions, in agreement with previously reported studies [34,35]. A possible cause for this trend is that, in many bacteria, degradation of azo dyes to their corresponding amines occurs through reduction of the azo linkage by a cytoplasmic azoreductase enzyme. Azoreductase-mediated degradation of azo dyes is inhibited by the presence of oxygen, because oxygen is the preferred terminal electron acceptor over the azo groups for the oxidation of reduced electron carriers such as NADH [22,36]. Effect of pH on Dye Decolorization Maximum decolorization was observed in the range of pH 7–9 (~85%) for Eriochrome Black T (Fig. 6c). Dye decolorization decreased at lower pH (5–6) and at very high pH (10–11). In general, azo dyes show better decolorization under neutral to alkaline conditions. Similar results were observed for the decolorization of several azo dyes by Micrococcus sp. [37]. Since an alkaline environment is required for the binding of most azo dyes to fibers, dye-degrading bacteria are better adapted to alkaline environments [38]. However, the degradation patterns of the dyes treated with the EBT-2 strain differ from each other, showing that degradation also depends on the chemical structure and reactivity of the dyes. Effect of Temperature on Dye Decolorization EBT-2 completely decolorized MO at both 35°C and 45°C in 120 hours (Fig. 7a). It also decolorized EBT completely at both 35°C and 45°C; however, decolorization at 45°C was achieved much faster, i.e., in 72 hours as compared to 120 hours at 35°C (Fig. 7c). It decolorized CR completely only at 35°C, in 120 hours (Fig. 7b). For all three dyes, decolorization efficiency increased as the temperature increased from 25°C to 35–45°C. This is in accordance with results previously reported for the decolorization of Acid Orange by Staphylococcus hominis and of azo dyes by Micrococcus sp. [30,37,39]. An increase in temperature can increase bacterial growth and hence decolorization efficiency [19]. No decolorization was observed at 55°C for any of the three dyes, which may be caused by loss of azoreductase enzymatic activity at high temperature or a decrease in cell viability [39]. CONCLUSION The EBT-2 strain was identified as Citrobacter sp. It showed complete decolorization of all three dyes up to 100 mg/l dye concentration in 96–120 hours, and was able to decolorize MO, CR, and EBT up to 300, 500, and 500 mg/l of dye concentration, respectively. The optimum pH for decolorization was 7, 7, and 9 for MO, CR, and EBT, respectively; similar results were observed for the decolorization of several azo dyes by Micrococcus sp. [37]. The effect of agitation on decolorization was studied under static and shaking (200 rpm) conditions. More than 90% decolorization was observed under static conditions for each dye, but only 10.3%, 18.47%, and 35.92% decolorization was observed under shaking conditions for MO, CR, and EBT respectively in 96 hours. This agrees with results previously demonstrated for bacterial degradation of Reactive Red 141 and Amaranth dyes [34,35]. Complete decolorization was obtained for MO and EBT at 35°C and 45°C.
CR showed complete decolorization only at 35°C. This is in accordance with results previously given for decolorization of Acid Orange by S. hominis and decolorization of azo dyes by Micrococcus sp. [30,37,39]. Temperature increase can stimulate bacterial proliferation and increase decolorization efficiency [19]. The results conclude that Citrobacter sp. can be used for successful dye decolorization of Azo dyes; primarily MO, CR, and EBT under optimum physiochemical conditions. Further work needs to be done on complete mineralization of dyes. Scaling up the process also remains a challenge. Since metal ions like Copper, Lead, and Cadmium are present in high concentration in industrial effluents [23], it is also important to study their effects individually on azo dye bioremediation. AUTHORS' CONTRIBUTION All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting the article or revising it critically for important intellectual content; agreed to submit to the current journal; gave final approval of the version to be published; and agree to be accountable for all aspects of the work. ETHICAL APPROVAL This study does not involve the use of animals or human subjects.
2021-11-13T16:02:38.962Z
2021-11-10T00:00:00.000
{ "year": 2021, "sha1": "43fdf34c02c86c510fd9f0c8584282390be3db6a", "oa_license": "CCBYNCSA", "oa_url": "https://jabonline.in/admin/php/uploads/603_pdf.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7cd526b4e871ac89da7a3397bb1af096dd301856", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
88522828
pes2o/s2orc
v3-fos-license
A flexible and computationally tractable discrete distribution derived from a stationary renewal process A class of discrete distributions can be derived from stationary renewal processes. They have the useful property that the mean is a simple function of the model parameters. Thus regressions of the distribution mean on covariates can be carried out and marginal effects of covariates calculated. Probabilities can be easily computed in closed form for only two such distributions, when the event interarrival times in the renewal process follow either a gamma or an inverse Gaussian distribution. The gamma-based distribution has more attractive properties and is described and fitted to data. The inverse-Gaussian based distribution is also briefly discussed. Introduction Discrete distributions are used when modelling count data, and the dependence of counts on covariates. There is a very wide range of application areas, e.g. life sciences, economics, maintenance and reliability. The Poisson distribution is the best-known discrete distribution. However, count data often show overdispersion, or, more rarely, underdispersion, and the probability of occurrence of zero events often differs from what the Poisson distribution would predict. Very many 2-parameter discrete distributions exist (e.g. Johnson et al. (2005)), derivable in many ways; here the focus is on distributions that can be derived from renewal processes (RPs). Most count data (but not all) are derived from processes occurring in time, such as number of doctor visits over a period, number of children born, etc., so that an RP may be an approximation to the underlying process that generated the observed counts. The Poisson and negative binomial distributions can be derived from Poisson processes, which are RPs, and so fall in this class. Among several desirable properties for a discrete distribution an important one is that the mean η = E(N) should be simply expressible in terms of the model parameters. A major interest is the dependence of the distribution on a vector of covariates x, and this is usually expressed as η = η_0 exp(β^T x), where β is a vector of coefficients. Modelling the distribution mean as a function of covariates gives an easily-interpretable model, from which economists and others can make simple calculations. Thus, in the example of completed birth rate given later, it becomes trivial to ask how many more or how many fewer children should be born in the population if 50% of women had received a university education. For distributions derivable from Poisson processes, the mean is easily calculable. However, in general this is not so, and the best that can be done theoretically is to present an asymptotic form for the expected number of events by time t as E(N(t)) ≃ t/µ + (σ² − µ²)/(2µ²), where µ and σ² are the mean and variance of the interarrival time, e.g. Cox (1962). In practice one would need to compute the mean exactly for two different values of the covariate to read off the marginal effect of the covariate. This can be done, but is a further step of analysis. Also, it is desirable for simplicity of interpretation to model the mean of the discrete distribution as a function of covariates, not the mean of the interarrival process. To derive a discrete model with a simple formula for the mean, it is necessary to consider a stationary (equilibrium) RP, an ERP. In this case E(N(t)) = t/µ.
One can conveniently name discrete distributions derived from renewal processes as RP-X and ERP-X, where X is the name of the interarrival time distribution. Quite a lot of work has been done on computing count probabilities arising from RPs. The connection between renewal processes and discrete distributions is discussed in Cox (1962), who describes the distributions arising from the Erlang RP, and the negative binomial as a mixture of Poisson RPs with gamma-distributed stopping times. The gamma RP case has been developed by Winkelmann (1995). Computations for the Weibull RP were considered by McShane et al. (2008) following Lomnicki (1966), and these models have also been used in sport analytics (Boshnakov et al., 2017). The Mittag-Leffler distribution has also been used (Jose and Bindu, 2011). An excellent summary is given in Jose and Abraham (2013). Stationary RPs have hardly been explored, but the R language Countr package (Baker et al., 2016; Baker and Kharrat, 2017) allows modified RPs, of which a stationary RP is a special case. Hence models based on these processes are becoming available to the user. Here, however, the aim is to explore the two distributions for which the probabilities can conveniently be derived in closed form, i.e. in terms of special functions. The ERP with gamma-distributed interarrival times gives rise to a discrete distribution, called here the ERP-γ distribution. The corresponding distribution derived from an ordinary RP has been used by Winkelmann (1995), and allows for both under- and overdispersion. He comments (Winkelmann, 2013) on the 'small catch' that the mean is not calculable, and this paper addresses this problem. Since Cox (1962) mentioned the ERP-γ distribution in the context of the distribution of the number of renewals, it cannot be claimed as a new discrete distribution. Rather, the original contribution here is to propose this distribution as a useful distribution for count regression, and to show how the necessary computations can be done with it. The discrete distribution based on the inverse Gaussian (IG) distribution (the ERP-IG distribution) has not been looked at before. Although it is easier to compute with than the ERP-γ distribution, it does not contain the Poisson distribution as a special case, and it did not fit the example data even as well as the Poisson. However, it may prove useful in some contexts and so is briefly described in appendix A. The next section introduces some notation and discusses the distribution used by Winkelmann. Next, the probabilities for the ERP-γ distribution are derived, and its properties given. Some extensions of the ERP-γ and RP-γ distributions are discussed. The new distribution is fitted to a well-used dataset of fertility (number of children as a function of mother's age, etc.) to demonstrate its feasibility. The ERP-γ distribution 2.1 Definitions and Notation To introduce some notation, the gamma probability density function for the interarrival times is taken as f(t; α, β) = α^β t^(β−1) exp(−αt)/Γ(β), with shape parameter β, rate parameter α, and mean interarrival time µ = β/α. To be amenable to computation, ERP and RP distributions must possess the additive property, that a sum of i.i.d. random variables from the distribution belongs to the same family of distributions. Both the gamma and inverse Gaussian distributions possess this property. This is sometimes called the reproductive property, a term that is commonly used in a more general sense, i.e. that sums of random variables from the distribution family, but with different parameters, belong to the same distribution family.
Both the gamma and IG distributions also have this property; e.g. a sum of gamma r.v.s with different β parameters is gamma. This property allows greater flexibility in constructing RP distributions, but cannot be invoked for ERP distributions without the process ceasing to be an ERP, and so losing the simple formula for the mean. Finally, there is the still more general property of divisibility, which means that a random variate can be decomposed into two or more others, not necessarily from the same distribution, and which is not needed here. There may be infinitely many survival distributions with the additive property, e.g. the class of exponential dispersion models (Jørgensen, 1987). The Tweedie distributions belong to this class (Jørgensen, 1987), and both the gamma and inverse Gaussian distributions are Tweedie distributions. There do not seem to be any others that are computationally tractable; e.g., the compound Poisson-gamma distribution is also a Tweedie distribution, but its pdf must be expressed as a Bessel function. By the additive property of the gamma distribution, the sum of n gamma random variables has pdf f^(n)(t) = f(t; α, nβ). Let the corresponding cdf be F^(n)(t), which can be written as the incomplete (regularised) gamma function F^(n)(t) = γ(αt; nβ), where γ(x; k) = ∫₀^x u^(k−1) exp(−u) du / Γ(k). Successive events at times X₁, X₁ + X₂, · · · form an RP with N(t) events having occurred by time t. The probability of n events is Prob(N(t) = n) = P_n(t) = F^(n)(t) − F^(n+1)(t). The corresponding pdf for the distribution proposed here is g(t; α, β), with cdf G^(n)(t) and count probabilities Q_n(t). The arbitrary stopping time t is retained throughout, but without loss of generality one can set t = 1. Note that the results here have been checked by simulating count probabilities from the distributions and comparing with the formulae derived. The fortran prototype programs, which use NAG (Numerical Analysis Group) library routines, are available online. Derivation of the ERP-γ probability mass function (pmf) The derivation of the pmf has two steps: first, obtaining the probabilities Q_n(t) as an integral, then showing that the integral can be evaluated in terms of the incomplete gamma function. In an equilibrium RP, the time to the first event has pdf S(t)/µ, where S is the survival function, i.e. S(t) = 1 − F(t), and so the cdf for n events is G^(n)(t) = (1/µ) ∫₀^t S(w) F^(n−1)(t − w) dw (equation (1)). This equation means that the first event occurs at time w, and n − 1 events then occur in the remaining time, so that at least n events have occurred by time t. We can also write, for an RP, P_n(t) = ∫₀^t f^(n)(u) S(t − u) du (equation (2)), which means that exactly n events have occurred by time t when n events have occurred by some time u and no further events then occur. The similarity of this equation to (1) can be exploited to obtain a corresponding single-integral expression for G^(n)(t) for n > 0; of course, G^(0)(t) = 1. Hence the probabilities Q_n(t) = G^(n)(t) − G^(n+1)(t) can be written as a single integral involving the cdfs F^(n) (equation (3)). Equation (3) is true for any RP, and is found in Cox (1962), where it is derived using Laplace transforms, rather than by the probabilistic argument used here. From here on, the derivation is specific to the ERP-γ distribution. The second step evaluates the integrals appearing in equation (3) for n > 0. Exchanging the order of integration or integrating by parts expresses them in terms of incomplete gamma functions, and simplifying the last term by a further integration by parts finally gives the probabilities Q_n(t) in closed form in terms of the incomplete gamma function. The cdf is needed when there is censoring, so that for example large counts are recorded only as being greater than some count M; it follows by summing the Q_n(t). Properties From the formulae for the probabilities Q it is trivial to verify that Σ_{i=0}^{∞} Q_i = 1 and that E(N(t)) = t/µ.
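Both checks are easy to reproduce numerically. The sketch below is only an illustration (the authors' own programs are Fortran/NAG): it assumes the parametrisation used in this section, with rate α and shape β so that µ = β/α, computes the ordinary RP-γ probabilities P_n(t) = γ(αt; nβ) − γ(αt; (n+1)β) with SciPy's regularised incomplete gamma function, and checks the ERP-γ mean E(N(t)) = t/µ by Monte Carlo using the length-biased first-arrival construction described in the Properties discussion below. The parameter values are illustrative, not fitted.

```python
# Illustrative sketch (not the paper's Fortran/NAG code): RP-gamma count
# probabilities and a Monte Carlo check of the ERP-gamma mean E[N(t)] = t/mu.
# Assumes the gamma parametrisation f(t; alpha, beta) with rate alpha and
# shape beta, so that mu = beta/alpha.
import numpy as np
from scipy.special import gammainc  # regularised lower incomplete gamma

def rp_gamma_pmf(n_max, t, alpha, beta):
    """P(N(t) = n) = gamma(alpha*t; n*beta) - gamma(alpha*t; (n+1)*beta)."""
    n = np.arange(n_max + 1)
    upper = gammainc((n + 1) * beta, alpha * t)
    # F^(0)(t) = 1 by definition, so handle n = 0 separately rather than
    # calling gammainc with a shape parameter of zero.
    lower = np.where(n == 0, 1.0, gammainc(np.maximum(n, 1) * beta, alpha * t))
    return lower - upper

def simulate_erp_gamma_counts(t, alpha, beta, n_sims=200_000, seed=0):
    """Counts from an equilibrium RP with gamma interarrival times.

    The first interval is U * Y with Y ~ Gamma(beta + 1, rate alpha) and
    U ~ Uniform(0, 1) (the length-biased construction described below);
    later intervals are ordinary Gamma(beta, rate alpha) variables.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_sims, dtype=int)
    arrival = rng.uniform(size=n_sims) * rng.gamma(beta + 1, 1.0 / alpha, size=n_sims)
    alive = arrival <= t
    while alive.any():
        counts[alive] += 1
        arrival = arrival + rng.gamma(beta, 1.0 / alpha, size=n_sims)
        alive = arrival <= t
    return counts

alpha, beta, t = 2.3, 1.2, 1.0          # illustrative parameter values only
pmf = rp_gamma_pmf(20, t, alpha, beta)
print("RP-gamma pmf sums to", pmf.sum())   # ~1 once n_max is large enough
counts = simulate_erp_gamma_counts(t, alpha, beta)
print("simulated ERP-gamma mean", counts.mean(), "vs t/mu =", t * alpha / beta)
```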
The formula for the variance also simplifies: if β > 1, var(N(t)) < t/µ, so the distribution is underdispersed, and if β < 1 it is overdispersed. From the general formula for the asymptotic variance of an equilibrium RP given in Cox and Miller (1965), var(N(t)) ≃ σ²t/µ³ = t/(µβ), which works well (it can be used as an exact formula) when αt ≫ 1. The distribution is asymptotically normal, but can be overdispersed or underdispersed, as shown in figures 1 and 2. Figure 1 shows a peak at zero. Intuitively, this arises because the first random event has a distribution with higher mean than the others if β < 1. The peak at zero arises when the distribution is very overdispersed, but it is absent for more modest overdispersion. When the mean is low enough, the overdispersed distribution is J-shaped. Random numbers can be generated for the RP-γ distribution by generating random numbers X₁, X₂, · · · from the gamma distribution, and counting how many numbers it takes for the sum X₁ + X₂ + · · · to exceed t (the discrete random number is 1 less than this). For the ERP-γ distribution the first random number comes from a different distribution. However, as ever, the solution is implicit in Cox (1962), who gives a derivation of the time to first event by considering the length-biased pdf xf(x)/µ (Cox, 1962, section 5.4). Following this argument, one can generate the time to the first renewal by generating a random number Y from the gamma(α, β + 1) distribution, and then taking the time to first renewal as UY, where U is a random number from the [0, 1] uniform distribution. The ERP-γ distribution does have some causal basis. Sometimes one starts collecting data when a counting process akin to a renewal process is already underway, e.g. one starts counting failures of equipment that is already in use. In this case, the ERP-γ distribution is a flexible model of what is happening. In many other cases, the connection to a renewal process will be vaguer, and sometimes, as when for example counting bacteria on a microscope slide, where the process does not occur in time at all, the distribution is simply a mathematically and computationally convenient choice. Other related distributions It is worth mentioning that Winkelmann's RP-γ distribution can be easily generalised into a simple hurdle model, by exploiting the reproductive property of the gamma distribution. The time to the initial event can have shape parameter β + δ, where δ > −β. Then the probability of n events is P_n(t) = γ(αt; nβ + δ) − γ(αt; (n + 1)β + δ), with P_0(t) = 1 − γ(αt; β + δ). This allows the probability of zero events to be varied, and a test of whether a hurdle is present or not to be carried out. In the case of fertility, one might suppose that many parents decide to stop after having two children, so the third interarrival time could be lengthened by increasing its shape parameter from β to β + δ. Precisely this model gives the best fit to the fertility data used in the example in the next section. Example This is the completed fertility dataset from the second (1985) wave of the German Socio-Economic Panel, described in Winkelmann (1995). It contains the number of children (0–11) and 10 demographic covariates for 1243 women. The count distribution is slightly underdispersed, and becomes more so after regressing on the covariates. Six distributions were fitted: the Poisson, Winkelmann's RP-γ distribution, the ERP-γ distribution, a mixture of ERP-γ distributions with different β values, the same with different α values, and the RP-γ(3) distribution. In addition, McShane et al.
(2008) has fitted a distribution with Weibull interarrival times and the heterogeneous Weibull distribution. First, omitting all covariates, table 1 shows the fitted parameters and minus the log-likelihood values, and figure 3 shows the data and some of the fitted distributions. It can be seen that the ERP-γ fits slightly better than the RP-γ, so using an equilibrium RP has not worsened the fit. The mixture of ERP-γ distributions obtained by using two values of β requires two additional parameters, and gives a much better fit; almost the same can be achieved by varying α. However, the best fit results with the RP-γ(3) distribution, with δ = 0.66 added. This probably best reflects the underlying reality, that probably many couples decide that two children is enough. The 'hazard' of producing the third child is reduced. For completeness, table 2 shows the fit results obtained with covariates, using the RP-γ and ERP-γ distributions. They are very similar, but in general, the ERP-γ coefficients are slightly larger, as are the standard errors. The ERP-γ results have the merit of being more easily interpretable. The estimated mean E(N (1)) = 2.314, and so the estimated marginal effects are just the coefficient values multiplied by this number. The marginal effect ∂E(N |x)/∂x j = β j E(N |x) is also trivial to calculate given the model fit, and the standard error of the marginal effect can be found by applying the delta-method and using the estimated covariance matrix on the fitted model parameters. Conclusions A new class of discrete distributions based on an equilibrium renewal process has been introduced. A member of this class where the probabilities can be written in closed-form has been derived, its properties discussed, and fitted to data. This is a flexible distribution that generalizes the Poisson, can model under or over-dispersion, and which allows marginal effects to be computed. Computation of probabilities requires only the incomplete gamma function, available on just about every computing platform. Hurdle models are widely used to model an excess of zero counts. Introducing a hurdle directly would mean that the RP was no longer an equilibrium RP, and so the simple expression t/µ for the mean and the ability to easily compute marginal effects would go. However, at the cost of introducing two extra parameters, one can model a hurdle as a mixture of two ERP-γ distributions with different values of the shape parameter β. Extensions to Winkelmann's RP-γ distribution have also been introduced. This distribution is simpler to compute with than the ERP-γ distribution, but does not allow easy computation of the mean and hence of marginal effects. Having sacrificed this property, however, one can introduce hurdles at any point, e.g. in the example, a hurdle after the birth of two children. This requires only one additional parameter. The inverse Gaussian distribution has also been discussed. As it does not contain the Poisson distribution as a special case, it does not currently look as attractive as the ERP-γ distribution, but it may yet find applications. Compared to a Poisson distribution of the same mean and variance it has more probability at zero, a peak shifted slightly to the right, and a shorter tail. Further work could proceed on two fronts, the first being the search for more ERP distributions with tractable computational properties. Also extensions with additional parameters that are easy to compute would add more flexibility. 
This could be done by allowing the termination time t to have a distribution, or using a 3-parameter distribution for interarrival times. The second front is experience with other distributions such as the ERP-Weibull, for which finding probabilities requires more extensive computation, but is still quite feasible. To generate random numbers from the IG distribution, the method quoted in Gentle (2003) or Chhikara and Folks (1989) is efficient and simple to program. Thus random numbers from the RP-IG distribution can be found as for the ERP-γ distribution. For the ERP-IG distribution, one can generate the first number from the length-biased distribution as before. Using the connection between the length-biased IG and IG distributions already mentioned gives the length-biased r.v. Y = µ 2 /X, where X ∼ IG(µ, λ). This is then multiplied by U , a uniformly-distributed random variable, as done for the ERP-γ distribution. To regress λ, µ on covariates, one can reflect that φ = λ/µ plays the rôle of a shape parameter (the coefficient of variation is φ −1/2 ) and so is analogous to β for the gamma distribution, and can be kept constant, while λ/µ 2 , the hazard function in the exponential tail, is analogous to α, and should depend on the covariates. The same conclusion follows by equating the formulae for means and variances between the gamma and IG distributions. Hence we take µ The fits to the number of births dataset was worse than for the Poisson distribution. The IG distribution with coefficient of dispersion equal to unity differs from the Poisson in having a higher probability of zero, a peak at higher counts, and a shorter tail. The RP-IG and ERP-IG distributions can be over or underdispersed. For large t, the criterion for overdispersion is that λ < µ, but for small t a value of λ of somewhat less will give overdispersion. The computational conclusion is that the ERP-IG distribution is even easier to compute with than the ERP-γ, requiring only the ubiquitous error function. Further, generation of gamma-distributed random variables is not easy, although most platforms will have routines that can do this. The corresponding problem for the IG distribution is trivial, requiring only the generation of Gaussian and uniform random numbers. Table 2: Fits of RP-γ and ERP-γ models to the fertility data with covariate regression.
2018-02-28T10:33:20.000Z
2018-02-28T00:00:00.000
{ "year": 2018, "sha1": "4bc167a7bc85b02b326c66ebfa1fbb0d4ef1c732", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4bc167a7bc85b02b326c66ebfa1fbb0d4ef1c732", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
247619142
pes2o/s2orc
v3-fos-license
Interpretable Prediction of Lung Squamous Cell Carcinoma Recurrence With Self-supervised Learning Lung squamous cell carcinoma (LSCC) has a high recurrence and metastasis rate. Factors influencing recurrence and metastasis are currently unknown and there are no distinct histopathological or morphological features indicating the risks of recurrence and metastasis in LSCC. Our study focuses on the recurrence prediction of LSCC based on H&E-stained histopathological whole-slide images (WSI). Due to the small size of LSCC cohorts in terms of patients with available recurrence information, standard end-to-end learning with various convolutional neural networks for this task tends to overfit. Also, the predictions made by these models are hard to interpret. Histopathology WSIs are typically very large and are therefore processed as a set of smaller tiles. In this work, we propose a novel conditional self-supervised learning (SSL) method to learn representations of WSI at the tile level first, and leverage clustering algorithms to identify the tiles with similar histopathological representations. The resulting representations and clusters from self-supervision are used as features of a survival model for recurrence prediction at the patient level. Using two publicly available datasets from TCGA and CPTAC, we show that our LSCC recurrence prediction survival model outperforms both LSCC pathological stage-based approach and machine learning baselines such as multiple instance learning. The proposed method also enables us to explain the recurrence histopathological risk factors via the derived clusters. This can help pathologists derive new hypotheses regarding morphological features associated with LSCC recurrence. Introduction Histopathological features are useful in identification of tumor cells, cancer subtypes, and the stage and level of differentiation of the cancer. Hematoxylin and Eosin (H&E)-stained slides are the most common type of histopathology data and the basis for decision making in the clinics. H&E-stained slides include several cellular morphological features but the relationship between these features and patient prognosis or genetic mutation of the corresponding tumor tissue remains unknown. Current findings are primarily based on pathologist expertise. For example, the papillary pattern in lung adenocarcinoma is a recognizable signal of invasive tumor cells and poor prognosis (high recurrence rate). However, these predictive morphological patterns are not available for many other cancer subtypes. Specifically, at the moment, there are no known pathological patterns associated with recurrence in lung squamous cell carcinoma (LSCC). Deep learning has achieved promising results in several classification and prediction tasks based on histopathology images (Bejnordi et al., 2017;Coudray et al., 2018;Wulczyn et al., 2020;Fu et al., 2020;Kather et al., 2020). In this paper, we focus on predicting LSCC recurrence and metastasis using H&E stained slides. Each whole slide images (WSI) contains more than 3.5×10 8 pixels on average, and can be cropped into hundreds of tiles, but the label is at the slide/patient level. A typical cohort with high quality data, at the moment, only has around 500 patients. 
While for several tasks such as cancer subtype detection or cancer vs normal cell identification, such cohorts have led to successful models (Bejnordi et al., 2017;Coudray et al., 2018;Campanella et al., 2019;Iizuka et al., 2020;Hong et al., 2021), prediction of other outcomes such as recurrence or genomics mutation remain challenging using standard supervised learning (Fu et al., 2020;Kather et al., 2020;Wulczyn et al., 2021). Most prior work extends slide-level labels to all the tiles within the slide, which is a reasonable assumption in subtype prediction tasks, but is not valid in other classification tasks. Lack of tile-level labels remains one of the challenges in predicting cancer recurrence. Self-supervised learning (SSL), which leverages unlabeled images to learn model parameters, is an alternative approach that can utilize the rich tile-level image data. Once histopathological features are trained, they can be fine-tuned for various downstream classification tasks using fewer labeled data (Li et al., 2020). Also, the pretrained image representations can also be used to interpret histopathological features. Wulczyn et al. (2021) interprets the histopathological features associated with survival of colon-cancer patients by clustering tile-level embeddings pretrained with natural images. Models trained via SSL on histopathological data can learn domain-specific features and are better suited for downstream tasks compared to natural images. However, naive SSL application can also be susceptible to batch effects, which are a common problem in histopathology. Models trained via standard SSL methods are likely to learn undesirable features due to the batch effect, which may lead to overfitting (Tsai et al., 2021b). In this study, we explore the morphological features of LSCC recurrence and metastasis with novel SSL method, based on conditional SSL. We propose a sampling mechanism within contrastive SSL framework for histopathology images that avoids overfitting to batch effects. Using our proposed SSL training combined with clustering, we show significant improvement in the recurrence prediction on the LSCC compared to pathological stage-based model, and several deep learning baselines. We also provide interpretation of the learned model to identify morphological patterns that may be associated with LSCC recurrence. Related Work Representation Learning for Histopathology Whole Slide Images (WSI) are large and are therefore usually cropped into several tiles, and the tiles are used as units of data. Ground-truth annotations for each tile are not available in many tasks, because this would require intensive human labor, or is infeasible. For instance, for some tasks such as prognosis, only patient-level labels are available. A prevalent approach in the literature has been to train a tile-level network with slide-level label using standard supervised learning, and pool the tile-level predictions during inference Coudray et al., 2018;Fu et al., 2020;Iizuka et al., 2020;Hong et al., 2021;Bejnordi et al., 2017;Campanella et al., 2019;Kather et al., 2020). Another approach is multiple instance learning (MIL) (Ilse et al., 2018), which interprets each slide as a bag of instances, each corresponding to a tile. However, computer memory is a significant constraint when training these models. Leveraging pretrained convolutional neural network (CNN) weights is a shortcut to avoid this memory issue. 
Pretraining can be carried out via fully-supervised tasks like tumor classification (Fu et al., 2020), ranking tasks on natural images (Hegde et al., 2019), MIL with downsampled bags (Zhao et al., 2020), or self-supervised learning. Self-supervised Learning Recent advances in self-supervised learning (SSL) for computer vision have improved the quality of latent representations in cases without sufficient annotated samples. These methods have shown promising results in medical imaging (Li et al., 2020; Ciga et al., 2022; Dehaene et al., 2020; Sowrirajan et al., 2020; Azizi et al., 2021). Contrastive SSL is currently the state-of-the-art SSL method for natural images (Chen et al., 2020b,c; Tsai et al., 2021a; Chen et al., 2020a; He et al., 2020; Grill et al., 2020; Caron et al., 2020; Zbontar et al., 2021; Caron et al., 2021). In this approach, models are trained to increase the similarity of representations corresponding to augmented copies of the same image, while decreasing the similarity of augmented copies from different images. Contrastive SSL has been applied to histopathology (Li et al., 2020; Ciga et al., 2022). However, the contrastive loss, InfoNCE (van den Oord et al., 2018), is not tailored to the special properties of WSIs. Robinson et al. (2021) show that the contrastive loss tends to learn "shortcuts" within the similar and dissimilar samples, which inadvertently suppress important predictive features. In histopathology, features associated with slide-specific batch effects (i.e. staining, procedural artifacts, etc.) are captured by contrastive learning more easily than meaningful pathological features, as we show in Section 3. Some methods have been proposed to improve SSL for the multiple instance learning approach. Azizi et al. (2021) use instances from the same bag as positive augmented samples instead of general augmentation. However, the method promotes similarity between tiles from the same slide, which makes it even harder to avoid batch effects. Conditional contrastive SSL (Tsai et al., 2021b) has been proposed to remove undesired features in contrastive learning by conditionally sampling tiles based on these features during training. It can potentially remove undesired batch-effect-induced features such as slide identity in the histopathological case, but, as we will show in Section 5, standard conditional SSL is also sub-optimal at learning global features that differentiate histopathological variation between slides. Survival Analysis The likelihood of LSCC recurrence varies as time progresses, so we use survival analysis tools to model the risk of LSCC recurrence over time. The Kaplan-Meier estimator (Kaplan and Meier, 1958) is a non-parametric model that estimates the survival function over time; it shows what fraction of patients remain recurrence-free for a certain amount of time after treatment. Cox proportional hazards regression (Cox, 1972) is a semi-parametric method to model the effect of variables on the time at which a specified event is expected to happen. Variants of the Cox regression method have been proposed to extract features from complex or high-dimensional data such as images. Cox-nnet (Ching et al., 2018) and DeepSurv (Katzman et al., 2018) perform the regression with multilayer perceptron (MLP) networks. They are also compatible with other feature extraction networks such as CNNs. Another approach to survival analysis is to transform the regression into multiple classification problems over discretized time periods (Gensheimer and Narasimhan, 2019).
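For concreteness, the InfoNCE objective referred to above can be written as a temperature-scaled cross-entropy over similarities. The sketch below is a generic PyTorch-style illustration, not the implementation used in this work; the temperature value and cosine similarity are common defaults in SimCLR/MoCo-style training rather than choices taken from this paper.

```python
# Illustrative sketch of the InfoNCE objective (not the authors' code).
# Assumes L2-normalised embeddings and a temperature hyperparameter.
import torch
import torch.nn.functional as F

def info_nce(query, positive_key, negative_keys, temperature=0.07):
    """query: (B, D); positive_key: (B, D); negative_keys: (K, D)."""
    q = F.normalize(query, dim=-1)
    pos = F.normalize(positive_key, dim=-1)
    neg = F.normalize(negative_keys, dim=-1)

    # Similarity of each query with its own positive: (B, 1)
    pos_logits = (q * pos).sum(dim=-1, keepdim=True)
    # Similarity of each query with every negative: (B, K)
    neg_logits = q @ neg.t()

    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings: a batch of 8 queries against 128 negatives.
loss = info_nce(torch.randn(8, 64), torch.randn(8, 64), torch.randn(128, 64))
print(loss.item())
```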
Proposed Method Conditional Contrastive Learning Most contrastive SSL methods, including SimCLR (Chen et al., 2020a) and MoCo (He et al., 2020), optimize the InfoNCE objective. However, when the batch size is not much greater than the number of slides, there are hardly any tiles from the same slide in a batch, so the probability of forming negative pairs from tiles of the same slide is low. The network therefore tends to learn features caused by slide-level batch effects, which are easily captured by contrastive learning. Figure 1 (left panel) shows the UMAP projection of the representations of tiles learned using standard contrastive SSL (MoCo (He et al., 2020)) with random sampling. We can see that tiles from the same slide tend to cluster together, indicating that this method learns representations that are slide-specific and may lead to low generalization when presented with new patients. As an alternative, conditional contrastive learning (Robinson et al., 2021) optimizes C-InfoNCE, in which the positive x+ and the negatives {x−} are sampled given some condition z (in histopathology, for instance, having the same slide id). However, as we will show in Section 5, this approach also prevents the model from learning features that differentiate between slides and are also useful for the downstream task of interest. In order to learn from both inter-slide and inter-tile variations, we propose a two-level sampling method during training. Suppose the batch size is m; we first randomly sample n slides (n ≪ m), and then sample m/n tiles from each selected slide. Figure 1 (right panel) shows the UMAP projection of the representations of tiles learned using the two-stage sampling approach. Tiles from the same slide no longer cluster together, indicating that slide-specific features are less significant with this SSL approach. Consequently, conditional SSL is able to achieve better performance on the downstream task than classic MoCo, as we will show in Section 5. Clustering the Representation Space Unsupervised clustering algorithms have been shown to be an effective tool for processing SSL representations (Caron et al., 2018). We use a Gaussian Mixture Model (GMM) (Pedregosa et al., 2011) to cluster the tile representations in the training set. The GMM is a probabilistic model that estimates the probability of a sample belonging to a cluster: it assumes that the samples {x_i}, i = 1, · · ·, N, are generated from a mixture of k Gaussian distributions, and computes the probability that each tile embedding x_i belongs to each cluster z among the k clusters, i.e. p(z | x_i). After clustering each tile, we aggregate the tile probabilities to generate slide-level features. Suppose the slide S_j is composed of tiles whose representations are x_s ∈ S_j. The new slide-level feature v_j ∈ R^k is given by average pooling of the cluster probabilities over the slide: the z-th entry of v_j is v_j[z] = (1/|S_j|) Σ_{x_s ∈ S_j} p(z | x_s), for z = 1, · · ·, k. Analyzing the clusters can help with interpretability, revealing common morphological patterns in the cells that may be associated with cancer recurrence. Therefore, we use the cluster-generated features in our prediction model. Survival modeling In order to predict recurrence, we combine the cluster features obtained via the GMM with a survival-analysis model. For each slide, the triplet of features and slide labels {(v_j, y_j, t_j)}, j = 1, · · ·, N, is used, where v_j is the vector of cluster features, y_j is the binary label indicating LSCC recurrence, and t_j encodes the recurrence-free followup time for the patient, i.e.
If a patient was not observed to have recurrence during the followup period, we use the length of the followup time t_j as the time of censoring. Each t_j is computed with a granularity of 6 months. We fit a Cox regression model with L2-norm regularization using {(v_j, y_j, t_j)}, j = 1, · · ·, N, to compute the proportional hazard function of recurrence λ(t|v). Experiments Data This study analyzed lung squamous cell carcinoma (LSCC) patient data, including hematoxylin-and-eosin (H&E) stained histopathology slides from frozen specimens, recurrence status, and demographic information, from two cohorts, TCGA and CPTAC. We combine the data from the two datasets and split them by patient and institution (TCGA contains data from 42 institutions) into train, validation, and test sets containing 70%, 10%, and 20% of patients, respectively. Since the dataset is small, the analysis was conducted by nested cross validation, replicating the training and testing procedures on five different splits. Baseline Models Our baselines include a pathological stage-based model and a number of deep learning methods (details on models and inference are in Appendix B). Pathological stage Pathologists evaluate the progress of cancer with stages, and patients at higher stages are more likely to suffer from recurrence. Staging is a clinical "golden rule" for estimating recurrence and metastasis prognosis and, in the case of LSCC, it is at the moment the only method available for estimating the prognosis. We use the Kaplan-Meier estimator to compute the empirical recurrence hazard of each stage over time. Deep survival models We use the continuous-time model DeepSurv (Katzman et al., 2018) and the discrete-time model NN-Surv (Gensheimer and Narasimhan, 2019) as baselines. Results We evaluate the performance of recurrence prediction using the concordance index (C-index) and the Brier score at 2 years, comparing our approach to the baselines of Section 4. The C-index is the fraction of pairs for which the recurrence-free time and the predicted risk are concordant, and measures the discriminative power of a survival model. The Brier score at time t is the mean squared error between the recurrence status and the predicted probability of recurrence at time t, weighted by the inverse probability of censoring; it is a metric for calibration and probability estimation. In Table 1, we report the performance of recurrence prediction. The tumor's pathological stage provides a biologically motivated baseline, and the C-index of stage shows that predicting LSCC recurrence based on this metric remains a hard task. Our Cox regression model based on SSL clusters improves the C-index by 10%, which is a significant improvement beyond the current clinical approach. The Kaplan-Meier (KM) curves in Figure 2 also show the gap between the machine learning and stage-based predictions. The left panel shows that Stage I has a slightly lower recurrence rate than Stages II and III; however, the three curves overlap substantially. The right panel in Figure 2 shows the KM curves for cancer recurrence on the held-out test set according to our model, where high risk (and low risk) are defined as the top half (and bottom half) of patients according to predicted recurrence risk. The results show that high- vs. low-risk patients can be differentiated with our method. We also compared our method with the other deep survival baseline models of Section 4. All of the baselines under-perform our SSL-based clustering approach.
Among these baselines, we find that the models with multiple instance learning (MIL) generally perform better than end-to-end (E2E) learning with tile images and slide-level labels. MIL avoids the inconsistency between tile and slide labels, and it takes advantage of a pretrained CNN with frozen weights, which may alleviate overfitting. Comparing the two types of survival models, the differences between discrete- and continuous-time models are not significant. We also quantitatively evaluated the impact of the batch effect on contrastive SSL, illustrated in Figure 1. We experimented with different combinations of sampling in conditional contrastive learning and evaluated their performance on downstream recurrence prediction, shown in Table 2. Table 2: The performance of recurrence prediction with conditional SSL when sampling different numbers (n) of slides in each batch, with batch size m = 128. Random uses classic MoCo with random sampling; the model with n = 1 uses C-InfoNCE. Figure 3: Visual samples from clusters with positive regression coefficients in our survival model for recurrence prediction; the panels show large clusters of tumor (cluster 40), tumor with infiltration (cluster 48), and ring-structured tumor (cluster 43) (more samples in Figure 5, Appendix D). We kept the batch size the same in terms of the number of tiles, and varied the number of slides sampled in the first layer, ranging from 1 (the fully conditional case) to fully random sampling (classic MoCo). The model performance peaks at n = 4. As n increases, the models are more likely to learn features corresponding to batch effects; when n = 32, the C-index becomes close to that of random sampling. The standard C-InfoNCE (n = 1) prevents the contrastive learning method from learning variations among different slides beyond the batch effect. The experiments show that two-layer sampling with an appropriate n can achieve a good balance between the two extremes. Discussion To interpret the histopathological features indicating a high risk of recurrence, we evaluate the association between each tile cluster and recurrence based on our survival model. As the proportional hazard is computed as λ(t | v) = λ_0(t) exp(β^T v), we apply an exponential function to the Cox regression coefficients corresponding to the cluster features in order to obtain a measure of feature importance, reported in Appendix C. If the result is greater than one, the corresponding cluster is positively associated with recurrence. Figure 3 shows the dominant morphological patterns visible in the top clusters associated with high risk of recurrence in the Cox regression. We select the tiles with the highest probability of belonging to each cluster. The selected clusters and tiles were reviewed by a pathologist, who summarized the dominant pathological features in each cluster. High-risk histopathological features include large tumor clusters, tumors with infiltration, and ring-structured tumor cells. This may yield novel hypotheses about the causes of LSCC recurrence. A number of small outlier clusters also occur in our analysis, such as cluster 49, which capture frozen-section or slicing artifacts occasionally occurring in different parts of WSIs. Future studies can potentially exclude these tiles, as they are irrelevant to biological features.
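To make the pipeline and the coefficient-based interpretation above concrete, the following is a minimal sketch, not the authors' released code: it assumes tile embeddings from the SSL encoder are already available, uses scikit-learn's GaussianMixture (the GMM implementation cited in the method section) and, as one possible choice, the lifelines implementation of an L2-penalized Cox model; the variable names, the cluster count k and the penalty strength are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from lifelines import CoxPHFitter

def slide_cluster_features(tile_embeddings, slide_ids, k=50, seed=0):
    """Fit a GMM on tile embeddings and average-pool cluster posteriors per slide."""
    gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=seed)
    gmm.fit(tile_embeddings)
    posteriors = gmm.predict_proba(tile_embeddings)   # p(z | x_i), shape (N_tiles, k)
    slide_ids = np.asarray(slide_ids)
    slides = sorted(set(slide_ids))
    # v_j[z] = mean posterior of cluster z over the tiles of slide j
    feats = np.stack([posteriors[slide_ids == s].mean(axis=0) for s in slides])
    return slides, feats

def fit_penalized_cox(feats, followup_time, recurrence_event, l2=0.1):
    """L2-penalized Cox proportional-hazards model on slide-level cluster features."""
    cols = [f"cluster_{z}" for z in range(feats.shape[1])]
    df = pd.DataFrame(feats, columns=cols)
    df["duration"] = followup_time      # recurrence-free follow-up (e.g. 6-month units)
    df["event"] = recurrence_event      # 1 = recurrence observed, 0 = censored
    cph = CoxPHFitter(penalizer=l2, l1_ratio=0.0)   # pure L2 (ridge) penalty
    cph.fit(df, duration_col="duration", event_col="event")
    importance = np.exp(cph.params_)    # exp(beta) > 1: positive association with recurrence
    return cph, importance.sort_values(ascending=False)
```

Under this sketch, clusters whose exponentiated coefficient exceeds one correspond to the high-risk morphologies discussed above; whether a GMM, the diagonal covariance, or this particular penalty is the right choice for a new dataset would need to be validated.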
Conclusion and Future Work In this paper, we leveraged conditional self-supervised contrastive learning to learn the morphology of whole slide images and analyzed the features associated with lung squamous cell carcinoma recurrence and metastasis. To adapt self-supervision to the histopathology domain, we proposed a two-layer sampling method that alleviates overfitting to slide-level batch effects while retaining strong discriminative performance. Our method outperforms clinical and deep learning baselines for LSCC recurrence prediction. In addition, it makes it possible to identify tissue morphology patterns that may be predictive of future recurrence. Survival Models We experiment with other deep survival models which directly use the WSI tiles as inputs. We use a continuous-time model (DeepSurv (Katzman et al., 2018)) and a discrete-time model (NN-Surv (Gensheimer and Narasimhan, 2019)) as baselines. DeepSurv, similar to Cox proportional hazards regression, models λ(t | x) = λ_0(t) exp(f(x)), where f(x) ∈ R is the network output for sample x, and λ_0(t) is a baseline hazard function, estimated by Breslow's method (Breslow, 1975). NN-Surv models the probability of recurrence separately on each time interval. Assuming there are T intervals (0 ≤ t_0 < t_1 < · · · < t_T), the final layer of NN-Surv has T outputs, each representing the probability that recurrence does not occur during that interval, conditioned on surviving the previous one (i.e. p(t > t_{i+1} | t > t_i, x)). The hazard is then defined as λ(t | x) = ∏_{i=1}^{j} p(t > t_{i+1} | t > t_i, x), for all t ∈ [t_j, t_{j+1}). We compare our method with these two types of deep survival models in Section 5. We apply each model using fully supervised end-to-end learning and multiple-instance learning. For end-to-end learning, we conduct tile-level training with patient-level labels and aggregate the predictions by averaging over the tiles in each slide at inference. For multiple-instance learning, we take tile representations pretrained by MoCo (He et al., 2020) and learn attention weights over the tile representations to generate a slide-level prediction (Ilse et al., 2018).
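As a small worked example of the discrete-time formulation above, the snippet below (an illustration, not the original implementation) turns the per-interval conditional probabilities produced by an NN-Surv-style output layer into a cumulative recurrence-free probability and a recurrence risk at a chosen horizon; the probability values used are made up for demonstration.

```python
import numpy as np

def recurrence_free_curve(cond_probs):
    """Cumulative product of per-interval conditional probabilities
    p(t > t_{i+1} | t > t_i, x), i.e. the estimated chance of remaining
    recurrence-free through the end of each interval."""
    return np.cumprod(np.asarray(cond_probs, dtype=float))

def recurrence_risk_at(cond_probs, horizon_idx):
    """Estimated probability of recurrence by the end of interval `horizon_idx`."""
    return 1.0 - recurrence_free_curve(cond_probs)[horizon_idx]

# Four 6-month intervals -> risk of recurrence within 2 years (illustrative values)
conditional = [0.95, 0.90, 0.90, 0.85]
print(recurrence_free_curve(conditional))   # [0.95, 0.855, 0.7695, 0.654075]
print(recurrence_risk_at(conditional, 3))   # ~0.346
```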
2022-03-24T06:47:33.457Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "5e2bdaa473cdf4e92a2319b583a560fdf512da55", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2e450b5a845df055f44a88b0d724784b2d7d69d1", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
2135256
pes2o/s2orc
v3-fos-license
Cellular factories for coenzyme Q10 production Coenzyme Q10 (CoQ10), a benzoquinone present in most organisms, plays an important role in the electron-transport chain, and its deficiency is associated with various neuropathies and muscular disorders. CoQ10 is the only lipid-soluble antioxidant found in humans, and for this, it is gaining popularity in the cosmetic and healthcare industries. To meet the growing demand for CoQ10, there has been considerable interest in ways to enhance its production, the most effective of which remains microbial fermentation. Previous attempts to increase CoQ10 production to an industrial scale have thus far conformed to the strategies used in typical metabolic engineering endeavors. However, the emergence of new tools in the expanding field of synthetic biology has provided a suite of possibilities that extend beyond the traditional modes of metabolic engineering. In this review, we cover the various strategies currently undertaken to upscale CoQ10 production, and discuss some of the potential novel areas for future research. Background Coenzyme Q, commonly known as ubiquinone or CoQ, is a lipid-soluble, powerful antioxidant, and an essential cofactor in mitochondrial oxidative phosphorylation [1][2][3]. Coenzyme Q is species specific, with differences dictated by the number of isoprenyl units on the isoprenoid side chain. For example, 10 isoprenyl units are found in human and the fission yeast Schizosaccharomyces pombe but fewer units are found in other species (CoQ 8 in Escherichia coli, CoQ 9 in Arabidopsis thaliana, and CoQ 6 in Saccharomyces cerevisiae) [1]. The isoprenoid side chain is responsible for the lipid-soluble nature of CoQ, whereas its antioxidant capacity derives from its quinone head, which can enable electron transfer (Fig. 1). Because of this electron-sequestering property, CoQ 10 acts as an antioxidant at cellular membranes to counteract the oxidation of lipids or lipoproteins [4]. CoQ 10 has roles in other physiological processes, including sulfide oxidation, regulating the mitochondrial permeability transition pore, and in the translocation of protons and Ca 2+ across biological membranes [5,6]. A detailed account of the various aspects of CoQ biosynthesis have been described at length elsewhere [1][2][3][4][5][6]. CoQ 10 is the only lipid-soluble antioxidant produced by humans, and it localizes to almost every membrane, ranging from mitochondrial membranes to that of very low density lipoproteins (VLDL) [7]. This solubility means that CoQ can protect lipoproteins and lipids from peroxidation and oxidative damage [8]. CoQ 10 also serves alongside other antioxidants, such as vitamins C and E, to combat free-radical damage arising from energetic mitochondrial reactions [9,10]. Given its myriad functions and physiological importance, it is not surprising that CoQ deficiency can result in numerous diseases. In model organisms, such as S. cerevisiae and S. pombe, CoQ deficiency is not lethal but results in growth defects on minimum medium, and a heightened sensitivity to oxidative stress [11][12][13][14][15]. In Caenorhabditis elegans, CoQ deficiency leads to GABA neuron degeneration, and in Drosophila melanogaster, it can cause mitochondrial stress and neuronal apoptosis [16,17]. In humans, CoQ 10 deficiency has been implicated in various diseases involving muscle and neural development, with the severity of the disease correlated with the acuteness of the CoQ 10 shortfall [18]. 
These diseases may manifest in conditions such as central nervous system (CNS) dysfunction, By virtue of its therapeutic relevance, CoQ 10 is of particular importance in the biomedical and health supplement scene. Oral CoQ 10 supplements are often prescribed alongside treatments for various diseases [26]. One example is its co-administration with HMG-CoA (3-hydroxy-3-methylglutaryl-coenzyme A) reductase (HMGR) inhibitors, widely used cholesterol-lowering drugs otherwise known as statins. HMGR catalyzes the formation of mevalonic acid, the precursor for cholesterol and CoQ 10 biosyntheses [27]. Patients using statins show lower blood levels of CoQ 10 , and this justifies the need for CoQ 10 supplementation to reduce the cardiomyopathy risk associated with statin use [27][28][29][30]. The presence of CoQ 10 is however implicated in resistance to chemotherapeutic drugs, and this calls for caution in administering CoQ 10 alongside certain agents [31,32]. CoQ 10 production decreases with aging [33], as does the antioxidant capability of the cell. Increased oxidative stress in aging cells may be ameliorated with dietary supplementation of CoQ 10 [34]. Indeed, CoQ 10 has garnered great popularity as an antioxidant in moisturizers, anti-wrinkle and anti-aging skin care treatments [35][36][37]. With the growing demand for skin care cosmetics and public awareness of the importance of antioxidants, we will likely see an increase in the demand for CoQ 10 products on the market quite quickly [38]. Given that CoQ 10 is endogenously synthesized, there should be fewer unwanted side effects from its therapeutic use as compared with other synthetic compounds, and this has been supported by tolerability studies for high CoQ 10 doses [39]. Hence, attention has surged in the therapeutic use of CoQ 10 in non-curable diseases challenging modern societies including Alzheimer's, Huntington's and Parkinson's, and cardiovascular diseases [40][41][42]. Industrial production of CoQ 10 The range of uses for CoQ 10 across the pharmaceutical and cosmetics industries has meant that there is great commercial interest to scale up the production of CoQ 10 . Frederick Crane first isolated CoQ 10 from a bovine heart source in the late 1950s [43]. Since then, industrial attempts to produce CoQ 10 have centered on animal tissue extraction, semi-chemical synthesis, and microbial fermentation [44,45]. The chemical synthesis of CoQ 10 has typically involved solanesol as a starting substrate and the source of the isoprenoid tail, and this is carried out before it is combined with the quinone head [46]. However, as with most chemical processes, there are numerous costs associated with such high-energy catalysis reactions because of the need for expensive substrates and because of the significant chemical waste generated from its production [47][48][49]. The chemical synthesis of CoQ 10 also lacks stereoselectivity, and this makes it difficult to separate optical isomers to obtain the all-trans biologically viable isomer [50]. Owing to these difficulties, microbial biosynthesis has become a preferred avenue of CoQ 10 production. The cell-based catalysis of compounds does not require harsh catalytic conditions of heat and pressure that typify many chemical synthesis processes. Furthermore, the production costs tend to be lower, cheap growth media provides an appropriate substrate, and expensive co-substrates can be recycled [48,51]. 
A living cellular system is also scalable, and the precision of the cellular catalytic machinery circumvents the problems of stereoselectivity [52,53]. Furthermore, unlike with chemical processing, altered genetics does not significantly affect the operating costs, meaning that the efforts associated with constructing a high-titer-producing organism are worthwhile. Through microbial biosynthesis, metabolic engineering approaches can be used to increase the titer of CoQ 10 and overcome some of the limiting steps along the biosynthetic pathway. Metabolic engineering approaches initially used chemical mutagenesis-based selection and chemical engineering procedures that centered on manipulating substrate flux; however, the field has since expanded to include other strategies from a genetics standpoint [48,54]. The process varies depending on promoter choice and strength, cassette copy number, and the localization or tethering of enzymes to scaffolds [55,56]. The choice of cassette and promoter are typically host dependent, given that promoter strength and usability rely on a species-specific genetic environment and functionality. Furthermore, enzymes involved in the tail end of CoQ 10 production are localized in the mitochondria, leading to models that propose the involvement of a membranebound complex containing multiple polypeptides of the CoQ 10 biosynthesis enzymes [57]. Improving flux remains one of the most straightforward methods to increase yield [48,58]. Typically, this involves finding and circumventing rate-limiting steps in Fig. 1 Chemical structure of coenzyme Q 10 . This molecule consists of a isoprenoid side chain composed of ten tandemly linked isoprenyl groups attached to a quinone head group biochemical pathways and then employing strong promoters to increase the expression of key pathway genes to direct biochemical flux. A parallel option entails knocking down the expression of genes in alternate pathways that branch off the pathway of interest, and this can be concomitantly administered, with care taken to ensure that these manipulations do not undermine cellular viability and robustness. Alleviating chemical bottlenecks that might hamper the production of the desired compound can also be achieved by including genes that reconstitute cofactors, such as NADPH and S-adenosyl methionine (SAM). These cofactors play essential roles in numerous biochemical pathways [54,59]. Overall, it is clear that close scrutiny and careful optimization of biosynthetic pathways can optimize and direct the metabolic flux. Biosynthesis of CoQ 10 Entry points to CoQ 10 biosynthesis CoQ biosynthesis involves discrete synthetic stages: production of the aromatic group that forms the quinone head, production of the isoprene tail, attachment of the quinone head to the isoprene tail, and the subsequent steps that culminate in the formation of the final CoQ 10 product [1,60]. In yeast, mitochondria are responsible for CoQ synthesis. However, in humans, both mitochondria and Golgi apparatus are proposed sites for CoQ synthesis. The chemical precursors for both the quinone head and isoprene tail are organism specific. The quinone head is derived from the chorismate precursor in the shikimate pathway in prokaryotes but from tyrosine in higher eukaryotes (Fig. 2). 
The isoprene tail derives from MEP (2-C-methyl-d-erythritol 4-phosphate) in prokaryotes and plant plastids, which stems from glyceraldehyde 3-phosphate (G3P), whereas, in eukaryotes, the tail is produced from acetyl-CoA in the mevalonate pathway [2,61]. These multiple entry points into the pathway could be exploited to optimize flux for yield improvement. The engineering concept of 'push and pull' to divert metabolic flux implicates that both the inflow and outflow reactions must be increased synchronously, otherwise an accumulation of one product will limit the flux and cause an imbalance in the system. Therefore, it is crucial to understand the species-specific biosynthetic pathways that lead to CoQ 10 production. There are several biosynthetic pathways of concern, each of which we will address separately. Rate-limiting steps in biosynthesis of the isoprenoid chain The first pathway provides the precursors for synthesizing the isoprene tail; if using a prokaryotic system, this is achieved through the MEP pathway. The MEP pathway starts with the interaction between G3P and pyruvate to form 1-deoxy-d-xylulose 5-phosphate (DXP) (Fig. 2), which is reported to be the major limiting step in the formation of the isoprene tail [62]. Indeed, efforts to increase the prokaryotic expression of carotenoids (which share the isoprenoid precursor pathway of MEP) have focused on improving the first catalytic step of DXP formation. Under such contexts, 1-deoxy-d-xylulose-5-phosphate synthase (DXS) and 1-deoxy-d-xylulose 5-phosphate reductoisomerase (DXR) are typically overexpressed to improve the catalytic formation of DXP and its subsequent conversion to MEP [60]. These reactions eventually yield isopentenyl diphosphate (IPP), which is used to initiate isoprene chain elongation in the isoprenoid pathway. Similar efforts can be co-opted for the production of CoQ [63]. Conversely, in the eukaryotic platform, the mevalonate pathway begins with acetyl-CoA and ends with the similar production of IPP (Fig. 2). Midway through the pathway is the catalysis of HMG-CoA to mevalonate by HMGR, the target of statins. Unlike with statins, however, which seek to reduce HMGR activity, here aiming to increase its activity instead, so as to increase flux to the IPP pathway. Indeed, the lower K m values of the downstream IPP pathway enzymes (farnesyl transferase and geranylgeranyl transferase) imply that the enzymatic reactions catalyzed (by enzymes including farnesyl and geranylgeranyl transferases) will reach saturation before that of HMG-CoA [7,64]. This concept of exploiting HMGR for increased metabolic production is common; for example, a truncated HMGR lacking its inhibitory site can delay enzyme saturation [65,66]. Regardless of the pathway source, downstream signaling leads to IPP and its isomer dimethylallyl diphosphate (DMAPP) (Fig. 2). IPP and DMAPP combine to form geranyl diphosphate (GPP), and this compound is sequentially lengthened by additional IPP moieties to form farnesyl diphosphate (FPP), geranylgeranyl diphosphate (GGPP), and the subsequent n-isoprene tail [61]. Depending on the host organism, components of the IPP pathway are also crucial branch points for several important compounds, which makes optimization of the isoprenoid pathway a lucrative endeavor (and one that has been done extensively in S. cerevisiae [59]). 
GPP can branch off and undergo reactions that lead to the formation of monoterpenoids; FPP, likewise, can form steroids and cholesterol; and GGPP can form carotenoids and retinoids before decaprenyl diphosphate [1]. Studies suggest that inhibiting these various branch points could direct metabolic flux from GPP towards decaprenyl diphosphate, as seen in FPP yields through the downregulation of squalene synthase [67]. CoQ 10 production rates are thought to be limited by the availability of IPP, since the quinone head is produced from the relatively abundant chorismate or tyrosine [68,69]. Lee et al. Microb Cell Fact (2017) 16:39 However, the tail length of CoQ, which contains varying numbers of IPP units, may also be rate-limiting. Although CoQ can be produced by multiple microbial platforms, each microbe synthesizes CoQ with a characteristic number of the IPP units. For example, S. cerevisiae and E. coli produce CoQ 6 and CoQ 8 , respectively, whereas S. pombe and humans naturally produce CoQ 10 [60]. Evidence shows that polyprenyl diphosphate synthase is the key determinant of IPP chain length, as this enzyme catalyzes polyisoprenoid tail extension [70]. In comparison, the polyprenyl diphosphate:4-HB transferase (UbiA/Coq2), which joins the tail and the quinone head, is promiscuous in terms of its isoprenoid chain length choice [71]. Therefore, any attempts to utilize a heterologous, non-native host to produce CoQ 10 would need to optimize or replace the polyprenyl diphosphate synthase to achieve the appropriate tail length (10 isoprene subunits). Many groups have in fact approached this problem by introducing the decaprenyl diphosphate synthase (DPS) gene [72][73][74]. Rate-limiting steps in biosynthesis of the aromatic quinone group Another likely avenue to increase metabolic flux is through the optimization of the aromatic quinone core. The precursor that contributes to the head group is 4-hydroxybenzoic acid (PHB or pHBA), which, in prokaryotes [60], forms from the condensation of phosphoenolpyruvate (PEP) and erythrose-4-phosphate, past shikimate, to chorismate and then PHB ( Fig. 2) [68]. Chorismate is a branch point metabolite necessary in the formation of folate and aromatic amino acids (tyrosine and phenylalanine) [75]. Thus, it would be advantageous to increase the catalytic conversion of chorismate to PHB for both proper cell growth and metabolic flux [76]. Earlier work has also shown that CoQ production can be increased by the overexpression of chorismate pyruvate lyase (UbiC) in E. coli alongside the overexpression of several key catalytic enzymes that tend to limit CoQ production rates [77]. Similarly, an eightfold increase in CoQ 10 was reported in the native producer Sporidiobolus johnsonii [78]. In other organisms, however, the source of PHB differs: mammals produce PHB from tyrosine, whereas yeast and plants use both chorismate and tyrosine (yeast) or a β-oxidation-like mechanism using p-hydroxycinnamic acid (plants) [60,61]. In these cases, the exogenous addition of PHB can increase CoQ 10 production; albeit, production rates are still reliant on the supply of IPP, which is rate-limiting [79,80]. Rate-limiting steps in condensation of isoprenoid tail to the quinone group In the final stages, polyprenyl-4-hydroxybenzoate transferase is required to combine the moieties to form the 4-hydroxy-3-polyprenylbenzoate precursor [60,61,81]. 
The isoprene group varies depending on the species, and the ring group undergoes a series of modifications (decarboxylation, hydroxylation and methylation) before the complete CoQ is synthesized. Flux is primarily determined by polyprenyl diphosphate transferase, and its overexpression in E. coli can generate a 3.4-fold increase in CoQ 10 production [82]. Conversely, the overexpression of genes involved in ring modification leads to only a minor increase in CoQ 10 content in E. coli and S. pombe, even if several genes are overexpressed together (in S. pombe) [83]. Overall, these findings suggest that the bottleneck in CoQ 10 production still lies predominantly with IPP flux and is then limited by quinone head formation and the required transfer steps [84]. Host platforms employed for CoQ 10 production CoQ 10 is only native to a few organisms [2,81] and it remains unknown whether human metabolic reactions can cope with a shorter CoQ [85,86]. Traditionally, most efforts have focused on native CoQ 10 producers and on screening for mutant strains that show higher CoQ 10 yields. However, there is great potential in developing heterologous hosts armed with an extensive genetic toolbox, such as E. coli and S. cerevisiae, into platforms for CoQ 10 production. Here, we explore the benefits and disadvantages of both native and non-native producers. Native producers of CoQ 10 Native producers have an advantage over heterologous hosts, as they do not produce any unwanted CoQ species (CoQ 8 or CoQ 9 ), which vary by chain length and are specific to the host. The additional costs required to extract and separate CoQ 10 from other shorter-tailed CoQ products may shift the balance in favor of using native producers. Indeed, these other, shorter products will compete for the biochemical flux and affect the yield of the desired CoQ 10 [60]. Fig. 2 Biosynthesis of coenzyme Q 10 . Schematic showing the pathway of various metabolic precursors leading to the formation of the quinone head (PHB), the isoprene tail (decaprenyl diphosphate), and the final coenzyme Q product. Reflected in red are the various enzymatic steps that are rate limiting. UbiC and UbiA are specific genes from E. coli, and Coq2 is from S. cerevisiae. Unlabelled arrows between chorismate and tyrosine and PHB; FPP and decaprenyl diphosphate; and decaprenyl-4-hydrobenzoic acid and coenzyme Q 10 denote the presence of multiple steps that have been abbreviated. Several native producers of CoQ 10 have been identified or optimized as candidates for CoQ 10 production, including S. pombe, S. johnsonii, Rhodobacter sphaeroides and Agrobacterium tumefaciens [78,83,87,88]. Several other organisms, including Pseudomonas and Paracoccus bacteria and Candida and Saitoella yeasts, also produce CoQ 10 natively but have not been sufficiently characterized as production hosts, and many require the inclusion of expensive constituents in the growth media for proper function. Here, we will explore four of the most feasible native hosts for CoQ 10 production: (1) S. pombe, (2) S. johnsonii, (3) R. sphaeroides and (4) A. tumefaciens. Native producer: Schizosaccharomyces pombe Schizosaccharomyces pombe (fission yeast) is a well-studied model organism with a molecular pathway makeup and genetic mechanisms similar to those in humans [89,90]. However, little effort has been made to develop S. pombe into a suitable framework for high-value compound production [91], and so efforts to increase CoQ 10 in S. pombe have thus far been limited.
In one study, genes encoding enzymes directly involved in CoQ 10 biosynthesis (dps1 + -dlp1 + , ppt1 + , and coq3 + -coq9 + ) and HMGR [83] were overexpressed. However, only overexpression of HMGR-and not the CoQ 10 biosynthesis genes-led to a prominent 2.7-fold increase in CoQ 10 yield (Table 1). It was posited that the lack of effect from the biosynthetic genes was because these enzymes are not rate-limiting. More success has been attained in the production of ricinoleic acid, a fatty acid from castor oil in S. pombe [92], and it may be possible to hijack this system to co-produce both CoQ 10 and fatty acids, with CoQ 10 participating as a lipid-soluble antioxidant to protect polyunsaturated fatty acids (PUFA) against oxidative damage during storage. A similar approach has been explored in Yarrowia lipolytica, an oleaginous yeast, even though Y. lipolitica is a non-native producer of CoQ 10 , and this approach is currently undergoing approval for production [93]. The approach capitalizes on the same IPP pathway to produce carotenoids, and it has been suggested that this may lead to a reduction in flux and the generation of alterative products that will include CoQ 10 . Indeed, high CoQ 10 selection based on mutant strains of Protomonas extorquens and R. sphaeroides are correlated with low carotenoid production [94]. Native producer: Sporidiobolus johnsonii Sporidiobolus johnsonii was recently discovered as a natural producer of CoQ 10 at 0.8-3.3 mg/g dry cell weight (DCW) ( Table 1), which, in an unmodified strain, suggests a great potential as compared with the current top native (A. tumefaciens; 6.92-9.6 mg/g DCW) and heterologous (E. coli; 2.4 mg/g DCW; see below) producers [78,95]. Efforts to use S. johnsonii as a production host at an industrial level have achieved 10 mg/g DCW; albeit, this yield involved exogenous PHB in the media [78]. Other mutagenesis attempts led to a mutant UF16 strain with 7.4 mg/g DCW [96]. Native producer: Rhodobacter sphaeroides Rhodobacter sphaeroides is a photosynthetic bacterium [97] initially selected by screening mutant strains based on color change, which indicated a reduction in carotenoid production, and thus, by correlation, an increase in CoQ 10 [94]. Promoter-based balancing of metabolic flux increased the production to 7.16-8.7 mg/g DCW [60,98], and a recent study reported production as high as 12.96 mg/g DCW [87] (Table 1). However, other efforts to increase MEP pathway flux did not translate well into increased CoQ 10 production, probably due to an accumulation of toxic intermediates [99]. R. sphaeroides, however, is reported to have limited growth rates, even when grown in optimal fermentation conditions [84]. This, coupled with other difficulties (such as requiring anaerobic and light conditions to produce higher CoQ 10 titers) makes R. sphaeroides a less ideal host choice [100,101]. Native producer: Agrobacterium tumefaciens Agrobacterium tumefaciens is a Gram-negative bacterium that is widely used as a transmission vector tool for plant genetic modification [102]. Besides R. sphaeroides, it is one of the top producers of CoQ 10 at 6.92-9.6 mg/g DCW [61,83] (Table 1). Initial attempts to increase its production yield involved selecting cells based on their growth on inhibitory precursor analogues [103]. Later efforts involved targeting the overexpression of IPP pathway genes, especially DXS [60]. A. tumefaciens, however, produces unwanted exopolysaccharides, which increases the viscosity of the sample and affects CoQ extraction [88,104]. 
Issues with native hosts Native producers initially have higher CoQ 10 yields as compared with non-native producers. However, few, if any, of the biosynthetic pathways leading to CoQ 10 production have been optimized in these organisms, and the toolbox of promoters and genetic modules needed for effective tuning of native producers is lacking [84,98]. Neither A. tumefaciens nor R. sphaeroides produce sufficient quantities of CoQ 10 to meet current market demands, and this has led to higher prices of CoQ 10 [38]. Furthermore, rather than optimizing the hosts, recent efforts in the field have been to develop toolkit pieces, such as promoter-regulated vectors [98,99], or to determine ways to select for particular strains after mutagenesis [105]; only a few studies have attempted to harness metabolic engineering (to increase gene expression) or protein engineering [83,87]. Other efforts garnered toward a more immediate solution have had to rely on the addition of precursors to increase yield, and this comes at a higher cost and therefore remains less feasible [106,107]. Heterologous hosts One method to circumvent the shortfalls seen with native producers is to use a heterologous platform that hosts high pliability towards genetic manipulation [108]. Heterologous systems are often avoided because of the production of unwanted CoQ species, the lengths of which are influenced by the chain length of the host organism and the nature of the heterologous polyprenyl diphosphate synthase; this is particularly complicated, as the synthases may function as either homo or hetero-dimers [109]. However, organisms that possess a large toolkit for host engineering are desirable, and their use holds promise to overcome some of the limitations seen with native hosts, assuming that these species can be appropriately optimized to produce CoQ with the correct chain length. In light of this, here, we explore two options-E. coli for prokaryotes and S. cerevisiae for eukaryotes [108]-as well as the utility of plants as heterologous hosts. Heterologous host: Escherichia coli The success in engineering E. coli to produce human insulin paved the way for a new frontier in metabolic engineering [110]. E. coli grows fast and is cheap to culture, and the large range of molecular tools, coupled with an extensive knowledge of its genetic, cellular and metabolic profiles, makes it a widely used production platform. Indeed, most compounds produced by metabolic engineering of E. coli command a good chance of success [108,111,112]. Hence, it is not surprising that strategies developed and optimized for the metabolite production in E. coli can be exploited for the production of CoQ 10 . However, E. coli natively produces CoQ 8 not CoQ 10 [77], and efforts to produce CoQ 10 involved the addition of DPS from a native producer (A. tumefaciens or G. suboxydans) [113,114]. Yet, despite producing CoQ 10 , the bacteria also produced CoQ products of variable tail lengths (CoQ 8 and CoQ 9 ) [115]. This was solved by knocking out the octaprenyl diphosphate synthase (IspB), which led to a minimal production of the other CoQ variants [116]. Other efforts used a DPS of greater stringency, and found that DPS from R. sphaeroides was more discerning in producing CoQ 10 than DPS from A. tumefaciens [115]. Methods to improve the titer of CoQ 10 in E. 
coli sought to increase the flux from the MEP pathway toward IPP [94,116], while others reconstructed the complete mevalonate pathway to divert flux without encountering interference from negative regulators, such as HMGR by FPP in its native context [117,118]. Although this reconstruction successfully increased CoQ 10 yield, there was a metabolic bottleneck at the top end of the pathway involving mevalonate conversion (Fig. 2). When the lower part of the pathway was ectopically expressed, a twofold increase in yield was observed; yet, expression of the entire pathway led to only a 1.5-fold increase. Several metabolomic studies in E. coli have investigated the rate-limiting steps in CoQ 10 production [68,119] by adding in the precursors exogenously to decouple the pathway away from cellular flux production. Not surprisingly, both the isoprenoid tail and aromatic quinone head are rate-limiting in E. coli [68,[120][121][122]. Yet, when these two precursors are no longer limiting, the downstream genes involved in ring modification (ubiB, ubiH and ubiG) becomes limiting [68]. In an effort to increase flux to the quinone precursor PHB, another study overexpressed chorismate pathway genes, including the gene encoding for 3-deoxy-d-arabinoheptulosonate 7-phosphate synthase, which initiates the first step in combining PEP with d-erythrose 4-phosphate [122] (Fig. 2). However, despite these efforts, CoQ 10 levels in E. coli (0.45-3.63 mg/g DCW) still fall short of the levels produced by native producers (R. sphaeroides and A. tumefaciens) [99] (Table 2). Heterologous host: Saccharomyces cerevisiae Another popular host in metabolic engineering efforts is the budding yeast, Saccharomyces cerevisiae. As a model organism, the genome of S. cerevisiae has been extensively studied and modified, and there are many already optimized tools for efficient gene expression and genetic building blocks for promoters and other regulatory elements [123,124]. S. cerevisiae has a fast growth cycle of about 90 min, and has a high cultivable density as compared with bacteria. The budding yeast can also perform homologous recombination and compartmentalize subcellular processes, making it an excellent host for metabolic engineering purposes. It is also a 'Generally Recognized As Safe' (GRAS) organism (United States Food and Drug Administration) ( Table 2), and this reduces any potential complications that could arise from its use in the production of a health supplement or a nutritional product [56]. Most importantly, the IPP pathway has been extensively optimized in S. cerevisiae [59]. Unfortunately, similar to E. coli, S. cerevisiae natively produces CoQ 6 not CoQ 10 [1]. Early attempts to delete the COQ1 gene in S. cerevisiae and replace it with DPS from G. suboxydans under the COQ1 promoter reportedly yielded 12.3 µg/g DCW [85]. However, DPS tends to require a heterodimer formation for proper function and, when expressed, may instead form dimers with native polyprenyl diphosphate synthases to produce products of differing lengths [125] (Table 2). An alternative approach would be to examine the functionality of the DPS enzyme by fine-tuning is length-determining function. This would be advantageous on several levels, given that the DPS reaction is a limiting step in CoQ 10 production. 
Indeed, polyprenyl diphosphate synthase belongs to the protein family of prenyl-synthases, many of which are involved in generating the polyisoprenoid chain components of commercially interesting compounds like alkaloids and monoterpenes [7,68]. If successful, this will conceptually sidestep the aforementioned problem of homodimerization of overexpressed heterologous DPS. We thus propose that an understanding of the mechanism by which polyprenyl diphosphate synthase determines chain length may allow for the production of CoQ 10 in S. cerevisiae without generating off-target products. Heterologous host: plants Another suggested strategy for CoQ 10 production is the use of plant hosts for the ease of CoQ 10 supplementation into the diet [38]. Such efforts are currently underway, in conjunction with other nutritional supplements, such as vitamin A (beta-carotene) in 'golden rice' (Oryza sativa), which can be likewise co-opted in the context of CoQ 10 given that carotenoid production also employs the IPP pathway [38,126]. However, the political hassle associated with the commercialization of 'golden rice' or other genetically modified foods is expected to be a counter-rationale for the biosynthesis of CoQ 10 in plant hosts [127][128][129][130]. Furthermore, because CoQ 10 is also prescribed for deficiency-associated diseases and as an ingredient in various cosmetics, it must be properly extracted. Plant production hosts also have further technical obstacles, such as difficulties in engineering and manipulating the plant host; the need for large plots of expensive, arable land; a dependency on harvesting time; and the risk of unpredictable climate conditions in sync with market demand. It is for these various reasons that plant hosts are not deemed economically viable for CoQ 10 production. These challenges, along with the comparatively less effort in the scientific community to exploit plant hosts, has meant that microbial hosts are a better choice for CoQ 10 production [131]. Potential future engineering approaches for CoQ 10 production There have been frequent attempts to engineer key enzymes within the CoQ 10 pathway to increase the yield, including attempts to regulate IPP chain length. Recent interest in synthetic biology-which involves the fine-tuning of biosynthetic processes by controlling the genome and global organellar organization-promises to further revolutionize traditional bioengineering approaches. Several of the newly innovated methodologies will be discussed in the context of improving CoQ 10 biosynthesis in the following sections. Decaprenyl diphosphate synthase In essence, there are two ways to induce a non-native heterologous host to make CoQ 10 : (1) Engineer the polyprenyl diphosphate synthase-which is solely responsible for chain length-to assume the function of DPS, or (2) introduce a DPS into the host and delete the native polyprenyl diphosphate synthase. The latter is based on earlier reports, where CoQ of differing tail lengths have been produced by heterologous hosts [2,15,71,74]. Specifically, the introduction of ddsA and sdsA into E. coli from G. suboxydans and Rhodobacter capsulatus, respectively, can result in the formation of CoQ 10 (and also CoQ 9 ) [74,113,132]. PHB-polyprenyl diphosphate transferase (COQ2) lacks the specificity of polyprenyl diphosphate synthase, as it is able to transfer isoprenoid tails of varying length; e.g., E coli UbiA can utilize isoprenoid chains of 5-10 residues in length. 
Based on this promiscuity, the PHB:polyprenyl diphosphate transferase is expected not to be a limiting factor in engineering a non-native host for the production of CoQ 10 . However, engineering a heterologous host via the introduction of exogenous DPS suffers from challenges that cannot, as yet, be explained. Even with the efforts of removing endogenous CoQ production by deleting the native polyprenyl diphosphate synthase gene, there remains a lack of stringency in these reactions. For example, when the DPS gene from G. suboxydans is expressed in E. coli, deletion of the native IspB gene only reduces the production of CoQ 8 and CoQ 9 ; even though it still predominantly produces CoQ 10 [74,113]. A further complication to the engineering effort lies with the complex formation of polyprenyl diphosphate synthases, which function as homodimers (IspB in E. coli, Coq1 protein in S. cerevisiae, and DdsA in G. suboxydans) or heterotetramers (Dps1-Dlp1 in S. pombe and HsPDSS1-HsPDSS2 in humans) [133,134]. For instance, when heterologously expressed in E. coli, COQ1 from S. cerevisiae can replace IspB, an otherwise essential gene for the production of CoQ 6 [70,132]. However, when COQ1 from S. cerevisiae is expressed in Dlp1-deficient S. pombe, it rescues the dlp1 deletion by forming a heterodimer with Dps1 to produce CoQ 10 [131]. Similarly, Dps1 or Dlp1 in S. pombe can complex with defective IspB mutants to restore functionality in E. coli [135]. In such cases, heterologous expression of DPS creates artificial interactions with the host DPS, calling for caution in considering CoQ 10 production through host chassis engineering. Polyprenyl diphosphate synthase residue functionality Polyprenyl diphosphate synthases catalyze the formation of the polyprenoid tail by adding IPP units to an allylic diphosphate base [136]. These enzymes are categorized depending on the final carbon chain length of the synthesized product: class I for C10-20, class II for C30-35, class III for C40-50, and class IV for even longer products [115]. Class IV synthases also catalyze some cisconfiguration double bonds, whereas the other classes all catalyze trans-configuration bonds. Synthases from class II and III categories should be chosen when studying tail length determination because these classes reflect both the final carbon chain length product and possess a similar stereo configuration of double bonds to that of DPS, with an average homology of 30-50% between the polyprenyl diphosphate synthases and DPS enzymes [113,133]. There are seven conserved regions within trans-type prenyltransferases, two of which (domain II and VI) possess a DDXXD motif [71,137]. These motifs are located in two helices that face each other, and are the binding sites for FPP (Helix D) and IPP (Helix H) with the aid of Mg 2+ in substrate binding [136,137] (Fig. 3a). The fifth residue before the DDXXD motif in domain II determines tail chain length. In GGPP and FPP (Thermoplasma), this residue is Tyr-89, a large bulky residue; in OPP (Thermotoga maritime) and IspB (E. coli), it is Ala-76 and Ala-79, respectively. These amino acid differences are associated with an inverse relationship between residue size and chain length [136]. Indeed, when Ala-76 and Ala-79 are changed to Tyr, the product chain length decreases from C40 to C20 [136]. In another study, this same substitution (in E. 
coli) in the absence of wild-type IspB, produces a non-functional protein, but one that is still able to heterodimerize with the wild-type protein to produce CoQ 6 [71]. Elongation of the polyprenyl chain takes place in a 'tunnel' between helices H and D, where A76 (in T. maritime) lies at one end near the DDXXD motif and Phe-132 (Met-135 in E. coli) lies at the other end; this Phe residue is thought to serve as a cap-like residue [136,138]. Mutating Phe-132 to Ala can increase the chain length from C40 to C50, which suggests a method to increase chain length synthesis by polyprenyl diphosphate synthases. This was confirmed by others, who, using a cistype prenyltransferase, found that substituting leucine for alanine increased the chain length from C55 to C70 [139]. In addition to Met-135 in E. coli, another residue, Met-123, compositely serves to limit the elongation of the IPP chain; hence, Met-123 and Met-135 are proposed to contribute to a 'double-floor' ('floor' is synonymous with 'barrier'), as opposed to the 'single-floor' created by Phe-132 in T. maritime [138]. Efforts to engineer the polyprenyl diphosphate synthases in a host that is highly malleable to genetic and metabolic engineering, such as S. cerevisiae, may provide a prospective avenue to increase the yield of CoQ 10 . A sequence alignment of polyprenyl diphosphate synthases from various organisms shows high similarity at the amino acid level in the helices that constitute the chain elongation tunnel (Fig. 3b). Such high conservation means that functional studies conducted with T. maritime polyprenyl diphosphate synthases could serve as a reference to guide engineering efforts in other species, such as COQ1 from S. cerevisiae. Spatial metabolic organization with synthetic compartmentalization Metabolic production of CoQ 10 may also be increased by manipulating the spatial organization of the enzymes in the cell. This is particularly important when faced with potential off-target reactions or when the accumulation of products results in toxicity [140][141][142]; albeit, clinical studies indicate that toxicity from CoQ 10 supplementation is not a huge concern [29]. Some of the more common ways to recruit the pathway into a localized complex involves the use of protein scaffolds or linkers to tether the pathway enzymes to proteins of interest [143][144][145][146][147]. This manipulation concentrates the substrate close to the enzyme, and may favor the forward metabolic flux, as an intermediary metabolite may be captured and shunted into the next step of the pathway. Conceptually, such spatial arrangement reduces the emergence of unwanted by-products, especially with more promiscuous enzymes. When stoichiometric ratios of sequential reactions are of relevance, tethering also helps to modulate the ratio of enzyme to protein [56]. However, tethering may cause rigidity in the protein scaffold or direct enzyme fusions that could affect enzymatic function. However, these issues can be overcome with the use of a linker sequence, which provides increased flexibility to orientate the direction of the reaction and lower the risk of potential disruptions to enzyme folding. Fig. 3 a Protein homology modeling of COQ1 (YBR003W) was performed using ModBase [159] and was viewed using Swiss PDB Viewer [160]. The template for modeling was based on the medium/long-chain length prenyl pyrophosphate synthase of Arabidopsis thaliana (3aq0A) with 42% sequence identity. 
Helix D and Helix H bind to the elongating isoprene chain and IPP, respectively, at the conserved DDXXD regions. Helix F contains Met-244 and Helix E contains Ser-231, which are thought to be the residues that regulate chain length elongation. The right figure represents the 180° view of that on the left and is superimposed with the structure of CoQ 10 . In lieu of scaffold or linker systems, synthetic subcellular compartmentalization can also be used, whereby the enzyme complex is targeted to protein shells or organelles (Fig. 4). This would further reduce any unwanted side-effects or steric problems, which are likely to occur on protein scaffolds. The use of such synthetic compartmentalization may also sequester any toxic products produced by the reaction and preserve cell viability. One potential route for the ectopic induction of compartmentalization is through the use of bacterial microcompartments, proteinaceous organelles derived from prokaryotes [148][149][150][151]. These synthetic organelles possess selectively permeable surfaces comprising thousands of shell proteins and can sequester enzymatic pathways by means of N-terminal targeting sequences that link the enzymes to the surface of the organelle. Carboxysomes are one example of a bacterial microcompartment; they contain ribulose-1,5-bisphosphate carboxylase oxygenase (RuBisCO) for carbon-fixing activities [152][153][154]. In eukaryotes, protein-based compartments (which comprise ribonucleoprotein particles) known as 'vaults' can also be used; albeit, less is known about the structure and mode of formation of these compartments [155,156]. Finally, it may be simpler to target the eukaryotic organelle pathways that already exist; for instance, one group sought to increase opioid production by altering the routing of proteins to the endoplasmic reticulum (ER) by ER-tagging the relevant enzymes. This modification increased the titer and specificity of the product of interest [157]. In cases where modifications are made to the pre-combined quinone head and isoprene tail, the enzymes required are already localized in the mitochondria in a membrane-bound complex (eukaryotes) or on the cell membrane (prokaryotes); although, there is, as yet, no evidence for such a complex in prokaryotes [11,158]. Fig. 4 Spatial metabolic organization with synthetic compartmentalization. Diagrammatic representation of a synthetic proteinaceous or nanotube micro-compartmentalized organelle that can be engineered in microbial cells [149][150][151]. The organelle consists of a scaffold on which the biosynthetic enzymes can be immobilized to direct the biochemical flux such that the substrate of one enzyme is the product of another juxtaposed enzyme. Toxic byproducts may conceptually be shunted into sub-compartments within the organelle and sequestered therein to ensure optimal growth of the microbial host. However, the other pathways involved in generating the precursor head, and which lack bio-orthogonal chemistry, are still candidates for spatial organization; for example, the mevalonate pathway, which leads to the IPP precursor, could be one option. Indeed, SH3 ligands and domains have been used to link HMG-CoA synthase with HMGR to prevent the accumulation of HMG-CoA and reduce its associated cytotoxicity [143]. Chorismate could be another option. As mentioned earlier, chorismate is a branch point metabolite, and thus its recruitment could be spatially separated so as to prevent its conversion into off-target aromatic amino acids.
This segregation would be advantageous, as this pathway is essential and cannot be completely disrupted. If a plant platform were to be used, attention would have to be given to the alternative and possibly competing products of GPP, FPP and GGPP. In non-native hosts, CoQ products will present with a range of tail lengths because of the use of the promiscuously inserted decaprenyl diphosphate synthase and its interactions with the host polyprenyl diphosphate transferase. These are some possible candidate biosynthesis modules that may benefit from the manipulation of spatial organization and can be optimized in future experiments. Conclusions CoQ 10 is a valuable and commercially important product that has yet to be produced at a level that can support market demands. This review gives an overview of the native and heterologous hosts reported thus far for the production of CoQ 10 . Currently, Rhodobacter sphaeroides leads among native hosts, producing 12.96 mg/g DCW of CoQ 10 . On the other hand, the most widely used workhorse for industrial production of valuable compounds, E. coli, has achieved only 3.63 mg/g DCW, making it by far the most productive of the heterologous hosts. Thus, the use of native hosts remains the best option for industrial-scale production of CoQ 10 . However, with the new tools and progress brought in recent years by the advent of synthetic biology, CoQ 10 production may yet be revolutionized, and future technological breakthroughs in this field can be expected to take production to new levels in both native and heterologous producers. Author details 1 Department of Biochemistry, National University of Singapore, Singapore, Singapore. 2 National University Health System (NUHS), Singapore, Singapore. 3 NUS Synthetic Biology for Clinical and Technological Innovation (SynCTI), Life Sciences Institute, National University of Singapore, Singapore, Singapore. 4 NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore, Singapore. 5 School of Chemical & Life Sciences, Nanyang Polytechnic, Singapore, Singapore. 6 Faculty of Life and Environmental Science, Shimane University, Matsue 690-8504, Japan.
2018-01-30T03:58:37.682Z
2017-03-02T00:00:00.000
{ "year": 2017, "sha1": "5ef92643bf4026eaeaf481af3ee8643c77aa745d", "oa_license": "CCBY", "oa_url": "https://microbialcellfactories.biomedcentral.com/track/pdf/10.1186/s12934-017-0646-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5ef92643bf4026eaeaf481af3ee8643c77aa745d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
236157271
pes2o/s2orc
v3-fos-license
Pathogen‐induced inflammation is attenuated by the iminosugar MON‐DNJ via modulation of the unfolded protein response Abstract Sepsis is a life‐threatening condition involving a dysregulated immune response to infectious agents that cause injury to host tissues and organs. Current treatments are limited to early administration of antibiotics and supportive care. While appealing, the strategy of targeted inhibition of individual molecules in the inflammatory cascade has not proved beneficial. Non‐targeted, systemic immunosuppression with steroids has shown limited efficacy and raises concern for secondary infection. Iminosugars are a class of small molecule glycomimetics with distinct inhibition profiles for glycan processing enzymes based on stereochemistry. Inhibition of host endoplasmic reticulum resident glycoprotein processing enzymes has demonstrated efficacy as a broad‐spectrum antiviral strategy, but limited consideration has been given to the effects on host glycoprotein production and consequent disruption of signalling cascades. This work demonstrates that iminosugars inhibit dengue virus, bacterial lipopolysaccharide and fungal antigen‐stimulated cytokine responses in human macrophages. In spite of decreased inflammatory mediator production, viral replication is suppressed in the presence of iminosugar. Transcriptome analysis reveals the key interaction of pathogen‐induced endoplasmic reticulum stress, the resulting unfolded protein response and inflammation. Our work shows that iminosugars modulate these interactions. Based on these findings, we propose a new therapeutic role for iminosugars as treatment for sepsis‐related inflammatory disorders associated with excess cytokine secretion. INTRODUCTION Recent changes to the consensus definitions of sepsis and septic shock highlight a shift in clinical risk stratification to identify patients at greater risk of mortality through a focus on dysregulation of the immune response to invading pathogens [1]. The complexity of dynamic immune responses complicates identification of individual proteins or signalling networks that are responsible for sepsis. Nevertheless, there is evidence for dysregulation of several systems contributing to septic pathophysiology including excessive inflammation, coagulopathy, endothelial dysfunction, immune suppression, epigenetic alteration and metabolic dysregulation [2][3][4]. The most thoroughly explored avenue for development of sepsis is excessive inflammation through a process often described as a 'cytokine storm'. Unfortunately, approaches aimed at inhibiting expression or signalling by single molecules expressed early in the onset of inflammation (e.g. TNFα) have failed to improve mortality in sepsis in individual trials although a meta-analysis suggests potential benefit [5]. Systemic anti-inflammatory therapy (e.g. with glucocorticosteroids) has demonstrated similar mixed efficacy in clinical trials [6,7]. Novel strategies are therefore desirable to identify possible therapeutic strategies. In this study, we investigated the use of the iminosugar N-(9-methoxynonyl)-1-deoxynojirimycin (MON-DNJ) as a potential therapy for attenuation of the excessive response to infection. Iminosugars are promising candidates for treatment of dengue virus (DENV) infection with antiviral efficacy demonstrated in cell culture and animal models [8][9][10]. 
These glycan mimics inhibit host glycoprotein processing enzymes necessary for correct folding of viral glycoproteins and therefore for infectious virus production, and this activity is responsible for the reduction of infectious virus burden [11,12]. Prior work in vivo has demonstrated MON-DNJ-mediated reduction of cytokine expression in DENV-infected mice [9,13], but the reduction in infectious virus obscures the mechanism of cytokine reduction. Because of the host-directed nature of iminosugars, we hypothesized that inflammatory signatures generated in response to DENV infection might be altered in the presence of iminosugar independent of changes in infectious virus production. We have recently demonstrated iminosugar-mediated interference with IFN-γ and TNF-α receptor signalling as well as mannose receptor binding in the context of infection [14]. Bicyclic sphingolipid mimic iminosugars have demonstrated the capacity to reduce inflammation in chronic disease conditions (e.g. diabetic retinopathy) via direct binding of p38α MAPK [15,16]. We therefore sought to investigate the potential for iminosugar to alter the network-level cellular response to diverse pathogens. The unfolded protein response (UPR) is one such network-level response to cellular stress, whereby excess unfolded proteins are detected in the endoplasmic reticulum (ER) and three interdependent pathways are activated to manage the ER stress response. The three arms of the UPR (the IRE1/ERN, ATF6 and EIF2AK3/PERK pathways) serve to increase folding of nascent ER proteins while providing crucial feedback signals to inflammation, autophagy, apoptosis and reactive oxygen species pathways, among others. Our group and others have reviewed these complex dynamics in cellular stress elsewhere [17,18]. Given the inhibition of glycoprotein folding caused by iminosugars [11,12], we hypothesized that iminosugars would induce the UPR. We reasoned that the significant interdependence of the UPR with innate inflammation and the generation of reactive oxygen species (ROS) could provide a mechanism whereby iminosugar-induced blockade of glycoprotein processing would have pathogen-independent effects on inflammation. We further hypothesized that changes to inflammation would extend to diverse pathogens, including bacteria and fungi commonly implicated in sepsis. To evaluate these hypotheses, we compared MON-DNJ-mediated modulation of network-level responses to DENV infection with MON-DNJ-mediated changes arising from activation of pattern recognition receptor TLR4 signalling by lipopolysaccharide (LPS) in macrophages. KEYWORDS: dengue virus, iminosugar, inflammation, sepsis, unfolded protein response Iminosugars attenuate inflammatory cytokine production in macrophages Macrophages play a central role in directing the immune response to sepsis [19] and are among the determinant cells in the outcome of DENV infection in humans, as target cells of DENV and orchestrators of the innate immune response required for viral control [20].
To investigate the effects of iminosugars on macrophages, we used a model that increases the susceptibility of macrophages to infection by DENV [21][22][23][24] (Figure S1) to evaluate cytokine production in response to pathogen (Figure 1). Whereas LPS stimulation induced 8 of 12 cytokines tested by 24 h post-infection (p.i.), cytokine induction by DENV was limited to TNF-α, IFN-γ and IP-10 at 24 h, suggesting a specific role of these cytokines in initiating the macrophage response to DENV. IL-8, MCP-1 and MIF were not induced by either DENV or LPS at 24 h (data not shown). FIGURE 1 MON-DNJ reduces DENV- and LPS-induced inflammation in macrophages. (a-i) Nine of twelve cytokines assayed on a Luminex bead-based platform at 24 h p.i. demonstrate differential expression in macrophages treated with media (UI), DENV2 16681 at MOI = 1 (DENV) or LPS from S. enterica at 100 ng/ml (LPS). Equivalent data were collected and analysed at 72 and 120 h p.i. (Figure S2). MON-DNJ samples (grey bars) were compared to untreated samples (black bars) by one-way, repeated-measures ANOVA or equivalent ANOVA of ranks for non-normally distributed data with post hoc pairwise comparison using the Holm-Šídák method (parametric) or Tukey test (non-parametric). Induction of cytokine by each mode of infection was tested independently by ANOVA with correction for multiple comparisons across time points using the Holm-Šídák method. Biological replicates (n = 5) were assayed in technical singlicate. Discontinuous axes are used where necessary as a consequence of >10-fold difference in level of cytokine induced by DENV and LPS. Samples were normalized to untreated controls for individual donor and stimulus and evaluated by parametric t-testing with Holm-Šídák correction for multiple testing. *p < 0·05, **p < 0·01, ***p < 0·001 (Table S1). A heat map of normalized cytokine protein expression demonstrates the donor variability of cytokine expression in DENV infection (Figure 2a) and LPS treatment (Figure 2b) and further reveals that the statistically significant reduction in cytokine levels with MON-DNJ does not lead to complete reduction of cytokine to uninfected, untreated levels (i.e. columns remain pink rather than blue). Steroid treatment in the same model results in reduced cytokine production and an antiviral effect in the first 24 h; however, in the presence of steroid, infectious virus production rebounds to untreated levels by 72 h p.i. [25]. In contrast, antiviral efficacy is maintained with MON-DNJ treatment up to at least 120 h p.i. (Figure 3), suggesting that the reduced inflammation does not potentiate virus replication. Whereas cytokines produced in LPS-treated cells are not dependent on pathogen replication, cytokines produced in DENV-infected macrophages are elicited in the context of active viral replication. To assess whether reduced cytokine in DENV infection is solely dependent upon reduced virus production, we performed time course (Figure 4a) and MON-DNJ titration (Figure 4b) experiments. The flavivirus infectious cycle in cell culture is virus and cell-type dependent but generally results in release of progeny virus around 12 h p.i. [26][27][28], and in our model the corresponding TNF-α time course is shown in Figure 4a. FIGURE 3 Sustained antiviral efficacy of MON-DNJ. Cell culture supernatants were collected at the time points indicated (n = 4 biological replicates), and infectious DENV titre was detected by LLC-MK2 plaque assay (technical triplicate) as previously described [45].
Untreated samples (black bars) were compared to MON-DNJ-treated samples (grey bars) using repeated-measures ANOVA with correction for pairwise comparison using the Holm-Šídák method. Error bars represent standard deviation. *p < 0·05, **p < 0·01, ***p < 0·001. MON-DNJ similarly reduced the functional TNF-α secreted in response to LPS and heat-killed C. albicans (Figure 4c). Taken together, these results suggest that iminosugar-mediated reduction of TNF-α is not exclusively dependent on inhibition of viral replication. Iminosugars modulate the macrophage transcriptome To identify pathways involved in the disruption of inflammation by MON-DNJ, a time course of macrophage transcriptomic responses to DENV, LPS and MON-DNJ was investigated. An early time point of 6 h p.i. was chosen to evaluate changes to transcriptional patterns independent of the effects of infectious virus release, and a further point at 24 h p.i. was chosen to capture changes occurring in the presence of replicated virus and to allow comparison of the dynamic changes likely to be occurring. A total of 21,705 transcripts were expressed above background in at least one sample and these probes were subjected to principal component analysis (PCA) to identify changes to gene signatures (Figure 5a,b). Principal components 1 and 2 account for 54 per cent and 36 per cent of the variance of the entire dataset, respectively. In general, there were relatively modest changes to the transcriptome with DENV and iminosugar treatment in comparison with LPS. Comparing uninfected to DENV-infected and LPS-treated samples in the absence of drug treatment demonstrates the dynamic macrophage transcriptomic responses to stimuli. As anticipated, switching from alternate (M2) activation stimulation with IL-4 to classical (M1) activation with LPS induces a profound and early shift in transcript levels with 9,188 and 6,142 transcripts modulated at 6 and 24 h, respectively. In comparison, DENV infection induces statistically significant change in 914 and 6,517 transcripts at 6 and 24 h, respectively. We focus on conserved changes to the macrophage transcriptome induced by treatment with MON-DNJ (Figure S1b), with 655 differentially expressed probes (mapping to 324 unique genes, Table S2) of particular interest. These 655 probes were analysed by K-means clustering (Figure 5c) to identify patterns of response to MON-DNJ. Two major response patterns, induction (clusters 1-3, Figure 5c) and down-regulation (clusters 4-6, Figure 5c), were identified. Difference in timing of response to MON-DNJ appears to be the principal factor that differentiates sub-clusters (e.g. sustained induction in cluster 1 vs. early induction only in cluster 2 vs. late induction only in cluster 3). Unsupervised hierarchical clustering (Euclidean distance with complete linkage) was performed (Figure 5d), and transcripts associated with each cluster were analysed in STRING-DB to assess for enrichment of genes associated with biological processes. FIGURE 4 MON-DNJ reduces TNF-α produced by macrophages in response to viral, bacterial and fungal antigens. (a) 24-h time course of TNF-α in response to DENV infection was undertaken (n = 4 biological replicates) to assess the time at which MON-DNJ (grey bars) begins to limit cytokine production. Total cytokine was measured by ELISA (technical duplicate) and the maximal TNF-α level observed for each donor was set to 100 per cent. Repeated-measures ANOVA on the arcsine transform of normalized data was used to assess significance with Holm-Šídák method of multiple comparisons. (b) The dose-response relationship of MON-DNJ and total TNF-α production (solid line) in DENV-infected macrophages at 24 h p.i.
was assessed by ELISA in technical duplicate (n = 3 biological replicates). Functional TNF-α (dashed line) was assayed in HEK-blue cells in technical and biological triplicate. The IC50 (*) and IC90 (**) of MON-DNJ for reduction of infectious DENV in this model [8] are indicated for comparison to reduction of cytokine. (c) Functional TNF-α secretion in response to LPS and heat-killed C. albicans at 24 h p.i. was assayed in HEK-blue cells for 3 biological replicates each assayed in technical triplicate as in (b). Samples were normalized to untreated controls for individual donor and stimulus and evaluated by parametric t-testing with Holm-Šídák correction for multiple testing. *p < 0·05, **p < 0·01, ***p < 0·001. FIGURE 5 The 655 differentially expressed probes (Table S2) with the greatest differential expression in our dataset (as described in Figure S1b) were subjected to K-means clustering into 6 groups based on Euclidean distance. Expression patterns are noted with fold change (log2) for MON-DNJ treatment relative to each untreated infection condition at each time point displayed on the x-axis as an averaged value of n = 5 biological replicates. Individual transcripts are represented by a single line, and colour further represents magnitude of fold change such that darkest red is ≥1 log2 induction with MON-DNJ relative to untreated and darkest blue is ≥1 log2 down-regulation with MON-DNJ relative to untreated. Of note, cluster 1 and cluster 2 are heavily associated with ER stress and the UPR (Table S3A,B). Cluster 6 is the only down-regulated gene cluster with STRING-DB-identified enrichment of biological processes, and those processes identified are almost exclusively associated with inflammation (Table S3C). Additional biological processes associated with the 655 differentially expressed probes were identified in STRING-DB using the entire unclustered dataset (Table S3D). These networks can generally be classified into three categories: ER UPR (Table S4A), inflammation (Table S4B) and cell fate (e.g. autophagy vs. apoptosis signalling, Table S4C). These three networks account for 228 of the 324 differentially expressed genes, and 24 genes are involved in all three processes (Figure 6). Extensive overlap between the UPR and inflammation has been observed [18,29], and the particular role of the UPR in DENV infection has also recently gained interest [17]. As MON-DNJ treatment reduces cytokine production and induces the UPR, we were interested to identify links between the systems that are altered with MON-DNJ treatment. To do so, we first identified genes that are associated with the immune response that are differentially expressed with MON-DNJ treatment as described in the Methods and generated a heat map of mean fold change with iminosugar treatment for unsupervised hierarchical clustering (Figure 7a). Among these 62 genes, 24 overlap with the previously described cluster 6 associated with inflammation. An additional node of 14 genes (*) was of particular interest. These genes exhibit strong down-regulation with MON-DNJ in uninfected macrophages 6 h p.i., and by 24 h p.i., the transcript level is reduced in the presence of MON-DNJ irrespective of infection conditions. This pattern was identified by K-means clustering (Figure 5c,d, cluster 4) to match 134 total transcripts (mapping to 67 genes, Table S5).
Given the abundance of transcripts with this interesting response pattern, identical methods were applied to the differentially expressed genes associated with signalling of cell fate, and 14 transcripts (6 of which were included in the inflammation list) with a similar response were identified (Figure 7b). FIGURE 6 Differentially expressed genes overlap in molecular function between UPR, inflammation and signalling of cell fate. (a) The list of all biological processes (Table S3D) overrepresented by MON-DNJ-modulated transcripts (Table S2) was curated to identify processes associated with the UPR (Table S4A), inflammation (Table S4B) and signalling of cell fate (Table S4C), and the overlap between the genes involved with these processes was identified as represented by the Venn diagram. Of the 324 differentially expressed genes identified, 228 are involved in these three networks. The total number of genes for each process is listed under the process label, and the number of genes in each intersection set is represented by graphical location. We therefore generated a network of known interactions for the union set of these genes in addition to the 24 differentially expressed genes with published involvement in UPR, inflammation and cell fate determination (Figure 6) using STRING-DB (Figure 7c). The full gene name is provided in Table S6 for all protein members of the network in Figure 7c. Several nodes within this network demonstrate a high level of interaction, including HMGB1, PPP2CB, TLR1 and SUMO1, all of which are connected to at least 4 other network members and conform to the previously described transcriptional response pattern (highlighted in red text in Figure 7c). Because of the biologically interesting intersection of pathways at HMGB1, we investigated total protein secretion in a small subset of donors (n = 2); however, we were not able to identify any consistent pattern of modulation with MON-DNJ (Figure S3). Our initial clustering analysis (Figure 5) suggests that the strongest signature associated with MON-DNJ treatment is for induction of the UPR, and this network was further investigated. Among transcripts with the greatest differential expression, 79 transcripts mapping to 57 genes had at least twofold expression change in two of the three infection conditions (uninfected, DENV and LPS) with drug treatment at 6 h p.i. or at least 1·5-fold expression change in all three conditions. A heatmap of the fold change of these genes (Figure 8a) was generated with 33 genes mapping to a single network (related to the UPR, Figure 8b). Notably, almost all genes identified in this manner have elevated expression with MON-DNJ across infection groups at 6 h with the exception of RPLP1 and ANP32A. Unsupervised hierarchical clustering demonstrates a robust rise in transcript at 6 h p.i. with a variable return towards baseline by 24 h. Both the persistently induced genes (e.g. CRELD2) and the transiently induced genes (e.g. PGM3) demonstrate involvement in the UPR including all three major arms (IRE1/ERN, ATF6 and EIF2AK3/PERK) [18]. FIGURE 7 (a) Heatmap of MON-DNJ-modulated transcripts related to inflammation. Genes for clustering were identified from the intersection of those differentially expressed with MON-DNJ treatment as presented in Table S4B. Average fold change (log2) in gene expression of n = 5 biological replicates with iminosugar treatment is represented by a single coloured box. HCL using Euclidean distance identifies a subset of genes (*) with early inhibition in uninfected macrophages that extends to all infection conditions by 24 h. (b) Heatmap of MON-DNJ modulated transcripts related to cell fate.
Genes for clustering were identified from the intersection of those differentially expressed with MON-DNJ treatment as presented in Table S4C. Average fold change (log2) in gene expression of n = 5 biological replicates with iminosugar treatment is represented by a single coloured box. HCL using Euclidean distance identifies a subset of genes (*) with early inhibition in uninfected macrophages that extends to all infection conditions by 24 h. Quantitative reverse transcriptase PCR (qRT-PCR) was used to validate selected transcripts; the genes chosen, including UDP-glucose 6-dehydrogenase, are all components of the UPR with strong induction noted in our transcriptomic data, and qRT-PCR confirms induction with MON-DNJ (grey bars) with return towards baseline levels at 24 h p.i. (Figure 8c-h). Iminosugars reduce generation of reactive oxygen species Association between the UPR, TNF-α and generation of reactive oxygen species (ROS) in DENV infection [30] and the role of ROS (and reactive nitrogen species) in severe DENV disease [31] led us to investigate whether MON-DNJ could alter the generation of ROS. As anticipated, DENV infection of macrophages induces ROS (Figure 8i). With MON-DNJ treatment in DENV-infected macrophages, ROS are restored to baseline, suggesting that MON-DNJ is able to control the generation of potentially damaging free radical species in addition to inflammatory cytokines. Thus, MON-DNJ treatment appears to reduce oxidative stress and inflammation while modulating the unfolded protein response and maintaining effective antiviral activity. DISCUSSION These studies provide the first comprehensive characterization of the effects of the iminosugar MON-DNJ on host processes. In so doing, we have identified a mechanism of action that expands the therapeutic potential specifically of MON-DNJ and more generally of ER α-glucosidase-inhibiting iminosugars. In addition to reducing infectious virus production, MON-DNJ is able to reduce pro-inflammatory cytokine production induced by actively replicating viral pathogen in addition to that caused by the TLR4 ligand LPS and C. albicans fungal antigen. The reduced cytokine production is co-ordinated with a reduction in ROS, further suggesting reduced oxidative stress as a consequence of iminosugar treatment. The UPR is robustly induced by MON-DNJ treatment, a result that is in keeping with the inhibition of ER-resident α-glucosidases necessary for glycoprotein processing [11,12]. Although there appears to be a return to baseline for many of the induced UPR-associated genes by 24 h p.i., the functional consequences of iminosugar treatment (i.e. cytokine reduction and antiviral efficacy) appear to be longer in duration. Such observations are consistent with our recent finding that a single large dose of iminosugar late in the course of DENV or influenza virus infection confers a survival benefit in murine models [32]. Indeed, the experiments presented herein demonstrate that the limited set of genes related to inflammation that are modulated by iminosugars is most consistently suppressed at 24 h p.i. Taken in concert with our prior murine data, this suggests that a single-dose, high concentration bolus of iminosugar late in the course of diseases with acute dysregulation of inflammation (such as viral, bacterial and fungal sepsis) may be an effective means of controlling the disease process.
Although induction of the UPR is generally associated with increased inflammation in macrophages [18,33], there are distinct roles for each of the three arms of the network, and pathogens are known to actively antagonize particular elements of the response to favour their own replication and avoid the anti-pathogenic elements of the UPR [34]. By overriding pathogen-mediated regulatory responses through wholesale manipulation of the UPR by addition of iminosugar, we hypothesize that the balance of UPR and inflammatory responses is restored in favour of anti-pathogenic pathways. The finding that a small number of genes associated with both the UPR and inflammation are down-regulated at an early time point following iminosugar treatment is suggestive of a mechanism whereby inhibiting the cell's N-linked glycoprotein processing leads to an altered transcriptional profile that favours a limited but productive inflammatory phenotype. It is tempting to speculate that one of these genes provides a singular mechanistic link. Indeed, we considered that HMGB1 could be solely responsible for the changes observed, but we did not detect a difference in level of secreted protein with iminosugar treatment ( Figure S3). However, HMGB1 signalling is complex [35][36][37][38][39] and dependent upon a number of factors including oxidation state and subcellular localization, and these details may underlie a physiologic explanation for our observations. While there is much support in the literature for a molecule such as HMGB1 playing a key role in mediating MON-DNJ-induced dampening of inflammation [40], particularly in DENV disease [41][42][43][44], it is also probable that the concerted manipulation of several/all of these genes and their downstream signalling is essential for successful clearance of pathogen in the context of a controlled inflammatory milieu. The data presented herein demonstrate that iminosugars can control inflammation in viral, bacterial and fungal sepsis. These results expand the therapeutic potential of iminosugars to include control of inflammation in diverse pathologies irrespective of their ability to limit pathogen replication. Virus DENV2 strain 16681 (a gift from G. Screaton, Oxford, UK) was propagated in mosquito C6/36 cells (a gift from Armed Forces Research Institute of Medical Sciences, Thailand), collected from supernatant and concentrated by precipitation with 10% (w/v) poly(ethylene glycol) M r 6,000 (Sigma), 0·6% sodium chloride (Sigma) overnight at 4℃. Following precipitation, virus was centrifuged at 2830 g for 45 min at 4℃, resuspended in Leibovitz's L15 media +10% HI FBS and stored at −80℃ until use. Virus titres were obtained by plaque assay on LLC-MK2 monkey kidney cells (a gift from Armed Forces Research Institute of Medical Sciences, Thailand), as described previously [45]. Isolation of monocytes and macrophage model Human PBMCs (peripheral blood mononuclear cells) were isolated from buffy coats (NHS Blood and Transport) by centrifugation over a Ficoll-Paque TM PLUS (Amersham) gradient and monocytes isolated by adherence as previously reported. Autologous plasma was collected, heatinactivated (56℃, 30 min) and used to supplement (1%) X-VIVO10 (Lonza) medium to produce complete growth medium. Cells were differentiated for 3 days (37℃, 5% CO 2 ) in complete growth medium +25 ng/ml recombinant human IL-4 (rhIL-4, Peprotech) to generate alternatively activated macrophages [14,21]. 
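The titres and the MOI of 1 used in this model follow from standard plaque-assay arithmetic. As a point of reference, a minimal sketch of that arithmetic is given below; the plaque counts, dilution and cell numbers are hypothetical illustrations, not values from this study.

```python
# Standard plaque-assay and MOI arithmetic (illustrative only; all values are hypothetical).
def titre_pfu_per_ml(mean_plaques: float, dilution: float, inoculum_ml: float) -> float:
    """Infectious titre = plaque count / (dilution factor x inoculum volume)."""
    return mean_plaques / (dilution * inoculum_ml)

def inoculum_volume_ml(target_moi: float, n_cells: float, titre: float) -> float:
    """Volume of virus stock needed to infect n_cells at the target MOI."""
    return target_moi * n_cells / titre

stock = titre_pfu_per_ml(mean_plaques=42, dilution=1e-5, inoculum_ml=0.2)   # 2.1e7 PFU/ml
volume = inoculum_volume_ml(target_moi=1.0, n_cells=5e5, titre=stock)       # ~0.024 ml per well
print(f"Stock titre: {stock:.2e} PFU/ml; inoculum for MOI 1: {volume * 1000:.1f} µl")
```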
The use of human blood was approved by the NHS National Research Ethics Service (09/H0606/3). Macrophage stimulation, infection and drug treatment Macrophages were stimulated with LPS (200 ng/ml from Salmonella enterica, Sigma), heat-killed Candida albicans (2 × 10 6 c/ml, Invivogen), unstimulated (media-only) or infected with DENV2 16681 diluted to a multiplicity of infection (MOI) of 1, in X-VIVO10 without supplements for 90 min (20℃, with rocking). Subsequently, virus or media was removed and replaced with fresh complete growth medium without IL-4, but containing MON-DNJ (25 µM, unless otherwise indicated) or media-only control. For LPS and C. albicans, stimulus was not removed, and MON-DNJ (25 μM final concentration) or control medium was added such that LPS stimulation continued at 100 ng/ml and C. albicans stimulation at 1 × 10 6 c/ml. Cells were incubated for indicated times (37℃, 5% CO 2 ) before supernatant harvesting and centrifugation for 5 min (room temperature, 400 g) to pellet any cells/debris. Aliquots were stored at −80℃ until analysis. Luminex detection of cytokines Cytokines/chemokines (IL-6, IL-8, IL-10, IL-17A, G-CSF, IFN-γ, IP-10, MCP-1, MIF, MIP-1β, RANTES and TNF-α) were detected by a multiplex fluorescent bead-based assay (Bio-Rad) in supernatants following treatments described above. Collected supernatants were centrifuged (5 min, 900 g) to remove cellular debris, aliquoted and stored at −80℃ until analysis. Samples were handled in a 96-well format using a filter-bottom plate (Bio-Rad) to allow washing of magnetic beads as per the manufacturer's instructions and analysed on a Luminex 200 (Luminex) fluorescence detector. Concentrations of cytokines were determined based on 5-point linear regression curves in comparison with standards. The iminosugars NB-DNJ, NB-DGJ and MON-DNJ were used to treat four donors, and statistical evaluation was performed on all cytokines with correction for multiple comparisons as reported in Table S1. Statistical analyses were performed using SigmaPlot 12 (Systat Software) using either parametric t-tests or ANOVA or, in the case, where data compared did not meet necessary assumptions of normality by Shapiro-Wilk test or equality of variance by F-test, non-parametric t-tests or ANOVA. Post hoc testing was conducted by Holm-Šídák method for parametric methods or by Dunnett's test for non-parametric methods. Data in the main text are limited to MON-DNJ for clarity; however, all significance is corrected for multiple comparisons based on all three iminosugars tested. Analysis of cytokines by ELISA Supernatant TNF-α concentration was determined by enzyme-linked immunosorbent assay (ELISA), based on manufacturer supplied TNF-α standard curve (Invitrogen, KHC3011). DENV-infected samples were diluted 1:2, and LPS-treated samples were diluted 1:10 in X-VIVO10 to ensure cytokine levels were within the range of the standard curve. IP-10 and MIP-1β cytokine levels were quantified by ELISA using Quantikine kits (R&D Systems) based on manufacturer supplied cytokine standard curve, with all supernatants diluted 1:100 in X-VIVO10 (with centrifugation for 4 min at 2,000 g). ELISAs were conducted as per the manufacturer's instructions. Plates were read on a SpectraMax M5 microplate reader (Molecular Devices) to determine absorbance at 450 nm with subtractive correction of absorbance at 540 nm. Samples were assayed in technical singlicate with biological replicates averaged for statistical analyses. 
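The cytokine statistics above were run in SigmaPlot. As a rough Python analogue of the same normalization-plus-correction step (paired measurements normalized to each donor's untreated control, then tested with Holm-Šídák correction across the cytokine panel), one might write something like the sketch below; the file names and data layout are hypothetical.

```python
# Python analogue of the cytokine testing described above (the study used SigmaPlot);
# input files and layout (one row per donor, one column per cytokine) are hypothetical.
import pandas as pd
from scipy.stats import ttest_1samp
from statsmodels.stats.multitest import multipletests

untreated = pd.read_csv("cytokines_untreated.csv", index_col="donor")
treated = pd.read_csv("cytokines_mon_dnj.csv", index_col="donor")

# Normalize each donor to its own untreated control, then test whether the ratio differs from 1
ratios = treated / untreated
pvals = {c: ttest_1samp(ratios[c], 1.0).pvalue for c in ratios.columns}

# Holm-Šídák correction across the panel of cytokines
reject, p_adj, _, _ = multipletests(list(pvals.values()), alpha=0.05, method="holm-sidak")
for cytokine, p_corr, sig in zip(pvals, p_adj, reject):
    print(f"{cytokine}: adjusted p = {p_corr:.4f}, significant = {sig}")
```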
HMGB1 secretion was quantified by Shino-Test ELISA (Tecan IBL, ST51011). All samples were diluted 1:2 in diluent buffer and assayed in technical duplicate according to the manufacturer's instructions for the high sensitivity standard curve. Absorbance at 450 nm was determined using a SpectraMax M5 microplate reader, and biological triplicates were averaged for statistical analysis. Analysis of significant differences was conducted as for Luminex assays above. Quantification of functional TNF-α HEK-Blue™ TNF-α cells (Invivogen) were cultured, and detection of biologically functional cytokine in stimulated macrophage supernatants was performed according to the manufacturer's instructions. Plates were incubated (37℃, 5% CO2) for 30 min to 1 h and secreted alkaline phosphatase levels quantified by measuring absorbance at 645 nm using a NOVOstar microplate reader (BMG Labtech). Cytokine concentration was determined relative to a standard curve of recombinant human TNF-α (Peprotech) using GraphPad Prism version 7·01 (GraphPad Software, Inc). Transcriptomic sample generation and quality control Macrophages were stimulated as previously described to generate time points post-infection of 6 h and 24 h. Following appropriate incubation, cells were washed in 37℃ PBS (Sigma) then lysed with TRIzol (Life Technologies) (5 min, 20℃). Cellular debris was cleared by centrifugation (12,000 g, 1 min, room temperature), and RNA-containing supernatants were mixed 1:1 with 100% EtOH (Fisher Scientific). Samples were applied to a Direct-zol™ RNA Mini-prep column (Zymo Research) and washed in accordance with manufacturer's instructions. RNA was eluted by two sequential applications of 25 μl of RNase-free water (50 μl total). Samples were stored at −80℃ and processed at Cambridge Genomic Services (CGS). Transcriptomic RNA quantification and normalization RNA samples were amplified and biotinylated using the Illumina® TotalPrep™ RNA Amplification Kit (Ambion), directly hybridized to a HumanHT-12v4 BeadChip (Illumina) and scanned using an iScan system (Illumina). Illumina GenomeStudio analytical software was used to generate mappings and intensities and perform bead-level processing (data available at: GSE128303). GenomeStudio generated data were imported into R Bioconductor v2·14 using the lumi package [46][47][48][49]. Data were normalized via the neqc protocol [50] to account for variation in negative control probes. Inclusion criteria for further analysis required a detection value <0·01 for at least one sample for a given probe; this cut-off allows for 'on/off' responses to various stimuli (e.g. DENV, LPS or MON-DNJ) and reduced the dataset to 21,705 expressed probes. GSE128303 contains the list of normalized expression values for all expressed probes. Statistical analyses of macrophage transcriptome in response to DENV, LPS and MON-DNJ Differentially expressed genes were then identified for various treatments using an ANOVA approach implemented in lumi, with thresholds requiring a multiple-comparison-adjusted p-value of less than 0·01 and a fold change greater than 20 per cent. Unsupervised hierarchical clustering was performed using Euclidean distance with complete linkage. Principal component analysis and hierarchical clustering of the entire transcript set were done using ClustVis [51]. Hierarchical clustering of transcript subsets identified in STRING-DB was performed using TMeV [52,53].
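The array processing just described was carried out with lumi in R/Bioconductor, ClustVis and TMeV. Purely as an illustration of the same filtering and clustering logic, a Python sketch might look as follows; the thresholds mirror the description above, but the input files, column names and library choices are hypothetical stand-ins rather than the original pipeline.

```python
# Illustrative Python analogue of the probe filtering, differential-expression thresholds
# and clustering described above (the real analysis used lumi in R, ClustVis and TMeV).
# All file and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

expr = pd.read_csv("neqc_normalized_expression.csv", index_col="probe_id")   # probes x samples
detect = pd.read_csv("detection_pvalues.csv", index_col="probe_id")          # same shape as expr
stats = pd.read_csv("anova_results.csv", index_col="probe_id")               # adj_p, log2_fc

# 1. Keep probes detected (p < 0.01) in at least one sample ("on/off" responses allowed)
expressed = detect.lt(0.01).any(axis=1)
expr = expr.loc[expressed]

# 2. Differential expression: adjusted p < 0.01 and fold change > 20% (|log2 FC| > log2(1.2))
s = stats.loc[expressed]
de = s[(s["adj_p"] < 0.01) & (s["log2_fc"].abs() > np.log2(1.2))]

# 3. PCA across samples of the expressed probe set
pca = PCA(n_components=2).fit(expr.T.values)
print("Variance explained by PC1 and PC2:", pca.explained_variance_ratio_)

# 4. K-means (6 clusters) and hierarchical clustering (Euclidean distance, complete linkage)
#    of per-condition fold changes for the differentially expressed probes
fc = pd.read_csv("log2fc_per_condition.csv", index_col="probe_id").loc[de.index]
kmeans_labels = KMeans(n_clusters=6, n_init=25, random_state=1).fit_predict(fc.values)
tree = linkage(fc.values, method="complete", metric="euclidean")
hcl_labels = fcluster(tree, t=6, criterion="maxclust")
```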
STRING database identification of functional gene networks Transcripts modulated in response to MON-DNJ treatment were identified in uninfected (UI), DENV-infected (DENV) and LPS-treated (LPS) macrophages at both 6 h and 24 h p.i. The top 100 differentially expressed transcripts identified by absolute value log 2 fold change for each condition were merged in addition to all transcripts with at least a twofold change (absolute value log 2 ≥ 1) to obtain a list of 655 differentially expressed probes mapping to 324 unique genes. This set was evaluated using STRING-DB.org v10·5 [54,55] for enrichment of biological processes (Tables S3-S4). To identify consistent changes with drug treatment, probes with at least a twofold change in 2 of 3 infection conditions at 6 h (n = 44) were combined with probes with at least a 1·5-fold change in all 3 infection conditions at 6 h (n = 66) for a total of 79 probes mapping to 57 unique genes. At 24 h, an identical sorting process yields n = 12 probes with at least twofold change in 2 of 3 infection conditions and n = 8 probes with at least 1·5fold change in all 3 infection conditions (for 13 unique genes, Table S7). From the list of 57 genes identified at 6 h, STRING-DB identified a single network of interactions principally associated with ER response to stress (FDR = 6·66 × 10 −16 ), cellular response to topologically incorrect proteins (FDR = 6·66 × 10 −16 ) and the ER UPR (FDR = 4·92 × 10 −15 ). This network was used to generate the UPR-associated heatmap, network image, and to evaluate enriched biological processes. Unsupervised hierarchical clustering using Euclidean distance, complete linkage was executed in TMeV based on mean fold change with MON-DNJ treatment compared to untreated controls. Inflammatory transcriptional changes were identified from the set of 324 unique genes identified. STRING-DB gene ontology enrichment identified 9 pathways associated with the immune response based on 60 genes (Table S4B). Additional TNF-related genes TRAF1 and TNFRSF9 not identified by the above analysis but included in the set of 324 genes were added to the set to obtain a final list of 62 differentially expressed immune-related genes. Unsupervised hierarchical clustering using Euclidean distance was executed in TMeV based on mean fold change with MON-DNJ treatment compared with untreated controls. The fifteen gene subset with strong early suppression in uninfected macrophages and ubiquitous suppression by 24 h p.i. was combined with a similarly identified gene set for cell fate and those genes involved in UPR, inflammation and cell fate determination (Figure 6), and this union was used to assemble a network in STRING-DB. Detection of reactive oxygen species (ROS) Macrophages were DENV-or mock-infected and MON-DNJ-treated as previously described. ROS levels were measured 18 h post-infection for n = 3 donors in technical triplicate. Cells were washed once in PBS and stained with 5 μM CellROX Green reagent (Invitrogen, C10444) for 30 min at 37℃. Cells were then scraped and fixed in 4% paraformaldehyde for 15 min at 4℃. After a further washing step, cells were resuspended in PBS with 0·5% BSA and 5 mM EDTA and fluorescence was measured using a BD FACS Calibur. Cells were gated to exclude debris and doublets and geometric mean fluorescent intensity was recorded based on a minimum of 5000 gated events. 
Geometric mean fluorescence intensities (gMFI) were exported to Prism, normalized to uninfected, untreated controls, and then analysed by two-way ANOVA for statistically significant differences. qRT-PCR gene validation Selected transcripts were assayed by qRT-PCR to confirm differential regulation observed by microarray. Macrophages from n = 4 donors were mock-infected, DENV-infected or LPS-treated and treated with control or 25 μM MON-DNJ (grey bars) in duplicate in identical fashion to samples generated for microarray. RNA was collected at 6 h and 24 h p.i. by TRIzol lysis and Direct-zol (Zymogen) isolation as described above and stored at −80℃ prior to assay. Samples were thawed and added to Verso 1-step RT-qPCR kit (Thermo Fisher Scientific) with ROX, and amplification was monitored by fluorescence detection using an AB7500 Real-Time PCR System (Thermo Fisher Scientific) as per the manufacturer's instructions. FAM-MGB experimental primer/probe sets were obtained from Thermo Fisher Scientific and validated to have efficiency of 100 ± 10%. All experimental probes were normalized to relative quantity of RPLP2 VIC-MGB endogenous control (Thermo Fisher Scientific) based on observed homogeneity of RPLP2 levels across all treatments in transcriptomic experiments. Experimental samples were normalized to relative quantity of transcript at the equivalent time point for uninfected, untreated samples. Holm-Šídák correction for multiple comparisons was performed to identify statistically significant differences for pairwise comparisons. Statistical analyses Specific statistical tests used to evaluate for significance are described in the appropriate figure legend. In general, parametric tests were used where possible with post hoc pairwise testing using Holm-Šídák correction for multiple comparisons. Where assumptions of normality were not met, equivalent non-parametric tests on rank were used with Tukey testing for multiple comparisons. All transcriptomic data handling was performed in R as described in the associated section of the methods. All further data handling and statistical analyses were conducted in SigmaPlot and GraphPad Prism. N.P. performed experiments and edited the manuscript. M.L.H. performed experiments. R.A.D. designed the study and edited the manuscript. J.L.M. designed the study, conducted experiments and co-wrote the manuscript. N.Z. designed the study and co-wrote the manuscript.
The Use of Photos of the Social Networks in Shaping a New Tourist Destination: Analysis of Clusters in a GIS Environment The use of new photo-sharing services in social networks has made it easier to perceive the interests of locals and visitors. The photos presented in these networks are geocoded by the users, whether residents or visitors, allowing extensive databases to be built. The research, conducted between 2015 and 2016, followed an essentially quantitative approach. Based on the georeferenced photos on social networks, the main distribution patterns of places of interest to tourists, visitors and residents were analysed in an emergent rural tourist destination in northeastern Portugal. We used geographical information systems (GISs) to apply various spatial and statistical analysis techniques. One of the main conclusions was that there is a high number of natural and cultural heritage locations with tourism potential, and, in some cases, their accessibility standards make them favourable destinations for tourists. Introduction The development of new methods of analysis of the images of various destinations has accompanied the emergence of extensive research that uses the method of analysing photos [1][2][3][4][5]. Another contributing factor is the massive use of the Internet, which is introducing significant changes in the way people interact in society [6,7]. This new dimension of communication has made the expression of multiple cultural values possible [8]. Because of the evolution of the Internet and the advent of smart technology, the number of photos taken per trip has proliferated in recent decades [9]. The habit of sharing them with co-workers, family and friends, as well as in the broader media, has grown with online sharing, often in real time and synchronously, on various websites such as Flickr, Picasa, Facebook, Panoramio and Pinterest [4,[10][11][12][13][14][15]. Images have assumed a fundamental role in the dissemination of the dimensions and amplitudes of tourist spaces. The sharing of photos of a place through social networks has an increasingly strong power in promoting tourism, where the dissemination of the tourist image has an effective and lasting role. Different social media platforms have been accommodating new features, and coupled with their simplicity of handling and attractiveness, they are competing increasingly with traditional travel guides or leaflets because they allow people to post photographs on the Internet immediately after they are taken [4,12,13,[16][17][18][19][20][21]. However, it should be noted that these platforms still tend to be used by the younger age groups and by groups with higher socioeconomic characteristics. Conversely, research centred on the analysis of online images of rural destinations remains scarce [22][23][24]. The growing interest of urban dwellers in staying overnight and visiting rural areas shows that it is essential to market and promote rural destinations online [18,[25][26][27][28][29]. With the evolution of geographic information technologies, it is possible to analyse distribution patterns (intensity, concentration and dispersion) of tourism resources in different spaces through density maps, measures of central tendency or indicators of distribution patterns (e.g. Getis-Ord General G or Moran's Index).
Based on these assumptions, the main objectives of this research are (i) to identify the spatial distribution patterns of the photos of visitors and residents; (ii) to characterise the different "looks" of the tourist destination and (iii) to contribute to the development of a tourist image that is closer to the interests of visitors and locals. This study complements other approaches undertaken in the territory, resulting mainly from an exploratory analysis of existing and potential tourism resources, the results of the focus group that was conducted, and the results of a self-administered survey [30,31]. This chapter consists of five sections. After the introductory section, the second section presents the main methods and sources used to acquire the data. Section 3 is a summary of the main results that were achieved, and it highlights the main potentialities of the analysis of the images and how they can contribute to the segmentation of the visitors. Section 4 presents our discussions. Section 5 presents the conclusions concerning the main results, proposes some challenges for future research and identifies the main limitations that were intrinsic to the study. Geographical context The case study was based on the municipality of Boticas, which is located in the Nomenclature of Territorial Units for Statistics (NUTS) III of Alto Tâmega in the northeast of Continental Portugal. According to the Typology of Urban Areas (TIPAU) of 2014, seven of the parishes are considered Medium Urban Areas (MUA) and three as Predominantly Rural Areas (PRA) [32]. The area is subdivided into 10 parishes, with an area of 322 km² (Figure 1). In 2011 (last census), the population of the municipality of Boticas was 5750, of which 1510 residents (26.3% of the population) lived in Boticas and Granja. In fact, this municipality, like other territories located in the interior of the country, has lost its vitality. One of the municipality's main potentials is the use of its endogenous resources to attract tourists and associated economic activities and to help sustain its population. Materials and methods The database used in this research was constructed with data from the Panoramio photo platform (Google Earth). It covered the period from January 2005 to March 2016 and included 728 photos. The photos were analysed by two quantitative methods, that is, (1) univariate and multivariate statistical analysis using the SPSS 22.0 statistical package and (2) geospatial analysis using the ArcGIS 10.3 package. The data were grouped into several typologies considering the assumptions defined in previous studies [3][4][5]: i. Category of the photographed element, classified as built heritage, natural elements, local culture or tourism services (the latter covering tourism equipment and infrastructures, such as accommodations, catering and signs). The situations in which images with more than one set of elements were verified always opted for the predominant set. ii. Zoom of the image: this was analysed by checking the following assumptions: (1) if the image is focused on a single element (e.g. a window of a dwelling or a church), (2) in its context (a church in the housing complex) or (3) whether it is a panoramic view or a scenery (e.g. a mountain or a river). iii. Presence of people. iv. Origin: whether the image originates from locals or visitors. After defining the cataloguing criteria of the images, the main assumptions that are inherent to the analysis of the spatial distribution of the photos should be emphasised: 1. the data were aggregated in hexagons with 150 m of side and 300 m of diameter; 2.
standard distance calculations were performed to infer the degree of concentration or the dispersion of resources around the mean geometric centre; 3. two indices were used to determine global localization patterns, that is, Getis-Ord General G and Global Moran's I. These served to identify the degree of agglomeration of high and low values and the spatial autocorrelation based on resource locations and attribute values; 4. Anselin Local Moran's statistics (LISA statistics) were used, making it possible to determine emerging local trends for the intensification of Boticas' tourism. These typologies were the basis of the identification of clusters. This technique allows the identification of groups with high homogeneity within the group as well as intergroup heterogeneity [3]. The k-means method was used considering the following assumptions: (i) the number of groups was chosen from a set of candidate solutions (2-5 groups); (ii) the merging of the elements involved a minimum loss of information and (iii) instead of considering the categories per se, the individual items were used to segment the groups. After obtaining the clusters, an ANOVA test was performed to determine the differences between the groups. Results The destination image can be promoted based on the sets of photos that residents and visitors take, considering their ability to become souvenirs, postcards or tourists' objects. Photo density analysis of Sightsmaps (http://www.sightsmap.com/, developed by [33]) clearly showed a higher density of photos in more urbanised areas (e.g. the Oporto Metropolitan Area (OMP), Braga, Guimarães, Viana do Castelo, Chaves and Bragança), along the course of the Douro River (in Portugal, from Barca d'Alva to the river mouth between Porto and VN Gaia) or in the Peneda and Gerês National Park (PNPG), which is different from what occurs in territories such as Boticas. In the municipality of Boticas, as in the other municipalities of Alto Tâmega (with the exception of Chaves), the density of photos is much lower and circumscribed to the central area of the municipality. Figure 2 summarises the distribution of the photographic series, differing according to whether they were taken by residents or visitors. In general, there is a concentration of photos in the northwest and central areas of the municipality of Boticas, although most of the photos are distributed somewhat throughout the municipality, mainly along the main axes of the road network. However, it is in the parish of Boticas and Granja that there was a greater density of photos, which was due to the amount of equipment available to visitors. For this reason, in all parishes, there are more photos of visitors than residents (258 for locals and 470 for visitors; Table 1). The maximum number of photos recorded per hexagon for this group was 55 (Table 2). The grouping of the photos in hexagons shows a concentration of photos to the north of the municipality, and the areas with the lowest densities of photos are located to the south, especially in the parish of Pinho (three photos; Figure 3). To identify the clusters, the G-statistics were analysed, and the Moran index was used. The G-statistics indicated that there was a tendency for the concentration of values (high clusters), with high levels of significance (p-value < 0.01). Likewise, the Moran index indicated a very strong spatial autocorrelation for the formation of spatial agglomerates (p-value < 0.01, Figure 4).
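The spatial statistics reported here were computed in ArcGIS 10.3. For readers who prefer an open-source route, an equivalent analysis can be sketched in Python with geopandas and PySAL, as below; the hexagon layer, field name and contiguity-based weighting scheme are hypothetical choices made for illustration, not the exact ArcGIS settings used in the study.

```python
# Open-source sketch of the spatial statistics described above (the study used ArcGIS 10.3);
# the input layer, field name and contiguity-based weights are illustrative assumptions.
import geopandas as gpd
from libpysal.weights import Queen
from esda.getisord import G
from esda.moran import Moran, Moran_Local

hexes = gpd.read_file("boticas_hexagons.shp")          # hypothetical 150 m hexagon grid
counts = hexes["photo_count"].values

w = Queen.from_dataframe(hexes)                        # contiguity weights (binary by default)
general_g = G(counts, w)                               # Getis-Ord General G: high/low clustering

w.transform = "r"                                      # row-standardize for the Moran statistics
moran = Moran(counts, w)                               # Global Moran's I: spatial autocorrelation
lisa = Moran_Local(counts, w)                          # Anselin Local Moran's I (LISA)

print(f"General G = {general_g.G:.4f} (p = {general_g.p_norm:.3f})")
print(f"Moran's I = {moran.I:.3f} (p = {moran.p_sim:.3f})")

# Flag significant High-High clusters (LISA quadrant 1) for mapping
hexes["HH_cluster"] = (lisa.q == 1) & (lisa.p_sim < 0.05)
```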
The Anselin local Moran's statistic was calculated with the intention of mapping the presence of these groups, and it was found that the number of clusters with High-High (HH) values was not very significant. Concerning the distribution of these clusters, there was a spatial agglomeration in several villages typical of the municipality (e.g. Vilarinho Sêco, Coimbró), in places of landscape-natural interest (e.g. Carvalhelhos, Mosteiró, Vilarinho de Mó) and in the village centre, either by locals or visitors. There are some differences between places photographed by locals and visitors. The former have High-High (HH) clusters in Coimbró, the Serra do Barroso Wind Farm and in Fiães do Tâmega, while the latter have an agglomeration in Sapiãos (e.g. anthropomorphic graves, some typical houses, river beach), which are not present in the photographs made by the residents. It should be noted, however, that there were small clusters expressed in Low-High values, which correspond to atypical values. Figure 5 presents the typologies created for the photos. The concentration of photos is strongly associated with two types of characteristics, that is, built heritage (57%) and natural elements (35%). Tourism services (5%) and local culture (4%; Figure 5A) appear less often. The distances from which the images were photographed were very different. In some cases, the person taking the photos focused only on the details, while others sought to take photographs of the entire scene. Therefore, we chose to classify the photographs into three subgroups, as shown in Figure 5B. The results allowed us to conclude that 57% of the sites/elements were photographed in their context (Figure 5C). The most common circumstance for visitors and residents was to take photos without the presence of people (93%; Figure 5D), and only 7% of the photos included people. In addition, 67.3% of the photos were taken by visitors, showing their renewed interest, which is perfectly understandable. In fact, it is surprising that there was a significant number of locals placing Panoramio photos, which may indicate the recognition of the natural and built heritage that exists in the municipality. Figure 6 represents the temporal distribution of the photos taken by visitors and locals. It should be noted that the figures do not refer to all of the photos sent to the Panoramio platform over the review period; rather, they refer only to the photos that contain information concerning the dates they were taken. There were 528 photos with year-to-year, intra-annual and intra-day information that were included among the 729 previously classified photos. In fact, the year with the highest number of photographs taken by visitors was 2014, while in the case of locals, the highest number of photographs taken occurred in previous years, that is, between 2006 and 2008 (Figure 6A). Thus, in cumulative terms, there has been a redoubled interest in taking photographs by visitors in recent years. However, when analysing the values recorded monthly in the period from 2013 through 2015, there is a seasonality of the photo records, with a concentration in August (Figure 6B). Concerning the hourly range, these photographs generally were taken between 10:00 am and 5:00 pm (Figure 6C). It should be noted that residents contributed most to post-5:00 pm photographs, which may be due to the return of excursions to the place of departure after a day visit (without an overnight stay) to the Boticas municipality, indicating that the municipality had visitors but not tourists.
There are some records during the night period, but they should be viewed with caution because there may have been errors associated with the instruments' recordings of the time when the photographs were taken. An analysis also was conducted for the areas with the highest concentration of photos (i.e. the areas classified in Figure 7 with High-High (HH) clusters). It should be noted that these concentration areas received a larger number of photos than the others, and it was decided to compare the evolutionary rhythms by year and month with respect to the total records of photos in the municipality. Although there was a trend toward the concentration of photographs in some areas of the Boticas municipality, as was previously indicated, this centralization only began to occur in 2014 ( Figure 7A). Even so, these photos were taken mostly in the summer months, with emphasis on the month of August, which should result from visitors to the country (Figure 7B). Table 3 summarises the results of grouping items into clusters. It was observed that the characteristics of the photographs indicated that there were two types of patterns, that is, tourists and residents. Note that, during the analysis, it was verified that all the items presented a p-value of ≤0.05. The ANOVA test result also showed that the variables included in the model were sufficiently different for their grouping, and, thus, three groups were created. Cluster A (n = 250) essentially presents photographs of nature (100%), and the cluster is associated with photos that focus on scenarios (55.6%) and includes the visitors who most often take this type of photos. Cluster B represents only the class of visitors, highlighting the elements in the context (64.9%) and the built heritage category (89.8%). Cluster C represents the least expressive group and incorporates photographs that focus on elements located in the context (76.5%) and in the category of the built heritage (78.4%). Clusters B and C contain essentially photographs that do not include people. Discussion As demonstrated during the study, Boticas is a tourist destination with seasonal demand, particularly during the summer period (Figure 8), a fact substantiated by the intersection of the number of visitors registered in Interactive Shop Porto and North Tourism and the number of photographs available in Panoramio. In any case, it is important to mention that some of the rural tourism houses in the municipality have a higher number of visitors during the winter period, as was evident from direct observation and discussions with these local agents. The relationship between the photos that were taken and the promotion conveyed by various means of communication does not always present a direct relationship. In fact, although the Nadir Afonso Arts Center and the Boticas Nature and Biodiversity Park present a significant number of photographs and are key tourist resources for the promotion of the territory, there are other resources that the municipality presents that are underutilised due to the lack of the dissemination of information about their existence. When the number of photographs is considered with respect to the tourist resources identified in the territory (Figure 9), it was verified that there are villages in which some of the traditional customs have been preserved, for example, Vilarinho Sêco and Coimbró. Although such considerations also are woven by residents and other stakeholders (i.e. 
[23,30,31]) when they identify the valued tourist attributes or those with high potential for appreciation, they are still not as effective in the tourism promotion strategy. The results also show that visitors and local people more often have photographed patrimonial and natural elements. However, it should be noted that this difference was more significant in the case of the local people. Nevertheless, it is important to mention that these evidences present a high degree of similarity with the results obtained by [3]. The perceived image of Boticas is very much related to the built heritage rather than to nature, although the differences are not so significant when compared with the categories of tourist services and local culture. This was a surprising discovery, and, if it is true that this municipality has a unique built-heritage (especially churches and museums), there also were some atrocities committed during the patrimonial recovery processes with profound changes to the structures of churches and chapels. However, the relevance of nature, especially on the part of the visitors, is consistent with the reading made by the various stakeholders, considering the magnificence of the natural ecosystem, that is, the mountains and valleys, of this territory. Another element of analysis that allows us to infer the image of visitors is associated with the degree to which people are present in the photographs. While in some European destinations the captured heritage photographs do not have individuals present, in others they appear more frequently, especially in exotic or allocentric destinations. Photographs with the presence of visitors occur commonly in destinations where there is a degree of interaction between the receiving community and visitors [3]. A careful analysis of the photographs allowed us to infer that those taken in the Boticas territory were largely free of the presence of both local people and visitors. This situation is not uncommon, as was evident in the investigations conducted by Girona [3,34,35] and Vale de Boí [3]. One of the reasons that support the absence of people in the photos is the ideal of individual consumption of places, which is supported by the assertion that the presence of people in the photographs could represent a kind of alteration of reality [36]. Nevertheless, in most cases, tourists who visit a destination such as Boticas seek to eliminate any vestige of humanity to avoid detracting attention away from the element or the landscape, for example, empty churches or the splendour of the landscape. These photographs usually also have a smaller focus, where the elements in the context (56.9%) are the most photographed, which may be due to the search for a broader framework than they are visiting (global vision of the place), to facilitate their later recognition. In fact, visitors' behaviours may be framed in accordance with the photographs that are taken in the sense that the length of stay in the municipality is very small. Most of the photos with a visiting author occur between 10:00 A.M. and 4:00 P.M, which may denote (i) the absence of tourist facilities that are open for a longer time period, (ii) the absence of other interesting activities in the municipality, (iii) the absence of catering services that are open until later hours during the weekly period and (iv) the lack of desirable hotel accommodations that meet visitors' expectations. 
Considering these elements, it would seem reasonable to start some efforts to counteract this trend and to create a precise and accurate image of this territory. This could be done in the form of tourism products, based, for example, on nature, health and well-being. This approach could attract visitors who have an interest in these locations, and it may allow the maintenance of sustainable tourism practices. Conclusion The photos taken by locals and visitors show that there are certain observable distribution patterns and that these patterns have some similarities and some significant differences. In fact, visitors take photos less concentrated in certain housing areas than in locals, which shows a certain predisposition to value elements of the intangible heritage. It should be noted that 34.5% of the photos that were taken were associated with the natural component. In some cases, there is a certain disarticulation between the promotion of tourism conducted by the municipality and the image of the destination based on photographs. In this way, there are some clues that can be put forward for future work, namely (i) assessing the seasonality of photographs taken by residents and visitors and, thus, which potential sites to promote and at which time of the year; (ii) verification of the distribution of the photographs at the supra-municipal scale, given that this sample is incipient and that it is possible, based on photographs standards on a broader scale, to develop common strategies with other adjacent territories; and (iii) objectives, goals and strategies that can be outlined in the short term and medium term for the development of tourism. Such research has some limitations. While it is critical to take into consideration that residents or visitors only share the photos they deem to be most relevant, only some groups are frequent users of these platforms. Nonetheless, these approaches must be used in combination with other approaches, such as surveys, focus groups and interviews, to determine important information concerning the use of the territory's resources from the perspective of enhancing tourism.
Human Resources Development and Migration: New Potential Determinants for Monetary Policy

The main objective of the present paper is to determine the potential impact of the qualitative and quantitative tendencies in the labor market on the decisions which influence the design of monetary policy worldwide. The analysis is focused on how human resources and phenomena associated with them could influence potential growth and, further on, how they can impact monetary policy decisions at national level for European countries outside the euro area and at ECB level for the euro zone countries. Moreover, the paper will envisage potential macroeconomic reactions (monetary decisions included) to human resources dynamics. The economic variations are regarded through the perspective of the growth potential shown by the Research & Development sector and also through the effects of labor force migration. The analysis of statistical data aims at pointing out the different economic perspectives in the European Union, the United States, and Japan, also considering the disparities between EU member states. The analysis is completed by the use of the ranking method, the conclusions stating once more the crucial importance of the human factor in monetary policy decisions.

Labour markets are confronted with prolonged shortages of skilled workers, especially in the tertiary sector. Fewer and fewer qualified human resources enter the labour markets in public works, public services, and construction as a consequence of inadequate education strategies in the field of vocational education and training. Another associated phenomenon is the migration of the labour force from Eastern to Western and Northern Europe, which increases the shortage of workers on the Eastern markets and puts inflationary pressure on the workers' markets of origin via remittances. In such a context, labor markets play an important indirect role in the design and conduct of monetary policy, underscoring the role of labor as an important factor in the production functions of various economies. The present paper comprises four parts: the first is dedicated to analyzing the rationale behind establishing a direct link between monetary policy and economic growth; the second analyzes the role of human capital in the economic growth process in contemporary economies; the third examines four main challenges facing economic growth from the point of view of human resources, namely lifelong learning, mobility, rigidities, and displacement; and the fourth is dedicated to conclusions.

Monetary Policy and Economic Growth

Monetary policy is not an objective per se but a mechanism that is put in place in order to better serve the achievement of macroeconomic objectives. Monetary policy objectives and instruments are designed to support general macroeconomic objectives, among which economic growth is essential. Empirical attempts have been made to establish a quantitative relationship between monetary policy and economic growth. In the economic literature they have been categorized as either deterministic (the Taylor rule) or normative (quantitative benchmark definitions for the money supply growth rate). The Taylor rule provides recommendations on how the Federal Reserve should set short-term interest rates in accordance with economic conditions in order to achieve its short-run goal of stabilizing the economy and its long-run goal for inflation.
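In its most common textbook form, the Taylor rule sets the policy rate as the sum of the equilibrium real rate, current inflation, and weighted inflation and output gaps. The short sketch below illustrates that textbook formula using Taylor's original 1993 coefficients and a 2% equilibrium real rate; these values are illustrative assumptions and are not taken from the present paper.

```python
def taylor_rule_rate(inflation, inflation_target, output_gap,
                     real_equilibrium_rate=2.0,
                     w_inflation=0.5, w_output=0.5):
    """Illustrative Taylor (1993) rule.

    All arguments are in percent. The 0.5 reaction weights and the 2%
    equilibrium real rate follow Taylor's original specification and are
    assumptions chosen for illustration, not values used in this paper.
    """
    return (real_equilibrium_rate + inflation
            + w_inflation * (inflation - inflation_target)
            + w_output * output_gap)

# Example: inflation at 3%, target at 2%, output 1% above potential
# -> recommended nominal short-term rate of 6%.
print(taylor_rule_rate(inflation=3.0, inflation_target=2.0, output_gap=1.0))
```

The stabilizing logic is that higher inflation or a positive output gap raises the recommended short-term rate, which is the deterministic link between monetary policy and the state of the economy referred to above.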
Starting October 1998, the Governing Council of the European Central Bank decided to announce a reference value for the growth rate of the broad monetary aggregate (M 3 ) having as regressors the potential growth, the desired inflation rate (following the ECB's definition of price stability as an year-on-year increase in the HICP for the euro area of below 2%) and the estimate of trends in the inverse of the velocity of circulation of money. A reference value for monetary growth of 4.5% per annum has been successively reconfirmed by the ECB, based on the assumptions regarding a trend for potential growth in the range of 2%-2.5% and a decline in M 3 income velocity of 0.5%-1% per annum in the euro area. This correlation mechanism does not, however, function automatically. There are several underlying factors for GDP growth rate, some of them radically changing during the last decades. One of the very dynamic factors is represented by human resources and their presence in the production function of a state under the categorization of labor. Human Capital and Economic Growth Human capital is without any doubt a key variable in the macro-economic equation of every state. Its quantity and quality exert impact on the level and trend of GDP growth together with other production factors. While quantity is affected by low birth prospects across most of the developed countries, quality of the human capital has gained importance through various initiatives such as investing more and more in education, as well as in research and development. Education increases the mobility of the workforce within a labor market and is therefore essential for the functioning of a monetary union in which asymmetric economic shocks can no longer be absorbed by adapting the exchange rate relations but have to be offset by flexible factors of production (Liebscher et al., 2006). In its theory of optimum currency areas, Mundell (1961) has identified labor mobility as a strategic facet of an optimum currency area. His argument was that, when this production factor moves freely within the monetary area, adjustments to real shocks do not imply dramatic changes in the level of prices and income for member states. If, on the contrary, mobility is low, the monetary union is not desirable. Education does not only increase labor force mobility, but also its adaptability, productivity, and competitiveness, as key issues of Europe's revised Lisbon agenda. Although a time lag has been identified between the investment in education and its results in terms of increased competitiveness and economic growth, there is clear evidence that education and lifelong learning are an indispensable input for economic growth in the last decades. Research and development (R&D) should become a driving force behind economic growth, job creation, innovation of new products, and increasing quality of products. A minimum set of six indicators can be used to assess the competitiveness potential of the EU economy in the spirit of the Lisbon Strategy for growth and jobs (see Tables 1-6 The findings are as follows: (1) Unless properly financed, R&D is less likely to foster economic growth and job creation. If comparing the gross domestic expenditure on R&D in the EU, USA, and Japan (see Table 1 and Figure 1), one can easily notice that further investements should be made in this direction in the EU in order to achieve the goals of the Lisbon Strategy. 
(2) Table 2 shows data on tertiary science and technology graduates in the EU as an indicator of the science and technological potential of high-skilled graduates (see Figure 2). As it can be noticed, huge disparities persist between different countries in the EU, which brings on the top of the agenda the need to ensure homogenous priorities for higher education in the field of science and technology accross Europe. The European Union needs to train and use on the labor market as many high-skilled graduates as possible. This has been included among the priorities of the Bologna process and the financing priorities of the European Social Fund. (3) In the spirit of the Lisbon strategy for growth and jobs, there can be established a correlation between the amounts of investments dedicated to R&D and the employment rate (see Table 3 and Figure 3). Table 3 Employment Rate in Percent (4) Both the level of expenditure on R&D and the level of investments in education are to be reflected in the productivity per person employed (see Table 4 and Figure 4). Table 4 Labor Productivity Per Person Employed EU 27 = 100 (5) The information regarding the percentage of personnel currently working in the Research & Development sector can be viewed below (see Table 5 and Figure 5). (6) The level of interest regarding the Research & Development sector amongst graduate students who wish to pursue a career in this innovative domain, is reflected in the following table (see Table 6 and Figure 6). The connection between schooling and economic growth and between education and the development of financial markets has been also explored (Papademos, 2007). It has been pointed out that private returns on investment in education ranged between roughly 6.5% and 9% and that social returns were possibly even higher due to positive externalities. An additional year of formal schooling is associated with an increase in wages of 7.5% on average over the entire working life. Education can also influence growth via innovation. Higher education levels foster innovation and the adoption of technological advances. Particularly the most technologically advanced countries benefit from better education, which fuels growth in new sectors such as pharmaceuticals and electronics. Based on the analyses through the usage of the ranking method (see Table 7), it is concluded that, considering the range of the appointed indicators, the most competitive country/Union in the Reaserch & Development sector is the United States of America, followed by Japan and the European Union. The differences between the three subjects of the analyses through the ranking method are not overwhelming, which can be a sign of a possible change in the hierarchy, especially considering the emphasis on R&D in the European Union's 2020 Strategy. Applying the same ranking method but to different subjects (see Table 8 Challenges Ahead Monetary authorities become more and more aware of the importance of labor as a variable in the macroeconomic equations. This is particularly supported by certain recent phenomena such as: migration and remittances; rigidity of labor and wages in certain markets and the rising importance of services and associated labor in the production function. This type of phenomenon, if properly managed, can prove beneficial for both the origin and the destination country of migrant labor. 
For the destination country it is a source of labor needed in the production function of the economy, especially in sectors such as services, constructions, and agriculture as well as a dynamic factor for the economy. For the origin country of migrant labor and, consequently, the destination country of remittances, this is a poverty reduction factor. This inflow of foreign currency supports economic growth and helps increasing the living standard, smoothing social tensions. Various econometric models show contradictory results as regards the impact of remittances on economic growth. Chami, Fullenkamp, and Jahjah (2003) conclude that, because of asymmetries and uncertainties, remittances have, at the end of the day, a negative impact on the economic growth in their countries of destination. However, using a similar model with slight changes and additional institutional variables, Mansoor and Quillin (2006) show that remittances stimulate economic growth. Irrespective of these contradictions, there are issues that have general validity, namely: (1) A very debated effect of remittances flows is the one related to the appreciation of the real exchange rate and the connected macroeconomic effects, such as: adverse effects on the tradable sector of the economy affected by the associated loss of international competitiveness; reductions in the labor supply of the tradable sector in favour of the non-tradable sector, wage pressure, and price increase in the non-tradable sector; widening the current account deficit when consumption driven by remittances is also directed towards tradable goods, thus increasing the demand for imports; inflationary pressures when remittances flows do not leave the country and inflate monetary aggregates; distortions in the sectoral allocation of investments, given the fact that most of the remittances flows are directed towards the real estate market, thus artificially inflating the price of assets. Rigidity of Labor and Wages in Certain Markets A recent study carried out by the European Central Bank (Christoffel, Kuster, & Linzert, 2006) highlights the role of labor markets for understanding business cycle fluctuations and the implications for monetary policy in particular. The focus of the analyses is linked to the approach of rigid labor markets when conducting monetary policy based on regimes such as inflation targeting. Rigid labor market regimes can influence monetary policy transmission mechanism according to an algorithm similar to the following one: nominal wage rigidity, the speed of mobilizing idle labor resources and the cost of mobilizing them all influence the marginal cost of labor; this will be transposed in firms' marginal cost as part of their price setting mechanism and finally feed aggregate inflation. Hence, it can be stated that wage inertia level and the efficiency of labor demand-supply matching process have a strong impact on monetary policy transmission mechanism. This is the reason for which the labor market and wage flexibility have been considered key pre-requisites for an optimum currency area. The higher the degree of wage rigidity, the stronger inflation persistence can be. In such a context, optimal policy should deviate from the strict regime of inflation targeting and fully acknowledge the unemployment/inflation trade-off. Thus optimal monetary policy should envisage a mix of inflation targeting and unemployment targeting. 
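One minimal way to picture such a mix is a reaction function that responds to an unemployment gap as well as to an inflation gap. The sketch below is a toy illustration under an assumed neutral rate and assumed reaction weights; it is not a rule proposed in this paper or in the ECB study cited above.

```python
def dual_mandate_rate(inflation, inflation_target,
                      unemployment, natural_unemployment,
                      neutral_nominal_rate=4.0,
                      w_inflation=0.5, w_unemployment=0.5):
    """Toy policy rule mixing inflation and unemployment targeting.

    All inputs are in percent. The 4% neutral nominal rate and the 0.5
    reaction weights are assumptions for illustration only. Unemployment
    above its natural rate lowers the recommended rate, reflecting the
    unemployment/inflation trade-off discussed in the text.
    """
    inflation_gap = inflation - inflation_target
    unemployment_gap = unemployment - natural_unemployment
    return (neutral_nominal_rate
            + w_inflation * inflation_gap
            - w_unemployment * unemployment_gap)

# Rigid labor market example: inflation on target (2%) but unemployment
# 2 points above its natural rate -> the rule recommends a 3% rate.
print(dual_mandate_rate(2.0, 2.0, 9.0, 7.0))
```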
Changes in the Production Function If trend growth of GDP is a key ingredient for monetary policy determination, then volatility and determination of trend growth must be understood as well as possible. Against this background it is worrying that recently volatility of potential growth seems to have increased and its determination seems to have shifted, whereas it is at first sight puzzling why and in which direction. Potential growth is the trend growth of the economy. Actual growth is regarded as the result of this structural growth and the deviation from it due to the business cycle stance. Insight in the level of structural or potential growth of the economy is important, e.g., for monetary policy and to assess the employment situation. This is the more the case because short term economic developments seem to have also become more volatile, less policy driven, and more difficult to explain (Kolodziejak & Gherghinescu, 2005). The trend is caused by underlying factors, which are the determinants of economic growth. These factors are endowments or production inputs on the one hand and their respective productivity on the other hand. In the history of economic thought it can be observed that the interest moves from the first sector of the economy, agriculture, to the second, industry, in the course of the 19th century which also marks the birth of economics as a science. Following the traditional concept of economic growth determination a capital stock that has been built will always result in production as long as labour costs are in accordance with the competitive position or technological position of that capital stock. However, with the preferences of the consumers drifted to the output of the third and fourth sector of the economy, to commercial and non-commercial services, it is doubtful whether a capital centred approach to potential growth determination is adequate. This is sometimes solved by the introduction of human capital in the production function. What it is noticed nowadays is that the production function of the economy changes in response to preference drifts, sectoral changes, and consumption changes. Immigration/inflow of labor can trigger changes in the economic preferences for services that were previously desired, e.g., because of aging, but not possible. Nowadays the service sector plays a major role in our economy. In response to aging preference dynamics towards services may continue and deepen. Such an approach shows that changes in the production function of the economy (i.e., shifts from the capital intensive sector-industry to the labor intensive sector-services) are able to support economic growth even in economies characterized by low accumulation, relatively low savings, and a relatively low capital output ratio (the case of USA or Canada). This is in line with the Rybczynski (1955), a core result of Heckscher-Ohlin trade theory, stating that when a region is open to trade with other regions, changes in regional relative factor supplies can be fully accommodated by changes in regional output without requiring changes in regional factor prices. Conclusions Although historically ignored as significant variable for monetary policy decisions, human resources, under their various facets, can play a crucial part in the formulation and transmission of monetary policy and are definitely crucial for supporting economic growth. 
Several conclusions can be drawn from the present paper regarding the relevance of incorporating human resources and labor dynamics in macroeconomic analyses and, in particular, in monetary analyses.

(1) Labor is an important factor in the production function of many economies. Moreover, labor dynamics among countries can initiate shifts in preferences toward certain sectors (i.e., services) that could not previously be satisfied. This explains how economies with low accumulation, relatively low savings, and a relatively low capital output ratio, but open to immigration flows, have been able to efficiently integrate this labor into the services sector and gain economic growth.

(2) Although the quantity of labor is affected by low birth prospects across most of the developed countries, the quality of human capital has gained importance through various initiatives such as increasing investment in education, as well as in research and development. It has been statistically demonstrated (even though with a time lag) that investments in human resources support economic growth and encourage mobility and flexibility in the labor market as prerequisites for the functioning of optimum monetary areas.

(3) The effects of remittance flows cannot be neglected when formulating monetary and exchange rate policies in the recipient countries, as they are likely to feed real exchange rate appreciation, inflation, and distortions in certain markets.

(4) The level of wage inertia and the efficiency of the labor demand-supply matching process have a strong impact on the monetary policy transmission mechanism. In rigid labor markets, an optimal monetary policy based on inflation targeting should not neglect unemployment targeting.
Cutting without Cursing: A Successful Cancellation Project ABSTRACT The Salisbury University Libraries embarked on a serials and database cancellation project in the 2014–2015 academic year, eventually cutting nearly 20% of journals without causing any faculty protests. Picking up ideas from numerous other libraries, the three-person project task force developed a three-stage process: 1) preparation—gathering data and laying the groundwork for getting feedback; 2) getting feedback from liaisons, faculty, and departments; and 3) making decisions about what to cut and sharing the results. This article details the steps taken and key recommendations for other libraries undertaking similar projects. Introduction Cutting journals and databases is neither popular nor easy, yet sometimes it simply has to be done. That was the situation faced by the Salisbury University Libraries in late 2014. Learning about how we succeeded in cutting costs nearly 20% without arousing the ire of the faculty may prove helpful to other libraries undertaking a similar project. Background Salisbury University (SU) is a highly-ranked public regional comprehensive university on Maryland's Eastern Shore. With 8,090 full-time equivalent (FTE) students, it features strong programs in nursing and other health sciences, social work, education, business, and biology, with a growing number of courses offered online. SU emphasizes undergraduate research and experiential learning, and the 497 FTE faculty are excellent teachers and mentors. While SU offers two doctorates and 14 master's degrees, the graduate students number only 773. The SU Libraries comprise the main library, Blackwell Library; the Edward H. Nabb Research Center for Delmarva History and Culture; and the Curriculum Resource Center. The staff is relatively small for a university this size, 28 full-time staff, including 15 faculty librarians. The Libraries have a robust liaison program, with most faculty librarians serving as liaisons to several academic departments. The liaisons are responsible for instruction, collection development, and research support for their departments. The collections also are relatively small, because Blackwell Library long ago ran out of collection space. The Libraries owned 294,282 books and maintained 1,016 journal subscriptions at the beginning of the 2014-2015 academic year. However, the Libraries' participation in the University System of Maryland and Affiliated Institutions consortium provides our users with quick access to much bigger collections. Like many universities, SU's budget has suffered since the 2008 recession, and consequently, the Libraries' budget also has suffered. For several years, the Libraries cut other areas of spending even as spending on journals and databases increased due to spiraling prices and faculty requests. Nonetheless, in the 2014-2015 academic year, it became apparent that we could no longer avoid cutting journal and database spending. The dean of libraries, the director of collection management, and the scholarly communications librarian formed a task force to carry out a cancellation project. We established a goal of 20% or between $150-175,000. The rest of this article will detail how we carried out the project and the lessons we learned. Literature review Librarians have been recounting their experiences with serials cancellation projects since at least the 1980s, sharing their processes and offering warnings and advice to others. 
The context has changed quite a bit with the transition to electronic journals and more robust usage statistics, but many of the basic principles remain the same. This overview will focus on the articles we found helpful rather than summarizing all the articles that are available. In 1992, Paul Metz, principal bibliographer at the University Libraries at Virginia Tech, published an excellent piece, "Thirteen Steps to Avoiding Bad Luck in a Serials Cancellation Project." He laid out the many steps that Virginia Tech took in cutting more than 1,250 serials. While some of what he has to say is dated, the steps themselves make a great deal of sense. One idea we picked up from him was to have librarians nominate titles for cancellation and have faculty react rather than asking the faculty to nominate titles, which "invites delays, nominations of dead titles or titles that don't add up to much money, and bias on the part of the self-nominated individuals willing to take the lead." Another key suggestion he made was to nominate more titles than needed, so that some titles can be magnanimously spared. 1 In 1993, an article by Suzanne Wise, reference librarian at Appalachian State University, was helpful in that her university is very similar to Salisbury, a regional comprehensive with a relatively small staff. One bit of advice she gave that struck a chord with us was to "develop a list of service priorities and follow them in decision-making. These priorities should conform to the mission of your institution. In our case, occasional use by faculty was not deemed as important a factor as frequent use by undergraduate students." 2 In 2000, Janice Jaguszewski and Laura Probst, collections coordinators at the University of Minnesota, detailed how electronic resources affected serial cancellations and remote storage decisions. While their focus on large academic research libraries had little relevance for us, their discussion of the complexities of evaluating electronic resources resonated with our situation. Among the criteria they cited are competition among vendors, consortial arrangements, and archiving options. These criteria informed both our data collection and our decision making, with competition among vendors being especially important for us in regard to databases and consortial arrangements and archiving being especially important in regard to journals. 3 An article published in 2005 proved especially important for our efforts at getting faculty input. Ronadin Carey, Stephen Elfstrand, and Renee Hijleh, all of the University of Wisconsin(UW)-Eau Claire, described their library's strategy for involving the faculty and thereby gaining acceptance for a 15% cut in serials spending. While we did not attempt to use the CORE® Project Management Method, the formal project management system that UW-Eau Claire used, we did find the steps they took to get input useful. One important point they made was that it was important to not just collect data but present it to the faculty in a useful way, or, as they put it, "The task was to transform data into information." 4 In 2010, Ryan Weir, serials and electronic resources librarian at Murray State University, laid out a planning process for serials cancellation projects at small-and medium-sized academic libraries. He emphasized the importance of "developing a strategic plan to address the budget shortfalls and having an effective communication plan" both for internal communication within the library and for external communication with stakeholders. 
5 Not officially part of the literature but equally important in influencing how we went about our project were the websites other academic libraries established for their serials cancellation projects. From the University of North Carolina-Wilmington's 2009 project, we picked up a simple rating system for individual faculty to use: essential, useful, minimally useful. We also found the schedule and frequently asked questions (FAQs) helpful. 6 Colgate University's 2009 project website included a copy of the letter liaisons used to share proposed cancellations with department chairs; we liked the idea of having departments as a whole rank the proposed cancellations rather than relying simply on individual faculty feedback. 7 From N.C. State's 2014 project, we adopted the idea of having an online web form for faculty feedback. 8 The University of Nevada-Las Vegas Libraries created an excellent website for their 2014 project that gave us ideas for what to include in our own website. Process The literature review helped us develop an outline for how to carry out the journal/database project. We filled in some details as we proceeded. The project basically fell into three stages: (1) Preparation, including both gathering data about our journals and databases and laying the groundwork for getting feedback. Preparation: Data sources and collection Historically, the serials librarian was responsible for the maintenance of the serials collection and the collection of serials usage and a research and instructional services librarian was responsible for database renewals and the collection of database usage statistics. Simultaneous resignation and retirement of the individuals in these positions paved the way for merging these functions into a new position: serials and electronic resources librarian. Unfortunately, all hiring at Salisbury University was frozen when the need for the cancellation project arose. Therefore, collection of usage statistics and organization of the data fell to the director of collection management, assisted by the scholarly communications librarian. The dean considered the scholarly communications librarian a good choice to help with this cancellation project in the absence of a serials and electronic resources librarian because of her knowledge of the ever-increasing costs of journal subscriptions. Also, as a librarian liaison herself, she served as a point of contact for the other liaisons to answer questions about the project overall, how the data was collected, and how the liaisons were to assist their departments in reaching the goal of cutting 20% of the journal budget. The first step in data collection involved changing the administrative contact for databases and e-journal platforms from the previous individuals to the director of collection management. As she accomplished this, she also gathered usage statistics. For e-journal analysis, she found that EBSCO, the Libraries' subscription agent, provides excellent tools for data collection. She used EBSCONet and data stored in EBSCO's Usage Consolidation tool in addition to data on individual platform sites. For print journals, she relied on in-house usage statistics that are collected regularly and recorded on the Libraries' intranet. For databases, she gathered statistics from OCLC Online Computer Library Center, Inc. (OCLC), Lyrasis, the EBSCO Administration module, SFX Collection Management, and individual platform sites. 
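One way to picture this consolidation step is as a small script that merges per-platform usage exports with subscription costs and derives a cost-per-search figure. The sketch below is hypothetical: the file names, column names, and the use of pandas are assumptions for illustration, not the Libraries' actual workflow, which relied on EBSCO tools and locally maintained spreadsheets.

```python
import pandas as pd

# Hypothetical per-platform usage exports; real vendor exports differ in
# layout and terminology, which is exactly the variability discussed below.
usage = pd.concat(
    [pd.read_csv(f) for f in ["ebsco_usage.csv", "oclc_usage.csv",
                              "platform_usage.csv"]],
    ignore_index=True,
)

# Assumed columns: database, searches, sessions, full_text, abstracts.
usage = (usage
         .groupby("database", as_index=False)
         [["searches", "sessions", "full_text", "abstracts"]]
         .sum())

# Assumed columns: database, provider, cost.
costs = pd.read_csv("database_costs.csv")

# Build a master sheet and compute cost per search for comparison.
master = costs.merge(usage, on="database", how="left")
master["cost_per_search"] = master["cost"] / master["searches"]

# Sort so the most expensive databases per search float to the top,
# then write the result out as a shareable spreadsheet.
master.sort_values("cost_per_search", ascending=False).to_excel(
    "database_master.xlsx", index=False)
```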
The data collected from this wide variety of sites were, predictably, widely variable: (1) Some sites were Counting Online Usage of Networked Electronic Resources (COUNTER) compliant, although not all at the same level. (2) Terminology lacked consistency across sites, but most sites with robust statistics provided a glossary defining what it was they were counting. (3) Terms were in no way uniform even across sites supposedly utilizing the same standard. (4) Some sites provided data but did not comply with any industry standard. (5) Some platforms/databases did not provide online access to data, requiring contact with a customer service representative. In order for the study to be useful, the data had to be comprehensible to non-technical librarians and faculty, as well as truly comparative. Because Salisbury uses Microsoft products as standard software, we chose to create an Excel spreadsheet to share with the faculty. The director of collection management met with the technology librarian to discuss what data elements were the most useful to collect and compare. For databases, the data elements they chose were Searches/ Sessions/Full-Text/Abstracts. The director of collection management created a spreadsheet of all non-consortial subscription databases to record these data elements for FY2014, covering July 1, 2013-June 30, 2014. A Notes column contained information for specific databases, such as "Abstracts = TOC Views," "Events," and "Only searches reported." This spreadsheet also included columns for Title, Package (if applicable), Provider, Payment Details, Consortium, Cost, and Cost per Search. While a comparison of multiple years of usage would have yielded more reliable results, the time-frame of the study did not allow this comparison. Data elements for the journal master spreadsheet were simpler, being divided into print usage and online usage. Though simpler, these data were less reliable. While print usage statistics went back many years, we relied on only the two most recent calendar years for comparison with online usage. We recognized that print usage statistics for journals are always highly suspect. Despite signs asking patrons not to re-shelve print journals they have used, very few patrons leave their journals on the re-shelving book trucks. Items left on the re-shelving trucks may be moved before being counted and re-shelved by serials staff. Patrons sometimes deliberately mis-shelve journals they are using to ensure access at a later time or date. Unfortunately, despite its problems, these were the only data we had for print usage. The director of collection management compiled and entered these data in the master spreadsheet. For electronic journals, EBSCO Usage Consolidation provided online usage data for the past two calendar years. Since Salisbury had not activated all e-journals in Usage Consolidation, we also relied on individual platform data. The scholarly communications librarian handled collecting and entering this data. The journal master spreadsheet contained these data elements: Preparation: Laying the groundwork for getting feedback Even as the director of collection management and the scholarly communications librarian collected data, the dean of libraries began preparations for getting feedback. She met with the Deans' Council, consisting of all academic deans and the provost, to solicit ideas for how best to communicate with the faculty. 
She called a meeting of the Library Committee, a Faculty Senate committee with representatives from each of SU's four schools, to inform them about the project and hear their thoughts about what information the faculty would need and how best to solicit feedback from the faculty. As a result of these meetings and the experiences of the data collectors, the project task force drew up a timeline and made some key decisions: (1) We would only ask librarian liaisons and not academic faculty to provide feedback on databases. We thought it would be too complex to explain what page views, table of contents (TOC) views, and so on meant to people not familiar with library jargon, especially since the data were not entirely consistent from database to database. (2) We would ask the librarian liaisons to narrow the lists of journals for each department to only those titles we would seriously consider cutting, aiming for about 25% of the costs overall. We only were intending to cut 20% from the budget, but we wanted to have some flexibility while not including so many titles as to set off panic. We knew there were some titles that were required for accreditation, received heavy use, or were the basis for class assignments that we simply were not going to cut. There was no reason to get the faculty riled up about those being potentially cut when in reality that was not going to happen. (3) We would provide individual faculty the opportunity to comment on not only their own department's journals but any journals that were being considered for cancellation. Given the interdisciplinary nature of much research and teaching at SU, this was important. (4) To ensure that we understood each department's priorities, not simply those of individual faculty, we would ask each department chair to provide a departmental response, gathering that information by whatever means-a departmental meeting, a poll, or whatever-the chair considered appropriate. (5) We also decided that we would call this a journal review project rather than a serials cancellation project. First, it was clear from the meeting with department chairs that faculty did not really understand what we meant by serials, but they understood what journals meant. Second, calling it a review made it seem less painful. It also allowed the possibility that if faculty wanted to cancel additional titles beyond what we were asking them to cut, they could subscribe to new titles. While no one took advantage of this opportunity, calling the project a review rather than a cancellation sounds better psychologically. The dean of libraries also began creating materials to share with liaisons and faculty. She developed a LibGuide, available at http://libraryguides.salisbury.edu/journalreview, which included a detailed description of the process and timeline for the project, a memo for faculty, information on why some of the journals needed to be cut, Frequently Asked Questions, and what the future looked like for the SU Libraries despite these current cuts. This guided the liaisons through the process and also served as a valuable resource that they could refer faculty members to in later stages to help explain the history of the project. Two of the most important pieces the dean created were graphs. 
The first compared the growth in serials prices for International Scientific Indexing (ISI) titles, as published in Library Journal each year, to the Consumer Price Index and the Higher Education Price Index from 1994 to 2014, clearly demonstrating how serials price increases far outpaced inflation. See Figure 1. The second showed how journal and database spending had grabbed an ever-larger share of the non-personnel budget from 2006 to 2014. See Figure 2. These visual representations of the data clarified the financial position of the Libraries and were very effective tools in helping faculty understand and appreciate why cuts were necessary. Getting feedback: Liaison input Liaison librarians were to provide input relating to both databases and journals. For databases, they were to consider all the databases listed in the database master spreadsheet and provide comments and suggestions about which ones to cancel or why specific databases needed to be kept despite high cost per use. Five liaisons provided feedback on the databases, and there fortunately was widespread agreement about which databases could be cut without hurting our users. In relation to journals, the liaisons were responsible for deciding which journals in their subject areas should be added to the review list that would be shared with faculty. Once the director of collection management and the scholarly communications librarian finished entering data in the journal master spreadsheet, they considered what format would best help the liaisons in determining which titles to nominate for review. They decided that in addition to providing the master spreadsheet, the scholarly communications librarian would create subject-specific spreadsheets based on fund codes. These sheets would have a built-in formula so that for each title a liaison selected for review, the spreadsheet would calculate how close that selection brought them to the reduction goal for that subject. To help to determine which titles to nominate, liaisons considered some core concerns such as: was the title a requirement for accreditation; was the title available in a database, and, if so, the liaison was to note the source and dates of coverage; and how important was the journal considered in the discipline. Clearly, if a title was a requirement for accreditation, it must be kept, regardless of the cost or usage level. If a title was available in a database, this may have made it a more attractive option to nominate, as the university would not be losing complete access to that journal's contents, but at worst only have a delay in accessing the most recent material. As to the importance of the journal, in some disciplines this may be roughly determined by the journal's impact factor; another way of gauging importance was if it was used heavily in a department's research and/or teaching. Some liaisons relied primarily on the usage statistics and costs to narrow down the titles. Others involved faculty members in discussions at this point and this faculty input played a larger role in their decisions. As the liaisons have the most department-specific knowledge on the Libraries staff, they were best placed to come up with the first "short list" in the way that was most appropriate for their disciplines. In some cases, the task force decided that a department should not have to nominate any titles at all. For example, the Physics Department only had four titles assigned to its fund code, and all of them were heavily used. 
Also, smaller departments such as Theatre & Dance and Conflict Resolution had so few titles supporting them that it did not seem fitting to reduce them further. Getting feedback: Faculty and departmental input Once the liaisons nominated titles for review, the task force members discussed the best way to share this information with faculty. They agreed that in this case, less was more. Where some of the statistics in the liaisons' spreadsheet were useful in making a first pass, the level of detail was bound to create potential roadblocks in getting feedback quickly from departments and faculty. Therefore, the task force decided that the spreadsheets shared with faculty would include only Title, Print ISSN, Online ISSN, Fund Code, Cost, and Database Coverage. If a faculty member wanted to see more information, the liaison could then share the more detailed spreadsheet with them in a one-on-one meeting. After the task force had decided on the process and the liaisons had narrowed the list of candidates for review, we were ready to start soliciting faculty feedback. The dean met with all academic department chairs at their beginning-of-spring-semester meeting to inform them about what to expect. She also did a brief presentation to the Faculty Senate, where the graphs made a huge impact. The Faculty Senate president, in fact, chose to send them out to the entire faculty. Liaisons then sent out a memo from the dean of libraries explaining the project to their department chairs along with the concise, subject-specific list of journals for review. The chairs were free to determine the best way to collect feedback from their faculty at this level. These lists were also made available in MyClasses, the university's learning management system, and liaisons encouraged individual faculty, in person and via e-mail, to view the list for their department as well as the lists for any other departments whose journals they used regularly and then provide feedback. The dean followed up with an e-mail to all faculty about a week after the liaisons had contacted their departments and sent a reminder a few days before the deadline. MyClasses included a link to an online survey that individual faculty could fill out. The survey allowed respondents to rank each journal as essential, useful, or minimally useful and asked them to explain their reasons for any ranking of essential. To allow for interdisciplinary feedback, respondents could choose to rank journals from other departments in addition to their own, an option many took. Making the decision and sharing results Once the survey closed, the dean and the director of collection management met to make the final decision about what databases and journals to cancel. The liaisons who responded had achieved some consensus about which databases should be cut, so that decision was easy. We cut three databases that the liaisons recommended but kept one where we share the cost with another campus unit. In considering what journals to cancel, faculty feedback was important. Thirteen departments provided formal written feedback ranking the proposed journals, and others provided informal feedback to their liaisons. Some ninety-six faculty filled out the online survey. The online survey included separate questions for each title being considered for cancellation; consequently, the spreadsheet of responses was quite cumbersome, with more than two hundred columns. 
The dean consolidated the responses for each journal into one note field per title, incorporating both the rankings and any comments made about the title. The faculty comments about what journals were or were not valuable and how they used journals proved especially valuable. The dean and the director of collection management reviewed each title, looking at usage, cost, coverage in databases, and faculty responses. Faculty responses were the single most important factor influencing the decision to keep or cancel. The criteria used included: (1) If a department or multiple faculty rated a journal as essential, we kept it. (2) If only one faculty member rated a journal as essential, we looked at the explanation they provided for why it was essential. If the faculty member provided a good explanation, such as using it for an assignment or being required for accreditation, we kept it. If the explanation was not persuasive, we looked at how other faculty rated it; if anyone else rated it useful, we kept it. (3) If the journal dealt with diversity in some way, we tended to keep it, unless the faculty rated it minimally useful. Resources on diverse populations and cultures are not a collection strength at Salisbury, and we were leery of weakening them. (4) All other journals were cancelled. In all, we cancelled 146 journals, along with the three databases, saving a total of $116,500. This was less than our goal of $150,000, but the task force believed that any additional cancellations would hurt our users. In addition, the president of the university took notice of the journal review project and promised to look at securing some additional funding for the SU Libraries. The dean of libraries shared the cancellation list first with the Libraries staff and then the deans and vice presidents. She then sent an e-mail with the cancellation list to all faculty and staff, thanking them for their input. Several faculty actually contacted us to thank us for the opportunity to provide feedback, and no faculty members have complained-at least not to us-about the cancellations. Lessons learned? (1) Keep it simple, especially for the faculty. Do not use library jargon while talking with them about the review. Make it easy for them to provide their input. (2) Have a good communication plan, both internally and externally. Cancelling journals can be upsetting, but be honest about what you are doing, why it is necessary, and how decisions will be made, and then share this information widely. That can go a long way toward calming fears. (3) Visual aids are very effective in helping people understand why cuts are necessary. When people saw the graph illustrating the increases in serials prices far above the rate of inflation, they stopped questioning why the Libraries needed to reduce the number of subscriptions. (4) Get feedback from departments as well as individuals. Having feedback from departments about what was important to them as a unit tempered instances where one or two faculty members rated highly journals that were important to them personally but that might not have been important at all for supporting the department's teaching. Also, several individual faculty members rated every journal in their department as essential, so having the departmental feedback gave us a better gauge. (5) Involve liaison librarians. The involvement of the liaison librarians allowed us to customize our approach for each department, which was valuable given the potential sensitivity of the project. 
It also allowed us to provide the appropriate amount of data for each department, as determined by the liaison: we shared more data with some departments, but in cases where too much data would only serve to distract or go unnoticed, we shared less. The liaison librarians also benefited from this collaboration in several ways. The detailed exposure to the high price of this specific type of material helped to generate a deeper appreciation of some of the challenges the serials staff faces. Many of the liaisons were aware of the "serials crisis" but when confronted with the costs of journals for their departments and given the task to recommend which ones should first be looked at for cancellation, the crisis became real to them in a very tangible way. Also, they now understood the frustration administrators and serials staff feel with titles that, despite high cost and low usage (and in many cases, no usage at all in the past two years), could not be cancelled due to bundle deals or other kinds of contracts. Participation in this project also helped the liaisons to better understand their departments' needs and behaviors with regard to journals and provided the opportunity to educate faculty in their departments about the high costs involved in providing journal materials. The first will serve us well if future budgets allow us to expand our collections. The second is important for the scholarly communications landscape as a whole. As faculty members are the authors of journal content, they have a vital role in the current publishing process and great influence on whether or not the "serials crisis" will continue. Serials cancellation projects are, unfortunately, a reality for many libraries and likely to continue to be so for years to come. With proper planning and communication, however, they do not have to be a miserable experience and can even have some benefits beyond improving the library's finances.
American Mobilization and the Justice Cascade

first, a consistent and widespread pattern of atrocities: killings, rapes, burning of villages committed by Jingaweit [sic] and government forces against non-Arab villagers; second, three-fourths of those interviewed reported that the Sudanese military forces were involved in the attacks; third, villages often experienced multiple attacks over a prolonged period before they were destroyed by burning, shelling or bombing, making it impossible for the villagers to return to their villages. This was a coordinated effort, not just random violence. When we reviewed the evidence, . . . I concluded that genocide has been committed in Darfur and that the Government of Sudan and the Jingaweit bear responsibility. . . . We believe the evidence corroborates the specific intent of the perpetrators to destroy 'a group in whole and in part,' the words of the [Genocide] Convention. 6
(Secretary of State Colin Powell, testimony before the Senate Foreign Relations Committee, September 9, 2004)

In addition to civil society groups, and often tightly interwoven with them, state actors contributed to raising awareness of the mass violence in Darfur and contributed to its representation as human rights crimes. One interviewee from a large European country had worked for his foreign ministry's human rights division and represented his country on the ICC's Assembly of States during the period when the UN Security Council referred the Darfur situation to the court. A lawyer by training, he strongly stressed the primacy of human rights concerns ahead of other goals: "You need to give them justice, and once they have the feel that justice, more or less, is taken care of, then I think you can create within such a society a willingness to overcome postconflict and enter a new phase of peace building."

This chapter, on state actors and their linkages to civil society in the human rights field, highlights the case of the United States, which among the countries considered here is the most pronounced supporter of a criminalizing response and a strong proponent of the application of the genocide label. After a brief review of the US Save Darfur campaign, a massive mobilization of civil society organizations, I look at American media representations (outliers in international comparison) and discuss government responses. Those responses show how a state-civil society amalgam emerged and made itself unmistakably heard with its intense pursuit of criminalizing definitions of the violence in Darfur.

The American story is particularly interesting as the United States has never ratified the Rome Statute and generally keeps a critical distance from the ICC. William Schabas (2004) in fact writes about "United States hostility to the International Criminal Court" (see also Deitelhoff 2009). Specifically with regard to Darfur, the United States initially displayed considerable resistance against a referral of the Darfur situation to the ICC. Yet, in a surprising and quite radical turn, it eventually embraced a criminalizing strategy and abstained from the UNSC vote on Resolution 1593, thereby allowing the case of Darfur to be referred to the ICC. According to the Security Council minutes: Anne Woods Patterson (United States) said her country strongly supported bringing to justice those responsible for the crimes and atrocities that had occurred in Darfur and ending the climate of impunity there. Violators of international humanitarian law and human rights law must be held accountable. Justice must be served in Darfur.
By adopting today's resolution, the international community had established an accountability mechanism for the perpetrators of crimes and atrocities in Darfur. The resolution would refer the situation in Darfur to the International Criminal Court (ICC) for investigation and prosecution. While the United States believed that a better mechanism would have been a hybrid tribunal in Africa, it was important that the international community spoke with one voice in order to help promote effective accountability. 1 I ask why the US government eventually aligned with a strong civil society movement, despite its refusal to ratify the Rome Statute. In the end, civil society, the federal government, and media alike were international outliers in their determination to articulate the story of Darfur as one of criminal-in fact, genocidal-violence. A closer look at representations that emerged from these American discourses sheds additional light on the nation-specific conditions that color representations of mass atrocities. They include the peculiarities of US civil society, the organization of government in the United States, and its media market. Based on interviews and media data, we shall also see, as we did in chapter 2, that the institutional logic of law still colors representations of mass violence at the periphery of the legal field, albeit in a weakened form compared to that applied at the center. Toward the end of this chapter, in a brief excursus, I examine how the US section of an international rights-based NGO, again Amnesty International, maneuvers within a highly mobilized civil society environment, dominated by Save Darfur, with which it disagreed on a number of positions. What organizational and linguistic strategies did it use to act effectively in this context? the save darfur MoveMent in the united states The United States Holocaust Memorial Museum took the lead in the American civil society movement when, in January 2004, it issued a genocide alert on the situation in Darfur. The first, widely publicized media pronouncements articulating the plight of the people of Darfur for a broad public soon followed. Eric Reeves, an English professor at Smith College and one of the leading individual problem entrepreneurs on Darfur, had his famous, trendsetting op-ed published in the Washington Post on February 24, 2004, following rejections of previous submissions. One month later, on March 24, the New York Times followed with an op-ed by Nicholas Kristof, the first in a series of his contributions on Darfur. A wave of other opinion pieces followed. Deborah Murphy (2007), in counting editorial responses to Darfur by select (prominent) US media in 2004, identifies twelve in April, eight in May, nine in June, sixteen in July, fifteen in August, and nineteen in September. Following the USHMM's January 2004 genocide alert, the first op-ed pieces, and UN secretary-general Kofi Annan's April 2004 speech on the occasion of the tenth anniversary of the Rwandan genocide, a massive wave of civil society activism unfolded in the United States. It partly preceded, but also accompanied and followed, formal interventions by the UN and the ICC. Most noteworthy, the period between June 2004 and July 2005 witnessed the founding of the Save Darfur Coalition, which eventually brought together almost two hundred organizational members under its umbrella. 
Prominent among the great variety of groups were Christian evangelical groups, including Christian Solidarity International (CSI), that represented an important constituent bloc for then-president George W. Bush. These conservative groups and churches formed a rare coalition with liberal organizations such as the American Jewish World Service (AJWS); various specialized organizations, including the USHMM and Africa Action, a Washington, DC-based NGO; and mainstream human rights organizations such as Amnesty-USA. Preceding and advancing the constitution of the Save Darfur coalition, the USHMM organized a July 2004 conference at the City University of New York. There Holocaust survivor and Nobel Peace Prize laureate Eli Wiesel delivered a forceful speech in which he linked the violence in Darfur to the Rwandan genocide. The wave of activism was further spurred by the release of the film Hotel Rwanda in September 2004, which by depicting the Rwandan genocide in Hollywood fashion, helped explicate it for a broad public. About one year after the second peak of the violence in Darfur, in April 2005, Harvard's John F. Kennedy School of Public Policy hosted a largely student-led event on divestment from Sudan. One year later some fifty thousand people gathered on the National Mall in Washington, DC, for an impressive demonstration under the title "Save Darfur: Rally to Stop Genocide." Speakers included Barack Obama, Elie Wiesel, Nancy Pelosi, and celebrities such as George Clooney. Speakers and demonstrators demanded a UN peacekeeping force, better humanitarian access to refugees, adhesion to existing treaties and cease-fire agreements, and a commitment to a lasting peace agreement in the Abuja peace talks. Importantly, they also called for justice to be delivered (see figures 6 and 7). Along the way, activists sought to exert direct influence on the political process, as when Save Darfur leaders met with Deputy Secretary of State Robert Zoellick and organized a "National Call-in Day" on Darfur. And civil society organizations found strong resonance, and reinforcement, in the way American media covered Darfur. darfur in us Media The New York Times and the Wall Street Journal are among America's most prestigious print media; both are mainstream, though the former occupies the left-liberal and the latter the conservative end of the political spectrum. Neither the presidential administration nor Congress would be ignorant of positions taken by these papers. While a more detailed analysis of media is presented in chapters 8 and 9, I here highlight patterns that speak to the special role that US media played, in comparison to media elsewhere in the world, to generate a criminalizing account of the situation in Darfur. Numerous articles and commentaries appeared between 2003 and 2010 in both the New York Times and the Wall Street Journal. They acknowledged the suffering in Darfur, contributed to framing the violence, and built bridges to past mass atrocities (for details on analytic strategies see the introduction). This applies to all media documents, somewhat to news articles or reports, and decidedly to opinion pieces. 2 Consider the reporting of killings (analyzed separately from natural deaths), of rapes, and of displacements in Darfur. Figure 8 (A-C) shows that the likelihood that American media reports informed readers of killings and, especially, rapes was substantially higher than that for media reports from outside the United States. The same applies, even more strongly, to opinion pieces. 
Only for displacements do we find minor differences, partially even a reversal of the pattern observed for the other types of victimization. This should not be surprising as addressing displacements is more in line with a humanitarian emergency and aid frame, as I show in detail in chapter 5. 3

Framing

Framing, more than acknowledgment, is an interpretive endeavor. Where we find substantial variation in terms of acknowledgment of victimization and suffering, we might expect a wider range in the framing of violence. As in the interviews I conducted, the coding scheme for the analysis of media reports asked about different frames, the presence or absence of which in the articles was to be noted. Frames included rebellion or insurgency, humanitarian emergency, civil war, and criminal violence. Here I report only on the last-named frame as I am concerned with the criminalizing discourse on Darfur. Figure 9.A shows that US media used the crime frame more often than those of other countries. Yet the difference is remarkable only for opinion pieces. There, where normative and value-based statements are expected, almost 60 percent of editorialists in all papers used the crime frame, whereas about three-quarters of opinion pieces in American media did so. The difference becomes more pronounced for the use of the genocide frame (figure 9.B). While US news reports cited the genocide frame more frequently, the difference more than doubled for opinion pieces. 4

Bridging

In addition to frame selection, another way of making sense of news events that we otherwise cannot yet interpret is the strategy of bridging. Journalists cite past occurrences on which interpretive clarity has been reached and use them to shed light on current-day events. In the context of genocide, the most powerful reference is to the Holocaust. Figure 9.C shows the percentage of news articles that built analogical bridges from the Holocaust to the violence in Darfur. The introduction offers an especially powerful example from the op-ed pieces of renowned New York Times journalist Nicholas Kristof, who used terms such as Lebensraum and final solution. The numbers presented here show that the likelihood that journalists would cite or make such comparisons was more than one-third higher in American news reports than in those from other countries and more than twice as high in opinion pieces. 5

Here we see a strong affinity between frames chosen by American movements focused on Darfur and representations in American media. This linkage between civil society movements and media representations is likely to be enhanced by the relative competitiveness of the US media market (Benson 2013). Under such conditions media organizations keep their eyes on and ears attuned to sentiments of those publics they target as customers. Irrespective of such causal issues, however, data show that American civil society and media were major promoters within the international community of criminalizing the violence in Darfur.

United States Government

Given the strength of the Save Darfur movement in the United States, and the substantial support social movements received from media reporting, the US government found itself in a peculiar position within the international community. On the one hand, it had declined to ratify the Rome Statute and in fact fought the creation of the ICC; to this extent, its position to enhance criminal justice intervention against Darfuri actors was weakened.
On the other hand, the United States tends to embrace criminalizing frames, domestically and in cases of foreign atrocities, and it was under massive civil society pressure to do so. How did it respond? Different branches of the US government were certainly receptive to the Darfur-focused movement, which included groups in American society ranging from very conservative to very liberal. The movement was predominantly white, but included passionate involvement of African Americans who identified with those seen as victims of the violence: black Africans. It was thus no surprise when, on June 24, 2004, Representative Donald Payne, Democrat and leading member of the Congressional Black Caucus, joined forces with conservative Republican senator Sam Brownback to introduce a resolution into their respective chambers of Congress. Barely a month later, on July 22, 2004, the House and Senate simultaneously passed a resolution declaring that genocide was occurring in Darfur.

In the meantime, on June 30, 2004, Secretary of State Colin Powell returned to Washington from Khartoum, declaring that he did not have the information needed to decide whether the violence constituted genocide. Simultaneously, however, he commissioned a survey to be conducted among Darfuri refugees in camps in Chad, just beyond the border of Sudan and Darfur, to gather appropriate information. A basic analysis of this "Atrocities Documentation Survey," with 1,136 respondents, helped change Powell's position. In a famous hearing before the Senate Foreign Relations Committee, on September 9, 2004, he declared that responses to the survey indicated: first, a consistent and widespread pattern of atrocities: killings, rapes, burning of villages committed by Jingaweit [sic] and government forces against non-Arab villagers; second, three-fourths of those interviewed reported that the Sudanese military forces were involved in the attacks; third, villages often experienced multiple attacks over a prolonged period before they were destroyed by burning, shelling or bombing, making it impossible for the villagers to return to their villages. This was a coordinated effort, not just random violence. When we reviewed the evidence, . . . I concluded that genocide has been committed in Darfur and that the Government of Sudan and the Jingaweit bear responsibility. . . . We believe the evidence corroborates the specific intent of the perpetrators to destroy 'a group in whole and in part,' the words of the [Genocide] Convention. 6

A few weeks after Secretary Powell's testimony, President Bush himself declared, in a speech to the UN General Assembly, that genocide was part of the pattern of violence in Darfur. The US government's rhetoric both followed and promoted the American movement that pushed for intervention in Darfur, for labeling the violence genocide, and for criminal prosecution of those responsible. It thus became a player in the field that placed Darfur in the justice cascade. Again, this is remarkable given the US stance regarding the Rome Statute, on which the ICC is based, the very court to which the UNSC referred the Darfur case. The United States allowed the referral to go forward, despite its objections to the ICC, by abstaining from the vote (together with Algeria, Brazil, and China). Actions of the US government were considerably more cautious, however, than its rhetoric.
They included, at the UN, sponsorship of the resolution that created the Commission of Inquiry; support, on August 31, 2006, for a new UN peacekeeping force for Darfur; and-domestically-President Bush's signing into law the Sudan Accountability and Divestment Act on December 31, 2007. This law authorizes local and state governments to divest from Sudan, and excludes companies from federal contracts that operate in Sudan's military, minerals, and oil sectors. 7

Among the countries I examined, the society-government amalgam in the United States turns out, in cross-national comparison, to have been the strongest force for promoting a crime-focused representation of the Darfur conflict. Specifically, the American narrative privileged the most dramatic depiction of the violence, and its characterization as genocidal, much more than civil societies or governments did in other countries. Three questions arise. Why this forceful amalgam in the case of the United States? Why such a strong movement specifically concerning Darfur? And why did strong representation not translate, in this case, into similarly forceful government action? While I return to country-specific patterns of foreign policy and diplomacy in detail in chapter 7, a brief paragraph on each of these questions is in order here.

First, reasons for the close correspondence between civil society and government rhetoric lie in the nature of American institutions. The boundary between state and society is particularly porous in the United States (Bendix [1949] 1974; Gorski 2003; Roth 1987; Rueschemeyer 1973; Kalberg 2014; Savelsberg and King 2005). Candidates for legislative office are selected via popular vote in primary elections; the head of the executive branch is elected in a general election; and even many officeholders in the judiciary branch are elected. As a consequence, wherever strong mobilization occurs among civil society groups, especially among constituents of the current administration, the administration and the Congress are likely to be attentive to their demands. And exactly this situation occurred in the case of Darfur. The role of media (as a branch of civil society) in the United States is also exceptional. Journalism scholarship applies the term media-politics complex to the US, alluding to especially close ties between media and politics; these scholars stress that "the experiences of other countries have been significantly different from the experience of the United States" (Mazzoleni and Schulz 1999:258). In addition, news media are driven more strongly by competitive pressures in the US than elsewhere (Benson 2013). Consequently, they seek alignment with market forces and target groups. A strong civil society movement, encompassing several sectors of society and including a diverse ideological spectrum, is thus likely to leave its traces in media reporting-and especially media commentary-and government actors had better listen up or pay a political price.

Second, the strong American mobilization specifically in the Darfur case is remarkable. Such a response can never be taken for granted when genocide or other mass atrocities occur (Power 2002). In this particular case, however, it resulted from a combination of forces. First among them was the strong representation issuing from specific carrier groups, the crucial contributors to national patterns of knowledge formation to which Max Weber (2009) and Karl Mannheim (1952) alert us in their classic works (see also Kalberg 1994).
In the American Darfur mobilization, influential carrier groups included, first, conservative evangelical Christians, a highly mobilized and well-represented constituency for President Bush. Evangelicals had been most active in missionary work in the southern part of Sudan (today South Sudan) when they learned about mass violence in Darfur. When the violence was initially misrepresented as perpetrated by Arabs against Christians, these religious groups spoke up, and the Bush administration listened. Second, once the specter of genocide was raised, Jewish groups became engaged in the cause of Darfur. The USHMM and the AJWS played crucial roles. Further, once victims of the conflict were identified as black, African Americans and the Congressional Black Caucus mobilized. Finally, as public representations now depicted "Arabs" or "Muslims" as perpetrators, it was easy for broad segments of post-September 11 American society with anti-Arab or anti-Muslim sentiments to sympathize with the message of the Save Darfur movement.

Such mobilization of carrier groups on behalf of Darfur interacted with particular cultural features of US society: a preference for black-and-white depictions of conflicts and an associated punitive orientation toward perpetrators (Whitman 2005), a savior identity in world affairs (Savelsberg and King 2011), and a dominant progressive narrative (Alexander 2004a). Thus, the availability of mobilized, well-organized carrier groups and a conglomerate of cultural features (explored in previous scholarship) help explain the amalgam of forceful state-society representations of mass violence in the case of Darfur as we observed it for the United States.

Third, there were multiple reasons why the US government, despite intense American rhetoric, did not more aggressively pursue the case of Darfur in its actions. These factors include, first, the skepticism toward military engagement abroad that began to grow among the American public after the costly and much debated interventions in Afghanistan and especially Iraq. Government actors were also concerned with the country's increasingly thinly stretched military capacities. In addition, the US government sought cooperation from the Sudanese government in its fight against al Qaida terrorism. To secure such cooperation, it was even willing to temporarily downgrade its rhetoric and lower its estimates of the death toll in Darfur, as Hagan and Rymond-Richmond (2008) show. The American administration had also been a strong force in the Comprehensive Peace Agreement between North and South Sudan, and many diplomats likely saw cooperation on the part of the al-Bashir regime as a necessary condition for its implementation. Finally, social movements can at times be easily pacified by symbolic government actions, such as those the US administration and Congress delivered.

Excursus: Amnesty and Save Darfur - Strategies of Global Actors in National Contexts

Within the massive Save Darfur movement, Amnesty-USA had to find its place without disconnecting from the principles of the international organization, its many other national sections, and its headquarters in London. My interview with an American Amnesty activist, volunteer, and coordinator of the US Darfur campaign revealed organizational and linguistic strategies that helped the national section navigate between its international obligations and its domestic environment: Amnesty International wanted a Darfur coordinator. . . .
I volunteered to do this, but I recognized that there was a lot more with this than report to the group what Amnesty was doing and have them sign letters. I saw what the interests were of the group members. Somebody was very interested in violence against women, so I connected that [Darfur] to violence against women in armed conflict. . . . I created a yearlong panel series on violence against women in armed conflicts. . . . And it was very successful. I got funding from Amnesty. This was all as a volunteer. In addition to strategies to broaden the campaign and bring it in line with diverse strains of American civil society engagement, Amnesty activists had to manage divergences between Save Darfur and Amnesty-USA strategies. One example is Save Darfur's demands for divestment, a method Amnesty did not support. One interviewee described organizational strategies to circumvent such conflict: "I saw an opportunity to marry two strains of activism, to keep Amnesty current and to bring people into the fold that wanted to work with Amnesty but couldn't because they supported divestment and Amnesty didn't. So I created an economic activism campaign, centered on the oil industry. So that way, people who wanted to do Amnesty, and who were interested in divestment . . . could do stock-and stakeholder engagement. It gave them a way to try to impact the oil industry." Such organizational inventiveness, a skilled effort to maneuver between American activism and international, centralized Amnesty, is supplemented by linguistic strategies. Again, a conflict had to be resolved, in this case conflict over language. The Save Darfur movement insisted on calling the violence in Darfur genocide, a position Amnesty rejected. In the words of the volunteer interviewee: "I had to work with a lot of people who thought we should . . . call it a genocide. I spoke to a lot of groups, gave a lot of talks. And I would always say, whether you call it genocide or crimes against humanity, we know there were mass atrocities, and that the government is targeting its own civilians. And whatever we want to call it, the response is the same." Working in the context of the larger US movement, Amnesty activists thus became organizationally and linguistically innovative. This allowed them to operate effectively in the United States-another illustration of the fact that national conditions matter even within INGOs, and an observation in support of Stroup's (2012) findings about the weight of national contexts in INGO work. 8 But these adaptive strategies also show that contradictions between international and national positions can be managed. It also matters, of course, that Amnesty-USA is Amnesty's largest national section. Activists are aware of the fact that Amnesty-USA's size provides them with strength within the larger organization despite the formal leadership of the International Secretariat. "Well, the US section is the largest," one respondent said. "I was in Amsterdam for a meeting of different sections that were working on Sudan. And I was learning that European sections were coming to the US website and using our materials. . . . The reason I bring this up is that the US section was driving more of the Darfur campaign. We wanted more. We wanted to be doing more. We wanted to push the envelope. [JJS: "More than the International Secretariat?"] Yeah. Yeah." 
This comment is significant as it illustrates how activists within a national section do not just have to engage in organizational and linguistic maneuvers between contending forces in their home country, vis-à-vis the discipline demanded by their international headquarters. To bridge the gap, they may actually seek to pull the INGO over to their national campaign strategy, at least when representing a powerful country such as the United States. And yet the effect of such strategies is limited. National sections continue to be bound by the organization's agenda as defined, in the case of Amnesty, by the International Secretariat. Interested in the effects this tug-of-war between national movements and INGOs has on the representation of Darfur, I worked with two students at the University of Minnesota, Meghan Zacher and Hollie Nyseth Brehm, to analyze the websites of Save Darfur and Amnesty-USA. 9 Methodological and substantive details of this study are reported elsewhere (Zacher, Nyseth Brehm, and Savelsberg 2014; see also note 4); 10 a summary of findings suffices here. Our analysis of websites shows that representations of the Darfur conflict, as part of a broad-based American civil society campaign, did differ between Amnesty-USA and Save Darfur. Amnesty's website engaged in a more detailed depiction of different types of victimization. The pages displayed rapes much more frequently than Save Darfur and, somewhat more often, killings and the destruction of livelihood through looting, burning villages and crops, and poisoning water sources. Amnesty webpages also referred more often to categories of international criminal law, depicting the violence as a violation of international humanitarian law and human rights. Save Darfur web entries, on the other hand, used simpler and more dramatic vocabulary. Instead of specifying types of crimes, they more often simply referred to what had occurred as "criminal violence" (85% compared to Amnesty's 31%). Most important, while Amnesty-USA web entries almost completely avoid reference to genocide, in line with the international organization's policy, Save Darfur sites-in line with the central message of the campaign-insist on calling the violence just that: genocide (more than 70% of all Save Darfur entries). In one respect, however, Amnesty-USA (in line with the International Secretariat's policy) and Save Darfur agree. Both urge interventions by the ICC. Even if such support is explicated somewhat more frequently on Save Darfur sites (35%), it certainly appears prominently on Amnesty-USA sites as well (25%). On February 1, 2005, after the delivery of the Commission of Inquiry report to the UN Security Council, executive director of Amnesty-USA Dr. William F. Schulz was quoted as saying: "Given the scale and sheer horror of the human rights abuses in Darfur, anything less than immediate action on the report's findings would be a travesty for the people of Darfur. The International Criminal Court should be given jurisdiction to prosecute war crimes and crimes against humanity that have taken place in Sudan." 11 In the United States such a demand is backed by Save Darfur, the movement within which Amnesty-USA was one among almost two hundred constituent organizations. 
For instance, in an article written on April 27, 2007, the day on which an arrest warrant was issued against Ahmed Harun and Ali Kushayb, two leading perpetrators in Darfur, Save Darfur's executive director stated, "We welcome the ICC's continued efforts to ensure accountability for the genocide in Darfur. This important step by the court sends yet another message to the government of Sudan that the international community will bring to justice those responsible for these horrendous crimes." 12 Clear statements were accompanied by massive demonstrations and demands for justice. They also spurred artistic depictions, which appeared on the websites of movement organizations (see figure 10).

In short, while interview statements illustrate how activists of national sections of INGOs (here Amnesty-USA) seek to build organizational and linguistic bridges to domestic political movements (Save Darfur in our case), public representations of massive violence as displayed on websites of the national section remain distinct from national contexts and in line with the INGO's central policies. With regard to the perceived necessity of ICC interventions, however, both organizations agree: they strongly advocate criminal justice intervention by the International Criminal Court against those responsible for the mass violence in Darfur. In their general assessment of the situation-as a campaign of criminal, indeed genocidal, violence or as war crimes and crimes against humanity respectively-and in the conclusions drawn for judicial intervention, NGOs in the United States aligned closely with other segments of American civil society, as our media analysis documented. And they shaped the rhetoric of the US government.

Conclusions Regarding the Periphery of the Justice Field

Clearly, in the United States, civil society and government stood out in international comparison as both sought to advance a criminalizing frame for Darfur and a definition of the violence as genocide. This does not mean, as we have seen, that rhetoric necessarily translates into action. Obviously the Clinton administration was mistaken when it refused to identify the 1994 violence in Rwanda as genocide, fearing that such a label would necessarily prompt military intervention. The George W. Bush administration proved this assumption wrong in the case of Darfur. It spoke loudly about genocide but refused to intervene decisively.

Further, despite the rather forceful mobilization and rhetoric in the Darfur case, the world cannot always rely on the United States and American civil society when mass atrocities are being committed. As discussed above, the American response to Darfur was characterized by a particular constellation of societal and cultural conditions. It contrasts with the silence shown in many other cases, such as the long-lasting lack of public and governmental attention to the long and painful history of the Democratic Republic of Congo with its fractured lines of conflict. More extreme are cases, such as those in Guatemala, in which American civil society long failed to react to massive human rights violations and genocidal violence abroad despite the US government's own contributions to their execution. Despite noting gaps between rhetoric and practice, and even instances of massive cynicism, this chapter shares one essential finding with the preceding ones.
It shows how the entire justice field, both core and periphery, including international judicial institutions, rights-oriented INGOs, civil society movements, and supportive governments, contributes to a representation of the mass violence of Darfur that deviates radically from those of comparable situations in past centuries and millennia. The emerging narrative depicts those responsible for mass violence as criminal perpetrators and their actions as crimes. This narrative has moved us far from eras in which leaders of violent campaigns were celebrated as heroes (Giesen 2004b). In addition, this new narrative and its construction across national boundaries opens the eyes of the public to the suffering of victims. It supports Jenness's (2004:160) contention that criminalization processes in late modernity reflect an "institutionalization that involves the diffusion of social forms and practices across polities comprising an interstate system." In Darfur and in other cases like it, global actors, here especially the UNSC and the ICC, play a central role in this diffusion process. Finally, the justice narrative has at least the potential of ingraining in the global collective conscience the notion of mass violence as evil, through a process described in recent work on collective memory (Bass 2000; Osiel 1997; Levy and Sznaider 2010) and its classical predecessors (Durkheim [1912] 2001; Halbwachs 1992).

That representations of mass violence adapt to national context may be considered a disadvantage by some; others may regard it as advantageous, as global movements always concretize in local contexts, succeeding only if they adjust to local conditions. The story of Amnesty International in the US context is a case in point. An earlier word of caution bears repeating, though. By creating criminalizing narratives, the justice field buys into the limits imposed by the institutional logic of the criminal law. The resulting account, neglecting structural conditions and historical roots, may be too limited a foundation for long-term policies that can prevent mass violence and genocide. Then again, the criminal justice field is not the only representational force. Its narrative faces other, conflicting ones, narratives to which I now turn.
Pharmacological interventions for reducing catheter-related bladder discomfort in patients undergoing elective surgeries under general anaesthesia: A systematic review and meta-analysis ABSTRACT Background and Aims: Catheter-related bladder discomfort (CRBD) is identified as a major concern after surgery as it can lead to increased morbidity and prolonged hospital stay. A suitable agent to prevent and treat postoperative CRBD is not yet established, and the literature is scarce in this regard. So, we aimed to find the efficacy of various drugs in preventing CRBD after elective surgery. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed for the study, and electronic databases like PubMed Central, Cochrane database and Embase were searched. The methodological quality of selected studies was assessed by the Cochrane Collaboration risk of bias tool. Review Manager 5.4.1 was used for statistical analysis. Results: The meta-analysis revealed that antimuscarinic agents were able to lower the incidence of CRBD significantly at 0 hour, 1 hour, 2 hours and 6 hours (P < 0.01) after the surgery. Tramadol was effective at 1 hour, 2 hours and 6 hours postoperatively (P < 0.01), whereas ketamine was effective at 2 and 6 hours (P < 0.01) postoperatively. Antiepileptic drugs (pregabalin and gabapentin) were able to lower the incidence of CRBD at 0 hour (P < 0.01), 1 hour (P < 0.05), 2 hours (P < 0.05) and 6 hours (P < 0.01) postoperatively while dexmedetomidine at 0 hour (P < 0.01) and 2 hours (P < 0.01) after the surgery. Injections paracetamol, amikacin and diphenhydramine were also shown to reduce the incidence of CRBD in separate randomised controlled trials. Conclusion: The current meta-analysis showed that antimuscarinic agents, tramadol, pregabalin, gabapentin, paracetamol and dexmedetomidine are effective in significantly reducing the incidence of postoperative CRBD. INTRODUCTION Catheter-related bladder discomfort (CRBD) implies distressing symptoms in patients post-urinary catheterisation. The urge to void even after a good passage of urine and discomfort in the suprapubic region are indicators of CRBD. CRBD is associated with similar symptoms as overactive bladder. The mechanism underlying CRBD is postulated to be contractions of detrusor muscle even when the bladder is not fully distended and is mediated by muscarinic receptors, mainly the M3 subtype. [1] The incidence of CRBD ranges from 50 to 90%. [2] For the last two decades, CRBD has gained recognition as a clinical entity, and several studies have been conducted to address as well as manage this clinical entity. Antimuscarinic agents, such as butyl scopolamine, tolterodine and oxybutynin, have been studied for the prevention of CRBD. [3,4] Ketamine is used to relieve CRBD in a variety of circumstances. [5] Tramadol, a synthetic opioid, has been shown to reduce the incidence of CRBD. [6] Likewise, other drugs which have additional antimuscarinic action on M3 receptor like dexmedetomidine, non-steroidal anti-inflammatory drugs (NSAID) and various general anaesthetics like sevoflurane and propofol were also studied. The role of dorsal penile nerve block in the prevention of CRBD has also been studied. [7] With this background, this systematic review and meta-analysis were undertaken to compare the efficacy of drugs and interventions to prevent and treat CRBD. 
We have performed a comprehensive systematic review and meta-analysis of the randomised controlled trials (RCTs) evaluating the effect of drugs or interventions for reducing CRBD in patients undergoing elective surgeries under general anaesthesia.

Study protocol
This systematic review and meta-analysis were conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. [8]

Inclusion and exclusion criteria
RCTs in the English language, including adult patients aged between 18 and 60 years, posted for elective surgeries under general anaesthesia and belonging to the American Society of Anesthesiologists (ASA) physical status I or II, were included. Articles were excluded if the above criteria were not met; if the pharmacological agents studied for the incidence of CRBD were administered by routes other than the oral or intravenous route; if the groups included non-surgical patients (e.g. patients in the intensive care unit or ward without surgery) or patients with mental disorders (e.g. dementia, schizophrenia or depression); or when no functional outcomes were compared.

Search strategy
PubMed, EMBASE and Cochrane Library were electronically searched for relevant studies from their inception to June 2021. Only human studies and those published in the English language were selected. The search was done using the following keywords: catheter-related bladder discomfort, CRBD, catheter-induced bladder irritation, catheter-induced bladder discomfort, general anaesthesia.

Selection of trials
Search strategies were run in the databases mentioned above, and titles as well as abstracts were reviewed by the authors to identify studies. The potentially eligible studies were assessed based on the review question as per the PICO (acronym for Population, Intervention, Comparison, Outcome) system mentioned below. Studies not meeting the inclusion criteria were excluded after a full-text review [Figure 1]. The review question was formulated according to the PICO system and included: Population: adult patients aged between 18 and 60 years and belonging to ASA physical status I or II who underwent elective surgeries under general anaesthesia; Intervention: any of the pharmacological agents administered either orally or intravenously to study the incidence of CRBD; Comparison: study groups which received placebo; Outcome: the incidence of CRBD in both the study group and the control group.

Data extraction
A Microsoft Excel sheet was used to extract data from the included studies regarding the study population, year of publication, procedure done in the study, drug dosages given, time of administration of the drug, route of administration of the drug, type of surgery, duration of surgery, size of Foley catheter inserted, number of subjects in the study group, number of subjects in the outcome group, and the outcome assessed at different time intervals, i.e., the number of patients who manifested CRBD.

Risk of bias in studies
All 15 studies which were included in the systematic review were assessed by two authors individually for risk of bias (RoB) using the Cochrane risk of bias tool in Review Manager software version 5.4.1 (The Cochrane Collaboration, Copenhagen, Denmark, 2014). In case of any discrepancy between them, the opinion of the third author was taken and the outcome was assessed accordingly.
RoB diagrams were made for each study after analysis of the grades of bias by each of the authors [Figure 2].

Statistical analysis
Extracted data were analysed using Review Manager version 5.4.1 (The Cochrane Collaboration, Copenhagen, Denmark, 2014). For analysis of dichotomous variables, the risk ratio (RR) was calculated and each point estimate was reported with a 95% confidence interval (CI). A P value ≤0.05 was considered statistically significant. Statistical heterogeneity was investigated with the help of the I² statistic (I² ≥50% indicating significant heterogeneity). In the case of statistical heterogeneity, subgroup analysis was not done because of an insufficient number of included studies.

Study selection
A total of 46 articles were retained for further assessment from the 2045 primary search results. Full texts of these 46 articles were screened to determine whether they met the inclusion criteria. The final systematic review was done by compiling the results of 15 articles which met the criteria [Table 1].

Category 2: Tramadol
We identified three eligible studies with 190 patients comparing tramadol with placebo. [6,11,16] Meta-analysis using a random-effects model was applied for the incidence of CRBD at 0 hour, 1, 2 and 6 hours postoperatively.

Category 6: Diphenhydramine
In the prospective, randomised, double-blinded, placebo-controlled trial by Yu Y et al., [19] in 2019, 96 adult female patients undergoing gynaecologic laparoscopic surgeries were tested for the incidence of CRBD by comparing diphenhydramine (30 mg) against placebo. Urinary bladder catheterisation was done with a 14 French Foley catheter after the induction of general anaesthesia. The presence and severity of CRBD were assessed at 1, 2 and 6 hours postoperatively. Assessment of the incidence of CRBD did not yield a significant difference between the diphenhydramine and control groups (41.3% versus 51.2%, P = 0.30) at 1 hour postoperatively, whereas diphenhydramine was effective in reducing the incidence of CRBD at 2 hours (34.8 versus 58.7%, P = 0.02) and 6 hours (23.9 versus 56.5%, P < 0.01). The absolute risk reduction with the administration of diphenhydramine was 24%, and the drug was also effective in decreasing the severity of CRBD at 6 hours postoperatively (P < 0.05).

Category 7: Amikacin
A prospective, randomised, double-blinded, placebo-controlled trial conducted by Verma R et al., in 2021 [20] evaluated the efficacy of intravenous amikacin on CRBD; a French Foley catheter was utilised for catheterisation of the urinary bladder. The incidence and severity of CRBD were assessed at 0, 1, 6, 12 and 24 h after completion of the surgery. There was a significant difference in the incidence of CRBD between the two groups (P < 0.05). The incidence of CRBD in the control group was 66%, whereas that in the amikacin group was 44%, and the incidence of CRBD noted by the blinded investigator was reduced in the amikacin group (P < 0.05) at 1 hour and 6 hours. Furthermore, a significant reduction in the severity of CRBD (moderate) was observed at 1 hour in the amikacin group (P < 0.05), whereas no significant difference in the severity of CRBD was recorded between the two groups (P > 0.05) at the rest of the time points.

Category 8: Paracetamol
A prospective, randomised, double-blinded, placebo-controlled trial conducted by Ergenoglu P et al., [12] in 2012 evaluated the effect of single-dose intravenous paracetamol (15 mg/kg) on CRBD in 64 adult patients undergoing percutaneous nephrolithotomy under general anaesthesia.
All patients were catheterised with an 18 French urinary catheter. CRBD was assessed at 30 minutes and 1, 2, 4, 6 and 12 hours postoperatively. The number of patients who experienced moderate discomfort was significantly lower in the test group compared with the control group at 1, 2, 4 and 6 hours (P < 0.05), and only one patient had severe CRBD in the test group at the 1-hour assessment. The authors concluded that intraoperative paracetamol administration decreases the severity of CRBD.

DISCUSSION
CRBD, defined as an urge to void or discomfort in the suprapubic area, is usually seen after surgery in patients who undergo Foley catheter insertion intraoperatively. [3] According to earlier research, CRBD after surgery is reported in 55-91 percent of cases. [2,10,11] In the postoperative phase, CRBD can lead to patient discontent and an increased risk of surgical complications such as wound dehiscence and bleeding, and may also lead to arrhythmias and circulatory instability. A detailed understanding of CRBD and its pathogenesis could lead to more effective management and decreased morbidity. The pelvic nerves supply cholinergic innervation to the bladder, while the hypogastric nerves supply adrenergic innervation. Type 2 and 3 muscarinic receptors (M2 and M3) are found in the urothelium and on efferent neurons. [1] The Foley catheter can stimulate the afferent nerves and can result in involuntary contractions mediated by cholinergic pathways. [21] Based on this notion, various treatment plans for CRBD with a variety of muscarinic receptor antagonists have been implemented, with varying degrees of efficacy. Detrusor muscle contraction and inflammatory mediator activity caused by catheterisation, on the other hand, can cause prostaglandin synthesis, which may play a role in the development of CRBD. [21] Accordingly, non-steroidal anti-inflammatory drugs (NSAIDs) have also been implemented in the management of CRBD. [22]

Studies of the antimuscarinic agents solifenacin, tolterodine, darifenacin and oxybutynin were compared with placebo in the meta-analysis, and at all periods of observation antimuscarinics were able to reduce the incidence of CRBD significantly (P < 0.01). One of the major limitations in the evidence included in the study is that one of the studies showed a high risk of selection bias and performance bias, which may have biased the results. [3] Another limitation is that the gender distribution was not equally maintained either in the comparison groups of individual studies or between the studies. As previous studies have shown that gender can affect the incidence of CRBD, this may hamper the results of our analysis. [23] Also, studies in the comparison include both urological and non-urological surgeries; urological surgeries have been shown to be associated with an increased risk of CRBD, which may skew the analysis results. [24] Antimuscarinic agents have been reported to cause dry mouth in the postoperative period, facial flushing, blurred vision, and nausea and vomiting. [3,15] Therefore, more research is needed to evaluate the response to different doses of antimuscarinics and the side effect profile of these increased doses, as well as the implications of continuation of the therapy in the postoperative period rather than single-dose administration preoperatively, as done in our studies of interest.
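The pooled risk ratios, confidence intervals and I² values cited throughout this discussion were obtained from inverse-variance random-effects models fitted in Review Manager. As a minimal sketch of that computation, the Python code below applies the DerSimonian-Laird approach to a dichotomous outcome; the event counts are invented purely for illustration and are not data from the included trials.

```python
import numpy as np

# Illustrative per-study 2x2 counts: (events_drug, n_drug, events_placebo, n_placebo).
studies = np.array([
    [10, 40, 22, 40],
    [ 8, 30, 15, 30],
    [12, 45, 20, 44],
], dtype=float)

a, n1, c, n2 = studies.T
log_rr = np.log((a / n1) / (c / n2))                  # per-study log risk ratio
var = 1 / a - 1 / n1 + 1 / c - 1 / n2                 # variance of the log RR

w = 1 / var                                           # fixed-effect (inverse-variance) weights
q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w)) ** 2)   # Cochran's Q
df = len(log_rr) - 1
c_dl = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c_dl)                      # between-study variance (DerSimonian-Laird)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 heterogeneity statistic

w_re = 1 / (var + tau2)                               # random-effects weights
pooled = np.sum(w_re * log_rr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
rr = np.exp(pooled)
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"Pooled RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), I^2 = {i2:.0f}%")
```

With real trial counts substituted for the illustrative rows, this reproduces the kind of pooled RR, 95% CI and I² reported for each drug category above.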
In the comparison of tramadol with placebo, the meta-analysis revealed a significant reduction in the incidence of CRBD in the tramadol group compared to the control group at 2 and 6 hours postoperatively. The RRs at 0 hour and 1 hour were 0.61 (95% CI: 0.09 to 4.37) and 0.45 (95% CI: 0.16 to 1.23), respectively. These estimates were imprecise and not statistically significant (P = 0.40 and 0.12, respectively). They also showed considerable heterogeneity, as the intervention time point in the study by Li S et al. [6] was in the postoperative period and the route of administration of the drug varied. Furthermore, that study was conducted only in the female population. The major limitation of the evidence included in the analysis was that one of the studies was restricted to the female gender, and gender can be a confounding factor, as evidenced by previous studies. [6,23] Tramadol was reported to cause an increased incidence of postoperative nausea and vomiting, postoperative sedation, as well as postoperative respiratory depression. [11] Also, there was no uniformity in the timing of interventions. In addition, studies in the comparison contained both urological and non-urological procedures, which may skew the results of the analysis; urological surgeries have been proven to increase the incidence of CRBD. [24] In addition, all these studies assessed the severity of CRBD only up to 6 hours postoperatively. However, the half-life of tramadol is 6 hours. Hence, prolonged periods of observation may be needed, and further studies are required to evaluate the dose-response titration and effect of tramadol in the treatment of CRBD and its side effects. New studies comparing tramadol and other opioids like tapentadol are also providing promising results. [25]

In the meta-analysis comparing ketamine with placebo, a significant reduction in the incidence of CRBD was observed at 2 and 6 hours postoperatively. The RRs at 2 hours and 6 hours were 0.31 (95% CI: 0.15 to 0.64) and 0.23 (95% CI: 0.11 to 0.48), respectively, indicating that ketamine reduced the incidence of CRBD at these time points (P < 0.01). The RRs for the 0-hour and 1-hour observations were 0.75 (95% CI: 0.17 to 3.44) and 0.62 (95% CI: 0.08 to 4.63), respectively. Results for both these time points were statistically non-significant, with P values of 0.72 and 0.64, respectively, and both showed considerable heterogeneity among the studies. Various elements can contribute to heterogeneity. First of all, the interventions were not done at similar time points in the studies, considering the pharmacological profile of the agent. Second, the sample sizes in the studies used for the pooled effect were small, which substantially limited the degrees of freedom. A major limitation in the evidence included in the review is that one of the studies included patients who complained of CRBD spontaneously in the postoperative period, whereas the other one used a preemptive approach. [9,13] Furthermore, the studies of interest had used different doses of ketamine, which can result in skewing of the results. In addition, ketamine can also cause sedation in the postoperative period, nausea, vomiting, hallucination and diplopia. [9] Also, in one of the studies, fentanyl was used for patient-controlled analgesia, whereas in the other, morphine was used for postoperative analgesia, and this may have added to the bias in the observed results. [9,13]
In the meta-analysis of antiepileptics (pregabalin and gabapentin) with placebo, there was a significant difference between the two groups, indicating that antiepileptics were effective in reducing the symptoms of CRBD at all time points of observation (P < 0.05). One of the studies in the comparison assessed the incidence of CRBD in non-urological surgeries, and the other two in urological surgeries. [10,14,16] As existing data from previous studies show an increased risk of CRBD in urological surgeries, this may distort the evidence obtained and be a potential cause of bias. [24] Furthermore, the high risk of selection bias in one of the included studies may affect the interpretation of the results. [10] The other limitations in the evidence included in the analysis are mainly due to the difference in the doses of pregabalin used in the two studies comparing the effect of pregabalin and placebo. [14,16] Pregabalin is reported to have an increased incidence of postoperative sedation. [14] Existing data from a previous study suggest a reduction in the overall incidence of CRBD when gabapentin 1200 mg was used. [26] Future studies are required to compare the dose-response titration and its effects on the prevention and treatment of CRBD.

In the meta-analysis of studies comparing dexmedetomidine with placebo, the RRs were 0.47 (95% CI: 0.32 to 0.71) and 0.44 (95% CI: 0.28 to 0.68) at the 0-hour and 2-hour observations, respectively (P < 0.01 for both). There was also negligible statistical heterogeneity. One of the limitations of the evidence included in the analysis for dexmedetomidine is that the study by Li SY et al. [17] was restricted to the female gender. Dexmedetomidine is reported to have an increased incidence of postoperative sedation, which may also have interfered with the results of our study. [18] As previous studies have shown that gender can affect the incidence of CRBD, this can also skew the results of our analysis. [23]

Also, in our study, we have systematically reviewed the effect of other drugs like diphenhydramine, amikacin and paracetamol on the incidence of CRBD compared with placebo. In the study on paracetamol, the drug was effective in reducing the incidence of CRBD at almost all time points. [12] The study on amikacin showed a statistically significant difference only in the first hour postoperatively. [20] The study on diphenhydramine showed a reduction in the incidence of CRBD at 2 hours and 6 hours after surgery, with an absolute risk reduction of 24%. [19]

There is only a limited amount of literature available that systematically reviews the interventions for CRBD. In the classic meta-analysis conducted by Hu et al., [24] eight studies on tramadol, ketamine, tolterodine and other anticholinergics, as well as gabapentin, were considered. It was shown that anticholinergic drugs like tolterodine, tramadol and gabapentin were able to reduce the postoperative incidence and severity of CRBD. In a recent meta-analysis, Hur M et al. [26] carried out an arm-based network analysis of 29 trials in patients undergoing urologic surgery, covering amikacin, solifenacin, darifenacin, butylscopolamine, dexmedetomidine, gabapentin, glycopyrrolate, ketamine, oxybutynin, resiniferatoxin, tolterodine, tramadol, caudal block, dorsal penile nerve block and lidocaine-prilocaine cream.
Gabapentin 1200 mg was found to be the most efficacious intervention in reducing the overall incidence of CRBD, whereas tolterodine was shown to be the most effective in reducing the severity of CRBD. Bai et al. [27] conducted a narrative review that included 14 studies from 2005 to 2014. They found that muscarinic antagonists, anaesthetics, antiepileptics and analgesics like paracetamol effectively reduced the symptoms of CRBD and decreased its incidence significantly compared with placebo. They speculated that the most treatment-resistant surgery would be transurethral resection of bladder tumours.

The current systematic review and meta-analysis was restricted to RCTs and published articles to ensure high-quality, peer-reviewed evidence. However, it also has some limitations. Although our criteria emphasised studies with modest variation in their cohorts, we found significant heterogeneity among the included studies. Heterogeneity arose from the type of surgery, the gender of the patients, and the mode and timing of drug administration. A limited sample size in RCTs and a small number of RCTs in a systematic review and meta-analysis limit the degrees of freedom, contributing to statistical heterogeneity. The lack of a clear description of allocation concealment in some studies may also have contributed to bias. However, we have taken publication bias into account, and funnel plots of the comparisons were constructed to assess the publication bias of the studies [Figure 8].

CONCLUSION
The available evidence suggests that agents such as oxybutynin, tolterodine, solifenacin, darifenacin, tramadol, pregabalin, gabapentin, paracetamol and dexmedetomidine are effective in significantly reducing the incidence of postoperative CRBD.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Depression, Anxiety and Stress Symptomatology among Swedish University Students Before and During the COVID-19 Pandemic: A Cohort Study.

Background. The COVID-19 pandemic has had a profound effect on societies, economies, and the daily life of citizens worldwide. This has raised important concerns about the mental health of different populations. We aimed to determine if symptom levels of depression, anxiety, and stress were different during the COVID-19 outbreak compared to before, with the Depression, Anxiety and Stress Scale as the main outcome. We also aimed to determine whether pre-pandemic loneliness, poor sleep quality and mental health problems were associated with worse trajectories of mental health. Methods. We conducted a cohort study with 1658 Swedish university students answering questionnaires before the pandemic and an 81% response rate to follow-ups during the pandemic. Generalized Estimating Equations were used to estimate mean levels of symptoms before and during the pandemic, and to estimate effect modification by levels of loneliness, sleep quality and pre-existing mental health problems. Results. We found small differences in symptoms. Mean depression increased by 0.23/21 (95% CI: 0.03 to 0.43), mean anxiety decreased (-0.06/21, 95% CI: -0.21 to 0.09) and mean stress decreased (-0.34/21, 95% CI: -0.56 to -0.12). Loneliness, poor sleep quality and pre-existing mental health problems minimally influenced trajectories. Conclusions. Contrary to widely held concerns, we found minimal changes in mental health among Swedish university students during the first months of the COVID-19 pandemic.

Background
The COVID-19 pandemic has had a profound impact on societies and the daily lives of citizens globally. In a recent call for action, Holmes et al. [1] raised concern about the potential detrimental effects that the pandemic might have on mental health and urgently called for research to evaluate the impact of the pandemic. With repeated reports of high levels of mental health problems among university students [2,3], it is of utmost importance to describe the impact of the pandemic on the mental health of university students. A recent review of the literature suggests that the COVID-19 pandemic has led to increased symptoms of depression and anxiety, and that some groups might be more at risk than others [4]. However, most studies included in the review were cross-sectional and it is therefore challenging to determine how mental health status evolved in the early months of the pandemic. For university students specifically, there are some studies indicating short-term increases in depression and anxiety during COVID-19 [5,6], while others have found no changes [7]. Moreover, it is plausible that the mental health impact of the pandemic differs between regions and countries, as well as between sub-groups within populations. Most of the current research has been performed in China, and none in Sweden. Sweden's strategy to contain the spread of the virus has received international attention both due to its lack of restrictions, and because of its comparably high mortality. Sweden reported its first case of COVID-19 on January 31st, 2020. The rapid spread of the virus in March 2020 prompted the Swedish government to implement regulations and policies to contain the spread of the virus. As in most other countries, the public health strategy in Sweden was to promote physical distancing. Unlike many other countries, the implementation of this strategy has relied on voluntary behaviors.
No lockdowns have been issued, and although reductions in social contacts, mobility and travelling have been strongly encouraged by the authorities, these measures have not been enforced. Nevertheless, on-site university-based education was cancelled on March 17, 2020 and replaced by online education. Given Sweden's somewhat unique strategy, there is an urgent need to conduct longitudinal evaluations of the mental health impact of the COVID-19 pandemic. University students, who generally show comparably high levels of mental health problems, are a potential at-risk group. Within this group, different factors might put some students at higher risk than others.

Social isolation and loneliness
A recent survey identified concerns about the impact of physical distancing and loneliness on mental health during the COVID-19 pandemic [8]. Loneliness is a predictor for the development of depression [9], and it is associated with worse prognosis for depressed individuals [10]. Recent research has found that social isolation triggers neural craving responses similar to hunger [11]. Therefore, people with preexisting loneliness may feel even more socially deprived during the pandemic, and this may negatively impact their mental health.

Sleep quality
Poor sleep quality is a prevalent and increasing problem among university students [12]. Sleep disturbances have bidirectional etiological associations with depression, anxiety, and stress [13][14][15][16], and have shown associations with depression and anxiety during the pandemic [4]. Physiologically, poor sleep quality impairs emotional regulation and increases affective reactivity [18]. Further, poor sleep quality is associated with increased negative emotions following disruptive events [19] and a lower threshold for perceived stress and increased negative affect following mild cognitive stressors [20]. Therefore, it is important to determine whether changes in mental health in university students during the COVID-19 pandemic are modified by sleep quality.

Mental health problems
As highlighted by Yao [21], individuals with pre-existing mental health problems may be at risk of worsening symptoms during the COVID-19 pandemic. Two recent studies have reported increased symptoms of anxiety, eating disorder symptoms and other psychiatric symptoms among psychiatric patients during the COVID-19 pandemic [4]. This may be significant also for individuals with minor mental health problems because they are at risk for developing more severe problems [22][23][24].

Our primary aim was to determine the mean differences in depression, anxiety, and stress symptomatology in university students in Stockholm comparing symptom levels before and during the first three months of the COVID-19 pandemic. Our secondary aim was to determine whether pre-pandemic loneliness, poor sleep quality and mental health problems were associated with different trajectories of changes in symptoms of depression, anxiety, and stress. Our hypotheses were that symptoms of depression, anxiety, and stress would worsen over the first months of the COVID-19 pandemic. We also hypothesized that the trajectories would be worse for groups of students reporting loneliness, poor sleep quality and pre-existing mental health problems.

Design and study population
We conducted a cohort study of university students in Stockholm, Sweden before and during the outbreak of COVID-19.
The study is nested within a large on-going dynamic cohort study of university students: the Sustainable University Life (SUN) study (ClinicalTrial.gov ID: NCT04465435). All full-time undergraduate students enrolled at Karolinska Institutet (KI), Sophiahemmet University (SHH) and The Scandinavian College of Naprapathic Manual Medicine (NPH) with at least one year left to complete their degree were eligible for inclusion in the study. We also invited all students from the architectural program at the Royal Institute of Technology (KTH), students in the bachelor program Business and Economics from the Stockholm School of Economics (SCE), and targeted bachelor programs at The Swedish School of Health and Sports Sciences (GIH) to enroll in the study. Data collection started in August 2019 and is still ongoing.

Data collection
The data was collected online. Students received information about the study through in-class presentations by study staff. Students were invited to complete the baseline survey and provided with access links to the study questionnaire via e-mail. All participants provided informed consent electronically before entering the study. Information about the study was also given in relevant social media channels (e.g. student union social media channels), and through on-campus information sites. Included students were followed with web surveys every three months starting in November 2019. Participants not responding to the follow-up received reminders by email, phone text-message and one phone call over the following month. The study was approved by the Swedish Ethical Review Authority (reference number: 2019-03276, 2020-01449). The data collected from December 1, 2019 to February 28, 2020 was used as baseline information "before pandemic" (except for the demographic variables for participants from SHH and NPH, which were collected August-September 2019). Data collected from March 1 to May 20, 2020 provided follow-up information "during pandemic" (see Table 1). This categorization is based on the fact that Sweden had very few cases of COVID-19 throughout February, with an accelerating spread in March.

Measurement of loneliness, sleep quality and pre-existing mental health problems
Loneliness was measured using the UCLA Three-Item Loneliness Scale [25] with a total score ranging from 3-9 points. In accordance with previous research, we used a cut-off of ≥6/9 to define loneliness [26]. The UCLA Three-Item Loneliness Scale has acceptable internal consistency (Cronbach's α = 0.72) and a high correlation (r = 0.82) with the 20-item Revised UCLA Loneliness Scale [25]. Sleep quality was measured using the Pittsburgh Sleep Quality Index (PSQI) [27]. A score of >5/21 is used to classify poor sleep quality. This cut-off has shown a sensitivity of 89.6% and a specificity of 86.5% for differentiating between good and poor sleepers [28]. The PSQI has adequate internal consistency (Cronbach's α = 0.82) and test-retest reliability (r = 0.82) over one month [28]. Pre-existing mental health problems were measured with the Depression, Anxiety and Stress Scale (DASS-21; see psychometrics under 'Outcomes') [29] and classified as moderate symptoms if scoring above the cutoff on any of the three subscales (≥7 on the depression subscale, ≥6 on the anxiety subscale or ≥10 on the stress scale) [30]. Loneliness, sleep quality and pre-existing mental health problems were all measured before the pandemic.
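To make the dichotomisation concrete, the short sketch below applies the cut-offs stated above (UCLA-3 ≥ 6, PSQI > 5, and the DASS-21 subscale thresholds) to one participant's pre-pandemic scores. The function and variable names are illustrative and are not taken from the study's own code.

```python
# Minimal sketch of the pre-pandemic exposure classification described above.
def classify_exposures(ucla3, psqi, dass_dep, dass_anx, dass_stress):
    return {
        "lonely": ucla3 >= 6,                      # UCLA Three-Item Loneliness Scale, range 3-9
        "poor_sleep": psqi > 5,                    # Pittsburgh Sleep Quality Index, range 0-21
        "pre_existing_mhp": (dass_dep >= 7         # DASS-21 depression subscale (0-21)
                             or dass_anx >= 6      # DASS-21 anxiety subscale (0-21)
                             or dass_stress >= 10),  # DASS-21 stress subscale (0-21)
    }

print(classify_exposures(ucla3=7, psqi=4, dass_dep=3, dass_anx=2, dass_stress=11))
# {'lonely': True, 'poor_sleep': False, 'pre_existing_mhp': True}
```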
DASS-21 has good psychometric properties, with convergent and divergent validity distinguishing between subscales, and Cronbach's α of 0.82-0.90 for the three subscales [29]. The primary outcomes are scores of depression, anxiety, and stress, respectively. For each participant, these are measured before and during the COVID-19 pandemic. Statistical analyses Participants' baseline characteristics are presented in Table 1 as number of participants and percentages or as means with SDs for all participants, participants completing the follow-up and participants lost to follow-up. We used Generalized Estimating Equations (GEE) to model mental health symptoms during two time periods, before and during the pandemic. GEE models treat correlation between observations from the same individual as nuisance parameters and provide estimates of the marginal population mean of the outcome. Our data was not normally distributed, which was one reason for choosing GEE, since the models do not rely on the assumption of normally distributed outcome measures or the normality of residuals. We built three separate models, one each for symptoms of depression, anxiety, and stress, to assess overall mean differences in symptoms from before to during the pandemic. These models included only time-point (before vs. during pandemic) as the predictor. Since the models evaluated differences over time for the full group, no covariates were used in these models. An exchangeable correlation matrix was specified, although with only two time points, it makes no difference which structure is used. Subsequently, nine separate models were fitted to assess whether differences in the outcomes from before to during the pandemic varied by loneliness, poor sleep quality or pre-existing mental health problems. These models included an exposure variable of dichotomized loneliness, sleep quality or pre-existing mental health problems and time-point as predictors. A two-way interaction term between exposure and time point was included, letting the differences over time vary by exposure level. These models were adjusted for age, female gender, and highest parental education level (for unadjusted coefficients see Online Resource 1). An exchangeable working correlation structure was used in all models. We conducted a sensitivity analysis using the same methods described above in a sample of 496 participants followed from August-September 2019 to November 2019-January 2020 to compare the effect modification of the exposures during the pandemic to that of an earlier time period (Online Resource 2). All analyses were performed using RStudio version 1.2.5001; the packages 'geepack' and 'emmeans' were used to perform GEE analyses and to derive estimated marginal means from the models. Three items of the PSQI were missing (5b, 5f and 5j) for the first 333 included participants, due to initial technical problems with the web survey. We imputed these missing variables by imputing the individual mean values from observed items 5b-5j on the PSQI. No other measures had missing items. We investigated whether loss to follow-up was random or systematic by examining the association between baseline characteristics and dropping out of the study in a series of logistic regression models. We report crude odds ratios (ORs) of being lost to follow-up (Table 1).
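To make the exposure definitions and the GEE specification above concrete, a minimal sketch is given below. The original analysis was run in R with geepack and emmeans; the Python code (statsmodels) is only an illustrative equivalent, and all variable names (ucla3_total, psqi_global, dass21_*, id, during, age, female, parent_university) are assumptions rather than the study's actual variable names.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def classify_baseline_exposures(df: pd.DataFrame) -> pd.DataFrame:
    # Dichotomize the pre-pandemic exposures with the cut-offs cited above.
    out = df.copy()
    out["lonely"] = out["ucla3_total"] >= 6        # UCLA Three-Item Loneliness Scale (range 3-9)
    out["poor_sleep"] = out["psqi_global"] > 5     # PSQI global score (range 0-21)
    out["pre_mhp"] = (
        (out["dass21_depression"] >= 7)
        | (out["dass21_anxiety"] >= 6)
        | (out["dass21_stress"] >= 10)
    )
    return out

def fit_gee_interaction(long_df: pd.DataFrame, outcome: str = "depression",
                        exposure: str = "lonely"):
    # One row per participant and time point; 'during' is 0 before and 1 during the pandemic.
    # The exposure-by-time coefficient ('during:<exposure>') is the difference-in-difference:
    # (change in the exposed group) minus (change in the unexposed group).
    model = smf.gee(
        f"{outcome} ~ during * {exposure} + age + female + parent_university",
        groups="id",
        data=long_df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()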
The attrition analysis suggests that participants with moderate-severe stress symptoms before the pandemic were more likely to be lost to follow-up (OR 1.32, 95% CI: 1.0 to 1.74). When comparing the symptom levels before and during the pandemic, we found a small mean increase in depressive symptoms (0.23/21, 95% CI: 0.03 to 0.43), a small decrease in mean anxiety symptoms (-0.06/21, 95% CI: -0.21 to 0.09) and a small decrease in mean stress symptoms (-0.34/21, 95% CI: -0.56 to -0.12) from before to during the pandemic (Table 2, Figure 2). (Table notes: Pre-existing mental health problems (PEMHP); all models are adjusted for age (interval), highest parental education (university vs. all others) and female gender; difference-in-difference is calculated as change in exposed minus change in unexposed, with a 95% CI.) Discussion We investigated differences in symptoms of depression, anxiety, and stress in Swedish university students from before to during the first few months of the outbreak of COVID-19. The results indicate that differences in mean levels of depression, anxiety, and stress were minimal for all the comparisons. We found a small increase in depressive symptoms and small decreases in anxiety and stress symptoms. However, the differences are so small that the clinical significance of these results is debatable. Overall, our results suggest that there were no clinically meaningful mean differences in mental health in our sample of Swedish university students during the first months of the COVID-19 pandemic compared to before the pandemic. Contrary to our hypotheses, students who were lonely, had poor sleep quality or pre-existing mental health problems before the pandemic did not show a worse trajectory of symptoms over the first months of the pandemic. We found only small difference-in-differences over time for all these groups when comparing to students without these characteristics. The small differences that we found were all in the opposite directions from what we had hypothesized. The groups with loneliness, poor sleep quality, and pre-existing mental health problems all showed more favorable trajectories of mean depression, anxiety, and stress scores during the first months of the pandemic, compared to the unexposed groups. Our study has several strengths. First, we were able to conduct a natural experiment by investigating the differences in mental health symptoms before and after the pandemic reached Sweden, unlike most previous studies with similar aims [4]. Second, our follow-up rate was high (81.2%). Attrition analyses suggest that those lost to follow-up had minimal impact on our results. Third, the instruments used for measurements of all variables have good psychometric properties, limiting the risk of misclassification. Finally, we included a large sample of university students from six universities. We recruited 24.8% of eligible students. Therefore, there is a possibility that selection bias influenced our results. However, the baseline pre-pandemic levels of mental health symptoms measured in our cohort were similar to those reported in previous studies of Swedish university students using the same instrument (DASS-21) [31]. This suggests that our sample is representative of the mental health status of Swedish university students before the pandemic. Overall, our results suggest that symptoms of depression, anxiety and stress among Swedish university students changed minimally in the first months of the COVID-19 pandemic compared to the pre-pandemic period.
Our hypothesis that those who experienced loneliness, poor sleep quality and pre-existing mental health problems before the pandemic would experience worse changes in symptoms of depression, anxiety and stress was not supported by our data. For the group with pre-existing mental health problems this can be explained by regression to the mean. A similar pattern can be seen among students followed during the fall of 2019, before the pandemic (Online Resource 2). Our interpretation is that this group did not have worse trajectories during the pandemic but, as would be expected from regression to the mean, experienced decreased symptoms. The favorable trajectories for individuals who were lonely or had poor sleep quality during the pandemic cannot, however, be explained by regression to the mean. The participants followed during the fall of 2019 had more parallel trajectories, showing no or smaller differences in trajectories between exposed and unexposed (Online Resource 2). Although the clinical relevance of these small differences over time is debatable, one might speculate about the underlying mechanisms for these patterns. The fact that lonely individuals had a more favorable trajectory during the first months of the pandemic might be related to the subjective and relative aspects of experiencing loneliness. The pandemic has arguably led to more restricted social lives for most people. This might lessen the contrast when comparing one's own social life to that of others, making the experience of loneliness somewhat less emotionally painful. Our results dovetail with a recent American study showing increases in perceived social support and no mean changes in loneliness during the first months of the COVID-19 pandemic [32]. The more favorable trajectory of those with poor sleep quality might be explained by changes in day-to-day life brought about by the COVID-19 pandemic that make poor sleep quality easier to deal with. Students studying from home might have fewer stressors during the day, perhaps decreasing the negative impact of poor sleep quality on mental health. Our results, showing minimal differences in depression, anxiety and stress, contrast with much of the previous research into the mental health effects of the COVID-19 pandemic. A systematic review of the literature showed that most evidence points to higher levels of depression and anxiety in the general public during the pandemic than before the pandemic, and has indicated that pre-existing mental health problems and poor sleep quality might be risk factors for depression and anxiety during the pandemic [4]. One explanation for the difference between our results and those of many previous studies is the difference in design: while we used a longitudinal design, most previous studies were cross-sectional. Another explanation for the contrast between our results and those of many previous studies might be differences in the impact of COVID-19 on different study populations, and that the time of exposure may differ between the populations. It is possible that people in different countries have been affected differently. One reason might be that the spread of the virus has been more severe in some places than others. However, our study was performed in Stockholm, which has had comparatively high mortality. Yet another reason may be that different governmental strategies to contain the spread might have had differing impacts on mental health.
Sweden's strategy, which has been less restrictive than that of many other countries, may have had less detrimental effects on mental health. More high-quality research is needed to compare mental health changes between populations during the COVID-19 pandemic. Conclusions In conclusion, contrary to previously expressed concerns, we saw only minimal differences in mental health among Swedish university students when comparing symptom levels before and during the first months of the COVID-19 pandemic. We also did not see meaningful differences in mental health for students exposed to loneliness, poor sleep quality and pre-existing mental health problems. Figure captions: Differences in DASS-21 scores before and during the pandemic. Graphs of estimated means from GEE models for overall differences in DASS-21 scores before and during the COVID-19 pandemic.
2020-10-28T18:02:54.869Z
2020-09-09T00:00:00.000
{ "year": 2020, "sha1": "238c2460aa42a58012e60b22fcaecd2bc574d353", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-70620/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "2418d091ea2a2299fa9ccfdd4bd1f2bc0c957482", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
209448936
pes2o/s2orc
v3-fos-license
Association of Soybean Food Intake and Cardiometabolic Syndrome in Korean Women: Korea National Health and Nutrition Examination Survey (2007 to 2011) Background Soybean food consumption has been considered as a possible way to lower incidence of cardiometabolic syndrome (CMS) among Asians. However, results from studies investigating its efficacy on CMS in Asians have been inconsistent. Methods We analyzed the association between soybean intake frequency and prevalence of CMS based on data from the Korea National Health and Nutrition Examination Survey 2007 to 2011. Data of 9,287 women aged 20 to 64 years were analyzed. Food frequency questionnaire was used to assess soybean food consumption frequency. General linear model and multivariable logistic regression model were used to examine the association of soybean intake quintile with CMS and its risk factors. Least square means of metabolic factors mostly showed no significant relevance except liver indexes. Results Compared to participants in the 1st quintile (<2 times/week of soybean food), odds ratios (OR) for CMS and abdominal obesity (AO) in the 4th quintile (8.5 times/week<soybean food≤17 times/week) were 0.73 (95% confidence interval [CI], 0.57 to 0.95) and 0.72 (95% CI, 0.58 to 0.90), respectively. After excluding Tofu products, ORs of CMS, AO, high blood pressure, and hypertriglyceridemia were lower than those without excluding Tofu products. However, results still did not show significant inverse linear trend across frequency quintiles. Conclusion Our findings suggest that soybean intake of 8.5 to 17 times/week was inversely associated with CMS in Korean women. The relation between soybean intake >17 times/week and CMS varied depending on soybean food items. INTRODUCTION Increasing prevalence of cardiometabolic syndrome (CMS) is a health concern worldwide. According to the World Health Organization, some developing countries with population adhering to their traditional eating habits such as those in South East Asia do not show increase in incidence of CMS. However, countries that adapted to western diet such as Iran had incidence of CMS that was even higher than certain developed countries [1]. Similarly, the prevalence of type 2 diabetes mellitus (T2DM), one of the conditions of CMS, has been lower in Asian populations than that in Western countries [2]. CMS prevalence in South Korea increased between the mid-1990s and mid-2000s. However, in 2013, it was 28.9% without a significant increasing or decreasing trend for the past 5 years [3]. On the contrary, in the United States, nearly 35% of all adults and 50% of those aged 60 years or older were estimated to have CMS in 2012. The rate had been increasing for the past 10 years [4]. Soybeans and other processed soybean food have been consumed for a long time in Asia to compensate for the incomplete protein content in rice. Asians typically consume 9 to 30 g of soybeans per day, with individual and regional variations [5]. According to the Korean National Nutrition Survey [6], daily mean intake of total genistein and daidzein in the Korean population is estimated to be 21.0 mg per person. This means that a Korean consumes more isoflavones than a person who lives in the United States or Europe [7]. Dietary isoflavone is consumed by only 35% of adults in a day with an average intake of 3.1 mg/day, resulting in a mean intake of 1.0 mg/day for all United States adults [8].
Frequent intake of cultivated soybean food which is unique to traditional Asian cuisines is not common in western diet. This seems to have relation with the lower incidence of CMS among Asians [9,10]. Although there are several epidemiologic and experimental data reporting an inverse relationship of the consumption of soybean with several metabolic disorders [11][12][13], several studies have found no clear association between soy intake and incidence of cardio metabolic disorders [10,14,15]. According to some randomized clinical trials, soybean food consumption does not show any improvement for insulin resistance, serum glucose level, or lipid profiles [16,17]. Furthermore, soybean intake tends to have sex dependent effects on risk of CMS and specific cancer [18,19]. Overall, research results about the effect of soybean consumption on CMS, especially among metabolically healthy women, were inconsistent [10,14,20]. Papers suggesting that soybean consumption was not associated with CMS were mostly based on people who did not routinely eat soybean food or eat small amounts of them [17,21]. Thus, it is necessary to examine effects of soybean foods on healthy women who eat soybeans on a daily basis. For South Korean population, studies on this topic have also shown conflicting results recently. Several findings suggested that soybeans had protective effects against CMS among obese women [22,23]. However, other studies did not find a significant relation between soybean food intake and CMS [20]. Thus, the objective of this study was to investigate the association between CMS and soybean food intake among South Korean women. Participants In this study, we used data from the Korea National Health and Nutrition Examination Survey (KNHANES) IV and V (2007 to 2011). It contains health interview survey, physical examination research, and nutrition questionnaires. Stratified multistage sampling design was used. Sampling was done according to geographical area, residential environmental type, age, and sex. All participants provided written informed consent before their participation. Initial candidates for the present study were those having completed nutrition questionnaires. We then excluded subjects who were men (n=12,891) and those who were under 19 years of age or over 64 years of age (n=5,194). Additionally, we excluded people who reported to intake implausible amount of total energy (n=85; <450 or >6,300 kcal/day for women) as data from those participants could not be applied to the general population. They could cause distortion of data. We excluded people whose daily energy intake was less than 25% or over 300% of the estimated energy requirements (EER). According to the Dietary Reference Intakes for Koreans 2015 (KDRI) [24], the EER for the adult is from 1,800 to 2,100 kcal, with less than 25% being 450 and over 300% being 6,300 kcal [24,25]. Pregnant (n=198) and lactating women (n=282) were also excluded because their physiological conditions were changed during pregnancy and breast feeding. We also excluded those who had already been diagnosed by their doctors with diabetes mellitus (n=429), hypertension (n=1,330), or hyperlipidemia (n=399). In addition, we excluded subjects who had not completed anthropometric examination or a blood test (n=893). Finally, 9,287 participants were included in the final analysis. Fig. 1 shows the entire flow of participant selection. 
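The selection steps described above amount to a sequence of row filters. A minimal pandas sketch is shown below; all column names (sex, energy_kcal, dx_*, etc.) are hypothetical placeholders and do not correspond to the actual KNHANES variable codes.

import pandas as pd

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    # Keep women aged 20-64 with a plausible energy intake (25%-300% of the EER for
    # adult women, i.e. 450-6,300 kcal/day), excluding pregnant or lactating women,
    # those with a prior diagnosis of diabetes, hypertension or hyperlipidemia, and
    # those without complete anthropometric examination or blood test data.
    keep = (
        (df["sex"] == "female")
        & df["age"].between(20, 64)
        & df["energy_kcal"].between(450, 6300)
        & ~df["pregnant"] & ~df["lactating"]
        & ~df["dx_diabetes"] & ~df["dx_hypertension"] & ~df["dx_hyperlipidemia"]
        & df["has_exam"] & df["has_blood_test"]
    )
    return df.loc[keep]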
Demographic and health behavior Trained interviewers collected data of demographic factors and health behaviors of participants via personal interviews. Demographic variables included age, sex, achieved educational level (high school education or less, and college education or more), and monthly household income (tertile of equivalized household income). Equivalized household income was calculated as total monthly household income divided by the square root of the total number of household members. Health behavioral variables included smoking, alcohol consumption, and physical activity. Participants were asked to choose whether they were never smokers or were past or present smokers. The average amount and number of consumed alcoholic beverages was assessed by self-questionnaire. It was then assigned into five groups based on frequency per week. Physical activity was quantified as metabolic equivalent of task minutes per 7 days (MET-minutes per week). It was calculated using the scoring protocol of the Korean version of the International Physical Activity Questionnaire (IPAQ) form [26]. MET means energy expenditure per kg/minute (kcal/min/kg). MET-minutes was calculated as 'MET level of each behavior×minute×frequency/ week. ' Met level of light intensity activities (walking 4.8 km/hr) was 3.3 METs. Moderate intensity activities had 4.0 METs and vigorous intensity activities had 8.0 METs. Participants were asked to choose whether they performed low, moderate, or high physical activity. Physical activity levels were then classified as low (<600 MET-minutes per week), moderate (more than 600 but less than 3,000 MET-minutes per week), or high (>3,000 MET-minutes per week). Dietary assessment Dietary assessments were conducted by using food frequency questionnaire (FFQ). FFQ consisted of 63 mainly consumed food items in South Korea with 10 categories of frequency value (almost never, 6 to 11 times/year, 1 time/month, 2 to 3 times/ month, 1 time/week, 2 to 3 times/week, 4 to 6 times/week, 1 time/day, 2 times/day, 3 times/day). Soybean food consisted of three items: soybean group (which contained rice with soybean, beans cooked in soy sauce, etc.), tofu group (which contained tofu put in soup, stew, flat cake, boiled down in soy sauce or other seasonings, soft tofu), and soybean milk group. All analyses accounted for the complex sampling design effect and appropriate sampling weights of the national survey. We converted the soybean FFQ into daily intake frequency and then grouped it into quintile according to the number of participants proportionally. Energy intake variables used in this study were extracted from 24-hour recall data. They included total energy (kcal/day), carbohydrate (g/day), protein (g/day), total fat (g/day), and sodium (mg/day) intakes presented as total energy adjusted values using the residual method. Anthropometric examination and blood test Physical examination was performed by well-trained medical staff following standard procedures. Body weight and height were measured to 0.1 kg and 0.1 cm, respectively, with participant wearing indoor clothing without shoes. Waist circumference (WC) was measured at the narrowest point between the lower border of the rib cage and the iliac crest. Body mass index (BMI) was calculated as the ratio of weight (kg)/height squared (m 2 ). Blood pressure (BP) was measured for the right arm using a mercury sphygmomanometer (Baumanometer 0850 wall unit 33; W.A. Baum Co. Inc., Copiague, NY, USA) three times per subject by trained nurses. 
Before 2011, the height of the arm on which BP was measured differed by year. Thus, we used corrected BP variables to prevent the issue of combining BP data from different years. Levels of total cholesterol (TC), high density lipoprotein cholesterol (HDL-C), triglyceride (TG), glucose, and other markers were measured using blood samples obtained from the antecubital vein after a 12-hour overnight fast. Fasting plasma glucose (FPG), TC, TG, and HDL-C levels were evaluated using a Hitachi 700-110 Chemistry Analyzer (Hitachi, Tokyo, Japan). Fasting insulin levels were estimated by immunoradiometric assay (Biosource, Louvain-la-Neuve, Belgium) using a γ-counter (1470 Wizard; PerkinElmer, Turku, Finland). Glycosylated hemoglobin (HbA1c) was measured by high performance liquid chromatography (HLC-723G7; Tosoh, Tokyo, Japan). Definition of cardiometabolic syndrome and cardiometabolic syndrome score CMS is a combination of metabolic disorders or risk factors, including diabetes mellitus, systemic arterial high BP, central obesity, and hyperlipidemia. Definitions for CMS and its components were obtained from the National Cholesterol Education Program Adult Treatment Panel III guideline. We used the ethnicity-specific values for WC based on data from the World Health Organization and the Korean Society for the Study of Obesity [27]. CMS was defined by the presence of three or more of the following risk factors: central obesity (WC ≥90 cm for men, and ≥85 cm for women); high BP (systolic BP ≥130 mm Hg and diastolic BP ≥85 mm Hg, or using antihypertension drug); fasting glucose levels ≥100 mg/dL; TG levels ≥150 mg/dL; and low HDL-C levels (<40 mg/dL for men, and <50 mg/dL for women). If a criterion was met, a score of one point was assigned. We named this the CMS score, ranging from 0 to 5 points. Those with a CMS score of 3 or more points were classified as having metabolic syndrome (MetS). Definitions of other cardiometabolic variables Homeostasis model assessment (HOMA) is the index most broadly used in epidemiologic research [28]. Homeostasis model assessment of insulin resistance (HOMA-IR) is a test for insulin resistance while HOMA-β indicates insulin formation potential regarding pancreatic β-cell function [28]. These two HOMA parameters are calculated as follows: HOMA-IR = fasting insulin (µIU/mL) × FPG (mg/dL) / (22.5 × 18); HOMA-β = 20 × fasting insulin (µIU/mL) / (FPG (mg/dL)/18 − 3.5). Nonalcoholic fatty liver disease (NAFLD) is defined as the presence of cytoplasmic lipid droplets in more than 5% of hepatocytes in individuals without significant alcohol consumption and negative viral and autoimmune liver diseases [29]. Presence of NAFLD has recently been considered as the hepatic component of CMS as it is a result of obesity for which there is ectopic accumulation of TG in the liver parenchyma [30]. Based on significant evidence, NAFLD appears to play an important role in the disease mechanism of metabolic disorders [30]. As a standard diagnosis for NAFLD, the "NAFLD liver fat score" is mainly based on magnetic resonance spectroscopy [31]. It has 95% sensitivity and specificity. Statistical analysis All statistical analyses were performed using SAS software version 9.4 (SAS Institute Inc., Cary, NC, USA). All analyses accounted for the complex sampling design effect and appropriate sampling weights of the national survey using SAS Proc survey. We applied sample weighting in the analysis using the weight statement in the Proc survey procedures.
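Purely as an illustration of the definitions given above, the HOMA indices and the CMS score translate directly into a few lines of Python; the function and argument names are chosen for readability and are not part of the original SAS analysis.

def homa_ir(insulin_uIU_ml, fpg_mg_dl):
    # HOMA-IR = insulin (uIU/mL) x glucose (mmol/L) / 22.5, where mg/dL / 18 converts to mmol/L.
    return insulin_uIU_ml * (fpg_mg_dl / 18.0) / 22.5

def homa_beta(insulin_uIU_ml, fpg_mg_dl):
    # HOMA-beta = 20 x insulin (uIU/mL) / (glucose (mmol/L) - 3.5).
    return 20.0 * insulin_uIU_ml / (fpg_mg_dl / 18.0 - 3.5)

def cms_score(waist_cm, sbp, dbp, bp_drug, fpg_mg_dl, tg_mg_dl, hdl_mg_dl, female=True):
    # Count of the five NCEP ATP III components with the Korean waist cut-offs;
    # a score of 3 or more classifies the participant as having CMS/MetS.
    score = 0
    score += waist_cm >= (85 if female else 90)        # central (abdominal) obesity
    score += (sbp >= 130 and dbp >= 85) or bp_drug     # high blood pressure, as stated above
    score += fpg_mg_dl >= 100                          # fasting hyperglycemia
    score += tg_mg_dl >= 150                           # hypertriglyceridemia
    score += hdl_mg_dl < (50 if female else 40)        # low HDL-C
    return int(score)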
A two-tailed P value <0.05 was considered to be statistically significant. Values for demographic, health-related behavior, and biochemical variables were expressed as mean±standard deviation (SD) for continuous variables or number of participants (percentage) for categorical variables. Calculated probability derived from either analysis of variance (ANOVA) comparing mean values (for continuous variables) or chi-square tests comparing the distribution of categorical variables was used to test differences by soybean food intake. Cardiometabolic factors derived from several blood test results are expressed as mean±SD across quintiles of soybean food intake. Significance was determined by a general linear model (GLM) with the Tukey multiple comparisons test (P<0.05) after adjusting for education (high school or less, college or more), income (tertile of equivalized household income), physical activity (low, moderate, high), smoking status (nonsmoker, ex-smoker, current smoker), alcohol consumption (times/week), total energy (kcal/day), carbohydrate (g/day), total fat (g/day), protein (g/day), and sodium (mg/1,000 kcal) intakes as continuous variables. We used the residual method to account for effects of total energy intake on each nutrient as a confounding factor for CMS. The PROC GLM procedure was used to examine a linear trend (P for trend) across soybean consumption categories by using the median value within each exposure category. Multivariable logistic regression (MLR) analysis was performed to estimate the odds ratio (OR) and 95% confidence interval (CI) of individual components of CMS according to soybean food intake quintiles using the lowest quintile as reference. We developed three different models that adjusted for confounders. When we chose covariates for the multivariable models, we referred to the results of the analysis of baseline characteristics of study subjects by soybean food consumption. ORs were initially calculated following adjustment for age in model 1. In model 2, the categorical variables education, monthly household income, physical activity, smoking status, and alcohol consumption were additionally adjusted for. Dietary factors such as total energy, carbohydrate, protein, total fat, and sodium intakes were additionally adjusted for as continuous variables in model 3. We applied the three models to evaluate the influence of each set of confounders on the ORs. The prevalence of high BP, hyperlipidemia, T2DM, myocardial infarction/angina pectoris, and stroke was compared across soybean food intake groups. MLR analyses were performed to estimate ORs and 95% CIs with the lowest quintile group as the reference group after adjusting for confounding variables by the three models as described above. General characteristics of participants according to soybean food intake According to soybean food consumption, the CMS score and sociodemographic, anthropometric, and biochemical characteristics of participants are presented in Table 1. The CMS score is the number of CMS components present. With higher quintiles of soybean intake, the age and high-intensity exercise of participants also increased. The percentage of current smokers decreased with higher quintiles of soybean intake. Statistical analysis results from either ANOVA comparing mean values or chi-square tests comparing the distribution of categorical variables showed significant differences by soybean food intake frequency.
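The residual method used above can be sketched in a few lines: each nutrient is regressed on total energy intake, and the residual, re-centred on the mean intake, is used as the energy-adjusted value. This is a generic illustration only, not the SAS code used in the study.

import numpy as np
import statsmodels.api as sm

def energy_adjust(nutrient, total_energy):
    # Willett-style residual method: the adjusted intake is the residual from an OLS
    # regression of nutrient on total energy, shifted by the mean nutrient intake so
    # the adjusted values remain on the original measurement scale.
    nutrient = np.asarray(nutrient, dtype=float)
    energy = sm.add_constant(np.asarray(total_energy, dtype=float))
    residuals = sm.OLS(nutrient, energy).fit().resid
    return residuals + nutrient.mean()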
Variables such as age, CMS score, household income, education, smoking status, current alcohol intake, vigorous physical activity, and daily dietary intake were designated as confounders. Table 2 shows relationships between several variables related to cardiometabolic diseases and soybean product intake. When comparing average values, all variables were significantly (P<0.05) different across soybean intake quintiles except fasting serum insulin and HOMA-IR. In particular, the 5th quintile showed significant differences in all variables except fasting serum insulin and HOMA-IR. However, for the least square means of those variables after adjusting for various covariates (age, education, income, physical activity, smoking status, alcohol consumption, total energy intake, and energy-adjusted daily intakes of carbohydrate, protein, fat, and sodium), the significance disappeared except for aspartate aminotransferase/alanine aminotransferase (AST/ALT) and HOMA-β. P values for trend in AST/ALT, using the median value of each soybean product intake quintile after adjusting for the same confounders listed above, showed marginally significant linearity across soybean food intake groups. Soybean food intake and cardiometabolic syndrome Soybean food intake was related to CMS and abdominal obesity (AO) after adjusting for various confounders, as shown in Table 3. Participants in the 4th quintile (more than 8.5 to 17 times/week) had significantly lower ORs of having CMS than those in the 1st quintile. The 4th quintile of soybean food intake also had significantly lower ORs of having AO than the 1st quintile in models 1, 2, and 3. The 5th quintile of soybean food intake had a 25% lower OR of having AO than the 1st quintile. However, after adjusting for various confounders in models 2 and 3, the significance attenuated. ORs for AO significantly decreased by quintile of soybean food intake in model 3 (P for trend <0.05). Sensitivity analysis (soybean food intake and cardiometabolic syndrome) After excluding soybean or soybean milk items from the FFQ data, there was no significant association of CMS or its components across soybean consumption. (Table 2 notes: significance was determined by a general linear model with the Tukey multiple comparisons test (P<0.05) after adjustment for age, education, income, physical activity, smoking status, alcohol consumption, and total energy, carbohydrate, fat, protein, and sodium intakes; values sharing the same letter are not significantly different; P values for trend use the median value of each soybean product intake quintile after the same adjustments.)
(Tables 3 and 4 notes: multivariate-adjusted logistic regression was used to estimate ORs (95% CIs), accounting for the complex sampling design effect and the appropriate sampling weights of the national survey. CMS was defined using the National Cholesterol Education Program Adult Treatment Panel III criteria with a modified waist circumference cutoff for Korean adults, as the presence of three or more of the five components: (1) abdominal obesity (waist circumference ≥90 cm in men, ≥85 cm in women); (2) high blood pressure (≥130/85 mm Hg); (3) fasting hyperglycemia (≥100 mg/dL); (4) hypertriglyceridemia (≥150 mg/dL); and (5) low HDL-C (<40 mg/dL in men, <50 mg/dL in women); the CMS score counts the components present. Model 1 was adjusted for age; model 2 was additionally adjusted for education, income, physical activity, smoking status, and alcohol consumption; model 3 was additionally adjusted for total energy, carbohydrate, total fat, protein, and sodium intakes as continuous variables; P for trend used the median value of each soybean product intake quintile with model 3 adjustments.) Meanwhile, after excluding tofu items from the FFQ data, lower ORs and more significant associations were found for several CMS components across soybean food consumption groups than when all soybean food items were considered. For CMS prevalence (Table 4), after adjusting for age, all quintiles had significantly lower ORs than the 1st quintile in model 1. After adjusting for age, education, income, smoking status, and alcohol consumption in model 2, the significance was maintained across quintiles. After additionally adjusting for total energy intake, carbohydrate intake, fat intake, protein intake, and sodium intake in model 3, the 2nd (≥2 to ≤4 times/week), 3rd (>4 to ≤8.5 times/week), and 4th quintiles of soybean intake also had significantly lower ORs than the lowest quintile ([OR, 0.75; 95% CI, 0.57 to 0.98], [OR, 0.76; 95% CI, 0.59 to 0.98], and [OR, 0.61; 95% CI, 0.48 to 0.79], respectively). However, the 5th quintile of soybean intake had a 20% lower OR than the reference group, with a 95% CI of 0.62 to 1.02, which was only marginally significant. ORs for CMS prevalence significantly decreased by quintile of soybean food intake in model 3 (P for trend <0.001). For AO prevalence after adjusting for age (Table 4), all quintiles of soybean food intake had significantly lower ORs than the reference group.
After additionally adjusting for education, income, smoking status, and alcohol consumption in model 2, the significance was maintained except for the OR of the 2nd quintile of soybean food intake. After additionally adjusting for total energy and the four nutrients' intake in model 3, the 3rd, 4th, and 5th quintiles of soybean intake showed significantly lower ORs than the 1st quintile ([OR, 0.77; 95% CI, 0.62 to 0.95], [OR, 0.68; 95% CI, 0.56 to 0.84], and [OR, 0.78; 95% CI, 0.63 to 0.97], respectively). In model 3, ORs for AO prevalence significantly decreased across soybean food intake quintiles (P for trend <0.05). ORs of high BP were significantly lower in the 3rd and 4th quintiles of soybean food intake than in the reference quintile (Table 4). However, after adjusting for confounders in models 2 and 3, only the ORs in the 4th quintile of soybean food intake were significantly lower than in the 1st quintile ([OR, 0.73; 95% CI, 0.57 to 0.94] and [OR, 0.74; 95% CI, 0.58 to 0.96], respectively). Participants in the 3rd quintile of soybean intake had the lowest OR for hypertriglyceridemia, which was significant even after adjusting for confounders in models 2 and 3 ([OR, 0.74; 95% CI, 0.59 to 0.93] and [OR, 0.76; 95% CI, 0.61 to 0.96], respectively). The 4th quintile showed a significantly lower OR than the reference after adjusting for age in model 1, although the significance attenuated after adjusting for various covariates in models 2 and 3. Soybean food intake and cardiometabolic syndrome score People in the 4th quintile (8.5 to 17 times/week) of soybean food intake had a 26% lower OR of having a CMS score of 3 than those in the 1st quintile (less than 2 times/week) of soybean food intake after adjusting for confounders in model 3 (OR, 0.74; 95% CI, 0.55 to 0.99) (Table 5). There was no significant difference in ORs for having CMS scores of 4 and 5 across soybean food intake quintiles compared to the reference. DISCUSSION We evaluated the association between soybean food consumption and cardiometabolic diseases in Korean women based on nationally representative survey data. For overall soybean food, moderate soybean food consumption was associated with lower CMS prevalence and central obesity. Especially for central obesity, ORs showed significant decreasing linearity across soybean food intake groups. The observed inverse association for CMS prevalence was attenuated in the highest quintile of soybean consumption (more than 17 times/week). This reverse J-shaped relationship was also found for other CMS components, as the OR of the 5th group was higher than that of the 4th group. Even for AO prevalence, which had a significant inverse linear trend across soybean intake quintiles, the OR of the 5th quintile did not show significance. After excluding tofu items, CMS prevalence and central obesity had a prominent decreasing association across soybean intake groups, while only moderate consumption was related to a lower OR for high BP and hypertriglyceridemia. P values for trend in CMS and AO prevalence, using the median value of each soybean product intake quintile after adjusting for confounders in model 3, showed significant linearity across soybean food intake groups. The other components had a reverse J-shaped relation with soybean intake: ORs tended to decrease up to the 4th quintile but increased again in the 5th quintile.
When observing general characteristics, women in the 4th quintile of soybean product intake showed some desirable health conditions: the lowest BMI, the highest household income, vigorous physical activity, and high energy-adjusted protein intake. Meanwhile, women in the 5th quintile showed some undesirable health conditions: they were the oldest and had the highest BMI, the highest total energy intake, and high energy-adjusted carbohydrate intake. We performed some additional analyses to see the effect of each variable that was higher in the 5th group. To determine whether age modified the association with CMS, we first examined the interaction of soybean intake quintiles with age. However, there was no significant interaction over all quintiles for CMS components (results not shown). Secondly, as the older women tended to be postmenopausal, we stratified participants by post- or pre-menopause. All the significance disappeared in the premenopausal group. Postmenopausal women showed significantly lower ORs for AO in the 4th and 5th quintiles of soybean intake. The influence of age and menopause on the relation between soybean intake and CMS components is presumed to be small. To determine the interaction of BMI with soybean intake, we stratified participants by BMI into obesity, pre-obesity, and below pre-obesity [32] and then analyzed CMS components. There were no significant ORs in the non-obese groups after adjusting for various covariates in model 3. In the obesity group, the 4th and 5th quintiles had significantly lower ORs than the reference group for AO prevalence in model 3. This suggested that the attenuated significance in the 5th quintile was not due to higher BMI. Total energy intake and energy-adjusted carbohydrate intake might be the reason for the reverse J-shaped association of CMS components with soybean intake quintiles. Park et al. [33] reported that the MetS group had higher carbohydrate intake and lower protein and fat (monounsaturated, polyunsaturated, and saturated fatty acids) intake than the non-MetS group. Carbohydrate intake was positively associated with the risk of MetS in a previous study [33], while protein and fat intake exhibited a prominent inverse association with the prevalence of MetS [33]. According to a study of the dietary characteristics of nutrient intake in relation to health status among Koreans, the proportion of energy intake from carbohydrates was significantly higher in the group with CMS than in the group without CMS [34]. Results of the present study suggest that the group that consumed soybean food most frequently also had the highest energy-adjusted carbohydrate intake, which may explain why this group did not show an inverse association with the risk of CMS. Also, the effects of carbohydrate intake differed by BMI in previous research. In a cohort study, results showed a positive relation between high intake of added sugars from liquids and components of CMS among overweight participants, but not among normal-weight participants [35]. As participants in the top quintile of soybean food intake had the highest BMI, they might be more vulnerable to metabolic disorders than the other groups when they consumed carbohydrates. Soy protein has been found to have implications for the insulin/glucagon ratio. Isoflavones appear to influence lipid metabolism by altering gene expression for lipid-related genes [36].
By combining mRNA and macroarray analysis, genes involved in lipid metabolism, regulation of transcription and translation, protease inhibition, apoptosis, and cell proliferation regulation have been found to be expressed at higher levels in the livers of rats fed low- or high-isoflavone soy protein diets compared to the livers of rats fed a casein diet [37]. For humans, according to a meta-analysis of 14 studies conducted on 11 cohorts, the group with the highest legume consumption had a 10% decreased risk of both cardiovascular disease and coronary heart disease compared to the group with the lowest legume consumption [38]. Recent studies in Korean adult women have shown that they mainly obtain isoflavones from soybean foods, which are known to have beneficial effects on several metabolic disorders [22,39,40]. Consistent with our study, previous research has indicated that frequent consumption of a soybean diet could regulate overall levels of lipid and fat accumulation in the liver, which causes insulin resistance and dyslipidemia [29,41]. Some experimental studies have reported that soybeans contain proteins that are more effective than animal proteins for weight reduction and arterial stiffness. In addition, soybean contains isoflavonoids and saponins that can improve lipid profiles and the risk of CMS [13,42,43]. Potential mechanisms by which soy protein and isoflavones might prevent CMS include a beneficial effect on plasma lipid concentrations, antioxidant effects, antiproliferative and antimigratory effects on smooth muscle cells, effects on thrombus formation, and maintenance of normal vascular reactivity [36]. The AST-to-ALT ratio showed a different pattern in the most frequent soybean consumption group. The serum AST-to-ALT ratio is a surrogate measure for NAFLD. It has been shown to be inversely associated with metabolic disorders and insulin resistance in previous clinical and epidemiological studies [44,45]. A cohort study showed that an increased AST-to-ALT ratio was correlated with a consistent reduction in the onset of CMS and its components [46]. This implies that the more soybean food we eat, the more beneficial it may be for liver metabolism, ultimately delaying the onset of CMS [29]. Since we could not observe an inverse relation of soybean food intake with insulin resistance, HbA1c level, or other markers related to glucose metabolism, our participants may represent a comparatively healthy population, as those already diagnosed with T2DM, hyperlipidemia, or hypertension were excluded before analysis. Thus, we could not see any statistical significance. The beneficial effect of soybean may be more prominent among unhealthy people. An epidemiologic study has shown that higher intake of soy food is related to a lower incidence of T2DM among overweight women [10]. Some cross-sectional studies have found that the association of soybean intake with AO and cholesterol level appears to be stronger among postmenopausal women than among premenopausal women [10,20]. ORs of individual CMS components according to overall soybean food intake frequency differed from those for soybean food intake without tofu items, which showed more significant decreases in AO, BP, and TG. Similarly, some observational studies have found that eating soybean food has a significant inverse relation with CVD, although eating tofu and miso has only a borderline significant association [38].
Epidemiologic and experimental studies have also reported that components in soybean are different depending on processing method which might play a role in the positive association [47,48]. We acknowledge that the present study has several limitations. First, we included only women in this study. It is possible that men differ from women in effects of soybean food consumption on CMS. The reason why we did not include men was because soy consumption in Korea was especially high among women. This is because soy isoflavones are associated with female hormones. Thus, soy consumption seems to be more related to women's health. Second, our study had a cross-sectional design. Thus, we could not determine a causal relationship of soybean consumption with risk of CMS among Korean women. Lastly, we could not rule out the presence of unknown confounders. Some unmeasured and residual confounding factors could exist. However, it is doubtful that those confounding would completely erase the association found in the present study. In addition, the assessment of soybean intake was mainly based on questions related to the intake of food ingredients-soybean milk, tofu, and soybean. We were unable to distinguish the effect of different processing types of soybean food (fermented, fried, unsweetened food, etc.). However, other studies on Korean adult women have shown that ingestion of soy isoflavones is mainly induced by intake of soybean paste, soybean milk, and tofu, although there is a difference in the level of soybean food intake [20,39]. So identifying the frequency of eating soybean foods in Korean women may have a similar effect as comparing the nutrient intake of soybeans. Although this study had limitations, we adjusted for several potential confounders in our analyses. The present study also has strengths. KNHANES had a large nationally representative sampling design that provided detailed information, thus allowing for better control of potential confounders among the Korean population. These results could provide insight into the influence of soy consumption on general population statistically. In addition, we conducted a sensitivity analysis to determine the association varied from each soybean food item. When we analyzed the association with total soybean food item, the result showed that only AO had significant association. In Asia, there are plenty of processing method using soy bean. Thus, it was unclear whether each subgroup of soybean food would show the same result with CMS prevalence. In conclusion, we found that moderate consumption of soybean food showed negative association with having CMS and AO among healthy women. Our results warrant further studies such as randomized controlled trials about the effect of various kinds of processed soybean food intake frequency on CMS prevalence. If our results are confirmed by further research, it could encourage the consumption of soybean food as a healthy alternative to Western-style meals. CONFLICTS OF INTEREST No potential conflict of interest relevant to this article was reported.
2019-12-12T10:50:17.349Z
2019-12-02T00:00:00.000
{ "year": 2019, "sha1": "faf633a42c9cb7ff87bb6d1715010b5bb5e93042", "oa_license": "CCBYNC", "oa_url": "https://www.e-dmj.org/upload/pdf/dmj-44-143.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a7236fd78743052dcbdf73f4b830f39b58a2218e", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247945403
pes2o/s2orc
v3-fos-license
Cruciform DNA Structures Act as Legible Templates for Accelerating Homologous Recombination in Transgenic Animals Inverted repeat (IR) DNA sequences form cruciform structures. Some genetic disorders are the result of genome inversion or translocation by cruciform DNA structures. The present study examined whether exogenous DNA integration into the chromosomes of transgenic animals was related to cruciform DNA structures. Large imperfect cruciform structures were frequently predicted around predestinated transgene integration sites in host genomes of microinjection-based transgenic (Tg) animals (αLA-LPH Tg goat, Akr1A1 eGFP/eGFP Tg mouse, and NFκB-Luc Tg mouse) or CRISPR/Cas9 gene-editing (GE) animals (αLA-AP1 GE mouse). Transgene cassettes were imperfectly matched with their predestinated sequences. According to the analyzed data, we proposed a putative model in which the flexible cruciform DNA structures acted as a legible template for DNA integration into linear DNAs or double-strand break (DSB) alleles. To demonstrate this model, artificial inverted repeat knock-in (KI) reporter plasmids were created to analyze the KI rate using the CRISPR/Cas9 system in NIH3T3 cells. Notably, the KI rate of the 5′ homologous arm inverted repeat donor plasmid (5′IR) with the ROSA gRNA group (31.5%) was significantly higher than that of the knock-in reporter donor plasmid (KIR) with the ROSA gRNA group (21.3%, p < 0.05). However, the KI rate of the 3′ inverted terminal repeat/inverted repeat donor plasmid (3′ITRIR) group was not different from the KIR group (23.0% vs. 22.0%). These results demonstrated that the legibility of the sequence with the cruciform DNA existing in the transgene promoted homologous recombination (HR) with a higher KI rate. Our findings suggest that flexible cruciform DNAs folded by IR sequences improve the legibility and accelerate DNA 3′-overhang integration into the host genome via homologous recombination machinery. Introduction Direct repeats (DRs) and inverted repeats (IRs) create genome instability and have many functions via the formation of specific secondary structures in the genome [1]. The genome instability of IRs causes cruciform extrusion [2,3]. IRs are partially mutated or deleted during replication failure or enzyme digestion in vivo [4]. Some environmental factors affect DNA conformation, such as salt conditions, epigenetic modification, or DNA-binding protein interactions. For example, the 14-3-3 cruciform-binding protein binds to cruciform DNAs and regulates eukaryotic DNA replication [5]. The length of an IR structure is approximately 150 to 200 bp. However, the existence of mismatches and spacers in IR structures improves the stability of IRs [6]. The instability of the DNA structure due to direct or inverted repeat sequences is the primary cause of genetic disorders such as Huntington's disease or type A hemophilia. Type A hemophilia occurs because of clotting FVIII deficiency. Approximately 50% of FVIII-deficient patients have large fragment inversions at intron 1 (int1) and intron 22 (int22) in their FVIII gene (F8). Notably, a large IR region was found at int22 homolog 2 (int22h2) and int22h3 [7]. Although int1 also contains a hotspot for F8 inversion, only one int1h region was found. The int1 inversion rate was much lower than the int22 inversion rate [8,9]. There are many inversion hotspots, including inverted repeat regions in autosomal chromosomes, such as hVIPR2 [10] and human chromosome 15q11-q13 [11].
These reports suggest that non-allelic homologous recombination (HR) generally occurs at large inverted repeats. Therefore, we investigated whether cruciform DNA was easier to identify as a legible template for accelerating HR processing. Current studies revealed that most virus genomes integrate into host chromosomes using inverted repeat sequence machinery, such as inverted terminal repeats (ITRs) from adeno-associated virus (AAV) families [12] and long terminal repeats (LTRs) from retrovirus and lentivirus families [13]. Notably, the piggyBac transposon system, which was isolated from the cabbage looper, contains two terminal repeats [14]. All of these integration mechanisms require specific DNA-binding proteins to import viral DNA into nuclei and recombine viral DNA into the host genome. However, the AAV vector integrates at double-strand break (DSB) sites without DNA-binding proteins [12,15]. The AAV2 ITR mimics the DSB site to gather DNA repair enzymes [16]. These reports showed that IR sequences acted as motifs of DNA-binding proteins from viruses and interacted with the repair mechanisms of host cells. Taken together, DNA structures or specific recombinases participate in exogenous DNA integration. Apart from viral systems, other transgenic methods in animal fields show a low efficiency of exogenous DNA integration or require a long time and a high cost to select successfully recombinant cells or individuals [17]. Therefore, the discovery of new methods to improve the transgenic efficiency in animal genetic engineering is an important issue. However, the exogenous DNA integration mechanisms have not been clearly elucidated for pronuclear microinjection methods with linear DNA or gene editing with CRISPR/Cas9 tools. Newly developed gene-editing tools provide more powerful methods to create site-specific gene knockout (KO) or knock-in (KI) animals. However, the KI efficiency for the insertion of large exogenous fragments remains low in all platforms. Single-strand DNA (ssDNA) is the most popular material to increase the KI rate for the generation of gene-edited animals [18][19][20][21]. Previous studies demonstrated that the legibility of ssDNA is better than that of double-stranded DNA (dsDNA) in DNA repair mechanisms. The present study hypothesized that the stability of dsDNA with cruciform structures was a factor in template alleles for DNA repair mechanisms. Four transgenic (Tg) or gene-edited (GE) animals were used to study the structural characteristics of the sequences surrounding integration sites before and after transgene insertion or KI using next-generation sequencing (NGS) and the bioinformatics tool of the UNAFold Web Server "DNA folding form". We constructed different artificial IR structures in the 5′- or 3′-end of transgenes to elucidate the mechanism of template allele determination that accelerates the KI rate in HR processing. Cruciform DNA Structures Are a Hotspot of HR in Transgenic Animal Genomes We were interested in whether particular structures surround the predestinated transgene (Tg) integration sites of the three Tg animals. The Tg cassettes inserted randomly into the host genomes because those Tg animals were generated by pronuclear microinjection with linear DNA. The host genome sequences were analyzed using NGS and secondary structures were evaluated by the UNAFold Web Server to identify the repair mechanisms used for exogenous DNA integration.
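The secondary-structure predictions in this study were obtained with the UNAFold Web Server. As a rough, complementary screen, one can scan a sequence window around an integration site for perfectly matching inverted repeats, the motifs capable of extruding into cruciform/stem-loop structures. The sketch below is only illustrative: the arm length and maximum spacer are arbitrary choices, and it does not compute folding free energies.

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def find_inverted_repeats(seq: str, arm_len: int = 12, max_spacer: int = 50):
    # Return (left_arm_start, right_arm_start, spacer_length) for every pair of perfectly
    # matching inverted repeat arms that could fold back into a stem-loop/cruciform.
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - arm_len + 1):
        left = seq[i:i + arm_len]
        target = revcomp(left)
        window_end = min(len(seq), i + arm_len + max_spacer + arm_len)
        j = seq.find(target, i + arm_len, window_end)
        if j != -1:
            hits.append((i, j, j - (i + arm_len)))
    return hits

# Example: a 12-bp perfect inverted repeat separated by a 6-nt spacer.
example = "TTACGGATCCGA" + "AAAAAA" + revcomp("TTACGGATCCGA")
print(find_inverted_repeats(example))  # -> [(0, 18, 6)]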
Notably, imperfect, large stem-loop structures were found within approximately 2000 nucleotides (nts) around the predestinated Tg integration sites in the host genome sequences (Figure 1A-D). The results showed that these regions were hotspots of HR because cruciform structures were present upstream and/or downstream of the target site to create DNA instability. According to the results, we hypothesized that the Tg cassettes integrated into the host genomes via HR. Moreover, the linear Tg cassettes should be 5′- to 3′-resected by the Mre11-Rad50-Nbs1 complex (MRN complex). The linear Tg cassettes should match the predestinated template sequences in the host genomes. Therefore, we fused the sequences of the transgene (Tg) cassette 3′-overhang and the predestinated matched template sequence to predict the secondary structures and find the matched regions. Imperfect stem-loop structures were formed between the Tg cassette 3′-overhang and the predestinated matched template sequence. The data suggested that Tg cassettes imperfectly matched the template DNAs in three different analyzed transgenic animals, including the αLA-LPH Tg goat (Figure 1a,b), NFκB-Luc Tg mouse (Figure 1c,d), and Akr1A1 eGFP/eGFP Tg mouse (Figure 1e,f). The following three observations were made: First, the integration sites occurred in the loop region of cruciform structures. Second, the intervals of predestinated matched templates were replaced. Third, the Tg cassettes remained complete after integration. DNA integration may be highly tolerant of large cis-stem-loop structures using the HR mechanism. The cruciform DNA structure was frequently present near the predestinated Tg integration sites of transgenic animals. Therefore, we further determined whether gene-edited animals generated using knock-in methods also showed the same pattern as the Tg animals that were generated via pronuclear microinjection (Figure 2). We analyzed the secondary structure of the sequences surrounding the gene knock-in site in the mouse ROSA locus. A strong stem-loop secondary structure was identified in the ROSA locus (Figure 2a). The ICR-Gt(ROSA)26Sor em(αLa-AP1x6)BM2/M GE mouse was prepared using the clustered, regularly interspaced, short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system to induce a DSB and knock in the αLA-AP1x6 Tg cassette at the ROSA26 locus following the HR mechanism. The repair outcome should have precisely followed our donor Tg cassette. However, a large unpredictable insertion (U.P. ins.) DNA sequence was observed between the Tg cassette 3′-terminus and the ROSA 3′-homologous arm (HA) (Figure 2d,e). Notably, the Gibbs free energy (dG) value of the stem-loop structure of the integrated genome sequence was higher than that of the vector sequence. The same situation was observed between the αLA-LPH Tg goats with or without the unpredictable insertion (Figure 3). These data suggested that dsDNA is more stable with the U.P. ins. than before the insertion. This finding suggests that other repair mechanisms stabilize junction sites by modifying sequences after HR processing. The αLA-LPH Tg goat full-genome sequencing data showed that the repair mechanisms recombined more than one chromosome fragment at the insertion site. These results suggest that the HR mechanism generated the U.P. ins. (Figure S1). (Figure legend abbreviations: Tg cassette, transgene cassette; U.P. ins., unpredictable insertion; U.P. del., unpredictable deletion; U.P. r-arngmt., unpredictable rearrangement; dG, Gibbs free energy; blue and black arrows mark the blue and black sequences in the secondary structures; sequence details in Table S1.)
The Cruciform DNA Structure in HA Improves KI Efficiency

We then asked whether the cruciform DNA structure is a legible template for HR. Three artificial inverted repeat (IR) HA donor plasmids, including a standard knock-in reporter (KIR), a 5′-inverted repeat knock-in reporter (5′IR), and a 3′-extra inverted terminal repeat (ITR)/inverted repeat knock-in reporter (3′ITRIR), were designed to elucidate whether cruciform DNA actually improved the legibility of the template and thus the HR efficiency in NIH/3T3 cells (Figure 4a). Our hypothesis was that the artificial inverted repeat HA in the donor plasmid would accelerate HR when a double-strand break (DSB) was created by the CRISPR/Cas9 machinery. The data showed that the KI efficiency of the 5′IR donor with ROSA gRNA (31.5%) was significantly higher than that of the KIR donor (21.3%) (p < 0.05). In contrast, without ROSA gRNA, the KI efficiency of the 5′IR donor did not differ from that of the KIR donor (17.7% vs. 13.7%) (Figure 4a). These results suggest that the 5′ inverted repeat HA improves the HR efficiency in cells and that the improvement in KI efficiency only occurs after DSB induction by gene-editing tools.
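The KI efficiencies quoted above come from the dual-fluorescence reporter readout described later in the Methods, where red fluorescence marks transfected cells and green plus red fluorescence marks cells that were both transfected and knocked-in, so that the KI rate is (G + R)/R. A minimal sketch of that calculation follows; the event counts are hypothetical and were chosen only to give percentages of the same order as those reported.

```python
# Illustrative sketch of the knock-in rate readout: KI rate = (G+R)/R.
# The gate counts below are hypothetical, not the study's raw flow-cytometry data.

def ki_rate(double_positive: int, red_positive: int) -> float:
    """Fraction of transfected (red+) cells that are also green+ (knocked-in)."""
    if red_positive == 0:
        raise ValueError("no transfected (red+) events recorded")
    return double_positive / red_positive

# hypothetical gate counts for two transfection conditions
samples = {
    "KIR + ROSA gRNA":  {"G+R": 4260, "R": 20000},
    "5'IR + ROSA gRNA": {"G+R": 6300, "R": 20000},
}
for name, counts in samples.items():
    print(f"{name}: KI rate = {ki_rate(counts['G+R'], counts['R']):.1%}")
```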
Only the Presence of the Cruciform Structure in HA Improves the KI Efficiency

We further examined whether the location of the cruciform structure was an important issue for KI efficiency. An inverted repeat AAV2-ITR sequence (ITRIR) was constructed outside of the ROSA 3′HA. After transfection, the KI efficiency of the ROSA 3′HA (3′ITRIR) donor did not show improvement compared to the KIR with ROSA gRNA group (23.0% vs. 22.0%) (p = 0.318) (Figure 4b). Our data suggested that the KI efficiency of the HR mechanism was only improved by a cruciform structure located within the HA sequence.

Discussion

This study provides three main findings: (a) We observed an inverted repeat structure that was frequently present at Tg integration sites and genome inversion hotspots. (b) We are the first group to hypothesize a putative model of the transgene integration mechanism via HR processing and to use a gene-editing tool to analyze the effects of cruciform DNA structures on the HR mechanism. (c) We successfully demonstrated that the presence of an extra inverted repeat structure in the HA region of the 5′IR plasmid construct significantly increased the KI efficiency under gRNA guidance, whereas the 3′ITRIR plasmid construct, which contained an inverted ITR repeat outside of the HA region, did not.

According to our findings, we propose a putative model for exogenous DNA insertion cross-reacting with the stem-loop structures of host genomes. The instability of template DNA containing several inverted repeat cruciform DNA structures improves recombination enzyme recognition, and the template DNA was easily integrated via the 3′-overhang of the linear exogenous DNAs (Figure 5a-c). The linear DNAs were linked with the gDNA via asymmetric Holliday junction resolution (Figure 5d-g). HR or other repair mechanisms may insert unpredicted sequences into the junctions between Tg cassettes and host genomes [22]. The increase in dG values after the unpredicted insertion may provide evidence that gDNA stability is related to the unpredicted modification.
Several mechanisms may explain the improvement in template DNA legibility by inverted repeat sequences. First, inverted repeat sequences are relatively unstable in vivo [23,24]; therefore, HR enzymes, especially Rad51 and Rad54, may easily identify them. Second, inverted repeat sequences are likely origins of DNA replication [5,25]. The DNA replication mechanism may help the template DNA amplify itself and maintain its half-life in cells, and HR enzymes are more active during DNA replication in S phase [26]; therefore, HR enzymes are more likely to recognize the template DNA. Third, many types of cruciform DNA-binding proteins (CBPs), such as Rad51AP1 [27,28], Rad54 [29], 14-3-3 [30], PARP-1 [31], p53 [32], and BRCA1 [33], play roles in DNA replication and repair in mammalian cells [34]. These reports suggest that the structural conformation of inverted repeat DNA interacts strongly with the mechanisms of DNA replication and repair. Our data provide two new pieces of evidence: first, gDNA sequences around the predestinated Tg integration sites generally form stem-loop structures; second, an artificial cruciform DNA structure plays a role in improving HR for the template DNA in vivo. Therefore, the cruciform DNA structure increases DNA legibility in vivo via the HR mechanism.

AAV is a useful tool for long-term maintenance in mammalian cells without integration into host genomes. The recombinant AAV (rAAV) genome generally persists as episomal concatemers (>99.99%) in vivo [35], which include cruciform DNA structures at the ITR-IR regions of the episomes. The cruciform structure is formed via HR after second-strand DNA synthesis. The episomal concatemer is highly persistent in nuclei in a chromatin-like structure [36]. AAV-ITR-IR may play a key role in increasing plasmid persistence in vivo [37], and the AAV ITR sequence also has transcriptional activity in vivo [38]. Notably, the transcriptional silencing of rAAV in mammalian cells is related to the interaction between the ATM/MRN complex and the T-shaped ITRs [39][40][41]. These reports indicate that the ITR structure is the key that mediates rAAV interaction with host cells. Notably, some inverted repeat sequences, such as the AAV2 ITR, AAVS1, and Alu long inverted repeats [42], slightly induce the HR repair mechanism [16]. That previous study suggested that cruciform DNA structures induce HR and are directly replaced after induction of a DSB. Our study also showed that all four transgene cassettes integrated into cruciform DNA structures at the insertion sites in Tg animals. However, we found that the predestinated matched templates, which imperfectly matched the Tg cassettes, were located near integration sites in host chromosomes without a DSB, and all predestinated matched templates were replaced by the Tg cassettes. Therefore, assembly regions were not found after DSB.
Our data suggested that the Tg cassettes integrated into, and imperfectly matched, the assembly regions before a DSB occurred. Therefore, cruciform DNA structures act as templates for Tg integration in the HR mechanism, but they did not induce a DSB before Tg cassette integration.

Platforms for the construction and amplification of large IR plasmids are not stable. Therefore, it is critical to improve methods of IR plasmid production in order to analyze the relationship between DNA structures and the HR mechanism in depth. Some commercial competent cells are compatible with plasmids containing unstable structures, such as the Stbl strains (Invitrogen), SURE strains (Agilent), Endura strains (Lucigen), and NEB Stable strains (New England Biolabs). Although such competent cells may be used, the stability and yield of plasmid production are plasmid-dependent; in our study, the plasmid yield of 5′IR in the SURE2 strain was lower than in the DH5α strain (ECOS101™) (data not shown). The closed linear plasmid from Lucigen may overcome the issues of unstable DNA construction and amplification. Although these companies offer products intended to overcome unstable plasmid production, individual users should test the precise protocol.

Gene-editing methods have developed rapidly over the past decade. KO and KI rely on DNA double-strand breaks induced by protein- or RNA-guided site-specific endonucleases, such as zinc-finger nucleases (ZFNs) [43], transcription activator-like effector nucleases (TALENs) [44], and CRISPR/Cas9 [45]. After a DSB, cells repair their DNA via non-homologous end-joining (NHEJ) or HR [46]. Most NHEJ repairs result in insertion-deletion mutations, and genes generally lose function when NHEJ repair occurs within the coding DNA sequence (CDS), which constitutes a gene KO. The HR mechanism uses multiple steps to repair DNA: the donor DNA integrates into the DSB locus when the donor DNA is used as the HA template. However, the HR efficiency is much lower than that of NHEJ in cells [47,48], which explains the lower KI rate compared to the KO rate [49]. Therefore, understanding the NHEJ/HR switch mechanism is the foundation for improving HR efficiency. Table 1 summarizes the factors involved in the NHEJ/HR switch mechanism, including repair enzymes and DNA conformation. Regarding repair enzyme factors, the activities of DSB repair enzymes have been manipulated to improve KI efficiency in recent reports, such as HR enzyme overexpression or RS-1 addition [50][51][52] (approximately 0-20% improvement), NHEJ inhibitor addition [53][54][55] (approximately 0-40%), and addition of small molecules to arrest the cell cycle [56,57] (approximately 20-40% improvement in KI efficiency). However, these reports showed a wide range of results and were not consistent in the degree of KI improvement. Regarding DNA conformation factors, the distance between the template and the host DNA has been manipulated to improve KI efficiency using a modified biotin-streptavidin approach that localizes repair templates to target sites; that study hypothesized that the distance between the two alleles significantly affects HR efficiency [58], but its results showed high variation in KI efficiency. Current experimental data suggest that the variation in KI efficiency is largely sequence-dependent [59]; however, these data provide no clear answer as to how DNA sequences affect KI efficiency. Moreover, we were interested in which side of the HR mechanism is sequence-dependent: the DSB allele or the template allele.
Notably, the KI efficiency of the 5′IR plasmid was better than that of the KIR plasmid in our report. These data suggested that a template DNA containing an unstable cruciform DNA structure improves HR efficiency. Therefore, the structure/stability dependency may be more important for the template allele than for the DSB allele.

Table 1. Factors involved in the NHEJ/HR switch mechanism.
Repair enzymes / activation of the HR pathway: weak improvement of KI efficiency by HR enzyme overexpression or RS-1 addition (0-20%) [50][51][52].
Repair enzymes / inhibition of the NHEJ pathway: moderate improvement of KI efficiency by NHEJ inhibitors (0-40%) [53][54][55].
Repair enzymes / arrest of the cell cycle: strong improvement of KI efficiency by using small molecules to arrest the cell cycle (20-40%) [56,57].
DNA conformation / distance between the template DNA and the host DNA: strong improvement of KI efficiency by modifying the donor DNA (20-40%) [58].
DNA conformation / structure of the template DNA: indirect evidence showed that the cruciform structure from the ITR affected KI efficiency [60,61]; our data suggested that the cruciform structure improved KI efficiency (10-20%) (this study).
DNA conformation / transcription activity of the DSB allele: a high level of transcription activity around DSB sites induced HR via Rad52 activation and 53BP1 inhibition [62][63][64]; DNA:RNA hybrid forms are related to DNA repair mechanisms at DSB loci [65,66].

Akr1a1 eGFP/eGFP Tg Mice

Our previous report used the compatible ends ligation inverse-PCR (CELI-PCR) method to analyze the Akr1a1 eGFP/eGFP Tg integration site. Briefly, genomic DNA was isolated from tail snips of Akr1a1 eGFP/eGFP Tg mice and digested with BglII and BamHI (NEB, Ipswich, MA, USA). The digested DNA fragments were ligated into a circular form, and the targeted DNA was amplified from the circular DNA using inverse PCR. PCR products were cloned into the pGEM-T Easy vector (A1360; Promega, Madison, WI, USA). Plasmid DNAs from single colonies were isolated and used for Sanger sequencing.

NFκB-Luc Tg Mice

The targeted locus amplification-next-generation sequencing (TLA-NGS) method was used to analyze the genome of NFκB-Luc Tg mice in this study. We isolated primary tail-snip fibroblasts from NFκB-Luc Tg mice. The cells were fixed with 2% formaldehyde, and the genomic DNA (gDNA) was subjected to in-cell digestion with Hin1II-FD (FD1834; Thermo Fisher Scientific Inc., Waltham, MA, USA). The digested gDNAs were then ligated using T4 ligase (15224041; Thermo Fisher Scientific Inc.). After removal of the formaldehyde crosslinks and purification by the phenol/chloroform and ethanol precipitation method, the DNAs were digested with XceI-FD (FD1474; Thermo Fisher Scientific Inc.). The TLA template was prepared via a second ligation using T4 ligase (15224041; Thermo Fisher Scientific Inc.). We used the primer sets TLA-L and TLA-R to amplify the target region using PfuUltra II Fusion HS DNA polymerase (600672; Agilent Technologies Inc., Santa Clara, CA, USA). The Celero PCR Workflow with Enzymatic Fragmentation library preparation kit (9363; TECAN, Seestrasse, Männedorf, Switzerland) was used to construct the DNA library. The NGS library was analyzed via paired-end sequencing on a MiSeq system (SY-410-1003; Illumina Inc., San Diego, CA, USA) by Tri-I Biotech Inc. (Taipei, Taiwan).

Secondary Structure Prediction

The UNAFold Web Server "DNA folding form" (http://www.unafold.org/mfold/applications/dna-folding-form.php, accessed on 19 August 2021) was used to analyze the secondary structure of approximately 2000 nt of the surrounding sequences at Tg integration sites [69].
We chose "polymer" as the correction type, "Untangle with loop fix" as the structure draw mode, and "Flat" or "Flat-alt" as the exterior loop type for all analyses; other options were left at their default settings. All sequences were deposited in the National Center for Biotechnology Information (NCBI) database (https://www.ncbi.nlm.nih.gov, accessed on 22 October 2021). The analytic intervals are provided in Table S1.

Construction of Knock-In Reporter Plasmids

We established a KI efficiency reporter system using the FLEX system. The SA-T2A-Cre cassette was constructed downstream of the ROSA 5′HA in pDonor MCS Rosa26 (a gift from Charles Gersbach; Addgene plasmid #37200; http://n2t.net/addgene:37200 (accessed on 30 November 2015); RRID: Addgene_37200). The FLEX system was constructed downstream of the SA-T2A-Cre cassette. The KIR donor plasmid thus included the ROSA 5′HA, SA-T2A-Cre, the FLEX system, and the ROSA 3′HA. The 5′IR donor plasmid was constructed by adding the inverse ROSA 5′HA to the KIR donor plasmid. The 3′ITRIR donor plasmid was constructed by adding an inverted AAV2 ITR sequence to the KIR donor plasmid.

Quantitation of Knock-In Rates

KI rates were measured 24 h post-transfection using flow cytometry (Accuri™ C6 Plus; BD Bioscience, East Rutherford, NJ, USA) with 20,000 transfected cells per sample. Red fluorescence-positive cells were used as the control for transfection efficiency, and cells positive for both green (G) and red (R) fluorescence (G + R) indicated successful transfection and KI. The KI rate was determined as (G + R)/R. The background signals of the knock-in reporter system and the transfection efficiencies were tested to find an optimal condition under which leaky expression could be neglected (Figures S2-S4) [70]. We used an optimal transfection condition of co-transfection of 50 ng donor plasmid and 1.8 µg CRISPR/Cas9-mCherry all-in-one plasmid per well of a 24-well plate (Figure S3).

Statistical Analysis

The results are expressed as means ± SD (standard deviation). Statistical analyses were performed using one-way ANOVA with Fisher's least significant difference (LSD) test in SPSS software (Version 20, IBM Corp., Armonk, NY, USA). p < 0.05 was considered statistically significant (* p < 0.05 and ** p < 0.01).

Conclusions

We report, for the first time, the presence of cruciform DNA structures in the genomic sequences at the transgene integration sites analyzed in four different strains of Tg animals. We found that the ROSA26 locus exhibits a similar cruciform DNA structure at the knock-in site. We analyzed the potential function of cruciform DNA structures during HR processing using the CRISPR/Cas9 knock-in system, and our data suggest that a cruciform DNA structure in the HA region improves the knock-in rate. In conclusion, an inverted repeat sequence may be more legible and more readily recognized as a template for the HR mechanism because of the structural instability of the cruciform DNA.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
Precise assembly and joining of silver nanowires in three dimensions for highly conductive composite structures

Three-dimensional (3D) electrically conductive micro/nanostructures are now a key component in a broad range of research and industry fields. In this work, a novel method is developed to realize metallic 3D micro/nanostructures with silver-thiol-acrylate composites via two-photon polymerization followed by femtosecond laser nanojoining. Complex 3D micro/nanoscale conductive structures have been successfully fabricated with ∼200 nm resolution. The loading of silver nanowires (AgNWs) and the joining of junctions successfully enhance the electrical conductivity of the composites from insulating to 92.9 S m⁻¹ at room temperature. Moreover, for the first time, a reversible switching to a higher conductivity is observed, up to ∼10⁵ S m⁻¹ at 523 K. The temperature-dependent conductivity of the composite is analyzed following the variable range hopping and thermal activation models. The nanomaterial assembly and joining method demonstrated in this study paves a way towards a wide range of device applications, including 3D electronics, sensors, memristors, micro/nanoelectromechanical systems, and biomedical devices.

Introduction

Over the last decade, micro/nanostructures have been receiving increasing attention due to their potential applications in modern nanotechnology and emerging fields, such as microelectronics [1], flexible electrodes [2], micro/nanoelectromechanical systems (MEMS/NEMS) [3], photonics and optoelectronics [4], metamaterials [5], and energy storage [6]. As it becomes increasingly challenging for Moore's Law to continue to push the two-dimensional miniaturization limit toward the atomic level, research on the integration/assembly of functional micro/nanostructures in three-dimensional (3D) space for device applications is becoming increasingly important [7]. Direct laser writing by two-photon polymerization (TPP) has been established as one of the most promising methods for achieving 3D fabrication at the micro/nanoscale, due to its ability to produce arbitrary and complex 3D structures with subwavelength resolution [8]. However, the lack of TPP-compatible functional materials represents a significant barrier to realizing device functionality, such as high electrical conductivity, high environmental sensitivity, and high mechanical strength [9]. Recently, several studies have demonstrated the use of TPP for the fabrication of electrically conductive microstructures, including selective plating [10], metallic inversion [4], in situ photoreduction with photopolymerization [11], and direct photopolymerization of composite resins [12]. Both the selective plating and metallic inversion methods involve multiple, time- and cost-intensive synthetic steps. In situ photoreduction synthesis allows single-step fabrication of two-dimensional patterns of noble-metal nanoparticles (NPs) [11], discrete NP-doped polymer microstructures [11], and even bridge-like conductive elements [11]. However, this method finds it intrinsically difficult to simultaneously satisfy precise morphology and high electrical conductivity [13]. In our previous work, multiwalled carbon nanotubes (MWNTs) were employed for the fabrication of 3D conductive micro/nanostructures [12]. The composite photoresist showed significantly enhanced electrical conductivity (up to 46.8 S m⁻¹ at 0.2 wt% MWNTs), with strong anisotropic properties and high optical transmittance.
Nevertheless, the composite conductivity is still limited by the intrinsic conductivity of MWNTs (roughly 5×10 3 to 5×10 6 S m −1 ) [14] and relatively high tube-totube contact resistance. Compared with carbon nanotubes, silver (Ag) is considered to be an ideal conductive material (6.3×10 7 S m −1 ). The small dimension and high aspect ratio of one-dimensional silver nanowires (AgNWs) could effectively transport electrical carriers along one controllable path [15], and moreover, interconnected nanowire networks can be realized by the wire-to-wire junction joining/welding [2,16], thereby leading to increased electrical properties with a low nanofiller loading concentration. However, to the best of our knowledge, 3D AgNWs-based micro/nanostructures of high electrical conductivity and surface morphology have not been realized yet. In this paper, we report a method for designing and preparing a TPP-compatible AgNW-thiol-acrylate (ATA) composite photoresist and simultaneously achieving reliable 3D micro/ nanofabrication with high structuring accuracy and high electrical conductivity ( figure 1(a)). Moreover, a femtosecond laser was used for the nanojoining of AgNW junctions inside a polymer matrix to further reduce the wire-to-wire junction resistance and thus increase the electric conductance of the overall 3D micro/ nanostructure substantially. Finally, the temperature-dependent electrical conductivity and the resistance switching mechanism of the as-fabricated ATA bridge structures were investigated by employing the variable range hopping (VRH) model and thermal activation estimation. The as-fabricated bridge structures made of ATA composites showed a substantial three-step increase in electrical conductivity over ten-orders of magnitude with the AgNW addition, femtosecond laser nanojoining and resistive switching, respectively ( figure 1(b)). These nanocomposites are distinct from the conventional view of the electrical properties in polymer nanocomposites as either insulating or conducting below and above the percolation threshold, respectively. The fabrication method simultaneously realized the user-defined arbitrary 3D micro/nanostructuring, high spatial resolution, fine surface quality, and superior electrical conductivity, which makes it promising for various functional device applications, including 3D electronics, sensors, plasmonics, memristors, MEMS/NEMS, and biomedical devices, etc. Preparation of ATA composite resin The TPP-compatible ATA composite resins were prepared by directly mixing thiolated AgNWs with acrylate-based resin (table S1, supporting information available online at stacks. iop.org/IJEM/1/025001/mmedia) and characterized using transmission electron microscopy (TEM) and mass spectrometry. Silver nanowires were used in the composites with lengths ranging from 0.1 to 4.15 μm and concentrations from 0.005 to 0.4 wt%. Nanowires with an average diameter of 40 nm and lengths from 0.48 to 4.15 μm were prepared by breaking >30 μm long AgNWs using ultrasonic vibration (figures S1(a)-(e), supporting information). Using TPP processing to fabricate structures, the laser beam will be scattered by agglomerated nanowires at high concentration, which will distort the fabricated structures. Therefore, the maximum 0.4 wt% AgNW concentration was chosen to avoid agglomeration until changes in the electrical properties of the composites reaching a plateau value. 
Moreover, it is expected that shorter AgNWs could result in better dispersion and smoother composite structure surfaces compared with longer ones. However, a balance on AgNW length should be considered since it is easier to form a connecting network in polymers with the same effective concentration by using longer AgNWs. To obtain stable and homogeneous dispersion of AgNWs, thiol with a suitable length of carbon chains (HOC n H 2n+1 SH, n=6) was used as the surfactant in the composite resins (figure 2(a)). The composite resins appeared to be stable under ambient conditions with a color transition from clear to gray as the AgNW concentration increased ( figure 2(b)). TEM images of the ATA composite (figure 2(d)) were captured by directly analyzing the thin film made by TPP with a thickness of ∼1 μm on a copper grid. A shell (2-4 nm thickness) comprised of multiple, intertwined layers of thiol molecules formed on the surface of the AgNWs, as the thiol sulfur formed a strong chemical bond with Ag [17], leaving the other group (OH) miscible in the acrylate resin to undergo subsequent polymerization reactions [18] (magnified image in figure 2(d), figures S1(f) and (g), supporting information). The thiol layer allowed the individual AgNWs to maintain distance between each other, preventing them from aggregation and causing the resulting thiolated AgNWs to be well dispersed in the polymer matrix. In addition to TEM characterization, a KrF excimer laser (248 nm, 23 ns) was used to assist laser ablation in ambient mass spectrometry using a time-of-flight mass spectrometer (TOF-MS). Two prominent peaks were observed in the spectra of a micro-woodpile composite structure, at 107 and 109, which were ascribed to isotopes of 107 Ag and 109 Ag, respectively (figure 2(e)) [19]. A clear change in contrast was observed in the scanning electron microscopy (SEM) image of the micro-woodpile (figure 2(e) inset), indicating a compositional difference of metallic phase inside the polymer matrix, revealing the existence of the AgNWs in the ATA composite [20]. The SEM and TEM results provide experimental evidence that the AgNWs within the polymer matrix distributed in a combined way of separate nanowires and connected networks. Therefore, TPP-compatible ATA composite resins, with thiolated AgNWs uniformly dispersed in the acrylate resins, were successfully prepared. Our approach also paves a way for dispersion and assembly of various functional nanomaterials into 3D polymer structures, including carbon nanotubes, metallic NPs, semiconductor NPs or magnetic NPs, for the fabrication of advanced micro/nanodevices with additional functional properties. Fabrication of 3D metallic micro/nanostructures After identification and characterization of the composite resins, the ATA composites were employed to fabricate conductive 3D micro/nanostructures with high spatial resolution and fine surface quality. An integrated set of beam transport optics directed the laser output with circular polarization to a final focusing objective (63×, NA=1.4) that scanned in 3D according to the user-defined geometric designs and assemble AgNWs within the focal spot. After TPP lithography, the samples were developed. The unsolidified resin was rinsed away, leaving AgNWs embedded inside the solidified ATA composite structures on the substrates. To optimize the laser fabrication condition, the dependence of structural resolution on the laser power (figure 3(i)) was carefully evaluated. 
Large parallel cuboid supports were fabricated first, and then a series of lines were scanned across the supports. The suspended lines were fabricated using different laser powers and a fixed writing velocity of 100 μm s −1 . A minimum line width of 202±18 nm was observed at a polymerization threshold power of 3 mW. It was also found that the feature size of the fabricated woodpile structures decreased with the existence of AgNWs in the composite under the same processing conditions (figure S2, supporting information). The line width of the sample prepared with 0.2 wt% AgNWs exhibited a ∼15% decrease compared with that of pure acrylate structure, which can be attributed to absorption by AgNWs in the path of the laser beam prior to reaching the focal point [21]. The elliptically shaped cross-section was revealed by multi-photon ablation (MPA) on the solidified grating structures on a glass substrate (figure S3, supporting information). This 'TPP+MPA' method has been demonstrated to be an effective technique for the fabrication of microvoids and microfluidic structures [22]. Polarization of the laser beam has been reported to affect the intensity distribution and thermal gradients around the focal spot thus leading to different polymerization rates, which can, in certain cases, affect the feature size [23]. In our work, the circular polarization of incident light could ensure a more spherical voxel within the xy-plane [24] and avoid polarization-dependent linewidth. The electrical conductivity of the composites was characterized by performing cyclic I−V measurements of 100 μm long line and bar-shaped channel bridging two pairs of gold (Au) electrodes. A single line was fabricated using the ATA composite (0.05 wt% AgNW concentration, 0.1 μm long-AgNWs), and the resulting conductivity from I-V measurement was calculated to be 12.5 S m −1 (figure S3(c), supporting information). Bar shape channels with a cross-section of 5×5 μm 2 were fabricated in the following experiments, due to the multilayer structure of the host polymer could provide sufficient charge carriers' transport channels (i.e. connected AgNW junctions in 3D), and therefore avoid instability compared with single layer or line structures. Figure 3(j) shows the electrical conductivity of the ATA composites as a function of AgNW concentrations at a fixed AgNW length of 0.1 μm. The experimental results indicate that the AgNW loading plays a significant role in determining the conductivity of the composites. With only 0.005 wt% AgNWs loaded into the acrylate resin, an approximate sevenorders of magnitude increase in the electrical conductivity was observed, which could be ascribed to the highly uniform distribution of short-length AgNWs. The conductivity of the composite polymers increased as the AgNW concentration increased and reached a maximum of 32.51 S m −1 at 0.2 wt% AgNW concentration. Furthermore, the effect of AgNW length on conductivity is shown in figure 3(k). With a fixed 0.02 wt% AgNW concentration, the conductivity increased from 2.35×10 −4 to 1.13×10 −3 S m −1 as the AgNW length increased from 0.3 to 1.35 μm, due to the reduced nanowire distance and sufficient conduction pathways established by the longer AgNWs. However, as the AgNW length exceeded 1.35 μm, the conductivity of the ATA composite dropped slightly. The 4.15 μm long AgNWs were prone to forming larger aggregates which were centrifuged out of the composite resin, thus the ATA composite would have a smaller 'effective' filler concentration. 
The same phenomenon was observed for the composite with 0.1 μm long AgNWs at a 0.4 wt% loading concentration (figure 3(j)). Moreover, long AgNWs tended to produce a high number of networked AgNWs with a relatively large nanowire cross-junction resistance [25]. Consequently, the as-fabricated conductivity of the ATA composites with 0.3-4.15 μm long AgNWs (10 −4 to 10 −3 S m −1 ) was lower than 0.1 μm long AgNWs at the figure 3(j)). Based on the ATA composites, the TPP technique offered arbitrary 3D structuring capability with relatively high electrical conductivity as well as submicron spatial resolution. However, in the ATA composites, although the thiol sheath layer significantly enhanced the disperse quality of the AgNWs in the polymer matrix, it also introduced conduction barriers between nanowires. Therefore, the authors needed to find a way to further enhance the electrical conductivity of the asfabricated 3D micro/nanostructures. Laser joining of the AgNW junction With an aim of optimizing the electrical conductivity of the ATA composites, a nanojoining process was developed using femtosecond laser pulses to further reduce the wire-to-wire junction resistance of AgNWs. A variety of attempts have been made to improve the junction contacts, including Joule heating [26], thermal annealing [27], mechanical pressing [28], and plasmonic welding [29]. However, the high surfaceto-volume ratio of the AgNWs makes them vulnerable to traditional thermal treatments which cause the AgNWs to undergo a morphological instability and fragmentation into a chain of nanospheres, due to the Rayleigh instability (figure S4, supporting information) [30]. This instability presented a challenge to applying thermal annealing as a method of reducing the junction resistance of AgNWs. Femtosecond laser-induced plasmonic joining appeared to be a good candidate in terms of noncontact processing and minimum thermal damage to the nanowires and the substrate [29]. It is known that femtosecond laser irradiation can deposit energy with a time scale shorter than the electron-phonon equilibrium time and constrain the heat generation within a welldefined hot spot region determined by the distribution of the localized electrical field between the gap of AgNWs [29,31]. After TPP fabrication, a femtosecond laser (1 kHz amplified Ti:sapphire laser system, Legend F, Coherent Inc.) at 800 nm was used to join the junctions of AgNWs inside the composite structures ( figure 4(a)). As shown in figure 4(b), the nanojoining process can be described in three stages: (1) high intensity laser field was concentrated at the AgNW junctions, which initiated the decomposition of the thiol layer and softened the lattice of the AgNWs [32]; (2) the Ag atoms at the junction were thermally excited, and some portion of the material was ejected in the form of NPs (supported by the formation of NPs after laser irradiation, figure S5, supporting information); and (3) these excited NPs possessed high mobility, which diffused to the AgNW junctions and enabled the AgNWs to be joined locally ( figure 4(d), figure S6, supporting information). In a first attempt, AgNW networks were irradiated with different laser energies to obtain optimized parameters (figure S6, supporting information). As the network was processed with a sufficient laser fluence (35 mJ cm −2 ), the SEM images showed evidence of spheroidization, fusion, and significant local change in morphology at the junctions, while the rest of the AgNWs remained intact. 
A further increase in laser fluence induced photofragmentation and caused the breakup of the AgNWs [33]. Once the optimized irradiation conditions were obtained, laser nanojoining was carried out on the solidified ATA composites after the TPP process. The ATA composite films were highly transparent at 800 nm (figure S7, supporting information), thus enabling laser energy to be predominately absorbed by the AgNWs for proper nanojoining. High resolution TEM (HRTEM) images of the AgNW junctions before and after laser irradiation are shown in figures 4(c) and (d). Fourier spectra were obtained by fast-Fourier transform (FFT) from three different areas in each figure. Before laser irradiation, two AgNWs were laid on each other and separated by the thiol sheath. The diffraction patterns of the two nanowires were visible along two different directions, with roughly equal intensity and certain rotation symmetry (∼45°). These diffraction patterns represented the [110] growth direction of the AgNWs (the long axis of the nanowires). The junction showed diffraction spots primarily along a single direction, matching the diffraction pattern from the bottom nanowire. After laser illumination, the diffraction patterns and lattice structures at the contact junction were different from both of the individual nanowires (figures 4(d) and (e)). This revealed a localized laser-induced Ag atom diffusion and recrystallization following the (111) plane at the junction area during laser irradiation [34] (figure 4(f)). Moreover, it is important to point out that the original crystal orientation was intact in each NW at locations away from the junction. A four-orders of magnitude increase in electrical conductivity, from ∼3.88×10 −4 to 4.34 S m −1 , was observed in a bar-shaped channel structure fabricated using the ATA composite (0.02 wt% AgNW concentration, 0.48 μm long-AgNWs) after laser illumination, indicating the successful nanojoining of AgNWs inside the composites ( figure 4(i)). In addition, the laser nanojoining process was demonstrated to be more effective on composite resins with longer AgNWs (figures 4(j) and (k)). The enhanced electrical performance was gauged by the ratio of the conductivity with and without laser irradiation (σ joined /σ as-fabricated ). For the ATA composite with 0.1 μm long AgNWs, the ratio increased from 1.18 to 126.89 as the AgNW concentration increased from 0.005 to 0.4 wt%. The conductivity reached a maximum of 92.9 S m −1 with 0.2 wt% AgNWs concentration ( figure 4(j)). In the case of the composite with a longer AgNW length, the ratio increased from 2.65×10 3 to 1.97×10 5 with the increase in the AgNW length, due to the effective joining of an increased number of networked AgNWs in the composites ( figure 4(k)). To disclose how the AgNWs were locally joined together, finite-difference time-domain (FDTD) modeling of an individual AgNW and AgNW junction was conducted. The localized surface plasmon (SP) of a free-standing single AgNW (40 nm diameter, 1.5 μm length) occurred at 364 nm, which was far away from the wavelength of the femtosecond laser. Therefore, the joining of AgNWs does not rely on the SP of individual AgNWs ( figure S8, supporting information). With the 780 nm laser incident to the AgNW junction, the free electrons on the metal surface oscillated collectively, leading to a local electromagnetic field enhancement in the near field. 
At the AgNW junction, resonance coupling of SP endowed an intense and durative near-field enhancement of approximately 35 times the incident light, which suggests the occurrence of the localized joining effects of the nanowires, matched with our experimental results (figures 4(g) and (h), figure S8, supporting information). Therefore, both experimental and numerical simulation results confirmed that femtosecond laser irradiation provided a direct, efficient, and selective nanojoining of AgNW junctions in ATA composite, resulting in significantly improved electrical performance. Temperature-dependent electrical conductivity of ATA composite To reveal the physical origin of the electrical charge transport in the composites and explore the potential device applications, temperature-dependent studies of the ATA composites were conducted. As a proof-of-principle demonstration for electronic devices, an array of bar-shaped channels (5×5×100 μm 3 W×H×L) fabricated between two Au electrodes using the ATA composite (0.02 wt% AgNW concentration, 0.48 μm long-AgNWs) were connected in a simple circuit with a light-emitting diode (LED) ( figure 5(a)). The sample was placed in a heating chamber with nitrogen gas protection to control the temperature. When the temperature increased from room temperature (RT) to 523 K, the LED was lit with a DC voltage of 10 V ( figure 5(b)). The LED went off as the temperature decreased back to RT. The composite conductivity increased from 4.34 S m −1 at RT to 6.56×10 4 S m −1 at 523 K ( figure 5(c)). This result is a clear demonstration of the temperature-dependent conductive ATA composites and their potential for temperature-dependent electronic devices. Temperature-dependent I-V curves were collected for composites with different AgNW lengths and a fixed concentration of 0.05 wt%. The temperature was controlled below 523 K to avoid spheroidization of the AgNWs and decomposition of the polymer matrix. Figure 5(d) depicts the two-stage conductivity increase of the ATA composite with 0.1 μm long AgNWs. The conductivity first increased slowly by ∼15 times from 273 to 473 K and then increased abruptly by ∼350 times from 473 to 523 K. This temperature dependency of electrical conductivity was reversible and applicable for all composite samples. In Stage 1, from 273 to 473 K, charge carriers were activated by elevated temperature with a stable increase in conductivity, showing the semiconducting behavior of the composites [35]. Using the slopes of an Arrhenius plot, the activation energy E A was determined for quantitative analysis ( figure S10(b), table S2, supporting information). In Stage 2, from 473 to 523 K, the composite exhibited a jump in electrical conductivity and reached a maximum of 3.6×10 5 S m −1 at 523 K (0.05 wt% AgNW concentration, 0.48 μm AgNW length). This jump in conductance was demonstrated to be a typical resistive switching phenomenon [36] and occurred by the combination of temperature and field-assisted excitation. Reversible switching mechanism of electrical conductance The I-V measurement revealed that the temperature-dependent reversible switch of electrical conductance was reproducible in the ATA composites. This two-stage variation of electrical conductivity can be described by the VRH model [37] in Stage 1 (from 273 to 473 K) and reversible switching model in Stage 2 (from 473 to 523 K). 
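The Stage 1 behavior is analyzed below with a 3D Mott VRH fit, in which ln(σ) is linear in T^(−1/4), together with an Arrhenius estimate of the activation energy. The sketch that follows shows how such straight-line fits are typically performed; it runs on synthetic data with hypothetical parameter values and does not use the measurements reported in this work.

```python
# Minimal sketch of the two fits used in this analysis, on synthetic data:
# a 3D Mott VRH fit of ln(sigma) vs T^(-1/4) -> characteristic temperature T0,
# and an Arrhenius slope -> apparent activation energy E_A. Numbers are invented.
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# synthetic conductivity data generated from an assumed VRH law
T = np.linspace(273, 473, 9)                 # temperature, K
T0_true, sigma0_true = 2.0e5, 50.0           # hypothetical parameters
sigma = sigma0_true * np.exp(-(T0_true / T) ** 0.25)

# 3D Mott VRH: ln(sigma) = ln(sigma0) - (T0/T)^(1/4), i.e. linear in T^(-1/4)
slope, intercept = np.polyfit(T ** -0.25, np.log(sigma), 1)
T0_fit = slope ** 4                          # slope = -T0^(1/4)
print(f"T0 ≈ {T0_fit:.3g} K, sigma0 ≈ {np.exp(intercept):.3g} S/m")

# Arrhenius view of the same data: ln(sigma) = const - E_A/(k_B*T)
arr_slope, _ = np.polyfit(1.0 / T, np.log(sigma), 1)
print(f"apparent E_A ≈ {-arr_slope * k_B:.3f} eV")
```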
The reversible switching was attributed to carbonaceous pathways resulting from degradation of the organic polymer [38] (figures 6(a) and (b)) and to metallic Ag filament formation across the polymer matrix (figures 6(c) and (d)). In order to clarify the enhancement of electrical conductance by the AgNW loading in Stage 1, a 3D Mott variable range hopping (VRH) model, which describes low-temperature conduction in strongly disordered systems with localized states, was used to calculate the variation of electrical conductivity with elevated temperature [39]:

σ(T) = σ0 exp[−(T0/T)^(1/4)],

where σ0 is the temperature-independent prefactor, which represents the limiting value of the conductivity at infinite temperature, and T0 is the characteristic temperature, which is inversely proportional to the localization length of the charge carriers; thus, a small T0 implies weak localization and increased conductivity [40]. In figure 6(e), ln(σ) versus T^(−1/4) is plotted and fitted for the ATA composites with different AgNW lengths. The characteristic temperature T0 and the linear quality factor R² for each sample are summarized in table S2 (supporting information). The characteristic temperature T0 decreased from 1.44 × 10⁹ K (pure acrylate) to 1.85 × 10⁴ K (ATA composite, 1.35 μm long AgNWs), indicating a reduced localization of the charge carriers and thus enhanced electrical conductivity [41]. In addition, a diminishing tendency of T0 and R² as the AgNW length increased was observed. Since metallic ohmic conductance and VRH conductance have opposite temperature dependences [42], the decreased values of T0 and R² implied that the number of metallic ohmic contacts in the composites increased as the AgNWs got longer, which is consistent with the fact that laser nanojoining is more effective on composite resins with long AgNWs. However, in the case of the ATA composite with 4.15 μm long AgNWs, the values of T0 slightly dropped due to the formation of large clusters that enlarged the hopping distance in the composite.

Figure 6. Schematic concept of the resistive switching mechanisms for the ATA composites. (a) Junction structure, (b) formation of sp² clusters of the acrylate polymer by thermal treatment, (c) temperature- and field-induced Ag filament formation, and (d) recovery to high resistance at RT. (e) The conductivity (ln(σ)) data of the composites described by the VRH model for a 3D transport mechanism; the AgNW concentration was kept at 0.05 wt%, and all data were linearly fitted. (f) Temperature-dependent Raman spectra of the ATA composites from 298 to 523 K; the AgNW length and concentration were 0.48 μm and 0.05 wt%, respectively, and the average laser power used for Raman spectroscopy was 10 mW (wavelength 514.5 nm).

In Stage 2, the internal chemical reactions and structural phase transitions of the composites were studied by temperature-dependent Raman spectroscopy [43] (figure 6(f)). In the authors' composite system, the glass transition temperature (Tg) of the acrylate monomer was found to be 286 K [44], which was unlikely to be the cause of the reversible switching at around 473 K. As shown in the Raman spectra, the transition occurred in a temperature range from 423 to 523 K and resulted in three Raman peaks at ∼1366, ∼1593, and 1731 cm⁻¹, which were ascribed to the D, G, and C=O bands, respectively. As the temperature increased, the G band intensity increased slowly from RT to 473 K and then increased abruptly up to 523 K.
The significantly increased sp 2 bonding indicated the degradation of the acrylate matrix, which was promoted with the AgNWs serving as the catalyst [45], creating special sp 2 cluster pathways with high electron mobility within the polymer medium ( figure 6(b)). However, the G band intensity did not drop as the temperature decreased, indicating a permanent degradation of the polymer matrix. At temperatures above 473 K, resistive switching behavior occurred with a threshold voltage of 10 V, ascribed to electromigration-induced Ag filament formation (figures S11(a)-(c), supporting information). When an electric field was applied, due to the increased Ag mobility at high temperatures, Ag atoms dissociated from individual AgNWs to form bridging filaments among the closest AgNWs, bringing them together to form electrical contacts (supported by the nanoparticle formation around AgNW junctions, figure S11(d), supporting information). This Ag filament formation was demonstrated in a single AgNW junction [46] and AgNWs/polystyrene composites [47]. Furthermore, when the bias was removed, the filaments were ruptured, which can be attributed to the thermal motion of Ag atoms within the filaments. Finally, at temperatures below 473 K, the Ag mobility was limited; and the AgNWs were kinetically trapped, returning the ATA composite back to a high-resistance state ( figure 6(d)). The authors' findings demonstrated that the resistive switching of ATA composites can be explained by a combination of sp 2 carbon cluster pathways and Ag filaments. It is promising that micro/nanostructures made of the ATA composite material can be engineered to exhibit prominent temperature-and electrical-field-dependent electrical behaviors. Conclusions In this work, a new strategy was presented for realizing metallic 3D micro/nanostructures with ATA composites via TPP followed by a femtosecond laser nanojoining process. By employing thiol functionalization, TPP-compatible ATA composite resins were obtained with AgNWs uniformly dispersed in a polymer matrix. Micro/nanoscale 3D conductive structures were successfully fabricated with ∼200 nm resolution. Moreover, a femtosecond laser irradiation process was conducted to further enhance the electrical conductivity of the solidified ATA composites by up to 10 5 times, resulting from the effective joining of AgNW junctions. A strong temperature dependence of the conductivity of the ATA composite was observed and analyzed, revealing the charge carriers' transport mechanism following the VRH and reversible switching models. The nanomaterial assembly and joining method demonstrated in this study represent a new opportunity for developing functional devices for a broad range of applications, such as 3D electronics, temperature sensors, memristors, MEMS/NEMS, and biomedical devices. Photoresist preparation Homemade ATA composite resins were prepared by direct mixing of thiol-functionalized AgNWs, acrylic monomer (Di-TMPTTA), and photoinitiator (BDMP). First, the AgNW aqueous solutions underwent ultrasonic agitation (agitation power: 60 W, SONIFIER ® SLPe Energy and Branson Ultrasonics) from 30, 60, to 120 min to reduce the average length to 4.15, 1.35, and 0.48 μm, respectively. The surface modification of AgNWs was then performed by mixing the liquid thiol 6-MCH with the aqueous AgNW solution at a concentration of 10 mmol l −1 . The aqueous solution underwent a 120 min ultrasonic bath, resulting in the formation of thiolcapped AgNWs. 
Excess IPA solvent was removed from the solution obtained using high-speed centrifugation (10 min at 15000 rpm, bench top centrifuge Z230M, Hermle Labortechnik GmbH). After the centrifugation, the IPA was evaporated completely from the precipitation. The remnant thiolcapped AgNWs were dispersed in acrylic monomer with concentrations varying from 0.005, 0.02, 0.05, 0.2, to 0.4 wt%. Photoinitiator was added to the dispersion with a constant concentration of 1 wt% in all samples. The composite resins underwent ultrasonic agitation for 60 s followed by a magnetic stirring for 24 h at RT (VWR, standard hot plate stirrer). They were then purified using centrifugation (30 min at 6000 rpm, mini spin 5452, Eppendorf) to remove large AgNW aggregations from the resins. The as-prepared resins were stored in brown glass bottles and stirred continuously. Two-photon polymerization TPP fabrication was performed on a 3D laser lithography system (Nanoscribe GmbH, Photonic Professional). A frequency-doubled, Er-fiber laser (center wavelength of 780 nm, pulse width of 100 fs, repetition rate of 80 MHz, and maximum power of 150 mW) was used as the irradiation source. An oil immersion objective lens (63×and 1.4 NA) was used to focus the laser beam. Femtosecond laser nanojoining A femtosecond Ti:sapphire laser with a regenerative amplification system (center wavelength of 800 nm, pulse width of 120 fs, repetition rate of 1 KHz) was employed in the laser nanojoining experiment. The laser beam was linearly polarized with a Gaussian energy distribution (M squared factor <1.2). A half-wave plate and a polarizer were used to adjust the laser pulse energy. A plano-convex lens with a focal length of 100 mm was used to focus the laser beam. The sample was stationed on an X-Y-Z stage, and the stage was vertically displaced 4 mm from the focal plane to have a defocused beam of 170 μm spot size on the sample surface. The laser beam scanned the sample surface by moving the sample stage at a speed of 100 μm s −1 . Scanning electron microscopy A field-emission SEM (Hitachi, S4700) with an acceleration voltage of 5-10 kV was used for the observation and analysis of the TPP-fabricated structures and AgNW morphology. A chromium layer of 5 nm thick was deposited on the samples before the SEM characterization to prevent the electric charge effect. Tunneling electron microscopy A HRTEM (FEI Tecnai Osiris) with a voltage of 200 kV was used for analyzing the AgNW distribution and femtosecond laser nanojoining. Pristine AgNW TEM samples were prepared by drop casting on copper grids with Formvar/carbon supporting film and 400 mesh. The AgNW-acrylic composite TEM samples were prepared by direct TPP writing of a single layer thin film on the same kind of copper grid. Laser ablation ambient mass spectrometry The laser ablation ambient mass spectrometry system consists of three major modules: a KrF excimer laser (COMPexPro 205 F, wavelength=248 nm, pulse duration=23 ns, Coherent Inc.), a time-of-flight mass spectrometer (TOF-MS, JEOL, AccuTOF ™ , JEOL, USA Inc.), and a sample stage. The laser fluence was fixed at 8.75 J cm −2 . The laser beam was focused at normal incidence on the samples with a spot size of 0.80×1.60 mm 2 . The voltages of the outer and inner orifices of the TOF-MS were fixed at 30 and 5 V, respectively. The temperature of the skimmer cone was fixed at 100°C. The accumulation time was 1 s for each measurement. 
Temperature-dependent electrical conductivity measurement To characterize electrical conductivity, bar-shaped channels were fabricated between two Au electrodes with sputtering deposited on the desired regions through shadow masks (Kurt J Lesker, 99.99% purity Au target). The homemade electrical conductivity measurement system consisted of a semiconductor parameter analyzer (Agilent 4155 C), a heating chamber (Linkam Scientific Instruments (UK), 77-800 K, THMS600), a heating power/temperature controller, and a probe station (Cascade Microtech, MPS 150). A liquid nitrogen tank was used for providing a nitrogen environment. Before measurements, composite devices were loaded into the chamber and placed at the center of the heating stage. A purging process was then performed to ensure all of the air was purged out. After that, the heating stage began to ramp up the temperature from 273 to 523 K at a rate of 10 K min −1 and then cooled down at the same rate. A current-voltage (I-V ) sweep was then performed at an incremental temperature of 50 K. Temperature-dependent Raman microspectroscopy Raman microspectroscopy was conducted using a Raman microscope (Renishaw, InVia ™ H 18415). To conduct temperature-dependent Raman characterization, the samples were placed in the same heating chamber (Linkam Scientific Instruments (UK), 77-800 K, THMS600) used for electrical conductivity measurements. The excitation laser beam, with a wavelength of 514.5 nm, was focused onto the sample surface through the chamber window by an objective lens (50×, NA 0.75). Before each measurement, the purging process was performed. The heating stage ramped up the temperature from 273 to 523 K at a rate of 10 K min −1 . The average laser power used to produce Raman spectra was 10 mW. Raman spectra were recorded with an accumulation time of 10 s each at an incremental temperature of 50 K. Optical characterization The optical absorption was measured using an ultravioletvisible spectrophotometer (Evolution 201, Thermo Scientific ™ ). The samples were prepared by spin coating (1 min at 1500 rpm, WS-650MZ-23NPPb, Laurell) with a thickness of 13.5 μm.
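As a footnote to the electrical measurements described above, converting a measured I-V response of a bar-shaped channel into a conductivity value is a simple geometric calculation. The sketch below assumes the stated channel geometry (5 × 5 μm² cross-section, 100 μm length between the electrodes); the bias and current values are hypothetical and chosen only to yield a number of the same order as the room-temperature conductivities reported here.

```python
# Hypothetical worked example: conductivity of a bar-shaped channel from an I-V point.
# Geometry follows the fabricated channels (5 x 5 um^2 cross-section, 100 um long);
# the bias/current values are invented for illustration, not measured data.

LENGTH = 100e-6            # channel length between electrodes, m
AREA = 5e-6 * 5e-6         # cross-sectional area, m^2

def conductivity_from_iv(current_a: float, voltage_v: float) -> float:
    """sigma = L / (R * A) with R = V / I, returned in S/m."""
    resistance = voltage_v / current_a
    return LENGTH / (resistance * AREA)

# e.g. a 10 V bias driving ~10.85 uA through this geometry gives ~4.3 S/m
print(f"sigma ≈ {conductivity_from_iv(10.85e-6, 10.0):.2f} S/m")
```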
Chinese expert consensus on antithrombotic management of high‐risk elderly patients with chronic coronary syndrome
Abstract The prevalence and mortality of coronary artery disease (CAD) in China are still increasing. CAD can be classified as acute coronary syndrome (ACS) or chronic coronary syndrome (CCS). CCS is the main manifestation of CAD in elderly patients, with a large number of patients, a long course of disease, and poor prognosis, leading to decreased quality of life and a heavy disease and economic burden. Especially in patients with high‐risk CCS, the case fatality rate and total mortality are high. In order to better standardize the antithrombotic treatment of elderly patients with high‐risk CCS, the Geriatrics Branch of the Chinese Medical Association organized domestic experts to develop this consensus for clinicians' reference, based on published clinical research evidence combined with relevant guidelines, consensus statements, and expert recommendations in China and abroad.
| Other characteristics of elderly patients with high-risk CCS
Elderly patients with CCS may have multiple other clinical conditions, such as atrial fibrillation (AF) and venous thromboembolism (VTE). The proportion of elderly CAD patients with AF is 6% to 21%, and the proportion of AF patients with CAD is 20% to 30%. 6 Despite its clinical relevance, epidemiological data on CAD combined with VTE are still lacking. Only a few reports indicate that the proportion of ACS patients with VTE is 4.96% to 14.90% (of which about 5% are fatal pulmonary thromboembolism), while the proportion of acute VTE patients with CAD is 10% to 17%. 7,8 Atrial fibrillation or VTE itself may require anticoagulant therapy, so the benefits and risks of antithrombotic therapy should be weighed particularly carefully when these disorders occur in the elderly with CCS. 9
| Principles of antithrombotic therapy in elderly CCS patients with high ischemia risk
Antithrombotic therapy can significantly benefit patients with CAD, especially elderly patients with high-risk CCS. 2,3 Ischemia and bleeding risk assessment is a key step in antithrombotic therapy. Antithrombotic principles in patients with CCS: (1) Ischemic and bleeding risks should be adequately assessed to determine treatment strategies before initiating antithrombotic therapy. 2,10 (2) Dual antiplatelet therapy (DAPT) or dual pathway inhibition (DPI) is recommended for patients with high ischemia risk and without high bleeding risk (Table 3). 2,10 (4) Genetic polymorphisms may affect the response to certain drugs; CYP2C9 and VKORC1 gene polymorphisms are associated with warfarin-related bleeding adverse reactions, CYP2C19 gene polymorphisms are associated with clopidogrel response, and selective monitoring of gene polymorphisms is helpful for individual assessment and medication selection, but it is not recommended for routine use. 4
| Long-term antithrombotic therapy for secondary prevention
Aspirin as a single antiplatelet therapy (SAPT) has a clear benefit in reducing major adverse cardiovascular events (MACEs) in patients with CCS. 11 The CAPRIE study showed a minor benefit of clopidogrel over aspirin. (Table 2: High bleeding risk profile.) Recommendation 10: Antiplatelet therapy should be resumed as soon as possible (preferably within 24 h) after surgery.
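Principles (1) and (2) above amount to a simple decision rule once the two risk assessments have been made. The sketch below only encodes that stated principle; the actual high-risk criteria live in Tables 2 and 3 and in clinical judgment, neither of which this toy function captures.

```python
# Illustrative sketch only: it encodes the stated principle that DAPT or DPI is
# reserved for high ischemic risk without high bleeding risk, and is NOT a
# substitute for the consensus criteria or for clinical judgment.

def suggested_strategy(high_ischemic_risk: bool, high_bleeding_risk: bool) -> str:
    """Map the two risk assessments onto the broad strategy named in the text."""
    if high_ischemic_risk and not high_bleeding_risk:
        return "intensified therapy (DAPT or DPI)"
    return "single antiplatelet therapy (SAPT)"

# Example: a patient judged at high ischemic risk but also at high bleeding risk
# defaults to SAPT under this simplified rule.
print(suggested_strategy(high_ischemic_risk=True, high_bleeding_risk=True))
```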
Recommendation 11: In patients planned for elective noncardiac surgery and treated with DPI, the strategies for whether aspirin is discontinued and the duration of discontinuation are the same as for DAPT treatment (Figure 2). The preoperative discontinuation duration of rivaroxaban should be adjusted according to the surgical bleeding risk and creatinine clearance (Tables 5 and 6). Recommendation 12: Resumption of rivaroxaban is suggested several hours after surgery with low bleeding risk and 24 to 72 h after surgery with high bleeding risk.
| CCS patients with PAD
Clopidogrel monotherapy reduced MACEs compared with aspirin in patients with previous myocardial infarction complicated with PAD or stroke. 12 In patients with previous PCI complicated with PAD, ticagrelor monotherapy had similar efficacy to aspirin. 29 Clopidogrel combined with aspirin benefited patients with multi-bed vascular disease. 30 Ticagrelor combined with aspirin reduced MACEs but increased the risk of major bleeding in patients with multi-bed vascular disease and a 1-3-year history of previous myocardial infarction. 16 Ticagrelor combined with aspirin reduced the risk of ischemic events but increased the risk of major and intracranial bleeding in CCS patients with type 2 diabetes mellitus. 14 Ticagrelor combined with aspirin had a higher incidence of ischemic events than aspirin alone in CCS patients with type 2 diabetes mellitus and multi-bed vascular disease. 14 In the COMPASS study, compared with aspirin alone, DPI reduced the risk of MACEs and limb ischemic events and increased the risk of major bleeding, but did not increase the risk of fatal bleeding in patients with concomitant PAD (Table 7). 31 In the VOYAGER study, DPI decreased the composite endpoint of MACE and limb ischemia and increased ISTH major bleeding, but did not increase TIMI major bleeding in patients with PAD who had undergone lower extremity revascularization. 32 Recommendation 13: Dual pathway inhibition (aspirin + rivaroxaban 2.5 mg bid) is recommended for secondary prevention in elderly CCS patients with PAD and low bleeding risk.
| CCS patients with stroke
Analysis of the previous stroke subgroups of MATCH, SPS3, and CHARISMA showed that aspirin + clopidogrel had no efficacy benefit in the secondary prevention of stroke and increased the risk of major bleeding. [33][34][35] In the CHANCE study, in patients with acute mild ischemic stroke or transient ischemic attack (TIA) (within 24 h of onset), DAPT for 21 days followed by clopidogrel 75 mg/day alone until day 90 reduced 90-day MACEs without increasing the risk of major bleeding 36 ; in the POINT study, DAPT for 90 days reduced 90-day MACEs but increased the risk of major bleeding. 37 In the SOCRATES study, there was no significant difference in MACEs or major bleeding between ticagrelor monotherapy and aspirin monotherapy within 90 days in patients with acute non-severe stroke. 38 The THALES study confirmed that aspirin + ticagrelor reduced the 30-day risk of stroke or death in patients with mild to moderate noncardiogenic ischemic stroke or TIA (within 24 h of onset), but there was an increased risk of major bleeding and no difference in disability. 39 COMPASS subgroup analysis showed (
| Elderly CCS patients with other high-risk factors
In elderly CCS patients with other high-risk factors, such as diabetes, and age ≥ 75 years is 2 points.
Therefore, elderly CCS patients with AF should consider whether and how to apply OAC based on the results of (1) and (2) above.
| Bleeding risk assessment
At present, the HAS-BLED scoring system is mainly used to assess the bleeding risk in patients with NVAF, 6,41 in which a score ≤ 2 points is considered an indicator of low bleeding risk and a score ≥ 3 points indicates high bleeding risk. A high score does not mandate discontinuation of anticoagulation therapy, but attention should be paid to screening for and correcting reversible factors that increase the bleeding risk, as well as to enhanced monitoring after anticoagulant therapy is started, such as strict control of hypertension within the target range and monitoring of the INR to ensure its stability within the therapeutic window.
| Antithrombotic therapy in elderly CCS patients with AF
The AFIRE study demonstrated that rivaroxaban monotherapy was noninferior to rivaroxaban + SAPT, with a lower incidence of major bleeding.
| Antithrombotic therapy after PCI in CCS patients with AF
A meta-analysis showed that dual therapy (OAC + SAPT) was more effective than triple antithrombotic therapy (OAC + DAPT) in reducing the risk of bleeding and had a similar effect on the incidence of major adverse cardiovascular events in patients after PCI. 44,45 The risk of bleeding may be inversely related to the quality of anticoagulation (stability of the INR) 46 ; among the factors influencing the risk of major bleeding, the patient's bleeding risk factors may contribute more than the combined antithrombotic regimen itself. 47
| Percutaneous left atrial appendage closure (LAAC)
LAAC is one of the strategies to prevent thromboembolic events in patients with atrial fibrillation. LAAC may be considered for patients with a CHA2DS2-VASc score ≥ 2 (male)/≥ 3 (female) and with the following conditions (Table 9). 9,51-57
| Elderly CCS patients with acute VTE
In elderly CCS patients who develop acute VTE, different antithrombotic strategies depend on the indication for antiplatelet therapy. Recommendation 31: In CCS patients with acute VTE who have undergone PCI and are treated with antiplatelet therapy but have no history of ACS, different antithrombotic regimens could be considered according to the time since PCI (see Figure 3), and 3 scenarios can be set and recommended as follows: (1) In patients who underwent PCI no more than 6 months earlier, it is recommended to discontinue aspirin, continue clopidogrel, and start anticoagulant drugs (preferably NOACs) for most patients; (2) In patients who underwent PCI 6 to 12 months earlier, it is
Anticoagulation regimens and specific dosing:
- Heparin + warfarin bridging regimen: warfarin 2.5 to 6.0 mg/day bridged initially with parenteral heparin (unfractionated heparin or LMWH); measure the INR 2 to 3 days later, discontinue the heparin, and continue warfarin until the INR is in the therapeutic range (2.0-3.0) and maintained for 24 h.
- Rivaroxaban monotherapy regimen: 15 mg bid for the first 3 weeks, 20 mg qd (a) after 3 weeks to 6 months, and 10 mg qd after 6 months (20 mg qd is considered for patients with high VTE recurrence risk (b)).
- Heparin + edoxaban sequential regimen: initial heparin injection for 5-10 days, followed by edoxaban 60 mg qd (a).
- Heparin + dabigatran sequential regimen: initial heparin injection for 5-10 days, followed by dabigatran 150 mg bid (a).
Abbreviations: bid, twice daily; INR, international normalized ratio; LMWH, low molecular weight heparin; qd, once daily.
(a) Dose adjustment is required according to the degree of renal insufficiency, as detailed in the individual product labeling.
(b) Patients with complex complications, or patients who suffer recurrent VTE while receiving rivaroxaban 10 mg qd.
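The HAS-BLED cut-offs quoted in the bleeding risk assessment section above (≤ 2 points low risk, ≥ 3 points high risk) lend themselves to a simple tally. The sketch below is illustrative only: the item list follows the published HAS-BLED score rather than this consensus document, and it is not a clinical decision tool.

```python
# Minimal sketch of the HAS-BLED tally. The item list follows the published
# HAS-BLED score (1 point per item; renal/hepatic dysfunction and drugs/alcohol
# can each contribute up to 2); only the <=2 vs >=3 cut-offs come from the
# consensus text above. Not a clinical tool.

HAS_BLED_ITEMS = [
    "hypertension", "abnormal_renal_function", "abnormal_liver_function",
    "stroke", "bleeding_history", "labile_inr", "elderly_over_65",
    "antiplatelet_or_nsaid_use", "alcohol_excess",
]

def has_bled(findings: set[str]) -> tuple[int, str]:
    score = sum(1 for item in HAS_BLED_ITEMS if item in findings)
    return score, ("high bleeding risk" if score >= 3 else "low bleeding risk")

# Example: an elderly patient with hypertension and labile INR scores 3 points.
print(has_bled({"elderly_over_65", "hypertension", "labile_inr"}))
```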
| Elderly CCS patients with VTE undergoing elective PCI
Different antithrombotic strategies should be determined according to whether the anticoagulant course has been completed and the type of anticoagulant agent in VTE patients who require PCI. Recommendation 37: In a patient who has completed the course of OAC therapy for VTE, treatment could be switched to aspirin + a P2Y12 receptor antagonist based on the perioperative antithrombotic principles of PCI. Bleeding risk assessment in elderly CCS patients treated with antithrombotic therapy also requires dynamic reassessment. Various clinical and physiological parameters of elderly patients are constantly changing (e.g., renal function, blood pressure control, anemia). Fluctuations in these factors affect the geriatric assessment, which in turn affects the grading of bleeding risk. Clinicians should closely observe the clinical situation of elderly patients, make plans for dynamic monitoring and evaluation, track changes in patients' bleeding risk, and optimize the antithrombotic treatment regimen in a timely manner. [71][72][73][74][75] Recommendation 40: The PRECISE-DAPT score is recommended for bleeding risk assessment in patients on antiplatelet therapy. Recommendation 41: The HAS-BLED score is recommended for bleeding risk assessment in patients on anticoagulant therapy, with high predictive accuracy and good predictive value for intracranial hemorrhage.
| Bleeding risk assessment in elderly CCS patients treated with antithrombotic therapy
Bleeding risk assessment in elderly CCS patients on antithrombotic therapy should fully consider the bleeding risk of both antiplatelet and anticoagulant therapy, and individualized risk assessment should be performed based on a comprehensive geriatric assessment. Recommendation 42: In elderly CCS patients without AF or VTE, it is recommended to use the PRECISE-DAPT score to assess the bleeding risk of antithrombotic therapy and to develop a long-term antithrombotic treatment strategy based on the complexity of the coronary artery lesions (Table 10, Figure 1). Recommendation 43: In elderly CCS patients with NVAF or VTE, it is recommended to use the HAS-BLED score for bleeding risk assessment of antithrombotic therapy and to develop a long-term antithrombotic treatment strategy based on the thrombosis risk assessment and the complexity of the coronary artery lesions (Table 11).
| Judgment of bleeding degree
There have been many criteria for the definition or grading of bleeding. For standardization and comparison, the unified BARC bleeding classification criteria (Table 12) are recommended.
| Anemia caused by bleeding
Hemorrhage complicating antithrombotic therapy may cause anemia, and anemia is an independent risk factor for hemorrhagic and ischemic events in elderly patients with chronic CAD. 89,90 Recommendation 57: In elderly patients, more attention should be paid to volume resuscitation, and a restrictive volume resuscitation strategy is recommended to achieve the target blood pressure until bleeding is controlled.
| Prevention and treatment of bleeding risk in patients with hepatic insufficiency
Antithrombotic therapy is more difficult in patients with chronic liver disease due to the increased risk of both thrombosis and bleeding. 94,95
There are no available guideline recommendations for anticoagulant therapy in patients with liver disease. Due to impaired hepatic function, decreased plasma albumin levels (plasma protein binding rate: rivaroxaban 95% > apixaban 85% > edoxaban 55% > dabigatran 35%), and decreased cytochrome P450 enzyme activities (apixaban
Figure 6: Management of bleeding associated with anticoagulant drugs. 6,27,77,[79][80][81]
| Prevention and treatment of bleeding risk in advanced-age or frail patients
Advanced age (≥ 75 years) or frailty is one of the main risk factors for bleeding. Based on the clinical study evidence, the majority of antiplatelet study populations are patients with a median age < 65 years, while data are still lacking in patients older than 65 years of age. In particular, advanced-age patients are often excluded from randomized controlled studies, so caution should be exercised in the use of antiplatelet therapy in advanced-age patients. [96][97][98][99] Comorbidities and polypharmacy are common in advanced-age patients, who are vulnerable to drug-drug interactions, especially patients taking warfarin, which interacts with a wide range of drugs. NOACs also have definite interactions with some specific drugs such as antifungals, immunosuppressants, and dronedarone, which increase the risk of bleeding, and extra caution is required. 27 The HAS-BLED score can be used to identify patients at high bleeding risk.
In the treatment of elderly high-risk CCS patients, the efficacy and safety of antithrombotic agents should be carefully evaluated. For these patients, the risk of bleeding and ischemia should be accurately assessed, the indications for intensified antithrombotic therapy with DPI or DAPT should be well specified, regular follow-up and timely correction of risk factors for ischemia or bleeding should be ensured, and the extent and site of any bleeding should be accurately determined and treated promptly; all of the above measures aim to maximize the antithrombotic benefit while minimizing the risk of bleeding.
FUNDING INFORMATION: Not applicable.
ACKNOWLEDGMENTS: Not applicable.
AUTHOR CONTRIBUTIONS: Initiation and organization of this consensus: Cuntai Zhang and
CONFLICT OF INTEREST: Zhang Cuntai is an Editorial Board member of Aging Medicine and a co-author of this paper. To minimize bias, he was excluded from all editorial decision making related to the acceptance of this paper for publication. Other authors have nothing to disclose.
SAT-LB130 Tamoxifen Affects MiRNA Expression in Uterus and Breast
Abstract In addition to genetic factors, environmental factors and lifestyle can play a significant role in the development of hormone-dependent tumors, such as endometrial cancer (EC) and breast cancer (BC). The discovery of microRNAs (miRs) involved in the post-transcriptional regulation of many genes, including those of hormonal carcinogenesis, namely, steroid receptors and their target genes, strengthened the epigenetic direction in the study of carcinogenesis mechanisms. A critical event in the development of hormone-dependent human tumors is disruption of the metabolism of steroid hormones, primarily estradiol. An interesting aspect of the problem of ERα inhibition is the use of tamoxifen (TAM) in clinical practice in the treatment of hormone-dependent BC. A well-known side effect of TAM is increased proliferation in the endometrium and an elevated risk of EC. One of the mechanisms proposed to explain such differences in the effects of TAM is the formation of DNA adducts in endometrial cells, but this mechanism has not yet been substantiated. Therefore, the problem of uterine carcinogenesis with this drug remains unresolved and requires further research. The aim of our study was to evaluate the expression of miRs and target genes of hormonal carcinogenesis in the uterus and mammary gland under exposure to TAM. As objects of study, we used female rats, primary human cell cultures and tissues of TAM-induced human endometrial hyperplasia. The results showed that estradiol enhances the expression of the oncogenic microRNAs miR-21, -221, and -222 by three to ten times, both in the rat mammary gland and in the endometrium, which confirms its oncogenic properties. In the rat endometrium, TAM, to a greater extent than estradiol, increased the expression of oncogenic miRs, especially miR-419, -23a, -24-2, and -27, and significantly reduced the expression of their target genes. In addition, TAM caused a multiple (8-fold) increase in the expression of cyclin D in the uterus compared with the mammary gland. In most cases, TAM reduced the expression of oncogenic miR-21, -221, and -222 by 50% in BC primary cell culture, whereas in EC primary cell culture the expression of oncogenic miR-190a was increased. We also investigated the activity of estrogen-metabolizing enzymes in tamoxifen-induced human endometrial hyperplasia. A significant difference was found in the expression of estrogen-metabolizing genes (CYP1A, CYP1B, CYP19, SULT1A1, SULT1E1, GSTP1, GSTP2, COMT, STS) in TAM-induced endometrial hyperplasia, which may be due to the difference in miRNA expression. Thus, both for the animal model and for human cell cultures, it was shown that TAM causes different changes in the expression of microRNAs in the endometrium compared with the breast. Further studies identifying the target genes of these miRNAs will help define molecular targets of TAM-induced endometrial hyperplasia. This work was supported by the Russian Science Foundation, grant # 19-15-00319.
catecholamine levels, HbA1c of 5.4% and LVEF of 40-45%. Conclusion: Pheochromocytoma can rarely present with multi-organ failure. It warrants a high index of suspicion in non-ischemic cardiomyopathy. As per recent Mayo Clinic criteria, diagnosis of takotsubo cardiomyopathy mandates ruling out pheochromocytoma. As seen in our patient, it is a reversible cause of left ventricular dysfunction, focal weakness and DM.
Based on our knowledge, this is the only incidentally diagnosed pheochromocytoma with varied clinical presentations. It has been aptly described as "The Great Masquerader".
SAT-LB312 Background: Epidermoid cysts (ECs) result from the inclusion of squamous epithelial elements during neural tube closure. ECs are tumors constituting 0.2-1.8% of all brain tumors. ECs are typically found in the cerebellopontine angle, but occasionally develop in the sellar region. ECs are usually clinically silent, but may produce signs of mass effect such as headaches and visual field defects.
ECs presenting with central diabetes insipidus have been reported but are rare; only two cases have been reported in the literature (Ref. 1). Here we report a case of a sellar epidermoid cyst presenting with diabetes insipidus. Case Description: A 49-year-old male presented with a one-month history of polyuria, polydipsia and weight loss. The initial work-up identified normal blood glucose, serum calcium and renal function. A water deprivation test confirmed the diagnosis of central diabetes insipidus. Further pituitary hormonal assessment revealed panhypopituitarism along with diabetes insipidus. MRI of the brain showed a large sellar-suprasellar cystic mass with a differential diagnosis of craniopharyngioma and Rathke cleft cyst. Surgery was performed to remove the tumor. The pathology report confirmed the tumor to be an epidermoid cyst. He did well throughout the hospital stay. DI, along with panhypopituitarism, persisted postoperatively and was treated with hormone replacement. Conclusion: ECs of the sellar region vary in presentation depending on their location and extension into surrounding areas producing mass effects. Diabetes insipidus is a rare presentation of these rare tumors. References: 1. Huo CW, Caputo C, Wang YY. Suprasellar keratinous cyst: a case report and review on its radiological features and treatment. Surgical Neurology International 2018;9:15.
Richard B. Guttler, MD, FACE, FACP, ECNU. Santa Monica Thyroid Center, Santa Monica, CA, USA. SAT-LB77 Recent FDA approval of thyroid radiofrequency (RF) ablation has made it possible for endocrinologists in the USA to finally treat their own patients after obtaining training. I have 5 years of experience working with these systems and have trained many endocrinologists in my practice. In 2019 I began a preliminary study of 12 patients with negative biopsies to assess the feasibility of performing thyroid RF in the ultrasound room of my office without going to imaging centers or the hospital. The fee for office-based RF is 3-6 times lower. An RF system by RF Medical (Korea) was used in all cases. The results are promising. Local injection of the skin and thyroid capsule was all that was needed for pain control. Vital signs were monitored by my roving nurse. The maximum power used was 20-40 W. There were no major complications and only one bruise in the neck area. There were no vocal symptoms. All 12 patients tolerated the procedure and, after 30 minutes of observation, left with only a small band-aid over the injection site. Two flew out of state that night. Conclusion: A preliminary assessment of in-office thyroid RF without general or conscious sedation by trained endocrinologists suggests that a larger study with 80-100 cases is the next step.
SUN-LB77 Incidental Anaplastic Thyroid Carcinoma: An Uncommon Entity Background: Anaplastic thyroid cancer is an aggressive thyroid malignancy with a median survival of 3 to 9 months. It is rare and represents 2-5% of all thyroid tumors. Even more uncommonly, in about 2%-6% of all ATC cases it is identified as a small, incidental finding after surgical resection of a predominantly non-anaplastic tumor. Clinical Case: We report a case of a 67-year-old Caucasian male who presented with a history of hoarseness of voice for one month. Fine needle aspiration biopsy of a right dominant thyroid nodule revealed papillary thyroid cancer. Preoperative imaging was negative for involvement of surrounding structures or distant metastasis. He underwent total thyroidectomy and final pathology revealed anaplastic carcinoma arising in papillary carcinoma measuring 3.6 cm in greatest dimension.
Undifferentiated (Anaplastic) Carcinoma comprised approximately 5% of the tumor. Areas from anaplastic and papillary tumor were dissected
The hemogenic endothelium: a critical source for the generation of PSC-derived hematopoietic stem and progenitor cells In vitro generation of hematopoietic cells and especially hematopoietic stem cells (HSCs) from human pluripotent stem cells (PSCs) are subject to intensive research in recent decades, as these cells hold great potential for regenerative medicine and autologous cell replacement therapies. Despite many attempts, in vitro, de novo generation of bona fide HSCs remains challenging, and we are still far away from their clinical use, due to insufficient functionality and quantity of the produced HSCs. The challenges of generating PSC-derived HSCs are already apparent in early stages of hemato-endothelial specification with the limitation of recapitulating complex, dynamic processes of embryonic hematopoietic ontogeny in vitro. Further, these current shortcomings imply the incompleteness of our understanding of human ontogenetic processes from embryonic mesoderm over an intermediate, specialized hemogenic endothelium (HE) to their immediate progeny, the HSCs. In this review, we examine the recent investigations of hemato-endothelial ontogeny and recently reported progress for the conversion of PSCs and other promising somatic cell types towards HSCs with the focus on the crucial and inevitable role of the HE to achieve the long-standing goal—to generate therapeutically applicable PSC-derived HSCs in vitro. Introduction Definitive bona fide hematopoietic stem cells (HSCs) are defined based on specific and unique hallmarks of selfrenewing cells with long-term engraftment and full multilineage reconstitution potential after transplantation in a conditioned recipient. Postnatally, HSCs reside in specialized bone marrow (BM) niches that preserve (1) HSC in a multipotent, self-renewing steady state or (2) facilitate differentiation into mature progeny via asymmetric cell divisions. HSCs form the apex of the hierarchical scheme of adult hematopoiesis and give rise to hematopoietic progenitor cells (HPCs), which, in contrast to HSCs, are characterized by limited self-renewal, engraftment and lineage potential. HSCs provide a constant supply of all hematopoietic cells throughout the entire lifetime of an organism. These hallmarks make HSCs an invaluable cell source and HSC transplantation has become a standard for cell replacement therapy to treat a variety of hematological diseases and malignancies [1,2]. While murine HSC ex vivo expansion is well established, ex vivo long-term expansion of functional human HSCs is still challenging [3]. This poor ex vivo expansion leads to relatively low quantity and quality of functional human HSCs. Furthermore, immunological incompatibilities are another limiting factor for the use of HSCs for transplantation and necessitate human leukocyte antigen (HLA) matching between donors and recipients [1,2]. Advances in the cultivation, generation and differentiation of pluripotent stem cells (PSCs) and especially the reprogramming of somatic cells into induced pluripotent stem cells (iPSCs) [4,5] would overcome many of these limitations and represent a potential paradigm shift in regenerative medicine. Generally, iPSCs are generated by ectopic expression of the transcription factors (TFs) OCT4, SOX2,MYC and KLF4 [4] in somatic and well-accessible cells. iPSCs have an indefinite proliferation potential in culture and the capacity to be differentiated into all somatic cell types. 
These properties offer a potential use of the iPSC technology for personalized and autologous cell-based therapies in a variety of diseases. Improved genome editing technologies further enhance the potential use of iPSCs as powerful tools in basic research, disease modeling and drug screening, as well as for mimicking ontogenetic and pathophysiological processes in vitro [6]. However, the clinical utility of iPSC-derived cell products is heavily dependent on several factors, including the differentiation techniques, cost-effective scale-up to produce adequate numbers of therapeutic cells and, most strikingly, the safety and functionality of the final cell product. Despite these advances and vigorous research over the last decades, de novo generation of PSC-derived, functionally transplantable HSCs in vitro remains challenging and a high priority for hematology and regenerative medicine. PSC-derived functional HSCs generated under experimental conditions had reconstitution and engraftment potential as shown by in vivo teratoma formation approaches [7,8], providing evidence for the HSC capacity of PSCs. However, these approaches rely heavily on specific, instructive niches and cell-cell interactions and are far from defined conditions. Two major approaches are predominantly used for in vitro differentiation: (1) use of defined cell-extrinsic factors for directed differentiation (e.g., defined morphogens, serum, conditioned media or co-culture systems) and/or (2) direct conversion and forward programming through TF-mediated cell fate determination. Both strategies rely on recapitulating crucial aspects of ontogenetic processes and require a detailed understanding of critical stages of early hemato-endothelial development. To more thoroughly explore these different strategies, it is important to first discuss primitive and definitive hematopoiesis.
Embryonic hematopoiesis in mammals: drawing lessons from development
The hematopoietic ontogeny is complex and encompasses temporal and spatial patterns. These spatiotemporal differences are most commonly represented as a simplified two-stage model of successive waves of primitive (first wave) and definitive (second and third waves) hematopoiesis that differ in their hematopoietic potential (Fig. 1).
Fig. 1 Simplified two-stage model of the spatiotemporal organization of embryonic hematopoiesis in mice and humans. Scheme of the timing and emergence of hematopoietic cells during hematopoietic ontogeny of mice (in red) and humans (in blue). Primitive hematopoiesis is the initial wave in the extraembryonic yolk sac (YS), followed by the emergence of definitive erythro-myeloid progenitors (EMPs) and lymphoid-primed progenitors (LMPPs) in the extraembryonic compartment. The first HSCs arise in the intraembryonic aorta-gonad-mesonephros region (AGM). The AGM-derived, immature pre-HSCs migrate to and colonize the fetal liver for a maturation and expansion step. After this expansion, the mature HSCs mobilize to the bone marrow, where they reside throughout adult life after birth.
Primitive hematopoietic wave
The primitive wave is considered the initial program of embryonic hematopoiesis. Shortly after mesoderm formation, cells of the primitive streak form the extraembryonic yolk sac (YS) and the vascular plexus containing the blood islands [9]. The primitive hematopoietic program is initiated between embryonic day 7 (E7) and E8.5 within the blood islands of the mouse embryo [10] and during the third week of human ontogeny [11] (Fig. 1).
This hematopoietic wave is highly restricted, with the primary function to produce primitive erythrocytes, macrophages [10] and megakaryocytes [12], independent of HSCs. Definitive hematopoietic waves In the mouse embryo, between E8.25 and E10, YS hematopoiesis also gives rise to multipotent progenitors, with definitive erythrocytes, megakaryocytes and granulocyte-macrophage progenitors, and is broadly termed EMPhematopoiesis (erythro-myeloid hematopoiesis) as the second wave of hematopoiesis [10,13,14]. In the later stage of the second wave of murine hematopoiesis and overlapping with the EMP-hematopoiesis, extraembryonic YS hematopoiesis also gives rise to multipotent progenitors with lymphoid (NK, B and T cell) potential [15][16][17]. Based on their lymphoid potential, these progenitors have been named lymphoid-primed progenitors (LMPP) (Fig. 1). This transient second hematopoietic wave produces multipotent progenitors (EMP, LMPP) with several blood lineages and definitive erythrocytes, independent of the primitive hematopoietic wave. Therefore, this wave can be considered as the onset of definitive hematopoiesis [18,19], although the origin of these hematopoietic cells is the YS and prior to the activity of HSCs. Lineage-tracing studies further provided evidence for the HSC-independent lymphoid progenitor potential of the YS. These progenitors demonstrated lymphoid and myeloid potential, but lacked erythro-megakaryocytic potential and were traced back to E9.5 in the extraembryonic YS. These data suggest the existence of a lympho-myeloid progenitor that precedes HSC development [20,21]. In contrast to the murine embryo, de novo generation of YS-derived LMPPs was not observed before the onset of circulation in the extraembryonic YS during human hematopoietic ontogeny [22,23]. This further indicates significant evolutionary differences between human and mouse embryonic hematopoiesis. The defining quality of the third wave of hematopoiesis is the generation of bona fide HSCs with the capacity to engraft adult recipients. Embryonic origin of HSCs Transplantation experiments of cells acquired from different developmental stages of murine hematopoietic cells showed that the first occurrence of definitive HSC with the capacity to engraft adult recipients arise between E10.5 and E11.5 [24] and in the human embryo at day 32 of gestation [25] ( Fig. 1), independently of the YS hematopoiesis [26]. At this time point, a splanchnopleural mesoderm-derived, intraembryonic, definitive hematopoietic site was identified as the aorta-gonad-mesonephros (AGM) [27,28] region, particularly the dorsal aorta (DA) [29][30][31], which is probably the best-studied site for de novo HSCs generation. Although the DA is an origin for HSC emergence, the numbers of HSCs in the AGM region are low [32][33][34], and HSCs are only present transiently at this site. Therefore, the AGM is not considered to be a major site for HSC expansion. Shortly after HSC emergence, AGM-derived HSC migrate and colonize different fetal hematopoietic sites, where they mature and expand. In mouse, cells with multi-lineage repopulation activity were first detected in the fetal liver (FL) at E12, concomitant with a dramatic expansion and formation of an FL HSC pool [35], until mobilization of HSCs out of the FL towards other hematopoietic tissues like thymus and finally, the bone marrow [36]. 
Although several different sites with hematopoietic activity have been described during mammalian ontogeny, the primary origin(s) of hematopoietic cells and, in particular, the embryonic ancestor of HSCs remain controversial and a current area of extensive research. The hemogenic endothelium: an endothelial link to hematopoietic development and the womb of definitive HSCs More than 100 years ago, Sabin observed aggregates of hematopoietic cells budding from a layer of endothelial cells in chick embryos [37]. This observation and the concomitant temporal and spatial emergence of endothelial and hematopoietic cells during vertebrate ontogeny led to the hypothesis of a close developmental correlation between endothelial and hematopoietic cells. This hypothesis was further supported and validated in later experiments. Ex vivo culture of murine KDR + (kinase insert domain-containing receptor; vascular endothelial growth factor receptor 2) endothelial cells gave rise to multi-lineage hematopoietic cells with reconstitution potential after intrahepatic injection into conditioned newborn recipient mice [38]. Lineage-tracing studies tracked the fate of CD144 + endothelial cells that gave rise to multi-lineage hematopoietic cells in vivo [39]. Time-lapse confocal imaging of murine E10.5 DA validated the endothelial origin and showed a dynamic emergence of hematopoietic cells, directly sprouting from ventral aortic endothelial cells [40]. These elegant experiments demonstrated that hematopoietic cells, including HSCs, arise through an intermediate endothelial state known as hemogenic endothelium (HE). HE surface markers: murine and human By definition, HE is a transient, specialized endothelium with the capacity to generate hematopoietic cells through a gradual process of endothelial-to-hematopoietic transition (EHT) [41]. So far, no unique surface marker has been described to identify HE. Murine endothelium with hemogenic potential are generally identified retrospectively by the potential to give rise to hematopoietic cells and are often characterized by co-expressed surface markers CD144, CD31, KDR, CD117, CD34, and the lack of hematopoieticassociated markers such as CD41, CD45 and Ter-119 [42,43] (Fig. 2). A similar immunophenotype was also found on human PSC-derived HE with co-expression patterns of surface markers CD144, CD31, KDR, CD117 and CD34 and lack of CD43 [44][45][46]. In combination with these markers, the lack of CD73 expression was identified to demarcate endothelium with hemogenic potential from non-hemogenic endothelium. During the transition from endothelial cells towards a hematopoietic cell type, endothelial cells gradually lose endothelial characteristics, and concomitantly acquire a hematopoietic phenotype and morphology [41,47]. In humans, the early emerged hematopoietic committed cells can be identified based on the surface markers CD43, CD34, CD144, CD117, CD90, CD45, CD105, low CD38, and the lack of CD45RA (Fig. 2) [31,46,48,49]. Although the concept of the HE as a precursor of hematopoietic cells is best studied in the AGM region, several other endothelial sites with hemogenic potential have been described in the YS, placenta, major arteries (umbilical and vitelline arteries) and head [30,[50][51][52][53][54][55][56][57]. It was shown that lymphoid cells and, more strikingly, HSCs are predominantly derived from arterial-type hemogenic endothelium [58][59][60][61]. 
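The immunophenotypes listed above are essentially boolean marker combinations, which can be written down as a simple classification rule. The sketch below is only a mnemonic for the human PSC-derived HE phenotype quoted in the text (CD144+ CD31+ KDR+ CD117+ CD34+ CD43− CD73−); real flow-cytometry analysis gates on fluorescence intensities and is considerably more involved.

```python
# Toy illustration of the marker logic summarized above, treating each marker as
# a simple +/- call. This is a mnemonic for the phenotypes quoted in the text,
# not a flow-cytometry analysis pipeline.

HE_POSITIVE = {"CD144", "CD31", "KDR", "CD117", "CD34"}   # human PSC-derived HE
HE_NEGATIVE = {"CD43", "CD73"}                            # excludes committed / non-hemogenic cells

def is_putative_hemogenic_endothelium(positive_markers: set[str]) -> bool:
    """True if an event carries all HE markers and none of the exclusion markers."""
    return HE_POSITIVE <= positive_markers and HE_NEGATIVE.isdisjoint(positive_markers)

# A CD144+CD31+KDR+CD117+CD34+ event lacking CD43 and CD73 is flagged as putative
# HE, whereas acquisition of CD43 (hematopoietic commitment) removes it.
print(is_putative_hemogenic_endothelium({"CD144", "CD31", "KDR", "CD117", "CD34"}))          # True
print(is_putative_hemogenic_endothelium({"CD144", "CD31", "KDR", "CD117", "CD34", "CD43"}))  # False
```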
However, are all of these hemogenic cells identical, or are there functional, transcriptional, and/or developmental heterogeneities among HE cells? Fig. 2 Simplified model of HSC emergence through different intermediate endothelial stages in mice and humans. Two major fate decisions precede HSC emergence in a dynamic process. Primitive endothelial cells first acquire an early arterial fate, followed by hemogenic endothelial specification, segregated from mature arterial endothelium. The early arterial endothelium-derived hemogenic endothelium (HE) gives rise to pre-HSCs through a gradual endothe-lial to hematopoietic transition. All intermediate developmental stages can be segregated based on their functionality and different gene expression and surface marker profiles. The phenotype of the different developmental stages is based on a combination of PSC differentiation, in vivo lineage tracing and single cell transcriptome fate mapping. Figure includes data from references [31, 41-49, 64, 71, 73, 78, 112] Signal transduction and gene expression patterns in HE to HSC transition Notch signaling has a central role during HSC emergence, endothelial development, and arterial identity of the endothelium [62][63][64][65][66][67]. Notch knockout studies in zebrafish demonstrated that primitive hematopoiesis is independent of Notch signaling. Definitive hematopoiesis and HSC emergence necessitated Notch signaling and were linked to runx1 expression as a direct downstream target [68]. Similarly, HE demonstrated pre-existing arterial endothelial characteristics, suggesting an arterial endothelium as a direct precursor [69]. The hematopoietic commitment of the arterialized endothelium was initiated by runx1 expression and, as a consequence, resulted in downregulation of runx1regulated arterial genes like sox17 or the Notch ligand dll4 in zebrafish [69]. Interestingly, the dosage of Notch signaling and the balance between Notch-Dll4 and Notch-Jag1 signaling was described to be crucial for either arterial endothelial or hemogenic cell fate in mice [70]. High Notch-signaling activity through Dll4 favors arterial endothelial specification, whereas low Notch signaling through Jag1 activates hematopoietic genes and commitment [70]. In the same study, Jag1-induced microRNA expression was described, which might posttranscriptionally regulate the endothelialassociated gene expression [70]. Single-cell RNA sequencing of different developmental stages of the human AGM validated the arterial origin and mapped the developmental fate of HSCs through an arterial endothelium, and an intermediate arterialized HE [71]. This approach identified an ETV2-expressing endothelial precursor, which independently gave rise to both arterial and venous endothelium with distinct hemogenic potential. The human AGM region HE exhibited typical expression of genes, associated with arterial-type endothelium (e.g., GJA5, GJA4, HEY2, CXCR4, DLL4, MECOM, and HES4), including crucial genes of the NOTCH-signaling pathway and was almost entirely absent of venous characteristics. Along with arterial HE differentiation, expression levels of EMCN, RUNX1T1 and PROCR were increased, which decreased upon hematopoietic commitment, concomitant with upregulation of PTPRC, ANGPT1 and SPINK2 in emerging HSCs [71]. 
Interestingly, the same study identified the surface marker CD44, a marker previously described to be expressed in the inner layer of endothelial cells in the DA [72], to be almost explicitly expressed on arterial endothelial cells with hemogenic potential, but seldom on venous HE [71]. Thus, CD44 might be a suitable marker to characterize the developmental stages and identify arterialized HE and HSC emergence for in vitro differentiation. In line with this approach, single cell transcriptome analyses used to map the fate of endothelial cells towards hematopoietic cells in mid-gestational mice AGM regions between E9.5 and E11 similarly demonstrated an early arterial endothelial precursor of HE [73]. Computational prediction of their singlecell RNA-sequencing data revealed two bifurcations and fate decisions of endothelial cells during HE specification, which were distinguishable by their gene expression (Fig. 2). The first fate decision occurred in primitive endothelial cells between a venous endothelial phenotype and a primitive arterial-type endothelium. Later, the early arterial endothelium acquired either a mature arterial phenotype (late arterial endothelial cells) or became HE with the capacity to generate committed HSC precursors (pre-HSC) [73] (Fig. 2). Pre-HSCs can be subdivided into pro-HSCs, type I pre-HSCs and type II pre-HSCs based upon their maturation stage and engraftment capability [61,[74][75][76]. Surprisingly, the same study described a bi-potent rare, putative committed pre-HSC capable of endothelial and hematopoietic specification [73,77]. This finding underlines the dynamic progress of this transition and raises the question at which time point the final commitment of pre-HSCs occurs. Similar to the transient upregulation of PROCR (also known as EPCR or CD201) expression upon hemogenic fate specification in the human AGM [71], CD201 marked murine HE populations and pre-HSCs [73,78] and could be a putative marker for in vitro-derived HSC-primed, arterialized HE. Interestingly, EPCR was further found to mark engraftment-and reconstitution-competent HSCs derived from human cord blood CD34 + cells that were expanded with UM171 [79]. Emergence of HSCs from the HE In vivo, the vast majority of the endothelial cells within the hematopoietic sites during ontogeny are vascular endothelial cells without hemogenic potential. Only small subsets of endothelial cells demonstrate the capacity for de novo hematopoietic cell generation. During murine embryogenesis, hematopoietic cells arise through an intermediate HE between E7.25 [55] to shortly after birth [80]. During the EHT process in the DA of the E10.5 AGM, endothelial cells and the derived hematopoietic cells are organized in clusters, attached to an endothelial layer and bud into the lumen of the vessel (Fig. 3) [81][82][83]. For the DA, these clusters are later referred to as intra-aortic hematopoietic clusters (IAHCs) and the formation is highly conserved and described for several vertebrate species [30,84,85], including humans [49]. Not all hematopoietic cells that arise in the AGM region through IAHCs are bona fide mature HSCs. In mouse, IAHCs consist of an HSC precursor (type II pre-HSC) and already committed hematopoietic progenitors. 
However, in mice, the majority of the IAHCs probably comprise very immature pre-HSCs (type I or pro-HSCs) that are not yet capable of long-term engraftment or multi-lineage reconstitution of neonate recipients [74][75][76], but are able to progressively mature towards bona fide HSCs within different hematopoietic sites or neonatal environments [34,58,86]. While IAHCs are also formed within the lateral, dorsal and ventral endothelial layers, preferentially the ventral section had the autonomous capacity to generate HSCs with reconstitution potential [29]. This implies a dorso-ventral polarity of HSC generation and indicates a putative functional heterogeneity among IAHCs and, consequently, HE. Astonishingly, the presence and functional activity of pre-HSCs have also been shown in the dorsal domain of the DA [87]. In accord with these findings, RNA-sequencing comparison of murine dorsal IAHCs and ventral IAHCs between E10 and E11 revealed only minor differences in the transcriptome [75]. However, the most dramatic transcriptional changes were observed mainly in the ventral IAHCs during the EHT process, as well as during formation and maturation of pre-HSCs [75]. This indicates an instructive role of anatomically distinct environments and, as a consequence, differential influences of signaling and cell-extrinsic factors on HE and IAHCs. Interestingly, the expression of hematopoietic-associated genes such as Runx1 and Gata3 was described in murine CD45− mesenchymal cells, which are located ventral to the DA [88][89][90][91]. This led to the hypothesis of a putative different and direct origin of HSCs, and it has been speculated whether the sub-aortic mesenchymal floor of the AGM has an instructive role or directly gives rise to pre-HSCs. In vivo lineage-tracing studies in the murine AGM indicated that this sub-aortic mesenchyme is not a direct progenitor of HSCs [39]. Some evidence suggests that the early, transient, lateral plate mesoderm-derived mesenchymal population may contribute to the aortic floor endothelium [92], which, in turn, has the capacity for hematopoietic cell generation through an intermediate endothelium [39]. However, the role of the sub-aortic mesenchyme is still a matter of controversy in the field, which remains to be resolved [93]. The sub-aortic mesenchyme potentially provides an instructive microenvironment and signaling that supports EHT and, subsequently, HSC emergence.
Fig. 3 This endothelial-to-hematopoietic transition (EHT) and HSC emergence is regulated and directly or indirectly influenced by signaling and cell-extrinsic factors from the microenvironment (e.g., vascular endothelial cells (light blue) and perivascular mesenchyme (purple)).
Elegant ex vivo studies of the mouse E10.5 AGM region identified that interactions among three main signaling pathways favor HSC emergence in the ventral domain of the DA [87]. Overlapping gradients and asymmetric patterns of (1) urogenital ridge and ventral DA domain-derived stem cell factor (Scf), (2) sonic hedgehog (Shh) produced in the dorsal domain of the DA and (3) ventral Bmp4 inhibition through Noggin expression were found to be essential for the generation of HSCs in the ventral DA domain [87]. Similarly, different key signaling pathways were described to be indispensable
Early during zebrafish embryonic hematopoiesis, Bmp4 signaling was described to promote hematopoietic specification from mesoderm, mainly through induction of Wnt and, as a consequence, upregulation of caudal-related homeodomain (Cdx) TFs [94]. In mice, Cdx1 and Cdx4 are directly regulated by Wnt signaling [95,96] and Cdx genes have been further described to control cell fate determination through Hox gene regulation [97]. These key signaling pathways are mostly conserved among vertebrates and have been exploited to direct in vitro hematopoietic differentiation. All of these pathways act as a dynamic, complex network of interacting signaling cascades to precisely mediate control of developmental stages and cell fate decisions. Although there is remarkable evolutionary conservation among vertebrate genomes, considerable genetic differences between human and other vertebrate species have been observed, which contribute to crucial dissimilarities during embryonic hemato-endothelial development. These differences can be significant and might preclude the transfer of developmental concepts and regulation from model organisms to human developmental processes. Therefore, the use of human cells and especially differentiation of human PSCs emerged as a powerful tool to mimic and investigate human developmental processes and their regulation in vitro. However, many of the developmental concepts observed in model organisms, the cell-extrinsic and -intrinsic regulation of hemato-endothelial development, have been used to design successful hematopoietic differentiation protocols of human PSCs in vitro. The insights of these PSC-based hematopoietic differentiations can be used to validate the in vivo findings and complement and shape our knowledge about human hematopoietic ontogeny. Directed differentiation of PSCs towards hematopoietic cell types Directed differentiation is based on recapitulating and mimicking key aspects of embryonic hematopoiesis and the regulation of ontogenetic processes in vitro by instructive cellextrinsic factors [45,[98][99][100][101][102][103][104][105]. Directed hemato-endothelial differentiation of PSCs has been explored for decades [100,106] and paved the way for the upcoming differentiation protocols. While many current directed differentiation protocols rely upon well-characterized serum-free medium components, there are still some less well-defined culture components such as factors derived from co-cultivation systems. In addition, it is difficult to quantify effects of cellextrinsic factors like cell-cell interactions on in vitro differentiation. This limits accurate control and reproducibility of the differentiation process and has thus far only produced a limited range of mature hematopoietic lineages and HPCs without long-term reconstitution potential. Thus, it is likely that directed hematopoietic differentiation rather resembles the first, transient, HSC-independent waves of embryonic hematopoiesis. Usage of more defined, serum-free media, defined morphogens, small molecules and culture conditions enables the production of hematopoietic cells and HE in a more defined and reproducible manner. However, the generation of PSC-derived, bona fide HSCs under in vitro conditions remains a significant challenge and is still a high priority in the fields of hematology and regenerative medicine. 
These restrictions are probably due to the lack of detailed understanding of human hemato-endothelial ontogeny and the limitations of recapitulating complex, dynamic, multifactorial developmental processes in vitro. Although the clinical use of PSC-derived HSCs remains to be achieved, in vitro hematopoietic differentiations undoubtedly contributed to our current understanding of early human hematopoietic development. More importantly, early experiments provided compelling evidence that crucial stages of human ontogeny can be modeled in vitro. Several studies have convincingly demonstrated that primitive and definitive hematopoietic cells arise through specialized endothelial cells with hemogenic capacity [45,55,[107][108][109][110][111]. Mostly, directed differentiations generate different subtypes of mesodermal progenitors and cells with different hemogenic or vascular endothelial potential. Choi et al. identified hemogenic endothelial cells based on the immunophenotype CD144 + /CD73 − /CD235a − /CD43 − . This HE gave rise to HPCs with an enhanced myeloid and erythroid lineage potential [45], but, more importantly, they neatly dissected hemato-endothelial specification from human PSCs and identified populations of cells with distinct endothelial and hematopoietic potential [45]. This indicates the simultaneous emergence of transient primitive and definitive hematopoietic programs in vitro and potential functional heterogeneity of the hemogenic capacity of the endothelial cells. Single cell transcriptional analysis of human iPSCderived CD34 + cells confirmed the functional heterogeneity. Transcriptional stages of HE cells (CD34 + /CD43 − /CD90 + / CD73 − /CXCR4 − ) during the narrow window of the EHT process were dissected and were used to identify sub-populations with distinct hematopoietic lineage potential [112]. Based on these findings, it was hypothesized that the distinct hematopoietic lineage capacities are defined within the cell populations at the EHT stage, and therefore, before the complete loss of endothelial characteristics [112]. A different study proposed that distinct hematopoietic potential is already determined during mesodermal patterning. The erythroid surface marker CD235a was surprisingly found to be expressed on mesodermal, KDR + precursor cells, fated to the primitive hematopoietic lineages [105]. In contrast, the KDR + /CD235a − mesodermal population could generate a broader spectrum of mature hematopoietic lineages, including T-lymphoid cells [105]. This fate determination was attributed to a dynamic interplay between the WNT signaling pathway and Activin-Nodal signaling [105] and has also been linked to the CDX-HOX pathway [113]. Similarly, modulation of the WNT and Activin-Nodal signaling pathways in mesodermal cells resulted in the upregulation of CDX4 and, as a putative consequence, upregulation of HOXA3, HOXA5, HOXA7, HOXA9 and HOXA10 expression. Moreover, this modulation directed endothelial cells towards a SOX17 + aorta-like endothelial cell phenotype with hemogenic potential [114]. Although these studies demonstrated an enhanced hematopoietic potential, the definitive hematopoietic potential was measured based on the emergence of T-lymphoid cells. Generation of HSC-like cells with repopulating potential was not observed [105,113,114]. This suggests that the cells were either a progenitor of the transient EMP/LMPP hematopoiesis or indicated the requirement for additional, complementary regulatory factors or signaling to facilitate HSC function. 
Consistent with the pivotal role of Notch signaling during endothelial development, formation of the dorsal aorta and, as a consequence, HSC emergence [65][66][67][68][69][70], NOTCH-DLL1 signaling facilitates arterialization of human PSC-derived HE in vitro [64]. Interestingly, and in contrast to murine in vivo data [70], immobilized JAG1-Fc had only minor effects on hematopoiesis. Activation of NOTCH signaling through the immobilized NOTCH-ligand DLL1-Fc in CD31+ (PECAM1, an endothelial-specific marker) cells led to the upregulation of typical NOTCH downstream genes (HES1) and expression of typical arterial-associated genes (e.g., DLL4, EFNB2, HEY2, SOX17, and CXCR4) in a transient, CD144+/CD73−/CD43−/DLL4+ HE population. This HE had the capacity to undergo EHT and produce lymphoid, myeloid and erythroid cells in a NOTCH-dependent manner. In contrast, the non-arterialized HE population (CD144+/CD73−/CD43−/DLL4−) showed mostly primitive hematopoietic potential. Although the arterialized HE was able to give rise to definitive lympho-myeloid hematopoietic cells, these cells were not engraftment-competent HSCs. Most strikingly, the arterialized HE had the capacity to generate hematopoietic cells only in co-culture with OP9-DLL4 cells, but not under defined, serum-free conditions. Thus, additional, unknown stroma-cell-derived factors that activate or inhibit different signaling pathways appear to be crucial for the EHT process.

The overall mode of action of signaling pathways is mostly similar. A cell-extrinsic signal is converted into a cellular response through intracellular signaling cascades and usually results in gene expression changes. TFs are often direct targets of signaling cascades, which directly alter the transcriptional response and, subsequently, the downstream regulation of associated genes and transcriptional networks. Overexpression of these downstream TFs might bypass or provide shortcuts to complex cellular processes, cell-cell interactions and signaling cascades and might help to simplify demanding differentiation protocols and improve hemato-endothelial differentiation processes.

Transcription factor-mediated enforced hematopoietic specification

Alternative approaches have emerged to overcome the limitations of directed hematopoietic differentiation strategies by generating HE, hematopoietic cells, or even HSC-like cells through ectopic expression of cell fate-determining TFs [115,116]. These TFs can either be overexpressed in (1) mature cell types for direct conversion into less committed intermediate precursors, or (2) PSCs for forward programming into specific lineages (Fig. 4 and Table 1). The identification of master regulators and, more importantly, interacting TF combinations and transcriptional networks is crucial for both strategies. Many TFs have been used in in vitro differentiation approaches based on their described key roles during vertebrate ontogeny in vivo. Expression of the TF combination Gata2, Gfi1b, and cFos (enhanced with Etv6) induced hematopoietic potential in murine fibroblasts. The transduced fibroblasts formed endothelial-like structures and produced hematopoietic cells in a dynamic process through a Tie2+ CD144+ CD31+ endothelial intermediate [117]. Similarly, ectopic expression of these TFs (GATA2, GFI1B and FOS) was later used for initial induction of an endothelial signature, followed by hematopoietic gene expression, in human fibroblasts [118].
The hematopoietic cells arose through an endothelial intermediate and demonstrated an HSC-like immunophenotype of CD34+/CD49f+/CD90+/CD38−/CD45RA− [118], similar to the phenotypic definition of human cord blood HSCs [119,120]. Most strikingly, these cells demonstrated moderate multi-lineage reconstitution potential in NSG mice up to 12 weeks post-transplantation [118]. While GATA2, GFI1B and FOS form a transcriptional complex that initiates expression of endothelial and hematopoietic genes, GATA2 was described to be the dominant transcription factor in this complex [114]. The conserved function of Gata2/GATA2 in mice and humans indicates the cooperative, dominant and instructive role of Gata2/GATA2 for the induction of hemato-endothelial programs. In vivo, conditional knockout of Gata2 cis-regulatory elements in the murine AGM region resulted in diminished Scl and Runx1 expression and abolished HSC generation from HE [121]. Gata2 knockouts in CD144+ endothelial cells resulted in similar effects, along with a lack of IAHC formation in the murine DA and of HSC generation [122]. In vitro differentiation of human embryonic stem cells (ESCs) suggested that GATA2 is crucial for the EHT process [123], likely due to transcriptional regulation of downstream targets. In mice, the Runx1 cis-regulatory element (+23 Runx1 enhancer) contains Gata and Ets motifs that regulate transcription [124]. Further, a transcriptional complex of Gata2, Fli1 and Scl was found to be recruited to the Runx1 cis-regulatory element, which placed the key hematopoietic TF Runx1 directly downstream of these TFs [124]. Overexpression of some of these TFs (Erg, Gata2, Lmo2, Runx1c, Scl) in murine fibroblasts induced hematopoietic specification and generation of multipotent HPCs through an intermediate endothelial stage, with expression of typical endothelial markers (e.g., Cdh5, Tie2, Pecam1, and Vwf) [125]. These multipotent progenitors demonstrated an HSC-like immunophenotype, with robust erythroid, megakaryocytic and myeloid potential as well as lymphoid potential after loss of p53 function, but only short-term reconstitution ability of predominantly erythroid cells [125]. These approaches indicate that various transcriptional regulators, or even one specific TF, might be sufficient to activate and regulate similar gene regulatory networks to induce hemato-endothelial specification. However, the generation of bona fide HSCs was not achieved with these factors; this shortcoming might reflect the requirement for additional, complementary regulatory factors or signaling.

A screening approach identified 6 out of 36 HSC-associated transcriptional regulators to induce re-specification of committed, murine lymphoid and myeloid progenitor cells into HSCs without an endothelial intermediate [126]. Transient overexpression of these six transcriptional regulators, Runx1t1, Hlf, Lmo2, Prdm5, Pbx1 and Zfp37, was sufficient to confer HSC functionality and, most strikingly, long-term, multi-lineage reconstitution potential in primary and secondary recipients [126]. Interestingly, transient ectopic expression of these factors was sufficient to sustain HSC functionality in vivo and stably activate gene regulatory networks that govern HSC function and identity [126]. Taking advantage of the close ontological relation between endothelial cells and hematopoietic cells, enforced expression of the four TFs FOSB, GFI1, RUNX1 and SPI1 reprogrammed human, non-hemogenic mature and fetal endothelial cells into self-renewing, engraftment-competent multipotent progenitors, although with insufficient T cell potential [127].
An immortalized endothelial cell line that was previously described to support HSC expansion, likely through AKT-regulated factors, was shown to contribute to an instructive niche, which is crucial for the formation of the HE, the EHT process and the generation of multipotent progenitors [128]. More recently, overexpression of the same TFs (Fosb, Gfi1, Runx1 and Spi1) together with these vascular-niche-derived factors was sufficient to fully reprogram adult murine endothelial cells into HSCs with proper functionality [115]. While these approaches were mostly initiated from mature somatic cells, TF-mediated differentiation has also been used to direct hemato-endothelial specification from PSCs. In vertebrates, the ETS (E26 transformation-specific) family of TFs contains approximately 30 members (e.g., FLI1, ERG, ETV2, ETV6, SPI1, and ETS1), which have been described as key TFs that regulate early vasculogenesis and hematopoietic development [129]. The ETS-family TF ETV2 (ETS variant 2) is expressed early during mesodermal formation in cells with endothelial and hematopoietic potential [130,131] and was shown to induce expression of several endothelial- and hematopoietic-associated downstream targets [132], indicating that ETV2 governs activation of hemato-endothelial transcriptional networks. Knockout studies in mice further supported this crucial role of Etv2 during endothelial development. Etv2 ablation resulted in significantly diminished Kdr expression and early embryonic lethality due to a complete lack of endothelial and hematopoietic specification [132,133]. In vitro differentiation experiments validated the crucial and instructive role of ETV2. Ectopic expression of ETV2 induced expression of endothelial-associated genes (e.g., FLI1, ERG, CDH5, KDR, and PECAM1) and, more importantly, was sufficient to directly convert human fibroblasts into functional endothelial cells [134]. Inducible overexpression of ETV2 in human iPSC-derived, mesodermal-primed cells resulted in an almost pure population of cells with a vascular endothelial immunophenotype (CD144+/CD73+) [135]. Similarly, transient expression of exogenous ETV2 from modified and stabilized mRNA efficiently generated functional endothelial cells with the ability to form perfused vascular networks in vivo [136]. However, overexpression of ETV2 alone was not described to robustly induce hemogenic potential. A gain-of-function screen of hemato-endothelial-associated TFs in human PSCs revealed synergistic effects of ETV2/GATA2 or SCL/GATA2 on hematopoietic specification [137]. Both TF combinations induced a hemato-endothelial program and generated hematopoietic cells through an intermediate endothelial state with distinct hematopoietic lineage potential [137]. Using a forward programming approach in human iPSCs, controlled overexpression of the TF combination SCL/LMO2/GATA2/ETV2 robustly induced hemato-endothelial specification with an almost pure population of cells with an HE-like phenotype and, subsequently, multi-lineage HPCs [135]. However, both attempts [135,137] demonstrated restricted lineage potential (erythroid, myeloid, megakaryocytic) with lymphoid limitations and, most strikingly, without significant engraftment and reconstitution potential. Furthermore, collective overexpression of the six TFs Gata2, Lmo2, Scl, Sox17, Pitx2 and Mycn directly converted murine PSCs into hemato-endothelial cells, smooth muscle cells and hematopoietic cells [138].
Downregulation of these TFs resulted in the generation of multilineage hematopoietic cells through an endothelial intermediate, which were, however, restricted to the erythroid, myeloid and megakaryocytic lineages [138]. In concordance with the crucial role of the arterial identity of definitive HE in vivo [59-62, 65, 67, 69, 71, 73] and the impact of arterialized HE through Notch signaling in vitro [64], overexpression of ETS1 or modulation of the MAPK/ERK signaling pathway induced HE with arterial characteristics and enhanced lineage potential [139]. Upon ectopic ETS1 expression at the mesodermal stage, the formation of KDR+/CD144+ endothelial cells was increased. In this endothelial population, arterial-associated genes were upregulated, including CXCR4, EFNB2, SOX7, SOX17, SOX18 and genes of the NOTCH signaling pathway such as DLL4, NOTCH1, NOTCH4, and HEY1. The venous-specific gene NR2F2 was not upregulated upon enforced ETS1 expression, implying an arterial-specific effect of ETS1 and suggesting that HE, similar to the vascular endothelium, can acquire an arterial identity. The resulting arterialized CD144+/CD43−/CD73−/DLL4+ HE generated an increased number of CD45+/CD235−/CD41a− HPCs with erythro-myeloid and lymphoid potential. In line with a similar approach [64], the effect of ETS1-mediated arterialization and enhanced hematopoietic potential was primarily mediated through upregulation of the NOTCH-ligand DLL4 and activation of NOTCH-mediated signaling [139]. However, both studies [64,139] failed to achieve short- or long-term engraftment and led to speculation about the necessity for additional, arterialization- and NOTCH-independent mechanisms that regulate HSC specification, such as the HOXA gene cluster. A variety of TFs act as master regulators and govern endothelial as well as hematopoietic ontogenesis. Transient overexpression of the MLL (mixed lineage, myeloid/lymphoid leukemia)-fusion protein MLL-AF4 reprogrammed human iPSC-derived hematopoietic cells into highly engraftment-competent HSCs [140]. Although these HSCs demonstrated high levels of engraftment and reconstituted both lymphoid and myeloid lineages, the MLL-AF4-induced iPSC-derived HSCs were prone to leukemic transformation after transplantation [140]. In vivo, MLL is a positive regulator of Hox genes through direct binding to promoter sequences [141,142] (as discussed below). HOXB4 in particular was shown to enhance self-renewal, hematopoietic capacity and, most strikingly, engraftment and repopulating potential in mice [143]. Overexpression of HoxB4 in murine yolk sac hematopoietic progenitors or murine ESCs enabled the generation of HSC-like cells, which were able to engraft and reconstitute lympho-myeloid hematopoiesis in irradiated murine recipients [144]. Similarly, enforced expression of the LIM-homeobox TF Lhx2 conferred long-term reconstitution potential to murine ESCs and iPSCs in primary and secondary recipient mice, however without T-lymphoid contribution [145]. In contrast to murine ESCs/iPSCs, the repopulating capacity of human ESC/iPSC-derived hematopoietic cells was not positively affected by overexpression of HOXB4 [146], indicating considerable differences between the transcriptional regulation of human and murine hematopoietic development. Nevertheless, these studies convincingly demonstrated that overexpression of a single TF can significantly influence PSC differentiation. However, these studies mainly focused on ESC/iPSC-derived HSCs.
The direct intermediate precursor, the HE, was neglected. Later, ectopic HOXB4 expression at the KDR+ stage of differentiated ESCs was associated with the promotion of HE formation [147]. The acquisition of the HE cell fate was linked to a shift of the transcriptional signature and upregulation of crucial genes and TFs for endothelial specification and hematopoiesis, such as Cdh5, Cd34, Scl, Gata2, Erg, Fli1, Lyl1 and Lmo2 [147]. Combinatorial expression of some of these TFs has been used to direct hemato-endothelial differentiation in several approaches. The HOX genes are located in different clusters, HOXA-HOXD, characterized by the common homeobox DNA-binding domain [148]. The specificity and selectivity of HOX TFs are relatively low and are mostly mediated and increased through co-factor binding [149]. In line with this low specificity, the functions of HOX TFs during embryogenesis are diverse. Members of the HOX gene clusters are required for maintenance and self-renewal of hematopoietic progenitors or HSCs [143,150]. Expression of the HOX gene clusters is controlled by upstream regulators, such as MLL [142], members of the CDX TFs (CDX1, CDX2, and CDX4) [97] or the retinoic acid signaling pathway [151]. Dysregulation of HOX TFs has been associated with different hematopoietic malignancies [148], reflecting the crucial role and complexity of regulation of the HOX gene clusters. Hox knockout studies validated their crucial function during hematopoietic ontogeny and HSC maintenance. HoxA9 knockouts in particular demonstrated severely impaired HSC self-renewal and proliferation [152] and significantly decreased reconstitution capacities of fetal liver HSCs in mice [153]. It was hypothesized that the medial HOXA genes have a key role during hematopoietic differentiation and that the lack of HOXA expression might be a significant barrier that prevents the in vitro generation of human PSC-derived bona fide HSCs [151]. Several gain-of-function studies validated the crucial role of HOX genes. Ectopic expression of HOXA9 alone was insufficient to confer self-renewal or long-term repopulation potential to human ESC-derived HPCs [154]. A different approach identified crucial TF combinations to overcome erythro-myeloid restriction and confer enhanced, HSC-like properties to human PSC-derived hematopoietic cells [155]. An extensive in vitro screen identified the TF combination HOXA9, ERG and RORA as sufficient to re-specify the myeloid-restricted, PSC-derived CD34+/CD38− HPCs to a proliferative, self-renewing stage with an enhanced erythroid and lymphoid lineage potential. The addition of SOX4 and MYB overexpression enabled short-term myelo-erythroid engraftment. Although ectopic expression of these TFs enhanced the stem cell properties of the formerly restricted HPCs, long-term engraftment and multi-lineage reconstitution were not achieved. Interestingly, it was hypothesized that the definitive hematopoietic program and HSC generation in vitro might be actively repressed through epigenetic silencing [156]. A screening experiment for DNA- and histone-modifying factors that repress the definitive hematopoietic program and multipotency identified EZH1 as a crucial repressor. EZH1 is a component of the Polycomb repressive complex 2 and mediates target-site-specific epigenetic silencing through histone methylation.
Strikingly, EZH1 was found to directly bind promoters of HSC-associated genes, such as HLF, HOPX, MEIS1, PRDM16, LMO2, ETS1, HES1, RUNX1 and the HOX clusters. EZH1 knockdown increased gene expression of arterial- and HSC-associated genes such as NOTCH, HES1, HEY1, SOX17, RUNX1T1 and FOXC2, and elicited robust T and B cell potential in the previously described [155] differentiation protocol [156]. In mice, Ezh1 deficiency or haploinsufficiency increased HSC frequencies compared to wild-type animals and stimulated the precocious generation of bona fide HSCs during in vivo ontogenesis, presumably through enhanced accessibility of key HSC TF-binding sites [156]. A combined approach that applied directed differentiation and TF-mediated specification was shown to confer HSC-like functionality to human PSC-derived HE [116]. A library of 26 fetal liver HSC-enriched TFs was used to screen for a factor combination that confers HSC functionality to a PSC-derived HE population. The CD34+/KDR+/CD43−/CD235a− endothelium was transduced with this library and 24 h later injected intrafemorally into sublethally irradiated mice. Multi-lineage engraftment of myeloid, erythroid and lymphoid lineages was observed 12 weeks post-transplantation. Enrichment of the seven TFs HOXA5, HOXA9, HOXA10, ERG, LCOR, RUNX1 and SPI1 was consistently detected, indicating that these factors enabled self-renewal, engraftment and multi-lineage reconstitution potential [116]. Engraftment of secondary recipient mice validated the self-renewal capacity conferred by the seven TFs. However, compared to cord blood CD34+-transplanted mice, the robustness of the multi-lineage engraftment (9/76 mice) was lower, and the reconstituted lineages were biased rather than fully recapitulated [116]. This approach suggested that the generation of PSC-derived bona fide HSCs is becoming more feasible. However, the in vitro generation of PSC-derived, bona fide HSCs for clinical use through TF-based strategies remains a high priority that has yet to be realized.

Conclusion and perspectives

In summary, recent work clearly illustrates remarkable progress in the conversion of PSCs and somatic cell types into HE as an important intermediate towards the development of HSCs. As the generation of fully engraftment-competent HSCs with multi-lineage developmental capacity in the sense of definitive hematopoiesis is cumbersome and a goal that remains to be achieved, we can further learn from the natural development of HE, subsequent HSCs and their neighboring niche components to identify crucial extrinsic and intrinsic regulating factors. Here, insights from single-cell transcriptomics, including scRNA-seq, will continue to identify critical developmental steps and cell types and will shed further light on the underlying transcriptional networks, including instructive TFs and their expression levels. Moreover, spatial transcriptomics may further unravel the role of neighboring cells, including crucial components of the microenvironment, necessary for conferring HSC identity, functionality, maintenance and expansion. Correct dosing and timing of expression of transcription factors and extrinsic niche factors will be important to mimic and recapitulate the complex developmental process in vitro, for which state-of-the-art vector systems for regulated, timed and dosed expression will be needed.
Here, transient vector expression systems in particular will be interesting for exploring ways to mimic the waves and levels of hematopoietic factor expression. In addition, transient expression patterns will be desirable to avoid permanent expression of potentially oncogenic TFs and growth factors, and thus reduce (pre)malignant transformation of hematopoietic progenitors. For example, controlled delivery of the necessary TFs at the optimal time window during differentiation could be a further improvement of direct conversion protocols and forward programming strategies. Although enforced overexpression of TFs has been used to increase the hematopoietic potential and functionality of in vitro-derived hematopoietic cells, clinical translation of TF-based approaches remains to be achieved due to insufficient functionality and quantity of the cell product. The knowledge gained from TF-based strategies is helping to elucidate the key regulatory pathways whose modulation is necessary for directed differentiation towards HSCs. Future strategies will exploit this information to generate bona fide HSCs without the potential dangers of transformation due to TF overexpression. Looking into the future, while we are getting closer to being able to generate high-quality transplantable hematopoietic cells, it will be necessary to establish the framework for GLP-/GMP (good laboratory practice/good manufacturing practice)-compliant production, including the generation of standard operating procedures (SOPs) and the inclusion of fully traceable and animal-free reagents in a GLP-compatible lab environment, to create a perspective for upscaling as needed in future clinical trials. For example, GMP-compliant cell modification and TF delivery strategies will have to be developed. Moreover, thinking in the context of next-generation hematopoietic cell transplants, the horizon of combined gene and cell therapeutics should be considered. The use of precision medicine approaches, e.g., clinically used viral vectors and next-generation genome editing tools, will allow the tailored repair of genetic defects in autologous transplants as well as the generation of allogeneic "off-the-shelf" transplants, which may be transplantable to a broad spectrum of patients and diseases, especially in cases in which no suitable HSC donor is available. In addition to HSCs, PSC-derived T, NK and NKT cells are also interesting tools for tailored immunotherapeutics. HLA barriers represent an important bottleneck for allogeneic cell replacement strategies. Here, biobanking of iPSCs for frequently used HLA subtypes or other "off-the-shelf" implementation strategies could be helpful. Taken together, the increasing insights into PSC-derived hematopoiesis as well as the HE may allow the tailored generation of hematopoietic cells for disease modeling, cell therapy and potentially even next-generation transplants.
2021-02-11T06:18:17.782Z
2021-02-09T00:00:00.000
{ "year": 2021, "sha1": "63fa96847e3a07a282bddbb4e7c9bbe46aa1dc49", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00018-021-03777-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "89a6ecee18d60cb210a67efd34f9560aba5a0b45", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
17354540
pes2o/s2orc
v3-fos-license
Expression Levels of GABA-A Receptor Subunit Alpha 3, Gabra3 and Lipoprotein Lipase, Lpl Are Associated with the Susceptibility to Acetaminophen-Induced Hepatotoxicity

Drug-induced liver injury (DILI) is a serious and fatal drug-associated adverse effect, but its incidence is very low and individual variation in severity is substantial. Acetaminophen (APAP)-induced liver injury accounts for >50% of reported DILI cases, but little is known about the cause of individual variations in severity. Intrinsic genetic variation is considered a key element, but the identity of the genes involved has not been well established. Here, a pre-biopsy method and microarray technique were applied to uncover the key genes for APAP-induced liver injury in mice, and a cause-and-effect experiment employing quantitative real-time PCR was conducted to confirm the correlation between the uncovered genes and APAP-induced hepatotoxicity. We identified the innately and differentially expressed genes of mice susceptible to APAP-induced hepatotoxicity in liver tissue pre-biopsied before APAP treatment through microarray analysis of the global gene expression profiles (Affymetrix GeneChip® Mouse Gene 1.0 ST for 28,853 genes). Expression of 16 genes, including Gdap10, Lpl, Gabra3 and Ccrn4l, was significantly different (t-test: FDR <10%), by more than 1.5-fold, in the susceptible animals compared with the resistant ones. To confirm the association with the susceptibility to APAP-induced hepatotoxicity, the expression levels of 4 selected genes (the two highest and the two lowest) were measured in pre-biopsied liver from another set of animals, and their sensitivity to APAP-induced hepatotoxicity was evaluated post hoc. Notably, the expression of Gabra3 and Lpl was significantly correlated with the severity of liver injury (p<0.05), demonstrating that these genes may be linked to the susceptibility to APAP-induced hepatotoxicity.

APAP is a well-known cause of fulminant hepatic failure from over-dosage (Park, 2006). APAP induces hepatotoxicity via the generation of reactive and electrophilic metabolites like N-acetyl-p-benzoquinone imine (NAPQI) by cytochrome P450 that can result in glutathione (GSH)-adduct formation, covalent binding to vital endogenous macromolecules, and depletion of cellular antioxidant reserve (Bessems and Vermeulen, 2001). N-acetylcysteine (NAC) given within 12 h seems to be effective in protecting most patients against severe liver damage induced by APAP (Prescott et al., 1977), but frequently APAP-induced DILI is recognized only after symptoms have developed, by which time NAC is no longer effective, reflecting that APAP-induced hepatotoxicity progresses in multiple stages and that diverse factors may be involved, including inflammation, metabolism and lipid synthesis (Hinson et al., 2010). Moreover, APAP-induced DILI appears with variable degrees of severity ranging from minimal increases in the levels of serum alanine transaminase (ALT)/aspartate transaminase (AST) to severe hepatic necrosis and fatal hepatic failure (Larson et al., 2005; Watkins et al., 2006; Yun et al., 2014), reflecting the existence of "individual factors" in the manifestation of APAP-induced hepatotoxicity. To identify factors that determine the severity of APAP-induced hepatotoxicity or to develop a biomarker to screen susceptible individuals, various approaches have been attempted. Liu et al. (2010) analyzed hepatic gene expression related to APAP toxicity using the resistant strain SJL/J and three sensitive strains, C57BL/6J, DBA/2J, and SM/J. Stamper et al.
(2010) found several genes with significant expression differences between the non-toxic APAP regio-isomer 3-hydroxyacetanilide and APAP in TGF-α transgenic mouse hepatocytes. Also, Umbright et al. (2010) reported that many blood genes associated with inflammation, immune and stress responses, and energy metabolism were statistically different in their expression levels following APAP treatment. While these approaches are effective in the identification of target molecules or markers that can diagnose the extent of APAP-induced hepatotoxicity, their applicability as biomarkers to screen out susceptible individuals is limited, since the exposure to APAP might have extensively altered the genetic landscape due to direct or collateral tissue damage and subsequent inflammatory responses. Recently, to draw an intact genetic picture without interference from APAP-induced toxicity, we compared pre-dose blood gene expression with the subsequent severity of APAP-induced hepatotoxicity in rats in vivo, which demonstrated that the expression level of protein kinase A (PKA) inhibitor alpha (Pkia) in pre-dose blood can be employed to predict susceptible individuals without interference from the exposure to APAP. More importantly, this study provided an insight into the role of PKA in the manifestation of APAP-induced hepatotoxicity. This approach has been further corroborated by Lu et al. (2015), who demonstrated that four genes (Incenp, Rpgrip1, Sbf1, and Mmp12) associated with cell proliferation and tissue repair functions, measured in blood collected from individuals prior and posterior to APAP administration, can be used for identifying populations susceptible to DILI. However, these studies offered surrogate genetic biomarkers in blood rather than in the direct target organ, the liver; therefore, their utility and implications might be limited. In this study, we further investigated the transcriptome of liver biopsied pre-dose to uncover the genetic factors for individual susceptibility to APAP-induced hepatotoxicity in mice in vivo, in an effort to provide a clue to understanding the progression of, and defense mechanisms against, APAP-induced DILI.

Animals

Outbred ICR mice were selected as the experimental species for this study since rats do not manifest a clear blood chemistry profile in response to APAP-induced hepatotoxicity when compared with the responses to carbon tetrachloride or D-galactosamine (Shin et al., 2014). Male ICR mice aged 7 weeks were purchased from Jung-Ang Lab Animals (Seoul, Korea) and housed in a specific pathogen-free (SPF) facility of Ewha Womans University. We used ICR mice, an outbred strain, to retain a certain level of genetic diversity in the test population in the present study. The mice were kept under controlled environmental conditions (23 ± 3°C, 40-60% relative humidity).

Experimental protocol

To identify the innate genes underlying individual variation in liver injury, we analyzed the gene expression profiles in liver pre-biopsied from individual mice prior to the oral administration of APAP (Sigma, St. Louis, MO, USA) and compared them with the individual severity of liver injury after APAP administration, according to the method described previously (Yun et al., 2009, 2010) (Fig. 1). In brief, the mice were randomly assigned into three groups: a negative control group (no biopsy) (N=5), a biopsy control group (N=5), and an APAP group (N=32).
Minimal liver biopsies (about 10 mg) were taken from the left lobe of the liver of anesthetized mice in the biopsy groups (biopsy control group and APAP group) using an isoflurane vaporizer (Midmark, Orchard Park, OH, USA) with isoflurane at 1.5% to 3% and an oxygen flow of 0.5 L/min. The subcutis tissue and the skin were then closed with sutures using Surgifit 6-0 (AILEE, Busan, Korea) and Black silk 6-0 (AILEE). For the assessment of postoperative recovery, ALT, AST, albumin, albumin/globulin ratio (A/G ratio), total bilirubin and bile acid were measured using serum samples from blood collected via the retro-orbital plexus of anesthetized animals into gel serum separator blood tubes (MiniCollect 0.8 ml Z Serum Sep, Greiner Bio-One, Frickenhausen, Germany). After recovery for 3 weeks, the mice were administered APAP through oral gavage at a dose of 300 mg/kg (dissolved in deionized water) according to a previous method (Saha and Nandi, 2009) with minor modifications. At 24 h after administration of APAP, we performed biochemical analysis using blood collected via the postcaval vein from anesthetized animals, and then conducted microarray analysis using the pre-collected liver samples of the 10 animals (5 susceptible and 5 resistant) selected based on the results of the biochemical analysis; gene expression was compared with the individual severity of liver injury after APAP administration to identify the innate genes underlying individual variation in liver injury. Additionally, to determine whether the innate genes selected by the microarray experiment can predict susceptibility to liver injury, we conducted real-time PCR analysis of the selected genes using liver samples biopsied from another set of animals (N=32) and then compared the gene expression with the individual severity of liver injury after APAP administration (300 mg/kg, p.o.). The severity of liver injury was analyzed with biochemical indicators, including ALT, total bilirubin, AST, lactate dehydrogenase (LDH) and bile acid, at 24 h after administration of APAP.

RNA isolation and microarray analysis

The liver tissue biopsied before APAP administration was processed with TRIzol reagent (Invitrogen, Carlsbad, CA, USA) for isolation of total RNA. RNA precipitates were dissolved in RNase-free DEPC-treated water (USB, Cleveland, OH, USA). The concentration of RNA was determined using a NanoDrop 1000 spectrophotometer (NanoDrop Technologies, Inc., Wilmington, DE, USA). Affymetrix (Santa Clara, CA, USA) GeneChip® Mouse Gene 1.0 ST arrays were used to analyze the differential gene expression, as described previously (Yun et al., 2009). The normalized scanned probe array data were compared between the groups to generate p-values and signal log ratios (fold changes). An unpaired t-test was applied to determine statistically reliable probe sets.

Reverse transcription-polymerase chain reaction (RT-PCR)

Relative expression levels of mRNAs were measured by quantitative real-time PCR. Total RNA, extracted from liver prior to APAP treatment, was used to synthesize cDNA using pre-master mix with oligo dT (Bioepis, Seoul, Korea). Each reaction was performed using Power SYBR Green PCR master mix in a 7300 real-time PCR machine (Applied Biosystems, Warrington, UK). Gene-specific forward and reverse primers were used for each target gene. Relative expression of each target gene was calculated relative to the sample with the lowest level of normalized expression of the target genes and expressed as 2^-ΔΔCt.
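The relative quantification described above uses the standard 2^-ΔΔCt calculation. A minimal sketch is given below; the Ct values are hypothetical and the housekeeping (reference) gene is an assumption, since the normalizer is not named in the text.

```python
# Minimal sketch of relative quantification by the 2^-ΔΔCt method.
# Ct values below are hypothetical; "ref" stands for an assumed housekeeping gene.

samples = {
    # sample_id: {"target": Ct of gene of interest, "ref": Ct of reference gene}
    "mouse_01": {"target": 24.1, "ref": 18.0},
    "mouse_02": {"target": 26.3, "ref": 18.2},
    "mouse_03": {"target": 22.7, "ref": 17.9},
}

# Delta Ct = Ct(target) - Ct(reference) for each sample.
delta_ct = {sid: v["target"] - v["ref"] for sid, v in samples.items()}

# Calibrator: the sample with the lowest normalized expression,
# i.e. the largest delta Ct, as described in the text.
calibrator = max(delta_ct, key=delta_ct.get)

# Relative expression = 2 ** -(delta Ct - delta Ct of the calibrator).
rel_expr = {sid: 2 ** -(dct - delta_ct[calibrator]) for sid, dct in delta_ct.items()}

for sid, val in rel_expr.items():
    print(f"{sid}: {val:.2f}-fold relative to {calibrator}")
```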
Statistical analysis

All data were analyzed by Student's t-test or one-way analysis of variance (ANOVA) followed by post hoc Tukey's multiple range tests to determine treatment effects and to compare differences between group means. Differences were considered to be significant at p<0.05.

Assessment of the influences of liver biopsy on liver function of ICR mice

The pre-biopsy scheme requires a liver biopsy in the naïve condition prior to APAP administration to identify the innate genetic factors associated with the individual variation of APAP-induced liver injury afterwards (Fig. 1). Firstly, to check that the pre-biopsy procedure does not affect normal liver function in mice, a small amount of liver tissue (about 10 mg) was pre-biopsied from the identical region of the left lobe, and liver function was evaluated after sufficient postoperative recovery (3 weeks) by measuring ALT, AST, albumin, A/G ratio, total bilirubin, and bile acid in the serum. As a result, liver function in the biopsy control group was not statistically different from that in the negative control group (Fig. 2), suggesting that the effects of the pre-biopsy scheme on liver function are minimal in mice, as shown previously in rats (Yun et al., 2010).

Hepatotoxicity induced by APAP administration after liver biopsy

After liver pre-biopsy and postoperative recovery for 3 weeks, APAP was orally administered at a dose of 300 mg/kg to the ICR mice (N=32), an outbred strain. The APAP-treated mice exhibited clear signs of liver injury, as indicated by increased serum ALT and total bilirubin (Fig. 3). Importantly, there was a substantial inter-animal variation in the levels of these biochemical indicators, suggesting that mice can display different susceptibility to APAP. On the basis of ALT and total bilirubin levels, animals could be divided into two groups, that is, susceptible and resistant animals (top 5 susceptible and bottom 5 resistant; Fig. 3A). Serum ALT and total bilirubin in the susceptible group were substantially higher when compared with both the negative control group and the resistant group (Fig. 3B, 3C). Further histological examination (Fig. 4A) and blood biochemistry (Fig. 4B) also revealed that the animals grouped as susceptible manifested a higher toxic response to APAP than the resistant animals, as readily determined by significantly higher liver toxicity markers and massive hepatic injury, including numerous apoptotic cells and inflammatory cell infiltration, in the liver tissue.

Gene expression analysis with microarray to identify genes associated with susceptibility to APAP-induced hepatotoxicity

To determine the genetic factors associated with these inter-individual variations in APAP-induced hepatotoxicity, microarray analysis was performed with the liver samples of 5 susceptible and 5 resistant animals pre-obtained before APAP treatment, as described in the scheme (Fig. 1). The reliability of the transformed and normalized data was statistically analyzed using one-way ANOVA. This was visualized by hierarchical clustering of the calculated data from the experiment (Fig. 5A). From this analysis, 16 genes (excluding unknown sequences and noncoding genes) were found to be different with statistical reliability at p<0.05, with more than a 1.5-fold difference in expression levels between the two groups (Table 1). Among them, the two genes with the highest innate expression in the susceptible group were Gdap10 and Lpl, and the two genes with the lowest innate expression in the susceptible group were Gabra3 and Ccrn4l (Fig. 5B, 5C).
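Selecting candidate genes as described combines a per-gene unpaired t-test, a fold-change cut-off of 1.5 and an FDR below 10%. A minimal sketch of that filter on log2-scale expression values follows; the simulated arrays and the use of the Benjamini-Hochberg procedure for the FDR step are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical log2 expression: rows = genes, columns = animals
# (5 susceptible, 5 resistant), mimicking the design described above.
susceptible = rng.normal(8.0, 0.5, size=(1000, 5))
resistant = rng.normal(8.0, 0.5, size=(1000, 5))
susceptible[:20] += 1.0  # spike in a few truly different genes for illustration

# Per-gene unpaired t-test between the two groups.
t, p = stats.ttest_ind(susceptible, resistant, axis=1)

# Fold change on log2 data: difference of group means; keep |log2 FC| > log2(1.5).
log2_fc = susceptible.mean(axis=1) - resistant.mean(axis=1)

# Benjamini-Hochberg FDR at 10% (assumed multiple-testing procedure).
reject, q, _, _ = multipletests(p, alpha=0.10, method="fdr_bh")

candidates = np.where(reject & (np.abs(log2_fc) > np.log2(1.5)))[0]
print(f"{candidates.size} candidate genes pass FDR < 10% and fold change > 1.5")
```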
Prediction of the susceptibility of animals to APAP-induced liver injury by real-time PCR analysis of the selected genes

To further confirm whether the individual expression levels of Gdap10, Lpl, Ccrn4l and Gabra3 are indeed related to the inter-individual variation in the susceptibility to APAP-induced liver injury, the expression of these genes was analyzed by quantitative real-time PCR in liver biopsy samples pre-collected from 32 animals before APAP treatment. While no meaningful relationship could be found between the expression levels of Gdap10 and Ccrn4l and the severity of APAP-induced hepatotoxicity, the innate expression level of Gabra3 matched well with the severity of APAP-induced liver injury, as determined by its significant correlation with APAP-induced AST (Spearman correlation analysis, p<0.05; correlation coefficient, -0.4209) (Fig. 6A) and ALT (Spearman correlation analysis, p<0.05; correlation coefficient, -0.4012) (Fig. 6B). Moreover, the innate expression level of Lpl matched well with the severity of APAP-induced liver injury, as determined by its significant correlation with APAP-induced total bilirubin increases (Spearman correlation analysis, p<0.05; correlation coefficient, 0.4052) (Fig. 6C). These relationships were further corroborated by comparing AST, ALT and total bilirubin between groups defined on the basis of the genes (Fig. 6D-6F).

DISCUSSION

Here, we explored the genetic markers in the liver that are associated with the individual susceptibility to APAP-induced hepatotoxicity in ICR mice, employing a pre-biopsy scheme. Through this approach, we could identify that low Gabra3 and high Lpl expression may be related to the susceptibility to APAP-induced hepatotoxicity. More importantly, we could confirm that susceptible animals could roughly be predicted by determining the expression levels of Gabra3 and Lpl in the liver. In addition, this study suggests that Gabra3 may play a protective role against APAP-induced hepatotoxicity in the liver while Lpl may contribute to the development or aggravation of liver damage, which warrants further studies. In the present study, a substantial inter-individual variation in APAP-induced hepatotoxicity was found among individual mice, a species well known to be susceptible to APAP, as evidenced by the histopathological examination and biochemistry analysis. Many previous transcriptomic approaches have examined the genetic landscape after APAP treatment and hence can have limited applicability for the prediction of the individual susceptibility to APAP-induced hepatotoxicity. We previously demonstrated (Yun et al., 2009, 2010) that the pre-dose scheme using pre-biopsied liver, which has a unique ability to fully regenerate after injury for the maintenance of its functions in metabolism and detoxification (Fausto et al., 2006; Michalopoulos, 2007), may provide a useful tool for pre-screening of susceptible individuals and for the discovery of key molecules in the manifestation of toxicity. In the current microarray experiment using the pre-biopsy method, we identified 16 genes, including Gdap10, Klf10, Malat1, and Ccrn4l (unpaired t-test: FDR <10% and fold change >1.5), as candidate genes predictive of the severity of APAP-induced liver injury.
Malat1 has been known to serve as a prognostic marker for metastasis in early stages of lung adenocarcinoma (Ji et al., 2003; Gutschner et al., 2013), and ganglioside-induced differentiation-associated-protein 10 (Gdap10) is one of the Gdap genes involved in different signal transduction pathways (Liu et al., 1999). Klf10 has been known to regulate TGF-β signaling by blocking expression of the negative regulator, Smad7 (Johnsen et al., 2002), and activating expression of the positive effector, Smad2 (Johnsen et al., 2002). Ccrn4l is a gene that encodes a circadian deadenylase, and its disruption in mice resulted in lower body weight and reduced visceral fat, reflecting resistance to fatty liver and diet-induced obesity (Green et al., 2007). Genes encoding cellular components, such as Ankrd55, Cspp1, Dennd4a, Chd7, Taf1d and Pdilt, were also shown to be associated. Other genes, Gas5, Dclre1c, and Meig1, also need further studies to examine their roles in the manifestation of APAP-induced hepatotoxicity. Although DNA microarrays can simultaneously quantitate the expression of thousands of genes, a second methodology, such as quantitative real-time RT-PCR, is required to assess the accuracy of the candidate genes discovered by microarray measurements (Draghici et al., 2006). Furthermore, the margin of difference in expression is often small, which necessitates a verification step. To confirm whether the gene expression profiles detected with the microarray are reproducible and can indeed be used to predict whether an animal will be susceptible or resistant to APAP-induced hepatotoxicity, the liver samples pre-biopsied from another set of animals underwent real-time PCR analysis for the expression levels of 4 genes, comprising the two highest and the two lowest genes selected from the microarray analysis. Among the four candidate genes, it is notable that the innate expression of Lpl in the liver correlated significantly with total bilirubin, which was itself well correlated with the severity of APAP-induced hepatotoxicity (Spearman correlation coefficient 0.4052, p<0.05). Lpl was innately expressed at a higher level in the liver of the group susceptible to APAP-induced hepatotoxicity than in the resistant group. Lpl, which is a member of the lipase superfamily that includes pancreatic, hepatic and endothelial lipase, is widely expressed in many tissues such as liver, brain, heart and adipose tissue (Wang and Eckel, 2009). Kim et al. (2001) demonstrated that the overexpression of Lpl, which has been known to be the rate-limiting enzyme involved in triglyceride hydrolysis (Goldberg, 1996), causes profound insulin resistance in liver (Baron et al., 1988). Insulin resistance can lead to selective accumulation of fatty acid-derived metabolites (i.e., fatty acyl CoA, ceramide, diacylglycerol) in the liver and may be implicated in the development of acute liver failure (ALF) (Clark et al., 2001) through impaired peripheral glucose utilization and a failure to fully suppress endogenous glucose production, contributing to the catabolic state that occurs in ALF. These findings suggest a potential involvement of Lpl in the susceptibility to hepatotoxicity, although little is currently known about the exact mechanism linking it to APAP-induced hepatotoxicity. We also observed that the Gabra3 expression level correlated well with APAP-induced hepatotoxicity, as shown by significant Spearman correlation coefficients of -0.4209 (p<0.05) and -0.4012 (p<0.05) for AST and ALT increases, respectively.
Gabra3 was originally found in the CNS, but its expression in liver tissue has been reported (Moe et al., 2008; Oh et al., 2009). Moreover, Biju et al. (2001) showed that the hepatic GABA-A receptor can provide an inhibitory signal for hepatic cell proliferation. Bozogluer et al. (2012) also showed that flumazenil, a GABA-A receptor antagonist, attenuated APAP-induced hepatotoxicity. Conversely, the expression of Gabra3 was down-regulated following exposure to a hepatotoxicant, 4,4'-methylenedianiline, in mice (Oh et al., 2009). Despite many contradicting results on the role of the GABA-A receptor in the liver, we could speculate that the susceptibility to APAP-induced hepatotoxicity in the animals with innately lower expression of Gabra3 may be attributable to altered signaling for hepatocyte proliferation, which is important for the recuperation process. Previously, we demonstrated that a high expression level of Pkia in pre-dose blood may be related to the susceptibility to APAP-induced hepatotoxicity. In our microarray analysis, however, the difference in Pkia expression between susceptible and resistant animals was only marginal (1.12-fold higher in susceptible animals, with a p-value of 0.055, t-test, data not shown), which may be considered confirmatory but not conclusive. This discrepancy may arise from the different species employed for the studies (SD rats for blood genes, ICR mice for liver genes). Here, we employed ICR mice since this species manifests clear blood chemistry profiles for APAP-induced hepatotoxicity, in contrast to the rat, which requires histology in addition to blood chemistry to monitor APAP-induced hepatotoxicity. These results suggest that other experimental animal species or strains may produce different genetic markers owing to distinct physiology, and translational clinical research in humans is ultimately necessary to draw solid conclusions. In conclusion, we demonstrated that two gene biomarkers in the liver pre-biopsied prior to administration were related to the inter-individual variation in the severity of APAP-induced hepatotoxicity, although other important factors, such as differences in gastrointestinal absorption of APAP (Sanaka et al., 1998) or alteration of metabolic capacity such as CYP2E1 (Lee et al., 1996), could not be examined due to the limitations of the study design and the animal species employed. However, the pre-dose expression of Lpl and Gabra3 in the liver correlated well with the post-dose changes in ALT, AST, and total bilirubin. Accordingly, the data presented in this study suggest a novel role of Lpl and Gabra3 in the liver in the manifestation of APAP-induced hepatotoxicity. In addition, we could speculate that these genes can be employed to screen out individuals susceptible to APAP-induced hepatotoxicity, although further studies should be conducted.
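The susceptibility analysis above rests on Spearman rank correlations between pre-dose expression of Gabra3 or Lpl and post-dose injury markers (AST, ALT, total bilirubin). A minimal sketch of that comparison is given below; the expression and ALT values are hypothetical stand-ins for the measured data shown in Fig. 6.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical pre-dose Gabra3 expression (relative units) for 32 animals
# and their post-dose ALT values (IU/L); the real values come from Fig. 6.
gabra3_pre = rng.lognormal(mean=0.0, sigma=0.5, size=32)
alt_post = 200 / (1 + gabra3_pre) + rng.normal(0, 10, size=32)

# Spearman rank correlation between pre-dose expression and post-dose injury.
rho, p_value = spearmanr(gabra3_pre, alt_post)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```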
2018-04-03T05:39:14.235Z
2016-08-19T00:00:00.000
{ "year": 2016, "sha1": "00c161e46d6771938bc4fdb7bd981b61909c0974", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5340535?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "00c161e46d6771938bc4fdb7bd981b61909c0974", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
229356635
pes2o/s2orc
v3-fos-license
Balance Problems, Paralysis, and Angina as Clinical Markers for Severity in Major Depression

Major depressive disorder (MDD) is a heterogeneous disorder. Our hypothesis is that neurological symptoms correlate with the severity of MDD symptoms. One hundred eighty-four outpatients with MDD completed a self-report questionnaire on past and present medical history. Patients were divided into three roughly equal depression severity levels based on scores from the APA Severity Measure for Depression—Adult (n = 66, 58, 60, for low, medium, high severity, respectively). We saw a significant and gradual increase in the frequency of "muscular paralysis" (1.5–5.2–16.7%) and "balance problems" (21.2–36.2–46.6%) from low to medium to high severity groups. We repeated the analysis using only the two most extreme severity categories: low severity (66 samples) vs. high severity (60 samples). High severity patients were also found to experience more "angina" symptoms than low severity patients (50.0 vs. 27.3%). The three significant clinical variables identified were introduced into a binary logistic regression model as the independent variables with high or low severity as the dependent variable. Both "muscular paralysis" and "balance problems" were significantly associated with increased severity of depression (odds ratios of 13.5 and 2.9, respectively), while "angina" was associated with an increase in severity with an odds ratio of 2.0, albeit not significantly. We show that neurological exam or clinical history could provide useful biomarkers for depression severity. Our findings, if replicated, could lead to a simple clinical scale administered regularly for monitoring patients with MDD.

INTRODUCTION

Major depressive disorder (MDD) is characterized by one or more major depressive episodes (MDEs) and the absence of mania and hypomania throughout an individual's lifetime (1). An MDE includes depressed mood, loss of interest or pleasure, change in weight or appetite, sleep disturbances, psychomotor problems, fatigue, worthlessness or guilt, impaired concentration or indecisiveness, and thoughts of death or suicide (2). Negative outcomes in depression, such as suicidal behavior, highlight the importance of early diagnosis and treatment (3). For an individual to be diagnosed with MDD, at least five symptoms need to be present within a period of 2 weeks. Of these five symptoms, depressed mood or loss of interest and pleasure must be present for a diagnosis of MDE to be made. MDD is a complex and heterogeneous disorder with a wide range of risk factors, severity, and treatment response. Several studies have shown that behavioral and cognitive phenotypes can be useful for biomarker discovery in MDD. Taylor et al. (4) reported that psychomotor slowing was predictive of poor response to fluoxetine, while Gorlyn et al. (5) showed how global cognitive functioning can serve as a marker for predicting selective serotonin reuptake inhibitor (SSRI) treatment response. Lastly, a recent study has shown that movement data collected from wearable devices had a high correlation with depression severity (6). Depression is encountered in different neurological disorders, and idiopathic MDD and "neurologic" depression seem to share common abnormalities in specific brain areas (7). For instance, depressive symptoms have been well documented in patients with stroke (8), epilepsy (9), multiple sclerosis (10), and dementia (11). According to Gutzmann et al.
(2015), the severity of depression increases with increasing severity of neurological impairments. Similarly, Smith et al. (12) found that cognitive performance in individuals with prodromal Huntington disease is related to depressive symptom severity. Moreover, very mild depressive symptoms have also been shown to be associated with gait disturbance in early Parkinson's disease (PD), and it has been hypothesized that depression may influence mechanisms of gait disturbance (13). This hypothesis is in line with the results of a multicenter randomized study showing that gait instability (freezing of gait) in patients with PD responds to treatment with antidepressants (14). Our hypothesis is that neurological symptoms correlate with the severity of the depression symptoms. This study explores whether any findings in non-psychiatric past medical history correlate with depression severity, potentially allowing their use as biomarkers for the prognosis and monitoring of patients with depression.

Participant Recruitment and Data Collection

The study protocol was approved by the Research Ethics Board of the Douglas Mental Health University Institute (DMHUI), the McGill University Health Centre (MUHC) and the Institut Universitaire en Santé Mentale de Montréal (IUSMM). One hundred eighty-four consecutive, unselected patients with major depression, between the ages of 19 and 77 years, were recruited from tertiary outpatient depression clinics at the DMHUI, Allan Memorial Institute (AMI), and IUSMM. All patients were diagnosed by certified psychiatrists using the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). A self-report questionnaire screening for non-psychiatric medical symptoms was given to all study participants after they provided written informed consent. This questionnaire consisted of 49 individual questions relating to distinct categorical clinical variables. It was used to survey participants on their past and present medical history, as well as that of their family, and their educational history. We also surveyed participants for psychiatric medication use. Finally, a self-report questionnaire called the APA Severity Measure for Depression-Adult [adapted from the Patient Health Questionnaire-9 (PHQ-9)] (15, 16) was used to evaluate the severity of each participant's condition. The APA Severity Measure for Depression that was used was developed by the American Psychiatric Association. It was adapted from the Patient Health Questionnaire-9 (PHQ-9), which has been shown to be a reliable and valid test for documenting the severity of depression. This measure was chosen, as it provides more detailed instructions for scoring and interpretation than the PHQ-9, while maintaining the same questions and general marking scheme as the PHQ-9.

Statistical Method

Participants were divided into three severity categories based on their depression severity scores: (1) low severity, scores ranging from 1 to 12 (66 individuals); (2) medium severity, scores ranging from 13 to 18 (58 individuals); and (3) high severity, scores above 18 (60 individuals). Forty-nine categorical clinical variables, including age, sex, education level, neurological features, family history, dietary and gastrointestinal features, cardiovascular features, and other clinical features, were analyzed with the chi-square test in contingency tables (χ²). The age variable was analyzed using a Student's t-test. Significance was set at P ≤ 0.05 (two-tailed).
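A minimal sketch of the severity binning and a chi-square test on one symptom-by-severity contingency table is shown below; the counts are approximate back-calculations from the frequencies reported in the Results, and the use of scipy's chi2_contingency is an assumption consistent with the packages listed in the next paragraph.

```python
import numpy as np
from scipy.stats import chi2_contingency

def severity_category(score: int) -> str:
    """Bin an APA Severity Measure for Depression score as described above."""
    if score <= 12:
        return "low"
    if score <= 18:
        return "medium"
    return "high"

# Contingency table: rows = severity category (low, medium, high),
# columns = symptom reported (no, yes), e.g. for "balance problems".
# Counts are back-calculated from the reported group sizes and frequencies.
table = np.array([
    [52, 14],   # low severity (n = 66)
    [37, 21],   # medium severity (n = 58)
    [32, 28],   # high severity (n = 60)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print("example:", severity_category(15))  # -> "medium"
```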
We then repeated the analyses using only the two most extreme severity categories: low severity (66 samples) vs. high severity (60 samples). Ordinal logistic regression was applied to determine the contributions of the significant variables from the chi-square tests in discriminating low vs. medium vs. high severity depression patients. Variables meeting the significance cut-off from the chi-square test in the three-severity-category analysis were used as input variables for the model. Similarly, binary logistic regression was applied to determine the contributions of the significant variables in discriminating low vs. high severity depression patients. Lastly, a linear regression was applied to model the relationship between the number of psychiatric medications used by patients and their depression severity score. We excluded outlier patients who had more than 6 medications based on a boxplot of medication counts (Supplementary Figure 1). We also calculated the percentage of patients receiving each specific medication in the high vs. low severity category and identified the medications with the largest differences between the two groups. We then examined the side-effects of these medications and verified whether they were related to any of the significant clinical variables identified. The chi-square tests were performed using the scipy python package (ver. 1.1.0), while the binary logistic regression and linear regression analyses were performed using the statsmodels python package (ver. 0.10.1). The ordinal logistic regression analysis was performed using the MASS R package (ver. 7.3-51.3).

RESULTS

For the three-severity-category analysis, a significant difference between observed and expected frequencies was found for the "muscular paralysis" and "balance problems" variables. Of note, as part of our questionnaire these two terms were described as "muscular paralysis (e.g., complete loss of muscle function)" and "problems with balance and/or movement coordination (e.g., falls, bumping into objects, short repeated episodes or progressive episodes)." The low, medium, and high severity categories had patients with "muscular paralysis" with frequencies of 1.5, 5.2, and 16.7%, and "balance problems" with frequencies of 21.2, 36.2, and 46.6%, respectively. We then repeated the analysis with two severity categories. A significant difference was found again for "muscular paralysis" and "balance problems," but also for "angina" (defined as "chest pain and/or chest tightness"). The low severity category had 27.3%, while the high severity category had 50% of patients with "angina." The significant clinical variables ("muscular paralysis" and "balance problems") identified from the three-severity-category analysis were introduced into an ordinal logistic regression model with severity (ordered from high, medium, to low) as the dependent variable. Both "muscular paralysis" and "balance problems" were significantly associated with increased severity of depression, with odds ratios of 6.5 (p = 0.0022) and 2.5 (p = 0.0024), respectively (Table 1). Similarly, the significant clinical variables ("muscular paralysis," "balance problems," and "angina") identified from the two severity categories were introduced into a binary logistic regression model as the independent variables with high or low severity as the dependent variable. After obtaining the optimal fit, the model accounted for 13.0% of the variance (p = 4.8 × 10^-5).
Both "muscular paralysis" and "balance problems" were significantly associated with increased severity of depression with an odds ratio of 13.5 (p = 0.016) and 2.9 (p = 0.011), respectively, while "angina" was associated with an increase in severity with an odds ratio of 2.0, albeit not significantly (p = 0.092) ( Table 2). Lastly, we applied linear regression with the number of psychiatric medications of each patient as the independent variable and the depression score as the dependent variable. A table of the list of medications we considered to be psychiatric is available in Supplementary Table 1. One patient was removed as an outlier based on the boxplot of the number of medications (Supplementary Figure 1). After obtaining the optimal fit, the model only explained 0.4% of the variance (p = 0.40) (Supplementary Figure 2) of the depression score. For every medication, the percentage of patients receiving it in the high vs. low severity category is shown in Supplementary Table 1. Bupropion and sertraline, were taken by a larger proportion of high severity category patients (18 and 12%) compared to low severity category patients (11 and 2%), however, no medication side-effects were found to be related to "balance problems, " "muscular paralysis, " or "angina" for either medication. DISCUSSION This study explores whether any findings in non-psychiatric past medical history correlate with depression severity. Our hypothesis was that neurological symptoms correlate with the severity of the depression symptoms, potentially allowing their use as clinical biomarkers for the prognosis and monitoring of patients with depression. Our results show that more severely depressed patients have a higher likelihood for neurological and cardiovascular symptoms. More specifically, "muscular paralysis" and "balance problems" are associated with increasing depression severity (p < 0.05). Similarly, some evidence was found for "angina, " albeit not meeting our cut-off for significance (p = 0.092). One possible explanation is that as the severity of depression increases, so does the number of medications, leading to sideeffects reported as symptoms. However, this does not seem to be the case in our study based on the regression analysis performed for the number of medications vs. severity, and based on the sideeffects profile of the medications that were taken by a higher percentage of high severity patients compared to low severity patients. In brief, we found no significant differences between age, sex, education levels, or the number of psychiatric medications taken between patients with different severity of depression. To explore the possibility that pathophysiological changes underly both depressive and neurological symptoms, a literature review was performed to search for existing evidence supporting a link. We found evidence for "balance problems" or ataxia in MDD. For example, a slow walk with reduced arm swinging and a more slumped posture are characteristic of depression (17). Moreover, an association of MDD with falls has been published by different studies (18). Studies have shown a significantly smaller vermis in patients with MDD without ataxia (19), and smaller cerebellum in patients with bipolar disorder (20). These brain structures are known to be important in equilibrium and coordination. Of note, patients with cerebellar dysfunction show higher scores on depression inventories when compared to controls (21). 
Interestingly, in some genetic conditions characterized by ataxia, depression appears to be an important feature. For example, a recent study found that 57% of patients with spinocerebellar ataxia type 3 (SCA3) had depression and that this seemed to have a significant impact, positively contributing to the severity of their ataxia (22). Similarly, in a study of patients with Friedreich's ataxia, 21% of participants were found to have depression in the moderate/severe range (23). A similar search was performed for "muscular paralysis." Szklo-Coxe et al. (24) found that having severe depression lead to a 500% increase in the odds of having sleep paralysis (24). It has also been shown that leaden paralysis may be common in atypical depression, with one study reporting 47% of their patients with atypical depression presenting with leaden paralysis (25). Of note, leaden paralysis is not referring to a real "muscular paralysis (e.g., complete loss of muscle function)." Rather, it consists of severe fatigue creating a sensation of extreme heaviness of the arms or legs and it is considered a reliable marker of atypical depression. An ordinal logistic regression contrasting the 60 high, 58 medium, and 66 low depression severity patients was performed with the significant variables from the chi-square tests ("Muscular paralysis," and "Balance problems"). The table details the model coefficients, standard errors, t-values, and p-values of the coefficients, as well as the odds ratio (OR) and the 95% confidence interval (CI) (OR 2.5%, OR 97.5%) derived from the coefficient and standard errors. The coefficient for both "Muscular paralysis" and "Balance problems" were significantly different from zero (p < 0.05). A binary logistic regression contrasting the 60 high and 66 low depression severity patients was performed with the significant variables from the chi-square tests ("Muscular paralysis," "Balance problems," and "Angina"). The table details the model coefficients, standard errors, z-values (Z), and p-values of the coefficients, as well as the odds ratio (OR) and the 95% confidence interval (CI) (OR 2.5%, OR 97.5%) derived from the coefficient and standard errors. The coefficient for both "Muscular paralysis" and "Balance problems" were significantly different from zero (p < 0.05), while the coefficient for "Angina" was borderline significant. The 95% CI is large for "Muscular paralysis," suggesting that a larger sample size would be required to have a more precise estimate of its effect on depression severity. Of note, the intercept is showing the odds of being severely depressed if someone did not have any features of "muscular paralysis", "balance problems" or "angina". Finally, a link between "angina, " as well as other cardiac conditions, and depression, is well-established. MDD is a risk factor for cardiovascular disease (CVD), even after adjusting for demographics and traditional cardiovascular risk factors (26). In a longitudinal study of a cohort of patients without CVD at baseline, it was determined that depression was significantly associated with the incidence of a cardiac event and that this was unlikely to be due to the effects of hypertension, diabetes, or dyslipidemia. Of the 592 persons who experienced a cardiac event in this study, 160 were classified as "angina" (27). 
Additionally, an increase in PHQ-9 depression severity scores have been associated with an increase in "angina" frequency, thus validating our finding; further, newly depressed individuals have been shown to report more "angina" than those who do not have depression (28). Moreover, a study assessing "angina" in patients with MDD and coronary artery disease found that having depression predisposed an individual to a greater risk of "angina" and that the severity of their coronary artery disease did not seem to impact this (29). Finally, a recent study found that symptoms of chest tightness/chest pain were predictors of the onset of symptoms of depression and anxiety in patients that had been recently referred to neurology outpatient clinics (30), which further supports the findings of our current study. In conclusion, we provide evidence that non-psychiatric clinical symptoms, including neurological features, can serve as clinical markers for disease severity. Our findings, if replicated, could lead to a simple clinical scale administered regularly for monitoring patients with MDD based on review of systems and/or physical examination. LIMITATIONS AND FUTURE DIRECTIONS One of the limitations of this study is the relatively small sample size. Our findings should be replicated in the future with larger sample sizes, which are adequately powered to explore interactions between variables, and maybe capture other potentially relevant patient populations (e.g., hospitalized patients with MDD and/or patients with bipolar disorder). Moreover, in our study, the PHQ-9 self-report questionnaire was used. It would be important for future studies to consider adding objective assessments by trained personnel to ensure that the data collected is more standardized between study participants. Patients in our study were recruited from different tertiary depression clinics and, although all patients met DSM-5 criteria for MDD, there was no uniform use of a structural instrument as part of their evaluation. Future studies could consider using a structural instrument such as the Mini International Neuropsychiatric Interview (MINI) or the Structured Clinical Interview for DSM (SCID). Most importantly, our questions for the significant features were asking the patients if they "experience short episodes of chest pain and/or chest tightness (also known as angina)?" or "problems with balance and/or movement coordination (e.g., falls, bumping into objects, short repeated episodes or progressive episodes)" or "muscular paralysis (e.g., complete loss of muscle function)." However, there was no clear question allowing temporal qualification, and this was one of the limitations of our study that future studies need to address. Close monitoring of temporal relationship of the clinical markers identified in our study to the depressive symptoms is important. It can validate the importance of these clinical markers for monitoring of MDD and potentially ensure adjustment in the antidepressant regimen. A prospective study focused on targeted medical history for these features, along with physical examination for cerebellar findings, would be needed to explore if changes in these features precede the subjective experience of worsening symptomatology of MDD. 
If, indeed, our findings are replicated and subtle changes on the neurological examination or clinical history are proven to be useful clinical markers of changes in depression severity, our findings could lead to a simple clinical scale administered on a regular basis, along with the validated neuropsychiatric tools already in use, for monitoring patients with MDD. Ultimately, this could result in earlier intervention in patients with depression, enabling the physician to adjust the treatment regimen before the depressive symptoms become very severe. This could potentially help us optimize pharmacological interventions and reduce negative outcomes, such as suicide. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Research Ethics Board of the Douglas Mental Health University Institute (DMHUI), the McGill University Health Centre (MUHC) and the Institut Universitaire en Santé Mentale de Montréal (IUSMM). The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS BQ performed the statistical analyses and drafted the manuscript under the supervision of YT who conceived and coordinated the project. KM performed quality check and pre-processing of the data, and the literature reviews and coordinated patient recruitment, and data collection. BQ and YT designed the original methodology. MB, AF, EL, NL, SR-D, VT, and GT are MDs who actively engaged in patient recruitment. All authors reviewed and provided feedback on the manuscript. ACKNOWLEDGMENTS We acknowledge and thank Felicia Russo and Alishia Poccia for aiding in the coordination of patient recruitment. We also thank Felicia Russo for helping with quality check and pre-processing of the dataset. We would also like to acknowledge and thank Drs. Eduardo Chachamovich, Marie St-Laurent, Stephen Vida, and Gerald Wiviott for their support in patient recruitment.
Nanoformulation of Curcuma longa Root Extract and Evaluation of Its Dissolution Potential Medicinal plants have been widely used for therapeutic purposes for a long time, but they have been found to have some major issues such as low water solubility and bioavailability. In the present study, the nanoformulation of Curcuma longa L. plant extract was prepared to enhance its dissolution potential and biological activities. For the formulation of the nanosuspension, an ethanolic extract of C. longa was prepared through Soxhlet extraction using the nanoformulation technique. The nanosuspensions were formulated using four different stabilizers, namely sodium lauryl sulfate (SLS), hydroxy propyl methyl cellulose (HPMC), poly(vinyl alcohol) (PVA), and polysorbate-80 (P-80). The scanning electron microscopy (SEM), polydispersity index, and ζ potential were used for characterization of the nanoformulation. Among all of these, the surfactant stabilizer SLS was found to be the best. The average particle size of the selected optimized nanosuspension was found to be 308.2 nm with a polydispersity index (PDI) value of 0.330. The ζ potential value of the optimized nanosuspension was recorded at −33.3 mV. The SEM image indicated that the particles were slightly agglomerated, which may have occurred during lyophilization of the nanosuspension. The highest dissolution rate recorded at pH = 7 was 192.32 μg/mL, which indicates pH = 7 as the most appropriate condition for the dissolution of the C. longa nanosuspension. The antioxidant, antimicrobial, and antifungal activities of the optimized nanosuspension were also determined with regard to the coarse plant extract. The study findings suggested that the nanoprecipitation approach helps in enhancing the dissolution potential and biological activities of C. longa root extract. INTRODUCTION Nanotechnology is a newly emerging field with versatile applications in cosmetics, drug delivery, packaging of food ingredients, and many other applications in the field of medical science. 1 The biosynthesis of nanoparticles is facing some serious challenges. Nanoparticles are applied as the best antioxidants and antimicrobial agents for accelerating reactions. 2 The synthesis of nanoparticles involves many techniques, including the wet chemical process, decomposition, and microwave-assisted method. 3,4 The chemical agents utilized in these methods pose health issues as these are toxic and flammable. 5 Therefore, nontoxic and environment-friendly methods for the preparation of nanoparticles are now under consideration. 6 The use of herbal products for the treatment of various human ailments has a very long history. Plants produce a large number of bioactive natural products with interesting chemical diversity and biological effects. Poor water solubility, poor permeability, limited systemic availability, instability, and intensive first-pass metabolism of phytomedicines and their distribution are the major challenges of phytochemicals. 7 Moreover, nanotechnology has opened many ways for the development and formulation of drugs with enhanced therapeutic potentials, pharmacokinetics, and pharmacodynamics. Solid lipid nanoparticles (SLNPs), liposomes, nanoemulsions, and polymeric nanoparticles are the various nanobased formulations consisting of herbs that have been described with amazing properties. 8−10 Green synthesis of nanoparticles is a safe alternative to the trivial synthetic methods, and it has gained enormous attention from researchers since the 2010s. 
Green pathways for synthesizing nanoparticles have recently engrossed a lot of interest due to their cost effectiveness, simplicity of synthesis, environmental friendliness, sustainable supplies, and so on. 11,12 With the green synthesis of nanoparticles, there has been a considerable reduction in the toxic/harmful discard. 13 Green synthesis of nanoparticles can be used to boost clinical practice because of its potential to reframe the anticancer, 14 antibacterial, 15,16 bioimaging, 14,17,18 biosensing, 19 and drug delivery activities. 20,21 The most important part of green synthesis is the use of nontoxic chemicals and reusable constituents. Many biomolecules, including microbes, vitamins, herbs, fungi, biodegradable polymers, plant extracts, and enzymes, are used in synthesis of nanomaterials. 22 On a commercial scale, using plant extracts to produce nanoparticles results in reduced costs. A previous study illustrated that the plant extracts used for nanomaterial preparation in the process of green synthesis play crucial roles as capping and reducing agents. 13 Turmeric (Curcuma longa L.) is a Zingiberaceae-family rhizomatous, flowering, herbaceous, and perennial plant. It is found all over South and Southeast Asia, while some of its species can be found in Australia, the South Pacific, and Australia. 23 Its roots are used as a flavor in food and it has gained great fame in culinary, medical, and scientific fields. It is the source of curcumin that has been used in therapeutics for several years. 24 Curcumin has various pharmacological properties, such as immunomodulatory, chemoprotective, antihyperlipidemic, antineoplastic, antiulcer, and neuroprotective. 25−30 Literature has revealed that curcumin has some potential against the coronavirus 2019. 31−34 Curcumin must be accessible at the infectious position for the best therapeutic efficacy. Despite having therapeutic activities, it has limited benefits because of its low bioavailability. Drugs with low solubility could be formulated via different techniques, including emulsions, cosolvents, dissolution in a surfactant, liposomes, polymer solution, solid dispersion, and pH adjustment. 35−38 Nanotechnology has offered a credible drug delivery system for increasing the efficiency of the drugs by enhancing their flow in the blood and also decreasing their harmful effects. For enhancing the solubility and absorption of curcumin, its size is reduced to a nanometer scale via biological membranes. 39 Though various strategies have been used for the size reduction and dissolution potential of curcumin, still it is challenging. Medicinal plants have been widely used for therapeutic purposes for a long time, but phytochemicals have the major issues of low water solubility, poor permeability, limited systemic availability, instability, and intensive first-pass metabolism of the phytomedicine and its distribution. To overcome these challenges, nanosuspensions can be used as the best alternative. Therefore, the current study emphasized the green synthesis of curcumin nanoparticles to enhance effectively their biological activities and dissolution potential. These synthesized nanoparticles were characterized physiochemically to observe their pre-pharmacological parameters and pharmacological activities. Preparation and Optimization Studies. In the present study, the nanoprecipitation method was selected and applied for the preparation of nanosuspensions because of its simplicity, reproducibility, rapidity, and cost effectiveness. 
For the preparation of the stable nanoformulation, important process parameters, such as the choice of stabilizer, the amount of stabilizer, and the ratio of stabilizer to plant extract, were optimized.

2.1.1. Selection of Stabilizer. The goal of the stabilizer screening was to choose an appropriate stabilizer for the formation of a stable nanosuspension. The importance of the stabilizer in the formulation of nanosuspensions is that it maintains physical stability, enhances the activation energy, and suppresses agglomeration of the whole nanosystem. Another significant function of the stabilizer is to provide a mechanical and thermodynamic barrier at the particle boundary that slows the coalescence of the formulated nanoparticles and hence prevents Ostwald ripening. 40 Furthermore, the type of stabilizer may have a remarkable effect on the size of the nanoparticles. To select an appropriate stabilizer, different stabilizers (PVA, SLS, P-80, and HPMC) were used for the formulation of nanosuspensions (Table 1). During the stabilizer selection, a constant concentration of each stabilizer (1%) was used, with a predefined amount of plant extract (0.25 g) and an antisolvent to solvent ratio of 1:10. The stabilizer that provided a physically stable nanosuspension was selected for further studies. Table 2 shows the results for the selection of the suitable stabilizer for the C. longa nanoformulation. SLS was found to be the best stabilizer for the C. longa nanoformulation because it provided the most stable nanoformulation when freshly prepared, and the suspension remained stable for three months after preparation. The remaining three stabilizers (HPMC, PVA, and P-80) also gave physical stability initially, but their nanosuspensions did not remain stable beyond the desired time, which is why they were not suitable for the formation of a stable nanoformulation.

2.1.2. Optimization of Stabilizer Amount. In this step, the aim was to find the appropriate amount of the selected stabilizer and its ratio with regard to the plant extract for the formation of a physically stable nanosuspension. In this regard, the following results were found, and the given amount and ratio of stabilizer to plant extract were found to be the most appropriate for the formation of a stable nanosuspension (Table 2).

Scanning Electron Microscopy (SEM) Analysis of C. longa Nanosuspension. To investigate the shape and size, the synthesized nanosuspension was characterized through SEM. The obtained images show irregular flake-shaped particles of the C. longa nanosuspension (Figure 1). The SEM image shows agglomerated flakes, which might have formed during the drying or lyophilization of the sample, and demonstrates the good surface properties of the C. longa nanosuspension. The appearance of bigger and nonuniform particles at certain locations may be attributed to individual particle adhesion and aggregation during drying.

ζ Size and Potential Analysis. Further, the obtained nanosuspension was analyzed using a Zetasizer. ζ potential analysis is usually performed to investigate the morphology and stability of a nanosuspension. The stability of the nanosuspension is maintained by electrostatic repulsion, requiring a minimum ζ potential of ±30 mV. 41 For stable nanoparticles, the ζ potential value must be greater than +25 mV or less than −25 mV. 42 The recorded results for the C. longa nanosuspension show good stability and interaction with the plasma membrane at a ζ potential of −33.3 mV (Figure 2).
Particle Size and Polydispersity Index. The polydispersity index (PDI) describes the heterogeneity of the sample based on its size distribution. According to the International Organization for Standardization (ISO), PDI values below 0.05 correspond to essentially monodisperse samples, whereas values above 0.7 indicate a very broad size distribution. Figure 3 shows the optimized PDI value (0.330) for the C. longa nanosuspension and a particle size diameter of 308.2 nm.

Physical Stability Studies of Nanosuspensions. The physical appearance of the suspension at various time intervals indicates its stability. Table 3 shows the stability of the synthesized nanosuspension, one sample stored at room temperature and one refrigerated. The results indicate good physical stability at room temperature, but after refrigeration a surface layer developed; however, the PDI and particle size remained intact. The layer partially dissolves after shaking the sample.

2.6. In Vitro Dissolution Profile of C. longa Nanosuspension and the Coarse Extract. Because plants contain a huge number of bioactive phytoconstituents and it is extremely difficult to assess the quantities of all of these components, only one essential bioactive component was employed as a reference chemical to evaluate the findings in the current research. For this purpose, quercetin, which is the major constituent of C. longa root extract, was used as a standard compound for the dissolution study. The concentration of active constituents was evaluated from the calibration curve of quercetin. The dissolution profiles of the coarse plant extract and nanosuspensions are shown in Figure 4. The dissolution medium was a phosphate buffer with a pH of 7.4, and sink conditions were maintained throughout the dissolution rate tests. It is seen in Figure 4 that the dissolution of the plant extract and nanosuspension increases with time, creating a large difference between their dissolution rates. A drug's dissolution rate can be boosted by reducing the particle size and thereby increasing the surface area available for dissolution. Some of the physical characteristics that influence a drug's solubility and dissolution rate under physiological conditions include the particle size, shape, state (amorphous or crystalline), and habit (needle or spherical). 43 The dissolution rate was also evaluated at pH = 7.2, which showed a remarkable change with the change in pH. From Figure 4C, it may be noted that the dissolution of the plant extract and nanosuspension at the 120 min time point was 120.49 and 177.5 μg/mL, respectively. The dissolution enhancement by the nanosuspension might be related to conversion of the particles to an amorphous state, the decrease in particle size from the micron to the nanometer range (size measurements), and the particle shape (SEM determination). From the graphical presentation (Figure 4) and the above results, we can conclude that the dissolution rates differ for different pH values. This is due to the difference in dissolution velocity of the component, which varies with pH because of the anionic and cationic interactions of the component particles in the medium or due to some van der Waals interactions. 44 In conclusion, we observed that among the three pH values (6.8, 7.0, and 7.2), the plant extract and nanosuspension showed the highest dissolution rate at pH = 7.0, which suggests that this is the most appropriate medium for dissolution and absorption.
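Sample concentrations in the dissolution study were read off a quercetin calibration curve; the short sketch below shows how such a conversion can be implemented, assuming a linear absorbance-concentration response. The standard concentrations, absorbances, and dilution factor are illustrative placeholders, not the measured values:

```python
# Fit a linear calibration curve (absorbance vs. quercetin concentration)
# and convert sample absorbances at 373 nm into concentrations (ug/mL).
# All numerical values are illustrative placeholders.
import numpy as np

std_conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # quercetin standards, ug/mL
std_abs = np.array([0.06, 0.12, 0.25, 0.49, 0.98])   # hypothetical absorbances

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # A = slope * C + intercept

def absorbance_to_conc(absorbance, dilution_factor=1.0):
    """Invert the calibration line and correct for any dilution applied."""
    return (absorbance - intercept) / slope * dilution_factor

print(f"{absorbance_to_conc(0.55):.1f} ug/mL")                      # undiluted sample
print(f"{absorbance_to_conc(0.40, dilution_factor=10):.1f} ug/mL")  # 1:10 diluted sample
```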
DPPH Radical Scavenging Activity. The DPPH test is a quick, straightforward, and commonly used method for determining a compound's free radical scavenging activity, and it was used here to investigate the antioxidant effects of curcumin, as described in the literature. 45 The DPPH (2,2-diphenyl-1-picrylhydrazyl) method is based on the capture of DPPH free radicals by antioxidants, producing a decrease in absorbance at 515 nm. 46 At room temperature, DPPH is stable and forms a violet solution in organic solvents such as methanol and ethanol, the color of which fades as the radical is reduced in the presence of curcumin. 47 The radical scavenging activity of the coarse plant extract and nanosuspension was found to be in the range of approximately 65−80% at curcumin concentrations of 0.02−0.10 mg/mL. Ascorbic acid (AA), a common antioxidant in foods, was used as a reference and presented a higher antioxidant activity than the curcumin. 48 The IC50 value is inversely proportional to the free radical scavenging activity/antioxidant property of the sample. The nanosuspension, coarse suspension, and ascorbic acid have IC50 values of 123.8, 205.2, and 189.06 μg/mL, respectively, as shown in Table 4. As shown in Figure 5, the IC50 value varies inversely with antioxidant activity; in this sense, the coarse suspension has the least antioxidant activity (highest IC50) and the nanosuspension has the highest activity, even greater than that of ascorbic acid, which was used as the standard. This is attributed to the presence of phenolic components, because as their concentration increases, the antioxidant activity also increases. 49

2.8. Antibacterial and Antifungal Activity. Antibacterial and antifungal activities were assessed against Escherichia coli and Aspergillus niger, respectively. Different samples were used for evaluating the antimicrobial activity, including the C. longa coarse suspension and nanosuspension, fluconazole, rifampicin, and methanol. The antimicrobial activity mostly depends upon the phenolic components: the higher the concentration of phenolic components, the higher the activity. In Table 5, the inhibition zone diameters for the coarse suspension and nanosuspension of C. longa are 7.75 ± 0.15 and 11 ± 0.06 mm, respectively. The larger the inhibition zone diameter, the higher the antimicrobial activity. The antimicrobial activity is basically due to the concentration of the hydrophobic compounds. 48 It is observed that the nanosuspension of C. longa was more effective against both the fungal and bacterial strains than the C. longa coarse suspension, as indicated in Figure 6. Methanol, in the absence of any hydrophobic component, does not affect the growth of any microorganism. The standard drugs fluconazole and rifampicin gave the largest inhibition zone diameters, 43.5 ± 0.23 mm for antifungal activity and 37.5 ± 0.13 mm for antibacterial activity, reflecting their high antifungal and antibacterial activities, respectively. According to all of these investigations, the polycationic character of turmeric extracts is the key to their antifungal effects, and the length of the polymeric chain boosts this activity. This study concluded that the nanoformulation has increased levels of antimicrobial activity, many times greater than those of the normal coarse extract of turmeric. This is due to the enhancement of the bioavailability of the particles upon formation of the nanoformulation, which increases their activity compared to the coarse extract.
Due to these antimicrobial and antioxidant activities, turmeric, ascorbic acid, and many other natural products are used for the preservation of food and also for antiseptic purposes, as they retard the activity of microorganisms (Figure 7).

CONCLUSIONS
The C. longa nanosuspension had considerably higher antioxidant and antimicrobial capability when compared to its original suspension in the current investigation, demonstrating the usefulness of innovative nanosizing techniques in boosting the biological activities of herbal extracts. Sodium lauryl sulfate proved to be the best stabilizer among those tested for preparing the nanosuspension of C. longa. The nanosuspension had a good dissolution rate, with the maximum dissolution recorded at pH = 7, which is predicted to be the most suitable condition for C. longa to be absorbed in any solvent, medium, or body fluid. Moreover, antioxidant and antimicrobial activities were enhanced in the nanosuspension as compared to the coarse extract, which showed the high efficiency of the nanosuspension. The C. longa nanosuspension showed significant antibacterial and antifungal activity against E. coli and A. niger, with zone of inhibition values of 11 ± 0.06 and 13.7 ± 0.16 mm, respectively. The nanosuspension had the highest antioxidant activity (IC50 = 123.8 μg/mL), even greater than that of ascorbic acid. According to the current findings, nanosuspensions of the selected herbal extract can be employed as a better option to treat various disorders, with enhanced therapeutic efficacy when compared to coarse suspensions because of their biological potentials.

Reagents. All chemicals used in this study were of analytical grade. The n-hexane and ethanol (EtOH) were procured from Sigma-Aldrich. The stabilizers SLS, PVA, HPMC, and P-80 were purchased from Caledon (Canada).

Plant Material and Sample Preparation. The roots of C. longa (turmeric) were collected from the north of Okara, Punjab, Pakistan, during November 2020. The plant material was placed in the dark for 15 days in order to dry at room temperature. At the end of the 15th day, it was crushed into powder using a mortar and pestle and passed through a set of standard mesh sieves. Finally, a fine powder of C. longa was collected for the onward extraction. The ethanolic extract of the plant was prepared using a Soxhlet apparatus. Excess fat content of the plant was removed by defatting with n-hexane: a 30 g portion of the fine C. longa powder was placed in the thimble of a Soxhlet extractor, 200 mL of n-hexane was poured into it, and the system was run for 8 h. The flavonoids were then extracted from the pre-defatted plant material using 200 mL of ethanol. The resulting ethanolic extract was filtered, concentrated under reduced pressure, and stored for the onward steps.

Preparation of Nanoformulation. The nanosuspension was made using the nanoprecipitation method suggested in a previous study. 41 Plant extract (0.25 g) was dissolved in 10 mL of ethanol (organic phase), and 0.25 g of the stabilizer was dissolved in 100 mL of distilled water (aqueous phase). The organic phase was gradually (1 mL/min) added with the help of a syringe into the aqueous phase under constant stirring at 1000 rpm for 6 h at room temperature. The resulting formulation was stored at room temperature.

Optimization of Formulation Parameters.
Several preparative factors, such as the stabilizer, the concentration of the stabilizer, and the amount of plant extract, were optimized in the current study for the preparation of a stable nanoformulation with the lowest particle size. Initially, screening of the stabilizer was carried out. After deciding on the stabilizer, the remaining parameters (concentration of the stabilizer and amount of plant extract) were adjusted. For the formulation of stable nanosuspensions, four different stabilizers (PVA, SLS, HPMC, and P-80) were used. In the present study, all four stabilizers (PVA, SLS, HPMC, and P-80) were screened by using 0.25 g of plant extract and 0.25 g of stabilizer, and the solvent to antisolvent ratio was fixed at 1:10. The concentration of the stabilizer is an important parameter for the formulation of stable nanosuspensions. In this study, the amounts of stabilizer used were 0.125, 0.25, 0.5, and 1.0 g, while the amount of the plant extract was kept constant at 0.25 g. The experimental conditions used for the preparation of the C. longa nanoformulation are given in Table 6.

Characterization of the Nanoformulation. 4.5.1. SEM. Physical evaluation of nanoformulations is based on stability; the one with higher stability is taken as the ideal nanoformulation. Scanning electron microscopy (SEM) (JEOL JSM-6400, Japan) with a secondary electron detector was used to obtain digital images of the surface morphology at an accelerating voltage of 15 kV. The solid nanoformulation (obtained after evaporating the excess solvent) was used for this purpose.

Particle Size and Polydispersity Index. The mean particle size (z-average, nm) and polydispersity index (PDI) of the prepared nanosuspensions were measured by the dynamic light scattering (DLS) technique using a Malvern Zetasizer (Nano ZS). For measuring particle size and PDI, freshly prepared nanoformulations were added to a glass cuvette and placed in the sample holder unit, and the measurement was carried out using the instrument software. ζ potential was measured similarly using a quartz cuvette. 50

4.6. In Vitro Dissolution Studies of the Optimized Nanoformulation. The in vitro dissolution behavior of the optimized nanoformulation, as compared to the coarse plant extract, was determined by adopting a method described in previous literature. 51 For in vitro dissolution testing of the coarse herbal extract and the nanoformulation, a semipermeable membrane was utilized. The nanoformulation was added to a semipermeable (egg) membrane, and this membrane was placed in different phosphate buffer systems (pH = 7.2, pH = 7.0, and pH = 6.8) as dissolution media, followed by magnetic stirring. Throughout the whole experiment, the temperature of the dissolution medium was kept constant at 37 ± 0.5 °C, and the stirring speed was set to 50 rpm. An aliquot (5 mL) was withdrawn from the dissolution medium at predetermined time intervals (0, 15, 30, 45, 60, 75, 90, and 120 min), and the same volume of prewarmed (37 °C) dissolution medium (phosphate buffer) was added to the dissolution vessel immediately to maintain sink conditions. The concentration of dissolved drug was determined spectrophotometrically. Pure quercetin (QT) was used as a standard compound for the C. longa coarse plant extract and its nanoformulation to compare their dissolution rates. Samples were analyzed spectrophotometrically at a wavelength of 373 nm (λmax of quercetin). The percentage release of the coarse plant extract and that of the optimized nanosuspension were compared.
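Because a 5 mL aliquot was withdrawn and replaced with fresh buffer at each time point, the concentration measured at later time points understates the cumulative amount released; a minimal sketch of the usual replacement correction is given below. The total dissolution volume is an assumption (it is not stated in the text), and the measured concentrations are illustrative placeholders:

```python
# Correct sampled concentrations for the 5 mL withdrawal-and-replacement scheme:
# C_corr[n] = C_meas[n] + (V_sample / V_total) * sum(C_meas[0..n-1]).
# V_TOTAL is an assumed value; the concentrations below are placeholders.
V_SAMPLE = 5.0    # mL withdrawn at each time point (from the protocol above)
V_TOTAL = 100.0   # mL of dissolution medium (assumption)

def corrected_concentrations(measured):
    corrected, running_sum = [], 0.0
    for c in measured:
        corrected.append(c + (V_SAMPLE / V_TOTAL) * running_sum)
        running_sum += c
    return corrected

time_min = (0, 15, 30, 45, 60, 75, 90, 120)
measured_ug_per_ml = [0.0, 20.5, 48.3, 71.0, 95.6, 118.2, 140.7, 177.5]  # placeholders

for t, c in zip(time_min, corrected_concentrations(measured_ug_per_ml)):
    print(f"{t:>3} min: {c:.1f} ug/mL (corrected)")
```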
The regression equation produced by the calibration curve of quercetin was used to estimate the concentration of active components. Results for the coarse plant extract and nanosuspension experiments were expressed as the percentage of drug dissolved. Experiments were carried out in triplicate.

4.7. Antimicrobial Activity. The antimicrobial potential of the plant extract and its respective nanosuspension was determined with the disc diffusion method as described in previous literature, 52 employing one fungal strain (A. niger) and one bacterial strain (E. coli).

4.8. Antibacterial Activity (Assay Protocol). Nutrient agar (28.08 g/L) medium was poured into Petri dishes and inoculated with the bacterial cultures. Small filter paper discs were saturated with 30 μL (20 mg/mL) of the plant suspension and nanosuspension samples. Methanol and rifampicin were employed as negative and positive controls, respectively. The discs were placed flat on the growth medium, and the Petri dishes were incubated at 37 °C for 24 h. Herbal extracts with antibacterial activity inhibited the development of bacteria, resulting in the formation of clear zones. Using a zone reader, the inhibited zone (zone of inhibition) was measured in millimeters (mm). The disc containing an active sample is surrounded by a zone of inhibition where bacterial colonies cannot grow, and this zone was measured to assess the susceptibility of the bacteria to the sample.

4.9. Antifungal Activity (Assay Protocol). Potato Dextrose Agar (PDA) (39.06 g/L) was poured into Petri dishes and inoculated with the fungal species. Appropriately cut discs of filter paper were impregnated with 30 μL samples (20 mg/mL) of the plant extract and the nanoformulation. Fluconazole (5 μL, 15 mg/250 μL) was used as a positive control. The plates were incubated at 2 °C for 48 h, and the antifungal activity was measured using a zone reader to determine the inhibited zones.

4.10. DPPH Radical Scavenging Activity. The antioxidant activities of the native plant suspension and the optimized nanosuspension were assessed by the DPPH assay, following a previously reported method. 53 Five different concentrations of the coarse plant suspension and the respective nanoformulation, in the range of 0.02−0.1 mg/mL, were prepared. An aliquot (3 mL) of each concentration was taken, and freshly prepared DPPH solution (0.1 mM, 1.0 mL) was added to it. These solutions were incubated at room temperature for 30 min. An ultraviolet−visible (UV−Vis) spectrophotometer (Shimadzu, Japan) was used to measure the absorbance of the solutions at 517 nm. Significant free radical scavenging activity was indicated by the decrease in absorbance with increasing concentration. To analyze the data, ascorbic acid was employed as a standard. The same procedure was applied to the blank solution. The percentage inhibition of the DPPH radical was calculated as: % inhibition = [(A_blank − A_sample)/A_blank] × 100, where A_blank is the absorbance of the DPPH solution without the test sample and A_sample is the absorbance in the presence of the extract or nanosuspension.
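A minimal sketch of how the percentage inhibition defined above, and an IC50 estimate, can be computed from the absorbance readings; the blank and sample absorbances below are illustrative placeholders chosen so that the inhibition values straddle 50%, not the measured data:

```python
# DPPH assay: percentage inhibition from blank and sample absorbances at 517 nm,
# followed by a simple interpolation-based IC50 estimate. Values are illustrative.
import numpy as np

def percent_inhibition(a_blank, a_sample):
    """% inhibition = (A_blank - A_sample) / A_blank * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

a_blank = 0.92                                          # DPPH solution without sample
conc_mg_ml = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # test concentrations
a_sample = np.array([0.60, 0.52, 0.44, 0.36, 0.28])     # hypothetical readings

inhibition = percent_inhibition(a_blank, a_sample)

# IC50: concentration giving 50% inhibition, by linear interpolation
# (requires inhibition to increase monotonically with concentration).
ic50_mg_ml = np.interp(50.0, inhibition, conc_mg_ml)
print(inhibition.round(1), f"IC50 ~ {ic50_mg_ml * 1000:.0f} ug/mL")
```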
Acetic and Acrylic Acid Molecular Imprinted Model Silicone Hydrogel Materials for Ciprofloxacin-HCl Delivery Contact lenses, as an alternative drug delivery vehicle for the eye compared to eye drops, are desirable due to potential advantages in dosing regimen, bioavailability and patient tolerance/compliance. The challenge has been to engineer and develop these materials to sustain drug delivery to the eye for a long period of time. In this study, model silicone hydrogel materials were created using a molecular imprinting strategy to deliver the antibiotic ciprofloxacin. Acetic and acrylic acid were used as the functional monomers, to interact with the ciprofloxacin template to efficiently create recognition cavities within the final polymerized material. Synthesized materials were loaded with 9.06 mM, 0.10 mM and 0.025 mM solutions of ciprofloxacin, and the release of ciprofloxacin into an artificial tear solution was monitored over time. The materials were shown to release for periods varying from 3 to 14 days, dependent on the loading solution, functional monomer concentration and functional monomer:template ratio, with materials with greater monomer:template ratio (8:1 and 16:1 imprinted) tending to release for longer periods of time. Materials with a lower monomer:template ratio (4:1 imprinted) tended to release comparatively greater amounts of ciprofloxacin into solution, but the release was somewhat shorter. The total amount of drug released from the imprinted materials was sufficient to reach levels relevant to inhibit the growth of common ocular isolates of bacteria. This work is one of the first to demonstrate the feasibility of molecular imprinting in model silicone hydrogel-type materials. Introduction As the contact lens industry continues to grow and develop, novel uses and applications of contact lenses are constantly being contemplated and investigated. Contact lens materials as a vehicle for sustained ophthalmic drug delivery to the eye has had a renewal of interest in the past decade, mainly due to the advent of silicone hydrogel materials, which provide sufficient oxygen delivery to the eye to permit hypoxia-free wear during overnight use [1]. Indeed, in the original patents and designs of soft contact lens materials, the concept of using contact lenses as a reservoir for drugs delivered to the eye was noted, although little work investigating this application has been conducted for over thirty years [2]. Recently, there has been an explosion in the number of studies and groups who have demonstrated an interest in the development of contact lens drug delivery materials. The rationales for the use of contact lenses as drug delivery devices are numerous. First, contact lenses are arguably the most successful biomaterial currently available, with estimates of over 140 million wearers worldwide [3], and are thus firmly embraced by patients and, more importantly, practitioners. Second, contact lenses have already been demonstrated to successfully correct refractive errors in patients. The addition of drug delivery to this correction of refractive error can potentially increase the quality of life in patients by decreasing dosing frequency, while also potentially increasing compliance rates in acute or chronic ophthalmic treatment. Third, there is some evidence that concurrent contact lens and topical ophthalmic treatment is more effective than topical treatment alone. 
Use of contact lenses has been demonstrated to increase the residence time and/or increase ocular penetration of topically administered agents [4,5]. Use of contact lenses may thus decrease the amount of drug needed to successfully treat ocular disease in patients. Finally, there are many situations or locales around the world where access to pharmacological therapy is inconsistent at best, necessitating the use of treatments that can be administered at a single time and have a long lasting effect. Development of these devices to combat these medical challenges is thus warranted and potentially useful. There are several clinical scenarios in which a contact lens is already used medically to aid the healing of a patient, with topically prescribed agents being used concurrently with contact lenses. For example, following photorefractive keratectomy (PRK), an ocular laser surgical method used for the correction of refractive error, a bandage contact lens is used for several days post-surgery, due to the absence of the corneal epithelium, which is removed during the course of the procedure [6]. Antibiotic drops are used on top of the lens prophylactically to prevent any post-surgical infection. In patients who present with a traumatic corneal abrasion, a bandage contact lens is often used to increase the rate of healing, while also providing symptomatic pain relief. These patients are often prescribed an antibiotic agent, either prophylactically or to treat any current infection sustained during the trauma. It is evident that if the bandage lens was concurrently providing the symptomatic relief as well as the release of the prophylactic antibiotic agent, then the patient could be permitted to rest and recuperate rather than worrying about drug dosing schedules. The extended release of drugs from soft contact lens materials (hydrogels) is unfortunately not that simple. Previous studies have demonstrated that commercially available lenses soaked in ophthalmic pharmaceuticals are capable of releasing clinically relevant amounts of drugs, but the release times from these materials is in the order of only minutes to hours [7][8][9][10]. Furthermore, these materials are not designed for extended wear, so even if long term release was achieved, the hypoxia of the cornea that would occur with extended wear would necessitate their removal. Thus, strategies to optimize release times to be more on the order of days or even weeks are needed, if these devices are to be used and marketed effectively. Numerous strategies have been investigated to slow and/or control the release of pharmaceuticals from contact lenses. Some investigators have found the addition of a diffusion barrier could impede the movement of the drug out of the lens, thus slowing the release. In recent studies investigating this concept for the delivery of dexamethasone and timolol [11,12], vitamin E was used as a diffusion barrier and the authors were able to demonstrate sustained release from these materials for days to weeks, with the time for release being controlled by the amount of vitamin E used. This technique may prove particularly beneficial as it can be used with commercially available materials, thus shortening the regulatory approval processes. Other authors have proposed the use of a drug-impregnated coating on the surface of the lens, using cyclodextrins, nanoparticles or liposomes [13][14][15]. 
This strategy may be particularly useful for drugs with poor solubility in aqueous environments, as the microenvironment of the coating can be different from the rest of the lens. One of the more successful strategies in generating extended release times from contact lens materials has been molecular imprinting. Molecular imprinting is a polymerization strategy in which a molecule of interest is present within the pre-polymerization solution of a polymer. The addition of other molecules known as functional monomers, which serve to interact with the functional groups of the template molecule, create "cavities" or "molecular memory" within the material after polymerization is complete [16]. These "cavities" specifically interact with the template molecules, slowing the diffusion of the templates out of the material into solution, and thus extending release times [17]. This technique was originally designed for highly crosslinked, hard plastics for the specific removal of components out of solutions [16]. The challenge has been to adapt this technique for contact lenses, in which a highly crosslinked, rigid type material would not be useful. Despite these challenges, several recent papers have shown this technique to be applicable to the creation of contact lens materials to deliver anti-glaucoma, antibiotic, antihistamine, non-steroidal anti-inflammatory agents (NSAIDs) and wetting agents [17,[18][19][20][21]. The gains in delivery time for materials created using this concept have been substantial; whereas non-modified materials may release for only a few hours at most, delivery from imprinted materials in the order of several days have been achieved [21]. Several key insights have been gleaned from previous authors. First, the choice of the template and functional monomer is crucial. There has to be an appropriate interaction between the template and functional monomer to efficiently create the cavities to be fixed during the polymerization process [21]. Second, the amount of functional monomer relative to the template in the polymerization mix is also important. A low functional monomer:template will yield an insufficient number of cavities being created around the template; a too high functional monomer:template ratio will lead to inefficient creation of cavities, as much of the functional monomer will not have the opportunity to interact with the template [19]. Much of the work to-date on imprinted molecules have involved "conventional" higher water content hydrogel materials based on poly-hydroxyethyl methacrylate (pHEMA) [20,22], but more recent work has been performed on the more oxygen permeable siloxane-based hydrogels [23]. Ciprofloxacin-HCl is a second generation fluoroquinolone antibiotic. It interferes with bacterial DNA gyrase, preventing bacterial DNA replication [24]. It is a broad spectrum antibiotic, with activity against both gram-negative and gram positive bacteria [25,26]. It is used ophthalmically as either an eye drop or as an ointment. It is commonly used as a treatment for bacterial conjunctivitis, and is one of only a few drugs that have United States Food and Drug Administration (FDA) indications for the treatment of bacterial ulcers/microbial keratitis [27,28]. Ciprofloxacin exhibits poor aqueous solubility at physiological pH due to its overall neutral charge as a zwitterion at this pH, and the presence of its dual aromatic rings [29]. 
Its solubility in aqueous media is greatly enhanced in acidic or basic solutions, leading to commercially available ophthalmic preparations having a pH of approximately 4.0, which may cause some stinging or irritation upon instillation [28,29]. When dissolved in high concentrations, ciprofloxacin solutions have a yellowish colour. During a severe infection, the dosing of ciprofloxacin can be as frequent as two drops every fifteen min. This high dose and long term use, coupled with poor solubility of the drug at physiological pH, can lead to the development of white, crystalline precipitates in the cornea or inferior conjunctival sac, although this does not necessarily indicate the need to discontinue treatment [30]. In this current study, molecular imprinting techniques were used to create model silicone hydrogel materials for the delivery of the antibiotic ciprofloxacin-HCl. Acetic and acrylic acid were used as functional monomers, and the effect of functional monomer:template ratio, overall functional monomer concentration and drug loading concentration were all investigated and explored. This study is one of the few studies investigating the use of silicone hydrogel-type materials for the delivery of pharmaceuticals using a molecular imprinting strategy. Pilot Study: Ciprofloxacin pHEMA-Methacryloxypropyltris (Trimethylsiloxy) Silane (TRIS) Materials with Acetic Acid Functional Monomers The water content and dry weight of the different acetic acid imprinted model materials is detailed in Table 1. Model lenses created would all be classified as being of low water content, and would require some increase in water content if they were to be used as actual contact lenses on the eye. There was no statistically significant difference between the pHEMA-TRIS-Acetic Acid controls and the pHEMA-TRIS-Acetic Acid Ciprofloxacin imprinted materials, based on a one way analysis of variance (ANOVA) (p > 0.05). The release curves from these materials loaded with 9.06 mM, 0.10 mM and 0.025 mM ciprofloxacin over the first 24 h are seen in Figure 1(a-c). There was no statistically significant difference seen between the imprinted and control model lenses loaded with 9.06 mM, over the course of the 24 h (p > 0.05). The initial release from the 0.10 mM and 0.025 mM model lenses are of interest. For 0.10 mM loaded model lenses, the control exhibited a very fast release and almost immediate plateau, at a level higher than the two imprinted materials. For the 0.025 mM loaded model lens, the control model lens again almost immediately reached its final plateau level, but in this situation it was at a level that was below that of the two imprinted materials. Whether this was caused by some residual loading solution on the 0.10 mM loaded discs is unknown. For the imprinted materials, for both the 0.10 mM and 0.025 mM loaded model lenses, there was a slow release of ciprofloxacin into solution over the course of the 24 hours, but there was no statistical significance between the 4:1 and 8:1 imprinted materials. The release curves of the acetic acid imprinted materials and controls after 14 days of release is seen in Figure 2(a-c). For model lenses loaded with 9.06 mM of ciprofloxacin, there was an overall statistically significantly greater amount of drug released by the imprinted materials compared to the control (p < 0.05), but there was no significant difference between the two imprinted materials (p > 0.05). 
The time to reach the plateau was also different; interestingly, the imprinted materials appeared to reach their plateaus within 4 or 5 days, while the statistics suggest that the control was releasing for up to 8 days. Unfortunately, there is a greater amount of variation in the determination of the concentration of ciprofloxacin within the solution when loading with such a high concentration, as dilutions are necessary to reach concentrations relevant to the linear portion of the standard curve, potentially confounding results. The effect of imprinting in comparison with the non-imprinted controls is most evident again when the materials are loaded with the lower concentration solutions (0.10 mM and 0.025 mM), as seen in Figures 2b and 2c. Here, the imprinting demonstrates two key advantages over the non-imprinted control, with a longer release time and a greater amount of ciprofloxacin being released. For the 0.10 mM loaded materials, analysis suggests that a plateau level is reached in as little as 45 min for controls. In contrast, the 4:1 imprinted and 8:1 imprinted materials demonstrate continued significant release compared to earlier time points out to 10 days. Similar results are seen in model lenses loaded with 0.025 mM solutions. The control released so little that there was statistically no difference over the course of the 14 days compared to the initial time point, whereas the imprinted materials were releasing for up to 8 days. As can be clearly seen from the release curves, there was no statistically significant difference between the two ratios of acetic acid to ciprofloxacin in terms of the plateau amount of ciprofloxacin released, or the time to reach a plateau. The results from these initial attempts to create imprinted silicone hydrogel materials were very encouraging in that they achieved two separate goals. First, the effect of the imprinting was demonstrated when the model lenses were loaded with lower concentrations of the drug, as there was a clear difference between the imprinted and non-imprinted materials in their ability to deliver drugs for an extended period of time, as evidenced by drug release occurring for a period of 8 to 10 days (depending on the loading concentration). Second, we were able to confirm the delivery of relevant amounts of the antibiotic. When loaded with the clinical concentration of ciprofloxacin (9.06 mM), concentrations were achieved in the 2 mL reaction vial that were clinically relevant in achieving the minimum inhibitory concentration (MIC 90 ) of common ocular isolates [31]. Not surprisingly, when the loading concentration was decreased by approximately 100 times, the amount of drug released was less, and the MIC 90 only reached concentrations relevant to more susceptible bacteria. Finally, the pilot study failed to demonstrate any differences between the ratio of acetic acid to ciprofloxacin used to create the imprinting that has been demonstrated previously [2,16,17,19,21]. This was possibly due to the lack of precision in choosing to add the imprinting mixture on the basis of percentage weight rather than by molar concentration of the functional monomer, in relation to the number of moles of the other components of the polymerization as a whole. Ciprofloxacin pHEMA-TRIS Materials with Acrylic Acid Functional Monomers To further explore the effect of imprinting on the model silicone hydrogel materials, a second, larger study was conducted with a few key modifications to the imprinting process. 
The overall functional monomer concentration within the polymerization mix was varied between two concentrations (100 mM and 200 mM), and the functional monomer was changed to a related molecule, acrylic acid, which has had some success in the literature in terms of efficiently creating imprinted cavities [20]. The same three loading concentrations were used, and three separate imprinted ratios of acrylic acid to ciprofloxacin were used: 4:1, 8:1 and 16:1. The dry weight (g) and the water content (%) of the created materials are listed in Table 2. As with the acetic acid imprinted materials, the majority of these materials were of low water content, and some degree of modification would be necessary to increase the water content if these materials were to be used on the human eye. A one way ANOVA revealed a significant difference between the dry weights and water contents of the materials (p < 0.05). Post hoc Tukey tests revealed that this difference was mainly confined to two materials: the pHEMA + TRIS + 200 mM Acrylic Acid, 8:1 ratio to ciprofloxacin and the pHEMA + TRIS + 200 mM Acrylic Acid, 4:1 ratio to ciprofloxacin materials were found to be statistically different from the other model lens materials (p < 0.05).
Ciprofloxacin release curves from 100 mM acrylic acid materials loaded with 9.06, 0.10 and 0.025 mM of ciprofloxacin within the first 24 h are detailed in Figure 3(a-c). A similar trend to that seen with the acetic acid imprinted materials is evident: there was little difference in the amount or the rate at which ciprofloxacin was released from the 9.06 mM loaded model lenses, but when the materials were loaded with progressively lower amounts of ciprofloxacin the difference between the imprinted and the non-imprinted control became more apparent, with the imprinted materials releasing relatively more and at a greater rate. The release curves from these materials over the course of two weeks are detailed in Figure 4(a-c). Analysis of the model lenses loaded with 9.06 mM ciprofloxacin (Figure 4a) showed that the control model lens was only releasing for a maximum of 3 days before reaching a plateau, while the imprinted materials were releasing for periods up to 7 days. At plateau, the materials with 4:1 imprinting were found to be statistically significantly higher than the other model lens types (p < 0.05). The other model lens types (including the control) tended to cluster together. Analysis of the 0.10 mM loaded materials showed no significant release compared to the initial time point for the control, and significant release from the imprinted materials for up to 14 days in the case of the 8:1 imprinted material. The 16:1 imprinted material was found to be different from the other two imprinted materials (p < 0.05), while releasing for 11 days. The 4:1 imprinted materials released the most drug, but for the shortest period of time, at only 5 days. For materials loaded with 0.025 mM ciprofloxacin, the results were similar but with more extended release times. The 4:1 and 8:1 model lenses tended to cluster together and released the greatest amount of drug, while the 16:1 was statistically significantly lower, but still higher than the control (p < 0.05). All of the imprinted materials in this case took 10 days to reach a plateau level.
The release curves from the 200 mM acrylic acid materials within the first 24 h are detailed in Figure 5(a-c). The loading of the high concentration (9.06 mM) led to all materials releasing a significant amount of drug, but there was no difference between the imprinted materials and the control (p > 0.05) over the first 24 h.
For the model lenses loaded with 0.10 mM and 0.025 mM, the imprinted materials released a larger amount and at a faster rate compared to the control (p < 0.05), but there was no difference between the imprinted materials, although it appeared that the 4:1 loaded materials released more than the 8:1, and the 16:1 imprinted material released the lowest amount. The release curves from these materials over the course of two weeks are detailed in Figure 6(a-c). The 9.06 mM loaded materials again showed a large amount of variation, and there was no statistically significant difference between the various imprinted materials and the controls. The materials did release more than the required amount of antibiotic to be clinically relevant against common ocular pathogens. In the course of measurement over the two weeks, there was one anomalous group of readings. The 0.10 mM loaded, 4:1 imprinted materials began to show a declining concentration of ciprofloxacin within solution over time. Whether this was due to contamination or drug degradation is unknown; regardless, these data are not presented here. Examination of the other 0.10 mM loaded materials shows that the imprinted materials released for up to 4 days, significantly different from the control (p < 0.05). The 0.025 mM loaded model lenses demonstrated significant differences between the 4:1 loaded materials and both the other imprinted materials and the control, although the release time was relatively short at only 2 days. The 8:1 and 16:1 imprinted materials released comparatively less ciprofloxacin, but released it for longer periods of 13 and 14 days respectively. The control material loaded with 0.025 mM, in comparison, released relatively little ciprofloxacin over the course of 4 days, before no further changes were measured.
Thorough examination of the acrylic acid imprinted materials leads to several conclusions. The loading concentration of ciprofloxacin plays a large role in the ability to detect the effect of the molecular imprinting. When the model lenses are loaded with a large concentration (9.06 mM), which is equivalent to the concentration of ciprofloxacin in commercially available 0.3% eye drops, there is little to no difference between the various imprinted materials and the controls. In this situation, it is likely that the majority of the ciprofloxacin was loaded into the material through non-specific concentration gradients, and the release from all the materials reflected that. The need for dilution to bring readings into the range of the linear standard curve could also reduce the sensitivity to detect subtle changes in concentration within the solution and may have contributed to the variability, although this effect would likely be minimal. When loaded with lower concentrations of ciprofloxacin, a different picture emerges from the data, in that the effect of imprinting these materials with the template and the functional monomer becomes apparent. The imprinted materials release a larger amount compared to similarly loaded control materials, and for a significantly longer time. Release times of up to 14 days were seen in some cases, such as the 0.025 mM loaded, 200 mM acrylic acid 8:1 imprinted material, while control materials were confined to minimal release amounts for periods of only a few days. Interestingly, there was little to no difference between materials created with the two different concentrations of acrylic acid in terms of the amount or rate of ciprofloxacin being released.
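Because these comparisons repeatedly rely on two derived quantities, the time to reach a plateau and the total amount of drug released into the 2 mL vial, a minimal sketch of how such quantities might be computed from concentration-time readings is given below. This is not the authors' analysis (which used repeated measures ANOVA and post hoc tests); the molar mass, tolerance and data values are assumptions chosen purely for illustration.

```python
import numpy as np

def mass_released_ug(conc_mM, vial_volume_mL=2.0, mw_g_per_mol=331.3):
    """Convert a measured vial concentration (mM) into micrograms of drug released,
    assuming the stated 2 mL release volume and an assumed molar mass (~331.3 g/mol)."""
    mmol = conc_mM * (vial_volume_mL / 1000.0)   # mM * L = mmol
    return mmol * mw_g_per_mol * 1000.0          # mmol * (mg/mmol) * 1000 = micrograms

def time_to_plateau(times_h, conc_mM, tolerance_mM=0.002):
    """Return the first time point whose reading is already within
    `tolerance_mM` of the final (plateau) concentration."""
    final = conc_mM[-1]
    for t, c in zip(times_h, conc_mM):
        if abs(final - c) <= tolerance_mM:
            return t
    return times_h[-1]

# Hypothetical concentration-time readings for a single disc (illustration only)
times = np.array([0.25, 1, 4, 8, 24, 48, 96, 192, 336])           # hours
conc = np.array([0.004, 0.009, 0.015, 0.019, 0.024, 0.027,
                 0.029, 0.030, 0.030])                             # mM in the vial

print(f"plateau reached at ~{time_to_plateau(times, conc):g} h; "
      f"total released ~{mass_released_ug(conc[-1]):.1f} ug")
```

In practice the tolerance would be set from the assay's measured variability, and any between-material comparison would still rest on the statistical testing described in the methods below.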
There has been some evidence in the literature that not only is the functional monomer:template ratio important, but so is the functional monomer:cross linker ratio [32]. In this experiment, there was no variation in the amount of crosslinker chosen, which was ethylene glycol dimethacrylate (EGDMA), so it would be interesting to see if the drug release rate dependence on functional monomer to crosslinker ratio would prove to be important in this model silicone hydrogel-type system. In comparison with the pilot study, the functional monomer was changed to acrylic acid, and the precision to which the imprinting process was performed was more carefully controlled. In doing so, greater differences in the imprinted materials were demonstrated, with materials imprinted with the 4:1 ratio in general releasing the greatest amount of drug, with decreasing release from 8:1 and 16:1 imprinted materials respectively. This is similar to the results that were seen in a previous paper imprinting norfloxacin, another fluoroquinolone antibiotic [19]. The majority of the model lenses released enough antibiotic to reach concentrations that were clinically relevant for common bacterial isolates, especially with model lenses loaded with the clinical concentration of ciprofloxacin [31]. The difficulty is that sustained release over time was really only observed when loading with much lower concentrations, which can pose a problem with antibiotic therapy in preventing the development of bacterial resistance. To combat this, future studies should use newer and more potent antibiotics, whose minimum inhibitory concentrations are much lower than ciprofloxacin, such as the fourth generation fluoroquinolones moxifloxacin and gatifloxacin [33]. The challenge for these contact lens combination devices, especially antibiotic ones, beyond the demonstrated ability to sustain drug release, is acceptance into clinical practice. Considering the perception of the role of contact lenses in the etiology of severe ocular infections, use of a contact lens in such a situation faces an uphill climb in acceptance, and it will be the challenge to researchers and companies marketing such products to demonstrate advantages of such a device over traditional therapy. The results from this study were generated using what is commonly known as the "infinite sink" technique, in which the release of drug is into the same static solution over time. This clearly does not necessarily mimic the ocular surface, in which tear production, evaporation and drainage can play a significant part in drug residence time and ultimately bioavailability to the cornea. The use of a static solution can also have a significant effect on release times for a drug such as ciprofloxacin, which is poorly soluble at physiological pH, potentially limiting release times due to the drug reaching a maximum soluble concentration within the solution. Several authors have proposed different solutions to this infinite sink problem. The simplest is to transfer the lenses to fresh solutions free of any drug at various time points, and sum up the release from all these release solutions [34]. A more sophisticated solution involves creation of an ocular tear flow device, in which the flow into, and drainage out of a tear solution as it interacts with the drug delivery device is controlled to mimic ocular tear flow. 
When such a system is used, authors have found that release rates are much slower than in infinite sink conditions, which is probably due to the significantly smaller volumes of solution available to the device at any one given time. The release was also shown to follow zero order kinetics [22], and it would be interesting to test the materials created in this study under such conditions to observe any changes in release kinetics.
Model Silicone Hydrogels
Model silicone hydrogel materials were created using a UV-induced polymerization process. 3.6 g of HEMA was mixed with 0.4 g of TRIS. 0.2 g of EGDMA was subsequently added and allowed to mix, and finally 0.02 g of the photoinitiator IRGACURE was added. The mixture was poured into aluminum foil molds and cured in a UV chamber (CureZone 2 Con-trol-cure) for 20 min at 340 nm. The surfaces were then placed in a 50 °C oven overnight to ensure completion of polymerization. Samples were then placed in Milli-Q water for a minimum of two days to rehydrate, with the water being changed daily to remove any unreacted monomers [35].
Molecular Imprinted Materials-Acetic Acid Functional Monomer
Acetic acid imprinted materials were created using a similar process to the model silicone hydrogels. To each polymerization mix, before the addition of the IRGACURE initiator, an acetic acid solution with various amounts of ciprofloxacin dissolved within it was added to the reaction mixture, creating an approximate 0.01 M acetic acid concentration in the final polymerization mixture. Control materials had a solution of acetic acid added without any ciprofloxacin.
Molecular Imprinted Materials-Acrylic Acid Functional Monomer
The imprinting of acrylic acid materials was more carefully controlled to determine the effect of the imprinting on the drug release characteristics of the technique. To that end, materials were created using similar procedures to the model silicone hydrogels. Before the addition of the IRGACURE initiator, acrylic acid was added to a final concentration of either 100 mM or 200 mM. Ciprofloxacin powder was subsequently added to the mixture, in molar ratios to the acrylic acid varying from 1:4 to 1:16, and the polymerization of the materials was initiated as described above.
Molecular Imprinted Materials-Washout
Materials imprinted with ciprofloxacin were rehydrated in Milli-Q water in glass jars, with the water being changed daily. The water used in the washout period was measured for ciprofloxacin concentration, and materials were only used after ciprofloxacin concentrations within the water were at minimal or non-existent levels.
Drug Solutions
A 0.3% (w/v) (9.06 mM) stock solution of ciprofloxacin-HCl was created in phosphate buffered saline. The pH of the solution was adjusted to 4.0 to ensure the complete solubilization of the ciprofloxacin at this high concentration. Using this stock solution, samples were diluted approximately 4,000 times and read with a Hitachi F-4500 fluorescence spectrophotometer (Hitachi Ltd., Tokyo, Japan), with an excitation wavelength of 274 nm and an emission peak at 419 nm, to create a linear standard curve. This standard curve was used to correlate emission intensities with the concentration of ciprofloxacin within the solution.
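As a quick illustration of the unit conversions implicit in this section, the sketch below checks that 0.3% w/v corresponds to roughly 9.06 mM (assuming a molar mass of about 331.3 g/mol for the ciprofloxacin base, a value not stated in the text) and shows how a vial concentration could be back-calculated from a fluorescence reading via a hypothetical linear standard curve and dilution factor; the slope, intercept and reading used here are invented purely for illustration.

```python
# Assumed molar mass for the ciprofloxacin base (not stated in the text);
# 0.3% w/v expressed as base equivalents gives the paper's 9.06 mM figure.
MW_CIPRO_G_PER_MOL = 331.3

def percent_wv_to_mM(percent_wv, mw=MW_CIPRO_G_PER_MOL):
    """0.3% w/v = 0.3 g per 100 mL = 3 g/L; dividing by the molar mass and
    converting to millimolar gives the stock concentration."""
    grams_per_litre = percent_wv * 10.0
    return grams_per_litre / mw * 1000.0

def conc_from_fluorescence(intensity, slope, intercept, dilution_factor=1.0):
    """Back-calculate the vial concentration (mM) from a fluorescence reading,
    given a hypothetical linear standard curve: intensity = slope*conc + intercept."""
    return (intensity - intercept) / slope * dilution_factor

print(f"0.3% w/v is approximately {percent_wv_to_mM(0.3):.2f} mM")  # ~9.06 mM
# Invented curve parameters and a reading taken from a 100x-diluted sample:
print(f"back-calculated vial concentration: "
      f"{conc_from_fluorescence(412.0, slope=5000.0, intercept=2.0, dilution_factor=100.0):.2f} mM")
```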
Water Content, Centre Thickness, Volume and Dry Weight Determination
After soaking in Milli-Q water for a minimum of two days, discs of the materials were punched out using a #4 cork borer with a diameter of 5 mm. The water content of these discs was determined using the gravimetric method, with the Sartorius MV 100 (Sartorius Mechatronics Canada, Mississauga, ON, Canada). The dry weight of each disc was also determined. The centre thickness was determined using a dial lens gauge for rigid contact lenses (Vigor Optical, Carlstadt, NJ, USA), and the volume was calculated from thickness and diameter data, assuming a cylindrical shape.
Drug Loading into Materials
After determination of the water content, discs were placed in a ciprofloxacin drug loading solution. Three separate concentrations were used: the stock 9.06 mM solution, and two diluted loading concentrations, 0.10 mM and 0.025 mM. 2 mL of the loading solution was used, and loading was undertaken in amber vials, as ciprofloxacin is light sensitive. Discs were left in the loading solution at room temperature for one week.
Drug Release Kinetics
Loaded discs were removed from the loading solution amber vials using plastic tweezers. The surface was partially dried on lens paper to remove any excess loading solution, and the disc was placed into another amber vial containing 2 mL of an artificial tear solution (NaCl 90 mM, KCl 16 mM, Na2CO3 12 mM, KHCO3 3 mM, CaCl2 0.5 mM, Na3Citrate 1.5 mM, glucose 0.2 mM, urea 1.2 mM, Na2HPO4 24 mM, HCl 26 mM, pH 7.4) [36]. The vials were then placed in a shaking water bath at 34 °C. At various time points, the concentration of ciprofloxacin in the solution was determined using spectrophotometry. For model lenses loaded with the 9.06 mM ciprofloxacin solution, samples were removed and diluted 100× to bring them into the range of the standard curve. For the other two loading conditions, 1 mL of the release solution was removed from the vial, read in the spectrophotometer, and returned to the vial. Readings were taken every 5 min for the first 20 min, then after 30, 45, 60 and 90 min. Readings were then taken hourly until 8 h had passed, then daily until 14 days had passed.
Statistical Analysis
Statistical analysis was performed using Statistica version 8 (StatSoft Inc., Tulsa, OK, USA) using a repeated measures ANOVA, with post hoc Tukey tests as indicated. A p value of less than 0.05 was considered statistically significant.
Conclusions
In this study, model silicone hydrogels for the delivery of the antibiotic ciprofloxacin were developed using a molecular imprinting strategy. Synthesized materials had water contents in the mid to low teens, and when loaded with various solutions of ciprofloxacin they demonstrated different release kinetics. Loading with high concentrations of ciprofloxacin led to very few differences between the various imprinted materials and the control. When loaded with lower concentrations, the effect of the imprinting was more clearly seen, with model lenses created using a 4:1 ratio of acrylic acid to ciprofloxacin template consistently releasing the greatest amount of drug, and certain model lenses continuing to release the drug for up to 14 days. As the use of these contact lens combination devices will likely involve some element of overnight or extended wear, the results from this study using model silicone hydrogel materials have provided some insight into how these materials behave as drug delivery devices when formed using molecular imprinting.
Retelling education in emergencies through the black radical tradition: on racial capitalism, critical race theory and fugitivity ABSTRACT This article asserts that the Black Radical Tradition (BRT), grounded in historical and structural inquiry, offers tools to reinterpret EiE radically—the BRT encompasses a tradition rooted in diverse African intellectual and activist inquiries, providing a multifaceted theoretical framework. Relevant to humanitarian scholarship, the BRT challenges omissions of colonisation, capitalism, and enslavement histories in forced migration and aid, shedding light on their roles in perpetuating ‘white saviours’. The paper, adopting my roles as a scholar and aid practitioner, critically examines the EiE sector through three BRT lenses: racial capitalism, critical race theory, and fugitivity. It employs case studies, aligning with the BRT's interconnected focus, revealing the pervasive influence of educational aid, racial injustice, and structural inequalities. These lenses collectively illuminate the potential of Black radical thought to transform the EiE landscape. By tracing EiE's genealogies through Black radical historiography, the article advocates for sector-wide introspection, emphasising power redistribution, centring marginalised voices, and challenging prevailing hierarchies in humanitarian contexts. Introduction In October 2020, the Inter-Agency Network for Education in Emergencies (INEE), a renowned member network that has established itself as an authoritative body in the field of Education in Emergencies (EiE) over the past two decades, issued a groundbreaking Anti-Racism and Racial Equity Statement.This statement constituted an acknowledgement of complicity in upholding the organisation's deeply racialised and hierarchical system.Within this statement, it was explicitly acknowledged that the INEE secretariat had played a role in perpetuating what was described as 'white supremacy culture' and 'institutional racism' through organisational structures and actions (INEE 2020).This revelation was particularly significant given the limited scholarly attention paid to the influence of racial constructs and racism on individuals' educational experiences within the EiE sector, where discussions surrounding racism are scarcely evident in advocacy, policy formulation, research endeavours, or programme development (Oddy 2020;Sriprakash, Tikly, and Walker 2019).Consequently, the INEE statement signals a momentous call for transformative change within the field. To paraphrase critical race scholar Gloria Ladson-Billings (1998), it may be surprising for white supremacy 1 'to crop up in a nice field' like EiE.However, acknowledging such deeply entrenched systemic disparities within humanitarian aid echoes the observations of scholars who argue that the aid system itself is inherently colonial (Baughan 2020;Rutazibwa 2018;2020).Pallister-Wilkins draws on the insights of W. E. B. 
Du Bois to highlight how humanitarianism has historically played a role in creating, regulating, and perpetuating 'whiteness' while allowing 'white supremacy culture to go unchallenged and thrive' (Pallister-Wilkins 2021, 98).Similarly, Bian (2022) emphasises the intricate connection between the humanitarian sector's emergence and histories of empire and colonialism, which persist and influence contemporary constructions of 'race.'Their assertion that 'expertise is covertly racialised' within the humanitarian aid sector (Bian 2022, 2) underscores INEE's omission of perpetuating a particular worldview and systemic exclusion. For EiE scholars specifically, the INEE statement is a call to action to critically reflect on and address the reproduction of racial injustice in the sector and to substantively engage with the production and effects of marginalisation and othering across its research and policy.Decades of scholarly engagement by critical education scholars with theories and movements stemming from resistance to racial injustice that examine structural injustice and their implications in educational settings, EiE, constrained by its 'white gaze' (Pailey 2020), has shown limited involvement with the wider literature on educational inequities (Shuayb and Crul 2020).Until recently, the field's examination of structural injustices within the ecosystem (Alameldeen and Fatima 2021;Menashy, Zakharia, and Shuayb 2021) and practitioners' complicity in perpetuating these structures (Greer 2023;Novelli and Kutan 2023;Shah et al. 2023) has been even more scarce.To dismantle institutional racism and white supremacy culture, new intellectual tools are required, enabling us to 'define and seek a world where all can flourish' by embracing diversity as a source of strength (Lorde 2018, 19). In this paper, I assert that the Black Radical Tradition (BRT), through its attention to historical and structural inquiry, provides the necessary tools, vital epistemic challenge and a radical reinterpretation of EiE as a field.This assertion stems from the need to recognise the contextual significance of the INEE's Anti-Racism statement.It was released in response to global protests against racial injustice triggered by George Floyd's tragic murder amid a pandemic that laid bare the deeply entrenched, historically interconnected, and multifaceted nature of structural inequities, marking a culmination of years of societal reckoning with systemic discrimination (Strong et al. 2022); attributed to the Black radical tradition (Toliver 2021). 
The BRT, as expounded by Robinson in his seminal work Black Marxism: The Making of the Black Radical Tradition encompasses a tradition rooted in diverse intellectual and activist inquiries originating from African cultures, languages, and belief systems.Robinson meticulously traces five centuries of resistance against oppression (which he saw as deeply connected to capitalism), underscoring historical inquiry's significance in countering the epistemic and systemic erasure of countless individual, spiritual, and collective endeavours that contested enslavement and other oppressive systems.As shall be explored, the BRT is not a singular, concrete, or stand-alone theory; 2 within it, a rich tapestry of concepts and perspectives exists (Johnson and Lubin 2017;Robinson 2021;Thomas 2019).The multifaceted nature of the BRT is particularly relevant to humanitarian scholarship precisely as it challenges the omission of colonisation, capitalism, and enslavement histories within the realms of forced migration and aid (Carpi and Owusu 2022;Sikka 2020;De Brankamp 2021;Genova 2018).This elision has allowed the field to become a conduit for 'white saviours' transitioning into aid providers (Carpi and Owusu 2022, 22).By anchoring its focus on historical context, narrative challenge, and the exposure of structural injustices, the BRT offers a distinctive opportunity to address historical omissions and profoundly reshape the paradigm of EiE, necessitating a fundamental overhaul of our theoretical framework. In this article, I begin by establishing my dual role as a scholar and aid practitioner, positioning this work as a constructive critique of the EiE sector aimed at enhancing its practices.Next, this paper employs three critical lenses derived from the BRT to examine the entanglements of colonialism and racism within educational aid: Racial capitalism, Critical Race Theory, and the notion of Fugitivity.I argue that these three lenses form a cohesive and mutually reinforcing framework for analysing power dynamics, histories of oppression, and resistance.They enable a simultaneous examination of macro-level factors, such as policies and systemic inequities, and micro-level factors, including individual experiences.This multi-scalar approach provides a comprehensive analytical lens to unearth the EiE field's complex dynamics, encompassing historical and contemporary struggles against structural inequities underlying the educational disparities observed in crisis contexts.In what follows, I take these three concepts in turn, using each to frame different empirical examples from the field of EiE.I engage with multiple case studies as an epistemological stance aligned with the BRT's emphasis on interconnectedness and to illustrate that the interplay of educational aid, racial injustice, and structural inequalities transcend regions or contexts. 
Collectively, Racial capitalism, Critical Race Theory, and Fugitivity illuminate the potential contributions of Black radical thought to reimagining the history and horizon of EiE.As scholar Michelle Fine aptly notes, critical research holds the promise of narrating alternative stories, particularly 'when lives are situated in historical and structural analysis' (Fine 2018, 11).This paper unfolds a radical reimagining of the field by retracing EiE's genealogies through Black radical historiography.It advocates for deeper introspection within the sector, encouraging more profound reflection on its potential transformation, emphasising the need to redistribute power, centre marginalised individuals and their educational aspirations, and challenge prevailing hierarchies in humanitarian contexts. My positionality within the EiE ecosystem The Black Lives Matter moment in 2020 called for critical self-reflection on our work's ethical implications and social responsibility.Danewid (2017, 118) argues that the humanitarian 're-constitutes itself as 'ethical' and 'good', innocent of its imperialist histories and present complicities.'Similarly, Negrón-Gonzales posits that aid practitioners seldom concede 'how they are part of the systems and processes that produce and reproduce poverty'-that poverty is actively constructed rather than inevitable (2016,3).I offer this personal vignette because it lends credence to the importance of critical scholarship on lived experience and the tensions, nuances and possibilities of power that arise from self-reflexivity. As a Black, dual-heritage woman from a working-class background, my identity intersects with my awareness of power dynamics.My experiences with race, racialisation, and othering have profoundly shaped my educational and professional journey.However, this self-awareness also requires me to confront my complicity within the exclusionary EiE ecosystem.While migration and racism have influenced my educational and career paths, I must acknowledge that I have not experienced forced displacement, limiting my understanding of the complexities faced by those pursuing education in displacement contexts.My background as a long-time practitioner in the humanitarian EiE sector, combined with other privileges like cisgender identity, able-bodied, UK education, UK citizenship, and fluency in English, places me in proximity to whiteness within a system that often marginalises those closest to displacement.These identity factors have also influenced how others perceive me.For instance, I remember a moment during a teacher training session in Uganda when two participants joined me during the lunch break, jokingly remarking, 'Let us sit with the one who sits closest to power.' Their comments were a stark reminder of the complex ways colonial legacies persist and adapt, even extending invitations to those on the margins. 
'Coloniality', as Mignolo (2021) describes, touches me in multiple ways. As a scholar and practitioner in this field, I hope to be constructively complicit (Joseph-Salisbury and Connelly 2021, 210). This idea builds on Walter Rodney's concept of the scholar activist or guerrilla intellectual, which calls on Black academics to ground their work and transcend epistemological challenges to the racist and colonial foundations of mainstream scholarship by anchoring intellectual inquiry in struggles and pedagogical frameworks that reject Eurocentrism (2019, 67). To embody 'constructive complicity,' I have engaged in teaching within open learning initiatives for forcibly displaced people (Hall, Lounasmaa, and Squire 2019; Oddy et al. 2022), committed to critical participatory action research praxis, and collaborated with collectives striving to disrupt and challenge aid-related inequities. Within this paper, I have sought to expand conceptual framings and referencing practices by citing intellectuals who contribute to the BRT's continuation and scholars from forcibly displaced backgrounds.3 This positioning challenges EiE research, which often focuses on 'what works' within programmatic implementation (Burde et al. 2016) rather than embracing scholar-activism.
Racial capitalism and EiE
In this section, I provide an overview of the concept of racial capitalism, which I then use to analyse several empirical examples from the field of EiE. In Cedric Robinson's seminal work, Black Marxism: The Making of the Black Radical Tradition, he argues that the BRT opposes racial capitalism.4 Racial capitalism underscores the inseparable connection between capitalism's reliance on racial hierarchies to maximise profit, resulting in racial disparities in wealth, employment, and resource access, ultimately perpetuating systemic economic injustice (Johnson and Lubin 2017, 11). As an analytical framework, racial capitalism aligns with critical forced migration studies, as Robinson notes the connection between the emergence of millions of Black refugees and the conditions created by racial capitalism (2021, 318). Over the decades, scholars have expanded on the concept, emphasising the iterative and mutually constitutive historical structures of capitalism and race, as well as how institutions actively produce, sustain, and reproduce racial power relations within the systemic logic of historical capitalism (Bhattacharyya 2018; Gilmore 2007; Lentin 2021). Recent scholarship has extensively explored the intersections between racial capitalism and migration. Harsha Walia (2021) and Gargi Bhattacharyya (2018) argue that practices like hostile bordering structurally perpetuate violence and precarity experienced by displaced and migrating populations (Walia 2021, 12).
This underscores the critical relevance of these discussions for EiE, given its predominant operation within contexts involving forcibly displaced people.Within these contexts, the enduring presence of racial capitalism, as emphasised by Walia (2021, 35), consistently devalues the lives and well-being of displaced individuals within broader societal structures.This devaluation hinders their access to education and perpetuates disparities and injustices in providing educational aid and opportunities (Dryden-Peterson and Reddick 2017).Consequently, it is imperative to critically scrutinise and challenge these intersecting systems of oppression within EiE.Moreover, it is crucial to recognise the pivotal role of education in the global production and perpetuation of racial inequalities, as Gerrard, Sriprakash, and Rudolph (2022) assert.Education, they argue, is intricately intertwined with reinforcing racial capitalism, effectively 'building its house' on these systemic inequities (Gerrard, Sriprakash, and Rudolph 2022, 437). Equally crucial in analysing systemic inequities and their influence on educational trajectories is recognising the significance of 'threads of resistance' in challenging and reshaping racial capitalism (Fine 2018, 96).As Bhattacharyya emphasises, while racial capitalism encompasses the 'historical legacies of racialised dispossession in shaping economic life, it is not reducible to those histories' (2018, x).The forthcoming sections of this paper will provide a more in-depth exploration of this theme of resistance. Historical manifestations of racial capitalism and Education Education has historically been wielded as a tool of empire, exemplified by Cecil Rhodes' belief in its pivotal role in extending British rule globally (Flint 1974, 252).The evolution of the United Kingdom's (UK) education system was intricately tied to its colonial endeavours, a relationship meticulously detailed by Rebecca Swartz (2019).Starting in 1833, the UK's education system began to stratify along class lines, with distinct provisions for poor and criminal children, alongside workhouses industrial and reform schools (Swartz 2019, 39).This highly stratified emerging educational system mirrored the hierarchical structures that evolved in colonial contexts.Swartz suggests that 'stories of colonial education are central to understanding attitudes about difference, whether of class, race, gender or age ' (2019, 2).Colonisers' perspectives informed the content of colonised people's education on the 'educability' of their subjects, a term understood as a preconceived idea of ability (McLeod and Paisley 2016;Swartz 2018).Educational spaces serve as crucial sites for 'producing and understanding newly colonised subjects,' with schools becoming instrumental in measuring the aptitudes, including both physical and intellectual capacities of individuals, by various colonial actors such as missionaries, researchers, and colonial officials (Swartz 2019, 10).Notably, charity and faith-based non-governmental organisations (NGOs) frequently administered education in colonial contexts, laying the groundwork for the early foundations of Education and International Development with imperial objectives that aimed to establish hierarchies of intellect through metrics and measurements (Swartz and Kallaway 2018;Takayama, Sriprakash, and Connell 2017).In subsequent sections of this paper, I will further explore the implications of educability, and its connections to contemporary education aid by re-examining classroom 
experiences through a Critical Race Theory (CRT) lens. The economic exploitation of colonies Racial capitalism sheds light on another under-acknowledged facet of education's role in empirebuilding.As Bhambra (2022a) asserts, the development of institutions like the welfare state in the UK and universal primary education in the 19th and 20th centuries was financially underpinned by the wealth extracted from its colonies.This funding was derived from resource extraction, exploitative labour practices, and the collection of taxes imposed on colonial subjects.Such wealth fuelled Britain's domestic development, including institutionalising the welfare state and universal primary education while leaving colonised territories economically disadvantaged. Using archival data, Bhambra relates how war and debt-ravaged European countries like the UK used the money that colonies had been obliged to deposit in their banks as part of its colonial fiscal and monetary policy to service Britain's debts (Bhambra 2022a).In return, colonised territories were either lent or given a paltry £40 million; at the same time, their deposits in UK banks were over £250 million, which the British government repurposed to fund its welfare state-building instead of enabling the colonies to use that money for their own needs (Bhambra 2022b, 13).The Colonial Development Act, enacted in 1948, laid the groundwork for the emerging framework of contemporary humanitarian aid, effectively concealing the extensive wealth accumulated by colonisers in the metropolis (Bhambra 2022b).This underscores the pivotal role played by the economic exploitation of colonies in shaping the current aid structure, a system with which educational aid is intricately entwined. Aid, trade and self-interest In the mid-twentieth century, the UK, as mentioned above, and the United States (US), through its 'Four Programme' (Francis 2022), initiated humanitarian aid programmes, further intertwining education with imperial interests.While ostensibly aimed at supporting emerging economies, these programmes were laden with conditions favouring the interests of donor nations, including trade tariffs that benefited the Global North (Bhambra 2022b).These links between aid, trade, and self-interest are remarkably underexplored in EiE scholarship.A striking omission from educational aid discourse more broadly is any reference to the dissolution of colonial empires in the mid-twentieth century, the forced removal of post-independence socialist-leaning leaders, the reinstatement of authoritarian rulers, and the introduction of stabilisation and structural adjustment policies (SAPs) designed to manage mounting debt in many of the contexts where educational aid is now provided.These policies have shaped what is now described as 'a global extractive system of debt peonage,' which forms the foundation of the global education architecture today (Tikly 2019, 71).Nations subjected to SAPs have conformed to global education policies and international action frameworks (Novelli et al. 2014, 4). 
Contemporary EiE is inextricably woven into the intricate tapestry of Western humanitarianism. Its governance primarily hinges on the political interests and financial backing of governments situated in the Global North. Paradoxically, these governments exhibit a pronounced hostility towards welcoming refugees within their borders, often demonstrating a preference for confining aid recipients to the Global South (Shuayb and Crul 2020). The capricious disposition of 'benevolent' imperial powers comes to the fore when funding commitments are abruptly rescinded. A pertinent example is the United Kingdom, which in 2019 positioned itself as a champion of education in regions beset by crises by pledging a substantial £25 million for EiE research. However, the landscape shifted within just a few years when the newly established Foreign, Commonwealth and Development Office (FCDO) initiated substantial cuts to humanitarian aid funding. These actions had direct, adverse consequences on thousands of recipients of educational assistance, as exemplified by the situation in South Sudan, whereby funding already earmarked for primary and secondary education projects was abruptly cut (Sparks 2021). The actions of the FCDO illuminate the paradoxical nature of foreign aid, underscoring that donor decisions are seldom rooted in meaningful participation or genuine consideration of the needs and aspirations of aid recipients. Simultaneously, there is a discernible tendency among donor countries to align EiE funding with their national security interests (Department for International Development 2015). Furthermore, stakeholders engaged in EiE initiatives have found themselves entangled in bordering regimes. These dynamics further complicate the landscape, raising questions about aligning humanitarian aid objectives with political interests and border control agendas.
The legacy of colonial control and educational aid
As Ruth Gilmore Wilson highlights, the 'shadow state' concept emphasises the intricate ways states and governments exert control over non-profit organisations or non-governmental organisations (NGOs) (2017, 41). Historical records reveal how early educational aid programmes were instrumental in controlling, surveilling, and furthering the goals of colonialism. For instance, during the Kenyan Mau Mau War of Independence (1952-1962), missionary schools were seen as potential centres of dissent and, consequently, banned by British colonial authorities. The Kenya Teachers College even had its campus converted into a prison camp for opponents of colonial rule 'where proponents of resistance to colonialism were hanged' (Thiong'o 2011, 166). British colonial authorities established internment camps across Kenya, detaining male children and youth to forestall 'radicalisation' or resistance to colonial rule (Baughan 2020). Collaborations between international NGOs like Save the Children, who colluded with colonial administrations to secure funding and manage education programmes within the internment camps (Baughan 2020), further underscored the controversial roles of organisations in the early educational aid landscape.
However, as racial capitalism implores us to recognise, coloniality endures.Today, supporting education in crises enables narratives of 'benevolent imperialism' to prevail, shrouding the violence of colonial rule and expansionism in humanitarianism's language whilst simultaneously positioning military acts as rescuing or liberating others (Shirazi 2020, 60).Notably, INGOs have been implicated in practices such as migrant detention, raising questions about the tensions of providing humanitarian education aid whilst producing and protecting bordering logics.The example of Save the Children Australia's involvement in child protection and EiE services on Nauru in 2012-2015, an island used by the Australian government to detain asylum seekers, exemplifies the ethically dubious decisions and complicit actions by these organisations (Bessant and Watts 2018, 51).Only when whistle-blowers 'forced' the organisation's hand did Save the Children Australia make 'public disclosures about the conditions in which the inmates lived' (Bessant and Watts 2018, 51).As Harsha Walia notes, 'border imperialism and state practices of migrant detention create huge corporate profits' (Walia 2013, 57).A child rights organisation's willingness to sign non-disclosure agreements as part of government service contracts for financial gain underscores the dubious line between humanitarian imperatives and profit-driven decisions (Bessant and Watts 2018, 51).Furthermore, it demonstrates how prevailing colonial practices interact with educational aid and illustrates how EiE interventions can operate as part of the 'shadow state' (Gilmore Wilson 2017, 41) apparatus. To summarise, examining EiE through a racial capitalist lens necessitates a critical re-evaluation of the field, urging us to delve into its historical origins and contemporary involvements within the broader framework of racial capitalism.It demands attention to the interconnected, broader, and systemic contours of EiE, firmly positioning the field as an outcome of global forces, historical and present, and their local effects that can also perpetuate educational disparities during emergencies. 
Critical race theory (CRT) and EiE In the prior section, our exploration delved into racial capitalism, emphasising economic aspects and structural inequalities evident in the EiE field.In this section, I pivot our focus towards CRT to delve deeper into both the individual and systemic forms of racism that continue to perpetuate inequalities (Tate and William 1997).Formed within the legal scholarship movement in the United States in the 1980s, CRT offered a paradigmatic shift in mainstream discourse that positioned racism as an individual act, bias, or prejudice rather than deeply embedded within legal systems, frameworks, and policies.In the 1990s, scholars such as William Tate and Gloria Ladson-Billings (1995) applied CRT to educational research and practice, examining how structures of racism mutate to reproduce educational inequity both in and out of educational spaces, 'relying on racial characterisations and stereotypes' that legitimise stratification (Tate and William 1997, 199).Over the past several decades, critical approaches to pedagogy have developed within education studies in response to the multiple issues rendered visible through CRT (Yosso 2005).Pragmatically, this includes embedding a curriculum that acknowledges and challenges structural inequities, often advocating for asset-based, linguistic and cultural pluralism, unsettling knowledge inequities and valuing distinct traditions (Paris and Samy Alim 2017).Thus, CRT offers a valuable lens through which to analyse EiE, shedding light on the pervasive influence of racial dynamics in educational settings. Educability and curriculum Education was pivotal to early nineteenth-century humanitarian movements within Britain and its colonies (Swartz 2019).The colonial education policies sought to dislocate populations from their communities and lands, reinforce capitalist individualism, and dismantle social solidarity (Rodney 1972, xvii).Missionaries viewed it as a means to convert enslaved and Indigenous subjects to Christianity.It was also not in the colonial power's interest to educate the colonised population extensively, as this could challenge their subjugation (Swartz 2019).Within this context, colonial curricula played a crucial role in upholding the racial capitalist system, reinforcing racial, gendered, and sexual hierarchies (Fúnez 2022).Colonisers, missionaries and early educational aid providers rigidly defined what education should entail and which knowledge was valued, failing to recognise when education was already taking place (Swartz 2019).Practical knowledge, including numeracy and literacy, was favoured, with state educationists and missionaries drawing on pseudo-scientific discourse that reinforced stereotypes of colonised populations' limited capabilities and educability (McLeod and Paisley 2016;Swartz 2019).Consequently, education spaces became central to the global production and reproduction of racial inequalities, intimately tied to the perpetuation of racial capitalism (Gerrard, Sriprakash, and Rudolph 2022). 
Scholars like Ngũgĩ wa Thiong'o (2011) and Leon Tikly (2004) have emphasised the persistent legacies of imperialism within the realm of education.These legacies necessitate critical scrutiny of elitism, eurocentric curricula, and the sidelining of Indigenous knowledge within education and international development (Tikly 2019, 2).Moreover, INEE's recognition of a culture steeped in white supremacy (2020), especially in its role in setting standards and shaping knowledge production, underscores that EiE interventions often draw from an Anglo-European epistemology inherited from the colonial era.It can be argued that the sector's preoccupation with standard-setting and toolkit production (dominated by actors in the Global North), is not entirely detached but rather can be traced back to colonial interests in defining educational content, assessments and standardisation. Transnational insights and counter-hegemonic examples CRT's framework further urges us to examine educational policies and decision-making beyond national borders and their relevance to the EiE field.For instance, critical race scholar Gloria Ladson-Billings' examination of the Brown vs. Board of Education decision in the United States during the 1950s illustrates the transnational nature of social movements and the political power they wielded (1998).Ladson-Billings' analysis goes beyond portraying the desegregation of schools as a benevolent act; she contends that it was a strategic move by the United States aimed at bolstering its international image among newly emerging independent nations, validating its political and economic ideologies and curbing the spread of communism (Ladson-Billings 1998, 17). I argue that the Brown vs. Board of Education case is relevant to EiE for several reasons.Firstly, it challenges the conventional narrative of knowledge flowing unidirectionally from the Global North to the Global South.It serves as a compelling counter-narrative, demonstrating how dissent and resistance can drive meaningful educational reform, even from within the heart of an empire.The tendency to suppress multi-directional learning is evident within the context of EiE, where the 'Global North' predominantly assumes the role of primary knowledge producer.This limited perspective fails to acknowledge how the political ideologies of formerly colonised nations can influence educational policies, even within the centres of colonial power.Secondly, this landmark case raises critical questions about the educational disenfranchisement of marginalised populations within the 'Global North'-an issue often overlooked as a crisis (Shirazi 2020;Shuayb and Crul 2020).Across EiE scholarship and practice, sites of deprivation, acts of land removal, hostile bordering practices, and state violence that impact marginalised communities across the 'Global North' are unseen or hegemonically understood, consolidating the 'limitations of the humanitarian imagination' (Shirazi 2020, 76) In sum, CRT provides a powerful analytical framework for understanding the structural continuity of educational injustice by considering historical context, policy analysis, counter-narratives, and the concept of structural racism within educational aid. 
Fugitivity In the previous sections, the lenses of racial capitalism and CRT shed light on how educational aid perpetuates systemic inequities.However, as the BRT emphasises, the enduring forces of resistance against oppression are ever-present.The BLM movement, instrumental in catalysing INEE's statement on racial inequity, embodies the 'accretion of collective intelligence gathered from struggle' (Robinson 2021, I).This section delves deeper into narratives of resistance within the context of EiE. For enslaved individuals, one of the most prevalent forms of resistance was their pursuit of freedom through physical and psychic means of flight.Cedric Robinson deliberately employed the concept of fugitivity to describe those who had escaped slavery.This choice of terminology aimed to counteract infantilising language and underscore the agency of enslaved populations (Robinson and Robinson 2017, 3).Robinson's conceptualisation of resistance encompassed a spectrum of actions, ranging from open revolts to more discreet forms of rebellion.These acts included practices like obeah, voodoo, Islam, and Black Christianity, demonstrating how enslaved populations preserved their identity and autonomy in the face of imperialism (2021,310).These perspectives invite scholars to acknowledge the multi-dimensional nature of resistance throughout history, emphasising that confronting oppression systems can lead to new possibilities for justice. Scholars like Sikka (2020) have urged those studying forced migration to consider the concept of fugitivity, as it challenges conventional definitions of 'refugee' or 'migrant' that can oversimplify identities, negate agency, and reinforce ahistorical perspectives on displacement.In addition, the exploration of fugitivity as a conceptual framework in critical education scholarship has revealed educational histories that have endured historical and contemporary forms of oppression (Givens 2021).Some argue that it provides a fertile ground for educational theorising, curriculum development, and pedagogical practices (Kazembe 2018;McNeill et al. 2021;Stovall 2020).As a paradigm, fugitive pedagogy calls for exposing the physical and symbolic violence inherent in existing knowledge structures and social arrangements, demanding a radical departure from the status quo (Givens 2021, 272).It underscores recognising individuals as products of historical forces and radical social movements (Johnson and Lubin 2017, 25). Fugitivity and EiE Present-day fugitive classrooms can be discovered in conflict zones worldwide.They persist in the Nuba Mountains of Sudan, where cultural and academic education continues within caves despite the ever-present threat of aerial bombardment (Oddy 2023;Warren 2020).Similarly, undercover schools for girls in Afghanistan subvert the nationwide ban on post-primary education for girls (Graham-Harrison 2022).These poignant examples vividly illustrate that, despite life-threatening consequences, communities persist in creating 'fugitive spaces' for learning (Harney and Moten 2013;Patel 2019;Stovall 2020).Nevertheless, narratives that depict such agency and resistance to educational aid, such as teacher and student strikes, often remain conspicuously absent from research.Okello et al. (2021)provide a poignant example from the inception of education in the Dadaab refugee camp in Kenya, where they remark, 'UNHCR did not start the schools in the camp.Professor Abdul Aziz did, under a tree.He taught for a decade, although you will find his name in no book.' 
The case of Professor Abdul Aziz, an EiE pioneer in Dadaab, highlights the concealed yet pivotal roles that crisis-affected individuals play in initiating and implementing educational initiatives.Although communities are typically the first to extend aid during crises, these instances of 'mutual aid' (Spade 2020) often remain marginalised in EiE literature.The hidden histories of people's agency, often excluded from official narratives and archives, echo Walter Rodney's assertion that 'historical knowledge' is 'a weapon of struggle' (2019,52).The omission of these individuals, their actions, and their movements from official camp narratives (Monaghan 2019) necessitates a deeper reflection on why EiE scholarship and the field, in general, have been reluctant to acknowledge and celebrate those who initiated education in crisis contexts without donor backing or NGO support. Hidden histories Fugitivity is also a useful concept when exploring histories at the macro level.For example, while the INEE played a pivotal role in advancing EiE research by establishing the Journal of Education in Emergencies, the limitations of historical research are apparent in an EiE timeline launched by INEE in 2019.This interactive timeline aimed to chart the history of EiE by including the 'key interventions, conventions, actors, events, and publications that have shaped the EiE field over the past sixty years' (INEE n.d.).UN agencies and international non-governmental organisations (INGOs) were positioned as 'key actors' in this timeline and credited with groundbreaking interventions and policies that advanced refugee education during these sixty years.However, notable absences within this timeline are evident.Firstly, over the seventy years covered by the INEE timeline, no references are made to pivotal moments, such as the monumental independence movements that swept the world during the post-World War Two period.The timeline fails to recognise the significance of education in the post-independence nation-building processes (Tikly 2019).Furthermore, there is an elision of important cross-regional and multilateral alliances organised by leaders of formerly colonised states, such as the 1955 Bandung Conference, which brought together 29 African and Asian countries to propose strategies for promoting economic, political, technological, and cultural spheres (Timossi 2015). Reinserting people, events, and places into the historical narrative represents an essential act of contestation against narratives of white saviorism, epistemic superiority, and benevolence, which have been central to black radical scholarship. 
5Long before UNHCR's landmark 1985 document on refugee education, another significant omission from the official EiE timeline is the role of countries like Libya and Cuba in supporting anti-colonial and liberation movements by providing educational aid.Cuba, for instance, has granted thousands of students from crisis-affected populations primary, secondary, and higher education scholarships (Fiddian-Qasmiyeh 2015).This scholarship support extended to Sudanese 'lost boys,' Namibians, Palestinians, Sahrawi refugees, and marginalised communities in the USA and beyond (Fiddian-Qasmiyeh 2015).Over six million people globally acquire basic literacy skills through Cuba's 'Yo Si Puedo' literacy drive (Boughton and Durnan 2014, 325).Governments, non-state entities like Brazil's Landless Workers Movement, and various political and social actors in the Global South, including civil society groups, have sought educational support from Cuba (Boughton and Durnan 2014). These collaborations are not characterised as conventional 'aid' but as solidarity-based humanitarianism, emphasising mutual benefit, solidarity, reciprocity, and non-interference in the national sovereignty of participating states (Fiddian-Qasmiyeh 2015, 18).It is important to note that although I am not idealising Cuba's political ideologies, Cuba's educational interventions actively challenge the notion that education aid and 'capacity building' are exclusively the domain of the 'Global North.'This raises questions about why the extensive histories of educational provision by countries like Cuba and others have been marginalised, left unacknowledged, and erased from mainstream narratives related to refugee education (Fiddian-Qasmiyeh 2015, 3).Fiddian-Qasmiyeh's (2015) exploration of South-South humanitarian education aid prompts broader reflections on who receives recognition in history and who is omitted.It challenges and scrutinises established ideas and conceptual frameworks within EiE.The erasure of multiple historical moments from INEE's linear historical chronology of the sector exemplifies how hegemonic power shapes and centres the West as the sole and legitimate provider of educational opportunities in displacement scenarios. In summary, these macro-level examples, alongside Okello et al.'s (2021) illustration of Professor Abdul Aziz's erasure from official narratives, underscore the risk of EiE research becoming analytically irrelevant in comprehending educational experiences, disadvantages, and injustices.These omissions and silences underscore how it constructs a specific version of history, underpinning contemporary discourse and practice.Recognising the historical connections and power dynamics that have shaped and informed EiE is imperative.Consequently, transformative agendas within the field may only emerge with critical reflection on the divergent discourses that have shaped the origins of educational aid. 
Conclusion In conclusion, we cannot ignore the stark reality that EiE, much like humanitarian aid in its broader context, is intricately entwined with the workings of racial capitalism.Nevertheless, despite the critical examination presented in this paper, it is essential to recognise that countless individuals aspire to access educational systems and the prevailing EiE ecosystem.While fraught with exploitation, EiE offers crucial financial support and educational opportunities for millions of forcibly displaced children and youth, although marked by significant disparities (Luchs and Miller 2016).These inherent contradictions do not negate the urgency of cultivating more radical visions for education.History has proven that even within the confines of colonial educational structures, acts of epistemic resistance and solidarity networks were possible, with early African, Asian, and Arab independence leaders studying and collaborating across colonial metropoles (Olorunshola 2021).Inspired by contemporary global movements, such as the South African #RhodesMustFall campaign, we are reminded that systems of dispossession and inequity transcend borders and that educational institutions serve as crucial arenas for pushing the boundaries of the status quo (Bhambra, Gebrial, and Nişancıoğlu 2018).To speak of EiE, thus, is to speak of colonisation and capitalism, bordering and othering, and institutions that reproduce inequities by excluding, devaluing, and silencing forcibly displaced populations.However, in the same vein that Black radicalism, as a conceptual lens, highlighted how residuals of colonial administrative practices permeate EiE, it also enabled generative political possibilities, as resistance to the humanitarian aid sector's will to dominate forcibly displaced people's lives is ever-present As critical EiE scholars, we must heed Rutazibwa's cautionary reflection on the haste of dismantling aid: is there a risk of discarding the potential for positive change along with the existing flawed systems?(Rutazibwa 2018). Through the lens of the Black Radical Tradition, our exploration of EiE has revealed that the limited opportunities offered to crisis-affected populations are merely points on a long historical continuum marked by restrictions on the type of education available.It has exposed the intricate relationships between organisations and states and has demonstrated that EiE, since its inception, has been deeply intertwined with the legacies of colonialism.By revisiting history through this critical lens, the BRT has exposed how dominant narratives within the EiE field have crafted a single narrative that disconnects its origins from colonial forms of assimilation and containment.In doing so, it has obscured the myriad afterlives of historical and ongoing coloniality in numerous spaces, practices, and social relationships.Consequently, multiple erasures at various junctures have shaped the EiE field. 
Finally, as this paper has argued, EiE represents a significant convergence of coloniality, revealing that history is not a series of isolated events but an ongoing influence on contemporary institutional practices and educational experiences.Acknowledging the pervasive structural racism in the EiE sector is not enough to eliminate its presence.Practitioners must recognise that even with good intentions, intentionality alone cannot rectify the entrenched inequitable dynamics.Informed by the insights of the BRT, the challenge for EiE scholarship is to transcend the confines of prevailing bordering logic and reimagine the possibilities.Shirazi's (2020) call to challenge the limits of the humanitarian imagination implores us to explore what kinds of spaces, relationships, ways of knowing, and institutions a Black Radical Tradition-informed EiE field could usher into existence.As we engage with these challenges, it is crucial to acknowledge the situated position of EiE intellectual inquiry, especially as calls for a critical examination of the epistemological foundations of education gain prominence.This calls for vigilance and recognition that systems of exclusion are deeply entrenched.Critical practitioners and scholars must embrace constructive complicity (Joseph-Salisbury and Connelly 2021; Rodney 2019).The Black Radical Tradition underscores the enduring struggle, cautioning us to acknowledge the extended duration of this endeavour.It also signifies that solutions to the systemic and entrenched inequities within the EiE ecosystem may not emerge solely from those in privileged positions but from those historically excluded from meaningful engagement with systems, structures, and institutions.Changing EiE will not happen overnight, but as this study demonstrates, change is coming.Notes 1.For Okun and Jones (2000), white supremacy is defined as a 'historically based, institutionally perpetuated system of exploitation and oppression of continents, nations, and people of colour by white peoples and nations … to maintain and defend a system of wealth, power, and privilege.' 2. More theorectical concepts that the three explored in this paper can trace their genealogies to the Black radical tradition.Contemporary trans-and inter-disciplinary work in the BRT builds upon insights from abolitionists, feminists, anti-colonial scholars, and Marxist thinkers (Michael J. Viola et al., 2019, 6), foregrounding the macro-level phenomena of structural racism alongside its micro-level and intersectional formations.3. My citational practice intentionally seeks to redress prevailing citation exclusion detrimental to the recognition, credibility and visibility of those relegated to the margins of academia.This has included referencing early career scholars, thesis, and online academic contributions (e.g.Twitter, blogs and podcasts) alongside academic journals and books.4.Although the term racial capitalism was populised by Cedric Robinson, Racial capitalism, prominent in South Africa during the 1970s and championed by Neville Alexander (No Sizwe), was integrated into the National Forum's manifesto (Strong et al. 
2022). This coalition, comprising Black consciousness and radical anti-apartheid groups, recognised apartheid's deep-rooted connection to racial capitalism. Their struggle aimed not only to end apartheid but also to dismantle the system of racial capitalism, which perpetuates disparities in wealth, employment, and resource access, challenging the notion of capitalism without racial oppression (Strong et al. 2023). 5. In recent years, Saidiya Hartman has used critical fabulation 'to (re)write history to fill narrative gaps in archives, honouring ancestors, communities, and people's rightful places in history' (Hartman, cited in Kermit 2021).
Disk Dispersal: Theoretical Understanding and Observational Constraints Protoplanetary disks dissipate rapidly after the central star forms, on time-scales comparable to those inferred for planet formation. In order to allow the formation of planets, disks must survive the dispersive effects of UV and X-ray photoevaporation for at least a few Myr. Viscous accretion depletes significant amounts of the mass in gas and solids, while photoevaporative flows driven by internal and external irradiation remove most of the gas. A reasonably large fraction of the mass in solids and some gas get incorporated into planets. Here, we review our current understanding of disk evolution and dispersal, and discuss how these might affect planet formation. We also discuss existing observational constraints on dispersal mechanisms and future directions. the derived lifetimes is limited by the uncertainty in determining ages (e.g., Soderblom et al. 2014, Bell et al. 2013, there is clear evidence that NIR excess fractions decline with age from the classical T Tauri stage to the non-accreting, weak line T Tauri stage-a transition that spans a few Myr at the most. Most disks seem to survive just long enough to allow planet formation (Lissauer et al. 2009), and only in a small fraction (∼ 10%, Winn & Fabyrycky 2014) may the formation of giant planets be possible. If gas does manage to persist late into planet formation epochs, it can further affect planetary dynamics. Even small amounts of gas can influence the dynamics of young planetary systems: causing migration, damping eccentricities and mitigating the effects of planetesimal collisions. Disk depletion lifetimes at longer wavelengths, tracing dust in regions farther from the star, are similar (e.g., see review by Williams & Cieza 2011). Sub-millimeter emission is rarely seen from disks without NIR excesses (Andrews & Williams 2005), indicating either that the entire disk is depleted simultaneously or that the larger grains are lost earlier due to some combination of drift, fragmentation and/or planetesimal formation. Dust at mid-infrared wavelengths appears to last slightly longer (e.g., Wahhaj et al. 2010, Hardy et al. 2015; see Figure 1), however, debris disks may contaminate emission statistics at these wavelengths. Nevertheless, the transition from optically thick to optically thin disksbelieved to be represented by a class of objects called transition disks-appears to proceed from inside-out, .i.e., the inner dust is depleted first. Transition disks have lower accretion rates and dust holes in their inner radii and constitute about ∼ 10% of the disk population (Williams & Cieza 2011). While they are often believed to be a result of planet formation (e.g., Najita et al. 2015), these disks may represent a mixed class of objects (see recent review by Espaillat et al. 2014) with only the lower mass disks on the verge of dispersal (Ercolano et al. 2011, Koepferl et al. 2013. Despite the fact that dust disk depletion timescales are fairly well known, disk dispersal mechanisms are not yet well constrained. In fact, the cluster age vs. disk frequency plots are also consistent with the interpretation that the ∼ 3 − 5 Myr dust disk lifetime tracks the process of dust agglomeration into planetesimals. Planet formation from planetesimals, and even giant planet formation if the gas reservoir is still present, may then proceed over substantially longer periods. 
In this interpretation, which holds if transition disks were mainly caused by planet formation, dust disk depletion does not necessarily imply dispersal of the disk material, Gas, which dominates the disk mass through most of its evolution, could in this case be removed on different timescales. However, observationally inferred timescales for the dispersal of gas in disks are less certain, mainly due to the fact that gas emission is intrinsically very faint. The earliest study was the seminal CO survey of ∼ 10 disks by Zuckerman et al. (1995) which set a loose constraint on the gas disk dispersal time (at large ∼ 100AU radii) of ∼ 10 Myr. The FEPS legacy survey on the Spitzer Science Telescope, based on non-detections of H 2 , set a dispersal timescale of the order of about 5 − 30 Myr at radii 1 − 40 AU (Pascucci et al. 2006). A more recent [OI]63µm survey using the Herschel Space Observatory (GASPS program) derived a similar timescale for the dispersal of gas (∼ 5 − 200AU). Dent et al. (2013) quote that ∼ 18% of stars retain more than 1 M J worth of gas at ages of 4 Myr, and that all disks are dispersed by ∼ 10 − 20 Myr. Thus, gas dispersal times could, in principle, be longer than the ∼ 3 − 5 Myr dust disk depletion time. Different gas tracers have different sensitivity thresholds, making it difficult to compare gas and dust disk lifetimes. In the inner disk (∼ 1AU), however, there appears to be clearer evidence of simultaneous dispersal. The fraction of accreting disks (withṀ acc > 10 −11 M ⊙ yr −1 ) in stellar groups declines on timescales similar to those of NIR excesses (Fedele et al. 2010), with some non-accreting sources (withṀ acc < 10 −11 M ⊙ yr −1 ) still showing IR excesses (also Ingleby et al. 2013, Hardy et al. 2015. This may indicate that gas in the inner disk is removed first, consistent with dispersal scenarios (see Alexander et al. 2014). The main processes believed to disperse disks are a combination of viscous accretion and photoevaporation (see reviews by Hollenbach et al. 2000, Armitage 2011, Clarke 2011, Alexander et al. 2014 and to some extent, planet formation. Protostellar disks build the central star, hence it is to be expected that much of the disk gas is channeled into the central object. Evidence that disk evolution is largely driven by accretion during the bulk of the disk lifetime is provided by the fact that disk accretion rates approximately decline with age as expected from viscous accretion theory (e.g., Hartmann et al. 1998). To begin with, disks form due to the rotation of a gravitationally collapsing cloud core. Gravitational instabilities in the initially massive disk lead to strong accretion onto the central star (e.g., Laughlin & Bodenheimer 1994). At early stages, magnetic fields drive powerful jets and winds that carry away angular momentum from close to the star (e.g., Königl 1991, see reviews by Li et al. 2014, Turner et al. 2014. As star formation proceeds, infall decreases and the disk becomes gravitationally stable. Viscous accretion through the disk continues to transport mass inwards to the central star and angular momentum outwards. As the star reaches close to its eventual final mass, the rate of accretion declines , Mendigutia et al. 2012. However, accretion does not proceed indefinitely and appears to abruptly halt with the disappearance of the inner gas and dust (e.g., Haisch et al. 2001, Fedele et al. 2010. 
Dispersal by pure accretion alone implies an indefinite expansion of the outer (gas) disk as angular momentum is re-distributed; there is no observational evidence to support such expansion. Further, relatively short (∼ 10 Myr) gas disk lifetimes call for an additional dispersal mechanism that removes gas from the system. Photoevaporation, whereby gas is heated to escape temperatures by the central star, is thought to disperse gas at later stages of evolution. Although observational evidence only requires the operation of some such mechanism at late stages, irradiation due to accretion-generated high energy photons should be even stronger and proceed to remove disk material even at earlier stages of disk evolution. When the magnetically-driven outflow/jet weakens along with the accretion rate (e.g., Hartmann et al. 1998, White & Hillenbrand 2004) and becomes more transparent, stellar high energy photons begin to penetrate the outflow column and irradiate the disk to heat its surface. Penetration is first by hard X-ray and FUV photons when accretion rates fall to Ṁ_acc < 10⁻⁶ M_⊙ yr⁻¹, and later by EUV and soft X-ray photons (Ṁ_acc < 10⁻⁸ M_⊙ yr⁻¹). This irradiation marks the beginning of disk photoevaporation, where surface gas is heated to temperatures sufficient to overcome gravity and mass is lost in a slow, thermal wind. Viscous accretion and photoevaporation subsequently work together to disperse the disk with time. Theoretical estimates for a disk evolving under the influence of viscous accretion and photoevaporation, including parametrized population synthesis models, agree with observationally inferred values (∼ 1 − 10 Myr) and can reproduce a range of observational diagnostics of disk dispersal (e.g., Alexander et al. 2006, Ercolano et al. 2009, Gorti et al. 2015). The continuous removal of gas by photoevaporation may hold consequences for the evolution of the disk as it forms planets. While viscosity depletes gas mass uniformly at all radii and spreads the outer disk as it transports angular momentum, photoevaporation preferentially removes gas at specific radii (inner ∼ 1 − 10 AU and outer ∼ 50 AU, depending on which of EUV, FUV and X-rays dominate). If the mass loss rate due to photoevaporation (Ṁ_pe) is low, then planet formation proceeds unaffected by photoevaporation, except at late stages where the presence or absence of gas influences planet dynamics (e.g., Baruteau et al. 2014, Coleman & Nelson 2015). If Ṁ_pe is high, then photoevaporation can influence early stages of planet formation by altering the gas/solids ratio (e.g., Gorti et al. 2015) and the type of planet formed (rocky vs. gaseous). Exoplanet statistics to date appear to indicate a relative paucity of gas giants (estimated frequency of ∼ 10% in solar-mass stars, Winn & Fabrycky 2014), but an abundance of Super-Earths (M_p ∼ 3 − 10 M_⊕) with some gas in an envelope. There must be insufficient gas present after the formation of planetesimals and rocky cores (a process thought to last ∼ 1 Myr, Connelly et al. 2012) or gas giants would be more common. On the other hand, gas must necessarily be present at planetary core formation epochs to explain the frequency of Super-Earths. The rate at which gas is dispersed is thus closely aligned to planet formation timescales.
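The accretion-rate thresholds above lend themselves to a compact summary. The short Python sketch below encodes them, together with the simple comparison of Ṁ_acc against Ṁ_pe that governs gap opening later in the chapter; the function name and the example input values are illustrative assumptions rather than part of any published prescription.

```python
# Illustrative sketch only: the two accretion-rate thresholds are the rough
# values quoted in the text (FUV/hard X-rays penetrate once Mdot_acc < 1e-6
# Msun/yr, EUV/soft X-rays once Mdot_acc < 1e-8 Msun/yr); the function name
# and the example numbers below are hypothetical.

def irradiation_state(mdot_acc, mdot_pe):
    """Classify which high-energy photons likely reach the disk surface and
    whether photoevaporation has begun to dominate over accretion.

    mdot_acc : stellar accretion rate [Msun/yr]
    mdot_pe  : photoevaporative mass-loss rate [Msun/yr]
    """
    bands = []
    if mdot_acc < 1e-6:   # outflow column thin enough for FUV and hard X-rays
        bands += ["FUV", "hard X-rays"]
    if mdot_acc < 1e-8:   # later, EUV and soft X-rays also reach the disk
        bands += ["EUV", "soft X-rays"]
    photoevaporation_dominates = mdot_acc < mdot_pe
    return bands, photoevaporation_dominates

# Example with assumed values for a late-stage T Tauri star:
print(irradiation_state(mdot_acc=5e-9, mdot_pe=1e-8))
# -> (['FUV', 'hard X-rays', 'EUV', 'soft X-rays'], True)
```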
Photoevaporation may also influence the final architectures of planetary systems due to the migration of planets in a disk with gaps cleared during dispersal, leading to preferred semi-major axis distributions for exoplanets (Alexander & Pascucci 2012, Ercolano & Rosotti 2015. This chapter, in keeping with the theme of this book, mainly deals with these above connections or links between disk dispersal and planet formation. We are interested not only in the lifetime of gas disks but also in the radial distribution of material at different stages of disk evolution. Dust evolution is discussed only as it pertains to gas dispersal mechanisms. (We refer the reader to chapters by Birnstiel et al. and Wyatt et al. for more indepth reviews of dust.) We also do not discuss MHD winds which might deplete some disk mass especially at early stages. The structure of the chapter is as follows: we first describe early stages of disk evolution and accretion ( §2), then photoevaporation due to the central star ( §3) and in a cluster environment ( §4), planet formation and dispersal ( §5), and in §6 describe observational constraints on disk dispersal. We end with a discussion on future directions ( §7). Accretion It is widely accepted that the formation of stars and planetary systems is fundamentally governed by the action of gravity and angular momentum. Whereas the overall picture is quite clear, the details are still far from fully understood. This concerns both theory and observation as both suffer from resolution problems. In addition, the highly complex interplay between physics and chemistry in the dusty plasmas of interstellar clouds leaves a complete description essentially intractable, at least for quite some time to come. We focus here on the formation of low and intermediate-mass stars with masses < ∼ 8 M ⊙ , for which Kelvin-Helmholtz time scales are longer than other time scales of relevance and consequently, for which we can follow the time evolution of the formation process. These less massive stars form in density enhancements of rotating molecular clouds. At supercritical density, the dense cores collapse, conserving angular momentum and forming flattened structures (Terebey et al. 1984, Walch et al. 2009) that eventually develop into disks and rings. The hallmark of the dynamical phase of star formation, i.e. the infall phase, would be spectrally resolved molecular transitions with very high optical depths, as illustrated in Fig. 2. There, the spectral infall signature is qualitatively shown by a radiative transfer model, depicted as a smooth red line. Observed excess emission in the blue and red wings is attributed to outflowing gas. It is now firmly established that both early gravitational infall and later accretion are accompanied by mass loss phenomena, and these most often exhibit a bipolar geometry. Accretion processes in disks It is believed that the infall occurs onto the disk, and that the matter is accreted onto the central object through the disk. This, however, needs removal of a significant fraction of the angular momentum that is carried by the disk to prevent the break-up of the central object. Magnetic fields that thread through the core and the disk are invoked to act as a lever arm to brake the rotation. Outside magnetic dead zones, the fields are capable of providing the necessary viscosity due to the combination of magnetic and Reynolds stresses in a turbulent shearing flow (e.g, Balbus 2003, Cao &Spruit 2013 andreferences therein). 
The theoretical foundation of accretion disks was laid by the works of Shakura & Sunyaev (1973) and Lynden-Bell & Pringle (1974). The viscosity is described parametrically by the product of a turbulent eddy size H (of the order of the pressure scale height) and its sound speed c_s, i.e. ν = α c_s H, where α is typically 10⁻⁴ to 10⁻². The time evolution of the surface density Σ(r) of the disk is given by the solution of the viscous diffusion equation, ∂Σ/∂t = (3/r) ∂/∂r [ r^{1/2} ∂/∂r ( ν Σ r^{1/2} ) ], which describes the basic characteristics of an accretion disc, viz. that angular momentum is transported outward through the disk as matter is accreted into the inner regions. Disk observations have revealed rotational Keplerian signatures (Sargent et al. 1987, Dutrey et al. 1994, Olofsson et al. 2001, Guilloteau et al. 2014) but predicted radial accretion drift velocities are too small to be measurable (of the order of a few cm s⁻¹). Another observable would be the Spectral Energy Distribution (SED). The SED of a classical accretion disk is essentially that of a multi-temperature broadened blackbody. Recent models still exhibit these basic features (e.g., MCFOST, Pinte et al. 2006). A useful quantity is the integrated SED, i.e., the accretion luminosity L_acc = η G Ṁ_acc (M/R)_star, where 0.5 ≤ η < 1 is an energy conversion efficiency. For typical parameters one finds L_acc ≈ 10 L_⊙, which is far above what had been determined from observations (Hartmann et al. 1997). It was concluded that the accretion luminosity most likely is not steady in time, i.e. dL_acc/dt ∝ dṀ_acc/dt ≠ 0, but variable within Ṁ_acc ≈ 10⁻⁸ − 10⁻⁴ M_⊙ yr⁻¹. The intermittent high states would be reached during FU Orionis type outbursts (Hartmann et al. 1996), whereas the low states would correspond to the typical T Tauri phase. Rise and decay times are of the order of 1 yr and 100 yr, respectively. Shorter time scales for dṀ_acc/dt have been examined by Costigan et al. (2014). There are observational signatures of accretion. Optical emission lines from T Tauri stars, e.g. Hα, have been modeled as excited by shocks at the footprints of magnetized funnel flows (Muzerolle et al. 2001). However, it appears that the geometry and magnetic field topology are much more complex than envisaged in these one-dimensional models (e.g., see Fig. 1 of Gunther 2013). Mass loss accompanying accretion Jets Optically visible HH-objects (Haro 1950, Herbig et al. 1951, Reipurth et al. 2000) and jets (e.g., Mundt et al. 1985, Shang et al. 2007) are the cooling radiation from fast interstellar shock waves in star forming regions. Observations reveal jets on many length scales, viz. micro-jets (sub-arcsec) to pc-scales. As the name indicates, the collimation of jet flows is very high. The absence of detectable [O III] but prominent [S II] emission often indicates that the excitation (or density) is not very high, consistent with jet velocities not exceeding about 80 km s⁻¹. However, a number of jets are now known to emit in X-rays, implying jet velocities of the order of 500 km s⁻¹ or higher (Liseau 2006 and references therein). In many cases, but not all, these jets are seen together with generally much less collimated molecular outflows (Bally et al. 1983). Figure 3 is based on the compilation of literature data of CO-outflows by Wu et al. (2004), and shows the relation of the mass loss rate, as determined from CO-line mapping, and the bolometric luminosity of the driving sources, over seven orders of magnitude, and obtained from their infrared SEDs.
The plot exhibits a large scatter that is due to the heterogeneity of the sample. However, in spite of this, it seems pretty clear that there is a dichotomy between low-luminosity (< 100 L ⊙ ) and high-luminosity (≥ 100 L ⊙ ) stellar sources. However, in both cases, the data can be fit by power laws, viz. L bol ∝Ṁ a loss . While the low-luminosity distribution (where the luminosity is dominated by accretion) is consistent with a = 1, the distribution steepens at the higher end, with a = 2.5 (see also Beuther et al. 2002 and references therein). In the latter case, the luminosity most certainly is due to nuclear burning (objects already on the main-sequence). This power-law behavior strongly suggests that the underlying physics have common grounds and that the same physical laws govern these processes. Molecular outflows Theories of jet acceleration all invoke the presence of relatively strong magnetic fields, whether for protostellar X-winds or for disk-anchored disk winds (Pudritz et al. 2007, Li et al. 2014). The nomenclature "wind" describes the idea that the flows are initially poorly collimated. The precise nature of the interplay between disk-jet-molecular flow is difficult to determine observationally, primarily due to insufficient spatial resolution capability. However, there are a few clues: for instance, Hartigan et al. (1995) derived an outflow mass loss-to-accretion rateṀ loss /Ṁ acc < ∼ 0.01 from optical observations, while White & Hillenbrand (2004) derive a value ∼ 0.05 − 0.1. Intruigingly, theoretical estimates of this parameter are 0.1 < ∼Ṁloss /Ṁ acc < 1. Hartigan et al. (1995) concluded that these flows traced by [O I] forbidden lines may not carry enough momentum to drive the heavy CO-outflow. This was also the conclusion arrived at by Liseau et al. (2005) in their detailed study of the protostellar object L1551-IRS, its jets and its CO flow. For this young binary, the dynamical mass is known (Rodriguez et al. 2003). For the ratio of the rates, Liseau et al. (2005) founḋ M loss /Ṁ acc = 0.23 ± 0.10 for the primary, and 0.7 ± 0.3 for the secondary, which were based on the large-scale CO outflow. These values are more in agreement with the theoretical prediction. Since both the observed and theoretical values of the outflow mass loss rate are lower than the accretion rate, outflows cannot overwhelm accretion and hence do not play a major role in dispersing the disk at later stages. Photoevaporation: Central Star Brief History Photoevaporation was first studied in the context of massive stars by Hollenbach et al. (1994), who examined the effects of the rather strong radiation fields of massive stars on their disks (as suggested by Bally & Scoville 1982). The basic premise is that the heating of the surface gas drives thermal winds from the disk (c 2 s > GM * /r) which then results in mass loss and a steady depletion of the disk material. Clarke et al. (2001) combined viscous evolution with photoevaporation to find that gaps open in disks at a preferred inner location (the gravitational radius, r g = GM * /c 2 s ). Viscosity depletes matter interior to the gap, leading to inner holes. Adams et al. (2004) concluded that angular momentum support against gravity leads to the launching of flows at smaller radii (∼ 0.1 − 0.2r g , also Begelman et al. 1983, Liffman 2003, Font et al. 2004). Alexander et al. (2006) recognized that the creation of a hole leads to the direct irradiation of the inner rim and results in a rapid dispersal of the outer disk. 
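As a rough numerical illustration of the gravitational radius just introduced, the minimal Python sketch below evaluates r_g = G M_*/c_s² for gas heated to ∼10⁴ K, a temperature typical of photoionized disk surfaces; the stellar mass, gas temperature and mean molecular weight are assumed, illustrative values rather than numbers taken from the models discussed here.

```python
import math

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33    # solar mass [g]
k_B = 1.381e-16     # Boltzmann constant [erg K^-1]
m_H = 1.673e-24     # hydrogen mass [g]
AU = 1.496e13       # astronomical unit [cm]

# Assumed, illustrative values (not from a specific model in the text):
M_star = 1.0 * M_sun   # stellar mass
T_gas = 1.0e4          # K, typical of photoionized gas
mu = 0.5               # mean molecular weight of ionized hydrogen

c_s = math.sqrt(k_B * T_gas / (mu * m_H))  # isothermal sound speed [cm/s]
r_g = G * M_star / c_s**2                  # gravitational radius [cm]

print(f"c_s ~ {c_s / 1e5:.1f} km/s, r_g ~ {r_g / AU:.1f} AU")
# -> c_s ~ 12.8 km/s, r_g ~ 5.4 AU for these assumed values
```

For these assumptions r_g comes out at a few AU; as noted above, rotational support allows flows to be launched from somewhat smaller radii, ∼ 0.1 − 0.2 r_g.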
These theories were more recently extended to include the heating effects of FUV and X-ray irradiation (Ercolano et al. 2009). Overall, photoevaporation and viscous evolution together lead to the dispersal of gas on observed timescales (∼ 1 − 10 Myr). For a more complete account of earlier work, we refer the reader to existing reviews of this topic (Hollenbach et al. 2000, Dullemond et al. 2007, Clarke 2011, Alexander et al. 2014). Overview of Photoevaporation The gravitational pull exerted by the central star decreases with distance, and so does the gas temperature; hence the ease with which flows can be launched depends on disk radius. The rate of change of surface density due to the wind is Σ̇_pe(r, t) = ρ_b v_flow ∝ ρ_b c_s, where ρ_b is the density of gas at the base of the flow and the flow velocity v_flow is proportional to the sound speed c_s at this location. Σ̇_pe is therefore sensitive to the density and temperature of the heated disk surface. In order to escape the system, the critical temperature needed at a given radius is ∼ 18,000/r_AU K for a neutral atomic flow and ∼ 9,500/r_AU K for fully ionized gas to escape with a flow velocity equal to the sound speed. Typical launch speeds are slightly subsonic, ∼ 0.5 − 1 c_s (Gorti et al. 2015). At early stages of evolution, accretion rates (Ṁ_acc ∼ 3πνΣ) are high compared to photoevaporative mass loss (Ṁ_pe) and although photoevaporation may remove mass, its effects on the radial surface density distribution in the disk are minimal. As the surface density Σ decreases with time due to viscous accretion, Ṁ_acc declines with it. Ṁ_pe, on the other hand, stays fairly constant with time; disk mass and its depletion is concentrated at the midplane whereas the density and temperature at the surface (and hence Σ̇_pe) stay relatively unaffected. Eventually Ṁ_acc drops below Ṁ_pe and this is when photoevaporation begins to play a dominant role in determining the evolution of the disk surface density with radius and time (Figure 4). Gaps open in the inner disk when accretion can no longer replenish photoevaporative mass loss (e.g., Clarke et al. 2001). This happens at inner radii of ∼ 1 − 10 AU for solar-mass stars due to the strong heating by the high energy photon flux. Gap opening halts the advection of mass from the outer disk, the inner disk drains rapidly and a hole is created. The disk continues to photoevaporate from the irradiated inner rim outward (e.g., Alexander et al. 2006). We note that holes can be created only if the high energy radiation field has a significant non-accretion generated component, i.e., it is mainly chromospheric/coronal in origin. If not, then the cessation of accretion chokes photoevaporation and gaps and holes cannot be sustained (see Gorti et al. 2015). In the case of FUV photoevaporation, flows are also launched from the outer disk where the surface temperature is still high but escape speeds are relatively low. Under some conditions, gaps may also open in the outer disk. Since disks have shallow surface density distributions (e.g., Andrews et al. 2011), most of their mass is at large radii and photoevaporation here can affect the evolution of the entire disk. Viscous expansion in the outer disk is curtailed and the disk evolves into a shrinking torus of material (Gorti et al. 2009, 2015). Concurrent dust evolution plays an additional role for FUV photoevaporation.
FUV heating of gas is due to collisions with energetic electrons ejected by small dust grains (small grains here include polycyclic aromatic hydrocarbons or PAHs) that absorb FUV photons (e.g., Bakes & Tielens 1994). The evolving abundance of small grains in the disk therefore affects the heating. However, small grains also attenuate FUV photons and their depletion increases penetration and shifts the base of the flow to higher densities. Since, as noted earlier, Σ̇_pe(r, t) ∝ ρ_b T_gas, the depletion of small grains simultaneously decreases T_gas and increases ρ_b, resulting in a smaller net effect on Ṁ_pe. Overall, 2-fluid models of the evolution of the gas and dust (with a range of sizes) show that FUV mass loss rates are not significantly affected by dust evolution. See Figure 7 for the evolution of one such photoevaporating, viscous disk model. Interestingly, the gas/solids ratio in the disk is reduced by photoevaporation (first noted by Throop & Bally 2005) because dust grains are not coupled to the low-density gas in the wind, which preferentially leaves dust particles behind (Gorti et al. 2015). Owen et al. (2011) describe how wind entrainment of dust could be observationally detected via their emission in edge-on disks. Photoevaporative mass loss rates As discussed in §1, the principal determinant of the relevance of photoevaporation for disk evolution and planet formation is the rate at which the disk loses mass. The mass loss rate also dictates how early on during evolution photoevaporation becomes important. As long as accretion dominates (i.e., Ṁ_acc > Ṁ_pe), viscous diffusion and advection will smear out any radial effects and replenish regions where photoevaporative mass loss has occurred. While there is reasonable agreement on the qualitative behaviour of disk photoevaporation between different models, the calculated mass loss rates vary by over two orders of magnitude, from 10⁻⁸ to 10⁻¹⁰ M_⊙ yr⁻¹. At the high end, Ṁ_pe > Ṁ_acc during the Class II stage and photoevaporation determines the radial distribution of disk material and can significantly affect planet formation. Rapid dispersal may even preclude the formation of planets. For low Ṁ_pe, the role of photoevaporation may be limited to clearing the disk of small amounts of remnant gas and facilitating the circularization of planetary orbits (Kominami & Ida 2002). Some of the differences in estimated Ṁ_pe can be attributed to the high energy photons under consideration. For pure EUV models, Ṁ_pe can be low (< 10⁻¹⁰ M_⊙ yr⁻¹). Although ionized gas is heated to ∼ 10⁴ K, EUV is absorbed at very small column densities and the low ρ_b results in low mass loss rates. High EUV luminosities can yield higher mass loss rates, but recent studies suggest photon luminosities of ∼ 10⁴⁰ − 10⁴¹ s⁻¹ (L_EUV ∼ 10³⁰ erg s⁻¹) and hence that the associated Ṁ_pe is low. X-ray and FUV models result in higher mass loss rates, ∼ 10⁻⁸ − 10⁻⁹ M_⊙ yr⁻¹ for typical stellar radiation fields. The calculated Ṁ_pe is in general sensitive to the density and temperature structure of the disk, which now has to be determined, unlike in the EUV case. The disk structure is in turn based on disk chemistry and calculated cooling rates that are all highly model-dependent (e.g., see Rollig et al. 2007).
For X-ray photoevaporation models, however, Ercolano, Owen et al. state that the resulting flow properties are insensitive to the detailed thermal and density structure of the upper disk layers but are instead set by a criticality condition at height ∼ R above the disk plane where the flow makes a subsonic to supersonic transition. They further argue that provided the flow structure is optically thin to the X-rays dominating the heating at this surface, the mass loss rate is independent of any complex thermochemical effects at greater depth in the flow. If this condition is not met, however, then the flow instead makes a sonic transition in regions where heating is dominated by FUV and hard X-rays and then it is essential to calculate the disk vertical structure. The X-ray spectrum assumed also impacts disk temperatures and hence Ṁ_pe; Gorti, Hollenbach et al. assume that soft X-rays (0.1 − 0.3 keV) are mostly absorbed in accretion and outflow columns before they reach the disk surface, while Ercolano, Owen et al. assume a small covering factor of the accretion columns and no absorption in the column, to attain much higher temperatures in their disk models. The latter do not consider molecular cooling, but Gorti et al. find that molecular cooling can be important for regions penetrated by hard X-rays (≳ 1 keV) and their model disks have cooler temperatures. Gorti et al. further treat the flow dynamics using simple analytical estimates drawn from previous work on thermal winds (Begelman et al. 1983, Liffman 2003, Adams et al. 2004, Waters & Proga 2014), but conduct detailed thermo-chemical modeling. Owen et al. claim that the flow structure is unimportant for soft X-rays, and adopt the opposite approach to solve for Ṁ_pe using full radiation hydrodynamics models with simpler thermal physics. However, Gorti et al. include FUV photoevaporation along with X-rays and EUV, and in spite of a smaller role for XEUV photons, get comparable mass loss rates. The high mass loss is partly due to the time-dependent accretion FUV luminosity, which can be substantial in disks (e.g., Gullbring et al. 1998). More recent models also find that FUV photoevaporation can dominate if the FUV luminosities are high, and better reconcile the differences between the two groups. Disk mass loss rates can vary depending on a number of parameters, e.g. stellar mass, initial disk mass and radius, viscosity in the disk, EUV, FUV and X-ray luminosities, and the time-dependent XEFUV spectrum (e.g., Ercolano et al. 2009, Gorti et al. 2015), many of which are known to vary widely, often by an order of magnitude or more, in young stars. This diversity results in photoevaporation rates that can vary widely depending on the system, and Ṁ_pe can generally range from 10⁻¹¹ to 10⁻⁷ M_⊙ yr⁻¹. The disk lifetime for a disk of initial mass M_d(0) and a time-averaged photoevaporation rate ⟨Ṁ_pe⟩ can be estimated approximately, for a linear viscosity profile and assuming α = 0.01 (e.g. Clarke et al. 2001, Gorti et al. 2015), as the time at which the viscously declining accretion rate falls to ⟨Ṁ_pe⟩. For the fiducial photoevaporation rates discussed above of 10⁻⁸ to 10⁻⁹ M_⊙ yr⁻¹ and an initial disk mass of 0.1 M_⊙, the corresponding disk lifetimes are thus ∼ 2 − 10 Myr. We note that, in principle, Ṁ_pe can change with time as the disk evolves, and ⟨Ṁ_pe⟩ here represents an average rate over the disk lifetime (see Gorti et al. 2015). Photoevaporation in the Cluster Environment So far we have considered three flavours of disc photoevaporation driven by the EUV, FUV or X-ray radiation from the disk's central star.
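Before turning to external irradiation, the lifetime estimate just quoted can be checked numerically with a minimal sketch. It assumes the self-similar accretion history for a linear viscosity law, Ṁ_acc(t) = [M_d(0)/(2 t_ν)] (1 + t/t_ν)^{-3/2} (Hartmann et al. 1998), with an assumed viscous timescale t_ν; the functional form, the value of t_ν and the function name are illustrative assumptions, not the exact expression used in the works cited above.

```python
# Illustrative sketch (not the exact expression from the cited papers):
# estimate the disk clearing time as the moment when the viscously declining
# accretion rate of the self-similar solution for a linear viscosity law,
#   Mdot_acc(t) = M_d(0) / (2 t_nu) * (1 + t/t_nu)**(-3/2),
# falls below a time-averaged photoevaporation rate Mdot_pe.  The viscous
# timescale t_nu below is an assumed value.

def clearing_time(m_disk0, mdot_pe, t_nu=5e5):
    """Return the approximate clearing time in years.

    m_disk0 : initial disk mass [Msun]
    mdot_pe : time-averaged photoevaporative mass-loss rate [Msun/yr]
    t_nu    : viscous timescale of the initial disk [yr] (assumed)
    """
    # Solve Mdot_acc(t) = mdot_pe for (1 + t/t_nu):
    x = (m_disk0 / (2.0 * t_nu * mdot_pe)) ** (2.0 / 3.0)
    return t_nu * (x - 1.0)

for mdot_pe in (1e-8, 1e-9):
    t_clear = clearing_time(m_disk0=0.1, mdot_pe=mdot_pe)
    print(f"Mdot_pe = {mdot_pe:.0e} Msun/yr -> t_clear ~ {t_clear / 1e6:.1f} Myr")
# -> roughly 2 and 10 Myr, consistent with the range quoted in the text
```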
In the crowded environments of young clusters, however, there is also the possibility of external photoevaporation by the radiation field produced by (more massive) neighbouring stars. This is particularly to be expected in the case of the EUV and FUV where stars' photospheric outputs are a strong function of stellar mass (Diaz-Miller et al. 1998) and where, even taking into account the relative rarity of higher mass stars, the integrated contribution to the EUV and FUV backgrounds peaks at masses in the range 10-55M ⊙ (Fatuzzo & Adams 2008). This last point is important in assessing the types of cluster environments in which one expects external photoevaporation to be important. At high cluster membership number (N), even the top end of the IMF is statistically well populated and thus the distribution of total UV luminosity at a given N is sharply peaked at a value that simply scales with N. In the case of low N clusters, by contrast, there are large stochastic variations in the population of the upper IMF and the distribution of UV luminosities at given N is broad with a median that is well below the mean. This just means that external photoevaporation is unimportant in low N clusters, partly because the over-all number of stars is lower and partly because, in consequence, the IMF often ends up not containing the most massive stars that dominate the UV budget: see Fatuzzo & Adams 2008 for a detailed analysis of this issue. The behaviour of the X-ray background is more complex because X-ray luminosity does not increase monotonically with stellar mass, attaining a minimum in the case of fully radiative stars in the A star range. Measurements of diffuse X-ray emission in rich clusters such as M17 or the Rosette Nebula (e.g. Townsley et al. 2003) suggest that this emission is largely dominated by early type O stars since the ONC (for which the most massive star is of spectral type O5) lacks a comparable diffuse field. External X-ray photoevaporation It is easy to demonstrate that external X-ray photo-evaporation is negligible compared with internal X-ray photo-evaporation. The X-ray driven mass loss rate at each radius scales linearly with the X-ray flux (since, in the ionisation parameter formulation, the density corresponding to the local escape temperature is proportional to F X ). Even the X-ray flux reported in the ONC near θ 1 C Ori is orders of magnitude less than the X-ray flux of an average T Tauri star at a radius of 100 A.U. We therefore do not consider this possibility further. EUV + FUV photoevaporation from a single star We now consider the simplest case where the external UV field in a cluster is dominated by a single star. This is approximately the case in the ONC, where θ 1 C Ori has an ionising output that substantially exceeds that of the other O stars in the cluster core. We will not for now concern ourselves with the fact that there are other O stars such as θ 1 A or θ 1 B that are the major contributors of FUV flux to their nearest neighbours. If we consider the luminosity from a single source, rather than an isotropic background, then we do not expect the resulting wind structures to be spherically symmetric and indeed the photoevaporating discs in the ONC show generally cometary morphologies, often being brighter on the side facing θ 1 C and with a tail pointing away from this star (Tsamis et al. 2013). 
Detailed modeling of proplyd morphology assumes that the hemisphere of the ionised flow is directly illuminated whereas the far-side receives a diffuse EUV field derived from recombinations in the nebula (see Henney & Arthur 1998). We will however follow Johnstone et al. (1998) in setting out a simplified spherically symmetric photoevaporation model, an approach that yields mass loss rates that agree to order unity with more complex modeling. EUV radiation impinges on low-density gas above the disc and sets up an ionisation front in which the integrated number of recombinations per unit area matches the star's ionising flux (one can readily justify a posteriori the neglect of additional consumption of EUV photons in ionising neutral material flowing through the ionisation front since this turns out to be a small addition for typical flow parameters; Churchwell et al. 1987). The ionisation front represents a contact discontinuity of the flow for which imposition of mass and momentum conservation on either side results in a flow that is transonic in the ionised region (i.e. the ionised gas flows out at around 10 km s⁻¹) but enters the ionisation front sub-sonically. Naturally there must be heating processes (not involving ionising photons) that produce the pressure gradients throughout the neutral region, but provided this flow is subsonic throughout the neutral region it is in causal contact with the ionisation front and it is thus conditions at the ionisation front that set the mass flow rate. This situation is known as EUV photoevaporation, even though FUV heating must contribute also in setting up the neutral flow that feeds the ionisation front. There is however a qualitatively different situation if the solution contains a sonic point in the neutral flow. In this case the flow has to undergo shock deceleration before it can enter the ionisation front and thus neutral flow below the shock is causally decoupled from the ionisation front. This means that the agent heating the neutral flow (the FUV) now sets the wind mass loss rate. For EUV driven flows, ionisation equilibrium at the ionisation front implies that ∫ n² dr scales with the ionising flux so that, for a spherical transonic flow, one obtains a mass loss rate that scales as Ṁ_EUV ∝ Φ_49^{1/2} r_d14^{3/2} / d_17 (Eq. 3; Johnstone et al. 1998; the numerical normalisation is omitted here), where Φ_49 is the ionising luminosity of θ¹C in units of 10⁴⁹ s⁻¹, d_17 is the distance of the disc from θ¹C in units of 10¹⁷ cm and r_d14 is the radius of the ionisation front in units of 10¹⁴ cm. Note that in the case of EUV driven winds (where by definition the neutral flow is thin and sub-sonic), r_d14 roughly equates with the disc radius. In the case of FUV flows, by contrast, the wind mass flux is set by the maximum density of the flow for which FUV heating is effective. For the strong FUV fields in the centre of Orion, this condition is set by dust absorption (rather than molecular self-shielding) and therefore imposes a maximum column in the FUV heated neutral flow. Since the scale height of FUV heated gas in the outer disc is of order r_d (i.e. the FUV heats to around the escape temperature of the outer disc), the number density in the flow is set by n r_d ∼ 10²¹ cm⁻². Putting this together (again assuming spherical symmetry and that the flow velocity is transonic in the neutral gas, i.e. with velocity ∼ 3 km s⁻¹, so that Ṁ_FUV ∼ 4π r_d² μ m_H n v) we obtain a rate that scales linearly with the disc radius, Ṁ_FUV ∝ r_d14 (Eq. 4; Johnstone et al. 1998; again the numerical normalisation is omitted here). Note that, unlike the expression for Ṁ_EUV, this rate does not depend explicitly on the strength of the FUV field.
However, this expression does not apply at arbitrarily low FUV levels because at some point self-shielding by molecular hydrogen becomes more important than dust absorption in setting the column of neutral gas (also see discussion in §4.5). Storzer & Hollenbach (1999) argued that the critical flux level was G 0 ∼ 5 × 10 4 which corresponds to distances from θ 1 C of around 0.3 pc. 1 We thus have a schematic picture which would imply three radial zones with distinct disc photoevaporation properties. Noting thatṀ EUV scales inversly with distance from θ 1 C whilė M FUV is constant over a range extending out to a limiting G 0 , we then have i) a small inner region with very fierce EUV photoevaporation, ii) an intermediate FUV zone with spatialy constant mass loss rate and iii) an outer EUV zone which resumes at the point that FUV photoevaporation is no longer effective. The latter is at ∼ 0.3pc while the interface between i) and ii) is set by equality of the EUV and FUV mass loss rates at that point, implying a radius of ∼ 0.15r 1/2 d14 . The observational evidence for external photoevaporation in Orion An immediate implication of the FUV model described above is that the ionisation front is spatially offset from the disc because it is separated from the disc by a spatially thick supersonic neutral flow. Such a structure corresponds well to the 'proplyds' first imaged in Orion by O'Dell et al. 1993 (see Fig.). (Note that whereas O'Dell applied the term 'proplyd' to all the circumstellar structures imaged in Orion, including the pure silhouette discs, we will here restrict the definition to those showing offset ionisation fronts, regardless of whether a central silhouette disc is also detected.) The spatial distribution of the bulk of the proplyds accords well with the FUV zone described above. (Note that the innermost EUV zone is plausibly filled in by projection effects). Although the bulk of the proplyds indeed lie within the 0.3pc radius predicted above, there are several tens of proplyds at larger radius, only some of which are explicable as being instead powered by the more modest FUV heating provided by θ 1 A and B (Vicente & Alves 2005). It is possible that FUV winds may be driven at lower values of G 0 than argued above but in this case it is unclear why the number of proplyds at large radii is small, though non-zero. Alternatively, Clarke & Owen (2015) suggested that these far flung prolyds result from external EUV ionisation of a neutral flow driven by internal X-ray photoevaporation. They showed that the numbers and sizes of far flung proplyds are consistent with statistical expectations based on the X-ray luminosity function (though the correlation between instantaneous X-ray luminosity and resolvable proplyd structure on a source by source basis is weak, an effect that might be attributed to variability). Similarly there are far flung proplyds detected in Carina (Smith et al. 2003), NGC 3603 (Brandner et al. 2000) and Cyg OB2 (Wright et al. 2012) although some of these are likely to instead be ionised clumps of molecular gas (Sahai et al. 2012 a,b). Having established that theory more or less correctly predicts the spatial distribution of proplyds, we now turn to the mass loss rates in these objects. These were first deduced from resolved radio free-free observations (Churchwell et al. 1987) combined with the assumption of transonic expansion in the ionised flow. 
As the resulting flow rates (comparable witḣ M FUV as given above) imply problematically short disc lifetimes (see below) O'Dell (1998) instead suggested that the structures might be pressure confined. These high mass loss rates were however subsequently confirmed through emission line modeling (Henney & ODell 1999) which showed that the kinematics were clearly incompatible with a static, pressure confined structure but involved free expansion. These high mass loss rates, both predicted and observed, imply that the total mass photoevaporated over the cluster lifetime is of order a solar mass. This is far more than the initial gas reservoir in discs and implies that we would expect to see a distinct deficit of discs at the present epoch in the centre of Orion. The most troubling aspect of the proplyd lifetime problem is that the disc fraction in the core of Orion is actually high; in fact, when projection effects are taken into account, the disc fraction is ∼100% within the central FUV region. Störzer & Hollenbach (1999) suggested that the unexpected survival of discs at small radii could be explained if stars were on very radial orbits and just 'lit up' as proplyds during their brief sorties through the core region. However, such an orbital configuration was found to be unsustainable in N-body simulations (Scally & Clarke 2001). Another effect that may mitigate the very short predicted survival times is that FUV photoevaporation is consider-ably less efficient in the case of "sub-critical" discs (i.e., those in which the escape velocity at the disc outer edge exceeds the sound speed in the heated gas). Adams et al. (2004) have shown that the mass loss rate in this case declines with decreasing radius much more steeply than implied by Eq.(4); although Clarke (2007) suggested that this might enhance the survivability of small disks in the ONC, the important of this effect is limited by the fact that even in the case of negligible FUV mass loss, the rate reverts to the EUV rate (Eq. (3)). This then only leaves two possibilities (or a combination of the two): unexpectedly large disc masses and/or a recent switch-on time for θ 1 1C. Large disc masses are however not indicated by any of the sub-mm studies in Orion (see Mundy et al. 1997, Eisner & Carpenter 2006, Mann et al. 2014) and although estimation of gas disc masses is traditionally beset by systematic uncertainties regarding the gas to dust ratio and grain opacity, it is clear that in a comparative sense the discs in Orion are not unusually massive. This instead encourages the last gasp interpretation, in which θ 1 C has only been 'switched on' (or perhaps, more correctly, been optically revealed) over a timescale that is a small fraction of the cluster lifetime. In this scenario, the Orion proplyds (and indeed all disc emission in the inner parts of Orion) will be gone within a timescale of > 10 5 years (also see Clarke 2007). Although it is slightly uncomfortable to argue that we are witnessing our nearest massive star formation region at a special epoch there is some circumstantial evidence that this is indeed the case. The disc mass distribution derived by Mann et al. (2014) does show some hints that preferential depletion of disc mass is indeed starting to occur at the very small radii characterising the inner EUV zone (where the mass loss rates are even higher than in the FUV zone: Eq. 3). Secondly, proplyds indeed seem to be rather rare in other star forming regions (Yusef-Zadeh et al. 2005, Balog et al. 2006, Koenig et al. 
2008), although this result has to be interpreted with caution given that resolving proplyds is more challenging in more distant environments than in Orion. Perhaps the most conclusive argument indicating that Orion is being observed at a special epoch is that it is unusual among massive clusters in not showing a deficit of disc-bearing stars in the proximity of massive stars. It is to the disc demographics in other clusters that we now turn. The distribution of disc bearing stars in massive star clusters A number of studies have now conducted disc censuses in star forming regions where one can reasonably expect that external photoevaporation will be important: for example Guarcello et al. (2007, 2009, 2010) in NGC 6611, Fang et al. (2012) in Pismis 24 and Guarcello et al. (2015) in Cygnus OB2. In all cases it is seen that the disc fraction is lower in the proximity of massive stars. Guarcello et al. (2015) have quantified this effect by generating an estimated map of the ambient FUV field across Cyg OB2 and have demonstrated that the disc fraction declines monotonically with increasing FUV flux, being around a factor of two lower in the highest FUV versus lowest FUV category. There is also some evidence from this analysis that the (anti-)correlation between disc frequency and FUV flux is more convincing than that between disc frequency and the expected frequency of star-disc collisions. This is to be expected given that in any environment where star-disc collisions are at all significant, the effect of external photoevaporation is likely to be much more important (Scally et al. 2001). The effect of mild FUV fields in sparse star forming environments Environments like the ONC are highly atypical in terms of the strength of the ambient FUV field. The population synthesis exercise of Fatuzzo & Adams (2008), which assembled clusters according to the observationally inferred spectrum of cluster richness, demonstrated that regions which, like the core of Orion, have G_0 values in excess of 10⁴ are environments in which at most a few percent of star formation occurs. On the other hand, other star forming environments span an enormous dynamic range in G_0 values (4 − 6 orders of magnitude). Adams et al. (2004) studied flow solutions in a range of low G_0 environments, arguing that in this case photoevaporation is predominantly in the (cylindrical) radial direction from the outer edge of the disc. Recently Facchini et al. (2015, submitted to MNRAS) have revisited the problem and obtained solutions over a wider range of parameter space, additionally iterating on the thermal solution to take account of the fact that only small grains are entrained in the flow. Preliminary solutions indicate rather significant mass fluxes for discs larger than 100 AU even at low G_0. A generic feature of these solutions is a cliff in the gas surface density at the disc outer edge and then a low-density plateau of (nearly dust-free) gas at larger radii. Such solutions offer the prospect of possibly constraining the properties of such flows through deep molecular line imaging. Regardless of the numbers that emerge from this exercise, it is also worth noticing that the interplay between viscosity and outer edge can have some unexpected outcomes. A viscous disc without a photoevaporative flow evolves towards a viscous similarity solution in which the disc outer edge grows in such a way that the viscous time is always of order the disc age.
Consequently the surface density in such a disc declines as a power law in time and -as the viscous time gets longer and longer as the disc expands -the time required for the disc to clear (e.g. become optically thin in the near infrared) is extremely long. It might be thought that the addition of a disc flow from the outer edge might at best reduce the disc lifetime to a value equal to t evap , the ratio of the present disc mass to the photoevaporation rate. In fact, however, the effect is much more dramatic: when external photoevaporation is coupled to the viscous flow then the disc at some stage stops growing and then shrinks from the outside in (an effect also seen in the internal FUV photoevaporation models of Gorti et al. (2009), Gorti et al. (2015). In this case, the viscous timescale decreases with time and hence the clearing is accelerated. This effect results in the disc clearing an order of magnitude faster than t w (Clarke 2007, Anderson et al. 2013 and it is obvious that in this case most of the disc mass is cleared by accretion rather than photoevaporation. Nevertheless photoevaporation is playing a decisive role in preventing disc spreading and thus keeping the viscous timescale short. Inventory of disk's solid material and its depletion For disks that form planetary systems, the planets themselves form an important sink for the initial mass in solids. If giant planets are present, then a fraction of the disk gas is also consumed. In our solar system, there was a minimum of about ∼ 1 − 3 × 10 −4 M ⊙ of solids and a little more than a Jupiter mass or ∼ 1 − 3 × 10 −3 M ⊙ of gas that was "depleted" into planet formation. We use the term "depletion" because this disk component is dark for most external systems. A disk-less object may very well have a nascent planetary system that is undetected. The protosolar disk therefore lost 10 times as much gas as the dust, and this is a fact that dispersal theories must necessarily explain. Exoplanetary disks, especially the birth-sites of the compact Kepler systems, may efficiently convert even more mass into rocky planets (e.g., the Minimum Mass Extrasolar Nebula models of Hansen & Murray 2012, Chiang & Laughlin 2013. The low frequency of gas giants ( 5 − 10%, Winn & Fabrycky 2014) and the abundance of Super-Earths (∼50%) indicates that in most disks gas is depleted relative to dust even more than in the solar nebula. Unless migration is not as efficient as theories predict, accretion cannot preferentially remove gas. Small dust is well coupled and must be carried along with the gas. Radial drift, in fact, may result in the rapid loss of mm-size grains where most of the solids mass is initially contained (e.g., Testi et al. 2014). If larger planetesimals form prior to dispersal, it is not clear that they survive migration. Hasegawa & Ida (2013) conclude that even gas giants may not survive migration in a massive disk. In a recent study, Coleman and Nelson (2014) modeled migration and planet formation via oligarchic growth (high relative planetesimal velocities) and dynamical evolution. They find that as long as the gas disk is present, the formation and retention of a giant planet ( 10M ⊕ ) is difficult because it would rapidly migrate. In their scenario, low mass disks form close-packed systems, and in high mass disks planets continuously form and migrate into the star and the last generation survives. 
All of these processes deplete solids relative to gas, and hence do not provide an explanation for the preferential removal of gas, at least in our solar system. On the other hand, gas in photoevaporative flows is not dense enough to lift any dust but the smallest particles; since the mass is typically concentrated in larger sized particles this mechanism leaves most of the dust behind. Disk dispersal in the classic core accretion scenario Dispersal of protoplanetary disks by photo-evaporation has a strong effect on planet formation since it limits the time needed to build planetary systems. We recall that planet formation in the core accretion scenario (Lissauer & Stevenson 2007) should explain a growth process through 12 orders of magnitude from the micron sized dust to Jupiter-like giant planets within the very limited lifetime of a protoplanetary disk. Planet formation in the core accretion scenario has different, well separated phases: (i) dust coagulation and formation of planetesimals, (ii) formation of planetary embryos and growth of the solid cores of giant planets, (iii) runaway gas accretion by the solid cores to form giant planets, and finally, (iv) the assembly of the planetary embryos to terrestrial planets. In the following we overview the timescales of the above phases of planet formation, and investigate how they are influenced by photoevaporation of the disk. Formation of planetesimals due to dust coagulation is presently very uncertain due to several barriers to planetesimal growth: the bouncing barrier (Zsom et al. 2010), the charge barrier (Okuzumi et al. 2009), and the meter-size barrier (due to drift and fragmentation) (Blum & Wurm 2000). New approaches, such as formation in pressure maxima (Lyra et al. 2009), particle concentration due to streaming instability and gravoturbulent planetesimal formation (Johansen et al. 2014), have attempted to clarify these issues. In spite of many uncertainties, the timescale of particle growth to km-sized planetesimals is believed to be quite short; thus planetesimal formation takes place in the gas-rich protoplanetary disk and is unaffected by the disk's photoevaporation. The next step is the further growth of planetesimals toward terrestrial planets leading to runaway (Wetherill & Stewart 1989) and oligarchic growth (Kokubo & Ida 1998). These initial stages of the early phase of terrestrial planet formation in the classic core-accretion paradigm are rapid, lasting between 0.01 and 1 Myr. As the result of these processes, ∼ 100− 1000 bodies between Moon and Mars size are formed in the relevant region of terrestrial planet formation still in a gaseous environment. Planetesimals are affected by aerodynamic drag (due to the slightly sub-Keplerian motion of gas), while the formed oligarch feel torques leading to type I migration. Drag and torques both result in angular momentum loss for the formed bodies, which spiral in. Large planetesimals ( 100 km in size) are less sensitive to drag force, but gravitationally perturb the ambient disk creating dense spiral wakes. Gravity due to these over-dense regions results in a net torque which modifies the planetesimal orbit. In earlier studies assuming isothermal disk models the net torque calculated was negative (Ward 1997) causing a significant loss of angular momentum and therefore inward migration of the planetesimal. More recent studies (Paardekooper et al. 2010) suggest the possibility of outward migration as well. 
We note that the speed of type I migration is linearly proportional to the mass of the migrating body, therefore the rapid inward or outward migration of relative massive objects will result in their loss from the region of terrestrial planet formation. Formation of giant planets clearly occurs in the presence of gas. Therefore, one of the most pressing issues in planet formation is building a giant planet within the few Myr disk lifetime. According to core accretion theory, first a solid core forms beyond the water snowline, where the abundance of solids is increased due to ice condensation on dust grains. The higher surface density of solids leads to efficient oligarchic growth allowing fast formation of a planetary core. The characteristic mass of a solid core able to capture a gaseous atmosphere at Jupiter's orbit is roughly M crit = 1M ⊕ . Both the core and envelope slowly grow, until the core reaches a critical mass of ∼ 10M ⊕ and runaway gas accretion ensues leading to the rapid formation of a giant planet (on a 10 5 year timescale). This rapid gas accretion slows down when the giant planet opens a gap in the gas disk. According to hydrodynamic simulations (Kley 1999) the planet is not entirely isolated from the ambient gas, and accretion continues along tidally generated spiral arms (Lubow & D'Angelo 2006). Therefore, gas accretion onto the planet's surface stops only when the surrounding gas is dispersed and formation of a giant planet ends with the photoevaporation of the gas disk. In summary, the presence of gas significantly influences the early phase of terrestrial planet formation; the effect of migration diminishes if much of the disk gas is dispersed before the runaway and oligarchic growth phases. Giant planets can form only if gas is present. However, the final assembly of terrestrial planets takes place in a few tens of million years, thus certainly in a gas free environment with no migration. We note that in more recent studies terrestrial planet formation may happen at planet traps (places where the torque felt by a planet becomes zero) on much shorter timescales, well before the disk's final dispersal. In such models the effect of gas disk dispersal should be taken into account. These models will be briefly described in the next subsection. Disk dispersal and type I migration of terrestrial planets and planetary cores Planets and massive cores, after they form, can migrate, and type I migration can be very fast resulting in a rapid loss of the formed cores or planets. Planet traps can halt migration: traps can be caused by a change in disk thermal properties due to sudden changes in dust opacity (Lyra et al. 2010), or by the formation of a large scale vortex at the outer edge of the dead zone, which stops or reduces the migration speed of massive planetary cores (Regaly et al. 2013). Lyra et al. (2010) showed that opacity and temperature jumps in the disk can help prevent migration of planets (with M p ∼ 0.1 − 10M ⊕ ) using an evolving radiative 1D disk model with photoevaporation. In this model, the external photoevaporative disk wind prescription of Veras & Armitage (2004) is used, the wind is effective only outside a critical radius of 5 AU. The locations of the equilibrium radii with zero torque migrate due to evolution of the disk and the planet migrates toward these zero torque regions. As Σ decreases, the radii of the zero torque locations (the traps) move faster toward the star than the migrating planets. 
As a consequence the planets are decoupled from the equilibrium radii. Σ at these times is too low to cause further migration. This effect is shown in Figure 6: in these simulations the evolution of the disk due to the combined effects of gas accretion and photoevaporation keeps terrestrial planets from migrating into the star. Effect of the disk dispersal on the type II migration of giant planets A massive Jupiter-like giant planet interacts with the disk to open an annular gap depleted of material, and then migrates on nearly viscous timescales (type II migration, see chapter by Lin et al., this volume). This migration timescale can be written as (see e.g. Baruteau & Masset 2013): where r 0 = r p + 2.5r Hill , and r Hill is the Hill radius. The first term in the above formula is the effect of gas accretion and the second term is the ratio between the planet's mass and the local disk mass and can be interpreted as the "resistance" of the inner disk to the inward migrating giant planet. In equilibrium, the gap moves at the accretion velocity during type II migration. Recent hydrodynamical simulations by Durmann & Kley (2015) show that the migration of the giant planet is determined by the torques exerted by the disk. In general, they find that the migration of the giant planet does not follow the disk's viscous evolution and gas can flow through the gap. An important result of this research is that if M D /M P < 0.2 (where M D = Σ(r p )r 2 p is interpreted as the local disk mass) the migration speed becomes significantly lower than the viscous speed (the type II migration rate). (We note that this behavior is also reflected in Equation 5 which accounts for the disk mass). While there is no simple analytical formulation for torque-based migration, these slower rates are more consistent with planet synthesis models that can reproduce observations. Since a low disk mass is necessary to slow migration, the disk mass must be reduced at this stage by the combined effect of gas accretion and photoevaporation. The final location of a giant planet is determined by the migration and disk dispersal times. Ignoring the effects of gravitational scattering in multi-planet systems, one can estimate the distribution of the semi-major axes of the randomly formed giant planets in a given disk model. Alexander & Armitage (2009) investigated the effects of gas accretion, EUV photo-evaporation, and the migration of the giant planet with a gap opening criterion (as in Lin & Papaloizou 1986). With plausible disk conditions and a range of planet masses (0.5M Jup < M p < 5M Jup ), the orbital distribution of planets was found to be comparable to data from the Lick radial velocity survey (Fischer & Valenti 2005); the two distributions are qualitatively similar (see Alexander & Armitage 2009). Considering the effects of gap opening due to EUV photoevaporation, Alexander & Pascucci (2012) found more recently that a deficit of planets at semi-major axis values close to the gap radius at r p = 1 − 3 AU would be seen. This deficit is accompanied by a corresponding increase in the number of planets just outside these radii (see figure 2 of Alexander & Pascucci 2012). We note, however, that model results could be affected by many uncertainties due to migration rates which are linked to unknown aspects of planet formation (see discussion in Ercolano & Rosotti 2015). Rapid photoevaporation of a disk with a giant planet The formation of planets may in turn influence or accelerate disk dispersal. 
Conventional photoevaporation theory relies on the formation of a gap and then a hole, whose rim is irradiated directly to enhance mass loss subsequently. If a planet forms a gap and creates a similar rim, photoevaporation may be accelerated and trigger disk clearing (Alexander & Armitage 2009). In a recent investigation (Rosotti et al. 2013) used the hydrodynamic code FARGO (Masset et al. 2000) coupled with a 1D code used for the initial ∼ 2Myr evolution of the disk, including X-ray photoevaporation: these models yield a rapid dispersal of the inner disk (interior to the planet) compared to the case without X-ray photoevaporation where the inner disk persisted for the duration of the simulation. Observational Constraints We next examine observational insights into the process of disk dissipation. Ideally, we would like to be able measure gas and dust masses, understand how their accretion rates decline, measure photoevaporative mass loss rates, detect disks on the verge of dispersal, and assess their planet formation activity. Rapid strides are being made in this area especially with newer, sensitive facilities capable of high spatial and spectral resolution. The broad scenario drawn from observations is consistent with the dispersal theory outlined so far, although details are not well understood. We first summarize what we currently know from observations and then discuss each of these: -Dust and gas disk lifetimes are 10Myr, very few disks survive beyond this timescale. -Disks evolve through a transition phase where the infrared excesses decrease; the inner disk probably clears first in most cases to form dust cavities. -Photoevaporative winds have been detected in [NeII]12.8µm emission, and possibly also in [OI] forbidden line and CO emission; mass loss rates are yet to be determined. -Exoplanet studies indicate that gas is present at late epochs of planet formation in most disks. Disk lifetimes As discussed in §1, infrared excesses in disks appear to decline with an e−folding time of ∼ 3Myr (e.g., Mamajek 2009). Inner gas also disappears on similar timescales (Fedele et al. 2010), as is to be expected-gas if present would drag along small dust which causes the NIR excess. Moreover, Ribas et al. (2014) find that the fraction of sources with excesses increases at longer wavelengths, suggesting that the disk evolves inside-out. Inferred lifetimes are ∼ 4 − 6 Myr at 24µm compared to ∼ 2 − 3 Myr at 3 − 12µm. The disk/star flux ratio at 24µm shows a sharp change at ∼10 Myr, suggesting that the nature of dust perhaps changes at these ages (Ribas et al. 2014). There is a peak in the fraction of evolved disks (lower disk/star 24um flux) at ∼ 10 Myr, marking the transition from primordial to debris disk stage and the dispersal of gas (cf. Ercolano et al. 2011, Koepferl et al. 2015 for a theoretical perspective). Similarly, Wyatt (2008) argues that the disk mass derived from the sub-mm remains more or less constant (albeit with a wide dispersion, see Carpenter et al. 2014) and shows a sharp decline at 10 Myr, perhaps again indicating this transition to the debris disk phase (see also Wyatt et al. 2014). Since debris disks are almost always gas-free, the 10 Myr time also serves as an upper limit on the gas disk dispersal time, which is also consistent with gas observations (Zuckerman et al. 1995, Dent et al. 2013. Photoevaporation can explain the above general behavior of dissipating disks quite well, and timescales are roughly in accordance with observations. 
The observed inside-out dispersal of disks is, in fact, a main predicted characteristic of XEUV photoevaporation theory. Although FUV photoevaporation causes mass loss in the outer disk, gap opening in the inner disk (at larger radii, ∼ 10AU) typically precedes eventual disk dispersal here as well (Gorti et al. 2015). The 2 − 10 Myr timescales inferred above indicate photoevaporative mass loss rates that are at least of the order ∼ 10 −9 to 10 −8 M ⊙ yr −1 , in reasonable agreement with theoretical rates as discussed in §3 (Eq. 2). The sudden and simultaneous removal of dust along with gas is harder to explain. Alexander & Armitage (2007) propose that as photoevaporation sweeps across the disk to remove gas, it may cause planetesimal formation at the expanding inner rim and deplete dust. This conversion needs to be rapid and highly efficient; if not, substantial amounts of dust may remain after gas disk removal. Debris disk processes such as PR drag could remove the remnant primordial dust (see chapter by Wyatt et al. ), but these mechanisms act on Myr timescales and are not consistent with the rapid transition to the debris disk stage (e.g., Luhman et al. 2010, Wyatt et al. 2014). The most likely scenario is that planet formation has already removed most of the dust before gas disk dispersal, although this suggests a causal link between these two processes (see discussion in Gorti et al. 2015). Transition Disks Transition disks are believed to represent one of possibly multiple pathways from primordial to debris disks (e.g. Williams & Cieza 2011). Hence, this special class of objects is particularly relevant for disk dispersal theories. While we conceptually understand how disks evolve and disperse, the origin and nature of transition disks is still under considerable debate. (There are also several definitions of what constitutes a transition disk; we adopt the definition of Espaillat et al. (2014), i.e., disks with a clear deficit in short wavelength emission.) Transition disks have larger dust grains (Pinilla et al. 2014), are typically millimeter-bright (suggesting high disk mass) and accrete at rates a factor of ∼ 3 − 10 lower than full primordial disks (e.g. Espaillat et al. 2014, Najita et al. 2015). They also show differences in their gas emission, with higher line ratios of HCN/H 2 O in the infrared (Najita et al. 2013) and lower [O I] 63µm line luminosities (Howard et al. 2012), trends that have both been explained as a result of their evolved dust content. The two main mechanisms proposed to explain transition disks are photoevaporation and planet-disk interactions. XEUV photoevaporation predicts that about ∼ 10% of disks should be caught in the act of viscously draining their inner gas after gap formation (e.g., Owen et al. 2010), in agreement with the observed fraction of transition disks. However, photoevaporating disks are also predicted to be low in mass (note that this is the gas mass, whereas it is the dust mass that is measured). FUV-dominated photoevaporation can open gaps at higher disk masses, but does not predict a large fraction of observable disks in the viscous draining phase (∼ 10 4 − 10 5 yr) because of longer dispersal times (∼ 3 − 5Myr). Gap opening by planets in the cavities of transition disks is a more popular explanation, but not without difficulties. On the one hand, the planet needs to be massive to open a gap, ≳ 1M J , and massive disks may be required to form giant planets, naturally explaining the higher mass of transition disks (Najita et al. 
2015). On the other hand, Jupiters are believed to be rare (Winn & Fabrycky 2014), and moreover, the formation time of giant planets is on the order of the disk dispersal time (e.g., D'Angelo et al. 2010), making the likelihood of observing an embedded Jovian-mass planet too low to explain the ∼ 10% frequency of transition disks. Multiple planetary systems are more common, but loss to migration and maintaining the observed stellar accretion rate past several planets pose problems (Zhu et al. 2011, Coleman & Nelson 2015). It has been proposed that there are two classes of transition disks: low mass disks compatible with photoevaporation theory and higher mass ones perhaps better explained by planet-disk interactions. Rosotti et al. (2013) further consider the interaction between planet formation and photoevaporation and suggest that planets, when they form, could accelerate disk dispersal. While transition disks are believed to be an evolutionary stage that most disks go through, we note that a significant number of objects have, in fact, turned out to be unresolved binaries (e.g., CoKu Tau/4, CS Cha, HD142527 among others). Future high resolution observations using facilities such as ALMA may shed light on the true nature of some of these disks (e.g., van der Marel et al. 2015). Gas diagnostics of photoevaporation Emission from winds is the best method to directly measure the mass loss rates and assess the efficiency with which photoevaporation can deplete gas. The most promising such detection is the blue-shifted emission in the [Ne II] 12.8µm line from slow, thermal winds seen from a few objects (Herczeg et al. 2007). Neon can be ionized by EUV (∼ 21 eV photons) and X-rays (∼ 1keV), and gas temperatures need to be ≳ 500 K to excite the observed line. Line luminosities and profiles are well reproduced by photoevaporation models (Alexander 2008), but mass loss rates using [NeII] are hard to determine without knowledge of the ionization level of the gas. For EUV-heated, fully ionized winds, data is consistent with mass loss rates of ∼ 10 −10 M ⊙ yr −1 , while partly neutral X-ray heated gas implies higher rates ∼ 10 −9 to 10 −8 M ⊙ yr −1 . Other low velocity wind tracers such as [OI] forbidden line emission and CO rovibrational emission are more difficult to interpret with contributions from multiple components (Rigliaco et al. 2013). For a more in-depth discussion, see Alexander et al. (2014). Additional constraints on the ionization in the disk come from free-free emission, indicated by cm-excesses in a few disks (e.g., Pascucci et al. 2012, Owen et al. 2013). Recent studies by Pascucci et al. (2014) and Galvan-Madrid et al. (2014) indicate that the EUV luminosities in disks are low based on observed free-free emission fluxes. These studies also find that the observed [NeII] emission is too high to arise from EUV-ionized gas at the inferred EUV luminosities. Therefore they conclude that the [NeII] emission must trace a wind ionized by hard ∼ 1 keV X-rays, indirectly implying higher mass loss rates, due to either an X-ray driven photoevaporative wind or a FUV-driven wind that is partly ionized by X-rays. Tracers of FUV photoevaporation are more difficult. Since the flows are launched subsonically and are considerably cooler, emission (e.g., CO rotational lines) is dominated by the base of the flow which is at higher densities. The molecules further get dissociated higher up in the wind. 
The blue-shifts and asymmetries in the line profiles expected here are small for detection with current facilities and hard to disentangle from other non-Keplerian sources like turbulence (Gorti et al., in preparation). With the higher resolution and sensitivity of full ALMA and probes of the higher surface layers such as the weak [C I] 609µm line becoming accessible (Tsukagoshi et al. 2015), these flows may be detected in the near future. Gas at late stages and planet formation Exoplanet properties indicate that gas is present at late stages of planet formation in most disks. The most direct evidence stems from the detection of Super-Earths or mini-Neptunesplanets with masses 2 − 3M E and gaseous envelopes (Winn & Fabrycky 2014). Close-in systems detected by Kepler transit surveys require gas both for in-situ and migration theories of their formation (e.g., Laughlin & Lissauer 2015). The eccentricities of these systems are low, indicating that there were at least small quantities of gas present after the giant impact stage of forming terrestrial planets. On the other hand, the paucity of gas giants and the low gas masses of the Super-Earths indicate that there could not have been too much gas present. In that case, the planetary cores would have accreted more of the disk gas to form gas giants. Exoplanet masses and compositions therefore indicate that while gas was present at the epochs of planet formation, it dispersed shortly thereafter. This is particularly true for gas giant planet formation, with the final mass of the giant planet closely linked to the gas dispersal time (e.g., Lissauer et al. 2009, Movshovitz et al. 2010, Rogers et al. 2011. Contrary to the all the observational evidence presented so far, gas appears to persist in at least some debris disks (e.g., HD 21997, Kospal et al. 2013), well past planet formation epochs. Although it is still unknown if the gas is primordial in origin, it is worth noting that all of the debris disks with gas detected are A stars, pointing to either longer disk dispersal times for intermediate-mass stars (however, see Ribas et al. 2014), or a detection bias. Future Directions To summarize, the evolution of protoplanetary disks is initially dominated by viscous accretion, but at later critical planet-forming epochs internal and external photoevaporation by high energy photons (UV and X-rays) dictate the radial distribution of disk gas with time. Photoevaporation sets gas disk lifetimes and through its influence on gas disk evolution can impact all stages of planet formation, from planetesimal growth to the formation of giant planets. Although we qualitatively understand how disks evolve and disperse, photoevaporation rates are still not measured. Determining the decrease in gas mass with time and quantifying disk mass loss rates are essential toward developing a comprehensive theory of disk evolution that includes accretion, planet formation and disk dispersal. We end with a list of possible future directions that may help resolve many outstanding issues: -Some of the biggest uncertainties pertain to the stellar high energy spectrum. The flux of the accretion-generated X-ray and UV components, the relative strengths of the soft and hard X-ray fluxes and the strength of the Lyman α contribution to the FUV flux, and the evolution of all of these with time (along with relation to other variables such as accretion rates, disk and stellar masses) are some of the less well characterized inputs that need further investigation. 
-Tracers of subsonic flow are almost non-existent, they may be needed to actually determine mass loss rates. Ideally, a measure of the gas mass for disks of different ages is desirable to quantify the rate at which disks dissipate. Gas emission line observations probe the density and temperature structure which are important for setting flow conditions; emission line modeling further indirectly measures disk irradiation. Future observations from high sensitivity facilities like ALMA will inform disk heating and cooling physics and help calibrate disk models. -More sophisticated disk models, which treat gas and dust separately and include hydrodynamics, radiative transfer and chemistry are needed. With rapid advances in supercomputing facilities and techniques, such models may soon become possible. Disk evolution models need to self-consistently account for planet formation and dynamical evolution along with disk dispersal. As comprehensive studies become more common (e.g., Coleman & Nelson 2015), future work may allow for advanced population synthesis models that can simultaneously explain the diversity in disk and exoplanet properties. Ultimately, we would like to be able to connect disk evolution to planet formation and understand the close, and perhaps causal, correspondence between timescales for planet formation and disk dispersal. Observed line profiles with Herschel-HIFI of the optically thick ground-state line of ortho-H 2 O of a dense protostellar core, displaying, within the 38-beam (0.02 pc), signs of both infall and outflow at the same time. All data are continuum subtracted around the zero-baseline, and the two polarizations are individually shown (blue and green) to demonstrate the high quality of the HIFI data. The red curve is a qualitative example from a radiative transfer model with center optical depth over a hundred, generating the central absorption. The blue-red asymmetry of the line core is due to the infalling gas in the unstable Bonnor-Ebert sphere, and the excess emission in the line wings is due to the outflow. Fig. 3 Mass loss rate, as determined from CO rotational line observations, viz.Ṁ(CO), versus the bolometric luminosity L bol , of the outflow driving sources (data from Wu et al. 2004). The dashed line has unit slope and is for reference only. Fig. 4 The increase in photoevaporation rate (solid lines) and the decrease in accretion rate (dashed lines) near the gap opening epoch is shown for a EUV+FUV+X-ray photoevaporation model. Here, the gap opens at r ∼ 10AU, at about 1.8 Myr. After gap opening, rim irradiation increases the photoevaporation rate, lowering Σ and lowering the accretion rate further. Also note that photoevaporation rates are higher than the accretion rate in the outer disk( 100AU) where significant mass loss occurs. Fig. 5 The gas and dust surface density evolution of a viscously evolving disk, with FUV, EUV and X-ray photoevaporation, M d (0) = 0.1M ⊙ , and α = 0.01. FUV photoevaporation leads to the creation of a gap at ∼ 3 − 10AU and the gas disk disperses in ∼ 2 Myr. After the dispersal of the gas disk a substantial amount of dust is retained in the disk (∼ 3 × 10 −4 M ⊙ ). In these models, the largest solids are 1 cm in size, and no planetesimal formation is taken into account (see Gorti et al. 2015).
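The kind of coupled viscous plus external photoevaporation behaviour summarized above and illustrated schematically in Figs. 4-5 can also be sketched with a deliberately reduced toy calculation. The snippet below is not one of the models cited in the text: the viscosity law, the order-unity prefactors and the constant external wind rate are assumptions chosen only to show how outer-edge mass loss halts viscous spreading and shortens the clearing time relative to both the wind-free case and the naive estimate t_evap = M / Mdot_wind.

```python
import numpy as np

# Toy model: disc of mass M and outer radius R, with nu ~ nu0 * (R / R0)
# so that the viscous time t_nu = R^2 / (3 nu) grows linearly with R.
# Order-unity factors are dropped; the external wind rate is constant.
YR, MSUN, AU = 3.15e7, 2.0e33, 1.5e13      # cgs conversions

R0 = 30 * AU                               # assumed initial outer radius
M0 = 0.03 * MSUN                           # assumed initial disc mass
NU0 = R0**2 / (3 * 0.3e6 * YR)             # sets t_nu(R0) ~ 0.3 Myr
MDOT_WIND = 1e-8 * MSUN / YR               # assumed external wind rate


def t_nu(R):
    """Viscous time at the outer edge for nu proportional to R."""
    return R**2 / (3 * NU0 * (R / R0))


def evolve(wind, dt=1e3 * YR, t_max=10e6 * YR):
    """Explicit Euler integration of the reduced (M, R) system."""
    M, R, t, history = M0, R0, 0.0, []
    while t < t_max and M > 1e-3 * M0 and R > AU:
        mdot_acc = M / (2 * t_nu(R))          # accretion onto the star
        mdot_w = MDOT_WIND if wind else 0.0   # outer-edge mass loss
        # Outer edge: viscous spreading versus erosion by the wind,
        # using Sigma(R) ~ M / (pi R^2) for the edge surface density,
        # so the erosion speed is mdot_w / (2 pi R Sigma) = mdot_w R / (2 M).
        dRdt = R / (2 * t_nu(R)) - mdot_w * R / (2 * M)
        M -= (mdot_acc + mdot_w) * dt
        R += dRdt * dt
        t += dt
        history.append((t / YR, M / MSUN, R / AU))
    return np.array(history)


no_wind = evolve(wind=False)
with_wind = evolve(wind=True)
print("naive t_evap = M0 / Mdot_wind : %.1f Myr" % (M0 / MDOT_WIND / YR / 1e6))
print("clearing time with wind       : %.1f Myr" % (with_wind[-1, 0] / 1e6))
frac_left = np.interp(with_wind[-1, 0], no_wind[:, 0], no_wind[:, 1]) * MSUN / M0
print("mass fraction left without wind at that epoch: %.2f" % frac_left)
```

Running the sketch shows the qualitative point made in the text: once the wind term dominates, R stops growing and moves inward, the viscous time drops, and the disc clears while most of the mass is still removed by accretion rather than by the wind itself.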
The impact of Raynaud’s phenomenon on work ability – a longitudinal study Objective To determine if having Raynaud’s phenomenon (RP) affects the work ability, job retainment, or occurrence of sick leave. Methods Surveys on the working-age general population of northern Sweden were conducted in 2015 and 2021, gathering data on RP, occupation and sick leave. Work ability was assessed using the Work Ability Score. Results The study population consisted of 2,703 women and 2,314 men, among which 390 women and 290 men reported RP at follow-up. For women, the mean [standard deviation (SD)] Work Ability Score was 8.02 (2.24) for subjects reporting RP and 7.68 (2.46) for those without RP. For men, the corresponding numbers were 7.37 (2.03) and 7.61 (2.14), respectively. Multiple linear regression did not show an association between RP status and work ability (p = 0.459 for women and p = 0.254 for men), after adjusting for age, body mass index, physical workload, cardiovascular disease, and perceived stress. Having retained the same main livelihood since baseline was reported by 227 (58.5%) women with RP, 1,163 (51.2%) women without RP, 152 (52.6%) men with RP, and 1,075 (54.1%) men without RP (p = 0.002 for women and p = 0.127 for men). At follow-up, any occurrence of sick leave during the last year was reported by 80 (21.4%) women with RP, 410 (18.6%) women without RP, 48 (17.1%) men with RP, and 268 (13.7%) men without RP (p = 0.208 for women and p = 0.133 for men). Among those reporting sick leave, the mean (SD) duration in months was 2.93 (3.76) for women with RP, 3.00 (4.64) for women without RP, 2.77 (3.79) for men with RP, and 2.91 (12.45) for men without RP (p = 0.849 for women and p = 0.367 for men). Conclusion For neither women nor men was there a significant effect of having RP on work ability. Women with RP reported a slightly higher job retainment compared to those without the condition, while there was no difference in job retainment among men. For neither gender did the presence of RP influence the occurrence of recent sick leave, nor did it affect the length of time away from work. Supplementary Information The online version contains supplementary material available at 10.1186/s12995-022-00354-2. Introduction Raynaud's phenomenon (RP) is the clinical manifestation of vasospasm affecting digital blood vessels [1]. It can be defined as episodes of peripheral blanching of the fingers, triggered by exposure to cold, vibration, or psychological stress [2]. RP is a common condition, with prevalence figures of around 12-14% in the Scandinavian general population [3,4]. As often caused by occupational exposure to hand-transmitted vibration, Swedish insurance statistics from 2018 report that RP is the most commonly compensated occupational injury, representing over one third of all approved claims [5]. In current practice, subjects with RP are classified either as primary RP, when no underlying condition is found, or secondary RP, when there is associated disease. Suffering from RP can have a major effect on the quality of life [6,7]. However, little is Page 2 of 9 Stjernbrandt and Wahlström Journal of Occupational Medicine and Toxicology (2022) 17:12 known about impacts on work ability, job retainment, or sick leave. The concept of work ability is complex and entails the balance between physical and cognitive demands in relation to the resources of the individual, modified by the organizational context [8]. 
In addition, both work demands and individual resources are dynamic factors that change over time [9]. It could be postulated that suffering from RP should mainly be a hindrance for subjects performing manual outdoor work, since exposure to cold climate triggers vasospastic attacks [10], and RP decreases manual dexterity and physical performance [4,11]. However, as mentioned above, psychological stress is also an established trigger for RP attacks and should be considered in this context. One method for measuring work ability is the Work Ability Score (WAS) from the Work Ability Index (WAI) questionnaire, which is a commonly used and well-validated tool [8]. The WAS consists of a whole number numerical rating scale ranging from 0-10, where the current work ability is subjectively compared to the lifetime best. The WAS has been shown to have an equally good predictive value as the whole WAI instrument regarding health, pain, and sick leave [12,13]. Previous studies on Swedish workers have shown that age and psychological mood have significant impacts on the WAS [9,14]. External factors that can affect work ability include ambient temperature, humidity, dust and noise levels, hand-arm vibration, and ergonomic exposures [15,16]. However, the effects of such physical work factors are modified by the health status of the individual worker. For instance, it is likely that exposure to ambient cold and vibration poses a greater challenge to a subject with RP than a completely healthy individual. In this context, a few studies have investigated effects on work ability in specific groups of patients with secondary forms of RP, such as vibration-induced white fingers [14,16,17] and systemic sclerosis [18]. However, to the authors' knowledge, there are no previous population-based studies on the matter. The primary aim of this study was to determine if having Raynaud's phenomenon affects the work ability, job retainment, or occurrence of sick leave. Secondary aims were to investigate longitudinal effects of incident or remittent Raynaud's phenomenon on work ability, and evaluate potential gender differences. Study design and setting This prospective closed-cohort study was part of the Cold and Health In Northern Sweden (CHINS) research project, which was initiated in 2015 to broadly explore adverse health effects from ambient cold exposure, and has previously been described in detail [3]. The study sample included men and women of working age at enrollment (18-70 years), living in northern Sweden, who were recruited from the national Swedish population register. Baseline data came from the first postal survey that was administered between February and May of 2015. Follow-up data was retrieved through a digital questionnaire that collected data between March and April of 2021. All subjects who had responded to the baseline questionnaire were invited by a postal query to respond to the follow-up questionnaire, with one postal reminder. Subjects who were unable to answer digitally were given the option to respond to the questionnaire on paper. Variables and statistical analyses Responses from both surveys were merged based on social security numbers. Continuous variables data were described as mean values with standard deviation (SD), while categorical variables were presented as numbers and valid percentages. 
Subjects were defined as having RP through a positive response to a single questionnaire item that was present in both surveys: "Does one or more of your fingers turn white (as shown on picture) when exposed to moisture or cold?", and this was supported by a previously developed color chart [19]. In the followup survey, study participants were also asked additional questions about year of first occurrence, attack frequency and distribution, as well as progression of RP. Length and weight were collected at baseline, in order to calculate body mass index (BMI). Work ability was assessed using the WAS, which was included in the follow-up survey. Current occupation was specified in free-form text in both surveys, and manually coded in accordance with the two-level International Standard Classification of Occupations (ISCO) [20]. Physical workload was determined by a previously published job-exposure matrix (JEM) that categorized the exposure into low (e.g. desk jobs), medium (e.g. ambulatory work) or high (e.g. heavy lifting or climbing), based on the ISCO coding at baseline [21]. Perceived stress was asked about in both surveys, and responses dichotomized so that "none/very little/some" was considered a negative response, and "quite a lot/ very much" a positive response. Occupational exposure to outdoor or cold environments at baseline and followup were reported on a ten-level whole number numerical rating scale, ranging from "do not agree" to "fully agree", and responses dichotomized based on the 50 th percentile. Cardiovascular diseases were asked about in the baseline survey, and included the presence of physician-diagnosed hypertension, angina pectoris, myocardial infarction, or stroke. The Mann-Whitney U test and Pearson's chi square test was used to determine statistical differences for continuous and categorical variables, respectively. Simple and multiple linear regression was used to model the relations between the WAS and independent variables (i.e. RP status, age, BMI, physical workload, cardiovascular disease, and perceived stress). Statistical tests were chosen based on the distribution of data, and non-parametric tests opted for when the assumption of normal distribution was violated. A p value < 0.05 was considered statistically significant. Statistical analyses were performed using SPSS (version 27.0, IBM Corporation, Armonk, NY, USA). Recruitment The baseline cohort (2015) consisted of 12,627 subjects, of which 888 were deceased or had moved from the study region at the time of follow-up (2021). For an additional 31 subjects, the written invitation to participate in the follow-up survey could not be delivered by the postal services. There were 5,208 responses to the follow-up survey, yielding a response rate of 44.4%. Due to multiple responses (N = 80) and invalid social security numbers (N = 111), all survey responses could not be matched to the original dataset, leaving 5,017 subjects available for analysis ( Fig. 1). Five subjects (0.1%) requested a postal follow-up survey. Details on sampling and response rates are presented in Additional file 1. Characteristics of the study population The final study population consisted of 2,703 women (53.9%) and 2,314 men. Other baseline characteristics are presented in Table 1. There were 390 women (14.5%) and 290 men (12.7%) reporting RP affecting the hands at follow-up. Among these subjects, the mean (SD) age of onset of RP was 31 (15) years for women and 36 (17) years for men. 
Having had vasospastic episodes during the last two years was reported by 328 women (88.6%) and 230 men (85.1%). Regarding distribution of blanching, 330 women (85.1%) and 240 men (83.7%) reported affection of middle and/or distal phalanges for the right hand, while the corresponding figures for the left hand was 338 (85.1%) and 237 (83.7%), respectively. An increased attack frequency since onset was reported by 93 women (24.1%) and 103 men (35.8%), while an increased distribution of blanching was reported by 50 (13.0%) and 47 (12.3%), respectively. Work ability Among women, the mean (SD) WAS was 8.02 (2.24) for subjects reporting RP at follow-up and 7.68 (2.46) for those without RP. For men, the corresponding numbers were 7.37 (2.03) and 7.61 (2.14), respectively. In unadjusted analyses, there was a significant effect of RP status on the WAS among women, but not men (Table 2). However, in adjusted analyses, also including age, BMI, physical workload, cardiovascular disease, and perceived stress, there was no significant effect of RP status on the WAS for either gender (Table 3). In longitudinal analyses, there were 96 women and 112 men who negated RP at baseline but reported RP during follow-up. For women, the mean (SD) WAS for women and p = 0.407 for men). Among currently working subjects, a high number of weekly work hours was reported by women with RP ( Fig. 2) Main findings Our study did not reveal a significant effect of having Raynaud's phenomenon on work ability, when analyzed using multiple linear regression. Women with Raynaud's phenomenon reported a slightly higher job retainment compared to those without the condition, and generally long working hours. There were no statistically significant differences in sick leave occurrence or duration. Interpretation and comparison with other studies The prevalence of RP in the present surveys was comparable with the roughly 12% that was reported in a Finnish population-based study [4]. The condition was more common among women, which is also in line with previous research [10]. Further, the mean age of onset of 31 years for women and 36 years for men was quite similar to the results of a meta-analysis on longitudinal studies on RP (10 studies; 639 subjects), where the mean age of onset was 34 years [22]. The present study did not show any significant effect of RP status on work ability in the multiple linear regression models, when also using age, BMI, physical workload, cardiovascular disease, and perceived stress as covariates. However, in unadjusted analyses, there was a significant positive effect of RP status on work ability among women. Also, the results of longitudinal analyses suggested that men with incident RP had a slightly lower work ability than healthy subjects, while women with remitted RP reported a lower work ability than those with persistent disease. It is plausible that RP is indeed a hindrance for work, especially in manual outdoor occupations where exposure to ambient and contact cold, as well Table 4 Reason for change of main livelihood between 2015-2021 as hand-arm vibration, can trigger vasospastic attacks. This might reduce the work capacity in tasks requiring grip force and manual dexterity, and motivate the worker to seek a heated environment in order to regain full use of the hands. 
Such manual outdoor occupations are common among working men in northern Sweden, as evidenced both by the descriptive analyses on occupation in this study (Table 1) and by official statistics from the Swedish Work Environment Authority [23]. In contrast, women with RP reported long working hours and a higher job retainment than their healthy counterparts. These findings are harder to explain, but may be due to the fact that RP does not pose a hindrance for indoor work with low physical demands, which was common among women in the study population. It is also possible that work participation was facilitated more efficiently for women with RP, since they reported higher access to occupational health care. As shown in Table 4, the large majority of those who changed the main livelihood had retired, and only a few percent had changed into another field of work. This is at least in part explained by the age composition of the study sample, in which a large proportion had reached the general retirement age of 65 years by the time of follow-up. Among subjects with RP who had changed field of work, a distinct transition from outdoor to indoor tasks during the follow-up time could not be discerned. However, this subgroup contained few responding subjects and revealed a large variation regarding new occupations, which limits what conclusions can be drawn. Importantly, most subjects reported a good work ability, regardless of having RP or not. Neither were there any significant effects on sick leave parameters. In this context, it is important to recall that most subjects reported a mild state of RP, regarding attack frequency, distribution of paleness, and disease progression over time. Thus, it is reasonable to assume that the condition only had a minor impact on work ability. However, concern has been raised as to whether the WAS sufficiently captures limitations in work ability for conditions that only affect the hands [14], since it only gives a rough measure of the global work ability. A more specific item for measuring hand disability, such as the hand disability index of the Stanford Health Assessment Questionnaire [24], might have revealed larger differences between groups. Also, since the WAS relates the current work ability to the lifetime best, there is a risk that perceived effects on work ability are attenuated among subjects with long-standing conditions, such as RP. Regarding the effects of other factors on work ability, the present study showed a significant effect of age on the WAS. A previous Swedish study on work ability among vibration-exposed workers, where the prevalence of RP was 30% among men and 50% among women, reported an effect of age and distribution of neurosensory symptoms, but not vascular symptoms [16]. The β coefficient for age ranged from − 0.07 to − 0.09, closely resembling the results in the present study. Our study demonstrated a negative impact of high BMI on the WAS, although in the adjusted analyses this was only statistically significant among women. A high BMI has also previously been associated with poorer work ability, most likely due to reduced physical capacity, although overweight could also be a proxy marker for other disease [12]. In the present study, high perceived stress negatively affected work ability, with a stronger association among women. This is in line with previous research that has shown associations between stress levels and work ability, as well as a greater susceptibility to stress among women [12,13]. 
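The adjusted models referred to throughout this discussion (WAS regressed on RP status, age, BMI, physical workload, cardiovascular disease, and perceived stress, fitted separately for women and men) can be written out as a short analysis sketch. This is illustrative only: the file name, column names and codings below are assumptions, not the study's actual data structure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per participant, with the follow-up Work
# Ability Score (0-10) and baseline covariates. Column names are assumed.
df = pd.read_csv("chins_followup.csv")

# Dichotomise perceived stress as described in the surveys:
# "quite a lot" / "very much" -> 1, otherwise 0 (assumed coding).
df["stress"] = df["perceived_stress"].isin(["quite a lot", "very much"]).astype(int)

results = {}
for sex, group in df.groupby("sex"):
    # Multiple linear regression: WAS on RP status, adjusted for age, BMI,
    # JEM-based physical workload, cardiovascular disease and stress.
    model = smf.ols(
        "was ~ rp + age + bmi + C(physical_workload) + cvd + stress",
        data=group,
    ).fit()
    results[sex] = model
    print(sex, "n =", int(model.nobs))
    print(model.summary2().tables[1].loc[["rp", "age", "bmi"]])
```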
Limitations There was a large proportion of survey non-responders, and an underrepresentation of younger age groups among responders (as presented in Additional file 1) that could have affected the generalizability of the results and introduced a sampling bias. Although socioeconomic status may affect work ability, the present study collected no data on such parameters, other than occupational title. The validity of the diagnosis of RP can be questioned, since it was based on a single questionnaire item, although supported by a previously developed color chart that has previously been shown to increase both sensitivity and specificity in comparison with only posing questions [19,25]. Using more specific criteria, or performing a thorough examination by a physician, would likely have increased the diagnostic accuracy. However, such clinical investigation was not feasible due to the large study size. Furthermore, the surveys were not designed to separate between primary and secondary RP, and it is plausible that secondary RP might negatively affect work ability more so than primary RP. Also, the underlying conditions of patients with diseases giving rise to secondary RP could have had a larger impact on work ability than the symptoms of RP in itself. Thus, in future studies on work ability in this context, more attention should be given to the etiology of RP. Finally, the low explained variance proportions of the multiple linear regression models suggest that there are other important factors that affect work ability that were not investigated in our study. Strengths To the authors' knowledge, this is the first populationbased prospective study on work ability among subjects with RP. The study was performed in a Scandinavian setting, where the condition is quite common. The
A new variational approach to linearization of traction problems in elasticity A new energy functional for pure traction problems in elasticity has been deduced in [23] as the variational limit of nonlinear elastic energy functional for a material body subject to an equilibrated force field: a sort of Gamma limit with respect to the weak convergence of strains when a suitable small parameter tends to zero. This functional exhibits a gap that makes it different from the classical linear elasticity functional. Nevertheless a suitable compatibility condition on the force field ensures coincidence of related minima and minimizers. Here we show some relevant properties of the new functional and prove stronger convergence of minimizing sequences for suitable choices of nonlinear elastic energies. Introduction This article is focussed on the properties of the functional In (1.1) and in the sequel we set: N = 2, 3, M N ×N skew denotes the set of skew-symmetric N ×N real matrices, Ω ⊂ R N is a Lipschitz open set representing the reference configuration of an hyperelastic material body undergoing pure traction, V 0 (x, ·) are uniformly positive definite quadratic forms on square matrices, the vector field v in H 1 (Ω, R N ) denotes a displacement and E(v) := 1 2 (∇v T + ∇v) denotes the related linearized strain, while L(v) represents the potential energy associated to displacement v, here f and g are respectively the prescribed boundary and body force fields, moreover we assume that the total load is equilibrated, say Motivations for studying functional F and its minimization over v in H 1 (Ω, R N ) rely on the variational asymptotic analysis developed in [23], where we proved that for pure traction problems in elasticity a gap arises between the classical linearized elasticity functional E, and the rigorous variational limit of nonlinear elastic energy of a material body subject to an equilibrated force field, since this limit actually is functional F, provided the load fulfils a suitable compatibility condition (see (1.12) and Theorem (3.3) below). The inequality F(v) ≤ E(v) for every v is straightforward. nevertheless the two functionals cannot coincide: indeed Notwithstanding this gap, in [23] we showed that the two functionals F and E have the same minimum and same set of minimizers when the loads are equilibrated and compatible (see Theorem (3.3) below). In the case N = 2 the gap between the two functionals can be better clarified as follows (see Remark 2.5 in [23] for more details ): where α − = max(−α, 0), thus Even more explicitly, if N = 2, λ, µ > 0 and then V 0 (x, B) = 4µ|B| 2 + 2λ|TrB| 2 and we get such evaluation in 2D approximately means that for every displacement v such that the associated deformed configuration y(Ω) is greater than the area of Ω, the global energy F(v) provided by new functional F is the same as the one provided by classical linearized elasticity, say E(v). The rigorous derivation of the variational theory of linear elasticity ( [17]) from the theory of finite elasticity ( [20], [30]) was achieved in [11] through arguments based on De Giorgi Γ− convergence theory, thus providing a mathematical justification of the classical elasticity in small deformations regime, at least for Dirichlet or mixed boundary value problem. 
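For orientation before turning to the variational derivation, the main objects used above can be collected in one place. The displays below follow the surrounding definitions (writing E(v) for the linearized strain and calligraphic letters for the energy functionals) and are a reconstruction rather than a verbatim restatement, so the exact normalizations should be checked against the numbered formulas (1.1)-(1.3) and (1.12).

```latex
% Reconstruction of the main objects from the surrounding definitions;
% exact normalizations are those fixed in (1.1)-(1.3) and (1.12).
\[
  \mathbb{E}(v) := \tfrac12\bigl(\nabla v^{T}+\nabla v\bigr),
  \qquad
  \mathcal{L}(v) := \int_{\partial\Omega} \mathbf f\cdot v\,d\mathcal H^{N-1}
                    + \int_{\Omega} \mathbf g\cdot v\,dx ,
\]
\[
  \mathcal{E}(v) := \int_{\Omega}\mathcal V_{0}\bigl(x,\mathbb{E}(v)\bigr)\,dx-\mathcal{L}(v),
  \qquad
  \mathcal{F}(v) := \min_{\mathbf W\in M^{N\times N}_{\mathrm{skew}}}
     \int_{\Omega}\mathcal V_{0}\bigl(x,\mathbb{E}(v)-\tfrac12\,\mathbf W^{2}\bigr)\,dx
     -\mathcal{L}(v).
\]
% The load is equilibrated when
\[
  \int_{\partial\Omega}\mathbf f\,d\mathcal H^{N-1}+\int_{\Omega}\mathbf g\,dx=\mathbf 0,
  \qquad
  \int_{\partial\Omega}\mathbf f\cdot \mathbf W x\,d\mathcal H^{N-1}
   +\int_{\Omega}\mathbf g\cdot \mathbf W x\,dx=0
  \quad\forall\,\mathbf W\in M^{N\times N}_{\mathrm{skew}},
\]
% and the compatibility condition (1.12) requires in addition
\[
  \int_{\partial\Omega}\mathbf f\cdot \mathbf W^{2}x\,d\mathcal H^{N-1}
   +\int_{\Omega}\mathbf g\cdot \mathbf W^{2}x\,dx<0
  \qquad\forall\,\mathbf W\in M^{N\times N}_{\mathrm{skew}},\ \mathbf W\neq\mathbf 0 .
\]
```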
In a more recent paper ( [23]) we have focussed the analysis on the analogous variational question related to Neumann type condition, say the pure traction problem in elasticity: the case where the elastic body is subject to a system of equilibrated forces and no Dirichlet condition is assigned on the boundary. Referring to the open set Ω ⊂ R N , N = 2, 3, as the reference configuration of an hyperelastic material body, the stored energy due to a deformation y can be expressed as a functional of the deformation gradient ∇y as follows Then due to frame indifference there exists a function V such that We set F = I + hB, where h > 0 is an adimensional small parameter and We assume that the reference configuration has zero energy and is stress free, i.e. W(x, I) = 0, DW(x, I) = 0 for a.e. x ∈ Ω , and that W is regular enough in the second variable, then Taylor's formula entails If the deformation y is close to the identity up to a small displacement, say y(x) = x + hv(x) with bounded ∇v then, by setting E(v) := 1 2 (∇v T + ∇v) , one easily obtains Right hand side in (1.8) represents the classical linear elastic deformation energy and such a limit was retained to establish a reasonable justification of linearized elasticity. However in [11] it is proved by Γ-convergence techniques that, under standard structural conditions on W, actually the linear elastic problem is achieved in the limit by exploiting the weak convergence of H 1 (Ω, R N ), in case of Dirichlet or mixed boundary condition. The variational limit is different when no Dirichlet boundary condition is present, as we outline briefly here: in [23] we studied the case of Neumann boundary conditions, that is pure traction problem in elasticiy, by considering the sequence of energy functionals and we we inquired whether the asymptotic relationship F h (v h ) = inf F h + o(1) as h → 0 + implies, up to subsequences, some kind of weak convergence of v h to a minimizer v 0 of a suitable limit functional in H 1 (Ω; R N ); to this aim next example is highly explicative: assume for every h > 0, then by choosing a fixed nontrivial N × N skew-symmetric matrix W, a real number 0 < 2α < 1 and setting (1), though z h has no subsequence weakly converging in H 1 (Ω; R N ). Therefore in contrast to [11], one cannot expect weak H 1 (Ω; R N ) compactness of minimizing sequences for pure traction problem, not even in the simplest case of null external forces: we emphasize that in nonlinear elasticity this difficulty cannot be easily circumvented in general by standard translations since with P projection on infinitesimal rigid displacements. Nevertheless, we will show in Theorem 4.1 below that, at least for some special For this reason, we exploited a much weaker topology: the weak L 2 (Ω; R N ) convergence of linear strains. Since such convergence does not imply an analogous convergence of the skew symmetric part of the gradient of displacements, one may expect that the Γ limit functional is different from the point-wise limit of F h , as actually is the case. 
Under some natural assumptions on W, a careful application of the Rigidity Lemma of [16] together with a suitable tuning of asymptotic analysis with Euler-Rodrigues formula for rotations show that, if E(v h ) are bounded in L 2 , then up to subsequences √ h ∇v h converges strongly in L 2 to a constant skew symmetric matrix and the variational limit of the sequence F h , with respect to the w-L 2 convergence of linear strains, turns out to be the functional F defined in (1.1): in [23] it is proved that if loads are equilibrated and fulfil the compatibility condition then pure traction problem in linear elasticity is rigorously deduced via Γ-convergence from the corresponding pure traction problem formulated in nonlinear elasticity, referring to weak L 2 convergence of the linear strains; moreover minimizers of F coincide with the ones of of linearized elasticity functional E; thus providing a complete variational justification of pure traction problems in linear elasticity at least if (1.12) is satisfied. In particular, as it is shown in Remark 2.8, this is true when g ≡ 0, f = f n with f > 0 and n is the outer unit normal vector to ∂Ω, that is when we are in presence of tension-like surface forces. In the present paper we prove some relevant properties concerning the structure of the new functional and improve its variational connection for a particular but significant class of nonlinear energies. In section 2 we prove that F is sequentially lower semicontinuous weak respect to the natural but very weak notion of convergence, e.g. weak L 2 of linearized strains (see Proposition (2.3)), though F exhibits a kind of "nonlocal" behavior (see Remark 2.5). In the 2D case we can prove that F is a convex functional for every choice of the positive definite quadratic form V 0 or, equivalently, for the variational limit of every nonlinear stored energy W fulfilling structural assumptions of general kind in the theory of elasticity: this is shown by making explicit its first variation and showing that the second variation cannot be negative (see (2.11) and Proposition 2.1). On the other hand in the 3D case the functional F cannot be convex for whatever choice of the positive definite quadratic form V 0 or, equivalently for every nonlinear stored energy W fulfilling the standard structural assumptions: see Proposition 2.2 and the general counterexample to convexity therein. The dichotomy above relies on the fact that there exist pairs of skew-symmetric matrices is not the square of any skew-symmetric matrix: e.g. see (2.7); while in the 2D case the matrix W 2 is a nonpositive multiple of the identity for every skew-symmetric matrix W. Notice that F is not subadditive: indeed already in dimension N = 2 formula (1.5) shows that functional F cannot be subadditive on disjoint sets. In Section 3 for reader's convenience we summarize and comment preliminary main results of [23] about the variational convergence of pure traction problems. Eventually, in Section 4 we refine the convergence properties for minimizing sequences of the On the other hand, if inequality in (1.12) is fulfilled only in a weak sense by the collection of skew symmetric matrices, then still argmin F contains argmin E and min F = min E, but F may have infinitely many minimizing critical points which are not minimizers of E. Therefore, only two cases are allowed: either min F = min E or inf F = −∞; actually the second case arises in presence of compressive surface load. 
We mention several contributions facing issues in elasticity which are strictly connected with the context of present paper: [1], [2], [3], [4] [5], [6], [7], [8], [19] [21], [22], [24], [25], [26], [27], [28]. Structural properties of functional F In this section we develop further the analysis of structural properties of functional F defined by (1.1), focussing mainly on convexity and semicontinuity issues. All along the paper we assume that the reference configuration of the elastic body is a and set these notations: the generic point x ∈ Ω has components x j referring to the standard basis vectors e j in R N ; L N and B N denote respectively the σ-algebras of Lebesgue measurable and Borel measurable subsets of R N . The notation for vectors a, b ∈ R N and N ×N real matrices A, B, F are as follows: First we recall that the minimum at right-hand side in definition (1.1) of F exists for every v in H 1 (Ω, R N ), so that F(v) is well defined: precisely the finite dimensional minimization problem has exactly two solutions which differs only by the sign, since strict convexity of the positive definite quadratic form V 0 (x, ·) entails and hence the existence of a unique minimizer W 2 . and introduce the C 2 functionals F ε by setting Then by (2.3), (2.4) and representation Moreover we claim that F ε is convex for every ε > 0 and this property entails the convexity of F since F is the supremum of a family of convex functions. Indeed F ε is a C 2 functional on the whole space H 1 (Ω, R N ) therefore its second variation, for every u, v ∈ H 1 (Ω, R N ), is By taking into account that 0 ≤ ϕ ′′ ε ≤ 2 we get Hence, representation (1.5) entails that the right hand side of (2.6) is . Therefore F ε is convex and claim is proved. Proof. Set Then ) thus proving that F is not convex in the 3D case for every choice of V 0 . Although existence of minimizers of F is already a direct consequence of convergence results in [23], in the next Proposition we provide a direct proof of sequential lower semicontinuity of F with respect to the natural, very weak convergence, for both cases of dimension 2 and 3. If lim inf n→+∞ F(v n ) = +∞ then the claim is trivial, so we may also assume without restriction that F(v n ) ≤ C. Assumption (1.3) of equilibrated load entails F(v n ) = F(v n − Pv n ), so may suppose that Pv n ≡ 0. We choose hence, if C K the Korn-Poincaré inequality in Ω and α > 0 is the uniform coercivity constant of V 0 , say V 0 (x, M) ≥ α|M| 2 , we get Therefore |W 2 n | is bounded and since W n is real skew-symmetric we obtain that |W n | is bounded too. So we may suppose that, up to subsequences, W n → W in M N ×N skew . By taking into account that Pv n ≡ 0 we get v n ⇀ v in H 1 (Ω, R N ) hence by recalling that V 0 (x, ·) is a convex quadratic form which proves the claimed lower semicontinuity inequality. Remark 2.4. The first variation of F can be explicitly evaluated in the 2D case, thanks to (1.5), as follows Remark 2.5. Functional F exhibits a nonlocal behavior: precisely in 2D, due to the representations (1.5) and (2.11) respectively of the functional of first variation, F(v) is the sum of a contribution E(v) due to local functional E related to linear elasticity plus a possibly vanishing contribution with global dependance on v explicitly evaluated by which simplifies as follows in the case of Green-Saint Venant energy: while the nonlocal coefficient Ω DV 0 (x, I)·E(v) dx − appears in Euler equations. 
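The 2D/3D dichotomy discussed above rests on an elementary algebraic fact, recorded here as a short LaTeX sketch (a verification supplied for the reader, not copied from the paper):

\[
W=\begin{pmatrix}0&w\\-w&0\end{pmatrix}\in M^{2\times2}_{\operatorname{skew}}
\;\Longrightarrow\;
W^{2}=-\,w^{2}I=-\tfrac12\,|W|^{2}\,I,
\]
so in dimension \(N=2\) the squares of skew-symmetric matrices are exactly the nonpositive multiples of the identity, and the inner minimization in the definition of \(\mathcal F\) reduces to a one-parameter problem. In dimension \(N=3\), instead, \(W^{2}\) is symmetric, negative semidefinite and has a double nonzero eigenvalue (it vanishes on the rotation axis); a sum of two such squares with non-parallel axes generically has three distinct eigenvalues and hence is not the square of any skew-symmetric matrix, which is the phenomenon behind the counterexample of Proposition 2.2.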
Preliminary variational convergence results In this Section we recall the main results of [23] about the variational convergence of pure traction problems. To this aim basic notation and assumptions for general nonlinear energies is introduced first. Still we assume that the reference configuration of the elastic body is a We consider a body made of an hyperelastic material, say there exists a L N ×B N 2 measurable W : Ω × M N ×N → [0, +∞] such that, for a.e. x ∈ Ω, W(x, ∇y(x)) represents the stored energy density, when y(x) is the deformation and ∇y(x) is the deformation gradient. Moreover we assume that for a.e. x ∈ Ω that is the reference configuration has zero energy and is stress free, so by (3.3) we get also In addition we assume that there exists γ > 0 independent of x such that By (3.6) and Taylor expansion with Lagrange reminder we get, for a.e. x ∈ Ω and suitable t ∈ (0, 1) depending on x and on B: Hence by (3.9) According to (3.7) for a.e. x ∈ Ω, h > 0 and every B ∈ M N ×N we set Taylor's formula with (3.6), (3.12) where the point-wise limit of integrands is the quadratic form V 0 defined by The symmetric fourth order tensor D 2 V(x, 0) in (3.14) plays the role of the classical elasticity tensor. By (3.5) we get (3.14) and (3.15) imply the ellipticity of V 0 : For a suitable choice of the adimensional parameter h > 0, the functional representing the total energy is labeled by F h : H 1 (Ω; R N ) → R ∪ {+∞} and defined as follows where L is defined by (1.2). In order to describe the asymptotic behavior as h → 0 + of functionals F h , we refer to the limit energy functional F : H 1 (Ω; R N ) → R defined by (1.1). Definition 3.1. Given an infinitesimal sequence h j of positive real numbers, we say that We proved that for every given infinitesimal sequence h j actually the minimizing sequences of the sequence of functionals Then there is a constant K, dependent only on Ω and the coercivity constant of of the stored energy density appearing in (3.5), such that for every sequence of strictly positive real numbers h j → 0 there are minimizing sequences of the sequence of functionals F h j ; for every minimizing sequence v j ∈ H 1 (Ω; R N ) of F h j there exist a subsequence and a displacement v 0 ∈ H 1 (Ω; R N ) such that, without relabeling, 22) h j ∇v j → 0 strongly in L 2 (Ω; M N ×N ) , If strong inequality in the compatibility condition (1.12) is replaced by a weak inequality, then the uniform estimate (3.18) still hold true and also minimizing sequences of the sequence of functionals F h j exist for every infinitesimal sequence h j , but the minimizers coincidence (3.20) for F and E cannot hold anymore. Nevertheless the following general result holds true. where the last inclusion is an equality in 2D: On the other hand, assume (3.1), W as in (1.10) and f = g ≡ 0, so that the compatibility inequality is susbstituted by the weak inequality; if v j are defined as above then, hence by frame indifference, has no weakly convergent subsequences in L 2 (Ω; M N ×N ). Remark 3.6. It is worth noticing that the compatibility condition (1.12) holds true when g ≡ 0, f = f n with f > 0 and n the outer unit normal vector to ∂Ω. Indeed let W ∈ M N ×N skew , W ≡ 0: hence by (1.3) and the Divergence Theorem we get thus proving (1.12) in this case. This means that in presence of tension-like surface forces and of null body forces the compatibility condition holds true. 
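Remark 3.6 can be made concrete by the short computation below (a sketch in the notation of this paper; the precise form of the load functional (1.2) and of the compatibility condition (1.12) is the one given in [23] and is not restated here). For a constant \(f>0\) and a surface load \(f\,\mathbf n\) along the outer unit normal,

\[
\int_{\partial\Omega} f\,\mathbf n\cdot\big(W^{2}x\big)\,d\mathcal H^{N-1}
= f\int_{\Omega}\operatorname{div}\big(W^{2}x\big)\,dx
= f\,\operatorname{tr}\big(W^{2}\big)\,|\Omega|
= -\,f\,|W|^{2}\,|\Omega|\;<\;0
\qquad\text{for every } W\in M^{N\times N}_{\operatorname{skew}},\;W\neq0,
\]
since \(\operatorname{tr}(W^{2})=-|W|^{2}\) for skew-symmetric \(W\); reversing the sign of \(f\) (uniform compression) reverses the inequality, consistently with the unboundedness of \(\mathcal F\) under compressive loads discussed below.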
It is quite natural to ask whether condition (1.12), which was essential in the proof of Theorem 3.3, may be dropped in order to obtain at least existence of min F: the answer is negative. Indeed the next remark shows that, when compatibility inequality in (1.12) is reversed for at least one choice of the skew-symmetric matrix W, then F is unbounded from below. Next example shows that in case of uniform compression along the whole boundary functional F is unbounded from below, regardless of convexity or nonconvexity of Ω and F. Indeed, for every W ∈ M N ×N skew such that |W| 2 = 2 we obtain Summarizing, only two cases are allowed: either min F = min E or inf F = −∞: the second case actually arises in presence of compressive surface load. The new functional F somehow preserves memory of instabilities which are typical of finite elasticity, while they disappear in the linearized model described by E. In the light of Theorem 3.3, as far as pure traction problems are considered, it seems reasonable that the range of validity of linear elasticity should be restricted to a certain class of external loads, explicitly those verifying (1.12): a remarkable example in such class is a uniform normal tension load at the boundary as in Remark (3.6); while in the other cases equilibria of a linearly elastic body could be better described through critical points of F, whose existence in general seems to be an interesting and open problem. Strong convergence of minimizing sequences of F h In this section we prove that for the special class of Green-Saint Venant energy density it is possible to choose a subsequence of functionals F h defined by (3.17) and a corresponding minimizing sequence, according to Definition (3.1), which is weakly converging in H 1 (Ω; R N ) to a minimizer of F defined by (1.1). Moreover, thanks to a result of [11], this convergence entails strong convergence in W 1,q (Ω; R N ) for 1 ≤ q < 2. Before stating the main result of this section we notice that, by frame indifference (3.3) and equilibrated load condition (1.3), without loss of of generality we can assume Therefore, if I k denotes the moment of inertia of Ω with respect to the k-th axis, by (4.1) we get Theorem 4.1. Let µ, λ > 0, Then there exists a (not relabeled) subsequence of functionals F h j and a minimizing sequence w j weakly converging in H 1 (Ω; R N ) and strongly converging in W 1,q (Ω, R N ) to w 0 ∈ argmin E, for 1 ≤ q < 2. Proof. By recalling Proposition 5.3 of [11] it will be enough to show that there exists a minimizing sequence w j for functionals F h j (say F h j (w j ) = inf F h j + o(1)) weakly converging in H 1 (Ω; R N ) to w 0 ∈ argmin F and (4.5) lim where it is worth noticing that due to (4.4) (4.6) V 0 (x, B) ≡ V 0 (B) = 4µ|B| 2 + 2λ| Tr B| 2 . To this aim let v j be a minimizing sequence for functionals F h j : by Theorem 3.3 there exist a (not relabeled) subsequence {h j } and v j , v 0 ∈ H 1 (Ω; R N ) such that Thanks to (4.1), (4.2) and (4.3) we get which, thanks to (4.9), implies , hence D h j := E(v j ) + 1 2 h j ∇v T j ∇v j are equibounded in L 2 (Ω; M N ×N ) and by setting w j := v j − Pv j , by recalling that B → V 0 (B) is convex we have (4.14) Since |V ′ 0 (B)| ≤ C|B| for some C > 0, by (4.12) and (4.13) we get (4.15) which proves that w j is a minimizing sequence too. It is now readily seen that w j are equibounded in H 1 (Ω; R N ) and (4.5) follows from (4.10) so the claim is proven. Remark 4.2. 
By inspection of the proof, Theorem 4.1 also holds for more general energies: e.g. if W is a convex function of F T F − I with quadratic growth, finite if and only if det F > 0.
2018-11-26T11:58:24.000Z
2018-11-26T00:00:00.000
{ "year": 2018, "sha1": "bbbe102978bd1a74ae64507a7c5539ccfa9d2702", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1811.11037", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bbbe102978bd1a74ae64507a7c5539ccfa9d2702", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science", "Physics" ] }
267723742
pes2o/s2orc
v3-fos-license
Personalized probiotic strategy considering bowel habits: impacts on gut microbiota composition and alleviation of gastrointestinal symptoms via Consti-Biome and Sensi-Biome Personalized probiotic regimens, taking into account individual characteristics such as stool patterns, have the potential to alleviate gastrointestinal disorders and improve gut health while avoiding the variability exhibited among individuals by conventional probiotics. This study aimed to explore the efficacy of personalized probiotic interventions in managing distinct stool patterns (constipation and diarrhea) by investigating their impact on the gut microbiome and gastrointestinal symptoms using a prospective, randomized, double-blind, placebo-controlled clinical trial design. This research leverages the multi-strain probiotic formulas, Consti-Biome and Sensi-Biome, which have previously demonstrated efficacy in alleviating constipation and diarrhea symptoms, respectively. Improvement in clinical symptoms improvement and compositional changes in the gut microbiome were analyzed in participants with predominant constipation or diarrhea symptoms. Results indicate that tailored probiotics could improve constipation and diarrhea by promoting Erysipelotrichaceae and Lactobacillaceae, producers of short-chain fatty acids, and regulating inflammation and pain-associated taxa. These findings suggest the potential of tailored probiotic prescriptions and emphasize the need for personalized therapeutic approaches for digestive disorders. Clinical trial registration: https://cris.nih.go.kr/cris/index/index.do, identifier KCT0009111. Introduction Advances in sequencing technologies have enabled exploration of the human gut microbiome, which is a reservoir of diverse microorganisms crucial for overall health (1).Extensive projects such as the Human Microbiome Project (HMP) and Metagenomics of the Human Intestinal Tract (MetaHIT) have highlighted the profound influence of microbiome diversity and compositional clusters on various aspects of human physiology (2,3).Notably, OPEN ACCESS EDITED BY Balamurugan Ramadass, All India Institute of Medical Sciences Bhubaneswar, India Inclusion and exclusion criteria Adults aged 19-75 years with bowel irregularities were recruited from the website of Chong Kun Dang Healthcare.Participants were categorized based on the major symptoms of functional constipation and diarrhea according to the ROME IV criteria (6).The Insensitive Gut (IG) group consisted of individuals with a Bristol Stool Score (BSS) ≤2, bowel movements ≤2 times per week, excessive straining during defecation, or a sense of incomplete evacuation.The Sensitive Gut (SG) group included individuals with a BSS ≥5, bowel movements ≥2 times daily, or discomfort during defecation.The exclusion criteria encompassed pregnant, breastfeeding, or intending-to-conceive women; individuals using dietary supplements or medications affecting gastrointestinal function; and those who underwent gastrointestinal surgery. 
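Purely as an illustration of how the screening rules above translate into an automated check (this is not part of the study protocol, and the function and field names below are hypothetical), a minimal Python sketch could look as follows.

def classify_bowel_pattern(bss, bowel_movements_per_week,
                           straining=False, incomplete_evacuation=False,
                           discomfort=False):
    """Assign a screened participant to the Insensitive Gut (IG) or
    Sensitive Gut (SG) group following the criteria quoted in the text;
    bss is the Bristol Stool Score (1-7)."""
    # IG: BSS <= 2, bowel movements <= 2 per week, excessive straining,
    # or a sense of incomplete evacuation.
    if bss <= 2 or bowel_movements_per_week <= 2 or straining or incomplete_evacuation:
        return "IG"
    # SG: BSS >= 5, bowel movements >= 2 per day (>= 14 per week),
    # or discomfort during defecation.
    if bss >= 5 or bowel_movements_per_week >= 14 or discomfort:
        return "SG"
    return "not eligible"

print(classify_bowel_pattern(bss=2, bowel_movements_per_week=3))    # IG
print(classify_bowel_pattern(bss=6, bowel_movements_per_week=15))   # SG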
Study design This study employed a prospective, randomized, double-blind, placebo-controlled clinical trial design to investigate the efficacy of personalized probiotic prescriptions for constipation and diarrhearelated stool patterns.Remote assessments were conducted at the baseline, intervention period, and endpoint at the R&D Center of Chong Kun Dang Healthcare.Participants completed online questionnaires through a link provided for remote screening.The screening phase assessed the eligibility and exclusion criteria, and baseline and endpoint fecal samples were collected for microbiome analysis.A two-week washout period preceded probiotic intervention, and probiotic and prebiotic consumption was prohibited during the intervention period.The wash-out period was enforced to eliminate any potential influence on the composition of gut microbiome from routine probiotic intake.The intervention lasted for four weeks, and compliance was monitored through daily intake logs.Participants with insensitive and sensitive bowels were randomly assigned to the test or placebo group using a blocked randomization method, with blinding maintained throughout the study. Questionnaires and assessment metrics The BSS and stool frequency were used to assess improvements in bowel habits.The BSS employed a 7-point scale of illustrations from which participants could choose, whereas stool frequency required participants to select one of four options: 1 (less than three times a week), 2 (3-4 times a week), 3 (five times a week to once daily), and 4 (twice daily or more).Additionally, the amelioration of gut symptoms was evaluated by assessing incomplete evacuation, straining, urgency, abdominal discomfort, and abdominal pain.Each gut symptom questionnaire was rated on a 5-point Likert scale. Sample collection and microbiome analysis Fecal samples were collected using fecal collection kits (NBG-1C; NobleBio Inc., Hwaseong, Korea) containing a preservative buffer for microbiome analysis.The samples were shipped to the laboratory and stored at −80°C until further processing.Genomic DNA was extracted from the thawed samples using the Omega Mag-Bind DNA Prep Kit (Omega Bio-tek Inc., Norcross, GA, United States), and the V4 region of the bacterial 16S rRNA gene amplified using the 515F-806R primer pair (17).PCR products were sequenced on an Illumina i-Seq 100 system (Illumina, Inc., San Diego, CA, United States) at the R&D Center of the Chong Kun Dang Healthcare (Seoul, Korea).Sequence data were subjected to clustering of OTU representative sequences at 98% using a pipeline generated in the CLC Genomics Workbench 22.0 (QIAGEN, Aarhus, Denmark) at the R&D Center of the Chong Kun Dang Healthcare.Taxonomic richness was calculated using the SILVA 115 database (18) as the reference database.Differential abundance analysis (DAA) was performed using an embedded tool in the CLC Genomics Workbench.Rarefaction depth was determined based on the minimum read count per sample, and alpha diversity indices computed using the "microbiome" package in the R statistical software [v3.1.0;(19)].Beta diversity indices were calculated using the "vegan" package, whereas Spearman's correlation and scatter plot analyses executed and visualized using the "ggplot2" package.For postintervention microbiome marker exploration, linear discriminant analysis effect size (LEfSe) analysis was conducted using the "MicrobiomeMarker" package. 
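The alpha-diversity step described above was carried out with R packages; the following is a rough Python sketch of the same idea (rarefaction to the minimum read count followed by a Shannon index), offered only to illustrate the computation. The table layout and names are hypothetical; the actual analysis used the "microbiome" package in R.

import numpy as np
import pandas as pd

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over the nonzero OTU proportions."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def rarefy(counts, depth, seed=0):
    """Randomly subsample one sample's OTU counts to a common sequencing depth."""
    rng = np.random.default_rng(seed)
    pool = np.repeat(np.arange(len(counts)), counts)
    picked = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picked, minlength=len(counts))

# Hypothetical OTU count table: rows = samples, columns = OTUs
otu_table = pd.DataFrame({"OTU1": [120, 30], "OTU2": [40, 200], "OTU3": [0, 15]},
                         index=["sample_A", "sample_B"])
depth = int(otu_table.sum(axis=1).min())   # rarefaction depth = minimum read count
alpha = otu_table.apply(lambda row: shannon_index(rarefy(row.to_numpy(), depth)), axis=1)
print(alpha)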
Statistical analysis Wilcoxon signed-rank tests were used to compare bowel habit assessments, alpha diversity indices, and relative abundances of the gut microbiota within the groups before and after intervention.Wilcoxon rank-sum tests with Bonferroni correction were used to compare post-intervention characteristics between the test and placebo groups.PERMANOVA was used to compare beta diversity.Statistical analyses were conducted using R statistical software in the RStudio environment, with p-values <0.05 considered statistically significant. Participant recruitment and baseline characteristics A total of 78 participants aged between 19-75 years who selfreported experiencing bowel movement issues were recruited from September-November 2022.Among the recruited participants, 20 were excluded from the intervention because they did not meet the inclusion criteria or had met the exclusion criteria, such as reporting no disorders or being older.A total of 58 participants successfully passed the screening process and were categorized into the IG (n = 28) and SG groups (n = 30) based on their bowel habits.Through blockrandomized allocation, the IG group comprised 14 participants in the intervention group and 14 in the placebo group, whereas the SG group consisted of 15 participants in both the intervention and placebo groups.During the intervention, all participants in the IG group completed the consumption, whereas four participants in the SG group, dropped out because of voluntary withdrawal and compliance failure (Figure 1).There were no statistically significant differences in clinical characteristics, baseline symptoms, age, or sex between the intervention and placebo groups in either of the IG and SG groups (Table 1). Bowel habit improvement In the IG group, no significant differences in stool consistency (assessed using the BSS) and frequency were observed between the probiotics (CB) (3.4 ± 1.7 and 2.6 ± 1.1, respectively) and placebo (CP) groups (3.8 ± 0.6 and 2.9 ± 0.7, respectively) at the 4-week endpoint.Among the gut symptoms, there were no significant differences in reduction of straining and abdominal pain between the CB (1.9 ± 0.7 and 1.2 ± 0.4, respectively) and CP groups (1.9 ± 0.8 and 1.5 ± 0.9, respectively).However, a trend of reduced straining (p = 0.056) and a significant decrease in urgency (p = 0.037) and abdominal pain (p = 0.048) were observed in the CB group.In contrast, no significant changes in straining (p = 0.24), urgency (p = 1), or abdominal pain (p = 0.77) were observed in the CP group (Figures 2A,C). For the SG group, at the 4-week endpoint, stool consistency significantly improved in the probiotics (SB) group (3.2 ± 1.1) compared with that in the placebo (SP) group (4.7 ± 0.9), whereas stool frequency showed no significant change.Although no significant differences were observed in gut symptoms after four weeks, a trend of reduced incomplete evacuation (p = 0.071) was evident in the SB group (Figures 2B,D). 
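As an illustration of the comparisons just described (Wilcoxon signed-rank tests within groups, Wilcoxon rank-sum tests between groups with Bonferroni correction), a small Python sketch using scipy is given below; the score vectors are invented for the example and do not reproduce the study data.

import numpy as np
from scipy import stats

# Hypothetical 5-point Likert scores for one symptom (e.g., abdominal pain)
baseline = np.array([3, 4, 2, 3, 4, 3, 2, 4])
endpoint = np.array([2, 3, 2, 2, 3, 2, 2, 3])

# Within-group comparison (baseline vs. endpoint): Wilcoxon signed-rank test
_, p_within = stats.wilcoxon(baseline, endpoint)

# Between-group comparison at the endpoint: Wilcoxon rank-sum test
probiotic_end = np.array([2, 3, 2, 2, 3, 2, 2, 3])
placebo_end = np.array([3, 3, 4, 2, 3, 4, 3, 3])
_, p_between = stats.ranksums(probiotic_end, placebo_end)

# Simple Bonferroni correction over the number of symptom endpoints tested
n_tests = 5
p_between_adj = min(1.0, p_between * n_tests)
print(p_within, p_between, p_between_adj)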
Microbiome modulation Alpha diversity analyses, including Shannon, Chao1, inverse Simpson, and observed OTU indices, were performed on both the probiotics and placebo groups within the IG and SG groups at the 4-week endpoint or before and after intervention within each group.No significant differences in alpha diversity were observed between the groups (Supplementary Table S1).Beta diversity analysis, based on the Bray-Curtis dissimilarity using PCoA, and PERMANOVA, showed that there were no significant differences in beta diversity between the probiotics and placebo groups at the 4-week endpoint or before and after intervention within each group (Supplementary Figure S1A). The six strains of probiotics in Consti-Biome and Sensi-Biome significantly increased in relative abundance after intervention in the probiotics groups (CB and SB) groups, whereas no significant changes were observed in the placebo groups (CP and SP).Moreover, the probiotics groups showed a significantly higher abundance of these strains than in the placebo groups at the 4-week endpoint (Figure 3).Relative abundances of major gut microbiota at the phylum, family, and genus levels were assessed before and after intervention.After intervention, the CB group showed an increase in the abundance of Actinobacteria, Firmicutes, and Verrucomicrobia and a decrease in Bacteroidetes and Proteobacteria, whereas these changes were not observed in the CP group (Supplementary Figure S2A).The log Firmicutes-to-Bacteroidetes (F/B) ratio was also significantly higher in the CB group (p = 0.031) (Figure 4A).At the family level, the CB group exhibited a decreased abundance of Acidaminococcaceae, Bacteroidaceae, Prevotellaceae, and Porphyromonadaceae and increased abundance of Coriobacteriaceae, Ruminococcaceae, and Erysipelotrichaceae.In contrast, the CP group showed decreased Bifidobacteriaceae and increased Prevotellaceae abundances (Supplementary Figure S2B).Although no significant changes were observed before and after intervention, Erysipelotrichaceae abundance was significantly higher in the CB group at the 4-week endpoint (p = 0.014) (Figure 4B).Similar trends were observed in the SG group, with increases in the abundance of Actinobacteria, Bacteroidetes, and Verrucomicrobia and decreases in that of Firmicutes and Proteobacteria at the phylum level after intervention (Supplementary Figure S2D).Additionally, after intervention, the SB group displayed higher abundances of Actinobacteria and Verrucomicrobia than those in the SP group.However, no significant changes were observed before and after intervention.At the family level, the SB group showed a decreased abundance of Enterobacteriaceae, Lachnospiraceae, Lactobacillaceae, and Veillonellaceae and increased abundance of Bifidobacteriaceae, Erysipelotrichaceae, and Ruminococcaceae.Conversely, the SP group showed a decreased abundance of Bifidobacteriaceae (Supplementary Figure S2E).At the 4-week endpoint, the SB group showed no statistically significant differences in the log Firmicutes-to-Bacteroidetes (F/B) ratio, but exhibited a significantly higher abundance of Lactobacillaceae compared to the SP group (p < 0.01) (Figures 4C,D). 
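The log Firmicutes-to-Bacteroidetes ratio reported above is a simple per-sample quantity; a minimal Python sketch of its computation is shown below (the relative-abundance values are invented, and a small pseudocount is added only to avoid division by zero).

import numpy as np
import pandas as pd

# Hypothetical phylum-level relative abundances (rows = samples)
phyla = pd.DataFrame({"Firmicutes": [0.52, 0.47, 0.61],
                      "Bacteroidetes": [0.30, 0.38, 0.22],
                      "Actinobacteria": [0.10, 0.08, 0.09],
                      "Proteobacteria": [0.08, 0.07, 0.08]},
                     index=["s1", "s2", "s3"])

eps = 1e-6  # pseudocount guarding against zero Bacteroidetes abundance
log_fb = np.log((phyla["Firmicutes"] + eps) / (phyla["Bacteroidetes"] + eps))
print(log_fb)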
Using LEfSe, a biomarker discovery tool based on linear discriminant analysis, probiotics and placebo group-specific microbial markers were identified.In the IG group, Peptostreptococcaceae, Erysipelotrichaceae, Lactobacillaceae, and Leuconostocaceae were enriched in the CB group, whereas Prevotellaceae, Eubacteriaceae, and Paenibacillaceae were enriched in the CP group at the 4-week endpoint (Figures 5A,B).For the SG group, Lactobacillaceae, Victivallaceae, Micrococcaceae, and Rhodobacteraceae were characteristic of the SB group, whereas the Firmicutes-related Family XIII Incertae Sedis and Bacteroidetes-related Uncultured bacterium were enriched in the SP group (Figures 5C,D). Correlation analysis was performed using Spearman's correlation to examine the relationship between specific characteristic microbial taxa identified through DAA and the relative abundance of intestinal probiotics after intervention in the CB and SB groups.Negative correlations were observed between characteristic indicator taxa and the relative abundance of intestinal probiotics in both the IG and SG groups (Figure 6).Notably, significant correlations were observed in the SB Group (p = 0.034) (Figure 6B). Discussion In this clinical study, we conducted a tailored probiotics trial targeting participants with constipation-and diarrhea-dominant symptoms to analyze the separate effects of probiotic supplementation on IG and SG groups with distinct symptoms.This study assessed the efficacy of tailored probiotics by evaluating improvements in bowel habits and gut symptoms, the modulation of beneficial microorganisms, and positive compositional changes in the gut microbiome. The multi-strain probiotic formulas provided to the participants, Consti-Biome and Sensi-Biome, have previously demonstrated efficacy in the management of specific bowel symptoms.For instance, the Consti-Biome formula, SmilinGut, exhibited efficacy in alleviating constipation symptoms in patients with constipation-predominant IBS (20).In vitro experiments with Consti-Biome have shown inhibitory effects against specific harmful bacteria (21), and in vivo studies in constipation-induced rat models demonstrated improved intestinal motility due to loperamide administration (22).Sensi-Biome includes various strains, such as L. acidophilus DDS-1 and B. lactis UABla-12, which are effective in managing abnormal stool consistency and abdominal pain in patients with IBS (23).In vitro inhibitory effects against specific harmful bacteria (21) and improvement in intestinal motility in an acetate-induced diarrhea rat model (unpublished data) were also observed for Sensi-Biome.This study represents the first application of Consti-Biome and Sensi-Biome probiotic formulas in humans and provides important results supporting the clinical efficacy of tailored probiotics and their ability to modulate the human gut microbiome. 
In the IG group, reductions in abdominal pain, urgency and straining were observed in the probiotics (CB) group after Consti-Biome intervention.These results suggest that probiotics have a positive impact on improving constipation symptoms.Notably, the symptoms of straining are used to diagnose functional constipation under the ROME IV criteria (6).In the SG group, the probiotics (SB) group showed improved stool consistency based on the BSS after Sensi-Biome intervention.Although no significant difference was observed in incomplete evacuation, a trend towards reduction was observed in the SB group, contrasting with that in the placebo (SP) group.BSS serves as a marker for colonic transit time (24) and is a key factor in distinguishing diarrhea and constipation-predominant subtypes of IBS (25,26). However, the improvement in symptoms was not specific to the IG or SG groups, and the degree of improvement was mild, with some cases showing better efficacy in the placebo group, such as for the incomplete evacuation in the IG placebo (CP) group.Considering some reported clinical cases of probiotic strains found in Consti-Biome and Sensi-Biome, the relatively short 4-week intervention duration compared with the 12-week intake might explain the limited improvement of symptoms (20,23).Probiotic supplementation in each group was confirmed by a significant increase in specific bacterial strains.Notably, the collective abundance of specific taxa unique to the IG group, including Succinivibrionaceae, Micrococcaceae, Porphyromonadaceae, Prevotellaceae, Peptococcaceae, and Alcaligenaceae, was negatively correlated with an increase in Consti-Biome (Figure 6A).Some of these taxa, such as Porphyromonadaceae, Prevotellaceae, and Alcaligenaceae, have been reported to be associated with constipation symptoms in previous studies (27)(28)(29).In the SG group, distinctive taxa, including Enterococcaceae, Planococcaceae, Enterobacteriaceae, Nocardiaceae, and Staphylococcaceae showed a negative correlation with an increase in Sensi-Biome (Figure 6B).Among these, Enterococcaceae, Enterobacteriaceae, and Staphylococcaceae have been linked to diarrhea symptoms (27,30,31). In the CB group, a significant increase in beneficial strains such as Firmicutes and Erysipelotrichaceae, known for short-chain fatty acid (SCFA) production, was supported by composition analysis and LEfSe results (Figures 5A,B).SCFAs produced by gut bacteria contribute to serotonin (5-HT) synthesis and secretion in enterochromaffin cells, regulating gut motility through the direct stimulation of TPH1 (32).This suggests a mechanistic explanation for the relief of constipation symptoms mediated by Consti-Biome, consistent with previous in vivo findings (22).Other microbial markers highlighted by LEfSe also indicated the positive effect of Consti-Biome on beneficial gut microbiota composition.The decreased abundance of Prevotellaceae in the CB group compared with that in the CP group may suggest an alleviation of inflammation (33), whereas the scarcity of Eubacteriaceae, which has been associated with reduced intestinal motility and IBS severity (34, 35), is in line with the findings in the CP group.Moreover, the increase in Peptostreptococcaceae, which is correlated with increased bowel movements (36), and SCFAproducing Erysipelotrichaceae (37) along with health-promoting lactate-producing groups, such as Lactobacillaceae and Leuconostocaceae (38), support the positive effects of Consti-Biome. 
In the SB group, the microbial composition and LEfSe results reflected an increase in Lactobacillaceae, indicating the activation of known beneficial microbes (21-23) (Figures 5C,D).Furthermore, in Relative abundance of probiotic strains in the Consti-Biome (CB; n = 14) and Sensi-Biome (SB; n = 13) groups before and after intervention, compared with that of the respective placebo groups (CP and SP; n = 14 and n = 13, respectively).Relative abundances in the (A) CB and CP, (B) SB and SP groups.Consti-Biome and Sensi-Biome consist of six strains each, represented as five strains in the figure owing to Consti-Biome containing two strains with identical species (Lactiplantibacillus plantarum).Wilcoxon signed-rank tests were performed for each comparison, and significance levels denoted with the following symbols: ns (not significant), ** (p < 0.01), and **** (p < 0.0001).Additionally, comparisons between the CB and SB groups and the corresponding placebo groups (CP and SP) after intervention also showed significant differences (data not shown).6), and their microbial profiles have consistently exhibited variations related to each symptom (7,27).Despite the feasibility of tailored probiotic approaches, attempts to prescribe symptom-specific probiotics are relatively limited.This study suggests the possibility of tailored probiotic prescriptions through microbial modulation and symptom improvement and underscores the need for clinical approaches that individualize probiotics for the targeted characteristics of each study subject.These findings offer insights into understanding the role of the gut microbiota in gastrointestinal health, fostering the development of therapeutic approaches for digestive disorders through microbial manipulation.However, further research and confirmation are warranted to advance clinically applicable probiotic therapies, considering the characteristics of individual study participants: there were some limitations to this study.In this clinical trial, there were a small number of participants, which limits the generalizability of the conclusions.In addition, short-term recruitment of participants based on self-reporting of constipation and diarrhea symptoms may lead to inconsistencies in symptom presentation.It is possible that different levels of intensity and frequency of symptoms exist among study participants because these symptoms result from various factors, such as diet, lifestyle, and stress.Consequently, longterm symptoms assessments with larger sample sizes must be used in future studies in order to maintain symptom consistency.Among the aforementioned taxa, there are instances that do not specifically align with previous research findings on bowel habits.For instance, Prevotellaceae, which are related to microbial features in the IG group, have been reported to be associated with enterotypes and rapid gut transit, in contrast to the existing results (16).This may be due to the influence of ethnicity, age, sex, and diet on these taxa or microbial diversity (2,14,43).Although they may not be distinctly segregated in terms of constipation and diarrhea, they could be linked to the inflammation associated with dysbiosis, potentially contributing to the exacerbation of gastrointestinal disorders due to weakened intestinal cell function (33). 
Although this study analyzed the clinical effects of probiotic consumption tailored to two types of bowel habits, it did not consider metabolomic analysis.Recent evidence showing close associations between metabolic pathways and over 95% of fecal metabolites underscores the need to understand complex interactions between the microbiome and human metabolic environment (44).From this perspective, this study lacks a comprehensive understanding of the potential efficacy of probiotics in modulating the microbiome and metabolic pathways.Future research on probiotics should accurately assess microbial metabolic activity and contribute to a better understanding of their impact on human health.The results presented in this study demonstrate the potential clinical applicability of tailored probiotics.It is still necessary to conduct further research to enlarge the participation pool by incorporating variables such as ethnicity, diet, and lifestyle.In addition, larger sample sizes are required for long-term studies to produce more robust and reliable results.In the future, this research could provide a better understanding of the potential benefits of probiotic prescriptions tailored to individuals with compromised intestinal function. Conclusion The consumption of Consti-Biome significantly improved urgency and abdominal pain compared with the placebo group.In addition, participants who consumed Sensi-Biome showed distinct improvements in stool consistency compared with placebo groups.From a gut microbiome perspective, it was observed that, depending on the bowel habit, the probiotics either enhanced microbial biomarkers associated with bowel motility, such as Erysipelotrichaceae, or modulated microbial biomarkers related to inflammation mitigation, such as Lactobacillaceae.These findings emphasize the potential of using personalized probiotics based on bowel habits.The results of this study also underscore the need for a multidimensional approach, including long-term consumption and observation, consideration of individual variations, and metabolomics when assessing the efficacy of personalized probiotics.Further research informed by a deeper understanding of patients with bowel disorders could facilitate the employment of effective, tailored probiotic treatments. FIGURE 2 FIGURE 2Impact of Consti-Biome (CB; n = 14) and Sensi-Biome (SB; n = 13) on gastrointestinal symptoms in the Insensitive Gut and Sensitive Gut groups compared with that in each placebo group (CP and SP; n = 14 and n = 13, respectively).Stool consistency and frequency measurements (A,B) and evaluations of other gastrointestinal symptoms (C,D) during the 4-week intervention period.These were analyzed using Wilcoxon rank-sum and signed-rank tests. 
FIGURE 3 FIGURE 3 addition to the Uncultured and Incertae Sedis groups, other microbial markers such as Victivallaceae (39), negatively correlated with abdominal cramping and pain, Micrococcaceae (40) contributed to mucosal barrier protection and immune response stimulation, and the rarely encountered Rhodobacteraceae (41, 42) in cases of diarrhea, presented characteristic microbial indicators in the SB group.Hence, Consti-Biome and Sensi-Biome probiotic formulas have the potential to improve the symptoms of constipation and diarrhea by modulating specific indicator taxa, promoting SCFA-producing bacteria, and regulating microbial biomarkers associated with inflammation and pain.Constipation and diarrhea are distinct clinical symptoms ( FIGURE 4 FIGURE 4Comparison of the relative abundances of prominent microbial taxa in the gut microbiota after probiotic intervention in the Insensitive Gut and Sensitive Gut groups (CB and SB; n = 14 and n = 13, respectively) as well as their respective placebo groups (CP and SP; n = 14 and n = 13, respectively).(A) Log Firmicutes-to-Bacteroidetes (F/B) ratio and (B) relative abundance of Erysipelotrichaceae in the Insensitive Gut group.(C) Log F/B ratio and (D) relative abundance of Lactobacillaceae in the Sensitive Gut group.Wilcoxon rank-sum tests were conducted to assess significance. FIGURE 5 FIGURE 5Linear discriminant analysis effect size (LEfSe) after probiotic intervention in the Insensitive Gut and Sensitive Gut groups (CB and SB; n = 14 and n = 13, respectively), as well as their respective placebo groups (CP and SP; n = 14 and n = 13, respectively).(A) Comparison of effect sizes between the CB and CP groups after intervention.(B) Abundance plot comparing the CB and CP groups.(C) Comparison of effect sizes between the SB and SP groups after intervention.(D) Abundance plot comparing the SB and SP groups.The significance level was set at p < 0.1 for the identified biomarkers.Significance levels are denoted in the abundance plot as follows: * (p < 0.05) and ** (p < 0.01). TABLE 1 Baseline characteristics of participants in the Insensitive Gut (n = 28) and Sensitive Gut (n = 26) groups. The p-values indicate statistical significance between the intervention and placebo groups within each gut group. TABLE 2 Characteristic microbial taxa at the family level for the Insensitive Gut (n = 28) and Sensitive Gut (n = 26) groups at baseline.
2024-02-18T16:02:43.476Z
2024-02-16T00:00:00.000
{ "year": 2024, "sha1": "6fa4e9108d1767dcd03ad6deb5758cd06aa4a31b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2024.1302093/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fed892ab3d2a2b29fb4f7034bd75befed32478c5", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
136811130
pes2o/s2orc
v3-fos-license
Dataset Paper Multiple Ion Cluster Source for the Generation of Magnetic Nanoparticles : Investigation of the Efficiency as a Function of the Working Parameters for the Case of Cobalt We present dataset of Co nanoparticles production using a Multiple Ion Cluster Source (MICS). We study the evolution of the mean size and deposition rate of Co nanoparticles as a function of the power and argon flux applied to the Co magnetron, the aggregation length of the Co magnetron and the total argon flux. The results show the strong influence of these parameters on the mean size of the nanoparticles and the efficiency of the process as well as on the atomic deposition rate. In particular, it is shown that nanoparticles of mean size ranging from 4 to 14 nm can be produced and that the influence of the working parameters on the production of magnetic nanoparticles is more complex than for the case of noble metal presented previously. Introduction One of the ultimate goals in nanotechnology is to provide new systems of production; however, it is reasonable to consider that the success of nanotechnology will rely on its capability of complementing actual technologies before substituting them [1].The fabrication of nanoscale materials with the expected new properties arising from their reduced size is a prerequisite for their successful use in next generation nanotechnological applications such as magnetic recording, sensing, and biological diagnosis, for example, [2][3][4][5].For these reasons, a great variety of nanoparticle synthesis methods have been developed [6][7][8][9][10][11]. Among different methods for producing nanoparticles (NPs), the gas-phase synthesis comprises well-known techniques for the production of an extensive variety of nanosized particles [12][13][14][15].These fabrication methods allow the continuous production of clusters with a wide range of sizes (few nm to tens of nm) [16][17][18][19].The gas-phase synthesis processes have been extensively studied [20][21][22][23][24] with a special focus on the nanoparticle yield issues and have become a popular technique for large-scale production and for fundamental studies.On the other hand, gas-phase techniques have the intrinsic added value of producing particles without impurities.As they are generated in vacuum conditions, they are more pure than liquid-based processes since the presence of contaminants from the solution, detrimental for electric and magnetic properties [25][26][27], is avoided.The presence of impurities can be strongly reduced or even avoided in vacuum and gas-phase systems, which make these techniques the best choice when the NPs purity is critical for specific applications.Haberland et al. [28] developed a gas-phase nanocluster fabrication system using a magnetron sputtering gun to generate the material vapor that is subsequently condensed into nanoparticles and achieved relatively high particle yields.Such model of gas aggregation source or Ion Cluster Source (ICS) has attracted much attention in the last decades for the synthesis of nanoparticles [29].Argon gas is commonly used as sputtering gas.As soon as a supersaturation condition is reached, the vaporized target atoms coalesce and form the NPs [19,[30][31][32] in the so-called aggregation zone.The nanometer-sized particles exhibit properties that are not found in the corresponding macroscopic systems.Much effort has been recently devoted to investigate their size-dependent properties, which offer "tunability" for creating new materials [33][34][35][36]. 
Apart form its vacuum compatibility that guarantees the high purity of the generated NPs, other characteristics of the ICS are the ability (1) to control the size distribution of the fabricated NPs (using a mass filter and/or by controlling the different growth parameters), (2) to be compatible with RF sputtering for the fabrication of NPs from insulating or semiconductor sputtering targets, (3) to allow the injection of other gases such as oxygen and nitrogen for the modification of the composition of the NPs, (4) to control the NPs density by simply adjusting the deposition time, and (5) to allow the deposition of the NPs over any substrate (provided that it is vacuum compatible).While the ICS is very versatile for the fabrication of NPs with the chemical composition identical or very close to that of the sputtering target, it does not allow the fine tune of the chemical composition unless the sputtering target is replaced for each desired stoichiometry.Since the ICS is a vacuum or UHV system, the replacement of the sputtering target can only be done by breaking the vacuum with the subsequent air contamination of the system that needs to be baked (lost of time), and so forth.In order to overcome such limitation, a new kind of gas aggregation source derived from the ICS has been recently proposed [37].In the new gas aggregation source called Multiple Ion Cluster Source (MICS), the single two-inch magnetron is replaced by 3 one-inch magnetrons that are loaded with different sputtering targets.Since each magnetron can be operated individually (each magnetron has its sputtering gas entry, electrical connection, positioning system, and cooling pipes), the density of ions extracted from each sputtering target can be adjusted and hence the stoichiometry of the resulting NPs [38].The MICS is now commercially available at Oxford Applied Research Ltd. [39].However it has been recently reported that, to some extent, the working parameters of the individual magnetrons are correlated through the total pressure inside the aggregation zone and the relative position of the magnetrons [40].Such correlation has been demonstrated using a single magnetron loaded with a noble metal (silver), where a linear correlation of the NP size and the efficiency with most working parameters was observed. Here we investigate the correlation between the working parameters with a magnetron operated with a magnetic target (cobalt) that is expected to display a different sputtering behavior due to its magnetic properties.We present the evolution of the NP size as well as the efficiency of the process in terms of NP rate and atomic rate, as a function of applied power, argon flux applied to the magnetron, total argon flux, and aggregation length.We observed that the performance of magnetic targets differs from nonmagnetic ones.The efficiency of producing NPs rarely presents a linear tendency as occurred with Ag.Thus, a full description of the evolution as a function of each parameter is needed due to the greater complexity of the system.We will also show that it is possible to generate Co NPs from 4 nm up to 14 nm diameter, that is, below and above the superparamagnetic limit that is close to 9 nm [41] using standard fabrication conditions. 
Methodology The MICS system is a modified ICS that combines three independent magnetrons into the aggregation zone NC 200U-B model from Oxford Applied Research Ltd.The fabrication of the magnetrons follows the original design of Professor Colino García from the Facultad de Ciencias del Medio Ambiente, Toledo, Spain [42].Each magnetron possesses its own translation motion, argon mass flow controller, cooling pipes, and electrical connection.Additionally, the main translation that allows a displacement of all the magnetrons into the aggregation zone is preserved like in a standard ICS.More details of the MICS can be found elsewhere [38].A schematic view of the MICS is shown in Figure 1(a) while Figures 1(b) and 1(c) show pictures of the side and top views of the system.Figure 1(d) displays the configuration of the magnetrons that are placed into the aggregation zone.The MICS was connected to a UHV chamber with a base pressure in the low 10 −9 mbar range. For the present study, a Co target (99.99%) of 1 inch diameter was loaded in one of three magnetrons (Figure 1(d)).The other two magnetrons, identified as M2 and M3, were also loaded with target materials but were not used to generate ions in the present study.The magnetron loaded with Co was always positioned closer to the exit diaphragm of the MICS than the other 2 magnetrons in order to avoid target contamination by Co plasma.The NPs generated with the MICS were deposited on flat silicon wafers (10 × 10 mm 2 ) introduced into UHV through a fast entry load lock and then placed in the chamber at approximately 200 mm from the exit diaphragm of the MICS.All the deposits were performed at room temperature.Note that the distance between the Si substrate and the magnetron plasma is long enough to avoid any heating effect on the substrate where no change in temperature could be detected.The spot diameter of the area where the NPs landed was ≈40 mm.No size distribution was observed in the deposited spot, although some density variations were detected, the center of the deposition spot being covered by a higher density of nanoparticles.In order to follow the evolution of the density of nanoparticles with the working parameters of the MICS, the measurements were always performed in the region close to the center of the deposition spot.The size distributions and deposition rates (NPs/m 2 s) were extracted from the analysis of atomic force microscopy (AFM) images.The AFM images were acquired using the Cervantes AFM System equipped with the Dulcinea electronics from Nanotec Electronica SL [43] in dynamic mode using commercial silicon AFM tips with a typical radius less than 7 nm.The WSxM software [44] has been used for the analysis of the images.The size distribution of the NPs was extracted from several AFM images through the NP height measurement.The number of AFM images required for the estimation of the size distribution is dependent on the coverage percentage and ranged from 2 to 11 images.The fit of the height distributions to extract the mean sizes has been performed assuming a Galton or lognormal distribution [45].The deposition rates were calculated from the count of NPs from several AFM images and the deposition time of each sample.Based on previous studies on similar systems, the NP height is equivalent to the diameter of NP [46], so it can be assumed that the particles studied in this dataset have spherical shape.Therefore, the atomic deposition rates (atoms/m 2 s) were calculated as the product of the number of atoms per NP (using the 
volume of a sphere of the NP mean size and the density of cobalt) and the NP deposition rates. Different series of NP deposits were prepared by changing one parameter at a time while keeping the others constant. The parameters that were tuned were the following: (1) the power applied to the Co magnetron, (2) the argon flux applied to the Co magnetron (Φ Co), (3) the total argon flux (Φ Total), and (4) the aggregation length of the Co magnetron, defined as the distance between the magnetron head and the exit diaphragm of the aggregation zone. For all samples the aggregation lengths of magnetrons M2 and M3 were fixed (175 mm for M2 and 172 mm for M3). For the study of the influence of the power applied to the Co magnetron, two series of samples were produced. For both series, the same aggregation length of the Co magnetron (150 mm) and Φ Total = 80 sccm were used. Moreover, Φ Co = 5 sccm and Φ Co = 30 sccm were used for the first and the second series, respectively (sccm is the standard cubic centimeter per minute at standard temperature and pressure). The power applied to the Co magnetron was varied over a different range in each series. In Table 1, we show the values of the parameters used for both series as well as the obtained results. In Figure 2, we display the evolution of the mean size and deposition rate corresponding to the two series. As can be observed, the mean size increases with increasing applied power for both series. On the other hand, the evolution of the deposition rate is not similar for both series. While this rate increases with increasing power for the series with the highest Φ Co, it does the opposite for the other series. Figure 3 displays the evolution of the atomic deposition rate for both series. It shows that the atomic deposition rate can be tuned with both the applied power and Φ Co. The evolution of the efficiency with the argon flux is further investigated in the following. For the study of the influence of the argon flux, two series of samples were fabricated and characterized. For the first of these series, Φ Co was varied while keeping Φ Total constant at 80 sccm. For the second, Φ Co was kept fixed at 10 sccm and Φ Total was varied. In both cases, the applied power was 5 W and the aggregation length of the Co magnetron was 150 mm. The parameters used for the fabrication and the results obtained are given in Table 2, and the corresponding graphs are displayed in Figures 4 and 5. Figure 4(a) clearly shows a decrease of the mean size of the NPs and an increase of the deposition rate with increasing Φ Co. Although both evolutions are not linear, the resulting atomic deposition rate displayed in Figure 4(b) presents a linear evolution. This indicates that the number of Co atoms extracted from the MICS steadily decreases as a function of Φ Co. The observed evolution of the second argon-flux series (Figure 5) is rather different from that of the first. While in both cases we observed an increase of the deposition rate, for the second series the mean size of the NPs also increases with increasing Φ Total, in opposition to the first series. Surprisingly, the nonlinear evolutions observed in Figure 5(a) also give rise to a linear evolution of the atomic deposition rate (Figure 5(b)). In opposition to the first argon-flux series, the atomic deposition rate increases with increasing Φ Total. The last series of samples was fabricated in order to study the influence of the aggregation length of the Co magnetron. For this series, the applied power was 10 W, and Φ Total = 80 sccm and Φ Co = 30 sccm were kept constant. The different parameters and results are given in Table 3.
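The atomic deposition rates quoted in Tables 1-3 follow from the conversion described in the Methodology (number of atoms per NP from the sphere volume and the bulk density of cobalt, multiplied by the NP deposition rate). A short Python sketch of this conversion is given below; the numerical inputs in the example are illustrative and are not taken from the tables.

import numpy as np

N_A = 6.022e23        # Avogadro constant [atoms/mol]
RHO_CO = 8.90e3       # bulk density of cobalt [kg/m^3]
M_CO = 58.93e-3       # molar mass of cobalt [kg/mol]

def atoms_per_nanoparticle(diameter_nm):
    """Co atoms in a spherical NP of the given diameter, assuming bulk density."""
    radius_m = 0.5 * diameter_nm * 1e-9
    volume_m3 = (4.0 / 3.0) * np.pi * radius_m ** 3
    return volume_m3 * RHO_CO / M_CO * N_A

def atomic_deposition_rate(np_rate_per_m2_s, diameter_nm):
    """Atomic deposition rate [atoms/m^2 s] from the NP rate [NPs/m^2 s]."""
    return np_rate_per_m2_s * atoms_per_nanoparticle(diameter_nm)

print(atoms_per_nanoparticle(10.0))           # ~4.8e4 atoms for a 10 nm NP
print(atomic_deposition_rate(1.0e12, 10.0))   # illustrative NP rate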
From Figure 6(a) it appears that the mean size of the NPs is almost a linear function of the increasing aggregation length of the Co magnetron. This behavior is similar to that of a standard ICS, although the size range is smaller in the case of the MICS. On the other hand, it is observed that the deposition rate (Figure 6(a)) reaches a maximum and then decreases, indicating that the maximum efficiency is obtained at an aggregation length of ≈130 mm. The observed behavior of the deposition rate (Figure 6(a)) strongly influences the atomic deposition rate, which reaches a maximum at an aggregation length of around 140 mm (Figure 6(b)). Dataset Description The dataset associated with this Dataset Paper consists of 22 items which are described as follows. Dataset Item 1 (Images). Eighty-seven AFM images required for the estimation of the size distribution of the nanoparticle deposits. Each deposit was measured by AFM by recording images of scan areas of 1 × 1 μm 2 , 2 × 2 μm 2 , and 3 × 3 μm 2 ; the AFM images were analyzed, and from these images the height of 4895 nanoparticles was extracted. For each nanoparticle deposit, the fit of the nanoparticle size distribution was performed. Dataset Item 2 (3D Object Data). Eighty-seven STP files for the full content of Dataset Item 1 (Images) that allow the analysis of the images. Concluding Remarks We have reported a study on the efficiency of fabricating nanoparticles of magnetic materials (cobalt) using a Multiple Ion Cluster Source. The mean size of the nanoparticles, the deposition rate, and the atomic deposition rate of the deposits can be adjusted by tuning working parameters such as the power applied to the Co magnetron, the argon flux injected into the Co magnetron, the total argon flux, and the aggregation length of the Co magnetron. The efficiency as a function of applied power follows a nonlinear evolution, and it has been shown that nanoparticles with mean sizes ranging from 4 to 14 nm can be fabricated. While the increase of the total argon flux induces a clear increase of both the mean NP size and the deposition rate, the increase of the argon flux injected through the Co magnetron shows a more complex behavior where the biggest NPs are generated at the lowest Ar fluxes. Finally, it has been demonstrated that the size and deposition rate increase with increasing aggregation length up to 130 mm and then decrease. The mean size of the nanoparticles can be fine-tuned by adjusting the aggregation length. In addition, it has been found that these characteristics of the deposits present more complex tendencies than in the previously reported case of a noble metal like silver. Figure 1: Schematic representation of the MICS (a). Side picture (b) and top picture of the MICS (c). Picture of the magnetrons that are placed in the aggregation zone (d). Figure 2: Evolution of the mean size and deposition rate of Co nanoparticles as a function of the power applied to the magnetron and for a fixed argon flux of 5 sccm on the magnetron (a). The same as (a) but with a higher argon flux on the magnetron (30 sccm) and for a higher applied power range (b). Figure 3: Atomic deposition rate as a function of power applied to the magnetron derived from Figure 2. Figure 4: Evolution of the mean size and deposition rate of Co nanoparticles as a function of the argon flux injected into the magnetron and for a fixed total argon flux (a). Atomic deposition rate as a function of the argon flux injected into the magnetron and for a fixed total argon flux (b).
Figure 5: Evolution of the mean size and deposition rate of Co nanoparticles as a function of the total argon flux while keeping a fixed argon flux into the Co magnetron (a). Atomic deposition rate as a function of the total argon flux while keeping a fixed argon flux into the Co magnetron (b). Table 1: Fabrication parameters and results as a function of power applied to the Co magnetron. Table 2: Fabrication parameters and results as a function of argon flux. Table 3: Fabrication parameters and results as a function of aggregation length of the Co magnetron. Dataset Items (Tables): size distribution data extracted from images 1-8, 20-27, 28-29, 33-35, 36-39, 58-63, 75-76, 79-81 (Dataset Item 20), 82-84, and 85-87 in Dataset Item 1.
2019-04-28T13:12:14.814Z
2014-04-29T00:00:00.000
{ "year": 2014, "sha1": "4cd87a3e8780937be55deaffd297f131ff6c5773", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/archive/2014/584391.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f4c06a9335c8cb1fd83edc72f9e10ab654171062", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
134848965
pes2o/s2orc
v3-fos-license
Research on the Water Area Extraction in Suzhou Based on Remote Sensing In order to accurately obtain water information and calculate the water area, the traditional visual interpretation method was used to extract water information in Suzhou from high-resolution remote sensing images. Verification by on-the-spot sampling measurements showed that the precision of visual interpretation meets the practical requirements. The research in the experimental area indicates that the visual interpretation method is practicable. The water area of Suzhou was obtained as 3205 square kilometers, and the area excluding Taihu Lake and the Yangtze River is 994 square kilometers. Introduction With the rapid development of space information technology, it has become possible to acquire remote sensing images that are multi-temporal, high-resolution and accurate [1]. The extraction of water information has long been a basic task in the field of remote sensing information extraction [2], and 3S technology plays an increasingly important role in building base databases and monitoring illegal activities such as unauthorized occupation of water areas. There are two kinds of methods for extracting water information from high-resolution remote sensing images: one is visual interpretation, and the other is semi-automatic or automatic information extraction. The accuracy of visual interpretation is relatively high, but it depends on the subjective judgment of human interpreters and its efficiency is low. Semi-automatic and automatic extraction methods are various; for example, Li Changyou [3] used ETM imagery to extract the water of Ulansuhai Lake with a multi-band combination algorithm and a multi-spectral mixture analysis method. However, the accuracy of such semi-automated methods is difficult to verify quantitatively, and the band combinations are only applicable under specific water conditions. Although automatic or semi-automated extraction improves efficiency to a certain extent, it lacks universality of application, and most methods are suited only to a specific research area. There is no unified standard for evaluating extraction precision at a given image resolution, and the extraction targets are basically limited to open water, which cannot always be extracted effectively. Therefore, this line of research has not yet formed a relatively complete and mature theoretical system. In view of the large extent of the research area, the complex distribution of rivers and lakes, and the need to calculate water areas, a high accuracy of the extracted water area is required. In order to minimize the influence of errors, the visual interpretation method is used here to extract water information in Suzhou city. The Basic Situation of the Research Area and Data Source Suzhou is located downstream of the Yangtze River and Taihu Lake, and its territory is flat. River ports and lakes are crisscrossed and scattered across it, with more than 20 000 rivers and 300 lakes. Water plays a huge role in flood control, drainage, irrigation, water supply, shipping, the ecological landscape and water culture in Suzhou. In order to investigate the total area and distribution of water areas in Suzhou accurately, obtain high-precision water extraction information, monitor their changes using aerial images from different phases, and detect water-occupying behaviour, high-resolution aerial images combined with large-scale topographic maps are needed to complete the survey under consistent conditions.
In this research, the basic images used for interpretation were taken in 2011 and 2013; all of them are colour aerial images with a resolution of 0.1-0.3 m, which have been processed by orthophoto correction and band fusion. Auxiliary information for interpretation included the 1:1000-scale topographic map of Suzhou, the 1:10000-scale topographic map of Jiangsu Province, and the electronic map of Suzhou City. Technology Roadmap The roadmap for extraction of background information of the water area is shown in Figure 1. In principle, the delineation of a water area is based on two basic conditions: hard coastline and non-hard coastline. (1) Hard coastlines include embankments, hard revetments, retaining walls and roads; they generally appear as white stripes on aerial images. When there is a rigid coastline or a house within 5 meters of the water surface, the water boundary is drawn directly along the edge of the rigid shoreline or along the waterside face of the building, as shown in Figure 2. (2) When there are no artificial structures along the shore, the water boundary is drawn along the water-land trace lines, based on the waterfront of the water area combined with high-resolution aerial images from different periods, as shown in Figure 3. For estuaries: if there are no sluices or other control structures at the estuary but the lake is managed by the municipal government, the boundary is traced 500 meters into the lake, as shown in Figure 4. If there are no sluices or other control structures at the estuary and the lake is not managed by the municipal government, or the lake is managed by the municipal government and the river is a catchment river, the area of rivers and lakes is bounded directly by the entrances and exits of the rivers and lakes. When the length of the river between two cities is less than 1 000 m, the water area of the lake is bounded at half the length of the river. Special situations are handled as follows. (1) When two rivers intersect, the middle line of the river channel should run through the junction; that is, the length of each river includes the range of the junction. (2) When two rivers intersect, if the water at the intersection does not belong to either area, it is incorporated into the river with the higher grade or larger width, and the other river is cut off at the intersection. (3) When a river is connected with another type of water body, if the water at the junction belongs to an independent water type, both ends of the river are cut off at the junction when the river area is drawn. (4) When a sandbank or beach in the middle of a river or lake is covered with large artificial buildings, it is treated as a hollow (excluded) area, as shown in Figure 5. (5) When sandbars and beaches in the middle of a river or lake do not fall under the above situation, they will be submerged at high water levels and therefore belong to the water area. The Result of Extracting Information of Water Area for Suzhou City in 2013 Including Taihu Lake and the Yangtze River, the whole water area of Suzhou city is about 3 205.005 km², and excluding Taihu Lake and the Yangtze River it is about 994.348 km². Including Taihu Lake and the Yangtze River, lakes are the largest component of the water area, accounting for 68.6% of the city's total water area. Rivers are second, with a proportion of about 29.9%, and the area of ponds is the smallest, about 1.5%.
Excluding Taihu Lake and the Yangtze River, the area of rivers in Suzhou is the largest, accounting for about 50.5% of the total water area. Lakes are second, with a proportion of about 44.6%, and the area of ponds is still the smallest, about 4.9%. Verification of the Results of Water Extraction by Actual Survey RTK technology was used to measure the boundary lines of the water areas. The error of the calculated point positions is ±0.16 m, which is less than ±0.3 m, the tolerance of digital mapping technology. Based on the above discussion, we can conclude that the method of water area extraction based on remote sensing is accurate and reliable and can meet the actual demand. Conclusions When the survey situation is complex, the area is large and the image resolution is high, the visual interpretation method is highly efficient compared with traditional manual field inspection, while also taking the extraction accuracy and practical needs into account. Using remote sensing and GIS technology to establish a more accurate base library of water information, combined with high-resolution remote sensing images, the basic information of a specific area can be obtained clearly, intuitively and efficiently. The frequency of data renewal will be accelerated, providing decision-making support for the timely discovery and investigation of illegal occupation of water areas, and the resulting upgrading of the information management level is of great significance. The accuracy of visual interpretation is closely related to the experience of the interpreters, so errors are inevitable. In order to avoid errors caused by human factors, automatic extraction methods remain the focus of future exploration.
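As a quick consistency check of the figures reported in the Results section, the category shares can be converted back into absolute areas. This is a minimal sketch; the totals and percentages are taken from the text, while the per-category areas it prints are derived values rather than numbers stated in the paper.

```python
# Totals reported for Suzhou in 2013 (km^2).
total_with_taihu_yangtze = 3205.005
total_without = 994.348

# Category shares reported in the Results section.
shares_with = {"lakes": 0.686, "rivers": 0.299, "ponds": 0.015}
shares_without = {"rivers": 0.505, "lakes": 0.446, "ponds": 0.049}

for label, total, shares in [
    ("including Taihu Lake and the Yangtze River", total_with_taihu_yangtze, shares_with),
    ("excluding Taihu Lake and the Yangtze River", total_without, shares_without),
]:
    print(label)
    for category, share in shares.items():
        print(f"  {category}: {share * total:.1f} km^2")
    # Shares should sum to ~1; small deviations reflect rounding in the text.
    print(f"  share total: {sum(shares.values()):.3f}")
```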
2019-04-27T13:13:09.426Z
2019-02-18T00:00:00.000
{ "year": 2019, "sha1": "49122b72d9360df3fe2c3fbf5772bc9e5270be81", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/472/1/012078", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e52ae64a1e780992def191f24bace57a5f8319d7", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
209377035
pes2o/s2orc
v3-fos-license
Multi-Instanton Calculus in c = 1 String Theory We formulate a strategy for computing the complete set of non-perturbative corrections to closed string scattering in $c=1$ string theory from the worldsheet perspective. This requires taking into account the effect of multiple ZZ-instantons, including higher instantons constructed from ZZ boundary conditions of type $(m,1)$, with a careful treatment of the measure and contour in the integration over the instanton moduli space. The only a priori ambiguity in our prescription is a normalization constant ${\cal N}_{m}$ that appears in the integration measure for the $(m,1)$-type ZZ instanton, at each positive integer $m$. We investigate leading corrections to the closed string reflection amplitude at the $n$-instanton level, i.e. of order $e^{-n/g_s}$, and find striking agreement with our recent proposal on the non-perturbative completion of the dual matrix quantum mechanics, which in turn fixes ${\cal N}_{m}$ for all $m$. Introduction In a recent paper [1] we proposed that the perturbative closed string scattering amplitudes of c = 1 string theory, after Borel resummation, should be further corrected by the effect of ZZinstantons, and the resulting string theory is dual to a natural completion of the c = 1 matrix quantum mechanics where the fermi sea states with no incoming flux from the "other side" of the potential are filled. Nontrivial evidence was given at 1-instanton level, at order e −1/gs for closed string 1 → k amplitudes and order e −1/gs g s for closed string 1 → 1 amplitudes, where the string theoretic computation based on worldsheets with boundary on the ZZ-instantons is shown to agree with the proposed matrix model dual, up to the overall normalization factor of the instanton measure, a 2-loop renormalization constant of the instanton action, and a constant in canceling logarithmic divergences between worldsheet diagrams of different topologies. The latter constant was subsequently determined analytically by Sen [2] by consideration of open+closed string field theory, in agreement with the numerical result anticipated from [1]. In this paper, we extend the analysis of [1] to multi-instanton levels, namely at order e −n/gs for all positive integer n. Recall that the ZZ-instanton, considered in [1], amounts to introducing boundaries on the worldsheet that obey the (1, 1) ZZ boundary condition in the Liouville sector and Dirichlet boundary condition in the time-like free boson X 0 . A class of multi-instanton configurations involve n ZZ-instantons, of the (1, 1) type, located at separate Euclidean times x 1 , · · · , x n . We will see that, in addition, one must also consider "higher instantons" defined by ZZ boundary condition of type (m, 1), for m ≥ 2, whose action is m times that of a single (1, 1) ZZ-instanton. For reasons not fully understood, it appears that more general ZZ boundary conditions of type (m, ) with m, ≥ 2 do not contribute. Thus, a general instanton configuration at order e −n/gs consists of a set of ZZ-instantons of type (m i , 1) located at time x i , for i = 1, · · · , , with i m i = n. We will refer to such an instanton configuration as of type {m 1 , m 2 , · · · , m }. The leading n-instanton contribution to 1 → k closed string scattering involves worldsheet diagrams that consist of k + 1 disconnected discs, with one closed string vertex operator inserted on each disc as shown in Figure 1. 
The boundary condition is captured by the boundary state where |ZZ(m, 1) Liouville refers to the (m, 1) ZZ boundary state in Liouville theory and |D(x) X 0 refers to the Dirichlet boundary state at X 0 = x in the free boson CFT. One must then integrate over the collective coordinates x i , and finally sum over instanton types {m 1 , · · · , m }. The key nontrivial ingredient will be the choice of contour and measure in the integration over collective coordinates. Let us illustrate our strategy and prescription in the {1, 1, · · · , 1} case, i.e. n instantons of type (1,1). The collective mode integration takes the schematic form e − n gs n i=1 dx i µ(x 1 , · · · , x n ) × (worldsheet diagram) , (1.2) where the worldsheet diagram does not contain any disconnected subdiagram that has no Figure 1: Worldsheet diagrams that compute the leading non-perturbative n-instanton correction to the closed string 1 → k amplitude, of order e −n/gs , mediated by multiple ZZinstantons. For a given configuration set {m 1 , m 2 , · · · , m l } with Σ i m i = n, the boundary condition on each disc is given by (1.1). closed string insertion. In other words, all disconnected components of the worldsheet diagram that have no closed string insertion can be absorbed into the measure factor µ. As always, the overall e −n/gs due to the instanton action may also be interpreted as the exponentiation of the empty disc diagram. At order g 0 s , µ is computed by the exponentiation of the cylinder diagram, µ(x 1 , · · · , x n ) = exp 1≤i,j≤n i j + O(g s ) . (1.3) We will see in that in the limit where x i 's are close to one another, µ approaches the Vandermonde determinant squared i<j x 2 ij up to an overall normalization, as expected of the non-Abelian nature of open string modes on coincident D-instantons. The cylinder diagram with the same boundary condition on the two boundary components, namely i = j, gives a constant factor that is naively divergent and must be regularized. We will fix this constant factor by matching with the proposed matrix model dual. An important issue is the choice of integration contour in the x i 's. Conventional view of instantons as Euclidean saddle solutions may suggest integration over Euclidean times for the ZZ-instantons, i.e. purely imaginary x i . However, when a pair of ZZ-instantons of (1, 1) type are separated by 2π √ α in Euclidean time, the open string "tachyon" stretched between the two ZZ-instantons becomes on-shell, giving rise to a pole in µ(x 1 , · · · , x n ). A contour prescription is necessary to render the integration over collective coordinates well defined. Our prescription will be simply to integrate along real x i 's, i.e. Lorentzian times, rather than Euclidean times. The above prescription generalizes straightforwardly to the case of instantons of type {m 1 , · · · , m }, with the appropriately modified integration measure computed from the cylinder diagram. One novelty is that a ZZ-instanton of type (m, 1) comes with a normalization factor N m in the integration measure over its coordinate. We do not know of a way to fix N m a priori from the worldsheet perspective (say by regularizing the cylinder diagram), and will instead fix them by comparing the answer with the dual matrix model. Strikingly, after fixing N m , we will find precise agreement with the non-perturbative terms in the matrix model result for 1 → k amplitude at the 2-instanton level (order e −2/gs ), and for 1 → 1 amplitude at the n-instanton level for all n. The paper is organized as follows. 
In section 2, we recap the proposal for the nonperturbative completion of the c = 1 matrix quantum mechanics in [1], and explicitly evaluate what would amount to the n-instanton contribution to closed string amplitudes. In section 3, we review the definition of ZZ-instantons, compute the instanton measure from the cylinder diagram, and compute the n-instanton contribution to closed string amplitudes for n = 2, 3, 4. In section 4, we extend the computation of the closed string reflection amplitude to all instanton orders. We comment on the lessons and implications of our results in section 5. 2 Instanton expansion in the non-perturbative completion of the c = 1 matrix quantum mechanics The closed string sector of c = 1 string theory, at the perturbative level, has long been conjectured to be dual to a U (N )-gauged matrix quantum mechanics in a suitable N → ∞ limit [3][4][5][6][7]. We refer to the latter as the "c = 1 matrix quantum mechanics." It is defined by the Hilbert space of wave functions Ψ(X) in the N × N Hermitian matrix X that are invariant under the U (N ) adjoint action on X, with the Hamiltonian H = 1 2 Tr (P 2 − X 2 ), where P is the matrix of canonically conjugate momenta. Writing X = Ω −1 ΛΩ, where Λ = diag(λ 1 , ..., λ N ) and Ω ∈ U (N ), the wave function Ψ(X) can be expressed as a function of the eigenvalues Ψ({λ i }) that is invariant with respect to permutation on the λ i 's. The Hamiltonian H acting on the U (N )-invariant wave function may be expressed as The system is thus equivalently described by the wave functionΨ({λ i }) ≡ ∆Ψ({λ i }) which is completely antisymmetric under permutation of the λ i 's, subject to the HamiltonianĤ. In other words, the system describes N non-relativistic non-interacting fermions in the potential V (x) = − 1 2 x 2 . The appropriate infinite N limit, also known as the double-scaling limit, is defined by taking N → ∞ while keeping the energy of the fermi surface −µ(< 0) finite. In the duality with c = 1 string theory, µ is related to the string coupling g s by g s = (2πµ) −1 . In the semiclassical limit µ 1, the closed string vacuum corresponds to the matrix model state in which the fermions fill the right side of the potential, i.e. the region x > 0, up to the fermi energy −µ. Closed string states are dual to collective excitations of the fermi surface, which may also be viewed as particle-hole pairs. The S-matrix of the collective excitations is most conveniently computed by combining the reflection amplitudes of the individual fermions and holes in the fermi sea [8]. At the non-perturbative level, such a description of the matrix model dual is imprecise. Various proposals of the non-perturbative completion of the matrix model have been considered in the past [4,9,10], either by modifying the Hamiltonian or modifying the notion of the closed string vacuum state. In [1], we proposed a specific matrix model state as the dual of the closed string vacuum. This proposal is supported by a detailed agreement of non-perturbative corrections to the closed string scattering amplitudes on both sides of the duality, which we now review. At any given energy E, there are two linearly independent single-fermion eigenstates |E R and |E L of Hamiltonian 1 2 (−∂ 2 x − x 2 ). |E R is defined by the fermion wave function with no incoming flux from x = −∞, and |E L is the state related by x → −x. The proposed dual of the closed string vacuum state, |Ω , is one in which the fermions occupy all |E R for E ≤ −µ and none other. 
As a scattering state of a non-relativistic fermion, |E R has reflection amplitude [11] R The reflection amplitude of a hole ("unoccupation of |E R by a fermion") is given by (R(E)) −1 . Note that |R(E)| < 1 due to the tunneling of the fermion through the potential barrier. The same tunneling effect enhances the reflection amplitude of a hole, (R(E)) −1 , to have magnitude greater 1. This is in contrast to the type 0B matrix model [9,10], or the theory of "type II" in [4], where both sides of the potential are filled by the fermi sea and the reflection amplitude of a hole is (R(E)) * . The exact 1 → k S-matrix element of the closed strings/collective modes, computed using the particle-hole formalism [8], takes the form Here S 1 , S 2 are disjoint subsets of S = {ω 1 , ..., ω k } satisfying S 1 S 2 = S, |S 2 | denotes the number of elements of S 2 , and ω(S 2 ) is the sum of all elements of S 2 . The integrand of (2.3) can be written as In (2.4), we have separated a prefactor that captures non-perturbative corrections (suppressed by e −2πµ ) from the function K(µ, ω, x) that captures the Borel resummation of the perturbative asymptotic expansion in µ −1 . In fact, A 1→k admits an instanton expansion of the form (ω 1 , ...ω k ), (2.6) where A pert,(g) is to be identified with the perturbative closed string amplitude at genus g, and A n−inst,(L) is the perturbative expansion of the n-instanton amplitude at L-th open string loop order. In this case, the perturbative series at each instanton order is Borel-summable, and the summation over g or L in (2.6) is understood to be the Borel-resummation of the corresponding asymptotic series. Explicitly, the first few perturbative amplitudes are 1 A pert,(0) 1→1 where we wrote ω ≡ k i=1 ω i for the total energy in the 1 → k amplitude. These amplitudes were reproduced (numerically, in the case of 4-point tree-level and 2-point one-loop) from the worldsheet formulation of c = 1 string theory in [12]. At one-instanton level, the leading matrix model 1 → k amplitude and the subleading 1 → 1 amplitude are evaluated from (2.3), (2.4) to be (2.8) In c = 1 string theory, the non-perturbative corrections to the closed string amplitudes are understood as effects of ZZ-instantons [1,13]. In the worldsheet formalism, the ZZinstanton amounts to introducing boundaries to the worldsheet, together with an appropriate integration over the moduli space of the conformal boundary conditions, as will be discussed in section 3. Such a computation reproduces A numerically [1]. This agreement can also be viewed as a non-trivial check of the proposed matrix model state dual to the closed string vacuum. The leading non-perturbative contribution to the 1 → k amplitude (2.3) at the ninstanton level is given by The main objective of this paper is to understand how (2.9) arises from the worldsheet perspective for n ≥ 2. In section 3, we will extend the formalism of [1] to calculate the effect of multiple ZZ-instantons on the closed string scattering amplitudes, and reproduce the following special cases: +20 cosh(6πω) + 15 sinh(2πω) + 18 sinh(4πω) + 15 sinh(6πω)] . (2.10) A number of combinatorial observations based on these computations will then allow us to derive, in section 4, A n−inst,(0) 1→1 for all n from the ZZ-instanton computation. Multiple ZZ-instantons from the worldsheet perspective The worldsheet formulation of c = 1 string theory is based on the CFT that consists of c = 25 Liouville theory, a timelike free boson X 0 , and the bc conformal ghost system. 
The Virasoro primaries of the c = 25 Liouville CFT are scalar operators V P labeled by their Liouville momenta P ≥ 0, of scaling dimensions ∆ P = 2 + 2P 2 , subject to the normalization Their structure constants are given by the DOZZ formula [14,15]. 2 The closed string asymptotic states, as insertions on the worldsheet, are given by the BRST cohomology representatives of the form where the superscript + and − corresponds to in-and out-states, respectively. Sometimes referred to as "tachyons" in the literature for historical reasons, these closed string excitations behave as 1+1 dimensional massless particles in the asymptotic (weak coupling) region of spacetime. The perturbative closed string amplitudes are computed by integrating appropriate correlation functions of vertex operators and b-ghost insertions over the moduli space of punctured Riemann surfaces, as in the usual bosonic string theory. Unlike the critical bosonic string theory which suffers from closed string tachyon divergence at loop levels, the perturbative c = 1 string amplitudes are perfectly finite and compatible with perturbative unitarity [12]. Assuming the duality with the matrix model, which is checked up to 1-loop order in [12], the perturbative series of the c = 1 string amplitude is in fact Borel summable. However, the Borel-resummed perturbative amplitude by itself does not admit the interpretation as the scattering amplitude of collective excitations of free non-relativistic fermions [1]. Following the general prescription of [16], one expects non-perturbative corrections to the closed string amplitude due to D-instantons. Namely, one considers worldsheets with boundaries, subject to conformal boundary conditions that describe strings ending on Dinstantons, and integrate over the moduli space of Riemann surface with boundaries, as well as over the moduli space of boundary conditions i.e. the D-instanton moduli space. The one-instanton contribution to closed string scattering was studied in type IIB string theory in [17], and in c = 1 string theory in [1]. In the latter case, the relevant D-instantons are described by ZZ-boundary condition in Liouville theory [13] and Dirichlet boundary condition in X 0 . Delicate cancelations between worldsheet diagrams of different topologies are seen to render the instanton amplitudes well defined and agree with the proposed matrix model dual [1,2]. Multi-instanton contributions, as will be discussed below, are subject to further complications in the integration over the instanton moduli space. Firstly, as outlined in the introduction, at the n-instanton level (n ≥ 2), there can be different types of ZZ-instantons that do not lie in a single connected moduli space of exactly marginal deformations of boundary conditions, all of which contribute to the order e −n/gs closed string amplitude. These will be described in detail in section 3.1. Secondly, there is a nontrivial integration measure factor over the instanton moduli space, computed by the vacuum diagram with boundaries ending on the ZZ-instantons. We will see that the measure factor develops a pole when an open string mode stretched between a pair of ZZ-instanton becomes on-shell (or "massless"). A contour prescription for handling the integration near the poles will be given in section 3.2. We will apply this prescription to compute the closed string amplitudes at n = 2, 3, 4 instanton levels and find remarkable agreement with the matrix model result (2.10). 
ZZ boundary conditions and instantons Conformal boundary conditions of Liouville CFT come in two types: FZZT [18,19] and ZZ [13]. The former corresponds to a semi-infinite partially-space-filling brane, whereas the latter corresponds a point-like brane localized in the strong coupling region. In this work we are concerned with D-instantons of finite action, described by ZZ boundary condition in the Liouville CFT tensored with Dirichlet boundary condition in X 0 , and direct sums thereof. It was shown in [13] that there is a discrete family of ZZ boundary conditions, which we refer to as the (m, n)-type ZZ boundary condition, labeled by a pair of positive integers m and n. In a unitary Liouville theory, only the (1, 1) ZZ boundary condition supports a unitary spectrum of boundary operators. This gives rise to the so-called ZZ-branes which have been discussed extensively in the context of c = 1 string theory [7,20]. The ZZinstanton constructed from the (1, 1) ZZ boundary condition was considered in the oneinstanton analysis of [1]. At the multi-instanton level, we will see that the (m, n) ZZ boundary conditions give rise to a more general class of ZZ-instantons whose effect on closed string amplitudes should be taken into account. The (1, 1) ZZ boundary condition may be defined as the conformal boundary condition in Liouville CFT that supports the identity operator as the only boundary Virasoro primary. The (m, n) ZZ boundary condition has the property that the only boundary Virasoro primary that interpolates between the (1, 1) and (m, n) ZZ boundary condition corresponds to the degenerate representation of the boundary Virasoro algebra labeled by (m, n). In the c = 25 case, such a degenerate primary has weight 1 − (m+n) 2 4 , and Virasoro character where η(τ ) is the Dedekind eta-function and q = e 2πiτ . The (m, n)-type ZZ boundary state takes the form where |V P is the Ishibashi state constructed from the bulk Liouville primary V P . Consideration of the cylinder partition function with (1,1) ZZ boundary condition on one side and (m, n) on the other, where χ h (τ ) is the c = 25 Virasoro character for a primary operator of weight h, determines Ψ (m,n) (P ) to be As (3.6) is invariant under the exchange of m with n, we will restrict to m ≥ n from now on. All boundary structure constants can be bootstrapped from crossing relations among boundary correlators [13,21], although we will not make explicit use of them in this paper. The spectrum of boundary operators interpolating between the (m, n) and (m , n ) ZZ boundary conditions is given by the cylinder partition function (3.7) In c = 1 string theory, a single ZZ-instanton of type (m, n) located at time X 0 = x is described by the matter CFT boundary state |ZZ(m, n) Liouville ⊗|D(x) X 0 . More generally, one can consider direct sums of such boundary states. The action of the (1, 1) ZZ-instanton, S (1,1) , is related to the mass of the (1, 1) ZZ-brane M (1,1) by [1,7] Upon analytic continuation to P → i (so that ∆ P → 0), the disc 1-point function Ψ(P ) = V P |ZZ is proportional to the "empty disc" diagram which can be identified with minus the instanton action. 
This allows us to determine the action of the (m, n) ZZ-instanton to be For later use we also record the disc 1-point diagram with the closed string insertion V ± ω , with boundary on the (m, n) ZZ-instanton (generalizing (2.8) of [1]), In fact, we will see in section 3.5 that, agreement with the matrix model dual suggests that only the (m, 1) ZZ-instantons give rise to non-perturbative corrections to the closed string amplitudes of consideration. A multi-ZZ-instanton configuration described by the direct sum of boundary states |ZZ(m i , 1) Liouville ⊗ |D(x i ) X 0 , i = 1, · · · , , will be referred to as an instanton of type {m 1 , ..., m }. The moduli space of such instantons is parameterized by the collective coordinates x 1 , · · · , x . The instanton measure The instanton-mediated non-perturbative correction to the closed string amplitude is computed by worldsheet diagrams with boundaries on the D-instantons, integrated over the instanton moduli space in the form (1.2), with a suitable measure factor µ that is a function of the instanton collective coordinates. Unlike in the path integral formulation of quantum field theories, where the instanton measure can be derived by integrating over fluctuations around the instanton solution, such a derivation is not available for the D-instanton. Nonetheless, one expects that the instanton measure µ is computed by exponentiating open string vacuum diagrams of one-loop and higher orders, as (1.3). In the one-instanton case considered in [1], the measure factor is a constant (by time-translation invariance), and may be viewed as a renormalization of the instanton action. In the multi-instanton case, however, the measure factor µ depends nontrivially on the relative position of the ZZ-instantons in Euclidean time. At order g 0 s , µ is computed by exponentiating the cylinder diagram, which we will now analyze. One class of cylinder diagrams has both boundaries on the same ZZ-instanton, say of type (m, n). Such diagrams are formally independent of the instanton collective coordinate x, and is furthermore divergent. We do not know of a canonical regularization scheme of such diagrams in the worldsheet formalism. Instead, we will assume that such diagrams can be absorbed into an overall normalization constant N (m,n) associated with the integration over the collective coordinate of the ZZ-instanton of type (m, n), and will determine N (m,n) by comparison with the dual matrix model. 3 In fact, we will find that only ZZ-instantons of type (m, 1) contribute, and will use the notation N (m,1) ≡ N m . The cylinder diagrams with two boundaries on different ZZ-instantons (which may or may not be of the same type), on the other hand, can be evaluated unambiguously. Let us begin by considering the cylinder diagram between two (1, 1) ZZ-instantons located at Euclidean times x E 1 and x E 2 respectively. The free boson cylinder partition function is given by e −t (∆x E ) 2 2π /η(it), where t parameterizes the modulus of the cylinder and ∆x E ≡ x E 1 − x E 2 is the separation of the two ZZ-instantons in Euclidean time. The Liouville cylinder partition function with ZZ boundary condition is (e 2πt − 1)/η(it), as follows from a special case of (3.3) and (3.7). Combining with the bc ghost contribution η(it) 2 , we obtain the cylinder amplitude where the moduli integral is performed with the assumption |∆x E | > 2π. 
When |∆x E | < 2π, the lowest open string mode stretched between the two ZZ-instantons becomes "tachyonic", and we will define the cylinder amplitude by analytic continuation from the |∆x E | > 2π regime. Exponentiating the cylinder amplitude as in (1.3) then gives the order g 0 s measure factor on the moduli space of two (1, 1) ZZ-instantons, (3.12) Furthermore, a symmetry factor 1 2 should be included due to the indistinguishability of the two ZZ-instantons. Note that in the ∆x E → 0 limit, the factor (∆x E ) 2 in the measure can be interpreted as the Vandermonde determinant in gauge fixing the non-Abelian coordinate of two instantons to the diagonal form. At ∆x E = ±2π, the stretched open string mode becomes on-shell and (3.12) develops a pole. A contour prescription is needed to define the eventual integration over instanton collective coordinates in a way that circumvents the pole. Our prescription will be simply to analytic continue (3.12) to Lorentzian ∆x, and integrate the worldsheet diagram along the real x i -contour. 4 A similar analysis extends to pairs of ZZ-instantons of the more general type (m, n). For example, the cylinder diagram between a (1,1) and an (n, 1) ZZ-instanton (with n = 1) evaluates to 13) where the dashed and solid boundaries on the LHS correspond to the (n, 1) and (1, 1) ZZ boundary conditions respectively. On the other hand, the cylinder diagram between a pair of (n, 1) ZZ-instantons gives (3.14) With a Lorentzian integration contour in the x i 's, all poles in the measure factor are avoided. 2-instanton corrections to the closed string 1 → k amplitude We begin by considering the leading correction to the closed string 1 → 1 (reflection) amplitude at the 2-ZZ-instanton level, of order e −2/gs . There are two types of contributions, namely two (1, 1) ZZ-instantons, and a single (2, 1) ZZ-instanton. In the case of two (1, 1) ZZ-instantons, the worldsheet diagram consists of two discs, each with one closed string vertex operator insertion. The boundaries of the two discs may lie on the two separate ZZ-instantons at times x 1 and x 2 , or both boundaries may lie on the same ZZ-instanton, either at x 1 or at x 2 . The relevant disc 1-point diagram is evaluated in (3.10). We then integrate over x 1 , x 2 with the measure factor (3.13) along the Lorentzian contour. The contribution from the two discs ending on the same ZZ-instanton, say the one at x 1 , is given by (3.15) However, the integration over large ∆x is linearly divergent. This is in fact due to an overcounting. Namely, we should normalize all amplitudes by the vacuum amplitude, which itself contains ZZ-instanton contributions. We must then subtract from (3.15) a "disconnected two-instanton amplitude" in which the second ZZ-instanton merely contributes to the vacuum amplitude. This amounts to replacing the integrand on the RHS of (3.15) by There is an identical contribution coming from both discs ending on the ZZ-instanton at x 2 . The contribution from two discs ending on the two separate ZZ-instantons, on the other hand, is given by The contribution from a single (2, 1) ZZ-instanton comes from two disconnected discs, each with one closed string insertion, subject to the same boundary condition. 
After integrating out the collective coordinate, the result is Putting these together, we obtain the total 2-instanton contribution to the closed string 1 → 1 amplitude, , comparison with the matrix model result (2.10) yields (3.20) It is useful to organize the 2-instanton computation according to Figure 2, where the subtraction of disconnected instanton diagram is indicated. While the subtraction scheme is fairly simple in the 2-instanton case, it will become progressively more complicated at higher instanton numbers. The generalization of the above computation to 1 → k closed string amplitude at order e −2/gs is straightforward. Let ω label the total energy, and ω 1 , · · · , ω k the energies of outgoing closed strings. The subtraction of disconnected diagrams is similar to the 1 → 1 case. The contribution from a pair of (1, 1) ZZ-instantons to the 1 → k amplitude is (3.21) The integral in the last line can be evaluated as where we have defined S = {ω 1 , ..., ω n }, S 1 , S 2 are disjoint subsets of S such that S 1 S 2 = S, and ω(S i ) = ω ∈S i ω . {1, 1, 1} ZZ-instantons We begin with the case of three (1, 1) ZZ-instantons, located at time coordinates The worldsheet diagram at order e −3/gs is again given by a pair of discs, each containing one closed string vertex operator, such that the boundaries of the discs lie on one or two out of the three instantons. Extra care must be taken in subtracting off the disconnected instanton diagrams so as to normalize the vaccum amplitude, shown schematically in Figure 3. The first two subtractions are due to the diagram with a disconnected instanton of type {1}, whereas the third subtraction is due to a disconnected instanton of type {1, 1}. The last term in the bracket takes care of the over-subtraction of diagrams with two disconnected instantons of type {1}. The contribution from a pair of discs ending on two separate (1, 1) ZZ-instantons is computed by {2, 1} ZZ-instantons Next, we consider a (2, 1) ZZ-instanton at time x 1 and a (1, 1) ZZ-instanton at time x 2 . The measure factor is computed by the cylinder diagram between these two boundary conditions, as in (3.13). We should also subtract off diagrams with a disconnected instanton, either of (2, 1) or (1, 1) type, as shown in Figure 4. The contribution from a pair of discs ending on the same ZZ-instanton is given by where the contribution from a pair of discs ending on the two different ZZ-instantons is (3.28) After evaluating the ∆x-integral, we find the total contribution from {2, 1} ZZ-instanton configuration to be {3} ZZ-instanton Finally, the contribution from a single (3, 1) ZZ-instanton at time x is given by where the normalization factor N 3,1 is so far undetermined. Combining (3.26), (3.29), and (3.30), the order e −3/gs contribution to the 1 → 1 closed string amplitude remarkably agrees with the matrix model result (2.10) provided that we make the identification . In view of (3.9), one may further suspect that a single ZZ-instanton of type (2, 2) could contribute at this order (not to be confused with {2, 2}, which means two ZZ-instantons of type (2, 1)). We will find a remarkable agreement of the total result with the matrix model, provided a suitable choice of the measure normalization factor N 4 for the (4, 1) ZZ-instanton, and surprisingly, if we assume that the (2, 2) ZZ-instanton does not contribute, i.e. N (2,2) = 0. {1, 1, 1, 1} ZZ-instantons We begin with four (1, 1) ZZ-instantons, located at times x 1 , x 2 , x 3 , x 4 . 
The worldsheet diagrams again involve a pair of discs, with boundaries ending on either one or two out of the four instantons. The subtraction of disconnected diagrams is summarized schematically in Figure 5. {2, 2} ZZ-instantons The contribution from a pair of (2, 1) ZZ-instantons is evaluated similarly to the case of section 3.3 as The absence of (2, 2)-type ZZ-instanton contribution leads us to suspect that in fact N (k, ) = 0 whenever k, ≥ 2 (recall that (k, ) and ( , k) ZZ-boundary conditions are equivalent in c = 25 Liouville theory), i.e. only the ZZ-instantons of type (m, 1) can contribute to closed string amplitudes in c = 1 string theory. We will confirm this by extending the computation of the closed string 1 → 1 amplitude to order e −n/gs for all n. Closed string reflection amplitude to all instanton orders In the worldsheet description, the order e −n/gs contributions to the 1 → 1 amplitude of closed strings come from all ZZ instanton configurations of type {m 1 , ..., m }, consisting of an (m i , 1) ZZ-instanton located at time x i , for each i = 1, · · · , , subject to i=1 m i = n. The worldsheet diagram with two discs whose boundaries lie on two different ZZ-instantons, say the ones at x 1 and x , is computed by where the integration contour C is taken along Lorentzian times x i , for real energy ω of the closed string state. The cylinder diagram between an (m 1 , 1) and an (m 2 , 1) ZZ-instanton at Euclidean time separation ∆x E evaluates to (4.2) Analytically continuing (4.2) to Lorentzian times, and using (3.10), we can write (4.1) explicitly as 4 sinh(m 1 πω) sinh(m πω) The integrals over y 2 , · · · , y −1 in (4.3) are linearly divergent at large y i 's. As already seen in section 3, such divergences are cured by subtracting off disconnected diagrams that correspond to instanton corrections to the vacuum amplitude. In fact, we can conveniently take into account these subtractions by deforming the contour C to either R + i∞ or R + i∞ for various terms in the integrand, and simply keep the residue contributions while discarding the contour at infinity. Let us first consider the integral over y −1 . The poles in y −1 are located at where we defined y ≡ 0. After deforming the y −1 contour and discarding the contribution at infinity, we pick up the residue contribution, which now contains a set of poles in y −2 at (4.5) Note that some other potential poles in y −2 are canceled by zeroes in the numerator of the We can iterate this procedure and integrate out y 2 , · · · , y −2 . The remaining integrand in y 1 has poles at · · · ± iπ(m 1 + m + 2m 2 + ... + 2m −1 ), (4.6) where the i k 's are a set of distinct indices ranging from 2 to − 1. We will refer to the poles at ±iπ(m 1 + m ) as the 0-th kind, the poles at ±iπ(m 1 + m + 2m i 1 ) as the first kind, the poles at ±iπ(m 1 + m + 2m i 1 + 2m i 2 ) as the second kind, and so forth. The residue of the last line of (4.3) at any one of the poles of the k-th kind on the upper half y 1 -plane appears to be given by the formula (4.7) While we have not proven (4.7) in general, we have verified it explicitly for the case of {m 1 , ..., m ≤5 } and {1, 1, 1, 1, 1, 1} instanton configurations. 
Using this, we can evaluate the y 1 -integral as a sum over residues, C dy 1 cos(ω 1 y 1 ) where For an {m 1 , ..., m } instanton configuration, we must sum over all possible assignment of boundary conditions for the worldsheet diagram, namely the i-th ZZ-instanton for the first disc and the j-th ZZ-instanton for the second disc, i = j, as well as a pair of discs whose boundaries lie on the same ZZ-instanton. So far we have not explicitly discussed the latter case. In fact, one can verify that (somewhat surprisingly) for a pair of discs ending on the first instanton (of type m 1 ), after suitable subtraction of disconnected instanton diagrams, the analogous moduli integral simply evaluates to (4.8) with the replacement ω 1 → 0. In this case one should also replace m by m 1 in the prefactor of (4.3) and include an overall factor of 1 2 (since two different diagrams are accounted for in (4.3)). Taking into account the prefactors in (4.3), we arrive at the following result for the {m 1 , · · · , m } ZZ-instanton contribution to the reflection amplitude at order e −n/gs (n = (4.9) Here S is the symmetry factor of the ZZ-instanton configuration, defined as S = a a ! where a is the number of m i 's that are equal to a. The last sum in the second line is taken over all subsets S (k) ij of {1, · · · , } − {i, j} with k elements. We have also defined m(S The sum in the first line of (4.9) represents the contribution from diagrams in which both discs end up on the same ZZ-instanton. The second line of (4.9), coming from pairs of discs that end on different ZZ-instantons, can be simplified via the identity e πω(n−2m i ) + e −πω(n−2m i ) . (4.10) Using this and applying some simple rearrangements to (4.9), we arrive at a compact expression for the full ZZ-instanton contribution to the closed string reflection amplitude at order e −n/gs , 11) where the sum is taken over all (unordered) partitions {m 1 , · · · , m } of the integer n. Let us compare this with the matrix model result (2.9) specialized to the 1 → 1 amplitude (expanding out the hypergeometric function 2 F 1 ) The sinh(πωn)e πωn term in (4.11) comes from the {n} ZZ-instanton only. Matching its coefficient against that of (4.12), we fix N n to be This agrees with the results of section 3 explicitly computed for n up to 4. A term proportional to sinh(πωn)e πω(n−2k) in (4.11) comes from the sum over partitions {m 1 , · · · , m } with at least one m i = n−k, for 0 ≤ k ≤ n−1. We can reduce such a restricted sum to one that is over partitions {m 1 , · · · , m −1 } of the integer k. One can then verify that (4.11) and (4.12) are in complete agreement using (4.13) as well as the combinatorial identity which we have verified numerically. This also confirms our hypothesis that ZZ-instantons of type (m, r) with m, r ≥ 2 do not contribute, extending the result (3.38). Discussion We have extended the analysis of [1] to include the effects of multiple ZZ-instantons in c = 1 string theory. Guided by a simple proposal of the non-perturbative matrix model dual [1], we presented a detailed prescription for computing multi-instanton contributions to closed string amplitudes from the worldsheet perspective. The ingredients can be summarized as follows. The general D-instanton configuration that contributes to the closed string scattering involve k ZZ-instantons of type (m 1 , 1), · · · , (m k , 1), located at times x 1 , · · · , x k . 
The integration measure in x k is computed by the partition function of open strings stretched between the ZZ-instantons, up to an overall normalization constant N m for each type (m, 1), determined by comparison with the matrix model to be (4.13). The integration over the instanton moduli space is performed along the "Lorentzian contour," namely over real Lorentzian time coordinates x 1 , · · · , x k , so as to avoid the poles in Euclidean times. The worldsheet diagrams that contribute are those with boundaries that lie on the ZZinstantons. The leading contribution at the n-instanton level comes from diagrams that involve multiple disconnected discs, each with one closed string vertex operator insertion, such that the boundaries of the discs reside on a subset of the ZZ-instantons. To compute subleading corrections in g s , which has only been analyzed explicitly in the n = 1 case in [1], would require the Fischler-Susskind-Polchinski mechanism for cancelation of divergences between worldsheet diagrams of different topologies [16,22,23]. Furthermore, to fix a finite constant ambiguity in the cancelation of divergence requires carefully dividing up the moduli space of punctured Riemann surfaces with boundaries using string field theory [2]. In this paper, we explicitly computed the leading n-instanton contributions to the 1 → k closed string amplitude for n = 2, and to the 1 → 1 closed string amplitude for all n. In these computations the FSP mechanism is not required, and the main subtlety has to do with the computation of the measure on the instanton moduli space, the choice integration contour, and the subtraction of disconnected diagrams in order to normalize the vacuum amplitude. In the end, we found striking agreement with the proposed matrix model dual. One of the surprises uncovered by our computation is that, to correctly account for the non-perturbative corrections in the matrix model proposal of [1], we must take into account not only multiple (1, 1) ZZ-instantons, but also the (m, 1) ZZ-instantons with m ≥ 2, even though the latter are constructed from non-unitary ZZ boundary conditions in the c = 25 Liouville theory [13]. On the other hand, our results suggest that the more general ZZinstantons of type (n, m) with n, m ≥ 2 do not contribute to closed string amplitudes. It would be good to understand the reason behind this. Another unusual feature of our computation is the choice of Lorentzian contour in the integration over the ZZ-instanton collective coordinates, namely their locations in time. This was partially motivated by the fact that the measure on the instanton moduli space has poles at Euclidean time separations, where open string tachyons stretched between ZZ-instantons become on-shell. In the computation of perturbative string amplitudes it is often useful to consider the analytic continuation to complex energies. If we Wick rotate the energies of the external closed string states to that of Euclidean signature, we can maintain the analyticity of instanton amplitudes by rotating the integration contour in the instanton collective coordinates toward a Euclidean one, provided that no poles are crossed. In other words, at imaginary energies we may equivalently work with the Euclidean contour, defined in a way that circumvents the poles (from either above or below, as dictated by continuity of the rotation from the Lorentzian contour). The choice of instanton integration contour is tied to the breaking of time-reversal symmetry at the non-perturbative level. 
From the matrix model perspective, the proposed closed string vacuum state |Ω is such that the fermions occupy all |E R with E ≤ −µ and none of the |E L states. This choice breaks time-reversal symmetry. 5 This is also seen explicitly in that the instanton amplitudes do not obey the perturbative crossing symmetry relations [12] upon analytically continuing ω → −ω. There are two important simplifications that are special to c = 1 string theory underlying 5 We thank Edward Witten for bringing this point to our attention. our analysis. The first is that the perturbative expansions of closed string amplitudes are Borel summable [1] (assuming the perturbative duality with the matrix model). This renders the instanton corrections, on top of the Borel-resummed perturbative answer, unambiguously defined. The second is the simplicity of the moduli space of D-instantons, and in particular the absence of singularity in limits where multiple D-instantons collide. Neither of these features are expected to hold in, say, the ten-dimensional type IIB string theory. Nonetheless, we hope our analysis will pave the way toward understanding the effect of D-instantons more generally.
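The sums over instanton sectors in section 4 run over unordered partitions {m_1, ..., m_l} of n, each weighted by the symmetry factor S = prod_a (l_a!), where l_a counts how many of the m_i equal a. The snippet below is a minimal bookkeeping sketch of that enumeration only; it does not evaluate the amplitudes themselves, and the function names are ours rather than anything defined in the paper.

```python
from collections import Counter
from math import factorial

def partitions(n, max_part=None):
    """Yield the unordered partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def symmetry_factor(parts):
    """S = prod_a (l_a!) with l_a the multiplicity of the part a."""
    s = 1
    for multiplicity in Counter(parts).values():
        s *= factorial(multiplicity)
    return s

# Instanton sectors contributing at order exp(-n/g_s), e.g. n = 4:
# (4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)
for parts in partitions(4):
    print(parts, "symmetry factor:", symmetry_factor(parts))
```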
2019-12-16T03:06:57.000Z
2019-12-16T00:00:00.000
{ "year": 2019, "sha1": "d6142f1cc22552a30d3a135ce5457196920f49af", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d6142f1cc22552a30d3a135ce5457196920f49af", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235813226
pes2o/s2orc
v3-fos-license
Decreased piRNAs in Infertile Semen Are Related to Downregulation of Sperm MitoPLD Expression Currently, the molecular mechanisms underlying male infertility are still poorly understood. Our previous study demonstrated that PIWI-interacting RNAs (piRNAs) are downregulated in the seminal plasma of infertile patients and can serve as molecular biomarkers for male infertility. However, the source of these piRNAs and the mechanism of their dysregulation remain obscure. In this study, we found that exosomes are present at high concentrations in human seminal plasma and confirmed that piRNAs are predominantly present in the exosomal fraction of seminal plasma. Moreover, we showed that piRNAs were significantly decreased in exosomes of asthenozoospermia patients compared with normozoospermic men. By systematically screening piRNA profiles in the sperm of normozoospermic men and asthenozoospermia patients, we found that spermatozoal piRNAs were reduced in parallel in infertile patients. Finally, we investigated the expression of several proteins essential for piRNA biogenesis in sperm and identified a tight correlation between the levels of spermatozoal piRNAs and MitoPLD protein, suggesting that loss of MitoPLD function could cause a severe defect in piRNA accumulation in sperm. In summary, this study identified a parallel reduction of piRNAs and MitoPLD protein in the sperm of asthenozoospermia patients, which may provide pathophysiological clues about sperm motility. INTRODUCTION Infertility is a prevalent health problem and affects nearly 15% of couples all over the world (1,2). Male factors contribute to about 50% of cases in childless couples. As a complex disease, male infertility is caused by a combination of genetic and environmental factors, but the underlying molecular mechanisms have not yet been elucidated (3)(4)(5). piRNAs are approximately 26-31 nucleotides in length and are expressed mainly in pachytene spermatocytes and round spermatids in the mammalian testis (6)(7)(8)(9). They are named PIWI-interacting RNAs because of their close relationship with the PIWI subfamily members. Two pathways for piRNA biogenesis have been identified, namely the primary and secondary pathways (10)(11)(12). The primary pathway is thought to produce piRNAs (primary piRNAs) from long single-stranded piRNA precursors, which are derived from genomic regions called piRNA clusters. The mechanism of primary piRNA generation is not well understood, but a mitochondrial protein, MitoPLD (also known as Zucchini or PLD6), a member of the nuclease/phospholipase D family, has been proposed to function as an endonuclease that generates the 5' ends of piRNAs (13)(14)(15). In the secondary pathway, piRNAs (secondary piRNAs) are produced by ping-pong amplification cycles from the 5' portions of RNA fragments cleaved by PIWI-piRNA complexes. The piRNAs from the primary and secondary pathways guide each other's production in the ping-pong cycle to accelerate piRNA production (12,16,17). Currently, the main proposed function of piRNAs is to protect germline and gonadal somatic cells by suppressing the harmful expression of transposable elements, thus maintaining the genomic integrity of germ cells. Increasing evidence has also shown that piRNAs may be involved in the post-transcriptional regulation of protein-coding genes (18)(19)(20). Because of the diverse and pivotal roles of piRNAs in the male reproductive system, dysregulation and dysfunction of piRNAs often cause male infertility.
Since piRNAs are specifically expressed in germ cells and are essential for spermatogenesis, it is not surprising that the levels of spermatozoal piRNAs are directly correlated with semen quality and male fertility. We previously showed that the concentration of seminal plasma piRNAs was significantly decreased in infertile patients compared with normozoospermic men, and several specific piRNAs in seminal plasma were even identified as molecular biomarkers for male infertility (3). However, the source of seminal plasma piRNAs remains elusive, and the cause of the massive reduction of piRNAs in the seminal plasma of infertile patients has not been definitively identified. The extracellular RNA profile of human semen was comprehensively characterized in a recent study, and a great number of small RNAs were found within seminal exosomes (21). Thus, it is rational to speculate that piRNAs in seminal plasma are mostly derived from exosomes secreted by the male germ cells. In this study, we validated that the majority of piRNAs are present within the exosomal fraction of seminal plasma. Moreover, we showed that the types and levels of piRNAs were significantly decreased in the sperm of asthenozoospermia patients compared with normozoospermic men. Finally, we identified a tight correlation between the levels of spermatozoal piRNAs and MitoPLD protein, suggesting that loss of MitoPLD function could cause severe defects in piRNA accumulation in sperm. Semen Samples Semen samples were provided by Nanjing Drum Tower Hospital, and all protocols in this study were approved by the Medical Ethics Committees of Hangzhou Medical College and Nanjing Drum Tower Hospital. Informed consent was signed by both normozoospermic volunteers and asthenozoospermia patients before sample collection. The study recruited 42 asthenozoospermia patients with infertility lasting more than 2 years and 41 normozoospermic volunteers who had achieved natural conception within 1-2 years. The demographic characteristics of all participants are listed in Table 1. The volunteers in this study did not receive any treatment, and semen samples were analyzed in the Reproductive Laboratory of Nanjing Drum Tower Hospital. Sample Preparation Semen samples were obtained by masturbation after 3-5 days of abstinence, transferred into a 15 mL centrifuge tube (Corning), and liquefied for 30 min at 37°C. Sperm concentration and viability were assessed with a sperm analysis system (SAS Medical). Routine semen analysis was based on the World Health Organization (WHO) criteria (22). Sperm isolated from the semen samples by centrifugation at 3000 rpm for 5 min at room temperature were resuspended in PBS and stored at -80°C for further protein analysis. Isolation of Exosomes From Seminal Plasma Differential centrifugation was employed to isolate exosomes from seminal plasma, which was obtained by centrifuging the semen samples (850 g, 5 min at room temperature). In brief, cell debris was first removed by spinning at low speed (3,000 g for 30 min). Then, shedding vesicles and other larger vesicles were removed by centrifugation at 10,000 g for 30 min. Finally, the exosome pellets were collected by centrifugation at 110,000 g for 70 min and re-suspended in PBS buffer, and the supernatant was kept as exosome-free seminal plasma. All procedures were carried out at 4°C. Transmission Electron Microscopy Assay (TEM) The morphology of the exosomes was imaged by TEM.
Briefly, the exosome pellet was fixed in 2.5% glutaraldehyde overnight at 4°C, rinsed with PBS and post-fixed with 1% osmium tetroxide for 1 h at room temperature. The pellet was then embedded in 10% gelatin, fixed with glutaraldehyde at 4°C and cut into blocks. Subsequently, the pellet was dehydrated by 10-min incubations in a graded alcohol series (30%, 50%, 70%, 90%, 95% and 100%, 3 times), incubated with propylene oxide, and infiltrated with increasing concentrations of Quetol-812 epoxy resin mixed with propylene oxide (25%, 50%, 75% and 100%). Finally, the samples were embedded in pure, fresh Quetol-812 epoxy resin, polymerized at increasing temperatures for 12-24 h (35°C for 12 h, 45°C for 12 h and 60°C for 24 h), cut into ultrathin sections on a Leica UC6 ultramicrotome, and stained with uranyl acetate (10 min) and lead citrate (5 min) at room temperature. The samples were imaged by TEM (FEI Tecnai T20) at a voltage of 120 kV. Illumina High-Throughput Sequencing Total RNA from pooled sperm samples of normozoospermic men and asthenozoospermia patients (each pooled from 10 individuals) was prepared using TRIzol Reagent (Takara, Dalian, China). About 1~2 μg of quantified total RNA was subjected to high-throughput sequencing on an Illumina NextSeq 500 system according to the manufacturer's instructions. After data analysis, sequences corresponding to known piRNAs were identified by perfect sequence matching to the piRNA database piRNABank (http://pirnabank.ibab.ac.in/). All data have been uploaded to the GEO database (accession number GSE172486). RNA Isolation and qRT-PCR Assays TRIzol Reagent (TaKaRa, Dalian, China) was used to isolate total RNA from sperm, exosomes and exosome-free seminal plasma. Briefly, sperm derived from 1 mL of semen, exosomes from 100 μL of seminal plasma, or 100 μL of exosome-free seminal plasma were mixed with 1 mL TRIzol. After vigorous vortexing for 10 s, the mixture was incubated with 200 μL of chloroform for 10 min on ice. The RNA-containing phase was transferred to a fresh RNase-free tube after centrifugation at 16,000 g for 20 min at 4°C, and the aqueous phase was then incubated with an equal volume of isopropanol at -20°C for 1.5 h to precipitate the RNA. The isolated RNA was collected by centrifugation (16,000 g, 4°C, 20 min), washed once with 75% ethanol and dried for 20 min at room temperature. Finally, the RNA was dissolved in 20 μL RNase-free H2O and stored at -80°C for further analysis. Total RNA (2 μL) was reverse transcribed to cDNA using AMV reverse transcriptase according to the manufacturer's instructions. Then, 1 μL of cDNA was used for qRT-PCR analysis on a Roche LightCycler 480 PCR system. TaqMan piRNA probes (GenePharma, Shanghai, China) were used to measure piRNA levels in this study. All reactions were performed in triplicate, and RNU6-6P was used as an internal control. Protein Extraction and Western Blotting Sperm were lysed in RIPA lysis buffer with freshly added PMSF for 30 min on ice, and sonication (Sonics & Materials Inc., VCX 130 PB) was used to facilitate sperm cell disruption. Insoluble debris was removed by centrifugation at 16,000 g and 4°C for 10 min. The protein concentration was quantified using a BCA protein assay kit (Thermo Scientific, Rockford, IL, USA).
The proteins of interest were detected using specific antibodies, including anti-MitoPLD (ab170183) and anti-PIWIL1 (ab12337) antibodies purchased from Abcam (Cambridge, MA, USA); a β-actin antibody (sc-69879) purchased from Santa Cruz (Dallas, TX, USA) served as the loading reference. Western blot images were acquired with a Bio-Rad ChemiDoc imaging system, and ImageJ was used for densitometric analysis. Statistical Analysis All images are representative of at least three different experiments. The data shown are the mean ± SE of at least three independent experiments. Student's t-test was used for statistical analysis, and p values < 0.05 (indicated by *), < 0.01 (indicated by **) or < 0.001 (indicated by ***) were considered statistically significant. Characterization of Exosomes From Seminal Plasma We purified exosomes from the seminal plasma of normozoospermic men and asthenozoospermia patients by ultracentrifugation and examined particle size and morphology by transmission electron microscopy. Under electron microscopy, the exosomes isolated from both normozoospermic men and asthenozoospermia patients appeared as lipid bilayer-bound single particles and small clumps of particles 30-150 nm in diameter, consistent with the expected size range of exosomes (Figure 1A). The exosomes isolated from normozoospermic men and asthenozoospermia patients were further identified by the presence of equal amounts of universal exosomal markers (CD63 and TSG101) on immunoblotting (Figure 1B). Moreover, the particle size and concentration of exosomes were determined by nanoparticle tracking analysis. Exosomes from normozoospermic men had an average diameter of 123 nm, most exosomes (> 85%) ranged from 80 to 150 nm, and the exosome concentration was 2.89 × 10^9 particles/mL (Figure 1C). For exosomes from asthenozoospermia patients, the mean diameter was 121 nm and the concentration was 3.0 × 10^9 particles/mL (Figure 1D). These results confirmed that exosomes are present at high concentrations in human seminal plasma, but that the size range and concentration of exosomes did not differ between normozoospermic men and asthenozoospermia patients. Differentially Expressed piRNAs in Seminal Plasma Exosomes A previous study identified a great amount of small RNA in the exosomal fraction of seminal plasma (21). In this study, we compared the ratio of piRNA levels in exosomes to those in exosome-free seminal plasma. We selected piR-1207, piR-2107, piR-5937 and piR-5939 as representative piRNAs and measured their levels in exosomes and exosome-free seminal plasma by quantitative RT-PCR, because our previous study had identified these piRNAs as significantly downregulated in the seminal plasma of asthenozoospermia patients compared with normozoospermic men (3). piR-1207, piR-2107, piR-5937 and piR-5939 were mainly stored in exosomes (Figure 2A), suggesting that the majority of piRNAs are present in the exosomal fraction of seminal plasma. Meanwhile, we also compared the levels of these piRNAs in exosomes between normozoospermic men and asthenozoospermia patients. The results showed that piR-1207, piR-2107, piR-5937 and piR-5939 were significantly reduced in exosomes from asthenozoospermia patients compared with normozoospermic men (Figure 2B). These results indicate that the reduction of piRNAs in exosomes contributes to the reduction of piRNAs in the seminal plasma of asthenozoospermia patients.
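As an illustration of the group comparison summarized in the Statistical Analysis section above (an unpaired Student's t-test with the star notation used in the figures), a minimal Python sketch is given below. The numerical values are placeholders, not measurements from this study, and the variable names are illustrative assumptions.

import numpy as np
from scipy import stats

# Placeholder relative piRNA levels for two groups (not data from the paper).
normo = np.array([1.00, 0.92, 1.10, 0.98, 1.05])    # normozoospermic men
astheno = np.array([0.45, 0.51, 0.38, 0.60, 0.42])   # asthenozoospermia patients

t_stat, p_value = stats.ttest_ind(normo, astheno)     # unpaired two-sample t-test

def stars(p):
    """Return the significance annotation convention used in the figures."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

print(f"mean ± SE (normo): {normo.mean():.2f} ± {stats.sem(normo):.2f}")
print(f"mean ± SE (astheno): {astheno.mean():.2f} ± {stats.sem(astheno):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f} {stars(p_value)}")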
Profiling Sperm piRNAs by High-Throughput Sequencing Although piRNAs were significantly reduced in the exosomes of asthenozoospermia patients, the source of these piRNAs and the mechanism of their dysregulation remained obscure. piRNAs are expressed abundantly in pachytene spermatocytes and round spermatids (23). Given that exosomes are likely secreted from multiple cellular sources in the male genital tract, we speculated that piRNAs may be actively secreted from spermatocytes during spermatogenesis. However, it is very difficult to obtain spermatocytes from asthenozoospermia patients. As an alternative, we systematically characterized the piRNA profiles of mature sperm. Total RNA was extracted from pooled sperm samples of normozoospermic men and asthenozoospermia patients (each pooled from 10 individuals), qualified by agarose gel electrophoresis and quantified with a NanoDrop. Subsequently, a commercial kit was used for piRNA-seq library preparation, which included 3'-adapter and 5'-adapter ligation, cDNA synthesis and library PCR amplification. The prepared piRNA-seq libraries were quantified on an Agilent BioAnalyzer 2100 and then sequenced on an Illumina NextSeq 500. After removal of the 5' and 3' adaptor sequences and alignment to piRBase, a total of 8,245,354 and 4,220,714 piRNA reads were obtained in the sperm of the normozoospermic group and the asthenozoospermia patient group, respectively (Figure 3A). Moreover, the number of piRNA types decreased from 17,657 in the sperm of the normozoospermic group to 15,742 in the asthenozoospermia patient group (Figure 3B). Analysis of the length distribution revealed that the sperm of both groups contained a population of small RNAs whose size was consistent with the common size of piRNAs (25-32 nucleotides) (Figures 3C, D). Next, we narrowed down the list of piRNAs. First, we selected 33 known, highly expressed piRNAs with more than 20,000 sequencing reads in the normozoospermic group and performed heatmap analysis on this set; the results showed that 17 piRNAs had at least 2-fold more reads in the normozoospermic group than in the asthenozoospermia patient group (Figure 3E, Supplementary Table 1). Subsequently, we compared the asthenozoospermia patient group with the normozoospermic group by scatter plot, considering piRNAs with more than 1,000 sequencing reads in the normozoospermic group. The results, shown in Figure 3F and Supplementary Table 2, indicate remarkably different piRNA expression levels between the asthenozoospermia patient group and the normozoospermic group. Overall, the comparison of sperm piRNA profiles revealed a considerable reduction of sperm piRNAs in asthenozoospermia patients relative to normozoospermic men (from 8,245,354 to 4,220,714 reads and from 17,657 to 15,742 piRNA types). Individual Quantification of Sperm piRNAs by Quantitative RT-PCR Next, a TaqMan probe-based quantitative RT-PCR assay was performed to measure piRNAs in individual samples. The representative piRNAs piR-1207 and piR-2107 were assessed in sperm samples from 20 normozoospermic men and 20 asthenozoospermia patients. Consistent with the deep sequencing results, piR-1207 and piR-2107 levels were markedly downregulated in the sperm of asthenozoospermia patients (Figures 4A, B).
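To make the read-count filtering used in the profiling above concrete, the sketch below applies the same thresholds (more than 20,000 reads for the heatmap set, a 2-fold difference between groups, and more than 1,000 reads for the scatter-plot set) to a small, made-up count table. The column names, example counts and the pseudo-count are illustrative assumptions, not part of the published pipeline.

import pandas as pd

# Hypothetical read-count table: one row per annotated piRNA, with total reads in
# the pooled normozoospermic (N) and asthenozoospermic (A) libraries.
counts = pd.DataFrame(
    {"piRNA": ["piR-1207", "piR-2107", "piR-5937", "piR-x"],
     "reads_N": [85000, 42000, 25000, 800],
     "reads_A": [30000, 9000, 21000, 900]}
)

# Heatmap set: piRNAs with >20,000 reads in the normozoospermic group,
# then flag those with at least 2-fold more reads than in the patient group.
pseudo = 1  # pseudo-count to avoid division by zero (an assumption, not stated in the paper)
highly_expressed = counts[counts["reads_N"] > 20000].copy()
highly_expressed["fold_N_over_A"] = (highly_expressed["reads_N"] + pseudo) / (
    highly_expressed["reads_A"] + pseudo)
reduced_2fold = highly_expressed[highly_expressed["fold_N_over_A"] >= 2]

# Scatter-plot set: piRNAs with >1,000 reads in the normozoospermic group.
scatter_set = counts[counts["reads_N"] > 1000]

print(reduced_2fold[["piRNA", "fold_N_over_A"]])
print(f"{len(scatter_set)} piRNAs enter the scatter-plot comparison")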
We further performed receiver operating characteristic (ROC) curve analysis to assess the usefulness of piR-1207 and piR-2107 in discriminating asthenozoospermia patients from normozoospermic men. ROC curve analysis for piR-1207 revealed an AUC of 0.845 and an optimal cut-off value of 1717.93, with a corresponding sensitivity of 70.0% and specificity of 95.2% (Figure 4C); for piR-2107, it revealed an AUC of 0.93 and an optimal cut-off value of 7476.52, with a corresponding sensitivity of 85% and specificity of 90% (Figure 4D). These results show that piR-1207 and piR-2107 could serve as valuable indicators for distinguishing asthenozoospermia patients from normozoospermic men. Expression Level of MitoPLD Is Decreased in the Sperm of Asthenozoospermia Patients Since the biogenesis and function of piRNAs are tightly associated with the PIWI protein subfamily, and MitoPLD is essential for the creation of the 5' ends of primary piRNAs, the marked reduction of mature piRNAs in the sperm of asthenozoospermia patients suggested dysfunction or loss of expression of these essential enzymes. Therefore, we measured the expression levels of MitoPLD and PIWIL1 in the sperm of normozoospermic men and asthenozoospermia patients. The results revealed that MitoPLD protein expression was significantly reduced in the sperm of asthenozoospermia patients (Figures 5A-C). In contrast, the alteration of PIWIL1 in asthenozoospermia patient sperm was irregular: PIWIL1 expression was increased in the sperm of some asthenozoospermia patients (6 out of 11 patients) but significantly decreased in 5 patients (Figures 5D-F). DISCUSSION Although piRNAs show marked expression in germ cells across various animal species, and specifically in the male germ cells of mammals, their detailed functions and mechanisms remain obscure. What is becoming clear is that piRNAs participate in the posttranscriptional regulation of protein-coding genes as well as in the repression of retrotransposons and are indispensable for male fertility; a recent study showed that repression of PIWI/piRNA pathway genes by hypermethylation probably contributes to unsuccessful spermatogenesis (24). Thus, while the critical role of piRNAs during spermatogenesis is well documented, whether piRNAs in sperm regulate sperm motility remains largely unknown. Increasing numbers of studies have shown that miRNAs expressed in mature sperm can regulate sperm motility (25,26). In contrast to miRNAs, piRNAs were only discovered in 2006, and their biogenesis and functions remain largely unexplored. However, piRNAs are known to be much more abundant and more germ cell-specific than miRNAs, suggesting that piRNAs may play an even more fundamental role in regulating sperm motility. In this study, we found that piRNAs are enriched in sperm and observed that a massive amount of piRNAs is lost in the sperm of asthenozoospermia patients. Furthermore, ROC curve analysis revealed a strong relationship between low levels of sperm piRNAs and asthenozoospermia, suggesting that sperm piRNAs may be essential for sperm motility. The biological roles of these piRNAs in sperm motility and male infertility, which may provide pathophysiological clues to the molecular mechanisms of this disease, call for further investigation.
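A minimal sketch of how the ROC analysis reported above can be reproduced (AUC, cut-off, sensitivity and specificity for a piRNA whose level is lower in patients) is given below. The simulated piRNA levels and the use of Youden's J statistic to select the cut-off are assumptions for illustration, since the study does not state its cut-off criterion.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Labels: 1 = asthenozoospermia patient, 0 = normozoospermic control.
# Because the piRNA level is LOWER in patients, the negated level is used as the
# score so that a higher score corresponds to a patient. Values are placeholders.
rng = np.random.default_rng(0)
levels = np.concatenate([rng.normal(2000, 400, 20),   # patients (low piRNA level)
                         rng.normal(5000, 900, 20)])  # controls (high piRNA level)
labels = np.concatenate([np.ones(20), np.zeros(20)])
scores = -levels

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)

# Optimal cut-off via Youden's J (sensitivity + specificity - 1); this criterion is
# an assumption, not something stated by the authors.
j = tpr - fpr
best = int(np.argmax(j))
cutoff_level = -thresholds[best]  # convert back to a piRNA-level threshold

print(f"AUC = {auc:.3f}, cut-off = {cutoff_level:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")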
Based on their biogenesis models, piRNAs are typically classified into two groups: those from the primary processing pathway and those from the amplification loop (secondary processing pathway). MitoPLD, a mitochondria-anchored endonuclease belonging to the phospholipase D superfamily, is conserved among diverse species and is implicated in the primary processing pathway of piRNAs. MitoPLD was previously known as a phospholipase that hydrolyzes cardiolipin to generate phosphatidic acid and is involved in the regulation of mitochondrial morphology (27,28). MitoPLD is also implicated in the formation of nuage (also known as inter-mitochondrial cement or the chromatoid body), a pivotal cytoplasmic structure comprising most piRNA-related proteins (15,27). Strikingly, MitoPLD was found to be essential for piRNA biogenesis. MitoPLD acts as an endonuclease and performs the first cleavage of piRNA precursors to generate the 5' ends of piRNAs, and the cleaved piRNAs are then transferred to PIWI proteins to trigger the secondary piRNA processing pathway (29,30). Knockout of MitoPLD abolishes the majority of piRNAs in male germ cells and results in transposon activation and arrest of spermatogenesis, the characteristic phenotypes of piRNA pathway mutants (12). In this study, we measured the expression levels of MitoPLD in the sperm of normozoospermic men and asthenozoospermia patients and found that MitoPLD protein was significantly downregulated in the sperm of asthenozoospermia patients. The massive reduction of piRNAs in the sperm of asthenozoospermia patients may therefore be caused, at least in part, by the parallel reduction of MitoPLD protein. Further studies are required to investigate whether dysregulation or dysfunction of MitoPLD is involved in the pathology of infertility. Exosomes are nano-sized vesicles with diameters ranging between 30 and 150 nm. Released by multiple cell types, exosomes are present in a variety of body fluids and can transfer bioactive molecules (e.g., proteins, lipids and nucleic acids) between neighboring and distant cells (31). There is a consensus that exosomes play a key role in intercellular communication via the horizontal transfer of miRNAs. However, the presence of piRNAs in exosomes has only recently been noted. A recent study showed that a large number of small RNAs (including miRNAs and piRNAs) are contained and protected within seminal exosomes (21). In this study, we likewise found that the majority of piRNAs reside in seminal exosomes. In addition, we found that piRNAs in the seminal exosomes of asthenozoospermia patients were significantly decreased compared with normozoospermic men. However, it has not yet been established whether piRNAs in seminal exosomes have regulatory functions in recipient cells and thereby play a new role in intercellular communication. Future studies are needed to characterize the functions of piRNAs in seminal exosomes and to investigate the role of exosomal piRNAs as communicators in the microenvironment of the genital tract. In conclusion, we systematically characterized the piRNA profiles in the sperm of normozoospermic men and asthenozoospermia patients and found that the amount of piRNAs was significantly decreased in the sperm of asthenozoospermia patients. We also investigated the mechanism underlying the dysregulation of piRNAs in sperm and found that the parallel reduction of MitoPLD may contribute to the loss of sperm piRNAs and, in turn, to male infertility.
DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI GEO repository, GSE172486. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Hangzhou Medical College Ethics Committee and the Nanjing Drum Tower Hospital Ethics Committee. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS Study conceptualization: YH and JZ. Data acquisition, analysis and interpretation: YH, YW, CY, and LS. Clinical samples and data collection: XZ. Technical or material support: LC, HC, and FG. Manuscript writing and editing: YH, CY, LS, and JZ. All authors contributed to the article and approved the submitted version. FUNDING This study was supported by a grant from the National Natural Science Foundation of China (No. 81801513).
2021-07-14T13:28:40.381Z
2021-07-13T00:00:00.000
{ "year": 2021, "sha1": "ca1673b50c4b21b995f45a0d0b7cb5c87541b279", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.696121/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca1673b50c4b21b995f45a0d0b7cb5c87541b279", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16906886
pes2o/s2orc
v3-fos-license
Mapping the Global Potential Geographical Distribution of Black Locust (Robinia pseudoacacia L.) Using Herbarium Data and a Maximum Entropy Model Black locust (Robinia pseudoacacia L.) is a tree species of high economic and ecological value, but is also considered to be highly invasive. Understanding the global potential distribution and ecological characteristics of this species is a prerequisite for its practical exploitation as a resource. Here, maximum entropy modeling (MaxEnt) was used to simulate the potential distribution of this species around the world, and the dominant climatic factors affecting its distribution were selected using a jackknife test and the regularized gain change during each iteration of the training algorithm. The results show that the MaxEnt model performs better than random, with an average test AUC value of 0.9165 (±0.0088). The coldness index, annual mean temperature and warmth index were the most important climatic factors affecting the species distribution, explaining 65.79% of the variability in the geographical distribution. Species response curves showed unimodal relationships with the annual mean temperature and warmth index, whereas there was a linear relationship with the coldness index. The dominant climatic conditions in the core of the black locust distribution are a coldness index of −9.8 °C–0 °C, an annual mean temperature of 5.8 °C–14.5 °C, a warmth index of 66 °C–168 °C and an annual precipitation of 508–1867 mm. The potential distribution of black locust lies mainly in the United States, Europe, Australia and New Zealand, with moderately suitable areas in China, Japan, South Africa, Chile and Argentina. The predictive map of black locust, climatic thresholds and species response curves can provide globally applicable guidelines and valuable information for policymakers and planners involved in the introduction, planting and invasion control of this species around the world. Introduction Black locust (Robinia pseudoacacia L.)
belongs to the family Leguminosae and is native to eastern North America [1,2].Its eastern range is centered on the Appalachian Mountains and extends from central Pennsylvania and southern Ohio to northeastern Alabama, northern Georgia and northwest South Carolina.The western section of its native range includes parts of Missouri, Arkansas and Oklahoma, and populations also exist in Indiana and Kentucky (see Figure 1; [1,3]).Black locust was first introduced to Europe in the early 17th century as an ornamental tree.Since then, it has been widely introduced to temperate Asia, Australia and New Zealand, northern and southern Africa and temperate South America for wood production, as a nurse tree [4], and for large-animal forage [5].Since black locust has a nitrogen-fixing capability and a well-developed root system, it can remediate and improve soil nutrient condition.This makes it an important ecological pioneer species for windbreaks, erosion control and reclamation of disturbed sites [6,7].In addition, black locust has a high reproductive potential, both through its root suckers (resprouting ability) and by seed propagation.However, it acts as a harmful invasive species in some parts of the world, for example in Great Britain, Germany, France, Japan and New Zealand [8][9][10].Currently, no techniques are available that provide effective control of black locust invasions [11].Therefore, in order to make practical and controlled use of this multipurpose species, it is essential to determine its global potential distribution area, significant environmental factors and species response curves, as prerequisites for top-level design in the introduction, cultivation, afforestation and invasion control of this species. Species distribution modeling (SDM) plays a leading role in biogeography and regional ecology in estimating the niche and distribution area of a species when distribution data are limited [12][13][14][15].Usually, SDM is only used to analyze the realized niche, although a very early SDM paper [16] discussed Hutchinson's [17] concept of the realized and fundamental niche in relation to tree species.Booth et al. [16] showed for the first time how information from introductions outside of the native range could be used to provide some indication of the fundamental niche.Many early introductions of black locust are likely to have been trials of monoculture timber plantation, and the species is invasive and has been able to compete with other species outside of its native range.Thus, here, we measure the occupied area and the potentially occupied area [14], which were referred to by Peterson et al. [18] as the occupied distributional area and the invadable distributional area, respectively.Figure 1.Global spatial abundance of black locust specimens around the world (the grid cell is 1.5° × 1.5°) and its native range in eastern North America [1]; the total number of specimens is 32,674 (32,434 from Global Biodiversity Information Facility (GBIF) and 240 from Chinese Virtual Herbarium (CVH); the GBIF database query date is 9 May 2014, and the CVH database query date is 10 May 2014. 
With the development of computer hardware and Internet speed, the current availability of species and climate information sharing systems has greatly enhanced the field of SDM [19][20][21][22].These have greatly inspired the use of a variety of algorithms for predicting the potential distribution of species.The performance of various SDM algorithms have been evaluated by numerous comparative studies, suggesting that the appropriate choice is dependent on multiple factors (e.g., sample size, species rarity, size of the species' geographic range, spatial scale and user preference) [13,23,24].Because black locust is a wide-ranging species, known to be present at various locations around the world (the distribution data collection process is described in Section 2.1.),maximum entropy modeling was used here to simulate its potential distribution.Maximum entropy modeling uses only presence data as the basis of its predictions, unlike presence/absence models, and is therefore more valuable in regions where collecting absence points has been problematic or completely neglected [25,26].It has recently been proven to be an effective tool for predicting potential species distributions in a wide variety of research studies [27][28][29]. Climate is considered to be the most important environmental factor influencing the distribution of species and vegetation at a regional and global scale [30][31][32][33].Therefore, our research has mainly concentrated on investigating the climatically suitable habitat, significant climatic factors, climatic thresholds (niche) and climatic response curves of this species.In this study, species occurrence data with a spatial resolution of 0.5° × 0.5° (approximately 55 km at the Equator; the reason for using this resolution is given in Section 2.1.),together with a system collating 13 climatic indexes (detailed information is given in Section 2.2), were input into a MaxEnt model to simulate the potential distribution area of black locust around the world.The main objectives of the present study were to simulate the ecological niche of black locust, analyze its ecological and geographical distribution and investigate primary climatic factors that determine the potential distribution of this tree around the world.The results could provide theoretical support for top-level design by policy-makers and planners for the introduction, cultivation, planting and invasion control of this species around the world. 
Species Occurrence Data Global black locust occurrence data were collected from herbarium records in the Global Biodiversity Information Facility database (GBIF) [34] and the Chinese Virtual Herbarium database (CVH) [35].A total of 32,556 herbarium records were collected from the GBIF database and 319 from the CVH database.Records without coordinates were deleted, and records from small Pacific and Atlantic islands were also deleted, because assigning coarse resolution coordinates (0.5° × 0.5°) to these records may lead to their corresponding climate data being inaccurate.A total of 32,674 specimens (32,434 from GBIF and 240 from CVH) were identified by the coordinates recorded in the database or by coordinates derived from a place name included in the database.The main reason for collating records with a coarse geographic resolution (0.5° × 0.5°) was that there may have been a sampling bias or error at a fine resolution in the GBIF and CVH occurrence records, which would produce models of lower rather than higher quality [21].Another consideration was the calculation speed of the computer, identified in many previous studies, which was based on a spatial resolution between 50 km × 50 km and 200 km × 200 km [20,21,36].A total of 1174 grid cells (0.5° × 0.5°) were identified globally as containing black locust (details of the calculation process are given in Section 2.4.).To clearly show the spatial abundance distribution of the 32,674 specimens around the world, the point density function in ArcGIS 9.3 (ESRI, Redlands, CA, USA) was used to draw a 1.5° × 1.5° resolution distribution map of black locust (Figure 1) with its native range in eastern North America based on Little [1]. Climatic Variables According to previous studies, there are many variables used to characterize hydrological-thermal climatic niches around the world.For example, 19 BIOCLIM variables were used to define global climatic niches in the WorldClim database [19,37]; three variables (annual biotemperature, potential evapotranspiration and annual precipitation) were used to define global climatic niches in the life zone model [30]; and three variables (warmth index, coldness index and humidity index) were used to represent global climatic niches in Kira's index system [38].Three groups of climatic variables are widely used in research on the relationship between species/vegetation and climate, on a regional or global scale.The 19 BIOCLIM variables are widely used in SDM studies, as the data can be easily downloaded from the WorldClim database with no further calculation required [39][40][41].The five integrated climatic variables (not including annual precipitation) are seldom used in SDM studies, but they can still generally provide considerable power for explaining species distributions [42][43][44][45][46]. 
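To make the integrated climatic indices named above (warmth index, coldness index, humidity index, annual biotemperature and potential evapotranspiration ratio) concrete, a minimal Python sketch of how they can be derived from monthly climate normals is given below. The warmth and coldness index definitions (sums of monthly departures from 5 °C over warm and cold months, respectively) and the Holdridge 0–30 °C biotemperature clipping are standard formulations assumed here, because the full Table 1 formulas are only partially reproduced in the text; the PER and HI expressions follow the formulas quoted in the next subsection.

import numpy as np

def integrated_climatic_indices(monthly_t_c, annual_precip_mm):
    """Integrated indices from 12 monthly mean temperatures (°C) and annual precipitation AP (mm)."""
    t = np.asarray(monthly_t_c, dtype=float)
    wi = np.sum(t[t > 5] - 5)             # warmth index (Kira): sum of departures above 5 °C
    ci = np.sum(t[t < 5] - 5)             # coldness index (Kira): sum of departures below 5 °C (<= 0)
    abt = np.mean(np.clip(t, 0, 30))      # Holdridge annual biotemperature
    per = 58.93 * abt / annual_precip_mm  # potential evapotranspiration ratio (formula quoted in Table 1)
    hi = annual_precip_mm / wi if wi > 0 else np.nan  # humidity index, HI = AP/WI (mm/°C)
    return {"WI": wi, "CI": ci, "ABT": abt, "PER": per, "HI": hi}

# Example with placeholder monthly normals for one 0.5° grid cell.
print(integrated_climatic_indices(
    [-3, -1, 4, 10, 16, 21, 24, 23, 18, 12, 5, 0], annual_precip_mm=900))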
In this study, we used the BIOCLIM variables and the five integrated climatic variables based on Holdridge's life zone model and Kira's index system. An excess of climatic variables can cause overfitting, so we selected 8 of the 19 BIOCLIM variables. A total of 13 climatic factors were used to define the climatic niches of the world (Table 1), which is sufficient for research on any species at a global scale, including black locust. Baseline climatic layers were downloaded from the WorldClim database at 10 arc-min spatial resolution; these layers were generated using thin-plate smoothing splines of latitude, longitude, altitude and monthly temperature and precipitation records from 50-year climate station averages from 1950 to 2000 [19]. These layers were then converted to a 0.5° × 0.5° spatial resolution across the globe, and some climatic variables were calculated using the corresponding formulae (Table 1), for example the potential evapotranspiration ratio, PER = 58.93 × ABT/AP (ABT is annual biotemperature, AP is annual precipitation) [30], and the humidity index, HI = AP/WI (mm/°C; WI is the warmth index) [46]. Model Selection and Evaluation In this study, we used the software MaxEnt (version 3.0), a machine learning algorithm designed by Phillips et al. [25]. It is written in Java, so it can be used on all modern computing platforms, and is freely available on the Internet [47]. The main advantage of applying MaxEnt to the modeling of geographical species distributions, in comparison with other methods, is that it requires only presence data in addition to the environmental layers. Furthermore, it is possible to use both categorical and continuous layers. Phillips et al. [25,48] found that MaxEnt outperformed the genetic algorithm for rule set prediction (GARP) [49] on observational data for North American breeding birds and two Neotropical mammals (Bradypus variegatus and Microryzomys minutus). Elith et al. [13] found that MaxEnt was one of the best of 16 different methods for modeling the distributions of 226 species in 6 different regions. Similarly, Wisz et al. [24] found that MaxEnt was one of the best predictors among 12 different models tested. MaxEnt applies five different feature constraints (linear, quadratic, product, threshold and hinge) to environmental variables, following "the maximum entropy principle", to estimate the species distribution probability. This principle can be considered as a constrained optimization problem (where the aim is to maximize a function). The estimated MaxEnt probability distribution (a Gibbs distribution) over locations (χ) is exponential in a weighted sum of environmental features (f) divided by a scaling constant (Zλ; Equation 1) to ensure that the probability values range from 0 to 1 and sum to 1 [50].
Pλ(χ) = exp(λ1·f1(χ) + λ2·f2(χ) + … + λn·fn(χ)) / Zλ (1), where n is the number of environmental features, λ is the vector of real-valued feature weights, and Zλ is the normalizing constant that guarantees that the probability distribution sums to one over the area of interest. MaxEnt provides output data in three formats: raw format (raw values must sum to 1), cumulative format (the value for each cell is equal to the probability of finding the species of interest at that cell plus all other cells with equal or lower probability; values range from 0 to 1) and logistic format (the probability of occurrence is estimated by including environmental variables; values range from 0 to 1) [51]. Raw values are often very small for each data point, which makes interpretation difficult. The cumulative format is more easily interpreted when projected in a geographic information system, but these projections are not necessarily proportional to the probability of occurrence. The logistic format is currently recommended because it allows an easier and potentially more accurate interpretation than the other approaches. MaxEnt modeling can determine the importance of environmental variables using a jackknife test and the regularized gain change during each iteration of the training algorithm. Caution must be used when employing this method, as strong collinearity among highly correlated variables can influence the results. MaxEnt also allows the construction of response curves to illustrate the effect of selected variables on the probability of occurrence. These response curves plot the specific environmental variable on the x-axis and, on the y-axis, the predicted probability of suitable conditions as defined by the logistic output. Upward trends indicate a positive relationship, downward trends represent a negative relationship, and the magnitude of these movements indicates the strength of the relationship [51]. An important part of determining the ability of niche models to predict the distribution of a species is having a measure of fit. The performance of the MaxEnt model is usually evaluated by the threshold-independent receiver operating characteristic (ROC) approach, calculating the area under the ROC curve (AUC) as a measure of prediction success. The ROC curve is a graphical method that represents the relationship between the false-positive fraction (one minus the specificity) and the sensitivity for a range of thresholds. The AUC has a range of 0-1, with a value greater than 0.5 indicating better-than-random performance [52]. A rough classification guide is the traditional academic point system [53]: poor (0.5-0.6), fair (0.6-0.7), good (0.7-0.8), very good (0.8-0.9) and excellent (0.9-1.0). Experimental Design and Statistical Analysis First, we plotted all 32,674 black locust specimen occurrence records on a world map with a 0.5° × 0.5° spatial resolution. This means that the world map was divided into grid cells with 300 rows and 720 columns (land area with 584,521 cells). We assumed that a grid cell was suitable for black locust survival as long as one or more specimens were present in it. Then, the binary grid map (presence/absence map) at 0.5° × 0.5° spatial resolution was converted into points using the raster-to-point function in ArcGIS 9.3 (a total of 1174 occurrence points). The latitude and longitude coordinates of each occurrence point (see the Supplementary Excel file) were stored in an Excel database for MaxEnt model building.
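A minimal sketch of this gridding step (snapping every specimen record to a 0.5° × 0.5° cell and keeping one presence point per occupied cell) is given below. The column names, example coordinates and cell-centre convention are illustrative assumptions; the original workflow used ArcGIS 9.3 rather than Python, and MaxEnt then reads the resulting species/longitude/latitude table.

import numpy as np
import pandas as pd

# Placeholder specimen coordinates standing in for the 32,674 herbarium records.
records = pd.DataFrame({
    "lon": [-80.12, -80.34, 12.51, 116.38],
    "lat": [40.44, 40.21, 41.89, 39.90],
})

cell = 0.5
# Snap each record to the centre of its 0.5° grid cell.
records["cell_lon"] = np.floor(records["lon"] / cell) * cell + cell / 2
records["cell_lat"] = np.floor(records["lat"] / cell) * cell + cell / 2

# One presence point per occupied cell (the study obtained 1174 such cells).
presence_cells = records[["cell_lon", "cell_lat"]].drop_duplicates()
print(f"{len(records)} records collapse to {len(presence_cells)} occupied 0.5° cells")

# MaxEnt expects a samples CSV of species, longitude, latitude.
presence_cells.insert(0, "species", "Robinia_pseudoacacia")
presence_cells.to_csv("black_locust_presence_0.5deg.csv", index=False)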
Second, we loaded the latitude and longitude coordinates of the black locust occurrence points into MaxEnt, together with all 13 climatic layers.A ten-fold cross-validation method was used to assess the accuracy of the MaxEnt model predictions.The importance of climatic factors was evaluated by using a jackknife test and the regularized gain change during each iteration of the training algorithm.Otherwise, the default settings were used to run the MaxEnt model.The logistic format of the MaxEnt output was used for mapping habitat suitability.To avoid potential problems due to the effect of strongly correlated factors on the explanation of species response curves, these curves were created by using the MaxEnt model with only the corresponding variable.These curves reflect the dependence of the predicted suitability on the selected variable and reflect the dependency resulting from correlation between the selected variable and other variables. Finally, the MaxEnt model produced ten species-distribution probability maps based on ten-fold cross-validation.The ten probability maps were then averaged to obtain a habitat suitability map for black locust.Four arbitrary habitat categories were used: the core area (0.6-1.0), the moderately suitable area (0.4-0.6), the marginal area (0.2-0.4) and the unsuitable area (0-0.2),based on the predicted habitat suitability map.The climatic thresholds for each habitat class were analyzed using ArcGIS 9.3. Current and Potential Distribution of Black Locust Based on the locations of black locust specimens in the GBIF and CVH databases, the map of the present distribution is shown in Figure 1.Black locust occurs mainly in 35 countries: America, Canada, Mexico, Chile, Bolivia, Argentina (only one record), China, Japan, Afghanistan, Pakistan, Indonesia, Australia, South Africa, Nigeria, Ireland, United Kingdom, Portugal, Spain, France, Germany, Belgium, the Netherlands, Switzerland, Italy, Austria, Czech Republic, Poland, Hungary, Romania, Greece, Georgia, Armenia, Denmark, Norway and Sweden.Large numbers of specimens were collected in North America, Europe and Asia. The ten probability maps obtained from the ten-fold cross-validation were averaged to obtain a habitat suitability map for black locust (Figure 2).According to the probability values, four habitat categories were defined: the core area (0.6-1.0), the moderately suitable area (0.4-0.6), the marginal area (0.2-0.4) and the unsuitable area (0-0.2).The core suitable areas for black locust are distributed mainly in the eastern United States, Europe, Australia and New Zealand.The moderately-suitable areas mainly included China, Japan, South Africa, Chile and Argentina.In Europe, the core areas are the United Kingdom, Germany, France, the Netherlands, Belgium, Italy and Switzerland.Climatic threshold information for each habitat class is shown in Table 2.It shows the climatic thresholds for the core areas of black locust: a coldness index of −9.8 °C-0 °C, an annual mean temperature of 5.8 °C-14.5 °C, a warmth index of 66 °C-168 °C and an annual precipitation of 508-1867 mm.Table 2. Climatic threshold of black locust suitable habitat map predicted by the MaxEnt model. 
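To illustrate the post-processing described above (averaging the ten cross-validated suitability maps, binning grid cells into the four habitat categories, and reading off the climatic range within each category, as summarized in Table 2), a small Python sketch using randomly generated placeholder grids is given below; the real analysis was performed on the MaxEnt logistic outputs and WorldClim-derived layers in ArcGIS 9.3, so all arrays and ranges here are stand-ins.

import numpy as np

rng = np.random.default_rng(1)
suitability_runs = rng.random((10, 300, 720))      # ten logistic-output maps (values 0-1)
mean_suitability = suitability_runs.mean(axis=0)   # averaged habitat suitability map

annual_mean_temp = rng.uniform(-15, 30, (300, 720))  # stand-in climate layer (°C)

bins = {"unsuitable": (0.0, 0.2), "marginal": (0.2, 0.4),
        "moderate": (0.4, 0.6), "core": (0.6, 1.0)}
for name, (lo, hi) in bins.items():
    # Upper-inclusive only for the top class, so every cell falls in exactly one bin.
    if hi < 1.0:
        mask = (mean_suitability >= lo) & (mean_suitability < hi)
    else:
        mask = (mean_suitability >= lo) & (mean_suitability <= hi)
    t = annual_mean_temp[mask]
    print(f"{name:10s}: {mask.sum():6d} cells, "
          f"annual mean temperature {t.min():.1f} to {t.max():.1f} °C")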
Model Performance and Importance of Climatic Factors Ten-fold cross-validation was used to evaluate the accuracy of the MaxEnt model. The accuracy of the resulting model predictions is shown in Figure 3; the MaxEnt model predictions were highly accurate (AUC > 0.9), with a mean AUC of 0.9165 (0.9033-0.9309). The coefficient of variation was only 0.8% among the ten predictions, indicating that the ten-fold cross-validation method does not affect the accuracy of the MaxEnt model simulation. The relative importance of the climatic factors is shown in Figure 4. It is apparent that the coldness index, the annual mean temperature and the warmth index are the most important climatic factors determining the distribution of black locust; these three factors explain 65.79% of the variance (15.84%-27.92% each), followed by the mean temperature of the coldest month, the annual precipitation and the annual biotemperature, which explain another 23.77% of the variance (7.13%-8.96% each). The remaining seven climatic factors were less important in determining the geographical distribution of black locust (collectively, they explained 10.44% of the variance, 0.13%-4.15% each). The response curves of black locust for all climatic factors are shown in Figure S1, and those for the three most important climatic factors (coldness index, annual mean temperature and warmth index) are shown in Figure 5. It is clear that unimodal relationships exist between the habitat suitability value and the annual mean temperature or warmth index, whereas the coldness index shows a linear relationship. The response peak in black locust habitat suitability for the coldness index was at 0 °C; for the annual mean temperature, it was at 9.8 °C; and for the warmth index, it was at 100 °C.
Species Record Database and Species Modeling Tools The current availability of species-sharing information systems around the world makes it possible to easily study the geographical distribution of plants worldwide [20][21][22].GBIF is a free and open access biodiversity database that integrates existing worldwide biodiversity data to form a user-oriented global biodiversity service network.CVH is also a free and open access database, which integrates the herbarium data of national natural museums from 14 institutes in China.GBIF and CVH are complementary to each other with little duplication of occurrence records, which makes the occurrence records of CVH a great contribution to GBIF.The old interface of GBIF (which is still available, but not being further developed with new data) [54] includes the capability to quickly generate an SDM niche model (a variant of BIOCLIM rather than MaxEnt; for instructions, see Section 3.1 of Booth [22]).The GBIF niche model output, generated with the 19 BIOCLIM variables, appears to indicate an overly broad climatic suitability (see Supplementary Figure S2) when compared with Figure 2.This may be due to our inclusion of the CVH occurrence records from China, which has improved the SDM niche model's predictive performance.The new GBIF interface [34] no longer includes the SDM niche model option, so users are forced to perform their own SDM analyses outside of GBIF.GBIF now has so many data points that the providers of the service are concentrating on just storing the data and not providing many integrated analytical tools.Here, we provide a workflow (in Section 2.4) for predicting the potential distribution of a species based on a MaxEnt model, which may be useful for other species.We believe that if every country in the world contributed the data from their national herbaria to the GBIF database, we would be able to make more accurate predictions about our Earth in the near future by using SDM. At an ecosystem scale, many climate-vegetation models have been used to study the relationship between vegetation and climate, such as Holdridge's life zone model [30], Kira's index model [38] and the dynamic vegetation model [55,56].At a species scale, previous studies have normally used the peak width at half height (PWH) to study the climatic thresholds of species [45,57].PWH generally assumes that the response of the species to climatic factors is normally distributed.For example, Ni and Song [44] used this method to study the relationship between geographical distribution and climate for Cyclobalanopsis glauca in China.SDM provides another way to study species-climate relationships at a species scale, which does not assume that the response of the species to environmental factors is normally distributed.SDM is widely used, because it can simulate a species' potential distribution and simultaneously identify significant environmental factors [13,14,25].For example, Irfan-Ullah et al. [58] predicted the geographical distribution of Aglaia bourdillonii in India, and Li et al. 
[41] simulated the potential distribution of Quercus wutaishanica in China. This study uses the maximum entropy model, a very popular SDM, to predict the worldwide geographical distribution of black locust. The performance of MaxEnt reached a very high level (a mean AUC value of 0.9165) with a coefficient of variation of only 0.8%, indicating that the MaxEnt model was suitable for simulating the potential distribution of this species around the world. The response curves of black locust to the dominant climatic factors show a unimodal response to the annual mean temperature and warmth index and a linear response to the coldness index. The MaxEnt model does not assume that the response of the species distribution to climatic factors follows a predefined normal distribution; therefore, both types of response curve could be detected by the MaxEnt model (Figure 5 and Figure S1). Significant Climatic Factors, Geographical Boundary and Potential Distribution Area In this study, we found that the coldness index, annual mean temperature and warmth index are the most important climatic factors determining the potential distribution of black locust (with these three factors explaining 65.79% of the variance). According to Chuine's explanation [59] of why phenology drives species distribution, we infer that these three climatic factors may play different roles in the growth process of black locust. The coldness index (reflecting the severity of climatic conditions during the non-growing season, with a relative contribution of 27.92%) can be interpreted as cold-climate stress acting on fruit maturation late in the growing season, which means that black locust cannot continue to expand into higher latitudes. The warmth index (reflecting heat conditions during the growing season, with a relative contribution of 15.84%) can be interpreted as the high thermal conditions that suppress black locust flowering and leafing early in the growing season, such that the tree cannot continue to expand into lower latitudes. The mean annual temperature, with a relative contribution of 22.03%, reflects the species' annual heat demand during the growing season. Fang and Lechowicz [43] also found that heat-related climatic factors, especially heat conditions during the growing season, were the limiting factors for the geographical distribution of Fagus spp. around the world. Their conclusions were similar to ours.
The core suitable areas in the global potential distribution of black locust are mainly distributed among the eastern United States, Europe, Australia and New Zealand, whereas the moderately suitable areas are mostly in China, Japan, South Africa, Chile and Argentina (Figure 2).These countries are very suitable for afforestation with or introduction of black locust, but we should be aware of the high risk of the invasive potential of this species in these countries.When we compare the difference between the current data available in GBIF and CVH and the potential distribution of black locust (see attachment Figure S3), we could not find any occurrence records in the following 24 countries: Peru, Uruguay, Brazil, Zimbabwe, Kenya, Ethiopia, Morocco, Algeria, New Zealand, Croatia, Bosnia and Herzegovina, Yugoslavia, Albania, Macedonia, Bulgaria, Turkey, Iran, Azerbaijan, Ukraine, Belarus, Lithuania, Latvia, Estonia and Russia, most of which are developing countries.Though we do not know if black locust exists in these countries (due to the lack of web-based open access sharing of data about this species through GBIF from the national herbaria of these countries), we can conclude that this species could be introduced to these countries and may pose a potential threat of invasion there. Climatic Threshold and Its Implication Based on the map of the climatically suitable habitat for this species (Figure 2), we infer that the climatic thresholds of the core area of black locust are a coldness index of −9.8 °C-0 °C, an annual mean temperature of 5.8 °C-14.5 °C, a warmth index of 66 °C-168 °C and annual precipitation of 508-1867 mm.These climatic thresholds may define the most suitable climatic niche, as the occurrence points input into the MaxEnt model come from not only native areas, but also invaded and cultivated areas around the world.Petitpierre et al. [60] reported a large-scale test of climatic niche conservatism for 50 invasive terrestrial plant species from Eurasia, North America and Australia.Their findings reveal that substantial niche shifts are rare in terrestrial plant invaders, providing support for an appropriate use of SDM for the prediction of both biological invasions and responses to climate change.Therefore, we can use the climatic thresholds reported in Table 2 and the response curves in Figure S1 as references to relate to local weather station data to indicate when future invasion control should be initiated or when this species can be used for afforestation, especially on a small geographical scale. 
Some studies have reported that the local soil moisture level is an important factor in determining the distribution and growth of black locust at a small scale [61]. Therefore, using only climatic factors, at a large scale and coarse resolution, to simulate suitable habitat may overestimate the potential distribution of this species. When more types of variables are used, such as soil factors, the estimated niche of the species will be more accurate and the modeled distributions much more credible. Guisan and Thuiller [12] stated that "a gradual distribution observed over a large extent and at coarse resolution is likely to be controlled by climatic regulators, whereas patchy distribution is more likely to result from a patchy distribution of resources, driven by micro-topographic variation or habitat fragmentation". In their hierarchical modeling framework (see Figure 3 in Guisan and Thuiller [12]), climatic factors determine species ranges at a large scale and coarse resolution, while soil nutrients determine species ranges at a local scale and fine resolution. We recommend further research on the hierarchical prediction of black locust distribution, integrating more types of environmental factors at the national or regional scale with fine resolution in the climatically suitable countries identified here. Our research has concentrated mainly on the currently potentially climatically suitable habitat and the climatic niche of this species at a global scale with coarse resolution. The predictive map of black locust distribution, the climatic thresholds and the species response curves can provide globally applicable guidelines and valuable information for policymakers and planners of introductions, planting and invasion control of this species around the world. Conclusions Black locust is native to eastern North America and has spread widely all over the world through transplantation and cultivation over the past 300 years. On the one hand, black locust is a tree species of high ecological and economic value (as a pioneer species, a nurse species, an ornamental species, a forage species, etc.). On the other hand, it is also a serious invasive species in some parts of the world (such as Britain, Germany, France and Japan). This study comprehensively integrates the global herbarium data for black locust (from the Global Biodiversity Information Facility and the Chinese Virtual Herbarium) and a world climate system (the BIOCLIM system, Holdridge's life zone system and Kira's index system) using maximum entropy modeling to investigate the climatically suitable habitat, significant climatic factors, climatic thresholds (niche) and climatic response curves of this species at a global scale with coarse resolution. The realized distribution area and the potential distribution area were compared to identify areas suitable for invasion control or afforestation of this species. The climatic thresholds and response curves can be used as references, in combination with local weather station data, to indicate when future invasion control should be initiated or when this species can be used for afforestation, especially at small geographical scales. In addition, the climate system (Section 2.2) and the modeling workflow (Section 2.4) used in this study could be useful for predicting the global potential distribution of other species.
Figure 2. Global potential distribution area of black locust predicted by the MaxEnt model around the world (the grid cell is 0.5° × 0.5°); the value of climatic suitability is the average of a ten-fold cross-validation. The standard deviation of each grid square is 0-0.11. Figure 3. Areas under the receiver operating characteristic curve (AUC) values of the ten-fold cross-validation models (1-10 represent the model code, in ascending order of AUC value; the mean AUC value is 0.9165). Figure S3. The difference between the current distribution knowledge from GBIF and CVH and the potential distribution of black locust. No occurrence records were found in the following 24 countries: Peru, Uruguay, Brazil, Zimbabwe, Kenya, Ethiopia, Morocco, Algeria, New Zealand, Croatia, Bosnia and Herzegovina, Yugoslavia, Albania, Macedonia, Bulgaria, Turkey, Iran, Azerbaijan, Ukraine, Belarus, Lithuania, Latvia, Estonia and Russia, most of which are developing countries. Table 1. Description of the 13 climatic factors, the corresponding calculation formulae and references.
2016-03-01T03:19:46.873Z
2014-11-18T00:00:00.000
{ "year": 2014, "sha1": "5aff8860b39f6ec22e56050746cecd5b8fbf29d2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4907/5/11/2773/pdf?version=1416318796", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "5aff8860b39f6ec22e56050746cecd5b8fbf29d2", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
2372426
pes2o/s2orc
v3-fos-license
Reciprocal Interaction between Macrophages and T cells Stimulates IFN-γ and MCP-1 Production in Ang II-induced Cardiac Inflammation and Fibrosis Background The inflammatory response plays a critical role in hypertension-induced cardiac remodeling. We aimed to study how interactions among inflammatory cells cause inflammatory responses in the process of hypertensive cardiac fibrosis. Methodology/Principal Findings Infusion of angiotensin II (Ang II, 1500 ng/kg/min) in mice rapidly induced the expression of interferon γ (IFN-γ) and leukocyte infiltration into the heart. To determine the role of IFN-γ in cardiac inflammation and remodeling, both wild-type (WT) and IFN-γ-knockout (KO) mice were infused with Ang II for 7 days and showed an equal increase in blood pressure. However, knockout of IFN-γ prevented Ang II-induced: 1) infiltration of macrophages and T cells into cardiac tissue; 2) expression of tumor necrosis factor α and monocyte chemoattractant protein 1 (MCP-1); and 3) cardiac fibrosis, including the expression of α-smooth muscle actin and collagen I (all p<0.05). Cultured T cells or macrophages alone expressed very low levels of IFN-γ; however, co-culture of T cells and macrophages increased IFN-γ expression by 19.8±0.95-fold (vs. WT macrophages, p<0.001) and 20.9±2.09-fold (vs. WT T cells, p<0.001). In vitro co-culture studies using T cells and macrophages from WT or IFN-γ KO mice demonstrated that T cells were the primary source of IFN-γ production. Co-culture of WT macrophages with WT T cells, but not with IFN-γ-knockout T cells, increased IFN-γ production (p<0.01). Moreover, IFN-γ produced by T cells amplified MCP-1 expression in macrophages and stimulated macrophage migration. Conclusions/Significance Reciprocal interaction between macrophages and T cells in the heart stimulates IFN-γ expression, leading to increased MCP-1 expression in macrophages, which results in a feed-forward recruitment of macrophages, thus contributing to Ang II-induced cardiac inflammation and fibrosis. Introduction Hypertension is a multi-factorial chronic inflammatory disease. It induces cardiac remodeling, which is characterized by inflammation, fibrosis and hypertrophy, a major cause of heart failure. In the hypertensive state, the damaged vasculature releases inflammatory signals that recruit leukocytes into cardiac tissue and then initiate the fibrotic cascade [1]. The interactions between these inflammatory cells are complex and still largely unknown. The renin-angiotensin system, especially angiotensin II (Ang II), plays a major role in inflammation and cardiac fibrosis [2]. Ang II can directly or indirectly activate different signaling pathways to trigger the inflammatory response and fibrosis in hypertension [3]. Several studies point to a role for the immune system in Ang II-dependent hypertension and its complications. Blockade of inflammatory responses blunted the chronic hypertensive response to Ang II, thus reducing cardiac hypertrophy [4]. Moreover, emerging evidence shows that activated effector T cells do not simply accompany hypertension but rather support a role for inflammation in this disease [5]. Co-stimulation of T cells via B7 ligands was found to be essential for the development of hypertension [6]. Ang II infusion in rats stimulated a T helper 1 (Th1) immune profile in splenocytes, which could be suppressed by the Ang II type 1 receptor (AT1R) blocker olmesartan but not by hydralazine, even though the two treatments lowered blood pressure to a similar extent [7].
Th1 but not Th2 immune responses were positively associated with both outward vascular remodeling and intimal expansion of ascending thoracic aortic aneurysms [8]. However, the specific role of the interaction between T cells and macrophages in the inflammatory response and cardiac fibrosis remains unclear. IFN-γ is produced by activated T cells, macrophages or dendritic cells [9] and is a potent activator of macrophages, Th1 responses and the production of inflammatory cytokines [10]. IFN-γ can augment [11] or suppress [12] autoimmunity and the associated abnormalities in context- and disease-specific manners. The function of different cellular sources of IFN-γ in different types and phases of the immune response is multifarious. In renovascular hypertension models, endogenously increased Ang II production induced T-lymphocyte secretion of IFN-γ, which induced a switch from stable to vulnerable plaques [13]. Furthermore, the expression of both tumor necrosis factor α (TNF-α) and IFN-γ secreted by T cells was increased in mice with Ang II-induced hypertension [5]. However, the exact role of IFN-γ, including the cellular sources of its production and its effector cells, in Ang II-induced inflammation and remodeling remains unclear. In this study, we aimed to examine the role of the interaction between T cells and macrophages in regulating inflammatory responses in cardiac inflammation and fibrosis induced by Ang II infusion. We found that IFN-γ deficiency in mice prevented Ang II-induced inflammatory cell infiltration and cardiac fibrosis. The underlying mechanism involves a reciprocal interaction between T cells and macrophages that stimulates IFN-γ and MCP-1 production in T cells and macrophages, respectively, resulting in a feed-forward recruitment of macrophages. IFN-γ Expression Is Increased in Ang II-infused Hearts We have reported previously that Ang II infusion stimulates cardiac inflammation and fibrosis [14,15,16]; here, we assessed whether IFN-γ is expressed in the process of Ang II-induced hypertensive cardiac inflammation and fibrosis. Compared with saline-treated mice, Ang II infusion significantly increased the number of IFN-γ-positive cells in the left-ventricular tissue of WT mice at day 7 after infusion (Figure 1A). The mRNA expression of IFN-γ was also significantly higher in Ang II-treated mice than in saline-treated controls at day 1, 3 or 7 after infusion (Figure 1B). To examine whether Ang II induces T-lymphocyte infiltration into the heart, we performed flow cytometry analysis with antibodies against CD45 or CD3e (a T cell marker). Flow cytometry revealed that Ang II induced infiltration of CD45+ leukocytes and CD3e+ T cells as early as 1 day after Ang II infusion (Figure 1C). To determine the types of cells producing IFN-γ in Ang II-treated hearts, we used antibodies against IFN-γ, F4/80 (macrophage), CD3e (T cell), CD4 or CD8 for flow cytometry analysis. This analysis revealed that IFN-γ was primarily produced by CD3e+ T cells (both CD4- and CD8-positive cells) in Ang II-treated hearts (Figure 1D), although macrophages could also produce IFN-γ at a lower level. Knockout of IFN-γ Reduces Ang II-induced Cardiac Fibrosis To determine whether IFN-γ has a biological role in Ang II-induced hypertensive cardiac inflammation and remodeling, we used IFN-γ-knockout (IFN-γ KO) mice. After 7 days of Ang II infusion, blood pressure increased equally in WT and IFN-γ-KO mice, and baseline systolic blood pressure was also similar in both genotypes (Table 1).
We also used echocardiography to evaluate LV size and function, but there was no difference between baseline and Ang II infusion for 7 days (data not shown). These results indicated that a short period of Ang II infusion (7 days) is not sufficient to cause cardiac hypertrophy. We next determined the role of IFN-γ in cardiac fibrosis. Masson's trichrome staining was performed to evaluate the degree of cardiac fibrosis. Compared with saline-treated mice, Ang II-treated mice developed cardiac fibrosis (Masson's trichrome-positive area) at day 7 after infusion (Figure 2A). In contrast, Ang II-induced cardiac fibrosis was significantly reduced in IFN-γ-KO mice. Moreover, knockout of IFN-γ also significantly prevented the accumulation of α-SMA-positive cells (a marker of myofibroblasts) and the expression of α-SMA (at both mRNA and protein levels) in cardiac tissue (Figures 2B, 2D and 2F). Finally, knockout of IFN-γ significantly reduced collagen I expression at both mRNA and protein levels (Figures 2C and 2E). Knockout of IFN-γ Suppresses Macrophage Infiltration and Downregulates TNF-α Expression in Ang II-treated Hearts To determine how IFN-γ regulates cardiac fibrosis, we measured inflammatory cell infiltration in Ang II-treated WT and IFN-γ-KO hearts by flow cytometry analysis at day 7. Ang II-treated WT hearts showed an increase in F4/80+ macrophage and CD3e+ T cell infiltration (Figure 3A-C). Ang II treatment also resulted in an increase in the number of Mac-2+ macrophages in WT hearts, while the number of Mac-2+ macrophages was significantly lower in IFN-γ-KO than WT hearts (Figure 3D). It was reported that peripheral blood T cell secretion of IFN-γ and TNF-α was increased in Ang II-infused WT mice [5]. To examine the effect of IFN-γ on the expression of TNF-α, we evaluated TNF-α mRNA expression at baseline and at 3 days and 7 days after Ang II infusion. Ang II infusion induced TNF-α mRNA expression as early as day 3 and at day 7 in WT mice; however, knockout of IFN-γ significantly inhibited the Ang II-induced expression of TNF-α at day 3 and day 7 (Figure 3E). Interestingly, knockout of IFN-γ had no effect on the expression of another cytokine, MIP-1α (Figure 3E). It has been reported that cell death may be involved in inflammation [17,18]. To evaluate whether the effect of IFN-γ on Ang II-induced inflammation and fibrosis involves cell death, we performed TUNEL assays in Ang II-infused hearts. Apoptotic cells were found in the hearts of both WT and IFN-γ-KO mice, and there was no significant difference between the two groups (Figure S1A). The cell types of the TUNEL-positive cells were analyzed by H&E staining of serial sections and by dual immunofluorescence staining for Troponin I (cardiomyocytes) and TUNEL. Apoptosis was not found in cardiomyocytes after Ang II infusion (Figure S1B). Moreover, serial-section analysis of H&E staining suggested that the TUNEL-positive cells were primarily infiltrated inflammatory cells (Figure S1A). These results indicated that the differences in inflammatory cell infiltration and cytokine expression in Ang II-induced inflammation and fibrosis are not related to cell death. We next examined how IFN-γ regulates inflammatory cell infiltration into hearts in response to Ang II infusion. IFN-γ Deficiency Decreases MCP-1 Expression In Vivo It has been reported that MCP-1 may be involved in the recruitment of leukocytes by IFN-γ-primed macrophages [19].
Therefore, we determined whether IFN-γ regulates inflammatory cell infiltration by promoting MCP-1 expression in Ang II-induced cardiac fibrosis. To determine the expression of MCP-1 in Ang II-infused hearts, immunohistochemical staining was performed; MCP-1 was predominantly expressed in α-SMA-negative cells around microvessels in Ang II-infused WT hearts (Figure 4A). To further identify the cell populations responsible for MCP-1 production in Ang II-infused hearts, we performed double immunofluorescence staining using antibodies against MCP-1 and F4/80 (macrophage). We found that MCP-1 was primarily expressed in macrophages in Ang II-treated hearts of WT mice, while the number of MCP-1-positive cells was significantly lower in IFN-γ-KO mice (Figure 4B). We next determined MCP-1 expression at the protein and mRNA levels in heart tissue by CBA Flex Set assays and quantitative real-time PCR. As shown in Figures 4C and 4D, the Ang II-induced increase in MCP-1 expression in WT mice was significantly reduced in IFN-γ-KO mice. Therefore, in the absence of IFN-γ, MCP-1 expression is impaired, and this may be responsible for the reduced infiltration of leukocytes into cardiac tissue. Macrophages Stimulate Expression of IFN-γ in T Cells That Is Essential for Macrophage Migration As we showed that IFN-γ was primarily expressed in T cells, we next sought to determine how IFN-γ expression is regulated in T cells. Macrophages and T cells were isolated from WT or IFN-γ-KO mice and used for co-culture experiments. As shown in Figure 5A, T cells or macrophages alone did not express IFN-γ; however, co-culture of WT T cells and macrophages significantly increased IFN-γ expression (19.8 ± 0.95-fold vs. WT macrophages, p<0.001; 20.9 ± 2.09-fold vs. WT T cells, p<0.001). Co-culture of IFN-γ-KO T cells with WT macrophages produced a significantly lower level of IFN-γ (p<0.01). These results indicate that IFN-γ is produced by T cells and that this process requires macrophages. To investigate whether macrophages promote IFN-γ production by T cells by acting directly on T cells or by secreting soluble mediators, we incubated T cells either with the supernatant of WT macrophages (SN Mφ) or with WT macrophages (Figure 5A). Compared with T cells co-cultured with WT macrophages, SN Mφ induced much lower IFN-γ secretion in T cells, suggesting that direct cell contact between macrophages and T cells promotes the expression of IFN-γ. To determine the role of IFN-γ in inflammatory cell infiltration, we performed transwell migration assays (Figure 5B, upper panel) to investigate whether IFN-γ deficiency reduces macrophage infiltration. As shown in Figure 5B, T cells or macrophages alone did not stimulate macrophage migration. Co-culture of WT T cells with WT macrophages increased macrophage migration significantly (2.5 ± 0.8-fold, co-culture of WT T cells and macrophages vs. co-culture of IFN-γ-KO T cells and macrophages, p<0.01), whereas co-culture of IFN-γ-KO T cells with WT macrophages failed to stimulate macrophage migration. IFN-γ in T Cells Augments the Expression of MCP-1 Because MCP-1 production was decreased in IFN-γ-KO mice, we determined whether IFN-γ-producing T cells regulate the expression of chemokines that stimulate macrophage migration by measuring chemokine production in co-cultures of T cells with macrophages. Similar to the results found in vivo, MCP-1 production, but not CCL5 production, was significantly increased in co-cultures of WT T cells with WT macrophages (Figures 5C and 5D).
In co-cultures of IFN-γ-KO T cells and WT macrophages, MCP-1 production was significantly lower than in co-cultures of WT T cells and WT macrophages (95.8 ± 1.2%, p<0.05) (Figure 5C). MCP-1 production also decreased significantly when T cells and macrophages were separated or when purified T cells were incubated with SN Mφ (Figure 5C, columns b, d, h). In contrast, WT macrophages incubated with SN T appeared to produce more CCL5, indicating that some factors derived from T cells act on macrophages to promote CCL5 production (Figure 5D, column i). Since T cells express AT1R, we tested whether Ang II directly stimulates IFN-γ expression in T cells: thymic T cells were treated with Ang II (100 nM) for 24 h, and IFN-γ mRNA was measured by RT-PCR. As shown in Figure S3, Ang II treatment did not significantly increase the expression of IFN-γ compared with untreated T cells. Discussion The inflammatory response plays a critical role in hypertension-induced cardiac remodeling; however, how inflammatory responses are activated and their specific roles in cardiac remodeling remain unclear. We showed that knockout of IFN-γ significantly reduced Ang II infusion-induced inflammation and cardiac fibrosis in mice. T cells infiltrating mouse hearts were a major source of IFN-γ production, and the contact-mediated interaction between IFN-γ-producing T cells and macrophages stimulated macrophage production of MCP-1, which recruited more macrophages and fueled a positive inflammatory feedback loop. Ang II can affect immune responses by amplifying the expression of cytokines and chemokines in macrophages, regulating dendritic cell differentiation, and promoting lymphocyte proliferation [20]. Inflammation is regulated by the presence of immune cells such as T cells and macrophages and by released inflammatory mediators such as cytokines and chemokines. Cytokines are critical regulators of immunity and inflammation and regulate stages of hypertension [5,21]. Cytokines such as IL-1, IL-6, IL-10, IFN-γ, TNF-α and TGF-β are highly expressed in Ang II-treated vascular systems and exhibit pro- and anti-fibrotic actions [21,22]. Ang II-induced fibrosis in the heart and kidneys is mediated by blood pressure and calcineurin-dependent pathways [23]. We demonstrated that Ang II infusion rapidly induced the expression of the cytokine IFN-γ in hearts (Figure 1). Moreover, we found no IFN-γ expression in cardiomyocytes or fibroblasts (data not shown). Our result is consistent with previous studies documenting that IFN-γ is produced primarily by T cells and natural killer (NK) cells, and possibly by antigen-presenting cells (e.g. macrophages and dendritic cells) in response to IL-12, but not by other cell types [24,25,26]. Fairweather et al reported that IFN-γ deficiency increased chronic myocarditis, pericarditis and fibrosis after CB3 virus infection. Interestingly, they also found that IFN-γ deficiency did not significantly alter myocardial inflammation during acute myocarditis at day 12 after CB3 infection, although it reduced viral replication in the heart [27,28]. In contrast to infection, studies of an Ang II-accelerated atherosclerosis model have suggested that IFN-γ is a key factor in the pathogenesis of atherosclerosis [29]. Our results indicated that IFN-γ has a key role in the progression of Ang II-induced cardiac inflammation and fibrosis (Figures 2 and 3).
Although knockout of IFN-γ did not prevent the increase in systolic blood pressure after Ang II infusion (Table 1), it prevented acute Ang II-induced inflammation (day 7 after infusion), such as macrophage and T cell infiltration into cardiac tissue and expression of TNF-α (Figure 3). In agreement with our results, neutralization of IFN-γ during challenge with antigen plus IL-18 inhibited the combination of eosinophilic infiltration, lung fibrosis and periostin deposition, or the combination of neutrophilic infiltration and airway hyperresponsiveness, respectively [12]. Our results showed that T cells accumulated and produced IFN-γ in mouse hearts during Ang II infusion (Figures 1C-D and 3A-B). Furthermore, the contact-mediated interaction between macrophages and T cells stimulated MCP-1 production and macrophage migration in WT mice (Figure 5), which was reduced in IFN-γ-KO mice (Figure 4). We demonstrated that IFN-γ-producing T cells are essential for the infiltration of macrophages into Ang II-treated hearts (Figure 5). It is known that T cells play a major role in mediating inflammatory disorders by contributing to or causing tissue damage through the release of IFN-γ. Several reports showed that CD4+ T cells producing IFN-γ controlled the differentiation, migration and activation of macrophage lineage cells in myocarditis, the central nervous system and Ang II-induced kidney injury [30,31,32]. CD8+ T cells and adipose tissue interact with each other to recruit macrophages and activate a local inflammatory cascade that mediates aortic aneurysm [33]. We found that MCP-1 expression was increased in Ang II-treated hearts, and when IFN-γ was deleted, MCP-1 production was significantly reduced (Figures 4 and 5C). Previous work from others and from us demonstrated that MCP-1 is necessary to induce Ang II-mediated fibrosis in the myocardium [14,34]; MCP-1 is expressed in macrophages, endothelium, vascular smooth muscle cells, and glomerular endothelial and epithelial cells [35,36]. We found that WT macrophages interacting directly with WT T cells produced more MCP-1 than WT macrophages interacting with IFN-γ-KO T cells (Figure 5C), providing an explanation of how IFN-γ is responsible for amplifying the expression of MCP-1. Therefore, our findings establish an interaction between T cell-produced IFN-γ and macrophage production of MCP-1. Lin et al used transgenic mice, called "macrophages insensitive to interferon-γ" mice, to assess the effects of IFN-γ signaling on macrophage lineage cells in response to infection with lymphocytic choriomeningitis virus; they reported that CD4+ T-cell production of IFN-γ promotes signaling in macrophage lineage cells, which controls the production of chemokines, i.e. MCP-1, and the recruitment of macrophages to the central nervous system [31]. Coelho et al reported that priming of macrophages with IFN-γ stimulated production of MCP-1, which may drive tissue chemokine production and inflammation and play a significant role in the pathogenesis of Chagas disease [19].
While this manuscript was under review, Pore et al reported a similar observation: co-incubation with outer membrane protein A of Shigella flexneri 2a-pretreated macrophages enhances the production of IFN-γ by outer membrane protein A-primed CD4+ T cells, suggesting that outer membrane protein A may enhance IFN-γ expression in CD4+ T cells through the induction of IL-12 production in macrophages. They demonstrated that TLR2 activation and antigen presentation are responsible for the optimal production of IFN-γ by macrophage:CD4+ T cell co-cultures [37]. We found that the expression of another CC chemokine, macrophage inflammatory protein 1α (MIP-1α), did not differ between WT and IFN-γ-KO mice at day 7 after Ang II infusion (Figure 3E). Although MCP-1 and MIP-1α both belong to the CC chemokine family, Natasa et al revealed that intraocular MCP-1 expression was significantly upregulated in patients with intermediate uveitis (IU), whereas intraocular MIP-1α levels did not differ between IU patients and controls [38]. Our results reveal an important role of IFN-γ in Ang II infusion-induced inflammation and cardiac fibrosis, which is summarized in Figure 6. IFN-γ is essential for inflammatory cell infiltration, expression of pro-inflammatory cytokines, and cardiac fibrosis in Ang II-treated mouse hearts. Our results demonstrate that there is a reciprocal interaction between infiltrated T cells and macrophages in the heart, in which IFN-γ produced by T cells stimulates macrophage expression of MCP-1, which causes the recruitment of macrophages and leukocytes and cardiac inflammation and fibrosis. Animals and Ethics Statement B6.129S7-Ifng tm1Ts/J (IFN-γ−/−) mice and wild-type littermates were purchased from the Jackson Laboratory [39]. Mice were maintained under specific-pathogen-free conditions in the animal facility at the Beijing Heart Lung and Blood Vessel Diseases Institute. The mice were given a standard diet. The investigations conformed to the US National Institutes of Health Guide for the Care and Use of Laboratory Animals (publication no. 85-23, 1996) and were approved by the Animal Care and Use Committee of Capital Medical University. Mouse Model of Ang II Infusion-induced Cardiac Fibrosis Hypertensive cardiac fibrosis was induced in 6-8-week-old IFN-γ-knockout (IFN-γ-KO) mice and littermate wild-type (WT) mice by subcutaneous infusion of Ang II (Sigma-Aldrich, St. Louis, MO) at a dose of 1500 ng/kg/minute using osmotic minipumps (Alzet model 1007D; DURECT, Cupertino, CA) as we described before [14]. We also infused mice for different numbers of days to investigate the time course. Systolic blood pressure was measured by the tail-cuff system (Softron BP-98A; Softron, Tokyo, Japan), and values were derived from a mean of 10-20 measurements per animal at each time point. All animals were euthanized by an overdose of pentobarbital (100 mg/kg) at the end of each treatment period. Histopathology and Immunohistochemistry Hearts from WT and IFN-γ-KO mice fixed in 10% formalin were processed and paraffin embedded. Heart sections (4 µm) were then stained with hematoxylin and eosin (H&E) and Masson's trichrome reagent [40]. The percentage of fibrosis (blue staining) relative to total tissue was calculated with the NIS-Elements quantitative automatic program (Nikon, Tokyo, Japan) as the average of at least 8 images per heart in a double-blind fashion. Serial, transverse cryosections (7 µm thick) of hearts were cut with a CM1950 Frigocut cryostat (Leica, Wetzlar, Germany) at −20°C and were kept at −80°C.
Immunohistochemical staining was performed as described [41]. Heart sections were stained with rabbit antibodies against Mac-2 (1:500 dilution, Santa Cruz Biotechnology, Santa Cruz, CA), collagen I (1:1000) and α-smooth muscle actin (α-SMA, 1:200; all Abcam, Cambridge, MA), and rabbit IgG or rabbit serum instead of primary antibody was used as the negative control (Figure S2). Peroxidase activity was visualized with diaminobenzidine, and sections were counterstained with hematoxylin. Images were obtained with a CCD camera under a microscope (ECLIPSE 80i/90i, Nikon, Japan) at ×200 magnification, and 10-20 fields per section were chosen randomly. TUNEL Staining The TUNEL procedure was performed with the In Situ Apoptosis Kit (Promega, Madison, WI). Five-micron-thick frozen sections were cut and air-dried at 4°C for 48 hours. Following cold acetone fixation for 10 minutes, the slides were air-dried for 2 minutes before 3 rinses in PBS. The slides were then rinsed in 1× TdT buffer for 5 minutes before careful application of the TdT reaction mix to the tissue sections on each slide, which were then incubated in a humid chamber at 37°C for 60 minutes, followed by 3 PBS washes. The slides were then stained with DAPI and coverslipped using Vectashield for viewing on a Nikon epifluorescence microscope. Flow Cytometry Hearts were harvested and cardiac cell suspensions were prepared as described [42]. Briefly, hearts were harvested, then minced cardiac tissue was digested with 0.1% collagenase B and 2.4 U/mL dispase II (both Roche Molecular Biochemicals) at 37°C for 30 min. The dissolved tissue was then passed through a 70 µm sterile filter (Falcon, BD, Franklin Lakes, NJ), yielding a single-cell suspension. Cells were washed twice with Hanks' balanced salt solution (HBSS) buffer with 2% FBS, then underwent cell-surface and intracellular antigen staining with fluorochrome-conjugated monoclonal rat anti-mouse antibodies against CD45, CD3e, CD4, CD8, F4/80 or IFN-γ (all BioLegend, San Diego, CA) at 4°C for 30 min. Flow cytometry was used to characterize the infiltration of T cells and macrophages into hearts of saline- and Ang II-treated mice and the cellular source of IFN-γ, with Epics XL equipment (Beckman Coulter, Miami, FL). Data were analyzed with Summit software (Beckman Coulter). Real-time PCR Total ventricular RNA was extracted with TRIzol reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's protocol. PCR amplification involved use of the iQ5 Real-Time PCR Detection System (Bio-Rad, Hercules, CA) with SYBR Green JumpStart Taq ReadyMix (Takara, Otsu, Shiga, Japan) and primers for mouse IFN-γ, collagen I, α-SMA, TNF-α, MCP-1, MIP-1α and GAPDH. Melting curve analysis was performed at the end of each PCR reaction. The housekeeping gene GAPDH was used as the control: the expression of those genes was expressed as a ratio to that of GAPDH. Primer sequences were as follows: collagen I, forward 5′-GAGCGGAGAGTACTGGATCG-3′ Western Blot Analysis Heart tissues were harvested at the end of each treatment period, immediately frozen in liquid nitrogen, and then homogenized in lysis buffer. Western blot analysis was performed as described [21].
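The paper states only that target-gene expression was expressed as a ratio to GAPDH, without giving the quantification formula. As a minimal, illustrative sketch, the snippet below uses the common 2^−ΔΔCt (Livak) method for SYBR Green data; the function name and all Ct values are hypothetical placeholders, not data from this study.

```python
# Illustrative only: one common way to express a target gene relative to GAPDH
# from real-time PCR Ct values (2^-ddCt, Livak method). The paper does not
# state which formula was used, so treat this as an assumption.

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Fold change of a target gene vs. a control sample, normalized to GAPDH."""
    d_ct_sample = ct_target - ct_gapdh              # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for a target gene in an Ang II-infused heart
# versus a saline-treated control.
fold = relative_expression(ct_target=24.1, ct_gapdh=18.0,
                           ct_target_ctrl=27.5, ct_gapdh_ctrl=18.2)
print(f"Target expression relative to saline control: {fold:.1f}-fold")
```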
Cytokine Measurements To analyze the effect of Ang II on cytokine and chemokine production in hearts, heart tissues were harvested at the end of the experiment, immediately frozen in liquid nitrogen, and then homogenized in lysis buffer. After centrifugation, the tissue supernatants were collected and analyzed using the Cytometric Bead Array (CBA) Flex Set system (BD Biosciences, San Jose, CA) to measure secreted MCP-1. To measure the concentrations of IFN-γ, MCP-1 and CCL5 in cell culture media, 50 µL of supernatant from each sample was incubated with the CBA Flex Set bead assay for 2 h. The fluorescence produced by the beads was measured on a FACSCalibur flow cytometer (BD Biosciences) and analyzed with the associated software. Isolation of Naïve T Cells and Macrophages The thymus and attached blood vessels were removed from 6- to 8-week-old mice, washed in phosphate-buffered saline (PBS) and ground with frosted glass in RPMI 1640 medium (HyClone; Thermo Fisher Scientific, Waltham, MA). Suspensions were cleared of connective tissue by filtration and then underwent Ficoll-gradient centrifugation (HaoYang, TianJin, China) to clear residual erythrocytes and non-lymphocytes. T cells were cultured in RPMI 1640 medium containing 10% fetal bovine serum (FBS), 1 µg/mL anti-CD3e (eBioscience, San Diego, CA), 1 µg/mL anti-CD28 (BioLegend, San Diego, CA), penicillin and streptomycin. Macrophages were isolated from the bone marrow of mice and grown in macrophage colony-stimulating factor (M-CSF; PeproTech, Rocky Hill, NJ) as we described [43], with minor modification. Briefly, bone-marrow cells were isolated from the femurs and tibias of 8- to 12-week-old mice. Suspensions were cleared of adipose tissue and connective tissue by filtration and then underwent Ficoll-gradient centrifugation to clear residual erythrocytes and non-lymphocytes. Macrophages of myeloid origin were cultured in DMEM medium (HyClone, Waltham, MA) supplemented with 10% heat-inactivated FBS in the presence of 50 ng/mL M-CSF. In Vitro Migration Assay Cell migration was quantitated in duplicate using 24-well Transwell inserts with polycarbonate filters (8-µm pore size) (Corning Costar, Acton, MA). Macrophages (2.5×10³ in 250 µL DMEM high-glucose medium/10% FBS) were added to the upper chamber of the insert. The lower chamber contained macrophages (1.0×10⁵) and/or activated T cells (1.0×10⁶), isolated from WT or IFN-γ-KO mice, in 1 mL RPMI 1640 medium/10% FBS. The plates were incubated at 37°C in 5% CO₂ for 12 h. Cells that had migrated were counted using DAPI staining. Statistical Analysis All data are expressed as means ± SEM. The unpaired two-tailed t-test was used to compare two groups. Comparisons between the wild-type (WT) and IFN-γ-KO groups were performed using one-way ANOVA with the Newman-Keuls multiple comparison test in GraphPad Prism (GraphPad Software). When a significant difference (p<0.05) was seen in the main effect (group differences), the comparison was further analyzed by the unpaired two-tailed t-test. For the other comparisons, the significance of differences between group means was determined by one-way ANOVA. Differences were considered statistically significant at p<0.05. Figure S1 TUNEL assay in Ang II-infused hearts. A. Apoptotic cells were found in the hearts of both WT and IFN-γ-KO mice. At 7 days after Ang II infusion, serial slides of the hearts were examined. Apoptotic cells were identified by TUNEL staining; the cell types of the TUNEL-positive cells were analyzed by H&E staining.
Bar graph shows semi-quantification of the ratio of TUNEL+ cells to total cells. Arrows indicate TUNEL-positive cells. Magnification: ×200. B. Apoptosis was not found in cardiomyocytes after Ang II infusion. Dual immunofluorescence staining for Troponin I (red, cardiomyocytes), TUNEL (green) and DAPI (blue, nuclei). Arrows indicate TUNEL-positive cells. Magnification: ×400. (TIF) Figure S2 Negative control in which the primary antibody was replaced by rabbit IgG. Heart sections were stained with antibodies against Mac-2, collagen I and α-SMA, and rabbit IgG or rabbit serum instead of primary antibody was used as the negative control. Magnification: ×400. (TIF) Figure S3 No difference in IFN-γ expression between Ang II-treated and untreated T cells. Thymic T cells were treated with Ang II (100 nM) for 24 h, and IFN-γ mRNA was measured by RT-PCR. The bar graph shows that Ang II treatment did not significantly increase IFN-γ expression compared with untreated T cells. (TIF)
2014-10-01T00:00:00.000Z
2012-05-02T00:00:00.000
{ "year": 2012, "sha1": "e5e4981ff810e0e76f0e65ada81c794431e32b11", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0035506&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5e4981ff810e0e76f0e65ada81c794431e32b11", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119342219
pes2o/s2orc
v3-fos-license
What Determines the Distribution of Shallow Convective Mass Flux through a Cloud Base? The distribution of cloud-base mass flux is studied using large-eddy simulations (LESs) of two reference cases: one representing conditions over the tropical ocean and another one representing midlatitude conditions over land. To examine what sets the difference between the two distributions, nine additional LES cases are set up as variations of the two reference cases. It is found that the total surface heat flux and its changes over the diurnal cycle do not influence the distribution shape. The latter is also not determined by the level of organization in the cloud field. It is instead determined by the ratio of the surface sensible heat flux to the latent heat flux, that is, the Bowen ratio B . This ratio sets the thermodynamic efficiency of the moist con- vective heat cycle, which determines the portion of the total surface heat flux that can be transformed into mechanical work of convection against mechanical dissipation. The thermodynamic moist heat cycle sets the average mass flux per cloud h m i , and through h m i it also controls the shape of the distribution. An expression for h m i is derived based on the moist convective heat cycle and is evaluated against LES. This expression can be used in shallow cumulus parameterizations as a physical constraint on the mass flux distribution. The similarity between the mass flux and the cloud area distributions indicates that B also has a role in shaping the cloud area distribution, which could explain its different shapes and slopes observed in previous studies. Introduction Since the seminal work on parameterization of cumulus clouds by Arakawa and Schubert (1974), AS-74, the understanding of the spectral distribution of cloud properties and how it is controlled by the large-scale environment remains an obstacle for the formulation of convection parameterizations. In their paper AS-74 wrote: "Our final problem is to find the mass flux distribution function. The real conceptual difficulty in parameterizing cumulus convection starts from this point. We must determine how the large-scale processes control the spectral distribution of clouds, in terms of the mass flux distribution function, if they indeed do so. This is the essence of the parameterization problem." With this in mind, it is the goal of our paper to determine how the mass flux distribution of shallow cumulus clouds p(m) is controlled by the underlying physical processes and large-scale conditions. In the formulation of the AS-74 parameterization, the mass flux distribution function refers to the spectral distribution of cloud subensembles. The subensembles encompass clouds of different types based on their sizes and cloud top heights. This distribution is estimated in AS-74 by numerical solution of the Fredholm integral equation assuming convective quasi-equilibrium (QE). Here, we instead regard the mass flux distribution as an asymptotic distribution of the spectral subensembles that are reduced to single clouds, which then can be classified as a cloud population distribution. In this way, we approach the problem from another point of view: instead of assuming convective QE and solving for the spectral distribution of mass fluxes numerically, we focus on the underlying physical principles that determine the shape of p(m) and its parameters. 
The decision to examine the population distribution p(m) instead of the spectral distribution based on cloud types comes from the need to formulate a scale-aware parameterization. As the model resolution increases to the kilometre scale, the separation of the cloud ensemble into spectral bins that represent clouds of different types loses statistical significance. Instead, a cloud sample within a grid box can be viewed as a random sample of clouds drawn from the cloud population. The clouds are grouped by the grid box boundaries regardless of the cloud types. The total mass flux in a grid box M is then a sum over the sampled clouds, M = ∑ n i=1 m i , and its spatial distribution p(M) is characterized by a spectrum of shapes starting from a normal-like distribution on the coarse grids, toward a long-tailed distribution on the kilometre-scale grids Sakradzija et al. 2015). The distribution of the total mass flux within model boxes p(M) has been parameterized based on the principles of statistical-mechanics, and has been applied to deep convection by Plant and Craig (2008), and further developed to a parameterization of shallow convection by Sakradzija et al. (2015Sakradzija et al. ( , 2016. In the context of such a parameterization, it is important to understand the physical constraints on p(m) because fluctuations of the subgridscale convective tendencies influence convective regimes, organization as well as energetics of the explicitly modelled atmospheric flows (Sakradzija et al. 2016). The evidence about p(m) based on observations is not extensive. A few observational studies that examined p(m) among other cloud statistics were focused on cumulonimbus clouds, for which p(m) was fitted to a log-normal distribution function (LeMone and Zipser 1980;Jorgensen and LeMone 1989). More evidence about p(m) has been provided by modelling studies using cloud-resolving models (CRM) or large-eddy simulations (LES). In a CRM study of an equilibrium deep-convective ensemble under homogeneous large-scale forcing, p(m) was fitted to an exponential function . This fit was supported by theoretical derivation using the formalism of the Gibbs canonical ensemble from statistical mechanics . As more computing power allowed performing simulations with resolutions on the order of 100 m, it was revealed that the shape of this distribution is dependent on the horizontal resolution. With kilometre-scale resolution, where the deep cumulus clouds are not fully resolved, p(m) takes an exponential-like shape, while the shape changes towards a power-law distribution when using higher resolution (Scheufele 2014). Scheufele (2014) further demonstrated that the power-law-like shape emerges as a result of self-organization of the individual cloud updrafts. For shallow cumulus clouds over the ocean, Sakradzija et al. (2015) found that the overall shape of the mass flux distribution results from the superposition of two distribution modes, one corresponding to the active buoyant clouds and the other one to non-buoyant clouds. The two modes of the cumulus cloud distribution deviate from an exponential shape due to correlation between cloud mass fluxes and cloud lifetimes. Each mode can be described using a Weibull distribution with two parameters, shape k and scale λ (see Eq. 13, and also Sakradzija et al. 2015). In the case of shallow cumulus clouds, the shape parameter of the Weibull distribution is less than one, k < 1, which signifies that it is a heavy-tailed distribution. 
The combination of at least two Weibull distribution modes results in a distribution of the shallow cumulus mass flux that takes an overall power-law-like shape (see section 3). Hence, it appears that different mechanisms can lead to power-law distributions (see e.g. Mitzenmacher 2003;Newman 2005). Moreover, either a power-law or a log-normal distribution can be generated by the same underlying mechanism under slightly different conditions (e.g. Mitzenmacher 2003) and it is often difficult to rule out one or the other functional form. It might be possible to gain more insight into the mass flux distribution p(m) by making a parallel to the distribution of cloud sizes. Based on the findings of modeling and observational studies, there is no consensus on the functional form that best describes the cloud size distribution. The suggested functions span from exponential (Plank 1969;Hozumi et al. 1982;Astin and Latter 1998), over log-normal (López 1977;LeMone and Zipser 1980;Jorgensen and LeMone 1989) to power-law functions with single (Lovejoy 1982;Zhao and Di Girolamo 2007;Wood and Field 2011;Dawe and Austin 2012) or double slopes (Cahalan and Joseph 1989;Sengupta et al. 1990;Nair et al. 1998;Benner and Curry 1998;Neggers et al. 2003;Trivej and Stevens 2010;Heus and Seifert 2013). Most studies, in particular more recent ones, suggest power-laws, with or without a break in the power-law scaling at the intermediate cloud sizes. This scale break manifests itself as a change in the slope of a power-law distribution or as an exponential cut-off near the distribution tail. However, no explanation supported by evidence has been provided for the observed differences in the distribution shapes and slopes, and some of these differences may just reflect different meteorological conditions. Given that the characteristics of cloud updrafts are substantially different between tropical oceanic and midlatitude continental cumulus convection (Xu and Randall 2001), the dependency of p(m) on meteorological conditions is not surprising. We nevertheless suspect that there are some dominant macroscopic parameters or processes that determine the characteristic cloud size and the mass flux that cause the variations in p(m) between different cases and locations. Instead of assuming a distribution functional form and estimating the distribution parameters by statistical fitting of modeled or observed clouds, we set out to identify the physical mechanisms that might lead to a specific distribution functional form and a characteristic scale. We use LES of shallow cumulus convection based on two measurement campaigns, RICO (Rain In Cumulus over the Ocean) to represent conditions over the ocean, and measurements in an ARM (Atmospheric Radiative Measurements) site to represent conditions over land (Section 2). We aim to reveal what makes the difference in p(m) between these two reference cases and to derive a parameterization for the distribution parameters that applies to oceanic and land conditions. In nine additional simulations, the two reference cases are modified (see section 2) to test the impacts of the large-scale forcing and surface conditions on p(m). Cloud lifecycles are studied using the method of cloud tracking, also described in Section 2. This method provides the lifetimeaveraged cloud mass flux distribution defined in section 3. Several reasons for the difference in p(m) between the two reference cases are hypothesized and tested in section 4. 
In section 5 we describe the physical principle that explains the difference between the two characteristic distribution shapes. The distribution is then fitted to the mixed Weibull function to estimate the remaining unknown parameters (section 6). Conclusions are given in section 7. LES case studies Simulations were performed using the University of California, Los Angeles, large-eddy simulation (UCLA-LES) model (Stevens et al. 1999;Stevens 2010). A detailed description of the UCLA-LES model and the specification of the parameters and constants used in our study are provided in Stevens (2010). The UCLA-LES model solves the Ogura-Phillips anelastic equations, discretised over the doubly periodic uniform Arakawa C-grid. The prognostic variables include the wind components u, v and w, liquid water potential temperature θ l , total water mixing ratio q t , and in the precipitating cases (see the next paragraph), rain mass mixing ratio q r and rain number mixing ratio N r . In the precipitating cases, the double-moment warm-rain scheme of Seifert and Beheng (2001) is used to compute the cloud microphysics. The subgrid turbulent fluxes are computed using the Smagorinsky-Lilly scheme (as described in Stevens et al. 1999;Stevens 2010). A third-order Runge-Kutta method is used for numerical time integration, a directionally split monotone upwind scheme is used for the advection of scalars, and directionally split fourth-order centered scheme is used for the momentum advection (see Stevens 2010). The effects of radiation are prescribed as net forcing tendencies. As a first reference case (R-base), an LES case study of shallow convection based on the Rain In Cumulus over the Ocean (RICO) measurement campaign (Rauber et al. 2007) is used to represent conditions over the tropical ocean. The field measurements were taken during the winter season 2004/2005 in the trade-wind region of the Western Atlantic upwind of the islands of An-tigua and Barbuda (Rauber et al. 2007). The initial profiles of potential temperature θ , specific humidity q v and the horizontal winds u and v are constructed as piece-wise linear fits of the averaged profiles from the radiosonde measurements taken over Barbuda during a period with no disturbance due to mesoscale convective systems ( Fig. 2 and Table 2 The start of the simulation is set to 11:30 UTC (6:30 am by local time), a time before convection initiates, and is integrated over a single diurnal cycle until 02:00 UTC next day (21:00 pm by local time). The initial vertical profiles of the thermodynamics quantities are computed based on the averaged soundings from that day ( Fig. 1 in Brown et al. 2002). The wind direction did not change significantly during that day, so the initial wind profile is set to a constant wind of u = 10 m s -1 and v = 0 m s -1 at all levels. The geostrophic wind is also set to these values, while the background wind is set to u = 0 m s -1 and v = 7 m s -1 . At the surface, the turbulent heat fluxes are prescribed following Brown et al. (2002) (see their Fig. 3) and exhibit a strong diurnal cycle. Weak large-scale forcing tendencies due to horizontal advection of moisture and temperature as well as radiative cooling rates are prescribed following the diurnal cycle; however they have only a minor impact on the simulation. The two reference LES cases, R-base and A-base, are further modified to test the effects of surface conditions, diurnal cycle and large-scale forcing on the cloud statistics (Table 1) Figure 1a). 
Note that the total surface heat flux in the RICO-based cases is in average more than twice lower than the total surface heat flux in the ARM-based cases (Table 1). By comparing the maximum values of the total surface heat flux or of the buoyancy flux near the peak of the diurnal cycle , the difference between the two reference cases is even up to four times. As expected, the mean thermodynamic state of the subcloud layer is affected by the changes in the Bowen ratio. Increase of the Bowen ratio from 0.03 to 0.33 in the RICO-based cases causes an increase of the liquid water potential temperature by 1 K, and a decrease in the total water mixing ratio by 1 g/kg, as averaged over a 500 m thick layer starting from the surface. In the ARM case, a decrease of the Bowen ratio from 0.33 in A-base to 0.03 in A-0.03 causes a decrease of the liquid water potential temperature by 2 K, and an increase of the total water mixing ratio by 2 g/kg, averaged over a 500 m thick layer at the surface. Clearly, all these test cases have a different thermodynamic state in the boundary layer, even though the Bowen ratios might have the same values. The depth of the subcloud layer is controlled by the surface buoyancy flux F buoy (Stevens 2007) with the higher cloud base heights in the simulations with higher surface buoyancy fluxes (Figs. 1d and 1e). The rate of growth of the subcloud layer is also influenced by B and it is higher in the cases with higher B (Figure 1e, see also Schrieber et al. 1996). Convective clouds are initiated sooner for the higher values of B ( Figure 1f). Except for the R-base case where the surface fluxes are not fixed, the top of the cloud layer does not seem to be significantly influenced by the changes in B or F buoy (Figure 1f). This indicates that the processes in the cloud layer are to some extent detached from the surface forcing. In the second group of simulations (A-lowflx, A-short, A-long; Figure 2), we have kept the Bowen ratio to its assigned values, but changed other key aspects of the forcing that are distinct between the two reference cases. The effect of the diurnal cycle in ARM is tested by shortening it by 1/3 (A-short), or by prolonging it by 1/3 (A-long), by applying these changes to the cycle period of the surface fluxes (see Figs. 2b,c) and the large-scale forcing tendencies. The effect of the value of the total surface heat flux is tested by reducing it by 20% in ARM (A-lowflx). As can be seen in Figure 2e, the rate of the growth of the cloud base height is not affected by these changes. However, if there is more time for the cloudy boundary layer to develop, as in A-long, a higher cloud base height is reached. The cloud layer deepens further either with an increase in the forcing period or with stronger total surface heat fluxes, although the differences are only around 100 m (Fig. 2f). Cloud tracking The cloud tracking algorithm developed by Heus and Seifert (2013) is applied to the simulated cloud fields in post-processing of the LES simulations. In the tracking algorithm, clouds are identified as the adjacent grid points that hold the liquid water path exceeding a threshold value of 5 gm -2 . In that way, the identified cloud area is a projection of a cloud from all vertical levels that can be tracked through space and time. Using the temporal resolution of one minute, cloud areas, vertical velocities and cloud lifetimes are recorded for each cloud in the simulation. 
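To make the identification step concrete, the sketch below labels cloudy columns in a liquid-water-path field using the 5 g m⁻² threshold described above and measures each cloud's projected area. It is only a minimal illustration, not the Heus and Seifert (2013) tracking code: the doubly periodic boundaries, the time dimension and the cloud-splitting step are omitted, and the LWP field, grid spacing and function name are assumed placeholders.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of the cloud-identification step only: label adjacent columns
# whose liquid water path exceeds 5 g m^-2 and measure each cloud's projected
# area. Periodic boundaries and tracking in time are ignored here.

def identify_clouds(lwp, dx=25.0, threshold=5.0):
    """lwp: 2-D liquid water path [g m^-2]; dx: grid spacing [m] (assumed)."""
    mask = lwp > threshold                       # cloudy columns
    labels, n_clouds = ndimage.label(mask)       # connected components
    # projected area of each cloud [m^2]: number of columns times cell area
    areas = ndimage.sum(mask, labels, index=np.arange(1, n_clouds + 1)) * dx * dx
    return labels, areas

rng = np.random.default_rng(0)
lwp = rng.gamma(shape=0.3, scale=20.0, size=(256, 256))  # synthetic field
labels, areas = identify_clouds(lwp)
print(f"{areas.size} clouds, mean projected area {areas.mean():.0f} m^2")
```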
A cloud splitting algorithm is then used to separate and track the individual cloud elements that form the multicore clouds or the merged cloud clusters. These cloud elements are defined as holding a buoyant core with the maximum in-cloud virtual potential temperature θ_v excess larger than a chosen threshold of 0.5 K. More details and validation of the tracking method are provided in Heus and Seifert (2013). To develop a cloud parameterization based on the mass flux approach, the cloud mass flux has to be estimated near the cloud-base level. For this reason, we have developed a secondary tracking routine, as in Sakradzija et al. (2015), in which we record the area that every cloud occupies at the level that lies 100 m above the lifting condensation level (LCL). We define this area as the area that contains all the points with liquid water content greater than zero. Cumulus cloud population statistics The upward flux of mass through cloud base of the i-th cloud is defined as

m_i = ρ a_i w_i,   (1)

where a_i is the area [m²] and w_i the average vertical velocity [m s⁻¹] of the i-th cloud at the sampling level near cloud base, and ρ is the air density at that level. The two reference LES cases exhibit distinct horizontal and vertical extents of the clouds, number of clouds and their spacing, due to different initial conditions, surface and large-scale forcing. The mass flux distributions corresponding to these two reference cases have different shapes and they cover different ranges of the mass flux values (Fig. 3). The distribution of the cloud base mass flux in the ARM case shows a straight-line shape on a log-log plot, similar to a power-law distribution over a range of three orders of magnitude. In contrast, the distribution in the RICO case shows a more concave shape. In previous literature on the cloud size distribution, this type of concave shape has often been identified as a double power-law distribution with two distinct slopes and a scale-break point at the intermediate cloud size (Cahalan and Joseph 1989; Sengupta et al. 1990; Nair et al. 1998; Benner and Curry 1998; Neggers et al. 2003; Trivej and Stevens 2010; Heus and Seifert 2013). To make a parallel to these studies, we identify the scale-break in the mass flux distribution of the R-base case at a value of the cloud base mass flux close to 1·10⁵ kg s⁻¹ (Fig. 3). Based on the qualitative comparison of the mass-flux distributions of the R-base and A-base case, we conclude that there is no universality in the distribution slopes on a log-log plot (Fig. 3). As we will show in section 4c, the slope of the mass flux distribution changes with the change of a control parameter of the simulations. The sampling variability of the mass-flux distributions is very low in both reference cases except near the end of the right tails of the distributions (Fig. 3), which is a sign of a limited sample size of the largest possible cloud mass flux values. This portion of the distribution tail has higher sampling variability based on the 95% confidence intervals computed for each distribution bin (shaded areas in Fig. 3). The confidence intervals were calculated using a bootstrap method with replacement using 1000 random samples. As a key contributor to the cloud base mass flux, the cloud area a_c is distributed qualitatively similarly to the mass flux (Fig. 4a). The difference between the two reference LES cases shows similar characteristics to that between the two mass flux distributions. So, knowledge about the physical mechanism that shapes p(m) might also be sufficient to describe p(a_c).
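The per-bin bootstrap used for the shaded confidence intervals in Fig. 3 can be sketched as follows. The cloud sample below is a synthetic stand-in for the tracked cloud-base mass fluxes, and the bin edges and sample size are illustrative only.

```python
import numpy as np

# Sketch of the bootstrap for per-bin 95% confidence intervals: resample the
# cloud population with replacement 1000 times and take the 2.5th/97.5th
# percentiles of the binned density in every mass-flux bin.

rng = np.random.default_rng(1)
m_sample = rng.weibull(0.8, size=5000) * 2.0e4           # [kg s^-1], synthetic
bins = np.logspace(2, 6, 30)                             # logarithmic bins

def binned_density(sample, bins):
    hist, _ = np.histogram(sample, bins=bins, density=True)
    return hist

boot = np.array([binned_density(rng.choice(m_sample, size=m_sample.size,
                                            replace=True), bins)
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)        # 95% confidence band
print(lo[:3], hi[:3])
```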
The cloud area distribution of the A-base case shows a power-law-like shape with a scale-break around the value of 10 6 m 2 . The scale break in the ARM-base case is located at a scale an order of magnitude larger than the one of the R-base case. These two cloud Why are the two reference population distributions different? Is the distribution shape changing under the influence of the large-scale forcing or of the surface conditions? We address these questions in the following section. The three hypotheses The main differences between the two reference LES cases are in the existence of a strong diurnal cycle over land, strong self-organization of clouds over ocean and in the magnitude and partitioning of the surface turbulent heat fluxes (Table 1). Other aspects of the large-scale forcing are as well different between the two reference cases. However, we rule out those differences as a cause of the different distribution shapes because it was hypothesised and shown in previous studies that the intensity of the convective updrafts was insensitive to changes in the large-scale forcing (e.g. Robe and Emanuel 1996;Cohen and Craig 2006;Plant and Craig 2008). Based on these facts, we propose the three hypotheses that might explain the divergence of the mass flux distribution between the two reference LES cases: a. diurnal cycle of convection determines the distribution p(m), b. convective self-organization determines the distribution p(m), and c. surface fluxes determine the distribution p(m). In the following, we test the three hypotheses by analysing all eleven LES cases (Table 1). a. The first hypothesis: diurnal cycle of convection Here we test if changes in the forcing associated with the convective diurnal cycle might be responsible for the different shapes of p(m) in the two reference cases. We sample the clouds that emerge in the ARM case during four time frames of one hour duration, taken at different stages of the diurnal cycle, starting at 17:30 UTC. The distribution of cloud base mass flux in all four time frames is shown in Fig. 5. It is clear that there is no significant change in p(m) over the diurnal cycle of the ARM case, i.e. the distribution p(m) is stationary. Another property of the diurnal cycle that might influence p(m) is the period of the diurnal cycle. Shorter or longer diurnal cycles imply faster or slower temporal changes in the forcing. With faster changes, clouds might have less time to develop undisturbed, so their sizes and mass fluxes might be lower. Or, with slower changes in the forcing, larger clouds might result. To test this, we investigate the results of the simulations A-short and A-long. A time frame of one hour duration is taken around the peak of the diurnal cycle, after 9, 7 and 11 hours from simulation start in A-base, A-short, and A-long, respectively, and p(m) is examined (Fig. 6). There is again no significant difference among the simulations, except near the right tail of the distribution, where the A-short case shows a faster drop-off than the other two cases. This means that the largest possible clouds cannot develop in the ARM case if the period of the forcing is too short. Overall, there is nevertheless no change in the distribution shape, and the slope of the line stays similar across the three cases. The results of these experiments demonstrate that changes of the forcing over a diurnal cycle do not shape the distribution of the cloud base mass flux. b. 
The second hypothesis: convective self-organization In this section we test how the spatial correlations during the organized phase of the RICO case influence the cloud base mass flux distribution. Organization of convective clouds into clusters, lines, or arcs could influence p(m) by affecting the size and intensity of individual cloud elements. Here, it is important to note that the cloud tracking routine identifies the cloud entities that form the cloud clusters, and performs splitting so that every element can be followed separately even when two cloud elements have merged. In contrast, past studies have investigated the distributions of merged cloud clusters and suggested self-organization as a mechanism for creating power-laws (Scheufele 2014). We choose the R-base case to test the effects of cloud organization on p(m) because this convective case is strongly organized after one day of simulation (Fig. 7). Starting from a randomly distributed field of clouds and looking into the time frames with different stages and forms of organization, we plot p(m) in Fig. 7. We find no evidence that self-organization of clouds has an effect on p(m) because the overall distribution shape stays the same in spite of organization. Hence, the different degrees of organization between RICO and ARM cannot explain the differences in p(m). Even though self-organization is not responsible for the final shape of the distribution, it is a process that can produce longer tails in the cloud distributions, if the cloud splitting is not performed and cloud clusters are sampled to compute p(m) (Scheufele 2014). Fig. 7 indicates that this dependency vanishes if individual cloud elements are considered. c. The third hypothesis: surface heat fluxes The two reference cases have very different surface conditions, one is set over the ocean, while the other one is set over land, so the magnitudes of the surface heat fluxes differ by up to a factor four between the cases (see Fig. 1). We investigate here the dependency of the distribution shape on the surface turbulent heat fluxes, which drive the boundary layer convective updrafts that ultimately form cumulus clouds at the top of the subcloud layer. We test the magnitude of the fluxes and their partitioning at the surface. 1) THE MAGNITUDE OF THE SURFACE HEAT FLUXES We have already concluded in the previous section for the ARM case that p(m) does not change over a single diurnal cycle (Fig. 5). From this conclusion it also follows that p(m) is not sensitive to the surface flux magnitude. To further prove this, we perform one additional test (A-lowflx) in which the total surface turbulent heat flux is lowered by 20 % (Fig. 8). There is no significant difference between the two distributions. The A-lowflx case can simply be considered as another realization of the same shallow cloud ensemble of the A-base case. 2) THE RATIO OF THE SURFACE HEAT FLUXES, B The ratio of the sensible and latent heat fluxes at the surface, the Bowen ratio B, is the main parameter that characterizes the two surface types, ocean and land. Though the total surface flux magnitude has no effect on p(m), the partitioning of this flux into sensible and latent heating might have an effect. Note that the Bowen ratio does not change much over the diurnal cycle in ARM. We thus turn our attention to the sensitivity experiments using different Bowen ratios (Fig. 9). 
By changing only the ratio of the surface fluxes and leaving their magnitudes unchanged, the shape of the mass flux distribution can be altered. More importantly, by setting the RICO Bowen ratio in the ARM set-up (A-0.03), the mass flux distribution of the RICO case is recovered (Fig. 9a). Likewise, by setting the ARM Bowen ratio in the RICO set-up (R-0.33), the mass flux distribution of the ARM case is recovered (Fig. 9b). Thus, it is evident that the ratio of the surface fluxes and not their magnitudes shapes the mass flux distribution. 3) THE TWO MODES OF THE CLOUD DISTRIBUTION The final shape of p(m) is a result of the superposition of the distribution modes associated with cloud groups of different subtypes: active, forced and passive clouds (see the classification of Stull 1985). We examine the dependency of these modes on the Bowen ratio separately. Here we simplify the classification of clouds into buoyant (active) and non-buoyant (passive) clouds, as in Sakradzija et al. (2015). Forced clouds fall into the "passive" non-buoyant cloud group owing to this simplification. Clouds are classified as active buoyant clouds if the excess of the vertically integrated virtual potential temperature within clouds, θ v,up − θ v , is larger than a threshold. The threshold is set to 0.5 K, except in a case where this leads to a too small statistical sample, as in R-0.33. In the latter case, the threshold is set to 0.4 K. In the RICO-base case (Fig. 10a), the cloud distribution shows shorter tails in both modes, and lower mass fluxes in average compared to the A-base case (full lines in Fig. 10b). With increasing Bowen ratio, the active cloud modes shift towards higher mass flux values, while the slopes of the two modes become less steep (Fig. 10). Table 1 in Zhao and Di Girolamo 2007). The slopes of the observed cloud size distributions in the midlatitude regions have lower values than the slopes in the tropics (see Wood and Field 2011). These characteristics of the observed cloud size distribution correspond to the control that B imposes on the slopes that we observe in the RICO and the ARM cloud-base mass flux distributions (Fig. 10). Higher values of B in midlatitudes produce lower slopes compared to the higher slopes that are produced as a result of low B in the tropics. The Bowen ratio indirectly sets the average mass flux per cloud To understand the link between p(m) and B, we aim at deriving in this section the constraints on the mass flux m that an average cloud can transport based on the boundary layer energetics. As will be shown in section 6, m is the key parameter through which the difference between the mass flux distributions of the two reference cases is set. We start from the concept of atmospheric convection as a natural heat engine (Rennó and Ingersoll 1996). During a heat cycle of an average convective cloud, the heat Q in [J] is input near the surface in the form of the turbulent surface heat flux F in [Wm -2 ] (sum of latent and sensible heat fluxes). This heat is partly converted into mechanical work W mech of the convective overturning in the subcloud layer, and the rest is added into the cloud layer and redistributed further. Here, we define the heat cycle for the subcloud layer that lies between the surface layer over the warm ocean or land surface and the colder cloud layer above. 
The efficiency of the heat cycle is defined as the ratio of mechanical work and the heat input at the surface,

η = W_mech / Q_in.   (2)

The theoretical maximum efficiency of the heat cycle in the subcloud layer is the Carnot efficiency, which can be defined as

η_max = (T_sfc − T_lcl) / T_sfc,   (3)

where T_sfc is the surface temperature and T_lcl is the temperature at the lifting condensation level. If the heat input at the surface happened solely in the form of the sensible heat flux and if no heat was spent to transport water vapor out of the subcloud layer, the efficiency of the convective heat cycle would approach the Carnot efficiency. However, the thermodynamic cycle of convection in the boundary layer is a mixed moist heat cycle with an efficiency that is lower than the maximum theoretical Carnot efficiency, η < η_max. As shown in Shutts and Gray (1999), the efficiency of the moist heat cycle can be expressed as (see their Eq. 19)

η = (g H / (c_p T_sfc)) · (B + ε c_p T_sfc / L_v) / (1 + B),   (4)

where c_p is the specific heat capacity of dry air at constant pressure, L_v is the latent heat of vaporization, g is the gravitational acceleration, ε = R_v/R_d − 1 = 0.608, R_v is the gas constant for water vapour, R_d is the gas constant for dry air, and H is the subcloud layer depth. Eq. 4 is derived under the assumption that the effective heat input at the surface, ηF_in, is used to maintain convection against mechanical dissipation in a convective system in statistical equilibrium. The efficiency of a moist heat cycle η can be further explained using an entropy budget analysis, as in Pauluis and Held (2002). They found that convection acts both as a heat cycle and as an atmospheric dehumidifier, and the irreversible entropy production by the two processes is in competition. The more the atmosphere acts as a dehumidifier, the less effective it is at generating kinetic energy of convective circulations (Pauluis and Held 2002). From Eq. 4, it follows that the Bowen ratio strongly influences the fraction of the heat input that can be transformed into mechanical work to maintain convective circulations. B appears explicitly in Eq. 4 but also implicitly through its control on the depth of the subcloud layer H (see Schrieber et al. 1996; Stevens 2007). Equation 2 does not explicitly relate ⟨m⟩ to the moist heat cycle. To do so, we proceed as follows. The average cloud-base mass flux per cloud ⟨m⟩ is related to the turbulent flux of the moist static energy at the cloud-base level, ρw'h', through the mass flux approximation as defined in Arakawa and Schubert (1974):

ρw'h' = Σ_{i=1..N} m_i (h_i − h̄),   (5)

where i = 1, ..., N is the index of individual clouds, h_i − h̄ is the excess of the moist static energy within the updrafts that form clouds with respect to the environment, and an overline denotes averaging over the domain. As a first simple hypothesis, we assume that the turbulent flux of the moist static energy at cloud base, ρw'h', is proportional to the effective surface forcing of the cloud ensemble, NηF_in, and by using Eq. 5 we write

Σ_{i=1..N} m_i (h_i − h̄) = C_1 N η F_in,   (6)

where C_1 is a proportionality constant, which can be seen as a factor of correction for further heat losses not taken into account, and N is the number of clouds in the cloud ensemble. Because the surface forcing is homogeneous, F_in is equal for all individual cloud heat cycles. The efficiency is controlled by the homogeneous surface properties and the subcloud layer depth and is approximately equal among the clouds (see Eq. 4), so η is treated as a constant in a single convective case. Now we apply the mass-flux-weighted averaging as defined in Yanai et al. (1973) to Eq. 6,
6 h is the mass-flux-weighted average of h, which is approximately equal to the average of the moist static energy per cloud, h , where the brackets . denote averaging over the cloud ensemble. The relative difference between the values ofh and h is lower than 0.5 % as estimated from LES. So, we can rewrite the left-hand side of Eq. 6 as An average moist heat cycle per cloud can then be expressed as and the average mass flux per cloud is then approximately equal to with η given by Eq. 4. We look into the LES simulations to find evidence to support Eq. 11. We base our analysis on the active cloud group and we plot the average mass flux per active cloud m versus the righthand side of the equation 11 (Fig. 11a). It turns out that Eq. 11 holds remarkably well for the eight tested LES cases of Table 1, which suggests that the average mass flux per cloud is determined by the moist heat cycle of the subcloud layer. The coefficient of determination of a linear regression model is r 2 = 0.95. The slope is estimated to be equal to C 1 = 0.13. The intercept parameter is nevertheless not equal to zero and results in an additional mass flux which we will denote by m 0 : The estimated value in this study is m 0 = 3 · 10 −5 kg/s/m 2 . Depending on the test case and the Bowen ratio value, m can be 1.5 to 6.9 times larger than m 0 (Fig. 11a). The scaling Eq. 11 is evaluated in Fig. 11a only for the active clouds, while we do not show the scaling for the "passive" cloud group. This is because the buoyancy threshold used to separate the clouds into the two groups misinterpret some active clouds as passive. We can however show the scaling for the total cloud ensemble in Fig. 11b, which still holds. Equation 11 is decomposed into two parts to test the dependency of m of the active clouds on The average mass flux per cloud m is also not uniquely determined by η. η sets the slope of the three lines corresponding to the three different magnitudes of the ratio F in h −h . Furthermore, if the ratio F in h −h in a given group of points is higher, the efficiency η in the same group is lower compared to the other groups. As a result, m is uniquely determined by the product of the two factors (Eq. 11), with η playing the key role in setting the dependence on B. The fact that B sets the efficiency of the moist convective heat cycle, and thus also controls the expected value of p(m), directly explains why the magnitude of the surface forcing does not influence the distribution shape. From this it also follows that changes of the surface forcing over the diurnal cycle can not alter the distribution shape, as long as B does not change significantly over the diurnal cycle. The moist heat cycle formalism might also explain why self-organization is not a powerful driver for the distribution p(m). The convecting system is forced by the same amount of heat input and the efficiency of the moist heat cycle is the same at all stages of cloud organization. So, for the shape of p(m), the spatial distribution of clouds does not play any significant role. An important point to notice here is that the heat cycle formalism applies to the average convec- Parameters of the mass flux distribution For the application to parameterizations based on the spectral cloud ensembles (Arakawa and Schubert 1974) or stochastic cloud ensembles (as in Plant and Craig 2008;Sakradzija et al. 2015), a functional form for p(m) has to be defined and the corresponding distribution parameters have to be estimated. 
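The displayed equations referenced in this passage did not survive the text extraction. As a hedged reconstruction from the surrounding definitions (the exact form of the moist-cycle efficiency of Shutts and Gray 1999, Eq. 4 of the text, is not reproduced here), the key relations read approximately:

```latex
% Efficiency of the subcloud-layer heat cycle and its Carnot bound
\eta = \frac{W_{\mathrm{mech}}}{Q_{\mathrm{in}}}, \qquad
\eta_{\max} = \frac{T_{sfc} - T_{lcl}}{T_{sfc}}, \qquad \eta < \eta_{\max}

% Mass-flux approximation of the moist static energy flux at cloud base,
% assumed proportional to the effective surface forcing of the ensemble
\overline{\rho w' h'} \;=\; \sum_{i=1}^{N} m_i \bigl(h_i - \overline{h}\bigr)
\;\approx\; C_1\, N\, \eta\, F_{\mathrm{in}}

% Resulting scaling for the average cloud-base mass flux per cloud
\langle m \rangle \;\approx\; C_1\,
\frac{\eta\, F_{\mathrm{in}}}{\langle h \rangle - \overline{h}} \;+\; m_0,
\qquad C_1 \approx 0.13, \quad m_0 \approx 3\times10^{-5}\ \mathrm{kg\,s^{-1}\,m^{-2}}
```

Here the angle brackets denote an average over the cloud ensemble, and C_1 and m_0 are the fitted slope and intercept quoted in the text.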
In the following, we adopt the mixed Weibull distribution as a functional form for p(m) as in Sakradzija et al. (2015) where f is the fraction of active cumulus clouds, k is the shape and λ is the scale parameter of the Weibull distribution, and subscripts p and a denote passive and active distribution modes. From the results of the previous section we know that m varies with the surface conditions. The question nevertheless remains whether any of the remaining distribution parameters, namely k p,a and λ p,a , are universal constants. In the study of Sakradzija et al. (2015) these parameters were estimated only for the RICO case for the time period of six hours, starting after six hours of simulation. The estimated shape parameter was k p = k a = 0.7 for the given cloud sample. In the following, we extend the analysis over longer time period of the RICO case, and over land conditions in the ARM case. In the following we focus on the estimation of the shape parameters, k p,a , while the scale parameters of the Weibull distribution modes, λ p,a , can be calculated from the expected value of the distribution, m p,a = λ p,a Γ[1 + 1/k p,a ]. In shallow cumulus cloud ensembles, the shape parameter that is less than one, k < 1, indicates that the memory of cloud lifecycles has an effect on the distribution shape (Sakradzija et al. 2015). This effect takes place through correlation between the cloud lifetimes τ i and the cloud base mass fluxes m i , which is already demonstrated for the RICO case in Sakradzija et al. (2015). We confirm this finding for the RICO case, and we also show that it holds in the ARM case (Fig. 12). This correlation is high with the correlation coefficient equal to r p = 0.8 in RICO and r p = 0.9 in ARM, estimated for the active cumulus clouds. We assume that this correlation can be described by a power law relation τ/ τ = (m/ m ) β , where β is the power exponent, and τ is the average cloud lifetime, as in Sakradzija et al. (2015). In the theory of extreme events, it is known that long-term correlations with a power-law decay of the autocorrelation function lead to Weibull distributions of return intervals between rare events (e.g. Bunde et al. 2003;Blender et al. 2015). In that case the power-law exponent of the autocorrelation function, t −β , can be assumed equal to the shape parameter of the Weibull distribution, k (e.g. Blender et al. 2015). Following this reasoning, the normalized lifetime expression τ/ τ = (m/ m ) β also leads to a Weibull distribution for the cloud base mass flux distribution (Eq. 13). The power-law exponent can then be related to the shape parameter of the active mode of the Weibull distribution as k a ≈ β . The nonlinear least square fit in Fig. 12 gives the values for the exponent β = 0.8 in RICO and β = 0.77 in ARM. Hence, it appears that β is independent on the case set-up. The passive cloud group is more dispersive (not shown here) and the statistical fit is thus more uncertain, however we will assume that k p ≈ k a = 0.8. Combination of the two Weibull modes of the same shape parameter k p = k a = k, but different m p and m a , and hence different λ a,p , can explain the difference between the two cases (Fig. 13). To construct Fig. 13 and in the purpose of highlighting the uncertainties in p(m) due to the chosen value of k only, we here calculate the values of m p and m a directly from the LES output rather than using the formalism of a thermodynamic cycle. The chosen value of k p = k a = 0.8 provides a good fit to both distributions (Fig. 
13). On the same plot, we also test a broad range of values for k, which demonstrates that k is of secondary importance in determining the final shape of p(m). It is evident that k can still take a wide range of values, [0.8,1] for RICO and [0.5,0.8] for ARM, for the correct reproduction of the distribution p(m). Therefore, we conclude that the main parameter that sets the difference in p(m) among the shallow cumulus cases is m . The parameter f , which is the proportion that active clouds take in the cloud ensemble, is about 4 to 5 % of the total cloud population both in ARM and RICO. This is valid for the distribution of lifetime average mass fluxes during time frames of one hour duration, and including only those clouds that are initiated during the time frames. We choose here to set the value of f to 0.05. Conclusions The probability distribution of cloud base mass flux p(m) differs among shallow cumulus cases. These differences manifest themselves through various shapes, slopes and scales of the distribution. Based on the examination of one typical LES case over the ocean (RICO) and one typical LES case over land (ARM), and nine variations of these two cases, we propose an explanation for the differences in p(m) among shallow cumulus cases. The set-up of the two reference LES cases differs in the strength and partitioning of the surface turbulent heat fluxes, as well as in the prescribed large-scale forcing tendencies. The ARM case has a strong diurnal cycle that is typical for land conditions, while there is no diurnal cycle in the simulation over the ocean (RICO). In addition, the cloud field in the RICO case is strongly organized, with manifestation of cold pools and arc structures. We have investigated which of these differences in the LES set-up is responsible for the distinct shapes of the distribution p(m). Analysis demonstrates that partitioning of the surface turbulent fluxes into sensible and latent heating, the Bowen ratio B, is the only parameter that controls the shape of the distribution p(m). This control appears to be governed by the second law of thermodynamics and can be explained by interpreting moist convection in the boundary layer as a combination of moisture and heat cycles (as in Shutts and Gray 1999;Pauluis and Held 2002). The efficiency of the moist heat cycle, η, is less than the Carnot cycle efficiency, because it is directly set by the surface Bowen ratio and the depth of the convecting layer (Shutts and Gray 1999). Through η, the Bowen ratio controls the average mass flux per cloud m . Using the formalism of a moist heat cycle, a scaling law for m is derived (see Eq. 11). By this scaling, the average vertical mass flux through cloud base m is proportional to the ratio of the effective surface heat flux ηF s f c and the excess in the moist static energy at the cloud base with respect to the environment h −h. This scaling holds remarkably well for the active buoyant clouds in the eight considered convective cases, and thus suggests an universal law across a wide range of the control parameter B. Passive and forced clouds are not investigated here due to their uncertain separation from the active clouds, but we show that the scaling still holds considering all cloud types. As such, B controls the shape of the distribution p(m) through its control on m . We have demonstrated that different shapes of the distribution p(m) can be well captured by a two-mode Weibull distribution function. 
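As a concrete illustration of this two-mode functional form, the short Python sketch below (an illustrative reconstruction, not the authors' code) evaluates the mixed Weibull density with the scale parameters recovered from the mode means via lambda = <m> / Gamma(1 + 1/k); the mode means used in the example are hypothetical, while k = 0.8 and f = 0.05 are the values quoted above.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import weibull_min

def mixed_weibull_pdf(m, f, k, mean_active, mean_passive):
    """Two-mode Weibull density p(m) = (1-f)*W(m; k, lam_p) + f*W(m; k, lam_a),
    with a common shape parameter k and scale parameters recovered from the
    mode means via lam = <m> / Gamma(1 + 1/k)."""
    lam_a = mean_active / gamma(1.0 + 1.0 / k)
    lam_p = mean_passive / gamma(1.0 + 1.0 / k)
    return ((1.0 - f) * weibull_min.pdf(m, k, scale=lam_p)
            + f * weibull_min.pdf(m, k, scale=lam_a))

# Hypothetical mode means (kg s-1 m-2); k = 0.8 and f = 0.05 as in the text.
m = np.logspace(-6, 1, 200)
p = mixed_weibull_pdf(m, f=0.05, k=0.8, mean_active=5e-2, mean_passive=1e-3)
```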
The shape parameter of each distribution mode is k < 1 and it is of secondary importance for determining the final shape of p(m). The reason for this robustness comes from the similarity of the power-law exponent β in the relation between cloud lifetime and cloud-base mass flux across the LES cases. This power-law exponent sets the unique value of the shape parameter across the LES cases. The Bowen ratios tested in this study covered the range of values between 0.03 and 0.5. This range corresponds to the span of conditions from ocean surfaces to temperate forests and grasslands. In order to make the conclusions of this study more general, it would be advantageous to expand the study to dry land surfaces and to extend the analysis to cloud observations. In addition, the mechanical forcing in the two reference cases was of similar magnitude. A question left for further investigation is how stronger winds and higher wind shears might influence the convective mass flux and population statistics. One of the key outcomes of this study is that the concept of a moist heat cycle applies to an average convective cloud cycle. In order to retrieve the total mass flux in a cloud ensemble M, it is necessary to set constraints on the number of clouds N in every given case, since M = N⟨m⟩. N does not appear to be constrained by the moist heat cycle. One may hypothesize that M is governed by the large-scale forcing through its control on the number of clouds N, in addition to the surface conditions that impose a constraint on ⟨m⟩. The results of this study also have implications for the cloud size distribution, which has a very similar shape to the distribution p(m). The various shapes and slopes of the cloud size distribution that have been observed and documented in the literature may simply reflect the changes in Bowen ratio encountered across those studies. These different distribution shapes could be encompassed by a single functional form given by the mixed Weibull distribution function, which covers all the observed shapes, from exponential to power-law, depending on the values of the distribution parameters. Based on this study, the expected value of the cloud size distribution might impose the only relevant control on the distribution shape, which in turn could be constrained by the underlying physical processes in the boundary layer.
Cortical folding and the potential for prognostic neuroimaging in schizophrenia In 41 patients with schizophrenia, we used neuroanatomical information derived from structural imaging to identify patients with more severe illness, characterised by high symptom burden, low processing speed, high degree of illness persistence and lower social and occupational functional capacity. Cortical folding, but not thickness or volume, showed a high discriminatory ability in correctly identifying patients with more severe illness. To date there are no objectives tests that aid prognostic prediction in schizophrenia.Historically, clinical outcomes have improved considerably for medical disorders where severity can be quantified reliably (for example malignancies, asthma).Prognostic prediction, in particular the ability to identify those who will do well in the long term, has proved to be a great challenge in schizophrenia. 1 Neuroimaging offers the great promise of providing objective measures of clinical utility in managing psychosis. 2Recently, the use of multivariate pattern classification in neuroimaging has enabled diagnostic separation at a single patient level. 3In this study, we investigated whether this approach can reliably discriminate a patient with less severe illness from one with more severe illness.Given the previous observations that cortical thickness, 4 folding patterns 5 and grey matter volume 6 relate to prognosis in schizophrenia, we employed these surfacebased morphometric measures to identify illness severity. Method A sample of 41 patients with a DSM-IV diagnosis 7 of schizophrenia or schizoaffective disorder was recruited for this study.This sample is described in detail in our previous studies. 8,9he clinical severity was quantified using a composite index derived from symptom burden, functional ability, cognition and persistence of illness as described in our previous work 8 and in online supplement DS1.Using this severity index, 20 participants were classified as having a high severity of illness with the remaining 21 having a low severity of illness.The clinical and demographic characteristics of the two groups are presented in online supplement DS1 and Table DS1. Structural magnetic resonance imaging scans obtained from the participants were processed using Freesurfer (5.1.0)(http://surfer.nmr.mgh.harvard.edu/)as previously described. 10econstructed surfaces were inspected for topological defects and edited in accordance with our previous work 11 by a single rater (L.P.) masked to the severity status at the time of surface editing.Cortical folding was measured using local gyrification index proposed by Schaer et al. 12 Cortical thickness was estimated using the standard procedures described by Fischl & Dale. 13The reconstructed brain surfaces were parcellated using the Destrieux atlas to provide 148 brain regions based on sulcogyral boundaries described by Duvernoy. 14For each metric, these 148 values were used as features in the classifier. 
We used a linear support vector machine (SVM) proposed by Cortes & Vapnik 15 and implemented by the libsvm toolkit (http://www.csie.ntu.edu.tw/~cjlin/libsvm/).SVM is a statistical discrimination procedure that finds a linear separation surface in the high-dimensional multivariate feature space that maximally separates the training data into two classes as specified by the pre-assigned labels (in this case, high and low severity groups).Based on this separation, the class membership of a new participant (test data) can be predicted, and the accuracy of these predictions quantified.Further details are given in online supplement DS1.We computed test performance measures and diagnostic odds ratio using a leave-one-subject-out (LOSO) cross validation.The statistical significance of these measures was determined using permutation testing (n = 1000 permutations). Results Table 1 displays the accuracy of the classification and the most significant predictors of the best performing classifier.Given that gender and parental socioeconomic status differed between the two groups, we regressed out the variance explained by these two variables, and repeated the SVM analysis.Our results continued to show a superior, statistically significant accuracy for regional gyrification but not for thickness or volume (online supplement DS1, Fig. DS1 and Tables DS3 and DS4). Discussion To our knowledge, this is the first study to investigate the prospect of exploiting multivariate neuroanatomical information to predict clinical severity of schizophrenia at the individual level.Using a classifier based on the features of cortical folding, we can identify the degree of illness severity in medicated, community-living patients with clinically stable schizophrenia.This predictive ability appears to be a unique feature of folding patterns, as the classifiers based on thickness and volume do not perform significantly above chance when separating high and low illness severity groups.Furthermore, patients with greater illness severity had reduced cortical folding in most brain regions, suggesting that a distributed defect in cortical morphology influences prognosis.Although the accuracy achieved by the gyrification-based classifier is statistically significant, the performance of this classifier is considerably weaker when compared with the multivariate neuroanatomical classifiers tested in the separation of healthy controls from patients with schizophrenia. 16There may be several reasons for this disparity.The use of median split to divide the sample into high and low severity could have contributed to the lack of strong between-groups discriminative features With larger samples, extreme prognostic groups (lying on either end of the severity continuum) could be used for training the classifier and improve the accuracy.It is worth noting that in clinical practice, it is rarely necessary to apply a test to differentiate a patient with schizophrenia from a healthy control.The classification of a patient with schizophrenia from a healthy control can be done clinically with a high degree of confidence, thus even a highperformance neuroimaging test will have limited clinical utility in this context.On the other hand, at present there are no reliable means of predicting prognostic group membership; even a test that increases the likelihood of identifying prognostic grouping to a moderate extent, could be of significant benefit to patients and clinicians. 
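A minimal sketch of the leave-one-subject-out cross-validation with a linear SVM and permutation testing described in the Method section is given below; it uses scikit-learn rather than the libsvm toolkit used in the study, and the feature matrix (41 participants x 148 regional gyrification values) and labels are assumed, not supplied.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loso_accuracy(X, y):
    """Leave-one-subject-out accuracy of a linear SVM."""
    clf = SVC(kernel="linear")
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

def permutation_p_value(X, y, n_perm=1000, seed=0):
    """Empirical p-value: fraction of label permutations whose LOSO accuracy
    reaches or exceeds the accuracy obtained with the true labels."""
    rng = np.random.default_rng(seed)
    observed = loso_accuracy(X, y)
    null = np.array([loso_accuracy(X, rng.permutation(y)) for _ in range(n_perm)])
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)

# Hypothetical usage: X has shape (41, 148) regional gyrification features,
# y holds 0/1 labels for low/high illness severity.
# acc, p = permutation_p_value(X, y)
```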
We quantified illness severity on the basis of a number of variables; this approach offered a multidomain metric that reflected symptom burden across the three syndromes of schizophrenia, a cognitive function that is most prominently affected in schizophrenia i.e. processing speed, social and functional performance and persistence of illness.Nevertheless, various other metrics relevant for the assessment of severity (such as Clinical Global Impression, quality of life scales, self-rated recovery measures or assessments of daily living) were not collected in this study.Furthermore, from this cross-sectional study it is not possible to extrapolate whether a gyrification-based classifier applied at illness onset could prospectively predict later severity.Nevertheless, when compared with cortical thickness and volume, gyrification has been shown to be relatively stable during adult life. 17In addition, a large degree of variance in the cortical folding patterns relates to neurodevelopmental integrity during the fetal or early neonatal period. 18Taken together, these observations suggest that the burden of neurodevelopmental abnormalities in a patient with schizophrenia could be a potential influence on illness severity. Our results provide preliminary evidence for the utility of cortical folding in single participant-level prognostic imaging in schizophrenia.With larger validation studies that combine high-yield clinical prognostic indicators with gyrification metrics, the predictive value can be further improved, enabling an objective grading of outcome in the management of schizophrenia.This has the promise of assisting targeted service delivery and making personalised recommendations with regard to the required duration of antipsychotic treatment.Most importantly, this approach can be refined to provide accurate information on the chances of a satisfactory clinical recovery and thus potentially empower patients by addressing the uncertainty that surrounds prognosis in psychotic disorders.
Technical note: First spectral measurement of the Earth's upwelling emission using an uncooled wideband Fourier transform spectrometer. The first spectral measurement of Earth's emitted radiation to space in the wideband range from 100 to 1400 cm−1 with 0.5 cm−1 spectral resolution is presented. The measurement was performed from a stratospheric balloon in a tropical region using a Fourier transform spectrometer, during a field campaign held in Brazil in June 2005. The instrument, which has uncooled components including the detector module, is a prototype developed as part of the study for the REFIR (Radiation Explorer in the Far InfraRed) space mission. This paper shows the results of the field campaign with particular attention to the measurement capabilities of the prototype. The results are compared with measurements taken by IASI-balloon (Infrared Atmospheric Sounding Interferometer – Balloon version), flown aboard the same gondola. Introduction The observation of the upwelling emission in the full relevant spectral range, from the mid infrared (MIR) to the far infrared (FIR) region, is one important missing measurement for the characterisation of the atmospheric Earth radiation budget. Up to now, instruments operating in the MIR region cover the range above 600 cm−1. The spectral observation of the FIR region, below 600 cm−1, was usually missing or covered only in narrow bands, because of technical limitations pertaining to space-borne spectrometers operating at long wavelengths. No space mission that exploits the FIR for Earth observation has been carried out or selected for future operations. Despite that, the FIR region is very important because most of the radiative cooling of the upper troposphere and lower stratosphere occurs there (Clough et al., 1992, 1995). In clear sky conditions, from one third to one quarter of the total greenhouse forcing is calculated to occur in the FIR (Sinha and Harries, 1995; Brindley and Harries, 1998), and larger effects are expected to be present in cloudy conditions. The identification and characterisation of the atmospheric properties which modulate the Earth's emission in this spectral region is therefore an essential task for global climate change estimation. For instance, several aspects related to water vapour have been debated recently. The absorption continuum must be evaluated with particular attention to its precise characterisation within the rotational band. The extent of direct or feedback processes (Stoker et al., 2001; Philipona et al., 2005), and the radiative forcing due to ice clouds (Rizzi and Maestri, 2003), must be quantified. Wideband spectral measurements covering the FIR region are expected to address the above issues and to provide water vapour profile measurements with better accuracy (Rizzi et al., 2002a). Atmospheric emission measurements from space in the FIR require high-efficiency spectrometers, which could be obtained with cooled detectors; however, cooling systems are difficult and expensive to maintain for long-duration space operations. Recently, the development of high-efficiency uncooled pyroelectric detectors has allowed the design of high-performance Fourier transform spectrometers (FTS) (Carli et al., 1999; Formisano et al., 2005), which do not require cooling systems.
The first project addressing these issues for the Earth's atmosphere was the Radiation Explorer in the Far InfraRed (REFIR) project supported by the European Union in 1998-2000 (Rizzi et al., 2002b). The FTS turned out to be the most challenging unit of the mission since it required high technological effort to manufacture and test the new components, most of which were not commercially available. A new FTS was designed with an optical layout that maximises the reliability of the spectrometer for long lifetime space applications and optimises its performances for uncooled operations (Carli et al., 1999;Palchetti et al., 1999). Some prototyping activity was required in order to study the trade-off among all instrument parameters, to test the optical layout, and to optimise the data acquisition strategy. In 2001, a first prototype named REFIR-BreadBoard (REFIR-BB) (Palchetti et al., 2005) was developed for ground-based measurements and was successfully tested in 2004 from the South of Italy (Esposito et al., 2006). A second prototype named REFIR-Prototype for Applications and Development (REFIR-PAD) was optimised for stratospheric measurements from balloon-borne platforms. REFIR-PAD was recently flown for the first time on board the Laboratoire de Physique Moléculaire pour l'Atmosphère et l'Astrophysique (LPMAA) gondola hosting the Infrared Atmospheric Sounding Interferometer (IASI) -balloon (Té et al., 2002) during the Equatorial Large Balloons Campaign (ELBC) performed in Brazil in June 2005. This paper describes the results obtained during the first launch of REFIR-PAD with particular attention to the measurement capability of this instrument. In Sect. 2 the main specifications of the instrument and the laboratory characterisation are summarised. Section 3 is devoted to the results of the field campaign after the level 1 (from raw data to calibrated spectra) analysis. Comparisons are presented with both the IASI-balloon instrument in the MIR region and forward model calculations over the full spectral range. REFIR-PAD instrument REFIR-PAD is a compact FTS with double-input/doubleoutput port configuration, designed for being integrated on board different stratospheric platforms for field observations. The instrument has simple mechanical and electrical interfaces for an easy integration on the hosting platforms and it is not temperature stabilised. No telemetry is required, the acquired data are stored on board using pressurised hard disks. Figure 1 shows a diagram of the instrument with the identification of the possible viewing directions: nadir, limb and deep space view at +30 • elevation angle. REFIR-PAD makes use of two uncooled deuterated Lalanine-doped triglycene sulfate (DLATGS) pyroelectric detectors, stabilised at 25 • C, which allow to reach the required noise performances. With these detectors photon noise is not a concern: broadband measurements and relatively hot sources can be observed without an increase of the measurement noise. The spectrometer does not require either cooling or temperature stabilisation since the double input/double output configuration allows to control all the input sources. Only the detector unit must be stabilised but the temperature involved is in the range of 25-30 • C and the implementation of the control system is very easy. The interferometer can be operated with either polarising or amplitude beam splitters (BS) in order to maximise the performance in different spectral regions. 
A summary of the main parameters that characterise REFIR-PAD in the configuration used for the measurements here reported is shown in Table 1 The instrument was operated with Ge-coated Mylar BSs covering the 100-1400 cm −1 spectral range with 0.5 cm −1 unapodised resolution and 30 s acquisition time. A hot blackbody (HBB) at about 80 • C and a cold black-body (CBB) at about 20 • C were used for calibration. The pointing mirror allows the spectrometer to look alternatively at these two sources during the in-flight calibration. The instrument was characterised in the laboratory under vacuum conditions before the field campaign (Palchetti et al., 2005). The emission of the onboard BBs was known with a temperature error of about 0.5 K. HBB was used as the input source for the evaluation of the instrument performances. Different measurements of HBB at a pressure of 20 Pa were performed, and the noise equivalent spectral radiance (NESR) was calculated. The results, shown in Fig. 2 (dots), are in the range of 0.5-2 mW/(m 2 sr cm −1 ), apart from a few narrow bands in which the NESR is larger due to a reduced instrument efficiency caused by the absorption of the Mylar substrates. contained calibration spectra, deep space and limb view measurements, as well as nadir measurements. A total of 540 spectra per each output channel looking at the atmosphere at nadir were measured. Instrument performances The calibration accuracy can be verified using the deep space and the limb views. Since the BBs have temperature of about 290 K and 350 K while the nadir spectra correspond to brightness temperatures that may vary from 200 K to 300 K, the calibration implies an extrapolation. The correctness of this operation is verified by the absence of bias in the measurement of the deep space view where a nearly 0 K signal is expected. Figure 3, for instance, shows an example of the space view (bottom panel) and limb view (top panel) measurements. The measured radiance of the space view is nearly zero apart the contribution of the emission from the rotational water vapour band in the FIR region, the CO 2 at 667 cm −1 , and the O 3 at 1055 cm −1 that are even more visible in the limb spectrum. In these and in the following figures, the four high-frequency peaks of noise due to a reduced instrument efficiency caused by the absorption bands in the BS substrate (see Fig. 2) are masked with blanks. The radiance in the 500-560 cm −1 spectral window, where no significant emission lines exist in either the space view or the limb view, can be used for an estimate of the absolute calibration error. In this narrow spectral region, the mean difference from zero has allowed to calculate the equivalent brightness temperature uncertainty, shown in Fig. 4 as a function of time, for a brightness temperature of 280 K. During the flight, the calibration error for a single measurement resulted to be less that ±1 K peak-to-peak with values that oscillate around 0 K with a small bias (the average error is about −0.04 K). This result meets the requirement of an absolute calibration error of 0.1 K identified by Goody et al. (1998) for the identification of climatological fingerprints. During the flight the instrument underwent a 10 K temperature variation which caused an interferometric misalignment and a corresponding loss of efficiency at high frequency. As a consequence the calibration function varied among measurements and could not be averaged in the data analysis. We identified some components that can be optimised for future flights. 
Mainly they are the substrate type and planarity of the beam splitters, and the optical coupling of the detectors with the instrument. Further improvements of the radiometric performances of REFIR-PAD are, therefore, possible and the encouraging results of this first test flight are a good step in this direction. In order to verify the measurement noise for nadir viewings, the NESR has been determined from the rms of a set of 10 measurements looking at a constant scene from a single sequence. The result, reported in Fig. 2 (continuous line), is comparable to the NESR measured in laboratory conditions (dots) and allows to conclude that the instrument correctly attained its radiometric performances during the flight. The corresponding SNR is about 150 in the region around the CO 2 band and meets the requirement of SNR>100 established in the space instrument feasibility study. This requirement was determined with sensitivity tests (Rizzi et al., 2002a,b) of the retrieval performances of a space instrument. A SNR>100 allows the retrieval of vertical profiles with 2 km resolution of temperature and H 2 O with 1-2 K and 20% precision, respectively. Comparison with IASI-balloon measurements The accuracy of REFIR-PAD measurement has been checked against the results obtained with the IASI-balloon instrument sharing the same LPMAA gondola. IASI-balloon is a nadir looking Fourier transform MIR spectro-radiometer. Its spectral resolution is 0.1 cm −1 . The radiometric calibration is done by two blackbodies (HBB heated to +30 • C and CBB cooled to −20 • C). The two instruments have different ground pixels and this difference must be taken into account in the comparison. The boresights of the two Fourier transform instruments were co-aligned along the nadir direction, on ground before launch, using an IR camera and a warm point-like target. The camera, mounted on board the gondola for temperature characterisation of the observed scene, works in the 7.5-13 µm spectral range and has a rectangular field of view of 0.416×0.312 rad (corresponding, at a float altitude of 34 km, to about 14×10 km on ground), which is sufficiently larger than both the IASI-balloon (0.016 rad) and the REFIR-PAD (0.133 rad) instantaneous fields of view (IFOV). In this way, the IR camera has allowed during the flight the identification of possible differences between the scenes seen by the two instruments. The comparison is performed in the spectral region from 650 to 1400 cm −1 common to the two instruments, with the IASI data resampled to the lower resolution of REFIR-PAD of 0.5 cm −1 . The measurements were chosen so that the observed scene was constant within the IFOV of REFIR-PAD (which is the largest of the two) also considering the displacement of about 1.5 km of the ground pixel due to the horizontal drift of the gondola during the comparison. The condition of an homogeneous scene was found only at the end of the flight at 14:49 UTC when clear sky and uniform land coverage occurred during the measurements. The comparison shown in Fig. 5 is good in terms of the absolute difference between the two measurements. In the earlier part of the flight, the comparison is still good in the CO 2 region (where the same atmosphere is observed) but in the atmospheric window the emitted radiance depends on the surface characteristics and on the scattered low-altitude clouds, and differences are observed between the measurements of the two instruments. 
The larger difference that appears in the 1300-1400 cm −1 range is a random effect and it is mainly due to the reduced performances of REFIR-PAD. Indeed, above 1250 cm −1 the beam splitter efficiency reduction, caused by both absorption bands of the Mylar substrate and the planarity error, reduced the instrument performances with an increase of the NESR. Wideband radiance Observations over the whole spectral region observed by REFIR-PAD are compared with a simulation obtained by the ARTS (Atmospheric Radiative Transfer Simulator) forward model (Buehler et al., 2005) as a reference. In the simulation we used pressure and temperature values obtained from a balloon sounding performed from the nearest meteorological station (Manaus, 03 • 08 S, 59 • 58 W) in the same time window of the IASI-balloon/REFIR-PAD flight. The CO 2 volume mixing ratio used was 375 ppm, while the water vapour profile was fitted with a non-linear least-squares routine by using the REFIR-PAD data because sounding measurements were found to have a too large error. The other atmospheric profiles were taken from the tropical scenario of the FAS-COD (Fast Atmospheric Signature CODe) dataset (Anderson et al., 1986). The result of the comparison between measured data and forward model, shown in Fig. 6, gives a generally good agreement. The differences that exist are expected to be mostly due to a possible mismatch in the temperature profile. Other significant new information could be extracted by the REFIR-PAD measurements after the completion of the retrieval analysis. Water vapour and temperature profiles in clear sky conditions can be retrieved by taking advantage of the wideband coverage and of the new water vapour measurements in the FIR rotational band (Lubrano et al., 2000;Rizzi et al., 2002a). The retrieval of vertical concentration profiles is beyond the aim of this work and it will be covered in a future paper. The wideband emitted radiances can also be affected by clouds, which are detected by the IR camera co-aligned with IASI-balloon and REFIR-PAD. Figure 7 shows the difference in the spectral radiance measured in clear sky and cloudy conditions during the passage of a cloud through the instrument IFOV. The wideband spectrally resolved coverage has allowed a quantification of the effect of the cloud in the whole emission spectrum. In this case, we see that the cloud causes a cooling in the 700-1000 cm −1 region and does not affect the atmospheric emission below 500 cm −1 . The different behaviour in the two spectral regions is an important piece of information about cloud characteristics that can be exploited in a comprehensive retrieval approach of the REFIR-PAD measurements. In particular, REFIR-PAD is expected to be sensitive to the properties of high altitude cirrus clouds that have important signatures in the 200-500 cm −1 range (Evans et al., 1999;Yang at al., 2003). Conclusions The first spectral measurement of the wideband thermal emission of the Earth's atmosphere was performed from a stratospheric balloon using the REFIR-PAD instrument in the 100-1400 cm −1 spectral range at 0.5 cm −1 resolution. An important technical feature of these new measurements is the use of an uncooled instrument with uncooled detectors. An important scientific feature is the observation of the FIR component of the emitted radiation, which can not be neglected for a proper modelling of long term climate changes. The combination of the two features opens new perspectives in space-borne observations of the atmosphere. 
The measurements allow clear identification of the effects of clouds and of the vertical distributions of water vapour and temperature. Comprehensive work is in progress for the simultaneous quantification of these different components of the system.
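To make the radiometric quantities used in this note (the NESR and the equivalent brightness-temperature error) more tangible, a minimal Python sketch follows. It is illustrative only: the radiation constants are standard, but the numerical inputs are placeholders rather than REFIR-PAD values.

```python
import numpy as np

C1 = 1.191042972e-8   # 2*h*c^2, scaled so radiance is in W m-2 sr-1 / cm-1
C2 = 1.4387769        # h*c/k_B in cm K

def planck_radiance(wavenumber_cm1, temperature_K):
    """Planck spectral radiance in W m-2 sr-1 (cm-1)^-1 for wavenumber in cm-1."""
    return C1 * wavenumber_cm1**3 / np.expm1(C2 * wavenumber_cm1 / temperature_K)

def brightness_temperature(wavenumber_cm1, radiance):
    """Invert the Planck function to get the equivalent brightness temperature."""
    return C2 * wavenumber_cm1 / np.log1p(C1 * wavenumber_cm1**3 / radiance)

def nesr(calibrated_spectra):
    """NESR estimated as the rms deviation over repeated looks at a constant
    scene; `calibrated_spectra` has shape (n_repeats, n_wavenumbers)."""
    return np.std(calibrated_spectra, axis=0, ddof=1)

# Hypothetical check: a radiance error of 1 mW/(m2 sr cm-1) near 530 cm-1
# translated into a brightness-temperature error around 280 K.
wn, T = 530.0, 280.0
L = planck_radiance(wn, T)
dT = brightness_temperature(wn, L + 1e-3) - T
```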
Diagnosis and molecular characterization of rabies virus from a buffalo in China: a case report Background Rabies virus (RABV) can infect many different species of warm-blooded animals. Glycoprotein G plays a key role in viral pathogenicity and neurotropism, and includes antigenic domains that are responsible for membrane fusion and host cell receptor recognition. Case presentation A case of buffalo rabies in China was diagnosed by direct fluorescent antibody test, G gene reverse-transcriptase polymerase chain reaction, and RABV mouse inoculation test. Molecular characterization of the RABV was performed using DNA sequencing, phylogenetic analysis and amino acid sequence comparison based on the G gene from different species of animals. Conclusion The results confirmed that the buffalo with suspected rabies was infected by RABV, which was genetically closely related to HNC (FJ602451) that was isolated from cattle in China in 2007. Comparison of the G gene among different species of animal showed that there were almost no amino acid changes among RABVs isolated from the same species of animals that distributed in a near region. However, there were many changes among RABVs that were isolated from different species of animal, or the same species from different geographic regions. This is believed to be the first case report of buffalo rabies in China, and the results may provide further information to understand the mechanism by which RABV breaks through the species barrier. Background Rabies virus (RABV) is one of the seven species in the genus Lyssavirus in the Rhabdoviridae family [1]. All warm-blooded animals, including raccoons, skunks, bats and foxes, are susceptible to RABV, and domestic dogs act as the main reservoir and transmitter [2]. The annual number of human deaths caused by rabies is estimated to be 55,000 worldwide [3], with about 32,000 in Asia [4]. The total number of human deaths was 108,412 between 1950 and 2004 in China [5]. The average number was 1,524 from 1996 to 2008, and 50% of cases were reported in Guangxi, Hunan and Guizhou provinces [6]. Therefore, the disease continues to be a serious public and animal health problem in China. The aims of the present study were to diagnose a case of buffalo rabies that occurred in Wuhan City, Hubei Province, using three different methods, and to compare the sequences with different RABVs that were isolated from different species, based on the G gene. This is believed to be the first report of the phylogenetic analysis of buffalo RABV in China compared with other isolates from different animals. Case presentation Specimens were collected from the gyrus hippocampi of the buffalo with suspected rabies in Wuhan City (114.3°E, 30 antibody test (dFAT) of the specimen was performed as previously described [21][22][23]. Normal buffalo brain samples were used as a negative control. RABV isolation by mouse inoculation test was performed as described previously [24,25]. The total RNA from buffalo brain was extracted with Trizol reagent (Invitrogen) according to the manufacturer's instructions. Primer design and reverse-transcriptase polymerase chain reaction (RT-PCR) of the G gene were performed as described previously [17]. RT-PCR products were visualized under UV light after electrophoresis on 1% agarose gels containing ethidium bromide. The amplified products were purified with a QIAquick PCR gel extraction kit (QIAGEN) according to the manufacturer's protocol. 
The sequencing was carried out on an Applied Biosystems 3730 DNA automated sequencer. After the raw sequences were edited with ClustalX Version 1.82 [26], 1575 nt sequences of the G gene were obtained and submitted to GenBank. The phylogenetic tree based on the deduced amino acid sequences was constructed using the neighbor-joining method with 1,000 bootstrap replicates in MEGA version 4.0 software [27], based on the complete sequence of the RABV G gene from 10 different species of animal (Table 1). Bootstrap values >70% were considered significant [28]. Genetic distance analysis for the G gene was conducted with PHYLIP version 3.63 software [29]. Glycoprotein nucleotide sequences from different animals were identified, translated into amino acid sequences, edited, and pairwise aligned using BioEdit software [27]. Multiple alignments were performed with ClustalX software [26]. Conclusion dFAT indicated the presence of RABV antigen in the brain specimens from the buffalo with suspected rabies (Figure 1A), whereas normal buffalo brain did not (Figure 1B). The expected size of the G gene fragment was obtained from the suspected buffalo brain by RT-PCR (data not shown). RABV from the buffalo was isolated successfully in suckling mice (data not shown), and the isolate was named the Hubei070308 strain. The positive result was supported by G gene sequencing, and the sequences were submitted to GenBank under the accession number EF643518. The G gene sequence, together with reference sequences from seven countries and 10 species of animals, was aligned (Table 1). We showed that the G gene of RABV had relative territorial specificity but not species specificity (Figure 2). Genetically, the RABV in this study differed greatly from PG (AY009097), which was used as a vaccine strain in China in 1931. Compared with 18 other RABV strains, Hubei070308 shared 85.0-99.8% sequence identity at the amino acid level (data not shown). We demonstrated that the Hubei070308 strain was close to the cattle strain FJ602451 and the human strain DQ849063, which belong to Chinese group I [17], but far from the coyote strain U52946, which was isolated in 1996 in the United States (Figure 2). Additional File 1, Figure S1 shows that the amino acid changes were mainly focused on the transmembrane region (aa 440-461), the inner-membrane zone (aa 462-505) and the signal peptide range of the mature RABV glycoprotein. The glycoprotein sequence of Hubei070308 was identical to that of HNC (FJ602451), which was isolated from cattle in 2007. However, there were many amino acid substitutions when it was compared with RABVs from other animals such as skunk, dog, human, bat and deer. The linear epitope (aa 14-19) at antigenic site II and the minor site between aa 342 and 343 were highly conserved, which was consistent with the findings of Meng et al. [17]. Among all the RABVs, the SHBRV strain, which was isolated from bats, was the most variable at the amino acid level. Many different animals can be infected by RABV [2], and cases of transmission from bats to humans [30], dogs to humans [17] and even dogs to pigs [31] have been reported. For a virus shed by one host to infect another, it must break through entry barriers (e.g., epithelium, mucus, and alveolar macrophages) and find its way to tissues in which it can replicate [32]. Figure 1. Results of dFAT of specimens from the brain of the buffalo with suspected rabies: (A) suspected buffalo brain sample; (B) normal buffalo brain sample.
It has been reported that several amino acids in the RABV glycoprotein are responsible for pathogenicity [33,34]. Therefore, RABV glycoprotein is the best target protein to study virus-host interaction, or it may be the main protein that is responsible for breakthrough of the species barrier. In the present study, many amino acid substitutions in G protein were found among RABVs that were isolated from different animal species, or from the same species distributed in different geographic regions. These substituted amino acids may explain why RABV can break through the host barrier to infect one species of animal from another. This hypothesis needs to be confirmed by further experiments. Additional material Additional file 1: Figure S1. Comparative analysis of G gene amino acid with other RABVs isolated from different animals. Dots represent identity among all sequences. Arrows mark the range of the signal peptide, antigenic site, linear epitope and endo-domain. Transmembrane (TM) domain was framed. SP: signal peptides; ENDO: endodomain; AS2: antigenic site II; AS3: antigenic site III; LE: linear epitope.
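As an illustration of the pairwise identity comparison reported above (85.0-99.8% at the amino acid level), a minimal, dependency-free Python sketch is given below; the sequence fragments are made up for illustration and are not RABV glycoprotein data.

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity between two pre-aligned, equal-length amino acid
    sequences; positions where both sequences carry a gap are ignored."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
             if not (a == "-" and b == "-")]
    matches = sum(1 for a, b in pairs if a == b and a != "-")
    return 100.0 * matches / len(pairs)

# Made-up aligned fragments, for illustration only.
seq_a = "MKTLLIC-AVLLPSDAQA"
seq_b = "MKTLLISSAVLLPSEAQA"
print(f"{percent_identity(seq_a, seq_b):.1f}% identity")
```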
Next station in microarray data analysis: GEPAS The Gene Expression Profile Analysis Suite (GEPAS) has been running for more than four years. During this time it has evolved to keep pace with the new interests and trends in the still changing world of microarray data analysis. GEPAS has been designed to provide an intuitive although powerful web-based interface that offers diverse analysis options from the early step of preprocessing (normalization of Affymetrix and two-colour microarray experiments and other preprocessing options), to the final step of the functional annotation of the experiment (using Gene Ontology, pathways, PubMed abstracts etc.), and include different possibilities for clustering, gene selection, class prediction and array-comparative genomic hybridization management. GEPAS is extensively used by researchers of many countries and its records indicate an average usage rate of 400 experiments per day. The web-based pipeline for microarray gene expression data, GEPAS, is available at . INTRODUCTION It is quite common that the introduction of a new technology is accompanied by claims and promises which on many occasions cannot be fulfilled. This hype is then followed by a wave of disappointment against the technology. Fortunately, as it is reaching a certain degree of maturity, DNA microarray technologies do not seem to have followed this fate. During an initial period, DNA microarray publications were dealing with issues such as reproducibility and sensitivity. Many classical microarray papers dating from the late nineties were mere proof-of-principle experiments (1,2), in which only cluster analysis was applied. Later, sensitivity became a main concern as a natural reaction against quite liberal interpretations of microarray experiments made by some researchers, such as the fold criteria to select differentially expressed genes. It was soon obvious that genome-scale experiments should be carefully analysed because many apparent associations happened merely by chance (3). In this context, different methods for the adjustment of P-values, which are considered standard today, started to be extensively used (4,5). More recently the use of microarrays as predictors of clinical outcomes (6), despite not being free of criticisms (7), fuelled the use of the methodology because of its practical implications. There are still some concerns with the cross-platform coherence of results but it seems clear that intra-platform reproducibility is high (8) and, despite the fact that gene-by-gene results are not always the same, the biological themes emerging from the different platforms are increasingly consistent (9). That points to the importance of the interpretation of experiments in terms of their biological implications instead of a mere comparison of lists of genes (10,11). Keeping a pace with the trends mentioned above, Gene Expression Profile Analysis Suite (GEPAS) has been growing during the last 4 years. In the first release it was more oriented towards clustering and data preprocessing (12). Successive releases showed a package more oriented towards gene selection, class prediction and the functional annotation of experiments (13,14 The online version of this article has been published under an open access model. 
Users are entitled to use, reproduce, disseminate, or display the open access version of this article for non-commercial purposes provided that: the original authorship is properly and fully attributed; the Journal and Oxford University Press are attributed as the original place of publication with the correct citation details given; if an article is subsequently reproduced or disseminated not in its entirety but only in part or as a derivative work this must be clearly indicated. For commercial re-use, please contact journals.permissions@oxfordjournals.org modules, some of which are new while other ones constitute already available tools completely rewritten including new functionalities. GEPAS is not a simple web server, but it constitutes one of the largest resources for integrated microarray data available over the web. It has been working for more than four years having by the end of year 2005 an average of 400 experiments analysed per day summing up over all of their modules. GEPAS is used by researches worldwide as can be seen in the usage map, where all the sessions are mapped to its geographic location (http://bioinfo.cipf.es/access_map/map. html). It also offers on-line tutorials that can be used in courses. In the new version (3.0) we present new modules for the normalization of Affymetrix experiments, for differential gene expression, for the evaluation of cluster quality and another module for array-comparative genomic hybridization (Array-CGH) data management. Also, another conceptual novelty is the connection of GEPAS to the PupaSuite tools (15)(16)(17), which offers the possibility of analysing polymorphisms at the light of the results of the gene expression analysis. GENERAL OVERVIEW GEPAS aims to tackle the most common problems in microarray data analysis in a simple but rigorous way. Thus, after an essential step of normalization, there are different 'workflows', or sequences of steps, that can be followed, depending on the aim of the experiment: class discovery, differential gene expression, class prediction or genomic copy number estimation, just to cite the most common objectives of microarray experiments. Class discovery, either in genes or in experiments, is achieved by using clustering methods. GEPAS includes some commonly used clustering methods such as hierarchical clustering (18), SOTA (19,20), SOM (21), K-means (22) and SOM-Tree (23). The evaluation of cluster quality, a scarcely addressed issue, has been implemented here in the Cluster Accuracy Analysis Tool (CAAT) module (see below). Differential gene expression implies finding genes with significant differences in expression between two or more classes, related to a continuous experimental factor (e.g. the concentration of a metabolite) or to survival data. A new, more complete module for differential gene expression is presented in this new version of GEPAS (see below). The module Tnasas for class prediction implements different classifiers, such as diagonal linear discriminant analysis (DLDA) (24), nearest neighbour (NN) (25), support vector machines (SVM) (26), random forest (27) and shrunken centroids (PAM) (28) of known efficiency as class predictors using microarray data (24). Cross-validation error is calculated in a way to avoid the well-known selection bias problem (29,30). See Tnasas help (http://tnasas.bioinfo.cipf.es/cgibin/docs/tnasashelp) for a more detailed description of the methods and error estimation strategy. 
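The selection-bias problem mentioned above arises when genes are pre-selected on the full data set before cross-validation. A minimal scikit-learn sketch of the general strategy (an illustration, not the Tnasas implementation) keeps the gene ranking inside each cross-validation fold, so the left-out samples never influence the selection; the gene count and fold number are arbitrary example values.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def unbiased_cv_error(X, y, n_genes=50, n_folds=10):
    """Cross-validated error with the gene ranking re-done inside every
    training fold, avoiding the selection bias described in the text."""
    model = Pipeline([
        ("select", SelectKBest(score_func=f_classif, k=n_genes)),
        ("svm", SVC(kernel="linear")),
    ])
    accuracy = cross_val_score(model, X, y, cv=n_folds).mean()
    return 1.0 - accuracy

# Hypothetical usage: X is a samples-by-genes expression matrix, y the class labels.
# err = unbiased_cv_error(X, y, n_genes=50)
```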
Array-CGH (31) can be analysed through the module ISACGH that allows predicting copy number, relating these values to gene expression and performing functional annotation through the babelomics (11) suite. Finally, functional annotation is carried out with the babelomics suite which can be used either as an independent suite or as an integrated part of the GEPAS. Figure 1 illustrates, following the metaphor of a subway line, the interconnections of the different tools in the GEPAS environment. NORMALIZATION AND PREPROCESSING GEPAS now implements normalization facilities for both twocolours and Affymetrix arrays. DNMAD (32) module performs normalization in two-colour arrays using print-tip loess (33) with a number of different options. DNMAD can input Genepix (Axon instruments) GPR files. The module expresso normalizes Affymetrix CEL files using standard Bioconductor (34) tools; in particular the package affy (35). Besides its friendly web interface we provide the user with the speed and above all the physical memory available in our server. More information can be found in the corresponding tutorial web pages (http://bioinfo.cipf.es/docus/courses/on-line.html). In addition, the preprocessor (36) module performs some preprocessing of the data (log-transformations, standardizations, imputation of missing values and so on). CLUSTERING AND CLUSTER QUALITY ESTIMATION Despite the fact that clustering is one of the most popularalbeit often improperly used (30)-methodologies in the analysis of microarray data there are very few alternatives for the estimation of the quality of the results found. We have included a module, CAAT, which provides many options for the visualization and intuitive manipulation of hierarchical and non-hierarchical clustering results. Many visualization modes, browsing options and cluster extraction possibilities are currently available. Moreover, CAAT provides some descriptive measures about each partition (average profiles, standard deviation profiles, inter and intra-cluster distances) as well as a global estimation of cluster quality by the silhouette method (37), which performs well, in noisy situations, such as microarray analysis (38). CAAT submits data to other tools such as the Babelomics (11) functional annotation suite or to ISACGH (Figure 1). DIFFERENTIAL GENE EXPRESSION This version of GEPAS includes new methods for differential gene expression analysis under different conditions. The old module pomelo has been replaced by the new module T-rex (Tools for RElevant gene seleXion) which is much faster and offers new tests for different situations. T-rex distinguishes among four conceptually different testing cases: Finding genes differentially expressed between two discrete classes (e.g. case/control and so on). A number of authors (39,40) have found that the classical t-statistic, which was widely used in early work on the analysis of differential expression, can be highly unreliable for microarray data. Problems arise mainly as a consequence of statistical issues relating to the SD term in the denominator of the t-statistic. For example, many non-differentially expressed genes may by chance have small observed SDs, which may cause these genes to be erroneously selected. GEPAS now also implements different new tests: The t-test, which is still available. An empirical Bayes methodology that allows fitting hierarchical mixture models to identify differentially expressed genes (41). 
One of the advantages of this methodology is that it fits a global model taking into account all genes in the dataset. A novel test for the analysis of microarray data by combining inference for differential expression and variability (CLEAR-test) (J. Valls, M. Grau, X. Sole, P. Hernandez, D. Montaner, J. Dopazo, M. A. Peinado, G. Capella, M. A. G. Pujana and V. Moreno, manuscript submitted). Most tests evaluate differential expression by using estimated variability, but no inference is made in terms of the variability itself. CLEAR-test evaluates both whether genes show large fold changes and whether their variability is high. A data-adaptive approach to the analysis of differential expression, in which an effective test statistic is learned directly from microarray data. This approach has been shown to ameliorate many of the problems associated with both the t-statistic and simple moderated statistics like SAM (42), and to produce good results under a range of conditions (43). Finding genes differentially expressed between more than two classes (e.g. different types of cancers and so on) Together with the classical ANOVA methodology we make available the same CLEAR test mentioned above (41). While the mathematical treatment of this kind of data is similar to that of two classes, in our tools, we separate the case when more than two classes are available because of its different conceptual implications. Finding genes whose expression is correlated to a continuous variable (e.g. the level of a metabolite). Regression analysis of gene expression on any numerical independent variable has been implemented. C routines have been compiled for the particular architecture of our computers in order to achieve the maximal speed. Estimates of Pearson's and Spearman's correlation coefficients as well as P-values for testing the null hypothesis of no correlation can also be obtained with T-rex. Finding genes whose expression is related to survival times. GEPAS uses C routines to estimate a Cox proportional hazards regression model (44). Right censored data are allowed as well as replicates in the survival times. Censoring variables should be provided by the researcher together with survival times that may be replicated. When appropriate, P-values adjusted for multiple testing are provided. Three methodologies are implemented. One of them controls the FWER (family-wise error rate) (45) while the others control the FDR (false discovery rate) (46). Our implementations make use of the p.adjust function in the stats R package and the qvalues package (47) from Bioconductor. FUNCTIONAL ANNOTATION Functional annotation of the experiments gives clues to the researcher for the interpretation of the experiment. There are a Figure 1. Map of GEPAS functionalities as a subway line. Data (Affimetrix, two-colour or raw) are introduced from the left side and pass through the preprocessor. Then different types of analyses can be performed: gene selection (T-rex) in different situations (two or more classes, correlation or survival; see text for details) or class discovery (Tnasas) are two types of supervised analyses. Array-CGH data can be analysed through the red line ISACGH. Unsupervised analysis can also be performed using different methods. CAAT allows to map co-expressed genes on their chromosomal coordinates allowing the study of RIDGES (54). All the tools end up in Babelomics (11), that allows for two different types of analysis: comparison of two sets of genes of analysis or blocks of functionally related genes. 
number of tools that make use of gene functional annotations to try to understand the global changes in gene expression in microarray experiments (48), but probably one of the most complete packages in this respect is the Babelomics suite (11,49). This suite of programs for functional annotation of genome-scale experiments has undergone a deep modification, described in detail elsewhere (49). In brief, Babelomics can now compare two groups of genes and test simultaneously for the significant over-abundance of diverse biological themes such as GO terms, KEGG pathways, Interpro motifs, Swissprot keywords, Transfac® motifs, CisRed motifs, relative abundance in tissues and bioentities extracted from PubMed, with proper multiple-testing adjustment. This is carried out by the FatiGO+ module, the evolution of the FatiGO program (50). Additionally, there are two modules designed to search for functionally related blocks of genes that are co-ordinately over- or under-expressed, using either the FatiScan (51) or the GSEA (52) algorithm. Despite its general scope (Babelomics is not restricted to microarrays but is applicable to any type of large-scale experiment), and the possibility of being used alone as an independent resource, the Babelomics suite has been fully integrated into GEPAS. Modules for gene selection (T-rex) or class prediction (tnasas) can submit the genes selected as relevant to the FatiGO+ module for testing against the rest of the genes. Likewise, the modules for clustering (hierarchical, k-means, SOM, SOTA), through their cluster viewers or through CAAT, can submit the genes within a selected cluster to be tested against the rest of the genes. A similar operation can be performed from within ISACGH with the genes contained in a selected chromosomal region. Moreover, arrangements of genes can be sent from T-rex to FatiScan to test for blocks of functionally related genes that are co-ordinately over- or under-expressed. Sets of arrays can also be submitted to GSEA with the same purpose. ARRAY-CGH Genetic aberrations, which are the molecular basis of many diseases, have classically been studied through CGH. The introduction of microarray-based CGH methods (array-CGH) has revolutionized this methodology in terms of resolution and throughput (31,53) but, at the same time, has generated a need for new algorithms and software for dealing with this type of data. We have included in GEPAS a new module, ISACGH, which completely replaces the old viewer InSilicoCGH (14). ISACGH includes two new and efficient methods for accurate estimation of genomic copy number from array-CGH hybridization data, integrated into a web-based system that allows, for the first time, the combined study of gene expression and genomic copy number. Several visualization options offer a convenient representation of the results. Moreover, the link to the Babelomics (11,49) tools allows, for the first time in a tool of this type, the production of functional annotations (using different relevant biological information such as gene ontology, pathways, etc.) for the detected chromosomal regions of interest (amplified or deleted). We use DAS technology (Distributed Annotation System; see http://www.biodas.org/), which allows a remote mapping of information (our predictions) from a server (our server) to a client (Ensembl), to represent the ISACGH predictions and data on the Ensembl chromosomal coordinates. ISACGH generically maps data onto their chromosomal coordinates. 
So, beyond to map genomic hybridisations any other data can be mapped. Thus CAAT can send to ISACGH groups of co-expressing genes, which might be useful for defining regions of increased gene expression, also known as RIDGES (54). Polymorphisms affecting gene expression Although the study of regulatory polymorphisms is not new, there has been a recent revival of interest in them mainly because of the availability of high-throughput data and methodologies that allows their characterisation (55). The corresponding GEPAS modules (CAAT, tnasas and T-rex) have a unique feature in this regard: the possibility of connecting the genes found to be regulated in a microarray experiment to possible regulatory SNPs in such genes. In particular, clustering and gene selection methods can be connected to the PupaSuite (15)(16)(17). DISCUSSION GEPAS is a long-term project that aims to provide the scientific community with an advanced set of tools for microarray data analysis, without renouncing to an easy and intuitive use. It has been running uninterruptedly for more than four years and has grown to include more tools as new algorithms were introduced in the microarray data analysis arena (12)(13)(14). The GEPAS team has intended to deliver a coherent set of state-ofthe-art and widely established algorithms, running away from building a simple collection of as-much-as-possible tools. Actually, any new tool included is the response to a new or emerging requirement requested by our users. As the Functional Genomics node of the Spanish Institute of Bioinformatics (INB; http://www.inab.org) and being part of the Spanish Network of Cancer Centers (RTICCC; http://www.rticcc.org) we have a direct contact with researchers from which we get much of the feedback necessary to build up a useful tool. GEPAS, integrated with the Babelomics suite (11,49), provides the tools for performing the most common analyses of microarray data. Moreover, it has been conceived as a workflow that helps the user to carry out a series of consecutive steps of analysis with simple mouse clicks. GEPAS has been designed to take full advantage of the properties of the web: connectivity, cross-platform functionality and remote usage. Its modular architecture allows easy implementation of new tools and facilitates the connectivity of GEPAS from and to other web-based tools. The user of GEPAS ranges from the experimentalist with not much experience in bioinformatics and no deep statistical skills, interested only in data analysis, to the bioinformatician that invokes some of the tools remotely for different purposes. GEPAS is running in a high-end cluster (with 20 dedicated AMD Opteron CPUs at 2.4 GHz) with a large amount of RAM (6 GB). This allows to use tools (e.g. normalization tools are highly RAM-consuming) that usually are beyond the capabilities of the hardware available to many end users. Although other alternatives are available for microarray data analysis, there is no other similar resource over the web with the number of possibilities offered by GEPAS.
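As a concrete illustration of the multiple-testing adjustment that T-rex reports alongside its gene-selection results (FDR control in the spirit of R's p.adjust and the qvalue package mentioned above), the short sketch below applies a Benjamini-Hochberg correction to gene-wise t-test p-values. It is an illustration on synthetic data, assuming only numpy and scipy; it is not GEPAS code.

```python
# Benjamini-Hochberg FDR adjustment of gene-wise p-values (illustration only).
# Assumes numpy/scipy; the expression data are synthetic, with 50 truly changed genes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(size=(1000, 8))      # 1000 genes x 8 control arrays
treated = rng.normal(size=(1000, 8))      # 1000 genes x 8 treated arrays
treated[:50] += 1.5                       # 50 genes with a genuine shift

_, pvals = stats.ttest_ind(treated, control, axis=1)

def bh_adjust(p):
    """Benjamini-Hochberg adjusted p-values (as in R's p.adjust(method='BH'))."""
    p = np.asarray(p)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.clip(scaled, 0.0, 1.0)
    return adjusted

significant = bh_adjust(pvals) < 0.05
print("genes called significant at FDR 0.05:", int(significant.sum()))
```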
2014-10-01T00:00:00.000Z
2006-07-01T00:00:00.000
{ "year": 2006, "sha1": "c91592b9b249fc84f0f353487fc0d20dc3ad89bf", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/34/suppl_2/W486/7623046/gkl197.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "203a1c69e9c766bfbab12692926723e2d5075365", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine", "Computer Science" ] }
198981108
pes2o/s2orc
v3-fos-license
Co-downregulation of GRP78 and GRP94 Induces Apoptosis and Inhibits Migration in Prostate Cancer Cells Abstract Background Both glucose-regulated protein 78 kDa (GRP78) and glucose-regulated protein 94 kDa (GRP94) are important molecular chaperones that play critical roles in maintaining tumor survival and progression. This study investigated the effects in prostate cancer cells following the downregulation of GRP78 and GRP94. Methods RNA interference was used to downregulate GRP78 and GRP94 expression in the prostate cancer cell line, PC-3. The effects on apoptosis and cell migration was examined along with expression of these related proteins. Results Small interfering RNAs targeting GRP78 and GRP94 successfully down-regulated their expression. This resulted in the induction of apoptosis and inhibition of cell migration. Preliminary mechanistic studies indicated that caspase-9 (cleaved) and Bax expression levels were upregulated while Bcl-2 and vimentin expression levels were downregulated. Conclusion Co-downregulation of GRP78 and GRP94 expression induces apoptosis and inhibits migration in prostate cancer cells. Introduction Prostate cancer (PCa) is the most common malignancy in middle-age and elderly populations. The increasing aging Chinese population has led to an increases in PCa incidence [1,2]. Androgen receptors are known to play an important role in PCa development and progression. Currently, maximal androgen deprivation therapy (ADT) remains a common treatment of PCa [3]. However, PCa often develops resistance to ADT and can subsequently evolve into castration-resistant prostate cancer (CRPC) [4,5]. Thus, there is a need to develop new therapies for CRPC. At present, gene therapy has opened new avenues for cancer treatment and has thus been attracting increasing attention. An important strategy in gene therapy is to interfere with both the transcription and translation processes of oncogenes and tumor suppressor genes to regulate their expression levels. Thus this regulation can affect tumor progression and improve patient prognoses [6,7]. Therefore, identifying a genetic target suitable for CRPC therapy may lead to the development of novel therapeutics. Glucose-regulated protein 78 kDa (GRP78) is a member of the heat shock protein family and is an important molecular chaperone. It functions by correcting protein folding errors and degrades irreversibly faulty polypeptides as part of the unfolded protein response (UPR). These functions help terminate or mitigate endoplasmic reticulum stress (ERS), in order to avoid apoptosis [8]. Glucose-regulated protein 94 kDa (GRP94) is another heat shock protein and also an important molecular chaperone. Both GRP94 and GRP78 play similar roles in the UPR process [8]. In addition, GRP94 is implicated in various signaling pathways; for example, this protein activates and sustains the MAPK and AKT/ S6 signaling pathways in order to avoid apoptosis and promote cell proliferation [9][10][11]. Previous studies have shown that GRP78 and GRP94 are highly expressed in siRNA synthesis All siRNAs, including those targeting GRP78 (siGRP78) and GRP94 (siGRP94) as well as a negative control siRNA (NC, non-targeted binding with any human gene sequence), were designed and synthesized by Invitrogen (USA). The sequences of the three RNA sets are shown in Table 1. Cell transfection The reverse transfections were performed using the INTERFERin in vitro siRNA transfection reagent (Polyplus, France) according to manufacturer's instructions. 
Specifically, siRNAs were diluted in 200 µl of Opti-MEM medium (Gibco, USA), combined with 12 µl of INTERFERin transfection reagent and incubated for 10 minutes at room temperature. Following incubation, the siRNA+INTERFERin transfection reagent mixture was transferred to a six-well plate containing 1.95 ml of FBSfree DMEM per well. PC-3 cells were incubated with the transfection mixture and cultured in a 37 ℃ incubator for 8 hours. Following this, 245 µl of FBS was added to each well, and the cells were returned to the incubator for further culturing. The final siRNA concentration in the NC, siGRP78 and siGRP94 groups was 50 nM. In the siGRP78+94 group, the final concentration was 50 nM siGRP78+50 nM siGRP94. No siRNA was added to the blank group. Western blot analysis 48 hours after transfection, cells were harvested, lysed and centrifuged at 12,000g for 15 minutes to collect protein lysates. Protein concentrations were quantified, and equal amounts of protein were then separated by electrophoresis on a 12% SDS-PAGE gel and transferred to a PVDF membrane. The PVDF membranes were blocked with blocking solution and then incubated with a primary antibody overnight at 4℃. In this study, the following primary antibodies were used: anti-GRP78 and anti-GRP94 (Proteintech, USA); anti-caspase-9, anti-vimentin and antiβ-actin (Santa Cruz Biotechnology, USA); anti-Bcl-2, anti-Bax and anti-GAPDH (Abcam, UK). Following incubation with the primary antibodies, the membranes were incubated with an HRP-labeled secondary antibody (Santa Cruz) for 2 hours at room temperature. The BeyoECL Plus Kit (Beyotime, China) was used to visualize protein bands. The relative gray value of each band was determined using Quantity One 4.62 software (Bio-Rad, USA). various tumor tissues and are involved in promoting tumor growth and invasion [12][13][14][15][16]. In this study, we used RNA interference technology to investigate the interaction between these two proteins in the PCa cell line, PC-3. We downregulated their expression levels both individually and simultaneously and examined the effects of this downregulation on biological behaviors (apoptosis, migration) as well as their possible mechanisms in PCa cells. Immunohistochemical staining Twenty PCa tissue cases were chosen for the experimental group, and 20 benign prostatic hyperplasia (BPH) tissue cases were used as a control group. Approval was obtained from all patients as well as from the ethics committee of the hospital prior to tissue collection for this experiment. The immunohistochemical staining process was as follows: tissue samples were fixed, paraffin-embedded and sectioned into 5 µm slices. The sections were dehydrated, dewaxed and then treated with hydrogen peroxide to block endogenous peroxidase activity. Antigen retrieval was performed by microwaving the sections, which were then blocked with 10% goat serum. Next, the sections were incubated overnight with anti-GRP78 and anti-GRP94 antibodies (Proteintech, USA). The sections were washed and then incubated with a secondary antibody from the ready-to-use SABC Staining Kit (Boster, China). Three fields were randomly selected under the microscope in each section, and the percentage of positively stained cells in the visual field was estimated. 
According to similar studies [17], the degree of staining was divided into five grades according to the proportion of positive cells: the proportion of positive cells <5% is negative (-), 5-25% is weakly positive (±), 26-50% is positive (+), 51-80% is moderately positive (2+), and >80% is strongly positive ( 3+), at the same time, pictures of these selected fields of view were taken to calculate their optical density (OD) values. between two groups were assessed using Student's t-test. The data are expressed as the mean ± SD, and P<0.05 was regarded as statistically significant. GRP78 and GRP94 are highly expressed in PCa tissue We first examined the expression of GRP78 and GRP94 in PCa tissue and benign prostatic BPH tissue via immunohistochemistry. Both proteins were mainly expressed in the cytoplasm and partially expressed in the cell membrane; positive staining was observed as brownish-yellow particles under the microscope. The results showed that GRP78 and GRP94 expression were moderately positive (2+) or strongly positive (3+) in 20 cases of PCa. GRP78 expression was strongly positive (3+) in 12 cases, while GRP94 expression was strongly positive (3+) in 14 cases. The expressions of GRP78 and GRP94 are shown in Figure 1. In all 20 BPH cases, GRP78 and GRP94 expression was either positive (+) or weakly positive (±). The mean OD value of the immunohistochemistry photographs was determined using HPIAS-1000 image analysis software (Champath Image, China). Analysis of the results demonstrated that the mean OD values for GRP78 and GRP94 expression in PCa tissue were significantly higher than those in BPH tissue. siRNA-mediated downregulation of GRP78 or GRP94 affects protein expression in PCa cells In this study, PC-3 cells were divided into the following groups: a blank group (blank; no siRNA added); a negative control group (NC; transfected with negative control siRNA); a GRP78 downregulated group (siGRP78; transfected with GRP78-targeting siRNA); a GRP94 downregulated group (siGRP94; transfected with GRP94-targeting siRNA); and a GRP78 and GRP94 co-downregulated group (siGRP78+94; transfected with both GRP94-targeting and GRP78-targeting siRNAs). Western blotting showed that in siGRP78-treated PC-3 cells, GRP78 expression was downregulated while there was a significant upregulation of GRP94 expression (P<0.01). Likewise, in the siGRP94 group, there was a decrease in expression of GRP94 expression and upregulation of GRP78 expression (P=0.001). In the co-downregulation siGRP78+94 group, the expression of both proteins was 2.6 Apoptosis rate determination 48 hours after transfection, the cells were harvested and washed twice and resuspended in 500 µl binding buffer. Then, 5 µl PI and 5 µl FITC-labeled Annexin V (KeyGen, China) were added to the resuspended cell solution. This mixture was incubated for 20 minutes at room temperature in the dark. The apoptosis rates were determined by flow cytometry. Determination of cell migration inhibition rates Cell migration assays were performed using the Transwell system (Corning Life Sciences, USA). 48 hours after transfection, the cells were harvested and resuspended in serum-free DMEM. The cell densities were adjusted to 1×10 5 /ml, and 200 μl of cell suspension was added to each Transwell insert compartment. A total of 600 μl of medium containing 20% FBS was added to the lower compartment of each well of the 24-well plate, and the cells were cultured for 24 hours at 37 ℃. 
Then, the Transwell insert was removed and the cells on the outer surface of the Transwell insert were fixed, stained with crystal violet and photographed. To calculate the migration inhibition rate, the stained cells were lysed in 100 µl of 10% glacial acetic acid, and OD values were measured at 570 nm using a microplate reader. The migration inhibition rates were calculated with the following formula: Migration inhibition rate = [1-OD value of experimental group / OD value of blank group] × 100% Statistical methods All data were statistically analyzed using SPSS 19.0 software (IBM, USA). Differences between three or more groups were assessed by one-way ANOVA. Differences Co-downregulation of GRP78 and GRP94 induces apoptosis in PCa cells 48 hours after transfection, we used Annexin V-FITC + PI staining to determine PC-3 cell apoptosis rates in each group by flow cytometry. As shown in Figure 3, the apoptosis rates in the siGRP78 and siGRP94 groups were significantly increased compared to the blank and NC groups. PC-3 cells cotransfected with siRNAs targeting GRP78 and GRP94, exhibited an apoptosis rate of approximately 20%, which is significantly higher than the rate observed in either the siGRP78 or the siGRP94 group. Co-downregulation of GRP78 and GRP94 inhibits PCa cell migration To determine the effect of GRP78 and GRP94 co-downregulation on PCa cell migration, we used the Transwell system and calculated migration inhibition rates in all five groups 48 hours after transfection. As shown in Figure 4, the migration inhibition rates in the siGRP78 and siGRP94 groups were significantly higher compared to the NC group. The migration inhibition rate in the siGRP78+94 group, was higher than in either the siGRP78 group or the siGRP94 group. Effect of GRP78 and GRP94 co-downregulation on the expression of apoptosis-and migration-related proteins in PCa cells To further investigate the potential mechanism underlying GRP78 and GRP94 co-downregulation-induced apoptosis and inhibition of migration of PCa cells, we used Western blotting to examine protein expression levels of caspase-9, BCL-2, Bax and vimentin in PC-3 cells. The protein expression levels in this group were then compared to the protein expression levels in the blank and NC groups. As shown in Figure 5, in the siGRP78+94 group, the expression of the apoptosis-associated caspase-9 (cleaved) protein was upregulated. This group also displayed decreased expression of Bcl-2 and increased expression of Bax. In the siGRP78+94 group, the expression of vimentin, which is associated with cell migration, was also downregulated. has multiple functions and includes aiding in protein folding and transport [18,19]. It is also an important stress response protein and is involved in maintaining stable ER function under such conditions (e.g., low oxygen, low sugar and acidic environments). Thus, it plays important roles in cell survival, proliferation and migration [8]. Previous studies have shown that GRP78 is highly expressed in various tumor cells and is closely associated with tumor proliferation, metastasis and drug resistance [20][21][22]. Similarly, GRP94 is also mainly expressed in the ER, where it functions as a molecular chaperone protein. Though GRP94 displays complex physiological functions, it is also involved in numerous cellular functions which include assisting protein folding, transport, degradation, stabilizing cell states and ensuring cell survival during ER stress [8,23,24]. 
Studies have shown that GRP94 is highly expressed in various solid tumors and promotes tumor growth and metastasis [25][26][27]. Functionally, these two proteins are similar and possibly act synergistically in cancer cells. In this study, we first examined the expression levels of these two proteins in BPH and PCa tissues. We found that both proteins were highly expressed, consistent with previous findings in various solid tumors [12-16, 20-22, 25-27]. This suggests that they may play an important role in PCa cells. Subsequently, we employed RNA interference technology to downregulate the expression of these two proteins in PCa cells. Our results demonstrated an inverse correlation between the two proteins. That is when GRP78 was downregulated, the expression of GRP94 was significantly upregulated. Likewise, when GRP94 was downregulated, the expression of GRP78 was upregulated. These results suggest the existence of a compensatory feedback mechanism between these two functionally similar proteins in PCa cells. When one protein is inhibited, the expression of the other increases in order to meet the functional demands of tumor cells in stressful environments (hypoxic, hypoglycemic and acidic environments) and to maintain cell proliferation and migration. However, the details of this compensatory feedback mechanism remains to be elucidated. Recent research has shown that the downregulation on the expression of either protein (GRP78 or GRP94) in some tumor cells, can to a degree, inhibit the proliferation or migration of tumor cells [28][29][30][31]. However, the downregulation of both proteins at the same time in tumor cells has rarely been reported. To inhibit the compensatory effects of these two proteins, we introduced small interfering RNAs targeting both proteins (GRP78 and GRP94) in PCa cells. This resulted in significant decrease in expression levels of both GRP78 and GRP94. Discussion PCa develops into castration-resistant prostate cancer (CRPC) after a period of androgen deprivation therapy (ADT), after which the efficacy of hormone therapy for prostatic tumors is significantly reduced. Gene therapy is attracting increasing attention as a novel form of CRPC treatment. Thus, it is critical to identify a gene target suitable for gene therapy-based PCa treatment. GRP78 is a molecular chaperone protein that is mainly expressed in the endoplasmic reticulum (ER). This protein was upregulated. Possibly the co-downregulation of GRP78 and GRP94 stimulates the mitochondrial apoptotic pathway by activating intracellular caspase-9 and increasing the Bax/Bcl-2 ratio [34,35]. Vimentin expression was significantly decreased in the siGRP78+94 group. Vimentin is essential for cytoskeletal maintenance and as a result of the decrease in expression this may have an effect on inhibiting cell migration. Though the mechanism driving this observation still requires further investigation. In summary, RNA interference technology can be effectively used to downregulate the expression of GRP78 and GRP94. This induces apoptosis and inhibits the migration of PCa cells. This siRNA-mediated co-downregulation has the potential to be used as a novel therapeutic method for PCa. Conflict of interest: Authors state no conflict of interest. To determine the effects of GRP78 and GRP94 co-downregulation on the biological behavior of PCa cells, we examined apoptosis rates and migratory abilities of PCa cells in each treatment group. 
We found that downregulation of GRP78 or GRP94 alone was able to induce apoptosis and inhibit the migration of PCa cells. However, when the two proteins were simultaneously downregulated, we found that there was a greater increase in apoptosis rates and significantly more inhibition of migration rates. This finding demonstrates that following the co-downregulation of both proteins, GRP78 and GRP94 are no longer able to functionally compensate for each other. As a result, this leads to a more pronounced inhibition of tumor cell characteristics. In light of these results, GRP78 and GRP94 may be promising targets in PCa therapy. However, further investigations involving more cell lines are necessary to comprehensively explore this mechanism. Furthermore, since differential basal levels of these molecular chaperones have been reported [32,33], GRP78/94-based treatments may only work with various efficacy. Lastly, we investigated the potential mechanism underlying the co-downregulation-induced increase in apoptosis and inhibition of cell migration. Our results demonstrated that in the siGRP78+94 group, expression of the caspase-9 (cleaved) was upregulated. Additionally, Bcl-2 expression was downregulated while Bax expression
2019-07-31T13:10:12.044Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "e33f8db90f844afc9df7edeee83bcfe13c77794a", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7874808", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "e33f8db90f844afc9df7edeee83bcfe13c77794a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
159343731
pes2o/s2orc
v3-fos-license
LOCAL POLITICS POST-REFORM ERA: THE ROLES OF MUHAMMADIYAH IN THE 2004- 2014 DPD’S ELECTION IN YOGYAKARTA SPECIAL REGIONS, INDONESIA After post-Reform Indonesia, Indonesia has been employing a dramatic decentralization practice in Indonesia nationwide. Since 1999, Indonesia has been electing for representative council (DPR RI) and regional representation so called DPD RI in very direct way. This paper aims to analyse the local politics especially at the case of DPD’s election in Yogyakarta special region of Indonesia in the competition process among the different group interests which are Islamic group (Muhammadiyah, NU, PKS), Kraton /aristocrat interest, and nationalist group interest (PDI-P). Those groups are the big four that have been playing important role in the competition in DPD election sequentially in 2004, 2009, and 2014. In Yogyakarta special region, four members of DPD since the first election has been represented by three Islamic group both modern and traditional ones and by Yogyakarta palace. this paper more focus on the 2014 election which were followed by more competitor from Pakualaman palace, and also from nationalist group that have same mass with the Palace. this competition was really interesting to explain, how people decide their representative members and how different group compete each other in this election. Also, what strategy and by manner they collect voters in this individual based-individual candidate election.  From the field research, it can be concluded at least there are three point (1) each candidate was strongly endorsed by established organization and community to support for the election since 2004. So, four incumbents easy to win the competition; (2) the success team have many thing in common for example the focus on the core of supporter (basis masa) by using strategy ‘by name by address list”; and lastly (3) they were employing any symbolic that really easy to understand for the common people such as Islamic value, organization value, ideology, etc.  by adopting many social activity and ritual in promoting candidates for example club goods, voluntary organization, and other forms of informal communities. INTRODUCTION Under the democratization project in Indonesia there are growing number of different types of political engagements among the people. Some of faith-based organization prefer to participate in such political party and in the same time they are ready to support candidates for regional representative council so-called DPD RI. But several Islamic groups remains 'apolitics'. The reasons why they are supporting candidate in individual elections are many. The main reason mostly it is about representative of the organizations to have their people in parliamentary assembly in the name of privilege of organization. Other reasons are about political access, financial resources, and to help organization in building network nationwide. Interestingly, there are a lot of organizations including political parties in Yogyakarta Special Regions claim themselves that they are appropriate to have candidate elected in the general election regularly. That is why in every 5 th year elections there are at least 10-13 candidates run for DPD RI in Yogyakarta. One of significant reform demands is actually the decentralization of political power from the previous authoritarian regime whose power held for more than 32 years in the base of power, the capital city, DKI Jakarta. 
This demand afterward was institutionalized in a democratic mechanism that is with holding an election in local level to elect the local representatives for regional representation in national level. This regional representation (DPD) is a new institution that officially established in 2004. Many observers and scholars say that this new institution is replacing the local delegation in new order era. The existence of this institution furthers becoming a challenge regarding to the quality of political decentralization-whether DPD, in the same time, it would bring a region it represents to a better condition and able to reinforce the local government capacity in development . In accordance to UUD 1945, the local representative council has several authorities as mentioned in below article 22D: 1 1.) The local representative council can propose to house of representative a bill which related with local autonomy, relationship between central and local government, the proliferation and the unity of region/district, the natural resources management and other economy resources management, and also affairs related to budget balance between central and local government. central and local government, the proliferation and the unity of region/district, the natural resources management and other economy resources management, implementation of state budget, tax, education, religion, and report the monitoring result to DPR as a consideration for taking an act. DPD was first established in October 4 2014, when the first 128 members of DPD were inaugurated and taken their oath. In the beginning of its establishment, there were various challenges faced by DPD. Those challenges are varying from its authority which considered insufficient to become an effective second room in bicameral parliamentary system, to its lack of institutional capacity. Those challenges appeared especially due to the lack of political support given to this new birth institution. The existence of local representation body in national parliament like DPD, was actually not a new fresh idea and the idea was appeared prior to independence era. This idea was initiated by Moh Yamin in UUD 1945 formulation assembly by national body which prepared all independence needs and affairs (BPUPKI). The notion about the importance of local representation in parliament were first accommodated in Indonesia first constitution, UUD 1945, with a concept 'local delegation' in People assembly Council (MPR), that works along with 'party delegation' (DPR). This was regulated in the second article UUD 1945, that states "MPR is consists of DPR members and delegations from groups and regions, according to regulation stated in constitution." This loose in regulation in UUD 1945 was further regulated in various regulations in constitution. This institution, according to political theory, is a territorial representative 2 , consist of representations from all provinces (four representation for each province) and these representatives are This paper aims to discuss one of cases about how such a big Islam based mass organization in Yogyakarta, Muhammadiyah, was participating to determine representative from Yogyakarta in DPD RI. 
The other questions try to be elaborated in this paper is to analyze the motivation that triggers Muhammadiyah to mobilize its organization and how its structure machines are operationalized to win the prepared candidate in 2004, 2009, and 2014 there are three different period of elections, this writing does not use the chronological approach in elaborating the idea yet it employs the comparative approach. The year or period would only become a mark to tell the involvement of Muhammadiyah, however, the essence is to 2 The existence of DPD in Indonesia constitutional system could not be separated from the institutionalization of representation function. In order to institutionalize this representation function, there are three well known representation system and applied in various democracy countries: 1.) Political representative system; 2.) Territorial representative system; 3.) Functional representative system. answer the question of how Muhammadiyah can take its nominated candidate to Senayan. In the internal of Muhammadiyah, there was a debate over the urgency of Muhammadiyah in promoting its talented cadre to DPD RI. Many cadres questioned the role and benefit that Muhammadiyah would gain with putting its cadre in DPD RI. There are several Muhammadiyah elites, even they are small in number, who doubts to promote any candidate from Muhammadiyah or even involve themselves in politics. Furthermore, there are also cadres who see the political effect of this involvement that can harm the harmony of ummah. On the other hand, there also many cadres who believe that having a representative in DPD RI is essential for Muhammadiyah. Interestingly, many cadres also have such imagination that an official endorsement from Muhammadiyah to its nominated candidate could be interepreted as Muhammadiyah's effort in contributing for the development of our nation. As an insight, a candidate of DPD RI who is promoted by conceptualization The conceptualization of election and representation in this discussion is quite significant to develop the idea about particular things that may different from various concepts in different context. In this term, it is imperative to conceptualize several ideas; election, civil society, and strategy to win in politic of representation. (1) Election There are various perspectives in understanding whether DPD election is categorized as practical politics or not in Muhammadiyah members. Politic is often associated with the politic of party while the non-party election like DPD is often considered as non-practical politics. Generally, people understand that election is an official and democratic transition of power and leader. Furthermore, election can also be used as 'reward and punishment' mechanism for the candidate. Therefore, if one seats in representation body through election process, so its representation would be considered as political representation including DPD member because even representative of DPD is territorial representative, the ones who are electing them are still a 'people'. Whatever the duty they hold in society, if they are working in representation body they would still be considered as political representatives. Since April 5 2012 DPD RI has been declared as Indonesia senate. The senate conception is basically has two functions, which is representation and its position. Representation is a function in representing people and region while its position works for giving the second opinion/alternative from DPR opinion. 
Even as second line, it is expected that this institution would have some improvement in its strategic roles in responding the challenges of development in the regions especially in new proliferated region that would definitely be helped by the existence of its representation in DPD. DPD RI is a new institution emerged from the wave of demands after the decline of Soeharto. It born in the new Indonesia constitutional system through the third UUD 1945 amendment and was decided by MPR RI in November 21 2001. DPD RI has functions, duties, and authorities that officially regulated in constitution. It is a representation body that represents a region and clearly has a significant and strategic position in Indonesia constitutional system to encourage the development process and the advancement of regions and also to actualize the check and balance system both with executive and among legislative body. (2) Muhammadiyah as Civil Society Muhammadiyah in DIY as social and religious organization is very old organization that born in 1912 and was initiated by Kyai Haji Ahmad Dahlan. It is often in political issue in which Muhammadiyah is involved or involves itself as organization mandate or as its elite political expression. Therefore, Muhammadiyah is feeling the necessary of having a representation in DPD RI since 2004 where DPD first established as a part of bicameral system. In the last three legislative elections, Muhammadiyah has always succeed to send its cadre to seat in DPD RI. That kind of patriotic role is considered as part of organization mandate especially a mandate from Tanwir Bali. In the history of politics, it's been always elite as the most determinant actor. Elite is defined as a social group who has highest index in society so they have influence and power in social political life. That index is mostly based on the income/richness, capability, and political power so they able to have control over a group of majority (Bottomore, 2006:1-2). According to Pareto's political stratification, society consists of two classes, which are: The upper class, that also divided into the governing elite and non-governing elite. The second class is the lower class that usually classified as the non-elite class or more well known as 'mass'. According to Robert D Putnam, there are three ways to identify this class; analyzing position, reputation, and decision. Either formal or non-formal position is considered can make people as an elite since it can mediate and give a power attribute that afterwards managed in various ways. Furthermore, the reputation analysis tends to be more informal. Elite is viewed from how he/she is considered has an influence in its neighborhood, even when he/she has no position in society. The last is the decision consideration that Muhammadiyah is basically should not play in practical politics. Moreover, there is an official letter or SK PP Muhammadiyah that clearly limits the procedural political movement. From one of key informants, the decision to nominate a candidate in DPD election was seen as political opportunity as conceptualized by McAdam (1999) and also discussed by Situmorang (2007) (4) Winning Strategy Of three DPD elections periods, the strategy and tactic to win the candidate has not changed much. There were only some improvement and neater organizing that can be seen in a 2014 election process. More detail picture and situation of strategy used by Muhammadiyah in three elections (2004,2009,2014) would be explained later. 
Methodology This study case based research is a descriptive qualitative research. To obtain a data, researchers examine documents both in mass media and in organization archives, and also conduct an interview with actor, success team, organization committee, any news and relevant documents. In addition, in the last DPD election in 2014, researchers were part of the competition therefore can be categorized as participant/participatory observation. As an insider, it is expected there will be an advantage in collecting sensitive and secret data that the data analysis process is started with; 1) Examining all available data from various sources, which in this context is the success team or campaign team documents; 2) After examining all available data, the next is examining the validity of data and conducting data reduction which done with selecting the important data and should be investigated further; 3) The last is interpreting the data. This process consists of understanding and investigating deeply all collected data and afterwards narrating them in a written research report. findings Of three elections in encouraging its cadre to seat in DPD RI, Muhammadiyah has employed similar strategies and tactics from year to year. There were only some improvement and neater organizing that can be seen in a 2014 election process. All these efforts are conducted in term of representing Muhammadiyah in its patriotic roles as regulated in several materials of organization decision. In other words, putting Muhammadiyah cadre in DPD is in term of keeping the spirit of organization in its birthplace, the Yogyakarta special region. In determining a candidate who will represent Muhammadiyah in DPD was never been an instant process. It went through a long process that should be passed so Muhammadiyah would really have its best cadre as representation of organization personally or by his/her voice in a policy making process. In DIY, the ones who decide organization representative are several chairman in regional For more detail about how Muhammadiyah is encouraging its candidate to win the election, there were several strategies that improved and changed from three election period in 2004, 2009, and 2014. It is very obvious that the contemporary marketing politics tactic was influencing the way Muhammadiyah plays it politic. A famous political strategy which known as "by name by address" and used by many legislative candidates was also used in Muhammadiyah. The role of ideology in this semi-politics process is frequently facing a complicated contestation. The reason of either pro and con of (2) as the real evidence of Muhammadiyah that always take the strategic role for the interest of region, country and nation; (3) This matter supported by the amar maruf nahi munkar movements that had been done by Muhammdiyah. Which means, Afnan sought as a figure who able to implement amar ma'ruf nahi munkar. From his track records, which so far from moral scandal made the socialization, activities within Muhammadiyah's forum went easy. People then could undoubtedly says that choosing him not solely as a candidate matter but because of Muhammadiyah (from various interviews). This phenomenon of 'symbolic power' in accordance with the ideas of Benedict Anderson as the imagine community-as the development of common identity. 
In Muhammadiyah bond, although no blood relation between a people with others, in a place that very far away, however, they have connection that bridged by this organization, same origins, or same religion. That resemblance took as the material by the success team of Muhammadiyah in order to gain many votes when elections. Not only ideological bonds, but also between educational service provider of Muhammadiyah with its service users. Whereas the factor utilization of charismatic family which is Afnan as the grandchildren of a moderate national hero and can be accepted by general public. In spite of that, the name of Hadikusumo able to be a force of its own memorabilia for constituents, especially here Afnannot only has similarity in name but also as grandchild of a national hero which by other competitors this strong image they do not have. Moreover his other competitor is came from the group of santri and also have the basis of santri then it become very small for the possibility for other candidates to pick on the big name of Hadikusumo, besides having the title of national hero ki bagus Hadiksusmu had history as preacher who highly respected at the national level. CONCLUSION DPD RI candidate that has been submitted by Muhammadiyah must come from the element of association cadre who have long been active, not an individual who suddenly appears, and that matter proven with the list of names raised by each Pimpinan Daerah Muhammadiyah (PDM) and nomination by PWM DIY. After those names screened by PCM and PDM collected, will be held a special forum named Musyawarah Pimpinan Wilayah (Muspimwil) in chosing the best nomine to be the representation of Muhammadiyah. Furthermore, the candidate not just enough by only active as Muhammadiah cadre, the individual who will represents Muhammadiyah must have wide-range knowledge related to public policy issues and able to stand up for the nation's interests in senate institution. After the long process, then the name of Afnan Hadikusumo appeared with the incumbent status to be back as Muhammadiyah's representation in DPD RI. Afnan judged as the right candidate to be the representation of Muhammadiyah to be compared with other names, which appeared to the surface because he has long history as a Muhammadiyah cadre and considered have the experience also capable knowledge to sit in the DPD RI seats. In addition to these reasons, there is also capable cadre like Henry Zudianto, but he was unwilling. A thesis of Muhammadiyah not the organization that designed for political power struggle in public area is still relevant. DIY Muhammadiyah been three times of DPD RI election nominating candidate with the motivation that encourage more on sentiment factor and emotional than legal-rational within recruitment of candidate till winning. Which means, the importance to protect pride and dignity of the organization in the eyes of DIY society and outside DIY is dominant till no considered only one eye by other political communities in Yogyakarta such NU and particularly PKS. Likely, the competitor of Muhammadiyah's candidate is not NU or the palace but candidate from PKS. The existence of external competitor motivates Muhmmadiyah to try mobilize its organizational resource to win the competition (votes above PKS). Effort to win the candidate from Muhammadiyah to be more superior than PKS should be failed in 2014 caused by Muhammadiyah's machine have responded mediocrely toward election till there was no proper preparation and systematic to gain votes. 
This episode once more confirms that Muhammadiyah members are not very attentive to political affairs, above all the organization's decision-making elites in the AUM, the autonomous organizations, and the regional down to branch leaderships. The sectarian character of politics within Muhammadiyah shows in the fact that once an election is over, the matter is considered finished and everyone returns to routine organizational work. This study has not found any deliberate design within Muhammadiyah to strengthen the DPD as an institution that could serve the organization's missionary endeavor. This is caused by the view, common among Muhammadiyah members, that political representation is important but that its business belongs to the individual, that is, to the winning candidate. The concept of representation in Muhammadiyah is therefore more symbolic than a matter of political advocacy. In this DPD case, there is no institutional design to support the programs or the performance of the Muhammadiyah representative in the DPD after the election. It means that the DPD RI seat is better understood as elite representation rather than representation of the organization or of the public interest (the ummah).
2019-05-21T13:04:52.461Z
2018-11-30T00:00:00.000
{ "year": 2018, "sha1": "05791bbaf8a6e7a4e2389910284b71b20f306345", "oa_license": "CCBYNC", "oa_url": "http://journal.umy.ac.id/index.php/GPP/article/download/5470/3887", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "05791bbaf8a6e7a4e2389910284b71b20f306345", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Political Science" ] }
119604392
pes2o/s2orc
v3-fos-license
Real solutions of the first Painlev\'e equation with large initial data We consider three special cases of the initial value problem of the first Painlev\'e equation (PI). Our approach is based on the method of uniform asymptotics introduced by Bassom, Clarkson, Law and McLeod. A rigorous proof of a property of the PI solutions on the negative real axis, recently revealed by Bender and Komijani, is given by approximating the Stokes multipliers. Moreover, we build more precise relation between the large initial data of the PI solutions and their three different types of behavior as the independent variable tends to negative infinity. In addition, some limiting form connection formulas are obtained. Introduction and main results The first Painlevé equation has the canonical form d 2 y dt 2 = 6y 2 + t. (1.1) It is known that there are two kinds of solutions of PI, behaving respectively as y(t) ∼ − t 6 and y(t) ∼ − −t 6 as t → −∞; see for example Bender and Orszag [3]. Refinements have been obtained for the second case, such that the solutions oscillate stably about the parabola y = − −t 6 , with where φ(t) = 24 and d > 0 and θ are constants; cf. Kapaev [12], Joshi and Kruskal [11], and Qin and Lu [19], see also §32.11 of the handbook [18]. Moreover, numerical analysis conducted in Holmes and Spence [10], Fornberg and Weideman [9] and Qin and Lu [19] indicates that there exist constants κ 1 < 0 and κ 2 > 0, such that all solutions of (1.1) with y(0) = 0 and κ 1 < y (0) < κ 2 belong to the second kind, while otherwise, if y (0) > κ 2 or y (0) < κ 1 , the solutions will blow up on the negative real axis. Therefore, it is natural to consider the initial value problem of (1.1) with initial data The problem is how to determine the asymptotic behavior of y(t) with a given real pair (a, b). In particular, if y(t) ∼ − −t/6 as t → −∞, how to establish the relation between the initial data (1.4) and the parameters d and θ in (1.2)- (1.3). This is an open problem mentioned by Clarkson in several occasions [4,5]. In fact, before Clarkson's open problem is being proposed, investigations [10][11][12]16] have been made on the PI functions on the negative real axis. In [10], a boundary value problem for PI is studied by Holmes and Spence, and it is shown that there are exactly three types of real solutions of PI equation. Later on, Kapaev [12] obtained the asymptotic behavior of these solutions as follows: (1.9) In the above formulas, s k for k = 0, 1, 2, 3, 4 are the Stokes multipliers associated with the given solution; see the definition of Stokes multipliers in (1.24) below. Moreover, using the isomonodromy condition, Kapaev [12] also got the following necessary conditions for the Stokes multipliers of each solution type: 1 + s 2 s 3 > 0 for type (A), 1 + s 2 s 3 = 0 for type (B), 1 + s 2 s 3 < 0 for type (C). (1.10) It should be noted that the solutions of types (A) and (B) may also have finite poles on the negative real axis; see [2, Figures 1 and 2]. In the present paper, we focus on three special cases of the initial value problem (1.4) of the PI equation (1.1). More precisely, we shall establish the relation between the asymptotic behavior of the PI solution y(t) as t → −∞ and the real initial data (y(0), y (0)) = (a, b), in the following cases: (I) fixed a and large positive (or, negative) b; (II) fixed b and large negative a; (III) fixed b and large positive a. 
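Before stating the results, a minimal numerical sketch may help to visualize the trichotomy of behaviors behind these three cases. It is an illustration only, not part of the analysis in this paper; it assumes scipy is available, integrates the initial value problem (1.1) with (1.4) towards t → −∞, and stops at the first pole. The sample slopes are purely illustrative and are not the separatrix values b_n.

```python
# Numerical sketch of y'' = 6*y**2 + t with y(0) = a, y'(0) = b, integrated backwards.
# Assumes scipy; sample slopes are illustrative, not the separatrix values b_n.
import numpy as np
from scipy.integrate import solve_ivp

def pi_rhs(t, u):
    y, yp = u
    return [yp, 6.0 * y**2 + t]

def near_pole(t, u):            # stop once |y| is large: a double pole is close
    return abs(u[0]) - 1.0e3
near_pole.terminal = True

a = 0.0
for b in (-1.0, 0.0, 2.0):      # a few sample initial slopes y'(0)
    sol = solve_ivp(pi_rhs, (0.0, -40.0), [a, b], events=near_pole,
                    max_step=0.05, rtol=1e-9, atol=1e-9)
    print(f"b = {b:+.1f}: integration stopped at t = {sol.t[-1]:8.3f}, y = {sol.y[0, -1]:10.2f}")

# A run that reaches t = -40 is, generically, of the oscillating type, settling about
# -sqrt(-t/6); a run that terminates early has met its first double pole (recall that
# solutions of types (A) and (B) may still pass through finitely many poles).
```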
The motivation of the first two cases comes from a recent investigation by Bender and Komijani [2], in which the unstable separatrix solutions of PI on the negative real axis are studied numerically and analytically. For case (I), Bender and Komijani conclude that for any fixed initial value y(0), there exists a sequence of initial slopes y (0) = b n that give rise to separatrix solutions. Figures 1 and 2 in [2] indicate an interesting phenomenon of the PI solutions as the initial slope varies. It seems that, when b 2n < y (0) < b 2n+1 , the solution passes through n double poles and then oscillates stably about − −t/6, while for b 2n−1 < y (0) < b 2n , the solution has infinitely many double poles. Moreover, they establish the asymptotic behavior b n ∼ Bn as n → +∞ (1.11) by studying the eigenvalue problem associated with the PT -symmetric HamiltonianĤ = 1 2p 2 + 2ix 3 ; see [2]. They have also obtained similar results for case (II). However, part of their argument is based on numerical evidences (see [2, p.8]). To give a rigorous proof of their results and to build a precise relation between the initial data and the large negative t asymptotics, further analysis is needed. Other than the above two cases, case (III) is essentially different, and has not been addressed before, as far as we are aware of. In the above cases (I)-(III), we consider the initial value problem by using the method of uniform asymptotics introduced by Bassom, Clarkson, Law and McLeod [1], and further developed by Wong and Zhang [21,22], Zeng and Zhao [23], and Long, Zeng and Zhou [17]. Our main results are stated as follows. Parts of Theorems 1 and 2 are given in [2] and the rest are new. First, for case (I), we shall prove the following results. In (1.12), (1.13) and (1.14), the constants P 0 , Q 0 and Q 1 are expressible in terms of the beta function as There also exists a descending sequence, · · · <b n < · · · <b 2 <b 1 < 0, having the similar properties as those of {b n }. For case (II), the following holds. Case (III) is in a sense different from the first two cases. We have the following result. where H 0 = 2 5 B 1 2 , 1 6 . For the sake of convenience, we recall some important concepts in the isomonodromy theory for the first Painlevé transcendents. First, one of the Lax pairs for the PI equation is given as follows (see [14]): where are the Pauli matrices and y t = dy dt . Compatibility of the above system implies that y = y(t) satisfies the first Painlevé equation (1.1). Under the transformation the first equation of (1.20) becomes Following [14] (see also [13]), the only singularity of equation (1.22) is the irregular singular point at λ = ∞, and there exist canonical solutions Φ k (λ), k ∈ Z, of (1.22) with the following asymptotic expansion where H = −( 1 2 y 2 t − 2y 3 − ty)σ 3 , and the canonical sectors are These canonical solutions are related by where s k are called Stokes multipliers, and independent of λ and t according to the isomonodromy condition. The Stokes multipliers are subject to the constraints s k+5 = s k and s k = i(1 + s k+2 s k+3 ), k ∈ Z. (1.25) Moreover, regarding s k as functions of (t, y(t), y (t)), they also satisfy [12, p.1687, (13)] whereζ stands for the complex conjugate of a complex number ζ. From (1.25), it is readily seen that, in general, two of the Stokes multipliers determine all others. The derivation of (1.23), (1.24) and (1.25), and more details about the Lax pairs, are referred to in [8]. The rest of the paper is arranged as follows. In Sec. 
2, we prove the main theorems, assuming the validity of Lemmas 1, 2 and 3. Next, in Sec. 3, we apply the method of uniform asymptotics to calculate the Stokes multipliers when t = 0, and hence to prove the three lemmas. In the last section, Sec. 4, a discussion is provided on prospective studies and possible difficulties in the general case of Clarkson's open problem. Some technical details are put in Appendices A, B, C and D to clarify the derivation. Proof of the main results Because the monodromy data {s 0 , s 1 , s 2 , s 3 , s 4 } and the solutions of PI have a one-to-one correspondence, a general idea to solve connection problems is to calculate the Stokes multipliers of a specific solution in the two specific situations to be connected. In the initial data case, this means to calculate all s k as t → −∞ and at t = 0. When t → −∞, as stated above in (1.5), (1.7) and (1.9), the Stokes multipliers have been derived by Kapaev [12]. When t = 0, it is much difficult to find the exact values of s k . However, inspired by the ideas in Sibuya [20], we are able to obtain their asymptotic approximations in the special cases considered here, as a step forward. These approximations, together with (1.5), (1.7), (1.9) and (1.10), suffice to prove our theorems. First, the following lemma is crucial to prove Theorem 1. Lemma 1. For any fixed a and large positive (or negative) b, the asymptotic behaviors of the Stokes multipliers, corresponding to y(t; a, b), are given by An immediate consequence of (2.1) is stated as follows. Corollary 1. For any fixed a, there exists a positive sequence {b n } and a negative sequence as n → ∞, such that the Stokes multipliers, corresponding to y(t; a, b n ) (or y(t; a,b n )), satisfy Proof of Theorem 1. We only give the proof when b > 0. When b < 0, the argument is the same except for some minor justification of signs; see (2.1). According to (1.10), see also [13, Theorems 2.1 and 2.2], regarding s 0 as a function of a and b, then s 0 (a, b n ) = 0 implies that the solutions y(t; a, b n ) all belong to type (B). Moreover, (2.3) implies (1.11), and is an improved approximation of b n . Although both formulas are only applicable for large n, (2.3) has a better accuracy for small n when a = 0 as compared with (1.11); see Table 1. For fixed a, from (2.1), it is obvious that the sequence {b n } can be chosen such that It means that if b varies in the above two sequences of open intervals, the PI solutions y(t; a, b) correspond alternatively to type (A) and type (C) solutions. To get (1.12), we calculate the leading asymptotic behavior of s 1 − s 4 as b → +∞. In fact, as a consequence of (2.1), we get as b → +∞. Here we have set P 0 = E 0 +F 0 . Putting b = b n , and noting that bn as n → ∞, we obtain (1.12). Next, to obtain the limiting form connection formulas in (1.13), we only need to calculate |s 0 | and arg s 3 . According to (2.1) and (2.2), one has as b → +∞. Substituting (2.5) into (1.5) and noting that P 0 = E 0 + F 0 , one may immediately get (1.13). Finally, to obtain (1.14), we calculate |s 0 | and arg s 2 , which can also be derived from (2.1). This completes the proof of Theorem 1. Similarly, based on the following result, we can prove Theorem 2. Lemma 2. For fixed b and large negative a, the asymptotic behaviors of the Stokes multipliers, corresponding to y(t; a, b), are given by as a → −∞, where Q 0 , E 0 and F 0 are the same constants as the ones in (2.2). Corollary 2. 
There exists a negative discrete set {a n }, n = 1, 2, · · · , with such that the Stokes multipliers corresponding to y(t; a n , b) are Furthermore, Theorem 3 is an immediate consequence of a combination of Lemma 3 and (1.10). Lemma 3. For fixed b and large positive a, the asymptotic behaviors of the Stokes multipliers, corresponding to y(t; a, b), are given by We leave the proof of Lemmas 1, 2 and 3 to the next section. Uniform asymptotics and the proofs of the lemmas In this section, we are going to prove Lemmas 1, 2 and 3, using the method of uniform asymptotics [1]. The method consists of two main steps. The first step is to transform the Lax pair equation (1.22) into a second-order Schrödinger equation, and to approximate the solutions of this equation with well known special functions. Indeed, denoting (3.1) One can regard (3.2) as either a scalar or a 1 × 2 vector equation. We will see that in all cases (I), (II) and (III) considered in this paper, the solutions of this Schrödinger equation can be approximated by certain special functions. In the second step, we can therefore use the known Stokes phenomena of these special functions. In each case, we shall calculate the Stokes multipliers of Y , and then calculate those of Φ. We carry out a case by case analysis to complete the steps. We may assume that b > 0. The argument for the case when b < 0 is the same, and hence omitted here. With the scaling λ = ξ where and g(η, ξ) = O ξ − 6 5 as ξ → +∞, uniformly for all η bounded away from η = 0. Obviously, there are three simple turning points, say η j , j = 0, 1, 2. Those are the zeros of F (η, ξ) near −1, e πi 3 and e − πi 3 respectively. For convenience, we denote α = e πi 3 and β = e − πi 3 . A straightforward calculation shows that η 1 − α ∼ − 1 6ξ and η 2 − β ∼ − 1 6ξ as ξ → +∞. According to [7], the limiting state of the Stokes geometry of the quadratic form F (η, ξ)dη 2 as ξ → +∞ is described in Figure 1. Therefore, following the main ideas in [1], we can respectively from neighborhoods of η = η 1 and η = η 2 to the origin. In the present paper, we take the principal branches for all the square roots. Then the conformality can be extended to the Stokes curves, and the following lemma is a consequence of [1, Theorem 2]. Lemma 4. There are constants C 1 , C 2 andC 1 ,C 2 , depending on ξ, such that uniformly for η on any two adjacent Stokes lines emanating from η 1 ; and uniformly for η on any two adjacent Stokes lines emanating from η 2 . The proof of Lemma 5 is left to Appendix A. Now we turn to the proof of Lemma 1, assuming the validity of Lemma 5. If |η| → +∞ with arg η ∼ π 5 , then arg ζ ∼ π 3 . Hence, substituting (3.9) into (3.11) and (3.13), and noting that λ = ξ 2 5 η and the definition of F (η, ξ) in (3.4), we get as ξ → +∞. In addition, a straightforward calculation from (1.23) leads to Comparing (3.20) with (3.22), and using the results of Lemma 4, we obtain as ξ → +∞. Here, the c j 's in (3.23) are not equal but asymptotically equal to the corresponding ones in (3.21) as ξ → +∞. By abuse of notations, we use the same symbol for the c j 's in these two formulas, since we only care about the asymptotic behavior of the Stokes multipliers. Case (II): fixed b and large negative a In this case, it is appropriate to make the scaling a = −ξ 2 5 and λ = ξ 2 5 η with ξ → +∞, following which, equation (3.2) is reduced to respectively. 
Moreover, we find that near the turning pointsη 1 andη 2 , equation (3.29) is similar to and even simpler than (3.3) in Case (I), and the Stokes geometry ofF (η, ξ)dη 2 is the same as the one shown in Figure 1. Hence, following the analysis in Subsection 3.1, we get Hence, in a neighborhood of each of these two turning points and on the stokes curves emanating form them, the Airy functions can also be used to uniformly approximate the solutions of (3.33). Similar to (3.5) and ( then we have the following lemma which is similar to Lemma 4. Lemma 6. There are constants D 1 and D 2 , depending on ξ, such that uniformly for η on two adjacent Stokes curves emanating fromηα. Lemma 8. There are two constants A 1 and A 2 , depending on ξ, such that uniformly for η on any two adjacent Stokes lines emanating fromη j , j = 1, 2, 3. The proof of this lemma is similar to those of [1, Theorems 1 and 2]. By evaluating the integrals in (3.42) (see Appendix C), we get the asymptotic behavior of δ(η) as follows. Discussion We have considered the initial value problem of (1.1) with initial data (1.4) in three special cases. In fact we have given a rigorous proof of the conclusions in [2] for PI, built more precise relations between the initial value of PI solutions and their large negative t asymptotic behavior, and obtained the limiting form connection formulas (1.12)-(1.18). There are still several issues to be further investigated. First, we have only considered three special cases of the initial value problem of PI. An equally natural question is what would happen when both a and b are large. More attention should be paid to [9,Fig. 4.5] by Fornberg and Weideman. According to this figure, we find that it may be divided into two cases: (1) large |b| and large negative a, (2) large |b| and large positive a. (4.1) Moreover, we see that the analysis of the first case in (4.1) is similar to one of case (I) or case (II) in the present paper, while for the second case in (4.1), more careful analysis is needed. For example, a description of the Stokes geometry and the Stokes curves at the finite plane seems to be vital; cf. [15]. Of course, one may consider other special cases of Clarkson's open problem. For example y(0) = y (0) = 0. According to [11], see also in [19], the PI solution with y(0) = y (0) = 0 oscillates about y = − −t 6 when t < 0, and it satisfies (1.2) and (1.3). Hence, a question is how to determine the exact values of the parameters d and θ in (1.2) and (1.3). In fact, the result is already known, and is given by Kitaev [16] via the WKB method. To the best of our knowledge, it may be the only special case of the Clarkson's open problem which has been fully solved for PI equation. However, a more important and challenging problem is the general case of Clarkson's open problem. Assuming that a and b are fixed parameters, then equation (3.2) has a regular singularity at λ = a and an irregular singularity at λ = ∞ of rank 3. As far as we know, there is no such known special function that can be used to approximate the solutions of (3.2), uniformly for λ near both a and ∞. This is the main difficulty of the general case of Clarkson's open problem. Finally, it seems quite promising that the method of uniform asymptotics can also be applied to the initial value problems of other Painlevé equations. In fact, similar results for the properties of the PII solutions have also been stated in [2]. 
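As a heuristic aside (not part of the paper's rigorous uniform-asymptotics analysis), the shape of the oscillatory behavior referred to in (1.2)-(1.3), with parameters d and θ, can be recovered by linearizing (1.1) about the parabola y = −√(−t/6) and applying a standard WKB approximation. The sketch below is only a leading-order, formal computation; the d-dependent logarithmic correction to the phase and all error estimates are omitted.

```latex
% Heuristic WKB sketch; d and \theta play the roles of the constants in (1.2)-(1.3).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Write $y(t) = -\sqrt{-t/6} + u(t)$ in $y'' = 6y^2 + t$. Since $6\bigl(-\sqrt{-t/6}\bigr)^2 + t = 0$,
\[
  u'' \;=\; -\sqrt{24(-t)}\,u \;+\; 6u^{2} \;+\; \bigl(\sqrt{-t/6}\,\bigr)''
      \;\approx\; -\sqrt{24(-t)}\,u, \qquad t\to-\infty .
\]
The WKB approximation for $u'' + \omega(t)^{2}u = 0$ with $\omega(t) = (24)^{1/4}(-t)^{1/4}$ gives,
after absorbing constants into $d$,
\[
  u(t) \approx \frac{d}{(-t)^{1/8}}\cos\phi(t), \qquad
  \phi(t) \approx \frac{4\,(24)^{1/4}}{5}\,(-t)^{5/4} + \theta ,
\]
which reproduces the $(-t)^{-1/8}$ decay of the oscillation amplitude and the $(-t)^{5/4}$
growth of the phase for solutions oscillating about $y = -\sqrt{-t/6}$
(a $d$-dependent $\log(-t)$ term in $\phi$ is omitted here).
\end{document}
```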
Choose δ to be a point with |δ − δ 1 | ∼ ξ − 1 2 as ξ → +∞, then Since when s is on the integration contour from η 1 to 1 + δ , F (s, ξ) Hence, we have Similarly, when s is on the integration contour from δ + 1 to η, F (s, ξ) The expression in (B.6) for Q 1 can be derived similarly, and probably easier. No integration by parts is needed in this case. Other integrals can also be evaluated. For instance, for the quantity E 0 in (A.6), we have
2017-06-13T10:56:47.000Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "8766060a090aa768ab8e3f470e4516faf1fb5081", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1612.01350", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8766060a090aa768ab8e3f470e4516faf1fb5081", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
245480412
pes2o/s2orc
v3-fos-license
Synthesis and In Vitro Evaluation of Aspartic Acid Based Microgels for Sustained Drug Delivery The main focus of the current study was to sustain the releasing behavior of theophylline by fabricated polymeric microgels. The free radical polymerization technique was used for the development of aspartic acid-co-poly(2-acrylamido-2-methylpropanesulfonic acid) microgels while using various combinations of aspartic acid, 2-acrylamido-2-methylpropanesulfonic acid, and N′,N′-methylene bisacrylamide as a polymer, monomer, and cross-linker, respectively. Ammonium peroxodisulfate and sodium hydrogen sulfite were used as initiators. Characterizations such as DSC, TGA, SEM, FTIR, and PXRD were performed for the fabricated microgels to assess their thermal stability with unreacted polymer and monomer, their surface morphology, the formation of a new polymeric system of microgels by evaluating the cross-linking of functional groups of the microgels’ contents, and to analyze the reduction in crystallinity of the theophylline by fabricated microgels. Various studies such as dynamic swelling, drug loading, sol–gel analysis, in vitro drug release studies, and kinetic modeling were carried out for the developed microgels. Both dynamic swelling and percent drug release were found higher at pH 7.4 as compared to pH 1.2 due to the deprotonation of functional groups of aspartic acid and AMPS. Similarly, sol–gel analysis was performed and an increase in gel fraction was observed with the increasing concentration of microgel contents, while sol fraction was decreased. Conclusively, the prepared carrier system has the potential to sustain the release of the theophylline for an extended period of time. Introduction Microgels are solvent-swollen, hydrogel, micro-particulate systems possessing discrete particles within the range of 20 nm to 50 µm. Microgels are widely recognized as a promising material for drug delivery [1]. The benefits of microgels are their synthesis through a simple procedure and the control over key features including size and functionality, key for regulating the precise binding of drugs, release kinetics, high stability and shelf-life, biodistribution and specific delivery, biocompatibility, bioaccumulation, degradation, and functionality in the context of a drug delivery application. Due to their high potential in biomedicine and drug delivery, great attention has been placed on microgels, especially in academic research. A number of advantages are presented by microgels over other nanoparticle-based vehicles in terms of in vivo application. The main benefit of microgels is their straightforward preparation procedure, which results in highly monodisperse nanoparticles in most cases. The resulting water-swollen, polymeric network of microgels is highly hydrophilic in nature and a low interfacial energy is presented in a biological Gels 2022, 8,12 2 of 19 environment, decreasing the nonspecific interactions with proteins (opsonization) and enhancing their biocompatibility and bioavailability [2,3]. Poly(aspartic acid) (ASPA), a synthetic polymer, is composed of free carboxylic groups or amino groups based on natural amino acid, as shown in Figure 1 [4]. Its aqueous solubility, biodegradability, and non-toxic nature are the main characteristics that enabled ASPA as a suitable candidate for drug delivery [5]. 
Similar to other ionic polymers whose swelling index is enhanced by either lowering the ionic strength or increasing the pH of the medium, ASPA shows its response to ionic strength and to the pH of the medium due to the presence of carboxylic groups. The ionization of carboxylic groups leads to a polyelectrolyte effect [6,7]. Negative charges are produced throughout the network due to ionization or deprotonation that results in the confirmation of the extended chain and a globule to coil transition [8]. 2-Acrylamido-2-methylpropanesulfonic acid (AMPS) is a hydrophilic monomer. It is ionic and non-ionic in nature, having a pKa value of 2. AMPS is a white crystalline powder that dissolves in water rapidly due to its hydrophilic nature, but its solubility is limited to polar organic solvents. The swellability of AMPS is highly dependent on ionized sulfonate groups. Due to the presence of sulfonic functional groups, AMPS shows a better stability against hydrolysis and a strong resistance to salt. AMPS has the capability to highly swell once exposed to a particular pH of the medium. The swelling of AMPS depends upon the polymer used in combination with them. It plays an important role in drug delivery systems. Malik et al. and coworkers prepared chitosan/xanthan gum-based hydrogels wherein AMPS was used as a monomer and demonstrated that the developed network of hydrogels released the antiviral drug acyclovir in a controlled way [9]. Similarly, Abid et al. (2021) developed xanthan gum and polyvinyl pyrolidone-co-poly(AMPS) hydrogels and reported the controlled delivery of venlafaxine [10]. monodisperse nanoparticles in most cases. The resulting water-swollen, polymeric network of microgels is highly hydrophilic in nature and a low interfacial energy is presented in a biological environment, decreasing the nonspecific interactions with proteins (opsonization) and enhancing their biocompatibility and bioavailability [2,3]. Poly(aspartic acid) (ASPA), a synthetic polymer, is composed of free carboxylic groups or amino groups based on natural amino acid, as shown in Figure 1 [4]. Its aqueous solubility, biodegradability, and non-toxic nature are the main characteristics that enabled ASPA as a suitable candidate for drug delivery [5]. Similar to other ionic polymers whose swelling index is enhanced by either lowering the ionic strength or increasing the pH of the medium, ASPA shows its response to ionic strength and to the pH of the medium due to the presence of carboxylic groups. The ionization of carboxylic groups leads to a polyelectrolyte effect [6,7]. Negative charges are produced throughout the network due to ionization or deprotonation that results in the confirmation of the extended chain and a globule to coil transition [8]. 2-Acrylamido-2-methylpropanesulfonic acid (AMPS) is a hydrophilic monomer. It is ionic and non-ionic in nature, having a pKa value of 2. AMPS is a white crystalline powder that dissolves in water rapidly due to its hydrophilic nature, but its solubility is limited to polar organic solvents. The swellability of AMPS is highly dependent on ionized sulfonate groups. Due to the presence of sulfonic functional groups, AMPS shows a better stability against hydrolysis and a strong resistance to salt. AMPS has the capability to highly swell once exposed to a particular pH of the medium. The swelling of AMPS depends upon the polymer used in combination with them. It plays an important role in drug delivery systems. Malik et al. 
and coworkers prepared chitosan/xanthan gum-based hydrogels wherein AMPS was used as a monomer and demonstrated that the developed network of hydrogels released the antiviral drug acyclovir in a controlled way [9]. Similarly, Abid et al. (2021) developed xanthan gum and polyvinyl pyrolidone-co-poly(AMPS) hydrogels and reported the controlled delivery of venlafaxine [10]. Theophylline (TP) is an alkaloid and commonly used as a bronchodilator drug in the treatment of chronic obstructive pulmonary disease [11,12]. TP is obtained from Camellia sinensis leaves. When TP is administered, the bronchioles and muscles of pulmonary sanguine vessels are directly relaxed, which demonstrates the relaxing and bronchodilator effect of TP upon the smooth muscles [13]. The half-life of TP is in the range of 6-12 h, but commonly reported as 8 h, whereas in smoking patients, the half-life is reduced to 5 h. In order to avoid large fluctuations of the plasma concentration, TP needs to be administered three to four times in a day [14]. However, taking TP several times in a day leads to severe complications such as nausea, vomiting, insomnia, abdominal pain, jitteriness, and a rapid or irregular heartbeat, which results in the high release of medication. It also reduces patient compliance [15]. Therefore, to overwhelm all these complications and improve the patient compliance, a polymeric system is required to sustain the TP's release. Zhang and coworkers prepared TP-loaded microspheres of chitosan/β-cyclodextrin by the spray Theophylline (TP) is an alkaloid and commonly used as a bronchodilator drug in the treatment of chronic obstructive pulmonary disease [11,12]. TP is obtained from Camellia sinensis leaves. When TP is administered, the bronchioles and muscles of pulmonary sanguine vessels are directly relaxed, which demonstrates the relaxing and bronchodilator effect of TP upon the smooth muscles [13]. The half-life of TP is in the range of 6-12 h, but commonly reported as 8 h, whereas in smoking patients, the half-life is reduced to 5 h. In order to avoid large fluctuations of the plasma concentration, TP needs to be administered three to four times in a day [14]. However, taking TP several times in a day leads to severe complications such as nausea, vomiting, insomnia, abdominal pain, jitteriness, and a rapid or irregular heartbeat, which results in the high release of medication. It also reduces patient compliance [15]. Therefore, to overwhelm all these complications and improve the patient compliance, a polymeric system is required to sustain the TP's release. Zhang and coworkers prepared TP-loaded microspheres of chitosan/β-cyclodextrin by the spray drying method and reported the sustained release of TP for 6 h at pH 6.8 [16]. Similarly, Ahirrao et al. (2014) developed hydrogel beads of sodium alginate and reported the maximum sustained release of TP for 11 h [17]. However, a lot of work is still needed in order to overcome the challenges faced by TP, especially due to its frequent administration. Therefore, the authors have prepared aspartic acid-based microgels for the sustained release of TP for 24 h. The literature reveals that microgel is one of the most suitable carrier systems for the sustained/controlled release of drugs. Hence, different researchers have formulated microgels for the sustained/controlled delivery of drugs such as poly(N-isopropylacrylamide) microgels, which were developed for the sustained release of naltrexone for up to 5 h by Kjøniksen and coworkers [18]. 
Similarly, Babu et al. (2006) prepared pH-sensitive microgels of sodium alginate/acrylic acid and demonstrated the controlled release of ibuprofen up to 12 h [19]. Comparing the release behavior of the currently fabricated microgels with the previously reported data, the developed microgels can be considered most suitable for the sustained release of drugs. The novelty of prepared microgels can be connected with the cross-linking of ASPA with AMPS by N ,N -methylene bisacrylamide (MBA) in the presence of initiators. Due to its unique features such as good biodegradability, solubility in water, and a non-toxic nature, ASPA has recently gained much attention, especially regarding drug delivery systems. Due to its pH-sensitive nature, the use of ASPA has been increased, especially in the development of pH-sensitive drug carrier systems such as hydrogels, microgels, and nanogels, etc. The pH sensitivity of ASPA is increased with the increase in pH of the medium due to the presence of carboxylate groups, which enable the ASPA to deprotonate at high pH values. Similarly, AMPS is hydrophilic in nature, and thus used widely in the preparation of different pharmaceutical products. As a good hydrophilic monomer, the introduction of AMPS into a polymer network increases the pH sensitivity, swelling ratio, and drug release of the developed drug carrier system. Hence, the recent combination of ASPA and AMPS has enabled the polymeric microgel to highly swell at high pH values due to its pH-sensitive contents and, as a result, maximum swelling and drug loading are observed. Similarly, a high drug release of the fabricated microgel at high pH values protects the stomach from the side effects of the drug and also the drug itself from stomach acidity. Hence, we can conclude that fabricated microgels could be considered as an ideal drug carrier system for the sustained release of theophylline and of other drugs too. Here, we report the synthesis of aspartic acid-based microgels for TP sustained release. Different concentrations of the polymer ASPA, the monomer AMPS, and the cross-linker MBA (N ,N -methylene bisacrylamide) were employed in the presence of the initiator APS (ammonium peroxodisulfate) and SHS (sodium hydrogen sulfite) for the fabrication of aspartic acid-co-poly(2-acrylamido-2-methylpropanesulfonic acid) microgels to sustain the release of TP for a prolonged period of time. Various studies, such as dynamic swelling, drug loading, sol-gel fraction, in vitro studies, and kinetic modeling were carried out. Similarly, characterizations such as DSC, TGA, SEM, FTIR, and PXRD were conducted to know and assess the different aspects of the developed microgels. Physical Appearance The physical appearance of the fabricated microgels was white in color, as shown in Figure 1A,B. The difference was in the hardness of the formulation. All the formulations of MBA ( Figure 1A) with an increasing concentration were hard and dense. The bulk density increased while the pore size decreased. The formulations of ASPA and AMPS ( Figure 1B) with an increasing concentration were porous with less bulk density. Dynamic Swelling Swelling studies were carried out for ASPA-pAMPS microgels to determine the swelling index of the microgels at two different pH media, i.e., pH 1.2 and 7.4, respectively, as shown in Figure 2A-D. Higher swelling at both pHs was exhibited for developed microgels but, due to the presence of COOH and NH groups of ASPA, swelling at pH 7.4 was observed higher than pH 1.2 ( Figure 2A). 
COOH and NH functional groups of ASPA were protonated at a lower pH of 1.2 and formed a conjugate with counter ions via strong hydrogen bonding and, as a result, a low swelling index of the fabricated microgels was observed at pH 1.2. However, with the increase in pH of the medium, deprotonation of COOH and NH groups occurred, which leads to an increase in charge density and generates strong electrostatic repulsive forces. These electrostatic repulsive forces result in higher expansion/swelling of microgels due to the high charge density of the same functional groups, which repel each other, and, as a result, maximum swelling is observed. Hence, as the pH of the medium is enhanced, the swelling of the fabricated microgels is increased in the same pattern, and vice versa [20]. Similarly, SO 3 H groups of AMSP were protonated at pH 1.2 because the pKa value of SO 3 H group was almost 1.9. Due to the protonation of SO 3 H groups, the charge density of SO 3 H groups was decreased due to the formation of a conjugate with counter ions by strong hydrogen bonding, and hence a decrease in swelling was observed. On the other hand, SO 3 H groups of AMPS were deprotonated at a high pH of 7.4, which leads to high charge density and, as a result, strong electrostatic repulsive forces are produced, which repel each other, and maximum swelling is achieved [9,21,22]. protonated at pH 1.2 because the pKa value of SO3H group was almost 1.9. Due to the protonation of SO3H groups, the charge density of SO3H groups was decreased due to the formation of a conjugate with counter ions by strong hydrogen bonding, and hence a decrease in swelling was observed. On the other hand, SO3H groups of AMPS were deprotonated at a high pH of 7.4, which leads to high charge density and, as a result, strong electrostatic repulsive forces are produced, which repel each other, and maximum swelling is achieved [9,21,22]. ASPA, AMPS, and MBA also influence the dynamic swelling of ASPA-pAMPS microgels at both pH values, as shown in Figure 2B-D. The swelling increased as the concentration of the ASPA increased ( Figure 2B). ASPA has COOH and NH functional groups, and an increase in the concentration of ASPA led to an increase in COOH and NH groups; due to this, charge density is increased and swelling increases. Similarly, a rise was seen in the dynamic swelling of microgels as the concentration of AMPS increased ( Figure 2C). AMPS contains SO3H groups, and as the concentration of AMPS increased, the generation of SO3H groups also increased; due to this, charge density is enhanced and swelling increases, and vice versa [23,24]. Both ionic and non-ionic groups are present in AMPS. As the concentration of ionic groups is increased, the swelling and superabsorbancy capabilities of AMPS-based hydrogels are increased and hence started to dissociate at different pHs [25]. Unlike ASPA and AMPS, a decrease in dynamic swelling was observed as the concentration of the MBA increased ( Figure 2D). The bulk density of polymeric microgels is increased with the enhancement of the MBA concentration; due to this, the penetration of water into a microgel network decreases and, as a result, a decline is observed in the dynamic swelling of microgels [26,27]. ASPA, AMPS, and MBA also influence the dynamic swelling of ASPA-pAMPS microgels at both pH values, as shown in Figure 2B-D. The swelling increased as the concentration of the ASPA increased ( Figure 2B). 
ASPA has COOH and NH functional groups, and an increase in the concentration of ASPA led to an increase in COOH and NH groups; due to this, charge density is increased and swelling increases. Similarly, a rise was seen in the dynamic swelling of microgels as the concentration of AMPS increased ( Figure 2C). AMPS contains SO 3 H groups, and as the concentration of AMPS increased, the generation of SO 3 H groups also increased; due to this, charge density is enhanced and swelling increases, and vice versa [23,24]. Both ionic and non-ionic groups are present in AMPS. As the concentration of ionic groups is increased, the swelling and superabsorbancy capabilities of AMPS-based hydrogels are increased and hence started to dissociate at different pHs [25]. Unlike ASPA and AMPS, a decrease in dynamic swelling was observed as the concentration of the MBA increased ( Figure 2D). The bulk density of polymeric microgels is increased with the enhancement of the MBA concentration; due to this, the penetration of water into a microgel network decreases and, as a result, a decline is observed in the dynamic swelling of microgels [26,27]. Drug Loading Swelling plays an important role in drug loading. The maximum amount of drug will be loaded by microgels if the swelling of the system is high because the larger the pore size, the greater the amount of fluid that will penetrate through the pores into the microgels' network. Due to this, the dynamic swelling will be greater and, as a result, the maximum amount of drug will be loaded by the microgels, and vice versa [28]. Moreover, the % drug loading was carried out for all formulations of ASPA-pAMPS microgels, as shown in Table 1. The % drug loading increased as the concentration of the ASPA and AMPS increased because the swelling of microgels was enhanced with the increase in the concentration of the ASPA and AMPS [29]. Contrary to ASPA and AMPS, the % drug loading was reduced as the concentration of the MBA increased. The bulk density was increased, due to which, water penetration into microgels decreases, which leads to a reduction in swelling; hence, % drug loading is decreased [28]. Table 1. Sol-gel analysis and % drug loading of ASPA-pAMPS microgels. Sol-Gel Analysis Sol-gel analysis was carried out for the developed microgels to know the soluble uncross-linked and insoluble cross-linked parts of microgels. The gel fraction was increased (Table 1) as the concentration of all microgel contents increased, i.e., ASPA, AMPS, and MBA, respectively. As the concentration of the ASPA increased, the gel fraction was increased because a high amount of free radicals were generated by ASPA for the monomer contents, which led to a fast polymerization process among the microgels' contents, therefore the gel fraction increased. Samanta et al. (2014) also reported that, as the concentration of polymer increases, the polymerization process among the hydrogels' contents is enhanced, which leads to greater gelation [30]. Similarly, a greater amount of SO 3 H groups were produced as the concentration of AMPS increased. The higher the SO 3 H groups, the faster the chemical reaction would be between polymer and monomer on their respective reactive sites and thus higher the gel fraction, and vice versa. The gel fraction is increased up to certain limit because the gel fraction is decreased if a very high concentration of AMPS is used. 
In such a condition, the AMPS contents already occupied the available reactive sites of the polymer, any further increase leads to steric hindrance effects and to the formation of a layer on the backbone of the polymer, due to which the pore size of the system decreases, the hardness of the system increases and, as a result, the gel fraction decreases [31]. Similar to ASPA and AMPS, the gel fraction is increased as the concentration of the MBA is increased. The higher the MBA concentration, the faster the cross-linking among the microgel contents will be and the greater the gel fraction [32,33]. Unlike the gel fraction, the sol fraction is decreased with the increase in the concentration of ASPA, AMPS, and MBA because sol fraction is inversely proportional to gel fraction [34], and vice versa.

In Vitro Drug Release Study

An in vitro drug release study was conducted for fabricated microgels to evaluate the percent drug release from the ASPA-pAMPS microgels at both acidic and basic media, i.e., pHs 1.2 and 7.4, respectively, as indicated in Figure 3A-D. A higher percent drug release (90%) was seen at pH 7.4 as compared to pH 1.2 (60%) (Figure 3A) due to the deprotonation of the functional groups of the polymer and monomer. ASPA contains COOH and NH groups. Therefore, the functional groups of ASPA were protonated at a lower pH of 1.2 and formed a conjugate with other counter ions. Strong hydrogen bonding occurred, due to which, swelling and percent drug release were observed as almost low at pH 1.2. However, as the pH increased from 1.2 to 7.4, the deprotonation of COOH and NH groups of the ASPA occurred, which led to higher charge density and, as a result, strong electrostatic repulsive forces were generated. The same charges repelled each other, due to which, greater swelling and percent drug release were ultimately detected. Similarly, AMPS contained SO3H groups, which resulted in an increase in swelling and percent drug release at a higher pH of 7.4 due to the deprotonation of SO3H groups [9,35]. Similarly, an in vitro drug release study was carried out for the commercially available tablets Theolin S.R (250 mg, PeiLi Pharmaceutical IND. Co., Ltd, Taichung, Taiwan) at both pH 1.2 and 7.4, respectively, as shown in Figure 3E. A drug release of 96% from Theolin was observed for an initial 10 h at pH 1.2, whereas at pH 7.4, a drug release of 96-98% was detected for an initial 6-8 h. Comparing the percent drug release from the commercial product and the fabricated microgels, we can see that the developed system significantly sustained the release of TP for an extended period of time.
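As an aside to the release study above, the minimal sketch below shows how cumulative-release data of this kind can be fitted to the kinetic models (first order and Korsmeyer-Peppas) that are discussed in the following paragraphs. It assumes SciPy and NumPy are available; the time points and release fractions are invented for demonstration and are not the study's data.

```python
# Illustrative only: fitting release-kinetics models to a cumulative-release curve.
# Korsmeyer-Peppas: Mt/Minf = k * t**n, with n <= 0.45 read as Fickian and n > 0.45
# as non-Fickian diffusion (the convention used in the text); in practice this model
# is often restricted to the early portion of the release curve.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

t_h = np.array([0.5, 1, 2, 4, 6, 8, 12, 24])            # sampling times (hours), made up
release_frac = np.array([0.08, 0.13, 0.21, 0.33,         # cumulative fraction released,
                         0.43, 0.51, 0.65, 0.88])        # made up for demonstration

def korsmeyer_peppas(t, k, n):
    return k * t**n

(k, n), _ = curve_fit(korsmeyer_peppas, t_h, release_frac, p0=(0.1, 0.5))

# First-order model: ln(1 - Mt/Minf) = -k1 * t, fitted by linear regression.
slope, intercept = np.polyfit(t_h, np.log(1.0 - release_frac), 1)
r_first, _ = pearsonr(t_h, np.log(1.0 - release_frac))

print(f"Korsmeyer-Peppas: k = {k:.3f}, n = {n:.3f} "
      f"({'non-Fickian' if n > 0.45 else 'Fickian'} diffusion)")
print(f"First-order: k1 = {-slope:.4f} 1/h, r = {abs(r_first):.4f}")
```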
Microgel contents, i.e., ASPA, AMPS, and MBA, also influenced the percent drug release from the developed microgels at both pH 1.2 and 7.4, respectively. The percent drug release increased with an increase in the concentration of ASPA ( Figure 3B) because the increase in the generation of COOH and NH groups occurred with the increase in ASPA concentration, and vice versa. Similarly, an increase in percent drug release was observed as the AMPS concentration was increased ( Figure 3C). The possible reason for this is the generation of maximum SO3H functional groups, which leads to higher swelling and percent drug release. Contrary to ASPA and AMPS, a decrease was seen in percent drug release with an increase in the concentration of MBA ( Figure 3D). The possible reason is the higher bulk density, which leads to a reduction in swelling and percent drug release [36]. Kinetic models such as zero order, first order, Higuchi, and Korsmeyer-Peppas models were carried out for all formulations of ASPA-pAMPS microgels to deduce the release mechanism of the drug from the fabricated microgels. The "r" values represent the regression co-efficient. The results of the kinetic models showed that all formulations exhibited the first order of the kinetic model because the "r" values of first order were greater than the "r" values of the respective models, as shown in Table 2 [38]. The "n" values determine the type of diffusion, i.e., Fickian diffusion/non-Fickian diffusion. If the "n" value is greater than 0.45, it indicates that the diffusion is non-Fickian, while if the "n" value is equal to or less than 0.45, it means that the diffusion is Fickian, and vice versa. The "n" values for all formulations were found within the range of 0.4886-0.8526, which indicated non-Fickian diffusion. Microgel contents, i.e., ASPA, AMPS, and MBA, also influenced the percent drug release from the developed microgels at both pH 1.2 and 7.4, respectively. The percent drug release increased with an increase in the concentration of ASPA ( Figure 3B) because the increase in the generation of COOH and NH groups occurred with the increase in ASPA concentration, and vice versa. Similarly, an increase in percent drug release was observed as the AMPS concentration was increased ( Figure 3C). The possible reason for this is the generation of maximum SO 3 H functional groups, which leads to higher swelling and percent drug release. Contrary to ASPA and AMPS, a decrease was seen in percent drug release with an increase in the concentration of MBA ( Figure 3D). The possible reason is the higher bulk density, which leads to a reduction in swelling and percent drug release [36]. Ali et al. (2014) developed polyvinyl alcohol-based hydrogels and reported that the percent drug release increased as the concentration of polymer and monomer increased, while it decreased with the increase in the concentration of the cross-linker [37]. Kinetic models such as zero order, first order, Higuchi, and Korsmeyer-Peppas models were carried out for all formulations of ASPA-pAMPS microgels to deduce the release mechanism of the drug from the fabricated microgels. The "r" values represent the regression co-efficient. The results of the kinetic models showed that all formulations exhibited the first order of the kinetic model because the "r" values of first order were greater than the "r" values of the respective models, as shown in Table 2 [38]. The "n" values determine the type of diffusion, i.e., Fickian diffusion/non-Fickian diffusion. 
If the "n" value is greater than 0.45, it indicates that the diffusion is non-Fickian, while if the "n" value is equal to or less than 0.45, it means that the diffusion is Fickian, and vice versa. The "n" values for all formulations were found within the range of 0.4886-0.8526, which indicated non-Fickian diffusion. DSC Analysis To investigate the thermal stability of ASPA, AMPS, unloaded ASPA-pAMPS microgels, TP, and drug-loaded ASPA-pAMPS microgels, DSC was conducted, as shown in Figure 4A-E. The DSC of ASPA ( Figure 4A) revealed an endothermic peak at 245 • C concerned with moisture loss, whereas an exothermic was observed at 255 • C, which demonstrates the ASPA's degradation [39]. The DSC of AMPS ( Figure 4B) indicated dehydration by an endothermic peak at 183 • C, while the glass transition temperature was revealed by an exothermic peak at 170 • C. Moreover, the degradation of AMPS was assigned by an exothermic peak at 202 • C [40]. Similarly, the DSC of ASPA-pAMPS microgels ( Figure 4C) indicated two endothermic peaks at 205 • C and 310 • C. The endothermic peak of the polymer was moved from 245 • C to 310 • C in ASPA-pAMPS microgels, which indicated the high stability of the developed microgels. We can conclude from the above discussion that the developed microgels' network is more thermally stable than its basic contents, i.e., ASPA and AMPS. This all means that ASPA, AMPS, and MBA successfully polymerized and fabricated a suitable and stable microgel network for a sustained drug delivery system. reported the same results as our study, which further supports our hypothesis [41]. The DSC of TP ( Figure 4D) revealed an endothermic peak at 272 • C, whereas two broad exothermic peaks were shown at 260 and 342 • C, respectively. The broad exothermic peak of TP at 260 • C could be seen in the DSC of the loaded ASPA-pAMPS microgels ( Figure 4E) at 290 • C. The slight modification in the peak position of TP was due to their loading by the developed microgels, which revealed no interaction of TP with the microgels' contents. hypothesis [41]. The DSC of TP ( Figure 4D) revealed an endothermic peak at 272 °C, whereas two broad exothermic peaks were shown at 260 and 342 °C, respectively. The broad exothermic peak of TP at 260 °C could be seen in the DSC of the loaded ASPA-pAMPS microgels ( Figure 4E) at 290 °C. The slight modification in the peak position of TP was due to their loading by the developed microgels, which revealed no interaction of TP with the microgels' contents. TGA Analysis TGA was conducted to evaluate and analyze the thermal stability of ASPA, AMPS, and ASPA-pAMPS microgels, as shown in Figure 5A-C. The TGA of ASPA ( Figure 5A) revealed a 27% loss in weight until the temperature approached 305 °C due to the loss of surface moisture. As the temperature increased up to 385 °C, a further loss of 13% in weight was perceived. A further increase in temperature led to a rapid decline in weight of ASPA, the degradation of ASPA started at 410 °C, and a further 15% weight loss was detected as the temperature approached 600 °C due to the degradation of the amino and carboxyl groups [42]. The TGA of AMPS ( Figure 5B) indicated a weight loss of 7% as the temperature reached 208 °C; a further weight loss of 45% was seen within the temperature range of 210 to 230 °C, which represents the dehydration of AMPS. Similarly, the sulfonic acid group started to decompose at 320 °C and a weight loss of 20% was detected within the temperature range of 230-320 °C. 
Weight loss continued, and a 22% reduction in weight was seen as the temperature approached 600 °C [22]. The TGA of ASPA-pAMPS microgels is shown in Figure 5C, which shows that the degradation half-life of the developed microgels was t1/2 = 350 °C, thus indicating that the developed polymeric network of microgels has the potential to remain stable at high temperature. A weight loss of 10% was indicated within the temperature range of 100-190 °C, followed by a further weight loss of 55% within the temperature range of 200-450 °C due to a breakdown of COOH and SO3H groups of ASPA and AMPS, respectively. Further degradation of the fabricated microgels started at 450 °C and kept going. A further 3% weight loss was perceived by the TGA Analysis TGA was conducted to evaluate and analyze the thermal stability of ASPA, AMPS, and ASPA-pAMPS microgels, as shown in Figure 5A-C. The TGA of ASPA ( Figure 5A) revealed a 27% loss in weight until the temperature approached 305 • C due to the loss of surface moisture. As the temperature increased up to 385 • C, a further loss of 13% in weight was perceived. A further increase in temperature led to a rapid decline in weight of ASPA, the degradation of ASPA started at 410 • C, and a further 15% weight loss was detected as the temperature approached 600 • C due to the degradation of the amino and carboxyl groups [42]. The TGA of AMPS ( Figure 5B) indicated a weight loss of 7% as the temperature reached 208 • C; a further weight loss of 45% was seen within the temperature range of 210 to 230 • C, which represents the dehydration of AMPS. Similarly, the sulfonic acid group started to decompose at 320 • C and a weight loss of 20% was detected within the temperature range of 230-320 • C. Weight loss continued, and a 22% reduction in weight was seen as the temperature approached 600 • C [22]. The TGA of ASPA-pAMPS microgels is shown in Figure 5C, which shows that the degradation half-life of the developed microgels was t1/2 = 350 • C, thus indicating that the developed polymeric network of microgels has the potential to remain stable at high temperature. A weight loss of 10% was indicated within the temperature range of 100-190 • C, followed by a further weight loss of 55% within the temperature range of 200-450 • C due to a breakdown of COOH and SO 3 H groups of ASPA and AMPS, respectively. Further degradation of the fabricated microgels started at 450 • C and kept going. A further 3% weight loss was perceived by the fabricated microgels until the temperature approached 600 • C. Conclusively, the discussion demonstrates that the fabricated network of microgel was thermally stable due to the cross-linking of its basic unreacted ingredients. B. Singh et al. (2019) developed carbopol-based hydrogels and reported high thermal stability for fabricated hydrogels [43]. fabricated microgels until the temperature approached 600 °C. Conclusively, the discussion demonstrates that the fabricated network of microgel was thermally stable due to the cross-linking of its basic unreacted ingredients. B. Singh et al. (2019) developed carbopolbased hydrogels and reported high thermal stability for fabricated hydrogels [43]. Surface Morphology and Particle Size SEM was performed at two different magnifications in order to evaluate and analyze the surface morphology of fabricated microgels, as shown in Figure 6A,B. A hard, rough surface with few pores was seen, which demonstrates the successful grafting of the polymer and monomer on their respective sites. 
The fluid medium penetrated through the pores into the microgels' network, which results in the swelling of microgels. Hence, the higher the swelling, the greater the drug loading, and therefore the greater the drug release [29], and vice versa. The swelling capability of microgels will be high if their surface is porous and vice versa. The average particle size of ASPA-pAMPS microgels (Figure 6C) was found to be 26.967 µm (26,967.4 nm) with a polydispersity index of 0.480, which leads to high swelling, drug loading, and the release of drug [44].

Figure 6. (A,B) Surface morphology of ASPA-pAMPS microgels (AFn-3), and (C) average particle size of ASPA-pAMPS microgels.

FTIR Analysis

FTIR spectra of ASPA, AMPS, unloaded ASPA-pAMPS microgels, TP, and drug-loaded ASPA-pAMPS microgels are shown in Figure 7A-E, respectively. The FTIR spectrum of ASPA is presented in Figure 7A and indicates that the bands at 1562 and 1512 cm−1 correspond to N-H of amide. The absorption peak of C=O of the -COOH functional group was observed by a peak at 1712 cm−1. Similarly, a broad band at 3445 cm−1 indicated the stretching vibration of N-H of the ASPA. The symmetric stretching vibration of carboxylate and OH groups was assigned by peaks at 1413 and 2860-3310 cm−1.
Zhao et al. (2006) reported the same spectra of ASPA as presented in our current studies, which further supports our observation [4]. Figure 7B indicates the FTIR spectrum of AMPS. C-H stretching of the methyl group of AMPS was assigned by a sharp band at 3007 cm−1. The stretching and bending of C=O and N-H groups were assigned by absorption bands at 1670 and 1625 cm−1. Similarly, the symmetric and asymmetric stretching vibration of the S=O group was indicated by absorption bands at 1112 and 1370 cm−1 [45]. The ASPA peak at 1712 cm−1 and the AMPS peak at 1670 cm−1 shifted to 1702 and 1698 cm−1 in ASPA-pAMPS microgels (Figure 7C), while some peaks disappeared. The shifting, disappearance, and formation of new bands revealed the overlapping of AMPS on the backbone of ASPA and the fabrication of ASPA-pAMPS microgels. The FTIR spectra of TP (Figure 7D) assigned prominent bands at 1656, 1583, and 1297 cm−1, corresponding to amide C=O stretching, aromatic C=C stretching, and C-O stretching, respectively. The characteristic bands of TP at 1656 and 1583 cm−1 shifted slightly to peaks at 1660 and 1580 cm−1, respectively, in loaded ASPA-pAMPS microgels (Figure 7E) due to the loading of TP by fabricated microgels. Therefore, no interaction was seen between the TP and microgels' contents [22].

PXRD Analysis

PXRD was carried out to analyze the crystallinity of the ASPA, unloaded ASPA-pAMPS microgels, TP, and drug-loaded ASPA-pAMPS microgels, respectively, as shown in Figure 8A-D. The PXRD of ASPA (Figure 8A) demonstrated prominent, high intensity crystalline peaks at 2θ = 22.80°, 24.30°, 26.20°, and 38.22°. The intensity of the characteristic peaks of ASPA disappeared/were reduced by unloaded ASPA-pAMPS microgels (Figure 8B), which revealed the successful polymerization of ASPA with AMPS and resulted in the development of ASPA-pAMPS microgels. Similarly, the high intensity crystalline peaks of TP were assigned at 2θ = 12.09°, 14.53°, 21.08°, and 37.51° [46] (Figure 8C). Similarly, the crystallinity of the TP was reduced by the developed microgels as the high intensity crystalline peaks of the TP disappeared in the drug-loaded ASPA-pAMPS microgels (Figure 8D), which reveals that the developed system enhanced and sustained the delivery of the TP for a long period of time. The PXRD pattern of unloaded ASPA-pAMPS microgels and drug-loaded ASPA-pAMPS microgels indicated a slight difference in peak intensity due to the encapsulation of the drug by the fabricated system of microgels [29].

Conclusions

The cross-linking of ASPA and AMPS changed the characteristics of the individual components and developed a new ASPA-pAMPS microgel carrier system by the free radical polymerization technique. DSC and TGA demonstrated that the polymeric microgels were thermally stable due to the cross-linking and formation of different chemical bonds that enhanced the stability of the developed microgels. A hard surface with few pores of microgels was revealed by SEM. FTIR confirmed the development of ASPA-pAMPS microgels by polymerization reaction and the overlapping of AMPS on the backbone of ASPA. PXRD presented the reduction in crystallinity of the theophylline as the intensity of high crystalline peaks of the drug was reduced by polymeric microgels. pH-dependent swelling and percent drug release profiles were exhibited by designed microgels. High dynamic swelling and percent drug release were observed at pH 7.4 as compared to pH 1.2 due to the deprotonation of functional groups of ASPA and AMPS. Dynamic swelling, drug loading, and percent drug release were increased as the concentration of ASPA and AMPS was increased while they decreased with the enhancement in MBA concentration. Furthermore, a drug release study was performed for the commercial product, Theolin S.R tablets, at both pH 1.2 and 7.4. A rapid percent drug release (96%) from the commercial product was observed within the initial 6-8 h at pH 7.4, while almost the same percent drug release was observed at pH 1.2 within the initial 10 h. On the other hand, polymeric microgels sustained the high percent drug release for 24 h at pH 7.4, demonstrating sustained drug release behavior. Similarly, sol-gel fractions revealed an increase in the gel fraction with the increase in the composition of ASPA, AMPS, and MBA while showing a decrease in the sol fraction.
Hence, due to the unique features of ASPA and AMPS that enabled the polymeric microgels to swell highly and release the high amount of drug at high pH values in a sustained manner, we can conclude that the current polymeric microgels are not limited to only sustaining the release of theophylline but can also be used for the sustained delivery of other drugs too, especially those experiencing stomach acidity problems. Synthesis of Microgels Various compositions of polymer ASPA, monomer AMPS, and cross-linker MBA were employed at a constant concentration of initiators APS and SHS for the development of aspartic acid-co-poly (2-acrylamido-2-methylpropanesulfonic acid) (ASPA-pAMPS) microgels, as shown in Table 3. The specific amounts of ASPA, AMPS, MBA, and APS/SHS were taken separately and dissolved in their respective solvents. ASPA is soluble in water; hence, the required quantity of ASPA was dissolved in deionized distilled water. Similarly, AMPS and APS/SHS are completely soluble in water, so they were dissolved in a specific volume of deionized distilled water, respectively. MBA is not completely soluble in water, therefore a mixture of water and ethanol was used and stirred at 50 • C with 50 rpm. Initially, the APS/SHS solution was added into the AMPS solution, stirred for 5 min, then the mixture was poured into the polymer solution and stirred for 20 min. Finally, the MBA solution was added dropwise into the above mixture with constant stirring. After 5 min, a translucent solution was formed and purged by nitrogen gas in order to remove dissolved oxygen from the solution. The translucent solution was transferred into glass molds and these were placed in a water bath at 65 • C for 2 h initially, and then the temperature was enhanced up to 70 • C for the next 5 h. The prepared gel was passed through a specific mesh number, 20, and the fine particles of gels were obtained. A mixture of water and ethanol was used for washing in order to remove any unreacted content attached to the surface of the gels. The prepared gels were placed at room temperature initially for 24 h, and then placed in a vacuum oven at 40 • C until complete dehydration. The dried particles of gels were passed again through a mesh number of 625, and microgel particles were obtained. The prepared microgels were then evaluated for further experiments. Table 3. Feed ratio scheme for formulation of ASPA-pAMPS microgels. Dynamic Swelling A swelling study was performed in HCl buffer of pH 1.2 and phosphate buffer of pH 7.4 at 37 • C for all formulations of APSA-pAMPS microgels. In total, 100 mg of microgels was enclosed in dialysis bags (MW; 12,000-14,000) and then immersed in respective buffer solutions. The dialysis bags were removed from the solution after a specific interval of time, blotted with filter paper to remove excess of water, weighed on a weighing balance, and then immersed again in respective pH buffer solutions. This process was continued until no further increase was observed in the weight of microgels [47]. The dynamic was calculated by the given equation: where q = dynamic swelling, D 1 = initial weight of microgels before swelling, and D 2 = final weight of microgels after swelling at time t. Drug Loading A drug loading study was conducted by diffusion and absorption method for fabricated microgels. 
Drug Loading A drug loading study was conducted by the diffusion and absorption method for the fabricated microgels. A precise quantity of microgels was immersed in a 2% drug solution in phosphate buffer of pH 7.4, sonicated (Ultrasonic cleaner DC 400H) for 25 min, and then left in an open area for 24 h at room temperature so that the maximum amount of the drug could be loaded by the microgels. The suspension was filtered through a membrane filter to remove the unloaded drug. After that, the loaded microgels were lyophilized for 24 h to remove the entrapped solvent [48]. An extraction method was used to calculate the drug loaded by the developed microgels. An accurately weighed amount of drug-loaded ASPA-pAMPS microgels was immersed in 100 mL of phosphate buffer solution of pH 7.4 and stirred until the entire loaded drug was released. The suspension was filtered through a membrane filter and then analyzed on a UV-vis spectrophotometer (U-5100, 3J2-0014, Tokyo, Japan) at λmax 272 nm, and the drug content was determined. Sol-Gel Analysis Sol is the un-cross-linked, soluble part of the microgels, while gel is the cross-linked, insoluble part. Sol-gel analysis was carried out to determine the sol and gel fractions of the ASPA-pAMPS microgels. A specific amount of microgels (S1) was added to a round-bottom flask containing deionized distilled water, and a condenser was fitted to the flask. The Soxhlet extraction process was carried out for 12 h. After that, the microgels were collected and allowed to dry. Thermogravimetric Analysis (TGA) TGA (PerkinElmer Simultaneous Thermal Analyzer STA 8000) was conducted for ASPA, AMPS, and ASPA-pAMPS microgels. For the TGA analysis, samples of the polymer, monomer, and formulation weighing 0.5-5 mg were placed in an open pan connected to a microbalance. The heating rate was maintained at 20 °C/min, and the temperature was scanned from 40 to 600 °C under dry nitrogen throughout the experiment [50]. Surface Morphology and Particle Size Analysis The surface morphology of the ASPA-pAMPS microgels was analyzed by scanning electron microscopy (SEM) (JSM-5300). The sample of developed microgels was fixed on double-adhesive tape stuck to an aluminum stub. A sputter coater was used to deposit a gold coating on the stubs under an argon atmosphere. The coated samples were scanned at randomly chosen locations, and the surface morphology was evaluated with the help of the photomicrographs [51]. For particle size analysis, the microgel particles were dispersed in acetone to form a suspension, which was then analyzed by the dynamic light scattering (DLS) method (ELSZ-2000 particle size analyzer, Otsuka Electronics, Japan) [52]. Fourier Transform Infrared Spectroscopy (FTIR) Analysis FTIR spectra of ASPA, AMPS, unloaded ASPA-pAMPS microgels, TP, and drug-loaded ASPA-pAMPS microgels were recorded to determine (i) the structural arrangement of the microgel components, both individually and within the polymeric network of the microgels, and (ii) the interaction of the drug with the microgel components. Attenuated total reflectance (ATR) mode was used for the spectral analysis. All samples were evaluated and analyzed on a NICOLET 380 FTIR spectrometer over the spectral range 4000-500 cm−1 [53]. The number of scans and the resolution were kept at 8 and 4 cm−1, respectively, throughout the study. Powder X-ray Diffractometry (PXRD) Analysis The PXRD patterns of ASPA, AMPS, unloaded ASPA-pAMPS microgels, TP, and drug-loaded ASPA-pAMPS microgels were recorded on an XRD-6000 SHIMADZU X-ray diffractometer.
Dried powder samples of 500 mg were held in a plastic sample holder, and the surface of each sample was leveled with a glass slide. The samples were scanned over an angular range of 10-60° at a rate of 2° (2θ)/min at room temperature [54].
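The sol-gel and drug-loading procedures above imply two routine calculations whose exact equations are not shown in this excerpt: the gel and sol fractions obtained from the Soxhlet extraction weights, and the drug content obtained from the UV-vis absorbance at 272 nm. The Python sketch below illustrates commonly used forms of these calculations; the gel-fraction convention, the linear calibration, and all numerical values are assumptions introduced for illustration rather than quantities reported by the study.

```python
# Sketch of the two routine calculations implied by the methods above.
# The equations are common conventions, not quoted from the paper, and all
# numbers are hypothetical.

def gel_sol_fractions(initial_weight_mg: float, dried_weight_after_extraction_mg: float):
    """Gel fraction = (insoluble weight left after Soxhlet extraction and drying /
    initial weight) x 100; the sol fraction is the remainder (assumed convention)."""
    gel = (dried_weight_after_extraction_mg / initial_weight_mg) * 100.0
    sol = 100.0 - gel
    return gel, sol

def drug_concentration_ug_per_ml(absorbance_272nm: float, slope: float, intercept: float):
    """Linear calibration A = slope * C + intercept, solved for the theophylline
    concentration C (µg/mL). Slope and intercept would come from a standard curve."""
    return (absorbance_272nm - intercept) / slope

# Illustrative use (hypothetical values):
gel, sol = gel_sol_fractions(initial_weight_mg=500.0, dried_weight_after_extraction_mg=455.0)
print(f"gel fraction = {gel:.1f}%, sol fraction = {sol:.1f}%")

conc = drug_concentration_ug_per_ml(absorbance_272nm=0.62, slope=0.055, intercept=0.002)
loaded_drug_mg = conc * 100.0 / 1000.0  # 100 mL of pH 7.4 buffer used in the extraction step
print(f"drug extracted from the weighed microgel sample = {loaded_drug_mg:.2f} mg")
```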
2021-12-26T16:04:11.214Z
2021-12-24T00:00:00.000
{ "year": 2021, "sha1": "4cb082ac7da306faac6458d9ffd4b9ce03045b69", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2310-2861/8/1/12/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1b655a82aa978003fb7a1ed94587439c10e735a9", "s2fieldsofstudy": [ "Materials Science", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
271416679
pes2o/s2orc
v3-fos-license
Nationalist Soundscapes: The Sonic Violence of the Far Right Sound's ability to impact the body and cross borders places it firmly within the remit of criminological concern. However, although sound continually emerges as a feature of far-right protests and riots, including through music, chants, singing, yelling and drumming, the role it fulfils for the far right has gone untheorized. To address this gap, this article introduces the concept of 'nationalist soundscapes', which describes the mechanisms through which far-right nationalists deploy sound to effectuate a politics of power, domination and nationalist superiority. Referencing a selection of events, I argue nationalists weaponize sound in a way that is unique to them, insofar as nationalist soundscapes are deployed to assert ownership over the nation, while simultaneously displacing racialized others through sonic violence. INTRODUCTION 'Non-racist, non-violent: No longer Silent!' (motto of the English Defence League). Sound's ability to penetrate the body places it firmly within the remit of criminological concern. Indeed, Juliet Rogers writes that 'When sound enters the body it is never without impact' (2020: 455). While this 'impact' can come in many forms, some of which are called 'damage, harm, resonance, recognition or perhaps listening', regardless, sound always 'provokes the body's response' (Rogers 2020: 455). Sound's criminogenic relevance and potential are perhaps most evident through the weaponization of its presence, such as via sonic weapons. It is also evident, however, through the weaponization of sound's absence, qua silence or indeed, silencing. Both are examples of sound's ability to impact the body. These observations highlight the importance of Keith Hayward's call for criminology to take stock of the criminogenic potential of sound, including that of 'soundscapes and acoustic spaces' (2012: 458). This article takes Hayward's call to pay attention to sound seriously. Specifically, the article aims to turn the insights of sensory criminology towards an analysis of the specific way far-right groups deploy sound to facilitate choreographed embodiment, allowing individuals to come together and comport themselves as a collective national body that moves in unison, such as when marching and chanting as one. As I elaborate, the synchronized embodiment of shared affects can be deployed to materialize the spatial and symbolic dominance of nationalists, thereby displacing the Other through (re)territorialization. To elucidate the concept and function of nationalist soundscapes, this article will first briefly examine how sound has thus far been theorized within criminology, highlighting Alison Young's recent call for the importance of 'listening criminologically' (2023). I then contextualize nationalist deployments of sound by outlining the importance far-right groups place on the everyday national sensorium, including forms of both belonging and racism which they articulate in reference to sensory experience, including sound. Following this, I articulate a criminological theory of sound that draws from recent work in sensory criminology (McClanahan and South 2019; Millie 2019; Wall 2019; Herrity et al.
2021;Young and Popovski 2023), as well as adjacent concepts such as acoustic territorialization (Labelle 2010), sonic ecologies (Atkinson 2007), affect (Ahmed 2014) and affective atmospheres (Anderson 2009;Hillary and Sumartojo 2014;Fraser and Matthews 2019;Wall 2019;Young 2019;2021).I highlight that while some existing work in this realm hints at the political potential of sound (Rae et al. 2019;Russell and Carlton 2020;de Souza and Russell 2022), importantly, this has primarily focussed on the capacity for sound to be used as a tool of resistance, towards progressive ends.With respect to nationalist soundscapes, however, I am instead interested in sound's capacity to be harnessed towards a politics of power, domination and ethnic nationalist superiority. To properly unpack this sonic politics of power, domination and ethnic nationalist superiority, I first examine how nationalists use sound as they move.Here, I examine the relationship between sound and mobility in reference to several 'literal' nationalist movements-including an EDL march I attended in Dover, England; a UPF protest in Bendigo, Australia; and the series of 'freedom convoys' that emerged across the West to protest COVID-19 movement restrictions.As I elucidate, during events such as these, nationalists use sound both as they move, and in order to move, projecting sound over and across space, ahead of themselves and in advance of their arrival.This advancing sound-which can include chants that are rhythmically repeated or music that blares from vehicles or PA systems-not only foreshadows their approach (often threateningly), but so too, extends and expands the social body of nationalist organizations beyond themselves, breaching spatial, temporal and symbolic boundaries, and allowing them to fill and occupy spaces and places before they arrive.To this end, I explore how sound is used by nationalists to both assert ownership over space, while simultaneously demonstrating that their (supposed) ownership already exists. I then shift from exploring sound and/or mobility, towards analysing how sound 'works' for nationalists when they arrive at their intended destination.Here, I discuss several examples of nationalist protests and occupations, including the Justice For J6 Rally, the series of candlelit vigils held to commemorate the Capitol Hill riots and the Australian Freedom Rally's (AFR) creation of 'Camp Freedom' at Australia's Parliament House as the culmination of the Convoy to Canberra.It is here that I focus in particular on the way far-right groups use nationalist soundscapes to (re)territorialize space through the production of affective atmospheres that allow participants to connect to one another through the embodiment of the shared affects. In concluding, I emphasize the importance of theorizing sound and paying attention to the variety of ways it is deployed by the far right to effectuate tangible forms of sensory and symbolic violence.In concluding, I gesture towards other types of events and actors to which a sensory analysis of sonic violence might be applied, with the hope of providing some building blocks towards further analyses. 'LISTENING CRIMINOLOGICALLY ' Criminology is currently undertaking a sensory turn (McClanahan and South 2019;Herrity et al. 
2021).As a part of this, Alison Young argues criminologists need to cultivate the ability to 'listen criminologically ' (2023).Some progress towards this has been made.In an earlier work, James Parker provided a sustained examination of the multifarious but often neglected relationship between sound and law, ultimately calling for the development of an 'acoustic jurisprudence ' (2015).In more recent criminological theorizing, the way sound is deployed by a range of actors has come to attention.However, as Young observes, much of this has 'focused substantially on institutional sound ' (2023: 147).For example, several researchers have 'investigated sound in the context of carceral settings', like prisons and courtrooms (Young 2023: 147).Among these important contributions can be included the work Rice (2016), Bens (2018), Russell and Carlton (2020), Flower (2021) and Herrity (2024).Outside of the prison and courtroom, others have explored the deployment of sound as a tactic of control and surveillance which is used by the state, the police and the military (Parker 2015;Merrill 2017;Wall 2019). Young also observes that the criminogenic contours of sound have begun to be explored in less explicitly criminological contexts.For example, Young and Popovski (2023) have examined how sound is deployed during a range of different types of protests, including those related to climate change, for land rights, against the development of nuclear power and against vaccines.In some of my previous work, I explore how far-right groups have used sound during protest occupations (Gillespie 2021;2023).In other contexts, Fatsis (2019), Scott (2020) and Lee (2022Lee ( , 2023) ) have examined the criminalization of music sub-genres, such as drill and grime, which are constructed as signifiers for illicit criminogenic subcultures, the (racialized) policing and regulation of which is thereby legitimized.As Fatsis notes, however, such attempts to police music can transform listening to it into an act of cultural and political resistance, potentially increasing the pleasure of listening for some (2019). These remarks highlight the importance of Young's call for 'listening criminologically' (2023).They demonstrate sound's relevance for criminology and its deployment by a variety of actors for a variety of reasons-many more of which will be discussed below.It is with this in mind that this article aims to further criminology's attunement.As outlined above, I aim to do this by focussing specifically on the way far-right groups deploy sound during events such as protests, marches and riots.In adopting this focus, my argument is not that the deployment of politicized soundscapes is the sole preserve of far-right protesters.The above observations show that they clearly are not.Instead, my claim is that there are specificities to the way far-right groups deploy sound, including both in terms of their means and their ends. 
As I demonstrate below, my primary argument is that far-right nationalists weaponize sound in a way that is unique to them-qua nationalist soundscapes-which they use to claim ownership of key spaces within the nation for themselves, while simultaneously displacing racialized others.To this end, I argue nationalist soundscapes are a technique of sonic violence which nationalists use to 'defend' the nation by preserving and reproducing a national sensorium.My claim is that these specificities are worth paying attention to and documenting because if we are to listen criminologically, we must listen attentively.It is towards such careful listening that this article will now proceed. NATIONALISM, R ACISM AND THE SENSORIAL In my own work on ethnic nationalist groups in Australia and the United Kingdom, I have frequently encountered examples of racist preoccupations with the sensory.In these contexts, groups such as the UPF and EDL3 posit near comprehensive lists of sensory and bodily intrusions, which they interpret as intrusions upon the nation itself (Gillespie 2021: 17).The EDL's Mission Statement (2016) provides several representative examples.It claims, for example, that Britain is being intruded upon and contaminated visually: 'our landscapes are marred by hideous mosques and their minarets' .Aurally: 'the so-called "call to prayer" is an audio intrusion inflicted on increasingly more communities' (my emphasis).Gustatorily: 'our food, often without our knowledge and consent, is subject to the incantations and animal brutality of the halal process' .And olfactorily: 'I smell their fucking stinking food everywhere I turn' (EDL member, cited in Treadwell and Garland 2011: 629-30).When completing fieldwork in Luton, where the EDL was formed, a member told me he could 'tell' which streets belong to which communities based 'entirely on the smell of the food' emanating from restaurants and homes (Gillespie 2021: 69). The above examples articulate with Étienne Balibar and Immanuel Wallerstein's argument that racism is a 'total social phenomenon', that renders possible the racialized other's intrusion upon all aspects of life (1991: 17).Accordingly, racism revolves around 'deep-rooted fears of intrusion' (Robinson and Gadd 2016: 197), leading to an obsessive monitoring for Otherness in everyday life (Hage 2004).It is, therefore, no surprise that racist hypervigilance frequently extends to the sensorial, where it is directed towards questions of 'food, size, shape, skin colour, even smell' , often interpreted as proxies for, or evidence of, 'miscegenation' and 'the dilution of racial purity' (Robinson and Gadd 2016: 197).The examples above highlight the scope of nationalist and racist anxieties about sensory intrusions upon the body, be they related to taste, touch, sight, smell or sound.It also highlights the elision nationalists sometimes effectuate between perceived intrusions of the senses and the nation.This is evinced, for example, by the so-called visual marring of the nation's landscapes, and the 'audio-intrusion' allegedly 'inflicted on…communities' .If individual nationalists see, hear and importantly, feel these sensory experiences, then the nation itself sees, hears, and feels them as well (Ahmed 2014: 1-2). 
The snapshot above also indicates the role the sensory plays in substantiating a perceived sense of belonging to, and possession of, the nation.From seemingly simple sensory encounters, the existence of entire territories can be inferred.This is illustrated, for example, by the notion one can 'tell' which streets belong to which communities based on the smell of particular food.It is also evinced by the EDL's claim that 'the stealthy incursion of halal meat into British supermarkets' is a deliberate step towards 'the creeping Islamisation of our country' (EDL 2016).This idea was echoed by a prominent EDL member during a rally I observed in Aylesbury: [Muslims] live under the land of the Umar.The Umar, the Islamic nation.Holds no borders.Islam is here, in our country, this Trojan horse is parked up.They're in our politics.They're in our food, they're in our schools.They're everywhere.(EDL 2015) The above statement can be read as an expression of Sivamohan Valluvan's (2019) assertion that nationalism is predicated on the idea of 'the racialised outsider' , who, by definition, is placed outside the nation and ought to remain there, and yet, always wants to get in-indeed, such that it 'holds no borders' (to borrow the EDL's words above).The sensory realm thus becomes another front to securing the nation's cultural integrity from the polluting effects of the racialized other. The understanding that nationalism places the Other outside the nation goes some way towards explaining why nationalists read the mere presence or proximity of the Other as an automatic intrusion upon the nation.Indeed, this may explain why nationalists can read sensory encounters with the Other not as 'unpleasant' isolated bodily experiences, but instead, as evidence of a much broader existential threat to the nation and its identity.The EDL's Mission Statement once again provides a clear articulation of this function, lamenting not only the 'audio intrusion' the Other '[inflicts] on increasingly more communities' , but so too, the limitations they say are increasingly placed upon nationalists due to the Other's proximity.As the statement elaborates: 'our speech concerning Islam and its "perfect man" Mohammed, is stifled by constant threats of death' (EDL 2016).Thus, while the Other allegedly inflicts its sound on the nation, nationalists themselves are silenced but need to be heard. 
4n a similar example, Ralph Cerminara, the founding member of the white supremacist Australian Defence League (ADL), shared an informative anecdote.Responding to a question as to why he founded the ADL, Cerminara recalled an experience he alleges he had while shopping 'around Christmas time' in Sydney, Australia.He said that as he made his way through a department store, he noticed Christmas carols were not being played.He said that when he asked about this, he was told: 'You're not in Australia any more here-Christmas carols upset Muslims' (Collins 2014).This recounting highlights several important features of the function sound can fulfil for nationalists.It illustrates that for nationalists, the Other's presence and proximity can disrupt the nation by disrupting the sounds that signify it.It shows that the Other can be read as intruding upon the nation both by producing its own sounds (what the EDL calls 'audio intrusions') or, by causing the absence of specific sounds that carry national(ist) connotations, such as carols at 'Christmas time', which function as a signifier for a particular cultural way of life.The absence of such sounds not only disrupts the nation but much more: in their absence, the nation is said to no longer exist: hence, 'You're not in Australia any more here' . The brief examples above provide an initial indication of the importance far-right nationalists place on the sensorial.Some sensory experiences work to sustain the nation, while others can be read as intrusions or contaminations upon it.Indeed, without the desired sights, sounds, smells, tastes or tactile sensations, the idealized nation and perhaps the nation itself, can evaporate.Given the importance nationalists place on the sensorial, in the remainder of this article, I shift to focus on the way nationalists deploy soundscapes to defend the nation against the Other's perceived intrusion, proximity and presence. THEORIZING A CRIMINOLOGY OF SOUND To theorize nationalist soundscapes-and attend to the generative capacity of sound more broadly-in the "Introduction" section articulate something akin to a 'sonic criminology' that entails thinking through the impact sound can have on the body.This is important because, as Rogers elaborates, 'sound is felt physiologically and psychologically' (2020: 455).Indeed, the body itself 'contorts around…sound' such that it comes to be shaped by it (Rogers 2020: 455).That is to say: how we experience sound can impact how we experience the body. 
Bill McClanahan and Nigel South observe that 'there is no paucity of innovative research that deals with or employs the image' (2019: 1).Until recently however, the other senses, including sound, have been comparatively neglected as sites of critical inquiry, based largely on the occularcentric nature of Western ontology and epistemology (McClanahan and South 2019: 4).This is problematic because the senses do not operate in isolation from one another, but rather, are 'inextricably linked' such that input from one sensory modality can condition that of another (McClanahan and South 2019: 14) From this, McClanahan and South conclude that: Olfactory, tactile, auditory, gustatory and visual data, as they arrive in our internal affective spaces, are creations of an incalculable range of factors that include the conditions of their production, the historical context of that production and dissemination, and the cultural dynamics of their intake or consumption.Put simply, sensory information is given meaning through a complex system of interpretations, encounters and relations.(2019: 8) While the social and criminogenic aspects of the senses require attention, it is important that they are not conceptualized as mere phenomena that occur within a given frame, which can be isolated and about which knowledge can be produced (Young 2014).As Young reminds us with respect to visual criminology, 'images are frequently constructed as objects of analysis' , rather than 'constitutive elements of the discursive field ' (2014: 159).By the same token, I argue sound should not be conceptualized as a passive element that merely occurs as an effect of, or within, an already-established context, as a by-product of other, more primary processes.Instead, like the image, so too sound can be constitutive of the very spaces and places with/in which it seems to reverberate. Emma Russell and Bree Carlton note that 'sound is a particularly powerful boundary-crosser' , capable not only of filling existing spaces but of challenging extant spatial orderings (Russell and Carlton 2020: 296; see also Labelle 2010).This is because sound travels over and through (carceral) architectures, such that new 'spatial boundaries and impressions' can be 'created, reinstated and broken apart' (Russell and Carlton 2020: 300).As they elucidate through their concept of 'counter-carceral acoustemologies', sound's ability to cross and challenge borders can be harnessed as a progressive tool of social justice and anti-carceral resistance.This is because 'sound can breach the carceral boundary and displace carceral-spatial control through forms of political dialogue and creative exchange between imprisoned and non-imprisoned actors' (Russell and Carlton 2020: 297; see also Rae et al. 2019).Put simply, sound can challenge borders, which often struggle to contain it.Indeed, by forging connections and solidarity between people separated by established borders, the spaces those borders define can themselves be (re) territorialized and constituted anew. While counter-carceral acoustemologies highlight the progressive valence of sound, the concept of nationalist soundscapes seeks to articulate its capacity for the opposite: that is, for its use towards exclusionary ends.This potential articulates with Russell and Carlton's analysis that although 'space produces sound in all kinds of ways', space itself can simultaneously be 'configured and territorialised through sound' (2020: 300). 
5Thus, just as sound can disrupt extant power relations, so too it can be used to strengthen and (re)establish them.This is approximate to what Brandon LaBelle calls 'acoustic territorialisation' , whereby sound galvanizes social bodies 'into a collective force', that can effectively expand the size, presence and demeanour of a crowd (2010: 115). 6Similarly, it articulates with Rowland Atkinson's notion of 'ecologies of sound ' (2007), whereby urban space is 'ordered' in part through the implementation of soundscapes. One of the mechanisms through which sound can territorialize space and solidify social bodies is through the generation of affect (Ahmed 2014).As Young elaborates-through a definition that already implicates sound-'affect marks the moment at which connection to something seen, heard, experienced or thought registers in the body and then demands that it be named or defined ' (2014: 162).This demand to name that which registers in the body-including that which is heard-does not leave the body unchanged, but rather, can constitute the very body that supposedly experiences it.For example, how sound is registered in a particular context can imply something about the subject who 'hears' sound in that way, including their relationship to the space and place in which the sound is heard, as well as with those around them.During a nationalist riot, for example, a racist chant may be 'heard' by a nationalist as empowering and emboldening, and as an affirmation of a particular positionality both within the nation and with respect to the riot and rioters themselves.The same chant, however, is unlikely to be heard in the same way by those to whom it is targeted.For them, the chant may instead induce anger or fear, among a plethora of other possibilities.This reading of the affective but contingent impact of sound on the body shows the extent of Rogers' claim that sound is felt 'physiologically and psychologically ' (2020: 455).Indeed, in this context, sound can be felt such that it violently constitutes who belongs to the nation and can feel safe there, and who cannot. When affects circulate and are produced within a particular space, they generate 'affective atmospheres': environments that both produce and are produced by affects (Anderson 2009; Wall 2019; Young 2019; 2021).Consider, for example, the fervour that may seem to sustain a race riot (for now, understood in abstract, generic terms).Such a riot may be an expression of already-existing affects and emotions; however, it might simultaneously create and sustain those very affects and emotions as well, essentially reproducing itself by reproducing that which coheres to the subjects of which it is comprised.This co-production of affects, atmospheres and social bodies can be deliberately harnessed (Wall 2019;2020).Illan rua Wall calls this as 'atmotechnics': the technologies and techniques through which affective atmospheres are deliberately created and manipulated towards specific ends (2019).Here, sound can play a vital role.Consider, for example, the use of sonic weapons to disperse crowds-that is, to break up social bodies-by creating an intolerable sonic atmosphere.Similarly, consider the way music is sometimes deployed as a form of hostile architecture, such as when classical music is played at train stations at night-time, with the intention of preventing 'youths' from gathering to socialize 'anti-socially' . 
As I elaborate below, the concept of nationalist soundscapes refers precisely to the techniques and technologies through which far-right nationalist groups deploy sound to deliberately create and sustain affective atmospheres.Such atmospheres work to galvanize the nationalist social body by (re)territorializing and (re)ordering the nation by displacing the Other and securing it for the nationalist.As I explain, the creation of affective atmospheres qua nationalist soundscapes is thus tantamount to sonic violence. SOUND AND MOVEMENT Far-right nationalist organizations frequently convey the idea they have a 'right' over the spatiality of the nation, and thus, should always be able to move freely within it (Gillespie 2020;2021). For example, during protest, I observed in 2015, a prominent EDL member giving a speech via megaphone declared: We've all come here today to prove a point.Our streets belong to us.We march them whenever we want, no matter what the Muslims say.No matter what the police say, no matter what the Lefties say: they are our streets!We control them!They are our streets!Similarly, in the lead-up to a protest of the building of a Mosque in Bendigo, Australia, members of the UPF exclaimed: 'The police have been instructed to minimize and inconvenience us.BUT THE LAND BELONGS TO US, AND WE'LL GO WHEREVER WE LIKE' (United Patriots Front 2015).Similar sentiments have also been expressed recently by far-right groups protesting against COVID-19 countermeasures.For example, when movement restrictions and vaccine passports were imposed, a series of 'freedom convoys' and protest occupations emerged across the West to performatively defy such measures, converging on their respective political capitals.Many of these were explicitly far right in nature, including the Truckers' Freedom Convoy, the Peoples' Freedom Movement, the Convoy to Canberra and the Convoy to Wellington, all of which were organized and attended by well-known far-right groups and political figures (Gillespie 2023;Gillespie and Ghumkhor 2024).In each of these events, when faced with restrictions upon their movement, whether real or perceived, nationalists themselves have moved performatively by choosing forms of protest that are themselves constituted by movement-such as convoys, marches and rallies.What I will now elucidate is the extent to which sound can function as an important component of such literal and figurative movements. When it was most active between 2009 and 2016, the EDL held marches and rallies on a near-weekly basis.During this time, the EDL was famous for its use of military-style chants and drumming as it marched down the road.These were usually chanted 'live' by those in attendance, although pre-recorded versions were also sometimes played and sung along to via PA systems.The most well-known chant-which is repeated in the EDL's theme song-was the repetitive and rhythmic, 'E-E-EDL!/ E-E-EDL! / E-E-EDL! / We're comin' down the road!'This chant emphasized the presumed spatial dominance and mobility of those 'comin' down the road' , describing literally what EDL members were doing as they chanted it. Another prominent EDL chant assumed a military 'ask/answer' format, in which the highest-ranking officer asks the group a question, to which they respond in unison (Gillespie 2021: 44).The chant went as follows: Officer: 'Whose streets?' Group: 'Our streets!' Officer: 'Whose streets?' Group: 'Our streets!' Officer: 'Whose fucking streets?!' 
Group: 'Our fucking streets!'Both of these chants make an explicit political claim through their content.Each articulates an image of the EDL's professed relationship to the nation: one of ownership, mobility, dominance and freedom.However, the form these chants assume-that of sound-also contributes to their meaning, insofar as the medium of sound reifies the EDL's claims of ownership and spatial dominance, literally filling the streets they claim they own with a violent declaration of that ownership, for all to hear in advance of their imminent arrival. The sonic form of these chants, and others like them, can also serve several other ideological functions simultaneously.For example, the nationalist soundscape to which these chants contribute to cohere members of the EDL together as a unified social body.As Young and Popovski note, ' A sonic characteristic of call-and-response is that the response is far louder than the original call; contribution to the response acoustically sutures the bodies of individuals present into the mass event ' (2023: 9).In this context, EDL members can sing and respond in unison because they supposedly already are in unison, even if it is their response that retroactively makes them so.Similarly, the chant's audible allusion to the military, via the 'ask/answer' format, further highlights the role sound can play in cohering members of groups such as the EDL together as 'one' (Gillespie 2021: 44-45).By alluding to the military, this chant contributes to the sense of legitimacy, organization and hierarchy the EDL seeks to establish for itself, while also working to substantiate the EDL's depiction of itself as a national 'defence league' , which not only acts on behalf of the nation but does so with the nation's authority.The EDL strengthens such sonic associations through its practice of frequently employing military-style snare drums, which emulate those used by British foot soldiers when marching.Like the chants above, the sound of the snare drum adds to the symbolism of the soundscape the EDL generates as it moves.Rather than merely conveying such ideological connotations, however, the drumming itself also facilitates the EDL's embodiment of this symbolism, allowing members to choreograph their bodies so that they all march in unison as one to the sound. Nationalist soundscapes serve a function not only as far-right organizations mobilize by foot, such as when marching and rioting, but so too, when they traverse space in their vehicles, such as when en route to a protest.This is perhaps most evident when the protest itself is a convoyas was the case with the series of freedom convoys described above.When nationalistic music is played from vehicles, occupants are surrounded by a mobile soundscape that is simultaneously projected outward, upon any who can hear it, as the vehicle moves.For example, when I observed EDL members gathering at an agreed location in preparation for a march, I noticed that as members arrived in their vehicles, almost all were playing loud music-often the EDL's theme song or other songs by the EDL band, Alex and the Bandits.These sonic announcements could be heard before the vehicles rounded the corner to the meeting point, ensuring their arrival was anticipated and was met with cheers by those who had already gathered.In effect, the sound of these vehicles arrived before they did. 
The projection of soundscapes from nationalist vehicles is not unique to the EDL.It is also a feature of many other ethnic nationalist organizations.For example, similar tactics were employed by the UPF in Australia.This occurred perhaps most notably during an infamous protest in Bendigo in 2015, which the UPF held to protest the building of a new mosque.During the event, many participants arrived in their 'utes' (an Australian slang term for 'utility vehicles' , often called 'pick-ups' in the United States).Such vehicles hold a place in the Australian nationalist imaginary, where their ability to traverse rough terrain is imagined to convey something about the strength and resilience of white Australians.Their ability to easily transport surfboards also associates them with both an emblematic national pastime and spatiality: surfing and the beach (Fiske 1983).Throughout the protest, UPF members used their utes to continuously drive laps around the congregation (Tilley 2015).As they did, they played a range of 'nationalist anthems' from their sound systems, including 'Waltzing Matilda' (which is often described as Australia's 'unofficial' national anthem); Jimmy Barnes' anti-government song, 'Khe Sanh' (a symbol of Australian resistance) and John Farnham's 'You're the Voice' (which I discuss in the next section).Through this tactic, the UPF ensured its protest was not only surrounded physically by the vehicles encircling it but also by the sonic border of a nationalist soundscape.This soundscape performed a dual function.On the one hand, it helped to materialize the protest as a coherent social body, with defined boundaries and borders even as it moved.Paradoxically, it also worked to enlarge the social body beyond those borders, as its sound travelled ahead of and beyond the protest.In the "Introduction" section that nationalist soundscapes should not be conceptualized as merely incidental or coincidental phenomena that occur when nationalist groups come together.Rather, sound can be constitutive of that very togetherness and can play an important role in helping nationalist groups to unite and move as harmonious social bodies.While this section has focussed on the way nationalist groups deploy sound as they move, in the coming section, I considered how they deploy sound when that movement comes to a stop. 
SONIC OCCUPATION AND AFFECT As outlined above, far-right groups deploy nationalist soundscapes as they traverse space, be it via chants and PA systems when marching, from vehicles as they form convoys or by driving laps around protest gatherings.So too, they deploy nationalist soundscapes when their movement comes to a halt: such as when they arrive at their intended destination, and seek to occupy and colonize that space.Throughout this section, I will analyse how nationalists use sound to occupy and establish themselves in space and place.I argue nationalist soundscapes work to create affective atmospheres that help nationalists establish and demonstrate symbolic and spatial dominance.To explore these dynamics, I will refer to two events as primary case studies: the Justice for J6 Rally, which occurred in the United States in 2021 and the AFR's creation of 'Camp Freedom' in 2022, in Canberra, Australia.After the Capitol Hill riots7 of 6 January 2021-known also as the J6 riots-approximately 20 participants who were imprisoned for their involvement formed the J6 Prison Choir.Using prison teleconference technologies, they recorded 'Justice For All' , which briefly rose to the top of the iTunes chart for the most downloaded new song, before it was removed.In Justice For All, the choir sings the United States' national anthem, the Star-Spangled Banner, which is interspersed with an audio collage assembled from various snippets of speeches given by Donald Trump at various rallies and political events.Included are several lines where Trump recites the Pledge of Allegiance, such as when he 'pledges allegiance to the Flag of the United States of America', and declares, 'We are one nation under God' . Soon after it was recorded, Justice For All became a rallying call for Trump supporters.It is now frequently played at political events, rallies and protests across the United States.It has been played, for example, during protests against the outcome of the 2020 election, which Trump and many of his supporters claim was stolen.Most notable among these is perhaps the Justice for J6 Rally, which was held at the site of the original J6 riots and became violent.As explained above, Justice For All was also played at over 35 candlelit vigils across the United States on the anniversary of the J6 riots, where it was used as a symbol of solidarity for those imprisoned, as well as to lament lost freedom and the 'theft' of the election. 
The playing of Justice For All during such events serves several functions.It links those who hear it-or at least, those who hear it a certain way-to those imprisoned for the J6 riots.To this end, Justice For All operates in a fashion akin to Russell and Carlton's concept of ' counter-carceral acoustemologies' (2020), whereby the border-crossing capacity of sound is harnessed to create trans-spatial solidarity-albeit here, utilized towards reactionary ends.Through Justice For All, the members of the J6 Prison Choir reach beyond the carceral architectures that seek to contain them.Simultaneously, those who gather at vigils held in their name are able to connect to and honour those they call 'the J6' .The soundscape created by these vigils is sustained not only by the song Justice For All but also through the recitation Pledge of Allegiance, various prayers and the holding of collective minutes of silence, which, as Young and Popovski note, can 'paradoxically amplify the impact of protest' by '[creating] a space of silence within the conventionally hyper-noisy metropolis ' (2023: 10). The soundscapes produced at these vigils do not merely link those who attend them to those who are imprisoned.They also work to establish affective links between those assembled.This is achieved through the coming together of bodies enveloped within a shared nationalist soundscape that quite literally sets the affective tone of that coming together.Indeed, the emotions generated by the affective atmosphere of the vigils are explicitly named in the words of Justice For All and its sampling of the Pledge of Allegiance.These lyrics articulate a particular image of the nation as a unified nation under God, which is 'indivisible' and provides 'liberty and justice for all' .Like this image of the nation, so too, those who assemble in its name and in the name of justice itself, come together as a singular, indivisible body, united by a sense of justice and a shared lamentation of a gross injustice having occurred-that of the supposedly stolen election and the political imprisonment of the J6.This interpretation shows that the creation of nationalist soundscapes can facilitate with Alistair Fraser and Daniel Matthews call 'spatialised feeling' (2019: 2), which describes the processes through which affective orientations come to be reified within and via spatialities, such that those who bear a relation to them can experience and perform particular affects therein.With respect to the J6 vigils, those who participate assemble because the message of Justice For All 'resonates' with them.By assembling to listen, participants can experience the affects and emotions articulated by Justice For All together, as a collective body that occupies space. 
Like Justice For All, the minutes of silence held at these vigils-which contribute to a nationalist soundscape through the manipulated absence of sound-also facilitate spatialized feeling by employing what Young and Popovski call a 'vocal silence' (2023: 10) that generates a shared sense of grief and solidarity in the face of the nation's supposed loss.This demonstrates that nationalist soundscapes can facilitate connections between subjects in a variety of ways.Whether through sound or silence, soundscapes can facilitate the collective embodiment of the affects they sonically articulate.This is seen above, for example, with respect to shared performances of outrage and injustice through yelling, singing, chanting and listening together, and so too, through shared performances of grief conveyed through collective silence.Nationalist soundscapes can thus help to produce social bodies via the socialization of affect.Such bodies can experience certain affects and emotions because they are together, and they are together because they experience certain affects and emotions (Ahmed 2014). The way nationalist soundscapes can generate particular affects within particular spatialities speaks to LaBelle's theory of 'acoustic territories' (2010), as discussed above.Examples of this are readily apparent in the selection of far-right events I have discussed, which include protests and marches held by the EDL and UPF, the 'freedom convoy' phenomenon that emerged across the West during the COVID-19 pandemic and the series of protests and vigils held following the imprisonment of those involved in the Capitol Hill riots.A further example occurred when the Convoy to Canberra arrived at Australia's political capital to establish a protest occupation they called 'Camp Freedom' .In doing so, the AFR deployed a range of sensory technologies, modifying the space visually, haptically and acoustically (Gillespie 2023).A core aspect of these modifications was the deployment of sound.The digital pamphlets disseminated widely online by the AFR asked participants to bring, among other things, 'Speakers and Megaphones' , with the express intention of ensuring nationalistic songs and political messages could be played upon the convoy's arrival.From the moment they formed Camp Freedom, these sound systems were used to play music with nationalist connotations, such as 'You're the Voice' , 'Khe Sahn' and 'Waltzing Matilda' , as used by other Australian nationalist groups, such as the UPF.Diverging somewhat from Labelle's of acoustic territorialization, I maintain AFR did not first create Camp Freedom so it could then play the music it wanted within the space it occupied.Rather, it was the very playing of the music itself that constituted the occupation of the space-qua Camp Freedom-in the first instance.This reading suggests that the space onto and into which nationalist soundscapes are projected does not necessarily pre-exist that projection, but that rather, space itself-at least, as it comes to be understood and inhabited-can be constituted by the projection of (nationalist) soundscapes upon it.By extension, this implies that the collectivities that come to occupy space by projecting sound upon it need not pre-exist those processes of projection either.Instead, the production of sound itself can retroactively form and inform the collectivity through the generation of shared affects. 
To this end, the lyrics of John Farnham's well-known anthem You're the Voice-as frequently appropriated by Australian nationalists-are informative.As the song proclaims: 'You're the voice, try and understand it / Make a noise and make it clear / We're not gonna sit in silence / We're not gonna live with fear / This time, we know we all can stand together' .For the many nationalists who have loudly sung these lyrics, sitting in silence is tantamount to living in fear.In contrast, contributing one's voice to collective action is simultaneously the making of a collective that can overcome the stifling effects of fear and silence by allowing bodies to 'stand together' as one.To this end, the sonic violence of nationalist soundscapes works not only to displace the Other but to constitute the nationalist social body as well and the very spaces they seek to inhabit. CONCLUSION Far-right nationalist groups frequently conceptualize sensory encounters with the Other as intrusions and contaminations upon the body, which they read as intrusions and contaminations upon the nation itself.Throughout this article, I have analysed the way ethnic nationalist groups utilize sound to attempt to counteract such perceived intrusions.I have introduced the concept of nationalist soundscapes to refer to the sonic violence of the far-right and as a means of conceptualizing the function this sonic violence sometimes fulfils. Elaine Scarry writes that 'so long as one is speaking, the self extends out beyond the boundaries of the body, [and] occupies a space much larger than the body ' (1985: 33).This is precisely the role nationalist soundscapes fulfil for far-right groups during events such as protests, riots and marches.I have argued that in addition to displacing the Other, nationalist soundscapes effectuate both the production of collective nationalist bodies and the spaces they occupy.I have explored this contention in reference to a snapshot of far-right nationalist events, including protests, marches, riots, rallies, occupations and convoys held by groups like the EDL, UPF and AFR.So too, I have explored the phenomenon of far-right 'freedom convoys' , which emerged during the COVID-19 pandemic and the series of protests and silent vigils that stemmed from the imprisonment of those involved in the Capitol Hill riots, including the formation of the J6 Prison Choir. My analysis of these groups and events suggests that nationalist soundscapes not only play a role in allowing nationalists to colonize and (re)territorialize spaces but so too, in constituting the very nationalist groups that come to occupy those spaces.This is because nationalist soundscapes allow far-right groups to embody and perform affects that connect participants to one another, validating their shared status as nationalists.Such affects include those relating to a shared sense of ownership and mobility; injustice, outrage, righteousness or grief; and a sense of solidarity and unity.By constituting the collective nationalist body, nationalist soundscapes simultaneously work to 'defend' the nation by securing its cultural integrity from the contaminating effects racialized others it displaces.This is one of the primary aspects of the sonic violence of nationalist soundscapes. 
While this article has aimed to articulate the specific mechanisms through which far-right nationalists deploy and weaponize sound, and the ends to which they do so, far-right actors are not the only ones that deploy soundscapes toward political ends. A variety of other actors, events and circumstances also warrant examination. It is therefore hoped that in addition to providing a better understanding of the way contemporary far-right groups deploy sonic violence, the foregoing analysis might also provide a starting point from which such examinations can proceed so that the ability to listen criminologically might be further cultivated. What we hear or smell in a given moment, for example, may affect what we see in that moment. So too, it might affect how we interpret what we see. Sensory modalities not only influence one another but so too can influence our emotions and how we feel. As Rebecca Rago elaborates: Our emotions and senses are very tightly intertwined. What we hear, see, taste, smell, and touch can provide us with information on how to feel. In the other direction, what we feel can be heavily influenced by what our senses are taking in. (2014: no pagination)
2024-07-25T15:13:57.704Z
2024-07-23T00:00:00.000
{ "year": 2024, "sha1": "561ba22a39def20d993d3b1188206085edb85eee", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/bjc/azae046", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "40b29d6768deba7e944c4ceb2accc4b4f1a297b3", "s2fieldsofstudy": [ "Sociology", "Political Science" ], "extfieldsofstudy": [] }
231940642
pes2o/s2orc
v3-fos-license
Postural ergonomics and work-related musculoskeletal disorders in neurosurgery: lessons from an international survey Background Work-related musculoskeletal disorders (WMSDs) affect a significant percentage of the neurosurgical workforce. The aim of the current questionnaire-based study was to examine the prevalence of WMSDs amongst neurosurgeons, identify risk factors, and study the views of neurosurgeons regarding ergonomics. Methods From June to August 2020, members of the “European Association of Neurosurgical Societies,” the “Neurosurgery Research Listserv,” and the “Latin American Federation of Neurosurgical Societies” were asked to complete an electronic questionnaire on the topics of WMSDs and ergonomics. Results A total of 409 neurosurgeons responded to the survey, with a 4.7 male to female ratio. Most of the surgeons worked in Europe (76.9%) in academic public hospitals. The vast majority of the participants (87.9%) had experienced WMSDs, mainly affecting the shoulder, neck, and back muscles. The most common operations performed by the participants were “Craniotomy for convexity/intrinsic tumors” (24.1%) and “Open lumbar basic spine” (24.1%). Neurosurgeons agreed that ergonomics is an underexposed area in the neurosurgical field (84.8%) and that more resources should be spend (87.3%) and training curricula changes should be made (78.3%) in order to alleviate the burden of WMSDs on neurosurgeons. Univariate analysis did not reveal any associations between the development of WMSDs and age, gender, tenure, average duration of operation, operating time per week, type of operation, and surgical approach. Conclusions The problem of WMSDs ought to be more closely addressed and managed by the neurosurgical community. More studies ought to be designed to investigate specific ergonomic parameters in order to formulate practice recommendations. Supplementary Information The online version contains supplementary material available at 10.1007/s00701-021-04722-5. Introduction In recent years, the occupational mental burden and its effects on physicians' health, namely, burnout, have been given a lot of attention and have been extensively studied [7,25]. On the contrary, albeit work-related physical burden is also prominent in the medical profession, especially amongst surgeons, it is not as widely studied and addressed. Work-related musculoskeletal disorders (WMSDs) are injuries that affect various elements of the musculoskeletal system, such as the muscles, the tendons, the nerves, and the joints [12]. Their prevalence amongst surgeons is reported to be between 20 and 70% [2,15], with the most commonly affected muscle groups being those of the neck, shoulders, and lower back [24]. WMSDs in surgeons can lead to numerous disease processes such as carpal tunnel syndrome, lumbar/ cervical radiculopathy, varicose veins, and rotator cuff disease [6,9,17]. Such injuries do not solely have an effect on the surgeons' ability to operate, but also have a significant impact on patient care as well. WMSD is the number one cause of absenteeism amongst healthcare workers, thus indirectly decreasing the healthcare workforce and consequently increasing the patient waiting time [42]. More importantly, WMSDs have been shown to reduce dexterity, range of motion, grip strength, and proprioception, with a direct impact on optimal patient's care [30,37,40]. 
The International Ergonomics Association Council defines ergonomics as "the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance" [21]. It has been proposed that ergonomics can facilitate surgeons in the process of altering their everyday practice to alleviate the physical stressors that cause WMSDs and improve their general well-being [15]. Although several studies have looked into the subject of WMSDs and postural ergonomics in relation to the practice of general, orthopedic, and gynecologic surgery [6,10,27], the neurosurgery-related literature is limited. The aim of the current questionnaire-based cross-sectional study was to examine the prevalence of WMSDs amongst neurosurgeons, identify possible risk factors in developing such disorders, and investigate neurosurgeons' views and attitudes regarding postural ergonomics. Materials and methods The present study constitutes a questionnaire-based, cross-sectional survey developed based on previously published literature on the subject of postural ergonomics in the surgical field [6,10,19,20]. The "Google Forms" online platform (Google, Inc.) was used to distribute an electronic questionnaire to the members of the European Association of Neurosurgical Societies (EANS), utilizing the EANS mailing list (c.2000) and the association's Twitter and Facebook accounts, between June 3, 2020, and August 11, 2020. Furthermore, the questionnaire was distributed through email and Facebook posts to the members of neurosurgery-related groups [e.g., the "Neurosurgery Research Listserv" (8000 members)] and the members of the Latin American Federation of Neurosurgical Societies (FLANC) (c.2000). Reminder e-mails were sent 2 and 4 weeks after initial distribution to increase the response rate. The survey did not collect any data through which the participants could be personally identified. The participants were asked to answer 38-49 questions (based on their answers) covering four major areas of interest, namely, (1) demographics and general information, (2) health-related information focusing on the musculoskeletal system, (3) procedure-specific information, and (4) personal views and attitudes regarding ergonomics. Statistical analysis All statistical analysis calculations were performed using GraphPad Prism (version 8.4.0 for MacOS, GraphPad Software, San Diego, California, USA, www.graphpad.com). Categorical variables were analyzed and tested for statistical significance using Fisher's exact test and the χ2 test, as appropriate. The statistical significance threshold was set at p = 0.05.
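To make the univariate testing concrete, the hedged Python sketch below shows how an association between one categorical exposure and WMSD status could be tested with a χ2 test and Fisher's exact test on a 2 × 2 contingency table. The study itself used GraphPad Prism; SciPy is used here only for illustration, and the counts in the table are invented rather than taken from the survey.

```python
# Illustrative sketch of the univariate testing described above, using SciPy in
# place of GraphPad Prism. The contingency counts are hypothetical, NOT study data.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: exposure categories (e.g., mainly cranial vs mainly spine surgeons)
# Columns: reported WMSDs (yes, no)
table = [[160, 21],   # hypothetical counts, mainly cranial
         [110, 16]]   # hypothetical counts, mainly spine

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when expected counts are small

alpha = 0.05  # significance threshold used in the study
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
print("significant association" if min(p_chi2, p_fisher) < alpha else "no association detected")
```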
Neck, shoulder, and back were the most commonly symptomatic body parts, mainly affected after performing surgery (Fig. 2a). It is important to note that a substantial percentage of the responders (N = 98, 27.4%) started experiencing WMSDs after performing surgery while still in residency (Fig. 2b). This may indicate that early training regarding ergonomics and WMSDs is needed in order to educate young trainees on how to operate efficiently and ergonomically, something that more experienced neurosurgeons have learned through exposure. Out of those with symptoms, only 30 (8.4%) had to decrease their case volume; however, 215 (60.4%) have sought some kind of treatment for their symptoms. Interestingly, only 28 (7.85%) participants reported that they had taken time off work due to their symptoms. Table 2 summarizes the health-related data of the participants. Information regarding intraoperative practice for the most commonly mentioned types of procedures is presented in Appendix A (Tables 4 and 5). Figure 4 demonstrates the physical burden of the most commonly mentioned types of procedures on the various parts of the body, based on the number of operations performed per week. Regardless of the type of procedure, the most commonly affected parts of the body were the neck, shoulders, and lower back. The least affected areas of the body were the eyes and the wrists/fingers. Most surgeons complained of pain mainly after and during surgery, and it is important to note that very few reported experiencing symptoms on a continuous basis. Views and attitudes on postural ergonomics The overwhelming majority of the responders believe that the physical burden on healthcare practitioners is an underexposed area in medicine (N = 320/400, 80%). Similarly, they also believe that postural ergonomics, in particular, is an underexposed area in the neurosurgical field (N = 340/401, 84.8%). Evidently, 314/401 (78.3%) reported that changes should be made in the training curricula of trainees in order for them to receive education and training on the topic of surgical ergonomics. Furthermore, 349/400 (87.3%) believe that hospital management authorities should invest more resources in order to equip the operating rooms more ergonomically. Univariate analysis Table 3 presents an overview of the results of the univariate analysis based on experiencing WMSDs. No associations were found between the development of WMSDs and the age/gender of the responders, their tenure, and the average duration of their operations. Moreover, no associations were found regarding the time they spend operating per week, their most common operations (craniotomy vs spine), the use of lead protection while operating, and the surgical approach (open versus minimally invasive). Of note, a statistically significantly larger number of participants in the 100-300 operations per year group reported WMSDs when compared with the > 300 operations per year group. As with the observation regarding training level, it is unclear whether this indicates that surgeons with a higher volume of cases learn to work more ergonomically. Summary The present questionnaire-based study surveyed 409 neurosurgeons to assess the effect of WMSDs in the neurosurgical field. Our results reveal that WMSDs are a prevalent issue in the field, as more than 85% of the participants reported that they have previously experienced some musculoskeletal discomfort associated with work-related exposure.
Complaints associated with the neck, the back, and the shoulders were commonly mentioned by the responders, mainly occurring after performing surgery. A sizeable percentage of those who have experienced WMSDs have sought treatment, using analgesics and physical therapy. Most neurosurgeons reported that they believe that "ergonomics is an underexposed area in the neurosurgical field", and that young neurosurgeons should be educated and trained on the subject while still in training. It is worth noting that our results hint that WMSDs start early in the course of a neurosurgeon's career (even during residency) and that surgeons with a higher volume of operations may empirically learn to work more ergonomically. As a result, it could be beneficial for young trainees and specialists to attend courses designed by experts and senior neurosurgeons on the subject of WMSDs. Our study did not reveal any associations between the development of WMSDs and any of the factors analyzed, perhaps indicating that WMSDs are a global problem in neurosurgery irrespective of gender, age, tenure, operating volume, or surgical approach (open versus minimally invasive). Literature overview In recent years, increased awareness of the physical burden of operating on surgeons has led to the publication of several studies investigating the subject amongst various surgical specialties [6,19,20]. Most of the authors conclude that WMSDs are an important problem in the surgical profession and advocate for further research in the field of postural ergonomics in surgery. The literature pertaining to the field of postural ergonomics in neurosurgery is limited. Gadjradj et al. [19], in a recent survey amongst neurosurgeons, reported results similar to those of our present study. Of importance, they identified a tenure of more than 15 years to be associated with the development of WMSDs, specifically pain/discomfort, a result not replicated in our analysis. However, a previous study performed amongst spine surgeons [6] did not show any correlation between years of practice and WMSD development. The gender factor It has been previously reported, in studies among the general population and various occupations, that the prevalence of WMSDs is greater amongst women [11,41,43]. It has been proposed that the smaller body size and anthropometric measurements of females may lead to a higher workload when performing the same tasks as males [39]. Furthermore, several studies suggest that sex hormones (e.g., estrogens) affect pain perception and argue that lower estrogen levels during some phases of the menstrual cycle may lead women to report more symptoms than men [3,4,16]. Interestingly, our study did not find any gender-based differences in the prevalence of symptoms when comparing females versus males. Minimally invasive versus open surgery The establishment of the concept of minimally invasive surgery and the implementation of minimally invasive techniques, especially in the fields of general and gynecological surgery, has fundamentally altered patient care [34]. However, minimally invasive procedures (e.g., laparoscopic and endoscopic) have been traditionally associated with increased WMSDs [2,31]. Endoscopic procedures are frequently performed in neurosurgery and have been associated with upper limb and shoulder pain [26].
In the present study, minimally invasive spine (MIS) procedures did not seem to increase WMSDs when compared with open spine surgery. Furthermore, when skull base surgery and trans-nasal trans-sphenoidal hypophysectomy were compared with "other craniotomy" procedures, no statistically significant difference in WMSDs was identified. Intraoperative routine and equipment Prolonged standing periods have been previously associated with increased lower back, leg, and feet pain [36]. Several authors have suggested that a sitting position should be preferred for long tasks, such as microsurgical interventions and suturing [8,22,23]. However, the results of our study suggest that most surgeons spend the majority of their operating time in the standing position. In order to minimize physical burden, specific training courses and trainee education could focus on teaching young neurosurgeons to effectively operate while sitting, when appropriate. In a study amongst surgeons performing vaginal surgery, chairs with round, flat seats, and back support were reported to be more comfortable than those with saddle-shaped seats and no back support [35]. Our results indicate that most neurosurgeons use a chair without back and neck support. This may indicate that operating rooms are not furnished with ergonomic equipment and that more careful planning and funds should be spent in that direction. It has been previously reported that, although loupes offer several advantages such as portability and cost-effectiveness, procedures performed with them are associated with extreme neck angles and increased muscle workload [13,44]. On the other hand, operating with the use of a microscope allows surgeons to maintain a neutral head position and offers a better view of the surgical field [13]. When available and appropriate, the microscope should be preferred to the loupes as it can increase surgeon's comfort and make assisting and operating safer and easier. Spine surgeons often use fluoroscopy-guided techniques to enable correct instrumentation and execution of procedures. In order to minimize radiation exposure, they usually wear lead aprons that can weigh up to 17 kg [1]. It has been reported that wearing a lead apron increases discomfort and fatigue, especially on the muscle groups of the back [1]. Although the majority of participants agree that wearing a lead apron increases physical discomfort, our univariate analysis did not reveal a statistically significant difference in WMSDs occurrence in spine surgeons that reported frequent lead apron usage. Future considerations The field of postural ergonomics in surgery is becoming increasingly popular in recent years, leading to an increased effort by the surgical community to find solutions regarding the problem of WMSDs. It is important to educate trainees and young neurosurgeons to be mindful of the related occupational risks that they will be inevitably exposed to throughout their careers. This could be achieved by officially incorporating postural ergonomics education into the training curricula of neurosurgery residents and can also be facilitated by courses on specific topics organized by neurosurgical societies. In 2013, Franasiak et al. reported that after attending ergonometric training designed by an expert, 88% of their study participants (robotic surgeons) changed practice, with 84% reporting reduction in musculoskeletal strain [18]. 
Interestingly, another study showed that training in the Alexander technique, a method that is used to change and improve movement habits, resulted in improved posture and less discomfort amongst urological surgeons [33]. Furthermore, more studies focusing on postural ergonomics, surgical instrument design, and operating theater equipment should be designed to identify ideal ergonomics for neurosurgery. An interesting approach was used by researchers from the Mayo Clinic, USA, who used wearable sensor inertial measurement units to study the posture of surgeons while operating [28]. In recent years, the concept of intraoperative microbreaks has been studied in order to identify if microbreaks can result in less fatigue. In a 2013 study, Dorion and Darveau reported that 20-s-long intraoperative microbreaks every 20 min to stretch the neck and shoulders resulted in statistically significant less discomfort in all body areas of the study participants (general surgeons, neurosurgeons, head and neck surgeons, cardiac surgeons) [14]. Limitations The current study has some limitations that should be acknowledged. Firstly, recall bias is an important factor in all survey-based studies, and it is particularly important in studies like ours that ask participants to recall information regarding careers spanning more than 45 years in some cases [5]. Additionally, the number of responders in our study was limited when compared with the global (≈ 50,000 neurosurgeons) and even the European (≈ 11,000) neurosurgical workforce [29]. Finally, because of the design of our study (mainly focused on EANS members), the vast majority of responders practise in Europe, introducing selection bias. These limitations make careful interpretation of our results necessary. Conclusion Postural ergonomics and WMSDs are important topics, which deserve more attention from the neurosurgical community, as a significant percentage of neurosurgeons has experienced WMSDs at some point throughout their career. Further research has to be conducted in order to shed more light on specific areas of interest, such as those of postural ergonomics and operating theater equipment. Trainees and young neurosurgeons ought to be educated on the subject and receive specific training, in order to adopt healthy attitudes and minimize WMSDs. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A Novel Antimicrobial Peptide Sparanegtin Identified in Scylla paramamosain Showing Antimicrobial Activity and Immunoprotective Role In Vitro and Vivo The abuse of antibiotics in aquaculture and livestock no doubt has exacerbated the increase in antibiotic-resistant bacteria, which imposes serious threats to animal and human health. The exploration of substitutes for antibiotics from marine animals has become a promising area of research, and antimicrobial peptides (AMPs) are worth investigating and considering as potential alternatives to antibiotics. In the study, we identified a novel AMP gene from the mud crab Scylla paramamosain and named it Sparanegtin. Sparanegtin transcripts were most abundant in the testis of male crabs and significantly expressed with the challenge of lipopolysaccharide (LPS) or Vibrio alginolyticus. The recombinant Sparanegtin (rSparanegtin) was expressed in Escherichia coli and purified. rSparanegtin exhibited activity against Gram-positive and Gram-negative bacteria and had potent binding affinity with several polysaccharides. In addition, rSparanegtin exerted damaging activity on the cell walls and surfaces of P. aeruginosa with rougher and fragmented appearance. Interestingly, although rSparanegtin did not show activity against V. alginolyticus in vitro, it played an immunoprotective role in S. paramamosain and exerted an immunomodulatory effect by modulating several immune-related genes against V. alginolyticus infection through significantly reducing the bacterial load in the gills and hepatopancreas and increasing the survival rate of crabs. Introduction It is estimated that China accounts for over 60% of the global aquaculture production under the accelerated development of aquaculture industry [1]. Accordingly, various diseases often occur in the process of aquaculture, especially the bacterial infectious diseases, which cause the antibiotics widely used in aquaculture, either as pharmaceuticals in control diseases or routinely used in feedstuff as additives. The abuse of antibiotics leads to antibiotic residual problems in aquatic products. Through the consumption of aquatic products tainted by antibiotics, humans may acquire adverse drug reactions [2]. In particular, the abuse of antibiotics increased numbers of antibiotic-resistant pathogenic microorganisms in the aquatic environment, which poses a challenge to the development and use of antibiotic strategies to control fish diseases [3,4]. As it is known that antibiotic medications have been widely used not only in clinical treatment and the prevention of microbial infections, but also in feedstuffs [5], the wide spread of antimicrobial resistance (AMR) seriously affects animal and human health [6]. To control the antibiotic-resistant pathogens, a variety of effective first-line drug treatments (such as chloramphenicol, erythromycin, and terramycin) have recently been developed to control aquatic bacteria; however, these drugs often negatively affect many organisms, including fish and humans [3,7]. Therefore, the exploration and development of effective alternatives to substitute for antibiotics becomes a promising research hotspot. It is well known that marine invertebrates including crustaceans mainly depend on innate immune defense to protect themselves against invading pathogens. 
Of various effective immune-related components, the antimicrobial peptides (AMPs) are the most concerned because they play a significant role in innate immunity and serve as effective defense weapons against bacterial, fungal, and viral infections [8,9]. The antimicrobial mechanism of most AMPs is to disrupt the membrane integrity of invading microorganisms [10,11]. Compared with antibiotics, AMPs can offer multiple advantages as candidates for the development of antimicrobial agents, as their uses may include acting alone or in synergy with other antimicrobial agents to reduce the effective bactericidal concentration and thereby reduce cytotoxicity, and it is not easy to induce drug resistance in bacteria [12]. In addition to direct antibacterial functions, AMPs have an important capability to regulate the innate immune system [13]. Therefore, AMPs can not only improve the immune resistance of aquatic animals but also alleviate the problems of bacterial resistance and antibiotic contamination of aquatic products in aquaculture. AMPs can also produce immunological protection against bacterial challenge in vivo. Epinecidin-1, a synthetic 21-mer antimicrobial peptide originally identified from grouper (Epinephelus coioides), significantly improves the survival rate of zebrafish infected with Vibrio vulnificus [14]. LcLEAP-2C from large yellow croaker (Larimichthys crocea) can reduce the mortality of large yellow croaker after V. alginolyticus challenge [15], and white spot syndrome virus (WSSV) pre-incubated with anti-lipopolysaccharide factor (ALF) results in an increased survival rate of red claw crayfish (Cherax quadricarinatus) [16]. Similarly, in our laboratory, the recombinant product of one AMP SpHyastatin, which is identified in S. paramamosain can enhance the protection of the host against Vibrio parahaemolyticus infection in crabs [17]; there are two other AMPs: rSpALF7 could obviously improve the survival of crabs infected by V. alginolyticus [18] and rScyreprocin significantly decreased the mortality of Vibrio harveyi-infected marine medaka [19]. The action mechanism of antimicrobial peptides in vivo has been also investigated. Several recent studies have found that the administration of AMPs to fish can lead to a decrease in the number of bacteria in tissues, showing a direct antibacterial activity in vivo [20]. Some AMPs could be attributed to their ability to enhance immune response by modulating host gene expression [13], inducing or inhibiting cytokine production [20], and promoting the production of antimicrobial substances such as lysozymes and antioxidant enzymes [21]. In the study, based on the transcriptome database of S. paramamosain established by our laboratory, we identified an uncharacterized gene for the first time and named it Sparanegtin. The expression profiles of Sparanegtin in S. paramamosain with the challenge of LPS or V. alginolyticus were investigated. The recombinant product of Sparanegtin (rSparanegtin) in a prokaryotic expression system Escherichia coli was obtained. The antimicrobial activity assay, scanning electron microscopy (SEM), observation, and microbial surface components binding assays were performed to analyze the antimicrobial features of rSparanegtin against various microorganisms in vitro. In addition, the effect of rSparanegtin in vivo was evaluated by detecting the bacterial clearance ability in the gills and hepatopancreas of S. paramamosain infected with V. 
alginolyticus, as well as any effect on the expression patterns of some immune-related genes after the in vivo administration of rSparanegtin. Overall, this study aims to characterize the new AMP Sparanegtin, preliminarily elucidating its function, immune-protective effect, and the underlying mechanism, thereby providing effective strategies for disease control in mud crab aquaculture and a potential effective antimicrobial agent that could substitute for antibiotics in animal husbandry or medicine in the future. Results Cloning and Sequence Analysis of Sparanegtin The full-length cDNA sequence of Sparanegtin was obtained, which is 525 bp, including a 252-bp open reading frame (GenBank accession number: MN612064). It had a predicted signal peptide of 23 aa, and the cleavage position is between Gly-23 and Ala-24. The mature peptide contained 60 amino acid residues, and its calculated mass is 5.818 kDa with an estimated isoelectric point (pI) of 5.2 and a total net charge of −1 (Figure 1A). The predicted tertiary structure of Sparanegtin contains three α-helices (Figure 1B). Gene Expression Profiles of Sparanegtin The qPCR results showed that Sparanegtin was widely distributed in different tissues (Figure 2A,B). In male adult crabs, Sparanegtin was dominantly expressed in the testis (Figure 2A), and the highest expression level of Sparanegtin was found in the hemocytes of female adult crabs (Figure 2B). We further investigated the expression profiles of Sparanegtin in the testis and hemocytes of male crabs after LPS or V. alginolyticus challenge (Figure 2C-F). In the testis, the expression of Sparanegtin was significantly down-regulated by LPS challenge at 3 hpi (Figure 2C), while it showed significant up-regulation at 3 hpi and 72 hpi under V. alginolyticus challenge (Figure 2D). In the hemocytes, the Sparanegtin gene was significantly up-regulated at 3 hpi under both LPS and bacterial challenge (Figure 2E,F). rSparanegtin Shows Antimicrobial Activity The recombinant product of Sparanegtin (rSparanegtin) was successfully expressed in E. coli.
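The sequence-derived parameters reported in the sequence analysis above (calculated mass of about 5.8 kDa, pI 5.2, net charge of −1) are the kind of values that can be reproduced for any mature peptide with Biopython; the sketch below assumes Biopython is available and uses a placeholder sequence, not the actual Sparanegtin mature peptide, so the printed values are illustrative only.

```python
# Illustrative calculation of peptide physicochemical parameters (Biopython).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

mature = "ASDEFGHIKLMNPQRSTVWYASDEFGHIKLMNPQRSTVWYASDEFGHIKLMNPQRSTVW"  # placeholder sequence

pa = ProteinAnalysis(mature)
print(f"calculated mass   : {pa.molecular_weight() / 1000:.3f} kDa")
print(f"isoelectric point : {pa.isoelectric_point():.2f}")

# Crude net charge estimate at neutral pH: basic (K, R) minus acidic (D, E) residues.
net_charge = mature.count("K") + mature.count("R") - mature.count("D") - mature.count("E")
print(f"approximate net charge: {net_charge:+d}")
```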
SDS-PAGE analysis showed that the purity of rTrx and rSparanegtin was high, as shown in Figure 3A. In addition, the results from the mass spectrometry also confirmed that the purified protein was the target protein rSparanegtin (Figure S1). The antimicrobial activity of rSparanegtin was then determined. As shown in Table 1, rSparanegtin displayed good antimicrobial activities against several Gram-negative bacteria (E. coli, P. aeruginosa, P. stutzeri, P. fluorescens, S. flexneri), Gram-positive bacteria (B. subtilis, C. glutamicum, S. aureus) (MICs ranging from 12 to 48 μM and MBCs ranging from 24 to 48 μM), and yeast (C. neoformans and P. pastoris GS115). Preliminary Study on the Antibacterial Mechanism of rSparanegtin 2.4.1. Binding Properties An ELISA assay was used to investigate the binding properties of rSparanegtin to different microbial surface molecules and bacteria. In order to evaluate whether the label protein Trx would have any effect on the following results, rTrx was selected as the control group. Compared with the rTrx group, rSparanegtin had strong binding affinity with LPS, LTA, and PGN in a concentration-dependent manner, and the calculated apparent dissociation constants (Kd) were 0.2375, 0.3905, and 0.6246 µM, respectively (Figure 3B). Killing Kinetics The results of the time-killing kinetic assay were applied to further evaluate the bactericidal activity of rSparanegtin. When rSparanegtin was incubated with P. aeruginosa at a concentration of 48 µM, all bacteria could be killed after 4 h of incubation (Figure 3C). rSparanegtin Induces Morphological Changes in Microorganisms In order to study the antibacterial mechanism of rSparanegtin against P. aeruginosa, SEM was employed to observe the morphological changes of the microbial membrane after rSparanegtin and rTrx treatment. After incubation with rSparanegtin or rTrx for a certain period of time, the SEM images of P. aeruginosa treated with rSparanegtin showed a significant destruction of membrane integrity and even leakage of cell contents compared with the control group and the rTrx group (Figure 3D). rSparanegtin Shows No Cytotoxicity and Could Reduce the V. alginolyticus Endotoxin Level In Vitro The cytotoxicity of rSparanegtin was analyzed using primary cultured crab hemocytes, HEK-293T, and NCI-H460 cells. As shown in Figure 4A-C, rSparanegtin showed no cytotoxicity. In addition, it was found that rSparanegtin treatment could significantly reduce the endotoxin level of V. alginolyticus in a dose-dependent manner. Under the treatment of 48 µM, the endotoxin level was reduced by about 70% (Figure 4D). Figure 4. Cytotoxicity of rSparanegtin towards crab hemocytes (A), HEK-293T (B), and NCI-H460 (C) cells was determined by the MTS method; data are presented as mean ± standard deviation (SD) (n = 3). **: p < 0.01, one-way analysis of variance (ANOVA) and Dunnett post-test. Endotoxin level of V. alginolyticus after rSparanegtin treatment in vitro (D). In vivo protective effect of rSparanegtin (E): rSparanegtin (20 µg/crab), rTrx (20 µg/crab), or PBS was incubated with V. alginolyticus (1 × 10⁶ CFU/crab) at room temperature for 60 min and then injected into the male crabs (n = 20 for each group). The survival curves were analyzed using the Kaplan-Meier log-rank test.
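The apparent dissociation constants reported above for LPS, LTA, and PGN come from saturation-binding curves; a minimal sketch of such a fit, using the binding model given later in the Methods (A = Amax·[L]/(Kd + [L])) with synthetic absorbance values rather than the measured ELISA data, is shown below.

```python
# Fit a one-site saturation binding model to synthetic OD450 data to estimate Kd.
import numpy as np
from scipy.optimize import curve_fit

def binding(L, Amax, Kd):
    return Amax * L / (Kd + L)

conc = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])         # ligand/protein series (illustrative units)
a450 = np.array([0.02, 0.14, 0.25, 0.39, 0.54, 0.67, 0.75, 0.80])  # synthetic absorbance at 450 nm

(amax, kd), _ = curve_fit(binding, conc, a450, p0=[1.0, 0.3])
print(f"Amax = {amax:.2f}, apparent Kd = {kd:.3f} (same units as the concentration series)")
```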
2.6. The Immunoprotective Effect of rSparanegtin on S. paramamosain 2.6.1. Survival Rate Comparison To investigate the in vivo protective effect of rSparanegtin, male mud crabs were challenged with different groups, including the PBS and V. alginolyticus pre-incubation group (short as PBS group), the rTrx and V. alginolyticus pre-incubation group (short as rTrx group), and the rSparanegtin and V. alginolyticus pre-incubation group (short as rSparanegtin group). As shown in Figure 4E, 48 h after the different treatments, the survival rate of the crabs in the PBS and rTrx groups dropped to 50%, while the survival rate of the rSparanegtin group was around 75%. Crabs in the PBS and rTrx groups died faster, and none of them survived 120 h after injection, while the survival rate of the rSparanegtin group was still about 40% (p < 0.05) (Figure 4E). 2.6.2. Pre-Incubation of rSparanegtin and V. alginolyticus Reduces Bacterial Load in the Tissues Bacterial clearance represents a major endpoint of innate host immunity in response to infection. As we all know, AMPs are important components of the innate immune system. We evaluated the ability of rSparanegtin to eliminate bacteria in the tissues of mud crabs under the different treatments mentioned above. As shown in Figure 5A, compared with the PBS and rTrx groups, the rSparanegtin group showed a significant reduction in V. alginolyticus load in the gills at 3, 6, 12, and 24 hpi (Figure 5A). In the hepatopancreas, the V. alginolyticus load significantly decreased at 6, 12, and 24 hpi (Figure 5B). Figure 5. rSparanegtin (20 µg/crab), rTrx (20 µg/crab), or PBS was incubated with V. alginolyticus (1 × 10⁶ CFU/crab) at room temperature for 60 min and then injected into the base of the right fourth leg of crabs. Infected crabs were dissected, and tissues including gills (A), midgut, and hepatopancreas (B) were collected at different time points (3, 6, 12, and 24 h). Homogenates were cultured on marine broth 2216E plates. Colony numbers were normalized to tissue weight. Data represent the bacterial load in the gills, midgut, and hepatopancreas. * p < 0.05, ** p < 0.01, *** p < 0.001. 2.6.3. Pre-Incubation of rSparanegtin and V. alginolyticus Modulates Immune-Related Gene Expression Profiles The results of qPCR showed the effect of the pre-incubation of rSparanegtin and V. alginolyticus on the immune response of S. paramamosain (Figure 6). Compared with the PBS group and the rTrx group, the transcription levels of the canonical components of the immune pathways (including SpToll2, SpMyd88, and SpSTAT), two AMPs (SpHyastatin and SpALF2), and antioxidant enzyme genes (including SpCAT, SpSOD, and SpGPx) were increased significantly at 6 h in the rSparanegtin group.
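The bacterial loads compared in Section 2.6.2 are colony counts normalized to tissue weight (see the Figure 5 legend and the Methods); a minimal sketch of this normalization is given below, in which the plated volume and dilution factor are assumptions for illustration rather than values reported per sample.

```python
# Convert plate colony counts to CFU per gram of tissue.
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml, homogenate_volume_ml, tissue_weight_g):
    cfu_in_homogenate = colonies * dilution_factor * (homogenate_volume_ml / plated_volume_ml)
    return cfu_in_homogenate / tissue_weight_g

# e.g. 87 colonies from a 1:100 dilution, 0.1 mL plated from a 1 mL homogenate of 0.15 g gill tissue
print(f"{cfu_per_gram(87, 100, 0.1, 1.0, 0.15):.2e} CFU/g")
```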
Figure 6. rSparanegtin effects on the V. alginolyticus infection-mediated immune gene expression profiles in S. paramamosain. Crabs were divided into PBS + V. alginolyticus, rTrx + V. alginolyticus, and rSparanegtin + V. alginolyticus groups. The expression levels of SpSOD, SpCAT, SpGPx, SpToll2, SpMyd88, SpSTAT, SpALF2, and SpHyastatin (A-H) were evaluated using qPCR at 3, 6, 12, and 24 h post-injection. Each bar represents the mean ± SD (n = 5). The same letters (a-b) indicate no significant difference between groups, and different letters indicate statistically significant differences between groups (p < 0.05) as calculated by one-way ANOVA followed by Tukey's test. It was noted that only the means at each time point were compared for the denotation with the letters, whereas the means at different time points could not be compared with one another. Discussion In this study, based on the transcriptome database of S. paramamosain established by our laboratory, we identified a novel AMP and named it Sparanegtin. According to the theoretical pI of 5.2 of its mature peptide, Sparanegtin is an anionic AMP. As is known, most reported AMPs are cationic peptides; however, more anionic AMPs have been gradually identified in different species in recent years and also have potent antimicrobial activity. Dermcidin is a novel human antibiotic peptide secreted by sweat glands, has a net negative charge of −5, and shows antimicrobial activity against a variety of pathogenic microorganisms [22]. Three antifungal peptides from Litopenaeus stylirostris and Litopenaeus vannamei have a negative net charge (acidic pI) at physiological pH and a broad spectrum of antifungal activity [23].
Our previous studies report that two novel AMPs, Scygonadin [24] and its homolog SCY2 [25], are anionic peptides and both have antimicrobial activity. In the present study, it was found that rSparanegtin displayed potent activity against several Gram-negative bacteria (E. coli, P. aeruginosa, P. stutzeri, P. fluorescens, and S. flexneri) (MICs ranging from 12 to 48 µM), Gram-positive bacteria (B. subtilis, C. glutamicum, and S. aureus), and yeast (C. neoformans and P. pastoris GS115) (MICs ranging from 24 to 48 µM) (Table 1). The in vivo expression pattern of the Sparanegtin gene was tissue-specific. The mRNA transcripts of Sparanegtin were highly expressed in the testis of male crabs. In addition, some known AMPs are sex-specifically expressed; for instance, Adropin is specifically expressed in the ejaculatory duct of Drosophila melanogaster [26], as also observed in our earlier study on Scygonadin, which is dominantly expressed in the ejaculatory duct of male mud crabs and is involved in reproductive immunity [24]. A recently reported AMP, scyreprocin, was identified as an interacting partner of SCY2 from the reproductive system of male S. paramamosain and is highly expressed in the testis [19]. It is known that testes are organs of the male reproductive system of decapod crustaceans that harbor germ cells and produce spermatozoa [27], as well as being functional either at the beginning of or during the entire spermatogenesis process [28]. Therefore, Sparanegtin, which is highly present in the testes, may play an immune defense role in spermatogenesis and the reproduction process of male crabs. Binding to the surface of microorganisms is the first step for an AMP to exert its antimicrobial effect. In order to better understand the underlying antimicrobial mechanism of AMPs, microbial cell wall polysaccharide binding assays were conducted in this study. The present study revealed that rSparanegtin had a strong binding ability to LPS, PGN, and LTA in a concentration-dependent manner and exhibited a higher binding ability to LPS than to PGN and LTA. Many AMPs have been reported to show similar activities via binding to microbial cell wall polysaccharides. rPcALF1 from the red swamp crayfish (Procambarus clarkii) could bind different microbial polysaccharides to different extents, most strongly LPS, followed by glucan, and least LTA, and was further found to have stronger antibacterial activity against Gram-negative bacteria [29]. In Marsupenaeus japonicus, MjCru I-1 could agglutinate and bind bacteria through bacterial cell wall molecules including LPS, LTA, and PGN, and had antibacterial activity against some bacteria by destroying the bacterial membrane [30]. rLvCrustinB from the Pacific white shrimp Litopenaeus vannamei directly binds to polysaccharides, including PGN, LTA, and LPS, indicating that LvCrustinB may be involved in the defense against Gram-positive and Gram-negative bacteria [31]. In this study, the SEM images of P. aeruginosa showed a significant destruction of membrane integrity and even leakage of cell contents, suggesting that the activity of rSparanegtin may occur via interaction with specific components of the bacterial cell wall. This is consistent with the fact that rSparanegtin has high antimicrobial activity against P. aeruginosa. The antimicrobial mechanism of rSparanegtin may be similar to that of most AMPs, which destroy the integrity of the microbial membrane, leading to the leakage of the cytoplasmic contents and ultimately killing the cells [32].
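For reference, the MIC values discussed above are defined in the Methods as the lowest concentration that does not induce visible bacterial growth relative to the negative control; reading such a value off a two-fold dilution series can be sketched as follows, with the OD readings and growth threshold assumed purely for illustration.

```python
# Read the MIC from a two-fold dilution series of OD600 measurements.
def mic_from_series(concentrations_um, od600, blank_od, threshold=0.05):
    """Lowest concentration whose OD stays within `threshold` of the uninoculated blank."""
    inhibited = [c for c, od in zip(concentrations_um, od600) if od - blank_od <= threshold]
    return min(inhibited) if inhibited else None

concs = [48, 24, 12, 6, 3, 1.5]               # µM, two-fold series
ods = [0.04, 0.05, 0.06, 0.35, 0.61, 0.80]    # hypothetical OD600 readings
print(f"MIC is approximately {mic_from_series(concs, ods, blank_od=0.04)} µM")
```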
It was interesting to note in the study that the survival rate of S. paramamosain challenged with V. alginolyticus was increased when rSparanegtin was given to crabs; correspondingly, there was a significant reduction in V. alginolyticus load in the gills and hepatopancreas at 6, 12, and 24 h. The results suggested that rSparanegtin might exert an immunological defense against the invading V. alginolyticus by which the survival rate of crabs was enhanced. Analysis of the Sparanegtin gene in vivo demonstrated that this peptide was significantly expressed in the testis at 3 h and 72 h or hemocytes at 3 h with the V. alginolyticus challenge; meanwhile, other AMPs such as SpHyastatin and SpALF2 as well as signal pathway associated genes such as SpToll2, SpMyd88, and SpSTAT were up-regulated at 6 h. All of these findings suggested that Sparanegtin may directly participate in the immune response or indirectly play a role by inducing the expression of other immune-associated genes with the injection of rSparanegtin when bacterial infection occurred in crabs; that means that Sparanegtin may generate immunoprotective and immunomodulatory activities. AMPs as products of immune response are testified to play important roles in killing or cleaning the infected pathogens directly. The significant expression of SpHyastatin and SpALF2 at 6 h after V. alginolyticus challenge might be due to the immunomodulatory effect of rSparanegtin. In a previous study on SpHyastatin, this peptide is down-regulated at 24 h and then up-regulated at 96 h but does not show any change in expression at 6 h after bacterial challenge [33], suggesting that the expression of SpHyastatin might be directly induced by the injection of rSparanegtin. The significant expression of SpToll2, SpMyd88, and SpSTAT implied that the immune-associated signal pathways participated in the defense against the V. alginolyticus challenge and may induce the activation of downstream effectors such as AMPs. In addition to resistance to a variety of pathogenic microorganisms, AMPs are also reported to regulate the expression of other immune genes [34]. We found that the preincubation of rSparanegtin and V. alginolyticus could induce the transcription levels of several immune-related genes, including immune signaling pathway-related genes (SpToll2, SpMyd88, SpSTAT), AMPs (SpHyastatin and SpALF2), and antioxidant-associated genes (SpSOD, SpCAT, and SpGPx). Such immunoenhancing properties are demonstrated in several other marine-derived AMPs, for example, shrimp and limulus anti-lipopolysaccharide factor [35][36][37]. The innate humoral immune response is mainly mediated by three immune signaling pathways, namely, the Toll pathway, IMD pathway, and JAK/STAT pathway [38]. By regulating or stimulating the Toll signaling pathway, the production of some immune factors related to its downstream pathway, such as antimicrobial peptides (AMPs), can be activated against microbial infection [39]. The JAK/STAT signaling pathway positively regulates AMP gene expression that plays an important role in immune response [40]. In this study, the expression trend of both AMPs (SpHyastatin and SpALF2) genes was consistent with the expression of SpToll2, SpMyD88, and SpSTAT, suggesting that the expression of both AMPs might be regulated through the Toll and JAK/STAT pathways. The up-regulation of SpHyastatin and SpALF2 may participate in eliminating the infected bacteria. 
In addition, bacterial infection can prompt the body to produce ROS, and excessive ROS will cause tissue damage and inflammation [41,42]. The up-regulated expression of antioxidant enzymes (SpSOD, SpCAT, and SpGPx) might be associated with the action of removing ROS in vivo. These results suggested that Sparanegtin was likely generating an immunomodulatory effect that helps eliminate the invading bacteria. The interplay among the induced expression of AMPs, the degree of clearance of infecting bacteria, and the survival rate of marine animals has been widely reported in previous studies. For example, MjALF-E2 was upregulated by bacterial challenge and could promote the clearance of bacteria in vivo. After knockdown of MjALF-E2 and infection with Vibrio anguillarum, shrimp showed high and rapid mortality compared with GFPi shrimp, suggesting that MjALF-E2 serves a protective function against bacterial infection in shrimp [43]. A crustin gene, PcCru, isolated from the red swamp crayfish Procambarus clarkii is significantly induced by bacterial stimulation at both the translational and transcriptional levels and could protect crayfish from infection by the pathogenic bacterium Aeromonas hydrophila in vivo [44]. In a bacterial challenge test, As-CATH4 and 5 (two vertebrate-derived cathelicidin-family HDPs) could significantly decrease the bacterial numbers in crabs and increase the survival rates of crabs in both pre-stimulation and co-stimulation groups [45]. Similarly, the expression level of PcALF1 is induced by bacteria, and the injection of PcALF1 in crayfish (Procambarus clarkii) enhances the elimination of bacteria in vivo [29]. Our previous studies on two other AMPs, SCY2 and SpHyastatin, also show an immunoprotective effect on S. paramamosain, although the two differ in antimicrobial activity and in vivo expression patterns. rSpHyastatin, a peptide that is highly expressed in the hemolymph upon bacterial challenge, could confer immune-protective resistance against pathogenic challenge in S. paramamosain, while causing less significant changes in the mRNA expression levels of all tested immune- and antioxidant-associated genes [17]. For SCY2, even though its gene is uniquely expressed during the mating of crabs and could not be directly induced by the injection of bacteria, rSCY2 could significantly increase the survival rate of S. paramamosain [46]. It is worth noting that rSparanegtin had no inhibitory or killing effect on cultured V. alginolyticus in vitro; however, the Sparanegtin gene was significantly expressed at some time points upon V. alginolyticus challenge in vivo, and rSparanegtin could significantly improve the survival rate of S. paramamosain after V. alginolyticus challenge as well as reduce the V. alginolyticus load in the gills and hepatopancreas. A similar phenomenon was also found in our earlier study on the AMP SpHyastatin, which was also identified in S. paramamosain [17]. Materials and Methods Animals, Challenge and Tissue Collection Mud crabs (S. paramamosain) were purchased from the Zhangzhou Crab Farm (Fujian, China). Healthy male and female adult mud crabs (body weight 300 ± 30 g, n = 5) were dissected, and tissues including the testis, anterior vas deferens, seminal vesicle, posterior vas deferens, ejaculatory duct, posterior ejaculatory duct, penis, ovary, spermathecae, reproductive duct, muscle, thoracic ganglion, gills, brain, midgut, subcuticular epidermis, eye stalk, heart, hepatopancreas, and stomach were collected. Hemocytes were isolated from the hemolymph as described previously [47].
For the challenge experiment, adult male crabs (body weight 300 ± 30 g, n = 5) were injected with LPS at a dosage of 0.5 mg kg⁻¹ or V. alginolyticus (1 × 10⁶ CFU crab⁻¹). Crabs injected with crab saline (NaCl, 496 mM; KCl, 9.52 mM; MgSO4, 12.8 mM; CaCl2, 16.2 mM; MgCl2, 0.84 mM; NaHCO3, 5.95 mM; HEPES, 20 mM; pH 7.4) were set up as the control group. Tissue samples (testes and hemocytes) were collected at 3, 6, 12, 24, 48, and 72 h post-injection (hpi). All tissues were stored at −80 °C until use. All animal procedures were carried out in strict accordance with the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals and were approved by the Animal Welfare and Ethics Committee of Xiamen University. Cloning, Expression, Purification, and Analysis of Recombinant Proteins Total RNA of the testis was extracted using TRIzol™ reagent (Invitrogen, Carlsbad, CA, USA), and cDNA was generated using the PrimeScript™ RT reagent Kit with gDNA Eraser (Takara, Dalian, China). The cDNA templates for 5′- and 3′-rapid amplification of cDNA ends (RACE) PCR were synthesized using a SMARTer® RACE 5′/3′ Kit (Takara, Dalian, China). Gene-specific primers were designed based on the partial sequences obtained from the transcriptome database established by our laboratory (Table 2). The amplified fragments were cloned into the pMD18-T vector (Takara, Dalian, China) and sequenced by Borui Biotechnology Ltd. (Xiamen, China). The open reading frame of Sparanegtin was constructed into the pET-32a (+) vector (with 6× His tag and thioredoxin (Trx) tag), transformed into E. coli BL21 (DE3), and further expressed (the specific primer sequences are listed in Table 2). A pET-32a (+) vector with only the 6× His tag and Trx (thioredoxin) tag was constructed, and the expressed product was used as a control. Isopropyl β-D-thiogalactoside (IPTG) was added to a final concentration of 0.5 mM to induce protein expression at 28 °C for 8 h. The recombinant Sparanegtin (rSparanegtin) was expressed and purified through a HisTrap™ FF crude column (GE Healthcare, Chicago, IL, USA) on the ÄKTA Pure system (GE Healthcare, Chicago, IL, USA) according to the standard protocol. The purified proteins were dialyzed and concentrated, and the protein concentration was determined by Bradford assay. The purified proteins were confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), Western blotting, and mass spectrometry identification. All recombinant proteins were stored at −80 °C. Quantitative Real-Time PCR The total RNA of all samples was extracted and cDNA was synthesized as described above. Quantitative reverse transcription PCR (qRT-PCR) was performed using the cDNA as the template to detect the expression level of Sparanegtin in a real-time thermal cycler (ABI 7500, Waltham, MA, USA) using FastStart DNA Master SYBR Green I (Roche Diagnostics, Mannheim, Germany). The expression profiles of the Sparanegtin gene in various adult crab tissues were determined by absolute quantitative real-time PCR (qPCR), and the expression changes of the Sparanegtin gene in response to LPS and V. alginolyticus challenge were analyzed by relative qPCR. The specific primer sequences (Sparanegtin-qPCR-F/Sparanegtin-qPCR-R, GAPDH-qPCR-F/GAPDH-qPCR-R) are listed in Table 2. The qPCR cycle conditions were set as follows: an initial denaturation step at 95 °C for 10 min, followed by 40 cycles at 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 1 min.
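The relative expression values used for the challenge experiments are calculated with the 2^−ΔΔCt method referred to in the next paragraph; a minimal sketch with invented Ct values (target gene normalized to the GAPDH reference, challenged versus control crabs) is given below.

```python
# Livak 2^-ddCt relative expression; the Ct values below are invented examples.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize target to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

print(f"relative expression = {fold_change(22.1, 16.3, 24.6, 16.4):.2f}-fold")
```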
The 2 −∆∆Ct algorithm was applied to the expression profile analysis [48]. Antimicrobial Assay Microorganisms in the logarithmic growth phase were harvested and used to evaluate the antimicrobial activity of rSparanegtin. The minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) were determined according to the previously described liquid growth inhibition assay, which were performed three times independently [49]. Compared with the negative control, the MIC value is defined as the lowest protein concentration that does not induce visible bacterial growth. Then, we spread the culture without visible bacterial growth on a solid medium plate. The MBC is the concentration that kills more than 99.9% of the microorganisms after incubation at 28 or 37 • C for 24 h. Binding Assays In order to determine the binding properties of rSparanegtin with lipopolysaccharides (LPS B5, Sigma, St. Louis, MO, USA), lipoteichoic acid (LTA, L2515, Sigma, St. Louis, MO, USA), and peptidoglycan (PGN from Bacillus subtilis, Sigma, St. Louis, MO, USA), a modified ELISA assay was performed as described previously [19]. Briefly, a 96-well ELISA plate was coated overnight with LPS, LTA, and PGN at 4 • C; then, it was blocked with 5% skimmed milk and incubated with a serial dilution of rSparanegtin and rTrx (0 to 5 µg mL −1 ) for 2 h at 37 • C. Bound peptides were detected by incubation with mouse anti-His antibody (1:3000, prepared in 1% skimmed milk) followed by adding goat antimouse HRP antibody (1:5000, prepared in 1% skim milk). After the colorimetric reaction, the absorbance at 450 nm was measured using a microplate reader (TECAN GENios, GMI, Brooklyn Park, MN, USA). The independent assays were performed three times. The binding parameters, apparent dissociation constant (Kd), and maximum binding (Amax) were determined using non-linear fitting as A = Amax [L] / (Kd + [L]), where A is the absorbance at 450 nm and [L] is the protein concentration [17]. Time-Killing Kinetic Assay The Gram-negative bacteria P. aeruginosa were subjected for time-killing kinetic assay according to the previous description. rSparanegtin was incubated with bacteria at a concentration of 48 µM. The cultures were sampled and plated at different time points (n = 3). The plates were incubated at 37 • C for 24 h, and the total viable count (TVC) was determined. The independent experiments were performed three times. SEM Observation SEM was used to further study the antibacterial mechanism of rSparanegtin. P. aeruginosa (5 × 10 7 CFU mL −1 ) was prepared as described in the antimicrobial assay. PBS, rTrx, and rSparanegtin were separately added into each individual culture medium and incubated at a concentration of 48 µM for 30 min. The microbial cells were collected and fixed with pre-cooled 2.5% glutaraldehyde at 4 • C for 2 h. Then, the samples were dehydrated with a graded series of ethanol (30%, 50%, 70%, 80%, 95%, and 100%) and further dehydrated in a critical point dryer (EM CPD300, Leica, Wetzlar, Germany) and gold coated [50]. Finally, the change in morphology of the bacteria was observed by SEM (SUPRA 55 SAPPHIRE, Carl Zeiss, Oberkochen, Germany). Cytotoxicity Assay The cytotoxicity of rSparanegtin was evaluated using hemocytes from S. paramamosain. The hemocytes of S. paramamosain were isolated as previously described [51]. 
Briefly, the hemocytes were maintained in L-15 medium prepared in crab saline and supplemented with 5% fetal bovine serum, inoculated on a 96-well cell culture plate at approximately 10⁴ cells well⁻¹, and incubated overnight at 26 °C. HEK-293T cells were maintained in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum, and NCI-H460 cells were maintained in Roswell Park Memorial Institute 1640 medium supplemented with 10% fetal bovine serum. HEK-293T and NCI-H460 cells were inoculated on a 96-well cell culture plate and incubated at 37 °C with 5% CO2 overnight. Finally, all the cells were incubated with culture medium supplemented with various concentrations of rSparanegtin (3, 6, 12, 24, and 48 µM, n = 3). After 24 h of incubation, cell viability was assessed using a CellTiter 96® Aqueous Kit (Promega, Madison, WI, USA). The independent experiments were carried out three times. Endotoxin Assay The endotoxin level of V. alginolyticus after rSparanegtin treatment was detected with the ToxinSensor™ Chromogenic LAL Endotoxin Assay Kit (GenScript, Piscataway, NJ, USA) following the manufacturer's instructions [52]. When V. alginolyticus reached the logarithmic growth phase, the bacteria were collected and adjusted to a concentration of 10⁷ CFU/mL. Then, they were incubated with different concentrations of rSparanegtin (0, 12, 24, 48 µM, n = 3) at room temperature for 1 h and analyzed with a spectrophotometer at an absorbance of 545 nm (Agilent Technologies, Bayan Lepas, Malaysia). Each sample had three biological parallels. The independent experiments were carried out three times. 4.11. Evaluation of the In Vivo Activity of rSparanegtin on S. paramamosain Infected with V. alginolyticus In order to investigate the in vivo protective effect of rSparanegtin, we performed a mortality comparison assay using male S. paramamosain (average weight 40 ± 5 g) infected with V. alginolyticus. rSparanegtin, rTrx, and V. alginolyticus were prepared in PBS. First, the recombinant protein (20 µg/crab) was incubated with V. alginolyticus (1 × 10⁶ CFU/crab) at room temperature for 1 h, and then the mixture was injected into the base of the right fourth leg of the crabs. The control group received an equal volume of V. alginolyticus diluted in PBS. Sixty crabs were divided into three groups (the PBS and V. alginolyticus pre-incubation group, the rTrx and V. alginolyticus pre-incubation group, and the rSparanegtin and V. alginolyticus pre-incubation group) with 20 crabs in each group. The survival rates of crabs in each group were recorded at different time points (3, 6, 9, 12, 24, 36, 48, 60, 72, 96, and 120 h). Bacterial Load Assay and Quantification of Immune-Related Gene Expression after Different Treatments In order to investigate the bacterial load in tissues, male S. paramamosain (average weight 40 ± 5 g each) were subjected to the different treatments described above. The crabs were dissected, and tissues including hemocytes, gills, and hepatopancreas were collected at different time points (3, 6, 12, and 24 h, n = 5). Gills and hepatopancreas (0.1-0.2 g fresh weight per tissue) were homogenized in PBS. Then, the tissue homogenates were spread on marine broth 2216E plates, and the plates were incubated at 28 °C for 24 h. The colonies were counted separately for each sample at each time point. The total RNA of the collected tissues was extracted, and the cDNA was synthesized as described above. The expression profiles of several immune-related genes were analyzed by qRT-PCR.
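The mortality comparison described in Section 4.11 is analyzed with the Kaplan-Meier log-rank test (see Statistical Analysis below); a minimal sketch is given here, assuming the third-party lifelines package and using fabricated event times rather than the recorded crab survival data.

```python
# Kaplan-Meier survival estimate and log-rank comparison of two challenge groups.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hours to death; event = 1 if the crab died, 0 if still alive at 120 h (censored).
t_ctrl = np.array([24, 36, 48, 48, 60, 72, 96, 96, 108, 120])
e_ctrl = np.ones_like(t_ctrl)
t_trt = np.array([48, 72, 96, 120, 120, 120, 120, 120, 120, 120])
e_trt = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_trt, event_observed=e_trt, label="rSparanegtin + V. alginolyticus")

result = logrank_test(t_ctrl, t_trt, event_observed_A=e_ctrl, event_observed_B=e_trt)
print(f"log-rank p = {result.p_value:.4f}")
```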
The GenBank accession numbers for these genes are listed in Table S1. Statistical Analysis The results are presented as the mean ± standard deviation (SD). For the absolute qPCR assays, statistical analyses were performed by one-way analysis of variance (ANOVA) followed by a Tukey post-test. For the relative qPCR assays, statistical analyses were performed by two-way ANOVA followed by a Bonferroni post-test. For the cytotoxicity assays, statistical analysis was performed by one-way ANOVA followed by Dunnett's post-test. For the mortality comparison assay, data were analyzed using the Kaplan-Meier log-rank test. For the immune-related gene expression, one-way analysis of variance (ANOVA) was used for statistical analysis in SPSS 18.0 (IBM, Armonk, NY, USA) to determine expression differences between groups. Significance was accepted at p < 0.05. Conclusions In summary, a new antimicrobial peptide named Sparanegtin was identified in S. paramamosain; its transcripts showed a tissue-specific distribution and were significantly expressed upon bacterial challenge. rSparanegtin had antimicrobial activity, and its antimicrobial mechanism involved initial damage to the outer membrane of bacteria, eventually resulting in the loss of cellular components and the complete collapse of the cell architecture. rSparanegtin showed no cytotoxicity and could reduce the V. alginolyticus endotoxin level in vitro. This AMP had an in vivo protective and immunomodulatory effect in S. paramamosain, as it could reduce the bacterial load in tissues and enhance the survival rate of crabs challenged with V. alginolyticus. Taken together, Sparanegtin might be a potentially effective antimicrobial agent for use in aquaculture or animal husbandry.
2021-12-23T16:12:38.843Z
2021-12-21T00:00:00.000
{ "year": 2021, "sha1": "2bcf179b5d87646382d70b196ad5f41a0ba4e61c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/1/15/pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ad3f019cb572c98064aac7fead589ce5b3922711", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
11765504
pes2o/s2orc
v3-fos-license
Synchronization, Diversity, and Topology of Networks of Integrate and Fire Oscillators We study synchronization dynamics of a population of pulse-coupled oscillators. In particular, we focus our attention in the interplay between networks topological disorder and its synchronization features. Firstly, we analyze synchronization time $T$ in random networks, and find a scaling law which relates $T$ to networks connectivity. Then, we carry on comparing synchronization time for several other topological configurations, characterized by a different degree of randomness. The analysis shows that regular lattices perform better than any other disordered network. The fact can be understood by considering the variability in the number of links between two adjacent neighbors. This phenomenon is equivalent to have a non-random topology with a distribution of interactions and it can be removed by an adequate local normalization of the couplings. I. INTRODUCTION Synchronization of populations of interacting oscillatory units takes place in several physical, chemical, biological and even social systems [1][2][3]. Networks of interacting oscillators are currently used to modelize these phenomena. In this paper we will focus on a special kind of interacting oscillators, namely pulse coupled oscillators. These units oscillate periodically in time and interact, each time they complete an oscillation, with its coupled neighbors sending them pulses which modify their current states. These systems show a rich spectrum of possible behaviors which ranges from global synchronization [4] or spatio-temporal pattern formation [5,6] to selforganized criticality [7]. Although some theoretical approaches have been proposed, in general, the singular nature of pulse-like interactions does not allow to describe the system in terms of tractable differential equations. Despite this, some methods have been developed to find the attractors of the dynamics and study their relative stability [4,8,5]. In this paper, we want to focus on the effects that different topologies have on the dynamical properties of the network. In particular, we will study how does network's topology affect global synchronization. So far, most of the studies on networks of coupled oscillators have been done on either small connectivity lattices (usually 1D rings) [8,5] or globally coupled networks (all to all coupling) [4]. Nevertheless, there is some work done in net-works of continuously coupled oscillators (Kuramoto's) [9][10][11] and Hodgkin-Huxley neuron-like models [12] where different non-standard topologies are considered. Among these, the so-called Small-world networks seem to be an optimal arquitecture, in terms of activity coherence, for some of these coupled systems [11,13]. Pulse coupled oscillators are commonly used to modelize driven biological units such as pacemaker cells of the heart [14] and some types of neurons [15]. In these systems, synchronization is usually considered to be a relevant state. Regarding heart, pacemakers must be synchronized in order to give the correct heart rhythm avoiding arrhythmias or other perturbed states. In populations of neurons, synchronization has been experimentally reported [16] and is believed to play a role in information codification [17]. Therefore, it is interesting to check which kind of topologies makes the network reach a coherent state more easily and uncover why is it so by looking for its responsible mechanisms. 
We will focus on a whole family of networks characterized by an increasing degree of disorder, i.e., ranging from regular lattices to completely random networks. The structure of this paper is the following. In Sec. II we introduce the model of pulse-coupled oscillators which is going to be used throughout the paper. In Sec. III we start studying synchronization of populations of these coupled oscillators in random networks. In Sec. IV we compare random network performance with the more classical regular lattices. In Sec. V we consider a more general family of networks with a variable degree of randomness and study its synchronization properties. Moreover, the interplay between diversity, interaction and topology is also discussed. In the final Section we present our conclusions. II. BASICS We study the synchronization of a network of N oscillators interacting via pulses. The phase of each oscillator φ_i evolves linearly in time until one of them reaches the threshold value φ_th = 1. When this happens the oscillator fires and changes the state of all its vicinity according to φ_j → φ_j + ∆(φ_j), for j ∈ Γ(i), Γ(i) being the list of nearest neighbors of oscillator i. The nonlinear interaction is introduced through the Phase Response Curve (PRC) ∆(φ). We use a PRC which induces global synchronization (φ_1 = φ_2 = ... = φ_N) of the population of oscillators: ∆(φ) = εφ with ε > 0. This PRC is, indeed, the simplest type of interaction that always leads to a synchronized state whatever the initial conditions are. In other words, synchronization is the unique attractor of the dynamics. Although it has only been mathematically proved for all-to-all [4] and local [8] couplings, synchronization holds for all the topologies we have dealt with. Therefore the dynamics can also be expressed as dφ_i/dt = 1 + ε φ_i Σ_{j∈Γ(i)} δ(t − t_j), where t_j are the firing times of φ_j. To define a certain degree of synchronization in our simulations, we define the variable m, measured each time φ_1 = 0, as the fraction of oscillators whose phase is also zero at that instant. The choice of oscillator 1 as a reference is completely arbitrary. Notice that measuring the phases just after oscillator 1 fires ensures that these phases are 0 if they are synchronized with oscillator 1, independently of the order in which they fire. In this way, we obtain a series of system "snapshots" which mathematically correspond to a return map of the dynamics (see Fig. 1). The synchronization time T is thus defined as the time needed to reach m = 1. When this happens, all oscillators will always fire in unison. III. RANDOM NETWORKS We start studying synchronization of a population of coupled oscillators by defining a Random Network (RN). We restrict ourselves to the simplest type of RN [19]: we randomly select a pair of the N nodes and establish a link between them, repeating this procedure up to a certain number of links l. Notice that, with this wiring method, there is no guarantee of ending up with a connected network (one in which there is a path connecting any pair of nodes). Dealing with a network split into two or more clusters would make global synchronization (m = 1) impossible, so we should avoid such pathological configurations. In order to have a connected network one has to work above a threshold number of links which ensures connectivity [20], l ≫ (N/2) ln(N). Therefore we can study RN whose number of links l runs from the above limit up to the globally connected network (all-to-all coupling), which has l = N(N − 1)/2.
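To make the setup above concrete, the following is a minimal event-driven sketch (not the authors' code) of the dynamics of Sec. II on a random network of the kind just described: phases drift linearly, a firing unit multiplies each neighbor's phase by (1 + ε), and the run stops when all units fire in the same cascade, which corresponds to m = 1. All function and variable names are our own, and the link budget and connectivity check are illustrative choices.

```python
import math
import random
from collections import deque

def random_network(n, l, rng):
    """Draw l distinct undirected links between randomly chosen pairs of nodes."""
    edges = set()
    while len(edges) < l:
        i, j = rng.sample(range(n), 2)
        edges.add((min(i, j), max(i, j)))
    neigh = [[] for _ in range(n)]
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    return neigh

def is_connected(neigh):
    seen, queue = {0}, deque([0])
    while queue:
        for v in neigh[queue.popleft()]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(neigh)

def sync_time(neigh, eps, rng, max_firings=10**6):
    """Event-driven evolution until every oscillator fires in the same cascade (m = 1)."""
    n = len(neigh)
    phi = [rng.random() for _ in range(n)]
    t = 0.0
    for _ in range(max_firings):
        dt = 1.0 - max(phi)              # time until the most advanced unit hits the threshold
        t += dt
        phi = [p + dt for p in phi]
        firing = deque(i for i in range(n) if phi[i] >= 1.0)
        fired = set(firing)
        while firing:                    # a pulse from i multiplies each neighbor's phase by (1 + eps)
            i = firing.popleft()
            for j in neigh[i]:
                if j not in fired:
                    phi[j] *= 1.0 + eps  # PRC: Delta(phi) = eps * phi
                    if phi[j] >= 1.0:
                        fired.add(j)
                        firing.append(j)
        for i in fired:
            phi[i] = 0.0                 # units that fired are reset together
        if len(fired) == n:
            return t
    return None

rng = random.Random(0)
n, eps = 100, 0.01
l = int(n * math.log(n))                 # safely above the connectivity threshold (N/2) ln N
net = random_network(n, l, rng)
while not is_connected(net):
    net = random_network(n, l, rng)
print(sync_time(net, eps, rng))
```

Averaging the returned time over several random topologies and initial conditions, as the text describes for Figs. 2 and 3, yields the data from which the scaling exponents are estimated.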
In addition, if we want to study the transient to synchronization, one should always stay in the weak-coupling limit; otherwise synchronization would be achieved within a few firings due to the stronger interaction. As one expects, when the number of links l is increased, the time needed to reach synchronization T diminishes, having its lowest value for the globally connected case. What is really interesting is how T decreases as we consider networks with more links. It turns out that T follows a power law, T ∼ l^(−α), with an exponent that is independent of the number of nodes, where α = 1.30 ± 0.05 for ε = 0.01. In Fig. 2 this behavior is shown. In these simulation results, each point is averaged over different random topologies with the same number of links and over different arbitrary initial conditions for all the oscillators. In addition, one can study how T increases with the number of oscillators N. We find that it also follows a power-law behavior, T ∼ N^β, which does not depend on the number of links l considered, with β = 1.50 ± 0.05 for ε = 0.01. Therefore, once the interaction strength is set, we can characterize the synchronization time T by means of the network's geometrical properties through the scaling relation T(N, l) ∼ N^β l^(−α), which can be rewritten in the collapsed form T/N^β ∼ l^(−α). In Fig. 3 we plot the collapse of the data curves according to this rewritten relation (Eq. 9), and the agreement is excellent. The exponents α and β are constant within the error bars for the checked values of ε (0.1 > ε > 0.005). IV. RANDOM NETWORKS VERSUS REGULAR LATTICES Once we have seen the synchronization features of RN, it is interesting to compare them with the performance of Regular Lattices (RL). In the 1D RL we consider, each oscillator is coupled to its 2l/N nearest neighbors in a ring-like network (a 1D lattice with periodic boundary conditions). In Fig. 5 there is an example of an RL with 2l/N = 4. In order to make the comparison we must calculate T for the RL, always keeping the same number of nodes and links as in the RN cases. Since the RL with l = N is the connected topology with the minimum number of links, we can also explore configurations with fewer links than in the RN case. Another point one has to take into account when studying RL with a growing number of links l is that it is not possible to add just one link to pass from one configuration to another, since that would break the regularity of the lattice. Instead, one has to work with integer values of 2l/N, that is, adding a next-nearest neighbor to all oscillators when passing from one configuration to the next one with more links. Therefore, although we can start from an initial minimal configuration with fewer links, we have fewer points to study. In Fig. 4 results for ε = 0.01 are shown. One can clearly see that the RL performs better than the RN for all degrees of connectivity. This result holds for all ε > 0. Nevertheless, this difference is only appreciable for lower values of l, and it quickly vanishes as the network acquires more links. When we are close to the globally connected network, the synchronization features of both kinds of networks are roughly the same, while in the low-connectivity case the synchronization time T of the RN is much longer (about twice as long) than that of the RL. FIG. 5. Randomization procedure for an initial RL with links to first and second nearest neighbors. Each link is cut with probability p = 0.3 and re-wired between two randomly selected pairs of nodes (dashed lines). For p = 0 we recover the RL, since no link is re-wired, while for p = 1 the pure RN is obtained.
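As a sketch of the construction referenced in Fig. 5 (again illustrative code, not the authors' implementation): a 1D ring lattice is built with 2l/N nearest neighbors per node, each link is cut with probability p, and both of its endpoints are redrawn at random so that the total number of links stays constant; the dispersion of the number of links per node, which plays a central role in the next section, can then be computed directly.

```python
import random

def ring_lattice(n, k):
    """1D periodic lattice: every node is linked to its k nearest neighbours (k = 2l/N, even)."""
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            edges.add((min(i, j), max(i, j)))
    return edges

def rewire(edges, n, p, rng):
    """Cut each link with probability p and re-draw BOTH of its endpoints at random,
    keeping the number of links fixed (p = 0 gives the RL, p = 1 the RN)."""
    edges = set(edges)
    for e in list(edges):
        if rng.random() < p:
            edges.remove(e)
            while True:
                i, j = rng.sample(range(n), 2)
                new = (min(i, j), max(i, j))
                if new not in edges:
                    edges.add(new)
                    break
    return edges

def degree_dispersion(edges, n):
    """Variance of the number of links per node (zero for the RL, ~2l/N for the RN)."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    mean = sum(deg) / n
    return sum((d - mean) ** 2 for d in deg) / n

rng = random.Random(1)
n, k, p = 300, 16, 0.3
edges = rewire(ring_lattice(n, k), n, p, rng)
print(len(edges), degree_dispersion(edges, n))
```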
V. MIXED TOPOLOGIES So far, we have examined the synchronization features of the two extreme kinds of networks: the RN and the RL. Nevertheless, there exists a whole family of networks that lie between these two limits. They are networks of mixed nature: although they may have some random connections, they also possess an underlying regular structure. Recently, this kind of network has received a lot of attention [11,13,22], especially due to the so-called Small-world networks. These networks, basically a regular lattice with a very small amount of random connections, have the advantage of a low average distance among nodes while keeping a highly clustered structure. In this work, we examine the synchronization time for networks with all degrees of randomness, ranging from the RL to the RN. We parametrically characterize these networks by a re-wiring probability per link p. It defines the following randomization procedure: starting from an initial RL of l links, we cut each link with probability p and re-wire it between two randomly chosen pairs of nodes. Notice that our method differs slightly from others used by some authors, who rewire just one edge of the link [13] or add new ones [21]. In this way, we keep the number of links l constant and recover the previous two limiting cases, the RN and the RL, for p = 1 and p = 0 respectively (see Fig. 5). In Fig. 6 we see the synchronization time T for a network of N = 300 oscillators with 2l/N = 16. One can clearly see that T grows monotonically as we introduce more disorder into the system (increasing p). For different N and l the behavior of T is qualitatively the same. These results obviously raise a question: why does topological disorder slow down the synchronization process? The re-wiring process induces a random distribution of links for any oscillator. Therefore, two adjacent units can have a very different number of neighbors. This fact is crucial, since the incoming signal from the firings of the neighborhood of a given oscillator can be much larger or smaller than the signal that another of its neighbors receives. In this case, the two oscillators have different effective frequencies. The larger the difference in their effective driving, the more difficult it is to synchronize these two units. This can be thought of as a kind of dynamic frustration between two adjacent oscillators. One way of quantifying this problem is to check the variability in the number of neighbors per oscillator. In Fig. 7 we can see how the dispersion in the number of links per node grows as we induce more topological disorder. This dispersion σ² is zero for p = 0 (RL), whereas for p = 1 (RN) the distribution of links is known to follow a Poisson distribution with a variance equal to 2l/N when N → ∞ [19]. As we can see, Figs. 6 and 7 look quite similar: they show a monotonic growth with the re-wiring probability p which seems to saturate for values close to 1. Another way to check whether, in the topologically disordered model, this dynamic frustration is responsible for the delay in synchronization is to try to remove it. This can be done by thinking in terms of effective drivings: once we have seen that topological disorder induces a heterogeneity in these drivings, we can try to make them homogeneous again by means of a convenient local interaction normalization.
The normalization works as follows: without changing the topology, each oscillator i modifies all pulses it receives from the firing of any of its neighbors by the factor ⟨N(Γ)⟩/N(Γ(i)), where N(Γ(i)) is the number of neighbors of φ_i. This normalization means that the more pulses an oscillator receives, the less intense they are. The average number of neighbors ⟨N(Γ(i))⟩ is always 2l/N for all p. In Fig. 6 we see that this procedure does remove the dynamical frustration, lowering the time needed to achieve synchronization, and even making it shorter than in the unnormalized case for some small values of p. Therefore, with this rough method we are able to get rid of the effect that topological disorder had on the synchronization features of the network. From another point of view, one can think of this variability induced by the topological disorder as something equivalent to having some diversity in a population of coupled oscillators on an RL. Imagine, for instance, a population of oscillators following the dynamics φ_j → φ_j + ε̃_ij φ_j whenever neighbor i fires, with ε̃_ij being a random variable uniformly distributed over the interval (ε − s, ε + s). In this case, s gives us a quantitative idea of the population diversity. Now, in this modified model, the synchronization time T also grows as we increase the population diversity s. In Fig. 8 we can check this for a population of N = 100 oscillators in an RL with 2l/N = 16 and a mean value of the interaction ⟨ε̃_ij⟩ = 0.01. The same result, for the specific case of all-to-all coupling, had already been found in [23]. Therefore, for this kind of pulse-coupled oscillatory system, inducing some topological disorder is almost equivalent to dealing with a random distribution of interactions in a regular lattice, as far as synchronization features are concerned. VI. CONCLUSIONS In this paper we have studied the synchronization time T for several networks, each of them characterized by a different degree of randomness. For the special case of a completely random network we have found a scaling relation between T and the network's connectivity, T(N, l). As far as other topologies are concerned, the regular lattice is the one which synchronizes fastest. Nevertheless, our regular lattice is a 1D ring-like structure, and there are other kinds of regular lattices which might also be studied (2D lattices, hierarchical trees, ...). Therefore the question of which is the optimal synchronizing network remains open. However, the main aim of our work was to point out the geometrical mechanisms responsible for slowing or accelerating the synchronization process in such pulse-coupled systems. It turns out that the variability in the number of neighbors is a factor that slows synchronization. We have finally proposed a local normalization method that manages to remove the effects induced by the topological disorder. Among the limitations of our model are the lack of time delays in the interaction and of a finite pulse propagation velocity, which are present in real systems. Such effects might modify some of the results, and studying them is part of future work.
2016-01-25T19:18:26.375Z
2000-04-30T00:00:00.000
{ "year": 2000, "sha1": "0cf0c9ff7873683a602e09486a99abed7efdbc64", "oa_license": "CC0", "oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/18831/1/172843.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0cf0c9ff7873683a602e09486a99abed7efdbc64", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics", "Medicine" ] }
235811018
pes2o/s2orc
v3-fos-license
Mining Knowledge of Respiratory Rate Quantification and Abnormal Pattern Prediction The described application of granular computing is motivated because cardiovascular disease (CVD) remains a major killer globally. There is increasing evidence that abnormal respiratory patterns might contribute to the development and progression of CVD. Consequently, a method that would support a physician in respiratory pattern evaluation should be developed. Group decision-making, tri-way reasoning, and rough set–based analysis were applied to granular computing. Signal attributes and anthropomorphic parameters were explored to develop prediction models to determine the percentage contribution of periodic-like, intermediate, and normal breathing patterns in the analyzed signals. The proposed methodology was validated employing k-nearest neighbor (k-NN) and UMAP (uniform manifold approximation and projection). The presented approach applied to respiratory pattern evaluation shows that median accuracies in a considerable number of cases exceeded 0.75. Overall, parameters related to signal analysis are indicated as more important than anthropomorphic features. It was also found that obesity characterized by a high WHR (waist-to-hip ratio) and male sex were predisposing factors for the occurrence of periodic-like or intermediate patterns of respiration. It may be among the essential findings derived from this study. Based on classification measures, it may be observed that a physician may use such a methodology as a respiratory pattern evaluation-aided method. Introduction The aim of this study is threefold. First, it aims to use group reasoning [1][2][3] to investigate how to handle data comprising health indicators and breathing signal characteristics and the machine learning approach that should be employed. Therefore, in the pre-analytic stage, physicians, diagnosticians, and computer scientists were engaged to discuss several possible ways to manage the collected data. For example, as we deal with real-world data that contain uncertain or incomplete samples, deep learning was dismissed at the early stage of the analysis, as we needed to obtain a better insight into the analysis beyond quantitative assessment. Simultaneously, group reasoning can be treated as a part of the tri-way conceptual reasoning model proposed by Yao [4] and adopted from Nanay [5]. Second, this study aims to follow Yao's [4] three stages of reasoning: perception, cognition, and action ( Fig. 1). Finally, after performing group reasoning, we found that there were many factors collected in diagnostics, which were cost-and time-intensive. Because of COVID-19, determining which should be retained is imperative. Otherwise, data processing may be time-consuming, decisions may be too slow, or the diagnostic pathway may be affected by a specific component contributing to the overall medical context. Consequently, we decided to focus on tri-way reasoning [4], applying granular computing [6][7][8][9][10], and rough sets [6, [11][12][13] to knowledge mining. Moreover, the approach proposed is validated by a baseline k-nearest neighbor algorithm and UMAP (uniform manifold approximation and projection) visualization. In this study, we deal with data related to cardiovascular disease (CVD). CVD remains a major killer in the world. Each year CVD causes 3.9 million deaths in Europe, including 1.8 million deaths in the European Union [14]. 
There is increasing evidence that altered respiration might contribute to the development and progression of CVD. Abnormal respiratory patterns are common in patients with severe conditions, including congestive heart failure (CHF) and obstructive sleep apnea (OSA). The so-called Cheyne-Stokes respiration (CSR) during sleep, presenting as repeating rises and falls in ventilation separated with periods of apnea (cessation of breathing), is a common finding in patients with CHF [15]. In these patients, a similar respiration pattern with apnea (CSR) or without (periodic breathing, PB) was also frequent during the day. Furthermore, it was shown that the cyclical pattern of breathing is a marker of poor outcomes [16]. Diagnosis of OSA is based on the investigation of respiratory patterns during sleep, but it is limited to apnea and hypopnea detection and focuses on the differentiation between obstructive and central apneas [15]. Therefore, the respiratory rate quantification and abnormal pattern prediction deviate from the fine-grained universe in which every bit of information is ordered adequately towards coarse-grained ones as "normal" and "disrupted" respiratory patterns could only roughly be discerned between each other category. A concept of granularity in medicine has existed for decades. It already appeared among the studies dating back to 1998, when Tange et al. [17] referred to clinical narratives containing free text as "high granularity" segments. Information retrieval from clinical narratives involves several steps: searching for a labeled segment, reading its content, and analyzing it. In the authors' opinion, physicians can retrieve information better when clinical narratives written in the free text are divided into many small labeled segments, i.e., granules. Qi et al. defined five types of granules, namely, those induced by objects and attributes, and the ones induced by both objects and attributes simultaneously are seen from different perspectives and levels. Pal discerns three components of granular computing (GrC), i.e., granulation, granules, and computing with granules [6]. In this study, the granularity concept should be understood as a set of objects with descriptions derived from discretization. These granules involve a reduced number of relevant features, resulting in dimensionality reduction. In this sense, clusters or segments formed by granulation are called granules. Therefore, a granule may be defined as a collection of indiscernible entities that are collected according to their similarity, proximity, or functionality regarding given attributes [6,7,18]. Biomedical signals are often analyzed with the use of Gabor transform and discrete wavelet transform (DWT) [19−21]. This study aimed to determine the relationship between phase coherence and instantaneous heart rate and respiration. One of the critical points of this study is that PB, in which slow periodic oscillations modulate the regular oscillations corresponding to rhythmic expiration and inspiration, evoked high altitude-induced hypoxia. Also, using signal processing based on wavelet analysis helped to analyze mechanisms underlying respiratory control during hypobaric hypoxia, which is related to genetics and cardiovascular dynamics [22]. Notably, the graphical results of wavelet analyses were treated as clouds and granules when they were first used. However, this concept was not further developed. 
Also, granularity concepts have been foreseen and applied in different directions, to general medicine, medical informatics, cohort selection, risk prediction, and healthcare quality measurement [23,24]. Notions such as multi-granularity embeddings [23] and coarse- or fine-grained objects were discussed in medical data analysis through granularity principles [24]. Signal analysis employing wavelet analysis was performed "behind the scenes" along with the physicians' subjective evaluation, showing another level of granularity according to the adopted model of Yao [4]. Also, this study gained knowledge from the data collected, processed, and analyzed (the ETL, extract-transform-load, phase [25]). Roughly, we divided the signal analysis outcome into granules of normal breathing signal patterns, periodic-like signals, pauses, or apneas. Short-term daytime signal recordings are based on various respiratory belts designed to measure chest diameter changes resulting from breathing. In contrast to the analysis of heart rate and blood pressure variability, respiration pattern assessment is not well developed, and comprehensive methods for fully automatic detection of a periodic pattern of breathing are rare. In many previous studies, the main part of respiratory pattern assessment was mostly based on visual inspection. There are a few examples of using the combination of visual assessment and computerized analysis of the breathing pattern [16,26,27]. The outcome of the automatic classification of respiratory signals to detect abnormalities in breathing or breathing cessations is encouraging, yet this needs to be further developed. Thus, there is a clear need to develop novel methods to measure and analyze respiratory variability, especially in healthy individuals and patients with early stages of CVD presenting a spectrum of respiratory pattern alterations, including cyclical behavior that does not meet the criteria for CSR. These novel methods, especially if combined with parallel assessment of heart rate and blood pressure variability, might provide better insights into cardiorespiratory regulation in health and in disease and support better prevention and treatment of CVD. Therefore, we propose a new approach to respiratory pattern assessment, namely rough set-based processing of data [11,12], relying, however, on Yao's perception-cognition-action tri-level conceptual model [4] as the starting point (Fig. 1). This tri-level conceptual model explains how cognition is needed as an intermediate between perception and action to better apply intelligent data analytics and study human understanding [4]. Moreover, the model basis lies in the data-information-knowledge/wisdom (DIKW) hierarchy, another way of rationalizing in threes [28]. In fact, Yao builds the perception-cognition-action model around machine/system (collection, analysis, and decision), DIKW (data, information, and knowledge/wisdom), and human (perception, cognition, and action) layers. Our understanding of this model is as follows: the perception layer established the target and the measurement situation; it gathered and evaluated the data. Moreover, at this stage, group reasoning was applied.
Cognition is associated with mental processes that involve gaining knowledge and comprehension; that is, showing context and finding answers hidden in large volumes of information transforms data into knowledge by specifying a sequence of tasks to accomplish this process. Finally, action is the outcome of intelligent processing through rough set-based analysis supported by the expert's knowledge. Accordingly, all steps of the conceptual model are shown in Fig. 1. Also, we considerably followed the approach proposed by Polkowski and Artiemjew [29], who developed a classifier for coronary heart disease, first using data pre-processing techniques in dealing with the missing values. Second, granular classifier is applied to discover the absence or presence of coronary disease. The flowchart of our performed experiment is presented in Fig. 2. Study Group This study complies with the Declaration of Helsinki, and the ethics committee of the Medical University of Gdansk approved its protocol (NKEBN/422/2011). All participants were informed about the merits of the study and signed written consent forms. The study group comprised 276 subjects (157 men) aged 51.4 ± 11 years. Among them, 151 had a history of hypertension, 28 experienced a transient ischemic attack, 21 were diagnosed with obstructive sleep apnea, and there were 11 diabetic patients. The mean body mass index (BMI) in this group was 28.8 ± 4.9 kg/m 2 , and the waist-to-hip ratio (WHR) was 0.92 ± 0.10. In each subject, 20-min recordings of respiration were performed in the supine position. All patients were asked to relax, but not to fall asleep. The respiratory belt (Pneumotrace II™), based on a piezoelectric device connected to PowerLab with the LabChart software (ADInstruments, Australia), was used to derive breathing patterns. The sampling rate was 1000 Hz. Using the LabChart software, respiratory tracings were visualized, and breathing patterns were classified by the physician as normal (for a respiratory signal with similar amplitudes), periodic-like (characterized by cyclical behavior of breathing pattern including waxing and waning of amplitude), or intermediate (including various types of the pattern). At this stage, no additional methods for detecting PB (for example, time-varying spectral density analysis) were used. As in many subjects, breathing patterns changed during recording, and the percentage of a given type of breath in all patients was indicated. The intermediate type of pattern included all cases when the percentage of normal or periodic-like pattern was rated as less than 70%. It should be stressed that all of these classifications were entirely subjective. Among 276 studied subjects, 92 were classified as having a normal breathing pattern, 56 had a periodic-like pattern, and 128 had an intermediate pattern of breathing. Table 1 presents the data collected from the questionnaire forms based on a clinical assessment, which were subsequently completed by respiratory signal analysis and their further evaluation. Statistical Analysis of Data Acquired For each parameter (age, weight, height, waist, HIP, BMI, and WHR), significant differences between the obtained results were calculated using the Kruskal-Wallis test. The Kruskal-Wallis test is an extension of the Wilcoxon rank sum test but opposite to it, and it is not limited to two populations [30]. The null hypothesis in this test is the assumption that medium ranks or the medians of the data series are the same. 
The statistical calculations were performed using MATLAB. The obtained results are presented in Table 2. Groups with normal, intermediate, and periodic-like patterns of breathing did not differ according to age. Subjects with the normal breathing pattern had lower values of weight, height, waist, BMI, and WHR than persons with periodic-like and intermediate patterns. Concurrently, the two latter groups were similar regarding the abovementioned anthropometric parameters. Additionally, there were significant differences between groups according to sex (p = 0.0003, chi 2 test). It indicates that obesity (especially the so-called central obesity characterized by high WHR) and male sex were predisposing factors for the occurrence of periodic-like or intermediate patterns of respiration. The obtained results for BMI and WHR are presented in boxplots in Fig. 3. A typical data presentation was adopted; i.e., the central mark indicates the median, and the top and bottom edges of the box denote the 75th and 25th percentiles, respectively. Observations beyond the whisker length are marked as outliers using a red + symbol. Sex differences are presented in Fig. 4. Wavelet Analysis of Signals The respiratory signal composition in its nature is a dynamic and non-stationary process [31], with the alternation of peaks of different spectral ranges, which is why wavelet, i.e., time-frequency representation, is useful in detecting dynamic changes of signal components and eventually observing patterns of such signal behaviors [32]. By employing scaling and translation, wavelet analysis creates a set of orthogonal basis functions. Good localization characteristics in both time and frequency domains and selectivity in the time domain make wavelet analysis suitable for approximating non-stationary signals. One of the crucial features of wavelet analysis is that it captures signal elements at different detail levels. Consequently, we call these detailed granules useful information. Pre-processing of Signals The analysis of the respiratory signal using machine learning methods requires performing initial pre-processing and parameterization of the input data. In our approach, these data refer to a digital signal representing changes in the participants' chest circumferences caused by respiration. The signal has a sampling rate of 1000 Sa/s. Examples of a few seconds of such signals obtained from a healthy participant are shown in Fig. 5. Signals characterized by a "periodiclike" structure of inhale-exhale events are shown in Fig. 5. As inspiration and expiration events may be separated by "pause" segments characterized by no significant changes in chest circumference, we have concluded that inhale-exhale events have a well-defined triangular shape of finite length, which is approximately consistent at least within a personspecific recording session. Therefore, we employed an analysis method to recognize the time of occurrence of certain inhale-exhale events in the acquired signal and frequency content analysis of those signals, namely, the analysis employing the wavelet transformation. An important decision in designing a system for signal processing employing wavelet transformation is whether to use a continuous or discrete wavelet transformation algorithm and choose the appropriate mother wavelet for such processing. As we wanted to test some different wavelet scales, we decided to use discrete wavelet transform (DWT), which allows the decomposition of signals into components of scales, with a power of 2. 
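As a rough illustration of this parameterization step, the sketch below (not the authors' code) standardizes a respiration trace and decomposes it with PyWavelets, using the rbio3.1 mother wavelet and the scales 5-7 that are selected later in the text; the per-scale quartile summaries mirror the descriptors described further below, with the interquartile range used here as an assumed fourth quartile-based descriptor. The decomposition level and the synthetic test signal are illustrative choices.

```python
import numpy as np
import pywt

def dwt_descriptors(signal, wavelet="rbio3.1", level=10, keep=(5, 6, 7)):
    """Standardize the respiration signal and summarize selected DWT detail scales
    by their quartiles, giving a compact parameterization of the recording."""
    x = (signal - np.mean(signal)) / np.std(signal)   # standardization of the raw trace
    coeffs = pywt.wavedec(x, wavelet, level=level)    # [cA_level, cD_level, ..., cD_1]
    features = {}
    for scale in keep:
        d = coeffs[level - scale + 1]                 # detail coefficients at this dyadic scale
        q1, q2, q3 = np.percentile(d, [25, 50, 75])
        features[scale] = {"Q1": q1, "Q2": q2, "Q3": q3, "IQR": q3 - q1}
    return features

# toy usage on a synthetic 20-min, 1000 Sa/s breathing-like trace
t = np.arange(0, 20 * 60, 1e-3)
breath = np.sin(2 * np.pi * 0.25 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.02 * t))
print(dwt_descriptors(breath))
```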
Due to this property of DWT, we were able to test wavelet scales that spanned various orders of magnitude. By employing DWT, we were also able to perform calculations relatively quickly, which is another advantage of DWT compared to continuous wavelet transformation. Computation speed is important in our case, as we had to process 111.42 h of acquired respiration signals associated with participants having both normal and abnormal breathing patterns. Additionally, before calculating the DWT, we first standardized the input data according to the following formula: Signals[n] = (Signal[n] − Mean(Signal)) / Std(Signal), (1) where: • n denotes a sample number. • Signal[n] denotes the original, unstandardized respiration signal. • Signals[n] indicates the signal after the standardization process. • Mean() represents the operation of calculating the average of the input signal. • Std() denotes the operation of calculating the standard deviation of the input signal. Also, in the literature, one can find examples of DWT used for data parameterization before processing by separate machine learning algorithms and other processing methods [33,34], and for signal denoising [35], which also encouraged us to choose DWT as the parameterization method. Choosing Appropriate Wavelet Type Another important decision regarding the pre-processing stage was the selection of the desired mother wavelet. As our signals were standardized, we assumed that scalograms generated using a better-suited mother wavelet would have a broader range of values. To perform the necessary calculations, we computed the DWT with the PyWavelets Python library [36]. To find the wavelet from the PyWavelets library that maximized this criterion of the widest range of scalogram values, we calculated the interquartile range (IQR) of the values observable in the scalograms associated with each possible mother wavelet. For each of the 106 discrete mother wavelets available in the library, we calculated the IQR parameter and averaged it over the results obtained from 30 participants classified as having a normal breathing pattern. A trained medical doctor performed the aforementioned respiratory pattern assessment. Furthermore, we plotted the results achieved with each mother wavelet as a boxplot illustrating how the IQR values varied for every tested mother wavelet. The results of this evaluation are shown in Fig. 6. A detailed list of wavelets available in PyWavelets (we used version 1.1.1 of the library) can be found in the online documentation [37]. The best mother wavelet, according to the criterion of the maximum span of the scalogram values, is rbio3.1 (Fig. 6). As the DWT is intended to be the parameterization stage of our analysis, we also had to reduce the input data size. To achieve this goal, we omitted the seven decomposition components associated with the smallest wavelet scales. This resulted in components approximately 4500 samples long, which is comparable to the length of a relatively high-resolution spectrogram that machine learning algorithms can process in the next step of our analysis. Examples of final scalograms that passed to the next stage of the calculations are shown in Figs. 7 and 8. Rough Set-Based Analysis of Data Rough set theory was created by the Polish mathematician Zdzisław Pawlak [11,12]. It is used to approximate a set by its upper and lower approximations: the former includes objects that may belong to the set, and the latter includes objects that surely belong to it.
Both approximations are expressed as unions of atomic sets containing indiscernible objects with the same values of attributes (Fig. 9). Two objects x and y are characterized by attributes P ⊆ A (P is a subset of the set of all possible attributes A). They are in the indiscernibility relation if (x, y) ∈ IND(P), where IND(P) is an equivalence relation defined as the set of all pairs with exactly the same values for all considered attributes: IND(P) = {(x, y) ∈ U × U : p(x) = p(y) for every p ∈ P}, where p(x) is the value of attribute p of object x. In this study, P is a set of selected wavelet types and scales introduced in the previous section, and the objects x are particular cases of patients. All objects in the indiscernibility relation with x produce an equivalence class [x]_P, being the set of all objects identical with x on every attribute. If P contains attributes sufficient for distinguishing between objects with different decisions, then the class [x]_P contains only objects with the same decision as the considered object x. A lack of distinction between objects inside the equivalence class is not harmful to classification accuracy. Thus, the considered set of attributes P generates a partitioning of the universe of discourse U into atomic sets, which are the building blocks for representing a rough set. A set of objects with the desired decision is such a rough set, called a decision class. The set of all objects with one of the possible decisions d ∈ {d_1, …, d_n} is denoted as X_di. Following rough set theory, X_di can be approximated by its lower and upper approximations, the former denoted as P̲X_di: P̲X_di = {x ∈ U : [x]_P ⊆ X_di}. The lower approximation is the set of all objects x whose equivalence classes [x]_P are included within the decision class of interest X_di. It can also be interpreted as the set of objects whose attribute values allow for precise classification into the decision class X_di with the decision d = d_i. Also, the set of objects P̄X_di is called the upper approximation and is defined as: P̄X_di = {x ∈ U : [x]_P ∩ X_di ≠ ∅}. The upper approximation includes all objects x whose equivalence classes have a non-empty intersection with the considered decision class X_di. It can be conveniently interpreted as the set of all objects x whose attribute values point to similar objects, at least one of which has the desired decision X_di. Some object(s) equivalent to x can have other decisions as well (Fig. 9). The given subset of attributes P can be sufficient to generate such a partitioning of the universe of x ∈ U that the decision classes are approximated with high precision. The accuracy of the rough set approximation of a decision class X_di is expressed as α_P(X_di) = |P̲X_di| / |P̄X_di|, with α_P(X_di) ∈ [0, 1], where α_P(X_di) = 1 corresponds to a precisely defined crisp set. Application of rough set theory in a decision system often requires a minimal (the shortest) subset of attributes RED ⊆ P, called a reduct, resulting in the same quality of approximation as P. Numerous algorithms for calculating reducts are available, and for this study two methods are examined [38] (described in the "Experimental Procedure" section). Usually, prior to reduct calculation for attributes with continuous values, discretization is performed. The discretization algorithm analyses the attribute domain, sorts the values present in the training set, takes all midpoints between consecutive values, and finally returns the midpoint maximizing the number of correctly separated objects of different classes. This is repeated for every attribute. Three different methods were examined in this study.
Discretization limits the number of possible values; for the attributes in this study there are 1, 2, or 3 cuts, splitting the values into 2, 3, or 4 discrete ranges, accordingly. Once the reduct is obtained, the attributes useful for a particular classification task are known. The data are filtered, the attributes not present in the reduct are removed, and the others are discretized accordingly. Furthermore, all cases in the training set are analyzed, and decision rules are generated. The attributes p_n ∈ RED of each object x_i are treated as the implication antecedent, and the decision d_i for object x_i is the consequent. At the classification phase, these rules are applied to every object in the testing set, and subsequently the determined decision is compared with the actual one to measure the accuracy. The abovementioned treatment is repeated 10 times in a 10-fold cross-validation procedure, each time comprising determining the discretization cuts and the reduct and generating the rules based on the training set, then applying the rules to classify the testing set and measuring the accuracy. The process is automated by employing a script written in the R language [39]. Other Rough Set Approaches The process described above assumes a crisp distinction between atomic sets. A rough set theory variant called fuzzy rough sets applies fuzzy equivalence classes, making use of fuzzy indiscernibility [40,41]. It allows imprecise knowledge about similarity and dissimilarity between objects to be expressed by fuzzy membership functions. The presented study uses crisp atomic sets as a result of the cut calculation; therefore, the fuzzy approach is unsuitable here. Another interesting extension of rough sets is the dominance-based approach. It requires all attributes to follow some preference order, where it is possible to determine the more and the less desired values [42]. In this study, the features extracted by wavelet analysis cannot be ordered by preference; therefore, this approach is not applicable. Dataset Description The decision features of periodic-like, irregular, and correct breathing patterns contain percentage measures determined by a medical expert, expressing how strongly a given pattern type is present in the signal (for simplification of records and presentation of results, they are converted to deciles 0, 1, 2, …, 10). There are significantly more cases with 0 values than others; therefore, a stratification (a bias reduction procedure) is introduced by purposefully sampling the dataset to obtain as many cases with 0 values as the average number of cases with values 1, 2, …, 10. Goal Therefore, the goal was to explore a rough set-based [11,12], highly granularized approach for data mining of a breathing pattern database. It was shown here how signal attributes and anthropomorphic parameters could be exploited to create prediction models to determine the percentage contribution of periodic-like, intermediate, and normal breathing patterns in the analyzed signals. The output class values are quantized, and many possible quantization ranges are verified during the automatic search of optimal model hyperparameters aimed at maximizing the resulting model accuracy. As already mentioned, the R programming environment [39] with the RoughSets package [38,43−45] was used for the rough set-based processing.
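For readers who prefer code to set notation, a minimal sketch of the lower and upper approximations and the approximation accuracy defined above is given below, written in Python for illustration rather than with the RoughSets R package used by the authors; the toy decision table and attribute names are hypothetical.

```python
from collections import defaultdict

def equivalence_classes(objects, attrs):
    """Group objects by their (discretized) values on the attribute subset P."""
    classes = defaultdict(set)
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        classes[key].add(name)
    return list(classes.values())

def approximations(objects, attrs, decision, label):
    """Lower/upper approximation and accuracy of the decision class {x : d(x) = label}."""
    target = {name for name, row in objects.items() if row[decision] == label}
    lower, upper = set(), set()
    for eq in equivalence_classes(objects, attrs):
        if eq <= target:
            lower |= eq          # every indiscernible object shares the decision
        if eq & target:
            upper |= eq          # at least one indiscernible object has the decision
    accuracy = len(lower) / len(upper) if upper else 1.0
    return lower, upper, accuracy

# hypothetical decision table: discretized wavelet descriptors -> breathing-pattern class
table = {
    "p1": {"s5_Q2": "low",  "s6_Q2": "low",  "class": "A"},
    "p2": {"s5_Q2": "low",  "s6_Q2": "low",  "class": "B"},
    "p3": {"s5_Q2": "high", "s6_Q2": "low",  "class": "B"},
    "p4": {"s5_Q2": "high", "s6_Q2": "high", "class": "C"},
}
print(approximations(table, ["s5_Q2", "s6_Q2"], "class", "B"))
```

In this toy table the class "B" has a lower approximation of one object and an upper approximation of three, giving an approximation accuracy of 1/3.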
The model hyperparameters considered in this study are as follows: (1) the number of quantization cuts for the input data discretization and (2) the ranges defining the output classes, which had been provided on a 10-value scale but were quantized into three ranges in the process. Data Granularization For knowledge extraction and modeling, two initial assumptions were made: • The input signal wavelet parameters, which have continuous values and varying lengths dependent on the signal length and wavelet scale, were quantized for the processing and reduced to a fixed number of only four descriptors for each wavelet scale, based on quartile ranges. • The output classes with 10 values of percentage contribution are quantized into three ranges: low, medium, and high (coded as "A," "B," "C"), defined by discretization cuts. Consequently, a separate fine-tuned model was created for each breathing pattern that outputs one label (low, medium, or high) that is easy to interpret. These classifiers are suitable for operation on input signals of any length. Discretization of Wavelet Parameters Only three wavelet scales were selected: 5, 6, and 7, which were a compromise between matching the time resolution to a breathing period and keeping the number of samples low enough to assure fast processing. It may be observed in Figs. 7 and 8 that scales higher than 7 are too coarse and do not exhibit the inherent periodicity of the breathing cycles. Scales lower than 5 are too detailed and appear to contain the same information as 5, 6, and 7. From each wavelet scale, the following four descriptors are extracted: the first, second, and third quartiles (Q1, Q2, and Q3) together with a fourth quartile-range-based descriptor (Table 3). Therefore, general signal characteristics are obtained that are robust to noise and suitable for further processing in a rule-based decision system. Those real-valued descriptors are further quantized during the rough set-based knowledge modeling process, employing one of the selected discretization algorithms (global discernibility, unsupervised intervals, or unsupervised quantiles), with a set number of cuts c. The discretization method and c were considered as arguments for exploration during the automatic search for model hyperparameters. Discretization of Output Classes The output was initially defined on a 10-valued decile scale, describing the intensity of a particular breathing pattern in the analyzed recording. For this study, it was transformed into only three ranges, automatically during the model search, by setting a pair D = {d1, d2}, where d1 ∈ {1, 2, 3, 4, 5} is a lower decile boundary and d2 = d1 + width is a higher decile boundary, with width ∈ {2, 3, 4, 5} and the condition d2 < 9 fulfilled. During the discretization, the pair of cuts D = {d1, d2} was implemented as the ranges [0, d1), [d1, d2), [d2, 10), which were replaced with the labels "A," "B," and "C," respectively. The structure of the resulting decision table exploited in the following experiments is presented in Table 4. Experimental Procedure Each experimental run comprises several key steps, including selecting the modeled pattern, data filtration for bias reduction, splitting into training and testing cases (in a ratio of 85:15), and training the model with the given hyperparameters and verifying its accuracy. For each combination of hyperparameters, 10 such runs were conducted, each with a different random data filtration and split, acting as a cross-validation method and coping with the relatively low number of cases in the database.
A detailed description of the procedure is as follows: • From normal, intermediate, and periodic-like, choose the breathing pattern to be modeled. • Set the hyperparameters of the model to be trained. • Set the reduct computation method (DAAR heuristic or greedy heuristic). • Filter the data: reduce the risk of bias by randomly subsampling the cases where the decile value is equal to 0. • Calculate the average number of cases where the decile value is equal to 1, 2, …, 10. • Count all cases where the decile value is equal to 0. • Calculate how many 0 cases should be removed to match their number to the average. • Divide randomly into training and testing sets in an 85:15 ratio. • Apply one of the selected discretization methods for the attribute values, using c cuts. • Calculate the reduct and the rules. • Apply the rules to the test cases. • Measure and report the accuracy of the results and the number of rules. • Repeat 10 times for the same hyperparameters, and create statistics for the accuracies and numbers of rules. To summarize, the whole process explores the hyperparameters c, discretization method, reduct algorithm, rule algorithm, and decision ranges d1 and d2 to automatically find the model configuration that maximizes the resulting prediction accuracy for the decision classes of the breathing pattern. It can be formalized as: best configuration = arg max over (c, discretization method, reduct algorithm, rule algorithm, d1, d2) of the model's prediction accuracy. Model Exploration Results The results of the 60 best model exploration runs are presented in Figs. 10, 11, and 12, sorted by decreasing median accuracy. Accuracies and the numbers of rules in the generated models were collected and presented as boxplots (with quartiles and medians). On the x-axis, the labels denote the model configuration, where • {d1-d2}, e.g., {1-3}, is the definition of the modeled decile range cuts. • d or g is an abbreviation for the reduct computation method: DAAR heuristic or greedy heuristic. • q, i, or n is an abbreviation for the discretization method: unsupervised quantiles, unsupervised intervals, or global discernibility. • LEM2, CN2, AQ, and IND are rule-induction methods. • cn is the number of desired cuts for attribute discretization. Accuracy and Rule Analysis A process of rule filtration was performed based on a few criteria. First, models with fewer than two rules were removed, because to make a decision regarding three classes at least two rules should be employed (for example, if the 1st rule is true, the class can be "A"; if the 2nd rule is true, the class can be "B"; and if neither is true, the class can be "C"). Then, models with a mean Laplace confidence, calculated over all the model rules, of less than 0.6 were removed. Laplace confidence is a metric reflecting rule accuracy over the considered class and all objects matching the rule: Lc(R_K) = (n_K(R_K) + 1) / (n(R_K) + k), where R_K is the rule related to the class K, n_K(R_K) is the number of objects of the class K correctly classified by the rule, n(R_K) is the number of all objects matching the rule (regardless of their class), and k is the number of classes in the model. If the number of rules with Lc > 0.6 was larger than 100, the threshold was raised to leave at most 100 rules in the rule base for each examined model. It was assumed that such a high number of rules for classification into three classes is excessive and impractical. Notably, other rule selection methods were examined as well, namely support and confidence, and the results were confirmed to be similar (the resulting accuracies reported in this section are similar at a significance level of 0.05).
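The rule-filtration step described above can be sketched as follows (illustrative Python, not the authors' code); the Laplace confidence uses the standard Laplace-corrected form consistent with the variable definitions given in the text, while the per-rule thresholding and cap shown here are a simplified stand-in for the model-level filtering actually applied. The example rules and their coverage counts are hypothetical.

```python
def laplace_confidence(n_correct, n_matching, n_classes):
    """Laplace-corrected rule confidence: (n_K(R_K) + 1) / (n(R_K) + k)."""
    return (n_correct + 1) / (n_matching + n_classes)

def filter_rules(rules, n_classes=3, threshold=0.6, max_rules=100):
    """Drop weak rules and keep at most max_rules of the most confident ones."""
    scored = [(laplace_confidence(r["n_correct"], r["n_matching"], n_classes), r)
              for r in rules]
    strong = [sr for sr in scored if sr[0] > threshold]
    strong.sort(key=lambda sr: sr[0], reverse=True)
    return [r for _, r in strong[:max_rules]]

# hypothetical rules with their coverage counts on the training set
rules = [
    {"if": "s6_Q2 = high and WHR = high", "then": "C", "n_correct": 18, "n_matching": 20},
    {"if": "s5_Q1 = low", "then": "A", "n_correct": 5, "n_matching": 12},
]
print(filter_rules(rules))
```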
It can be observed from Table 5 that the percentage of contributions of the periodic-like pattern is the most problematic relation to the model, and no approach resulted in an accuracy higher than 0.65 (Fig. 10). Then, the relationship between signal attributes, anthropometric features, and the resulting percentage of intermediate and normal patterns is more clearly defined, and median accuracies in a considerable number of cases exceed 0.75 (Figs. 11 and 12). Table 6 reports the average F1 scores. It can be observed that values fall below 0.74 for the considered problem of three classes with imbalanced sizes, which is a very common problem. The best cases are as follows: for the periodic type for D = {d1, d2} = {3, 8}, when the average accuracy is 0.71, the average precision is 0.72, the average recall is 0.75, and average F1 = 0.72; for the intermediate type for D = {d1, d2} = {1,6}, when the average accuracy is 0.94, the average precision is 0.71, the average recall is 0.8, and average F1 = 0.72; and for the normal type for D = {d1, d2} = {4,8}, when the average accuracy is 0.77, the average precision is 0.77, the average recall is 0.73, and average F1 = 0.74. Therefore, in these cases, a dedicated classifier based on considered models can be implemented and used in practice for a screening procedure and automatic coarse determination of the breathing pattern. Rule Analysis All signal attributes (wavelet scales and quartile ranges) were present in the models. In very few cases, the rules incorporated anthropomorphic features, namely age, and WHR. Figures 10, 11, and 12 show that a considerable number of possible approaches can result in similar classification accuracy, and some of the tested decile ranges produce higher results than others ( Table 5). Regardless of attempts to reduce bias in the dataset, the procedure remained flawed in this regard. In many models, the extracted knowledge is oriented towards a single class with high accuracy instead of all. Notably, many rules describe only one class, the one with the highest number of cases, resulting in low overall accuracy. It occurs for wide decile ranges, i.e., when one range covers a significantly larger number of cases than any other range. Then, the model tends to favor this particular class, deriving more rules supporting these cases. Non-biased Rules For d1 = 3 and d2 = 6, the results are not biased, as the resulting discretization ranges contain an approximately equal number of cases and have a similar width. An example of a model comprising four rules describing the percentage of the normal pattern contribution is as follows (number of training cases supporting the rule is provided in the brackets at the end of each rule): The resulting accuracy is 0.72. The above rule set is consistent (no two rules contradict each other) and can be used in further experiments. Incomplete Rule Set The same target ranges (d1 = 3, d2 = 6) for periodic type produced only two rules: The classification accuracy for the test cases is 0.75. It can be observed that such a model does not detect range B of the periodic pattern contribution, and it tends to make a bimodal decision, either to class A (contribution less than 30%, as the decile margin is d1 = 3) or to class C (contribution higher than 60%, decile d2 = 6). The above rules were derived by applying DAAR heuristics for reduct calculation, unsupervised quantiles for discretization, and the CN2 algorithm for rule generation. Contradicting Rules It can be observed that rule no. 
2 is in contradiction with rule no. 1, but during the inference it is interpreted such that all cases matching the more specific rule no. 1 (two attributes checked in the antecedent) are classified as class C (the contribution of the normal pattern in decile d2 = 3 or higher). Then, cases not covered by rule no. 1 but covered by rule no. 2 are classified as A. The accuracy here is 0.795 and can be considered appropriate for screening applications. Data Analysis Based on the k-Nearest Neighbor Algorithm The discretized data used for training the rough set-based system were also subjected to analysis employing a relatively simple classification method, the k-nearest neighbor algorithm. It is a baseline approach that is useful in assessing the robustness of a solution employing rough sets. Despite being simple in principle, the k-nearest neighbor algorithm also has some hyperparameters that can be optimized. We used a grid search approach to select the best values for: • The discretization level d. • The normalization method applied to the data after discretization. • The number of neighboring points considered in the classification process, denoted as k. • The type of Minkowski distance metric used for finding the nearest neighbors. For the distance metric, the Minkowski distance was employed. It is a convenient choice for optimization, as the type of Minkowski distance can be controlled with a parameter; the formula for calculating the metric is as follows: D_Minkowski(x, y) = (Σ_{i=1}^{n} |x_i − y_i|^p)^(1/p), where D_Minkowski(x, y) denotes the Minkowski distance between points x and y, n is the number of dimensions, and p is a parameter that can be optimized in the grid search procedure. The grid search was conducted for all discretization ranges considered in the rough set-based experiment. For data normalization, five possibilities were considered: 1. No normalization. 2. Normalization by division of parameters (which can also be called dimensions, i.e., columns of the dataset) by the maximum absolute value of each parameter. 3. Normalization by scaling parameters to the range [−1, 1]. 4. Standardization, which is performed according to the following formula: x_standardized = (x − mean(x)) / std(x), where x_standardized is a dimension after normalization, x is the vector before normalization, mean(x) is the mean value of x, and std(x) is the standard deviation of x. 5. Normalization of data points (which correspond to rows of the dataset) by treating them as vectors and normalizing the length of such vectors to 1, which can be defined as follows: u_norm = u / |u|, where u_norm is the vector representing a given data point after normalization, u is the vector representing the data point before normalization, and |u| is the length of that vector. Notably, the last type of normalization is applied to data points, not to the parameters themselves. For the number of neighbors (k), values from 2 to 50 were considered. For the Minkowski distance, five values of p were used, namely 1 (for which the metric becomes the so-called Manhattan distance), 1.25, 1.5, 1.75, and 2 (for which the metric becomes the Euclidean distance). The results obtained in the grid search are shown in Table 7. Overall, 20,825 combinations of hyperparameters were evaluated. For the evaluation of the performance, an approach based on twofold cross-validation repeated five times (5 × 2CV) was employed [46]. As the classes for each discretization range were not balanced, only the F1 score was used for the evaluation of the performance.
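A compact sketch of such a grid search over k and the Minkowski exponent p with 5 × 2 cross-validation is given below (illustrative Python using scikit-learn, not the authors' code). The macro-averaged F1 score and the use of StandardScaler as the normalization step are our assumptions, since the exact averaging and pipeline details are not specified; the toy feature matrix merely stands in for the discretized wavelet descriptors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def knn_grid_search(X, y, ks=range(2, 51), ps=(1, 1.25, 1.5, 1.75, 2)):
    """5x2 cross-validated F1 for every (k, p) pair of a standardized k-NN model."""
    cv = RepeatedStratifiedKFold(n_splits=2, n_repeats=5, random_state=0)
    best = (None, -1.0)
    for k in ks:
        for p in ps:
            model = make_pipeline(
                StandardScaler(),                       # one of the normalization options
                KNeighborsClassifier(n_neighbors=k, p=p, metric="minkowski"),
            )
            score = cross_val_score(model, X, y, cv=cv, scoring="f1_macro").mean()
            if score > best[1]:
                best = ((k, p), score)
    return best

# toy usage with random features standing in for the discretized wavelet descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.choice(["A", "B", "C"], size=200)
print(knn_grid_search(X, y, ks=range(2, 6)))
```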
The resulting performance estimate was reported in the form of descriptive statistics: the minimum and maximum values of the F1 score, the mean F1 score, its standard deviation, and the 95% confidence interval for the F1 score. A Gaussian distribution of the performance metric was assumed when calculating the confidence intervals. Implementations of k-nearest neighbors and cross-validation are from the scikit-learn Python library (version 0.24.1). For all parameters not specified in this study, default values were used. In Table 7, one can find that the most commonly occurring type of discretization is {4,8} and the maximum obtained F1 score is 0.676, which is worse than the result obtained by the rough set-based approach. However, it should also be stressed that the upper boundary of the confidence interval is only slightly lower than the F1 score of the rough set-based approach, which is equal to 0.74. Also, it is clear that in most cases it is beneficial not to use the Euclidean distance and instead choose values of p between 1 and 2. The most common values of p are 1.75 and 1.5; both occurred five times throughout the ranking list. Of all investigated normalization types, only standardization was present among the 20 best sets of hyperparameters. To visualize the structure of the data classified by the k-nearest neighbor algorithm, a visualization was prepared for the four most effective discretization ranges and preprocessing types. The results of this visualization, prepared by employing the UMAP dimensionality reduction algorithm, are shown in Fig. 13 [47]. An implementation of UMAP available in the umap-learn Python library (version 0.5.1) was used. All the visualizations presented in Fig. 13 show clusters with the property that different classes tend to be most prominent in just one specific region of the cluster. However, apart from scenario b, where standardization was used, there are no separate clusters present. In scenario b, those clusters most likely correspond to the two sexes of the participants and have no evident association with classes A, B, or C. This implies that even for the data pre-processing types that were the most beneficial for the k-nearest neighbors, there was no obvious way to separate examples of at least some classes; there is always some overlap between them. However, one can identify situations, such as the one visible in subplot c, where classes A and B are separated by a clear margin for most of the data points belonging to one of them. Discussion The accuracy and F1 score obtained by the tri-way reasoning employing the rough set-based approach were relatively high, amounting to 0.795 (accuracy) and 0.74 (F1 score), respectively, so the approach already aids respiratory pattern evaluation. Yet, implementing the rule set is not straightforward, and further expansion with new knowledge can be problematic. The approach to modeling the various percentage contribution ranges should be further explored to reduce the bias and achieve possibly higher accuracy for narrow centile ranges. It was shown that the evenly spread ranges <0,3>, <3,6>, <6,10>, defined by d1 = 3 and d2 = 6, result in lower accuracy but more appropriate rule sets (complete rule sets without bias). Therefore, this issue will be studied in the future.
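As an illustration, the Fig. 13-style embeddings can be reproduced along the following lines with umap-learn (version 0.5.1, as used above). The feature matrix X_pre and labels y are random placeholders, and all parameter values are illustrative only.

```python
# Minimal sketch of the UMAP visualization described above (umap-learn 0.5.1).
# X_pre and y are random placeholders for one preprocessed feature matrix and its labels.
import numpy as np
import matplotlib.pyplot as plt
import umap

rng = np.random.default_rng(0)
X_pre = rng.normal(size=(120, 16))            # placeholder (already discretized/normalized) features
y = rng.choice(list("ABC"), size=120)          # placeholder class labels

embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X_pre)  # 2-D embedding

for label in "ABC":
    mask = y == label
    plt.scatter(embedding[mask, 0], embedding[mask, 1], s=10, label=label)
plt.legend(title="class")
plt.xlabel("UMAP 1")
plt.ylabel("UMAP 2")
plt.show()
```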
When comparing the results of the rough set-based analysis with a baseline algorithm (k-NN), it occurs that rough sets return higher values. On the other hand, employing k-NN and UMAP visualization brought new insights into the data gathered. The UMAP-based approach shows that different classes tend to be most prominent in just one specific region of the decision space. However, despite various standardization scenarios used, there are no separate clusters present. Moreover, the database will be extended with new cases, allowing the research to focus on a lower number of models but potentially contain more accurate and usable knowledge, automatically extracted from a larger number of training samples. Our study indicates that the granularity concept applied to respiratory rate quantification and abnormal pattern prediction might provide novel insights into cardiorespiratory regulation beyond those offered by a simple analysis of respiratory rate, inspiration and expiration times, tidal volume assessment, or their variability [48,49]. Many previous studies concerning respiratory variability have been performed in animals and cannot be directly translated to humans [50,51]. Analysis of the pattern of respiratory signals in humans is much more challenging. The variability of frequency and amplitude, the impact of artifacts (related to body movement, speech, etc.), and the individuality of patterns in different subjects should be considered [52]. Traditionally, visual assessment by an expert physician has often been used to identify PB appearance. However, this approach is mainly subjective and might be misleading. Previous efforts to develop better methods were either limited to the investigation of respiratory patterns during sleep from polysomnographic recordings [14] or performed in homogeneous groups of subjects: healthy individuals [22,53], neonates [26], or patients with CHF [16,54]. Furthermore, these studies aimed to detect patients with clear-cut periodic patterns of breathing. Conclusion This study was conceived as a tri-way approach to evaluate breathing patterns automatically. We started by observing how a medical expert performs measurement, collects the data and signals, and evaluates and interprets them to form knowledge. Followed by that, group reasoning was executed by physicians, diagnosticians, and computer scientists. Collectively, a sequence of tasks was envisioned concerning the methodology regarding structuring the collected data, deciding on the signal analysis, and the processing method. This stage outcome used wavelet-based signal analysis and data and signal processing by rough sets. We incorporated these data processed into granules representing knowledge related to a particular patient. An important decision was to select an appropriate type of wavelet analysis, i.e., continuous (CWT) or discrete (DWT). The outcome of the discussion related to this end was the use of DWT. Furthermore, we formulated a criterion upon which the desired mother wavelet was chosen. Concerning the results obtained, interestingly, all signal attributes were present in the models. Contrarily, the rules rarely incorporated anthropomorphic features. It should be further explored since some of these data are regarded by a medical expert as substantial. However, obesity (especially the so-called central obesity characterized by high WHR) and male sex were predisposing factors for the occurrence of periodiclike or intermediate patterns of respiration. 
It may be one of the essential findings derived from this study. Even though BMI is used as a primary assessment tool in numerous fields in medicine linked to poor health outcomes, WHR, waist (circumference)-to-hip (circumference) ratio, may be a better indicator related to faulty breathing patterns, as it considers different body types, sex, and age. This finding confirms the work by Ross et al. who posited that WHR is a more critical factor than BMI in medical assessment [55]. To reassume, in this study, we considered three patterns of breathing-normal, intermediate, and periodic-like-in a group of subjects of different ages, sex, body constitution, and medical records (healthy, hypertensives, patients with a history of TIA, etc.). It is an important distinction that our approach, based on wavelet analysis along with rough set-based processing, was effective in non-invasive and short-term (20-min) recordings during wakefulness. Analyses performed by employing k-NN validated to some extent the results obtained in the rough set-based approach, though the obtained F1 score values were smaller. Also, it seems that the UMAP visualization allows for evaluating the data gathered, showing that clusters tend to form themselves in various parts of the decision space; however, they are not fully separated. Our results indicate that the proposed method can support the visual assessment of respiratory patterns by an expert. Automatic characterization of breathing patterns, based on our approach, can be applied in future studies focusing on cardiorespiratory coupling in health and disease. Finally, it can enable the online analysis of the respiratory pattern changes in the monitored patients, which might be vital in patients with COVID or other life-threatening conditions. Unfortunately, studies have not considered such a complex analysis of breathing patterns and their relationship with anthropomorphic health indicators. Although reports concerning lung/asthmatic breath classification exist [56,57], they identify respiratory patterns employing recorded signals and machine learning following conventional chest auscultation with a stethoscope rather than taking a plethora of health indicators in the analysis. In their study, Göğüş et al. classified inhalation and exhalation sound signals of 11 persons based on features derived from DWT and wavelet packet transform (WPT) signals along with an artificial neural network (ANN). ANN was used to classify respiratory sounds into four classes: normal, mild asthma, moderate asthma, and severe asthma [56]. The obtained classification accuracies are high; however, the number of signals used is too small to provide meaningful observation. Kandaswamy et al. [57] classified 126 signal samples. Still, their focus was on several lung sound categories: normal, wheeze, crackle, squawk, stridor, or rhonchus, so it is difficult to estimate whether these results would be held when applied to a larger dataset. Consequently, a direct comparison of the results obtained is not possible. Future studies will be directed towards a twofold aim. First, we would like to improve the process of detecting pauses and apneas. This approach will help the expert assess the breathing pattern. Furthermore, it might be treated as a pre-processing stage supporting further analyses. One of the planned approaches will comprise modified VAD (voice activity detection) algorithms. VAD algorithms are a critical part of speech processing, recognition, and coding systems. 
Their operation principle is to detect and separate fragments of silence (pauses) and regions containing speech. Attempts at respiratory pattern assessment using such algorithms can be found in the literature; however, the experiments were limited to signals recorded using a microphone [28,58,59]. We believe that modifying the VAD algorithms will allow their application to analyze the signals coming from the respiratory belt. Moreover, in future studies, we would like to follow the second aim by employing other techniques, such as the rough-fuzzy approach, to identify the best way to analyze the gathered data, as some of the parameters acquired need creating membership functions and fuzzification. Funding This study was supported by the Medical University of Gdańsk Grant ("Excellence Initiative-Research University") and the National Science Centre (Poland) MAESTRO UMO-2011/02/A/ NZ5/00329 grant (K. Narkiewicz, B. Graff). It is also funded by Gdańsk University of Technology within the Curium-Combating Coronavirus program implemented under the "Initiative of Excellence-Research University" (No. 034427.SARS). Declarations Ethics Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed Consent Informed consent was obtained from all individual participants included in the study. Conflict of Interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
2021-07-14T05:20:20.898Z
2021-07-10T00:00:00.000
{ "year": 2021, "sha1": "fc2f22866a1dca6345b34c108675e3a4e4b5f89e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12559-021-09908-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fc2f22866a1dca6345b34c108675e3a4e4b5f89e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
254046482
pes2o/s2orc
v3-fos-license
Immune micro-environment and drug analysis of peritoneal endometriosis based on epithelial-mesenchymal transition classification Background Epithelial-mesenchymal transition (EMT) is a complex event that drives polar epithelial cells transform from adherent cells to motile mesenchymal cells, in which are involved immune cells and stroma cells. EMT plays crucial roles in migration and invasion of endometriosis. The interaction of endometrial implants with the surrounding peritoneal micro-environment probably affects the development of peritoneal endometriosis. To date, very few studies have been carried out on peritoneal endometriosis sub-type classification and micro-environment analysis based on EMT. The purpose of this study is to investigate the potential application of EMT-based classification in precise diagnosis and treatment of peritoneal endometriosis. Method Based on EMT hallmark genes, 76 peritoneal endometriosis samples were classified into two clusters by consistent cluster classification. EMT scores, which calculated by Z score of 8 epithelial cell marker genes and 8 mesenchymal cell marker genes, were compared in two clusters. Then, immune scores and the abundances of corresponding immune cells, stroma scores and the abundances of corresponding stroma cells were analyzed by the “xCell” package. Futhermore, a diagnostic model was constructed based on 9 diagnostic markers which related to immune score and stroma score by Lasso-Logistic regression analysis. Finally, based on EMT classification, a total of 8 targeted drugs against two clusters were screened out by drug susceptibility analysis via “pRRophetic” package. Results Hallmark epithelial-mesenchymal transition was the mainly enriched pathway of differentially expressed genes between peritoneal endometriosis tissues and endometrium tissues. Compared with cluster 2, EMT score and the abundances of most infiltrating stroma cell were significantly higher, while the abundances of most infiltrating immune cells were dramatically less. The diagnostic model could accurately distinguish cluster 1 from cluster 2. Pathway analysis showed drug candidates targeting cluster 1 mainly act on the IGF-1 signaling pathway, and drug candidates targeting cluster 2 mainly block the EGFR signaling pathway. Conclusion In peritoneal endometriosis, EMT was probably promoted by stroma cell infiltration and inhibited by immune cell infiltration. Besides, our study highlighted the potential uses of the EMT classification in the precise diagnosis and treatment of peritoneal endometriosis. Introduction Endometriosis is characterized by the presence of normal endometrium (like stroma and glands) abnormally invaded in body parts other than the uterine cavity, which shares many characteristics with malignant tumour (1) (2). Although ectopic endometrial tissue can be implanted in any parts of body, abdominal cavity is one of the most frequently locations that endometriotic tissue implanted into, leading to peritoneal endometriosis (1)(2)(3)(4). Over the past decades, several systems have been proposed for endometriosis classification. The most widely accepted is American Society for Reproductive Medicine (rASRM) classification and the updated Enzian classification (Supplement to ASRM Classification) (5). 
However, the rASRM score has limitations in deep infiltrating endometriosis description and Enzian classification has not included peritoneal endometriosis classification (6), which is greatly limiting accurate diagnosis and treatment of peritoneal endometriosis. Epithelial-mesenchymal transition (EMT) lead to the increased motility via rearrangements of cellular contact junctions, loss of cell adhesion, apicobasal polarity and epithelial cell morphology, thus promoting lesion metastasis (7,8). In general, EMT of ectopic endometrial tissue is more active than that of eutopic endometrial tissue, which may be beneficial for migration and invasion of ectopic tissue (9). After endometrium attaches to peritoneum, endometrial epithelial cells also undergo EMT (10). Furthermore, the expressions of EMT induced transcription factors that may trigger EMT were significantly increased in deep endometriotic lesions than in eutopic endometrium (11,12). These indicate EMT is a factor contributing to progression of endometriosis. Classification based on EMT hallmarks has been widely used in diseases sub-classify (13,14), we supposed classification based on EMT also has a potential to be used on peritoneal endometriosis classification. Immune micro-environment affects EMT (15,16). Peritoneal endometriosis is markedly characterized by increased numbers of peritoneal macrophages and elevated concentrations of pro-inflammatory chemokines, which associated with endometriosis-related pain and infertility (17,18). Macrophages induced EMT in pancreatic cancer cells (19). And inflammatory mediators in retrograde menstrual fluid probably contribute to ectopic endometrial EMT in the presence of peritoneal hypoxia (20). Besides, in superficial peritoneal endometriosis, the migration and infiltration of peritoneal endometriotic tissue were also associated with the formation and differentiation of stroma cells, such as myofibroblasts and smooth muscles (SM)-like cells (21). All these made us curious about the differences in immune cell infiltration and stroma cell infiltration of peritoneal endometriosis classified based on EMT classification. Here, we classified peritoneal endometriosis into two clusters based on EMT hallmark genes by consistent cluster classification, which is suitable for diseases classification from the perspective of molecular (22,23) Then, we compared the immune micro-environment and stroma cells infiltration of two clusters. What was more, based on EMT classification, we established a diagnostic model and screened potential drugs against different clusters. In conclusion, our study provided a potential strategy for peritoneal endometriosis diagnosis and treatment. Data collection The RNA sequencing dataset of 76 peritoneal endometriosis tissues and 37 endometrium tissues was fetched from GSE141549. The clinical information all subjects was provided in Supplementary Table 1. Another RNA sequencing dataset that containing 11 peritoneal endometriosis tissues and 11 endometrium tissues was GSE5108. The single cell RNA-seq dataset (ScRNA-Seq) of 8 peritoneal endometriosis tissues was fetched from GSE179640. All the above datasets were downloaded from GEO DataSet. EMT hallmark genes were referred from the HALLMARK_EPITHELIAL_ MESENCHYMAL_TRANSITION gene set in Molecular Signatures Database v7.5.1 (https://www.gsea-msigdb.org/gsea/ msigdb/). The data of msigdb.v7.4.entrez.gmt was downloaded from Gene Set Enrichment Analysis website (https://www.gseamsigdb.org/gsea/msigdb/). 
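The study does not specify a download tool for the GEO series listed above; as one hedged option, they can be retrieved programmatically, for example from Python with the GEOparse package. The sketch below is illustrative only and is not the workflow used by the authors.

```python
# Hedged sketch only: one way to fetch a GEO series from Python with GEOparse;
# the original study does not state which tool was used for the download.
import GEOparse

gse = GEOparse.get_GEO(geo="GSE141549", destdir="./geo_cache")   # downloads the SOFT file
print(gse.metadata.get("title"))
for name, gsm in list(gse.gsms.items())[:3]:                      # peek at the first few samples
    print(name, gsm.metadata.get("title"))
```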
Gene set enrichment analysis In order to explore potential mechanisms of EMT in peritoneal endometriosis, we performed gene set enrichment analysis (GSEA) on GSE141549 and GSE5108. First, logFC values of all genes between peritoneal endometriosis tissues and endometrium tissues were obtained with the "limma" package. Then, GSEA based on msigdb.v7.4.entrez.gmt was performed with the "clusterProfiler" package (24). Finally, the results were visualized with gseaplot2 of the "enrichplot" package (25). Consistent cluster analysis based on EMT To classify peritoneal endometriosis, we performed consistent clustering analysis on GSE141549 based on the 200 EMT hallmark genes using the "ConsensusClusterPlus" package (26). Samples were divided into two clusters according to the expression characteristics of the EMT hallmark genes. Single cell RNA-seq data analysis ScRNA-Seq analysis and visualization for GSE179640 were performed with the "Seurat" package (version 4.1.1) (27,28). Briefly, we removed low-quality cells with feature RNA < 500 or > 6000 and mitochondrial reads > 20%. Then, the top 2000 highly variable genes were selected after gene expression normalization. After gene expression integration, cells were clustered and two-dimensional visualization was performed using uniform manifold approximation and projection (UMAP). Clusters were annotated based on the average expression of marker genes for the major cell types, such as fibroblasts (the resulting annotation is shown in Figure 2). EMT score calculation To screen mesenchymal cell marker genes and epithelial cell marker genes for the EMT score of peritoneal endometriosis, we first referenced 8 epithelial cell marker genes (CD24, CDH1, DSP, EPCAM, FOLR1, KRT18, KRT19 and OCLN) and 14 mesenchymal cell marker genes (ACTA2, CD44, CDH2, FN1, ITGA5, MMP2, S100A4, SNAI2, TNC, TWIST1, VIM, WNT5A, ZEB1 and ZEB2) from the CellMarker website (http://xteam.xbio.top/CellMarker/). Then, we compared the expression of these genes in the epithelial cell cluster and the mesenchymal cell cluster (GSE179640). Finally, 8 epithelial genes and 8 mesenchymal genes were selected for the EMT score. The EMT score was calculated as the sum of the Z scores of the mesenchymal genes minus the sum of the Z scores of the epithelial genes (33). Calculation of immune score, stroma score, abundances of immune cells and stroma cells "xCell" provides a novel method to infer immune and stromal cell types, immune scores and stroma scores based on genetic characteristics (34). Here, we used the "xCell" package to analyze the relative abundance of immune cells and stroma cells, as well as the immune score and stroma score, in peritoneal endometriosis samples. Screening and functional enrichment analysis of differentially expressed genes In order to characterize the functional differences between the clusters, differentially expressed genes (DEGs) between cluster 1 and cluster 2 were screened using the "limma" package (adj. p. val < 0.05, |logFC| > 1) (35). Then, the DEGs were analyzed by Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment through the STRING website (https://cn.string-db.org/). By setting FDR < 0.05, the significant terms were selected and visualized with the "ggplot2" package (36). Weighted gene co-expression network analysis Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes and for summarizing such clusters using the module eigengene or an intramodular hub gene (37).
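Before turning to the network analysis in detail, the EMT score defined above can be written out as a short sketch. This is a Python/pandas analog of the original R workflow; the expression matrix below is a random placeholder, the epithelial gene list follows the Methods, and the 8-gene mesenchymal list anticipates the selection reported in the Results.

```python
# Sketch of the EMT score described above (Python/pandas analog of the R analysis).
# `expr` is a random placeholder log-expression matrix (genes x samples), not study data.
import numpy as np
import pandas as pd

epithelial = ["CD24", "CDH1", "DSP", "EPCAM", "FOLR1", "KRT18", "KRT19", "OCLN"]
mesenchymal = ["ACTA2", "CD44", "FN1", "S100A4", "TNC", "VIM", "ZEB1", "ZEB2"]  # 8-gene set per Results

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(16, 76)),
                    index=epithelial + mesenchymal,
                    columns=[f"sample_{i}" for i in range(76)])

# per-gene Z score across samples
z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)

# EMT score = sum of mesenchymal Z scores minus sum of epithelial Z scores (one value per sample)
emt_score = z.loc[mesenchymal].sum(axis=0) - z.loc[epithelial].sum(axis=0)
print(emt_score.head())
```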
To identify immune score or stroma score associated modules or genes, the "WGCNA" package was used to construct the co-expression network analysis of the mRNA expression matrix of DEGs. samples were clustered according to pearson's correlation analysis and the outliers were removed. The soft thresholding parameter (b) was selected when the scale free topology model fit > 0.85. Afterward, the adjacency matrix was transformed into a topological overlap matrix (TOM) and genes were assigned to different gene modules according to dissimilarity matrix (1-TOM). Similar dynamic modules were merged when coefficient of dissimilarity< 0.2. Pearson correlation analysis was performed to identify the module with the strongest association with immune score and stroma score. The module eigengenes related to immune score or stroma score were selected with gene significance (GS) > 0.55 and module membership (MM) > 0.85, respectively. Lasso-logistics regression We extracted expression matrix of immune score and stroma score related genes from GSE141549. Then, 76 samples in this expression matrix were randomly divided into training dataset and test dataset in a ratio of 1:1. In the training dataset, the Lasso-Logisitic regression analysis was performed based on the classification information of cluster 1 and cluster 2 using the "glmnet" package (38). The diagnostic markers were screen and a diagnostic model was built. Furthermore, the diagnostic model was validated in the test dataset. The ROC curves were plotted using the "ROCR" package and AUC value was calculated (39). Drug susceptibility analysis The "pRRophetic" package was used to analyze the half maximal inhibitory concentration (IC50) of 251 drugs (40). Then, the drug candidates for cluster 1 or cluster 2 were screened by setting the adj. p. val<0.05. Statistics of data All statistical analyses performed in our study were conducted in R studio (version 4.1.2). Comparisons of mRNA expression were analyzed by Wilcoxon test. All correlation analyses were performed by Pearson correlation analysis using the "corrplot" package (41). Differences were significant when P < 0.05. Results The classification based on the EMT hallmark genes Results of GSEA on deferences gene expression between peritoneal endometriosis tissues and endometrium tissues of both GSE141549 and GSE5108 showed that hallmark epithelialmesenchymal transition (EMT) was the mainly enriched pathway (Figures 1A, B). In order to analysis peritoneal endometriosis from the perspective of EMT, we performed consistent clustering analysis on GSE141549 that containing 76 patients with peritoneal endometriosis based on the EMT hallmark genes. Samples could be clearly divided into cluster 1 (n = 34) and cluster 2 (n = 42) (Figures 1C-E). EMT score comparison between cluster 1 and cluster 2 In order to calculate the EMT score of peritoneal endometriosis tissues, we performed ScRNA-Seq analysis on GSE179640 for selecting marker genes of epithelial cells and mesenchymal cells. The entire cell population was categorised into 18 major cell clusters. All cell clusters were identified as 11 cell types, consist of fbroblasts cells, macrophages/monocytes, endothelial cells, epithelial cells, other T cells, mesenchymal cells, CD8+ T cells, DC, mast cells, NK/neutrophils and unknown based on expression of markers (Figures 2A, B). We compared the expression of 8 epithelial cell marker genes and 14 mesenchymal cell marker genes in both epithelial cells and mesenchymal cells. 
Results showed that the expression of the 8 epithelial cell marker genes (CD24, CDH1, DSP, EPCAM, FOLR1, KRT18, KRT19 and OCLN) was significantly higher in epithelial cells than in mesenchymal cells, while 8 mesenchymal cell marker genes (ACTA2, CD44, FN1, S100A4, TNC, VIM, ZEB1 and ZEB2) showed the opposite pattern ( Figure 2C). Hence, we selected these 16 genes as the marker genes for the EMT score. Then, the EMT score of peritoneal endometriosis (GSE141549) based on the Z scores of these marker genes was calculated. Results showed that the EMT score of cluster 1 was significantly higher than that of cluster 2 (p< 0.0001) ( Figure 2D), indicating that EMT is much more pronounced in cluster 1 than in cluster 2. Screening and functional enrichment analysis of the differential genes between cluster 1 and cluster 2 In order to explore the differences between cluster 1 and cluster 2 comprehensively, we analyzed the DEGs between cluster 1 and cluster 2. Results showed there were 95 up-regulated genes and 57 down-regulated genes in cluster 1 compared with cluster 2 (Figures 3A, B). Pathway enrichment indicated that the mainly enriched biological process (BP) terms were Regulation of midbrain dopaminergic neuron differentiation and Negative regulation of smooth muscle cell matrix adhesion, the mainly enriched molecular function (MF) terms were Chemokine activity and CCR chemokine receptor binding, and the mainly enriched cellular component (CC) terms were Z disc, Stress fiber and Dystrophin-associated glycoprotein complex ( Figure 3C). The mainly enriched KEGG pathways were Cytokine-cytokine receptor interaction, Chemokine signaling, Toll-like receptor signaling pathway and NF-kB signaling pathway ( Figure 3D). Thus, the DEGs between cluster 1 and cluster 2 were mainly involved in chemokine signaling pathways, including inflammatory pathways (the Toll-like receptor and NF-kappa B pathways). Screening of genes related to immune score and stroma score Given that the functional differences between cluster 1 and cluster 2 were mainly enriched in chemotaxis and inflammatory responses, we further analyzed the immune micro-environment. The immune score of cluster 1 was significantly lower than that of cluster 2 (p<0.01), while the stroma score was dramatically higher than that of cluster 2 (p<0.0001) ( Figure 4A). Furthermore, we selected immune score-related and stroma score-related genes by WGCNA. Four modules were identified when the Diss Thres was set as 0.2 after merging dynamic modules, as shown in the clustering dendrograms ( Figure 4B). The brown module and turquoise module were associated with immune score and stroma score, respectively ( Figure 4C). Finally, 7 immune score-related genes were selected by setting GS > 0.55 and MM > 0.85 ( Figure 4D). Results showed that the expression of all 7 immune score-related genes in cluster 1 was significantly lower than in cluster 2 (p<0.0001) ( Figure 4E). All 7 genes were significantly positively correlated with the immune score (p<0.0001) ( Figure 4F). Similarly, 14 stroma score-related genes were selected, and the expression of these 14 genes in cluster 1 was remarkably higher than in cluster 2 (p<0.05) (Figures 4G, H). All 14 stroma score-related genes were significantly positively correlated with the stroma score (p<0.05) ( Figure 4I). In conclusion, immune cell infiltration in cluster 1 was significantly lower than in cluster 2, while stroma cell infiltration in cluster 1 was remarkably higher.
We speculated that, in peritoneal endometriosis lesions, high infiltration of immune cells inhibited the progression of EMT, while high infiltration of stroma cell contributes to EMT. The abundances of immune cells and stroma cells Given the significant differences in immune score and stroma score between the two clusters, we further analyzed the abundances of immune cells and stroma cells. Results showed that the abundances of 12 kinds of immune cells, namely DC cells, iDC cells, Monocytes, Macrophages, M1 Macrophages, M2 Macrophages, Basophils, Th1 cells, Th2 cells, CD4+ Tem cells, B cells and memory B cells,were significantly lower in cluster 1 than that of cluster 2 (p< 0.05) ( Figure 5A). Correlation analysis showed the abundances of above 12 kinds of immune cells were almost remarkably positively correlated with the expression of all 7 immune score-related genes (p< 0.05) ( Figure 5B). This indicated that immune cells in cluster 2 were more active than those in cluster 1. And immune cells were positively regulated by The screening of genes that related to Immune score and stroma score. immune score-related genes. Additionally, the abundance of Epithelial cells, Keratinocytes and Osteoblasts in cluster 1 were significantly lower than those in cluster 2 (p<0.05) ( Figure 5C) and had significantly negative correlations with the stromarelated genes ( Figure 5D). While the abundances of Fibroblasts, ly Endothelial cells, Myocytes, Chondrocytes and Skeletal muscle cells were significantly higher in cluster 1 than that of cluster 2 (p<0.05) ( Figure 5C) had significantly positive correlations with the whole stroma-related genes ( Figure 5D). In addition, abundances of Adipocytes and Smooth muscle cells were also higher in cluster 1. The epithelial cell abundance of cluster 1 was lower, which consistent with the EMT score ( Figure 2D). It was suggested that the increased abundance of Fibroblasts, ly Endothelial cells, Skeletal muscle cells and Smooth muscle cells probably contribute to EMT in peritoneal endometriosis. Construction of the diagnostic model To construct a diagnostic model, diagnostic markers were screened from immune score-related genes and stroma scorerelated genes by lasso-logistic regression analysis in the training dataset. The minimum binomial deviance was obtained when log(l) was -5.773583, and 9 genes were selected as diagnostic markers ( Figure 6A). The coefficients of TMEM47 and FRZB were larger than the other 7 genes ( Figure 6B) (Supplementary Table 2). A diagnostic model was constructed with the following formula: The ROC analysis showed that the AUC of the training dataset was 0.955 when the cut-off value of the cd-score was -36.070 ( Figure 6C). Sample was classified as cluster 1 when the cd-score was less than or equal to the cut-off value, otherwise sample was classified as cluster 2. According to the cut-off of the training dataset, the AUC of the test dataset was 0.862 ( Figure 6D). Additionally, cd score was significantly negatively correlated with EMT score in training dataset, test dataset and entire dataset ( Figure 6E-G). Therefore, the diagnostic model constructed from these 9 genes and their coefficients had high specificity and sensitivity. Candidate drug screening Based on the clusters classified by EMT hallmark genes, drug susceptibility was analyzed. 
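Before the drug-screening results below, the diagnostic-model construction described above can be sketched with an L1-penalized logistic regression. This is a hedged Python analog of the glmnet/ROCR workflow used in the paper; the data, the number of candidate genes, and the regularization strength are placeholders, and the reported cut-off of −36.070 belongs to the model fitted in the paper, not to this sketch.

```python
# Hedged sketch of an L1-penalized (Lasso-like) logistic diagnostic model with scikit-learn.
# X (samples x candidate genes) and labels (0 = cluster 1, 1 = cluster 2) are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 21))                       # placeholder expression of 21 candidate genes
labels = rng.integers(0, 2, size=76)                # placeholder cluster labels

# 1:1 split into training and test datasets, as in the paper
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.5, random_state=0)

# The L1 penalty shrinks uninformative coefficients to zero, mimicking Lasso marker selection;
# the regularization strength C is illustrative, not the value used in the paper.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_train, y_train)

selected = np.flatnonzero(model.coef_[0])            # genes retained as diagnostic markers
score = X_test @ model.coef_[0] + model.intercept_[0]  # linear score analogous to the cd-score
print("retained markers:", selected, "test AUC:", round(roc_auc_score(y_test, score), 3))
```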
In the training dataset, the IC50 of BMS-754807 and Lisitinib in cluster 1 was significantly lower than that in cluster 2 (p<0.05), while the IC50 of Methotrexate, Gefitinib, Veliparib, GW 4441756, CCT007093 and Temozolomide in cluster 1 were remarkably higher in cluster 2 (p<0.0001) ( Figure 7A). The drug susceptibility trends of all candidate drugs in the test dataset were consistent with that in the training dataset ( Figure 7B). Then, we classified the dataset into cluster 1 and cluster 2 by the diagnostic model we established. Except for GW 441756, the susceptibility trends of all candidate drugs in the test dataset predicted by the above diagnostic model were also consistent with the training dataset. ( Figure 7C). Results showed BMS-754807 and Lisitinib were more sensitive for cluster 1, while Methotrexate, Gefitinib, Veliparib, CCT007093 and Temozolomide were more sensitive for cluster 2. It was suggested that the diagnostic classification models we established can be used for drug screening. Discussion Over decades, endometriosis classified traditionally based on lesion appearance, pelvic adhesions, or/and anatomic location of disease (42), but none of the current classification systems classify peritoneal endometriosis from molecular perspective. Here, we classified peritoneal endometriosis into two cluster based on EMT hallmark genes and found EMT scores of cluster 1 was significantly higher than cluster 2. What was more, we also found EMT in peritoneal endometriosis was related with both immune cell infiltration and stroma cell infiltration. In addition, based on immune score-related genes and stroma score-related genes, we established a diagnostic model and screened candidate drugs. Our study provided new ideas for classification, diagnosis and treatment of peritoneal endometriosis. EMT is involved in the process of endometriosis. The migration and invasion abilities of endometrial stromal cells enhanced by facilitated EMT, and conversely inhibited EMTrelated proteins reduced the volume and weight of endometriotic lesions in mice model (43)(44)(45). In pathological and physiological EMT, both stroma cells and immune cells are involved (46)(47)(48). Researches concerning stroma cell involve in EMT are not rare. Adipocytes promote EMT progression by reducing epithelial cell characteristics or inducing EMT-related phenotypes and thus promote tumor invasiveness (49, 50). Ly endothelial cells mediate the preferential migration of cells that undergoing EMT to lymphatic vessels by secreted pro-inflammatory cytokines (51). Chemokines promote pulmonary fibrosis by promoting EMT (52). EMT induced tissue fibrosis, which probably stimulate the production of fibroblasts in (53). Here we found not only the stroma score but also the abundances of most infiltrating stroma cells were significantly higher in cluster 1 than these in cluster 2, including fibroblasts, adipocytes, ly endothelial cells, chondrocytes, skeletal muscle cells and smooth muscle cells. We proposed that the infiltration of stroma cells probably contribute to EMT in peritoneal endometriosis. Besides, T and B cells, DC cells and tumor-associated macrophages that present in the tumor micro-environment induce EMT (54). Macrophages may induce pathological EMT of epithelial cells in a denomyosis (55). EMT is strongly associated with a highly immunosuppressive environment (15). 
We found the immune score was significantly lower in cluster 1 in than that in cluster 2, while the abundances of all infiltrating immune cells were significantly higher in cluster 2 than that in cluster 1, particularly macrophages, DC cells, CD4 +T cells and B cells. Here, we proposed immune cell infiltration possibly inhibited the EMT of peritoneal endometriosis, especially macrophages, DC cells, CD4+T cells and B cells. Therefore, EMT classification is meaningful for peritoneal endometriosis accurate diagnosis and treatment. Additionally, stroma score-and immune score-related genes possibably participate in stromal cells and immune cells infiltration. Aoc3 is an endothelial adhesion molecule that contributes to the extravasation of neutrophils, macrophages, and lymphocytes to sites of inflammation (56). CASQ2 is a calcium binding protein that stores calcium for muscle function (57). FRZB is involved in the regulation of chondrocytes development (58). MGP is a vitamin K-dependent protein, which is synthesized in bone and many other mesenchymal cells, which is also highly expressed by vascular smooth muscle cells (VSMCs) and chondrocytes (59). CCL3 and CCL3L3 are chemokines that produced by macrophage and monocyte respectively (60,61). Ifi30 is an IFN-g-inducible protein that is involved in MHC class II-restricted antigen processing and MHC class I-restricted cross-presentation pathways of adaptive immunity (62). Therefore, it was suggested that these genes regulate stroma cells and immune cellsinfiltration in peritoneal endometrisis. To date, drugs treatment for endometriosis are mainly based on hormone regulation and inflammation inhibition, rarely concerning EMT. Here, based on EMT classification, we selected 2 candidate drugs for cluster 1 and 6 candidate drugs for cluster 2. As for cluster 2 drugs, Methotrexate blocks tumor cell proliferation mainly through the inhibition of dihydrofolate reductase (DHFR), which is also an immunosuppression (63). Gefitinib is a small molecule inhibitor of epidermal growth factor receptor (EGFR) tyrosine kinase (64). Veliparib is an inhibitor of PARP1 and PARP2 (65). GW 4441756 is a selective TrkA (NTRK1) inhibitor. CCT007093 is an inhibitor of protein phosphatase 1D (PPM1D Wip1) (66). Temozolomide reduces The comparison of drug sensitivity between cluster 1 and cluster 2. the proliferative activity of tumor cells (67). Pathway enrichment analysis found that drugs for cluster 2 mainly acted on the EGFR signaling pathway (Supplementary Figure 1). And restraining EGFR pathway can inhibit EMT progression (68,69). Among drugs for cluster 1, BMS-754807 is a potent small molecule inhibitor of IGF-1R/IR family kinases. Lisitinib is a dual inhibitor of IGF-1 and insulin receptor (IR) (70). IGF-1 is expressed in ectopic endometrial stroma cells (71). In addition, IGF-1 concentration in peritoneal fluid of patients with endometriosis are significantly higher than that of normal controls (72,73). On the other hand, the peritoneal mesothelial cells with insufficient IGF-1R expression had lower migration ability and higher adhesion ability (74). In addition, inhibitors of IGF-1R hinder the growth of ectopic lesions and reverses the pain behavior in mice model (71,73). It was indicated that inhibition of insulin-like growth factor pathway was crucial for the treatment for cluster 1. Of course, drugs we screened needed to be further validated. In conclusion, we classified peritoneal endometriosis based on EMT. 
Then, we constructed diagnostic models based on the screened genes and performed drug screening. This will provide a new strategy for the precise diagnosis and medicine of peritoneal endometriosis. Data availability statement Publicly available datasets were analyzed in this study. This data can be found here: https://www.ncbi.nlm.nih.gov/geo/ query/acc.cgi?acc=GSE141549. Ethics statement Ethical review and approval was not required for the animal study because Our study is based on sequencing data downloaded from the GEO database. Author contributions JT and JW collected the research data and checked the data analysis. MY directed data analysis. QQ analyzed the data and wrote the draft. All authors contributed to the article and approved the submitted version. Funding We acknowledge the PhD workstation of Guangdong Provincial Reproductive Science Institute (Guangdong Provincial Fertility Hospital) for funding support (NO.: BS202201).
2022-11-29T14:08:25.521Z
2022-11-29T00:00:00.000
{ "year": 2022, "sha1": "dba19048ca0e92aa14f39e99f6938c8eb9492577", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "dba19048ca0e92aa14f39e99f6938c8eb9492577", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
257128542
pes2o/s2orc
v3-fos-license
Back to Basics: A Simplified Improvement to Multiple Displacement Amplification for Microbial Single-Cell Genomics Microbial single-cell genomics (SCG) provides access to the genomes of rare and uncultured microorganisms and is a complementary method to metagenomics. Due to the femtogram-levels of DNA in a single microbial cell, sequencing the genome requires whole genome amplification (WGA) as a preliminary step. However, the most common WGA method, multiple displacement amplification (MDA), is known to be costly and biased against specific genomic regions, preventing high-throughput applications and resulting in uneven genome coverage. Thus, obtaining high-quality genomes from many taxa, especially minority members of microbial communities, becomes difficult. Here, we present a volume reduction approach that significantly reduces costs while improving genome coverage and uniformity of DNA amplification products in standard 384-well plates. Our results demonstrate that further volume reduction in specialized and complex setups (e.g., microfluidic chips) is likely unnecessary to obtain higher-quality microbial genomes. This volume reduction method makes SCG more feasible for future studies, thus helping to broaden our knowledge on the diversity and function of understudied and uncharacterized microorganisms in the environment. Introduction The vast majority of bacteria and archaea remain understudied since they have not yet been successfully cultured; thus, their genomes, metabolic potential, and functions in the environment remain unknown [1][2][3][4]. We refer to these microorganisms as microbial dark matter (MDM) [5]. Within MDM hide potentially novel and important solutions for sustainable energy, bioremediation of contaminated environments, and the war against rising antibiotic resistance [6][7][8][9][10]. The use of culture-independent methods to study microorganisms, such as metagenomics, has significantly advanced our understanding of MDM. However, metagenomics still struggles to reliably assemble true, individual genomes due to strain variations and misattribution of sequences to the wrong genomes [11,12]. Furthermore, highly repetitive sequences like those found in CRISPR regions [13,14] are often not accurately assembled and 16S rRNA sequences, as well as mobile genetic elements such as plasmids, are often not attributed to their host organisms [15,16]. As a result, insights into evolutionary mechanisms, like horizontal gene transfer, are lost. Therefore, single-cell genomics (SCG) was developed as a complementary tool to enable the analysis of individual cells, thereby expanding our knowledge of MDM taxa [17][18][19]. In general, a microbial SCG workflow ( Figure 1) involves (A) sample collection and preservation, (B) specific or non-specific cell staining, (C) cell sorting, (D) cell lysis, (E) whole genome amplification (WGA), and (F,G) genome sequencing and analysis [17,18]. The WGA step is crucial for generating a sufficient amount of input DNA for library Cells are typically stained with a non-specific fluorescent dye, such as DAPI or SYBR ® Green, but they can also be specifically labeled, e.g., with fluorescence in situ hybridization. (C) Fluorescence activated cell sorting (FACS) is the most common choice for physical isolation of a single cell into multi-well plates (D). Once isolated, the single cells are lysed, typically with a combination of alkaline buffer and freeze-thaw cycling, to release the DNA from the cell. 
(E) Whole genome amplification (WGA) is required to generate sufficient amounts of DNA for library preparation since a typical prokaryotic cell only contains a few femtograms of DNA. (F) Once DNA libraries are prepared, short- and/or long-read sequencing platforms such as Illumina ® and Oxford Nanopore Technologies ® , respectively, can be employed. (G) Finally, bioinformatics is utilized to conduct the quality assessment, assembly, classification, ORF calling, and annotation of the sequences. Created with BioRender.com. Modified from Kaster & Sobol (2020) [17]. Unfortunately, MDA also constitutes one of the major limitations in single-cell sequencing due to its high costs (Table 1), as well as its bias against high GC regions, which leads to uneven genome amplification [29][30][31]. Furthermore, artifacts like chimeras and non-specific products can be produced and are thought to occur randomly since sequences that are over-represented in one MDA reaction can be under-represented in another [29,31]. However, some have found these effects to be reproducible due to the fact that a decreased template copy number increases bias and certain sequences are simply not amplified at all [25,29,30,32]. As a result, treatments such as post-amplification endonuclease degradation and post-amplification normalization by nuclease degradation of dsDNA have been used to reduce chimeric sequences [33] and highly abundant sequences [20], respectively. Other approaches have worked to improve MDA itself, such as WGA-X™, which uses a more thermostable phi29 polymerase for better amplification of high GC organisms [34] (Table 1, Appendix A, Figure A1A). However, lower genome coverage for organisms with a low GC content compared with standard MDA is reported. More recently, primary template-directed amplification (PTA) was developed, which employs exonuclease-resistant terminators to create smaller amplicons that undergo limited subsequent amplification to limit over-representation of random positions and reduce error propagation [35] (Table 1, Appendix A, Figure A1B). While this method looks promising to reduce amplification bias, the approach is still in the alpha testing stage for microorganisms (https://www.bioskryb.com/resolvedna-microbiome-alpha/ (accessed on 1 August 2022)) and quite expensive. The hybrid method, multiple annealing and looping based amplification cycles (MALBAC), combines PCR and MDA methods to successfully reduce amplification bias [36,37] (Table 1, Appendix A, Figure A1C). Yet, MALBAC remains widely unused in microbial SCG because the Bst and Taq polymerases have higher error rates, as they lack proofreading capability [38]. Thus, further work needs to be done to optimize MALBAC, possibly with phi29 or less error-prone enzymes [39]. Even though there is hope to reduce amplification bias in microbial WGA, statistically, inconsistency and bias among the DNA amplification of millions of templates will still persist [41]. In addition, WGA methods are highly sensitive to contamination due to the low amounts of DNA from a single cell. Prior decontamination of reagents with UV [42] can help to remove common reagent contaminants, but this does not prevent other sources of endogenous and/or exogenous contaminants, which become more amplified in larger WGA reaction volumes due to reduced polymerase specificity [43]. Therefore, through bioinformatics, contamination in SAGs needs to be analyzed and removed prior to downstream analysis [12]. Moreover, the large, recommended reaction volumes of these WGA methods also quickly become very costly when applied to high-throughput SCG (Table 1). These high costs limit the depth at which samples can be analyzed, preventing, for example, minority taxa from being captured with SCG. Therefore, a methodically simpler solution is to reduce WGA's reaction volume. Reduction of total WGA volume has been shown to increase the concentration of the template and lessens the chance of background contamination being amplified [43]. Furthermore, this approach also significantly reduces the high costs of WGA (Table 1). Previous studies have applied this approach at sub-nanoliter (nL) and picoliter (pL) volumes in microfluidic devices [38,40,[44][45][46][47][48], nanowells [49,50], planar surfaces [51,52], and hydrogels [53], which are compared in detail in Figure 2. Many of these approaches and their devices remain largely unused outside of their respective publications, likely because most microfluidic chips and other platforms are not commercially available; they require complex fabrication and operation [54,55], and are therefore hard to access and implement in other research groups. Commercially available options, such as 10× Genomics ® , BD Rhapsody TM , and Fluidigm ® C1, are costly, less flexible, and geared towards eukaryotic cells. Additionally, current droplet-based technologies sort based on Poisson distributions of cells, resulting in high unoccupancy and low cell recovery [56], which is not applicable for studies analyzing rare populations [57]. Other approaches such as the use of planar substrates require special care to avoid contamination and evaporation, while hydrogel matrices lack the throughput needed for microbial SCG (Figure 2). Hence, the establishment of a reliable and easy-to-use volume reduction method is needed to widen the accessibility and application of microbial SCG.
Many of these approaches and their devices remain largely unused outside of their respective publications, likely because most microfluidic chips and other platforms are not commercially available; they require complex fabrication and operation [54,55], and are therefore hard to access and implement in other research groups. Commercially available options, such as 10× genomics ® , BD Rhapsody TM , and Fluidigm ® C1 are costly, less flexible, and geared towards eukaryotic cells. Additionally, current droplet-based technologies sort based on Poisson distributions of cells, resulting in high unoccupancy and low cell recovery [56], which is not applicable for studies analyzing rare populations [57]. Other approaches such as the use of planar substrates require special care to avoid contamination and evaporation, while hydrogel matrices lack the throughput needed for microbial SCG ( Figure 2). Hence, the establishment of a reliable and easy-to-use volume reduction method is needed to widen the accessibility and application of microbial SCG. Rhapsody TM , and Fluidigm ® C1 are costly, less flexible, and geared towards eukaryotic cells. Additionally, current droplet-based technologies sort based on Poisson distributions of cells, resulting in high unoccupancy and low cell recovery [56], which is not applicable for studies analyzing rare populations [57]. Other approaches such as the use of planar substrates require special care to avoid contamination and evaporation, while hydrogel matrices lack the throughput needed for microbial SCG ( Figure 2). Hence, the establishment of a reliable and easy-to-use volume reduction method is needed to widen the accessibility and application of microbial SCG. The color code indicates the relative advantage of a particular approach based on a given feature, from green (better advantage), through orange, to red (less advantage). Surprisingly, there is a lack of information on how bias can be simply reduced between the low-to sub-microliter range within standard 384-well plates. Reduction of standard MDA reaction volumes down to 1.2-2.0 μL have been previously reported [18,58]; however, a systematic assessment of its effect on MDA bias and genome completeness has not yet been done before. Therefore, in this study, we compared the amplification bias in single-amplified genomes (SAGs) of Escherichia coli from 10 μL total MDA reaction volumes down to 0.5 μL using novel acoustic liquid dispensing technology developed by Dispendix GmbH (https://www.dispendix.com/ (accessed on 1 August 2022)). Our results indicated that an MDA reaction volume of 1.25 μL is the "sweet-spot" for significantly reducing amplification bias and increasing assembly coverage up to almost 90%, offering an easily accessible approach for future SCG studies to improve WGA in a cost-effective manner. (E) Hydrogels. The color code indicates the relative advantage of a particular approach based on a given feature, from green (better advantage), through orange, to red (less advantage). [54,55,57], as well as from our own experiences with droplet microfluidics, microwells, and planar substrates. Made with Biorender.com. Surprisingly, there is a lack of information on how bias can be simply reduced between the low-to sub-microliter range within standard 384-well plates. Reduction of standard MDA reaction volumes down to 1.2-2.0 µL have been previously reported [18,58]; however, a systematic assessment of its effect on MDA bias and genome completeness has not yet been done before. 
Therefore, in this study, we compared the amplification bias in singleamplified genomes (SAGs) of Escherichia coli from 10 µL total MDA reaction volumes down to 0.5 µL using novel acoustic liquid dispensing technology developed by Dispendix GmbH (https://www.dispendix.com/ (accessed on 1 August 2022)). Our results indicated that an MDA reaction volume of 1.25 µL is the "sweet-spot" for significantly reducing amplification bias and increasing assembly coverage up to almost 90%, offering an easily accessible approach for future SCG studies to improve WGA in a cost-effective manner. Results and Discussion Previous studies show that volume reduction improves polymerase specificity through "molecular crowding" [59,60]. Molecular crowding reduces competition between amplification of the template and contamination by increasing the probability that polymerase and primers bind to template DNA and reducing spurious binding [59,60]. Moreover, lower reaction volumes reduce the amount of surface area for nonspecific adsorption of nucleic acids to the multi-well plate walls [61,62]. However, too much crowding can also cause adverse effects by causing sterical hinderance and reducing the polymerase from accessing the template [63,64]. Here, we sorted single E. coli cells into 384-well plates to compare SAG amplification bias within total MDA microliter and sub-microliter reactions for the first time. In contrast, previous studies have largely examined volume reduction in the sub-nanoliter to picoliter range [38,44,45,47,[49][50][51]65]. MDAs with total reaction volumes of 0.5, 0.8, 1.0, 1.25, 5.0, and 10 µL were conducted in 384-well plates (Appendix A, Table A1). The smallest-sized MDA reaction, 0.5 µL, did not work and the amplification success rate for the 0.8 and 1.0 µL reactions was only 68.75% and 62.50%, respectively. In comparison, the success rate for the 1.25 µL MDA reactions was 87.50%, whereas both the 5.0 and 10 µL MDA reaction volumes had a success rate of 100%. The lower success rates in the lower MDA reaction volumes was likely due to evaporation and/or sterical hinderance of the polymerase in the small volumes [63,64]. On average, the time that it took for the amplification to reach the detection threshold (indicated as Cq; quantification cycle) was earliest for the 1.25 µL MDA reaction volumes ( Figure 3A). Previous studies have reported that earlier Cq values indicated higher genome recovery success and quality [34,66]. Additionally, our detected relative fluorescence (RFU) endpoints and DNA yields from the successful reactions decreased as reaction volumes decreased ( Figure 3A,B), initially indicating that volume reduction likely limited the exponential nature of MDA [38,45], which should improve genome coverage and uniformity. To further compare the quality of the WGA reactions, a total of five amplified replicates for each different MDA reaction volume were chosen based on their Cq and RFU values, then subjected to Illumina sequencing using equal amounts of DNA (Appendix A, Table A2). There was a significant difference in reads lost during read trimming between the different MDA reaction volumes ( Figure 4A, p = 0.0002 Appendix A, Table A3). On average, the 1.25 µL sized MDA reactions lost significantly fewer reads to read quality trimming compared with all other reaction volumes (p ≤ 0.05). 
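The statistical tests behind these p-values are specified elsewhere in the paper; purely as an illustration of how such group comparisons across reaction volumes can be screened, a sketch with placeholder numbers is given below.

```python
# Illustrative only: a nonparametric comparison of per-sample read loss across MDA volumes.
# All numbers are hypothetical placeholders, not data from this study.
from scipy.stats import kruskal, mannwhitneyu

loss = {
    "0.8 uL": [12.1, 10.4, 11.8, 13.0, 12.5],
    "1.0 uL": [11.2, 10.9, 12.4, 11.7, 10.8],
    "1.25 uL": [6.3, 5.9, 7.1, 6.8, 6.0],
    "5 uL": [14.2, 13.5, 15.1, 12.9, 14.8],
    "10 uL": [15.0, 16.2, 14.4, 15.7, 13.9],
}

stat, p = kruskal(*loss.values())                       # overall difference between volumes
print(f"Kruskal-Wallis p = {p:.4f}")
u, p_pair = mannwhitneyu(loss["1.25 uL"], loss["10 uL"], alternative="two-sided")
print(f"1.25 uL vs 10 uL: p = {p_pair:.4f}")            # example pairwise comparison
```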
After read trimming, all samples were normalized to a 200× sequencing depth before further read processing steps to ensure a fair comparison between the mapping and assembly quality of the different reaction volumes. After depth normalization, the number of duplicated reads was, on average, greater in larger-sized volume reactions (Figure 4B), but the difference between all reactions of the different volumes was not found to be significant (p = 0.0870, Appendix A, Table A3). While some amount of read duplication inevitably results from MDA's exponential amplification nature, comparisons of the percent duplicates between samples could still provide insight into the specificity of the amplification itself. A higher number of duplicates can be caused by the lower template specificity in large MDA reactions causing more spurious priming and amplification [40,51], especially when template concentrations are very low [67]. Furthermore, the issue of lower template specificity also explains why there was an observed trend that larger reaction volumes had more contaminant reads removed after filtering than the smaller reactions (Figure 4C). Lower specificity, leading to more contamination, is likely due to the increased competition between background contamination and the E. coli single-cell DNA [40,49]. This increase in contamination was also reflected in the higher amplification gain and product yield mentioned previously (Figure 3A,B), which other studies reported as well [45,47,49]. In general, we also observed that 5 and 10 µL MDA reaction volumes gave less consistent results, as evidenced by larger variation between replicates (Figure 4). As a consequence of lower template specificity, the MDA reaction volumes above 1.25 µL also performed worse during read mapping to the reference E. coli MG1655 genome, as indicated by genome coverage breadth and coverage uniformity (Figure 5A,B). MDA in 0.8 and 1.0 µL reaction volumes also resulted in low coverage breadth and uniformity, and a reaction volume of 1.25 µL was therefore determined as the "sweet-spot" for improved MDA in 384-well plates. Likely, the 0.8 and 1.0 µL reaction volumes were simply too low, causing too much molecular crowding, sterically hindering the polymerase from fully accessing the template DNA [63,64], and/or there was too much evaporation. Reduced genome coverage was also recently reported for MDA reaction volumes below 150 nL on a microfluidic system [44], suggesting that platforms independently have a specified "sweet-spot" for efficient MDA.
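Coverage breadth and mean depth, as used in the comparison above, can be derived directly from a per-base depth table such as the three-column output of samtools depth. The following is only a minimal illustrative Python sketch; the file name and the use of the MG1655 reference length are assumptions for the example, not part of the study's actual pipeline:

```python
import numpy as np

# Assumed constant for this illustration (not from the study)
GENOME_LENGTH = 4_641_652  # E. coli MG1655 reference length in bp

def breadth_and_depth(depth_file, genome_length=GENOME_LENGTH):
    """Coverage breadth (% of reference positions with >= 1 read) and mean depth
    from a 'samtools depth -a'-style table with columns: chrom, pos, depth."""
    depths = np.loadtxt(depth_file, usecols=2, dtype=np.int64)
    covered = np.count_nonzero(depths)           # positions covered by at least one read
    breadth = 100.0 * covered / genome_length    # coverage breadth in percent
    mean_depth = depths.sum() / genome_length    # average per-base sequencing depth
    return breadth, mean_depth

# Hypothetical usage:
# breadth, depth = breadth_and_depth("SAG_1.25uL_rep1.depth.tsv")
# print(f"breadth = {breadth:.1f}%, mean depth = {depth:.1f}x")
```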
On average, reads from 1.25 µL MDA reaction volumes covered 85 ± 13% of the E. coli genome, which was 19% to 40% more than the other-sized reactions (Figure 5A). This increase in coverage was a large improvement when compared to current, well-established methods like WGA-X™, which gives a reported ~36 ± 21% read coverage of E. coli in a standard 10 µL reaction [34]. When compared to 10 µL reactions in this study, we still noted approximately 19% greater coverage breadth than WGA-X™, even though we used 2 million fewer reads during read mapping. Likely, this difference can be attributed to the lysis modified specifically for E. coli herein. Furthermore, the average genome coverage in our study is ~45% greater than MDA performed in ~60 nL hydrogel reactions [53]. Here, the much lower coverage for E. coli could be due to the fact that the authors performed a second round of MDA, which has been shown to increase bias [38]. Our reported coverages are also well within range of those reported from a different nanoliter microfluidic method [38], as well as from picoliter droplet reactions [47], at the same sequencing depth (Appendix A, Figure A2). It should be mentioned that one other study reports ~15% greater coverage from MDA in nanoliter microwells when compared with our 1.25 µL average genome coverage at the same sequence depth (20×) (Appendix A, Figure A2) [49]; however, the authors only used three single E. coli cells for testing. To assess the uniformity of read coverage across the genome, reads were averaged into 10 kilo-base (kb) bins and their read depths plotted to visualize coverage depth for each reaction volume (Figure 5A). Especially in the larger volumes, more genome regions are not covered by any reads in comparison to MDA performed in 1.25 µL volumes. Furthermore, coverage depths were more uniform across the genome in 1.25 µL, as evidenced by Lorenz curves [68] showing a more equal distribution of reads covering all bases of the genome (Figure 5B). We further verified this by calculating the Gini index of each sample, which is a measure of deviation from uniformity ranging from 0 (perfectly uniform distribution) to 1 (extremely uneven distribution) [69]. The Gini index differs significantly between different reaction volumes (p = 0.0176, Appendix A, Table A4), and is lowest for 1.25 µL reactions (~0.71 ± 0.07, Appendix A, Table A4). These levels of uniformity are similar to those obtained from E. coli in 150 nL microfluidic MDA reaction volumes [38] and hydrogels [53]. Next, we assembled and compared SAGs for all replicates. Prior to assembly, read depths were normalized due to the large differences introduced via MDA, setting a target depth of 100×. However, MDA reaction volumes less than and greater than 1.25 µL resulted in lower final sequence depths due to the fact that more reads were lost during the read pre-processing steps (Figure 6A). Therefore, the resulting assemblies were of lower quality compared with assemblies from 1.25 µL MDA reaction volumes (Figure 6B,C). Specifically, 1.25 µL reactions had the longest average total length and N50, at 3,522,851 bp and 46,179 bp, respectively (Figure 6B,C). N50 constitutes the sequence length of the shortest contig representing 50% of the assembly's total sequence length and indicates that assemblies from 1.25 µL reaction volumes were more contiguous, resulting in higher quality assemblies than the other MDAs.
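The two summary metrics used just above — the Gini index of binned read depths and the assembly N50 — are easy to reproduce from first principles. The study computed them with the ineq R package and QUAST; the snippet below is only an illustrative Python sketch with made-up input values:

```python
import numpy as np

def gini_index(bin_depths):
    """Gini index of per-bin read depths: 0 = perfectly uniform coverage,
    1 = extremely uneven coverage (all reads falling into a single bin)."""
    d = np.sort(np.asarray(bin_depths, dtype=float))
    n = d.size
    total = d.sum()
    if total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Standard rank-based formula, equivalent to 1 - 2 * area under the Lorenz curve
    return (2.0 * np.sum(ranks * d)) / (n * total) - (n + 1.0) / n

def n50(contig_lengths):
    """Length of the shortest contig at which 50% of the total assembly length is reached."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0

# Hypothetical example values (not data from the study)
depths = [0, 0, 12, 85, 110, 95, 300, 40, 60, 75]   # read depth per 10 kb bin
contigs = [120_000, 80_000, 46_000, 30_000, 5_000]   # contig lengths in bp
print(round(gini_index(depths), 3))
print(n50(contigs))
```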
Next, assembly coverage and completeness were calculated. The difference between these two measurements is that coverage is calculated as the percentage of the assembly (contigs) mapped to the reference genome [70], whereas genome completeness was estimated by MDMcleaner as the presence of marker genes such as small subunit (SSU) rRNA genes, large subunit (LSU) rRNA genes, universal bacterial/archaeal protein-coding marker genes, total coding sequences (CDS), and tRNA genes [12]. In general, the assembly coverage (p = 0.0199) and completeness (p = 0.0128) both significantly differed between the different-sized reactions (Appendix A, Table A5). Not surprisingly, coverage and completeness were highest for assemblies from 1.25 µL MDA reactions and were on average ~75 ± 14% and 94 ± 0.04%, respectively, while contamination was lowest (Figure 6D-F). Three out of five 1.25 µL MDA reaction replicates even achieved over 75% coverage, with the highest being 89.5% (Appendix A, Table A5). Comparatively, WGA-X™ reported E. coli assembly coverages of <60%, even with ~5× more reads [34], whereas at 10 µL, our assembly coverages were found to be within the range of those reported from WGA-X™ in 10 µL reactions, highlighting how WGA-X™ could also benefit from further volume reduction. In comparison to other volume reduction approaches, our higher assembly coverages were within range of previously reported E. coli MDA coverages in pL droplets (88-91%) [47] and nL wells (88-94%) [49] at similar sequence depths.

Figure 6 (caption, partial): The total average length of the assemblies, (C) N50 average, the minimum contig length needed to support 50% of the genome assembly, and (D) the percent coverage of the assemblies across the E. coli MG1655 reference genome were all determined with QUAST [70]. (E) The completeness of the assembled genome and (F) percent of contaminated bases in the assemblies were determined by MDMcleaner [12]. The boxes' middle line represents the median, and the x represents the mean. Five replicates were used for calculation.
Overall, these results demonstrate that MDA performed in 1.25 µL reaction volumes greatly improves this method by producing significantly less-biased, less-contaminated, and more complete SAGs than standard, larger reaction volumes. To assess the benefit of further volume reduction, we also tested the 0.5 µL MDA reaction volume in a droplet microarray (DMA) (Aquarray, Germany) since this reaction size did not work in 384-well plates (Figure 3). The DMA is a platform consisting of a glass slide with super-hydrophobic and hydrophilic patterning to create spots in which nanoliter-sized reactions can take place [71,72]. To prevent evaporation during six hours of MDA, the DMA was placed in a humidity chamber [73] and 5% glycerol was added to the MDA master mix. However, these tests were not successful. Recently, the DMA was used to synthesize cDNA from single HeLa cells [73]; however, the cDNA only spent approximately one hour on the DMA versus six hours needed for MDA, and amplification was performed off-chip. Therefore, we attribute our failed MDAs on the DMA to evaporation and/or steric hindrance of the polymerase [63,64]. Still, further volume reduction could possibly increase genome coverage by ~12-14% [47,49], but the reproducibility of these picoliter and nanoliter approaches is uncertain since few approaches and their results have been validated outside the original study. This is because microfluidic, droplet, and other volume reduction approaches are not as easily accessible or easy to use in other groups, and many are not high-throughput. Additionally, because DNA yield is limited in smaller volumes, some studies have had to perform two rounds of MDA to generate sufficient amounts of products for library preparation [40,53]. However, library preparation input requirements have decreased from µg to pg in the last few years [74], so lower DNA yield is no longer much of an issue.

Bacterial Growth and Isolation

Escherichia coli K12 MG1655 (DSMZ 18039) was cultured in 1 mL of Luria Bertani (LB) broth at 30 °C and 750 rpm with the Thermomixer Comfort (Eppendorf, Hamburg, Germany) to the exponential growth phase (~4 h; OD600 of ~2.2-2.6). From this point forward, cells were processed in a UV-decontaminated ISO 4 cleanroom. Equipment and gloves were decontaminated with DNA AWAY (Thermo Fisher Scientific, Waltham, MA, USA). Consumables were UV treated for 1 h in a crosslinker and 1× PBS was UV treated for 6 h in a 254 nm shortwave ultraviolet crosslinker at 0.12 Joules/cm 2 (Analytik Jena GmbH, Jena, Germany). A BD FACSMelody (Becton Dickinson, Franklin Lakes, NJ, USA), fitted with a 100 µm nozzle and equipped with a 488 nm laser for excitation, was used to sort single cells. Cells were first diluted to approximately 10 6 cells mL −1 with sterile 1× PBS to ensure an event rate of <1000 events/s. Gates were defined on side-scatter (cell complexity) and forward-scatter (cell size). Cells were sorted in single-cell mode into 384-well plates (Bio-Rad, Hercules, CA, USA) containing no sorting buffer (i.e., dry sorting). Plates were sealed with Microseal B (Bio-Rad, Hercules, CA, USA) and stored at −80 °C.

Cell Lysis

Plates containing sorted cells were thawed and centrifuged at 4 °C for 5 min at 3000 rpm (Eppendorf, Germany). Preliminary results found that the REPLI-g Single Cell Kit (QIAGEN, Hilden, Germany) lysis buffer was too destructive for E. coli single cells; therefore, a modified lysis buffer from Stepanauskas et al. (2017) was used [34].
Cell lysis buffer (0.2 M KOH, 5 mM EDTA and 50 mM DTT) and neutralization buffer (1 M Tris-HCl, pH 4) were treated with UV for 10 min on an ice-water bath in a 254 nm shortwave ultraviolet crosslinker at 0.12 Joules/cm 2 (Analytik Jena GmbH, Jena, Germany) [42]. The lysis solution was then dispensed onto the cells and into wells containing no cells (negative controls) with an I.DOT mini (Dispendix, Stuttgart, Germany) non-contact liquid dispenser. The plate was incubated at 21 °C for 10 min and neutralized by the addition of an equal volume of neutralization buffer (1 M Tris-HCl, pH 4). The amount of lysis and neutralization buffer per MDA reaction can be found in Appendix A, Table A1.

Multiple Displacement Amplification (MDA)

Multiple displacement amplification (MDA) was performed with the REPLI-g Single Cell Kit (QIAGEN, Hilden, Germany). REPLI-g sc Reaction Buffer and Polymerase were combined in 0.2 mL DNase-, RNase-free PCR tubes (Biozym Scientific GmbH, Hessisch Oldendorf, Germany) and UV treated for 30 min on an ice-water bath in a 254 nm shortwave ultraviolet crosslinker at 0.12 Joules/cm 2 (Analytik Jena GmbH, Jena, Germany) [42]. Syto-13 (Invitrogen, Waltham, NJ, USA) was added to the master mix at a final concentration of 1 µM to monitor exponential DNA amplification. The REPLI-g master mix was then dispensed onto the lysed cells and negative controls with an I.DOT mini (Dispendix, Stuttgart, Germany) non-contact liquid dispenser so that the final MDA volumes were 0.5, 0.8, 1.0, 1.25, 5, and 10 µL. The MDAs were incubated for 6 h at 30 °C in a CFX-384 thermocycler (Bio-Rad, Hercules, CA, USA), then at 65 °C for 10 min to stop the amplification, and held at 4 °C. Amplified DNA was kept at −20 °C until used for library preparation.

Library Preparation and Sequencing

The following steps were performed under a UV-decontaminated Laminar Flow PCR workbench (STARLAB International GmbH, Hamburg, Germany), sterilized with DNA AWAY (Thermo Fisher Scientific, Waltham, MA, USA). Prior to library preparation, the amplified DNA was cleaned with DNA Clean & Concentrator-5 (Zymo Research, Irvine, CA, USA). DNA input for library preparation was normalized to 5.98 ng µL −1 . Libraries were prepared using the NEBNext® Ultra™ II FS DNA Library Prep Kit for Illumina (New England Biolabs (NEB), Ipswich, MA, USA), following the <100 ng input protocol. Fragmentation was set to 14 min and 7 PCR cycles were used. NEBNext® Multiplex Oligos for Illumina® were used for barcoding. Library concentration and size were quantified with the Qubit™ DNA HS assay (Life Technologies, Carlsbad, CA, USA) and a Bioanalyzer High Sensitivity DNA kit (Agilent, Santa Clara, CA, USA). The libraries were sequenced using an Illumina NextSeq 550 with the High Output Kit v2.5 300 Cycles (2 × 150 bp paired-end) (Illumina, San Diego, CA, USA). Prior to de novo assembly, the read coverage was normalized with bbnorm.sh setting target = 100 and min = 5 [76]. SPAdes v.3.15.5 was used as recommended for single cells, using the flag --sc for single-cell mode, k-mer lengths of 21 to 101 in 10-step increments, and the flag --careful to reduce the number of mismatches [78]. QUAST v.5.2.0 was used to assess assembly quality [70] and MDMcleaner v0.8.3 [12] was used to estimate SAG contamination and completeness. Statistical differences between sample quality, mapping, and assembly statistics were calculated using ANOVA: Single Factor with an alpha value of 0.05 in Microsoft Excel®.
For data not normally distributed, as determined by Shapiro-Wilk testing, the non-parametric Kruskal-Wallis one-way ANOVA was used with an alpha value of 0.05. Both the Shapiro-Wilk and Kruskal-Wallis tests were calculated using the Real Statistics Resource Pack software (Release 7.6), Copyright (2013-2021) Charles Zaiontz (www.real-statistics.com (accessed on 1 January 2023)). Pairwise comparisons for measurements with statistical significance were determined using the t-Test: two-sample assuming equal variances with an alpha value of 0.05 in Microsoft Excel®. Gini indexes were calculated with the ineq package [68] in R v.3.6.3 [79]. Read depth and Lorenz curve plots were created using ggplot2 [80].

Conclusions

Based on our results, we question whether further volume reduction is really necessary. As reviewed in Figure 2, many of the current nL and pL volume reduction approaches are either too low-throughput, require complex fabrication, and/or are too expensive to make or purchase (>100 USD per device). Therefore, researchers should gauge for themselves whether the time and cost benefits of volume reduction down to nL and pL reactions make sense in the scope of their study. Meanwhile, volume reduction in standard 384-well plates and with commercially available cell sorters and liquid dispensers makes this approach more easily accessible to other researchers and already drastically reduces the costs by ~97.5% from the standard 50 µL MDA reaction (Table 1). We also found that with our approach, 40× sequence depth is enough for high-quality assemblies (Appendix A, Figure A2), compared to the standard >100× depths generally used in microbial SCG [34,42]. Further cost reduction could also be achieved by applying this approach to the less expensive WGA-X™ method (Table 1), seeing that preliminary work in our group finds WGA-X™ to work in 1.25 µL reaction volumes as well. In the end, we anticipate that the improvements made herein will be of great interest for other single-cell studies and will therefore increase the use of SCG, especially for research focused on elucidating the genomic potential of rare taxa and/or novel microbial dark matter in environmental samples.

Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The sequence reads and genome assemblies generated and analyzed during the current study are available at NCBI GenBank and NCBI SRA under BioProject ID PRJNA900537 (https://www.ncbi.nlm.nih.gov/bioproject/PRJNA900537). Acknowledgments: The authors acknowledge the support by the state of Baden-Württemberg through bwHPC. We also thank John Vollmers for bioinformatics insight, Florian Lenk for sequencing assistance, and David Thiele for library preparation assistance, as well as Hao-Yu Lo and Maximiano Cassal for fruitful discussions on MDA. We also thank Pavel Levkin, Anna Popova, and Shraddha Chakraborty for assistance with the droplet microarray experiments.

Appendix A (figure and table caption fragments): Down-sampled reads were mapped to the E. coli MG1655 reference genome with bbmap.sh [76]; standard error bars were calculated using all five replicates. Mapping statistics were also calculated with BBmap; Gini indices were calculated with the ineq package [68] in R studio [79]; p-values were calculated using one-way ANOVA with an alpha of 0.05 in Microsoft Excel®. Table A5. MDA assembly summary. After read processing, the final sequence depth used for assembly was calculated.
Assembly N50, length, and coverage were calculated using QUAST [70]. Sample coverage is calculated as the percent of contigs aligned to the reference genome. MDMcleaner was used to calculate completeness and assembly contamination [12]. p-values were calculated using one-way ANOVA with an alpha of 0.05 in Microsoft Excel®.
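The statistical workflow described above (Shapiro-Wilk normality testing, one-way ANOVA or Kruskal-Wallis, followed by pairwise t-tests at alpha = 0.05) was run in Microsoft Excel with the Real Statistics Resource Pack. Purely as an illustration, the same sequence of tests can be scripted with scipy.stats; the values below are hypothetical placeholders, not the study's measurements:

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-volume measurements (e.g., % reads lost to trimming), five replicates each
groups = {
    "0.8 uL":  [22.1, 25.4, 23.8, 27.0, 24.2],
    "1.0 uL":  [20.5, 21.9, 23.1, 19.8, 22.4],
    "1.25 uL": [12.3, 11.8, 13.5, 12.9, 14.1],
    "10 uL":   [24.7, 26.2, 23.9, 28.1, 25.5],
}

# Shapiro-Wilk normality check per group
normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())

# One-way ANOVA if the data look normal, otherwise Kruskal-Wallis
if normal:
    stat, p = stats.f_oneway(*groups.values())
else:
    stat, p = stats.kruskal(*groups.values())
print(f"omnibus test: stat={stat:.3f}, p={p:.4f}")

# Pairwise two-sample t-tests (equal variances assumed, alpha = 0.05)
if p <= 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t, p_pair = stats.ttest_ind(a, b, equal_var=True)
        print(f"{name_a} vs {name_b}: p={p_pair:.4f}")
```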
2023-02-24T17:04:31.651Z
2023-02-21T00:00:00.000
{ "year": 2023, "sha1": "a58d1bb08be89d35677df33aefc87503a3324e1a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/5/4270/pdf?version=1676971131", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4d50931a979a20d96ed45670a9efa86e2b513db8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
208942289
pes2o/s2orc
v3-fos-license
2-(6-Methoxynaphthalen-2-yl)-1-(morpholin-4-yl)propan-1-one

In the title compound, C18H21NO3, the naphthalene group and the basal plane of the morpholine ring (r.m.s. deviations = 0.0177 and 0.0069 Å, respectively) are oriented at a dihedral angle of 44.0 (2)°. In the crystal, molecules are linked by C—H⋯π interactions. Cg2 and Cg3 are the centroids of the C1-C6 and C3/C4/C7-C10 rings, respectively.

The molecular structure of the title compound is illustrated in Fig. 1. The naphthalene group A (C1-C10) and the basal plane of the morpholine group B (atoms C15-C18) are planar with r.m.s. deviations of 0.0177 Å and 0.0069 Å, respectively. The dihedral angle between planes A/B is 43.97 (23)°. The O1 and C11 atoms of the methoxy group are at a distance of -0.0911 (44) and -0.2335 (74) Å, respectively, from the mean plane of the naphthalene group. The morpholine group has a chair conformation with atoms N1 and O3 at a distance of 0.5827 (79) and -0.6752 (77) Å, respectively, from the basal plane B.

Experimental

A solution of morpholine (0.35 g, 40.2 mmol) in 5 ml of dichloromethane (DCM) was added to a solution of naproxen acid chloride (0.5 g, 20.1 mmol) in DCM (10 ml). The reaction mixture was stirred at room temperature for 3 h. After completion, the reaction mixture was filtered and the filtrate concentrated to give the crude product. The product was purified by flash column chromatography using n-hexane:ethyl acetate (50:50). The resulting jelly-like product was recrystallized from diethyl ether and hexane (1:1) to give the title compound as colourless prism-like crystals, suitable for X-ray diffraction analysis [Yield: 65.0%, M.p.: 388 K].

Refinement

In the final cycles of refinement, in the absence of significant anomalous scattering effects, Friedel pairs were merged and Δf " set to zero. The H atoms were positioned geometrically (C-H = 0.93-0.98 Å) and refined as riding with U iso (H) = k × U eq (C), where k = 1.5 for methyl and 1.2 for other H atoms.

Figure 1 (caption): A view of the molecular structure of the title molecule, with atom numbering. Displacement ellipsoids are drawn at the 50% probability level.

Special details. Geometry: bond distances, angles etc. have been calculated using the rounded fractional coordinates. All su's are estimated from the variances of the (full) variance-covariance matrix. The cell e.s.d.'s are taken into account in the estimation of distances, angles and torsion angles. Refinement: refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 , conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > σ(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
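As an aside for readers outside crystallography, the dihedral angle between two mean planes (and the r.m.s. deviation of atoms from a plane) reported above is obtained from Cartesian atomic coordinates by least-squares plane fitting. The sketch below is purely illustrative — the coordinates are placeholders, not the published atomic positions:

```python
import numpy as np

def plane_fit(coords):
    """Least-squares plane through a set of 3D points.
    Returns the unit normal and the r.m.s. deviation of the points from the plane."""
    pts = np.asarray(coords, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    rms = np.sqrt(np.mean(((pts - centroid) @ normal) ** 2))
    return normal, rms

def dihedral_angle(coords_a, coords_b):
    """Angle (degrees) between the least-squares planes of two atom groups."""
    n1, _ = plane_fit(coords_a)
    n2, _ = plane_fit(coords_b)
    cos_angle = abs(np.dot(n1, n2))  # take the acute angle between the planes
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Placeholder Cartesian coordinates in Angstroms for two atom groups (hypothetical)
ring_a = [[0.0, 0.0, 0.0], [1.4, 0.1, 0.0], [2.1, 1.3, 0.1],
          [1.4, 2.5, 0.0], [0.0, 2.4, -0.1], [-0.7, 1.2, 0.0]]
ring_b = [[3.0, 1.0, 1.0], [4.2, 1.2, 1.9], [5.4, 1.1, 1.2],
          [5.3, 0.9, -0.2], [4.1, 0.8, -1.0], [2.9, 0.9, -0.3]]
print(f"dihedral angle = {dihedral_angle(ring_a, ring_b):.1f} degrees")
```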
2018-04-03T02:28:06.761Z
2012-08-04T00:00:00.000
{ "year": 2012, "sha1": "f8fc25cc8d7042823a82ba5fe858aa550e14fb46", "oa_license": "CCBY", "oa_url": "http://journals.iucr.org/e/issues/2012/09/00/su2485/su2485.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1bb36a0782de5574995bcf59cce062d71cc84d9f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
32945192
pes2o/s2orc
v3-fos-license
Bimetallic Copper-Heme-Protein-DNA Hybrid Catalyst for Diels Alder Reaction †

A bimetallic heme-DNA cofactor, containing an iron and a copper center, was synthesized for the design of novel hybrid catalysts for stereoselective synthesis. The cofactor was used for the reconstitution of apo-myoglobin. Both the cofactor alone and its myoglobin adduct were used to catalyze a model Diels Alder reaction. Stereoselectivity of this conversion was analyzed by chiral HPLC. Reactions carried out in the presence of the myoglobin-heme-Cu-DNA catalyst showed greater product conversion and stereoselectivity than those carried out with the heme-Cu-DNA cofactor. This observation suggested that the protein shell plays a significant role in the catalytic conversion. (doi: 10.5562/cca1828)

INTRODUCTION

One of the ongoing challenges in synthetic chemistry concerns the development of powerful catalysts. Of particular interest is the use of highly selective and efficient enzymes as biocatalysts for chemical synthesis. Our increased understanding of chemical and enzymatic catalysis has enabled us to explore and develop bio-inspired catalysts with modified, even enhanced, efficiency and selectivity by taking advantage of a variety of enzyme classes. 1 For example, metalloenzymes, which contain a metal ion cofactor as prosthetic group, are an important class of proteins involved in a range of biologically important reactions such as substrate oxidation (cytochrome P450 enzymes, peroxidases), 2 carbon dioxide fixation (carbonic anhydrase), superoxide dismutation 3 and many others. It is estimated that one third of all enzymes require a metal cofactor to ensure proper function and, as a result, numerous studies are devoted to the understanding of their structure and function. 4 Owing to the advances in biophysical and computational chemistry as well as in structural biology, it is nowadays possible to design new hybrid metalloenzymes with enhanced stability and reaction efficiency. This is achieved either by modification of the protein shell through introduction of unnatural amino acids 5 or by replacing metal-containing prosthetic groups with artificial ones. 6 Since tailoring of the protein shell often requires a lengthy process of site-selective mutations and subsequent enzyme expression, this approach is often limited because of the low amounts of the biocatalyst that can be produced. 5 On the other hand, the modification of a small organic prosthetic group offers ease of preparation and a range of different functions which can be introduced. By chemical modification of prosthetic groups such as protoporphyrin IX (heme) and their insertion into apo-enzymes (that are non-active enzymes which lack their natural prosthetic group), highly efficient catalysts with novel properties can be obtained. 7 For example, the reconstitution of apo-myoglobin (apoMb) with ruthenium(II)-modified heme led to the development of photoactivatable enzymes, which enable temporal control over oxidation reactions catalyzed by myoglobin. 8 But the changes in the metalloenzyme prosthetic group are not the only approach to evolve interesting new catalytic properties. Introduction of a catalytic moiety into the chiral environment of a host protein can also lead to novel enantioselective catalysts that are active under mild conditions and show high selectivity. 9 This concept was first demonstrated in combination with avidin as a chiral protein host and a biotinylated phosphine rhodium(I) norbornadiene complex as the catalytic centre. This approach led to enantiomeric excess (e.e.) values of 40 %. 10
The initially described ligands, 10 such as phosphine rhodium complexes, were later replaced by other metal complexes, yielding even higher (up to 96 %) e.e. values and increased catalytic turnover. 9,11 These results clearly indicate that hybrid metal-protein complexes hold great potential as catalysts in synthetic organic chemistry. Building upon this idea, Reetz et al. designed hybrid proteins by non-covalent attachment of Cu-phthalocyanine to different serum albumins. 12 Such proteins were capable of catalyzing the Diels Alder reaction of azachalcones with cyclopentadiene in aqueous medium with high selectivity and e.e. between 85−93 % depending on the albumins used. Such high e.e. clearly indicated the potential of protein modification for the design of powerful hybrid catalysts. In further work by the same group, the thermostable protein tHisF was modified by site-specific mutagenesis to enable Cu 2+ binding, thereby affording an artificial metalloenzyme. 13 The hybrid enzyme achieved 46 % e.e. in a Diels Alder reaction, showing that the rational introduction of catalytic groups to natural proteins can lead to the design of novel catalysts; however, some issues of site-directed mutagenesis and catalytic selectivity need to be resolved. Recently, DNA has also been used as a chiral support for enantioselective catalysts because of its chemical stability and well-defined structure. 14,15 In addition, DNA from natural sources such as salmon sperm or calf thymus can be obtained in relatively large quantities compared to proteins. DNA could also enable rational design of the desired catalytic centre, as numerous metal-binding functional groups with huge catalytic potential can be introduced to the natural systems. As an example, taking advantage of the ability of certain functional groups to intercalate the double-stranded DNA, Roelfes et al. designed a Cu 2+ complex which, when bound to the DNA, revealed good enantioselectivity in the Diels Alder reaction. 14a,14b Similar hybrid catalytic systems based on DNA and Cu complexes were also used for other reactions such as Friedel-Crafts alkylation 16 and Michael addition. 17 In previous work we have shown that functional DNA-heme protein conjugates as well as photoactivatable enzymes can be generated by reconstitution of apoMb with chemically modified heme derivatives. 18 Motivated by the aforementioned developments of hybrid metalloenzyme catalysts, we here demonstrate for the first time that the reconstitution of apoMb with a bimetallic heme cofactor can be used to create novel catalysts for stereoselective synthesis. In particular, we report on the synthesis of an artificial heme group containing a single-stranded DNA oligonucleotide as well as an appended Cu(II) moiety, which was successfully used for reconstitution of apoMb. The resulting hybrid was used as a catalyst in a model Diels Alder reaction, revealing moderate stereoselectivity as a consequence of the chiral environment provided by both the DNA and protein shell.

RESULTS AND DISCUSSION

Modification of the heme moiety and its reconstitution into apo-enzymes has been successfully used as a tool for introducing new functionalities to heme enzymes. 6 In such a way, novel biomaterials were designed by use of DNA- 19 and polymer-modified heme groups. 20 In many cases, enhanced catalytic performance was enabled by modification of the heme propionate groups, 21 exchange of the iron centre 22 or by appending electro- 23 and photoactive 8,24 groups to these side chains. Inspired by the work of Hamachi et al.
24 on photoactivatable Mb, we have previously synthesized a tris(2,2'-bipyridyl)ruthenium(II) (Ru(bpy) 3 2+ )-DNA heme conjugate, in which the Ru moiety is used for the photoactivation of the artificial catalyst while the DNA moiety can be harnessed for selective hybridization-based immobilization. 18 Herewith we show that a similar synthetic route can be used to synthesize a heme derivative chemically modified with a Cu(bpy) 2+ moiety to provide an active centre for the catalysis of the Diels Alder reaction. The 5'-alkylamino-modified oligonucleotide 1, which remained bound on the controlled pore glass (CPG) support after solid-phase phosphoramidite synthesis, was reacted with heme in the presence of HBTU/HOBt/DIPEA (HBTU = O-benzotriazole-N,N',N''-tetramethyl-uronium hexafluorophosphate, HOBt = 1-hydroxybenzotriazole, DIPEA = diisopropylethylamine) as activating agents for amide coupling. 6 As a result, two products are formed, namely heme derivatives where one or both propionic groups are modified with a DNA strand (2 and 3, respectively, in Figure 1). The CPG was then washed and reacted with an excess of amino-bpy 4, synthesized according to a previously reported procedure, to yield heme derivative 5 (Figure 1). Subsequent to the cleavage from the CPG support using t-butylamine/MeOH/water treatment at 65 °C, the crude reaction mixture was analyzed and purified by high performance liquid chromatography (HPLC). Besides the unreacted DNA (1) and the byproduct HemeD 2 (3), the desired bpy-hemeD 1 (5) complex was obtained (Figure 2). The reaction yield was about 10 % with respect to the used amount of DNA, which is in good agreement with earlier studies. 18 In addition to HPLC characterization, the presence of the expected product 5 was also confirmed by MALDI-MS analysis (Figure S1), which revealed an observed mass peak at m/z = 7937, corresponding to [M+NH 4 ] + . Bpy-hemeD 1 5 was then incubated with 10 equivalents of Cu(NO 3 ) 2 to afford Cu(bpy) 2+ -hemeD 1 6 (Figure 3a), and the product was purified by gel filtration chromatography to remove the excess of the inorganic salt. MALDI-MS analysis indicated complete conversion (peak at m/z = 7983, corresponding to [M+2H] + ; Figure 3b). To investigate the effect of the DNA strand on the enantioselective catalysis, Cu(bpy)-hemeD 1 was hybridized with its complementary strand to form the double-stranded, chiral conjugate Cu(bpy)-hemeD 1 cD 1 7 (Figure 4a). The hybrid was then used as the catalyst for the Diels Alder reaction between the dienophile 3-phenyl-1-(2-pyridyl)-2-propen-1-one 8 and cyclopentadiene 9, which was previously shown to be catalyzed by Cu 2+ . 25 Dienophile 8 was prepared by aldol condensation of benzaldehyde with a slight excess of 2-acetylpyridine at 0 °C in water, as previously reported. 26 Since 8 is a poor dienophile, the Diels Alder reaction only occurs in the presence of Lewis acids. 25a In a test reaction, compound 8 was first reacted with 9 in the presence of Cu(NO 3 ) 2 salt to afford a racemic mixture of endo (major, compounds 10a and b) and exo (minor, compounds 11a and b) isomers (Figure S2). Chiral HPLC analysis of the reaction products revealed that four major products were formed, corresponding to the exo (11) and endo isomers (10). The endo/exo ratio was 11.4 and, as expected, no e.e.
was observed. The retention times of the product peaks were comparable to those of the products of the previously studied Lewis acid-catalyzed enantioselective Diels Alder reaction. 14b These data were used as a reference for the experiments using the Cu(bpy)-hemeD 1 cD 1 conjugate 7 as the catalyst (Figure 4b). Conjugate 7 was obtained from 6 by simple hybridization with one molar equivalent of the complementary ssDNA oligonucleotide. Subsequent to the incubation of the dienophile and diene for 72 hours at 4 °C in the presence of hybrid 7, the reaction mixture was analyzed by chiral HPLC. Remarkably, the analysis revealed only two peaks corresponding to the endo adducts, while no exo adduct could be detected (Figure 4c). However, calculation of the product conversion based on the corresponding peak areas showed that the products were only formed in about 6 % yield. Chiral HPLC analysis also revealed that no enantiomeric excess was obtained in the reaction catalyzed by hybrid 7. This might be explained by the fact that the chiral microenvironment of the dsDNA is too far away from the copper centre. To explore further the effect of the biomolecular environment on the stereoselectivity of the catalytic Cu centre, conjugate 6 was used to reconstitute apoMb (Figure 5a), thereby enabling the investigation of potential effects of the chiral protein shell. It was expected that the second coordination sphere created by the protein shell should influence the Cu 2+ -catalyzed Diels Alder reaction, since the catalytic metal centre is closely embedded in the chiral microenvironment provided by the protein and ssDNA moieties. After reconstitution of apoMb with conjugate 6, the reaction mixture was analyzed and purified by FPLC using an anion exchange column (Figure 5b). The resulting Cu(bpy)-Mb-D 1 conjugate 12 (0.1 mmol dm −3 ) was then used as a catalyst for the Diels Alder reaction under the same conditions as described above. The reaction mixture was subjected to chiral HPLC analysis (Figure 5c). Peaks 11a and 11b refer to both enantiomers of the exo product and peaks 10a and 10b correspond to both enantiomers of the endo product, while peaks 8 and 9 correspond to the educts. The product conversion of about 71 % was significantly higher than that observed for the DNA hybrid catalyst 7. The endo/exo selectivity (8.6) was lower than that observed for 7; however, distinctive enantioselectivity could be observed, with e.e. values of 18 % for the endo products and 10 % for the exo products. These results illustrate that the semisynthetic enzyme 12 is a far better catalyst than Cu(bpy)-hemeD 1 cD 1 7. The higher product conversion and greater enantioselectivity suggest that the protein shell plays a significant role in the overall catalytic effect. Indeed, the catalytic copper centre is closer to the chiral microenvironment of the myoglobin. In comparison with previously studied systems of DNA-metal catalytic hybrids for similar Diels Alder reactions, 14 however, our semisynthetic protein conjugate was less efficient with respect to the enantioselectivity. Therefore, further work will focus on exploring other protein models where the Cu centre is more closely embedded in the protein shell. Moreover, different porphyrins and DNA sequences will be investigated to generate a library of semisynthetic enzyme-DNA catalysts.

EXPERIMENTAL SECTION

Copper ligand 4: amino-Bpy 4 was synthesized using a modified approach described previously by Hamachi 8 and Kuo. 18 Detailed experimental procedures are provided in the Supporting Information.

Preparation of bpy-hemeD 1 conjugate 5.
A commercially available amino-modified oligonucleotide (5'-Tr-amino-GTG GAA AGT GGC AAT CGT GAA G) was used, which was still coupled to the CPG support and carried the protecting group. The trityl group was manually removed using commercial 3 % dichloroacetic acid (DCA) in dichloromethane solution, followed by washing with CH 3 CN and drying with argon. Hemin (75 μmol) and HBTU (75 μmol) were dissolved in 1.0 ml DMF, and to this HOBt (50 μmol) in 500 μL CH 3 CN was added, followed by 27 μL DIPEA. This solution was mixed with the detritylated oligonucleotide and coupling was allowed to proceed for 180 minutes at 20 °C. The resulting CPG suspension was then washed thoroughly with DMF and CH 3 CN and dried with argon. HOBt (50 μmol) and HBTU (75 μmol) were dissolved in 1.0 ml DMF, and to this the aminoalkyl-modified bipyridine, amino-Bpy 4 (75 μmol), in 500 μL CH 3 CN was added, followed by 27 μL DIPEA. This solution was then incubated at 40 °C overnight. The modified oligonucleotide was then de-protected using a t-butylamine:MeOH:H 2 O (1:2:1) mixture for 3 h at 65 °C, purified by HPLC and analyzed by MALDI-TOF mass spectrometry.

Preparation of BpyCu-hemeD 1 conjugate 6: 0.1 mmol dm −3 of bpy-hemeD 1 conjugate 5 was incubated in a 1 mmol dm −3 aqueous solution of Cu(NO 3 ) 2 at room temperature for 3 hours to yield the Cu(bpy) 2+ -heme-DNA conjugate. The product was successively purified with NAP 5 and NAP 10 columns and Vivaspin ® columns (cutoff filter 5000) in order to thoroughly remove the excess of inorganic salts. The conjugate was then analyzed with MALDI-TOF mass spectrometry.

Reconstitution of apo-myoglobin with artificial cofactor. Apo-myoglobin (aMb) was prepared by Teale's 2-butanone method. In brief, the aqueous myoglobin solution was acidified by adding pre-cooled diluted hydrochloric acid (0.1 M HCl(aq), 4 °C) and the pH value was adjusted to 2.5−3.0 in order to denature myoglobin and thus to enable the release of heme. Heme was extracted from the aqueous solution by 2-butanone. The heme/2-butanone solution was discarded and the aqueous apo-myoglobin solution was then purified using NAP columns in order to remove remaining 2-butanone and was meanwhile re-buffered in phosphate buffer (Kpi, pH 7.4). The protein concentration was determined spectrometrically using the molar extinction coefficient of apo-myoglobin at 280 nm (ε 280 = 15800 cm −1 mol −1 dm 3 ). A solution of the apo-enzyme (200 µL, 60 µmol dm −3 ) in potassium phosphate buffer, pH 7, was then mixed with the BpyCu-hemeD 1 conjugate 6 (1.1 equiv., 330 µL, 40 µmol dm −3 ) and incubated for at least 24 hours at 4 °C. Reconstituted enzymes were purified using ion exchange FPLC (AKTA purifier, Amersham Bioscience, MonoQ column; buffer A: 20 mmol dm −3 Tris A, pH 8.3, and buffer B: 20 mmol dm −3 Tris A and 1.5 M NaCl) using a stepwise gradient (saved method: CHKHemenzyme.m01). The concentration of reconstituted myoglobin was determined spectrometrically using the molar extinction coefficient of the Soret band at 405 nm (ε 405 = 171000 cm −1 mol −1 dm 3 ).

Enantioselective catalytic reaction. A 1 mL aqueous solution containing freshly distilled [cyclopentadiene 9] = 15 mmol dm −3 , [aza-chalcone 8] = 1 mmol dm −3 , and [catalytic conjugate] = 0.1 mmol dm −3 in phosphate buffer (pH 7.4) was incubated under orbital shaking for 72 hours. The resulting reaction mixtures were then extracted with diethyl ether and subjected to chiral-HPLC analysis (Daicel ODH, hexane/IPA = 80/20, flow rate: 0.9 mL/min). The retention time of the products in the chiral-HPLC analysis was assigned according to the
previously reported data. The exo-isomers appeared at the retention times of 7.1 and 7.5 minutes, while the endo-isomers appeared at the retention times of 8.1 and 9.2 minutes. The enantiomeric excess was estimated from the peak areas in the chiral-HPLC chromatogram. The product conversion was estimated from the peak areas in the chiral-HPLC chromatogram based on the following formula:

conv. (%) = 100 % × area P / (area P + c × area S ),

where area P is the total peak area of the product of the reaction, area S is the peak area of the starting material, and c is the correction factor, determined to be 1.21 (an illustrative calculation sketch is given after the Acknowledgements below).

Supplementary Materials. Supporting information to the paper is enclosed with the electronic version of the article. These data can be found on the website of Croatica Chemica Acta (http://public.carnet.hr/ccacaa).

Figure 1. Schematic representation of the Bpy-heme-DNA synthesis using the solid-phase synthetic procedure.

Figure 2. Chromatogram of the HPLC analysis of modified heme purification after cleavage and deprotection from the CPG solid support.

Figure 4. Hybridisation of hemeD 1 6 with the complementary DNA strand to afford double-stranded conjugate 7 (a), which is then used as a catalyst for the Diels Alder reaction (b). Chiral HPLC analysis of Diels Alder adducts catalysed by the heme-D 1 cD 7 conjugate.

Figure 5. Reconstitution of heme 6 into apo-myoglobin to prepare hybrid catalyst 12 (a) and subsequent FPLC purification (b). Diels Alder reaction products were analysed by chiral HPLC (c).

Acknowledgements. This work was partially supported by the Zentrum für Angewandte Chemische Genomik (ZACG), a joint research initiative founded by the European Union and the Ministry of Innovation and Research of the state Northrhine Westfalia, the project SMD in the course of FP7-NMP-2008-SMALL-2, funded by the European Commission, and the KIT Excellence Initiative 2006−2011, project A5.7. C.-H.K. acknowledges support through the International Max-Planck Research School in Chemical Biology, Dortmund, and a student fellowship from the Deutscher Akademischer Austauschdienst (DAAD). We would like to thank Prof. M. Christmann and his coworkers for help with the chiral HPLC analysis.
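As a purely illustrative aside (not part of the original article), the enantiomeric excess, endo/exo ratio, and product conversion described in the experimental section can be computed from integrated peak areas as follows. The peak areas below are hypothetical placeholders, and the conversion formula is the reconstructed one given above with c = 1.21:

```python
def enantiomeric_excess(area_major, area_minor):
    """e.e. (%) from the integrated peak areas of the two enantiomers."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

def conversion(area_products, area_substrate, c=1.21):
    """Product conversion (%) from total product peak area and substrate peak area,
    using the correction factor c quoted in the experimental section."""
    return 100.0 * area_products / (area_products + c * area_substrate)

# Hypothetical peak areas (arbitrary units) for endo enantiomers, exo enantiomers, and substrate
endo_a, endo_b = 590.0, 410.0
exo_a, exo_b = 55.0, 45.0
substrate = 400.0

print(round(enantiomeric_excess(endo_a, endo_b), 1))   # e.e. of the endo pair
print(round(enantiomeric_excess(exo_a, exo_b), 1))     # e.e. of the exo pair
print(round((endo_a + endo_b) / (exo_a + exo_b), 1))   # endo/exo ratio
print(round(conversion(endo_a + endo_b + exo_a + exo_b, substrate), 1))
```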
2017-05-03T00:30:04.633Z
2011-10-03T00:00:00.000
{ "year": 2011, "sha1": "5f8df59007042acff8c52dab776560c5181bee71", "oa_license": "CCBY", "oa_url": "http://hrcak.srce.hr/file/107225", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "5f8df59007042acff8c52dab776560c5181bee71", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
19005278
pes2o/s2orc
v3-fos-license
Efficacy of a Feed Dispenser for Horses in Decreasing Cribbing Behaviour

Cribbing is an oral stereotypy, tends to develop in captive animals as a means to cope with stress, and may be indicative of reduced welfare. Highly energetic diets ingested in a short time are one of the most relevant risk factors for the development of cribbing. The aim of this study was to verify whether feeding cribbing horses through a dispenser that delivers small quantities of concentrate when activated by the animal decreases cribbing behaviour, modifies feeding behaviour, or induces frustration. Ten horses (mean age 14 y), balanced for sex, breed, and size (mean height 162 cm), were divided into two groups of 5 horses each: Cribbing and Control. Animals were trained to use the dispenser and videorecorded continuously for 15 consecutive days from 1 h prior to feeding to 2 h after feeding in order to measure their behaviours. The feed dispenser, Quaryka®, induced an increase in time necessary to finish the ration in both groups of horses (P < 0.05). With Quaryka, cribbers showed a significant reduction of time spent cribbing (P < 0.05). After removal of the feed dispenser (Post-Quaryka), cribbing behaviour significantly increased. The use of Quaryka may be particularly beneficial in horses fed high-energy diets and ingesting the food too quickly.

Introduction

Stereotypies are defined as invariant and repetitive behaviour patterns that seem to have no function [1]. They are reported in more than 15% of domesticated horses [2] and are known as the disease of domestication [3], since they have never been observed in free-ranging feral horses. They may be indicative of reduced welfare [4,5] but it is not self-evident whether stereotypies are representative of the current situation or of a previous suboptimal condition. This is based on the findings that once a stereotypic behaviour is established, it will become a habit, and it is difficult to stop or rectify it [6][7][8]. Cribbing horses may be stressed more easily than unaffected horses [9,10]. It has been shown that attempts to inhibit this behaviour through the use of anticribbing collars or other physical devices may significantly impact equine welfare, by reducing the horse's ability to cope with stress without addressing the underlying cause [11]. Epidemiological and experimental studies provide quite an accurate understanding of the prevalence, underlying mechanisms, and owner perceptions of cribbing behaviour. These studies have shown that many factors can be associated with an increased risk of cribbing, including management conditions that prevent foraging opportunities and social contact, provision of high concentrate diets, and abrupt weaning [12]. Since cribbers perform the behaviour most frequently following delivery of concentrated feed, it has been suggested that diet may be implicated [13]. As a consequence of high-energy carbohydrate feed administration, cribbing horses ingest food quickly and have a lower production of saliva, a higher gastric fermentability [14], and acid fermentation in the cecum and large intestine [13]. The latter phenomenon may be related to a higher transit time of feed in the large intestine, indicating that the orocecal digestion in cribbing horses is less efficient compared to healthy subjects. The action of cribbing could therefore be an attempt to [15].
Equine behaviour and welfare scientists agree that management of cribbing horses should focus on improvement of life conditions and feeding management rather than on attempts at physical prevention of the behaviour. The aim of this study was to verify whether feeding cribbing horses through a dispenser that delivers small quantities of concentrate when activated by the animal decreases or eliminates cribbing behaviour, modifies feeding behaviour, or induces frustration. This study describes the effect of a feed dispenser, Quaryka, on the feeding time budget of cribbing horses.

Research Methods

Ten horses, balanced for breed and sex, aged between six and 20 years (mean age: 14 years), were recruited. All subjects were deemed healthy following a physical examination and were exempt from medical treatment, except for vaccination and deworming. Horses were divided into two groups: five Cribbing horses and five Control horses (Table 2). Cribbing horses had been stereotyping for at least one month and had never been treated for the condition, whereas Control horses had never exhibited the stereotypy. All subjects were kept under the same housing and management conditions. They were housed in standard single horse boxes (3 × 3 m) in visual contact with other conspecifics. They were fed twice a day, morning and afternoon, with hay and concentrate. Water was provided ad libitum by automatic drinkers. During the observation period, all horses were managed avoiding any changes in terms of workload, housing, rations, and daily routine. The owners were asked to fill out a questionnaire including information on the horse's characteristics and history as well as on the physical and social environment of the horse. Questions touched on home environment, management and feeding, and the horse's use and exercise. Other specific questions regarded the medical history and the development and presence of cribbing or other behaviour problems. The design was a cross-over case-control study. In a preliminary phase, the horses were trained to use the dispenser Quaryka (Figure 1), which delivered small quantities of concentrate when a wheel was activated by the horse's mouth. The training included four main steps lasting approximately two hours overall. (1) Approach to the Dispenser: the horse's attention was directed to the wheel by placing some concentrate feed on its spokes. (2) Habituation to the Wheel Rotation: when the horse approached the wheel with the muzzle, the wheel was manually rotated, so that the concentrate fell into the manger (this process allowed the animal to associate the movement of the wheel with the availability of food). (3) Reward-Dependent Approach to the Wheel: as soon as the horse touched the spokes with the lips, the experimenter turned the wheel. (4) Autonomous Use of the Dispenser: the horse approached the dispenser independently and the experimenter intervened only if the animal was distracted or did not apply enough force in turning the wheel. Each step was repeated several times, until the horse exhibited consistent learned behaviour. After the preliminary phase, the observation period lasted 15 days, divided into phases of five days each, for both groups of horses. During the first five days of testing (Pre-Quaryka), each subject was videorecorded in the box while fed concentrate in the usual location.
During the next five days, the concentrate feed was distributed only through Quaryka (During-Quaryka) and, during the last five days, the dispenser was removed from the box and the horse was fed concentrate in the usual feeder (Post-Quaryka). During the study, the horses were continuously videotaped for one hour prior to and two hours after the afternoon administration of the concentrate feed. Behaviour was recorded by a remotely operated video camera (Panasonic HDC-SD99, Panasonic, Japan), mounted on a wall over the box and linked to a sequential switcher and time-lapse video recorder.

Data Analysis. The video recording analysis was carried out using dedicated software, the Solomon Coder (beta 12.09.04, copyright 2006-2008 by András Péter), customized with a specific behaviour configuration (an illustrative processing sketch is given at the end of this discussion). An observer trained in animal behaviour and in the use of the software analysed all the videotapes. Behavioural categories are listed and described in Table 1. Owners' answers to the questionnaire were scored and reported.

Statistical Analysis. All statistical analyses were conducted using SPSS 21 (SPSS Inc., Chicago, USA). Differences were considered to be statistically significant if P ≤ 0.05. For each behaviour, the mean duration and standard deviation were calculated. ANOVA (analysis of variance) was used to investigate potential differences in horse behaviour between groups or time periods.

Results and Discussion

Results from the questionnaire are summarized in Table 2. The majority of the horses (70%) were stabled on sawdust litter, while only 30% of them, of both groups, were stabled on wheat wood shavings. All boxes had an open window (overlooking indoors or outdoors) and the large majority of the subjects (80%) had daily access to grass paddocks, with shade and water available. Horses were fed with hay (a mean of 9 kg/day each) and concentrate (a mean of 3 kg/day each) to meet their specific energy requirements. All owners described their horse as "easy to manage" and "getting along with other horses." All owners of Cribbing horses reported that the stereotypy was present at the time of purchase. Most of the horses approached Quaryka with curiosity (60%), while the others (40%) showed signs of diffidence (no tactile exploration, standing alert) during the first 20 minutes. The video analysis revealed that, during the entire observation period, none of the horses showed any of the behaviours related to frustration or fear described in Table 1, and none of the Control horses showed any displacement or stereotypic behaviour. Figure 2 reports the time to finish the ration recorded in horses during Pre-Quaryka. Compared to Control horses, Cribbing horses tended to need a longer time to finish their concentrate, in agreement with the findings of Clegg et al. [13], who found that cribbers and weavers took a longer time than Control horses to fully consume their ration. This result can be explained considering that cribbers stereotype most frequently during and following the consumption of meals [16][17][18][19][20]. Only cribbers displayed lip playing during the observation time. Quaryka induced an increase in time needed to finish the ration in both groups of horses (Figure 3, P < 0.05). Cribbers also needed significantly more time to finish the ration than Control horses (P < 0.05). After Quaryka removal, horses in both groups showed a feeding behaviour similar to that expressed before the introduction of the dispenser (Figure 3).
Interestingly, cribbers showed a significant reduction of time spent cribbing (P < 0.05), indicating that the horses' interaction with Quaryka induced a lengthening of the time taken to finish the ration, in the absence of stereotyped behaviours associated with food consumption. After removal of the feed dispenser (Post-Quaryka), cribbing behaviour significantly increased compared to the previous phases (Figure 4). This result is compatible with a post-treatment rebound caused by a rise in the motivation to crib. Post-treatment rebound was observed in horses prevented from cribbing by the use of inhibitory systems [19]. This hypothesis should be considered with caution, as cribbing was never prevented in this study. A possible alternative explanation may be related to the short exposure of the subjects to the dispenser, not allowing a lasting effect on the stereotypic behaviour. The effectiveness of Quaryka in reducing cribbing behaviour cannot be generalised due to the limited animal sample. It should be noted that, in all the horses included in this study, the use of Quaryka was associated with an increase of the time needed to finish the ration. This may be particularly beneficial in horses fed high-energy diets and ingesting their food too quickly.
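Purely as an illustration of the processing described in the Data Analysis and Statistical Analysis paragraphs above, the sketch below aggregates coded behaviour bouts into per-horse durations and compares phases with a one-way ANOVA. It uses hypothetical numbers and scipy.stats, not the Solomon Coder output or the SPSS workflow actually used in the study:

```python
from collections import defaultdict
from scipy import stats

# Hypothetical coded bouts: (horse, phase, behaviour, start_s, end_s)
coded_events = [
    ("H1", "Pre-Quaryka", "cribbing", 120, 180),
    ("H1", "Pre-Quaryka", "cribbing", 400, 430),
    ("H1", "During-Quaryka", "cribbing", 150, 160),
    ("H1", "Post-Quaryka", "cribbing", 90, 200),
    # ... one record per coded bout, for every horse, phase, and behaviour
]

# Total duration (s) of each behaviour per horse and phase
totals = defaultdict(float)
for horse, phase, behaviour, start, end in coded_events:
    totals[(horse, phase, behaviour)] += end - start

# Hypothetical per-horse cribbing totals (s) in each phase, five horses per group
pre    = [95, 120, 80, 150, 110]
during = [40, 55, 30, 60, 45]
post   = [130, 160, 120, 170, 140]

# One-way ANOVA across the three phases
f_stat, p_value = stats.f_oneway(pre, during, post)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```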
2018-04-03T04:20:19.014Z
2016-10-13T00:00:00.000
{ "year": 2016, "sha1": "63d23df4e085b3bd10857a6c900a1b973972362a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2016/4698602", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0e64b623a6e6239fe28680e70af854a6673657d0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235561424
pes2o/s2orc
v3-fos-license
Bio-Inspired Fabrication of Silver Nanoparticles Using High Altitude Squamulose Lichen Extract and Evaluation of its Antioxidant, Anticandida and Cytotoxic properties Bio-inspired nanoparticle synthesis has attracted substantial interest among the scientific community owing to its eco-friendly and non-toxic nature. In the present study, silver nanoparticles (AgNPs) were synthesized using the high altitude squamulose lichen Cladonia subradiata and characterized using different techniques. The antioxidant and anticandida activities of the AgNPs were evaluated using multiple in-vitro assays. In-silico molecular docking analysis and an in-vitro cytotoxic assay were performed to determine the anti-cancer potential of the synthesized AgNPs. The results of the spectroscopic studies revealed the successful synthesis of AgNPs and the presence of different functional groups, suggesting the involvement of phytocompounds in the reduction and capping of the AgNPs. The average size of the AgNPs was 20 nm and they were predominantly spherical in shape. The AgNPs demonstrated excellent DPPH free radical scavenging activity with an IC50 value of 7.51 ± 0.4 µg/mL. C. albicans was identified as the most susceptible strain in the anticandida studies. Usnic acid and pulvinic acid exhibited low binding energies and showed excellent inhibitory interactions with the EGFR lung cancer protein. The in-vitro cytotoxic results were impressive, with an IC50 value of 28.75 µg/mL for A549 lung cancer cells treated with AgNPs. Thus, the study demonstrates the effective and non-toxic synthesis of AgNPs using a less explored lichen extract as a promising anticandida and anticancer agent in the field of nano-medicine. influence the synthesis of nanoparticles [11]. Microorganisms and plants are considered the most promising starting materials for the genesis of MNPs due to their ability to produce stable and mono-dispersed particles [12]. Plant-based MNP synthesis has the advantage of being eco-friendly and economical, while microbe-based MNP synthesis is regarded as effective due to the unique enzymatic machinery of microbes [13]. In fact, bio-inspired fabrication has become a new trend preferred by the scientific community to produce MNPs that are non-toxic, well defined in morphology and size, reproducible and easily scaled up [14]. MNPs synthesized through biological methods are bio-compatible and have been reported to have anticancer and antimicrobial properties. The anti-cancer potential of these MNPs has been reported in cancer cell lines such as A549 (human lung cancer) [15], MCF 7 (human breast cancer) [16], HCT-116 (human colon cancer) [17] and Hep2 (human hepatic cancer) [18]. The anti-cancer mechanism is elucidated to be dependent on intracellular reactive oxygen species production and apoptosis via mitochondria-dependent and caspase-dependent pathways [19]. Applications of biogenic MNPs also include imaging facilitators, theranostics and sensor design [20,21]. Lichens are complex symbiotic organisms formed from the intimate association between a mycobiont (fungi) and a photobiont (algae or cyanobacteria) [22]. They are regarded as primary colonizers and inhabitants of terrestrial ecosystems with worldwide distribution regardless of geographical characteristics [23]. Lichens are a part of traditional medicine owing to their rich phytochemical profile [24]. Unique and novel secondary metabolites have been identified from the thallus or the lichenized stroma [25].
Among the several identified compounds, metabolites belonging to the classes of phenols, depsides, depsones, quinones, furans, dibenzofurans and lactones have gained special interest in the field of medicine [26,27]. Lichens are regarded as pollution indicators [28] and their extracts have been identified to possess vital antibacterial, antifungal, antiviral, antioxidant, antibiotic, antimutagenic, antipyretic and anti-carcinogenic activities [29]. In the present study, silver nanoparticles (AgNPs) were synthesized in a rapid and affordable approach using the aqueous extract of the squamulose lichen Cladonia subradiata collected from the Kodaikanal hills of the Western Ghats region, India. For the first time, the in-vitro antioxidant activity and the in-silico and in-vitro cytotoxic properties of C. subradiata-mediated AgNPs are reported. Preparation of extract The identified lichen sample was gently washed under running tap water and rinsed with double distilled water. The sample was shade dried for 7 days and ground to a fine powder using a laboratory mixer grinder. 10 g of the fine sample powder was weighed and dissolved in 100 mL of various solvents (polar to non-polar). The mixture was agitated using an orbital shaker at room temperature for 72 hours. Standard procedures [32] and those of Wagner et al. [33] were followed to detect the presence of alkaloids, flavonoids, phenols, saponins, tannins and glycosides. GC-MS analysis of C. subradiata extract Gas chromatography-mass spectrometry (GC-MS) has been regarded as a "gold standard" for forensic substance identification because it performs a 100% specific test, which positively identifies the presence of a particular substance. The aqueous extract of C. subradiata dissolved in DMSO was analyzed using a PerkinElmer Clarus 500 gas chromatograph interfaced to a mass spectrometer (GC-MS) employing the following conditions. The Elite-1 column was a fused silica capillary column, operated in electron impact mode at 70 eV. Helium (99.999%) was used as the carrier gas at a constant flow of 1 mL/min and an injection volume of 2 µL was employed (split ratio of 10:1). Characterization of Silver nanoparticles The pellet, either directly or redispersed in distilled water, was used for the characterization procedures. The fabrication of the nanoparticles was confirmed using ultraviolet-visible (UV-Vis) spectroscopy (Shimadzu, range 200-800 nm). Functional groups were identified using a Fourier transform infrared (FTIR) spectrophotometer (PerkinElmer, range 4000 to 500 cm−1). A powder X-ray diffractometer (XRD; X'Pert Pro, PANalytical) was used for analysis of the crystalline nature of the particles. The morphology and average size of the particles were visualized and calculated from transmission electron microscope images (JEOL JEM-2100 HR-TEM operating at a voltage of 200 kV). Determination of Total Phenol Content (TPC) and Total Flavonoid Content (TFC) The total phenol and flavonoid contents of the C. subradiata aqueous extract and of the AgNPs green-synthesized using the same extract were evaluated by different spectroscopic methods. Different concentrations of the lichen extract (20, 40, 60, 80 and 100 µg/mL) and AgNPs (2, 4, 6, 8 and 10 µg/mL) were used to perform the analysis. The TPC of the samples was determined using the Folin-Ciocalteu method [36] and the TFC was determined using the aluminium chloride method [37]. The absorbance of the samples was measured using a Shimadzu spectrophotometer (200-800 nm) at wavelengths of 725 nm and 430 nm for the respective assays. In-vitro antioxidant activity of C. subradiata extract and AgNPs
The antioxidant activities of the C. subradiata extract and AgNPs were determined using different in-vitro assays. Different concentrations of C. subradiata extract (20, 40, 60, 80 and 100 µg/mL) and AgNPs (2, 4, 6, 8 and 10 µg/mL) were used. The 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging assay was performed by adding 2 mL of DPPH solution in methanol to aliquots of the samples. The mixture was allowed to react in the dark for 30 minutes and the absorbance was measured at 517 nm [38]. The ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)) free radical scavenging assay was performed by adding diluted ABTS solution, which had been incubated in the dark for 14 hours, to the samples, followed by incubation for 15 minutes. The absorbance of the samples was measured at 734 nm [39]. To perform the hydrogen peroxide (H2O2) free radical scavenging assay, 40 mM H2O2 was added to the different aliquots of samples and incubated for 10 minutes. The absorbance of the samples was recorded at 230 nm [40]. Ascorbic acid was used as the positive control for all the assays and the free radical scavenging percentage of the samples was calculated as follows: scavenging activity (%) = [(AC − AS)/AC] × 100, where AC is the absorbance of the control and AS is the absorbance of the sample. 2.6. In-vitro anticandida activity of C. subradiata extract and AgNPs The in-vitro anticandida activities of the C. subradiata extract and synthesized AgNPs were evaluated by the agar well-diffusion method. The samples were tested at different concentrations (15-100 µg). Candida strains were obtained from the Microbial Type Culture Collection and Gene Bank (MTCC), Chandigarh, India (C. albicans MTCC 183, C. tropicalis MTCC 184, C. glabrata MTCC 3019, C. parapsilosis MTCC 7043 and C. krusei MTCC 9215). The assay was performed using Potato Dextrose Agar (PDA) as the culture medium; 0.1 mL of each inoculum was swabbed onto individual Petri plates and allowed to dry in order to ensure absorption of the inoculum into the PDA. Wells were cut using a cork borer and different concentrations of the samples were added to each well. Distilled water served as the negative control and the standard antifungal agent Amphotericin B was used as the positive control. The plates were incubated at 37°C for 24 hours, after which the zones of inhibition were observed and measured in millimetres [41]. In-silico molecular docking of C. subradiata compounds Structures of the chemical compounds identified by GC-MS analysis were obtained from the PubChem database and the three-dimensional structure of the EGFR protein involved in lung cancer pathogenesis was downloaded from the RCSB Protein Data Bank (PDB) [42]. The Graphical User Interface program AutoDock Tools was used to prepare, run and analyze the docking simulations. Kollman united atom charges, solvation parameters and polar hydrogens were added to the receptor PDB file for the preparation of the protein for the docking simulation. AutoDock requires pre-calculated grid maps, one for each atom type present in the flexible molecules being docked, which store the potential energy arising from the interaction with the rigid macromolecule. This grid must surround the region of interest in the rigid macromolecule. The grid box size was set at 126 × 126 × 126 Å (x, y and z) to include all the amino acid residues present in the rigid macromolecule. The AutoGrid 4.2 program, supplied with AutoDock 4.2, was used to produce the grid maps. The spacing between grid points was 0.375 Å. The Lamarckian Genetic Algorithm (LGA) was chosen to search for the best conformers.
During the docking process, a maximum of 10 conformers was considered. The population size was set to 150 and the individuals were initialized randomly. The maximum number of energy evaluations was set to 2,500,000, the maximum number of generations to 27,000 and the maximum number of top individuals that automatically survived to 1, with a mutation rate of 0.02 and a crossover rate of 0.8. Step sizes were 0.2 Å for translations, 5.0° for quaternions and 5.0° for torsions. The cluster tolerance was 0.5 Å, the external grid energy 1,000.0, the maximum initial energy 0.0 and the maximum number of retries 10,000, and 10 LGA runs were performed. The best ligand-receptor structure from the docked structures was chosen based on the lowest energy and minimal solvent accessibility of the ligand. Docking results of each calculation were clustered on the basis of the root mean square deviation (RMSD) between the Cartesian coordinates of the ligands and were ranked according to binding energy [43]. In-vitro cytotoxic activity of C. subradiata extract and AgNPs The human lung cancer cell line (A549) was obtained from the National Centre for Cell Science (NCCS), Pune, and grown in Eagle's Minimum Essential Medium containing 10% fetal bovine serum (FBS). The cells were maintained at 37°C, 5% CO2, 95% air and 100% relative humidity. Maintenance of the cultures was done by weekly passaging and the culture medium was changed twice a week. 100 µL per well of cell suspension was seeded into 96-well plates. After 24 h the cells were treated with serial concentrations of AgNPs. Following sample addition, the plates were incubated for an additional 48 h at 37°C, 5% CO2, 95% air and 100% relative humidity [44]. Medium without samples served as the control and the experiments were performed in triplicate. Statistical analysis All the assays were performed in triplicate and the results are expressed as mean ± standard error. Origin 8 Pro and Excel 2010 software were used for plotting the graphs. Identification of lichen sample The Kodaikanal hills of the Western Ghats region are a rich repository of several diverse lichen species. Crustose, foliose, fruticose and squamulose lichens were documented during the survey and most of them belonged to the families Pyrenulaceae, Bacidiaceae, Physciaceae, Parmeliaceae, Cladoniaceae, Arthoniaceae, Graphidaceae, Trochotheliaceae and Ramalinaceae. The Western Ghats region of Tamil Nadu has a wide assortment of lichens with more than 657 taxa, among which some lichens still remain unexplored; the lichen Parmelia pseudobitteriana was identified for the first time from the Kodaikanal hills [45]. The lichen sample selected for the study was identified as C. subradiata (Fig. 1). The lichen was characterized by squamules of 2 × 1 mm with whitish-gray, sparingly branched podetia. The tips were blunt in young podetia and mature tips formed cups. The surface was thinly corticated and pycnidia were found in young basal squamules. Phytochemical screening of C. subradiata Methanol, ethanol, acetone, chloroform, aqueous, petroleum ether and hexane extracts of C. subradiata were screened for the presence of phytochemicals. Table 1 shows the presence and absence of phytocompounds such as alkaloids, flavonoids, phenols, saponins, tannins, glycosides and sterols. Extraction of the phytocompounds was found to be high in mid-polar solvents such as acetone, chloroform and water. The chloroform extract exhibited strong results for flavonoids and phenols when compared to the other extracts.
The extraction quality was poor in petroleum ether and hexane, suggesting a minimal quantity of non-polar groups in the phytocompounds. Alkaloids, phenols, flavonoids, saponins, glycosides and tannins are the most frequently reported phytochemicals in lichen species [46]. Bodicherla et al. [47] reported the presence of phytochemicals similar to our results in selected macrolichens in methanol, 2-propanol and water extracts. The phytochemicals in biological samples are considered the major agents responsible for the rapid reduction of metal ions to metal nanoparticles in an eco-friendly manner [48]. Identification of C. subradiata compounds using GC-MS analysis The results of the GC-MS analysis confirmed the presence of phytocompounds in the chloroform extract of C. subradiata; 34 phytoconstituents were identified from the chromatogram (Fig. 2). Biogenic synthesis and characterization of AgNPs synthesized from C. subradiata AgNPs were rapidly synthesized using C. subradiata chloroform extract, which was confirmed by the visible colour change in the colloidal solution. The colour change from pale yellow to brown intensified with the increase in time (Fig. 3a). The UV-visible spectra of the AgNPs synthesized from the lichen extract showed surface plasmon bands between 420 and 450 nm over a time lapse of 5 to 180 minutes. The intensity of the bands stabilized after 60 minutes and was devoid of red and blue shifts (Fig. 3b). This indicates the stability of the AgNPs bio-synthesized using C. subradiata chloroform extract. Abdel-Raouf et al. [50] reported that bands were obtained between 400 and 500 nm for AgNPs synthesized from brown algae extracts and, similarly, Gudikandula et al. [51] reported bands between 419 and 421 nm for AgNPs from white rot fungi. These reports parallel ours, as lichens are symbioses of algae and fungi. The FTIR spectrum (Fig. 4) revealed the functional groups of the AgNPs synthesized using C. subradiata chloroform extract. The peak at 3438 cm−1 corresponds to the O-H stretch of alcohols, the peak at 1416 cm−1 is attributed to the C=C stretch of alkenes and the peak at 1107 cm−1 is assigned to C-N of amide or nitro groups. The peaks obtained thus belonged to functional groups such as alcohols, nitro compounds, amides and alkenes. The presence of amide and nitro groups showed that, other than secondary metabolites, proteins play a major role in the reduction and capping of AgNPs [52]. The X-ray diffraction pattern (Fig. 5) of the AgNPs synthesized from C. subradiata chloroform extract corresponded to the (111), (200), (220) and (311) crystallographic planes of face-centred cubic (FCC) silver. The Debye-Scherrer formula (D = 0.94λ/(β cos θ)) was used to calculate the average size of the AgNPs. The average size of the AgNPs synthesized from C. subradiata chloroform extract was 23 nm. The results were in accordance with the Bragg reflections of silver nanocrystals [53]. The morphological characteristics of the AgNPs were investigated using a TEM instrument. The micrographs (Fig. 6a) revealed that the AgNPs synthesized from the lichen extract were predominantly spherical in shape and their size ranged between 20 and 50 nm. The crystalline nature of the nanoparticles was evident from the SAED pattern (Fig. 6b) and coincided with the XRD results.
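For readers who want to reproduce the crystallite-size estimate, the short sketch below applies the Debye-Scherrer formula quoted above (D = 0.94λ/(β cos θ)). The wavelength is assumed to be Cu Kα (1.5406 Å), which the text does not state, and the peak position and full width at half maximum (FWHM) values are hypothetical placeholders chosen only to illustrate how a size in the reported range of roughly 20-25 nm falls out of the formula.

```python
# Minimal sketch of the Debye-Scherrer estimate D = 0.94*lambda / (beta*cos(theta)).
# Wavelength is assumed to be Cu K-alpha (1.5406 angstrom); the peak position and
# FWHM below are hypothetical, for illustration only.
import math

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_angstrom: float = 1.5406, k: float = 0.94) -> float:
    """Crystallite size in nm estimated from a single diffraction peak."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)           # FWHM must be in radians
    d_angstrom = k * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0                # 1 nm = 10 angstrom

# Hypothetical (111) reflection of FCC silver near 2-theta ~ 38.1 degrees
print(round(scherrer_size(38.1, 0.38), 1), "nm")   # ~23 nm with these placeholder values
```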
Biogenic fabrication of AgNPs using an aqueous-ethanolic extract of Usnea longissima yielded NPs within the size range of 9-11 nm with enhanced antibacterial activity; the phytocompounds of the lichen extract were suggested to be the vital reducing and capping agents [54]. Total phenol and flavonoid content in C. subradiata chloroform extract and AgNPs The results of the TPC and TFC exhibited by the C. subradiata chloroform extract and AgNPs are given in Table 2. The C. subradiata chloroform extract showed high TPC and TFC, and considerable amounts of phenols and flavonoids were detected in the AgNPs synthesized from the lichen extract. The highest TPC (163.41 ± 0.3 mg GAE/g) and TFC (88.72 ± 0.01 mg GAE/g) were expressed in the C. subradiata chloroform extract at a concentration of 100 µg/mL. Green-synthesized AgNPs using Tridax procumbens exhibited 68.93 ± 0.36 µg/mg GAE and 64.98 ± 0.46 µg/mg QE as TPC and TFC at a concentration of 1 mg/mL [55]. It is evident from our results that the content of flavonoids and phenols increased with the increase in the concentration of the sample; it can therefore be suggested that the TPC and TFC of the AgNPs synthesized from the chloroform extract of C. subradiata might increase further with increasing concentration. Antioxidants play an inevitable role in neutralizing or nullifying the effects of free radicals and in the management of fatal diseases. Phenols and flavonoids in plant extracts are suggested to be excellent free radical scavenging agents. The potent antioxidant activity exhibited by the AgNPs was due to the phytocompounds that act as reducing and capping agents [35]. In-vitro anticandida activity of C. subradiata extract and AgNPs The anticandida activity of the AgNPs synthesized from C. subradiata extract was tested against five Candida strains and the zone of inhibition (ZOI) measured for the strains ranged from 7.3 ± 0.72 mm to 17 ± 1.12 mm. The ZOI observed for the C. subradiata chloroform extract ranged from 0.3 ± 1.3 mm to 5.7 ± 0.9 mm. The order of susceptibility of the strains for the AgNPs was as follows: C. albicans ≥ C. tropicalis ≥ C. krusei ≥ C. glabrata ≥ C. parapsilosis, whereas the susceptibility order for the lichen extract was observed to be: C. albicans ≥ C. krusei ≥ C. tropicalis ≥ C. parapsilosis ≥ C. glabrata (Fig. 8). C. albicans was highly susceptible to treatment with the AgNPs and the C. subradiata chloroform extract. It was evident from the well diffusion assay that the inhibition pattern was concentration dependent. The ZOIs ranged from 22.3 ± 1.4 mm to 25.7 ± 0.7 mm. The activity of 38 lichen extracts was found to have poor inhibitory effects on planktonic C. albicans yeast, and MICs of 500 µg and above were reported [56]. This suggests that phytocompounds present in lichen extracts are relatively ineffective against Candida species, which is in correspondence with our results. However, AgNPs synthesized from a seed extract of Syzygium cumini were reported to have an MIC value of 0.125-0.250 mg/mL against Candida sp. [57]. Similarly, the ZOIs of curcumin-AgNPs were observed to be 22.2 ± 0.8 mm, 20.1 ± 0.8 mm and 16.4 ± 0.7 mm against C. glabrata, C. albicans and C. tropicalis, respectively [58]. The results of the anticandida assay suggest that AgNPs synthesized from C. subradiata can be considered promising anticandida agents. 3.8. In-silico molecular docking of C. subradiata compounds against lung cancer protein In-silico molecular docking was performed using the structures of seven compounds identified by GC-MS analysis of the C. subradiata chloroform extract using AutoDock 4.2.
From the analysis, six out of seven compounds showed hydrogen bond interactions. The compounds were docked against the EGFR protein (PDB ID: 2GS2). Information about the hydrogen bond interactions, binding energies, number of hydrogen bonds formed and the distances between them is given in Table 3. From the analysis, it is observed that pulvinic acid and usnic acid had the lowest binding energies and showed remarkable hydrogen bond interactions with the EGFR protein (Fig. 9). Usnic acid suppresses angiogenesis of breast cancer by successfully inhibiting vascular endothelial growth factor receptor (VEGFR) 2-mediated extracellular signal-regulated protein kinases 1 and 2 (ERK1/2) and AKT/P70S6K signaling pathways in endothelial cells [59]. Phytocompounds from the fruticose lichen Rocella montagnei showed impressive docking scores against CDK-10 (cyclin-dependent kinase 10), which plays a pivotal role in the pathogenesis of cancer and is implicated in the incessant proliferation of cancer cells [60]. Lichen metabolites are also known to inhibit the activity of the cyclooxygenase-2 enzyme, which is involved in the inflammation of tissues [61]. The molecular docking results of this study provide a new arena for exploring the anti-cancer potential of lichen metabolites against lung cancer proteins. 3.9. In-vitro cytotoxic activity of C. subradiata extract and AgNPs The in-vitro cytotoxic activity of the C. subradiata chloroform extract and of the AgNPs synthesized from the same extract was evaluated owing to the encouraging results obtained from the in-silico molecular docking study using the structures of C. subradiata extract compounds identified by GC-MS analysis. The effects of the C. subradiata extract and AgNPs on cell viability were analysed and quantified using the MTT assay after 24-hour treatment with concentrations of the samples ranging from 3 to 300 µg/mL (Fig. 10). Through this assay, cell viability was determined based on the measurement of mitochondrial function, as MTT is transformed into formazan crystals in living cells in which mitochondrial dehydrogenases are functional. As shown in Fig. 10, cell viability decreased with increasing sample concentration; the IC50 value obtained in this study (28.75 µg/mL for A549 cells treated with AgNPs) was comparatively low, indicating potential cytotoxic effects. The mechanism of AgNP-dependent cancer cell death is attributed to the generation of intracellular reactive oxygen species. Overproduction of free radicals leads to oxidative stress, thus damaging the DNA of the cancer cells and forcing them to enter apoptosis. The morphology, size and charge of the AgNPs are crucial in determining the cytotoxic effect [64,65]. Bioactive compound-based metal nanoparticles have also shown excellent cytotoxic activity against A549 cells; in one study, palladium nanoparticles synthesized using ascorbic acid by a microwave irradiation method showed an IC50 of 7.2 ± 1.7 µg/mL [66]. However, the results of our study suggest that simple and facile synthesis is cost-effective in yielding AgNPs with similar cytotoxic effects. Conclusion The present study elucidated a greener, safer, eco-friendly, feasible and rapid alternative approach to synthesize silver nanoparticles with enhanced biological applications. C. subradiata lichen extract was a successful green resource for the synthesis of AgNPs and was identified to have a rich secondary metabolite profile. The lichen extract and synthesized AgNPs showed substantial phenol and flavonoid contents with effective free radical scavenging activity and anticandida activity.
The in-silico study demonstrated the effective binding and interaction of C. subradiata compounds with the EGFR lung cancer protein. The in-vitro cytotoxicity assay revealed a dose-dependent effect of the AgNPs and lichen extract on A549 cells, with a low IC50 value. These results provide evidence that cost-effectively synthesized AgNPs from C. subradiata lichen extract, with their anti-cancer properties, have potential applications against lung cancer in the field of nanomedicine.
2021-06-22T17:55:13.673Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "84cc4eda3aa1135ee8ba1a917f75ddf9fbc8be48", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-467364/v1.pdf?c=1631896363000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "e585cee670cef20ec8a5ba01fa1daf39825264c4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
221621796
pes2o/s2orc
v3-fos-license
Retrograde urethrosonography with SonoVue in strictures of the male urethra: a pilot study AIM To evaluate the effectiveness of SonoVue urethrosonography in diagnosing adult male anterior urethral stricture pathology in comparison with retrograde urethrography. MATERIAL AND METHOD We standardised the method and performed a comparative study evaluating the diagnosis of urethral strictures using retrograde urethrography and retrograde ultrasonographic exploration with SonoVue in 6 male patients. RESULTS In all patients, the existence of a urethral stricture, its localization and its extension were confirmed. Contrast-enhanced ultrasonographic exploration brought additional categories of information: the degree of spongiofibrosis, the elasticity of the urethral walls and the presence of urethral lithiasis. There were no periprocedural incidents. CONCLUSION This pilot study demonstrates the feasibility and innocuity of urethral ultrasound with SonoVue. Real-time ultrasound exploration highlights aspects that are not seen on radiological examination, so the method can be complementary or an alternative to this procedure. Introduction Urethral strictures represent the reduction of the urethral lumen caused by a process of scarring (by trauma, localized inflammation, iatrogenic/idiopathic pathologies); the congenital form is extremely rare in adults. The urethra has an anterior and a posterior segment. The anterior part is surrounded by the spongious body and is divided into the penile and bulbar segments. The posterior urethra, surrounded by the prostate and urethral sphincter, is divided into prostatic and membranous segments [1][2][3]. The term "urethral stricture" usually applies to stricture of the anterior urethra and represents the abnormal narrowing of any segment of the urethra surrounded by the spongious body, secondary to spongiofibrosis; posterior stricture is due to a fibrotic process that narrows the neck of the bladder and results from a lesion secondary to trauma or surgical interventions (radical prostatectomy) [4]. In the literature the preferred term to describe the narrowing/obstruction of the urethra is "stricture", and the term "stenosis" is reserved for the narrowing of the membranous and prostatic urethra, not surrounded by the spongious body. The terminology reflects the fact that there may be different damage mechanisms involved, each leading to fibrosis [4]. Urethral stricture is a common cause of presentation to the urologist and can be a complex problem due to the difficulties of diagnosis, treatment and risk of recurrence. Many methods of treatment have been described according to the location, length and density of fibrous tissue in the strictured area [5]. The diagnosis of urethral stricture is based on the history, clinical signs (diminished urinary stream, thin/interrupted jet, urinary retention, or urinary tract infections such as prostatitis or epididymitis) and imaging. The methods for diagnosis are retrograde urethrography, voiding cystourethrography, computed tomography (CT) or magnetic resonance imaging (MRI), but also urethral sonography [6][7][8]. Retrograde urethrography, introduced by Cunningham in 1910, has numerous limitations: it is invasive, it underestimates the length of strictures (due to the patient's positioning/penis traction during injection), it does not provide information about the degree of spongiofibrosis [8] and it contributes 0.6%-1.6% of hospital-acquired infections.
Also, the risk of allergic reaction and the patient/physician exposure to radiation (5-9 mSv = 2.5 years of background radiation and 230 X-rays) [7,9] have to be mentioned. CT scans also involve radiation, while MRI of the penis has a higher cost and is recommended only in specific situations (for example, prostatomembranous urethral occlusions secondary to crush injuries of the pelvis) [10]. Contrast-enhanced ultrasound (CEUS) is an evolving imaging modality, a non-invasive and non-irradiating method with increasing clinical utility. The most widely used contrast agent is the second-generation SonoVue, namely sulfur hexafluoride. The technique has been used for retrograde microbubble-enhanced ultrasound urethrography or contrast-enhanced voiding urosonography for assessing vesicoureteral reflux, megaureter, ectopic ureter, ureteroceles, vesical diverticulum, urogenital sinus or congenital urethral pathologies (posterior urethral valves, anterior urethral valves, diverticula of the prostatic utricle) [11]. In such instances, contrast-enhanced urosonography was able to distinguish between the congenital stricture of Cobb's collar (also known as Moorman ring or Young's type III valve [12]), which requires endoscopic transurethral incision, and segmental strictures, which ought to be treated with balloon dilation. The purpose of our study was to evaluate the effectiveness of harmonic ultrasonography with contrast enhancement of the urethra in the diagnosis of adult male anterior urethral pathology. The objectives pursued were: a demonstration of the feasibility of urethrosonography with contrast enhancement; the identification and characterization of the stricture; the innocuity of the method for the patient and the physician; and comparison of contrast-enhanced urethrosonography with retrograde urethrography studies performed in the same patients. Material and method The study was carried out on a series of 6 patients with urethral stricture who presented to the emergency service of the Department of Urology between March and May 2018. In all patients, a conventional urological assessment (clinical examination, prostate check, supra-pubic ultrasonography with pre- and postmictional evaluation of the bladder volume) and a radiological exploration - retrograde urethrography - were initially performed. Retrograde ultrasonographic exploration with SonoVue was performed on the same day, after explaining in detail the objectives of the research and the procedure, as well as after obtaining the patient's written consent. The approval of the Ethics Committee of the University was also obtained. The method of exploration was the same for all the patients: a) the patient was placed in the dorsal decubitus (dorsal recumbent) position; b) disinfection of the perineal and scrotal region; c) CEUS was performed on a General Electric Logiq E9 machine; we used a broad-spectrum convex transducer (1.5-6 MHz) suitable for abdominal explorations with contrast;
d) the application of the transducer was carried out on the perineal region, using the medio-sagittal axis as a reference element, with simultaneous and continuous visualization of the penile and prostatic urethra; e) the examination was carried out by a team consisting of an experienced examining physician, a urologist (who injected the contrast agent under sterile conditions) and a nurse (who ensured the resources necessary for the exploration - ultrasound gel and contrast agent - filled the syringe with the diluted SonoVue contrast agent and supervised the patient); f) the SonoVue contrast agent was used in a small dose (drops) instilled in 10 cc of physiological saline (fig 1). The ultrasonographic exploration was divided into two stages: a) the initial stage - the overall and indicative assessment of the region for the purpose of identifying the reference anatomical structures; b) the exploration stage - tracking in "hybrid" mode and in real time the progression of the contrast agent at the urethral level (fig 2). The entire urethra was examined by CEUS in order to determine the presence of the stricture, its site and length, but also the presence of other changes: spongiofibrosis and small stones. Statistical analysis A descriptive analysis was performed comparing the results of retrograde urethrography and SonoVue urethrosonography. Results The average age of the patients was 66.16 years (range 50 to 75 years). The total duration of the exploration was 10 minutes on average. There were no periprocedural incidents. Table I details the diagnosis, the relevant history, the descriptive radiological aspect of the retrograde urethrography, the appearance on ultrasonography with SonoVue and the limitations encountered by the two methods. The existence of a urethral stricture and its site were confirmed during urethrosonography in all patients. The length of the stricture was better estimated using this technique (fig 3). Ultrasonographic exploration also brought additional categories of information: thickening of the spongiosum and/or the presence of "garland-like" lithiasis <1 cm along the urethra, as well as the elasticity of the urethral walls (fig 4, fig 5). The urethral path appears orange in colour and is linear. Discussion Urethral strictures can have a profound impact on the quality of life, including sexual activity, as a result of a number of complications associated with urinary obstruction (infection, bladder stones, urethral diverticulum, fistula, sepsis) and, finally, with chronic renal failure [13]. The pathology of anterior urethral strictures is a significant part of the work of the urologist. The choice of an appropriate treatment for anterior urethral strictures requires the evaluation of the entire urethra (proximal/distal versus the strictured area) and depends on preoperative imaging and endoscopic techniques [14,15]. Using retrograde urethrography we could not appreciate the length of the stricture (the contrast substance did not pass the strictures) or the thickness of the spongiosum. The diagnosis of urethral diverticulum, stones or flexure of the urethra could not be established (additional pelvic X-ray views are required in these cases). Given the inherent risks of retrograde urethrography (as a routine investigation for urethral strictures), the diagnostic methodology should be revised in order to select less invasive methods such as urethrosonography [7].
Urethrosonography, introduced by McAninch et al in 1988, presents the following advantages: lack of exposure to radiation and hypersensitivity reactions [16], it provides a three-dimensional study, it better estimates the length of the stricture, and it highlights the degree and extension of spongiofibrosis and periurethral pathology [14]. However, a limitation of the procedure is the impossibility of assessing the posterior urethra [15]. [Figure caption: Ultrasonographic exploration, enlarged, showing in detail the length of the stricture, its degree and the thickness of the spongiosum at the level explored (yellow arrows); suprastenotic dilatation with a localized appearance may also be found, similar to the radiologically-highlighted dilation.] Different studies have been conducted to compare both techniques. Choudhary et al concluded that retrograde urethrography and urethrosonography are equally effective in detecting anterior urethral strictures, but the latter has greater sensitivity for the additional characterisation of strictures (length, diameters and periurethral pathology such as false pathways and spongiofibrosis) and benefits from a lower incidence of complications [17]. In accordance with the above is the study by Ravikumar et al, which showed that urethrosonography was more sensitive and specific in diagnosing urethral strictures compared with retrograde urethrography. Although the sensitivity and specificity rates were 100% for the identification of anterior urethral strictures, the accuracy of urethral sonography decreased dramatically in the assessment of posterior urethral strictures (75% sensitivity and 50% specificity) [18]. SonoVue urethrosonography was able to determine the degree of spongiofibrosis, in comparison to retrograde urethrography. This information has valuable clinical importance because it has been proven that, in patients with anterior urethral strictures without spongiofibrosis, stricture dilation can be as efficient as internal urethrotomy regarding recurrence rates [19]. Also, patients with extensive periurethral spongiofibrosis seem to have a worse response after internal urethrotomy, as they often present early recurrence and have a lower chance of treatment success, and thus should be offered urethroplasty [20,21]. Ouattara et al [22] and Gupta et al [23] have also shown that urethral sonography is a method that allows the diagnosis of urethral stricture, evaluation of periurethral fibrosis and diagnosis of post-infectious stricture, and that this method can replace retrograde urethrography and voiding cystourethrography. The procedure is well tolerated and its accuracy has been confirmed by several authors [24]. Shahsavari et al [6] found no superiority of urethral sonography compared with retrograde urethrography. The sensitivity and specificity of urethral sonography in the diagnosis of anterior urethral strictures were 86% and 94% respectively, the negative predictive value being higher than the positive predictive value (96% versus 82%). Also, the authors note that the length of the urethral strictures identified by urethral sonography was lower than that identified by retrograde urethrography [6].
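The accuracy figures quoted in this paragraph (sensitivity, specificity, positive and negative predictive values) all derive from a simple 2 × 2 comparison of sonography results against the reference standard. The sketch below shows that relationship; the counts are hypothetical and are not taken from any of the cited studies.

```python
# Minimal sketch of the diagnostic-accuracy measures quoted above, computed from a
# 2x2 table of sonography results against the reference standard.
# The counts (tp, fp, fn, tn) are hypothetical, for illustration only.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # strictures correctly detected
        "specificity": tn / (tn + fp),  # normal urethras correctly ruled out
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

print(diagnostic_metrics(tp=43, fp=9, fn=7, tn=141))
```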
The American Urological Association (AUA) and the Société Internationale d'Urologie (SIU) state in their male urethral stricture guidelines that both direct vision internal urethrotomy and dilation can be offered as initial treatment for strictures with a length of 2 cm or less [25,26], while the AUA recommends that for longer, multiple, penile or penobulbar strictures, the initial treatment offered should be urethroplasty (Grade C). The SIU guidelines mention that urethral reconstruction should be chosen instead of endoscopic treatment in cases of near-obliterative strictures or where complete urethral obliteration is present [26]. The study conducted by McAninch et al showed that, using urethral ultrasonography, the mean length of the anterior urethral stricture was revised from 2 cm (as measured by retrograde urethrogram) to 3.4 cm and, thus, the treatment was switched in 45% of patients (from anastomotic urethroplasty to an onlay urethroplasty) [27]. In this study we found that real-time examination highlights aspects that are not seen on radiological examination, mainly the distension of the urethra by injection (which constitutes interesting functional information) and tissue elasticity. We also showed that SonoVue ultrasound is a feasible method for the assessment of the adult male distal urinary tract, which might bring additional information in further studies. The main limitation of the study is the small number of participants, which, in the end, allows us to draw only preliminary conclusions regarding the method. Additional large-scale studies should take place in order to perform statistical analysis and validate the method. Also, there was no control arm to rule out false positive results of the method. In conclusion, SonoVue urethrosonography can identify anterior urethral strictures, their location and length. Real-time harmonic examination highlights aspects that are not seen on radiological examination, mainly the distension of the urethra by injection (constituting interesting functional information) and the degree of spongiofibrosis. Larger studies are required in order to evaluate the possibility of replacing, in selected cases, retrograde urethrography with SonoVue urethrosonography.
2020-07-23T09:09:06.398Z
2020-07-22T00:00:00.000
{ "year": 2020, "sha1": "8c8eef3430aedbb8dc6ec6411a5f7e214733bd02", "oa_license": null, "oa_url": "https://medultrason.ro/medultrason/index.php/medultrason/article/download/2483/1673", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ea553eaacdd62e338f0e2ec10d8a048c5870142d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
23554485
pes2o/s2orc
v3-fos-license
The SLMTA programme: Transforming the laboratory landscape in developing countries Background Efficient and reliable laboratory services are essential to effective and well-functioning health systems. Laboratory managers play a critical role in ensuring the quality and timeliness of these services. However, few laboratory management programmes focus on the competencies required for the daily operations of a laboratory in resource-limited settings. This report provides a detailed description of an innovative laboratory management training tool called Strengthening Laboratory Management Toward Accreditation (SLMTA) and highlights some challenges, achievements and lessons learned during the first five years of implementation (2009–2013) in developing countries. Programme SLMTA is a competency-based programme that uses a series of short courses and work-based learning projects to effect immediate and measurable laboratory improvement, while empowering laboratory managers to implement practical quality management systems to ensure better patient care. A SLMTA training programme spans from 12 to 18 months; after each workshop, participants implement improvement projects supported by regular supervisory visits or on-site mentoring. In order to assess strengths, weaknesses and progress made by the laboratory, audits are conducted using the World Health Organization’s Regional Office for Africa (WHO AFRO) Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist, which is based on International Organization for Standardization (ISO) 15189 requirements. These internal audits are conducted at the beginning and end of the SLMTA training programme. Conclusion Within five years, SLMTA had been implemented in 617 laboratories in 47 countries, transforming the laboratory landscape in developing countries. To our knowledge, SLMTA is the first programme that makes an explicit connection between the performance of specific management behaviours and routines and ISO 15189 requirements. Because of this close relationship, SLMTA is uniquely positioned to help laboratories seek accreditation to ISO 15189. Introduction Efficient and reliable laboratory services are essential to a functioning health system as high-quality laboratory testing plays a key role in patient care, surveillance and outbreak investigation. 1 Poor laboratory quality and its negative impact on healthcare systems have been documented for resource-limited settings, including sub-Saharan Africa (SSA). 2,3,4,5 Using the number of accredited laboratories as a quality metric, a 2013 survey showed that 37 out of the 49 countries in SSA had no medical laboratories accredited to any internationally-recognised standards. Of the 380 accredited laboratories in that region, 91% were in South Africa and only 17% were public health laboratories. 6 In recent years, however, several landmark events have drawn attention to the poor state of public health laboratories and have pushed for strengthening of laboratory systems and networks. 1,7 One of these events was the issuance of the World Health Organization (WHO)-Lyon statement in 2008, 8 which called for countries with limited resources to pursue practical quality management systems and to adopt a stepwise approach to quality improvement and accreditation. 4,7 Another was the 2009 launch of a laboratory management training programme called 'Strengthening Laboratory Management Toward Accreditation' (SLMTA). 
1 Effective management and leadership are critical to strengthening health systems and the scaling up of health service delivery. 9 Recently, many countries and partners have initiated efforts to enhance management of health programmes and service delivery in developing countries, with measurable success. 10,11,12,13,14,15,16,17,18 Most of these management capacity-building efforts focused on managers from hospitals, primary healthcare centers (such as family planning, mother-child health, etc.), or vertical public health programmes (such as tuberculosis [TB] and HIV). Existing laboratory management capacity-building efforts have primarily targeted senior laboratory officials where the focus is on laboratory policy, system and network development, 19,20,21,22,23 as opposed to daily operations of individual laboratories. Training programmes are needed to enable laboratory managers to use available resources (staff, budgets, supplies, equipment, buildings and information) efficiently for planning, implementation and evaluation of service delivery in order to meet patients' and clinicians' expectations and public health needs. 24 The SLMTA programme was created by the US Centers for Disease Control and Prevention (CDC), in collaboration with the American Society for Clinical Pathology, the Clinton Health Access Initiative, and the World Health Organization's Regional Office for Africa (WHO AFRO), in response to the observed need for structured laboratory management training and quality improvement. SLMTA is a competency-based management training programme which uses a series of short didactic courses and work-based applied learning projects with the goal of achieving immediate and measurable laboratory improvements. It provides a practical approach to addressing everyday challenges using available resources. The SLMTA training curriculum and implementation method were pilot-tested in 15 laboratories in Uganda from August 2008 to March 2009, yielding promising results. 24 SLMTA was then officially launched in 2009, with implementation beginning in 2010. As of the end of 2013, SLMTA had been rolled out in 47 countries and 617 laboratories, and had improved enrolled laboratories by an average of 23 percentage points after one round of SLMTA training in a pre/post study using the WHO AFRO Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist. 25 This report provides a detailed description of the SLMTA programme and highlights some challenges, achievements and lessons learned during its first five years of implementation (2009-2013) in developing countries. Key components The design of the SLMTA curriculum and its implementation exemplify what is known as 'good practice' in management competencies development. 19,26 The SLMTA curriculum covers the 10 key competencies of a laboratory manager: productivity; work area; inventory; procurement; equipment maintenance; quality assurance; specimens; laboratory testing; test result reporting; and document and records control. A total of 66 tasks and job routines define effective laboratory management and constitute the learning objectives of the curriculum. 24 A typical SLMTA training programme spans from 12 to 18 months (Figure 1). Training is conducted in a series of three workshops, each lasting three to four days, utilising 44 instructional activities 27 and more than 100 job aids. Each activity provides hands-on, practice-based learning experience for specific management tasks.
The total training time is approximately 60 hours to teach all 44 activities. After each workshop, participants implement improvement projects in their home laboratories. There are two types of improvement projects: complicated projects that require extensive planning and data collection before and after the change; and simpler 'just do it' types of projects that can be implemented immediately with minimal time and resources (Box 1). Box 1: • Redesign your floor plan to improve efficiency and measure the change such as reduction in turn-around time. • Design a competency assessment programme and conduct a set number of assessments. • Conduct a safety audit and reduce the number of identified non-conformities. • Introduce an inventory management system; monitor stock-outs. • Implement equipment maintenance and service. • Improve documentation (policies, standard operating procedures, quality logs, checklists, etc.). • Monitor running of internal quality control. • Monitor performance and documentation of External Quality Assessment. • Monitor and reduce specimen rejection rates. • Monitor results of referral specimens. • Conduct customer satisfaction survey and follow up on issues. Implementation of improvement projects requires teamwork involving the entire laboratory staff, thus ensuring that the projects become part of the laboratory's continuous improvement processes. Participants are encouraged to implement locally-appropriate solutions using existing resources. During the home-based learning period after each workshop, participants are supported by periodic supervisory visits or on-site mentoring guided by standardised tools. This structured supervision and support component is critical to the success of the SLMTA programme. The formal laboratory evaluation component is designed to identify weaknesses and areas that require improvement, measure success of the programme and indicate future goals for the laboratory. Evaluations are based on WHO AFRO's five-stage accreditation-preparedness scheme, called SLIPTA, which recognises laboratories according to their level of compliance with the International Organisation for Standardization (ISO) 15189 standard. 1 Under the SLIPTA scheme, laboratories are audited using the SLIPTA checklist, which includes 111 items divided into 12 sections (Table 1) based on the 12 Quality System Essentials from the Clinical and Laboratory Standards Institute (CLSI). 28 After an audit, a laboratory receives a score out of 258 points in order to determine its star rating - from '0' (0-141 points, < 55%) to '5' (244-258 points, ≥ 95%). 29 Not all laboratories will pursue accreditation; regardless, the SLIPTA scheme provides the roadmap and motivation for laboratories to make steady improvement in service delivery and patient care. SLMTA and SLIPTA are closely linked. The SLIPTA checklist provides the SLMTA programme with a means to identify gaps and benchmark progress. SLMTA, on the other hand, equips laboratory management with the ability to implement quality management systems in order to improve their performance on the SLIPTA scale and eventually achieve formal accreditation status. To support this link, individual SLIPTA checklist items are mapped to each of the 44 instructional activities in the SLMTA curriculum so that participants know exactly which management action will fulfill the requirements of any given checklist item.
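To make the scoring concrete, the sketch below converts a SLIPTA checklist score (out of 258 points) into a percentage and a star rating. Only the 0-star (< 55%) and 5-star (≥ 95%) bands are stated above; the intermediate cut-offs used here (55-64%, 65-74%, 75-84% and 85-94% for one to four stars) are an assumption based on the commonly cited SLIPTA tiers and should be checked against the official checklist.

```python
# Minimal sketch of turning a SLIPTA audit score (out of 258 points) into a
# percentage and star rating. The 0-star (<55%) and 5-star (>=95%) bands are
# stated in the text above; the intermediate bands (55-64%, 65-74%, 75-84%,
# 85-94% for 1-4 stars) are an assumption, not taken from this report.
MAX_POINTS = 258

def star_rating(score: int) -> tuple[float, int]:
    pct = 100.0 * score / MAX_POINTS
    if pct < 55:
        stars = 0
    elif pct < 65:
        stars = 1
    elif pct < 75:
        stars = 2
    elif pct < 85:
        stars = 3
    elif pct < 95:
        stars = 4
    else:
        stars = 5
    return round(pct, 1), stars

print(star_rating(141))  # (54.7, 0) -> still 0 stars
print(star_rating(250))  # (96.9, 5) -> 5 stars
```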
Because of this close linkage between the SLMTA curriculum and the SLIPTA checklist, in June 2012, after modification of the SLIPTA checklist, the SLMTA curriculum underwent revisions to remap the revised checklist items to SLMTA instructional activities. Each laboratory participating in SLMTA conducts an internal audit at the beginning (baseline) and the end (exit) of the programme using the SLIPTA checklist. The difference between baseline and exit scores, as well as their respective star ratings, is calculated in order to quantify the effects of the programme on laboratory function and quality ( Figure 1). In addition to the SLIPTA scores, laboratories demonstrate their progress through improvement project data such as turn-around time, sample rejection rate, stock out rate, customer satisfaction survey results and before-and-after photographs of physical changes. Variations from the basic implementation model Some countries have customised SLMTA delivery to fit their local context. Two notable variations are Cameroon and Lesotho, which adapted their programmes to address local challenges and to enhance existing laboratory capacitybuilding efforts. Despite the variations, both adaptations adhere to the critical requirement of implementing SLMTA as a process (a series of workshops with improvement projects and mentoring) rather than a single training event. Cameroon Most countries conduct the SLMTA training in a central location. This centralised model provides logistical convenience, particularly when many laboratories are enrolled in the same round, allowing the programme to train many laboratories at one time. It also enables personnel from various laboratories to interact and learn from each other. However, there are drawbacks, including, (1) high costs associated with renting a venue and travelling participants; (2) staff must be absent from their laboratories for prolonged periods because of travel between home and training locations; and (3) a limited number of staff can attend the course, creating a potential divide between those who are trained and those who are not. Working with a very limited budget, Cameroon decentralised the workshops and conducted facility-based training, with teams traveling to the laboratories in the programme to provide training on site. Whilst this model required more time from the trainers, it enabled hospital management and clinicians to be involved in the training alongside laboratory management, facilitating advocacy. In addition, it allowed the course to be better tailored to the needs of the individual laboratories, with all discussions related to site-specific challenges and solutions. 30 Lesotho The schedule and frequency of trainings for the initial SLMTA round in Lesotho were modified in order to match existing mentorship timetables. 31,32 At the time that SLMTA was adopted, the country had already begun a structured mentorship programme with an embedded mentor. This mentor soon became certified as a SLMTA trainer so that he could enhance on-going mentoring efforts with the SLMTA programme. These laboratories received SLMTA training one day per week over two blocks of six weeks each, spaced six months apart. The total training time was the same as the standard three-workshop model. Because of the availability of a full-time mentor, these laboratories received more intensive and frequent monitoring visits -a total of 12 visits versus the standard six -and were able to implement numerous improvement projects. 
Capacity building for programme scale-up In order to facilitate programme scale-up, a training-oftrainers approach was used to develop indigenous trainers, who in turn implement the SLMTA programme in-country. 27 Because the quality and integrity of the programme relies heavily on these local trainers, it is critical that they are competent and well qualified. To achieve that goal, the programme has established strict screening criteria in order to ensure that potential trainers have the necessary availability, motivation and commitment, along with a technical background. A formal training-of-trainers course was developed in which SLMTA master trainers teach both the curriculum content and also facilitation skills. This two-week course provides a demanding but supportive environment where participants conduct teach-back of assigned activities from the curriculum and immediately receive constructive feedback from master trainers in order to improve their facilitation skills and understanding of the content. To graduate, participants must fulfill several requirements: (1) 100% daily attendance, including group work sessions; (2) equal responsibility in the preparation and facilitation of teach-back assignments; (3) 100% completion of homework; and (4) endorsement by a master trainer. Participants and their organisations also receive reports providing performance reviews and recommendations on specific roles that they are competent to play in programme implementation. Timely, specific, behaviour-focused feedback is the cornerstone of training-of-trainers. As such, the master trainers' ability to mentor the participants and provide constructive feedback determines the quality of trainers produced. The rapid expansion of the SLMTA programme has resulted in the demand for more master trainers who can train trainers. Given the crucial role that master trainers play in developing competent trainers, they must be highly motivated and effective, their qualifications must be impeccable and their development and selection process rigorous. To be considered as a master trainer candidate, he or she must: (1) be a certified SLMTA trainer; (2) have conducted the entire SLMTA process; (3) have the availability and commitment needed to be a strong asset to the programme; and (4) be nominated by an existing master trainer. Eligible candidates are invited to a training-of-trainers course, where they apprentice under existing master trainers whilst sharing the course workload equally. 27 Throughout the course, these candidates receive coaching and feedback on their performance from master trainers and their competence and commitment are assessed constantly. Additional considerations Country commitment Countries adopting the SLMTA programme are advised to fulfill certain pre-requisites to ensure success. Firstly, they must have a national laboratory policy and strategic plan, along with a laboratory technical working group in order to drive the initiative forward. Secondly, countries must ensure financial and political support for SLMTA and a commitment to improving laboratory quality at all levels: Ministry of Health, hospital management, laboratory management and laboratory staff. It is critical that SLMTA sites have dedicated quality assurance and safety officers. It is also important for participants to remain in the same job or organisation throughout the duration of the programme and to be allowed the time needed to participate in the programme. 
Site selection Site selection should be based on several factors, including facility infrastructure, staffing levels, impact on coverage of patient care, geographic considerations and demonstration of site commitment. The number of laboratories enrolled for each round of SLMTA (i.e., cohort) has varied by country -ranging from one each in Angola and Swaziland to 27 in Malawi. 25 Countries have been advised to start small and scale up progressively. However, political pressure for broader impact and the desire for more laboratories to benefit from SLMTA may have resulted in some countries enrolling large numbers of laboratories. Four countries (Ethiopia, Malawi, Nigeria and Uganda) have enrolled > 20 laboratories in the first or subsequent SLMTA cohorts. 25 Enrolling a large number of laboratories requires more human and logistical resources for the provision of sufficient site monitoring and support. In addition, it is essential that there is good communication and coordination amongst trainers and mentors so as to ensure consistency throughout the group. Most countries have continued to enroll new laboratories in subsequent SLMTA cohorts. 25 Kenya to date has initiated six cohorts of SLMTA, enrolling a total of 50 laboratories and seven blood banks. Lesotho, a small country with only 19 laboratories, has reached a high coverage of 18 (95%) laboratories over three cohorts of SLMTA. Human resources Countries vary in their capacity to rollout the SLMTA programme. Implementation requires three primary cadres: trainers to teach the curriculum; auditors to perform the internal audits; and mentors to facilitate the improvement projects. Regional and in-country SLMTA training-of-trainer workshops conducted during the past five years have steadily produced more local trainers. 27 Although the demand for SLMTA trainers still exceeds the supply, the deficiency is less severe than that of qualified auditors and mentors. Using unqualified auditors may lead to inaccurate audit findings and missed non-conformities. This gap is being addressed slowly as many countries are seeking partners' help with regard to scaling up auditor training. Mentorship and site visits may be the most challenging aspect of implementation and are often overlooked in the initial programme planning. Site visits require personnel time, transportation resources (fuel, vehicle, driver) and lodging and per diem if overnight stays are necessary. If this component is not scheduled and budgeted properly from the beginning, countries often struggle to provide the onsite support and supervision that are critical to the programme's success. Site visits are necessary in order to check the progress of the improvement projects, assess effectiveness of the previous workshops, troubleshoot site-specific issues and provide motivation and encouragement. Site visits often involve meetings with top facility management to advocate support for the laboratory. The length of site visits has varied greatly between countries and even amongst laboratories within the same SLMTA cohort, ranging from half a day to three or more days at each site. The frequency and length of site visits should be considered carefully and planned according to the size and scope of testing activities in the laboratory. In addition, the level of quality at baseline and progress thereafter, as well as site staff's experience with regard to implementing quality systems, should be considered. 
Laboratories needing more support should receive longer or more frequent visits to enable them to make measurable improvements and sustain their motivation. The need for extensive but affordable site support has led countries such as Cameroon, 30 Mozambique, 33 Swaziland and Zimbabwe 34 to establish structured mentorship programmes with full-time facility-based local mentors -a model spearheaded by Lesotho. 32,35 This model has welldefined goals for each mentoring engagement, extended contact time on site, defined periods when mentors are absent, consistent approaches across laboratories and measurement of progress using standardised tools. Mentors may come from the laboratories they are assigned to mentor, from a local partner, or from outside the country. Mentors receive training in SLMTA implementation, mentorship and auditing. Because of their extended participation in the laboratories they are mentoring, they are able to gain knowledge of the rhythms, practices and personalities of the laboratory, enabling them to facilitate the necessary changes in attitudes and behaviours. Other strategies have been used to provide the needed support for the SLMTA laboratories. In Kenya, for example, select SLMTA hospital laboratories were paired, or 'twinned', with internationally-accredited research laboratories. The accredited laboratories mentored the SLMTA laboratories in quality management system implementation. 36 Experience from Africa SLMTA was launched in Africa in 2009. By the end of 2013, it had been implemented in 23 countries on the continent with a total of 503 participating laboratories, which constituted 87% of all the SLMTA-enrolled laboratories in the world. 25 As the continent that launched SLMTA, Africa has demonstrated to the world that with ingenuity, innovation and determination, implementing quality management systems is possible, despite resource limitations. To date, four SLMTA-enrolled laboratories in Africa have been accredited to ISO 15189, whilst many more are making great progress in continuous quality improvement. 25 In the sections below, we highlight the experiences of four African countries. Mozambique -Country ownership and sustainability To develop a self-sufficient quality programme, Mozambique integrated SLMTA within the existing structure of the Ministry of Health laboratory system. A National Laboratory Quality Technical Working Group was established and a dedicated coordinator hired. The Ministry of Health provided the vision and leadership in implementation and advocacy, coordinated and financed the programme with partner support and pressed for SLMTA activities to be included in provincial and hospital annual plans and budgets. Decentralising programme management to the provincial level has enabled them to increase programme coverage and lower the costs. 33 Rwanda -Data-driven advocacy As with many other countries, Rwanda's laboratories suffered from chronic service disruptions as a result of reagent stock-out and equipment breakdowns from lack of maintenance. An improvement project was assigned to the SLMTA-enrolled laboratories, which tracked the number of tests not performed because of stock-out and equipment breakdowns over a three-month period. They then calculated the funds required to purchase needed reagents and maintain equipment, along with the revenue that would have been generated from these tests, finding that the missed income was far greater than the cost of preventing stock-out and equipment breakdowns. 
This return on investment analysis persuaded hospital management to prioritise reagent supplies and to contract with manufacturers to provide regular maintenance services for the laboratory equipment. 37 Cameroon -Expanding quality past the laboratory In Cameroon, management at one hospital witnessed the transformation of its laboratory after SLMTA and undertook to extend the quality into other units of the hospital. They formed their own quality improvement teams, which have reported improved hospital cleanliness, reduced patient waiting times, greater patient satisfaction, development of new treatment protocols and increased recognition of the importance of patient safety. Additionally, a reduction in infection rates and stillbirths, as well as an increase in the number of patients served and hospital revenue, have been observed. 38 Zimbabwe -Overcoming contextual challenges Zimbabwe has suffered economic crises in the past few decades, resulting in deterioration of the healthcare system and a shortage of human resources. Participants in its two SLMTA cohorts have identified creative solutions to overcome the extensive logistic and resource challenges. For example, standard operating procedures were handwritten in exercise books, Levy-Jennings charts were plotted manually and a paper-based system was used where computerised Laboratory Information Systems were not available. Hospitals recognised the value of accreditation and prioritised budgets for equipment calibration, service contracts and staff vaccinations. Funding from the US President's Emergency Plan for AIDS Relief (PEPFAR) supported the establishment of a training and mentorship department at the Zimbabwe National Quality Assurance Program Trust in order to develop local capacity to support SLMTA programme rollout and continued quality improvement for laboratory services. 39 SLMTA's global reach and influence outside Africa The SLMTA-driven laboratory quality improvement achieved in Africa has inspired countries in other regions to follow suit, even in the absence of a regional or national accreditation preparedness scheme such as WHO AFRO's SLIPTA. Outside the continent of Africa, 24 countries from the Caribbean Region, Central and South America and Southeast Asia have adopted the SLMTA programme and have used the SLIPTA checklist to measure gaps and the progress of enrolled laboratories. The Caribbean Region, comprising many island countries with diverse geography, people, size and economy, has implemented SLMTA in 12 countries. 25 After completing the SLMTA programme, Bahama's National HIV Reference Laboratory was accredited and two other enrolled laboratories in the region are also seeking international accreditation. 40 In Southeast Asia, impressive results have also been observed in Cambodia and Vietnam, where one provincial laboratory that tests clinical as well as food and environmental samples was accredited to ISO 17025 in 2013. 25 A desire to automate data collection, analyse and manage SLIPTA audit data more efficiently and to enable real-time graphical display of actionable results at audited facilities led to the development of a multi-lingual electronic tool in Vietnam. 41 This tool has been shared with the global SLMTA community. 
In Latin America, a partnership was forged where 14 military laboratories from eight countries in the region were enrolled in PROMELA (Programa de Mejoramiento de Laboratorios de las Fuerzas Armadas de Latinoamérica), an overarching laboratory improvement programme using SLMTA as its principal training tool in addition to other practical laboratory training and biosafety and/or infection control training. The fact that two Africa-based master trainers (one Anglophone, one Lusophone) came to assist in the first Spanish-speaking training-of-trainers in Latin America underscores the benefits of standardised training and highlights SLMTA's true global nature and its far-reaching network across borders and continents. Lessons learned Throughout the SLMTA rollout, countries have overcome many challenges such as attrition of SLMTA-trained staff, encouraging the entire laboratory to work as a team, engaging hospital management and insufficient mentorship capacity. Table 2 summarises the most common challenges and offers corresponding recommendations to help guide future implementation.

Table 2. Common challenges encountered during SLMTA implementation and corresponding recommendations.

Number of laboratories enrolled in each cohort of SLMTA: What is the best way to achieve nation-wide impact whilst ensuring each laboratory receives sufficient support and attention?
• Limit the number of laboratories according to available financial, logistical and human resources.
• Use the initial SLMTA-enrolled laboratories to identify problems most likely to affect other laboratories in the country. Present recommendations to upper management and advocate for system-wide reform.
• Target fewer laboratories or select specific units of large laboratories. Focus on strengthening those laboratories or units to become centres of excellence and twin them with other laboratories or units.

Programme disruptions: How can delays and disruptions during SLMTA implementation be minimised?
• Before implementation, identify costs of the entire process, including all activities necessary to achieve accreditation preparedness. Budget resources accordingly.
• Define and agree on roles and responsibilities with all parties involved.
• Set dates of all programme activities during planning and adhere to the schedules.
• Request authorisation for budget, travel dates and release of trainers at the beginning of the programme.

High staff turnover: How can staff turnover be minimised during the SLMTA process?
• The Ministry of Health and hospital management should be enlisted to help reduce reassignment during SLMTA implementation. Consider signing a Memorandum of Understanding with heads of the participating institutions to confirm commitment.
• Sites should not be enrolled if management does not agree to keep staff in current positions for the duration of the programme.
• Minimise the impact of turnover by training more than one person from each site.

Non-SLMTA staff involvement: How can staff members not involved in the SLMTA training be engaged for the overall improvement effort?
• Require those who attend the SLMTA workshops to share their knowledge and tools with their colleagues when they return home.
• Hospital and laboratory management must be engaged and mandate that improvement projects involve all laboratory staff.
• Treat all the laboratory staff as a team; acknowledge, motivate and encourage them for their effort and progress.

Hospital management: What is the best way to engage hospital management?
• Identify a clinician who is a champion for the laboratory, and enroll that person in SLMTA.
• Communicate with the hospital administration, keeping them informed on issues and progress. Publicise the laboratory's success stories.
• Conduct the SLMTA activity "Meet the Clinicians" on site to facilitate communication between laboratory staff and clinicians.

Site support and mentoring: What is the best way to ensure that each laboratory receives sufficient mentorship support, given limited mentoring capacity and resources?
• Limit the number of laboratories enrolled based on the available resources required for on-site support and mentoring.
• Establish a structured mentorship programme using local mentors who have been carefully selected and trained.
• Clearly define, measure and report outcomes of mentorship engagement.

Programme sustainability: How can SLMTA become self-sustaining within a country?
• Establish or strengthen quality management systems coordination within the existing Ministry of Health structure.
• Decentralise programme management to provincial levels to increase programme coverage whilst lowering cost.
• Integrate SLMTA into the pre-service curriculum for laboratory professionals.
• Select and train laboratory managers or other qualified individuals as mentors within their own laboratories.
• Conduct in-country training-of-trainers to develop a cadre of local SLMTA implementers for continuous implementation.
• Reduce programme costs by using health facilities for training, rather than renting meeting space. Integrate small 'bite-size' training sessions into established laboratory routines, such as teaching one activity during weekly staff meetings.

Despite the challenges, SLMTA has worked successfully by demonstrating that with resolve, commitment and ingenuity, laboratory teams in developing countries can improve their service delivery using existing limited resources. It also demonstrates that starting with small tangible improvements ('low-hanging fruit') and gradually building upon early successes can boost laboratory teams' confidence and motivate them to tackle the harder issues. This strategy is similar to the 'Little Steps' approach 42 that has been shown to be effective in sustaining healthcare quality improvement efforts in developing countries. Within a few years, SLMTA has demonstrated its transformative power, emerging as a flagship programme for laboratory system strengthening in PEPFAR-supported countries. A 2013 Institute of Medicine report 43 recognised that improvement of laboratories under PEPFAR support and guidance has been a signature achievement. In addition, it states: 'PEPFAR's laboratory efforts have had a fundamental and substantial impact on laboratory capacity in countries. This laboratory infrastructure has been, and continues to be, leveraged to improve the functioning of countries' entire health systems.' 43 As laboratories do not exist in a vacuum, there have been calls 38,44 for the SLMTA model to be adapted for the clinical settings in developing countries, with a goal toward overall hospital accreditation. This will ensure the sustainability of laboratory improvements and accreditation, and boost the centrality of quality management systems in hospital facilities, resulting in better patient care. SLMTA implementation has been supported primarily with PEPFAR resources.
To ensure its longevity and viability beyond PEPFAR, countries must work hard to integrate the SLMTA components into normal laboratory operations, decentralise programme planning and budgeting to the provincial or lower level, look for ways to be financially self-sufficient (such as charging enrollment fees for privately-owned laboratories) and incorporate the curriculum into pre-service education. Conclusion After five years of implementation, SLMTA has proven to be an effective programme for the strengthening of laboratory health systems, with a focus on building management capacity in order to achieve quality services for improved patient care. Evidence to date has indicated widespread success of the programme in its ability to facilitate continuous quality improvement in the enrolled laboratories. SLMTA has the unique potential to help laboratories make progress through the SLIPTA process, improve quality of services and subsequently achieve accreditation to ISO 15189.
A Methodological Framework for Assessing Agents, Proximate Drivers and Underlying Causes of Deforestation: Field Test Results from Southern Cameroon The international debates on REDD+ and the expectations to receive results-based payments through international climate finance have triggered considerable political efforts to address deforestation and forest degradation in many potential beneficiary countries. Whether a country will receive such REDD+ payments is largely contingent on its ability to effectively address the relevant drivers, and to govern the context-dependent agents and forces responsible for forest loss or degradation. Currently, many REDD+ countries are embarking on the necessary analytical steps for their national REDD+ strategies. In this context, a comprehensive understanding of drivers and their underlying causes is a fundamental prerequisite for developing effective policy responses. We developed a methodological framework for assessing the drivers and underlying causes of deforestation and use the Fako Division in Southern Cameroon as a case study to test this approach. The steps described in this paper can be adapted to other geographical contexts, and the results of such assessments can be used to inform policy makers and other stakeholders. Introduction In 2013, the 19th Conference of the Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC) completed the methodological framework for REDD+, providing additional momentum for tropical forest countries to develop strategies for reducing forest-based emissions and/or for the enhancement of forest carbon stocks, conservation and sustainable forest management (REDD+). The Parties decided that REDD+ should be implemented in three phases: first, developing national strategies or action plans and capacity-building; second, the implementation of the national approaches; and finally, "results-based actions that should be fully measured, reported and verified" in phase three [1]. So far, most countries are in phase one or two and seek ways to identify and address the drivers of deforestation. The importance and inherent complexity of this challenge have also been acknowledged by the Parties [2], but the respective decision does not provide guidance on how REDD+ countries can actually cope with drivers [3]. In parallel to the national policy processes, a number of sub-national jurisdictional REDD+ schemes are being developed with bi- and multi-lateral donor support, where REDD+ programs are embedded in national frameworks as an interim measure for eventually upscaling results-based finance to the national level. A crucial element in the process of designing (sub-)national strategies and action plans is a thorough understanding of the drivers, agents and underlying causes of deforestation [4]. In order to inform REDD+ actions and enable the development of evidence-based strategy options to effectively address drivers, REDD+ policy makers and stakeholders seek thorough analysis and quantification of drivers, a challenging task due to inherent local complexities surrounding deforestation and the scarcity or dispersed nature of reliable data.
Literature on drivers of deforestation distinguishes between proximate drivers and underlying causes.Proximate or direct drivers are human activities and actions that directly impact forest cover and result in the loss of carbon stocks, e.g., agricultural expansion or logging for timber [4][5][6].Underlying causes are complex interactions of social, economic, political, cultural and technological developments that in combination create the enabling environment for proximate drivers to unfurl, such as the lack of land use planning and ineffective law enforcement [6].Underlying causes stem from multiple scales: international (e.g., commodity markets and commodity price dynamics), national (e.g., economic development strategies, population growth, domestic markets, governance) and local circumstances (e.g., livelihood options, poverty, and unclear land tenure) [6][7][8].While proximate drivers and the corresponding agents may be considered relatively straightforward to quantify within defined spatio-temporal boundaries, a thorough evaluation of underlying causes requires other tools and methods as underlying causes are not all geographically proximate. Given the cross-disciplinary dimension of deforestation, comprehensive assessments require collaboration amongst those with diverse sets of skills and knowledge such as remote sensing, socio-economic analysis, human-ecosystem interaction and macro-economic and trade analysis. Another challenge concerns the availability of reliable data, which is required in order to provide up-to-date and evidence-based information for the development of strategic options to effectively address deforestation under the specific circumstances.The purpose of this paper is to provide REDD+ stakeholders with a generally applicable methodological framework for assessing drivers, agents and underlying causes of deforestation.After introducing the global context and relevance of the study in Section 1 (Introduction), we describe the proposed methodological framework for systematically assessing and quantifying the drivers and agents of deforestation, including preliminary identification, categorization and weighing of their corresponding underlying causes (Section 2).Section 3 presents the results of the field test, carried out in the Fako Division in the Southwest Region.In Section 4, we discuss the methodology in light of limited available data and capacities, and the implication of the results on the design of effective REDD+ strategy options.Finally, we draw conclusions on how the approach may be applied at the national/regional level, in Cameroon and other REDD+ countries. Methodology This section describes the methodological framework developed through this study with the aim of providing general and easily replicable steps for REDD+ stakeholders at the early stages of REDD+ strategy development.The methodological framework was developed in an iterative process combining expert judgment, applicability in the field and stakeholder consultation, beginning with a desk-based study to review literature on assessing deforestation and combine existing methods into a comprehensive framework.This was then tested in a pilot area of more than 200,000 ha and subsequently critically discussed in a series of national REDD+ stakeholder workshops and technical working groups [9]. 
Steps of the Methodological Approach The methodological framework consists of a step-wise approach for assessing proximate drivers, agents and the underlying causes of deforestation within a specific and clearly demarcated area.Its bottom-up approach consisting of six simple steps, allows for flexible procedures based on the available amount of global and local data, additional information and available resources for the assessment (see Figure 1). Step1: Data Gathering and Literature Review The aim of the literature review is to identify key agents, proximate drivers of deforestation and their underlying causes.The first step consists of reviewing relevant information, existing research and analytical work regarding land use dynamics in the country, including available national REDD+ Readiness plans and strategy documents.Spatially explicit investment plans and sector strategies, and land use / land cover (LULC) maps with activity data estimates should be collated to identify key information sources as well as data gaps.A prerequisite to step two below, is the defining of proximate drivers and corresponding LULC classes to be mapped, aiming at the representation of the most important groups of drivers, creating classes that are clearly separate, but at the same time not too numerous.Through an iterative process, this classification may then need to be adapted to what is distinguishable at the available resolution in remote sensing.Typical classes of drivers include: small-scale agriculture, cattle ranching or industrial agriculture, possibly distinguishing between different crop types (e.g. annual subsistence versus perennial cash crops).The area in which the drivers study will be carried out should be defined at this stage as the scope of the assessment depends on the availability of historical land use data.Existing remote sensing analysis should be sought and assessed for applicability and reliability, as this would significantly reduce the costs of the assessment. Figure 1. Methodological approach for the systematic assessment of proximate drivers, agents and underlying causes of deforestation. 
Step 2: Land Use/Land Cover and Change Analysis The purpose of this step is to create land use and forest cover maps that estimate deforestation rates and attribute forest loss to the above-identified agents. In many cases, deforestation maps showing forest/non-forest may already be available, but without the specification of land use types and proximate drivers responsible for deforestation. If suitable maps and spatial data are not available, remote sensing analysis must be conducted by a member of the driver assessment team familiar with remote sensing tools and techniques. For this assessment, at least two time series of comparable images are required (preferably three), one depicting current land use and land cover (LULC) and the other(s) depicting historical LULC. Using available satellite imagery, a preliminary list of LULC categories is identified, based on the polygons visible in the images. Combined with the literature review from the step above, these categories should correspond to the main expected drivers and agents. This information allows for quantifying drivers in terms of forest area lost or affected. Thereafter, a quantitative LULC change assessment is carried out to develop a land use change matrix, following international guidelines from GOFC-GOLD [10]. The land use change analysis entails localizing the expansion of the major proximate causes of deforestation and quantifying their area-wise impact in the past, for example through object-based segmentation mapping, where the images are automatically analyzed for spectrally similar objects, divided in a second step into segments and finally classified [11]. This process should be carried out in two phases, first distinguishing between forest and non-forest and then repeating the object-based classification separately for forest and non-forest areas to produce a detailed land use classification [12]. Next, land use change and forest loss attribution is calculated, for example, using the Land Change Modeler available in the IDRISI software [13]. An important result of this step is quantifying the contributions of different drivers and agents to deforestation, which is best represented in a table (cf. Table 1 in Results). Step 3: Carbon Stock Change Analysis In this step, the area approximations of the land use categories identified above are linked to their respective long-term average carbon stock value (in tCO2/ha) based on existing local and regional forest carbon inventories or carbon stock assessments of the respective land uses. If local carbon stock or biomass stock information is lacking, IPCC Tier 1 biomass and carbon stock data may be used as a default until better data become available. The conversion into a consistent unit per ha (tCO2/ha) is carried out using the respective IPCC Tier 1 biomass expansion factors (BEF), root-to-shoot ratios, carbon fractions and wood density conversion factors. The carbon pools included in the analysis should be selected based on the availability of reliable information, keeping in mind that while more accurate assessment through carbon inventories is an option for generating accurate data, this will considerably increase the time and resources required for the drivers analysis. The historical greenhouse gas (GHG) emissions due to deforestation are quantified by multiplying emission factors with the respective land area changes (activity data) [14]. Emission factors are quantified based on the carbon stock difference (tCO2/ha) between the two land uses that were subject to change.
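The two quantitative operations in Steps 2 and 3, cross-tabulating two classified LULC maps into a change matrix and converting forest-loss areas into emissions via carbon stock differences, can be illustrated with the following sketch. It is a simplified stand-in for the GOFC-GOLD-compliant workflow and the Land Change Modeler cited above; the class codes, pixel size, observation period and carbon stock values are placeholder assumptions.

```python
import numpy as np

# Simplified sketch of Steps 2-3: change matrix and emissions from carbon
# stock differences. Class codes, pixel area and carbon stocks are placeholders.

# Two co-registered classified maps (same shape), values = LULC class codes.
# 1 = natural forest, 2 = small-scale agriculture, 3 = plantation, 4 = urban.
lulc_t1 = np.array([[1, 1, 1, 2],
                    [1, 1, 2, 2],
                    [1, 1, 1, 4]])
lulc_t2 = np.array([[1, 2, 2, 2],
                    [1, 3, 2, 2],
                    [1, 1, 4, 4]])

classes = [1, 2, 3, 4]
pixel_area_ha = 0.09          # e.g. a 30 m Landsat pixel = 0.09 ha (placeholder)
years = 24                    # length of the observation period (placeholder)

# Change (transition) matrix in hectares: rows = class at t1, cols = class at t2.
n = len(classes)
change_matrix_ha = np.zeros((n, n))
for i, c1 in enumerate(classes):
    for j, c2 in enumerate(classes):
        change_matrix_ha[i, j] = np.sum((lulc_t1 == c1) & (lulc_t2 == c2)) * pixel_area_ha

# Long-term average carbon stocks per class (tCO2/ha) - illustrative values only.
carbon_stock = {1: 550.0, 2: 110.0, 3: 150.0, 4: 20.0}

# Emissions from forest loss: activity data (ha converted) x emission factor,
# where the emission factor is the carbon stock difference between forest and
# the follow-up land use.
forest_idx = classes.index(1)
emissions_tco2 = 0.0
for j, c2 in enumerate(classes):
    if c2 == 1:
        continue
    area_ha = change_matrix_ha[forest_idx, j]
    emission_factor = carbon_stock[1] - carbon_stock[c2]
    emissions_tco2 += area_ha * emission_factor

print("Change matrix (ha):\n", change_matrix_ha)
print("Total emissions from deforestation (tCO2):", round(emissions_tco2, 2))
print("Average annual emissions (tCO2/yr):", round(emissions_tco2 / years, 2))
```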
Step 4: Assessment of Agents and Proximate Drivers This step consists of a field assessment to ground truth the LULC categories defined in the above remote sensing analysis.If necessary, the remote sensing analysis may need to be refined at this stage to better account for the findings of the field visit.The bottom-up approach following [14] aims to assess the net benefits gained from current land uses.First, an opportunity cost analysis evaluates the economic costs and benefits associated with different land uses defined as drivers [15].The first step of this opportunity cost analysis is to estimate the value of the standing forest and to compare it to the revenues of other land use options.This is combined with an analysis of deforestation agents in their specific locations to capture the main products, relevant markets, costs, inputs and other economic considerations related to the different production systems of the different land uses, based on existing factors and market prices prevalent in the region.Opportunity costs are quantified on a one ha and per tCO2 basis, using GHG emission factors calculated in the previous steps.In addition, interviews with deforestation agents facilitate a better understanding of decision chains leading to deforestation and help in capturing those aspects relevant for deforestation that are not quantifiable in financial terms.This analysis also helps to capture the non-carbon benefits of standing natural forests. Step 5: Analysis of Underlying Causes The aim of this step is not only to understand the historic causes of deforestation, but also to estimate the likely future deforestation patterns, taking into consideration the international, national and sub-national circumstances, as well as expected trends.The projection of drivers takes an explorative approach through stakeholder interviews, classifying likely future impacts of different underlying causes and weighing them according to (1) increasing impact, (2) business as usual, and (3) decreasing impact (cf.Section 3.4).This classification should serve as the basis for subsequent focus group discussions with local experts and key stakeholders.Local facilitators should be identified prior to field work to support stakeholder identification and ensure that data collection methods are adapted to local communication norms. Step 6: Stakeholder Validation Once the assessment has been finalized, it is important to share and validate results, ideally through a multi-stakeholder workshop where the findings are openly discussed.Participants in such a workshop may include national government agencies and ministries, civil society, research institutes, academia and importantly, representatives from the assessment area. 
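As a minimal illustration of the opportunity cost calculation in Step 4, the sketch below computes the NPV of a standing-forest use and an alternative land use from annual net cash flows and expresses the difference per hectare and per tonne of CO2 using an emission factor of the kind produced in Step 3. The cash flow series and the emission factor are hypothetical assumptions; the 10% discount rate and 20-year horizon simply mirror those used later in the case study.

```python
# Illustrative sketch of Step 4: opportunity costs per ha and per tCO2.
# Cash flows, discount rate and emission factor are placeholder assumptions.

def npv(net_cash_flows, discount_rate):
    """Net present value of a series of annual net cash flows (USD/ha).
    The year-1 cash flow is discounted once, year 2 twice, and so on."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(net_cash_flows, start=1))

discount_rate = 0.10           # 10%, as in the case study below
years = 20

# Hypothetical annual net revenues (USD/ha): standing forest (NTFPs) versus a
# mixed subsistence/cash-crop system with negative cash flows while it is
# being established.
forest_cash_flows = [6.0] * years
cropping_cash_flows = [-150.0, 50.0, 120.0] + [250.0] * (years - 3)

npv_forest = npv(forest_cash_flows, discount_rate)
npv_cropping = npv(cropping_cash_flows, discount_rate)

# Opportunity cost of conserving forest instead of converting it (USD/ha),
# and per tCO2 using an assumed emission factor for the conversion.
opportunity_cost_ha = npv_cropping - npv_forest
emission_factor_tco2_ha = 440.0   # assumed carbon stock difference, tCO2/ha
opportunity_cost_tco2 = opportunity_cost_ha / emission_factor_tco2_ha

print(f"NPV forest:    {npv_forest:8.1f} USD/ha")
print(f"NPV cropping:  {npv_cropping:8.1f} USD/ha")
print(f"Opportunity cost: {opportunity_cost_ha:.1f} USD/ha "
      f"({opportunity_cost_tco2:.2f} USD/tCO2)")
```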
Description of the Case Study Area Used for Testing the Methodology Cameroon was chosen as a follow-up to a global drivers study.Moreover, Cameroon is a relevant case study, as it is one of the many countries at the beginning stages of the national REDD+ strategy development process.The pilot study was conducted in the Fako Division, located in the Southwest region of Cameroon and falls within the forest mono-modal agro-ecological zone.This area was chosen due to the relatively good data availability in the region.Fako Division's size was considered to be large enough to draw conclusions applicable at larger geographical contexts, such as jurisdictional REDD+ programs on a landscape level.In total, Fako Division covers an area of 203,876 ha and consists of montane and sub-montane forest, lowland forest and mangroves.With Mount Cameroon (4100 meters above sea level) near the coastline, the division is characterized by its biophysical circumstances, exceptional biodiversity and rich volcanic soils.The volcanic geography and average rainfall of between 2000 and 3000 mm in the lower parts of Fako Division contribute to high soil fertility, making the area predisposed for agricultural production. The predominant land uses in the Fako Division can be attributed to three major classes (Figure 2): agricultural lands, urban areas and forests.In line with the forest transition theory [16], the significant land cover changes in the Fako Division are accelerated through positive feedback loops. The division has experienced large-scale deforestation since colonial times, with the first commercial plantations established in 1907 under the German colonial administration in the coastal plains around Mount Cameroon [17].Today, the area is characterized by a high influx of farmers from outside the division, who often combine subsistence and cash cropping. Results In this section, we present the main findings of the introduced six steps of our methodological framework.The presentation of results is organized according to these methodological steps, with sub-sections corresponding to the steps found in the framework above, except for first step (data gathering and literature review), as the results from this step are integrated throughout. 
Land Use/Land Cover and Change Analysis For the remote sensing analysis, we selected two consistent time series for the years 1986 and 2010 from the Landsat archive.These years had the best coverage of the entire study area with the least cloud cover, allowing for a relatively consistent wall-to-wall comparison.In order to improve the classification, secondary data were used as a reference, for example the World Resources Institute's (WRI) Interactive Forest Atlas, which provides land use allocation and land cover data [18].Besides remote sensing data, complementary vector data were gathered related to infrastructure, protected areas, forestry reserves, and large-scale agriculture areas.Our assessment quantified only land use changes due to deforestation, as the degradation assessment using Landsat was not possible.Thus, efforts were made to estimate degradation by using secondary data [19] and by conducting interviews with local stakeholders.According to our remote sensing analysis, in 2010 natural forests cover approximately 80,232 ha, from which almost 75% of these forests are located within the Mount Cameroon National Park and reserves [20].In addition, mangroves cover about 16,532 ha (see Table 1).The second biggest share of land use in 2010 was agricultural production: cocoa subsistence farming systems (52,445 ha), followed by palm oil and rubber production (27,990 ha), and, less important but rapidly increasing, banana and tea production (4205 ha). Table 1. Land use change in the Fako Division. These figures can be translated in spatially explicit land use maps for the years 1986 and 2010 (Figure 3).According to the remote sensing results, net forest lost from 1986-2010 was 8564 ha equaling an annual deforestation rate of 0.51%.The major land use changes have been due to subsistence and cocoa farming (85%) and plantation development for rubber and palm oil (11%) (see Figure 4).Minor proximate deforestation drivers, such as plantation development for tea and banana production and urban development, were responsible only for 4% of the total deforestation.Conversely, mangrove forest areas have increased by 1945 ha, but the discrepancy may only be due to cloud cover changes and the differences in the classification of the Landsat images, instead of an actual increase in cover.However, our field based assessment indicates that mangroves are subject to significant degradation processes, which was not captured by remote sensing. 
Carbon Stock Change Analysis Using the long-term average carbon stock differences (Table 2) of the different land uses and multiplying by the area subject to historical changes, a total of 4.77 million tCO2 was emitted due to forest loss from 1986-2010, which averages roughly 199,000 tCO2 annually. Deforestation due to subsistence agriculture and cocoa farming was the main GHG emitter, with 3.81 million tCO2 (158,800 tCO2/year), followed by palm oil and rubber with 0.67 million tCO2 (27,800 tCO2/year). Minor proximate contributors to GHG emissions were forest conversion to banana and tea plantations (195,600 tCO2; 8150 tCO2/year) and urbanization (100,000 tCO2; 4200 tCO2/year). Table 2. Long-term average carbon stock values of selected land uses (tCO2/ha; excerpt): palm oil plantation, 105.6 [21]; banana and tea plantations, 91.7 and 22 (113.7 in total) [26]; rubber plantation, 170.1 [26]. Assessment of Agents and Proximate Drivers In the Fako Division, we identified four major land uses and four main agent groups responsible for deforestation (Table 3). For the opportunity cost assessment, the net present values (NPVs) were calculated at a discount rate of 10% for a period of 20 years [27]. An economic analysis of standing natural forest was carried out by quantifying the combined value of currently marketed non-timber forest products (NTFPs). The main NTFPs commercialized and consumed by local communities include: eru (Gnetum africanum), a leaf used in local dishes; bush pepper (Berberis canadensis) and njansang (Ricinodendron heudelotii), which are local culinary spices; and bush mango (Irvingia gabonensis) fruits and nuts [25]. Some NTFPs are also exported, such as Prunus africana (a bark valued by the pharmaceutical industry) and eru, as significant amounts are exported to neighboring Nigeria or Cameroon's worldwide diaspora. In addition, natural forests provide important ecosystem services, such as hydrological regulation and prevention of soil erosion. Spiritual and cultural values were identified in particular by local community groups. Agricultural Drivers The literature review revealed a range of secondary data and analysis regarding the economic valuation of land uses in Southwest Cameroon [24,28]. Agricultural land use has been identified as the major proximate driver of deforestation [29]. Our review and field assessment identified three major agents: (1) small-scale farmers practicing cocoa cash cropping combined with subsistence food crop farming; (2) national, medium to large scale investors and local elite, mainly investing in palm oil or rubber production; and (3) large-scale agro-industry, represented by the Cameroon Development Corporation (CDC) undertaking palm oil, rubber and banana production.
Small-scale farmers in the Fako Division can be divided into two groups: local communities, who are traditional land owners, and migrants, who moved to the region for the productive farmland. The latter group acquires land through purchase or by user rights transfer from the local traditional chiefs. These migrants are mostly involved in cocoa farming, and are the main agents operating at the forest frontier compared to the indigenous peoples, whose role in deforestation in this context is generally considered minor. Farmers are mainly attracted by the availability of fertile soil, adequate climatic conditions and the relative proximity of markets, mainly for cocoa. Small-scale agriculture, including cocoa farming, is mainly concentrated in areas surrounding the Mount Cameroon National Park, the city of Muyuka and coastal areas. Agricultural production here is characterized by a gradual shift from annual crops (e.g., cocoyams, plantain, cassava, maize for subsistence) to perennial crops (mainly cocoa), combined with the gradual expansion of fields. For the economic profitability analysis, we assume the natural forest is initially thinned by burning and used for subsistence crops in the first three years, followed by the planting of cocoa as a cash crop and other fruit tree species. The mixed farming system results in an NPV of 2,125 US dollars (USD) per ha over a period of 20 years, compared to 51 USD/ha for standing natural forest. The profitability of cocoa without subsistence crops is reduced to 1,615 USD/ha (Table 3), which is mainly due to the late yield of cocoa after planting (year four), while subsistence crops begin generating net positive cash flows in year one. The conversion of natural dense forest to mixed cocoa agroforestry systems results in opportunity costs of 2,074 USD/ha and 4.7 USD/tCO2. Small-scale agriculture is therefore an important source of income generation for local communities and helps them to improve and sustain their livelihoods. With respect to social and environmental non-carbon benefits, this land use type is crucial for domestic food security, providing timber products for construction and fuel wood for cooking. Cash cropping is an additional source of income, with mixed agroforestry-cocoa systems potentially playing an important role in conserving local biodiversity while functioning as carbon sinks.
Medium-scale investors are composed of a group of local elites who invest in agriculture, especially in palm oil and rubber. This agent group may include former civil servants, business men, politicians, high-ranking officials or the returning diaspora. Generally, this group purchases land areas between 5 and 100 ha in order to establish plantations, mostly in proximity to agro-industry, where forests have already been removed or degraded and a paved road system is already in place. Such investors are also attracted by the division's fertile soils and favorable climatic conditions, but are also highly motivated by expected increases in international commodity prices. Plantations are managed and developed by permanent local staff, with the support of seasonal farmers. Because of poor equipment and old machinery, palm oil yields and processing efficiency are generally very low, with an average yield of 8 t/ha fresh fruit bunches (FFB) and efficiency rates at around 12%, compared to the agro-industry with a return on investment between 18% and 20%. Rubber yields are also relatively low, with an average of 1.26 t/ha dried rubber at maturity [30], which we confirmed during our field assessment and interviews. One rotation cycle generally lasts 25 years for both land use types. The profitability of the rubber and palm oil production models is relatively similar. Over a 20-year period, palm oil has an NPV of 1,244 USD/ha, with an internal rate of return (IRR) of 19%, whereas rubber has an NPV of 821 USD/ha, with an IRR of 13.9% (Table 3). The low NPV for rubber is mainly due to the fact that trees can only be tapped for the first time at an age of 8 years, thus positive cash flows occur very late. Avoiding the conversion of natural forest to palm oil creates opportunity costs of 1,193 USD/ha and 2 USD/tCO2, while avoiding conversion to rubber results in opportunity costs of 770 USD/ha and 1.4 USD/tCO2. Benefits related to local employment generation are considered important non-carbon benefits, as well as the contribution to national food security, as Cameroon relies on international imports to meet domestic food needs.
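For readers who wish to reproduce figures such as the IRRs quoted above from their own cash flow data, the sketch below finds the discount rate at which the NPV of a cash flow series equals zero using simple bisection. The plantation cash flow shown (an up-front establishment cost followed by constant annual net revenues) is hypothetical and is not the series underlying the study's estimates.

```python
# Illustrative IRR calculation: find the discount rate at which NPV = 0.
# The cash flow series is hypothetical, not the one used in the study.

def npv(rate, cash_flows):
    """NPV of cash flows, where cash_flows[0] occurs in year 0 (undiscounted)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-7):
    """Internal rate of return by bisection; assumes a single sign change."""
    f_low, f_high = npv(low, cash_flows), npv(high, cash_flows)
    if f_low * f_high > 0:
        raise ValueError("NPV does not change sign on the search interval")
    mid = 0.5 * (low + high)
    for _ in range(200):
        mid = 0.5 * (low + high)
        f_mid = npv(mid, cash_flows)
        if abs(f_mid) < tol:
            break
        if f_low * f_mid < 0:
            high = mid
        else:
            low, f_low = mid, f_mid
    return mid

# Hypothetical plantation: 2,000 USD/ha establishment cost in year 0,
# then 400 USD/ha net revenue for 20 years.
cash_flows = [-2000.0] + [400.0] * 20
print(f"IRR: {irr(cash_flows) * 100:.1f}%")
```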
The analysis shows that palm oil is most profitable over a period of 20 years with an NPV of 3,186 USD/ ha and an IRR of 24.2%, while rubber is less competitive over a period of 20 years with an NPV of 1959 USD/ha and an IRR of 16.6%.The opportunity costs for the avoidance of converting natural forest to palm oil and rubber amount to 3,135 USD/ha (5.2 USD/ tCO2) and 1908 USD /ha (3.6 USD/ tCO2), respectively. Similar to the medium size investors, the CDC as a large employer, also contributes to poverty alleviation, income generation and established outgrower schemes.Moreover, the company contributes significantly to meeting the national demand for palm oil products and reducing international import dependency.Furthermore, the company often supplies some electrical facilities and helps to build unpaved roads in some areas, which facilitate access to the market for nearby communities. Mangrove Ecosystems Mangrove ecosystems play a crucial role in the local economy, where people around coastal locations in the Fako Division are heavily dependent on these ecosystems for the harvesting of fish, shrimp, NTFPs, timber and fuel wood [31].It is estimated that about 62.5% of the total wood harvested is used for fish smoking alone, while 34.3% of wood harvested is used for cooking and 3.3% for construction [32].It has been estimated that one ha of mangroves can produce around 3.4 m³/ha sustainable annual yield of timber and fuel wood [33].In our economic valuation, and excluding annual labor costs, this yield could result in an NPV of 215 USD/ha over a period of 20 years.However, mangrove ecosystems have additional crucial social and environmental benefits.Most important is the provision of a healthy habitat for fish populations for commercial and subsistence uses, erosion prevention, a natural barrier against floods, water regulation services and spiritual values.Our economic values thus only partially reflect the actual economic value of this ecosystem.Moreover, mangroves are a significant biodiversity hotspot and function as a large carbon sink.Our field assessment has shown that mangroves are generally not managed and harvesting rates exceed annual growth rates, leading to significant degradation.Therefore, in our economic model we assume annual extraction rates are roughly 13.6 m³/ha/year, resulting in a reduction from 402 m³/ha to 200 m³/ha over a period of 20 years, resulting in a NPV of 855 USD/ha.Thus, the opportunity cost of switching to more sustainable mangrove exploitation is 640 USD/ha and 1.3 USD/tCO2. Analysis of Underlying Causes In the following, the causes underlying the above drivers and agents are described according to the five main factors explained in the Methodology.Both the current and expected future impact is explained, as depicted in Figure 5. [34].This increase implies a higher demand for agricultural products and thus more pressure on the land and forests.The relationship, however, between population growth and deforestation is not always linear and other factors such as land availability may be more influential.Without a transformation in current agricultural practices towards less area expansion, the increasing number of small-scale farmers will result in continued forest conversion.However, given the limited amount of arable land left in Fako Division, it can be expected that future deforestation resulting from larger-scale agents will increase in importance due to other causes not related to demographic trends. 
The percentage of the population in urban areas in the Fako Division has risen from 48% in 1987 to 65% in 2005 [34]. Interviews suggest that urbanization in the Fako Division is in part due to educational opportunities in Buea. The increase in educational opportunities brings higher-income employment opportunities, resulting in changes in consumption patterns. Demand for local deforestation-driving products, e.g., eru and mangrove wood smoked fish, is expected to rise. However, higher income also implies more demand for processed foods and higher-value imported products, leading to deforestation outside of Fako. Economic Factors The main economic factors affecting large-scale agents are demand/market forces, whereas poverty mainly affects small-scale agents. Although Cameroon is a net importer of agricultural products, the palm oil, rubber, cocoa and banana produced in the Fako Division are mainly exported, with international price speculation impacting deforestation. Small-scale cocoa and rubber production is affected by fluctuations in crop prices, while the CDC's palm oil is by law destined for consumption in Cameroon, with the price fixed by the government. During the field work, the "growing influence of Nigeria's economy" was often referred to, with this densely-populated neighbor already placing significant pressure on the area's natural resources, including NTFPs and mangrove fuel wood. Improving regional trade and developing the infrastructure connecting Cameroon and Nigeria is part of the regional development plan, meaning this market is expected to place increasing pressure on all agents (except for agro-industry) in the future. The definition of poverty is manifold and extremely subjective. In this study, poverty is defined as the lack of livelihood alternatives, and thus is most relevant to the small-scale agents. However, the impact of poverty is expected to decrease in the future in line with planned improvements in government service provision and related productivity improvements. Technological Factors The main technological factors impacting agents in the Fako Division are low productivity and infrastructure development. Low productivity generally affects all agents, with barriers for smallholders and small firms to access technology, skills and finance widely documented as a constraint to sustainable productivity growth in Africa's agriculture sector [35]. In the case of cocoa, research has shown that improved crop varieties result in significant productivity increases, which could (given the correct policy context) reduce agricultural area expansion [36]. Field interviews confirmed the perception of the direct link between agricultural inputs and productivity.
Regarding infrastructure development, the road network is considered of low quality and thus, the current impact on agents remains limited.However, planned infrastructure development is expected to have the highest influence on small-scale farmers, as local informants explain the lack of "farm-to-market" roads hinders small-scale agriculture expansion.The link between road infrastructure development and deforestation is highly context dependent; however, it is argued that increased market accessibility raises farmer net incomes, which leads to further investments in productivity and reduces the need to expand farm areas into forests.Moreover, infrastructure development is highly linked to the distribution and use of technology, especially in the case of agricultural inputs, which may in turn lead to higher productivity.Improving agricultural technology and crop yields can relieve pressure on the forests, but also can encourage more deforestation if the surplus generated is used for additional forest clearing [37]. Policy and Institutional Factors Non-forest policies and processes, especially agriculture development programs, play an important role in stimulating forest clearance [36].The development goals and sector strategies outlined in Cameroon's national development plans, if realized, may lead to further deforestation, as the Cameroonian government has made a high-level political and economic decision to develop agro-industrial plantations to promote job creation, and economic growth and development [38].However, as policy implementation is often weak, the impact of these development plans is difficult to quantify and in this analysis remains speculative only.Also, the lack of national land use planning to match these development plans results in land being allocated to large-scale agriculture or mining developments in a non-transparent way, which was witnessed in the Fako Division by the lack of awareness of local experts in the government's granting of 100,000 ha around Mount Cameroon to a Russian coffee company [39].The absence of a consistent framework for the allocation and publication of natural resource permits and contracts that ensures coherence across natural resource sectors is one of the main underlying causes of deforestation. Traditional land tenure systems in rural areas often operate in opposition to national land ownership and use arrangements, which for smallholders create a sense of insecurity that restricts productivity-enhancing investments on land.There are often overlaps in land ownership and it is not uncommon that the same plot of land is sold to a number of individuals.Farmers, in particular migrants, cultivate forest areas in order to gain rights to land under customary law.Land tenure insecurity has less impact on larger-scale agents who are better placed to obtain official land titles. Cultural Factors In the Fako Division, cultural factors underlying deforestation or degradation are most relevant to the use of mangrove wood for fish drying.Mangroves are under pressure in the Division.The asserted better taste of the fish dried with mangrove wood [31] implies that those operating the dried fish value chain are unlikely to switch to alternative fuel sources, even if they were available. 
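One simple way to organise the qualitative weighting from Step 5, as applied in this section, is to record for each underlying cause the agent groups it mainly affects and whether its influence is expected to increase, continue as business as usual, or decrease. The sketch below shows such an encoding; the entries are illustrative paraphrases of the assessments above and would normally be agreed and revised during the stakeholder consultations.

```python
# Sketch of the Step 5 classification of underlying causes by expected trend.
# The entries paraphrase the qualitative assessment in this section and are
# meant to be revised during stakeholder consultation.

INCREASING, BUSINESS_AS_USUAL, DECREASING = "increasing", "business as usual", "decreasing"

underlying_causes = [
    # (factor group, underlying cause, most affected agents, expected trend in influence)
    ("economic", "cross-border demand from Nigeria",
     ["all agents except agro-industry"], INCREASING),
    ("economic", "poverty / lack of livelihood alternatives",
     ["small-scale farmers"], DECREASING),
    ("technological", "planned road infrastructure development",
     ["small-scale farmers"], INCREASING),
    ("cultural", "preference for mangrove-wood fish smoking",
     ["mangrove users"], BUSINESS_AS_USUAL),
]

# Example: list the causes whose influence is expected to increase.
for group, cause, agents, trend in underlying_causes:
    if trend == INCREASING:
        print(f"[{group}] {cause} -> {', '.join(agents)}")
```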
Critical Discussion of the Methodological Framework The framework combines a number of tools whose results demonstrate the relative importance of different drivers according to forest area lost (through remote sensing and ground truthing), associated emissions, economic weight (opportunity costs), associated non-carbon benefits of land use changes from the agent perspective.The methodology as developed and tested has proven to be suitable for identifying and prioritizing drivers and agents of deforestation in the Fako Division in Southern Cameroon, and for assessing the relevance of underlying causes and likely future trends.We argue the methodological framework is also relevant to other forest areas under pressure, including in contexts of limited data availability. However, for a large-scale application in jurisdictional or national-level REDD+ schemes the methodology has a number of trade-offs, mainly with regards to the amount of resources required: the thoroughness and integrity of the drivers' analysis depend very much on funding availability to carry out thorough field assessments and existing data, especially access to reliable and spatially-explicit land use data.In the following, we discuss the critical deficiencies we encountered while testing the methodological framework. Land Use/Land Cover Change Analysis Given the large spatial dimension of regions and countries considering REDD+, trade-offs have to be made between the level of detail of the drivers' assessment and the costs.Developing the land use change matrix requires a wall-to-wall remote sensing coverage in at least two time periods from comparable satellite images.As this was not readily available for the entire mono-modal agro-ecological zone, carrying out the analysis for the Fako Division using Landsat data from 1986 and 2010 proved to be a relatively cost-effective approach.A limitation of Landsat, however, concerns the assessment of forest degradation: with its optical sensor, Landsat cannot detect forest degradation, especially small-scale forest degradation that causes spatial changes that are smaller than the pixel resolution of the satellite images.While high resolution satellite images can partially cope with these deficits, the high costs of these technologies may be prohibitive for many cases and for large areas.The methodology is flexible however, and more detailed remote sensing analysis using higher resolution images is encouraged, if available resources allow. Estimation of GHG Emissions The assessment of carbon stocks and GHG emissions according to land use category should be based on local/regional studies.Where data were unavailable or uncertain, IPCC tier 1 data that reflect long-term average carbon stocks can be used, as was done for some land use categories assessed in this case study.While the use of such proxy data may not be sufficient for carbon accounting in future REDD+ projects, it is helpful for understanding the dimension of emissions and for prioritizing actions and measures to address drivers.This step of the methodological framework provides reasonable preliminary estimates which serve to inform the development and prioritization of REDD+ strategy options, keeping in mind that REDD+ is results-based and net emissions reductions may likely be the key performance indicator. 
Assessment of Proximate Drivers and Agents The field assessment is critical for understanding the opportunity costs and other non-financial factors underpinning deforestation agents' motivations and location-specific land uses, from a bottom-up perspective of land users.The field assessment is also important for understanding the socio-economic settings and contexts leading to deforestation, as drivers do not necessarily result from rational decision-making, where deforestation agents make informed decisions based on which land use has the highest economic return.For example, large-scale agents may be influenced by land use planning and national development priorities, while small-scale agents may be driven to engage in deforestation due to livelihood requirements.During the field work for this study, we found that it is crucial to triangulate different sources of information: using existing socio-economic local studies, collecting field based data and conducting expert interviews for each individual agent and land use to conduct regular plausibility checks of results generated. Assessment of Underlying Causes The approach to assessing current impacts and future trends of underlying causes is explorative and necessarily simplifies complex interactions between multiple forces.However, depicting underlying causes using colors and arrows allows for a broader range of stakeholders to understand and contribute to the analysis.The underlying causes analytical process is meant to bring together diverse stakeholders to discuss the oft sensitive reasons underlying forest decline.The driver assessment final validation workshop provided an ideal venue for this multi-stakeholder dialogue. Driver assessments increasingly incorporate spatial or agent-based modeling to project future expected impacts of different parameters.These modeling exercises are often preceded by scenario-based stakeholder consultations to determine potentially salient factors to model, similar to the consultations carried out for this study.Thus, the results generated through this methodology can be seen as the first step in a more detailed analysis, including a quantitative analysis of non-proximate underlying causes.However, while the effects of some underlying causes can be more easily quantified and modeled, such as commodity price fluctuations, international trade or road construction, a number of important underlying causes are not easily captured by the modeling approach, i.e. poor natural resource governance and a lack in law enforcement capacity. While carrying out the methodological steps within clearly delineated spatial and temporal boundaries allows for quantifying proximate drivers and agents in terms of forest loss, GHG emissions and economic weight, this approach is likely to miss or downplay distant drivers, e.g.international commodity demand and trade fluctuations, which may not be readily perceived by local land users. 
Conclusions and Outlook Our methodological framework guides systematic assessments of agents, proximate drivers and underlying causes of deforestation in a given geographical context by the use of existing information and studies and by combining the best practice drivers assessment methods and tools.The framework combines bottom-up (e.g., opportunity costs from agent perspective) and top-down (e.g., remote sensing) approaches, whose combined results lay the basis for subsequently identifying and prioritizing REDD+ strategy options.Further features are the inclusion of private and public stakeholders at different stages of the assessment and the purposeful combination of quantitative and qualitative information.The transparent generation and communication of the results helps in validating the findings. While the testing of the framework reaffirmed the general suitability of the approach, it also revealed inevitable trade-offs in terms of the level of detail and accuracy.Detailed drivers assessments are crucial for the identification of strategic options to effectively address deforestation.In order to balance the trade-offs between accuracy and costs, we propose a disaggregated approach, where detailed drivers assessments are carried out in areas carefully selected for their representativeness and suitability.The criteria for selecting priority areas for assessment depends on the objectives of the REDD+ program, i.e., addressing deforestation or degradation, or developing REDD+ programs in a specific forest ecosystem.Priority areas may be deforestation hotspots or areas having experienced significant deforestation rates in the past or that possibly will in the future, identified through spatially explicit sector strategies or investment plans.We argue that such a disaggregated approach is suitable for balancing the above-described trade-offs, as driver assessments carried out at the national level have often concluded that drivers are highly context-dependent and recommend more detailed analysis at the local level. 
Bearing in mind the trade-off between costs and the level of detail, this method is sufficient to determine the most important agents and activities that lead to GHG emissions, and it should be improved over time. By applying this methodological framework, the gathered information provides a solid basis for the development of REDD+ strategy options and the different elements needed for a comprehensive national approach to REDD+. Opportunity cost analyses often show that non-forest land uses are economically more profitable than maintaining natural forest for local forest users. Given the relatively low economic benefits derived from forests for deforestation agents, countries need to develop REDD+ strategies that provide economic alternatives at the local level. New land use strategies may need to be developed in order to provide livelihood benefits without compromising forest health and functionality. The case study of Fako Division shows that smallholder farmers are the agent currently causing the most forest loss. Cameroon's national REDD+ strategy can address the deforestation caused by this agent through an array of specific interventions as part of an integrated landscape management strategy to maintain or regenerate forest cover and improve food production per unit area of cropland. Increasing smallholder crop yields will also increase farmer income and welfare and, with the right institutional context, reduce uncontrolled forest loss. Although a comprehensive and detailed driver analysis at the local level is only the first analytical step, the right combination of farm-level interventions and cross-cutting policy measures requires further analyses.
Methodological workflow (figure): (1) data gathering and literature review of land use relevant information; (2) land use/land cover and change analysis using remote sensing; (3) carbon stock change analysis of historical GHG emissions from deforestation; (4) assessment of impacts and motivations of the main agents affecting deforestation in the study area; (5) qualitative analysis and projection of underlying causes of deforestation; (6) validation of results with stakeholders and experts.
Figure 2. Administrative boundaries of Fako Division in Cameroon.
Figure 5. Estimated impacts of underlying causes for deforestation and forest degradation.
3.4.1. Demographic Factors: Due to natural reproduction, in-migration, and urbanization, the population of Fako Division has grown from roughly 220,000 persons in 1987 to an estimated 465,000 in 2005.
Table 2. Carbon stocks and emissions of identified land uses.
Table 3. Overview of proximate agents of deforestation and degradation in Fako Division. Note: For the opportunity cost calculation for agricultural expansion, natural dense forest is assumed, while for mangrove forest degradation, mangrove forests are used as a basis.
2016-03-22T00:56:01.885Z
2015-01-09T00:00:00.000
{ "year": 2015, "sha1": "c5100e3676691a633c4ffcb22bce8d66fca576ca", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4907/6/1/203/pdf?version=1420810113", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "7c9abb2db93b224b0c277cf0b832e8812a2ca8d1", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
259749602
pes2o/s2orc
v3-fos-license
Semisupervised Graph Neural Networks for Traffic Classification in Edge Networks , Introduction In the edge network environment, tens of millions of edge nodes are linked together through countless network nodes for data interaction and analysis. At the same time, more and more edge devices are also joining the Internet of Tings. Each network application has its own corresponding trafc behavior characteristics. With the continuous emergence of various new network applications and network application layer protocols, the complexity of network trafc is increasing and becoming more changeable, dynamic, and heterogeneous. To meet network specifcation requirements for a given type of service, trafc data need to be classifed with a high degree of accuracy to satisfy the QoS requirements. Nowadays, the mainstream trafc classifcation method always arranges its models on the centralized cloud server [1], and the edge terminals only take responsibility for sending the collected trafc to the cloud server to train the trafc classifer. Tat will result in subpar real-time performance and raise edge nodes' bandwidth overhead. Te centralized processing mode based on cloud computing models has successfully aggregated computing power and storage capacity and performed unifed network management. Due to the limitations of the edge node's hardware resources, it is often necessary to provide relevant services to users through remote cloud computing resources, and the cloud server still bears a huge computing load. Terefore, it is a new trend to mount neural networks on edge nodes. With more and more edge terminals joining the network, the existing centralized cloud computing service has delegated computing resources to the edge, allowing more data processing tasks to be completed nearby. Te current trend in technology development is more inclined to perform tasks on fexible but resource-constrained terminal devices. However, for AI technology, most of the intelligent algorithms are computationally intensive and require strong computing power for support. Due to the limitations of the hardware performance of the edge node device itself and the network communication environment, we are faced with the problem that it is difcult to realize all of them at the same time. In this paper, we paid attention on how to perform trafc classifcation tasks in edge networks. Te existing trafc analysis and identifcation framework needs to conduct unifed analysis [2][3][4], which leads to the huge bandwidth resources required in the process of transferring all captured packet fles to train a global classifer. When faced with the current situation of edge networks featuring high dynamic, large scale, low bandwidth resources, and weak links, the depth model with a large parameter scale fnds it difcult to play a role on edge nodes. In order to maintain the efciency of edge-side trafc identifcation and traceability, it is necessary to design lightweight trafc analysis models and traceability suitable for cloud-edge end-to-end collaboration scenarios and be able to match and analyze the service QoS. Network trafc classifcation maps the trafc fow through the network according to its type; thus, the managers can have an overview of the network conditions, which is also thought to be a prerequisite for subsequent management decisions. 
Tere are usually three types of classifers applied in the trafc classifcation felds [5]: port-based classifcation methods that group trafc's kinds according to port information; machine learning-based methods that classify trafc according to the statistical features (e.g., conventional machine-based approaches connect the statistical signatures from trafc samples to each application type); and expert labor to select the features to ft the models. Te features that are usually used are packet length, packet direction, packet arrival time, and so on. However, in recent years, researchers have focused on automatically learning feature-based trained deep neural network models for wellknown application kinds. Deep learning-based methods make up for the limitations brought by classic machine learning methods as they do not incorporate human labors. Many deep learning models have been applied in the trafc classifcation feld [6][7][8]. Recently, Pang et al. [6] proposed a method that applied graph neural networks to trafc classifcation. To generate graphs, session' packets are extracted as nodes, and edges are used to record the order information for the trafc sessions; however, the chained graph model only considers the order arrangement of the packets and does not explore the interrelationships between the packets in a session. In this paper, we propose a novel semi-supervised trafc classifcation method based on graph convolutional neural networks. We process the trafc packets uploaded and transform them into graphs to convey their structural information. Ten we use graph neural networks to further extract the features of the trafc data. Finally, we have used multiple GCNNs to expand the training set for the cloud server. On the publicly available network trafc dataset "ISCX VPN and Non-VPN dataset," we verify the efcacy of our model. Te experiment's fndings show that it accomplished outstanding classifcation. Tis paper is to address the existing issues for trafc classifcation tasks in edge networks as the existing methods like deep packet inspection (DPI) [9] remain, which require unifed trafc identifcation analysis after port mirroring, leading to huge bandwidth resources occupied during the process of copying messages from one or more ports (the source port) of the device to a monitoring port (the destination port) of the device. Te upload of the complete trafc data from edge nodes to a cloud server that is frequently used in traditional methods brings a lot of problems, as the explosive growth of trafc in its volume and complexity will result in consuming a large number of bandwidth resources and poor responsiveness of the edge system, thus it cannot assume real-time performance and impede other normal services. Semi-supervised methods can be a good choice to solve the problem that it is easy to collect data but labeling it is cumbersome, especially when given the massive amount of trafc per second forwarding in and out of edge gateways. And also, to relieve the bandwidth pressure of transferring the trafc data from the edge nodes to the cloud server, the edge nodes will select some samples to upload for the training of the cloud server. Besides, to better extract the features when generating the raw captured trafc sessions into graphs. 
If we consider the packets in a session as nodes, all nodes have their own feature information (e.g., sequential features of raw packet streams and statistical features concluded from packet bytes) and structural information (e.g., its relative position in the session and the structure of its byte-length sequence), we can abstract a session to a graph to cover the inter-and intrarelationships it conveys. Te following is the paper's primary contributions: (1) Te designed model is typically applied to the cloudedge architecture as the edge nodes extract the features from the raw capture fles that come in from the terminal side and select the samples for the training set, while the server side performs the semisupervised learning that tries to fnish graph classifcation jobs with just a few samples of labeled graphs. In our framework, edge nodes use several GCNNs to choose highly representative graph instances from the newly collected data, and after pseudo-labeling, add them to the training set. (2) Rather than only regarding the packets in a session as the nodes in the graph, in this paper, we borrow the concept of "fow granules" to cover the internal information between data packets. Individual packets in sessions are extracted and packed into several granules which are incorporated into the graph as nodes. Te relative positions and structural information of the granules are transformed into edges. A graph that represents a session and is labeled as the session's trafc type is later used in classifcation. (3) Our model uses graph convolutional neural networks to capture the trafc data's structural information. Our solution surpasses various state-ofthe-art approaches and produces great results on labeled network trafc datasets that are publicly available for trafc classifcation. Te arrangement of the remaining sections in this paper is as listed: Te proposed trafc classifcation algorithms for semisupervised jobs are then discussed, starting with an introduction to the associated work and a description of the preliminary steps. Te following section of this research paper introduces the datasets utilized and assesses the performance of the proposed model in comparison to previous network trafc classifcation methods. Finally, the paper comes to a conclusion. Trafc Classifcation. In light of the swift advancement of network technologies and the explosive growth of the scale of network trafc, network trafc of diverse types requires diferent underlying network resources. Terefore, in order to achieve efcient network management and improve the quality of network service, it is necessary to efectively monitor and classify network trafc. In recent years, as a result of the rapid development of deep learning in artifcial intelligence and other domains, many academics have started attempting to use deep learning to solve the problem of network trafc classifcation, thus achieving the purpose of online intelligent identifcation of network trafc. Because they extract features without the help of experts, deep learning-based approaches are diferent from conventional machine learningbased methods or packet inspection-based methods. Additionally, deep learning-based methods are more capable of learning than conventional machine learning methods, which allow them to perform better overall [4]. Wang et al. [10] proposed a 1D-CNN-based encrypted trafc classifer extracting features directly from bytes of raw trafc. Lotfollahi et al. 
[11] proposed a method called deeppacket which used the frst 1480 bytes of each IP packet as model input to perform packet-level trafc classifcation tasks and accomplished excellent performance. Lopez-Martin et al. [12] combined recurrent neural networks (RNNs) and CNNs to categorize trafc for every packet in the session using six extracted statistical features. An RNNbased technique for trafc classifcation termed BSNN was proposed by Li et al. [13]. Long short-term memory (LSTM) or gated recurrent units (GRUs) serve as the foundation for the RNN component of BSNN. Network datagrams are treated as input by BSNN, which provides the categorization outcomes immediately. Liu et al. [14] later introduced the FS-Net, an end-to-end trafc classifcation model in which a multilayer encoder-decoder structure fed with fows' sequential features as packet length sequence was used to further enhance the RNN-based encrypted trafc classifer. In [15], multimodal multitask cutting-edge deep learning approaches are applied in a systematic framework to create a viable mobile trafc classifer, which can jointly learn the shared representation of the sequential features (payload bytes) and statistical features (informative protocol header felds) of sessions. Tat work has been further improved in [16], and explainable artifcial intelligence (XAI) is employed to extrapolate the categorization process of the improved version on the state-of-the-art multimodal trafc classifer [17]. In [18], hybrid neural networks are used to analyze the dual-mode features that are extracted from the raw trafc data. In Tables 1 and 2, the related work section is complemented by a table categorizing the reviewed works along with their primary distinguishing features so as to position the present contribution efectively. In Table 1, the singlemodal trafc classifer is presented, and within each category, the works are presented in order of publication. Te following defnitions are provided for acronyms and columns. Column "Research" listed the research for comparison. Column "Input Data" means the input data for the deep learning models; in the entries, LX means the Xth layer of the ISO/OSI model. Column "Trafc object," briefy known as TO, means the trafc classifcation granularity adopted, Entry "B" means bifow/session, "F" means fow, "P" means packet, and "D" means IP datagram. Column "DL Classifer" means the deep learning models adopted for the trafc classifer, in the entries, BiGRU means bi-directional gated recurrent unit, CNN means convolutional neural network, LSTM means long short-term memory, MLP means multilayer perceptron, and SAE means stacked auto encoder. Column "Open," briefy known as O, means whether the publicly available dataset has been adopted, Entry "Y" means Yes, "N" means No, and "P" means partial. In Table 2, multimodal trafc classifcation architectures are listed. Te following defnitions are provided for columns and acronyms in Table 2 that are absent from Table 1. Column "Multimodal," briefy as known as MM, means whether the multimodal deep learning techniques are employed. Column "Multitask," briefy known as MT, means whether the multitask deep learning techniques are employed. Column "Supervised Shared Representation" and Column "Training-Phase Specifcation," briefy known as SSR and TPS, respectively, clarify whether the trafc classifer uses those techniques. For all above columns, "Y" means Yes, "N" means No, "P" means Partial, and "-" means not applicable. 
We can fnd that most classifers only handled single types of features for classifers and few of them dealt with multimodal inputs (MM columns) with specifc subsets of the heterogeneous inputs being trained on the lowest layers. So, we concluded the single-model and multimodal architectures, respectively, in Tables 1 and 2. A bifow is the most frequently used trafc object (TO column), both for extracting input data and for assigning classifcation labels. Also, deep learning models tend to Discrete Dynamics in Nature and Society extract features from the raw input in an end-to-end way. It is the way that deep learning methods learn automatically and do not involve expert labors so that they gain popularity than machine learning methods in trafc classifcation felds. Numerous alternative methods have been developed for the classifcation tasks, including Deep neural networks (DNNs), various autoEncoders (AEs), one-and twodimensional convolutional neural networks (1D and 2D-CNNs), and various recurrent neural networks (RNNs) (DL classifer column). Besides, it showed that a proportion of these publications validate and assess the performance of their classifers using publicly accessible datasets (open columns). In this paper, to learn the diferent views of the classifcation object, we have tried to combine the sequential features and statistical features to extract the features from the raw PCAP fles; that is, we have tried to take advantage of the packet length sequence to segment the packet-byte sequence into granules (which could be transformed into the nodes in the graphs). Ten graph neural networks are employed for trafc classifcation. Semisupervised Trafc Classifcation. In fact, we cannot label all samples, as it will require a lot of labor. In order to solve the problem of insufcient network trafc data labels, methods based on semisupervised deep learning have gained popularity. Semisupervised learning (SSL) is a key problem in the felds of machine learning and pattern recognition. It is a technique for learning that combines supervised and unsupervised learning. Regarding semisupervised learning, pattern recognition is performed on the basis of both labeled and unlabeled data in huge quantities. Semisupervised learning can produce relatively good accuracy while also requiring the fewest number of workers possible. Consequently, semisupervised learning is receiving increased attention. In recent decades, numerous deep learning-based semisupervised methods have proven their efciency and effectiveness in the trafc classifcation feld. Deep convolutional generated adversarial networks (DCGAN) have been employed in [19]. Te accuracy of their method is almost the same as that of the supervised method for the labeled large datasets. Te authors in [20,21] adopted autoencoders, which are thought to be a common technology in semisupervised learning. In [22], the author used stacked sparse autoencoders (SSAEs). Te results obtained demonstrate better performance than the traditional model. In [23], the author proposed a variational automatic encoder (VAE)based model for anomaly detection. Te model is superior to other semisupervised learning models, and the evaluation index increases by 5-10%. Wang et al. [24] proposed a SDN edge gatewayembedded semisupervised trafc classifer based on generative adversarial networks (GANs). 
By training and testing on the public dataset "ISCX2012 VPN-nonVPN," the experimental results demonstrate that the ByteSGAN can efectively outperform other supervised-learning based methods such as CNN. In this paper, the graph convolutional neural networks (GCNNs) are further utilized in the trafc categorization model. GCNN is a kind of convolutional neural network that can directly act on graphs and utilize their structural information. Trafc Classifcation in Edge Networks. In the edge network environment, where network trafc of various heterogeneous types grows exponentially, how to efectively Discrete Dynamics in Nature and Society perform trafc classifcation tasks in edge scenarios remains a problem for researchers. As mentioned, when dealing with SDN edge gateways, Wang et al. [24] proposed a semisupervised trafc classifer using GANs. In SDN edge gateways, various intelligent devices are connected to the edge gateway through wireless access technologies, and all data packets from these smart devices will be queued on the WAN interface, waiting for the edge gateway to forward them out of order. Te trafc classifcation process is mostly concentrated on the SDN controller. Obviously, SDN controllers will sufer from huge fow processing pressure. Tough in [24] only trafc classifers that are applied in SDN edge networks are considered, it can still give some inspiration for trafc classifers applied in edge networks. In an edge environment, there is often just a virtualized resource pool made up of many servers. However, when a number of terminal devices are linked to the edge platform via the edge side, there is frequently signifcant resource demand on the edge side. Numerous terminals and sensors are networked to the edge platform in numerous contexts, including medical, industrial, and the Internet of vehicles. Higher standards are needed to be presented for edge clouds. As shown in Figure 1, in the process of edge-side network trafc classifcation and recognition, the DL models are trained based on the labeled network trafc data. However, in the actual situation, the classifer often receives the network trafc of unmarked categories, resulting in misclassifcation and other problems. In addition, because the labeled sample size is too small, the model trained with small samples is easy to fall into over-ftting of small samples and under-ftting of target tasks. With the increasing complexity of network topology and the explosive growth of network applications, the network trafc on the edge side presents features like nonlinearity, high complexity, and auto-correlation. At the same time, the network trafc of diferent applications varies greatly, which brings difculties and challenges to the accurate marking of network trafc. Te traditional trafc analysis and identifcation framework is based on the cloud server for unifed analysis, which leads to the need for huge bandwidth resources in the process of transmitting all captured packet fles to the cloud server. Terefore, current model applications are more and more inclined to deploy from the cloud to the edge to reduce bandwidth consumption. In the cloud-edge integration system, the edge gateway can realize local linkage between the device and data processing and analysis without networking [25]. However, the edge node deployment trafc analysis model still has problems. When faced with low-bandwidth resources and weak-link edge networks, the deep learning model with huge parameters is unable to play a role on the edge nodes. 
In order to perform accurate trafc classifcation tasks on the edge-side network, cloud-edge collaboration can be used to balance the processing pressure of edge nodes and maximize the advantages of cloud computing and edge computing with high processing efciency and low latency. Te edge gateways and cloud server can cooperate to share the pressure of the central cloud node. Some of the data computation and storage work is carried out by the edge computing node, reducing the computing processing pressure of the edge cloud server to aggregate the trafc data of each node for unifed trafc analysis. Based on this, this paper proposes a semisupervised network trafc classifcation and identifcation method for cloud-side collaboration scenarios. Tis method distributes part of the trafc analysis tasks on the central cloud server and the edge gateway to jointly complete the edge-side trafc identifcation task and realize the efcient use of computing resources. Graph-Based Traffic Classification in Edge Computing Networks To deal with bandwidth shortage problems brought by traditional methods, we are considering a cloud-edge integrated collaborative system with several edge gateways and cloud servers. Traditional trafc classifers tend to collect all captured raw trafc fles together, and that may eat up the network bandwidth while transmitting raw fles. In a cloudedge integrated system, we split the trafc classifcation tasks into several phases and put them on edge nodes and a cloud server to fully utilize all computing resources and prevent latency and bandwidth insufciency when putting all models and trafc raw captured fles on one side. Te detail is depicted in Figure 2. At the edge gateway layer, there are mainly two stages as feature extraction and graph generation. Te specifc process is as follows: (1) Te edge-side gateway captures and processes the features of the trafc packets uploaded by the terminal node to avoid a large number of complete packets' information being uploaded to the cloud center, so as to reduce the data processing delay and relieve the resource pressure on the cloud server. (2) At the same time, the edge gateway will use the graph neural networks to further transform the trafc data into graphs, using "granules" to further extract the interrelationship between individual packets within a session. Multiple GCNNs are also used to select and transmit the samples with high confdence to the cloud server. Graph Generation Based on Granules in Edge Nodes. Te edge nodes (here we mean edge computing gateways) will collect all raw packet fles (PCAPs) captured by edge devices that are abstracted as edge nodes in the network. Te edge nodes will further process the PCAP fles and put them into the graph neural networks to classify the trafc session. Te defnition of the fow is provided frst. Flow separation is the initial step after receiving the raw trafc fles (PCAPs). Trafc is made up of fows, and a fow is a collection of packets that can be uniquely identifed by the traditional fve-tuple notation (source IP, destination IP, source port, destination port, and protocol). Sessions are made up of packets with an exchangeable pair of network source and destination, and they include all packets sent between two hosts during the course of a session. Tis paper uses a session as the trafc classifcation granularity. We typically cover 2 types of features: session-level features and the packets' sequential features. 
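As a concrete illustration of the flow-separation step described above, the sketch below groups raw packets into sessions using a direction-agnostic five-tuple, so that packets travelling in either direction between the same two endpoints fall into the same session. The plain-tuple packet representation is a simplifying assumption (no specific capture library is implied).

```python
from collections import defaultdict


def session_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Direction-agnostic five-tuple: both directions map to the same key."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)


def split_into_sessions(packets):
    """
    packets: iterable of (src_ip, dst_ip, src_port, dst_port, proto, payload_bytes)
    returns: dict mapping session key -> list of payloads in arrival order
    """
    sessions = defaultdict(list)
    for src_ip, dst_ip, src_port, dst_port, proto, payload in packets:
        sessions[session_key(src_ip, dst_ip, src_port, dst_port, proto)].append(payload)
    return sessions


# Two packets of the same bidirectional flow end up in one session:
pkts = [
    ("10.0.0.2", "8.8.8.8", 51000, 443, "TCP", b"\x16\x03\x01"),
    ("8.8.8.8", "10.0.0.2", 443, 51000, "TCP", b"\x16\x03\x03"),
]
assert len(split_into_sessions(pkts)) == 1
```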
Packet length sequence (i.e., the number of bytes per packet) is used as a session-level features. Besides, we typically select the frst M bytes for every packet to compose the byte sequence of this session (packets with length less than M will be pad with zeros to reach M-byte sequence, while the packets' bytes exceeding the range of M will be truncated). As depicted in Figure 3, a bifow classifcation object can be transformed into a vector of X ∈ R n×M , where n is the size of packets of the session. For the packet level, the individual packet can be vertically segmented into several granules, as shown in Figure 3(a), that would be transformed into vertices of the graphs. When segmenting the neighboring packets into the granules in the next part, we use the session' statistical features (a session's packet length sequence) to perform the packet segmentation. Secondly, the edge gateway further processes the trafc data uploaded by the edge devices. As depicted in Figures 3(a) and 3(b), each packet-byte sequence is vertically segmented according to the session-level information. To fully explain how to transform the session's packetbyte sequence according to the packet length sequence. At frst, we introduce the concept of "fow granule." Te concept of "fow granule" used in this paper is inspired and derived from [26]. Te term "fow granule" was initially used in [26], which explained how neighborhood data packets with the same packet length might be aggregated to create granules. As a result, an aggregated packet sequence rather than a single, unique packet now represents the information to be processed. Here, we try to segment the packet sequence to extract the internal relationship between the packets themselves. If consecutive interarrival data packets tend to show similarities in their length or very probably appear in the same neighborhood, they will be combined to become a "granule." Te session is composed of a sequence of granules that will still keep the order within the granule itself and within the session. Now suppose, we segment the session sequence into several subsequences to fnd the granule segments. Our goal is to segment the session' packet sequence X � x 1:T � (x 1 , . . . , x T ), where T is the packet size for that session. Neighboring packets with similar sizes can be combined together to make a granule that later can be transformed into a node in graphs. After vertical segmentation, we get the granule G(y)| y�1.2...Y per session as formula (1); Y is the size of the granules per session. (1) After vertically segmenting the packets to make up the fow granules, to further transform the fow granules into graph nodes, the packets within the fow granule are combined, as depicted in Figure 3(c), by calculating the average number of the packet bytes. where m � 1, 2, . . . , M, and M means the length of the packet sequence. |U j | means the packet size of that granule. Nodes correspond to granules, while edges refer to the adjacency between the granules when transforming every session's packet-byte sequence into graphs. As opposed to the proposed methods in [6], the features of every packet in the session are extracted to represent the graph's nodes and then a chained graph is created according to the session's packet order. In this step, we not only cover the packet-byte sequence itself but also explore the interrelationship between the consecutive data packets, as an individual granule can be represented as a node in graphs. 
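The sketch below illustrates the two operations described above: truncating or zero-padding each packet to M bytes to form the session matrix X ∈ R^(n×M), and merging runs of neighboring packets with similar lengths into granules whose node feature is the element-wise average of the member packets (formula (1) and the byte-averaging rule). The relative length tolerance used to decide "similar" is an assumed parameter, not a value given in the paper.

```python
import numpy as np


def session_matrix(payloads, m=1500):
    """Truncate/zero-pad each packet payload to m bytes -> X of shape (n, m)."""
    rows = []
    for p in payloads:
        buf = np.zeros(m, dtype=np.float32)
        k = min(len(p), m)
        buf[:k] = np.frombuffer(p[:k], dtype=np.uint8)
        rows.append(buf)
    return np.stack(rows)


def segment_into_granules(lengths, tol=0.1):
    """Group consecutive packets whose lengths differ by at most tol (relative)."""
    groups, current = [], [0]
    for i in range(1, len(lengths)):
        prev = lengths[current[-1]]
        if abs(lengths[i] - prev) <= tol * max(prev, 1):
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups  # list of lists of packet indices, order preserved


def granule_features(x, groups):
    """Node feature of a granule = average of its packets' byte vectors."""
    return np.stack([x[idx].mean(axis=0) for idx in groups])  # shape (Y, m)


payloads = [b"\x01" * 50, b"\x02" * 52, b"\x03" * 373, b"\x04" * 70]
X = session_matrix(payloads, m=100)
groups = segment_into_granules([len(p) for p in payloads])   # [[0, 1], [2], [3]]
F = granule_features(X, groups)                              # one row per graph node
```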
Each node is associated with a feature vector for that granule. Then the edges are generated. By default, each edge is undirected. After obtaining the nodes, we extract the set of edges between nodes according to the adjacency between packets. Here we use undirected edges because undirected edges capture the relative sequence relationship between granules better than directed edges. Suppose the node feature matrix at this point is F ∈ R^(y×M); accordingly, we construct node correlation functions, which take a node feature matrix as input and produce the corresponding adjacency matrix A^T ∈ R^(y×y): if two original messages are adjacent in the raw sequence, an undirected edge is established between the corresponding two nodes. Here, Corr(·) computes correlations or dependencies of the channels (nodes) on the basis of X^T; node correlation functions come in a variety of options, and we calculate the correlations according to formula (4). In this stage, we extract the relationships between subsequent data packets in addition to using the isolated packet information and transform a session into graphs. Later, we can use graph neural networks to model the traffic data and further mine the inner relationship between the traffic data and its types.
Figure 3. Vertically segmenting the session's packet-byte sequence into granules according to the packet length sequence.
Semisupervised Traffic Classification in Edge Server and Centralized Server. In the last subsection, sessions were transformed into graphs to convey the structural information between the packets within a session. In this stage, our goal is to forecast the class labels of the graphs, and thereby the label of the traffic session that each graph represents. A node in a graph often symbolizes an object in the real world; in this paper, it means a granule. Moreover, sessions can also be interconnected and abstracted as nodes in a global graph. Here we explore a more difficult but practically valuable scenario in which a node is a graph instance in and of itself, after the raw byte sequences of the traffic sessions have been transformed into several graphs that model the interdependency among the sessions' packets. We find that session-level graphs show similarity when they belong to the same application type, which means that a set of graph instances can be modeled as a hierarchical graph connecting individual graphs with edges. Regarding graph classification tasks, typical graph-based neural network algorithms often need a large number of labeled graph samples. However, since large-scale labeled graph datasets often come at a significant cost in terms of time and effort, graph classification jobs frequently encounter the issue of a lack of labeled graph samples. Besides, considering the edge networks, we cannot deploy all the neural networks on the edge nodes, since only limited resources are available there; if all the work is pursued on the central servers, it leads to high latency and wasted computing resources. In this paper, active learning techniques are employed to enhance the efficiency of semisupervised learning, which unifies the edge nodes and the central server. In this part, we first introduce a self-attentive graph embedding technique to encode graphs of any size into fixed-length vectors, which are frequently utilized as semisupervised classification input.
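Before turning to the embedding, the sketch below illustrates the edge construction described above: the first function links granules that are adjacent in the raw packet order with undirected edges, and the second shows one possible correlation-based variant. The paper's own correlation function (formula (4)) is not reproduced here, so the thresholded Pearson correlation is purely illustrative, as is the threshold value.

```python
import numpy as np


def sequence_adjacency(num_granules):
    """Undirected edge between granules adjacent in the session's raw order."""
    a = np.zeros((num_granules, num_granules), dtype=np.float32)
    for i in range(num_granules - 1):
        a[i, i + 1] = a[i + 1, i] = 1.0
    return a


def correlation_adjacency(f, threshold=0.5):
    """Illustrative alternative: connect granules with strongly correlated features."""
    c = np.corrcoef(f)                      # y x y correlation of node feature rows
    a = (np.abs(c) >= threshold).astype(np.float32)
    np.fill_diagonal(a, 0.0)                # no self-loops here; added during normalization
    return a


F = np.random.rand(4, 100)                  # 4 granule nodes with 100-byte features
A_seq = sequence_adjacency(4)
A_cor = correlation_adjacency(F)
```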
In addition to greatly simplifying the representation of a hierarchical graph, the embedding approach also offers meaningful interpretations of an individual graph instance through a self-attentive mechanism that distinguishes the role of each node in categorizing the graph. This phase, which connects and unifies the edge nodes and the cloud server, can be separated into two parts, graph embedding and graph-based classification, as depicted in Figure 4. The former transforms a graph with a variable number of nodes into a fixed-length vector, which then serves as the classification input in the latter part. In the following, we describe the graph embedding part, which takes the processed samples from the previous stage as input. As depicted in Figure 5, the purpose of this part is to convert a graph with a varying number of nodes into a vector e_n of unified dimension and then use it as the input for graph classification. First, two layers of GCNs are applied, with the adjacency matrix A ∈ R^(y×y) and the attribute matrix F ∈ R^(y×M) as input. Then we get

H = \hat{A}\,\mathrm{ReLU}(\hat{A} F W_0)\, W_1,

where \hat{A} = D^{-1/2}(A + I_n)D^{-1/2} is the normalized form of the adjacency matrix A, I_n is the identity matrix, and D_{ii} = \sum_m (A + I_n)_{im}. Here, W_0 ∈ R^(M×h) and W_1 ∈ R^(M×v) are two weight matrices. Next, we use a self-attentive mechanism to assign different weights to the nodes of the graph, so as to differentiate the nodes within a graph,

S = \mathrm{softmax}\big(W_{s2}\tanh(W_{s1} H^{T})\big),

where W_{s1} ∈ R^(d×v) and W_{s2} ∈ R^(r×d). The purpose of multiplying by W_{s1} is to linearly convert the node representation from a v- to a d-dimensional space; after that, nonlinearity is added through the tanh function, and W_{s2} is used to infer the importance assigned to each node within the graph. Lastly, we obtain the embedding e ∈ R^(r×v) by multiplying S and H. After a softmax, we also obtain the predicted class probabilities ψ from a fully connected layer; these probabilities are used for picking out the samples sent to the cloud server. To enhance graph classification performance, we must choose which samples should be filtered out and applied to the graph neural network-based classifier on the cloud server afterwards. The framework determines which graph examples are considered significant for enhancing the performance of the graph classification model by employing a number of supervised classifiers, and the training set is then updated with these examples. We introduce a unified classifier system that uses weighted majority voting to combine the decisions of P isolated GCN-based classifiers to decide the final label of a graph sample, and it obtains the weights by maximizing the performance of the whole expert set. To be more specific, each classifier has the same set-up as in Figure 4 but a different kernel size. When an individual classifier produces its predicted class probabilities ψ, the final decision can be calculated; if a classifier's result shows the same label as the final voting result, its weight is increased. The weighted voting method assigns a certain weight to each classifier member, and the weight is obtained by measuring the accuracy of each member on the training set. The weight is proportional to the accuracy; that is, a base classifier with good classification ability is given a larger weight coefficient, while a base classifier with relatively poor classification ability is given a smaller weight coefficient, and the integration result depends on the weighted sum.
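To make the pipeline above concrete, the following NumPy sketch strings together the two GCN layers, the self-attentive readout, a class head, and the accuracy-weighted vote. The weights are random stand-ins for trained parameters, W_1 is shaped h×v here so that the matrix products line up, and the 0.9 confidence threshold used for the sample selection discussed in the next paragraph is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def normalize_adj(a):
    """D^{-1/2} (A + I) D^{-1/2}, the renormalized adjacency used by the GCN layers."""
    a_tilde = a + np.eye(a.shape[0])
    d = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    return a_tilde * d[:, None] * d[None, :]

class SelfAttentiveGCN:
    """Two GCN layers + self-attentive readout + class head (untrained weights)."""
    def __init__(self, m, n_classes, h=64, v=128, d=32, r=4):
        self.w0 = rng.normal(scale=0.1, size=(m, h))
        self.w1 = rng.normal(scale=0.1, size=(h, v))
        self.ws1 = rng.normal(scale=0.1, size=(d, v))
        self.ws2 = rng.normal(scale=0.1, size=(r, d))
        self.wc = rng.normal(scale=0.1, size=(r * v, n_classes))

    def forward(self, f, a):
        a_hat = normalize_adj(a)
        h = a_hat @ relu(a_hat @ f @ self.w0) @ self.w1          # H: y x v
        s = softmax(self.ws2 @ np.tanh(self.ws1 @ h.T), axis=1)  # S: r x y
        e = s @ h                                                # e: r x v (fixed size)
        psi = softmax(e.reshape(-1) @ self.wc)                   # class probabilities
        return e, psi

def weighted_vote(psis, clf_weights):
    """Accuracy-weighted mean of the P classifiers' class probability vectors."""
    w = np.asarray(clf_weights, float)
    w = w / w.sum()
    mean_prob = sum(wi * p for wi, p in zip(w, psis))
    return int(mean_prob.argmax()), float(mean_prob.max())

# One toy graph (5 granules, 100-byte features) scored by P = 3 edge classifiers:
F = rng.random((5, 100))
A = np.zeros((5, 5)); A[np.arange(4), np.arange(1, 5)] = 1; A += A.T
clfs = [SelfAttentiveGCN(m=100, n_classes=6) for _ in range(3)]
psis = [clf.forward(F, A)[1] for clf in clfs]
label, conf = weighted_vote(psis, clf_weights=[0.92, 0.90, 0.94])
print(f"pseudo-label {label} kept for the cloud: {conf >= 0.9}")  # 0.9 is assumed
```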
Now, we need to select the samples with high performance gain, which is described mathematically as the weighted mean of the class output probabilities calculated in the last phase. To be more specific, we define the graph samples to be picked out as training samples as those whose weighted mean classification probability over the P classifiers is higher than a threshold. The selected samples are annotated and added to the labeled training set to improve the effectiveness of the GCN-based classifier on the cloud server and to further improve the accuracy of the graph classification tasks. The graph embedding phase thus tends to enlarge the labeled training set and produces the fixed-length embedding e ∈ R^(r×v) that serves as the model input on the cloud server. The semi-supervised classification phase is set up on the cloud server. The definitions of the problem are given first. Graphs are represented as G_m = (V, E), where V is the set of nodes and E the set of edges that define a graph. The goal is to map each graph to its class label with a function f: {G_m}_{m=1}^{M} → Y, given the set of graphs {G_m}_{m=1}^{M}. We incorporate the active graph classification phase to select a set of graphs G_select = {G_{l+1}, ..., G_{l+k}} from the unlabeled samples G_U into the labeled training set G_L after annotation, so that the new training set on the central server has a better ability to predict the unlabeled class labels. In order to enhance graph classification outcomes for semi-supervised learning, our method chooses the unlabeled graph samples in which the multiple GCNs have high confidence and adds them to the training set after pseudo-labeling. GCN-based models are employed for semisupervised training on the cloud server. Given the graph embeddings E = {e_i}_{i=1}^{L+U} and the adjacency matrix Θ ∈ R^((L+U)×(L+U)), which are calculated according to formula (4), two GCN layers are used here, and the classification probability of each graph example is produced by the softmax layer, where \hat{Θ} = D^{-1/2}(Θ + I_n)D^{-1/2} is the normalized form of the adjacency matrix Θ, I_n is the identity matrix, D_{ii} = \sum_m (Θ + I_n)_{im}, and W_0^Θ and W_1^Θ are weight matrices. The parameters on the edge nodes are not retrained but rather fine-tuned from the parameters obtained in the previous iteration, to further increase efficiency. On the cloud server, the graph neural network-based semi-supervised traffic classifier uses the pseudo-labeled samples to further train itself and a softmax layer to obtain its outputs.
Evaluation Metrics. All approaches are evaluated based on their accuracy (A.), recall (R.), and F1-score (F1).
Deep-Packet: This method combines the steps of feature extraction and classification into a single system and is based on CNNs processing the byte sequence of a packet. On the UNB ISCX VPN-nonVPN dataset, it performs admirably.
GCNN: This method uses a chained graph model on the traffic packet data and performs supervised traffic classification with graph neural networks over features automatically extracted from the chained graphs.
DISTILLER: This method leverages the combined and efficient use of multitasking and multimodal deep learning techniques. It handles three jobs at once: encapsulation, traffic type, and traffic application categorization tasks. Additionally, it offers the traffic object from the packet-level and flow-level viewpoints. However, in this study, we focus only on the problem of traffic type categorization.
ByteSGAN: Tis method typically employs semisupervised learning approaches based on generative adversarial networks (GAN) for the categorization of encrypted data. It is intended to be incorporated in the SDN edge gateways. Te method uses the packet-byte data for the model input. Efectiveness Analysis. It is evident that the enhanced algorithm performs well. Te fndings will be displayed in Figures 6, 7, 8, and 9 in order to more clearly illustrate how this method has improved things. Experimental Setup. We set P � 3 graph convolutional neural networks built on edge nodes to vote for the last label outcome and set a 1500-byte input sequence length limit; if length is less than 1500, pad zeros; if more, then truncate the byte sequence. We set 2 layers of graph convolutional networks for each classifer and their input and output dimensions which are n × 64, n × 128, n × 256, where n represents the number of nodes. To perform semisupervised trafc classifcation learning, we randomly choose 30% of labeled datasets to train the ByteSGAN and SSGAN (our semi-supervised classifer). Figure 6 shows that our method shows excellent performance when applied to the VPN dataset. Te accuracy, recall, and F1-score all rise when compared to the DISTILLER method by 9.42%, 9.67%, and 8.91%, respectively. Contrasting with the GCNN algorithm, the accuracy and F1-score are improved by the method we use by 0.46% and 3.40%, respectively. Compared to the ByteSGAN semi-supervised algorithm, by 0.92%, 1.57%, and 3.03%, respectively, our technique raises accuracy, recall, and the F1-score. Figure 7 shows that our method applied to the non-VPN dataset performs best. Compared to the DISTILLER algorithm, the accuracy improves by 18.67%, the recall increases by 25.38%, and there is a 17.83% boost in the F1-score. In contrast to the deep-packet algorithm, the accuracy, recall, and F1-score are improved by our approach by 0.87%, 4.55%, and 1.71%, respectively. In contrast to the GCNN algorithm, our method increases accuracy, recall, and the F1score by 2.77%, 5.10%, and 3.61%, respectively. Compared to the ByteSGAN semi-supervised algorithm, our method increases by 1.79%, 4.01%, and 2.69% in terms of accuracy, recall, and the F1-score. Figure 8 shows that our method shows excellent performance when applied to the TOR dataset. Te accuracy rises by 2.74%, the recall rises by 12.41%, and the F1-score rises by 5.75% in comparison to the DISTILLER method. In contrast to the Deep-Packet algorithm, our method increases accuracy, recall, and the F1-score by 0.33%, 2.14%, and 1.83%, respectively. In contrast to the GCNN algorithm, our method increases accuracy, recall, and the F1-score by 0.95%, 2.62%, and 3.57%, respectively. Compared to the ByteSGAN semisupervised algorithm, our method increases accuracy, recall, and the F1-score by 0.92%, 3.22%, and 3.01%, respectively. Experimental Results of Non-TOR Dataset. Figure 9 shows that our method applied to the non-TOR dataset performs best. Compared to the DISTILLER algorithm, the accuracy improves by 5.01%, the recall increases by 7.74%, and the F1-score increases by 4.21%. Compared to the deep-packet algorithm, our method increases accuracy, recall, and F1-score by 12.63%, 13.6%, and 12.39%, respectively. When compared to the GCNN algorithm, our method increases accuracy, recall, and F1-score by 7.72%, 12.15%, and 10.16%, respectively. Compared to the Byte-SGAN semisupervised algorithm, our method improves accuracy, recall, and F1-score by 3.0%, 5.38%, and 5.78%, respectively. 
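The comparisons above are reported as accuracy, recall, and F1-score. Since the formal definitions are referenced but not reproduced in this section, the sketch below uses the standard multi-class versions (overall accuracy, macro-averaged recall and F1) as a stand-in; whether the authors macro- or weight-average across classes is an assumption.

```python
import numpy as np


def accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())


def macro_recall_f1(y_true, y_pred):
    """Macro-averaged recall and F1 over the classes present in y_true."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls, f1s = [], []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        rec = tp / (tp + fn) if tp + fn else 0.0
        prec = tp / (tp + fp) if tp + fp else 0.0
        recalls.append(rec)
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(recalls)), float(np.mean(f1s))


y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(accuracy(y_true, y_pred), *macro_recall_f1(y_true, y_pred))
```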
Figure 10 shows the confusion matrix for the VPN and non-VPN dataset, and we can see that almost all the trafc types have good performance. In the graph generation part, since we extract the trafc features by grouping packets as granules according to their sequential features, we can have better performance than simply transforming the simple packets into nodes and generating chained graphs when we use these graphs to perform trafc classifcation later. Also, we compare our methods with the fully-supervised methods since we want to prove that the SSGCN can have the same efcacy as the supervised methods such as deep-packet using Discrete Dynamics in Nature and Society only 30% part of the labeled dataset. Besides, SSGCN also shows its efectiveness when compared to the semisupervised methods like ByteSGAN, as we employ the graphs to extract the structural information for the trafc data. Conclusions In this paper, we have presented a novel semi-supervised trafc classifcation approach based on improved graph convolutional neural networks. In the edge-server integrated-system, the trafc packets uploaded are processed and transformed into graphs. We have used multiple GCNNs to enlarge the training set for the cloud server. Te cloud server performs the semisupervised trafc classifcation tasks based on graph convolutional networks. On publicly available network trafc datasets, we verify the efcacy of our model. Te experiment's fndings show that it is possible to accomplish outstanding classifcation. In further study, we will investigate these aspects of the suggested methodology: (1) Te majority of current trafc classifers operate inside the predefned trafc categories. Tese techniques cannot handle unrecognized trafc from unrecognized classes. Zero-day applications are trafc classifcations for which the classifer has not been trained. Just a small number of recent studies, many of which rely on locating unlabeled clusters and later classifying them, have ofered solutions for zero-day applications. (2) Te procedure for deploying the network will be extended to the real environment. More metrics will be introduced to measure the trafc classifer' performance. Conflicts of Interest Te authors declare that they have no conficts of interest.
2023-07-12T06:12:04.256Z
2023-07-03T00:00:00.000
{ "year": 2023, "sha1": "a228933c35d9bea6733ce658672a9a2fa54f9b7a", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ddns/2023/2879563.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "09486f662c07edc2f4aed3cded87735c8d86a4d1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
17557755
pes2o/s2orc
v3-fos-license
Deep Transient Optical Fading in the WC9 Star WR 106 We discovered that the WR9-type star WR 106 (HDE 313643) underwent a deep episodic fading in 2000. The depth of the fading (dV ~ 2.9 mag) surpassed those of all known similar"eclipse-like"fadings in WR stars. This fading episode was likely to be produced by a line-of-sight episodic dust formation rather than a periodic enhancement of dust production in the WR-star wind during the passage of the companion star though an elliptical orbit. The overall 2000 episode was composed of at least two distinct fadings. These individual fadings seem to more support that the initial dust formation triggered a second dust formation, or that the two independent dust formations occurred by the same triggering mechanism rather than a stepwise dust formation. We also discuss on phenomenological similarity of the present fading with the double fading of R CrB observed in 1999-2000. Introduction Wolf-Rayet (WR) stars are massive, luminous stars which have blown away the hydrogen envelope, and are considered to be immediate precursors of some kinds of supernovae. WCL-type stars are a carbon-rich, late-type subclass of WR stars [for the definition of the subclasses of WC-type stars, see e.g. Smith & Shara (1990); more comprehensive information of WR stars can be found in the catalogue by van der Hucht (2001). WC9-type stars are the coolest WCL-type stars which are characterized by strong CIII and CII lines, and the weak or absent OV feature (Torres & Conti, 1984). WC9-type stars have been receiving much astrophysical attention in that they are one of the most effectively dust-producing environments in stellar systems (for recent reviews, see Williams (1995Williams ( , 1997). The dust-forming process in WCL-type (especially in WC9-type) stars is known to be either continuous or episodic. The best-known continuous dust producer is a binary WR 104 (WC9+B0.5V), renowned for its "dusty pinwheel nebula" (Tuthill et al., 1999(Tuthill et al., , 2002. Recently discovered large-amplitude optical variability even suggests the presence of a continuous "dust jet" in the direction of the rotation axis (Kato et al., 2002). WR 112 (WC9+OB?) Send offprint requests to: Taichi Kato, e-mail: tkato@kusastro.kyoto-u.ac.jp has been also suspected to have a similar dusty pinwheel (Marchenko et al., 2002). Another class of manifestation of dust production in WCL stars is episodic optical fading (Veen et al., 1998) or episodic infrared brightening (Williams et al., 1990), which are considered to arise from temporary condensations of dust clouds. In 2001 April, one of the authors (KH) serendipitously discovered a new variable star named Had V84 (vsnetalert 5856), 1 which was subsequently identified with WR 106 = HDE 313643 ( Fig. 1). WR 106 is known to show a strong infrared excess (Cohen & Vogel, 1978;Cohen, 1995;Kwok et al., 1997;Pitault et al., 1983), which indicates substantial dust formation. We also noticed that the object was listed as No. 15357 in FitzGerald (1973), who suspected 0.13 mag V -band variability based an analysis of past photoelectric archival data. The object was given a name for suspected variable star (NSV 10152), but the variability was not confirmed at that time. Observation and Results A total of 177 observations were made between 1994 February 17 and 2002 June 18, with twin patrol cameras equipped with a D = 10 cm f/4.0 telephoto lens and unfiltered T-Max 400 emulsions, located at two sites in Toyohashi, Aichi (KH) and Saku, Nagano (KT). 
The passband of observations covers the range of 400-650 nm. Photographic photometry was performed using neighboring comparison stars, whose V -magnitudes were calibrated by T. Watanabe. The magnitudes were derived by a combination of image size and density. The overall uncertainty of the calibration and individual photometric estimates is 0.2-0.3 mag, which will not affect the following analysis. A scatter around the maximum light likely comes from statistical distribution of errors, although superposed intrinsic variations cannot be ruled out. The resultant light curve is presented in Fig. 2. The star showed an overall range of variability of between 11.4 and fainter than 14.7 mag. Taking into measurement errors into consideration, the minimum full amplitude of the variation is 2.9 mag. Fig. 3 shows an enlarged light curve of the 2000 fading episode. This figure clearly demonstrates that the overall 1 http://www.kusastro.kyoto-u.ac.jp/vsnet/alert5000/ msg00856.html. An inspection of the available archived images at the USNOFS pixel server, 9 epochs during 1950-1996, has revealed no distinct fading of WR 106, suggesting that fadings are rather rare. Discussion WR 106 was studied for binarity by Williams & van der Hucht (2000). The lack of evidence for a companion and the apparent lack of photometric periodicity (Fig. 2) less favor the interpretation of a periodic enhancement of dust production in the WR-star wind during the passage of the companion star though an elliptical orbit, as has been proposed in WR 140 (Williams et al., 1990) and presumably WR 137 (Williams et al., 2001). The present phenomenon seems to be better understood as an "eclipse-like", line-of-sight dust formation as proposed by Veen et al. (1998). The depth of the present phenomenon, however, far surpasses those (up to 1.2 mag in visual wavelengths) of the previously known similar phenomena in other stars. Following the interpretation by Veen et al. (1998), the production rate of the optical depth or the dust production rate in the present episode should be at least a few times larger than in the previously recorded phenomena. Furthermore, the observed depth severely constrains the amount of the unobscured scattered light to be less than 7 %. The present phenomenon is composed of at least two distinct fadings (Fig. 3). Veen et al. (1998) reported the presence of two-step fadings in some fadings. Veen et al. (1998) suggested several possibilities to explain the twostep fadings: (1) sudden enhancement of the dust production in response to an inflow of additional matter to the dust production area, (2) non-radial expansion of a neighboring cloud, or (3) formation of the second cloud in the shade of the first cloud. In the present case, the close occurrence of two rare fadings suggests that they are not a chance superposition of two independent phenomena, but are more physically related. The similar observed depths and durations of the two fadings do not seem to support a stepwise formation of the dust cloud, as represented by the possibilities (1) and (3). The present observation seems to more support that the initial dust formation somehow triggered a second dust formation in the proximity, or that the two independent dust formations occurred by the same triggering mechanism. We also note phenomenological similarity of the present fading with the "double fading" of R CrB observed in 1999-2000, the data are from VSNET. 2 ). The fading mechanism proposed by Veen et al. 
The fading mechanism proposed by Veen et al. (1998) is analogous to the fading mechanism of R CrB stars (for a review, see Clayton (1996)); this analogy may suggest a common underlying dust production mechanism between R CrB stars and WR 106. Similar double fadings are also known in some [WC] stars (CPD−56°8032 = He3−1333 = V837 Ara: Pollacco et al. (1977); V348 Sgr: Heck et al. (1985)), which are sometimes considered to be related to R CrB-type stars. It is widely believed that the dust formation in R CrB stars is associated with pulsation (Clayton, 1996). Although the large differences in gravity and temperature between WR stars and R CrB stars may make it difficult to directly apply the R CrB-type dust formation to a WR star, a pulsation-type instability in the outer WR wind, similar to that of R CrB stars, may have caused a similar sequence of fadings in a WR star. The authors are grateful to the observers who reported visual observations of R CrB to VSNET. This work is partly supported by a Grant-in-Aid [13640239 (TK), 14740131 (HY)] from the Japanese Ministry of Education, Culture, Sports, Science and Technology. This research has made use of the Digitized Sky Survey produced by STScI, the ESO Skycat tool, and the VizieR catalogue access tool. This research has made use of the USNOFS Image and Catalogue Archive operated by the United States Naval Observatory, Flagstaff Station (http://www.nofs.navy.mil/data/fchpix/).
2014-10-01T00:00:00.000Z
2002-08-22T00:00:00.000
{ "year": 2002, "sha1": "40336d2605f05adf587a58073668e55a3db8b513", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2002/39/aaeh062.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "40336d2605f05adf587a58073668e55a3db8b513", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
27125090
pes2o/s2orc
v3-fos-license
Membrane Type-1 Matrix Metalloproteinase (MT1-MMP) Exhibits an Important Intracellular Cleavage Function and Causes Chromosome Instability* Elevated expression of membrane type-1 matrix metalloproteinase (MT1-MMP) is closely associated with malignancies. There is a consensus among scientists that cell surface-associated MT1-MMP is a key player in pericellular proteolytic events. Now we have identified an intracellular, hitherto unknown, function of MT1MMP. We demonstrated that MT1-MMP is trafficked along the tubulin cytoskeleton. A fraction of cellular MT1-MMP accumulates in the centrosomal compartment. MT1-MMP targets an integral centrosomal protein, pericentrin. Pericentrin is known to be essential to the normal functioning of centrosomes and to mitotic spindle formation. Expression of MT1-MMP stimulates mitotic spindle aberrations and aneuploidy in nonmalignant cells. Volumes of data indicate that chromosome instability is an early event of carcinogenesis. In agreement, the presence of MT1-MMP activity correlates with degraded pericentrin in tumor biopsies, whereas normal tissues exhibit intact pericentrin. We believe that our data show a novel proteolytic pathway to chromatin instability and elucidate the close association of MT1-MMP with malignant transformation. MT1-MMP functions as one of the main mediators of proteolytic events on the cell surface, and it is directly involved in the pericellular proteolysis of the extracellular matrix, cell surface adhesion, and signaling receptors and in the activation pathway of soluble secretory MMPs (5,(7)(8)(9) Cell surface-associated MT1-MMP acts as a growth factor in malignant cells and assumes tumor growth control (4). The conditional expression of MT1-MMP can, by itself, confer tumorigenicity on nonmalignant epithelial cells and cause the formation of invasive tumors (10). MT1-MMP also plays an important role in normal development; MT1-MMP knock-out mice are dwarfs, and they die prematurely (8,11). A loss of the structurally similar primordial At2-MMP induces dwarfism in Arabidopsis plants (12). There is no extracellular matrix in plants, however, that is similar to the collagenous extracellular matrix of mammals. This datum alone is enough to suggest that the protease plays a role in certain functionally relevant intracellular events in addition to its role in pericellular proteolysis. MT1-MMP is tightly regulated at the transcriptional and posttranscriptional levels both as a protease (through activation and inhibition) and as a membrane protein (via trafficking, internalization, and recycling) (13)(14)(15) The trafficking and the internalization, via clathrin-coated pits and caveolae, have emerged as the essential mechanisms that regulate the biological function of MT1-MMP (16 -23). These new data, combined together, provided a compelling argument to investigate the trafficking and the intracellular compartmentalization of MT1-MMP in greater detail. These data also argue that there is a role for the protease in intracellular events in addition to its role in pericellular proteolysis. Here, we have discovered compelling evidence that MT1-MMP is trafficked along the tubulin cytoskeleton. A fraction of cellular MT1-MMP accumulates in the centrosomal compartment. In the pericentrosomal compartment, active, functionally potent MT1-MMP degrades an integral centrosomal protein, pericentrin. Pericentrin is essential to the normal functioning of centrosomes in the mitotic spindle formation. 
MT1-MMP proteolysis of pericentrin causes chromosome instability, which is an early predictor of carcinogenesis. Overall, our results suggest an intracellular function for the membranetethered protease and an important role of MT1-MMP in the transition of cells from normalcy to malignancy. MATERIALS AND METHODS Antibodies and Cells-Rabbit polyclonal antibodies against the catalytic domain and against the hinge region of MT1-MMP were from Chemicon (Temecula, CA), Sigma, and Triple Point Biologics (Portland, OR). Rabbit polyclonal antibodies 4b and M8 to the C-terminal and N-terminal parts of pericentrin, respectively, were characterized earlier (24,25). A murine monoclonal antibody against ␥-tubulin was from Sigma. Monoclonal antibodies against ␣-tubulin, RAB-4 and RAB-11, were from BD Biosciences. Human U251 glioma, human MCF7 breast carcinoma, and Madin-Darby canine kidney (MDCK) cells were from ATCC (Manassas, VA). All cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum. For MT1-MMP overexpression, MDCK cells were transfected with the pcDNA3.1-zeo vector (mock cells) and with the plasmid bearing human MT1-MMP to overexpress the protease. Control and MT1-MMP-expressing breast carcinoma MCF7 and glioma U251 cells were obtained earlier (18,26). In this work, U251 cells were also transfected with ␣1-antitrypsin Portland (PDX). MCF7 cells were also transfected with the catalytically inert MT1-MMP-E240A construct and the internalization-deficient, tailless MT1-MMP-⌬CT construct. MCF7 cells were also transfected with MT1-MMP tagged with a FLAG tag. To avoid interference with the trafficking of MT1-MMP, the FLAG tag was inserted into the hinge region of the protease. Peptide cleavage and the mass spectrometry analysis of the digest were performed as described earlier (27). All of the buffer solutions used for the preparation of cell lysates and for the isolation of centrosomes were supplemented with a protease inhibitor mixture (pepstatin, leupeptin, bestatin, aprotinin, E-64) and additionally with phenylmethylsulfonyl fluoride and EDTA (1 mM each). MT1-MMP Small Interfering (si)RNA Constructs-The MT1-MMP siRNA target sequence was designed by using the siRNA Designer software (Promega). From six tested sequences, the sequence 5Ј-GAAGCCUGGCUACAGCAAUAU-3Ј repressed the expression of MT1-MMP most efficiently. The 5Ј-GGUCCAUGCUGCAGAAAAACU-3Ј scrambled RNA sequence was used as a control in our studies. Both sequences were cloned into the psiLentGene vector (Promega) and used to transfect U251 cells. Transfected cells were selected and cloned in the medium supplemented with 2 g/ml puromycin. The level of expression of MT1-MMP in the clones was determined by Western blotting. Isolation of Centrosomes-Centrosomes were isolated from nocodazole-synchronized metaphase U251 cells (25). Mitotic cells were harvested by mitotic shake off and lysed in 1 mM Tris-HCl, pH 8.0, containing 0.5% Igepal. Cell lysates were spun at 1500 ϫ g to separate the nuclei and cell fragments. The supernatant fractions were filtered through a nylon mesh (70-m pore size) and centrifuged on a 20% w/w Ficoll-400 cushion at 12,000 rpm for 30 min. The crude centrosomal fraction localized at the Ficoll-water interface was collected and further purified by a 40 -80% sucrose gradient centrifugation at 30,000 rpm for 2 h. MMP-2 Activation Assays-The ability of cellular MT1-MMP to activate proMMP-2 was demonstrated by gelatin zymography. 
For the analysis of centrosomal MT1-MMP, the isolated centrosomes were diluted 1:100 in 25 mM HEPES, pH 7.5. Diluted aliquots were co-incubated for 14 h at 37°C with the purified proMMP-2 (10 ng). The samples were further analyzed by gelatin zymography. Fluorescence-acitvated Cell Sorter Analysis-Cells were detached in trypsin-EDTA, fixed in 70% ethanol, washed in phosphate-buffered saline, and resuspended in a 1% bovine serum albumin, phosphatebuffered saline solution supplemented with 50 g/ml propidium iodide. The DNA content of cells was analyzed on a FACScan flow cytometer. Metaphase Spreads and Chromosome Count-Cells were incubated for 30 min at 37°C with 0.005% ethidium bromide and then with colcemid (50 g/ml) for 2.5 h. Cells were next treated with 0.56% KCl for 15 min and then fixed with Carnoy's fixative. The fixed cells were mounted on glass slides. After 72 h, chromosomes were stained with Giemsa stain and examined on a microscope. Digital images of chromosome spreads were analyzed, and chromosomes were counted in Ͼ100 spreads of each cell line. The Design of the MT1-MMP Chimeras-Using a QuikChange mutagenesis system (Stratagene), the Asp-Tyr-Lys-Asp-Asp-Asp sequence was inserted immediately prior to the Asp 307 -Lys 308 sequence of MT1-MMP. As a result, the final construct exhibited the Asp-Tyr-Lys-Asp-Asp-Asp-Asp-Lys sequence of the FLAG tag in the hinge region of MT1-MMP. To construct MT1-MMP-GFP, the Thr 300 -Ser 301 sequence of the hinge domain of MT1-MMP was modified to insert PacI and BlpI restriction sites. The enhanced GFP sequence (Clontech) flanked at both ends with (Gly) 5 was then inserted into the PacI/BlpI sites of MT1-MMP to generate the MT1-MMP-GFP chimera. MCF7 and U251 cells were stably transfected with the pcDNA3.1-zeo plasmids bearing MT1-MMP-FLAG and MT1-MMP-GFP, respectively. To avoid the aberrant trafficking of the recombinant constructs, the clones expressing low levels of the chimeras were specifically selected and analyzed further. The Analysis of Tumor Biopsies-Frozen samples of colon adenocarcinomas and invasive mammary grade II-III carcinomas and the matched normal tissues were obtained from the NCI Cooperative Human Tissue Network. The homogenized samples were extracted on ice with a radioimmune precipitation assay buffer containing the protease inhibitors. The extract aliquots (60 g of each) were analyzed by immunoblotting with the MT1-MMP Ab815 and pericentrin 4b antibodies. RESULTS AND DISCUSSION Centrosomal MT1-MMP-We examined the subcellular localization of endogenously expressed MT1-MMP in breast carcinoma MCF7 and glioma U251 cells, both of which synthesize MT1-MMP naturally. The level of MT1-MMP in MCF7 cells was, however, very low. U251 cells (Fig. 1a) and MCF7 cells (not shown) demonstrated specific centrosomal MT1-MMP immunoreactivity. The centrosomal association of MT1-MMP was confirmed by using ␥and ␣-tubulin as centrosomal and mitotic spindle markers, respectively. Excess antigen blocked the centrosomal MT1-MMP immunoreactivity (Fig. 1d). Several individual antibodies to MT1-MMP, which were raised against the hinge region and against the catalytic domain, generated similar MT1-MMP immunostaining. The staining of cells with the isotype control was negative. The centrosomal MT1-MMP immunoreactivity was strongly enhanced in the dividing metaphase cells. Overall, only a fraction of MT1-MMP accumulates in centrosomes, whereas the bulk of cellular MT1-MMP is associated with the plasma membrane and the multiple intracellular vesicles (Fig. 
1b). Nocodazole abrogated the association of MT1-MMP with centrosomes in the interphase cells. Nocodazole had no effect on the association of MT1-MMP with centrosomes in the metaphase cells (Fig. 1a). To corroborate further the presence of endogenous MT1-MMP in centrosomes, U251 cells were stably transfected with the siRNA construct (GAAGCCUGGCUACAGCAAUAU). MT1-MMP silencing by siRNA repressed both the expression of cellular MT1-MMP and its centrosomal immunoreactivity (Figs. 1a and 2c). To demonstrate the existence of centrosomal MT1-MMP in transfected cells, we used MT1-MMP chimeras. The use of chimeras allowed us to avoid using MT1-MMP antibodies to confirm the centrosomal localization of the protease. The MT1-MMP-GFP construct was detected via the GFP moiety fluorescence without using antibody staining. The FLAG and the GFP protein sequences were both inserted into the hinge region of MT1-MMP. Following transfection of the cells with the chimeric constructs, MT1-MMP-FLAG and MT1-MMP-GFP were each detected in the centrosomes and co-localized with ␥-tubulin in breast carcinoma MCF7 and glioma U251 cells, respectively (Fig. 1c). The accumulation of the MT1-MMP chimeras in the pericentrosomal space and the partial co-localization with the centrosomes is a result of MT1-MMP overexpression. Evidently, excess MT1-MMP is incapable of fitting into the tight centrosomal compartment. To further corroborate the presence of MT1-MMP in the centrosomes, we isolated centrosomes from the synchronized metaphase U251 cells and determined that MT1-MMP cofractionates with ␥-tubulin (Fig. 2a). The concentration of MT1-MMP in the cytoplasm fraction was significantly lower than that in the centrosomes and that is why the cytoplasm fractions did not demonstrate observable amounts of the protease. In contrast, the centrosome samples were free of MMP-2 (a soluble proteinase and a target of MT1-MMP activation) (Fig. 2b) and a plasma membrane marker CD44 (not shown) suggesting the lack of contamination by plasma membrane or transport vesicles. It is not surprising that MT1-MMP traverses and partially accumulates in the pericentrosomal area, because the microtubule cytoskeleton is essential for the nocodazole-sensitive trafficking of MT1-MMP (28,29). Centrosomes are the microtubule-organizing centers, which play a key role in rapid protein trafficking. Proteins, e.g. caveolin, have been shown to travel from the perinuclear space to the plasma membrane and back using the tubulin cytoskeleton as "railroad tracks" (29,30). Our experiments have led us to the discovery that the microtubulin cytoskeleton and the centrosomes (the microtubulin cytoskeleton-organizing centers) are essential for the trafficking and the internalization of MT1-MMP and that MT1-MMP is trafficked to the pericentrosomal space most probably in the endosome-like vehicles. An analysis of the cells showed the existence of MT1-MMP-positive vesicles localized alongside the tubulin cytoskeleton (Fig. 2d). RAB-4 and RAB-11 (the markers of late/recycling endosomes and pericentrosomal/recycling endosomes, respectively) (31) co-localize with MT1-MMP, suggesting its endosomal nature (29, 32) (Fig. 2, e and f). To examine the intracellular trafficking of MT1-MMP, we used a newly developed non-covalent protein delivery Chariot reagent (33). This non-covalent reagent allows the delivery of proteins, including antibodies, to the inside of the cell compartment. 
Following the penetration through the cell membrane, the delivered Chariot-antibody complex dissociated inside the cell compartment and liberated the antibody. The liberated, functional antibody then diffused throughout the cell and interacted with the target protein and, thus, allowed the identification of the subcellular compartment that harbors the target protein. The transduction of cells with the antibodies to MT1-MMP, by using a Chariot reagent, as well as the uptake of the MT1-MMP antibody by cells (29) also confirmed the microtubular transport of vesicular MT1-MMP to the centrosomes (not shown). The most recent publication (34) confirms the endosomal nature and the microtubular intracellular trafficking of metalloproteinases such as MMP-2 and MMP-9. These results provide indirect support for the data presented in our manuscript. Taken together, our data suggest that the tubulin cytoskeleton is involved in the rapid, vesicular MT1-MMP trafficking. MT1-MMP Targets the Centrosome Proteome-Centrosomes play a central role in the organization of the tubulin cytoskeleton and microtubule nucleation by the ␥-tubulin ring complex (24,35,36). They regulate the mitotic spindle during cell division and provide sister chromatid disjunction (37). Centrosomal MT1-MMP is proteolytically potent, and therefore, it may attack the centrosomal targets. Knowing the identity of these targets is of great importance to a more complete understanding of the tumorigenic function of MT1-MMP. In our earlier work, we identified the cleavage preferences of MT1-MMP through the proteolysis of protein substrates and the substrate phage libraries (27). We determined that the Pro-X-X-2-X Hydrophobic collagen-like cleavage motif is not ideally selective for MT1-MMP because this motif is recognized by several other individual MMPs. Highly selective MT1-MMP substrates lack the characteristic Pro at the P3 position; they contain, instead, an Arg at the P4 position (27). This P4 Arg is essential for efficient hydrolysis and for selectivity for MT1-MMP (38). MT1-MMP appears to recognize cleavage substrates in two distinct modes, using contacts at the P3 and the P1Ј to recognize less selective substrates and using contacts at the P4 and the P1Ј to recognize highly selective substrates (27). We used these data to construct a probabilistic cleavage profile of MT1-MMP using a system for the prediction of protease specificity (PoPS) (39). Using a conventional set of parameters such as charge, polarity, and size, the phage library data for the P4 -P1Ј positions were used to produce a position specific scoring matrix on a scale of Ϫ5.0 to ϩ5.0, as required by PoPS. The matrix contained a strong preference for Arg at P4 and excluded non-hydrophobic residues from the P1Ј position. The matrix was also biased against collagen-like cleavage sites by excluding Pro from the P4 position. Lastly, the matrix was weighted in favor of the P4 and P1Ј positions. To filter these predictions further, the programs PSIPRED (40) and NCOILS (41) (integrated in the PoPS system) were used to predict secondary structure and to search for sites that were located in regions of low structure. PoPS was then used to search for the presence of this profile in the human proteome (Ͼ25,000 proteins) and in the centrosomal proteome consisting of 114 proteins (42). This analysis returned a score for each identified site, based on the weighted matrix. The analysis revealed 111 top scoring hits in the human proteome. 
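As an illustration of the position-specific scoring idea described above, the sketch below scores candidate P4-P1' windows of a sequence with a toy matrix. The weights, residue sets, and helper names are placeholders chosen for this example; they are not the published PoPS matrix, and the secondary-structure filtering (PSIPRED/NCOILS) is omitted:

```python
# Minimal illustration of scoring candidate cleavage sites with a position-specific
# scoring matrix (PSSM) over the P4-P1' positions. The weights below are invented
# placeholders, NOT the published matrix.

PLACEHOLDER_PSSM = {
    "P4":      {"R": 5.0, "K": 1.0, "P": -5.0},  # strong preference for Arg, bias against Pro
    "P3":      {},                               # no preference encoded in this toy example
    "P2":      {},
    "P1":      {"G": 1.0, "A": 0.5},
    "P1prime": {"L": 4.0, "I": 3.5, "M": 3.0, "F": 3.0, "V": 3.0},  # hydrophobic residues
}
POSITIONS = ["P4", "P3", "P2", "P1", "P1prime"]

def score_window(window: str) -> float:
    """Score a 5-residue window P4 P3 P2 P1 P1'; unlisted residues score 0."""
    return sum(PLACEHOLDER_PSSM[pos].get(res, 0.0) for pos, res in zip(POSITIONS, window))

def scan(sequence: str):
    """Return (index of the P1' residue, window, score) for every 5-residue window."""
    hits = []
    for i in range(len(sequence) - 4):
        window = sequence[i:i + 5]
        hits.append((i + 4, window, score_window(window)))
    return sorted(hits, key=lambda h: h[2], reverse=True)

if __name__ == "__main__":
    # Toy sequence containing an Arg-at-P4 / Leu-at-P1' site (illustrative only).
    for pos, window, s in scan("MAARVLGLETQQP")[:3]:
        print(pos, window, s)
```

In this toy setup an Arg at P4 combined with a hydrophobic residue at P1' dominates the score, mirroring the selectivity argument made above.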
A significant fraction of known MT1-MMP cleavage targets, including tissue transglutaminase, fibronectin, vitronectin, the low density lipoprotein receptor-related protein LRP, and the complement component C3 (43-48), were in this group. The subset of centrosomal proteins was significantly enriched in the high scoring, MT1-MMP-sensitive hits compared with the whole human proteome; ~14% (total of 16) centrosomal proteins have the highest scores of 56-58 (60 is the highest possible score in PoPS), compared with ~2.4% in the same score group of the entire proteome. Of the 111 human top scoring proteins, three proteins are of centrosomal origin. Fig. 3 shows the number of the known centrosomal proteins that were assigned the "MT1-MMP cleavage score" according to PoPS. One of the three top-scoring targets was the integral centrosomal protein, pericentrin (PoPS score = 58). Two other top-scoring targets were centrosomal Nek-2-associated protein 1 and a protein with an unknown function, KIAA1731. Overall, our in silico analyses suggest that centrosomes, relative to the total human proteome, are strongly enriched in the MT1-MMP cleavage targets and that the cleavage of the centrosomal proteins is an important proteolytic function of MT1-MMP. Pericentrin Is an MT1-MMP Cleavage Target-Pericentrins 1 and 2, which are the splice variants of the same chromosomal gene (GenBank PCN2_HUMAN), are integral and essential centrosomal proteins (49). Pericentrin directly binds γ-tubulin and anchors the γ-tubulin-containing ring complexes to the centrosomes (50). Pericentrin silencing and mutations interfere with normal spindle formation and γ-tubulin localization in the centrosomes and result in G2 cell-cycle arrest, chromosome instability, and mitotic spindle aberrations (25,36). The proteolyzed pericentrin was routinely observed in tumor cell lines (24,25,36). No individual proteases capable of cleaving pericentrin, however, have been identified so far. Inhibitors of serine and aspartic proteases as well as the specific inhibitors of calpain and caspases and proteasome inhibitors failed to inhibit the proteolysis of cellular pericentrin. To assess whether pericentrin is susceptible to cleavage by MT1-MMP and to confirm our computer predictions, we synthesized the 10-mer peptides derived from the putative cleavage sites of pericentrin. The peptides were subjected to cleavage by the individual catalytic domain of MT1-MMP at a 1:1000 enzyme:substrate ratio. Mass spectrometry was used to determine the mass of the cleavage products and the localization of the scissile bond (Fig. 4a). The A42A peptide (SGAIGF↓LRTA), which is highly sensitive to MT1-MMP (27), was used as a control. GM6001 fully blocked the cleavage of the A42A peptide, thus confirming the absence of contaminating proteases in the MT1-MMP samples. From 12 tested peptides, only the pericentrin peptides bearing the predicted ALRRLLG1156↓L1157FG and RAARVLG672↓L673ET cleavage sites were susceptible to MT1-MMP. We examined further the ability of MT1-MMP to cleave pericentrin in the purified centrosome sample in vitro. Because the antibody M8 to the N-terminal portion of pericentrin was used, the C-terminal cleavage fragments were not observable in this experiment. In turn, γ-tubulin was unaffected by this treatment (Fig. 4b). These data argue that centrosomal pericentrin is a likely target of MT1-MMP proteolysis in vivo. (A displaced fragment of the Fig. 4 legend appears at this point in the source; it describes panels f-h: immunoblots showing that MT1-MMP siRNA silencing and PDX rescue cellular pericentrin in glioma U251 cells, active MT1-MMP and a pericentrin cleavage fragment in mammary carcinoma biopsies compared with matched normal tissue, and a scheme of the pericentrin cleavage pattern with the binding sites of the M8 and 4b antibodies, which recognize the N-terminal and C-terminal portions of pericentrin, respectively.) To confirm the MT1-MMP cleavage of pericentrin in the cell system, we analyzed MT1-MMP-transfected and mock-transfected breast carcinoma MCF7 and glioma U251 cells. U251 cells naturally synthesize MMP-2 that can be activated given that MT1-MMP activity is increased in the cells because of transfection with the MT1-MMP cDNA. Mock cells, which were transfected with the empty vector, synthesize MT1-MMP naturally, whereas MT1-MMP-transfected cells overexpress the protease. We also analyzed U251 cells which express the MT1-MMP siRNA or PDX alone or co-express PDX with MT1-MMP. PDX is a potent inhibitor of the proprotein convertases that activate the latent MT1-MMP zymogen (53). As a result, U251 cells transfected with PDX alone exhibited only the latent, naturally synthesized zymogen of MT1-MMP and were incapable of activating MMP-2 (Fig. 4c). Cells transfected with MT1-MMP alone exhibited significant levels of the mature MT1-MMP enzyme. In U251 cells transfected with both MT1-MMP and PDX, the latter significantly, albeit incompletely, repressed both the activation of overexpressed MT1-MMP and its ability to activate exogenous proMMP-2. An immunoblotting analysis demonstrated a direct correlation of MT1-MMP activity with the proteolysis of pericentrin (Fig. 4c). In mock glioma cells, which naturally express MT1-MMP, pericentrin was predominantly represented by the intact 220-kDa species (25,54), and the 200- and 150-kDa degradation fragments. We conclude from these data that the observed limited cleavage of pericentrin is a function of endogenously expressed MT1-MMP rather than MT1-MMP overexpression. In cells overexpressing active MT1-MMP, intact pericentrin disappears, thus confirming the function of MT1-MMP in the cleavage of pericentrin. In turn, the glioma PDX cells, with latent MT1-MMP, exhibit intact pericentrin. The molecular weight of the 150-kDa degradation fragment correlates well with cleavage of pericentrin by MT1-MMP at the ALRRLLG1156↓L1157FG site (numbering is given according to pericentrin 2). In these experiments we used the pericentrin antibody 4b that is directed to the C-terminal portion of the protein and that, therefore, recognizes the C-terminal 150-kDa cleavage fragment. In agreement with the MT1-MMP proteolysis of pericentrin observed in glioma cells, intact pericentrin was not found in MT1-MMP-overexpressing breast carcinoma MCF7 cells (Fig. 4d). To the contrary, the expression of the internalization-deficient, tailless MT1-MMP-ΔCT mutant (Fig.
4e), which is not delivered to the centrosomes, or the catalytically inert MT1-MMP-E240A construct (the Ala substitutes for an essential active site Glu 240 ) rescued pericentrin from the proteolysis in MCF7 cells (Fig. 4d). Similar to PDX, the MT1-MMP siRNAsilencing rescued pericentrin from MT1-MMP cleavage in U251 cells (Fig. 4f). To confirm our hypothesis that MT1-MMP causes proteolysis of pericentrin, we examined invasive mammary carcinoma, colon adenocarcinoma biopsies, and matching normal tissues. The samples were extracted with a radioimmune precipitation assay buffer containing the protease inhibitor mixture, phenylmethylsulfonyl fluoride and EDTA. MT1-MMP and pericentrin were each assessed by immunoblotting of the extracts. The intact Ϸ220-kDa pericentrin was found in the normal tissues. In contrast, the 150-kDa degradation fragment of pericentrin was found in mammary carcinoma and colon carcinoma biopsies. In colon carcinoma samples (not shown) the pattern of pericentrin was similar to that observed in breast cancer biopsies (Fig. 4g). The presence of proteolyzed pericentrin in tumor biopsies correlated with the presence of the 45-kDa form of MT1-MMP, which is indicative of MT1-MMP self-proteolysis and, consequently, the protease activity. The pattern of pericentrin cleavage and the positions of the pericentrin antibody binding sites are summarized in Fig. 4h. Overall, our data suggest that pericentrin is the cleavage target of MT1-MMP in vivo. MT1-MMP proteolysis of pericentrin, however, is limited and results in the generation of the 150-kDa degradation fragment, which is associated, as well as intact pericentrin, with the centrosomes. Additional studies are required to identify the function of the pericentrin fragment in malignancy. Consistent with our data, pericentrin also interacts with the cation channel polycystin-2 membrane protein (55), thereby providing evidence of the interactions between membrane and centrosomal proteins. Conversely, interactions of pericentrin with polycystin-2 provide a rationale for the similar interactions of pericentrin with MT1-MMP. The most recent data suggest that a vesicular form of pericentrin also exists in the cells and that vesicular pericentrin could be, in fact, the target of MT1-MMP proteolysis. 2 On the other hand, MT1-MMP is known to autolytically shed its highly potent ectodomain, which could be the major soluble form of intracellular MT1-MMP (56) following the release of the endosomal cargo. It is highly likely that pericentrin is not a singular intracellular target of MT1-MMP. Our additional proteomics study of the centrosome proteome (ϳ400 individual proteins in glioma U251 cells) demonstrated that ϳ30 centrosomal proteins represent potential targets of MT1-MMP because they distinguish the cells in which MT1-MMP was silenced by siRNA from the cells in which MT1-MMP was overexpressed. The identification of these putative centrosomal targets of MT1-MMP by mass spectrometry analyses of the tryptic digest fragments is currently in progress. MT1-MMP Induces Chromosome Instability-To test the hypothesis of whether MT1-MMP causes aberrations in genome inheritance, MDCK epithelial cells were transfected with human MT1-MMP. Tumor cell lines, including U251 and MCF7, demonstrate preexisting chromosome instability and multiple spindle aberrations and, therefore, cannot be used for the identification of MT1-MMP-induced chromatin aberrations. 
We selected MDCK cells because the conditional expression of human MT1-MMP is, by itself, sufficient to confer tumorigenicity on these non-malignant epithelial cells and to cause the formation of invasive tumors (10). From numerous stably transfected MDCK clones, we selected clones number 5 (MT#5) and number 6 (MT#6) with the high and the low expression of MT1-MMP, respectively, for the analysis (Fig. 5, a and b). As a control we used MDCK cells transfected with the empty vector (mock). The MT#6 clone demonstrated the centrosomal MT1-MMP immunoreactivity (Fig. 5c). Similar immunoreactivity of MT1-MMP was determined in the MT#5 clone. As expected, pericentrin was strongly degraded in both the MT#5 and MT#6 clones (not shown). As detected by fluorescence-activated cell sorting, the total DNA content was increased in MT#6 and markedly so in MT#5 cells at 2 months following transfection (Fig. 5d). In contrast, the total DNA content in MDCK cells expressing the tailless, internalization-deficient MT1-MMP-⌬CT construct was close to that in mock cells. We also identified the number of chromosomes in the cells. There was a direct correlation between the MT1-MMP expression and the DNA content/aneuploidy (Fig. 5, a, b, and d). Mock cells contained 80.2 Ϯ 0.87 chromosomes with a 10% aneuploid frequency. In the MT1-MMP-transfected cells both of these figures were significantly higher (89.1 Ϯ 2.1 chromosomes/27% aneuploidy in MT#6 cells, and 100.3 Ϯ 2.9 chromosomes/48% aneuploidy in MT#5 cells). We inferred that MT1-MMP induced aneuploidy in MDCK cells in a dose-dependent manner. Immunofluorescent staining revealed numerous aberrations of the mitotic spindle in metaphase MT#5 cells (Fig. 5e). We concluded, therefore, that MT1-MMP enhances chromosome instability in MDCK cells. These data are consistent with the enhanced tumorigenesis observed in the MT1-MMP-expressing MDCK xenografts in immunodeficient mice (10). The aberrant functionality of centrosomes correlates with chromosome instability, a predictor of carcinogenesis (57)(58)(59)(60)(61). Cells with multiple centrosomes tend to form multipolar spindles, which result in abnormal chromosome segregation during mitosis (57,(62)(63)(64)(65). It has been postulated that centrosome aberration may compromise the fidelity of cell division and cause chromosome instability. The acquisition of genomic instability is a crucial step in the development of human cancer (66). The ubiquity of aneuploidy in human cancers, particularly in solid tumors, suggests a fundamental link between errors in chromosome segregation and tumorigenesis. The observed aneuploidy in MT1-MMP-expressing cells suggests the presence of a novel, previously uncharacterized proteolytic pathway to chromatin instability. It is also highly likely that cellular proteases exhibit the additional, previously unexpected, functions in mitosis. Thus, activation of -calpain during mitosis is required for cells to establish the chromosome alignment, suggesting that this protease is also involved in the cleavage of certain centrosomal proteins (67). Consistent with our hypothesis, MMP-2 is present and functions in the nucleus of cardiac myocytes (68). It is premature to extrapolate our data to other members of the MT1-MMP family. We suspect, however, that MT2-MMP and MT3-MMP, similar to MT1-MMP, are likely to be found in the centrosomes and to function in the pericentrosomal compart-ment. It appears also that pericentrin is not a single intracellular target of MT1-MMP. 
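For readers who want to reproduce the kind of summary reported above (mean chromosome number and aneuploidy frequency per cell line), a minimal sketch follows. The counts are invented, and the aneuploidy criterion (any deviation from the modal chromosome number) is our assumption, since the cutoff used in the study is not stated:

```python
# Illustrative summary of metaphase-spread chromosome counts of the kind reported above.
# The example counts are invented; 100 spreads are used in this toy example.
from statistics import mean, stdev, mode

def summarize(counts):
    modal = mode(counts)
    aneuploid = sum(1 for c in counts if c != modal)
    return {
        "n_spreads": len(counts),
        "mean": round(mean(counts), 1),
        "sem": round(stdev(counts) / len(counts) ** 0.5, 2),
        "modal": modal,
        "aneuploid_fraction": round(aneuploid / len(counts), 2),
    }

if __name__ == "__main__":
    example_counts = [80] * 90 + [78, 79, 81, 82, 84, 76, 83, 85, 77, 86]
    print(summarize(example_counts))
```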
Additional targets of MT1-MMP proteolysis have already been detected, and an effort to determine their identity is currently in progress. Overall, we suggest that there is a causal link between MT1-MMP, pericentrin proteolysis, and chromosome instability. We also suggest that an intracellular proteolytic function of MT1-MMP is an important element in the transition of cells from normalcy to malignancy and that this novel function elucidates the close association of MT1-MMP with malignant transformation and cancer.
2018-04-03T00:51:42.066Z
2005-07-01T00:00:00.000
{ "year": 2005, "sha1": "dfb6375f22e285418385bd9de7a9ca3c7efe1221", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/26/25079.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "30b25268f5cfe89c27bbf18268c6cdb5093520ab", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
249534389
pes2o/s2orc
v3-fos-license
Reproducibility Study of the Thermoplastic Resin Transfer Molding Process for Glass Fiber Reinforced Polyamide 6 Composites Polyamide 6 (PA6) thermoplastic composites have higher recyclability potential when compared to conventional thermoset composites. A disruptive liquid molding manufacturing technology named Thermoplastic Resin Transfer Molding (T-RTM) can be used for processing composites due to the low viscosity of the monomers and additives. In this process, polymerization, crystallization and shrinkage occur almost at the same time. If these phenomena are not controlled, they can compromise the reproducibility and homogeneity of the parts. This work studied the influence of packing pressure, as a process variable, throughout the filling and polymerization stages. To assess the process reproducibility and parts’ homogeneity, physical, thermal and mechanical properties were analyzed in different areas of neat PA6 and composite parts. This study showed that a two-stage packing pressure can be successfully used to increase parts’ homogeneity and process reproducibility. The use of 3.5 bar packing pressure during the polymerization stage resulted in mechanical properties with lower standard deviations, indicating a higher degree of homogeneity of the manufactured parts and higher process reproducibility. These results will be used for establishing the actual state of the technology and will be a base for future process optimization. Introduction Climate changes are one of the main adversities of the XXI century with serious consequences for biodiversity preservation and available resources. In 2019, road transport was responsible for 12% of greenhouse gas emissions from fuel combustion. Aware of this situation, in the last years, the automotive industry has been developing new solutions for the reduction of gaseous emissions, driven by strict regulations and directives issued by countries or international organizations [1,2]. Lightweight design is a strategy followed by the automotive industry to reduce CO 2 emissions on road transport. With the ongoing transition towards electric mobility, weight reduction can also contribute to enhancing the range of an electric vehicle. One of the heaviest parts of an automobile is typically its body-in-white (BiW) structure. Steel has been the most common material used on automotive structural parts due to its mechanical performance; however, it contributes significantly to the final weight of the car. The weight percentage of the BiW structure in a conventional car is typically around 20%; thus, it is essential to develop a new generation of lightweight materials and processing techniques, ensuring the required mechanical behavior [3][4][5][6][7][8]. By the analysis of the state-of-the-art, it can be inferred that the studies addressing the robustness of the T-RTM process towards its application in an industrial context are still incipient. For establishing a reliable process, it is important to understand the effect of processing parameters in the final part properties. This research analyzes the influence of packing pressure, throughout the filling and polymerization stages, on the homogeneity of the parts and the process reproducibility. The effect of this process parameter was evaluated through physical, thermal and mechanical analysis. T-RTM Prototype Equipment and Processing Neat PA6 and composite parts were produced in T-RTM prototype lab equipment (ESAN, University of Aveiro), according to Figure 2. 
The equipment was placed in a controlled lab environment with a temperature of 26 ± 3 °C and a relative humidity of ≤45%. Four plies of Saertex X-E-573 g/m²-1270 mm biaxial (−45°/+45°) non-crimp glass fiber (GF) fabric (SAERTEX GmbH & Co. KG, Saerbeck, Germany) were used as reinforcements. The first processual methodology stage was the inertization of the equipment with nitrogen gas. The CL, C1 and C20P were inserted into a container and heated for 15 min at 95 °C. The raw materials mixture was made by mechanical stirring at 250 rpm for 10 min. Before the injection phase, the molten material was transferred through a polyvinyl chloride inlet hose to the mold cavity. A single cavity mold was designed to produce plate-shaped parts with a 120 × 75 × 1.8 mm³ cavity (Figure 3). The cavity had a U-shape geometry for woven fiber clamping. The mold was heated using electrical cartridge resistors. Due to the low resin viscosity and to allow a vacuum, the mold was sealed using double O-rings. For the composites' production, 4 GF layers with −45°/+45° layout were placed inside of the mold cavity for approximately 50% of fiber volume. To fill the resin into the mold, the injection stage occurred under a pressure of 2 bar using nitrogen gas for 30 s. The mold cavity pressure was set at 0.15 bar, and the setpoint temperature of the mold was 160 °C. Preliminary injection tests were performed to set suitable ranges for the injection pressures. The polymerization time was 30 min for all parts.
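For bookkeeping, the processing window described above can be captured in a small configuration record. The field names and the dataclass are ours; the values are those given in the text and are illustrative rather than a definitive specification of the process:

```python
# Compact, illustrative record of the T-RTM processing window described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class TRTMRecipe:
    premix_temp_c: float = 95.0         # raw-material heating, 15 min
    stirring_rpm: int = 250             # mechanical stirring, 10 min
    injection_pressure_bar: float = 2.0
    injection_time_s: float = 30.0
    cavity_vacuum_bar: float = 0.15
    mold_temp_c: float = 160.0
    polymerization_time_min: float = 30.0
    fiber_volume_target: float = 0.50   # ~50% fiber volume with 4 GF layers

baseline = TRTMRecipe()
print(baseline)
```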
During polymerization, one or two packing pressures were applied. The first packing pressure stage was set at an inlet mold pressure of 3 bar for two minutes. An optional second stage was implemented to achieve a more efficient control of material backflow, shrinkage and void formation. This stage consisted of the application of 3.5 or 6.0 bar in the mold outlet until the end of the polymerization time. The produced part was then demolded. Table 1 displays the nomenclature assigned to the parts based on the six processing conditions used. For each manufacturing condition, five parts were produced. Materials Characterization The mechanical behavior, monomer conversion degree, density, fiber volume percentage and void content were assessed from different areas of each part, according to Figure 4. A representative image of the neat polymer and composite specimens, obtained after cutting the part, is depicted in Figure 5. Mechanical Analysis Tensile tests were carried out on a Shimadzu Autograph AG-IS 10 kN universal testing machine (Shimadzu Corporation, Kyoto, Japan) at ambient temperature. Tensile tests applied to neat polymers were performed according to ISO 527-2 standard (Type 1BA) at a constant speed of 1 mm/min. Composite specimens tests were performed according to ISO 527-4 standard and conducted at a constant speed of 2 mm/min, with dimensions of 70 × 7 × 1.8 mm³ and a gauge length of 14 mm.
Young's modulus was calculated using a video extensometer in order to measure the gauge length elongation of the specimens. Figure 6 summarizes the method for assessing the reproducibility and homogeneity of the parts through the studied properties: yield strength (σy), maximum tensile strength (σM) and Young's modulus. Average standard deviations of the studied properties measured in all areas (sA), used to evaluate the part-to-part reproducibility [24] of the process, were calculated using Equations (1) and (2) [25], in which Ai is the area number, NA is the total number of areas, Pi is the part number, PartPi is the studied property of part Pi (in area Ai), sAi is the standard deviation of the studied property measured in area Ai, NP is the total number of parts and uAi is the average studied property measured in area Ai. Average standard deviations of the properties measured in all plates (sP), used to evaluate the parts' homogeneity, were calculated using Equations (3) and (4) [25], in which sPi is the standard deviation of the property measured in part Pi, areaAi is the property measured in area Ai (in part Pi) and uPi is the average property measured in part Pi. Monomer Conversion Degree The monomer conversion degree was obtained from thermogravimetric analysis (TGA). The tests were conducted on Hitachi STA300 equipment (Hitachi, Ltd, Ibaraki, Japan) at a 10 °C/min heating rate, between 25 °C and 550 °C, in open aluminium pans. An inert atmosphere was set by a 200 mL/min nitrogen gas flow. The conversion degree was calculated according to Equation (5) [26], where wl100°C is the sample weight loss at 100 °C, wl240°C is the sample weight loss at 240 °C and winitial is the sample initial weight.
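A minimal sketch of the reproducibility (sA) and homogeneity (sP) statistics defined earlier in this section is given below, assuming sample standard deviations; the exact normalization of Equations (1)-(4) in the published paper is not reproduced here, and the synthetic data are for illustration only:

```python
# sA averages, over areas, the standard deviation of a property across parts;
# sP averages, over parts, the standard deviation of the property across areas.
# Using the sample standard deviation (ddof=1) is our assumption.
import numpy as np

def reproducibility_sA(values: np.ndarray) -> float:
    """values[p, a] = property of part p measured in area a."""
    return float(np.mean(np.std(values, axis=0, ddof=1)))  # std across parts, mean over areas

def homogeneity_sP(values: np.ndarray) -> float:
    return float(np.mean(np.std(values, axis=1, ddof=1)))  # std across areas, mean over parts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 5 parts x 6 areas of synthetic yield-strength data (MPa), illustration only
    sigma_y = 60 + rng.normal(0, 2, size=(5, 6))
    print(f"sA = {reproducibility_sA(sigma_y):.2f} MPa, sP = {homogeneity_sP(sigma_y):.2f} MPa")
```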
Density, Fiber Volume Content and Void Volume Content The density of the specimens was obtained based on the ISO 1183 standard, by immersion method in water, through an Ohaus Adventurer AX224 analytic balance (Ohaus Corporation, Parsippany, NJ, USA), with its respective density determination kit. In neat polymer parts, the density was measured in the inlet, central and outlet areas ( Figure 3). In composite parts, the density was analyzed in the inlet area. For each analyzed area, the densities of five samples were measured. The fiber volume and void content of the composites were determined based on ISO 7822:1999, Method A standard, known as the burn-off technique. The samples were placed in a Termolab MLM furnace (Termolab, Águeda, Portugal) at room temperature, heated at 10 • C/min until 560 • C and kept at 560 • C for 2 h to eliminate the organic phase. Then, the samples were cooled to room temperature and weighed to determine the fiber weight content (FWC) by the ratio between final weight and initial weight. Fiber volume content (FVC) was calculated according to Equation (6) [3,27]. where ρ measured is the composite density and ρ GF is the density of GF. The VVC for composite parts was determined by Equation (7) [3,27]. where the ρ theoretical is the density estimated considering the FVC and the resin volume content. Composites VVC and morphology were evaluated from polished samples (finished with 9 µm diamond paste) through a Polarized Optical Microscope (POM) and a Scanning Electron Microscope (SEM). The samples were collected from the inlet area of the plates. A reflected-light Nikon Eclipse L150 (Nikon Corporation, Tokyo, Japan) and a Canon EOS 100D (Canon Inc. Tokyo, Japan) digital single lens reflex camera were used for optical micrographs. The SEM micrographs were obtained using a Tescan Vega LMS (Tescan Orsay Holding, a.s., Brno, Czech Republic) with an accelerating voltage of 30 kV. The samples for SEM were sputter coated during 300 s with a layer of gold-palladium. The images were treated to highlight the presence of voids using ImageJ, version 1.51j8. Results and Discussion The mechanical behavior, monomer conversion degree, density and fibers volume content of the neat polymer and composite parts are going to be evaluated in this paper section. Mechanical Analysis PA6 properties depend on its formulation and processing conditions. PA6 yield stress, for instance, can typically range from around 50 to 75 MPa [18,28]. There is usually some variation in the data collected from mechanical tests due to several factors such as the apparatus resolution and calibration and operator measuring errors and some inhomogeneities always exist, even within the same lot of material. As already mentioned, besides the production of parts under the same processing conditions, to assess process reproducibility, this work also studied the differences in the mechanical performance within each part to evaluate the parts' homogeneity. It is intended to optimize the second packing pressure variable also based on this evaluation for neat polymer and composite parts [29]. Neat Polymer Parts The σ y , σ M and Young's modulus for neat polymer parts obtained with different packing pressures are presented in Figure 6. The APA6_3.5 parts achieved higher σ y , σ M and Young's modulus when compared to the parts manufactured without the second packing pressure. The increasing of the packing pressure to 6 bar did not contribute to a further increase in the mechanical properties. 
This behavior can be due to the nitrogen pressure inducing voids in the resin. The APA6_3.5 parts mechanical behavior is on par with the best properties obtained by AROP of CL through T-RTM [18,30]. In Figures 7 and 8, the average values of sP and sA for each processing condition are also visible. Overall, when the second packing pressure is applied, the values of standard deviations tended to decrease. The application of packing pressure throughout the polymerization stage can compact the resin and compensate for its shrinkage. Analyzing each part individually, the values of sP tended to be lower for a packing pressure of 3.5 bar, which can indicate a higher parts' homogeneity. In the specific case of Young's modulus, a similar sP was found. In the evaluation of the neat polymer results between different parts, the sA tended to be lower for 6 bar. Admitting that the reproducibility of the process can be measured by the sA, this result can be a sign of a more reliable process. Composite Parts The σM and Young's modulus of the composites obtained at different packing pressures are shown in Figure 9. Compared to neat polymer parts, composite specimens achieved higher σM and Young's modulus due to the presence of fibers, denoting a transfer of properties from the fibers to the composite. As with neat polymers, the parts produced with 3.5 bar achieved higher σM when compared to the parts without second packing pressure. However, the differences were lower, probably due to the pressure drop caused by the fibers. The increase of the packing pressure to 6 bar also did not lead to a further increase in σM. This behavior can arise from the possibility of void formation due to nitrogen pressure. Although Young's modulus tended to decrease with the increase of second packing pressure, it can be considered that those differences did not have a significant impact on composite parts' behavior. The results are within or slightly above the range of values reported in the literature for GF composites obtained by T-RTM [30,31]. Figure 11 displays a comparison of the standard deviations associated with the reproducibility (sA) and homogeneity (sP) between neat polymers and composite parts for σM and Young's modulus. The results indicate that the parts manufactured with a 3.5 bar second packing pressure promote lower standard deviations.
The composite parts tended to have higher standard deviations, which may be explained by a less effective pressure transmission due to the presence of fibers, during injection/packing stages, and also by differences in the permeability of the fibers. Monomer Conversion Degree According to the literature, it is important to achieve a conversion degree above 95% to avoid fiber-matrix interface lack of adhesion issues [18,30,32]. All the neat polymer and composite samples analyzed had a conversion degree above 95%. Since oxygen and humidity inhibit the polymerization of raw materials, the equipment was previously pressurized with nitrogen gas. It was also essential to perform the trials in a laboratory with a controlled temperature and humidity environment. A relative air humidity above 45% can promote moisture absorption of the raw materials when they are transferred to the container. Humidity affects resin polymerization through the deactivation of the catalytic system (C1), which occurs due to its reaction with water, leading to the formation of secondary products. Since the polymerization kinetics is influenced by temperature, the environment's thermal stability is also important for the manufacturing process [16]. Neat Polymer Parts' Monomer Conversion Degree The neat polymer monomer conversion degree is displayed in Figure 12. The conversion degree was around 98% with standard deviation values below 1% for all the experimental conditions. The reduced differences in conversion degree between the inlet, center and outlet areas and between distinct parts were an indication of suitable parts' homogeneity and process reproducibility. The monomer conversion degree for neat polymer parts was within the range of the studies for AROP of CL by T-RTM [8,9,13]. Composite Parts' Monomer Conversion Degree The composites monomer conversion degree, by part and part area, is shown in Figure 13. For the composite parts, the conversion degree was ~99%, a value slightly higher than that observed in neat polymer. This can be explained by higher thermal stability in the mold cavity promoted by the presence of the reinforcing fibers. In the presence of fibers, less resin is injected into the mold cavity, decreasing the resin's thermal inertia during polymerization. In addition, a higher thermal conductivity of GF, compared to the resin, enables a fast and more balanced heat transfer within the mold cavity.
Neat Polymer Parts' Density and VVC
The densities of the neat polymer, by part and part area, are presented in Figure 14. The results suggest that increasing the second packing pressure can have a negative effect on part density. This is in agreement with the possibility that the nitrogen gas pressure can induce voids in the resin [18]. Although the density differences are not very substantial, even a small variation in the neat polymer density can lead to a significant increase in VVC. Since this effect is enhanced by increasing the nitrogen pressure, it could explain the lower mechanical behavior of the APA6_6.0 parts when compared to the APA6_3.5 parts. The density standard deviation values were below 1% for all the processing conditions. Considering an APA6 theoretical density of 1.16 g/cm³ (for a 100% dense APA6 part) [18,32,34], the lower measured densities can be attributed to the presence of 1 to 1.5% VVC in the neat polymer samples [3,35].
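A minimal worked example of this relation, assuming the VVC of the unreinforced polymer is estimated from the density deficit relative to the fully dense reference value, is given below. Only the 1.16 g/cm³ reference comes from the text; the measured densities are illustrative assumptions.

```python
RHO_THEORETICAL = 1.16  # g/cm^3, fully dense APA6 (reference value from the text)

def void_volume_content(rho_measured: float, rho_theoretical: float = RHO_THEORETICAL) -> float:
    """VVC (%) estimated from the density deficit relative to a fully dense part."""
    return 100.0 * (1.0 - rho_measured / rho_theoretical)

# Illustrative measured densities (g/cm^3), not values from the study:
for rho in (1.148, 1.145, 1.143):
    print(f"rho = {rho:.3f} g/cm^3 -> VVC = {void_volume_content(rho):.1f} %")
```

With these assumed densities, the estimated VVC spans roughly 1.0 to 1.5%, consistent with the range quoted above.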
Composite Parts' Density, FVC and VVC
The densities of the composites, by part, are shown in Figure 15. Compared to neat APA6, the densities of the composites are higher due to the presence of the fibers. The average densities of the composite parts were similar, with standard deviation values ranging from 1 to 2%. The higher standard deviation in the density of the composite parts can be attributed to differences in the fiber volume content (Figure 15a). An increase in fiber volume content tended to lead to a higher part density. The obtained fiber volume content of 55-60% is higher than what is usually reported in the literature for this technology [22,23,27,30,31]. Young's modulus was also affected by the differences in fiber volume content (Figure 15b). For each packing condition, an increase in fiber volume content led to an increase in Young's modulus [36]. This can explain the higher standard deviation values in the composite Young's modulus results.

The burn-off technique was used on composite samples to evaluate the influence of the packing pressure on VVC. The results are summarized in Table 2 and indicate that increasing the packing pressure can increase the VVC of the parts.

Table 2. VVC of the composite parts determined by the burn-off technique.
Parts          VVC (%)
APA6/GF        1.0 ± 0.6
APA6/GF_3.5    1.0 ± 0.5
APA6/GF_6.0    1.8 ± 0.3

The burn-off technique is known for its low accuracy, which can be particularly noticeable in samples with a VVC of up to 2%. This low accuracy is reflected in the standard deviation values presented. Therefore, POM images were used to evaluate the composites' VVC and morphology. Representative POM images for each processing condition are depicted in Figure 16a-c, in which it is possible to see the GF layers as horizontal lines. In the samples obtained using 6 bar of packing pressure it was possible to identify more voids, namely macro voids (Figure 16a-c). In order to highlight the presence of the voids, treated images (ImageJ) are presented in Figure 16d-f.
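VVC values of the kind listed in Table 2 can, in principle, be derived from burn-off data with the usual mass-balance relations (in the style of ASTM D3171): the fiber mass fraction measured after burn-off and the constituent densities give the fiber and matrix volume fractions, and the remaining volume is attributed to voids. The sketch below illustrates this; the assumed glass fiber density of 2.55 g/cm³ and all sample values are illustrative and are not taken from the study.

```python
RHO_FIBER = 2.55     # g/cm^3, assumed E-glass density (illustrative)
RHO_MATRIX = 1.16    # g/cm^3, fully dense APA6 (reference value from the text)

def burn_off_fractions(m_composite_g: float, m_fiber_g: float, rho_composite: float):
    """Return (FVC %, VVC %) from burn-off results via a simple mass balance."""
    w_f = m_fiber_g / m_composite_g                  # fiber mass fraction
    v_f = w_f * rho_composite / RHO_FIBER            # fiber volume fraction
    v_m = (1.0 - w_f) * rho_composite / RHO_MATRIX   # matrix volume fraction
    v_v = 1.0 - v_f - v_m                            # remainder attributed to voids
    return 100.0 * v_f, 100.0 * v_v

# Illustrative sample: 10.00 g of composite leaving 7.40 g of fibers after burn-off,
# with a measured composite density of 1.92 g/cm^3 (fictitious numbers).
fvc, vvc = burn_off_fractions(10.00, 7.40, 1.92)
print(f"FVC = {fvc:.1f} %, VVC = {vvc:.1f} %")  # approx. 55.7 % and 1.2 %
```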
Conclusions and Future Work
The use of a 3.5 bar second packing pressure had a positive influence on the tensile mechanical behavior of APA6. An increase of the packing pressure to 6 bar did not contribute to a further increase in the tensile properties, due to the higher VVC induced by the nitrogen gas. The results suggest that the packing pressure can help to increase process reproducibility in the production of neat polymer parts, particularly when using 6 bar. An intermediate 3.5 bar packing pressure can also contribute to increasing the homogeneity of the APA6 parts. In composite parts, the packing pressures did not contribute to a significant increase in the mechanical properties. This could be due to the pressure barrier created by the fibers and to the increase of the VVC at the higher packing pressure. For composite manufacturing, the results indicated that the use of a 3.5 bar intermediate packing pressure can contribute to increasing the process reproducibility and the homogeneity of the parts. A homogeneous and reproducible monomer conversion was obtained for the neat polymer and for the composite parts, indicating that the experimental procedure was reliable. The densities of the polymer and composite parts suggest that increasing the second packing pressure can have a negative effect on the parts due to nitrogen-induced voids.

In future work, to improve process reproducibility and part homogeneity, an alternative pressure medium should be evaluated in order to avoid nitrogen-induced voids. To improve the mechanical properties of the composites, the use of a higher packing pressure should also be assessed. In situ dielectric monitoring can also be a particularly useful tool to understand the phenomena involved during polymerization and to assess the reaction reproducibility and quality control in real time.

Funding: The authors would like to acknowledge the financial support from the Tech4 T-RTM project. This is a project in collaboration with BTL-Indústria Metalúrgicas S.A. and Simoldes Plásticos, S.A., and is co-financed under Portugal 2020 and the European Regional Development Fund, through COMPETE, under the scope of the project POCI-01-0247-FEDER-047026. This work was also developed within the scope of the project CICECO-Aveiro Institute of Materials, UIDB/50011/2020, UIDP/50011/2020 and LA/P/0006/2020, financed by national funds through the FCT/MCTES (PIDDAC). The project was also supported within the scope of TEMA-Center for Mechanical Technology and Automation, by the projects UIDB/00481/2020 and UIDP/00481/2020 (FCT-Fundação para a Ciencia e a Tecnologia) and CENTRO-01-0145-FEDER-022083 (Centro Portugal Regional Operational Program, Centro2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data is contained within the article.