Incidence, risk factors and clinical outcomes of septic acute renal injury in cancer patients with sepsis admitted to the ICU: A retrospective study

Background
The purpose of this study was to clarify the incidence, risk factors, and clinical outcomes of septic acute kidney injury (AKI) in cancer patients with sepsis admitted to the intensive care unit (ICU).

Methods
A total of 356 cancer patients admitted to the ICU due to sepsis from January 2016 to October 2021 were analyzed retrospectively. According to the occurrence of septic AKI, all patients were divided into the non-AKI group (n = 279) and the AKI group (n = 77). Clinical data after ICU admission were compared between the two groups, and the risk factors and clinical outcomes of septic AKI in the ICU were identified.

Results
The incidence of septic AKI in all patients was 21.6% (77/356). LASSO regression and logistic regression both showed that lactate, sequential organ failure assessment (SOFA) score and septic shock were closely related to the occurrence of septic AKI. In terms of clinical outcomes after ICU admission, the rates of mechanical ventilation (MV) and continuous renal replacement therapy (CRRT), the MV time, the length of hospitalization and the 28-day mortality in the ICU were significantly higher in the septic AKI group than in the non-septic AKI group. Among the three subgroups of septic AKI (AKI combined with septic shock, septic cardiac dysfunction or acute respiratory failure), the mortality of patients in the subgroup of AKI combined with septic shock was significantly higher than in the others. CRRT had no significant effect on the short-term outcome of these patients.

Conclusion
Lactate level, SOFA score and septic shock were closely related to the occurrence of septic AKI in the ICU. The clinical outcomes within 28 days after ICU admission of cancer patients with septic AKI were worse than those of patients without septic AKI. The short-term outcome was worse in patients with septic AKI complicated with septic shock. CRRT does not have any significant effect on the short-term prognosis of cancer patients with septic AKI in the ICU.

Introduction
Acute kidney injury is considered one of the serious comorbidities in critically ill patients. Patients with AKI have higher short-term and long-term mortality, and their use of medical resources is considerably increased. AKI is characterized by a sudden decrease in glomerular filtration rate (GFR), resulting in the accumulation of nitrogenous waste and the inability to maintain the homeostasis of body fluids and electrolytes (1). Although no clear causal relationship between AKI and chronic kidney disease (CKD) has been established, AKI without intervention may increase the risk of CKD (2). Patients with AKI are more likely to suffer accelerated loss of renal function and progression to CKD than patients without AKI, all else being equal (3). CRRT is an effective treatment for AKI, but it does not reduce the long-term mortality of AKI or the risk of CKD (4). Even if AKI patients regain normal kidney function after discharge from the hospital, there is still a risk of adverse kidney events for up to 10 years (5). In addition, a meta-analysis suggests that the duration of AKI is independently related to long-term mortality, cardiovascular events and the development of incident stage 3 CKD (6). Considering the above, AKI should be given full attention and early management. The most common cause of AKI in critically ill patients is sepsis.
Cohort studies indicate that the incidence of septic AKI ranges from 19 to 48%, while the mortality of patients with septic AKI fluctuates from 22 to 70% (7,8). The pathophysiology of septic AKI is still not fully understood. Traditionally, it is believed that septic AKI is mainly caused by global renal ischemia and hypoperfusion, septic endotoxin-mediated cell damage, and renal tubular necrosis (9). However, other studies suggest that septic AKI is a bioenergetic adaptive response of the body to microcirculation dysfunction and inflammation caused by sepsis, which has no significant correlation with the existence of systemic hypoperfusion or the severity of sepsis (10-12).

Abbreviations: ICU, intensive care unit; AKI, acute kidney injury; SOFA, sequential organ failure assessment; MV, mechanical ventilation; CRRT, continuous renal replacement therapy; CKD, chronic kidney disease; GFR, glomerular filtration rate; KDIGO, Kidney Disease: Improving Global Outcomes; SCD, septic cardiac dysfunction; ARF, acute respiratory failure; PCT, procalcitonin; cTnI, cardiac troponin I; BNP, brain natriuretic peptide.

In cancer patients, pathogen invasion emerges on the basis of malignant cell transformation, and the immune system of cancer patients with sepsis often cannot cope with the initial injury. Compared with non-cancer patients, cancer patients with sepsis had a 2.5-fold higher in-hospital mortality rate due to sepsis, and their prognosis is worse (13,14). Therefore, septic AKI may account for a large proportion of cancer patients with sepsis. This retrospective study focused on the specific population of cancer patients with sepsis to clarify the incidence, risk factors and short-term clinical outcomes of septic AKI after ICU admission, in order to guide clinical intervention and judge prognosis.

Participants
The study complies with the Declaration of Helsinki and was approved by the Ethics Committee of Peking University Cancer Hospital (ethics approval number 2020KT33), and all patients provided written informed consent for the treatment of sepsis and related scientific research purposes. A total of 356 cancer patients with sepsis were retrospectively screened out of 3,362 patients admitted to the ICU of Peking University Cancer Hospital from January 2016 to October 2021, according to the inclusion criteria. Inclusion criteria: (1) patients with sepsis aged >18 years; (2) diagnosis satisfying the Sepsis-3.0 definition. Exclusion criteria: (1) CKD stage 3 and above; (2) prior kidney transplantation; (3) incomplete clinical data. All included patients were divided into the non-AKI group (n = 279) and the AKI group (n = 77) according to the occurrence of septic AKI (Figure 1, flow chart of the research scheme). The diagnosis of septic AKI required: (1) clinical judgment that the AKI was positively related to sepsis; (2) AKI meeting the definition and diagnostic criteria of the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) guidelines.

Data collection
Demographic characteristics and baseline data [including sex, age, body mass index (BMI), cancer types, and cancer treatment] were collected, together with clinical outcomes after ICU admission: rates of mechanical ventilation (MV) and continuous renal replacement therapy (CRRT), MV time, length of stay in the ICU, and 28-day mortality in the ICU.

Statistics
SPSS 26.0 (IBM, Armonk, NY) and the R language (version 4.1.2, with packages including "survival", "survminer", "glmnet", and "pROC") were used for statistical analysis. Continuous variables with normal distributions were expressed as means ± SD; otherwise, they were expressed as medians (IQR). Categorical variables were reported as frequencies or percentages (%). Continuous variables with a normal distribution were compared by the unpaired independent t-test, continuous variables with a skewed distribution were compared by the Mann-Whitney U test, and categorical data were compared using the χ2 test or Fisher's exact test. Logistic regression and LASSO regression were utilized to compare and screen out significant risk factors for septic AKI. The number of septic AKI-related variables with non-zero parameters was controlled by adjusting the lambda (λ) value in the LASSO regression; with 1se (the dashed line on the right side of the cross-validation plot) taken as the reference, ten-fold cross-validation was used to obtain the minimum number of variables for the optimal model. The Kaplan-Meier method was used to analyze the short-term clinical prognosis of patients with septic AKI. The ROC curve was used to determine the predictive value of relevant risk factors for septic AKI. Bivariate correlation analysis was applied to compare critical variables. For all tests, a two-tailed P < 0.05 was regarded as statistically significant.
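The variable-screening workflow just described can be sketched in R with the packages the authors name (glmnet, pROC). This is a minimal illustration, not the authors' code: the data frame and column names below are hypothetical placeholders filled with synthetic values, not the study's dataset.

```r
# Minimal sketch of LASSO screening + logistic regression + combined ROC.
# `dat` and all column names are synthetic stand-ins, not the study data.
library(glmnet)
library(pROC)

set.seed(1)
n <- 356
dat <- data.frame(
  lactate = rlnorm(n), sofa = rpois(n, 6), septic_shock = rbinom(n, 1, 0.3),
  pct = rlnorm(n), creatinine = rlnorm(n), bnp = rlnorm(n)
)
dat$aki <- rbinom(n, 1, plogis(-3 + 0.5 * dat$lactate + 0.2 * dat$sofa +
                                 1.0 * dat$septic_shock))

# LASSO with ten-fold cross-validation; lambda.1se (the right-hand dashed
# line in the CV plot) keeps the sparsest model within one standard error
# of the minimum cross-validated error
x <- model.matrix(aki ~ ., data = dat)[, -1]   # predictor matrix, no intercept
cv_fit <- cv.glmnet(x, dat$aki, family = "binomial", alpha = 1, nfolds = 10)
coef(cv_fit, s = "lambda.1se")   # non-zero coefficients = retained variables

# Multivariable logistic regression on the retained variables
glm_fit <- glm(aki ~ lactate + sofa + septic_shock, data = dat,
               family = binomial)
summary(glm_fit)

# Combined ROC curve for the three-variable model
roc_obj <- roc(dat$aki, predict(glm_fit, type = "response"))
auc(roc_obj)
```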
Results
1. Occurrence of sepsis-related AKI among different cancer types: after regrouping, septic patients with retroperitoneal cancers and urinary cancers were more likely to suffer from septic AKI (P = 0.002) (Table 1).
2. Comparison of variables with statistical differences between the two groups: there were significant differences in creatinine, lactate, procalcitonin (PCT), brain natriuretic peptide (BNP), and SOFA scores after ICU admission between the septic AKI group and the non-septic AKI group (P = 0.032, P = 0.002, P = 0.001, P = 0.005, P = 0.001) (Table 2).
3. Comparison of sepsis-related complications between the two groups: sepsis-related complications (septic shock, SCD and ARF) were more likely to occur in the septic AKI group than in the non-septic AKI group (P = 0.001, P = 0.001, P = 0.041) (Table 3).
4. LASSO regression was used to screen the important risk factors of septic AKI: all variables in Tables 1-3 were screened with LASSO regression to avoid overfitting the data and improve accuracy (Figures 2, 3; important variables identified with ten-fold cross-validation). The important variables lactate, SOFA score, septic shock, and PCT were strongly associated with septic AKI.
5. Independent risk factors of septic AKI were screened out by multivariate logistic regression: lactate, SOFA score and septic shock (variables from Tables 1, 3) were closely related to septic AKI, and these three variables were independent risk factors for septic AKI (P = 0.001, P = 0.001, P = 0.009).
6. The combined ROC curve based on these three variables showed predictive value for the occurrence of septic AKI (P = 0.04) (Figure 4, multivariable ROC curve). Bivariate correlation analysis of these three variables showed a positive correlation between septic shock and lactate (P < 0.001, r = 0.330), between septic shock and SOFA score (P < 0.001, r = 0.413), and between lactate and SOFA score (P < 0.001, r = 0.378).
7. Comparison of short-term clinical outcomes between the two groups: patients with septic AKI had higher rates of MV and CRRT, longer MV time and ICU stay time, and higher 28-day mortality in the ICU (P = 0.004, P = 0.001, P = 0.006, P = 0.004, P = 0.001) (Table 5).
8. Comparison of 28-day survival rates in the two groups and in the subgroups of septic AKI: the 28-day survival rate of patients with septic AKI was significantly lower than that of patients without septic AKI within 28 days after ICU admission (P < 0.001) (Figure 5). Among the three subgroups of septic AKI (septic AKI combined with septic shock, septic cardiac dysfunction or acute respiratory failure), the 28-day survival rate of septic AKI combined with septic shock decreased significantly (P = 0.005) (Figure 6); there was no significant difference in the other two subgroups (P = 0.07, P = 0.34) (Figures 7, 8; 61 of 77 patients underwent bedside echocardiography for the SCD comparison).
9. Effects of CRRT treatment on the short-term prognosis of septic AKI patients: according to whether CRRT was performed in the ICU, patients with septic AKI were divided into the CRRT group and the non-CRRT group. There was no significant difference in the 28-day outcome between the two groups; CRRT had no meaningful effect on the short-term prognosis of septic AKI patients (P = 0.19) (Figure 9).
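The survival comparisons above can be reproduced in outline with the survival and survminer packages named in the Statistics section. This is a minimal sketch with a synthetic stand-in data frame, not the study's data.

```r
# Kaplan-Meier comparison of 28-day survival between septic-AKI subgroups;
# `surv_dat` and its columns are synthetic placeholders.
library(survival)
library(survminer)

set.seed(1)
surv_dat <- data.frame(
  time   = pmin(ceiling(rexp(77, 0.05)), 28),  # follow-up (days), censored at 28
  status = rbinom(77, 1, 0.5),                 # 1 = death in the ICU
  shock  = rbinom(77, 1, 0.4)                  # septic-shock subgroup indicator
)

fit <- survfit(Surv(time, status) ~ shock, data = surv_dat)

# Log-rank test for the between-subgroup difference in 28-day survival
survdiff(Surv(time, status) ~ shock, data = surv_dat)

# Kaplan-Meier curves with the log-rank p-value annotated
ggsurvplot(fit, data = surv_dat, pval = TRUE, xlim = c(0, 28),
           xlab = "Days after ICU admission", ylab = "Survival probability")
```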
Discussion
Septic AKI is a life-threatening complication characterized by an abrupt deterioration in renal function, manifested as elevated serum creatinine levels, oliguria, or both, and is closely related to infection or sepsis. Septic AKI is one of the earliest focal manifestations in patients with sepsis. Current estimates suggest that septic AKI affects 10-67% of patients with sepsis (8,15). However, more than two-thirds of patients with septic shock may be complicated with septic AKI (16). For unexplained AKI, the possibility of sepsis should be examined first. Cancer patients are more likely to suffer from sepsis and have a significantly higher mortality rate due to sepsis than non-cancer patients (14). Our study aimed to understand the factors related to septic AKI in cancer patients with sepsis, to serve as a basis for the prevention, treatment and renal function recovery of septic AKI in this population.

We found that there may be a definite relationship between septic AKI and cancer type: the regrouped analysis showed that sepsis patients with retroperitoneal and urinary tumors were more vulnerable to septic AKI, and we analyzed the possible reasons for these two cancer types. The mechanisms linking retroperitoneal and urinary tumors to septic AKI may include the following. Firstly, the tumor may compress or invade the urinary system, causing local or postrenal obstruction and resulting in impaired renal function. Secondly, most patients with retroperitoneal and urinary tumors have undergone surgery, with a risk of low organ perfusion during the operation; some patients may undergo unilateral nephrectomy, and patients may develop abdominal infection, paralytic ileus, or intra-abdominal hypertension after surgery (17). In addition, tumor-related thrombotic microvascular disease and septic coagulation dysfunction may affect the kidneys, resulting in acute kidney damage caused by renal microvascular thrombosis with endothelial swelling and microvascular obstruction (18). All of the above factors may significantly increase the probability of septic AKI in these cancer patients. However, when the two types of cancers were included in the multivariate analysis, they were not found to be closely related to the occurrence of septic AKI.

Our study also concluded, using LASSO regression and logistic regression, that lactate, SOFA score and septic shock were closely related to the occurrence of septic AKI. Serum lactate levels in the septic AKI group were significantly higher than those in the non-septic AKI group. The serum lactate level is a sensitive but non-specific indicator of metabolic stress (19). As a product of anaerobic glycolysis, lactate is markedly elevated in settings of hypoxia, stress, and critical illness (20). Most studies have demonstrated that high lactate levels are significantly positively correlated with sepsis mortality: the higher the lactate level, the worse the prognosis of sepsis (21, 22). Hyperlactatemia is a significant manifestation of increased tissue anaerobic metabolism in patients with sepsis and is regarded as a sensitive marker of systemic or local organ tissue hypoperfusion (23). Based on the above studies, it is reasonable to believe that elevated lactate levels can predict renal hypoperfusion, which may eventually progress to AKI. The SOFA score in the septic AKI group was also significantly higher than that in the non-septic AKI group. The SOFA score is a key component of the third edition of the definition of sepsis: a clinical diagnosis of infection together with an acute change in SOFA score of ≥ 2 points defines sepsis (24). The higher the SOFA score, the more severe the organ dysfunction due to sepsis. In our study, the differences in SOFA score between the two groups were consistent with the short-term prognosis, suggesting that the higher the SOFA score, the more severe the illness and the worse the prognosis. Studies have demonstrated a good correlation between the SOFA score and the lactate level: the higher the SOFA score, the higher the serum lactate level, both of which are signals of increased organ dysfunction and suggest the need for urgent medical intervention (25). We also found that the proportion of patients with septic shock in the septic AKI group was considerably higher than that in the non-septic AKI group. This indicated that septic shock was closely related to the occurrence of septic AKI and was an independent risk factor for it. Septic shock leads to systemic hypotension and hypoperfusion of multiple organs, including the kidneys. In addition, studies have shown that septic shock may lead to dysfunction of the renal vascular bed, leading to a dramatic decrease in GFR and the development of septic AKI (26). Finally, we carried out a bivariate correlation analysis of these three variables, which showed a significant positive correlation among them (P < 0.001). This result shows that patients with septic shock tend to have higher blood lactate levels, and both are positively correlated with the severity of the disease, that is, the SOFA score.
In our study, the rates of MV and CRRT in the septic AKI group were significantly higher than those in the non-septic AKI group, and the MV time and ICU stay time were also significantly prolonged. There was a major difference in 28-day mortality between the two groups, and 28-day mortality was also significantly increased when septic AKI was combined with septic shock. We compared the effect of CRRT on the prognosis of patients with septic AKI: CRRT did not prolong the short-term survival time of patients with septic AKI, and therefore did not improve the short-term prognosis of septic AKI. These conclusions are in agreement with most studies (27, 28).

Our study on septic AKI is of definite clinical significance. Firstly, this study focused on cancer patients with sepsis, and we found that septic cancer patients with retroperitoneal and urinary tumors were more likely to develop septic AKI; both the population studied and this conclusion are uncommon in previous studies. Secondly, we screened out three variables from the intersection of a Venn diagram combining LASSO regression and logistic regression. The combined ROC based on these three variables performed well in predicting the occurrence of septic AKI; it can later be modeled and validated with a larger sample size. If the predictive ability of the model proves reliable, it can be adopted in clinical practice to judge the prognosis of septic AKI at an early stage. Finally, we found that cancer patients with sepsis who also develop septic AKI have poor short-term outcomes and that CRRT cannot effectively improve the prognosis. These findings help us understand the risk factors for septic AKI in cancer patients with sepsis and provide a useful reference for the diagnosis, treatment, and prognosis of septic AKI in cancer patients.

However, our study has its limitations. Firstly, this was a retrospective study, and our data were taken from a single center, so the incidence and severity of septic AKI may be biased. Secondly, for all patients with septic AKI, we focused on the short-term outcomes within 28 days after ICU admission and lacked 90-day or longer follow-up data on cancer patients with sepsis; the lack of data on long-term survival and on the physical and mental health of patients with sepsis is also something to be improved in future research. In addition, in view of the small number of CRRT treatments (24 cases in total), we did not conduct a subgroup analysis but only explored the overall prognostic differences. If patients with septic AKI were graded into subgroups according to the KDIGO criteria and the prognostic value of CRRT were compared in each subgroup, different results might be obtained, which also represents one of the limitations of this study.

Conclusion
Lactate level, SOFA score and septic shock were closely related to the occurrence of septic AKI in the ICU. The clinical outcomes within 28 days after ICU admission of cancer patients with septic AKI were worse than those of patients without septic AKI. The short-term outcome was worse in patients with septic AKI complicated with septic shock. CRRT does not have any significant effect on the short-term prognosis of cancer patients with septic AKI in the ICU.
This study was a preliminary exploration of the incidence, influencing factors and clinical outcomes of septic AKI in cancer patients with sepsis, which has certain guiding significance for the diagnosis, treatment and prognosis of septic AKI.

Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors for a reasonable purpose, without undue reservation.

Ethics statement
The study was approved by the Ethics Committee of Peking University Cancer Hospital, and all patients provided written informed consent for the treatment of sepsis and related scientific research purposes.

Author contributions
YY and JD designed, analyzed, and drafted the manuscript. XC and RC collected and interpreted the patients' data. HW administered and revised the manuscript. All authors read and approved the final manuscript.
Soil C : N stoichiometry controls carbon sink partitioning between above-ground tree biomass and soil organic matter in high fertility forests

Introduction
Forest ecosystems worldwide are currently acting as carbon (C) sinks (Pan et al. 2011). Several factors may, however, influence the magnitude and direction of the net C balance, including recovery from historical land use (e.g., abandoned agricultural land reverting to forested land), increases in atmospheric CO2 concentration and nitrogen (N) deposition, and climate change (Schimel et al. 2001, Thomas et al. 2010). Nonetheless, while much research has been done to understand the controls on net ecosystem C balance (Valentini et al. 2000, Rustad et al. 2001, Reichstein et al. 2007a), we know little about the controls on C sink partitioning between plant biomass and soil organic matter (SOM) pools. Soils may store C for long periods of time (Lal 2005), accumulating on average three times the C in terrestrial vegetation (Post et al. 1982). On the other hand, more N is required per unit of C stored in soil as compared to plant biomass (Yang & Luo 2011). Hence, while an allocation to SOM may increase C sequestration in the long term, a preferential allocation to plant biomass is a more nutrient-efficient C sequestration process in the shorter term. Studying ecosystem C sink partitioning is challenging due to the difficulties associated with quantifying the different ecosystem fluxes. Especially complex is the assessment of rapid and small changes in SOM, which are linked to the balance between microbial respiration and plant inputs, including both litter and root-derived C (Schrumpf et al. 2011). Thus, belowground C allocation and subsequent C dynamics are still far from being accurately quantified and understood (Phillips et al. 2011, Vicca et al. 2012).
Root C inputs have been shown to influence soil C sequestration, but both the magnitude and direction of this root effect are variable (Karlen & Cambardella 1996, Parton et al. 1996, Cardon et al. 2001, Rasse et al. 2005, Dijkstra & Cheng 2007). A robust definition of net ecosystem production (NEP) should be based on a full ecosystem mass balance (Randerson et al. 2002), which accounts for both plant and soil sinks. When it is flux-based, NEP is defined as the difference between the ecosystem-level gross photosynthetic gain of C (gross primary production, GPP) and ecosystem respiratory losses (Reco). Alternatively, NEP (g C m-2 y-1) can be expressed as (Campbell et al. 2004 - eqn. 1):

NEP = ΔCbiomass + ΔCsoil

In deciduous forest ecosystems, ΔCbiomass is the annual change in plant biomass (wood, branches, coarse roots), and ΔCsoil is the annual net change in soil organic C (SOC) stock. In this equation, litterfall and fine root turnover are considered as soil C inputs and therefore contribute to ΔCsoil (see eqn. 2). Net ecosystem productivity can be directly determined using eddy covariance techniques starting from net ecosystem exchange (NEE = -NEP; Baldocchi 2003, Aubinet et al. 2012). Plant biomass changes are usually estimated via a combination of repeated inventories and allometric relationships (Clark et al. 2001). On the other hand, direct SOC determination methods are generally unable to quantify ΔCsoil in the short term (Schrumpf et al. 2011), and, at annual timescales, alternative methods are required to estimate soil C changes. Considering that dissolved organic C (DOC) is typically negligible, representing around 1% of forest NPP, ΔCsoil can also be written as (eqn. 2):

ΔCsoil = Inputlitter + Inputroots - RC-rhizosphere - Rh

where Inputlitter is the above-ground litterfall (i.e., leaves, branches, wood, etc.), Inputroots is the root-derived C input (i.e., exudates, root sloughing and turnover), RC-rhizosphere is the rhizosphere respiration of root-derived C, and Rh is the heterotrophic respiration. Litter input is conventionally measured by litter traps, while wood input is measured using repeated sampling (Harmon & Sexton 1996), and rhizosphere and heterotrophic respiration can be estimated by a variety of methods (e.g., trenching, girdling, isotopes), as reviewed by Subke et al. (2006) and Kuzyakov (2006). The largest challenge is estimating gross root inputs. However, methods exist to estimate the net annual root-derived C input (Net-Croot), which is the difference between Inputroots and RC-rhizosphere (eqn. 3):

Net-Croot = Inputroots - RC-rhizosphere

Different tracer methods have been used to date to estimate Net-Croot, such as pulse labeling, continuous labeling, and 13C natural abundance (Kuzyakov & Domanski 2000). The latter uses the difference in the stable C isotope composition of native SOM and new plant-derived organic matter to quantify Net-Croot. When natural isotope abundances do not allow the use of this approach, distinct C isotope signatures in the soil organic C (SOC) pool and plant-derived organic matter can be obtained in manipulation experiments, by growing C3 plants (δ13C of approximately -27‰) in soil with organic matter derived from C4 plants (δ13C of approximately -12‰), or vice versa. This approach has been successfully applied in pot (Ineson et al. 1995, Vicca et al. 2010) and field studies (Hoosbeek et al. 2004, Cotrufo et al. 2011) and was used in this investigation. Net-Croot, combined with aboveground inputs to the soil (litter and dead wood), also provides interesting information about soil C dynamics.
For soils at steady-state (ΔCsoil = 0), the sum of Net-Croot and aboveground inputs is the amount of C that replaces SOC decomposition, thus becoming a measure of SOC turnover. For soils which are net C sinks (ΔCsoil > 0), this sum exceeds SOC mineralization and a fraction of it enlarges the SOC pool, thus leading to soil C sequestration. In this context, for soils which are net C sinks, the ratio between ΔCsoil and Net-Croot + aboveground inputs indicates the fate of the C input: the higher the ratio, the larger the contribution of fresh C to soil C sequestration. The opposite is true for soils that are net C sources (ΔCsoil < 0). Root C input rates vary considerably depending on tree species, mycorrhizal associations and environmental factors (Lynch & Whipps 1990), with values of up to 40% of net assimilated C being reported (Van Veen et al. 1991). According to the microbial efficiency-mineral stabilization (MEMS) framework (Cotrufo et al. 2013), the fraction of Net-Croot inputs sequestered in the soil depends on the efficiency of decomposers in converting C into bio-products as compared to the amount of C lost as CO2 (Six et al. 2006), and on soil matrix interactions (Kleber et al. 2007). Soil organic matter mineralization is driven by both substrate stoichiometry and microbial demand for resources (Melillo et al. 1982, Hessen et al. 2004): when N is limiting, microbes use labile substrate to mineralize recalcitrant SOM (Moorhead & Sinsabaugh 2006, Craine et al. 2007). Root exudates can thus prime SOM decomposition (Lohnis 1926, Bingeman et al. 1953, Fontaine et al. 2004). Clearly, root-derived soil C inputs can either stimulate soil C sequestration or, conversely, induce priming with consequent losses of stabilized SOM but likely enhancements in N availability, which in turn can stimulate plant growth. The key factors determining the direction (and magnitude) of this effect are, however, not yet clear. Understanding the fate of root-derived C, and its effects on N dynamics and ecosystem C sequestration, is relevant from an ecological perspective and is also an urgent challenge to address, particularly in the context of global changes such as atmospheric CO2 increase and N deposition. The aims of the present study were: (1) to obtain an estimate of Net-Croot in six different forest ecosystems; (2) to partition NEP into aboveground tree biomass production and soil C sinks; and (3) to investigate the controls of this partitioning. Specifically, we tested the hypothesis that soil C:N stoichiometry controls ecosystem C uptake (GPP) and sink partitioning (ANPP vs. soil C) across forest ecosystems. To verify if our hypothesis could be generalized to other forests, we tested it on several world forest sites for which ANPP, GPP and soil C:N data were available in the literature.

Study sites
Six forests were considered in the present study. Three sites were in central Italy, two sites in northern Italy, and one in Croatia. All sites were equipped with an eddy covariance tower for mass, momentum and energy ecosystem exchange measurements and can be classified as high fertility sites according to key soil properties (Vicca et al. 2012 - see also Appendix 1). Site characteristics and flux data are reported in Tab. 1, while a brief description of each site is given below. Roccarespampani (42° 24′ N, 11° 55′ E - Claus & George 2005, Tedeschi et al. 2006) is a Turkey oak (Quercus cerris L.) coppice forest at about 235 m a.s.l. in central Italy. Mean annual temperature is 14 °C and mean annual rainfall is 755 mm.
The soil is a sandy clay Luvisol (typically nutrient rich), derived from sedimentary material of volcanic origin and marine deposits, and is moderately acid (pH = 5.7), with a total depth > 100 cm (Rey et al. 2002). Cation exchange capacity (CEC) is high, ranging between 19 and 42 meq 100 g-1 in the different soil layers (Tedeschi et al. 2006). The forest has been managed as a "coppice with standards" over the last 200 years, with a rotation cycle varying between 15 and 20 years. Two stands were selected: a 6-year-old coppice (RO1) and a 15-year-old coppice (RO2).

Jastrebarsko (JA - 45° 37′ N, 15° 41′ E; Marjanovic et al. 2010, 2011) is a 35-year-old forest in Croatia dominated by pedunculate oak (Quercus robur L.) with 19% black alder (Alnus glutinosa Gaertn.), 14% hornbeam (Carpinus betulus L.) and 9% narrow-leafed ash (Fraxinus angustifolia L.). Mean annual temperature is 10.4 °C, with mean monthly temperatures of -0.2 °C and 20.7 °C in January and July, respectively. Average annual precipitation is 900 mm year-1, of which around 500 mm falls during the active vegetation period (April-September). The soil is a Luvic Stagnosol with a depth > 100 cm and an acidic pH (4.9) in the upper mineral layer (0-20 cm) that increases linearly to neutral pH at depths > 100 cm. At the beginning of the growing season, the soil drains and water content soon drops below water holding capacity (46% v/v), allowing enough oxygen supply for root growth and substantially increasing nutrient availability in these soils, where nutrient availability can be constrained by high water levels.

La Mandria (LM - 45° 09′ N, 7° 34′ E) is an 80-year-old pedunculate oak-hornbeam forest (Quercus robur L. and Carpinus betulus L.) in northern Italy. Mean annual temperature at the site is 11.6 °C and annual precipitation is 1030 mm. The soil is a Typic Fragiudalf with adequate moisture content throughout the year, neutral pH and good CEC (ranging from 17 to 11 meq 100 g-1 at the soil surface and Bh horizons, respectively).

Collelongo (CO - 41° 52′ N, 13° 38′ E; Valentini et al. 1996, Scartazza et al. 2004). Mean annual temperature at the site is 7.1 °C and mean annual rainfall is 1188 mm. The soil is a Humic Alisol with volcanic ash also present. Both CEC and N content are high in the different soil layers, ranging from 14.8 to 23.3 meq 100 g-1 and from 4 to 7.3 mg N g-1, respectively (Persson et al. 2000). Wet N deposition rates in the period 2002-2009 averaged 10.8 kg N ha-1 yr-1 (Flechard et al. 2011).

[Tab. 1 - General characteristics of the six forest sites used in this study. (a): N deposition was derived for all sites from published gridded maps with 0.5° × 0.5° resolution based on interpolated (kriged) ground data (available at http://www.daac.ornl.gov); total wet deposition (kg N ha-1 y-1) was computed as the sum of the available aqueous NO3- and NH4+ fields. (b): For Collelongo, the reported number refers to direct measurements available for the period 2002-2009 (Flechard et al. 2011).]

Net root-derived C input to soil
Net-Croot was quantified using the in-growth core isotope technique, following Cotrufo et al. (2011). A soil depleted in 13C (δ13C = -17.22‰) was collected from the USDA-ARS Central Plains Experimental Range located in NE Colorado, USA (40° 49′ N, 104° 46′ W). The soil is classified as a Zigweid soil series (fine-loamy, mixed, superactive, mesic Ustic Haplocambid), with a pH of 7.4, N content of 1.37 g kg-1, and P content of 0.5 g kg-1 (Cotrufo et al. 2011).
At this site, plant cover is approximately 75% C4 grasses, and for brevity we henceforth refer to this soil as the "C4 soil". The soil was air-dried prior to being sealed and boxed for shipment to Italy. Upon arrival, the C4 soil was ground, sieved to 2 mm and well mixed to make a homogeneous soil pool before being used for in-growth cores and chemical (%C and δ13C) analyses, as described below. At each forest site, six cores, made of a 2 mm mesh net (thus allowing the penetration of fine roots) with a diameter of 4 cm and a height of 30 cm, were placed randomly within the eddy covariance tower footprint in October 2006 (2008 for Jastrebarsko) and filled with the C4 soil to a bulk density similar to the average bulk density of the site. At the top of each core the net was closed to avoid above-ground litter input. Cores were sampled a year later, and the soil from each core was separated into 0-15 cm and 15-30 cm depth layers, except at Jastrebarsko, where the entire 0-30 cm core was considered. All soil samples were sieved to 2 mm, and root samples were carefully removed and washed with deionized water. Root samples were pooled by site and depth, and each sample was analyzed in triplicate. Both soil and root samples were oven-dried at 70 °C, pulverized and analyzed for %C and δ13C by an elemental analyzer (Flash EA 1112 NC, CE Instruments, Wigan, UK) connected to an Isotope Ratio Mass Spectrometer (IRMS, Delta Plus, Thermo-Finnigan, Bremen, Germany). Prior to C analyses, soil samples were treated with HCl to eliminate carbonates (Harris et al. 2001). The measured δ13C values were used to calculate the proportion of new C (fnew, i.e., the Net-Croot) using a mass balance equation (Del Galdo et al. 2003, Cotrufo et al. 2011 - eqn. 4):

fnew = (δsoil - δold) / (δveg - δold)

where δsoil is the δ13C of the organic matter of the C4 soil collected from each core after one year of field incubation, δold is the δ13C of the organic matter of the C4 soil measured before incubation, and δveg is the δ13C of the roots averaged by site and depth. The average δveg value across all our sites was -28.11 ± 0.29‰, while the variation (standard deviation) within a site was between 0.15 and 0.57‰ at RO1 and RO2, respectively. Knowing the fnew values for the new C, the soil organic C concentration (%C), the soil depth (D, m), and the soil bulk density (σ, kg m-3), Net-Croot amounts (g m-2) were computed for all soil samples as follows (eqn. 5):

Net-Croot = fnew × (%C / 100) × σ × D × 1000

Estimates of Net-Croot using this method (Cotrufo et al. 2011) rely on the assumptions that: (1) root inputs are the same inside and outside the in-growth bags and are independent of the C4 soil properties; and (2) there is no isotopic fractionation during the decomposition of the native SOM or the formation of new SOM from the root tissues. New studies applying this method should test these assumptions, since some fractionation could occur (Hobbie et al. 2004).
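Eqns. 4-5 can be worked through in a few lines of R. This is an illustrative sketch: δold is the C4-soil δ13C reported above, but the core δ13C, %C, bulk density and depth values are invented, not measurements from this study, and the unit-conversion factor of 1000 (kg C to g C) is inferred from the stated units.

```r
# eqn. 4: proportion of new (root-derived) C in an incubated core,
# from the delta13C mass balance
f_new <- function(d_soil, d_old, d_veg) (d_soil - d_old) / (d_veg - d_old)

f <- f_new(d_soil = -18.5,    # core after one year in the field (invented)
           d_old  = -17.22,   # C4 soil before incubation (reported above)
           d_veg  = -28.11)   # site-averaged root delta13C (reported above)

# eqn. 5: Net-Croot (g C m-2) from C concentration (%C), bulk density
# (sigma, kg m-3) and layer depth (D, m); the factor 1000 converts kg to g
net_c_root <- function(f, pct_c, sigma, d) f * (pct_c / 100) * sigma * d * 1000

net_c_root(f, pct_c = 1.2, sigma = 1300, d = 0.15)  # one 0-15 cm layer
```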
Ecosystem fluxes and primary production
Eddy covariance flux data from all five Italian sites were analyzed for the years 2006-2007 (Tab. 1). Data of net ecosystem exchange (NEE), gross primary production (GPP) and ecosystem respiration (Reco) at monthly time steps were downloaded from the central Fluxnet database (http://gaia.agraria.unitus.it/database/). Specifically, we used the NEE data gap-filled with the Artificial Neural Network method (NEE_ANN from the level 4 dataset - Papale et al. 2006). Reco was computed according to the short-term temperature response of night-time fluxes (Reichstein et al. 2005), and GPP values were derived as the sum of the absolute values of NEE_ANN and Reco. At sites where data for the years 2006 or 2007 were incomplete even after gap-filling because of missing weather data, data for 2008 were also included in the analysis for the calculation of annual means. For the Jastrebarsko site, 2009 eddy flux data were derived from Marjanovic et al. (2010). Mean annual temperature (MAT), mean annual precipitation (MAP), and soil C stocks (0-30 cm), as well as changes in wood biomass (stem and branches - ΔCwood), were derived from ancillary data files available at the central database, updated to 2006-2007 when necessary, or using specific yield tables available for the site (e.g., Jastrebarsko). All data were checked, and if necessary updated and completed, by the site Principal Investigators, who are co-authors of the present study. Changes in root biomass (ΔCroots) were derived from ΔCwood using root-to-shoot ratios reported by Mokany et al. (2006) or using site-specific relationships, as in the case of Collelongo, and do not include fine root productivity. ANPP was calculated as the sum of ΔCbiomass and NPPleaves (foliar net primary production). The latter corresponds to litterfall in the case of broadleaved forests, and was directly measured at the site (i.e., Rocca, Jastrebarsko, Collelongo) or assessed from NPPwood using biomass expansion factors derived at nearby sites with similar species composition and structure (i.e., La Mandria). In the case of Lecceto, where the dominant species is evergreen (Holm oak), we assumed that the system was at steady state and thus litterfall = NPPleaves. The ANPP:GPP ratio was then calculated.

World forest sites data
In order to test if the relationship between ANPP:GPP and soil C:N observed across our study sites was generalizable across forest ecosystems, we searched published datasets (Litton et al. 2007, Luyssaert et al. 2007, Vicca et al. 2012) for forest sites providing data suitable for our analyses. Twenty-three additional sites with ANPP and GPP data, as well as soil C:N (determined for a depth of up to 45 cm), were found (Tab. 2). Fertility classification followed Vicca et al. (2012). More details are given in Appendix 1.

Data analysis
At each site, the annual net change in soil C (ΔCsoil - g C m-2 y-1) was calculated starting from eddy covariance NEE data and measured changes in aboveground wood biomass (ΔCwood) and coarse roots (ΔCroots) by re-arranging eqn. 1 (eqn. 6):

ΔCsoil = -NEE - ΔCwood - ΔCroots

Statistical analyses were performed using the package SIGMA PLOT® 11.0 (Systat® Software, San José, CA, USA). Data were tested for normal distribution using the Shapiro-Wilk test and for homogeneity of variance, and log transformed when necessary. To assess differences in Net-Croot among sites, a one-way analysis of variance (one-way ANOVA) was used. Significant treatment (site) effects (P < 0.05) were further explored via treatment (site) comparisons using the Least-Squares means test with Tukey's adjustment for multiple comparisons. For sites where data for the 0-15 and 15-30 cm depths were available, a two-way ANOVA with site and depth as fixed factors was also performed. A correlation analysis between all available variables was performed using Spearman's rank method through a correlation matrix in STATA10® (StataCorp®, College Station, TX, USA). For variables that were correlated with p < 0.10, linear models were fitted to the measured data.
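The site-level bookkeeping of eqns. 1 and 6 amounts to a few arithmetic steps, sketched below in R. All numbers are illustrative placeholders, not values from Tab. 1, and the ANPP formula follows the ΔCwood + litterfall form used in the results.

```r
# Partitioning the annual C sink from eddy covariance and inventory data;
# fluxes in g C m-2 y-1, all values invented for illustration
nee        <- -547   # eddy covariance NEE (negative = net ecosystem uptake)
d_c_wood   <- 180    # measured change in aboveground wood biomass
d_c_roots  <- 45     # coarse-root change derived from root:shoot ratios
npp_leaves <- 250    # foliar NPP (litterfall in deciduous stands)
gpp        <- 1500

nep      <- -nee                          # NEP = -NEE
d_c_soil <- nep - d_c_wood - d_c_roots    # eqn. 6: soil C sink as the residual

anpp <- d_c_wood + npp_leaves             # ANPP = dCwood + litterfall
anpp / gpp                                # partitioning ratio used in the analysis
```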
Ecosystem C sink partitioning
All six sites were net C sinks with similar NEP values (average NEP was 547 ± 25 g C m-2 year-1) but with large differences in annual GPP (Tab. 1). They actively sequestered C both aboveground and in the soil: ΔCwood represented between 10 and 48% of annual GPP (RO1 and LM, respectively), ANPP (= ΔCwood + litterfall) was between 13 and 77% (RO1 and LM, respectively), and ΔCsoil was positive at all sites, representing between 6 and 20% of annual GPP (CO and RO1, respectively).

[Fig. 1 - Net annual root-derived carbon input (Net-Croot) to soils (0-15 and 15-30 cm) quantified using isotope-labelled (C4) soil in-growth cores at the six study sites. Vertical bars indicate standard deviation. Different letters indicate significant differences in total Net-Croot at p < 0.05. For site labels, see Tab. 1. For the Jastrebarsko site (JA), only total net root-derived carbon is reported (grey bar).]

[Fig. 2 - Relationships between soil C:N stoichiometry and gross primary productivity (GPP - panel A) or net soil C sequestration (panel B). Dashed lines represent the 95% confidence interval; the reported R2 is the adjusted R2.]

World forest sites
The positive relationship between the ANPP-to-GPP ratio and soil C:N found across our six study sites was confirmed also when including additional fertile forests from different regions (Tab. 2). In particular, a positive relationship was found between ANPP:GPP and soil C:N for high fertility (sensu Vicca et al. 2012) sites (adjusted R2 = 0.64; p = 0.03; Fig. 4). Conversely, no significant relationship was detected for low and medium fertility sites.

Discussion and conclusions
To our knowledge, this study is the first to quantify Net-Croot in a range of forest ecosystems. The measurement of Net-Croot in situ is difficult; thus measured values are lacking and modeled estimates cannot be validated. However, the in-growth core isotope technique has already been shown to allow detection of changes in Net-Croot in CO2 and climate manipulation experiments (Hoosbeek et al. 2004, Cotrufo et al. 2011), even though it does suffer from several caveats related to the use of an exogenous soil and high spatial variability. Steingrobe et al. (2000) reviewed the in-growth core method for measuring gross root growth: a first shortcoming associated with this method is achieving soil conditions inside the bag similar to those of the bulk soil. Moreover, soil texture has also been shown to significantly influence rhizodeposition rates (Scandellari et al. 2010), although it is difficult to determine whether soil texture influenced rhizodeposition rates in our study. Our estimates of Net-Croot using the in-growth core isotope technique were on average 606 g C m-2 y-1, which is higher than the values reported by Cotrufo et al. (2011) for an Arbutus unedo L. coppice in dry Mediterranean conditions, but lower than the values reported by Hoosbeek et al. (2004) for an irrigated and fertilized poplar plantation in central Italy. A possible overestimation of Net-Croot may also be related to the fact that a certain amount of fine root fragments could have passed through the 2 mm sieve. Such an amount is a function of root integrity, as affected by plant age and sample processing.
Being aware of this possible overestimation and of the above-mentioned limitations associated with the in-growth core isotope technique, in this study we used the Net-Croot estimates solely as an indicator of differences in the effect of root-derived C on SOC sequestration, through the calculation of the ratio ΔCsoil : (net root-derived C + litterfall C). Many factors have been suggested to affect soil C sequestration, including the characteristics of the input material, soil texture and mineralogy, climatic factors, and soil nutrient status (Galantini et al. 1992, Andrén & Kätterer 1997). We found that the proportion of root C input resulting in C sequestration at these high fertility sites was related to the soil C:N ratio, and soil C sequestration was greater at low C:N (Fig. 2b), therefore confirming our hypothesis. Recently, Manzoni et al. (2012) suggested a C-to-nutrient stoichiometric control on microbial C use efficiency (CUE), which would increase with increasing nutrient availability. The importance of CUE as a determinant of the fate of plant inputs to soils has also been recognized by other recent studies (Schimel & Schaeffer 2012, Cotrufo et al. 2013), and some models have suggested that low nutrient availability, particularly N, might limit soil C storage through mechanisms that are still not completely understood (Rastetter et al. 1997, Hungate et al. 2003). Recently, Kirkby et al. (2013) hypothesized that the sequestration of C-rich crop residue material into SOM could be improved only by adding supplementary nutrients, as the more stable SOM fraction has more N, P and S per unit of C than the plant material input, due to microbial reprocessing. Thus, the increase in soil C sequestration at lower soil C:N values observed in this study may be explained by a higher microbial CUE of root C inputs. Soil C:N exerted a strong control on GPP across our six forests, and GPP increased with decreasing soil C:N (Fig. 2a). This relationship is based on six forest sites, and we cannot exclude the possibility that other factors influenced this relation. At the ecosystem scale, variation in global plant productivity across ecosystems has often been related to environmental factors (Field et al. 1995, Reichstein et al. 2007b), but also to nutrient availability (Vicca et al. 2012). In this context, Zha et al. (2013) reported a strong positive relationship between GPP or NPP and total soil N. Across our sites, ΔCwood and ANPP increased slightly, but not significantly, with increasing soil C:N, and showed significant relations with MAT, MAP and stand age, thus confirming previous studies (Curtis et al. 2002, Hsu et al. 2012, Robinson et al. 2012, He et al. 2012). How could such different behavior of GPP versus ΔCwood or ANPP with respect to soil C:N be explained? We suggest that this result is due to the lower demand for N by woody tissues (which comprise the largest fraction of the tree and are characterized by very high C:N) as compared to green leaves (which control GPP, and have much lower C:N than woody tissues). As a result of these variations in both GPP and ANPP, the ratio between ANPP and GPP varied substantially among our six forest sites.

[Fig. 5 - Changes in GPP and aboveground net primary production (ANPP) with changes in nutrient availability. At high fertility sites (such as the sites considered in the present paper), GPP, ANPP and soil C sequestration (ΔC) changes are controlled by soil C:N stoichiometry. At low soil C:N ratio, C sink allocation shifts from NPP to soil C sequestration.]
Following the distinct patterns of ANPP and GPP versus soil C:N, the ANPP-to-GPP ratio significantly increased with increasing soil C:N (Fig. 2a). At first sight, this seems to contradict the current understanding that partitioning of photosynthates into aboveground biomass increases with increasing nutrient availability across a wide range of forests (Vicca et al. 2012). However, all six forest sites had high nutrient availability, but at those sites where soil N presumably exceeded tree demand for wood growth (i.e., sites with low C:N), root C inputs were probably responsible for the higher net soil C sequestration. We speculate that the link between soil C:N stoichiometry and microbial activity accounts for the control of belowground C sequestration, as well as for the increase in the ANPP-to-GPP ratio with increasing soil C:N, across the high fertility forests in our dataset. At soil C:N below 15, CUE is expected to be high, and more of the fresh C input is used for microbial products, resulting in the net formation of new SOM. Conversely, when C:N is high, microbes have a low C use efficiency and therefore respire more of the fresh C inputs and prime SOM decomposition (Fontaine et al. 2004), which increases N availability and supports a higher allocation of fixed C (GPP) to ANPP. Our observations of an increasing ANPP-to-GPP ratio, and the tendency for a decrease in soil C sequestration with increasing soil C:N (Fig. 2b), support this hypothesis. In order to further test this hypothesis, we analyzed a larger dataset. Also in this case, ANPP-to-GPP ratios were quite variable (average ANPP:GPP = 0.28 with SD = 0.10; Tab. 2), and our analysis confirmed the relationship between the ANPP-to-GPP ratio and soil C:N at sites with high fertility (Fig. 4). At sites where overall nutrient availability was low, this relationship did not hold. Variation in the partitioning of GPP to ANPP at these sites is probably driven by the need for plants to invest in the nutrient-acquiring system (i.e., roots and root symbionts - Vicca et al. 2012). When nutrient availability is limited, belowground input by plants may be the dominant control of microbial activity and SOM mineralization (Hamilton & Frank 2001, Wardle et al. 2004, De Deyn et al. 2008, De Graaff et al. 2010), thereby influencing mineral nutrient availability for plant uptake. Our speculation is also consistent with other recent findings. At the Duke Free Air CO2 Enrichment (FACE) experiment, the increase in the belowground C flux stimulated microbial activity, accelerated SOM decomposition, and stimulated tree uptake of N bound to this SOM, sustaining ANPP (Drake et al. 2011, Phillips et al. 2012). Yin et al. (2013) found that an increase in the release of root exudates into the soil was an important physiological mechanism sustaining the growth responses of plants to experimental warming. At our study sites, soil C:N stoichiometry appeared to be weakly controlled by the soil clay content (p = 0.15 - Tab. S1 in Appendix 1), decreasing with increasing %clay in soil. This is consistent with our knowledge of soil primary organo-mineral particles, which describes clay-associated SOM as the fraction with the highest microbial contribution and lowest C:N ratio (Christensen 1992, Grandy & Neff 2008). In conclusion, our results suggest that a specific site property, such as soil texture, could drive soil C:N stoichiometry, which in turn would control ecosystem C uptake and partitioning within forests of high nutrient availability.
While GPP strongly and linearly increased with increasing soil N, the aboveground tree biomass demand for N appeared to saturate, possibly because of the higher C:N of wood versus green leaves, and, at high nutrient availability, NPP became limited by other environmental factors. When this occurs, more C is sequestered in the soil (Fig. 5), where the high N availability promotes microbial CUE and new SOM formation.
Estimation of COVID-19 prevalence in Italy, Spain, and France

At the end of December 2019, coronavirus disease 2019 (COVID-19) appeared in Wuhan city, China. As of April 15, 2020, >1.9 million COVID-19 cases were confirmed worldwide, including >120,000 deaths. There is an urgent need to monitor and predict COVID-19 prevalence to control its spread more effectively. Time series models are significant in predicting the impact of the COVID-19 outbreak and taking the necessary measures to respond to this crisis. In this study, Auto-Regressive Integrated Moving Average (ARIMA) models were developed to predict the epidemiological trend of COVID-19 prevalence in Italy, Spain, and France, the most affected countries of Europe. The prevalence data of COVID-19 from 21 February 2020 to 15 April 2020 were collected from the World Health Organization website. Several ARIMA models were formulated with different ARIMA parameters. The ARIMA (0,2,1), ARIMA (1,2,0), and ARIMA (0,2,1) models with the lowest MAPE values (4.7520, 5.8486, and 5.6335) were selected as the best models for Italy, Spain, and France, respectively. This study shows that ARIMA models are suitable for predicting the prevalence of COVID-19 in the future. The results of the analysis can shed light on understanding the trends of the outbreak and give an idea of the epidemiological stage of these regions. Besides, the prediction of COVID-19 prevalence trends in Italy, Spain, and France can help other countries take precautions and formulate policy for this epidemic.

Introduction
COVID-19 is defined as a disease caused by a new type of coronavirus that spreads rapidly from person to person and has become a major epidemic causing great tragedy. COVID-19 has been identified as belonging to a family of zoonotic coronaviruses, such as the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East Respiratory Syndrome Coronavirus (MERS-CoV) seen in the past decade. The starting point of the virus is considered to be the Wuhan city of China, and the first fatal cases were reported in late 2019. This virus causes fatal effects, especially on the elderly and those with chronic diseases. The disease has a very dynamic structure and spreads rapidly. Unfortunately, as of April 15, 2020, 123,010 deaths and approximately 2 million cases have been confirmed worldwide. The number of confirmed cases varies due to differences in epidemiological surveillance and detection capacities between countries. However, it can be said that the disease has spread all over the world as of today. Since there is no established treatment method for this type of virus yet, the rate of disease spread must be controlled through effective planning of health infrastructure and services. For this reason, the estimation of total confirmed cases and possible new cases in the future is vital for managing and directing demand to the health system. Mathematical and statistical modeling tools that can be used for making short- and long-term case estimates are needed to plan the number of additional materials and resources required to deal with the outbreak. Estimating the expected burden of disease is essential for public health officials to manage, effectively and in a timely manner, the medical care and other resources needed to overcome the epidemic.
Also, such estimates can direct the intensity and type of interventions needed to alleviate the outbreak (Zhang et al., 2020). Recently, different statistical methods such as time series models (Kurbalija et al., 2014), multivariate linear regression (Thomson et al., 2006), grey forecasting models (Wang et al., 2018a; Zhang et al., 2017), backpropagation neural networks (Ren et al., 2013; Zhang et al., 2013), and simulation models (Nsoesie et al., 2013; Orbann et al., 2017) have been used to predict epidemic cases. Epidemics are affected by many different factors. For this reason, the general spread of an outbreak is characterized by both tendencies and randomness. Therefore, the statistical tools mentioned above are often insufficient to analyze the randomness of an epidemic, and the models are difficult to generalize. The Auto-Regressive Integrated Moving Average (ARIMA) model has been successfully applied in the field of health, as well as in other fields, due to its simple structure, fast applicability and ability to explain the data set (Cao et al., 2020). As seen in Table 1, ARIMA models have been successfully applied in the past to estimate influenza mortality, malaria incidence, hepatitis, and other infectious diseases. Besides, ARIMA models are widely used for time series prediction of epidemic diseases such as hemorrhagic fever with renal syndrome, dengue fever, and tuberculosis. ARIMA models are instrumental in modeling the temporal dependency structure of a time series, given the changing trends, periodic changes, and random disturbances in the time series. They are relatively easy to explain to the end-user since ARIMA methods do not involve much mathematics or statistics. In this way, the end-user can have an idea of how the prediction model is developed and can rely more on the model during the decision-making process. In recent studies, different models have been used to predict COVID-19 incidence, prevalence, and mortality rate in China. For example, Li et al. (2020) developed a function to predict the ongoing trend with data-driven analysis and estimate the outbreak size of COVID-19 in China. Roosa et al. (2020) used phenomenological models validated during previous outbreaks to create and evaluate short-term forecasts of the cumulative number of confirmed cases in Hubei, China (Roosa et al., 2020). Fanelli and Piazza (2020) analyzed the temporal dynamics of the COVID-19 pandemic in mainland China, Italy, and France (Fanelli and Piazza, 2020). Roda et al. (2020) compared standard SIR and SEIR frameworks to model COVID-19 in Wuhan, China (Roda et al., 2020). Wu et al. (2020) predicted the spread of COVID-19 on the national and global scale, to evaluate the effect of the metropolitan-wide quarantine of Wuhan and its neighbours. Al-qaness et al. (2020) improved the Adaptive Neuro-Fuzzy Inference System (ANFIS) by applying an Enhanced Flower Pollination Algorithm using the Salp Swarm Algorithm to estimate the number of confirmed COVID-19 cases in China (Al-qaness et al., 2020). Anastassopoulou et al. (2020) studied the estimation of critical epidemiological parameters as well as modeling and predicting the spread of the COVID-19 epidemic in Hubei, China (Anastassopoulou et al., 2020). Wang et al. (2020) developed the Patient Information Based Algorithm for estimating the death rate of COVID-19 in real time using publicly available data.
In summary, there are many studies in the literature predicting the spread of COVID-19 in China. However, Europe has become the epicenter of the virus, and the continent has been hit harder than China. As of April 15, 2020, the apparent mortality rate of COVID-19 is 4% in China, 13% in Italy, 11% in Spain, and 15% in France. Therefore, it is important to analyze the situation of the COVID-19 epidemic and predict the prevalence trend, especially in Italy and the two other most affected countries, France and Spain. The aim of this study is to estimate the prevalence of COVID-19 in Italy, Spain, and France, where the virus spreads fastest and causes tragic results. The data analyzed in this study correspond to the period between 21 February 2020 and 15 April 2020. The data set was used to build and evaluate case estimation models by applying different ARIMA models. Thus, in addition to characterizing the spread of the epidemic, the aim was to provide authorities with realistic estimates of the peak time and intensity of the epidemic using simple quantitative models. These models can help predict the health infrastructure and material needs that patients will require in these countries in the near future. Data collection The prevalence data of COVID-19 was taken from the WHO website (https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/), and MS Excel was used to build a time-series database. Descriptive statistics of the COVID-19 data of the mentioned countries between 21/02/2020 and 15/04/2020 are given in Table 2. To create a stable and effective ARIMA model, at least 30 observations are required (Box et al., 2015). Therefore, in this study, a time series containing at least 45 data points was used to predict the COVID-19 prevalence of Italy, Spain, and France. (Table 1 summarizes various studies on disease prevalence/incidence prediction using the ARIMA model.) As seen from Fig. 1, the COVID-19 outbreak in Spain and France started later than in Italy. Italy reported its first COVID-19 case on January 31, 2020. In Italy, the total number of confirmed cases of COVID-19 reported during the period is 162,488, with an average of 3009 new cases per day. The north of the country was most affected, and the region with the highest number of cases was Lombardy, which recorded 62,153 cases. The neighbouring regions of Emilia-Romagna and Piedmont recorded 21,029 and 18,229 cases, respectively. The overall prevalence of COVID-19 in Spain and France follows that of Italy, the hardest-hit country in Europe. Spain is the second country with the highest number of deaths in Europe. The first COVID-19 case in Spain was reported about a month after Italy's, and since then the number of confirmed cases has jumped to about 172,541. In France, the other most affected European country, the first COVID-19 case was reported on January 24, 2020; the number of deaths reached 15,708, and the reported total confirmed cases hit 102,533.
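The assembly of this time-series database can be sketched in a few lines of code. The sketch below is illustrative only: the file names and the `date`/`cumulative_cases` column names are hypothetical stand-ins for a CSV export of the WHO situation-report counts (the authors themselves used MS Excel).

```python
import pandas as pd

# Build one cumulative-case series per country for 21/02/2020-15/04/2020.
series = {}
for country in ("italy", "spain", "france"):
    df = pd.read_csv(f"who_{country}.csv", parse_dates=["date"], index_col="date")
    s = df["cumulative_cases"].loc["2020-02-21":"2020-04-15"].astype(float)
    # Box-Jenkins modeling needs roughly 30+ observations; the study
    # uses at least 45 daily data points per country.
    assert len(s) >= 30
    series[country] = s
    print(country, s.describe())  # descriptive statistics, cf. Table 2
```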
ARIMA models A time series is simply expressed as a set of data points ordered in time (Fanoodi et al., 2019). Time series analysis aims to reveal reliable and meaningful statistics and to use this knowledge to predict future values of the series (Liu et al., 2011;Elevli et al., 2016;He and Tao, 2018;Benvenuto et al., 2020). The ARIMA model was introduced by Box and Jenkins in the 1970s (Box et al., 2015). ARIMA is one of the most widely used time series models as it takes into account changing trends, periodic changes and random disturbances in the time series. ARIMA is suitable for all kinds of data, including those with trend, seasonality, and cyclicity. It is also flexible and useful in modeling the temporal dependency structure of a time series. An ARIMA model is generally referred to as ARIMA(p,d,q), where p is the order of autoregression, d is the degree of differencing, and q is the order of the moving average. The ARIMA model can be reduced to an ARMA model, or to a simple AR, I or MA model. The AR(p) model expresses the current value of the time series $Y_t$ linearly in terms of its previous values $Y_{t-1}, Y_{t-2}, \ldots, Y_{t-p}$ and the current residual $\varepsilon_t$. The MA(q) model expresses the current value of the time series $Y_t$ linearly in terms of its current and previous residuals $\varepsilon_t, \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots, \varepsilon_{t-q}$. [Fig. 3. The estimated ACF and PACF graphs to predict the epidemiological trend of COVID-19 prevalence for (a) Italy, (b) Spain, and (c) France.] The general formulas of the AR(p) and MA(q) models can be expressed as Eqs. (1) and (2), respectively:

$Y_t = \alpha + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \varepsilon_t$ (1)

$Y_t = \mu + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2} - \cdots - \theta_q \varepsilon_{t-q}$ (2)

where $\phi$ and $\theta$ are the autoregressive and moving average parameters, respectively, $Y_t$ is the observed value at time t, and $\varepsilon_t$ is the value of the random shock at time t, assumed to be independently and identically distributed with a mean of zero and a constant variance of $\sigma^2$. The ARMA(p,q) model is composed of the AR and MA models, in which the current value of the time series is defined linearly in terms of its previous values as well as the current and previous residuals. The ARMA(p,q) model can be presented as in Eq. (3):

$Y_t = \alpha + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \cdots - \theta_q \varepsilon_{t-q}$ (3)

where $\alpha$ is a constant and $\varepsilon_{t-1}$ is the value of the previous random shock. The ARIMA model deals with non-stationary time series: the differenced, stationary time series can be modelled as an ARMA model to perform the ARIMA model (He and Tao, 2018). Model selection The accuracy of a model can be tested by comparing the actual values with the predicted values. In this study, three performance criteria, namely Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE), were applied to test the predictive accuracy of the developed ARIMA models. They are expressed mathematically in Eqs. (4) to (6):

$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n} e_t^2}$ (4)

$\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n} \lvert e_t \rvert$ (5)

$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n} \left\lvert \frac{e_t}{y_t} \right\rvert$ (6)

where $y_t$ is the observed value at time point t, $e_t$ is the difference between the observed and estimated values, and n is the number of time points. Lower RMSE, MAE, and MAPE values indicate a better fit to the data. All analyses were performed using STATGRAPHICS Centurion XVI.I software with a statistical significance level of p < .05.
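For concreteness, the three accuracy criteria in Eqs. (4) to (6) can be written out directly. This is a minimal sketch rather than the authors' STATGRAPHICS workflow; `y` holds the observed values and `y_hat` the model's fitted values.

```python
import numpy as np

def rmse(y, y_hat):
    # Root Mean Square Error, Eq. (4)
    e = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return np.sqrt(np.mean(e ** 2))

def mae(y, y_hat):
    # Mean Absolute Error, Eq. (5)
    e = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return np.mean(np.abs(e))

def mape(y, y_hat):
    # Mean Absolute Percentage Error, Eq. (6), expressed in percent
    y = np.asarray(y, dtype=float)
    e = y - np.asarray(y_hat, dtype=float)
    return 100.0 * np.mean(np.abs(e / y))
```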
Results and discussion 3.1. Forecasting the prevalence of the COVID-19 pandemic using the ARIMA model The ARIMA modeling procedure is composed of four iterative steps: assessment of the model, estimation of parameters, diagnostic checking, and prediction. The first step is to check whether the time series is stationary and seasonal. A time series is considered stationary if its statistical properties, such as mean, variance and autocorrelation, are constant over time. The stationarity of a time series is important as it makes it easier to obtain accurate estimates (Elevli et al., 2016). Time series plots, Autocorrelation Function (ACF), and Partial Autocorrelation Function (PACF) graphs were constructed to check seasonality and stationarity. The ACF graph determines whether previous values in the series are related to the following values. The PACF graph finds the degree of correlation between a variable and a lag of that variable that is not explained by correlations at all lower-order lags (He and Tao, 2018). Estimated autocorrelations for the time series of Italy, Spain, and France are shown in Fig. 2. Straight lines on the graph mark two-standard-deviation limits and allow nonzero correlations to be detected. Bars that extend beyond the lines show statistically significant autocorrelations for the COVID-19 data. Figs. 1 and 2 confirm that the overall prevalence of COVID-19 used in this study does not show seasonal patterns. However, the ACF plots in Fig. 2 show that the prevalence of COVID-19 is not stationary because the autocorrelations decay very slowly. Therefore, the first-order difference was taken to stabilize the mean of the COVID-19 prevalence. Even after the first difference, however, the trends of the series were not eliminated, so second-order differences were taken. All series became stationary after the second difference, and the parameters of the ARIMA models were then determined according to the ACF and PACF plots (see Appendix). In addition to the developed ARIMA models, different models were also created, and their performances were compared using various statistical tools. All statistical procedures were performed on the transformed COVID-19 data. ARIMA models with minimum MAPE values and statistically significant parameters were selected as the best models. Accordingly, the ARIMA (0,2,1), ARIMA (1,2,0), and ARIMA (0,2,1) models were chosen as the best models for Italy, Spain, and France, respectively. The models fitted the COVID-19 data reasonably well (Fig. 3, Table 3), with minimum MAPE values of 4.752 for Italy, 5.849 for Spain, and 5.634 for France. Table 4 shows the parameter estimates for the best models. The p-values associated with the parameters are <0.05, so the terms are significantly different from zero at the 95.0% confidence level. The fitted and predicted values are presented in Fig. 4. As seen in Table 5, the estimated total confirmed cases over the next 10 days may reach 196,147 in Italy, 204,497 in Spain, and 140,619 in France. Discussion Effective strategies are needed to prevent and control the spread of epidemics. Estimating the epidemiological trend of the prevalence of outbreaks is crucial for the allocation of medical resources, the regulation of production activities, and even the national economic development of countries. Thus, it is essential to create a reliable and suitable forecasting model that can serve governments as a reference when deciding on emergency macroeconomic strategies and medical resource allocation. Time series analysis is instrumental in developing hypotheses to understand the prevalence trends of various diseases, in forecasting the dynamics of observed phenomena, and in constructing quality control systems. The ARIMA model is one of the most commonly used time series forecasting methods because of its simplicity, systematic structure and acceptable forecasting performance (Wang et al., 2018b). In this study, the current situation of the COVID-19 pandemic in Italy, Spain, and France was presented, and the ongoing trend and extent of the outbreak were estimated with ARIMA models. To the best of our knowledge, this study is the first to implement ARIMA models to predict the prevalence of COVID-19 in Italy, Spain, and France.
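The fit-select-forecast procedure described above can be reconstructed with the statsmodels library. This sketch is an assumption-laden stand-in for the STATGRAPHICS analysis: the candidate orders beyond the three reported best models are illustrative, and `select_and_forecast` is a hypothetical helper name.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def select_and_forecast(cases: pd.Series, steps: int = 10):
    """Fit candidate ARIMA(p,2,q) models on a cumulative-case series,
    keep the fit with the lowest MAPE, and forecast `steps` days ahead
    with a 95% confidence interval."""
    candidates = [(0, 2, 1), (1, 2, 0), (1, 2, 1), (2, 2, 0)]  # all use d = 2
    fits = {order: ARIMA(cases, order=order).fit() for order in candidates}

    def in_sample_mape(fit):
        return (100 * (cases - fit.fittedvalues).abs() / cases).mean()

    best_order = min(fits, key=lambda order: in_sample_mape(fits[order]))
    forecast = fits[best_order].get_forecast(steps=steps)
    return best_order, forecast.predicted_mean, forecast.conf_int(alpha=0.05)

# Example: order, mean, ci = select_and_forecast(series["italy"])
```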
There is great concern about whether the capacity of European countries' health systems can effectively respond to the needs of infected patients requiring intensive care during the COVID-19 pandemic. Especially in Italy, the number of patients infected since February 21 closely follows an exponential trend. Although the total number of confirmed cases in Italy is still increasing, the incidence of new confirmed cases is declining, and the government plans to return to normal life gradually. Daily new confirmed cases decreased to 2000-4500 over the last ten days. Meanwhile, Spain, Europe's second-worst-hit country with 18,056 deaths, has seen a drop in daily coronavirus deaths in the past five days. However, its total number of confirmed cases has overtaken Italy's. On the other hand, there is no downward trend in new confirmed cases in France, and it seems that more days are needed to reach the plateau. This pattern will push intensive care units to their maximum capacity. As a result, if the virus does not develop new mutations, the number of cases is expected to reach a plateau. Otherwise, clinical and social problems may become unmanageable and result in disaster. Conclusion Forecasting the prevalence of the disease is important for health departments in strengthening surveillance systems and reallocating resources. Time series models play an important role in outbreak analysis and disease prediction. In this study, ARIMA time series models were applied to the overall prevalence of COVID-19 in the three European countries most affected by COVID-19: Italy, Spain, and France. The results of the study can help political and health authorities plan and supply resources effectively, including staff, beds and intensive care facilities, to manage the situation in these countries over the next few days and weeks. For more precise comparison and future perspectives, the data should be updated in real time. [Fig. 4. Time-series plots for the best ARIMA models.] [Table 5. Prediction of total confirmed cases of COVID-19 for the next ten days according to the ARIMA models, with 95% confidence intervals.]
Concerted activities of Mcm4, Sld3 and Dbf4 in control of origin activation and DNA replication fork progression Eukaryotic chromosomes initiate DNA synthesis from multiple replication origins in a temporally specific manner during S phase. The replicative helicase Mcm2-7 functions in both initiation and fork progression and thus is an important target of regulation. Mcm4, a helicase subunit, possesses an unstructured regulatory domain that mediates control from multiple kinase signaling pathways, including the Dbf4-dependent Cdc7 kinase (DDK). Following replication stress in S phase, Dbf4 and Sld3, an initiation factor and essential target of Cyclin-Dependent Kinase (CDK), are targets of the checkpoint kinase Rad53 for inhibition of initiation from origins that have yet to be activated, so-called late origins. Here, whole genome DNA replication profile analysis is employed to assess, under various conditions, the effect of mutations that alter the Mcm4 helicase regulatory domain and the Rad53 targets, Sld3 and Dbf4. Late origin firing occurs under genotoxic stress when the controls on Mcm4, Sld3 and Dbf4 are simultaneously eliminated. The regulatory domain of Mcm4 plays an important role in the timing of late origin firing, both in an unperturbed S phase and under dNTP limitation. Furthermore, checkpoint control of Sld3 impacts fork progression under replication stress. This effect is parallel to the role of the Mcm4 regulatory domain in monitoring fork progression. Hypomorphic mutations in sld3 are suppressed by an mcm4 regulatory domain mutation. Thus, in response to cellular conditions, the functions executed by Sld3, Dbf4 and the regulatory domain of Mcm4 intersect to control origin firing and replication fork progression, thereby ensuring genome stability. INTRODUCTION Eukaryotic cells initiate DNA synthesis from multiple replication origins on each chromosome to ensure efficient duplication of the genome in S phase. Activation of replication origins is achieved through two distinct steps that take place at separate stages of the cell division cycle. The first step, licensing of replication origins, occurs in G1 when CDK activity is low (Diffley 2011). During this process, a double hexameric minichromosome maintenance (MCM) complex, composed of two Mcm2-7 hexamers, is loaded onto each replication origin to form a pre-Replicative Complex (pre-RC) by the Origin Recognition Complex (ORC) and the licensing factors Cdc6 and Cdt1 (Diffley 2011). The second step, activation of licensed origins, occurs at each origin in a temporally controlled manner throughout S phase and requires the activities of two S phase kinases, the S phase Cyclin-dependent Kinases (CDKs) and the Dbf4-dependent Cdc7 kinase (DDK) (Tanaka and Araki 2013). CDK phosphorylates two key substrates, Sld2 and Sld3, and promotes their binding to Dpb11 (Tanaka et al. 2007;Zegerman and Diffley 2007). DDK phosphorylates several subunits of the Mcm2-7 hexamer and, most importantly, blocks an intrinsic inhibitory activity residing within the amino-terminus of the Mcm4 subunit (Sheu and Stillman 2006;Randell et al. 2010;Sheu and Stillman 2010). The action of these S phase kinases facilitates recruitment of Cdc45 and the GINS complex, composed of the protein subunits Sld5, Psf1, Psf2 and Psf3, to the inactive MCM double hexamer and converts it into an active helicase complex, composed of Cdc45, Mcm2-7 and GINS (the CMG complex) (Tanaka and Araki 2013).
The two-step process separates the loading and activation of replicative helicases at origins and thereby ensures that initiation from each origin occurs once and only once during each cell division cycle. Once origins are fully activated, the double helix unwinds and DNA polymerase and other replisome components are recruited to establish replication forks, where new DNA is copied bi-directionally from each origin. Initiation of DNA synthesis from licensed origins across the genome (origin firing) follows a pre-determined temporal pattern (Rhind and Gilbert 2013). In budding yeast, the timing of DNA replication can be traced to the activation of individual origins. Origins that fire early in S phase are referred to as early origins and those that fire later are late origins. Despite being an essential target of CDK, Sld3, together with Sld7 and Cdc45, binds to the loaded Mcm2-7 hexamer in a manner dependent on DDK but not CDK (Heller et al. 2011;Tanaka et al. 2011). This association is a prerequisite for the subsequent CDK-dependent recruitment of a pre-loading complex, composed of Sld2, Dpb11, GINS and pol ε (Muramatsu et al. 2010). It was proposed that DDK-dependent recruitment of the limiting Sld3-Sld7-Cdc45 is a key step in determining the timing of origin firing (Tanaka et al. 2011). Furthermore, simultaneous overexpression of several limiting replication factors advances late origin firing (Mantiero et al. 2011;Tanaka et al. 2011). Under genotoxic stress during S phase, DNA damage checkpoint pathways inhibit late origin firing (Zegerman and Diffley 2009). In budding yeast, DNA damage activates the Mec1 kinase, the homolog of mammalian ATM/ATR, which in turn activates the Rad53 effector kinase (the homolog of mammalian Chk2) to phosphorylate and inhibit the activities of Sld3 and Dbf4, thereby preventing late origin firing (Lopez-Mosqueda et al. 2010;Zegerman and Diffley 2010). Some firing of late origins could be detected under DNA damaging conditions in phosphorylation mutants of these two targets rendered refractory to the inhibition by Rad53. An initiation inhibitory activity within the non-structured, amino-terminal regulatory domain of Mcm4 (NSD; Fig. 1) also plays a role in regulating origin firing under genotoxic stress (Sheu et al. 2014). Because this domain is a target of DDK (Masai et al. 2006;Sheu and Stillman 2006;Sheu and Stillman 2010), it is conceivable that Mcm4 could mediate the checkpoint control exerted through Rad53 phosphorylation of Dbf4. However, since DDK has targets other than Mcm4 and Mcm4 is regulated by signals in addition to DDK, a more comprehensive picture of how these factors cooperate to control origin firing under stress conditions remains to be established. In addition to origin activation, DNA synthesis can be controlled at the level of replication fork progression. For example, deoxyribonucleoside triphosphate (dNTP) levels influence the rate of replication fork progression (Santocanale and Diffley 1998;Alvino et al. 2007). Hydroxyurea (HU) inhibits the activity of ribonucleotide reductase (RNR) and causes a dramatic slowdown of replication fork progression. In contrast, high dNTP concentrations inhibit ORC-dependent initiation of DNA replication (Chabes and Stillman 2007). It has been proposed that dNTP levels are key determinants of replication fork speed and that cells adapt to replication stress by up-regulating dNTP pools (Poli et al. 2012).
Methyl methanesulfonate (MMS), a DNA-alkylating agent, also results in slower fork progression while activating the DNA damage checkpoint response (Tercero and Diffley 2001). Although Mec1 and Rad53 are essential for preventing DNA replication fork catastrophe, these checkpoint kinases are not required for fork slowing in MMS (Tercero and Diffley 2001;Tercero et al. 2003). Thus, it is possible that an alternative mechanism might regulate fork progression under stress conditions. The structurally disordered N-terminal serine/threonine-rich domain (NSD) of Mcm4 participates in both initiation and fork progression (Sheu et al. 2014). It can be subdivided into two overlapping but functionally distinct segments, the proximal segment and the distal segment (Fig. 1). The proximal segment of the NSD (amino acids 74-174) is responsible for the initiation inhibitory activity that is mitigated by DDK through phosphorylation (Sheu and Stillman 2006;Sheu and Stillman 2010). The distal segment (amino acids 2-145) is important for controlling fork progression and the checkpoint response under replication stress caused by depletion of dNTP pools, and its function is regulated by CDK (Devault et al. 2008;Sheu et al. 2014). Thus, this intrinsic regulatory domain of the replicative helicase may cooperate closely with additional factors to control origin firing and replication fork progression in response to various environmental conditions. By employing whole genome replication profile analysis, we investigated the effect of mutations affecting the two Mcm4 NSD segments and the Rad53 phosphorylation targets, Sld3 and Dbf4, on origin activation and replication fork progression under various conditions. The results show that when the controls on all three factors are simultaneously eliminated, robust late origin firing occurs following genotoxic stress caused by low dNTP levels, even without overexpressing limiting factors. The proximal NSD plays an important role in the timing of late origin firing in both a normal S phase and under DNA damaging conditions, while the checkpoint-resistant versions of Sld3 and Dbf4 only have effects under DNA damaging conditions. However, a checkpoint-resistant version of Sld3 has a strong impact on fork progression under replication stress conditions, and this effect works in parallel to the effect of the Mcm4 distal NSD. In addition, we also found that removing the inhibitory proximal NSD domain from Mcm4 can suppress the defect of hypomorphic sld3 mutants. Thus, in response to environmental conditions, Sld3, Dbf4 and the regulatory segments of Mcm4 contribute in a concerted manner to control origin firing and replication fork progression through overlapping but distinct pathways to ensure genome stability. Eliminating the controls simultaneously on Mcm4, Sld3 and DDK is required for maximal late origin firing in hydroxyurea To evaluate how deregulating the control on Mcm4, Sld3 and Dbf4 impacts origin firing on a genome-wide scale, the replication profiles of the wild type (WT) and mutant strains carrying mcm4 Δ 74-174 [an mcm4 mutant lacking the proximal NSD domain, the target of DDK], sld3-38A and dbf4-19A [alleles of SLD3 and DBF4, respectively, that are resistant to the checkpoint control due to serine/threonine to alanine substitutions at the Rad53 target sites (Zegerman and Diffley 2010)] were analyzed in single, double and triple mutant combinations. Cells were analyzed by releasing them synchronously from G1 phase into S phase in the presence of HU for 90 min (Fig. 2, data for chromosome IV).
Late origins were inactive at this time in WT (Fig. 2A, profile WT; red arrows), while a very low level of late origin firing was detected in each of the single mutants (profiles mcm4 Δ 74-174 , sld3-38A and dbf4-19A; Fig. 2A), consistent with previous findings (Lopez-Mosqueda et al. 2010;Zegerman and Diffley 2010;Sheu et al. 2014). Late origin firing appeared more prominent in all of the double mutant combinations (profiles mcm4 Δ 74-174 sld3-38A, sld3-38A dbf4-19A and mcm4 Δ 74-174 dbf4-19A), suggesting that these three factors function in pathways that are not completely overlapping. Among the double mutants, the mcm4 Δ 74-174 sld3-38A combination showed the most robust late origin firing. Thus it is likely that Mcm4 and Sld3 function in separate control pathways to regulate origin firing. The mcm4 Δ 74-174 dbf4-19A combination only increased late origin firing slightly compared with each single mutant, consistent with the finding that Dbf4 and Mcm4 act in pathways that overlap extensively, as Mcm4 is the essential target of DDK (Sheu and Stillman 2010). The detectable, but limited, increase in late origin firing in sld3-38A dbf4-19A cells in comparison with their single mutants also suggests overlap of the pathways involving Sld3 and Dbf4. Interestingly, in the triple mutant (Fig. 2A, profile mcm4 Δ 74-174 sld3-38A dbf4-19A), many late origins fired very robustly, more than in any of the double and single mutants, and the efficiency of late origin firing approached the level of the early origins. The massive firing of late origins in the triple mutant further suggests that each of the three factors contributes independently to control of origin firing through overlapping but non-identical pathways. Late origin firing in these mutants was not due to a defect in the HU-induced checkpoint response. Judging from levels of Rad53 hyper-phosphorylation and phosphorylation of S129 in H2A (γH2A), the checkpoint signaling in mcm4 Δ 74-174 , sld3-38A and all the double and triple mutants was stronger than in WT (Fig. 2C, Rad53 and γH2A). The elevated checkpoint signaling could also reflect more origin firing in these cells and more stalled forks (Tercero 2003). Consistent with this idea, higher levels of Cdc45 loading were also detected in these mutants (Fig. 2D). Proximal segment of Mcm4 NSD delays late origin firing in HU when control on Sld3 and Dbf4 by the checkpoint is abrogated To determine whether the triple mutant activated late origins with the same kinetics as early origins, the replication profiles of WT and the triple mutant were compared at different time points after release from G1 phase into S phase in the presence of HU (Fig. 3). At 25 min after release into HU, early origins fired in WT cells, while late origins remained inactive (Fig. 3A). At 50 min, the profile of activated origins in WT remained similar to that at 25 min, with a small increase in peak width, indicating progression of replication forks. At 75 min, replication forks progressed further, but the pattern of origin firing remained unchanged. In contrast, the triple mutant activated some late origins by 25 min, and the peak height of late origins continued to increase relative to that of early origins as time progressed to 50 and 75 min, suggesting that late origins continued to fire in this cell population.
These data show that removing the proximal NSD segment of Mcm4, together with abolishing Rad53 phosphorylation of Sld3 and Dbf4, allowed late origins to fire efficiently in the presence of HU, but still in a temporally specific manner. In the presence of the proximal NSD segment, however, only low levels of late origin firing were detected in the sld3-38A dbf4-19A mutant at 50 min after release from G1; firing increased only slightly at 75 min and did not reach the same level as in the mcm4 Δ 74-174 sld3-38A dbf4-19A mutant (Fig. 3B). These observations suggest that the proximal segment of the NSD prevents late origin firing at the earlier time in S phase despite the absence of active checkpoint inhibition of Sld3 and DDK function. The distal segment of Mcm4 NSD and Rad53 phosphorylation of Sld3 affect replication fork progression In addition to revealing patterns of origin activation, whole genome replication profile analysis also provides information on the average replication fork progression from each origin in the population of cells. For computational analysis, we defined fork progression as the observed peak width at the half maximum of the peak height for each origin in the profile (Sheu et al. 2014). Analysis of replication profiles in HU showed that replication fork progression was much less in all the mutants containing the sld3-38A allele than in the wild type (Fig. 2A, 2B, 3C, 4A and 4B; Supplementary table 1), suggesting that phosphorylation of Sld3 by the checkpoint kinase Rad53 is needed to allow replication fork progression in HU. The mcm4 Δ 74-174 mutant lacking the Mcm4 NSD proximal segment also had less fork progression than the wild type, but the difference was subtle, yet reproducible. Interestingly, during analysis of the double mutants it became apparent that hyper-activation of checkpoint signaling, as measured by the degree of Rad53 and H2A phosphorylation, was observed in the sld3-38A single mutant and the mcm4 Δ 74-174 sld3-38A double mutant, but was not observed in the mcm4 Δ 2-145 sld3-38A and mcm4 Δ 74-174, 4(SP→AP) sld3-38A double mutants. These hyper-phosphorylation levels were slightly higher than those of the mcm4 Δ 2-145 single mutant (Fig. 4C). Despite the variation in Rad53 and H2A phosphorylation, the downstream response to checkpoint activation, as manifested in degradation of Sml1 and up-regulation of Rnr4, appeared similar among all strains. The proximal segment of the Mcm4 NSD controls firing of late origins in an unperturbed S phase Since the proximal segment of the NSD imposes a barrier to restrain late origin firing in HU (Fig. 3B), it is possible that this domain also controls late origin firing in a normal, unperturbed S phase.
To test this idea, the replication profiles of WT and various mcm4 mutants containing mutations within the NSD were examined and compared at 25 min after release from G1 arrest (Fig. 5). This time point was selected because peaks representing DNA synthesis from individual origins could be clearly detected without excessive overlap and thus were more suitable for analysis of origin firing and fork progression. At this time, DNA synthesis from early origins was readily detected in WT, whereas very small amounts of DNA synthesis occurred from late origins (Fig. 5A, profile WT, red arrows). Thus, this time represents a point in early S phase. A similar profile was found with the mutant lacking the distal segment of the NSD (Fig. 5A, profile mcm4 Δ 2-145 ). In the mcm4 Δ 2-174 and mcm4 Δ 74-174 mutants that lacked the proximal segment of the NSD, however, peaks corresponding to firing from late origins were clearly observed, although still lower than those from early origins (Fig. 5A), suggesting that the proximal NSD segment contributes to the temporal pattern of late origin firing during an unperturbed S phase. Interestingly, advanced firing of late origins did not occur when, in the context of an Mcm4 lacking the proximal segment of the NSD, the phospho-acceptors for S phase-CDK phosphorylation within the distal segment of the NSD were mutated to alanine (profile mcm4 Δ 74-174, 4(SP→AP) ) (Fig. 5). In contrast, mutation of these same residues to the phospho-mimetic aspartic acid (profile mcm4 Δ 74-174, 4(SP→DP) ) allowed earlier firing of late origins, similar to the mcm4 Δ 74-174 mutant. Thus, phosphorylation of the CDK sites within the distal segment of the NSD is important for efficient firing of late origins during an unperturbed S phase. From the replication profile analyses, it appeared that mutations in the Mcm4 NSD did not have a dramatic effect on fork progression in an unperturbed S phase (Fig. 5B; Supplementary table 1). However, subtle differences would be more difficult to detect in such an experiment because DNA synthesis occurred much faster in the absence of HU. Similarly, the sld3-38A mutation did not restrict fork progression, in contrast to its effect in HU (Fig. 6B; Supplementary table 1). The distribution of fork progression in the dbf4-19A mutant appeared more heterogeneous (Fig. 6B), similar to the pattern observed in HU (Fig. 2B). Double mutants of dbf4-19A with either mcm4 Δ 74-174 or sld3-38A yielded phenotypes resembling the mcm4 Δ 74-174 or sld3-38A single mutant, respectively, with respect to replication fork progression. Checkpoint response and replication profiles for Mcm4 NSD mutants entering S phase in the presence of MMS Although the NSD proximal segment controls late origin firing in both a normal S phase and an S phase with a depleted dNTP pool, the effect of NSD mutations on fork progression was only observed in the presence of HU. Furthermore, sld3-38A showed a strong phenotype in restricting fork progression in HU, but no obvious effect in a normal S phase.
The differential influence of these mutations on origin firing and replication fork progression in an unperturbed versus an HU-treated S phase raised the question of whether these factors would have the same effect under types of genotoxic stress other than dNTP depletion. Thus, the DNA damage checkpoint response and DNA replication profiles were studied in cells replicating in the presence of the DNA-alkylating agent methyl methanesulfonate (MMS). For replication profile analyses, cells were synchronized in G1 phase and allowed to enter S phase in the presence of MMS for 50 min. We did not use the 90 min time point that we typically used for analysis of profiles in HU because replication in MMS was faster than in HU, and 90 min in MMS would have produced profiles difficult to analyze due to numerous merged peaks and passive replication at late/unfired origin loci by replication forks moving from early firing origins. At 50 min in MMS after release from G1 arrest, few late origins fired in WT and in the mutant lacking the distal NSD segment (Fig. 7A, profiles WT and mcm4 Δ 2-145 ). In contrast, late origin firing was more evident in the mutants lacking the proximal NSD segment (Fig. 7A, profiles mcm4 Δ 2-174 and mcm4 Δ 74-174 ). In the same context, mutation of the CDK sites to alanines within the distal NSD segment (mcm4 Δ 74-174,4(SP→AP) ) suppressed late origin firing, while mutating the same sites to phosphomimetic aspartic acids (mcm4 Δ 74-174,4(SP→DP) ) restored the level of late origin firing. Thus, the proximal NSD segment also mediates control of late origin firing in MMS. Replication fork progression was also affected in the Mcm4 NSD mutants replicating in the presence of MMS (Fig. 7). Fork progression was more restricted in the mcm4 Δ 74-174 mutant lacking the NSD proximal segment compared to WT, while more expansive fork progression was observed in mutants lacking the distal NSD segment (Fig. 7A and B; compare mcm4 Δ 2-145 with WT and mcm4 Δ 2-174 with mcm4 Δ 74-174 ). Fork progression in MMS was also regulated by phosphorylation at the CDK target sites within the distal segment of the NSD, because forks progressed further in mcm4 Δ 74-174,4(SP→AP) compared to progression in mcm4 Δ 74-174 , while in the mcm4 Δ 74-174,4(SP→DP) mutant fork progression was more restricted, similar to that in mcm4 Δ 74-174 (Fig. 7B). Therefore, the distal and proximal segments of the Mcm4 NSD play important roles in mediating control of fork progression under diverse types of genotoxic stress. In HU, the NSD distal segment was important for checkpoint signaling at the level of Mec1 signaling (Sheu et al. 2014), although Mec1 phosphorylation of Mcm4 is independent of checkpoint activation (Randell et al. 2010). Removing the distal segment of the NSD, or mutation of the phospho-acceptor amino acids at CDK sites to alanine within this domain, resulted in reduced levels of Rad53 hyper-phosphorylation and γH2A. In contrast, other aspects of checkpoint signaling further downstream, such as Sml1 degradation and Rnr4 induction, appeared normal in cells treated with HU.
In MMS, however, hyper-phosphorylation of Rad53 and S129 phosphorylation in H2A, as well as further downstream events such as degradation of Sml1 and up-regulation of Rnr4 levels, appeared very similar among the wild type and the various Mcm4 NSD mutants (Fig. 7C). Thus, the Mcm4 NSD did not play a prominent role in checkpoint signaling in response to DNA damage caused by MMS. Cooperation between the proximal segment of the Mcm4 NSD and Rad53 in regulating late origin firing in MMS The Rad53-resistant sld3-38A and dbf4-19A strains alone also showed low levels of late origin firing in cells treated with MMS (Fig. 8A). The difference between the wild type and the dbf4-19A mutant was subtle. Nevertheless, the double mutants sld3-38A dbf4-19A and mcm4 Δ 74-174 sld3-38A activated more late origins than the wild type and any of the single mutants. The mcm4 Δ 74-174 dbf4-19A mutant showed only a slight increase in late origin firing compared with the mcm4 Δ 74-174 single mutant, consistent with Mcm4 functioning downstream of Dbf4 in controlling late origin firing in MMS. Furthermore, the triple mutant activated late origins the most efficiently (Fig. 8A, profile mcm4 Δ 74-174 sld3-38A dbf4-19A). Thus, all three factors contribute to control of late origin firing through overlapping but non-identical pathways in MMS, as was found for cells treated with HU. As in HU, the sld3-38A mutant had an effect on restricting replication fork progression in MMS (Fig. 8B). The effects of sld3-38A and mcm4 Δ 74-174 on restricting fork progression were additive under this condition, suggesting that they control fork progression separately. Fork progression in dbf4-19A was more extensive, but became more restricted when sld3-38A or mcm4 Δ 74-174 was also present. Thus, Mcm4, Sld3 and Dbf4 cooperate to regulate fork progression in MMS. The DNA damage checkpoint signaling was active in the wild type and all of the single, double and triple mutant combinations of mcm4 Δ 74-174 , sld3-38A and dbf4-19A (Fig. 8C). Although we detected elevated H2A S129 phosphorylation in the mcm4 Δ 74-174 sld3-38A and sld3-38A dbf4-19A double mutants, as well as in the triple mutant, the differences in signaling at this level among strains did not appear as dramatic as those observed in HU (Fig. 2C). Suppression of the temperature-sensitive (ts) phenotype of multiple sld3-ts mutants by deletion of the Mcm4 NSD proximal segment Although the replication profile analyses suggest that Sld3 and the Mcm4 NSD mediate controls from separate pathways, the fact that they affect similar processes raises the possibility that the tasks executed by these two factors converge on a common target. For example, the proximal segment of the Mcm4 NSD may be inhibiting the same molecular process that Sld3 is facilitating. If this is the case, it is likely that removing the proximal NSD segment would compensate for the weakened Sld3 function in hypomorphic sld3 mutants. This idea was tested by introducing the mcm4 Δ 74-174 mutation into the sld3-ts mutants sld3-5, sld3-6 and sld3-7 (Kamimura et al. 2001), all of which fail to grow on YPD plates at non-permissive temperatures above 30ºC, 34ºC and 37ºC, respectively.
At 30ºC, the sld3-5 mutant grew extremely poorly compared with the wild type and the mcm4 Δ 74-174 mutant, while the mcm4 Δ 74-174 sld3-5 double mutant grew much better than the sld3-5 mutant (Fig. 9A). At 23ºC, the sld3-5 mutant also grew more slowly than the wild type, but the mcm4 Δ 74-174 sld3-5 mutant grew similarly to the wild type. Thus, removing the proximal segment of the NSD improved the growth of the sld3-5 mutant. Likewise, removing the proximal segment of the NSD improved the growth of the sld3-6 and sld3-7 mutants at 37ºC, although no growth occurred in sld3-5 and mcm4 Δ 74-174 sld3-5 at this temperature. Thus, removing the proximal segment of the Mcm4 NSD suppresses the defect of multiple hypomorphic sld3-ts mutants. We also tested whether removing the proximal segment of the Mcm4 NSD would suppress the ts phenotype of mutants affecting other factors that function with Sld3, such as Dpb11 and components of the GINS complex (Kamimura et al. 1998;Takayama et al. 2003). The mcm4 Δ 74-174 mutation failed to suppress the ts phenotype of sld5-12 or psf1-1, mutants in GINS subunits (Fig. 9B, 30ºC, 34ºC and 37ºC). A very slight improvement of growth was observed in mcm4 Δ 74-174 dpb11-1, compared with the dpb11-1 mutant, at 34ºC. However, this suppression was much less effective than the suppression of the sld3-6 defect by mcm4 Δ 74-174 (Fig. 9B, 34ºC and 37ºC). The specific and strong suppression of sld3-ts by mcm4 Δ 74-174 is consistent with the idea that Sld3 and the Mcm4 NSD regulate the same process at the molecular level to control origin firing and influence replication fork progression under genotoxic stress. DISCUSSION The inhibition of DNA replication under genotoxic stress requires both the Rad53 and Mec1 kinases (Sanchez et al. 1996;Santocanale and Diffley 1998;Zegerman and Diffley 2009). In previous studies, we demonstrated that even in the presence of an active S-phase checkpoint response, late origins fire in the presence of HU when the Mcm4 NSD proximal segment is removed (Sheu et al. 2014). This observation suggested that under replication stress the checkpoint kinase Rad53 inhibits Dbf4 by phosphorylation (Lopez-Mosqueda et al. 2010;Zegerman and Diffley 2010), rendering DDK incapable of relieving the initiation inhibitory activity of the Mcm4 NSD proximal segment (Fig. 9C). Since the Mcm4 NSD proximal segment is targeted by DDK (Sheu and Stillman 2010), inhibition of DDK by active Rad53 would not prevent initiation in the absence of this initiation inhibitory domain. However, late origin firing in the absence of this domain was still rather inefficient, presumably because checkpoint activation of Rad53 still allowed phosphorylation of the other target, Sld3, thereby inactivating Sld3 and preventing robust firing of late origins (Fig. 9C). By using specific probes for analysis by alkaline gel electrophoresis or two-dimensional gel electrophoresis, some firing of certain late origins was detected in sld3 and dbf4 mutants that are refractory to inhibition by the checkpoint kinase Rad53 (Lopez-Mosqueda et al. 2010;Zegerman and Diffley 2010).
In the current study, whole genome replication profile analysis was used to investigate the individual roles, as well as the combined effect, of the two Mcm4 NSD segments and the Rad53 targets, Sld3 and Dbf4, on origin activation and replication fork progression, in order to delineate the relationship among these factors in the control of replication in response to replication stress. In both HU and MMS, late origins fired in each of the mcm4 Δ 74-174 , sld3-38A and dbf4-19A single mutants across the entire genome, albeit very inefficiently (Figs. 2 and 7). The fact that all the double mutant combinations among these three mutations activated late origins more efficiently than the respective single mutants, and that the triple mutant exhibited the most efficient firing of late origins, suggests that each factor contributes a unique function to the control of origin activation. Yet, their functions may not be completely independent (Fig. 9C). For example, late origin firing in the mcm4 Δ 74-174 dbf4-19A double mutant appeared only marginally more efficient than in the mcm4 Δ 74-174 single mutant. This was not surprising given that Mcm4 is the principal, essential target of DDK and mcm4 Δ 74-174 can bypass the regulation by the kinase (Sheu and Stillman 2010). Nevertheless, because the triple mutant promotes robust firing of late origins, more than any of the single and double mutants, both the Mcm4 NSD proximal segment and Dbf4 must also independently contribute to regulating late origin firing (Fig. 9C). This can be anticipated for Dbf4 because DDK also phosphorylates other factors in addition to the Mcm4 NSD. For Mcm4, it raises the possibility that, in addition to DDK, other factors might participate in regulating the function of the proximal NSD in controlling late origin firing under replication stress. Identification of factors that interact with the proximal NSD may shed light on this aspect of the control mechanism. Alternatively, Dbf4 may affect Mcm2-7 helicase activity independently of the Mcm4 NSD or participate in feedback regulation of Rad53 kinase activity or specificity (Fig. 9C, dashed lines). One possibility is that they antagonize each other's activity, essentially creating a feedback loop for inactivating the checkpoint once the replication stress has subsided. The partial overlap of the functional pathways involving Sld3 and Dbf4, as revealed by the minimal combined effect of dbf4-19A and sld3-38A on late origin firing in HU compared to each single mutant, can be explained by the fact that the early association of Sld3 with the pre-RC depends on DDK activity (Heller et al. 2011;Tanaka et al. 2011) (Fig. 9C). In contrast, the mcm4 Δ 74-174 sld3-38A mutant exhibited the strongest additive effect among the double mutant combinations, suggesting that these two factors mediate regulation via separate pathways (Fig. 9C). However, the suppression of the hypomorphic sld3-ts defect by mcm4 Δ 74-174 suggests that these two pathways intersect on a common process and likely regulate the same factors. It is possible that this common pathway converges on the direct activation of the Mcm2-7 helicase through recruitment of the other helicase components, Cdc45 and GINS (Fig. 9C).
In the sld3-38A dbf4-19A double mutant, which was expected to be completely refractory to control by the Rad53-dependent S-phase checkpoint, late origins did not fire until 50 min into S phase in the presence of HU (Fig. 3B). Given that late origin firing was readily detected in a normal, unperturbed S phase at this time, this suggests that a mechanism is functioning to withhold late origins from firing in this double mutant condition. Removing the proximal segment of the Mcm4 NSD in the same genetic background allowed late origins to fire by 25 min after release, similar to what we observed in an unperturbed S phase (Fig. 5A), strongly suggesting that activating DDK alone is not sufficient to efficiently block the function of the proximal NSD segment in withholding late origins from firing earlier in HU. Thus, besides DDK, additional factors might participate in relieving the block imposed by the proximal NSD to delay late origin firing. The proximal segment of the Mcm4 NSD also controls late origin firing in an unperturbed S phase (Fig. 9C). In mutants lacking this domain, more late origin firing was detected in early S phase (Fig. 5A). In contrast, none of the sld3-38A, dbf4-19A or sld3-38A dbf4-19A mutants showed advanced firing of late origins in an unperturbed S phase (Fig. 6A). Thus, the checkpoint kinase Rad53 does not appear to control late origin firing through Sld3 and Dbf4 in a normal S phase. Phosphorylation of the CDK sites within the distal segment of the Mcm4 NSD (Devault et al. 2008) was also important for advanced firing of late origins in the unperturbed S phase when the proximal segment of the NSD was removed (Fig. 5A). Previous studies in budding yeast have shown that, in the absence of the main S-CDK cyclin, Clb5, only early origins fire, but not late origins (Donaldson et al. 1998). Thus, activation of late origins requires the activity of S phase CDK. Together, these results suggest that phosphorylation of the distal segment of the Mcm4 NSD by CDK is an important step for activation of late origins (Fig. 9C). The accumulation of CDK activity as cells progress through S phase may eventually allow late origins to fire. The role of the Mcm4 NSD in regulating late origin firing and fork progression previously discovered in HU was largely recapitulated in cells replicating in MMS (Fig. 7). Specifically, in MMS the proximal segment of the NSD mediates control of late origin firing and the distal NSD segment mediates control of fork progression in a manner that is regulated by phosphorylation at CDK target sites. However, unlike the response in HU, the Mcm4 NSD mutations exhibited little effect on checkpoint signaling in response to DNA damage caused by MMS (Fig. 7C). In our previous study, we noticed an interesting inverse correlation between checkpoint signaling and DNA replication fork progression in HU (Sheu et al. 2014), raising the possibility that one process controls the other. The study here in MMS, in contrast, provides evidence that these two processes might not always influence each other. At least in MMS, the Mcm4 NSD is likely to regulate fork progression through a mechanism independent of the canonical DNA damage checkpoint pathway.
One possibility is that the DNA damage signal somehow controls the activity of CDK or other SP-site kinases, which in turn regulate the function of the distal segment of the NSD. Sld3 also mediates control of DNA replication fork progression in both HU and MMS (Fig. 2B and 8B). The sld3-38A mutant exhibits a dramatic slowdown in replication fork progression. This is a somewhat surprising observation because Sld3 is not considered a replication fork component; it is required for initiation, but not for elongation, in normal S phase progression (Kanemaki and Labib 2006). It is not clear why DNA synthesis is so limited in this mutant, but the mutation does not seem to produce a defective replication factor, because the replication profile of this mutant is similar to that of the wild type in an unperturbed S phase and the mutant grows at a rate comparable to wild type cells. Furthermore, a previous study reported that yeast strains expressing sld3-38A as the sole copy of Sld3 displayed no increase in sensitivity to hydroxyurea or DNA damaging agents and did not exhibit synthetic growth defects with several conditional alleles of essential replication proteins (Zegerman and Diffley 2010). Therefore, this phenotype is likely due to regulation of fork function. Replication profile analysis of the double mutants combining sld3-38A with NSD mutations in the distal segment showed that the effects of sld3-38A and the distal NSD mutations on fork progression in HU are not epistatic to each other (Fig. 4), consistent with the idea that these two factors operate in separate pathways to regulate fork progression in HU. In contrast to its role in HU and MMS, the checkpoint-resistant sld3-38A mutant did not affect fork progression in an unperturbed S phase (Fig. 6). Thus, the control of fork progression through the Rad53 target sites on Sld3 is a specific feature of the genotoxic-stressed condition. Since DDK binds directly to the Mcm2-7 helicase subunits Mcm4 and Mcm2 (Varrin et al. 2005;Sheu and Stillman 2006;Jones et al. 2010), and DDK binds directly to Rad53 (Dohrmann et al. 1999;Weinreich and Stillman 1999;Kihara et al. 2000), it is possible that the regulation of the response to DNA replication stress, such as limiting dNTP levels, involves a local response at the DNA replication fork, essentially a solid-state regulatory complex. How other Mcm2-7-associated replication checkpoint proteins such as Mrc1, Dpb11, Sld2 and the large subunit of DNA polymerase ε control initiation of replication and fork progression remains to be investigated, but we suspect that they integrate with the regulatory system involving Dbf4, Sld3, and Mcm4. METHODS Yeast strains and methods. Yeast strains generated in this study were derived from and are described in Supplementary table 2.
A two-step gene replacement method was used to replace the endogenous MCM4 with mcm4 mutants as described (Sheu et al. 2014). All the yeast strains used for the whole-genome DNA replication profile analyses have a copy of the BrdU-Inc cassette inserted into the URA3 locus (Viggiani and Aparicio 2006). For G1 arrest of bar1Δ strains, exponentially growing yeast cells (~10^7 cells/ml) in YPD were synchronized in G1 with 25 ng/ml of α-factor for 150 min at 30ºC. For G1 arrest of BAR1 strains, exponentially growing cells were grown in normal YPD, then transferred into YPD (pH 3.9), grown to approximately 10^7 cells/ml and then synchronized in G1 with three doses of α-factor at 2 µg/ml at the 0, 50, and 100 min time points at 30ºC. Cells were collected at 150 min for release. To release from G1 arrest, cells were collected by filtration and promptly washed twice on the filter using 1 culture volume of H2O, then resuspended in YPD medium containing 0.2 mg/ml pronase E (Sigma).

Protein sample preparation and immunoblot analysis. TCA extraction of yeast proteins was as described previously (Sheu et al. 2014). For chromatin fractionation, chromatin pellets were prepared from ~5×10^8 yeast cells and chromatin-bound proteins were released with DNase I using a procedure described previously (Sheu et al. 2014). For immunoblot analysis, protein samples were fractionated by SDS-PAGE and transferred to a nitrocellulose membrane. Immunoblot analyses for Mcm3, Cdc45, Orc6, Mcm4, Rad53, γ-H2A, Rnr4 and Sml1 were performed as described (Sheu et al. 2014).

Isolation and preparation of DNA for whole-genome replication profile analysis. A detailed protocol was described previously (Sheu et al. 2014). Briefly, yeast cells were synchronized in G1 with α-factor and released into medium containing 0.2 mg/ml pronase E and 0.5 mM 5-ethynyl-2′-deoxyuridine (EdU), with or without addition of 0.2 M HU or 0.05% MMS, as described in the main text. At the indicated time point, cells were collected for preparation of genomic DNA. The genomic DNA was fragmented, ligated to adaptors containing custom barcodes, and then biotinylated, purified, PCR-amplified, quantified, pooled and submitted for sequencing. Computational analyses of sequencing data were described in detail previously (Sheu et al. 2014).

Computational analyses of sequencing data. Read mapping, replication profile analysis, and peak width analysis were performed as previously described (Sheu et al. 2014) with minor modifications. Each genome-wide replication profile was generated from between 4.4 and 49.1 million mapped reads. In the replication profile analysis, read counts were averaged across the genome using a sliding window of 500 bp, 1000 bp, or 2000 bp. Fork progression was quantified as the full width at half maximum of replication profile peaks that encompass single origins annotated in oriDB v2.1.0 (Siow et al. 2012). Peak heights for each data set were rescaled to a common value for the 99.5 percentile of all genomic positions. Peaks having maximum height below either 10% or 30% of this reference value were excluded from the peak width analysis in order to avoid calling peak widths from insufficient read counts.
This cutoff is indicated by a line in each of the peak-width plots in the Figures. Supplemental Table 1 lists the analysis parameters used for each sample.

ACKNOWLEDGMENTS

This work was supported by a grant from the National Institutes of Health (GM45336) and core facilities by a National Cancer Institute Core grant (CA045508). We thank A. Chabes for the antibodies against Rnr4 and Sml1, H. Araki for yeast strains, the Cold Spring Harbor Laboratory DNA Sequencing Next Gen shared resource for high-throughput sequencing, the Microscopy and Flow Cytometry shared resources and the Bioinformatics core for initial data analysis, and Patty Wendel and the James Building staff for general assistance.

DISCLOSURE DECLARATION

No conflicts of interest exist.

[Figure legend fragments: "... and box graph, excluding peaks with heights smaller than 30% of the maximal height scale." (C) Cells from the indicated strains were synchronized in G1, released into 0.2 M HU and collected at the indicated time points; protein samples were analyzed as in Fig. 2C. Red arrows indicate late origins that are inactive in the wild-type cells but fire in the triple mutant in HU, as in Fig. 2. (B) Distribution of fork progression from origins shown as individual width-height plots and a box graph, which excludes peaks with heights smaller than 30% of the maximal height scale.]
2016-11-01T19:18:48.349Z
2015-07-01T00:00:00.000
{ "year": 2016, "sha1": "0b1c081cd35217ea162b16083d1e2fa68c62062b", "oa_license": "CCBYNC", "oa_url": "http://genome.cshlp.org/content/26/3/315.full.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "187945f011bd88752b7a3ed9c81d369e9205051d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
118909329
pes2o/s2orc
v3-fos-license
Approximate stabilizer rank and improved weak simulation of Clifford-dominated circuits for qudits

Bravyi and Gosset recently gave classical simulation algorithms for quantum circuits dominated by Clifford operations. These algorithms scale exponentially with the number of T-gates in the circuit, but polynomially in the number of qubits and Clifford operations. Here we extend their algorithm to qudits of odd prime dimensions. We generalize their approximate stabilizer rank method for weak simulation to qudits and obtain the scaling of the approximate stabilizer rank with the number of single-qudit magic states. We also relate the canonical form of qudit stabilizer states to Gauss sum evaluations. We give an O(n^3) algorithm for calculating the inner product of two n-qudit stabilizer states.

I. INTRODUCTION

With the prospect of noisy intermediate scale quantum (NISQ) computers with 50-100 qubits appearing in the next decade [4,30], determining the minimal classical cost of simulation of quantum computers has received much recent attention [5,8,18,29,35]. The Gottesman-Knill theorem shows that Clifford circuits are efficiently classically simulatable [1]. Adding any non-Clifford gate creates a universal gate set [32]. One such choice for a non-Clifford gate is the T gate: T|j⟩ = e^{ijπ/4}|j⟩, j ∈ {0, 1} [6]. Bravyi and Gosset gave a classical algorithm for simulation of quantum circuits that scales exponentially with the number of T-gates in the circuit but polynomially with the number of qubits and Clifford gates [8]. This algorithm was further developed in [7]. What is supplied by the addition of T-gates to a Clifford circuit? The fault-tolerant implementation of Clifford+T circuits substitutes magic states for each T gate [9,40]. Colloquially, T gates add "magic" to a Clifford circuit. Magic is supplied by contextuality, a longstanding source of puzzles and paradoxes in the foundations of quantum mechanics [23]. The relationship of magic to contextuality also provides a connection to quasiprobability representations of quantum mechanics [13,36]. Specifically, positivity of a quasiprobability representation is equivalent to the absence of contextuality, and such positive states, operations and measurements admit efficient classical simulation in some cases [28,38]. Classical statistical theories with an imposed uncertainty principle can reproduce these positive quasiprobabilistic theories for Gaussian states and qudits with d > 2 [3,37]. Pashayan et al. gave an algorithm allowing a positive quasiprobability description to include some negativity [34]. Comparing the algorithms of Bravyi and Gosset and of Pashayan et al. should shed more light on the relationship between magic, contextuality and negativity [8,34]. However, quasiprobability representations for qubits are distinct from their d-dimensional cousins [24-26]. The desire to understand the relationship between magic, contextuality and negativity therefore motivates extension of the algorithm of Bravyi and Gosset to qudits with dimension greater than two. In the present paper we extend the algorithm of Bravyi and Gosset to qudits of odd prime dimension. The structure of the paper is as follows. In Sections II and III, we briefly introduce the necessary background. In Section IV we give the nonorthogonal decomposition of the magic state, and in Section V we give results on approximate stabilizer rank and the weak simulation algorithm for qudits. We close the paper by briefly comparing our algorithm to that of [34].

II. QUDIT PAULI GROUP AND CLIFFORD GATES
The Pauli and Clifford groups were first generalized beyond qubits by Gottesman [16]. Assuming henceforth that d is an odd prime, we define the Heisenberg-Weyl operators, where X|j⟩ = |j ⊕ 1⟩ with ⊕ denoting addition modulo d, Z|j⟩ = ω^j|j⟩, x = (x, z) with x and z integers modulo d, ω = exp(2πi/d) and τ = e^{(d+1)πi/d} = ω^{2̄}. The Heisenberg-Weyl operators form a group whose product rule follows from the Heisenberg-Weyl commutation relation ωXZ = ZX, with x_1 · x_2 the symplectic inner product x_1 · x_2 = z_1x_2 − x_1z_2. The generators of the Clifford group on qudits are P, H and CNOT, where P|j⟩ = ω^{j(j−1)/2}|j⟩, H|j⟩ = d^{−1/2} Σ_k ω^{jk}|k⟩ and CNOT|j, k⟩ = |j, k ⊕ j⟩. We can also write any single-qudit Clifford unitary as C_{F,χ} = D_χ U_F, where χ = (x, z) and F is a 2 × 2 matrix with entries modulo d. We will make particular use of matrices C_{γ,χ} = D_χ U_γ for U_γ|k⟩ = τ^{γk^2}|k⟩. The order of C_{γ,χ} is d. The Clifford group is reviewed in more detail in Appendix A. Qudit stabilizer states can be prepared from a logical basis state by a qudit Clifford circuit. The Gottesman-Knill theorem generalizes to qudits, and qudit stabilizer computations allow efficient classical simulation [16]. Qudit stabilizer states possess canonical forms in the logical basis just as in the qubit case [12,19,31]. The remaining generalization we require is an efficient classical algorithm for obtaining the inner product of two stabilizer states. This is required by the algorithm of Bravyi and Gosset, and the qubit case was given in [8]. We give an O(n^3) algorithm for the inner product of two n-qudit stabilizer states, based on Gauss sums, in Appendix F. The qudit T-gate was defined in [11,22] as a diagonal gate U_T that maps Pauli operators to Clifford operators. Its action is specified by the image of X = D_{(1,0)} under U_T. Magic states are then eigenvectors of this image. Let the eigenstate of X with eigenvalue ω^k be |+_k⟩; then the magic states are U_T|+_k⟩. This approach is that taken by Howard in [22]. The image of X under U_T can be written (up to a phase) as C = XP^γ Z^ξ for γ, ξ integers modulo d. The effect of nonzero ξ is simply to reorder the eigenvectors, and hence we can choose ξ = 0. Similarly, the eigenvectors for γ > 1 and γ = 1 are related by application of P^{γ−1}, a Clifford operator. We can therefore specialize to the case γ = 1 and ξ = 0, and the gate with the action of eq. (3), where 3̄ indicates the multiplicative inverse of 3 modulo d. This is the gate defined by Campbell et al. in [11]. The qudit magic states are reviewed in more detail in Appendix B. The definition of magic states allows one to replace a Clifford+T circuit with a Clifford circuit with injected magic states [9,40]. This construction was extended to qudits in [22] and we review it in Appendix D. In Section III we will review the Bravyi-Gosset algorithm for qubits, which we will generalize to qudits.
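As a concrete illustration of the operators just defined, the following short numpy sketch (ours, not from the paper) builds the qudit X and Z matrices for a small odd prime and checks the Heisenberg-Weyl commutation relation ωXZ = ZX.

import numpy as np

d = 5                                  # any odd prime
omega = np.exp(2j * np.pi / d)

# X|j> = |j + 1 mod d>: column j has its single 1 in row (j + 1) mod d
X = np.roll(np.eye(d), 1, axis=0)
# Z|j> = omega^j |j>
Z = np.diag(omega ** np.arange(d))

# Heisenberg-Weyl commutation relation: omega * XZ == ZX
assert np.allclose(omega * X @ Z, Z @ X)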
III. THE BRAVYI-GOSSET ALGORITHM

Bravyi and Gosset gave algorithms for both weak and strong simulation in [7,8]. A strong simulation outputs the probability of measuring output x from a given Clifford+T circuit. A weak simulation algorithm generates samples from the probability distribution over outputs of a given Clifford+T circuit. Here we review the weak simulation algorithm; a brief summary of relevant features of the strong simulation algorithm is given in Appendix C. The key advantage of weak simulation is that one can sample from a P̃_out(x) that is close enough to the actual P_out(x). Bravyi and Gosset devised a method to approximate the t-qubit magic state |A⟩^{⊗t}, where |A⟩ = 2^{−1/2}(|0⟩ + e^{iπ/4}|1⟩), with a superposition of fewer than 2^t stabilizer states. The approximate stabilizer rank χ is defined as the minimal stabilizer rank (defined in [10] and reviewed in Appendix C) of a state |ψ⟩ that satisfies |⟨ψ|A^{⊗t}⟩| ≥ 1 − δ. A close approximation to the tensor product of magic states means a close approximation to the action of a Clifford+T circuit realized by magic state injection [8]. Therefore, P̃_out(x) will be close enough to P_out(x) if δ is small enough. The sampling procedure given by Bravyi and Gosset relies on standard computations of stabilizers. The extension of such computations to d > 2 has long been well understood [16]. We will therefore refer the reader to [8] for details of these procedures which, mutatis mutandis, can be applied in the qudit case, and focus on the approximate stabilizer rank. We begin by reviewing the approximate stabilizer rank construction from [8]. From the magic state |A⟩ defined above one can construct the equivalent magic state |H⟩ = cos(π/8)|0⟩ + sin(π/8)|1⟩. The state |H⟩ can be decomposed into a sum of nonorthogonal stabilizer states as |H⟩ = (2cos(π/8))^{−1}(|0̃⟩ + |1̃⟩), where |0̃⟩ = |0⟩ and |1̃⟩ = (1/√2)(|0⟩ + |1⟩). Then |H⟩^{⊗t} can be rewritten as |H⟩^{⊗t} = (2cos(π/8))^{−t} Σ_{x∈F_2^t} |x̃⟩. The weak simulation algorithm reduces the number of stabilizer states required by approximating |H⟩^{⊗t}. This approximation |H*^{⊗t}⟩ is constructed by taking a subspace L of F_2^t: |H*^{⊗t}⟩ = (2cos(π/8))^{−t} Σ_{x∈L} |x̃⟩. The stabilizer rank of this approximation state is the number of elements in L, which is 2^k. The random subspace L is chosen so that |H*^{⊗t}⟩ satisfies the fidelity condition above. It is useful to discuss the subspaces of F_2^t in the language of d-ary linear codes. L is a k-dimensional binary linear code which can be specified by k generators of length t. These generators can be written in a standard form as a k × t matrix {1_k | G}, where 1_k is the k × k identity matrix and G is a k × (t − k) matrix. Sampling random subspaces of F_2^t is therefore equivalent to sampling matrices G. The algorithm of Bravyi and Gosset achieves an improved scaling of cos(π/8)^{−2t} ≈ 2^{0.23t} for weak simulation over 2^{0.47t} for strong simulation. In Sections IV and V, we will see more details of how to bound the scaling while we extend this approximate rank and weak simulation scheme to qudits.
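The qubit decomposition above is easy to verify numerically. The following numpy snippet (ours) checks that |H⟩ = cos(π/8)|0⟩ + sin(π/8)|1⟩ is reproduced by the two nonorthogonal stabilizer states |0̃⟩ and |1̃⟩, and that both have overlap cos(π/8) with |H⟩.

import numpy as np

H = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
s0 = np.array([1.0, 0.0])                  # |0~> = |0>
s1 = np.array([1.0, 1.0]) / np.sqrt(2)     # |1~> = (|0> + |1>)/sqrt(2)

# |H> = (|0~> + |1~>) / (2 cos(pi/8))
assert np.allclose(H, (s0 + s1) / (2 * np.cos(np.pi / 8)))

# both stabilizer states have the same overlap cos(pi/8) with |H>
assert np.isclose(s0 @ H, np.cos(np.pi / 8))
assert np.isclose(s1 @ H, np.cos(np.pi / 8))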
IV. NONORTHOGONAL DECOMPOSITIONS OF QUDIT MAGIC STATES

The qudit magic state we want to decompose is an eigenvalue-one eigenstate of the Clifford operator C_d as defined by eq. (3). We choose a stabilizer state |0̃⟩ with non-zero inner product with the magic state and act on it with powers of C_d to obtain d stabilizer states {|j̃⟩ = C_d^j|0̃⟩, j = 0, ..., d − 1}. We know these stabilizer states are distinct, because if any pair were equal then the original state |0̃⟩ would be an eigenstate of the Clifford operator and hence a magic state. The sum of these d states forms a decomposition of the magic state (up to a possible global phase): because C_d has order d, this sum is by construction an eigenvalue-one eigenstate of C_d. The d stabilizer states in the decomposition form an orbit around the magic state. This construction was discussed previously in [20]. There are d(d + 1) single-qudit stabilizer states [39], partitioned into d + 1 orbits, each orbit giving a decomposition of the magic state. Every state in each orbit has the same overlap with the magic state |M_d⟩ = M_d|+⟩. This property is a generalization of ⟨0̃|H⟩ = ⟨1̃|H⟩ = cos(π/8) for the qubit case. The overlaps of the elements of the nonorthogonal basis are given by |⟨0̃|j̃⟩| = 1/√d for all j. This expression is that for states in a SIC-POVM, and the construction here is similar to the generation of such states from a fiducial state [14,41]; here we only obtain d states, however. See Appendix G for the evaluation of the phase of ⟨j̃|k̃⟩. The states |+_p⟩ = Z^p|+⟩ are representatives of the d orbits, each of which is generated by C_d. This is because C_d^a|+_p⟩ ≠ |+_q⟩ for any a, p, q, which follows simply from the action of C_d in the logical basis: C_d applies phases quadratic in j to |j⟩, followed by a shift. This cannot equal a state generated from |+⟩ by a power of Z, which can only apply phases linear in j to |j⟩. From the orbit representatives we can determine the inner product of the states in the orbit with the magic state. This is given by eq. (11), a cubic Gauss sum that can be rewritten as in eq. (12). For the d = 3 case, the magnitude and phase of this cubic Gauss sum, and φ(p, d), are computed in Appendix E. The sum is real, although not necessarily positive. Although we do not obtain a closed form for this sum, we can compute the integer value of p which maximizes its absolute value for a given d. These values are tabulated for small d in Table I. The complete form of the nonorthogonal decomposition is given in eq. (13), which is the generalization of eq. (5) to arbitrary d.

V. WEAK SIMULATION AND APPROXIMATE STABILIZER RANK

In order to get an approximation for |M⟩^{⊗t}, we can follow the method of Bravyi and Gosset for the qubit case, taking a k-dimensional subspace of F_d^t and forming the state |M*^{⊗t}⟩ ∝ Σ_{x∈L}|x̃⟩. Here we label the state by L ⊂ F_d^t, a k-dimensional code subspace of F_d^t, and Z(L) is a normalization factor. Comparison with eq. (13) shows that Z(F_d) = d|α|^2. We require a fidelity |⟨M^{⊗t}|M*^{⊗t}⟩|^2 of at least 1 − δ for a given δ; in the corresponding expansion, the first equality follows from eq. (9). Selection of the subspace L depends on two factors. First, we choose the dimension of L by setting k. Note that the maximum precision that can be required from the method for given t is obtained by setting k = t, so that δ_max = 2^{−t(1+2 log_d|α|)+1}. Next we find an L for which Z(L) is not too large. The probability of obtaining a small enough Z(L) can be analyzed as in [8] by evaluating the expectation value of Z(L) over all possible L ⊂ F_d^t. Here I_L(x) is an indicator function, i.e., it is equal to 1 when x ∈ L and 0 otherwise. The second equality holds because the expectation value of I_L(x) is the same for every fixed nonzero x. From eq. (17), randomly choosing δ^{−1} subspaces gives, with high probability, an L satisfying the required bound and hence eq. (15). The upper bound for the approximate stabilizer rank of a t-qudit magic state then follows from the above method. In the qubit case an explicit sum formula was given for Z(L) with 2^k terms, and hence the cost of evaluating Z(L) is O(2^k). What is the cost of evaluating Z(L) for arbitrary d? In Appendix G we give an explicit formula for Z(L) as a sum of products over the d^k codewords, and hence the cost of evaluating Z(L) is O(d^k).

d   M_d                                       p   α (closed form)     α (numeric)   scaling d^{κt}
3   diag(e^{2πi/9}, 1, e^{−2πi/9})            0   (1+2cos(2π/9))/3    0.84403       3^{0.32t}
5   diag(ω^{−2}, ω, ω^{−1}, ω^{−2}, ω^{−1})   4   (3+2cos(2π/5))/5    0.723607      5^{0.41t}
7   diag(ω^3, ω^{−2}, 1, ω^3, ω, ω^2, 1)      3   (1+6cos(2π/7))/7    0.677277      7^{0.40t}

Table I. The matrices M_d, the optimal value of p, and the approximate stabilizer rank scaling comparison for d = 2, 3, 5, 7. Here κ = −2 log_d α so that d^{κt} = α^{−2t}. Here the ω for the d = 5 and d = 7 rows are e^{2πi/5} and e^{2πi/7}, respectively.
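To make the code-subspace sampling concrete, here is a small Python sketch (ours, not the authors' code) that samples a random k-dimensional subspace of F_d^t by drawing the k × (t − k) matrix G and enumerating the d^k codewords spanned by the standard-form generators {1_k | G}.

import itertools
import numpy as np

def sample_subspace(d, t, k, rng):
    """Return the d^k codewords of a random k-dim subspace of F_d^t."""
    G = rng.integers(0, d, size=(k, t - k))
    gen = np.hstack([np.eye(k, dtype=int), G])    # standard form {1_k | G}
    words = []
    for coeffs in itertools.product(range(d), repeat=k):
        words.append(tuple((np.array(coeffs) @ gen) % d))
    return words

rng = np.random.default_rng(0)
L = sample_subspace(d=3, t=5, k=2, rng=rng)
print(len(L), "codewords, e.g.", L[:3])           # 3^2 = 9 codewords

Because the generator matrix {1_k | G} always has rank k, every draw yields a subspace of dimension exactly k, matching the construction in the text.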
VI. DISCUSSION

The motivation to study the qudit generalizations of stabilizer rank algorithms such as those in [7,8] is to enable comparison with other simulation algorithms. In [34], the authors apply Monte Carlo sampling on trajectories of the quasiprobability representation to estimate the probability of a measurement outcome. They find that the hardness of this strong simulation depends on the total negativity (the negativity of the inputs, gates and measurements) of the circuit. Specifically, the cost of the algorithm scales with the square of the total negativity. For Clifford+T circuits that are gadgetized so that the circuit is realized by Clifford gates with magic state injection, the negativity of the circuit comes only from the ancilla inputs of magic states. If we apply the method of [34] to the gadgetized circuit with an input of t qutrit magic states, the cost scales as 3^{0.84t}. This result is obtained by calculating the negativity of a single-qutrit magic state. In the present paper, we obtain a scaling of 3^{0.32t} for weak simulation of qutrit Clifford+T circuits. This shows that weak simulation using the approximate rank method has superior scaling to strong simulation using the method of [34]. A stabilizer rank based strong simulation algorithm for qudits would require new results on the exact stabilizer rank of qudit magic states, a topic for future work. Recent progress in extending the qubit case has been reported in [7], and improvements to Pashayan's algorithm using a discrete-systems generalization of the stationary phase approximation were given in [27]. It should be noted that one should not think of weak simulation as easy and strong simulation as hard. The difficulty of weak and strong simulation is a property of the distribution being sampled or computed. In some cases, such as quantum supremacy, we expect the difficulty of weak and strong simulation to coincide [5]. If we consider negativity and stabilizer rank as two measures of quantumness, we can see that they differ. Bravyi et al. [10] conjectured that the magic state has the smallest stabilizer rank out of the non-stabilizer states. However, the quasiprobability of the magic state has the largest negativity. In fact, Howard and Campbell also noticed this disagreement between stabilizer rank and robustness of magic [21]. It is worth noting the differences between stabilizer rank and approximate stabilizer rank. Namely, the approximate stabilizer rank seems to agree with other measures of quantumness such as negativity or robustness of magic, in that it reaches a maximum at the magic state and a minimum on stabilizer states. The exact stabilizer rank does not share these properties. This makes the investigation of the difference between exact and approximate stabilizer rank interesting.

Appendix A: The Qudit Clifford Group

We recall that d is an odd prime. In a d-dimensional system the Pauli operators X and Z are defined as in Section II, where ω = exp(2πi/d). These operators obey the Heisenberg-Weyl commutation relation ωXZ = ZX. In d dimensions the Weyl-Heisenberg displacement operators D_x are defined with x = (x, z) and τ = e^{(d+1)πi/d} = ω^{2̄}. The qubit Pauli operators are recovered from this expression for d = 2, with D_{(1,0)} = X, D_{(0,1)} = Z and D_{(1,1)} = −Y. The Heisenberg-Weyl operators form a group with a multiplication rule governed by the symplectic inner product x_1 · x_2 = z_1x_2 − x_1z_2. For d > 2 the Weyl-Heisenberg operators are unitary but not generally Hermitian.
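The displayed definition of the displacement operators did not survive extraction, so the following numpy sketch assumes the standard convention D_{(x,z)} = τ^{xz} X^x Z^z; under that assumption it verifies the group multiplication rule D_{x1} D_{x2} = τ^{x1·x2} D_{x1+x2}, with the symplectic inner product x_1 · x_2 = z_1x_2 − x_1z_2 quoted above.

import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)
tau = np.exp((d + 1) * np.pi * 1j / d)     # tau = omega^(inverse of 2 mod d)

X = np.roll(np.eye(d), 1, axis=0)          # X|j> = |j + 1 mod d>
Z = np.diag(omega ** np.arange(d))

def D(x, z):
    # assumed convention: D_(x,z) = tau^(x z) X^x Z^z
    return tau ** (x * z) * np.linalg.matrix_power(X, x) @ np.linalg.matrix_power(Z, z)

x1, z1, x2, z2 = 2, 3, 4, 1
lhs = D(x1, z1) @ D(x2, z2)
rhs = tau ** (z1 * x2 - x1 * z2) * D((x1 + x2) % d, (z1 + z2) % d)
assert np.allclose(lhs, rhs)               # product rule holds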
In the qubit case, the Clifford gates map Pauli operators to Pauli operators. In the qudit case, Clifford gates map Weyl-Heisenberg operators to one another. The generators of the Clifford group are defined so that the Hadamard gate maps X → Z and the phase gate maps X → XZ. The generators of the single-qubit Clifford group, and the d-dimensional Clifford operators that generate the qudit Clifford group, are as given in Section II. The single-qudit Clifford group is isomorphic to the semidirect product group of SL(2, Z_d) [33] and (Z_d)^2 [2,41]. We can represent the Clifford group using a 2 × 2 matrix F and a 2-vector χ, both with entries in Z_d. Specifically, a Clifford unitary is given by one explicit expression when β ≠ 0 and another when β = 0, with a corresponding multiplication rule. The action of the Clifford operators on the Heisenberg-Weyl operators in this representation can then be given directly. In particular, we are interested in Clifford operations defined by matrices of a particular form, and we introduce the notation C_{γ,χ} for χ = (x, z)^T. From Table I in Zhu [41], the order of any element C_{γ,χ} is d. Clearly X, P and Z have order d. For d = 2, H has order 2, and for d > 2, H has order 4. The F-matrix forms of the generators H and P follow from HXH† = Z and HZH† = X^{−1}. These expressions for H and P allow us to construct the F and χ for any single-qudit Clifford operation expressed as a word on the generators H and P.

Appendix B: Qudit Magic States and T Gates

To go beyond Clifford group computation it is useful to introduce the Clifford hierarchy, which classifies unitary operators by their action on the Pauli group. The Clifford hierarchy was defined by Gottesman and Chuang in [17]: C(k + 1) = {U | UPU† ∈ C(k) for all P ∈ P}, k ≥ 0. (B1) The first level of the Clifford hierarchy is the Pauli group, C(1) = P. The Clifford group is the second level of the hierarchy, the unitary operators that map the Pauli group to itself. Note that elements of the Pauli group are themselves elements of the first level of the Clifford hierarchy. The third level of the Clifford hierarchy consists of operators that map Pauli operators to Clifford operators. The qubit T gate is such an operator because TXT† = PHP^2H, a non-Pauli element of the second level of the Clifford hierarchy. Bravyi and Kitaev first proposed qubit magic states in [9]. They define magic states as the image of |H⟩ and |T⟩ under single-qubit Clifford gates, where |H⟩ is defined by eqn. (4) and |T⟩ by a corresponding expression with cos(2β) = 1/√3. |H⟩ is the eigenstate of the Hadamard gate H and |T⟩ is the eigenstate of the product of Hadamard and phase gates, PH. Any magic state is equivalent as a resource to any other state obtainable from it by a Clifford operation. We can define magic states more generally as the eigenstates of Clifford operations and obtain them as follows. Taking any H-type magic state |H⟩, we obtain a corresponding gate M; M is in the third but not the second level of the Clifford hierarchy. Amongst this set of gates is the canonical M_d gate, which is defined so that it maps the X operator to a Clifford operator proportional to XP; here 3̄ is the multiplicative inverse of 3 modulo d. This Clifford operator has order d. This condition, and the condition det M = 1, give the form of the λ_j (see Appendix A of [11]). The parameter m determines the order d^m of the operator M. For d = 3 the form is valid when m ≥ 2; for d > 3 it is valid when m ≥ 1. By definition M maps X, a generalized Pauli operator, to a non-Pauli Clifford operator and so is in the third, but not the second, level of the Clifford hierarchy.
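As a sanity check on the canonical gate, the following numpy snippet (ours) takes the d = 3 matrix M_3 = diag(e^{2πi/9}, 1, e^{−2πi/9}) from Table I and confirms that conjugating X by it yields a Clifford operator proportional to XP, i.e., that M_3 sits in the third level of the hierarchy; the proportionality phase the code finds is e^{−2πi/9}.

import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
xi = np.exp(2j * np.pi / 9)

X = np.roll(np.eye(d), 1, axis=0)                          # X|j> = |j+1 mod 3>
P = np.diag([omega ** (j * (j - 1) // 2) for j in range(d)])  # P|j> = w^{j(j-1)/2}|j>
M = np.diag([xi, 1.0, xi ** -1])                           # M_3 from Table I

img = M @ X @ M.conj().T      # image of X under conjugation by M
# proportional to XP, with global phase xi^{-1} = e^{-2 pi i / 9}
assert np.allclose(img, xi ** -1 * X @ P)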
We can therefore think of M as a generalized T gate. From the definition of the matrix M in (B4), we have for d = 3 and m = 2 the matrix M_3 = diag(e^{2πi/9}, 1, e^{−2πi/9}), and for d = 5 and m = 1 the matrix M_5 = diag(ω^{−2}, ω, ω^{−1}, ω^{−2}, ω^{−1}), where ω = e^{2πi/5}. The qudit version of the T gate, M, is further generalized in [22], which we will discuss below. The T gate is also sometimes called the π/8 gate. Vala and Howard developed the qudit versions of this gate concurrently with Campbell et al.'s development of qudit magic states [11,22]. The results are equivalent, and we give the details of the relationship between them here. Vala and Howard parameterize the set of diagonal gates on a single qudit as in (B10). All diagonal gates fix D_{(0,1)}, and so their action is completely determined by their effect on D_{(1,0)}. This parallels the development of Campbell et al., who considered the action of their canonical gate M on the operator X and insisted that the result of that action was ∝ XP. Vala and Howard proceed more generally, computing the action of these diagonal matrices. Vala and Howard then consider the case that U_v is in the third level of the Clifford hierarchy, so that the image of X can be written as in eq. (18) of [22], where ε′, γ′, z′ ∈ Z_d. The right hand side there is the most general form allowed, because eqn. (B11) implies that the image of X must be X times a diagonal Clifford operator, and the most general form of a diagonal Clifford operator has χ = (0, 1), β = 0 and α = 1. Combining equations (B11) and (B12), one obtains the relation of eq. (19) in [22]. Vala and Howard then solve for U_v with these three parameters. This analysis is equivalent to that performed in Campbell et al. [11], Appendix A. The d = 3 case as usual presents some special difficulties. In the Campbell analysis one must choose m = 2 for λ, as there are no Clifford operators with m = 1, d = 3 [11]. The set of operators U_v for d = 3 is given by an expression with ξ = e^{2πi/9}; the v_k are given by a formula in which all operations can be taken modulo 9. The determinant of U_v for d = 3 can be computed from this definition, showing that U_v is not in SU(3) for d = 3. We can relate the diagonal operators U_v defined by Vala and Howard and the operators M defined by Campbell et al. as follows. Writing both gates out explicitly (B17), we wish to compare their phases. These are both cubic in k, so we can find the particular U_v that corresponds to M by equating the coefficients. We begin by setting k = 0 to find the constant term; we immediately conclude that U_v and M will only be equivalent up to a global phase determined by this convention. Equating the cubic terms yields γ′ = 1. Equating the quadratic terms gives z′ = (d − 1)/2. Finally, equating the linear terms fixes ε′. We may therefore relate U_v(z′, γ′, ε′) and M for arbitrary d > 3. The first two cases of this equivalence are for d = 5 and d = 7 and, up to a global phase, are as given in equations (70) and (71) of [22]. The case of d = 3 is distinct (the inverse of 12 does not exist modulo 3), but from the definition of U_v for d = 3 given in eqns. (B15) and (B16) we have the corresponding relation; this is, up to a global phase, as given in eqn. (69) of [22].

Qudit magic states. The gates M also allow us to find eigenstates of C_M as follows. Define the state |M_k⟩ = M|+_k⟩, where |+_k⟩ is the eigenstate of X with eigenvalue ω^k. We can calculate the effect of conjugation directly. Given eq. (B12), Vala and Howard recovered the definition of the magic states of Campbell and showed that these magic states U_v|+⟩ are eigenstates of C_{γ′,(1,z′)^T} with eigenvalue ω^{−ε′}.

Appendix C: Strong Simulation for Qubits

We review here the strong simulation algorithm given by Bravyi and Gosset in [8].
Let t be the number of T gates in the n-qubit quantum circuit we wish to classically simulate. The first step is to replace every T gate in the circuit by Clifford gates and an ancilla input of a magic state |A⟩ = 2^{−1/2}(|0⟩ + e^{iπ/4}|1⟩), defined in [9]. This is accomplished using the gadget shown in Figure 1 [40]. The number of ancilla qubits is t. We consider an initial state |0⟩^{⊗n} for the Clifford+T circuit and |0⟩^{⊗n} ⊗ |A⟩^{⊗t} for the gadgetized circuit. At the end of the computation we will measure w of the n qubits in the logical basis. This measurement with outcome x (where x is a bitstring of length w), postselected to the case where all ancilla measurements have result 0, is represented by a projector Π(x) = |x⟩⟨x| ⊗ 1 ⊗ |0^t⟩⟨0^t|. The strong simulation algorithm classically computes the probability of this measurement outcome after acting with a Clifford circuit V, which is our original (non-Clifford) circuit with all T-gates replaced by the gadget of Figure 1. Therefore we can express the probability of obtaining output x as P(x) = 2^t ⟨0^n ⊗ A^{⊗t}| V† Π(x) V |0^n ⊗ A^{⊗t}⟩ (C2). The factor of 2^t here compensates for the fact that we postselected on the measurement outcomes of the t ancilla qubits. We define a t-qubit projection operator Π_G = ⟨0^n| V† Π V |0^n⟩. This projector maps states onto a stabilizer subspace. Then eq. (C2) becomes an expression of the form 2^u ⟨A^{⊗t}| Π_G |A^{⊗t}⟩, where u is an integer that depends on the number of qubits we are measuring out of n and the dimension of the stabilizer subspace Π_G is mapping onto. If we can expand |A⟩^{⊗t} into a sum of stabilizer states, then we can express P(x) as a sum of inner products of t-qubit stabilizer states, which can be computed in O(t^3) time [1,8,10,15]. The fewer stabilizer states in the expansion of |A⟩^{⊗t}, the more efficient the algorithm is. Stabilizer rank is defined as the minimal number of stabilizer states needed to write a pure state as a linear combination of stabilizer states. The value of χ(t) is trivially upper bounded by 2^t, because logical basis states are stabilizer states, and χ(t) is also believed to be lower bounded by an exponential in t. For practical purposes we can achieve progress through a series of constructive upper bounds. In [10], Bravyi et al. found a stabilizer rank upper bound by obtaining χ_A(6) ≤ 7 for |A⟩^{⊗6} and dividing the t-qubit state into a product of 6-qubit states. Therefore, χ_A(t) has an upper bound 7^{t/6} ≈ 2^{0.47t}. If we denote the stabilizer rank for the tensor product of t single-qubit magic states |A⟩^{⊗t} as χ_A(t), the cost of classically computing P(x) by taking inner products as described above is O(t^3 χ_A(t)^2). The quadratic dependence on stabilizer rank can be improved by a Monte Carlo method, developed by Bravyi and Gosset, to approximate the norm of a tensor product of magic states projected on a stabilizer subspace, therefore enabling one to calculate P(x) with cost O(t^3 χ_A(t)), linear in the stabilizer rank. This concludes our summary of the strong simulation algorithm of Bravyi and Gosset.

The magnitude of this expression can be determined from the sum, which is real. While this shows that the sum is real, it does not guarantee that it is positive, and hence the phase of the inner product, up to a sign, is obtained by completing the square, where 2̄ and f̄ are defined by 2·2̄ ≡ 1 mod d and f·f̄ ≡ 1 mod d. The value of this Gauss sum is well known (the standard evaluation is restated just below for reference), where (2f̄/d) is the Legendre symbol. Hence the new coefficients −f̄ and −2̄((2g + 1)f̄ − 1) here are still in Z_d.
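For reference, since the displayed evaluations were lost in extraction, we restate the standard quadratic Gauss sum over Z_d being invoked here (and again in Appendices F and G). This is a textbook number-theoretic fact, written in our notation rather than necessarily the paper's:

$$
\sum_{k=0}^{d-1} \omega^{f k^{2} + g k}
  = \omega^{-\,\overline{4f}\, g^{2}}\,\left(\frac{f}{d}\right)\varepsilon_{d}\sqrt{d},
\qquad
\varepsilon_{d} =
\begin{cases}
1, & d \equiv 1 \ (\mathrm{mod}\ 4),\\
i, & d \equiv 3 \ (\mathrm{mod}\ 4),
\end{cases}
$$

for an odd prime $d$ and $f \not\equiv 0 \pmod d$, where $\left(\frac{f}{d}\right)$ is the Legendre symbol and $\overline{4f}$ is the multiplicative inverse of $4f$ modulo $d$. In particular the magnitude is always $\sqrt{d}$, and the linear term $g$ contributes only the phase $\omega^{-\overline{4f}\,g^{2}}$.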
This means that the general form Σ_k ω^{f(k−1)k/2 + gk}|k⟩ of single-qudit stabilizer states is preserved under the action of any Clifford operations. For multi-qudit states, we have the same affine space property as in the qubit case, except that the additions are modulo d. Before we give the proof, we need to show that the quadratic forms given in terms of the basis vector u of the affine space and in terms of the qudit vector x itself are equivalent: changing the argument only changes the coefficients of the quadratic form. Given eq. (F4), we further denote the quadratic and linear matrices in terms of x by Q̃ and L̃. From this equation, we can see the relationship between Q, L and Q̃, L̃: Q = G^T Q̃ G and L = 2h^T Q̃ G + L̃G. Now we use Van den Nest's method [31] to prove that the canonical form (F2) is preserved under the action of CSUM, P and H. The CSUM_{i→j} gate shifts the affine space by mapping |a⟩|b⟩ to |a⟩|a ⊕ b⟩, without changing the phases. As in the qubit case, we only need to add the ith column of the matrix G to the jth column. Acting with P on qudit i results in a state which again has the canonical form. The Hadamard gate requires some work. Without loss of generality, we assume that H acts on the first qudit, and write ḡ_1^T for the first row of G and Ḡ for the rest of it. If Ḡ is still full rank after taking out ḡ_1^T, we obtain the new G directly. Therefore we have m + 1 basis vectors now, and v becomes the new u_1. The term v(ḡ_1 u + t_1) in the phase can be absorbed into the quadratic form q_n(u), so this is of the canonical form (F2). If Ḡ has rank m − 1 after taking out ḡ_1^T, then the columns of Ḡ are not linearly independent. In this case one of the u_i is redundant and we want it to be summed out in order to get back to the canonical form. Without loss of generality, let us assume that u_1 = Σ_{i=2}^m r_i ḡ_i, and therefore Ḡu + h̄ = Σ_{i=2}^m (u_i + r_i)ḡ_i + h̄. If we denote u_i′ ≡ u_i + r_i for i = 2 to m (collectively ū′) and u_1 ≡ v, then q_n and q_d can be written in terms of ū′ with different coefficients, say q_n′(ū′) and q_d′(ū′), together with some constant factor which can be neglected. Then eq. (F14) becomes an expression in which the parenthesis contains the Gauss sum we computed earlier. We can then drop the primes on the u's and absorb the result of the Gauss sum and u_1(Σ_{i=2}^m g_{1i}(u_i − r_i) + h̄_1) into the q_n and q_d functions. Finally we arrive at the same form but with different coefficients. Hence, the canonical form is preserved under the action of all Clifford gates. We now use this canonical form and the Gauss sum techniques to provide an O(n^3) algorithm for the computation of the inner products of two qudit stabilizer states.

The inner product of two qudit stabilizer states. The inner product between two qubit stabilizer states can be computed efficiently in O(n^3) time [1,8,10,15]. However, a corresponding algorithm for qudits has not yet been given, although most aspects of the theory of stabilizer states have been generalized [16,19]. We will now describe an O(n^3) algorithm that computes the inner product of two qudit stabilizer states based on the Gauss sum techniques we discussed in the previous section. As discussed above, the quadratic forms in terms of the basis vector u of the affine space and the qudit vector x itself are equivalent. Therefore eq. (F2) is equivalent to the form of eq. (F17), where A is the affine space defined by Gu + h in eq. (F3).
Assume we have two qudit stabilizer states |ψ_1⟩ and |ψ_2⟩ which take the above form (F17) with subindices 1 and 2. Their inner product reduces to an expression where q̃_1 = q̃_{1d} + q̃_{1n}, q̃_2 = q̃_{2d} + q̃_{2n}, q̃ = q̃_1 − q̃_2, k is the dimension of A_1 ∩ A_2, and q is the quadratic form in the new basis of A_1 ∩ A_2. The new basis of the affine space A_1 ∩ A_2, as well as the new quadratic form associated with it, can be calculated with the same method used by Bravyi and Gosset in Appendices B and C for qubits [8], with cost O(n^3). What remains in eq. (F18) is a Gauss sum, which we again rewrite in a form whose exponent is given by eq. (F4). We can diagonalize Q and factor this sum into a product of k Gauss sums over F_d. We obtain a transformation matrix P that diagonalizes Q, where Λ is the diagonal matrix with entries (λ_1, ..., λ_k). Then if we further define u = Pu′, we obtain a factorized sum with l_i′ = Σ_j p_{ji} l_j. This is a product of k Gauss sums, as given in eqs. (F8), (F9) and (F10). Each Gauss sum takes only O(1) time, so the product of k of them takes time O(k). The scaling of this algorithm is determined by the complexity of the Gaussian elimination, O(k^3), because Q has rank k. Therefore, together with the first step to obtain A_1 ∩ A_2, the algorithm takes O(n^3) time overall in the worst case.

Appendix G

Here |x| is the Hamming weight of codeword x in code L, i.e., the number of nonzero elements in the codeword, and |x|_j denotes the number of digits in string x that equal j. If we regard L as a linear code, then in the qubit case Z(L) is exactly the weight enumerator of the code. In the qudit case, Z(L) depends on the Hamming weight as well as the β_j. Now let us calculate an explicit expression for the β_j. For the d = 3 case, we specifically obtain β_1 = e^{πi/18} and β_2 = e^{−πi/18}. For the d > 3 case, we assume our initial stabilizer state is |0̃⟩ = Z^p|+⟩, and note that the C for Campbell's choice of |M_d⟩ is simply ω^{−3̄} XP according to eq. (3) and subsection C of Section IV. We can calculate (XP)^j as

$$(XP)^j = \sum_k \omega^{\sum_{l=0}^{j-1}\binom{k+l}{2}}\,|k+j\rangle\langle k| = \omega^{\bar{6}(j^3-3j^2+2j)}\sum_k \omega^{\bar{2}(jk^2+(j^2-2j)k)}\,|k+j\rangle\langle k|. \quad (G4)$$

Therefore we can rewrite C^j as

$$C^j = \omega^{-\bar{3}j}(XP)^j = \omega^{\bar{6}(j^3-3j^2)}\sum_k \omega^{\bar{2}(jk^2+(j^2-2j)k)}\,|k+j\rangle\langle k|. \quad (G5)$$

Then we can calculate β_j, which is a quadratic Gauss sum times a phase. Using eqs. (F8) and (F9) for f = 2̄j and g = 2̄(j^2 − 2j), we obtain

$$\sum_k \omega^{\bar{2}(jk^2+(j^2-2j)k)} = \omega^{-\bar{2}^3 j(j-2)^2}\left(\frac{\bar{2}j}{d}\right)\sqrt{d} \ \ \text{for } d \equiv 1\ (\mathrm{mod}\ 4), \quad \omega^{-\bar{2}^3 j(j-2)^2}\, i\left(\frac{\bar{2}j}{d}\right)\sqrt{d} \ \ \text{for } d \equiv 3\ (\mathrm{mod}\ 4). \quad (G7)$$

The final expression for β_j in terms of p then follows, where again (2̄j/d) is the Legendre symbol.
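A brute-force check of the quadratic Gauss sum evaluation used throughout Appendices F and G can be done in a few lines of Python (ours; the function names are our own). It also exercises the Legendre symbols and modular inverses that appear in the formulas above.

import numpy as np

def legendre(a, d):
    r = pow(a % d, (d - 1) // 2, d)        # Euler's criterion
    return -1 if r == d - 1 else r

def gauss_sum_direct(f, g, d):
    omega = np.exp(2j * np.pi / d)
    return sum(omega ** ((f * k * k + g * k) % d) for k in range(d))

def gauss_sum_closed(f, g, d):
    omega = np.exp(2j * np.pi / d)
    eps = 1 if d % 4 == 1 else 1j
    inv4f = pow(4 * f % d, d - 2, d)       # inverse of 4f mod d, via Fermat
    return omega ** (-(g * g * inv4f) % d) * legendre(f, d) * eps * np.sqrt(d)

for d in (3, 5, 7, 11):
    for f in range(1, d):
        for g in range(d):
            assert np.isclose(gauss_sum_direct(f, g, d), gauss_sum_closed(f, g, d))
print("closed form matches direct evaluation")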
2018-12-18T17:40:03.000Z
2018-08-07T00:00:00.000
{ "year": 2018, "sha1": "711801a2ea3c0624efd1bfd1813b10430d14630a", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.99.052307", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "711801a2ea3c0624efd1bfd1813b10430d14630a", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
233970050
pes2o/s2orc
v3-fos-license
Decoding the Cauzin Softstrip: a case study in extracting information from old media

Having content in an archive is of limited value if it cannot be read and used. As a case study of extricating information from obsolete media, making it readable once again through deep learning techniques, we examine the Cauzin Softstrip: one of the first two-dimensional bar codes, released in 1985 by Cauzin Systems, which could be used for encoding all manner of digital data. Softstrips occupy a curious middle ground, as they were both physical and digital. The bar codes were printed on paper, and in that sense are no different in an archival way than any printed material. Softstrips can be found in old computer magazines, computer books, and booklets of software Cauzin produced. However, managing the digital nature of these physical artifacts falls within the scope of digital curation. To make the information on them readable and useful, the digital information needs to be extracted, which originally would have occurred using a physical Cauzin Softstrip reader. Obtaining a working Softstrip reader is already extremely difficult and will most likely be impossible in the coming years. In order to extract the encoded data, we created a digital Softstrip reader, making Softstrip data accessible without needing a physical reader. Our decoding strategy is able to decode over 91% of the 1229 Softstrips in our Softstrip corpus; this rises to 99% if we only consider Softstrip images produced under controlled conditions. Furthermore, we later acquired another set of 117 Softstrips and we were able to decode nearly 95% of them with no adjustments to the decoder. These excellent results underscore the fact that technology like deep learning is readily accessible to non-experts; we obtained these results using a convolutional neural network, even though neither of the authors are expert in the area.

Introduction

Home computers in the late 1970s and early 1980s were becoming more affordable and targeting the mass market instead of computer enthusiasts, but still required a lot of technological knowledge. For instance, computer owners' manuals typically included information about BASIC programming, teaching users how to program their own software (e.g., Apple Computer 1979). It was also common for computer magazines to contain software in the form of source code: type-in programs. In order to use the programs, people had to manually enter the code, and even a single mistake could result in an error. To make matters worse, storage devices were usually not part of a computer and had to be purchased separately. Cassette tape decks were one option, albeit slow, and floppy disk drives were available but costly. A potential solution came in the form of bar codes. Two of the most widely recognized bar codes today are the Universal Product Code (UPC) for identifying products and the Quick Response (QR) code which is used in many different areas such as manufacturing, health care, and marketing (Denso 2012). By contrast, the Cauzin Softstrip shown in Fig. 1 is an almost-forgotten relic, a two-dimensional bar code format released in 1985 by Cauzin Systems (Sandberg-Diment 1985). All kinds of digital data, such as graphics, software or text files, could be encoded as a Cauzin Softstrip and then printed on paper.
Cauzin's optical reader, the Softstrip System Reader, sold for approximately 200 USD (Baskin 1987; Johnson 1986), with Cauzin's encoding software, dubbed the "Stripper", retailing at under 30 USD (Baskin 1987; Cauzin Systems, Inc n.d.a). It was compatible with the IBM PC, Apple II, and Macintosh, and in fact Softstrips could be used to transfer data between these different platforms (Cauzin Systems, Inc n.d.b). Softstrips appeared in magazines, were sold in stores, and appeared in at least one book. The format provided an inexpensive software distribution medium for publishers. Although MacUser magazine proclaimed the Cauzin Softstrip the most innovative concept of 1986 (MacUser 1987) and Cauzin Systems had plans to use the Cauzin Softstrip on cards such as credit or calling cards (Glaberson and Santulli 1989), the technology was not as successful as anticipated and eventually disappeared a few years after its release in 1985. Floppy disks became cheaper and more widespread, and programs became larger; the Softstrip is impractical for larger data, with a single Softstrip only able to store 5500 bytes (Cauzin Systems, Inc 1986a; Cauzin Systems, Inc n.d.a; Johnson 1986). By contrast, an Apple II floppy disk from that time would hold over 25 times as much data. Larger data items could be split across multiple Softstrips, but this increased scanning time (already 30 s for a full strip) and required the user to adjust the Softstrip Reader again for each Softstrip (Johnson 1986). It is already difficult to find a Cauzin Softstrip Reader, working or otherwise. To underscore this point, we were able to acquire one on eBay only after this work was complete, and even then the software was missing. Further, the mechanical nature of a Cauzin reader suggests that long-term functionality is not guaranteed. The only other option to decode the Softstrips without access to a Cauzin Softstrip Reader is to do it by hand, a time-consuming and error-prone task. By way of illustration, a Softstrip had to be decoded manually for this work to locate a decoding error, a process that took about eight hours until the decoding was successful. Use of a Softstrip Reader, of course, is predicated on the assumption that an appropriate "accessory kit" is found to interface the Reader to one of the supported host computers (for instance, either a serial port or a cassette port was used to connect to the Apple II, depending on the computer model), that a decades-old host computer is available and functioning, and that a mechanism for exporting data off the host computer exists. This is the situation we found ourselves in, one doubtless familiar to memory institutions and software preservationists, where we had data locked in an obsolete format: we had physical Softstrips with data on them but no physical reader. How could we access this data and allow it to be assessed, researched, and ultimately preserved? Instead of compounding our obsolescence problems by trying to find a physical reader and cajole it back into life, we applied modern technology. We created a digital optical reader using deep learning, demonstrating in the process that deep learning is a technique within reach of non-experts.

Anatomy of a Softstrip

This section explains the structure of the Cauzin Softstrip and how digital data is encoded within it. The Cauzin Systems patents (Brass et al. 1987, 1988a) were particularly helpful for understanding the details of the Softstrip format, and the information here is drawn from them unless stated otherwise.
Basics

A Softstrip is 5/8 inches wide and up to 10 inches long (Brass et al. 1987; Johnson 1986), with two positioning marks for the Softstrip reader: a circle in the upper left and a rectangle in the lower left (Fig. 1). Figure 2 shows how the example strip looks when aligned in the reader. Each Softstrip is divided into three sections (Fig. 3). The first part is the horizontal synchronization section, the second is the vertical synchronization section, and the last part contains the strip's data. The two synchronization sections are collectively referred to as the header and contain encoded metadata about the Softstrip itself. The Cauzin Softstrip uses two adjacent squares, called a dibit, for encoding a single data bit. A zero data bit is encoded by a black square followed by a white square, whereas a one data bit is encoded by a white square followed by a black square; other combinations are invalid (Fig. 4). The start of the Softstrip is indicated by a one-dibit-wide black bar on the left side (Fig. 3a), followed by one white square. Both the checkerboard and rack (Fig. 3b, c respectively) change each row and are used to determine the start and end of a row. Data are located in between the checkerboard and rack on each row. After the checkerboard comes the first parity dibit. Parity refers to a method of detecting (some) bitwise data errors, at the cost of using an additional bit (or, in this case, dibit). The first parity dibit is for detecting errors in the following odd-numbered data dibits; a second parity dibit for even-numbered dibits appears after the data dibits in the row, just prior to one or two white squares and the rack.

Softstrip header

The first part of each Softstrip is the horizontal synchronization section. It includes the number of four-bit groups (nibbles) per row and is used to align the optical reader for scanning the strip. (A physical reader also uses this section to determine the contrast between paper and ink color, but that is not required for our work.) The horizontal synchronization section is followed by the vertical synchronization section, where the height of the dibits is encoded and repeated multiple times per row. The vertical synchronization concludes with three zero bytes that indicate the start of the data.

Data section

A file header with metadata is encoded first; it contains information about the encoded file. There are provisions for multiple files to be encoded in one strip, but we did not find any instances of that occurring. Full details about the file header can be found elsewhere (Cauzin Systems, Inc 1986b), but suffice it to say that the file header contains fairly typical file metadata: file name, length, type. Crucially for our purposes, the file header also contains a strip checksum, which is a strong(er) means of detecting data errors than parity alone.
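The dibit and parity scheme just described can be summarized in a few lines of Python. This is our simplified sketch, not Cauzin's specification: the framing elements (start bar, checkerboard, rack) are omitted, and the choice of even parity is our assumption, since the patents' parity polarity is not reproduced here.

B, W = 0, 1   # black and white squares

def to_dibit(bit):
    # 0 -> (black, white); 1 -> (white, black); other pairs are invalid
    return (B, W) if bit == 0 else (W, B)

def encode_row(bits):
    odd_parity = sum(bits[0::2]) % 2    # covers data dibits 1, 3, 5, ...
    even_parity = sum(bits[1::2]) % 2   # covers data dibits 2, 4, 6, ...
    squares = list(to_dibit(odd_parity))      # first parity dibit
    for b in bits:
        squares.extend(to_dibit(b))           # data dibits
    squares.extend(to_dibit(even_parity))     # second parity dibit
    return squares

print(encode_row([1, 0, 1, 1]))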
Corpus

The Softstrips we used were from five different sources and provided a wide spectrum of encoding densities and image quality.

Digitally generated Softstrips (Corpus1). A Softstrip creation tool (Osborn 2016b) was used to generate Softstrips. Here the selection of input data could be arbitrary; we chose to use an icon collection, where each approximately 25 × 25 pixel icon was encoded into a single Softstrip with a typical image resolution of 390 × 2500 pixels. This large dataset provided a baseline with optimal conditions to decode. Unfortunately, all of the following non-artificial datasets exhibit real-world problems: damaged dibits, missing dibit parts, white noise on black areas, damaged racks, and smeared printer ink. Figure 5 shows visual artifact samples from the best, professionally printed data sources below.

Animated Algorithms (Corpus2). The book Animated Algorithms (1986) is one of the few books which used Softstrips. Its Softstrips were scanned at 300 DPI and have an image resolution of about 360 × 4200 pixels.

Cauzin Softstrip application notes and marketing material (Corpus3). This collection was found on the Internet Archive (Cauzin Systems, Inc 1986c) with no DPI information; we extracted images from the PDFs. Many Softstrips in this collection contain some blurred areas.

Magazine collection (Corpus4). This collection consists of scans found online (Apple II Scans n.d.) from various computer magazines and was processed similarly to Corpus3. Corpus4 contains by far the worst quality Softstrips, including some that would be difficult for humans to decode.

StripWare (Corpus5). Computer programs in the form of Cauzin Softstrips were sold in computer stores as booklets called StripWare. Corpus5 comprises a number of StripWare artifacts from eight booklets in very good condition (some still sealed in plastic). These Softstrips were scanned at 1200 DPI, with resolution ranging from about 750 × 6000 pixels up to 800 × 9600 pixels.

Method and results

Overall, we tried four different methods of decoding Softstrips, beginning with ones that did not involve deep learning; full details are in Reimsbach (2018). The method described here is the second-best one, but it used a simpler convolutional neural network (CNN) that is less prone to overfitting, and it decoded only three fewer strips than a more complex CNN.

Method

A single strip is selected using the GIMP image processing software and run through a three-step process: header processing, row extraction, and row decoding. Their description is followed by a discussion of the methods we implemented to improve the decoding of damaged strips.

Header processing. The horizontal synchronization section is the first part of a Softstrip and, in order to decode it, the white-to-black transitions need to be counted. Because a damaged horizontal synchronization section could have more white-to-black transitions than actually exist (Fig. 6), we first apply image noise reduction. There is no guarantee that the noise reduction removes all the noise, however. Therefore, we use a heuristic distance measure to look for a group of self-similar pixel lines, such as the ones highlighted by the red rectangle in Fig. 6, and consider that a noise-free area. After an area with no noise is extracted, the decoder is then able to compute the number of nibbles per strip row. A strip's structure changes drastically from the horizontal to the vertical synchronization section, and this change is reflected in the above features. Therefore, the distance measure is used again to locate the boundary between those sections.

Row extraction. A Softstrip image is divided into multiple segments, each 70 pixel lines, which are processed separately. The segmentation is to compensate for distortions in the scanned image. To process each segment, the boundaries of the black checkerboard and the last rack square are first determined for each pixel line in the segment.
After all pixel lines are scanned, the collected locations are filtered: every checkerboard location must be within the checkerboard window, which we make slightly larger than the actual checkerboard to help compensate for cases where the checkerboard is slightly shifted. In addition, every checkerboard and rack location must occur at least 6 times in the segment, in order to filter out noisy pixel lines. Once the boundaries are known, the checkerboard and rack pattern can be determined for each pixel line, using the color of the center pixel of the left checkerboard and last rack square. After all segments are processed, adjacent pixel lines with the same valid pattern are grouped together into rows. There is no minimum number of pixel lines required to form a row, because in some cases only a single pixel line can be found in a row.

Row decoding. We used the Keras Python library along with a TensorFlow back end to develop a CNN-based row decoder. It was trained on data from five Corpus2 Softstrips whose rows we had decoded algorithmically (Reimsbach 2018): 17,094 1-dibit samples and 26,596 0-dibit samples, where 75% of the samples were used for training and 25% for testing. Almost 100% accuracy was reached after one forward/backward training pass (epoch). Input to the CNN is a 20 × 20 pixel grayscale image of a dibit. Recall that the number of nibbles per row is known from the strip's header; the row image to decode is divided by this value to get the average dibit size. If necessary, the row's dibits are resized to make each one 20 × 20 pixels. The CNN itself has six layers.

Handling damaged strips. We automatically apply three methods to attempt decoding of otherwise undecodable rows where parity checks have failed. As these are local improvements to make a row parity check succeed, all successful combinations must be stored and considered later during the checksum test. Row splitting repeatedly partitions the row image horizontally in an effort to handle dibits whose damage falls in either the upper or lower areas of a partition. In the worst case, only a single pixel line in the entire row will lead to a correct result. Row shifting addresses the case where the start bar is damaged and the row image was incorrectly extracted as a result. Here, we try shifting the row 1-2 pixels to the left or right. Finally, low-confidence flipping uses the confidence value output by the row-decoding CNN and tries flipping the dibits that were classified with low confidence (< 90%).
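For readers who want a starting point, here is a minimal Keras model in the spirit of the row decoder described above. The paper specifies a six-layer CNN over 20 × 20 grayscale dibit images with TensorFlow as the back end, but the exact layer configuration below is our assumption, not the published architecture.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(20, 20, 1)),          # one 20x20 grayscale dibit crop
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # confidence that the dibit is a 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training uses labeled dibit crops; a single epoch sufficed in the paper:
# model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))

The sigmoid output doubles as the per-dibit confidence value used by the low-confidence flipping repair described above.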
Results
Our method is able to successfully decode slightly over 91% of the corpora. This rises to over 99% if we exclude Corpus3 and Corpus4, and only consider images produced under controlled conditions. Obviously the generated strips of Corpus1 are a dominant factor, and if we exclude those the successful extraction drops to around 71%. However, this climbs again to almost 94% if we consider non-generated strips whose scanning we controlled. Overall, our experience with this method on different corpora is that higher resolution strips gave a clear advantage when decoding, and even minor rotations of 0.01° during strip selection could lead to decoding failures. We performed a baseline validation of our decoder using Corpus1. Since the exact inputs were known, we decoded all the strips in Corpus1 and compared the results to the original input files; all matched exactly.

Beyond that, we have defined successful decoding of a strip using the objective metric of having all of a strip's parity bits and its checksum match, the two Softstrip mechanisms designed for this purpose. These mechanisms allegedly gave 'an undetected bit error rate of less than one bit error per 10,000,000,000 bits' (Cauzin Systems, Inc (n.d.a)). Marketing claims notwithstanding, these are not strong tests in modern terms, and it is indeed possible for a strip to contain a combination of errors that causes the parity checks and checksum to succeed, or for there to be different readings that succeed under these criteria. We observe, though, that even in the case of errors it may be easy to reconstruct the intended data. For example, one Softstrip contained a text file whose checksum failed, producing the corrupted line 'There is no$warranty.' The correction here is obvious, and we argue that similar repairs of localized errors would be possible even with BASIC programs or binary code given semantic knowledge that is out of the scope of strip decoding. But using semantic knowledge is no different a situation than might be faced when interpreting writing on damaged, traditional physical artifacts; the important point is that our reader is able to extract strip content so that it may be seen and interpreted. While it would be possible to attempt automated correction of failed data reads using, for instance, a trained text or code corpus, we would argue that it is best left as a separate process to be applied by the user of the data. While some types of Softstrip data are relatively robust, such as English text, computer code is not. A minor mistaken correction in code might easily yield nonfunctional code, or worse, code that apparently does function but produces incorrect results. Data correction is a task we feel is best left for a domain expert; the important part is to identify when such intervention may be necessary.
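Schematically, the accept/reject gate described above layers the per-row parity checks under a strip-level checksum. The sketch below shows only that two-level structure; the even-parity and modulo-256 placeholders are ours, since the actual Softstrip parity and checksum algorithms are defined in Cauzin's documentation rather than reproduced here.

```python
def row_parity_ok(bits):
    """Placeholder row test: accept a row whose bits have even parity."""
    return sum(bits) % 2 == 0

def strip_checksum_ok(data_bytes, expected):
    """Placeholder strip test: a simple modulo-256 sum of the decoded bytes."""
    return sum(data_bytes) % 256 == expected

def strip_ok(rows, data_bytes, expected_checksum):
    """A strip is accepted only if every row passes parity AND the checksum matches."""
    return (all(row_parity_ok(r) for r in rows)
            and strip_checksum_ok(data_bytes, expected_checksum))
```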
Related work
One of the earliest bar codes for encoding software was the Paperbyte in 1977 (Budnick 1977), a 1-D barcode that was used to encode programs in Paperbyte books and Byte magazines. In 1983, the OSCAR 1-D bar code was released for encoding software in the Databar magazine, but only a single issue of it was ever published (Savetz et al. 2017). A modern OSCAR barcode reader is available (Teuwen 2016). However, this is a more straightforward decoding problem: OSCAR was a 1-D format, and the number of published OSCAR bar codes is very limited. We needed to handle many more strips and their visual artifacts. Chris Osborn, the author of the Softstrip generator we used for Corpus1, used it to pose a challenge (Osborn 2016a): decode a generated Softstrip that contained the private key to a Bitcoin wallet. The winner did write a Softstrip reader (Sowerbutts 2016), but one that required a number of inputs from the user, only worked for strips under ideal conditions, and only needed to be able to read one particular Softstrip. More broadly, there is prior work in cultural heritage on reading old media formats (Chenot et al. 2018) and work using CNNs (e.g., Can et al. 2018). Our work here adds to that body of knowledge. Also on point is preservation-focused archival work dealing, for example, with handling content on floppy disks (Durno and Trofimchuk 2015; Levi 2011), or extracting data from magnetic tapes (van der Knijff 2019). Many of the concerns echo our experience, with hardware and software either well past its best-before date, or missing entirely. We also note McNally's advocacy for do-it-yourself solutions, albeit with a focus on born-digital content (McNally 2018); we feel our work is one answer to that call, for a medium that straddles both the physical and digital. Parallels between our work and optical character recognition (OCR) can definitely be drawn. Indeed, a recent OCR survey observed that OCR work of late has been trending toward deep learning (Memon et al. 2020). The input for OCR is intended(!) to be legible to humans, of course, whereas the dibits of finer-resolution Softstrips are not easily discernible to the naked eye. Having said that, OCR must deal with an enormous variety of fonts and letter shapes well beyond the relatively fixed format of Softstrips; our point is that using deep learning to address this and similar tasks is now within the reach of non-specialists. There are certainly specific aspects of OCR, such as noise reduction, that are applicable in contexts like ours.

Discussion and conclusion
What we have demonstrated with our digital reader for Cauzin Softstrips is that it is tractable to develop bespoke software solutions for obsolete media formats. We stress that neither of the authors is an expert in neural networks, and yet we have seen excellent results simply by using freely available Python libraries in a straightforward way. And, while Softstrips are arguably a niche, esoteric data format, in the bigger picture they and our digital reader act as a vehicle for more generally applicable lessons. One important idea is how we decomposed the problem, abstracting away unnecessary functionality as well as breaking down the task until we were left with a well-defined classification problem. In our case, multiple Softstrips often appear on the same page, and instead of trying to automatically identify and process them all, we left the job of identifying single Softstrips to decode to a human. Similarly, we had initially considered having our software locate Softstrips based on the alignment marks that the real physical reader would use, but eschewing that notion in favor of a human selecting the image region to process also simplified the software required. In other words, there is a "sweet spot" in design where, depending on the anticipated scale, it is quicker and easier not to automate things. We can see the decomposition playing out further in our eventual decoding method, where parts of the Softstrip were tackled incrementally until we were left with the comparatively constrained task of classifying/reading dibits, albeit one made challenging by real-world conditions. Intuitively, one might reasonably think that our solution using deep learning is overkill, and that simpler algorithmic approaches would suffice. That is, in fact, what we initially thought; it was our starting point for this work, and for an earlier independent attempt overseen by the second author not reported here. Ultimately those methods worked for a small number of Softstrips, but quickly broke down when faced with the range of damage present in some physical Softstrips along with the variable quality of Softstrip images not created under controlled conditions. CNNs, which learn relevant classification features by themselves and are known for image-based applications, presented an appealing, and surprisingly good, solution. We do want to stress the helpfulness of having Cauzin information available in patents and other sources.
Our task would not have been impossible without that information, but it would have required far more heroic efforts, beyond the reach of most non-specialists. Even so, there were some incompletely documented aspects of Softstrips that we needed to figure out on our own, but fortunately they were relatively minor ones. The Cauzin Softstrip itself suffered a chicken-and-egg problem (Savetz et al. 2016). The limited number of available Softstrips did not attract enough customers to buy a Softstrip reader, and magazines stopped publishing Softstrips because the target market was not big enough. Situations like these leave portions of our digital past locked into obsolete media formats, yet we restored access to the encoded data without need of a rare, functioning physical reader. While our decoding method was necessarily specific to the Softstrip, the approach we used to read the strips and handle a wide variety of artifacts, along with the lessons we learned, is useful more generally. Although analysis of the Softstrip corpora contents is outside the scope of this paper, we observe that this work speaks directly to archival accessibility. For the Cauzin Softstrip, despite them being physical artifacts, physical preservation is insufficient; it is only through being able to extract their data that the corpora become accessible for research. The Softstrips' data can now be analyzed en masse to add to the growing body of scholarship on the history of personal computing (e.g., Halvorson 2020; Nooney et al. 2020; Rankin 2018), and close readings can be performed of individual Softstrips. For example, we came across a purported encryption utility program in the Softstrip collection, and by extracting its data using our software, we were able to study the program and determine its encryption method. As an epilogue, we recently acquired a set of Softstrips from eBay, and with no adjustments whatsoever to our decoding software, we were able to decode nearly 95% of the 177 strips. This highlights both the robustness of our solution and the utility of deep learning for archival and digital curation tasks.
Association Analyses with Carcass Traits in the Porcine KIAA1717 and HUMMLC2B Genes

By screening a subtracted cDNA library constructed with mRNA obtained from the longissimus dorsi muscles of F1 hybrids Landrace×Yorkshire and their Yorkshire female parents, we isolated two partial sequences coding for the H3-K4-specific methyltransferase (KIAA1717) and skeletal muscle myosin regulatory light chain (HUMMLC2B) genes. In the present work we investigated two SNPs, one (C1354T) in the 3' untranslated region (UTR) of KIAA1717 and one (A345G) in the SINE (PRE-1) element of HUMMLC2B, in a resource population derived from crossing Chinese Meishan and Large White pigs. The selected pigs were genotyped by means of a PCR-RFLP protocol. Significant associations were observed for the KIAA1717 C1354T polymorphic site with thorax-waist backfat thickness (p<0.05), buttock backfat thickness (p<0.05), average backfat thickness (p<0.05), loin eye height (p<0.05), loin eye area (p<0.05), carcass length to 1st spondyle (p<0.01) and carcass length to 1st rib (p<0.01). HUMMLC2B A345G was significantly associated with loin eye width (p<0.05) and loin eye area (p<0.05). Further studies are needed to confirm these preliminary results. (Asian-Aust. J. Anim. Sci. 2005. Vol 18, No. 11 : 1519-1523)

INTRODUCTION
Continued genetic improvement of swine requires molecular markers to assist selection. One method of developing such markers is by studying candidate genes. A quantitative trait (QT) is controlled by several or many genes (quantitative trait loci, QTLs), which may contribute to the phenotype to different extents and may also be affected by environmental factors. Genes involved in the biology of a trait of interest are candidate genes for association studies (Brunsch et al., 2002). In the last decade, animal genomics has contributed to the mapping and identification of genes responsible for several traits. In some cases both the genes and the underlying causal mutations were identified by the candidate gene approach (Harlizius and van der Lende, 2001; Li et al., 2002; Yan et al., 2004; Zhang et al., 2005).
Muscle comprises as much as 45% of an animal's body mass (Young, 1970). Hence there is much interest in understanding the development, physiology and metabolism of this tissue. The regulation of muscle development is complex, involving many integrated biochemical pathways that interact with the environment to ultimately control the rate of protein accretion. One strategy for dissecting these networks is to identify genes that are differentially expressed between properly defined breeds, selection lines, or developmental stages. The isolation of these differentially expressed genes will give insight into their biological function, provide clues as to what molecular pathways they participate in, and suggest potential candidate genes for breeding programs or genetic methods of manipulating growth in livestock (Janzen et al., 2000). In addition, the phenomenon of heterosis is in fact the external exhibition of gene expression and regulation in the heterozygote (Liu et al., 2004). The development of cDNA libraries from tissues directly related to important production traits is also important for the identification of candidate genes (Gellin et al., 2000). For these reasons we developed a porcine longissimus dorsi subtracted cDNA library between F1 hybrids Landrace×Yorkshire and their Yorkshire female parents. Recently, from this subtracted cDNA library we isolated partial cDNAs representing KIAA1717 and HUMMLC2B, which were highly expressed in the female Yorkshire parents.

In this study we investigated the polymorphisms of KIAA1717 and HUMMLC2B and evaluated the effects of the KIAA1717 and HUMMLC2B Msp I PCR-RFLPs on carcass traits in the population derived from crossing Chinese Meishan and Large White pigs.

MATERIALS AND METHODS

Animals and data collection
The animals were produced by crossing Large White×Meishan and raised at Huazhong Agriculture University. They were fed twice daily with diets formulated according to age under a standardized feeding regimen and had free access to water. The finishing animals were slaughtered and the carcass traits were measured according to the method of Xiong and Deng (1999). Genomic DNA was prepared from blood samples using a standard phenol:chloroform extraction method.

PCR-RFLP analysis
The alleles of KIAA1717 and HUMMLC2B were analysed using the restriction fragment length polymorphism (RFLP) protocol. 8.5 µl of PCR products were digested with 5 U of Msp I (MBI Fermentas, Vilnius, Lithuania) at 37°C for 4 h in a volume of 10 µl, and the digested products were electrophoresed on 1.5% agarose gels and stained with ethidium bromide.
Statistical analysis
The genotype distribution was tested for Hardy-Weinberg equilibrium as described by Falconer and Mackay (1996). Associations between genotypes and carcass traits were evaluated by means of the least squares method (GLM procedure, SAS version 8.0). According to the method of Liu (1998), both additive and dominance effects were also estimated by using the REG (regression) procedure of SAS version 8.0, where the additive effect was coded as -1, 0 and 1 for AA, AB and BB, respectively, and the dominance effect was coded as 1, -1 and 1 for AA, AB and BB, respectively. The model used to analyze the data was assumed to be:

Y_ijk = μ + S_i + B_j + G_k + b_ijk·W_ijk + e_ijk

where Y_ijk is the observation of the trait; μ is the least square mean (LSM); S_i is the effect of the i-th sex (i = 1 for male or 0 for female); B_j is the effect of the j-th year (j = 0 for 2000 or 4 for 2004); G_k is the effect of the k-th genotype (k = AA, AB, BB); b_ijk is the regression coefficient on the carcass weight W_ijk; and e_ijk is the random residual.
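For concreteness, the additive (-1, 0, 1) and dominance (1, -1, 1) genotype codings described above can be reproduced outside SAS in a few lines. This is a sketch only: the trait values are invented, and the sex, year and carcass-weight terms of the full model are omitted.

```python
import numpy as np
import pandas as pd

ADDITIVE  = {'AA': -1, 'AB': 0, 'BB': 1}
DOMINANCE = {'AA': 1, 'AB': -1, 'BB': 1}

# Hypothetical records: genotype plus one carcass trait (e.g. carcass length, cm).
df = pd.DataFrame({
    'genotype': ['AA', 'AB', 'BB', 'AB', 'AA', 'BB'],
    'trait':    [95.2, 93.1, 90.4, 92.8, 94.9, 89.7],
})
df['a'] = df['genotype'].map(ADDITIVE)
df['d'] = df['genotype'].map(DOMINANCE)

# Ordinary least squares: trait = intercept + additive*a + dominance*d
X = np.column_stack([np.ones(len(df)), df['a'], df['d']])
coef, *_ = np.linalg.lstsq(X, df['trait'].to_numpy(), rcond=None)
print(dict(zip(['intercept', 'additive', 'dominance'], coef.round(3))))
```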
RESULTS AND DISCUSSION
In order to study the possible associations between carriers of the different genotypes and the trait values, the least square means for each genotype class are summarised in Tables 1 and 2. Within F2 animals of the Large White×Meishan population the genotype distributions were at Hardy-Weinberg equilibrium. Statistically significant associations with thorax-waist backfat thickness (p<0.05), buttock backfat thickness (p<0.05), average backfat thickness (p<0.05), loin eye height (p<0.05), loin eye area (p<0.05), carcass length to 1st spondyle (p<0.01) and carcass length to 1st rib (p<0.01) were found for the KIAA1717 Msp I PCR-RFLP site. This site appeared to have a significant additive effect on carcass length to 1st spondyle (p<0.01), carcass length to 1st rib (p<0.01) and thorax-waist backfat thickness (p<0.01). Allele A was favourable for these traits. Statistically significant associations with loin eye width (p<0.05) and loin eye area (p<0.05) were found for the HUMMLC2B Msp I PCR-RFLP site. No significant additive or dominance effect was detected for this site.

KIAA1717 encodes an H3-K4-specific methyltransferase (H3-K4-HMTase, also called SET7/9) with 3 MORN repeats and a SET domain (Nagase et al., 2000; Wang et al., 2001). The evolutionarily conserved SET (Suvar3-9, Enhancer-of-zeste, Trithorax) domain occurs in most proteins known to possess histone lysine methyltransferase activity and methylates diverse proteins, such as histones, Rubisco and cytochrome C. In particular, SET-domain proteins play an important role in the dynamics of eukaryotic chromatin and are present in several chromatin-associated proteins (Wilson et al., 2002; Aravind and Iyer, 2003). It has also been reported that histone methylation has significant effects on heterochromatin formation and transcriptional regulation (Nishioka et al., 2002). In addition, SET domains have been identified as protein-protein interaction domains; they interact with members of a family of proteins that display similarity to dual-specificity phosphatases and could link the epigenetic regulatory machinery with signalling pathways involved in growth and differentiation (Cui et al., 1998; Firestein et al., 2000). Chuikov et al. (2004) reported a novel mechanism of p53 regulation through lysine methylation by the Set9 methyltransferase. Set9 specifically methylates p53 at Lys372 within the C-terminal regulatory region. Methylated p53 is restricted to the nucleus, and the modification positively affects its stability. Set9 regulates the expression of p53 target genes in a manner dependent on the p53 methylation site.

HUMMLC2B encodes a Ca2+-binding protein, the skeletal muscle myosin regulatory light chain (RLC), with an EF-hand calcium-binding motif. Ca2+-binding proteins are involved in Ca2+/CaM-mediated signaling pathways related to morphogenesis, cell division, cell elongation, ion transport, gene regulation, cytoskeletal organization, cytoplasmic streaming, pollen function and stress tolerance (Reddy et al., 2002). RLC is also a primary regulatory component of the thick filament-linked systems. It has been suggested that phosphorylation of RLC serves as an efficient mechanism for modulating the cross-bridge cycle, calcium sensitivity and other parameters that strengthen muscle performance (Sweeney et al., 1993). In molluscan muscle, direct binding of Ca2+ by the regulatory RLCs promotes actin activation of the myosin ATPase (Claudia and Philip, 1988). Therefore, KIAA1717 and HUMMLC2B seem to be involved in the regulatory mechanisms of genes responsible for some carcass traits.

The International Radiation Hybrid Mapping Consortium has mapped the human KIAA1717 gene to HSA4q28. No mapping information is available for porcine KIAA1717, but Sus scrofa chromosome 8 (SSC8) is homologous to most of Homo sapiens chromosome 4 (HSA4), as reported previously (Goureau et al., 1996). Another gene on HSA4q28, FGG, has also been assigned to SSC8 (Jiang et al., 2002), so it is probable that the porcine KIAA1717 gene is located on SSC8. In addition, HUMMLC2B has already been localized to SSC3 (Davoli et al., 2003). On SSC8, QTLs affecting backfat thickness, loin eye area and carcass length have been detected; on SSC3, QTLs affecting loin eye area have also been detected. Therefore, KIAA1717 and HUMMLC2B are probably also linked with the causal mutations and the genes responsible for some carcass traits.

In addition, in this study, the polymorphic Msp I sites were located in the 3′-UTR of KIAA1717 and in the SINE (PRE-1) element of HUMMLC2B, respectively. The important role that UTRs of eukaryotic mRNAs may play in gene regulation and expression is now widely recognized. Indeed, experimental studies have demonstrated that sequence motifs located in the UTRs are involved in crucial biological functions. SINEs (short interspersed nucleotide elements) have also been regarded as important elements generating variation in genome structure and expression. Therefore, the variations in these sequences may have important regulatory roles and be directly related to functional variation, too. The specific gene functions and the results obtained from the PCR-RFLP analyses of KIAA1717 and HUMMLC2B are worth further investigation using larger samples to better evaluate their effects on carcass traits.

Figure 2. Agarose gel electrophoresis (1.5%) showing polymorphisms in PCR fragments of KIAA1717 (A) and HUMMLC2B (B) after digestion with Msp I. The genotypes (AA, AB and BB) are shown at the top of the lanes. M, marker DL 2000 DNA Ladder (TaKaRa).
Table 1. Association between KIAA1717 genotypes and carcass traits. Least square means (LSM) estimated for each polymorphism are indicated with their standard errors (SE). Differences (within a trait) between genotype classes indicated with different lower-case superscripts are significant at p<0.05; those with different capital superscripts differ at p<0.01. * p<0.05 and ** p<0.01. n = number of individuals.

Table 2. Association between HUMMLC2B genotypes and carcass traits.
Serine/arginine-rich splicing factor 3 (SRSF3) regulates homologous recombination-mediated DNA repair

Background
Our previous work found that serine/arginine-rich splicing factor 3 (SRSF3) was overexpressed in human ovarian cancer and that the overexpression of SRSF3 was required for ovarian cancer cell growth and survival. The mechanism underlying the role of SRSF3 in ovarian cancer remains to be addressed.

Methods
We conducted microarray analysis to profile gene expression and splicing in SRSF3-knockdown cells and employed quantitative PCR and western blotting to validate the profiling results. We used chromatin immunoprecipitation to study transcription and the direct repeat green fluorescent protein reporter assay to study homologous recombination-mediated DNA repair (HRR).

Results
We identified 687 genes with altered expression and 807 genes with altered splicing in SRSF3-knockdown cells. Among expression-altered genes, those involved in HRR, including BRCA1, BRIP1 and RAD51, were enriched and were all downregulated. We demonstrated that the downregulation of BRCA1, BRIP1 and RAD51 expression was caused by decreased transcription and not by increased nonsense-mediated mRNA decay. Further, we found that SRSF3 knockdown impaired HRR activity in the cell and increased the level of γ-H2AX, a biomarker for double-strand DNA breaks. Finally, we observed that SRSF3 knockdown changed the splicing pattern of KMT2C, an H3K4-specific histone methyltransferase, and reduced the levels of mono- and trimethylated H3K4.

Conclusion
These results suggest that SRSF3 is a new regulator of the HRR process, which possibly regulates the expression of HRR-related genes indirectly through an epigenetic pathway. This new function of SRSF3 not only explains why overexpression of SRSF3 is required for ovarian cancer cell growth and survival but also offers a new insight into the mechanism of the neoplastic transformation.

Electronic supplementary material: The online version of this article (doi:10.1186/s12943-015-0422-1) contains supplementary material, which is available to authorized users.

Physiologically, SRSF3 is essential for embryo development, since Srsf3-null mouse embryos failed to form blastocysts and died at the morula stage [13]. Mice with hepatocyte-specific knockout of Srsf3 exhibited altered hepatic architecture, prolonged expression of fetal liver markers, impaired glucose homeostasis and reduced cholesterol synthesis, suggesting that Srsf3 is indispensable for hepatocyte maturation and metabolic function in mice [14]. Pathologically, there is increasing evidence indicating that SRSF3 plays an important role in tumorigenesis. In a mouse model of mammary tumorigenesis, it was observed that SRSF3 was remarkably increased during the development of mammary cancer [15]. In human ovarian tumors, we found that SRSF3 was overexpressed in invasive ovarian cancer at all stages and that its overexpression was critical for tumor cell growth and maintenance of transformation properties [16,17]. Knockdown of SRSF3 expression causes growth inhibition or apoptosis of ovarian cancer cells, depending on the extent of SRSF3 knockdown [16]. SRSF3 was also found upregulated in a variety of other tumors, such as cervical cancer and rhabdomyosarcoma [18]. It was shown that ectopically expressed SRSF3 promoted cell growth and transformation of human and mouse fibroblasts [18].
In addition, knockdown of SRSF3 resulted in G1 arrest and downregulation of several G1/S transition-related genes in colon cancer cells [19] and led to p53-dependent cellular senescence in fibroblasts [20]. Besides this tumor-promoting role, a recent study found that SRSF3 might function as a suppressor of hepatic carcinogenesis, because mice with hepatocyte-specific knockout of Srsf3 invariably developed hepatocellular carcinoma at late ages [21]. Our previous studies mentioned above raise the questions of why SRSF3 is required for ovarian cancer cell growth and how it contributes to the neoplastic transformation. In the present study, we show that knockdown of SRSF3 suppresses the expression of breast cancer 1, early onset (BRCA1), BRCA1 interacting protein C-terminal helicase 1 (BRIP1), and RAD51 recombinase (RAD51). These genes all play important roles in the homologous recombination (HR)-mediated DNA damage repair pathway [22,23]. Correspondingly, we observed impaired HR-mediated DNA damage repair (HRR) activity and accumulation of DNA double-strand breaks (DSBs) after SRSF3 knockdown. We also provide evidence suggesting that SRSF3 possibly regulates the expression of the above genes through an epigenetic pathway.

Profiling of gene expression and splicing in SRSF3-knockdown cells
In our previous study, we established three A2780 sublines, A2780/SRSF3si1, A2780/SRSF3si2 and A2780/LUCsi, which express doxycycline (Doxy)-induced SRSF3 siRNA1 (SRSF3si1), SRSF3 siRNA2 (SRSF3si2) and luciferase siRNA (LUCsi), respectively. SRSF3si1 and SRSF3si2 suppress SRSF3 expression by about 50 and 90%, respectively, while LUCsi has little effect on SRSF3 expression [16]. We confirmed these results in the present study by regular reverse transcription PCR (RT-PCR), quantitative RT-PCR (qPCR) and western blotting, as shown in Fig. 2a, b and c. Induction of SRSF3si1 caused cell growth inhibition, whereas induction of SRSF3si2 led to apoptosis [16] (Fig. 2f). In order to determine the mechanisms underlying the role of SRSF3 in ovarian cancer, we conducted human exon microarray analysis to examine the genome-wide profiles of gene expression and splicing in A2780/SRSF3si2 cells with or without SRSF3 knockdown. Using p < 0.05 and absolute fold changes greater than 2 as the cutoff values, we found 687 genes altered in their expression in SRSF3-knockdown cells, among which 424 genes were upregulated while 263 genes were downregulated (Additional file 1: Table S1). Using a false discovery rate (FDR) of less than 0.05 as the criterion, we identified 807 genes altered in their splicing in the SRSF3si2 cells (Additional file 2: Table S2). Shown in Fig. 1a is the Venn diagram of expression-altered genes and splicing-altered genes in SRSF3-knockdown cells. Gene ontology analysis revealed that genes involved in double-strand break repair, especially those involved in HRR, were enriched among the expression-altered genes, as shown in Fig. 1b. Figure 1c lists the changed HRR-related genes, which are all downregulated in SRSF3-knockdown cells. In addition, genes involved in sterol biosynthesis are also enriched among the expression-altered genes, and they are all upregulated in SRSF3-knockdown cells (Additional file 3: Figure S1). Among the splicing-altered genes, those involved in cellular protein modification, especially those related to polyubiquitination, are the most highly enriched (Additional file 3: Figure S2).
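As an illustration of the expression cutoffs applied above (p < 0.05 and absolute fold change > 2), a minimal sketch follows; the data frame, its column names and its numbers are invented for illustration and are not the microarray results.

```python
import pandas as pd

# Hypothetical per-gene summary of a knockdown-vs-control comparison.
results = pd.DataFrame({
    'gene':        ['BRCA1', 'BRIP1', 'RAD51', 'GAPDH'],
    'fold_change': [-2.8, -3.1, -2.4, 1.1],
    'p_value':     [0.004, 0.010, 0.021, 0.62],
})

altered = results[(results['p_value'] < 0.05) & (results['fold_change'].abs() > 2)]
up = (altered['fold_change'] > 0).sum()
down = (altered['fold_change'] < 0).sum()
print(f'{len(altered)} altered genes: {up} up, {down} down')
```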
Knockdown of SRSF3 suppresses the expression of BRCA1, BRIP1 and RAD51
We have confirmed the downregulation of BRCA1, BRIP1 and RAD51 expression induced by SRSF3 knockdown at both the mRNA and protein levels, as shown in Fig. 2. We confirmed the downregulation of three other genes, XRCC2, RAD54B and BLM, only at the mRNA level (Additional file 3: Figure S3) but not at the protein level, due to problems with the antibodies we tested. Figure 2a and b show the results of RT-PCR and qPCR, respectively. Figure 2c shows the results of western blotting. As can be seen, the downregulation of BRCA1, BRIP1 and RAD51 is more substantial in Doxy-treated A2780/SRSF3si2 cells than in Doxy-treated A2780/SRSF3si1 cells, indicating that the effects correlate with the extent of SRSF3 knockdown. As the primer pairs used for PCR are located on exons common to all or most known splice variants of these genes, the results shown in Fig. 2 reflect downregulation of overall expression rather than of specific splice variants. Similar results were obtained with sublines of another ovarian cancer cell line, SKOV3, as shown in Additional file 3: Figure S4, indicating that the phenomenon is not cell line specific. It is worth pointing out that our microarray analysis did not find any significant alterations in the splicing of BRCA1, BRIP1 and RAD51 in SRSF3-knockdown cells. We also measured the time course of the expression of BRCA1, BRIP1 and RAD51 at the mRNA and protein levels after the A2780/SRSF3si2 cells were treated with Doxy. As can be seen in Fig. 2d and e, the expression of these genes started to decrease from day one after Doxy treatment, and the downregulation gradually intensified in the later days. These results suggest that downregulation of these genes is likely a primary effect of SRSF3 knockdown rather than secondary to the growth inhibition or apoptosis caused by SRSF3 knockdown, which was not observed until day 4 after Doxy treatment, as shown in Fig. 2f.

SRSF3 knockdown-induced downregulation of BRCA1, BRIP1 and RAD51 is not due to nonsense-mediated mRNA decay (NMD)
NMD is an important quality-control mechanism but also plays a role in the regulation of gene expression. It recognizes and degrades mRNAs harboring premature termination codons (PTCs) [24]. SRSF3 is a well-known splicing factor, and its knockdown may cause aberrant splicing and thus trigger NMD to downregulate gene expression. To determine whether the downregulation of BRCA1, BRIP1 and RAD51 is mediated by this mechanism, we examined the effects of inhibition of the NMD pathway on the expression of these three genes. NMD is primarily carried out by up-frameshift (UPF) proteins, which consist of UPF1, UPF2 and UPF3, with UPF1 as the key effector of NMD [25]. Previous studies showed that depletion of UPF1 by shRNAs substantially inhibited NMD activity, leading to the upregulation of hundreds of mRNAs [26,27]. We introduced one of the reported UPF1 siRNA (UPF1si) sequences [28] or the LUCsi sequence [29] into A2780/SRSF3si2 cells using lentiviruses and achieved Doxy-induced simultaneous knockdown of SRSF3 and UPF1, as shown in Fig. 3a. Then we measured the expression of BRCA1, BRIP1 and RAD51 in these cells treated with or without Doxy for 3 days by qPCR. As can be seen in Fig. 3b, these genes were similarly downregulated in A2780 cells simultaneously expressing SRSF3si2 and UPF1si or SRSF3si2 and LUCsi, indicating that the downregulation of these genes could not be reversed by inhibition of NMD and thus was not mediated by NMD.
Knockdown of SRSF3 suppresses the transcription of BRCA1, BRIP1 and RAD51
To determine whether the downregulation of BRCA1, BRIP1 and RAD51 is caused by reduced transcription, we examined RNA polymerase II (RNApII) occupancy on these genes in A2780/SRSF3si2 cells treated with or without Doxy using chromatin immunoprecipitation (ChIP) technology. RNApII occupancy on chromatin DNA has been shown to be a reliable surrogate readout for transcription rates [30,31]. We analyzed RNApII occupancy in two regions for each gene: one about 1 kb downstream of the transcription start site (TSS) and the other 10 kb to 14 kb downstream of the TSS. Chromatin DNAs precipitated by RNApII antibody or negative control IgG (Neg IgG) were analyzed by regular PCR and qPCR. As shown in Fig. 4 (results of ChIP in the region 1 kb downstream of the TSS) and Additional file 3: Figure S5 (results of ChIP in the region 10 to 14 kb downstream of the TSS), RNApII occupancy was decreased on the BRCA1, BRIP1 and RAD51 genes, but not on the control gene, GAPDH, after SRSF3 knockdown. Figure 4a shows the results of regular PCR and Fig. 4b shows the results of qPCR.

Fig. 4 Knockdown of SRSF3 reduces RNApII occupancy on BRCA1, BRIP1 and RAD51 genes. a Regular PCR amplification of immunoprecipitated chromatin DNAs. b qPCR analysis of immunoprecipitated chromatin DNAs. Shown are immunoprecipitated DNAs expressed as percentages of corresponding input DNAs (mean ± s.d., n = 3). * and ** indicate p < 0.05 and p < 0.01, respectively, for comparisons of RNApII occupancy between samples treated with and without Doxy.

Knockdown of SRSF3 impairs HRR and increases DSBs
Given the role of BRCA1, BRIP1 and RAD51 in HR-mediated repair of DSBs, the downregulation of their expression is very likely to impair this process and cause accumulation of DSBs in the cells. To test this hypothesis, we first examined the levels of γ-H2AX, a biomarker of DSBs, in A2780 subline cells treated with or without Doxy by western blotting. As shown in Fig. 5a, γ-H2AX was substantially increased in A2780/SRSF3si2 cells treated with Doxy but not in the other Doxy-treated subline cells, indicating that robust suppression of SRSF3, which resulted in deeper downregulation of BRCA1, BRIP1 and RAD51 (Fig. 2), indeed caused accumulation of DSBs. Immunofluorescent staining of A2780/SRSF3si2 cells treated with or without Doxy confirmed the above finding, as shown in Fig. 5b. The time course of γ-H2AX levels after SRSF3 knockdown is shown in Additional file 3: Figure S6. Next we examined whether knockdown of SRSF3 impaired the cellular capability to repair DSBs via the HR-mediated pathway. We employed the DR-GFP reporter [32] to analyze HRR activity in the cell. The reporter consists of two tandem mutated GFP genes, with one being a full-length GFP mutated to contain an I-SceI site and the other being a 5′- and 3′-truncated GFP downstream. A single DSB is generated in the upstream GFP gene by ectopically expressed I-SceI and can be repaired by HR with the downstream truncated GFP as the template, which results in the formation of a functional GFP gene and thus GFP-positive cells. Therefore, the percentage of GFP-positive cells reflects the cellular capability to carry out HRR. We performed this assay in 293T cells because of the high efficiency at which they can be transfected. As shown in Fig. 5c, the percentage of GFP-positive cells is lowest in 293T cells expressing SRSF3si2, indicating impaired HRR in these cells. Although 293T cells expressing SRSF3si1 also had a lower percentage of GFP-positive cells than control cells, the difference between the two was not statistically significant. These results correspond well to the changes of γ-H2AX shown in Fig. 5a and b.
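The DR-GFP readout just described reduces to a simple normalisation: the fraction of GFP-positive cells (successful HRR events) is divided by the fraction of mCherry-positive cells to correct for transfection efficiency. Below is a minimal sketch with invented counts; the function and variable names are ours.

```python
def hrr_activity(n_gfp_pos, n_mcherry_pos, n_total):
    """Normalised HRR activity: %GFP-positive divided by %mCherry-positive."""
    pct_gfp = 100.0 * n_gfp_pos / n_total
    pct_mcherry = 100.0 * n_mcherry_pos / n_total
    return pct_gfp / pct_mcherry

control  = hrr_activity(n_gfp_pos=480, n_mcherry_pos=12000, n_total=50000)
srsf3_kd = hrr_activity(n_gfp_pos=150, n_mcherry_pos=11500, n_total=50000)
print(f'relative HRR activity: {srsf3_kd / control:.2f}')  # < 1 means impaired HRR
```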
Expression of siRNA-resistant SRSF3 offsets the effects of knockdown of endogenous SRSF3
To further establish the role of SRSF3 in the regulation of HRR gene expression, we conducted a rescue study to determine whether siRNA-resistant SRSF3 could offset the effects of knockdown of endogenous SRSF3 in the A2780/SRSF3si2 cells. We made three silent mutations in the coding region of SRSF3 targeted by SRSF3 siRNA2, as shown in Fig. 6a. The mutated entire coding sequence, with an HA tag fused to the N-terminus (HA-mutSRSF3), was then cloned into the lentiviral vector pLVTHM [33] under the direction of the EF-1α promoter, as shown in Fig. 6b. pLVTHM was also the vector we used to express the SRSF3 siRNAs and the luciferase siRNA in the cell [16]. The expression of HA-mutSRSF3, like the expression of the siRNAs, was Doxy-inducible in cells expressing the regulatory fusion protein tTR/KRAB, which is a hybrid of the tetracycline repressor (tTR) and the KRAB domain of the human Knox1 protein [33]. We infected A2780/SRSF3si2 cells using the lentiviruses carrying the HA-mutSRSF3 expression cassette and obtained a new cell culture (A2780/SRSF3si2/mutSRSF3), which demonstrated Doxy-induced expression of HA-mutSRSF3 and simultaneous suppression of endogenous SRSF3, as shown in Fig. 6c. With these cells we observed that Doxy treatment caused little change in the expression of the HRR-related genes BRCA1, BRIP1 and RAD51, indicating that the expression of HA-mutSRSF3 offset the effects of knockdown of endogenous SRSF3 (Fig. 6c). In accordance with the unchanged expression of HRR-related genes, the expression of γ-H2AX was not increased after Doxy treatment in these cells (Fig. 6c), indicating that HA-mutSRSF3 rescued the DNA damage caused by knockdown of endogenous SRSF3. Further, we found that HA-mutSRSF3 also prevented SRSF3 knockdown-induced apoptosis, as shown in Fig. 6d. Taken together, these rescue experiments provide additional evidence supporting a role of SRSF3 in the regulation of HRR and cell survival.

Knockdown of SRSF3 changes the splicing pattern of lysine-specific methyltransferase 2C (KMT2C, also known as MLL3) and decreases methylated histone H3 lysine 4 (H3K4)
KMT2C is an H3K4-specific histone methyltransferase, catalyzing H3K4 monomethylation [34,35]. Our exon microarray analysis found that KMT2C expression was upregulated in SRSF3-knockdown cells (Additional file 1: Table S1). According to the Ensembl database, two large protein variants could be generated from this gene, with one being 4911 amino acids long and the other 4968 amino acids long, depending on whether exon 45 is included. In an attempt to validate the microarray finding, we amplified the region of KMT2C cDNA spanning exon 44 to exon 46 from the samples of A2780 subline cells treated with or without Doxy. As shown in Fig. 7a, the amplification generated more DNA fragments than the expected 2 DNA bands. More interestingly, SRSF3 knockdown changed the expression pattern of these fragments. Amplicon sequencing of the PCR products from A2780/SRSF3si2 cells revealed that the extra fragments were derived from additional splice variants (Fig. 7a) containing the whole exon 46.
Given the molecular function of KMT2C in H3K4 methylation, we wondered whether the altered splicing of KMT2C was accompanied by any changes in H3K4 methylation. Therefore, we examined monomethylated H3K4 (H3K4me1) and trimethylated H3K4 (H3K4me3) in A2780/SRSF3si2 and the control A2780/LUCsi cells. As shown in Fig. 7b and c, H3K4me1 and H3K4me3, especially the latter, were decreased in Doxy-treated A2780/SRSF3si2 cells but not in Doxy-treated control cells. In contrast, trimethylated H3K9 and H3K27 were basically unchanged in Doxy-treated cells. H3K4me1 and H3K4me3 have been associated with active transcription [34], while H3K9me3 and H3K27me3 have been linked to gene repression [36]. Whether the downregulation of BRCA1, BRIP1 and RAD51 after SRSF3 knockdown can be ascribed to the reduction of methylated H3K4 requires more investigation to determine.

Discussion
In this report, we present data showing that knockdown of SRSF3 results in downregulation of BRCA1, BRIP1 and RAD51 expression and causes impaired HRR activity. These results suggest a novel role for SRSF3 in the regulation of the HRR pathway. HRR is a major mechanism to repair DSBs, which are the most deleterious form of DNA damage and can be generated by exogenous insults as well as endogenous factors [37]. In dividing cells like cancer cells, DSBs are mainly caused by endogenous factors (endogenous DSBs, EDSBs), such as reactive oxygen species (ROS) and replication stress [37], and can be induced by activated oncogenes [38-41]. It was estimated that EDSBs were produced at a rate of ~50 per cell per cell cycle in normal human cells [42]. In cancer cells, this rate could be higher because of the effects of increased oncogene activity. DSBs are repaired primarily by two mechanisms: non-homologous end-joining (NHEJ) and HRR [43,23]. NHEJ repairs DSBs by promoting direct ligation of DNA ends, which frequently introduces insertions, deletions, substitutions and even chromosome rearrangements. In contrast, HRR repairs DSBs faithfully by using homologous sister chromatids as the template to guide the repairing process, thus playing a pivotal role in the maintenance of genomic stability [43,23]. HRR involves the following steps: DSB recognition, damage signal transduction and break repair by HR [23]. The six downregulated genes shown in Fig. 1c all have a role or roles in this repair pathway [22,23,44]. For example, BRCA1 helps to direct the cell to choose HRR over NHEJ to repair DSBs during S and G2 phase [44]. BRCA1 is also required for the recruitment of RAD51 to the damage sites [45], which is necessary for homology search and subsequent strand exchange with the intact sister chromatid duplex DNA [23]. If DSBs are left unrepaired or aberrantly repaired, the outcome would be cell death or genomic instability. Although genomic instability is a characteristic of most cancers and is believed to facilitate the development of permanent oncogenic changes in the genome [46], there is no evidence suggesting that cancer cells could tolerate continuous DNA damage generation after generation. On the contrary, a relatively stable genome is essential for any cell, normal or tumor, to grow and survive [47], and it is the cancer cell's reliance on a stable genome that makes DNA-damaging agents effective in cancer treatment.
Given the more frequent occurrence of spontaneous DSBs in cancer cells and the importance of a relatively stable genome for cell growth and survival, it is logical that cancer cells need upregulated HRR activity to keep their genomes from continuous alteration. Otherwise, accumulated DSBs or genomic alterations would eventually lead to cell death. The new role of SRSF3 in the regulation of the HRR pathway provides a mechanism for cancer cells to meet this need. Therefore, it is no wonder that almost all invasive ovarian tumors that we examined overexpressed SRSF3 and that knockdown of SRSF3 induced growth inhibition and cell death [16]. Analysis of the serous ovarian cancer microarray dataset from The Cancer Genome Atlas project shows that SRSF3, BRCA1, RAD51, XRCC2 and BLM are upregulated in tumors compared to normal ovaries, as shown in Additional file 3: Figure S7, supporting the notion that tumor cells need enhanced HRR activity. The new role of SRSF3 discovered in this study also suggests a new paradigm for understanding the tumorigenic process. It is widely accepted that activated oncogenes are a driving force of tumorigenesis [48,49]. However, they alone cannot cause cancer. Instead, activated oncogenes induce senescence or cell death in normal and partially transformed cells due to their induction of DNA damage and the DNA damage response (DDR) [49,40]. According to the current tumorigenic model, after oncogene activation, further genetic or epigenetic changes in tumor suppressor genes are needed to overcome replicative stress and make tumorigenesis proceed (Fig. 8, left panel) [48,49]. Our observation suggests that there exists another mechanism to promote tumorigenesis. That is, during neoplastic transformation, which could be initiated by oncogene activation, SRSF3 is upregulated by presently unknown factor(s) and confers on cells an enhanced capability to carry out HRR, thus allowing cells to bypass replicative stress and complete the transformation process (Fig. 8, right panel). This new mechanism may explain not only the development of tumors that lack mutations or alterations in tumor suppressors involved in DNA damage repair and response but also the overexpression of RAD51 found in a wide variety of human tumors, including BRCA1-deficient ones [50,51]. Overexpression of RAD51 can rescue the defects caused by depletion of BRCA1 and thus may contribute to the genesis of BRCA1-deficient tumors [51]. Finally, the results shown in Fig. 7 provide a clue to understanding the molecular mechanisms behind the new role of SRSF3. Based on those results, we hypothesize that SRSF3 regulates the expression of HRR-related genes indirectly through an epigenetic pathway. That is, SRSF3 controls alternative splicing of KMT2C, whose splice variants determine the methylation status of H3K4, by which the transcriptional activities of HRR-related genes are set. To test the hypothesis, more work will be needed to establish causal relationships between the changed alternative splicing of KMT2C and reduced methylated H3K4, and between reduced H3K4me3 and suppressed expression of HRR-related genes.

Conclusions
Our results indicate that SRSF3 is a regulator of the HRR process, which possibly regulates the expression of HRR-related genes indirectly through an epigenetic pathway. This novel function not only explains why overexpression of SRSF3 is required for ovarian cancer cell growth and survival but also offers a new insight into the mechanism of the neoplastic transformation.
Methods

Cell cultures
The ovarian cancer cell line A2780 sublines, A2780/SRSF3si1, A2780/SRSF3si2 and A2780/LUCsi, were established in our previous study [16]. These sublines were grown in DMEM supplemented with 10% FBS and 2 mM L-glutamine at 37°C, 5% CO2. 293T cells were purchased from the American Type Culture Collection (ATCC) and grown in the same media as the A2780 sublines.

Microarray analysis
Total RNAs were extracted from A2780/SRSF3si2 cells grown in the presence or absence of Doxy (0.1 μg/ml) for 3 days using TRIzol reagent (Life Technologies, Grand Island, NY) and treated with the TURBO DNA-free kit (Life Technologies). The prepared total RNA samples were submitted to Asuragen (Austin, TX) for expression profiling by the Affymetrix Human Exon 1.0 ST Array (Affymetrix, Santa Clara, CA). The microarray data were analyzed using Partek Genomics Suite Version 6.6 (Partek, St. Louis, MO) to determine the differentially expressed or spliced genes. Gene ontology analysis was also performed using Partek Genomics Suite Version 6.6.

Apoptosis assay
Cells were fixed in 4% paraformaldehyde for 10 min and then stained in a solution of Hoechst 33342 (Life Technologies) for 15 min. Apoptotic cells and non-apoptotic cells were counted under a fluorescent microscope manually with computer assistance.

Immunofluorescent staining
A2780/SRSF3si2 cells were grown on poly-L-lysine-coated glass coverslips in the presence or absence of Doxy for 3 days before being subjected to staining. The cells were fixed in ice-cold methanol for 10 min followed by air-drying. Afterwards, the cells were blocked in 5% normal donkey serum (Jackson ImmunoResearch, West Grove, PA) for 1 h before they were incubated with γ-H2AX antibody (Cell Signaling Technology, Cat # 9718S, 1:400 dilution) for 1 h and then with Dylight 488-conjugated donkey anti-rabbit IgG (Jackson ImmunoResearch, Cat # 711-485-152, 1:200 dilution) for 45 min. The cells were rinsed in 1× PBS three times after each incubation step. Finally, the coverslips were mounted on glass slides with VECTASHIELD Mounting Medium containing 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI) (Vector Laboratories, Burlingame, CA).

HR assay
The direct repeat green fluorescent protein (DR-GFP) reporter was used to measure HR activity in 293T cells with or without SRSF3 knockdown. Briefly, 293T cells grown in 12-well plates were infected at a multiplicity of infection of 5 with lentiviruses expressing SRSF3si1, SRSF3si2 or LUCsi for 12 h. Two days after infection, these cells were cotransfected with the plasmids pDRGFP, pCBASceI (Addgene, Cambridge, MA) and pmCherry-N1 (Clontech Laboratories, Mountain View, CA) by the calcium phosphate precipitation method [54]. The transfected cells were subjected to flow cytometric analysis for GFP-positive and mCherry-positive cells two days after transfection. The percentages of GFP-positive cells were normalized to the percentages of mCherry-positive cells before comparison.

Statistical analysis
Unless otherwise stated, Student's t-test was used in comparisons between samples. All tests were two-sided, and p-values < 0.05 were considered significant.
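For reference, the percent-of-input values reported for the ChIP-qPCR analysis (Fig. 4) are conventionally computed from Ct values as sketched below. The 1% input fraction is our assumption for illustration; the paper does not state the input aliquot it used.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent of input for one ChIP-qPCR target.

    ct_input is measured on an `input_fraction` aliquot of the chromatin,
    so it is first adjusted to represent 100% of the input material.
    """
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Example with invented Ct values: IP Ct of 28 vs. input Ct of 24 (1% aliquot).
print(f'{percent_input(28.0, 24.0):.3f}% of input')
```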
Reference gene identification for normalisation of RT-qPCR analysis in plasma samples of the rat middle cerebral artery occlusion model

Abstract
Objective: In quantitative reverse transcription-polymerase chain reaction (RT-qPCR) studies, the selection and validation of reference genes are crucial for the accurate analysis of microRNA (miRNA) expression. In this work, the optimal reference genes for RT-qPCR normalisation in plasma samples of rat middle cerebral artery occlusion (MCAO) models were identified.
Methods: Six rat MCAO models were established. Blood samples were collected before modelling and approximately 16-24 h after modelling. Two commonly used reference genes (U6 and 5S) and three miRNAs (miR-24, miR-122 and miR-9a) were selected as candidate reference genes, and the expression of these genes was detected with RT-qPCR. The acquired data were analysed using the geNorm, NormFinder, BestKeeper, RefFinder and comparative delta threshold cycle statistical models.
Results: The analysed results consistently showed that miR-24 was the most stably expressed reference gene. The 'optimal combination' calculated by geNorm was miR-24, U6 and 5S. The expression level of the target gene miR-124 was similar when the most stable reference gene miR-24 or the 'optimal combination' was used as a reference. However, compared with miR-24 or the 'optimal combination', the less stable reference genes influenced the fold change and the data accuracy, with a large standard deviation.
Conclusion: These results confirm the importance of selecting suitable reference genes for normalisation to obtain reliable results in RT-qPCR studies and demonstrate that the identified reference gene miR-24, or the 'optimal combination', can be used as an internal control for gene expression analysis in the rat MCAO model.

INTRODUCTION
MicroRNAs (miRNAs) are small non-coding RNAs consisting of 19-30 nucleotides; they regulate gene expression and play critical roles in many biological and pathological processes (Bartel, 2009). miRNA-124 (miR-124) is an almost central nervous system-specific miRNA that is preferentially expressed in the cerebrum and cerebellum and has been reported to be capable of protecting neurons from cerebral ischemic/reperfusion injury by regulating key processes such as neuroinflammation, oxidative stress (Kanagaraj et al., 2014) and neuronal excitability (Wang et al., 2016). The expression of miR-124 in peripheral blood has been reported to be influenced by cerebral ischemia and reperfusion (I/R; Sun et al., 2019). Therefore, the concentration of brain-specific miR-124 in peripheral blood is a promising biomarker for I/R. Quantitative reverse transcription-polymerase chain reaction (RT-qPCR) is a widely used method for the quantitative analysis of circulating miRNAs in biomedical research. In RT-qPCR analysis, a standard curve can be constructed to calculate the absolute quantity of a particular transcript, or an internal reference gene can be used to calibrate and standardise a target gene in order to acquire a relative quantification of the target gene and avoid the effects of differences in RNA quality, reverse transcription efficiency and PCR conditions across samples (Brunner et al., 2004; Gutierrez et al., 2008). Ideally, the expression of reference genes should not be affected by experimental conditions or differ between tissues. There is no reference gene suitable for all study systems (Liu, 2005).
For example, let-7a and miR-16 are stably expressed in healthy and breast cancer tissues, but they are not suitable for the quantitative correction of miRNAs in lung cancer patients (Davoren et al., 2008; Peltier & Latham, 2008). In young rats with pentylenetetrazole-induced seizures, the stability of nine tested reference genes, Actb, Gapdh, B2m, Rpl13a, Sdha, Ppia, Hprt1, Pgk1 and Ywhaz, varied significantly between different brain regions (Schwarz et al., 2020). Therefore, the stability of reference genes varies with different experimental settings, tissue types and even different tissue regions (Chapman & Waldenström, 2015). To ensure the efficiency of PCR, the selection of correspondingly stable reference genes according to the particular experimental conditions is an important prerequisite (Lin et al., 2014). In the present study, we aimed to evaluate the expression of a panel of candidate reference genes in rats subjected to the middle cerebral artery occlusion (MCAO) operation to mimic ischemic/reperfusion injury. We selected five candidate genes (U6 snRNA, 5S rRNA, miR-24, miR-122 and miR-9a) for identification. The five most popular algorithms, geNorm, BestKeeper, NormFinder, RefFinder and the comparative delta threshold cycle (ΔCt) approach, were used to evaluate the expression stability of the reference genes.

MATERIALS AND METHODS

Animals
Male Sprague-Dawley rats at the age of approximately 6-10 weeks were purchased from Beijing Vital River Laboratory Animal Technology Co. Ltd. The animals were group-housed in a controlled environment with a temperature of 20-26°C and a humidity of 40%-70%. The housing room was supplied with ≥15 air exchanges per hour of 100% fresh air (no air recirculation) and kept on a 12-h light/dark cycle. Food and water were supplied ad libitum. After a 3-day quarantine and a 2-day acclimation, the rats were screened for the establishment of MCAO models. The study protocol was approved by the Institutional Animal Care and Use Committee of Shanghai InnoStar Bio-Tech Co. Ltd. (IACUC No.: IACUC-2020-r-078), and this study was performed in accordance with the standard ethical guidelines established by the institution.

MCAO model establishment
The rat MCAO model was established using the Zea-Longa line plug method (Sempere et al., 2004). The rats were anaesthetised with 3% pentobarbital sodium at 30-45 mg/kg via intraperitoneal injection. After anaesthesia, the rats were placed in a supine position on an electric blanket (kept at 37 ± 0.5°C) on the operating table. In brief, the right common carotid artery was isolated and ligated at a distance of 1-1.5 cm from the bifurcation of the internal and external carotid arteries. A small incision of 0.2 mm was made near the head end of the common carotid artery ligation. The thread plug was inserted into the common carotid artery through the small incision and entered the internal carotid artery through the bifurcation of the internal and external carotid arteries. The thread plug was pushed along the internal carotid artery in the cranial direction and inserted up to the bifurcation of the anterior cerebral artery and middle cerebral artery, which blocked the blood supply from the ipsilateral internal carotid artery and from the contralateral internal carotid artery via the anterior cerebral artery. The time of cerebral ischemia was recorded from the successful insertion of the suture. After the operation, an infrared light was used to keep the animals warm.
After 2 h of cerebral ischemia, the suture left in vitro was gently pulled out by 1 cm to achieve middle cerebral artery reperfusion. The animals were housed separately after the MCAO operation.

Cerebral infarction determined by 2,3,5-triphenyltetrazolium chloride (TTC) staining
Two rats were randomly selected after modelling. The rats were anaesthetised after the neurological scoring test, and the brain tissues were separated. The brain tissue was sliced coronally. Each slice was about 2-3 mm thick and was stained with 2% TTC solution in a dark water bath at 37°C for 40 min. After staining, the brain slices were placed in 4% formalin at 4°C for 24 h. The ischemic region should appear white, while the normal region should appear red.

FIGURE 1 2,3,5-Triphenyltetrazolium chloride staining of rat brain tissue in the established model. In the middle cerebral artery occlusion (MCAO) rat, the infarcted brain area exhibited a white colour, while the non-infarcted brain area exhibited a red colour, which demonstrated the successful establishment of the MCAO model.

Extraction of total RNA
Total RNA was extracted from the plasma collected before and after MCAO model establishment using a combination of TRIzol LS reagent (Invitrogen) and the miRNeasy Serum/Plasma Kit (Qiagen) according to the manufacturers' instructions. We used a NanoDrop 1000 to measure the quantity of the extracted total RNA at 260 nm. Samples with OD260/OD280 values of 1.8-2.0 and measured concentrations of 8-12 ng/μl were selected.

RT-qPCR
Reactions were performed with 2 μl cDNA, 2.5 μl of each primer, 2.5 μl 10× miScript Universal Primer and 10 μl 2× SYBR PCR Master Mix under the following amplification conditions: 95°C for 15 min, followed by 40 cycles of 95°C for 15 s and 60°C for 60 s. All measurements were performed using three biological replicates.

Data analysis
p-values ≤ 0.05 were considered statistically significant.

Evaluation of the MCAO model
The successful MCAO model showed obvious neurological deficit symptoms, and the neurological score of the established models increased significantly. Stable models with scores of 2 or 3 were selected in this study. TTC staining was also adopted to evaluate the infarction (Figure 1).

Specificity and amplification efficiency of reference genes
The total RNA of candidate reference genes was reverse-transcribed into cDNA, which was used as the template for the RT-qPCR reactions. In these reactions, the melt curve of every candidate reference gene had a single peak, indicating that the designed primers amplified specifically without the formation of primer dimers. The linear correlation coefficients (R²) of all standard curves, obtained from two-fold dilution series covering a 5-log dynamic range, ranged from 0.985 to 0.998. The PCR amplification efficiencies ranged from 94.8% to 113.3% (Table 1). In general, strong correlation and high efficiency were noted for all tested primers. No amplification was noted in the absence of template.

Expression of candidate reference genes in pre-modelling rats
The abundance of the five candidate reference genes was detected with RT-qPCR and presented as Ct values (Figure 2). The mean Ct values of the reference genes before modelling ranged from 17.69 (5S rRNA) to 36.721 (miR-9a-5p). miR-24 showed the least variation (with a coefficient of variation (CV) of 2.5%), whereas 5S rRNA (with a CV of 8.6%) was the most variable reference gene. The CVs of the other reference genes ranged from 4.0% to 5.6%.
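The per-gene summary statistics reported above (mean Ct and CV) amount to the following computation; the Ct values in this sketch are illustrative, not the study's data.

```python
import numpy as np

ct_values = {
    'miR-24':  [23.1, 23.4, 22.9, 23.3, 23.0, 23.5],
    '5S rRNA': [17.2, 18.9, 16.5, 18.1, 17.0, 18.5],
}
for gene, cts in ct_values.items():
    cts = np.asarray(cts)
    cv = 100.0 * cts.std(ddof=1) / cts.mean()   # CV expressed as % of the mean Ct
    print(f'{gene}: mean Ct = {cts.mean():.2f}, CV = {cv:.1f}%')
```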
Effects of MCAO modelling on the expression of candidate reference genes

After log-transformation, the relative quantity of transcripts of the candidate reference genes was statistically analysed with the independent-samples t test between the pre-modelling and post-modelling rats (Figure 3). No statistically significant changes were noted in any reference genes between the pre-modelling and post-modelling rats (p > 0.05). Therefore, MCAO modelling did not affect the expression of the candidate reference genes in this study. The most stable reference gene was miR-24, followed by U6, miR-122, 5S and miR-9a in pre-modelling animals (Table 2A), and followed by 5S, U6, miR-122 and miR-9a in post-modelling animals (Table 2B).

Figure 3: Effects of MCAO modelling on the expression of candidate reference genes (n = 6). The relative expression of candidate reference genes in pre-modelling and post-modelling rats is shown. The bars show the mean value of the relative quantity of reference genes, and the standard deviations (SDs) of the six animals are also shown. An independent-samples t test between the pre-modelling and post-modelling rats was conducted, and no statistically significant changes were noted (p > 0.05).

Analysis of candidate reference genes based on the geNorm algorithm

The pairwise variation values (V_n/n+1) calculated by geNorm were used to determine the optimal number of reference genes. When V_n/n+1 < 0.15, the optimal number of reference genes is n; when V_n/n+1 > 0.15, the optimal number of reference genes is n + 1 (Vandesompele et al., 2002). In this study, all values of V_n/n+1 in pre-modelling animals were greater than 0.15, which suggested that three stable reference genes were optimal (Vandesompele et al., 2002; Zhang et al., 2020).

Figure 4: Gene expression stability ranking of candidate reference genes in pre-modelling animals (a) and post-modelling animals (b) based on the geNorm algorithm. The candidate reference gene with the smallest M value was considered the most stable gene.

Analysis of candidate reference genes based on the NormFinder algorithm

The algorithm principle of NormFinder is similar to that of geNorm. The candidate reference gene with the smallest stability value was considered the most stable gene. Following this approach, the stability values of the candidate reference genes ranged from 0.204 to 0.907 in pre-modelling animals and from 0.207 to 0.800 in post-modelling animals.

Analysis of candidate reference genes based on the BestKeeper algorithm

The BestKeeper algorithm ranked the candidate reference genes based on the standard deviation (SD), CV and coefficient of correlation (r). A candidate reference gene with a higher r value and lower SD and CV values was considered a stable reference gene. A candidate reference gene with SD < 1 was acceptable. In our study, miR-24 had the highest r value of 0.96 and the lowest SD and CV values of 0.73 and 2.04, respectively, followed by U6, miR-122, 5S and miR-9a in pre-modelling animals (Tables 2A and 3A). In post-modelling animals, miR-24 was also the most stable reference gene, followed by U6, miR-9a, miR-122 and 5S (Tables 2B and 3B).

Analysis of candidate reference genes based on the comparative ΔCt method

The ΔCt method compares 'pairs of genes' and bypasses the need to accurately quantify the input RNA (Silver et al., 2006). The candidate reference genes with lower SD values were considered stable reference genes.
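A minimal sketch of this pairwise ranking is given below. The scoring follows the comparative ΔCt idea (mean SD of per-sample ΔCt over all gene pairings), not the exact implementation used in the paper, and the Ct table is the same invented placeholder data as in the earlier sketch.

```python
import numpy as np

# Comparative delta-Ct ranking (after Silver et al., 2006): for each pair of
# candidate genes compute delta-Ct per sample; a gene whose delta-Ct varies
# little across samples (low SD) against every partner ranks as stable.
ct = {
    "U6":      [23.1, 22.4, 24.0, 23.5, 22.8, 23.3],
    "5S rRNA": [17.2, 19.5, 16.8, 18.9, 17.0, 16.6],
    "miR-24":  [27.8, 28.4, 27.5, 28.1, 27.9, 28.6],
    "miR-122": [30.2, 31.5, 29.8, 30.9, 30.4, 31.1],
    "miR-9a":  [36.0, 38.1, 35.4, 37.2, 36.6, 37.9],
}

def delta_ct_rank(ct):
    arr = {g: np.asarray(v, dtype=float) for g, v in ct.items()}
    score = {g: np.mean([np.std(arr[g] - arr[h], ddof=1)
                         for h in arr if h != g]) for g in arr}
    return sorted(score.items(), key=lambda kv: kv[1])  # most stable first

for gene, s in delta_ct_rank(ct):
    print(f"{gene:8s} mean pairwise SD of dCt = {s:.3f}")
```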
In our study, miR-24, with the lowest SD values in pre-modelling (SD value of 1.08) and post-modelling (SD value of 1.25) rats, was considered the most stable reference gene, followed by miR-122 and U6 (Table 2A,B).

Analysis of candidate reference genes based on the RefFinder algorithm

The expression stability analysis of candidate reference genes assessed using the above four proven statistical algorithms showed basic consistency in pre-modelling animals. However, the statistical algorithms showed inconsistency when analysed in post-modelling animals. To further confirm the rank, a comparative stability ranking of genes based on their Ct values was performed with RefFinder. Based on the stability value, miR-24 and U6 were the more stable reference genes (Tables 2A,B).

Table 3A: Expression stability of candidate reference genes in pre-modelling animals by BestKeeper.

Validation of identified reference genes

For further validation of the reliability of the selected reference genes in MCAO models, the effects on the expression of the target gene miR-124 of the most stable reference gene tested in this study (miR-24), of the 'optimal combination' of miR-24, U6 and 5S, and of two less stable reference genes (miR-9a and miR-122) were investigated (Figure 5). When normalised to miR-24, miR-124 increased by 12.60 ± 6.71-fold (p ≤ 0.01) in post-modelling animals compared with pre-modelling animals. The expression level of the target gene miR-124 was similar when the most stable reference gene miR-24 or the 'optimal combination' was used as a reference. However, the less stable reference genes influenced the fold change and the data accuracy, with a large SD. Since we used a surgical operation to establish the animal models, even though the operation process was conducted in strict accordance with the method described in Section 2.2 and the evaluation criteria were consistent, there were still biological variations between individuals in the expression level of the target gene. After normalisation, the less stable internal reference genes could amplify this variation. These results demonstrate the importance of selecting suitable reference genes for normalisation to obtain reliable results in gene expression studies.

DISCUSSION

RT-qPCR is frequently used to study relative gene expression in molecular biological research due to its versatility and accuracy. One of the most important factors ensuring the accuracy of RT-qPCR analyses is the stability of the reference gene selected for normalisation of gene expression data; therefore, it is crucial to select a stably and consistently expressed gene as an internal reference gene. miR-124, the most abundant miRNA in brain tissue, is strongly expressed in neurons, and its expression increases as neurons mature. miR-124 plays a critical role in controlling neuronal differentiation (Akerblom et al., 2012), neuroimmunity, synaptic plasticity and axonal growth (Rajasethupathy et al., 2009), which exert fundamental effects on normal brain processes. Recently, accumulating studies have demonstrated that miR-124 is aberrant in peripheral blood and brain vascular endothelial cells following cerebral ischemia. miR-124 is considered to be a potential diagnostic and prognostic biomarker and pharmacological target of ischemic encephalopathy. To date, we have not found in PubMed any report detailing the identification and validation of suitable reference genes for circulating miR-124 in rat MCAO modelling.
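Fold changes of the kind reported in the validation step are conventionally produced by the 2^(−ΔΔCt) calculation. The sketch below illustrates that calculation with invented Ct numbers; the function name and data layout are ours, not the paper's. When a multi-gene 'optimal combination' is used, the reference level is commonly taken as the geometric mean of the individual reference genes, following geNorm practice.

```python
import numpy as np

# 2^(-ddCt) fold change of a target miRNA (e.g. miR-124) relative to a chosen
# reference gene; all Ct numbers here are hypothetical placeholders.
def fold_change(ct_target_pre, ct_ref_pre, ct_target_post, ct_ref_post):
    d_ct_pre  = np.asarray(ct_target_pre)  - np.asarray(ct_ref_pre)
    d_ct_post = np.asarray(ct_target_post) - np.asarray(ct_ref_post)
    dd_ct = d_ct_post - d_ct_pre.mean()      # ddCt against pre-modelling mean
    return 2.0 ** (-dd_ct)                   # one fold-change value per animal

fc = fold_change(
    ct_target_pre=[30.1, 30.6, 29.8], ct_ref_pre=[28.0, 28.3, 27.9],
    ct_target_post=[26.4, 27.0, 26.1], ct_ref_post=[28.1, 28.2, 27.8],
)
print(f"fold change = {fc.mean():.2f} +/- {fc.std(ddof=1):.2f}")
```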
In order to identify suitable reference genes, we chose two commonly used reference genes (U6 and 5S) and three miRNAs (miR-24, miR-122 and miR-9a) to perform RT-qPCR. These candidate genes were chosen based on their biological functions (the gene function of U6 is RNA splicing, and the gene function of 5S is protein synthesis; Lardizábal et al., 2012) or their previous use as reference genes for miR-124 RT-qPCR analysis (Viola et al., 2019; Vuokila et al., 2018).

Figure 5: Relative expression of miR-124 in pre-modelling and post-modelling animals when normalised with miR-24 (a), miR-122 (b), miR-9a (c) and the optimal combination (d). The less stable reference genes influenced the fold change and the data accuracy with a large SD.

In our study, we established MCAO models and calculated the variation coefficient of Ct values, which evaluated the expression dispersion of each candidate reference gene among pre- and post-modelling samples. Five statistical algorithms were employed to thoroughly analyse the stability of the various candidate reference genes. miR-24 was demonstrated to be a stably and consistently expressed gene in both pre-modelling and post-modelling animals that can be used as a reference control in the expression profiling analysis of miR-124. Multiple reference genes are considered to be more reliable than a single reference gene for evaluating target gene expression. The optimal number of reference genes calculated by geNorm in this study was three. The 'optimal combination' was miR-24, U6 and 5S. We further validated the results with miR-24, the 'optimal combination', and two less stable reference genes, miR-9a and miR-122. The less stable reference genes led to poor normalisation and resulted in false-negative or false-positive results. The expression level of the target gene miR-124 was similar when the most stable reference gene miR-24 or the 'optimal combination' was used as a reference gene. In this study, miR-24 and the optimal combination were selected as stable internal reference genes due to the stability ranking (Schwarz et al., 2020). To date, GAPDH and β-actin reference genes have predominantly been used as internal reference controls due to their high and constant expression levels in many different cells and tissues (Eisenberg & Levanon, 2013; Zhu et al., 2008). However, cancerous tissues often exhibit a higher level of gene expression variability than normal tissues due to tumour heterogeneity, genetic instability and the fact that genetic alterations in diverse cancer types may differentially affect cellular processes at the transcriptome level. Therefore, Jo et al. (2019) proposed three potential reference genes (HNRNPL, PCBP1 and RER1) as the most stably expressed genes across various cancerous and normal human tissues (Jo et al., 2019).

In conclusion, to the best of our knowledge, this is the first study designed to screen a set of stable reference genes for miR-124 in rat MCAO models. For statistical analysis, the geNorm, NormFinder, BestKeeper, RefFinder and comparative ΔCt methods were used. The above algorithms all yielded the same most stable gene, miR-24. The ranking results of the other candidate reference genes were different, indicating the importance of using more than one software type to achieve the best result. The 'optimal combination' calculated by geNorm was miR-24, U6 and 5S. The present study is crucial for successful biomarker discovery and validation for the diagnosis of brain ischemia.
Cerebral ischemia is one of the leading causes of morbidity and mortality worldwide and can trigger a vast array of pathological processes, including apoptosis, inflammation, excitotoxicity, oxidative stress and mitochondrial dysfunction, that lead to neuronal cell death (Khoshnam et al., 2017; Rodrigo et al., 2013). Therefore, we infer that miR-124 could be a good biomarker for the above pathological processes of brain damage, with miR-24 and the 'optimal combination' of miR-24, U6 and 5S as stable reference genes.

CONFLICT OF INTEREST
All authors have no conflicts of interest.

FUNDING INFORMATION
The authors received no funding for this work.

ETHICS STATEMENT
All applicable international, national and institutional guidelines for the care and use of animals were followed. The protocol for animal care and use was approved by the Institutional Animal Care and Use Committee of Shanghai InnoStar Bio-Tech Co. Ltd.

AUTHOR CONTRIBUTIONS
Jing Ma, Xijie Wang and Hui Zhou contributed to the conception of the study; Xin Yang, Jiayi Yu, Jingyi Xu, Ruiwen Zhang and Ting Zhang performed the experiments; Hui Zhou contributed significantly to the analysis and manuscript preparation.

DATA AVAILABILITY STATEMENT
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
2022-07-28T06:18:21.957Z
2022-07-27T00:00:00.000
{ "year": 2022, "sha1": "6b1643e299bfb27173d7003392d9a5ca00eeef78", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b9ff090112298ec8424fd88952819b1b793edc50", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
53248142
pes2o/s2orc
v3-fos-license
Temperature dependence of the solid-liquid interface free energy of Ni and Al from molecular dynamics simulation of nucleation

The temperature dependence of the solid-liquid interfacial free energy, γ, is investigated for Al and Ni in the undercooled temperature regime based on a recently developed persistent-embryo method. The atomistic description of the nucleus shape is obtained from molecular dynamics simulations. The computed γ shows a linear dependence on the temperature. The values of γ extrapolated to the melting temperature agree well with previous data obtained by the capillary fluctuation method. Using the temperature dependence of γ, we estimate the nucleation free energy barrier in a wide temperature range from the classical nucleation theory. The obtained data agree very well with the results from the brute-force molecular dynamics simulations.

I. INTRODUCTION

The solid-liquid interfacial (SLI) free energy, γ, plays a fundamental role in the crystal nucleation and growth process [1]. It is also a key parameter required to model the formation of solidification microstructures [2]. Despite its importance, the measurement of the SLI free energy is extremely difficult in experiments. Therefore, computer simulation, which provides detailed atomistic information, remains heavily employed to quantitatively investigate γ.

A well-established method to compute γ is the capillary fluctuation method (CFM) [3], which measures the SLI stiffness based on capillary wave theory [4,5]. While the CFM makes an accurate determination of γ, it is only available at the melting point Tm and is usually computationally expensive [6]. To obtain γ at other temperatures, Laird and co-workers further extended the CFM results along the pressure-temperature coexistence curve using the "Gibbs-Cahn integration" method [7]. However, the temperature dependence of γ at p = 0 remains unclear. Moreover, in the case when several crystal phases compete with each other, a large pressure can trigger the nucleation of a phase which was metastable at p = 0. On the other hand, one can make an indirect measurement of the SLI free energy from a nucleation simulation with the classical nucleation theory (CNT) [8,9]. This method utilizes the results of molecular dynamics (MD) simulations where the critical nucleus was actually observed. While the method is in principle reliable (see details below), the accuracy strongly depends on the measurement of the size and shape of the critical nucleus [10]. In particular, this method faces the well-known difficulty associated with the fact that nucleation is usually too rare an event. Recently we developed a persistent-embryo method (PEM) [11] to overcome this problem in moderately undercooled liquids. With the PEM, one can observe the actual fluctuations of the large critical nucleus without any biasing. In this work, using the PEM, we determined the average nucleus shape for two fcc crystals, Al and Ni, in the moderately undercooled regime. Then the temperature dependence of the SLI free energy was obtained in the framework of the CNT. These data were used in turn to predict the free energy barrier in a wide temperature range for both systems.
The rest of the paper is organized as follows: in Section II, we introduce the persistent-embryo method and provide the simulation details. In Section III, we present the obtained temperature dependences of the SLI free energy for Al and Ni. In Section IV, we show that the obtained SLI free energy data lead to nucleation barriers in agreement with the data determined using a very different technique. In Section V, we discuss the obtained results, and we provide the summary in Section VI.

II. PERSISTENT EMBRYO METHOD

According to the CNT [1], homogeneous nucleation involves the formation of the critical nucleus in the undercooled liquid. The formation of such a nucleus is governed by two factors. The first one is the thermodynamic driving force towards the lower-free-energy bulk crystal. This term is negative and proportional to the number of atoms in the nucleus. The other is the energy penalty for creating an interface between the nucleus and the liquid. This term is positive and proportional to the area of the interface. Therefore, the excess free energy to form a nucleus with N atoms is

$$\Delta G(N) = N\,\Delta\mu + \gamma A, \qquad A = s\,(N/\rho)^{2/3}, \qquad (1)$$

where Δμ (< 0) is the chemical potential difference between the bulk solid and liquid, γ is the solid-liquid interfacial free energy, A is the interface area, ρ is the crystal density and s is a shape factor. The competition between the bulk and interface terms leads to a nucleation barrier ΔG* when the nucleus reaches the critical size N*, i.e.,

$$\Delta G^{*} = \frac{4\,s^{3}\gamma^{3}}{27\,\rho^{2}\,\Delta\mu^{2}}. \qquad (2)$$

In the original CNT the nucleus is assumed to be spherical; this assumption can be lifted by introducing the shape factor s, assuming that the averaged shape of the sub-critical nucleus does not change at the critical size. Mathematically, the interfacial free energy γ and the shape factor s in Eq. (2), which are both difficult to compute, can be related to the critical nucleus size N* at the critical point [11] based on the relation ΔG* = ½|Δμ|N*, which gives

$$\gamma = \frac{3\,|\Delta\mu|\,\rho^{2/3}\,N^{*1/3}}{2s}. \qquad (3)$$

According to Eq. (3), four quantities (ρ, Δμ, N*, and s) are needed from the MD to calculate the interfacial free energy at a given temperature. The determination of the crystal density, ρ, is trivial. The chemical potential difference, Δμ, can be calculated by integrating the Gibbs-Helmholtz equation from the undercooling temperature to the melting point [12]. The critical nucleus size N* and the shape factor s can be obtained from the PEM simulations, which will be described in detail below.
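To make the bookkeeping of Eqs. (1)-(3) concrete, a short sketch is given below. The input numbers are illustrative placeholders rather than values from the paper, and the helper names are ours.

```python
import numpy as np

def gamma_from_cnt(n_star, dmu, rho, s):
    """Eq. (3): gamma = 3 |dmu| rho^(2/3) N*^(1/3) / (2 s)."""
    return 1.5 * abs(dmu) * rho ** (2.0 / 3.0) * n_star ** (1.0 / 3.0) / s

def barrier_from_cnt(n_star, dmu):
    """Delta G* = |dmu| N* / 2, the barrier at the critical size."""
    return 0.5 * abs(dmu) * n_star

S_SPHERE = (36.0 * np.pi) ** (1.0 / 3.0)  # shape factor of an ideal sphere, ~4.836

n_star = 300    # atoms (illustrative critical size)
dmu = -0.05     # eV/atom, solid minus liquid (illustrative)
rho = 0.085     # atoms/A^3 (illustrative fcc density)
gamma = gamma_from_cnt(n_star, dmu, rho, S_SPHERE)
print(f"gamma   = {gamma * 1e3:.1f} meV/A^2")
print(f"DeltaG* = {barrier_from_cnt(n_star, dmu):.2f} eV")
```

For a sphere, A = 4πr² and V = (4/3)πr³ give A = (36π)^{1/3} V^{2/3}, which is where the `S_SPHERE` constant comes from; a measured non-spherical nucleus simply has a larger s.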
The PEM utilizes the main CNT concept that homogeneous nucleation happens via the formation of the critical nucleus in the undercooled liquid. The PEM allows efficient sampling of the nucleation process by preventing a small crystal embryo (with N₀ atoms, which is much smaller than the critical nucleus) from melting, using external spring forces [11]. This removes long periods of ineffective simulation where the system is very far away from forming a critical nucleus. As the embryo grows, the harmonic potential is gradually weakened and is completely removed when the cluster size reaches a sub-critical threshold N_sc (< N*). During the simulation, the harmonic potential only applies to the original N₀ (< N_sc) embryo atoms. The spring constant of the harmonic potential decreases with increasing nucleus size N as κ(N) = κ₀(1 − N/N_sc) if N < N_sc and κ(N) = 0 otherwise. This strategy ensures the system is unbiased at the critical point, such that a reliable critical nucleus is obtained. If the nucleus melts below N_sc (< N*), the harmonic potential is gradually enforced, preventing the complete melting of the embryo. When the nucleus reaches the critical size, it has an equal chance to melt or to further grow, causing fluctuations about N*. As a result, the N(t) curve tends to display a plateau during the critical fluctuations, giving a unique signal to detect the appearance of the critical nucleus. In addition, multiple plateaus can be collected before a critical nucleus eventually grows, allowing sufficient statistical analysis of the nuclei's size and shape.

All MD simulations in the present study were performed using the GPU-accelerated LAMMPS code [13-15]. The interatomic interaction was modelled using the Finnis-Sinclair potentials [16] developed for Ni [17] and Al [12]. During the MD simulation, the NPT ensemble was applied with Nose-Hoover thermostats. The damping time in the Nose-Hoover thermostat was set as τ = 0.1 ps, which is frequent enough for the heat dissipation during the crystallization (see the Supplementary Material). The time step of the simulation was 1.0 fs. The simulation cell contained up to 32,000 atoms, which is at least 20 times larger than the critical nucleus size. This setting ensures that the effect of the pressure change during the nucleation on the entire simulation box is minimal (see the Supplementary Material).

To identify the nucleus size during the MD simulation, we used the bond-orientational order (BOO) parameter [18,19]. In this approach, one first defines the correlation between the structures of two neighbor atoms i and j as

$$S_{ij} = \frac{\sum_{m=-6}^{6} q_{6m}(i)\, q_{6m}^{*}(j)}{\left(\sum_{m} |q_{6m}(i)|^{2}\right)^{1/2} \left(\sum_{m} |q_{6m}(j)|^{2}\right)^{1/2}}, \qquad q_{6m}(i) = \frac{1}{N_{b}(i)} \sum_{j=1}^{N_{b}(i)} Y_{6m}(\hat{\mathbf{r}}_{ij}),$$

where q_{6m}(i) is the Steinhardt parameter, Y_{6m}(r̂) are the spherical harmonics, N_b(i) is the number of nearest neighbors of atom i and r̂_ij is the unit vector connecting it with its neighbor j. Two neighboring atoms i and j are considered to be connected when S_ij exceeds a threshold S₀. To choose a reasonable value of S₀, Espinosa et al. suggested an "equal mislabeling" method [20] based on plotting the population of mislabeled atoms in the bulk solid and liquid as a function of the threshold value. As shown in Fig. 1(a), the crossing point of the mislabeling curves of the bulk liquid and solid phases is chosen as the threshold S₀, so that the probability of mislabeling atoms in the bulk liquid as solid-like atoms is the same as the probability of mislabeling atoms in the bulk solid as liquid-like atoms. This approach works very well when one needs to detect "solid" atoms within a bulk liquid.
However, it tends to mislabel "solid" atoms at the cluster interface. To account for that, one can determine how many solid-like neighbors an atom has. Figure 1(b) shows that this quantity, n_s, is quite different for the majority of atoms in the bulk solid and liquid phases, and the number of mislabeled atoms is very small (see the insert in this figure). Intuitively, it is natural to choose the threshold value, n₀, to be 6 for fcc-liquid interfaces. This approach is quite sufficient for the PEM, which requires on-the-fly identification of solid-like atoms during the MD simulation. However, a recent study shows that the choice of n₀ considerably affects the value of N* determined from the MD snapshots [21]. We will return to this issue in Section V.

III. TEMPERATURE DEPENDENCE OF THE SLI FREE ENERGY

Figure 2(a) shows a typical PEM simulation. The plateau indicates the appearance of the critical nucleus. Therefore the critical size N* can be directly measured by averaging the size at the plateau [11]. To make a statistically sound description of the nucleus shape, we first averaged the nucleus by superposing the configurations collected in a short time interval (Δt₀ = 10 ps) during the plateau. As shown in Fig. 2(b), the superposed configuration shows a clearly non-spherical nucleus shape. Since the crystalline order fades in the interfacial region, it results in a less dense atomic distribution at the outer shell of the nucleus. In order to see the averaged nucleus shape more clearly, a Gaussian smearing scheme [22-24] was applied to convert the atomic distribution into the atomic density in 3D space. By applying a fast-clustering algorithm [25] to the density profile, we were able to extract the high-density points, which are essentially the as-formed crystalline sites. Then the crystalline sites which were occupied in at least half of the snapshots collected during the time interval Δt₀ were used to construct the surface of the nucleus by the geometric surface reconstruction method [26] integrated in the OVITO software package [27], as shown in Fig. 2(b). Finally, the shape factor s was computed from the surface area A and the volume V of the polyhedron computed by OVITO as s = A/V^{2/3}. Figure 2(c) shows the measured shape factor and the critical nucleus size as functions of temperature for Ni. The shape factor clearly demonstrates a non-spherical shape. However, while the critical nucleus size dramatically increases with the increase of the temperature, the shape factor shows only a slight decrease. With the measured shape factor and the critical size, the interfacial free energy can be calculated by Eq. (3). Figure 3 shows the obtained data for both Ni and Al. In both systems, the interfacial free energy shows a nearly linear dependence on the temperature. Therefore, we fit the data with a linear relation to the temperature and extrapolate to the melting point. Figure 3 shows that, within the accuracy of the measurement, the extrapolated interfacial free energies agree very well with the data obtained by the CFM [3] for both Al and Ni [28].
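The linear fit and extrapolation just described, together with the barrier extrapolation used in the next section, amount to a few lines of code. In the sketch below, all data points and the models for Δμ(T) and s(T) are invented placeholders, not the paper's measured values.

```python
import numpy as np

T = np.array([1280.0, 1330.0, 1380.0, 1430.0])      # K (illustrative)
gamma = np.array([12.5, 12.8, 13.2, 13.5]) * 1e-3   # eV/A^2 (illustrative)

slope, intercept = np.polyfit(T, gamma, 1)          # gamma(T) ~ intercept + slope*T
T_m = 1710.0                                        # K, illustrative melting point
print(f"gamma(T_m) ~ {(intercept + slope * T_m) * 1e3:.2f} meV/A^2")

def barrier(T_val, rho=0.085):
    """Eq. (2): Delta G* = 4 s^3 gamma^3 / (27 rho^2 dmu^2)."""
    g = intercept + slope * T_val
    dmu = -0.18 * (T_m - T_val) / T_m               # ~ dH_f * dT / T_m, eV/atom
    s = 5.4 - 2.0e-4 * T_val                        # weakly decreasing shape factor
    return 4.0 * s**3 * g**3 / (27.0 * rho**2 * dmu**2)

print(f"DeltaG*(1200 K) ~ {barrier(1200.0):.2f} eV")
```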
IV. CALCULATION OF THE NUCLEATION BARRIER

A straightforward application of the temperature dependence of the interfacial free energy is to estimate the free energy barrier at very small and very large supercoolings, where the PEM cannot be applied. The case of very small supercooling is interesting because it corresponds to the experimental conditions of solidification. The only way to judge the reliability of the calculations here is to compare with the experimental data, although both experimental and computational data will be affected by factors not related to the CNT (e.g., the quality of the employed semi-empirical potential in the case of simulation, or the presence of impurities in the case of experiment). The case of very large supercooling in pure metals is interesting because the nucleation rate can be directly obtained from the MD simulation. In this case, the quality of the employed semi-empirical potential is not an issue. However, the extrapolation to this temperature range may not work because of several other issues. For example, the temperature dependence of the SLI free energy can be different from the one observed at higher temperatures. Another issue is associated with the fact that the critical nucleus at low temperatures becomes so small that the entire CNT concept may not be applicable.

In the extrapolation of the nucleation barriers (see Eq. 2), we used a linear fitting for the temperature dependences of the SLI free energy and the shape factor (see Fig. 2). The obtained temperature dependences of the nucleation barriers are shown in Fig. 4. They describe our PEM data well, which was expected because these dependences were obtained by fitting to the PEM data. The question is whether these dependences can be useful to predict the nucleation barrier in a temperature range where the PEM is not applicable. In the case of Ni, the nucleation barrier for the same semi-empirical potential was obtained at T = 1180 K [29] using the combination of the mean first-passage time (MFPT) method [30-32] and the Fokker-Planck equation [30,31,33] directly from an unbiased MD simulation [34,35]. In the present work, we used exactly the same approach to obtain the nucleation barrier for Al at T = 580 K. Figure 4 shows that the obtained MFPT data are in excellent agreement with the data we obtained using the temperature dependences of the SLI free energies.

V. DISCUSSION

In the present study we obtained the temperature dependence of the SLI free energy in the moderate undercooling range, where other existing techniques are not applicable. Therefore, to validate the obtained results we extrapolated the obtained temperature dependences to the temperatures where well-established methods can be applied. Figure 3 shows that the extrapolation to the melting temperature agrees very well with the CFM data. It should be noted that, contrary to the CFM, which provides the SLI free energy as a function of the interface orientation, in the present study we obtained the SLI free energy averaged over all orientations using the CNT framework (see Eq. 3). Therefore, we compare the current results to the γ₀ value from the CFM (see Eq. 1 in Ref. [3]).
This is reasonable for pure Ni and Al, since the anisotropy of the SLI free energy is not very large for the pure fcc metals [36,37], at least at the melting temperature. Moreover, the PEM provides ample statistics to measure the shape of the nucleus in the temperature range where it is applicable, and in the present work we did not observe a very large deviation from the spherical nucleus shape. However, one should be cautious in the interpretation of the SLI free energy value obtained from the PEM in the case of a crystal phase with a very anisotropic SLI free energy (e.g., see Fig. 10 in Ref. [38]).

Another possibility to validate our results was to extrapolate the obtained temperature dependences to low temperatures and compare the obtained nucleation barrier free energies with the data obtained from the brute-force MD simulations. The obtained excellent agreement is rather surprising because it suggests that the CNT still works at these temperatures, in spite of the fact that the critical nucleus size (only tens of atoms [29]) is so small that it is not really possible to distinguish between the bulk and the interface regions within a nucleus. In this case, even the concept of the SLI free energy is not clear. Yet, one can always describe the change in the free energy associated with the nucleus formation as the sum of two contributions: the product of the difference in the bulk free energy per atom and the number of atoms in the nucleus, and a contribution which accounts for the nucleus interface. The latter can be treated as the flat-interface free energy corrected for the high interface curvature (e.g., see Refs. [35,39,40]). In fact, this is the quantity we obtained from the PEM. At high temperatures, where the nucleus is large and the correction for the high interface curvature is negligible, we obtained a good agreement with the flat-interface free energy data from the CFM. At low temperatures, we obtained a good agreement with the brute-force MD simulation data, but the quantity we extracted includes not just the flat SLI free energy but also corrections associated with the SLI curvature. The authors of Ref. [41] argued that precisely these corrections explain why the value of the SLI free energy obtained from seeding simulations is always below that estimated from the Turnbull correlation [42], which was proposed in Ref. [34] as an estimate of the temperature dependence of the SLI free energy. The temperature dependences of the SLI free energy obtained in the present study are also below the predictions based on the Turnbull correlation (see Fig. 3).

The main source of the uncertainty in the determined value of the SLI free energy comes from the uncertainty in the determination of the number of atoms in the critical crystal cluster, N*. This quantity can be rather sensitive to the choice of order parameters, as has been noted in Refs. [21,43], and can be seen in Fig. 5.
In addition to the BOO parameter, we also employed the cluster-alignment (CA) method [23], in which minimal root-mean-square deviations (RMSD) between the atom cluster and perfect packing templates such as fcc, hcp and bcc polyhedra are calculated for crystal-structure recognition. Interestingly, the CA order parameter leads to almost identical results compared to the use of the BOO parameter with n₀ = 6, which was assumed to be the most reasonable value. Figure 6 shows how the uncertainty in N* caused by the choice of the order parameters propagates into the uncertainty of the SLI free energy determined within the present study. A vivid systematic difference can be seen. However, it is important that the temperature dependence remains qualitatively the same: no matter what order parameter we used, the obtained temperature dependence was linear. What is even more important is that all lines come to almost the same point, which is in excellent agreement with the CFM value of the SLI free energy.

Next, we examined whether the Nose-Hoover relaxation time is longer than the timescale over which the size of the nucleus fluctuates. We note that the thermostat is applied to the entire simulation cell rather than to the growing nucleus region. Therefore, the thermostat can affect the obtained results only if it cannot keep up with the latent heat generated during the solidification. To test the employed thermostat, we performed a simulation of a nucleus which well exceeded the critical size. In this case the heat generation is much faster than in the case of the critical nucleus, and if the employed thermostat is suitable for this situation it will be even more suitable for the critical nucleus simulations. In the MD simulations reported in our paper, the damping parameter of the Nose-Hoover thermostat was set as τ = 0.1 ps (following the recommendation of the LAMMPS developers). Figure S2(a) shows the increase of the number of atoms in the solid phase during the growth, and the temperature in the model during this simulation. Obviously, in this case the employed thermostat is capable of keeping up and the temperature does not change. Figure S2(b) shows the same simulation except that τ = 10 ps. In this case, the temperature increases during the simulation; therefore, this choice of the thermostat damping parameter is not appropriate. Thus, the simulations described above justify that the choice of τ = 0.1 ps is good enough for the heat dissipation during the crystallization. Note again that, since the nucleus size does not change significantly at the plateau, we should not expect a considerable latent heat to be generated during the plateau period.

σ_s, σ_{N*}, σ_{Δμ} and σ_ρ are the statistical uncertainties of measuring the shape factor s, the critical nucleus size N*, the chemical potential difference Δμ and the solid density ρ, respectively. The statistical uncertainty of the PEM simulation mainly comes from the measurement of the nucleus size and shape. The determinations of ρ and Δμ from the MD simulation are very accurate for pure metals; therefore, we assume that σ_{Δμ} = σ_ρ = 0. The statistical uncertainty of γ (in Fig. 3 of the main text) then becomes

$$\sigma_{\gamma} = \frac{3\,|\Delta\mu|\,\rho^{2/3}\,N^{*1/3}}{2s}\sqrt{\frac{\sigma_{s}^{2}}{s^{2}} + \frac{\sigma_{N^{*}}^{2}}{9\,N^{*2}}}, \qquad (S3)$$

and, according to the equation ΔG* = ½|Δμ|N*, the uncertainty of the free energy barrier ΔG* (in Fig. 4 of the main text) is σ_{ΔG*} = ½|Δμ|σ_{N*}. These are plotted as the error bars in Fig. 3 and Fig. 4 of the main text. The systematic uncertainty, which comes from the definition of the order parameters, has been discussed in Section V of the main text.

III. THE PRESSURE ON THE NUCLEUS

The current simulations use a Nosé-Hoover based NPT simulation technique. Such methods ensure that the pressure in the simulation box fluctuates around the target input value. In this case, a simulation box containing a crystal nucleus and a bulk fluid separated by a solid-liquid interface will have a pressure that is inhomogeneous within the system. To test whether this effect changes the pressure significantly, we examined the PEM simulation data for the plateau period shown in Fig. 2(a) of the main text for Ni at 1430 K. To compute the local pressure in the nucleus, we defined a nucleus region by setting a box which covers most of the nucleus atoms, as shown in Fig. S3(b). We also set a similar box in the bulk liquid. As shown in Fig. S3(a), the local pressures in the liquid region and the nucleus region did not show a significant difference while fluctuating. The averaged pressures over the time period are -0.048 GPa and 0.062 GPa for the liquid region and the nucleus region, respectively, which is a very small effect on the entire simulation box. It is very unlikely that such a small pressure can affect the obtained results, but more studies are needed.

Fig. 1. Determination of the threshold to distinguish solid-like and liquid-like atoms. (a) Population of mislabeled atoms for different threshold values in bulk Ni crystal and liquid at 1430 K. (b) Population of connection numbers per atom in bulk Ni crystal and liquid at 1430 K. The insert zooms in on the region from 3 to 9.

Fig. 2. (a) Nucleus size as a function of time in a typical PEM simulation for Ni at 1430 K. The blue dashed line shows the size of the embryo, N₀, and the green dashed line shows the threshold N_sc at which the spring on the embryo atoms is removed. The box (red) indicates the plateaus of the critical nucleus. (b) From left to right: superposed fcc nucleus configurations obtained from the plateau in the PEM simulation; the density contour plot corresponding to the atomic distribution in the superposed configuration; the surface of the polyhedron constructed from the high-density points. (c) The measured shape factors (black) and the size (blue) of the critical nucleus for Ni as a function of the temperature. The error bars are obtained by measuring the shape factors of different critical nuclei collected from PEM simulations. The dashed line indicates the shape factor under the spherical-shape assumption.

Fig. 3. The interfacial free energy as a function of the temperature for Ni and Al. The open squares and circles are the data obtained using the PEM. The error bars of the PEM results are obtained by the error propagation of Eq. (3), σ_γ = (3|Δμ|ρ^{2/3}N*^{1/3}/2s)·√(σ_s²/s² + σ_{N*}²/(9N*²)), where σ_s and σ_{N*} are the statistical uncertainties of the measurement of s and N* in the PEM simulations. The filled square and circle are the data obtained at the melting points of Ni and Al using the CFM [28]. The dashed lines are the linear fittings and extrapolations of the PEM data (γ_Ni = 4.475 + 0.006290 T and γ_Al = 3.819 + 0.002788 T, per Å²). The solid lines are obtained from the Turnbull correlation. The insert shows the linear fitting of the measured shape factor as a function of the temperature for both systems. The red dashed line in the insert shows the shape factor of the spherical assumption as a reference.

Fig. 4. The predicted temperature dependence of the nucleation barrier for Ni and Al. The PEM data for Ni are from Ref. [11] and the MFPT data for Ni are from Ref. [29]. The PEM and MFPT data for Al were measured in the current work. The error bars are obtained as σ_{ΔG*} = ½|Δμ|σ_{N*}, where σ_{N*} are the statistical uncertainties of the critical nucleus size.

Fig. 5. Dependence of the critical nucleus size in Ni determined from MD simulation on the choice of the order parameter (BOO or CA) and of the threshold value in the BOO parameter.

Fig. 6. The temperature dependence of the SLI free energy in Ni calculated with the critical nucleus sizes determined using different order parameters. The dashed lines indicate the linear fitting of the dots/squares with the same colour.

Fig. S1. The local temperature as a function of the radius from the nucleus centre of mass for Ni at 1430 K. The dashed line shows the target temperature of the simulation. The grey shadow indicates the average radius of the nucleus.

Fig. S2. (a) The growth of the supercritical nucleus of Ni at 1430 K. The damping time of the Nose-Hoover thermostat is set as τ = 0.1 ps. The upper panel shows the size of the nucleus, while the lower panel monitors the temperature of the whole simulation cell. (b) The simulation starts from the same initial configuration as (a), while the damping time is set as τ = 10 ps.

Fig. S3. (a) The pressure as a function of time during the plateau of the critical nucleus for Ni at 1430 K. The dashed line is a reference at P = 0 GPa. (b) The MD snapshot at t = 450 ps. The liquid region and nucleus region are highlighted by the blue and green boxes, respectively. The large red dots are the nucleus atoms, while the small black dots are liquid atoms. The size of the boxes in the liquid and nucleus regions is 16 × 16 × 16 Å³.
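A sketch of the propagation in Eq. (S3) and of the barrier uncertainty is given below; the inputs are placeholders and the function names are ours.

```python
import numpy as np

def sigma_gamma(n_star, sig_n, s, sig_s, dmu, rho):
    """Eq. (S3): uncertainty of gamma from the uncertainties of s and N*."""
    g = 1.5 * abs(dmu) * rho ** (2.0 / 3.0) * n_star ** (1.0 / 3.0) / s
    rel = np.sqrt((sig_s / s) ** 2 + (sig_n / (3.0 * n_star)) ** 2)
    return g * rel

def sigma_barrier(sig_n, dmu):
    # Delta G* = |dmu| N* / 2  =>  sigma_DG* = |dmu| sigma_N* / 2
    return 0.5 * abs(dmu) * sig_n

print(sigma_gamma(n_star=300, sig_n=30, s=5.2, sig_s=0.1, dmu=-0.05, rho=0.085))
print(sigma_barrier(sig_n=30, dmu=-0.05))
```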
2018-10-28T00:42:55.000Z
2018-10-28T00:00:00.000
{ "year": 2018, "sha1": "23079f57e41cc044b30da6359cac56d2f009c2a5", "oa_license": null, "oa_url": "https://dr.lib.iastate.edu/bitstreams/0f7f8096-0da7-4f5c-8fca-8ccf18117c8d/download", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "23079f57e41cc044b30da6359cac56d2f009c2a5", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine", "Physics", "Chemistry" ] }
38289443
pes2o/s2orc
v3-fos-license
Slow Breathing Training Reduces Resting Blood Pressure and the Pressure Responses to Exercise

Slow breathing training reduces resting blood pressure, probably by modifying central autonomic control, but evidence for this is lacking. The pressor response to static handgrip exercise is a measure of autonomic control, and the aim of this study was to determine whether slow breathing training modulates the pressor responses to exercise of untrained muscles. Twenty hypertensive patients trained for 8 weeks, 10 with unloaded slow breathing (Unloaded) and 10 breathing against an inspiratory load of 20 cm H₂O (Loaded). Ten subjects were untrained controls. Subjects performed a 2 min handgrip pressor test (30% MVC) pre- and post-training, and blood pressure and heart rate (HR) were measured before the contraction, at the end, and following 2 min recovery. Resting systolic blood pressure (sBP) and HR were reduced as a result of training, as reported previously. After training there was both a smaller pressor response to handgrip exercise and a more rapid recovery of sBP and HR compared to pre-training. There were no changes in the Controls and no differences between the Unloaded and Loaded groups. Combining the two training groups, the sBP response to handgrip exercise after training was reduced by 10 mm Hg (95% CI: −7, −13) and HR by 5 bpm (95% CI: −4, −6), all p<0.05. These results are consistent with slow breathing training modifying central mechanisms regulating cardiovascular function.

Introduction

With a prevalence rising to over 60% in older age groups, hypertension is recognised as a major health problem throughout the world, leading to a range of life-threatening cardiovascular diseases. While there is a range of pharmaceutical treatments available, there is also a need for non-pharmaceutical interventions, partly because they are more affordable in the developing world, but also because they offer the prospect of addressing the underlying problem rather than just the symptoms.

Whole body exercise training has an important role to play in the management of hypertension (Halbert et al. 1997, Whelton et al. 2002), but while the mechanisms underlying the improvement are not fully understood, there are reasons to think that it involves changes in central autonomic control of blood pressure rather than, or in addition to, any peripheral modifications. For instance, training two legs reduces resting blood pressure (Devereux et al. 2010) and might be analogous to whole body exercise, but the same benefit can be achieved by training only one leg (Ray 1999) or even a relatively small muscle mass, such as with handgrip exercise (Ray and Carrasco 2000, Taylor et al. 2003), the latter differing qualitatively and quantitatively from whole body aerobic exercise.

Interestingly, reductions in resting blood pressure comparable with those obtained by whole body exercise are also seen with the practice of yoga and meditation (Patel and North 1975, Spicuzza et al. 2000), and a common feature of these techniques is slow and regular breathing (Bernardi et al. 2001b, Cysarz and Büssing 2005). Furthermore, a number of randomized controlled studies have shown slow breathing to be effective in reducing blood pressure (Jones et al. 2010, Schein et al. 2001, 2009, Sharma et al. 2011).
Slow breathing does not place any serious energetic demands on the inspiratory muscles, at least in the absence of an inspiratory load, and is unlikely, therefore, to result in any peripheral adaptations of the kind that may occur with aerobic training. An alternative mechanism is that slow breathing training modifies some aspect of the central control of blood pressure. Lehrer et al. (2003, 2006) have shown that heart rate variability (HRV) feedback, which is essentially slow breathing training, increased baroreflex gain.

In addition to increased resting blood pressure, hypertensive subjects have an enhanced blood pressure response to static exercise (Delaney et al. 2010, Sausen et al. 2009). Aerobic training reduces the rise in blood pressure in response to muscle contraction (O'Sullivan and Bell 2001), which may be partly due to metabolic adaptations in the trained muscle (Fisher and White 2004), but it is also possible that there is a down-regulation of metaboreflex sensitivity, either at the peripheral receptors or of the central autonomic response to afferent stimulation.

It is not known whether slow breathing training which reduces resting blood pressure also reduces the pressor response to muscle contraction, but if it does, this would be strong evidence of a central adaptation, since slow breathing is most unlikely to cause any adaptations in an untrained peripheral muscle. The primary objective of the present study was, therefore, to examine this point. It was hypothesized that slow breathing training would not only improve resting blood pressure in patients with essential hypertension but would also reduce the blood pressure responses to contraction of a muscle that had not been trained. Although slow breathing entails minimal energy expenditure, it is possible that signals modifying autonomic function might arise from working muscles, in which case adding an inspiratory load might increase the beneficial effects of slow breathing training.

Subject characteristics

Subjects were recruited from patients attending the hypertension clinic of Srinagarind Hospital into the study, which was approved by the Ethical Committee of Khon Kaen University. The patients all received full information about the nature of the study before providing written consent.

1. Laboratory-based measurements

Subjects reported to the laboratory between 9-10 am and rested in a comfortable chair for 15 min before any measurements were made.

Heart rate and heart rate variability

ECG was recorded with a three-lead BIOPAC MP100 system and resting heart rate was calculated from the R-R interval. Heart rate was then recorded over a 5 min period with the subjects resting and breathing at 10-12 breaths per minute, and subsequently analysed off line for heart rate variability (HRV) in the frequency domain. Total spectral power (ms²) and the power in the low frequency (0.04-0.15 Hz) and high frequency (0.15-0.4 Hz) regions are reported, together with the low to high frequency ratio (LF/HF).
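The frequency-domain HRV analysis described here (band powers over 0.04-0.15 Hz and 0.15-0.4 Hz from an R-R interval series) can be sketched as follows. The resampling rate, window choice and the synthetic R-R series are our assumptions, not details taken from the methods.

```python
import numpy as np
from scipy.signal import welch

# Frequency-domain HRV: resample the tachogram evenly, estimate the power
# spectrum, and integrate the LF and HF bands used in the text.
def hrv_bands(rr_ms, fs=4.0):
    t = np.cumsum(rr_ms) / 1000.0                 # beat times, s
    t_even = np.arange(t[0], t[-1], 1.0 / fs)     # 4 Hz resampling grid
    rr_even = np.interp(t_even, t, rr_ms)
    rr_even -= rr_even.mean()                     # remove the DC component
    f, psd = welch(rr_even, fs=fs, nperseg=min(256, len(rr_even)))
    band = lambda lo, hi: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
    lf, hf = band(0.04, 0.15), band(0.15, 0.40)   # ms^2
    return lf, hf, lf / hf

rng = np.random.default_rng(0)
beats = 350
t_beat = np.arange(beats) * 0.85                  # ~70 bpm baseline
rr = 850 + 40 * np.sin(2 * np.pi * 0.1 * t_beat) + 10 * rng.standard_normal(beats)
lf, hf, ratio = hrv_bands(rr)
print(f"LF = {lf:.0f} ms^2, HF = {hf:.0f} ms^2, LF/HF = {ratio:.2f}")
```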
Isometric handgrip challenge

The challenge consisted of a sustained isometric handgrip contraction on the dominant side with the forearm supported and the elbow flexed at 90 degrees. After determining the maximal voluntary contraction (MVC) as the best of three contractions, each separated by two minutes, the subject rested for approximately 30 min before being given a target of 30% MVC to maintain for two minutes with visual feedback. Subjects were instructed to breathe normally during the exercise and not to hold their breath. Blood pressure and heart rate were measured on the non-dominant side before the start of the handgrip contraction, in the last 30 s of the two minute handgrip exercise, and again after two minutes of recovery. Blood pressure was measured with an automatic digital bedside monitor (Nihon Kohden Lifescope®).

Slow breathing training protocol

Subjects inspired deeply using a device that humidified the inspired air. Subjects in the Loaded group had an inspiratory load of 20 cm H₂O, while the sham, or No Load, group had no additional load (see Jones et al. 2010 for details). Subjects were trained to adopt a breathing pattern with a duty cycle (inspiratory time : total respiratory time) of 0.4 with a total respiratory time of 10 s. The paced breathing was practiced using a metronome in the laboratory until it could be performed reliably without the metronome. Subjects rested for 5 s after every 6 deep breaths. The training program was performed at home for 30 min, twice a day, every day for 8 weeks. Control subjects were asked to continue their normal patterns of daily living.

Data analysis and statistics

Changes in blood pressure and heart rate from pre- to post-training for the three groups, Control, No Load and Loaded, were examined with a 3-way repeated measures ANOVA with, as within factors, time (3 levels: baseline, isometric and recovery) and training (pre vs. post), and, as between factor, type of training (Control, Unloaded and Loaded). HRV data were examined with a 2-way repeated measures ANOVA with, as within factor, training (pre vs. post), and, as between factor, type of training (Control, Unloaded and Loaded). Where there was evidence of an interaction (p ≤ 0.05), post hoc paired and independent Bonferroni-corrected Student's t tests were used to identify the significant changes as a result of training within and between groups.

Results

The consequences of slow breathing training for resting blood pressure have been reported previously (Jones et al. 2010), and we present here the heart rate and pressor responses to handgrip exercise before and after training in Figures 1-3. The previous report was of changes in resting blood pressure measured at home and in the laboratory, in both cases early in the morning. The pressor responses were measured later in the day at the laboratory, and the resting blood pressures reported here are the values obtained just before the handgrip exercise. Thus, the resting values reported here are similar, but not identical, to the data in our previous report.
Resting systolic blood pressures and heart rates before and after training are shown in Figures 1-3 (left panel, "Rest"). There were significant group interactions for changes in systolic pressure and heart rate, with post hoc analysis revealing significant reductions for the two training groups, although there were no differences between No Load and Loaded. With diastolic pressure, the two training groups showed reductions that were greater than seen with the Control group, but there were no significant group interactions. Combining the two training groups, the decrease in resting mean arterial pressure was 10 mm Hg (95% CI: −7, −13). There was a tendency for patients with higher initial resting systolic pressure to show greater decreases with training, but this did not achieve statistical significance (p=0.14).

Pressor response to isometric handgrip

The effects of the slow breathing training were to reduce the rise in blood pressure and heart rate during the handgrip exercise and to hasten recovery.

Systolic blood pressure

Systolic blood pressures for the three groups of subjects, before, at the end of the 2 min handgrip test and following 2 min recovery, are shown in Figure 1 (left panel), with the changes from the resting value in the right panel of Figure 1. For the changes from rest at the end of 2 min handgrip there were significant group interactions (p=0.017), and post hoc tests revealed differences between pre- and post-training measurements for the No Load (p=0.002) and Loaded groups (p<0.001), while there were no significant changes in the Control group, nor any differences between the two training groups. Systolic blood pressure fell in the 2 min recovery phase. There were group interactions (p<0.001), and post hoc tests revealed significant differences between pre- and post-training measurements for the No Load (p<0.001) and the Loaded group (p<0.001), while there were no significant changes in the Control group, nor any differences between the two training groups.

Diastolic blood pressure

The diastolic blood pressures before, at the end of the 2 min handgrip test and following 2 min recovery are shown in Figure 2 (left panel), with the changes from resting values in the right panel. There were no significant group interactions either for diastolic pressure responses at the end of exercise (p=0.371) or recovery (p=0.431), although the data shown in Figure 2 suggest a trend towards a smaller diastolic pressor response after training.

The change in mean arterial pressure at the end of 2 min handgrip exercise for the combined training groups was 15 mm Hg before and 9 mm Hg after eight weeks of training. For the Control group over the same period the comparable values were 12 mm Hg and 13 mm Hg.

Table 2. Total spectral power and the power in the low frequency (LF) and high frequency (HF) regions of the spectrum, together with the ratio of power in these two regions (LF/HF). Data are given as mean ± SD. There were no significant differences in total power, but the changes in distribution of power within the spectra were all significant (p<0.001).
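Group-mean changes with 95% confidence intervals of the kind quoted above can be reproduced from paired pre/post readings with a few lines of code. The sBP values in this sketch are invented placeholders.

```python
import numpy as np
from scipy import stats

# Mean pre-to-post change with a 95% CI from paired readings, the form of
# summary used for the "-10 mm Hg (95% CI: -7, -13)" style results.
pre  = np.array([152, 148, 160, 155, 149, 158, 151, 157, 150, 154], float)
post = np.array([143, 140, 148, 146, 141, 147, 140, 146, 139, 145], float)

diff = post - pre
mean = diff.mean()
sem = stats.sem(diff)
ci_lo, ci_hi = stats.t.interval(0.95, len(diff) - 1, loc=mean, scale=sem)
t, p = stats.ttest_rel(post, pre)
print(f"mean change = {mean:.1f} mm Hg, 95% CI ({ci_lo:.1f}, {ci_hi:.1f}), p = {p:.4f}")
```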
Heart rate

Values for heart rate before, at the end of the 2 min handgrip test and following 2 min recovery are shown in Figure 3 (left panel), with the changes from resting in the right panel. For the changes from rest after 2 min handgrip exercise there were significant group interactions (p=0.006), and post hoc tests revealed significant differences between values before and after training for the No Load (p<0.001) and the Loaded group (p<0.001), while there were no significant changes in the Control group, nor any differences between the two training groups.

Heart rate variability

Breathing frequency was not specifically regulated but was observed to be between 10-12 breaths per minute during the five minute rest period when heart rate data were collected. There were no group interactions with respect to total spectral power (p=0.78), but there were highly significant interactions for both high and low frequency power as well as the LF/HF ratio (p<0.001 in each case). HRV data are shown in Table 2, and after eight weeks there was a significant shift in the distribution of spectral power from the low to the high frequency region in the two training groups (p<0.001 for LF, HF and LF/HF in both Sham and Load). Somewhat surprisingly, there was an opposite effect in the control group, with a significant decrease in HF and increases in the low frequency component and LF/HF ratio (p<0.001 in each case) over the eight week period.

Discussion

The aim of the present study was to determine whether slow breathing training results in a modification of central neural pathways controlling blood pressure, and the fact that slow breathing training reduced the pressor response to handgrip exercise clearly points to a central mechanism playing a major part. Adding an inspiratory load to the breathing training had no significant effect on either the reduction in resting blood pressure or the pressor response to handgrip, indicating that the slow breathing alone, rather than the contractile activity of the respiratory muscles, was key to the adaptations reported here.

The beneficial effects of breathing training on resting systolic blood pressure we report are very similar to previous reports where breathing has been regulated in various ways (Elliot et al. 2004, Schein et al. 2009, Viskoper et al. 2003). The values are also very similar to those we reported previously for the same subjects (Jones et al. 2010), except that the measurements for the present data were taken slightly later in the day. Probably for this reason the changes in diastolic pressure (Table 2) were just not significant, whereas there were significant changes in the early morning values reported previously (Jones et al. 2010). Although there was a trend for slightly larger changes in resting systolic pressure in the Loaded training group, there was no statistical difference between the No Load and Loaded groups. Both forms of breathing training reduced the pressor response to handgrip exercise, despite the fact that the forearm muscles involved had not been trained.

The pre-training pressor response to forearm contractions (change in MAP ~15 mm Hg) was somewhat smaller than the ~25 mm Hg reported by Delaney et al.
(2010), but this may have been because the latter subjects had higher resting blood pressures, since they had been off medication for two days prior to the experiment. The decrease in the MAP pressor response with training was similar to the difference between the pressor responses of normotensive and hypertensive subjects in the study of Delaney et al. (2010).

Blood pressure measurements are notoriously affected by a variety of emotional cues, such as fear and anxiety, as well as by the Valsalva maneuver sometimes used when trying to sustain a long muscle contraction. It is possible, therefore, that the trained subjects may have been more relaxed, or that there were subtle differences in breath-holding during the second handgrip exercise. However, it is unlikely that this would account for the differences between the Control and Trained groups, since both had the same experience of the handgrip test and none were observed breath-holding during the test. Moreover, differences between the trained and control subjects were seen during the recovery phase of the handgrip exercise, when they were resting and there was no reason why they should be breathing abnormally.

The pressor response to muscle contraction has a number of components, some being due to metabolic and mechanical reflexes arising in the working muscles, others being feed-forward from central command (Fisher and White 2004). The heart rate response to contraction is thought to be largely a consequence of central command, since HR returns to baseline at the end of contraction even during post-exercise circulatory occlusion (PECO), and HR responses are also less pronounced if the muscle is activated by electrical stimulation rather than by voluntary effort (Fisher and White 1999). Systolic and diastolic blood pressures, on the other hand, remain elevated during PECO, indicating that the increased pressure during that phase is a consequence of metabolite accumulation in the muscle. There may also be a central component to the blood pressure rise, since the PECO pressures are usually about half the level measured during contraction, but this could also reflect the afferent input from mechanoreceptors active during the contraction but absent during PECO (Carrington et al. 2003).

The present results cannot differentiate between changes in central command and the central modification of metabo- or mechano-receptor reflexes as a result of breathing training; for this it would have been useful to have a period of PECO. What is clear, however, is that no matter whether the breathing training affected one, or all three, of the stimuli for pressor responses, the modification occurred at some central site rather than in the peripheral muscle.

The increased power in the high frequency region of the HRV spectrum is consistent with a modification of central control of cardiovascular function. The respiratory component of the spectrum evident in the HF region increases with decreasing breathing rate (down to 9 breaths per minute), and it is possible that the trained subjects altered their breathing pattern. However, Lehrer et al. (2006) found no change in breathing patterns after a period of biofeedback training which was similar in many ways to the slow breathing training used here.
It is not immediately clear how slow breathing training can effect a change in the central mechanisms regulating the pressor response, but there have been reports that one of the acute effects of slow breathing is to increase the sensitivity of the baroreflex in patients with heart failure (Bernardi et al. 2001a) and hypertension (Joseph et al. 2005). Lehrer et al. (2003, 2006) also showed that HRV biofeedback, which is essentially slow breathing, not only increased baroreflex sensitivity during the slow breathing but also had more lasting effects as a result of 10 weeks of training. It is possible, therefore, that an increased sensitivity of the baroreflex as a result of training may more effectively buffer the increases in blood pressure in response to handgrip exercise. Slow breathing is associated with major changes in HRV and blood pressure variability (Lehrer et al. 2003, 2006), and these authors suggest that the large fluctuations may "exercise the baroreflex", which could lead to longer lasting modifications of function. It should be noted, however, that increased baroreflex sensitivity does not necessarily explain the reduction in resting blood pressure as a result of breathing training, since this requires a change in the set point.

It has been observed that hypertensive subjects tend to have a low end-tidal PCO2, which may be a consequence of an increased sensitivity to CO2 (Joseph et al. 2005). It is possible that slow breathing might shift the balance of bicarbonate buffering in the blood by increasing end-tidal PCO2, with possible acute and long term effects. In future work it would be useful to determine how breathing rate, end-tidal PCO2 and blood bicarbonate are affected by slow breathing training.

Meditation, which generally involves slow breathing, is known to reduce levels of stress hormones (Brand et al. 2012, Fan et al. 2014), and it is possible that a change in circulating cortisol levels might affect central autonomic pathways, although it would be difficult to distinguish cause and effect.

In summary, we have shown for the first time that slow breathing training which reduces resting blood pressure also reduces the pressor response to handgrip exercise. The implication of these findings is that breathing training modifies central cardiovascular control, which attenuates the pressor response to contraction of muscles throughout the body.

The fact that improvements in blood pressure control were seen in hypertensive patients who were considered to be well controlled with a variety of drugs highlights the fact that pharmacological treatments do not necessarily "cure" the problem, and that slow breathing training, meditation or some form of exercise is an important adjunct to even the best pharmacological treatments.
Fig. 1. Slow breathing training and systolic blood pressure (sBP). On the left are values for sBP before (open symbols) and after (filled symbols) eight weeks of slow breathing training. Values are means and standard errors for resting pressure (Rest), after 2 min handgrip contraction (Contract) and after a further 2 min recovery (Recovery). There were three groups: Control, who did no training; No Load, who undertook slow breathing with no load; and Loaded, who trained with an inspiratory load. * indicates significant differences between pre- and post-training resting pressures. On the right are the mean changes, with 95 % CI, of sBP from the resting baseline values at the end of the handgrip exercise (End Ex) and subsequent recovery (Recovery) before (open columns) and after (filled columns) training. * indicates significant differences between pre- and post-training; see text for p values.

Fig. 2. Slow breathing training and diastolic blood pressure (dBP). On the left are values for dBP before (open symbols) and after (filled symbols) eight weeks of slow breathing training. Values are means and standard errors for resting pressure (Rest), after 2 min handgrip contraction (Contract) and after a further 2 min recovery (Recovery). There were three groups: Control, who did no training; No Load, who undertook slow breathing with no load; and Loaded, who trained with an inspiratory load. On the right are the mean changes, with 95 % CI, of dBP from the resting baseline values during the handgrip test (Contract) and subsequent recovery (Recovery) before (open columns) and after (filled columns) training.

Fig. 3. Slow breathing training and heart rate (HR). On the left are values for HR before (open symbols) and after (filled symbols) eight weeks of slow breathing training. Values are mean resting rates and standard errors (Rest), after 2 min handgrip contraction (Contract) and after a further 2 min recovery (Recovery). There were three groups: Control, who did no training; No Load, who undertook slow breathing with no load; and Loaded, who trained with an inspiratory load. * indicates significant differences between pre- and post-training resting rates. On the right are the mean changes, with 95 % CI, of HR from the resting baseline values during the handgrip test (Contract) and subsequent recovery (Recovery) before (open columns) and after (filled columns) training. * indicates significant differences between pre- and post-training; see text for p values.

Table 1. Patient characteristics at the start of the study. Data for the three patient groups: Control, No Load training and Loaded training. BMI, body mass index; Duration, length of time since first diagnosis of hypertension; Treatment, time since the start of pharmacological treatment; SBP, systolic blood pressure; DBP, diastolic blood pressure; PP, pulse pressure; MAP, mean arterial blood pressure; HR, resting heart rate.

Table 2. Heart rate variability pre- and post-training.
2015-06-01T23:46:22.000Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "f1f2aa132352b53292fcfb0e9cd63b456a3aa0d1", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.33549/physiolres.932950", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "f1f2aa132352b53292fcfb0e9cd63b456a3aa0d1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201969655
pes2o/s2orc
v3-fos-license
One‐year clinical outcomes of anticoagulation therapy among Japanese patients with atrial fibrillation: The Hyogo AF Network (HAF‐NET) Registry

Abstract

Background: Although anticoagulation therapy could reduce the risk of strokes in patients with atrial fibrillation (AF), large‐scale investigations in the direct oral anticoagulant (DOAC) and AF catheter ablation (CA) era are lacking.

Methods: This study was designed as a prospective, multicenter, observational study, and a total of 2113 patients from 22 institutions were enrolled in the Hyogo area.

Results: The mean age and CHADS2 score were 70.1 ± 10.8 years old and 1.5 ± 1.1, respectively. The follow‐up period was 355 ± 43 days. CA was performed in 614 (29%) and DOACs were prescribed in 1118 (53%) patients. Ischemic strokes/systemic embolisms (SEs) and major bleeding occurred in 13 (0.6%) and 17 (0.8%) patients, respectively. New onset dementia, hospitalizations for cardiac events, and all‐cause death occurred in eight (0.4%), 60 (2.8%), and 29 (1.4%) patients, respectively. A multivariate analysis demonstrated that persistent AF and the body weight (BW) were associated with ischemic strokes/SEs and major bleeding, respectively (persistent AF: hazard ratio, 9.57; 95% CI, 1.2‐74.0; P = .03; BW: hazard ratio, 0.94; 95% CI, 0.90‐0.99; P = .02). An AFCA history was associated with the cardiac events (hazard ratio, 0.44; 95% CI, 0.20‐0.99; P = .04). Age was associated with new onset dementia (hazard ratio, 1.1; 95% CI, 1.0‐1.2; P = .03).

Conclusions: In the DOAC and CA era, the incidence of ischemic strokes/SEs, major bleeding and cardiac events could be dramatically reduced in patients with AF. However, some unsolved issues of AF management still remain, especially in elderly patients with persistent AF and a low BW.

| INTRODUCTION

The number of patients with atrial fibrillation (AF) is increasing at a rapid rate and is expected to reach beyond one million patients in Japan as the population ages. 1 AF carries a major risk of thromboembolisms and heart failure. Several studies have reported that AF is also related to new onset dementia. 2 The annual incidence of cerebral thromboembolisms in AF patients is almost 2%-4% in Japan and increases with the CHADS2 score/CHA2DS2-VASc score. 3,4 Direct oral anticoagulants (DOACs) have been widely used to prevent cerebral infarctions in patients with AF. The advantages of DOACs over warfarin in reducing cerebral infarctions and bleeding complications have been demonstrated in several randomized clinical trials (RCTs). [5][6][7][8] However, the long-term outcomes of DOAC use remain unclear in the catheter ablation (CA) era. AF catheter ablation (AFCA) is widely performed, and some investigations have reported that it is more effective for preventing AF recurrences than medical therapy. 9,10 CA in patients with heart failure has been reported to be associated with a significantly lower rate of a composite end point of death from any cause or hospitalization for worsening heart failure than medical therapy, while the impact is less in patients without heart failure. 11,12 Evidence that mortality improves in patients who undergo CA is still limited. Therefore, it is important to reveal how to select the best treatment of AF based on each patient's background. AF patients without strokes have been followed by cardiologists in Japan. However, once cerebral vascular events occur, those patients are followed by brain surgeons or neurologists. Therefore, it is difficult for primary care doctors to share the events.
To share those events, we established the HAF-NET (Hyogo AF Network) Registry. The aims of this study were to follow clinical events, including strokes, major bleeding and all-cause mortality, and to clarify the reality of AF management in Japan by using the data from the HAF-NET Registry.

| Study cohort

The HAF-NET Registry is a multicenter, prospective, observational study of Japanese patients with AF. The patients were enrolled from April 2015 to August 2016. The inclusion criteria were an age of 20 or older and AF diagnosed by a 12-lead or Holter electrocardiogram. There were no exclusion criteria. A total of 22 institutions, all of which were located in Hyogo Prefecture, participated in this registry. They consisted of eight cardiovascular centers, two affiliated or community hospitals, and 12 primary care clinics. All patients were followed through a review of the inpatient and outpatient medical records, and additional information was obtained through contact with the patients, relatives, and/or referring physicians by mail or telephone. The data were checked by clinical research coordinators, and the posted information will be updated as needed to reflect the protocol amendments and study progress.

| Registration card and data collection

All patients had a certification of attendance, which contained information including the anticoagulation therapy regimen and contact information of the primary care doctor (Figure 1). Even if clinical adverse events occurred while the patients were being seen by secondary care doctors, those doctors could inform the primary care doctor of the events by using this card. The primary care doctor was able to log into the website and register information about any adverse clinical events. The clinical patient data were registered in the online database system by the doctor in charge at each institution. The data were automatically checked for any missing or contradictory entries and values out of the normal range. Additional editing checks were performed by the clinical research coordinators at the general office of the registry. The baseline clinical background data were as follows: patient clinical characteristics including the date of birth, age, gender, body weight, serum creatinine level, date when AF was diagnosed, history of treatment including CA, cardiac surgery, percutaneous coronary intervention, or coronary arterial bypass grafting, type of AF, comorbidities, and risk factors including heart failure, hypertension, diabetes mellitus, strokes/TIAs, vascular disease, valvular disease, ischemic heart disease, cardiomyopathy, dementia, whether patients smoked or consumed alcohol at the time of enrollment, a reduced left ventricular function (%FS < 25% or EF < 35%), current medications including anticoagulant drugs (DOACs or warfarin) and antiplatelet drugs, and subjective symptoms including palpitations, dyspnea and dizziness. Paroxysmal AF was defined as AF that terminated spontaneously within 7 days, while persistent AF was defined as AF that lasted for > 7 days but could be terminated with medication or electrical cardioversion. Long-lasting persistent AF was defined as AF that lasted > 1 year. The risk of a stroke was evaluated by the CHADS2 score and CHA2DS2-VASc score. 13 The risk of bleeding was evaluated by the HAS-BLED score. 14 A PT-INR of 1.6-2.6 was the optimal therapeutic range for patients aged 70 or older, and a PT-INR of 2.0-3.0 was appropriate for patients aged 69 or younger.
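Both risk scores referenced above are simple additive indices, so their computation is easy to make concrete. The following Python sketch uses the standard published weights; the function signatures and the example patient are hypothetical and are not drawn from the registry data.

```python
def chads2(chf, htn, age, dm, stroke_tia):
    """CHADS2: CHF, hypertension, age >= 75 and diabetes score 1 point each;
    prior stroke/TIA scores 2 points."""
    return int(chf) + int(htn) + int(age >= 75) + int(dm) + 2 * int(stroke_tia)

def cha2ds2_vasc(chf, htn, age, dm, stroke_tia, vascular, female):
    """CHA2DS2-VASc adds vascular disease, age 65-74 and female sex
    (1 point each); age >= 75 scores 2 points."""
    age_pts = 2 if age >= 75 else (1 if age >= 65 else 0)
    return (int(chf) + int(htn) + age_pts + int(dm)
            + 2 * int(stroke_tia) + int(vascular) + int(female))

# Hypothetical example: a 72-year-old woman with hypertension and diabetes
print(chads2(False, True, 72, True, False))                     # -> 2
print(cha2ds2_vasc(False, True, 72, True, False, False, True))  # -> 4
```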
| Primary and secondary endpoints

The primary endpoints of this registry were symptomatic cerebral infarctions including TIAs, SEs, and fatal bleeding complications requiring hospitalization including an intracranial hemorrhage. A TIA was defined as a sudden onset of focal neurologic symptoms and/or a sign lasting less than 24 hours, brought on by a transient decrease in the blood flow, which rendered the corresponding area of the brain ischemic.

| Statistical analysis

Continuous data were presented as the mean ± SD for normally distributed variables. Medians and quartiles were given for non-normally distributed variables. If the data followed a normal distribution, they were tested with an unpaired t-test or Welch test. If not, they were tested with a Mann-Whitney test. Categorical variables were analyzed with Fisher's exact test. Cox proportional hazards regression models were used to estimate the hazard ratios and 95% confidence intervals for each event. Previously reported variables including age, gender, BW, AF type, AFCA history, valvular disease, ischemic heart disease, cardiomyopathy, EF less than 35%, heart failure, hypertension, age > 75 years, diabetes mellitus, stroke/TIA, vascular disease, antiplatelet drugs, DOAC use, and HAS-BLED score were also selected as confounders. The multivariable Cox proportional hazards regression model included variables with a P < .05 in an unadjusted Cox proportional hazards regression analysis. To compare the clinical events between the warfarin and DOAC users, 667 age-, BW-, and CHADS2 score-matched DOAC and warfarin users were tested. The cumulative incidence of a stroke or SE was determined by the Kaplan-Meier method. The survival analysis between warfarin and DOACs was performed using a log-rank test (a sketch of this workflow appears after this section). A value of P < .05 was considered statistically significant. All statistical analyses were performed using SPSS, Release 25 software (SPSS).

| RESULTS

A total of 2113 patients were enrolled from 38 institutions in Hyogo prefecture between April 2015 and August 2016. Of those, 1343 (64%) were enrolled from cardiovascular centers, 66 (3%) from affiliated or community hospitals, and 704 (33%) from private clinics. Two thousand and seventy (98%) of the 2113 patients were followed for 1 year after the enrollment, and the mean follow-up period was 355 ± 43 days.

FIGURE 1 Registration card. The left panel shows the front side of the registration card, where the actual anticoagulation therapy could be checked. The following text was written in Japanese on the card: "Please inform your primary care doctor of the following events: ischemic stroke, SE, hemorrhagic stroke, new onset dementia, hospitalization for major bleeding, hospitalization for cardiac event, all-cause mortality." The right panel shows the opposite side of the registration card, where the patient's name and birthday and the primary care doctor's name and telephone number were written. The following text was also included: "Please always carry this card to inform your doctors of anticoagulation therapy. When you see a doctor, please show this card to your doctors, dentists and pharmacists. Please follow the doctor's suggestion and do not change the dosage of anticoagulants by self-determination. Please inform your primary care doctor when the anticoagulation therapy is reluctantly stopped."

| Baseline characteristics of the registered patients

The baseline characteristics of the registered patients are summarized in Table 1.
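As a rough illustration of the survival workflow described in the statistical analysis section, here is a minimal, self-contained Python sketch using the lifelines package on synthetic data. The covariates, effect sizes and censoring scheme are invented for demonstration; the registry itself used SPSS, and nothing below reproduces its data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500

# Synthetic cohort with covariates loosely mirroring those screened above
df = pd.DataFrame({
    "persistent_af": rng.integers(0, 2, n),
    "age": rng.normal(70, 10, n).round(),
    "body_weight": rng.normal(60, 12, n).round(),
})

# Event times with an assumed harmful effect of persistent AF and a
# protective effect of higher body weight (illustrative only)
t = rng.exponential(2000, n) * np.exp(
    -0.8 * df["persistent_af"] + 0.02 * (df["body_weight"] - 60)
)
df["time_days"] = np.minimum(t, 365)   # administrative censoring at 1 year
df["event"] = (t <= 365).astype(int)

# Multivariable Cox proportional hazards model over all covariate columns
cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="event")
cph.print_summary()   # hazard ratios with 95% CIs
```

print_summary() reports hazard ratios with 95% confidence intervals in the same form as those quoted in the abstract (for example, HR 9.57 for persistent AF).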
Almost 70% of the patients were male. The mean age was 70.1 years and 36% of them were aged 75 or over. Half of the patients had paroxysmal AF. Almost half of the patients were symptomatic, and the most common subjective symptom was palpitations. The mean CHADS2 and CHA2DS2-VASc scores were 1.5 ± 1.1 and 2.6 ± 1.6, respectively. Figure 2 shows the patient distribution according to the CHADS2 score and CHA2DS2-VASc score. A CHADS2 score of 1 and a CHA2DS2-VASc score of 3 were the most common subpopulations. Table 2 shows the comorbidities of the patients. Hypertension was by far the most prevalent underlying disease, and 21.3% of the patients suffered from heart failure, of whom 5.2% had an ejection fraction of <35%. Ischemic heart disease and valvular disease were present in 7.2% and 13.1%, respectively. Of the latter, mitral regurgitation was remarkably frequent. Of note, dementia was present in 2.7% of the patients. Almost 30% of the patients had a history of CA.

| Medications in HAF-NET patients

Low-dose regimens were common among the dabigatran and edoxaban users, but not among the rivaroxaban and apixaban users.

| Main findings of the study

The data from the HAF-NET registry demonstrated a higher DOAC use and AFCA history as compared to the previous studies, which resulted in excellent outcomes after 1 year of follow-up among Japanese patients with AF in the DOAC and AF ablation era. Persistent AF and a lower BW were strongly associated with strokes/SEs and major bleeding, respectively. An AFCA history and age were associated with hospitalization for cardiac events and new onset dementia, respectively.

| Patient characteristics

One-third (800 patients) of the patients in this registry were from Chuo-Ku, which is located in the southern region of Kobe city. The population of Chuo-Ku is approximately 135 000 people. Based on the epidemiological prevalence of AF in the Japanese population of 0.6%, the number of AF patients in Chuo-Ku was estimated to be approximately 810. As the number in our registry was almost equal to the estimated number of AF patients in Chuo-Ku, the AF patients in this registry were assumed to fully reflect a typical ward in Kobe city. In Japan, real-world data on anticoagulation therapy in patients with AF have been published from two major AF registries, the FUSHIMI and SAKURA registries. 15,16

| Medications in HAF-NET patients

Warfarin was prescribed in only around 30% of the patients in the HAF-NET registry, a lower proportion than in the FUSHIMI and SAKURA registries. The prevalence of a low-dose usage of DOACs was significantly less in the rivaroxaban/apixaban users than in the dabigatran/edoxaban users. Of importance, the SAKURA registry identified inappropriately low dosing in 19.7 to 27.6% of the DOAC users. The proportion of an adjusted low dosing was estimated to be almost 20% in the rivaroxaban or apixaban users and almost 50% in the dabigatran or edoxaban users. Furthermore, postmarketing studies for each DOAC also estimated that the proportion of an adjusted low dosing was almost 30% in the rivaroxaban or apixaban users and almost 60% in the dabigatran or edoxaban users. [19][20][21][22] The proportion of this estimated adequately low dosing in the SAKURA registry was similar to our results. This indicated that inadequate low dosing in the HAF-NET registry was far less common than in the previous AF registries. Almost 8 years have passed since dabigatran was released as the first DOAC in 2011.
Over the past decade, we have learned the importance of adequate dosing of DOACs, since inadequate dosing has increased major bleeding as well as strokes or TIAs. Therefore, adequate dosing might now be pursued in the real world of AF anticoagulation therapy in Japan. Indeed, our data clearly supported this effort and demonstrated excellent outcomes.

| Primary and secondary endpoints and clinical predictors

The two major postmarketing surveillance (PMS) studies (J-Dabigatran surveillance, XAPASS) showed the incidence rates of major bleeding and thromboembolic events, suggesting that dabigatran and rivaroxaban were safe and effective in Japanese clinical practice.

| Catheter ablation and anticoagulation therapy

Recently, several studies have reported the impact of CA on mortality and cardiovascular hospitalization in patients with AF. Especially in patients with heart failure, the impact has been greater. The CASTLE-AF study clearly demonstrated that CA was associated with a significantly lower rate of the composite end point of death from any cause or hospitalization for worsening heart failure as compared to medical therapy. Although no statistical significance could be found, cerebrovascular accidents were dramatically reduced by CA as compared to medical therapy. 11 Furthermore, the impact of CA has been greater in patients with an age of <65 years old, heart failure of less than NYHA functional class II, and an EF of ≥25%. The CABANA study also reported that the impact of CA was greater in patients with an age of <65 years old. 12

| Impact of DOACs on preventing clinical events

Previous RCTs have revealed better clinical outcomes, especially for fatal bleeding, under DOAC therapy than under warfarin therapy. However, fewer Japanese patients could be enrolled in those RCTs. [5][6][7][8] The SAKURA registry showed no significant differences in the rates of strokes or SEs, major bleeding, and all-cause mortality for DOAC vs warfarin users. Under propensity score matching, the incidence of strokes or SEs and all-cause death remained equivalent, but the incidence of major bleeding was significantly lower among DOAC than warfarin users. 25 In the HAF-NET registry, the incidence of strokes or SEs was significantly lower in the DOAC users, but not that of major bleeding. This discrepancy might be caused by the frequency of a CA history. The progression of AF was reported to be associated with an increased risk of clinical adverse events during the arrhythmia progression period from paroxysmal to persistent AF among Japanese patients with AF. The risk of adverse events was also transiently elevated during the progression period from paroxysmal to persistent AF and declined to a level equivalent to persistent AF after the progression. 26 A CA history was found in almost 10% and 25% in the SAKURA and HAF-NET registries, respectively. As compared to medical therapy, CA could strongly reduce the AF burden, and no progression toward persistent AF was observed during a median follow-up of 6 years, especially in patients with paroxysmal AF. 27 In such patients without AF recurrence after a successful CA, DOACs might be continued without a dose reduction, while the PT-INR level might be controlled at a lower level to avoid fatal bleeding. This suggests the importance of CA and DOACs for preventing strokes or SEs and the awareness of an adequate lower DOAC dosing after a successful CA.

| Dementia and AF

A meta-analysis reported that AF was independently associated with an increased risk of all forms of dementia. 1
The incidence of dementia in patients without AF was almost 3.0% during a follow-up period of over 5 years. After a dementia diagnosis, the presence of AF was associated with a markedly increased risk of mortality. 2 Recently, individuals with AF have been reported to have an almost threefold increased risk of dementia during a 12 year follow-up (HR 2.8; 95% CI 1.3-5.7; P = .004). The population attributable risk for dementia resulting from AF was 13%. The authors concluded that patients with AF should be screened for cognitive symptoms. 28 In the HAF-NET registry, dementia diagnosed before enrollment was found in 56 (2.7%) of the patients, and new onset dementia occurred in 8 (0.4%) patients. This annual incidence of dementia was similar to that in patients without AF. This might be the impact of anticoagulation therapy with DOACs and strong rhythm control therapy with CA. We hope that this impact will continue during a follow-up of over 3 years, because the average time to the development of dementia has been reported to be almost 3 years.

| Study limitations

This study had several limitations. First, this study was designed as a prospective observational study; therefore, only associations were shown, not causality. The possibility of unmeasured or residual confounding factors was not ruled out. Second, anticoagulant therapy was assessed at the time of the enrollment, but changes in the medical therapy could not be assessed. Third, to assess the impact of the DOAC therapy, age-, BW-, and CHADS2 score-matched DOAC and warfarin users were compared because of the small number in each medical therapy group. Fourth, this study involved AF patients recruited from a small region of Japan, and therefore, the results might not be generalizable to the overall population.

| CONCLUSION

The HAF-NET registry was characterized by (a) a high incidence of DOAC prescriptions and a CA history and (b) relatively younger patients with lower CHADS2 scores. In the DOAC and CA era, the incidence of ischemic strokes/SEs, major bleeding and hospitalization for cardiac events could be strongly reduced in patients with AF. However, some unsolved issues of AF management still remain, especially in elderly patients with persistent AF and a low BW.

ACKNOWLEDGEMENTS

We would like to thank John Martin for his linguistic assistance and Hiromasa Suzuki for his assistance with the clinical research coordination. The key personnel and institutions participating in the registry are as follows: Chief investigator: Yoshida A (Kita-harima Medical Center)
2019-09-09T18:38:37.702Z
2019-08-16T00:00:00.000
{ "year": 2019, "sha1": "9d7c6444f7ba3be055127509e4171f50a30f3421", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/joa3.12226", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3fd77e046af42f32db1551e9a0fef5f1c98a860b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216318827
pes2o/s2orc
v3-fos-license
Digitalization as a New Stage in the Formation of Economic Relations

From barter trading to paper currency and now to the digital economy, the global economy has come a long way. With enhanced technology, digitization has spurred communication between people. People can interact with each other, and the effect is quickly observed in the economy. Enhanced country-to-country communication has sped up economic activity, and countries are using digital technology to make themselves prosperous business environments. Seeing all of this, countries are steadily adopting digital technology, and the impact created by the digital economy has been felt strongly by firms and the overall economy.

Digitalisation of the economy

Nicholas Negroponte believed that information stored in physical forms such as USB drives and CDs would come to be saved in the form of 0s and 1s, a shift he described with the metaphor of "shifting from processing atoms to processing bits" [5]. The introduction of information technology into everyday life has not only improved people's lives but has transformed the face of the global economy. Unlike in earlier times, people can now talk to a person sitting in any part of the world. The online connections between people and businesses have resulted in greater economic activity. The 21st century also saw a new term, globalization, through which different countries became interlinked with each other. Business activities expanded as the interlinkage between people and organizations grew at breakneck speed. The core driving forces of the digital economy are mobile technology and the Internet, the hyperconnectivity through which person-to-person and person-to-machine communications have advanced to higher levels [6]. The digitization process involves the collection of a huge amount of data that is organized, structured and transformed for evaluative purposes. Using this data, businesses have come out of their shells, as with greater information, products and services have received superior innovations.

Digitalization of the economy in various countries

The desire for efficient systems is what inspired global leaders to adopt the digital revolution. Digitization has penetrated various elements of life and has positively impacted people, businesses and governments. By going digital, governments now have access to an abundant volume of data. Digitization has given the benefit of internetworking to various government organizations, providing ease of planning and execution. The adoption of new technology in governmental departments has produced time-saving opportunities. Also, as data can be saved in digital form, paper wastage has decreased significantly, drastically bringing down waste costs [7]. Estonia, for instance, has introduced the new concept of e-residency, under which people of different nationalities can register themselves as e-residents of Estonia. The increased usage of digital technology has developed a healthy business environment for the country, thus inviting entrepreneurs from all over the world. The country has built its reputation as an advocate of the digital world, thereby becoming a paradise for entrepreneurs [8]. The introduction of Aadhaar cards for the residents of India highlights the importance and significance of digitalization. Indian citizens are given a 12-digit ID number, which is linked to the different social programs set up by the government.
The policy shift made the Indian government more efficient and allowed it to provide a fairer distribution of subsidies. Previously, ghost beneficiaries claimed subsidies and went undetected by government officials. Subsidies on liquefied petroleum gas are now transferred directly to Aadhaar cardholders' bank accounts, as the identification numbers are directly linked to each person's bank account [9]. Economists have provided evidence of how digitalization has affected the consumer price index and inflation rates. The e-commerce industry, growing at a fast pace, has lowered barriers to entry. With increasing online competition, price setters offer low prices to consumers, putting downward pressure on the prices of goods. People now have access to the international market by showcasing their products on giant e-commerce platforms like Alibaba and Amazon. The two have opened up the global market, creating tough rivalry among competitors. The only winner of these price wars is the consumer [10]. Technological advancement has allowed many different countries to build efficient business environments. If companies can be registered over the Internet and businesses can file their taxes online, people will find it easier to enter the tax net. Developing nations can be the real beneficiaries of such digitized policy implementations. As the ease of doing business improves, domestic and international investors will find it easier to invest in the various untapped markets of an emerging economy. The business sector might be the direct beneficiary, but the countries will feel the real positive effect. With a growing number of businesses, a country could add millions through corporate taxes [11]. Despite all the efficiencies brought by the digitalization of the economy, some analysts are of the view that the digital economy will not push up productivity growth. In this argument, analysts point out how the inventions of the combustion engine, telegraph and electricity hugely supported productivity growth. By comparison, these economists think that future innovations like driverless cars are unlikely to stimulate GDP numbers, citing the opinion that the latest digitalization is more evolutionary than revolutionary [12]. Through digitalization of the economy, societies have turned into cashless societies. People can use their debit and credit cards at restaurants or shopping malls. Instead of cash, people can pay directly with their bank card, and the amount is transferred from the buyer's account to the seller's account. The introduction of new payment methods will have a positive environmental impact, as there will be a reduction in paper consumption for printing cash. The World Bank reveals that Russia brought reforms to its telecommunication sector, enabling larger accessibility combined with greater affordability of Internet connections. The introduction of data analytics and artificial intelligence will help the government to focus on the health and education sectors [13]. The Russian government, however, lags behind its other European and Central Asian counterparts in utilizing the services of digitalization. A weak payment system does not allow Russia to get a hold on its e-commerce market.
But with the development of the digital economy, Russia has a prime opportunity to improve its usage of technology for better payment methods. Meanwhile, the US has a different story to tell. It is expected that by 2025, the US will generate $2 trillion through digitization. The finance and ICT sectors show the highest digitalization in their operations, although the advent of automation brought by technological advancement could displace 12 million middle-skill workers [14].

Implementation of digitalization tools in businesses and their effectiveness

Small and medium-scale enterprises know how to benefit from the growing digital advancement. In fact, a large number of organizations have adapted to digital transformation with regard to big data and social media. The utilization of enormous amounts of data has become a lot easier and more meaningful for companies. Businesses are devising plans and strategies to combat challenges through proper analysis of data on advanced data analytics software. From the health sector to large manufacturing firms, digitization has helped organizations gather data and analyze it for improved results. For instance, overstaffing is a problem in the health sector: nurses are given overtime pay if they are called in to the hospital, and in the absence of emergencies, overstaffing can prove expensive for hospitals. Thus, to control the problem through data analytics software, an analyst visualizes the data and creates an efficient schedule for the staff. Digitization is believed to change the business outlook, giving it more elasticity and smoothness. One such advantage comes from the entry of artificial intelligence into large organizations. Despite the debate over its complete success, around 75% of executives believe that artificial intelligence will open new horizons for businesses. Another 85% believe that companies with AI technology will have the upper hand due to competitive advantage. Currently, companies have automated their products through AI [15], achieved by creating algorithms and applying data analysis techniques. Social messaging apps like WhatsApp have allowed companies to refine their inter-organizational communication. With better communication, companies are able to efficiently convey their strategies to the operations department. As all data is now stored on digital platforms like clouds or other digital media, employees can quickly retrieve it from wherever they want, giving them the comfort and flexibility of choosing work schedules. With growing globalization and technological progression, many different global organizations are connected with each other. This gives them the luxury of outsourcing their work to someone working in a remote area, delivering timely work at better cost. This helps freelancers earn money, and the outsourcing allows the firm to trim down its costs.

Recommendations

Organizations that have yet to find ways to capitalize on the growing efficiency of digitalization must quickly learn to adopt it for a brighter future. Flexible working hours allow employees to work according to their own schedules, creating a good bond between employee and employer. Thus, organizational heads should put their companies' data on digital media.
With proper usage of digital technology, companies can store large amounts of data on microchips that cost very little compared with data kept in physical media like registers and books. Therefore, small companies should acquire a system in which they can store important company data; the data can then be retrieved easily and without fuss. Governments should adopt the same policies, as it would help them track tax collection. Digitization can help a government improve its ease of doing business and thus bring foreign investment into the country.

Conclusion

With the advent of technological development in the new age, digitalization has taken over the world's economies. Governments are going cashless by using enhanced payment methods through bank cards. With this, governments have found a way to develop a strong business environment in the country, which could bring in international investors from different parts of the world. Digitization helps a country take better tax measures: information about non-filers obtained through digitized documentation lets governments bring non-filers into the tax regime. Digitization has not only had a positive impact on government; the business sector is also enjoying the benefits of artificial intelligence. With highly improved and innovative products, AI has helped companies earn larger revenues. Seeing the advantages provided by the new digital age, companies, individuals and governments are gearing up to learn more about digitalization.
2020-04-02T09:33:06.452Z
2020-03-17T00:00:00.000
{ "year": 2020, "sha1": "ddf7aac965896fd8703e943548bb9b1a2898fad5", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/aebmr.k.200312.443", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2874b1bf0f2a9e07501ef95461c6a5a605396a64", "s2fieldsofstudy": [ "Economics", "Computer Science", "Business" ], "extfieldsofstudy": [ "Economics" ] }
204092024
pes2o/s2orc
v3-fos-license
Piezoelectricity: a literature review for power generation support

The potential of piezoelectric materials as an energy source is undeniable. However, do current methods have the potential to be applied widely? To answer this, we need to know their current value according to socio-urban and environmental conditions. This paper presents the first literature review to study the most successful forms of piezoelectric generation implemented today, together with a comparison between them according to their energy potential, as well as the socioeconomic implications presented. The results give a clearer picture of the challenges and advantages of piezoelectric materials as a means of power generation support, showing the positive practical implications of the implementation of piezo systems.

Introduction

The lack of urban physical space and the steady growth of the energy matrix allow us to envision a future in which constant improvement of the existing matrix becomes an essential practice toward more intelligent and self-sustainable cities [1]. Thus, following Priya and Inman's solution, we quote them (p. 14) [2]: "One of the possible solutions to this problem are called energy harvesting techniques, these allow collection of residual energy from environmental sources, clean and free." However, this is a challenge, since creating new options to satisfy growing energy needs requires an essential increase in R&D for such power generation support. On the subject, can we imagine power generation being supported simply by traveling on public roads, visiting the park, commuting to work, or exploiting the swing of ocean waves? Since 1880 this has been possible thanks to the phenomenon of piezoelectricity, discovered by the Curie brothers [3]. The concept and breadth of applications of piezoelectric materials have evolved over time. Considering one of the most recent definitions of the phenomenon, we quote B. Ali and A. Mashaleh (p. 2) [4]: "Piezoelectricity is a property for some materials which lies in generating electrical voltage when mechanical force is subjected into it and vice versa". Currently, there are various studies on the use of this phenomenon, ranging from special bricks in sidewalks for pedestrians and pressure points in roadways and railroads [5], to specialized breakwaters for coastal zones [5,6]. The literature in this regard shows glimpses of the use of piezoelectric materials, indicating that they could have an interesting future for supporting power generation [7,8]. Considering the above, the purpose of this paper is to summarize methods of energy harvesting, and the socioeconomic implications of applying such methods of power generation, that do not completely alter urban spaces. In doing so, we seek piezoelectric solutions to the steady growth of the energy matrix, a comparison of energy production between the methods, and a comparison with other energy parameters to measure their applicability [9]. The structure of this article continues with the reference framework based on methods of collecting energy through piezoelectric materials and the resulting propositions. Then, in Section 4, the results of the literature review are presented.
In the last section, we present a brief discussion of our findings and the implications of this technology, so that it may encourage the various players in the productive sector to bet on emerging technologies that can integrate with the development of cities and their inhabitants.

Framework and proposals for piezoelectricity

The key methods of piezoelectric materials for power generation support, their advantages and disadvantages, as well as special cases of applications and propositions, are presented next. They are followed by a study on the environmental impact and economic aspects of piezoelectric power generation and a corresponding proposition.

Piezoelectric tiles

A piezoelectric tile is a floor element from which electricity is obtained when pedestrians walk over it, applying the principles of piezoelectric materials; this may be considered micro power generation for auxiliary power [10]. On this line, S. Joo et al. propose the following classification (p. 1): "Two ways of generating power from piezoelectric modules are available: hitting and vibrating." In the hitting mode, they argue, the material undergoes a load applied directly to the piezoelectric modules. The vibration mode, on the other hand, in which loads are merely superficial, has a wider application because of its increased durability; given the breadth of the surfaces involved, it can be installed over wider areas, making it a source of macro power [11]. Since people must move around, walking almost everywhere, what better than to choose crosswalks, office buildings, shopping malls or even discotheques to take advantage of that energy and create electricity? This is an innovative way to generate clean energy, and its success is guaranteed, because humankind will not stop moving from one place to another by walking, the raw material of this method. Thus, we have Proposition 1: There is useful generation by energy harvesting through piezoelectric tiles.

Piezoelectric pavement as energy collector by traffic

Piezoelectric pavement involves placing devices in asphalt that are able to transform the pressure and vibration exerted by vehicles into electric power. On the one hand, this energy arrives as electrical pulses that must be rectified and transformed to obtain storable and usable energy. On the other hand, in recent decades it has become a priority to build new roads, owing to the significant increase in the vehicle fleet [12]. Thus, as Mike Gatto says [13]: "A major source of renewable energy is right beneath our feet --or, more accurately, our tires. California is the car capitol of the world. It only makes sense to convert to electricity the energy lost as cars travel over our roads." Specialized sections of road have been designed to convert the vibrations caused by cars into energy in places such as Israel, Japan, the United States and Colombia, to name a few countries [10]. This application is new and interesting because a truck, for instance, can generate a large amount of power once it travels on a road that has a piezoelectric material layer under the asphalt. This layer also serves to obtain data such as the speed, weight and frequency of passing vehicles. Further, there are reports of high generation capacity from road stretches as short as one kilometer using piezoelectric technology [14]. Given such prototype tests of energy harvesting techniques, with real, practically grounded applications [15], there is great potential in the transit of vehicles over these devices (an illustrative estimate follows below).
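As a rough, hedged illustration of the scale involved, the following back-of-envelope Python estimate shows the arithmetic; every input figure is an assumption chosen for demonstration, not a value taken from the reviewed literature.

```python
# Illustrative estimate of piezoelectric pavement output over 1 km of lane.
# All three inputs are assumptions chosen only to show the arithmetic.
J_PER_PASS_PER_MODULE = 2.0    # joules harvested per module per axle pass
MODULES_PER_KM = 10_000        # embedded modules per km of instrumented lane
VEHICLES_PER_DAY = 20_000      # daily traffic on the stretch

joules_per_day = J_PER_PASS_PER_MODULE * MODULES_PER_KM * VEHICLES_PER_DAY
kwh_per_day = joules_per_day / 3.6e6   # 1 kWh = 3.6 MJ

print(f"Estimated harvest: {kwh_per_day:.0f} kWh per day per km")  # ~111 kWh
```

Under these assumptions the output is on the order of a hundred kWh per day per kilometer, and the figure scales linearly with traffic volume and module density, which is consistent with the observation that roads carrying heavy traffic offer the greatest potential.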
Besides, when talking about roads where heavy vehicles pass, the potential is greater, even to the point of achieving independence from the network and indirectly recovering the investment in these piezoelectric devices. We then present Proposition 2: Designing new roads where heavy traffic is expected with piezoelectric pavement might help support power generation.

Railways adapted with piezoelectric devices

The generation of electricity through train tracks is an adaptation of the application of power generation from roads by means of pavements with piezoelectric elements [16]. The advantage of the application in railways is the guarantee that pressure will be exerted at the same points repeatedly [14]. This is because the deformable piezoelectric devices are placed in the joints that make up the support of the rails, connected to the rest of the equipment so that the energy collected can be used [4,9]. Whether for the power supply of trains, for line signaling or even for supply to the general power grid, each is possible, because it is estimated that the passage of trains could generate at least some 120 kWh of clean energy while at the same time providing information such as speed, weight and number of wheels [17]. Finally, this type of transport inevitably transmits vibration and pressure to specific points on the rails, making self-sustained signaling along the line attainable; better yet, if the trains are electric, this may reduce demand on the electricity network and in such cases feed power to the grid [14]. Considering this, we have Proposition 3: Adapting piezoelectric devices to railways as energy harvesters might help for power generation support.

Piezoelectric breakwater

Wave energy is a huge source of clean potential energy which can be exploited by piezoelectric devices. Given the large number of coastal areas (70% of our planet is covered by oceans), a great amount of electricity is calculated to be obtainable from the waves. Devices can even be installed on buoys or floats offshore, although to be profitable these devices must be large [5,6]. Such a system would have a minimal impact on the marine environment, and it could also host useful sensors for navigation [17]. Due to the size of this resource, savings would be made in the installation of lines for coastal areas that lack electrification, along with the benefit that the energy harvested this way is clean and renewable. We then present Proposition 4: Harnessing the natural potential of sea and ocean energy through piezoelectric breakwaters might help for power generation support.

Environmental impacts and piezoelectricity

If there is something common to all methods of implementing piezoelectric materials as a means of generating energy, it is that they obtain electricity from structures or activities that form part of the current routines of human beings. Such materials have come a long way and recently have had a huge impact on the scientific community; the first paper in which the term "nano piezoelectric generator" was used is by Z. L. Wang, the father of this concept [18]. Based on the Web of Science database, this paper was cited more than 3500 times in a decade (2006-2016) [19]. In other words, research into alternatives that mitigate environmental impact is a priority.
One just needs to look at the different weather events that challenge us to imagine relational cities, in which all members of society and productive sectors seek to address climate change [20]. In addition, there is the importance of supplying energy demand given the lack of physical space available in fully urbanized areas. In the context of a complex global energy situation, environmental pollution and climate change, eco-friendly technology applications will be required. On this matter, the impact caused to the environment by piezoelectric materials is minimal, as they adapt to existing structures without collateral damage [21]. When analyzing the energy matrices of developed countries, we can see a clear trend of transforming what we know as conventional generation methods (fossil, nuclear, thermal, etc.) into practices that respect urban space and biodiversity, coupled with the relentless pursuit of integrating the individual as the protagonist of this transformation [22]. Furthermore, given climate change and the way we are forced to seek power generation solutions that are friendly to the environment, methods of harvesting energy through piezoelectric materials are positioned as an alternative for the future, allowing us to state Proposition 5: Piezoelectricity can generate energy without affecting the environment, harvested from and adapted to existing urban spaces as power generation support for the electrical grid.

Economic implications of energy harvesting methods from piezoelectricity

The multidimensional relationships of renewable energy projects, no matter what kind they are, must also be considered from one-dimensional perspectives, such as society and the energy market [23]. Private enterprise, and even the political framework of government, must be involved in the relevant care of the environment. This is extremely necessary to create an atmosphere of competitiveness against traditional forms of generation, to which countries have had to stick, partially for the lack of private investment in alternative methods of harvesting energy [24]. Hence, one of the biggest challenges is the cost of storage for energy from piezoelectric materials, making it important to encourage both innovation and public-private support [25]. Consequently, socioeconomic analysis should be conducted, especially in private investment projects in which profitability plays an important role. Given this, it is essential to choose investment projects that are well adapted to the frameworks that investment offers. Public policies that encourage investment are essential for the growth of countries, especially when it comes to the transparency of implementation processes, as well as support from banking institutions through the financing of projects with long-term returns. Therefore, we present Proposition 6: Developing and clarifying public policy plays a decisive role in positioning energy harvesting methods successfully, as they must be not only innovative but friendly to all parties involved.

Research methodology

To achieve the objective of this paper, a detailed review of the literature is required, following the methodology proposed by [26]. Thus, this part seeks to analyze the literature in the area of piezoelectricity as a power generation support, placing it within the global context. As for data sources, the collection of information was carried out from various databases, such as Science Direct, JSTOR and EBSCO.
Complementarily, we also consulted secondary sources such as Google Scholar, ResearchGate and Academia, where we sometimes requested documents directly from the authors. We found papers from 1994 to 2019, choosing those whose content has information on methods of generation and energy harvesting by means of piezoelectric devices. The study selection was done in two stages. Initially, a general search was conducted to probe the recurring parameters of the search; the years and languages considered primary were English and Spanish, and a total of 51 documents were collected, including papers, theses and books. The document selection criteria were based on the leading authors in the field, as they were preferred for decision criteria and citation in our paper. At this first stage of review, 14 documents were ruled out. In the second stage, 7 were discarded after reading their abstracts, leaving 30 documents useful for the literature review. The search strategy was to use keywords such as piezoelectricity, power generation and energy harvesting. We also screened documents by date, with those published within the last 5 years being the most relevant to our study.

Results

As said before, 51 documents were initially collected. By stage two, there were 30 documents left for review. Useful documents after the first stage: piezoelectricity in general, 7; piezoelectric tiles, 5; piezoelectric pavement devices, 10; piezoelectric wave energy, 5; environmental issues, 3; economic issues, 5; solar generation support, 2. Discarded documents in the first stage: types of piezoelectric materials, 5; operating frequencies of piezoelectric materials, 3; piezoelectric microcircuits, 1; piezoelectric crystal structures of materials, 2; wind generation, 1; micro generation, 2. From this literature, we found that the search for energy harvesting methods has grown exponentially in recent years. According to the piezoelectric energy forecast from IDTechEx, the industry was projected to grow from $145 million in 2018 to $667 million by 2022 [27]. Further, the interest in finding methods that respond to growing energy needs inspires researchers to explore the potential of piezoelectric materials for converting mechanical energy into electricity. Of such piezoelectric materials, we explored in this review those that are the most researched and implemented around the world, in order to establish a comparison between them. Hence, after reading the contents of the 30 documents from stage two, we were left with four papers whose studies have been carried out in different parts of the world and which give us key values: documents in which the authors provide data from places where they have developed prototype applications of this technology. We omitted the theoretical data presented and focused on the practical data collected by the various authors, which, with future research and improvements in piezoelectric technology, could yield greater energy production. Furthermore, as seen in Table 1, it is in the rails where the greatest power generation potential lies, exceeding the second-ranked method, piezoelectric pavement, by 343%. The piezoelectric railway could keep 2400 sodium lamps lit 12 hours a day for 30 days (a rough energy budget for this figure is sketched below). This shows the potential of piezoelectricity as a means of power generation support in places where one of its generation methods is usable. Each method described here has potential for generation, according to its field and area of application.
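To put the railway lighting claim in perspective, a quick sanity check can be written in Python. The lamp wattage is an assumption, since the reviewed documents do not state one; only the lamp count, hours and days come from the claim above.

```python
# Energy budget behind "2400 sodium lamps, 12 h/day, for 30 days".
LAMP_W = 70          # watts per high-pressure sodium lamp (assumed value)
N_LAMPS = 2400
HOURS_PER_DAY = 12
DAYS = 30

kwh_needed = LAMP_W / 1000 * N_LAMPS * HOURS_PER_DAY * DAYS
print(f"Required energy: {kwh_needed:,.0f} kWh over {DAYS} days")    # 60,480 kWh
print(f"Average power while lit: {LAMP_W * N_LAMPS / 1000:.0f} kW")  # 168 kW
```

Under the 70 W assumption, the claim implies roughly 60 MWh over the month, far above the 120 kWh per-passage estimate quoted earlier, so figures of this size evidently presuppose many train passages over an extended instrumented stretch.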
For instance, when compared with the power consumption of street lighting, piezoelectric pavement provides 125% of the energy required in the case of sodium lamps and 287% in the case of LED lamps [31]. This shows the range of applicability of piezoelectric materials as a means of harvesting energy. Piezoelectric energy production is probably not yet enough for an application site to become truly independent of the power grid, but it may yield partially self-sufficient networks while generating clean energy. In relation to similar applications, a supporting solar-generation tile on a bike path built in Tourouvre-au-Perche, France, produces 409 kWh/day [30]. A similar project was also installed in Krommenie and Wormerveer, near Amsterdam [31]. With this literature comparison, it is demonstrated that pavement power generation from piezoelectric materials is superior to solar pavement.

Discussions and conclusions

We summarize the answers to the six propositions from Section 2 through an analysis of the compliance of each one. The table below shows total acceptance of Propositions 1-3 and 5, as well as partial acceptance of Propositions 4 and 6. We next go over the six propositions.

Proposition 1: The growth of projects that take piezoelectric tiles as a generation principle is exponential. In Table 1, Comparison of Generation, one can observe specific performance data that vary according to the area and the technology applied, representing energy that could readily be used for lighting either a street or a house. Companies dedicated to the installation of tiles have contracts with some of the largest corporations in the world, such as Google, Cisco and Siemens, and the case of Pavegen Industries makes evident the enthusiasm for, and the potential identified in, this way of harvesting energy.

Proposition 2: On the other hand, the possibility of obtaining electric power from unusual sources is a concept that continues to expand. For example, roads that apply piezoelectric materials and generate electricity from the passage of vehicles are a reality and have a great future, according to H. J. Xiang, who presents an analysis of the energy collection of roads. Although there are currently several projects around the world, a very particular case is that of the company Innowattech, which placed piezoelectric generators under highway No. 4 in Israel that were then covered with asphalt; owing to the success of the pilot project, it is planned to place the generators on more roads of the country's road system [32].

Proposition 3: In addition to the above, the generation capacity in railways is enormous, as shown in Table 1. We can see that piezoelectric railways are those with the greatest generation potential. P. Kumar, in his work "Piezo-Smart Roads," states that the advantage offered by the structure and nature of this means of transport is unparalleled for exploitation through piezoelectricity [14]. Moreover, there are countries with rail networks averaging 40,000 km in which this application could have enormous success in the future [33].

Proposition 4: Energy can be generated not only on land, with pedestrian crossings, streets and roads; there is also a huge challenge in determining ways to take advantage of sea waves to produce energy. There are already projects that take the basic principle of piezoelectric generation and adapt it. Y. Yan and S. J.
Priya affirm in their work, Piezoelectric Materials for Energy Harvesting, that energy can be generated through the sea and that it is friendly to the environment [5]. The remaining challenge is a large pilot project of these systems, so as to know their real potential. However, from Table 1, we can identify early explorations of the energy capacity of this method. Proposition 5: On the other hand, there is no doubt that the importance of climate change is growing. Far PR, in his work "Climate change and the relational city", says that if we create a future in which all the parts that make up the productive sector converge and work together toward the same goal, we can then affirm that we have achieved sustainable development [19]. Piezoelectric technology is not exempt from this and seeks to incorporate ways of generating energy into existing structures, without damaging them or generating emissions, owing to the nature of its process, which requires only mechanical stress, as evidenced in the generation methods described throughout this review. Proposition 6: One of the main challenges of piezoelectric energy generation methods is to position themselves in the market so as to become established in all possible areas of application. For this, the participation of the state apparatus is a facilitator of the process. It is necessary that innovation be encouraged, and that licensing and financing processes become more transparent. N. Edomah, in his work Economics of Energy Supply, notes the complexity of the processes of creating energy projects in terms of investment and the complexity of the state apparatus [24]. In recent years, renewable energy has increased considerably around the world; specifically, the percentage of energy from renewable sources in final gross energy consumption has almost doubled: in the case of the European Union, it has gone from approximately 8.5% in 2004 to 17.0% in 2016 [34]. This positive evolution has contributed to the binding state objectives aimed at increasing the percentage of energy from renewable sources, which were established in European Union agreements. The study presents some limitations that are opportunities for future research, in subjects such as the viability and economic profitability of more advanced projects, given the limited diffusion of piezoelectricity in industry. In addition, as of now there is still very little data available to perform a comparison that would determine its competitiveness with traditional methods of energy generation, urging more empirical and experimental research. Piezoelectric materials, as protagonists of supporting energy harvesting, not only represent a solution to explore but also an opportunity to encourage innovation; it is a new field of study that deserves to be analysed. Therefore, our paper has presented a review of piezoelectric energy and its future from many perspectives, determining its relevance in an increasingly changing world thirsty for new options that are viable and whose impact is as small as possible. Finally, this represents a first literature review about piezoelectricity as a support to power generation, showing the practical implications of the implementation of piezoelectric systems.
2019-09-26T08:53:39.682Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "1a6626daceb62de8fed315d1e15bb11dea0c6ec0", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/42/matecconf_acmme2019_05004.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d42eeb1774575224f3d9504fc1eac757209f7a8f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
104839212
pes2o/s2orc
v3-fos-license
Solid state isomerisation of atropodiastereomers: A promising method to obtain high diastereomeric ratios A series of imides with two chirality axes have been synthesized. Comparatively low activation barriers to isomerisation and, at the same time, high melting points lead to interesting behaviors of these compounds: whereas thermodynamic equilibria of nearly 50:50 diastereomeric ratios were reached rapidly in solution at 110°C, ratios up to 99:1 in favor of one diastereomer have been observed after prolonged heating above the same temperature of the mixture of diastereomers in the solid state. Crystallographic and differential scanning calorimetric (DSC) techniques have been used to support these studies and their interpretation. Contents: Presentation; Introduction; Synthesis; Stereochemistry, X-ray structures; Isomerisations in solution; Isomerisations in the solid state; Conclusion; References and Notes. Introduction Atropoisomerism is defined as a particular type of stereoisomerism due to restricted rotation around single bonds which behave like stereogenic axes [1]. This type of isomerism is mainly found in bi- or polyaryl compounds. Various natural products, chiral auxiliaries and catalysts have one or more stereogenic axes. Many efforts have been devoted to the development of new methods for the absolute stereochemical control of one stereogenic axis (selective synthesis of one atropenantiomer at the expense of the other) [7]. But up to now, little attention has been paid to the relative stereochemical control of two or more stereogenic axes (selective preparation of one atropodiastereomer at the expense of several others). The following represents an original contribution to this topic: high diastereomeric ratios of atropodiastereomers of 1 have been obtained via thermal isomerisation in the solid state. Synthesis The synthesis of compounds 1a-d was readily accomplished via isobenzofuran chemistry as depicted in Scheme 1. Isobenzofuran 2 was prepared in two steps by standard reactions [8]. Diels-Alder reactions of 2 with maleimide and N-benzylmaleimide proceeded in high yields, giving adducts 3a and 3b as single diastereomers, presumably with an endo stereochemistry [9]. Aromatisation of the Diels-Alder adducts furnished the target compounds 1a and 1b in high yields. Their demethylation with BBr3 gave 1c and 1d nearly quantitatively. Stereochemistry, X-ray structures The presence of two stereogenic axes leads to the existence of two atropodiastereomers: one cis (or meso), achiral, and one trans (or dl), chiral. Diastereomers of 1a were separable by crystallization, those of 1b, 1c and 1d by column chromatography. No interconversion was noticed in solution at room temperature after several hours [10].
Attribution of the stereochemistry has been made by chiral HPLC analysis [11]: the trans diastereomer was resolved into its enantiomers, giving two peaks of equal intensities, whereas the achiral cis diastereomer gave only one peak. This attribution has been confirmed on 1a by X-ray crystallography: monocrystals of each of 1a cis and 1a trans have been obtained by slow evaporation of their solutions. The case of 1a trans was interesting: while the trans relationship of the two OMe groups is obvious in the structure, it appears that in the crystal the two enantiomers are statistically interchangeable at each molecular site of the lattice, by exchanging the position of the fused benzene ring and of the imide pattern; it crystallises as a pseudoracemate, or solid solution of enantiomers, a rather uncommon behavior of chiral compounds in the solid state [12]. Isomerisations in solution Whereas diastereomers of 1a did not interconvert at room temperature, fast equilibration occurred in refluxing toluene (bp 110°C): after two hours, starting from 1a cis or 1a trans, a 45:55 ratio of 1a cis:1a trans was obtained. The same ratio was obtained with 1b. Kinetics of isomerisation of 1a was followed in refluxing iso-propanol (bp 82°C), giving a rate constant k = 0.17 h⁻¹ (t½ = 1.5 h). Solubility problems did not allow the same experiment with 1b, but the atropoisomerisation barrier of 1b should not be very different from that of 1a: one can reasonably assume little or no dependence of the atropoisomerisation barrier on the nature of the nitrogen substituent X. The nature of the R substituent should have a more pronounced influence: indeed, isomerisation of 1c (R = OH, X = H) in boiling iso-propanol gave a rate constant of 0.52 h⁻¹ (t½ = 0.5 h). After equilibration, a 1c cis:1c trans ratio of 50:50 was obtained. Nearly the same data were obtained with 1d. Isomerisations in the solid state I. Results: Both diastereomers of 1a are crystalline solids with high melting points (> 250°C). As isomerisation of 1a was found to be fast in solution above 110°C, we became interested in the possibility of performing isomerisations in the solid state. This was possible indeed, but the results were quite unexpected: when a sample of pure 1a cis as a microcrystalline powder was heated at 110°C for two hours, small amounts of 1a trans had formed, but the diastereomeric ratio of the sample was far from that of the equilibrium in solution. The isomerisation rate increased as the temperature was raised: a cis:trans ratio of about 70:30 was obtained after one hour at 180°C. Much more surprising, after one night at 180°C the diastereomeric ratio was 10:90 in favor of the trans isomer, having overtaken the solution equilibrium ratio! Further heating at 180°C for two days led to almost complete isomerisation of the sample (cis:trans 1:99). During these experiments no macroscopic changes of the sample, in particular no melting (even partial), were noticed. When the same experiments were performed starting with solid 1a trans, no isomerisation at all was noticed. Heating 1a cis or 1a trans up to complete melting (T > 285°C) and fast cooling gave the same diastereomeric ratio as that of the solution equilibrium (cis:trans 45:55). Obviously something interesting happened when 1a was heated as a crystalline solid.
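For reference, the solution-phase equilibrations just described follow the standard reversible first-order scheme; the sketch below assumes that convention (the text does not state how the quoted k and t½ were defined):

\[ \mathrm{cis} \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\; \mathrm{trans}, \qquad \frac{d[\mathrm{cis}]}{dt} = -k_{1}[\mathrm{cis}] + k_{-1}[\mathrm{trans}] \]

\[ [\mathrm{cis}](t) - [\mathrm{cis}]_{\mathrm{eq}} = \left([\mathrm{cis}]_{0} - [\mathrm{cis}]_{\mathrm{eq}}\right) e^{-(k_{1}+k_{-1})t}, \qquad t_{1/2} = \frac{\ln 2}{k_{1}+k_{-1}}, \qquad K = \frac{k_{1}}{k_{-1}} = \frac{[\mathrm{trans}]_{\mathrm{eq}}}{[\mathrm{cis}]_{\mathrm{eq}}} \]

so a 45:55 equilibrium corresponds to K ≈ 1.2. This framework covers only the solution experiments; the solid-state behaviour described above clearly calls for a different explanation.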
A more thorough study of the crystal behaviour of 1a cis and 1a trans was therefore undertaken. Melting points were determined under different conditions. A Kofler hot bench gave "instantaneous" melting points of 250-252°C for 1a cis and > 270°C (beyond the limits of the apparatus) for 1a trans. Progressive heating of the samples gave different results: a capillary-tube melting point apparatus gave mp > 270°C for both 1a cis and 1a trans. Finally, DSC (Differential Scanning Calorimetry) studies were undertaken: for 1a trans, the DSC trace at a temperature gradient of 10°C/min gave a sharp heat absorption at the melting point (281.4°C) with no other noticeable phase transition below this temperature. For 1a cis, a progressive heat absorption occurred within a large temperature interval, followed by a sharp peak with a maximum at 283.6°C. Although complementary studies are necessary, such as modification of the temperature gradient, a solid state transformation of 1a cis seems likely, yielding 1a trans in the same crystalline state as that obtained by room temperature crystallization of 1a trans. This result was further confirmed by a powder X-ray diagram of a sample of 1a cis isomerized at 180°C for 2 days: it has a very similar diffraction pattern to a simulated powder X-ray diagram obtained from single crystal X-ray data of 1a trans. Interestingly, similar behaviours have been found with other compounds of the series: solution equilibration of 1b gave a cis:trans ratio of 45:55, whereas solid state heating of this mixture for two days at 180°C gave a 5:95 cis:trans ratio. Products 1c and 1d were also isomerisable in the solid state. When the 50:50 cis:trans mixture of 1c (obtained after solution equilibration) was heated 2 days at 180°C in the solid state, a 95:5 ratio was obtained in favor of the cis isomer. Although further experiments are necessary, the same tendency was found for 1d: solid state isomerisation of a 40:60 cis:trans mixture gave almost pure cis. II. Discussion: The preceding results can be rationalized in the following manner: at room temperature, cis and trans isomers do not interconvert. As true diastereomers, they have physicochemically distinct properties and each of them has its own crystalline form. If the temperature is raised up to a value at which interconversion between the cis and trans forms becomes fast, they can no longer be considered diastereomers but simple conformers of the same compound. Their crystalline forms should then be considered polymorphs [13]. In general, at a given temperature and pressure, one crystalline form of a given compound is more stable than the other. There will be a thermodynamic tendency of the metastable crystal to transform into the more stable one. Of course, kinetic factors do not always allow such transformations at significant rates, and some special conditions are required for solid-to-solid transformations to occur [13]. Such conditions are most probably fulfilled in the solid state isomerisations disclosed above. After cooling, the conformers again become distinct diastereomers and the diastereomeric ratio is "frozen" at the point it had reached in the solid-solid transformation. Above the melting point, ordering is destroyed and equilibration occurs very rapidly. Fast cooling then "freezes" the equilibrium reached in the melted phase.
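The thermodynamic condition invoked here is standard polymorph thermodynamics rather than anything specific to these data: at the working temperature T (and fixed pressure), the metastable crystal can transform spontaneously only if

\[ \Delta G_{\mathrm{trans}}(T) = G_{\mathrm{stable}}(T) - G_{\mathrm{meta}}(T) = \Delta H_{\mathrm{trans}} - T\,\Delta S_{\mathrm{trans}} < 0, \]

while the rate at which it actually does so is governed by kinetic factors such as nucleation and molecular mobility within the lattice.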
Conclusion The compounds disclosed in this paper combine medium-sized atropoisomerisation barriers and high melting points. These features allow atropoisomerisation in solution and in the solid state. The latter method, giving diastereomeric ratios of up to 99:1 in favor of one atropodiastereomer, offers new opportunities for the relative stereochemical control of two stereogenic axes. The scope and limitations of this method are now under investigation.
2019-04-10T13:12:01.621Z
2000-09-11T00:00:00.000
{ "year": 2000, "sha1": "9b8c216c66b7fc982a36a8a7e918ea2b1ad05635", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ecsoc-4-01786", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7aa2c9eb07f147d2953010ccf166c02d97af268c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
227295552
pes2o/s2orc
v3-fos-license
Evidence based models of care for the treatment of alcohol use disorder in primary health care settings: a systematic review Background Pharmacological and behavioural treatments for alcohol use disorders (AUDs) are effective but the uptake is limited. Primary care could be a key setting for identification and continuous care for AUD due to accessibility, low cost and acceptability to patients. We aimed to synthesise the literature regarding differential models of care for the management of AUD in primary health care settings. Methods We conducted a systematic review of articles published worldwide (1998-present) using the following databases: Medline, PsycINFO, Cochrane database of systematic reviews, Cochrane Central Register of Controlled Trials and Embase. The Grey Matters Tool guided the grey literature search. We selected randomised controlled trials evaluating the effectiveness of a primary care model in the management of AUD. Two researchers independently assessed and then reached agreement on the included studies. We used the Cochrane risk of bias tool 2.0 for the critical appraisal. Results Eleven studies (4186 participants) were included. We categorised the studies into 'lower' versus 'higher' intensity, given the varying intensity of clinical care evaluated across the studies. Significant differences in treatment uptake were reported by most studies. The uptake of AUD medication was reported in 5 out of 6 studies that offered AUD medication. Three studies reported a significantly higher uptake of AUD medication in the intervention group. A significant reduction in alcohol use was reported in two out of the five studies with lower intensity of care, and three out of six studies with higher intensity of care. Conclusion Our results suggest that models of care in primary care settings can increase treatment uptake (e.g. psychosocial and/or pharmacotherapy), although results for alcohol-related outcomes were mixed. More research is required to determine which specific patient groups are suitable for AUD treatment in primary health care settings and to identify which models and components are most effective. Trial Registration PROSPERO: CRD42019120293. Background Alcohol use disorder (AUD) is highly prevalent and contributes to 4% of the global disease burden and 5.3% of mortality worldwide [1]. Effective and safe treatments are available but are underutilised [2,3]. For example, it is estimated that only 3% of AUD patients receive approved pharmacotherapy in Australia [4,5]. In the USA, only 2.1% of a cohort with AUD were found to have been prescribed alcohol pharmacotherapy [6]. Moreover, the time between onset of the disorder and initial treatment can be decades [2,7]. Only 1 in 10 individuals with AUD perceive a need for treatment, which possibly contributes to the low rate of enrolment and high dropout in specialty care [8]. Patients who do specifically seek AUD treatment are likely to be those with severe conditions, including greater alcohol intake and concurrent mental and physical comorbidity [3]. However, a significant proportion of AUD patients access primary health care, albeit for other reasons [9], and this represents an opportunity for earlier intervention. Primary health care appears to be an ideal treatment setting for AUD due to this accessibility but also due to low costs and acceptability for patients. Primary care settings are able to provide longitudinal, comprehensive and coordinated care with medication management [10].
Patients commonly present to primary care for problems related to AUD such as mood disorders, hypertension, injuries and others. The chronic and relapsing nature of some cases of AUD makes this type of care appropriate and necessary. Indeed, while the rate of prescribing of AUD pharmacotherapy is low, one recent study demonstrated that clients who had more contact with the primary care system were more likely to be prescribed AUD medications [6]. Identifying and treating early-stage AUD in these settings can potentially prevent conditions from deteriorating. In recent years, several models of care have been evaluated in primary care settings. The 'screening, brief intervention and referral to specialty care (SBIRT)' model is best known, and multiple systematic reviews confirm its effectiveness [11][12][13]. However, in the management of moderate-severe AUD, the effectiveness of SBIRT is limited at best [3,14,15]. Integrated models of care or pathways have been developed, whereby the treatment is delivered either by the general practitioner or by an on-site nurse practitioner. Accordingly, we aimed to synthesise the existing models of care, other than SBIRT, for the management of AUD in primary care settings. We sought to evaluate the effectiveness of these care models with regard to increasing treatment engagement (e.g. number of visits and/or uptake of AUD pharmacotherapy) and reducing alcohol consumption, and to provide recommendations for further research. Methods We followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines for systematic reviews [16]. We registered the systematic review with the international Prospective Register of Systematic reviews (PROSPERO: CRD42019120293). Additional information on the methods can be found in the published protocol [17]. Eligibility criteria Studies were eligible if: 1) they were published in English, 2) they were published after 1 January 1998 (to allow for a 20-year period from search commencement), 3) they compared models for the management of AUD, and 4) at least 80% of the subjects had an AUD, or results for subjects with AUD were presented separately from those with other conditions. We excluded languages other than English, as the resources (cost and time) required for translation were unavailable. Our interventions of interest are complex health interventions which target how care is organised in addition to the types of treatment. For inclusion, the model of care had to cover several parts of the care pathway other than screening. The setting had to be primary health care, using primary care physicians, nurse practitioners and/or case managers. Consultation with specialty care was accepted. Treatment facilities had to be physically in or attached to the primary care clinic. We excluded studies where the independent variable was the specific treatment rather than the model of care. We also excluded articles examining SBIRT (screening, brief intervention, referral to treatment) for individuals with mild AUD unless a novel component was added to the model of care. Search strategy We searched Medline, PsycINFO, the Cochrane database of systematic reviews, the Cochrane Central Register of Controlled Trials (CENTRAL) and Embase (2019). We conducted reference searches of relevant reviews and articles. The Grey Matters tool, which is a checklist of health-related sites organized by topic, and Google were used in the grey literature search.
Authors of identified conference abstracts were contacted for additional information about their study and the potential availability of preliminary data. Before publication of this systematic review, we ran the search again to include all newly published studies (04/06/2020). See Appendix 1 for our search strategy in Medline and Appendix 2 for grey literature. Study selection Initially, duplicates were removed from the database, after which all the titles were screened with the purpose of discarding irrelevant articles (unrelated to alcohol treatment or primary care). The remaining papers were included in an abstract and full-text screen. All steps were completed by one researcher (SR) in consultation with two other researchers (KM and JC). Disagreements were resolved in consensus-based discussion. Data extraction and synthesis Key information extracted from the articles included design of the study; type of participants; study setting; type of intervention/model of care; type of health care worker; duration of follow-up and outcome measures. Outcome data on treatment engagement (e.g. number of visits and/or uptake of AUD pharmacotherapy or any treatment) and alcohol use were extracted. Categorical outcomes were converted into log odds ratios (OR) and log incidence rate ratios (IRR). Continuous measures were converted into standardized mean differences (SMD); these conversions are illustrated in the sketch after the Table 1 excerpt below. Data extraction was completed by one researcher (SR) with error checking by two other researchers (JC and KM). Due to variability in study design, measures and outcome data reporting, we were unable to extract sufficient data to perform a meta-analytic synthesis. Quality appraisal All studies were critically assessed by two researchers independently using the Revised Cochrane risk-of-bias tool (RoB 2.0) [18]. Meta-biases such as outcome reporting bias were evaluated by determining whether the protocol was published before recruitment of patients. Additionally, trial registries were checked to determine whether the reported outcome measures and statistical methods matched original protocols. We also reported on funding from the pharmaceutical industry. To minimise publication bias, we looked at conference abstracts and grey literature. Results The literature search including synonyms for 'model of care' returned 1060 records. An additional 71 records were identified from other sources (Fig. 1). The details of the included studies (n = 11) are summarised in Table 1 according to intensity and/or duration of care (from low to high). Population This systematic review included 11 studies with a combined number of 4186 participants (72% male). Identification of hazardous alcohol use or AUD differed among the studies, ranging from utilizing assessment tools to more formal diagnosis of AUD using the International Statistical Classification of Diseases (10th revision) (ICD-10) or according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria for current alcohol dependence. [Table 1 excerpt — stepped-care intervention: Step 1: 20-min session of behavioural change counselling; Step 2: MET (three 40-min sessions on a weekly basis); Step 3: referral to specialist alcohol treatment; referral to the next step occurred when patients still consumed alcohol at hazardous levels after 4 weeks. Control group: 5-min structured brief intervention plus a short self-help booklet outlining consequences of drinking [29].]
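Returning briefly to the data extraction step above: the conversions to log odds ratios and standardized mean differences are the standard ones. The sketch below illustrates them with invented counts and summary statistics (none of these numbers come from the included trials).

# Standard effect-size conversions used in the synthesis; numbers are invented.
import math

def log_odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """log OR for events/non-events a/b (intervention) vs c/d (control)."""
    return math.log((a * d) / (b * c))

def smd(m1: float, s1: float, n1: int, m2: float, s2: float, n2: int) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# e.g. medication uptake 30/100 vs 15/100, and drinks/week 21 (SD 9) vs 28 (SD 11)
print(log_odds_ratio(30, 70, 15, 85))      # ~0.89
print(smd(21.0, 9.0, 60, 28.0, 11.0, 62))  # ~-0.70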
Three studies had strict exclusion criteria regarding current substance abuse and dependence other than alcohol (a maladaptive pattern of substance use, such as cannabis or amphetamines, leading to clinically significant impairment and distress) [21,22,24], while others did not mention this exclusion criterion. Two studies specifically included patients with AUD and substance use disorder (SUD) [25,28]. Watkins et al. reported that 94% of the sample had an AUD, of which 40% had both an opioid and alcohol use disorder (OAUD) [30]. Data for the AUD subgroup without comorbid opioid dependence were obtained from the authors upon request. The study by Upshur et al. specifically included homeless women with AUD, whilst most others excluded homeless people from their studies [26]. Setting Two studies were conducted in the United Kingdom [22,23], one study was set in Sweden [21] and the remaining eight trials in the United States. Most studies were conducted in community primary care settings [19][20][21][22][23] and three studies were set in VA primary care clinics [24,27,29]. Other locations included a hospital-based primary care clinic [28], a health centre for the homeless [26] and a federally qualified health centre [25]. Study design Two studies were cluster-randomised controlled trials [20,26] and one study was labelled a randomised encouragement trial, which offered services to patients but did not require that they accept them [27]. None of the studies blinded participants or physicians. Six studies included blinded assessment of the outcomes (researchers were unaware of the patients' group assignment) [19,20,22,24,25,27]; however, alcohol consumption was often obtained by self-report (e.g. standard drinks (SD) of alcohol per week). Intervention The models of care in each of the studies differed significantly with regard to the duration, the setting, the health professionals engaged and access to types of treatment. As a result, following data extraction, we decided to divide the models of care into lower intensity models and higher intensity models, whereby the components for each of these are depicted in detail in Table 2. Lower intensity models The studies by Moore et al. [19] and Ettner et al. [20] evaluated a multi-faceted model with personalised patient reports, educational booklets and a drinking diary to educate patients about their drinking habits. The primary care physician would also receive a drinking report prior to every scheduled appointment to stimulate discussion about alcohol consumption. Subsequently, patients would receive 3 telephone behavioural counselling sessions. These two studies differed with regard to the timing of these counselling sessions (frontloaded versus more spread out, respectively). The studies by Wallhed-Finn et al. [21], Drummond et al. [22] and Coulton et al. [23] evaluated a variation of a stepped-care model (see the sketch after this paragraph). They all started with a standard brief intervention (5-10 min). The intensity of the treatment increased when patients continued to drink at hazardous levels. Treatment included feedback and behavioural counselling (based on cognitive behavioural therapy (CBT) and/or motivational enhancement therapy (MET)), with referral to specialty care to follow if necessary. The model evaluated by Wallhed-Finn et al. [21] was unique in that it provided psychosocial therapy adapted to the context and time constraints of primary care, with the option of any pharmacological treatment.
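A minimal sketch of this escalation logic, purely for illustration: the step labels follow the Table 1 excerpt above and the 4-week review rule is quoted there, but the function itself is not a protocol from any of the reviewed trials.

# Illustrative stepped-care escalation, not a protocol from the reviewed trials.
STEPS = [
    "brief intervention / behavioural change counselling",
    "motivational enhancement therapy (three 40-min weekly sessions)",
    "referral to specialist alcohol treatment",
]

def next_step(current: int, hazardous_at_review: bool) -> int:
    """Escalate one step when drinking is still hazardous at the 4-week review."""
    if hazardous_at_review and current < len(STEPS) - 1:
        return current + 1
    return current

step = 0
for still_hazardous in (True, True):   # two successive 4-week reviews
    step = next_step(step, still_hazardous)
print(STEPS[step])  # -> referral to specialist alcohol treatment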
Higher intensity models (longitudinal care models) Six of the included studies [24][25][26][27][28][29] assessed the effectiveness of models of care that were based on elements of the collaborative care/chronic care model (CCM) [31,32]. The six studies offered a high intensity intervention with psychosocial support (MET and/or CBT) and pharmacological treatment for AUD. They all integrated addiction expertise and behavioural counselling support and assured good communication between primary care physicians and other health professionals using the electronic medical record (EMR) system. Often a case manager kept track of treatment and attendance, assuring active follow-up. To increase treatment engagement, CCM concepts such as shared decision making and self-management support were incorporated in these studies. Shared decision making directed the duration, length, type and intensity of the treatment. Self-management support was usually provided by biomarker testing feedback and routine assessment. Two out of six studies utilised specialty addiction treatment as the comparator. The remaining studies compared the care model against usual primary care with access to specialty addiction treatment resources. Control groups Control groups are described in detail in Table 1. These included usual primary care plus the possible addition of alcohol counselling [20], an education booklet [19], a 5-min structured brief intervention with a self-help booklet [22,23], provision of a number for outpatient treatment [25,28], specialty counselling or psychiatry [26], and annual behavioural health screening with integrated mental health services [27]. Addiction specialty treatment was the comparator model of care in three studies but may have been provided separately [21,24,29]. Quality appraisal Overall, the quality of the studies was mixed, with most trials having a moderate risk of bias for both engagement and drinking outcome measures (see Table 3). More specifically, the majority of studies had low risk of bias arising from the randomisation process, except for some risk of bias regarding cluster randomisation in Ettner et al. [20] and Upshur et al. [26]. With regard to bias due to deviations from the intended intervention in terms of assignment to intervention, the majority of studies had low risk of bias, although our appraisal yielded some-to-high risk for Oslin et al. [24] and high risk for Willenbring et al. [29]. All the studies were judged to have low risk of bias in terms of adhering to the intervention. Bias with regard to missing outcome data was observed in several studies, including Drummond et al. [22], Watkins et al. [25], Willenbring et al. [29] and Upshur et al. [26]. Bias with regard to measurement of outcome was observed to some degree in all the studies except Ettner et al. [20]. Half of the studies were judged to have some risk of bias regarding selection of reported results. Funding from the pharmaceutical industry was not apparent in 10 of the studies. In one of the studies, Watkins et al. [25], Alkermes provided long-acting injectable naltrexone at no charge to patients. None of the studies were blinded and all studies used self-reported measures of alcohol consumption. Effectiveness We aimed to evaluate the effectiveness of models of care in primary care settings in increasing treatment engagement and reducing alcohol consumption via meta-analytic synthesis.
However, due to the small number of studies, high heterogeneity between studies and large variations in outcome measures, meta-analysis was not feasible. We thus illustrated patterns using tables. Treatment engagement We tabulated treatment engagement outcomes with significant results (Table 4). There was high heterogeneity between studies in outcome measures for treatment engagement. The uptake of AUD medication was reported in 5 out of 6 studies that offered AUD medication. Three studies reported a significantly higher uptake of AUD medication in the intervention group. Reduction of alcohol consumption Clinical outcomes relating to alcohol consumption are presented in Table 5. Discussion In the current review we examined the evidence base supporting treatment of AUD in primary care settings, providing an overview of the models of care. The models of care were generally aligned to either lower intensity models of care, such as extended brief intervention and stepped care, or higher intensity care models that were often based on the principles of the collaborative care/chronic care model (CCM) [10,[31][32][33][34]. We were unable to extract sufficient data to conduct a meta-analysis due to variability in study measures and outcome data reporting. Nonetheless, we observed that the majority of care models improved treatment engagement of AUD patients, although the lower intensity models often did not report engagement outcomes. Significant reductions in alcohol consumption in patients treated in primary care settings relative to comparison groups were reported in less than half of the studies (two out of five lower intensity models; three out of six higher intensity models), with more than half (seven out of eleven studies) reporting significant reductions in any alcohol outcomes (e.g. heavy drinking or alcohol-related problems). Several methodological differences may explain the mixed findings with regard to alcohol outcomes, such as inconsistent treatment compliance, shorter treatment duration and inadequate training of staff and/or lack of fidelity measures for psychosocial techniques. In addition, the negative studies all reported similar reductions of alcohol consumption in both the intervention and control groups, which may indicate issues with study design regarding comparison groups. None of the studies were blinded; for example, in the study by Upshur et al., feedback on screening was provided to all participants, which may have served as a brief intervention, prompting physicians to commence AUD treatment [26] or prompting mild AUD patients [35] to reduce consumption [14,36]. Regarding higher intensity models of care, there were three studies that reported significant reductions in alcohol consumption (reduced heavy drinking days (HDD) or increased abstinence) relative to control [24,25,29,30]. These studies did not include participants with co-morbid SUD and, for those that did, the beneficial results were restricted to the AUD participants only [24,25,29,30]. In comparison, the higher intensity trials with null results included individuals with co-morbid SUD [26][27][28]. It is thus possible that the primary care model may be somewhat limited for patients with more complex needs, although studies of CCM for other conditions have reported effectiveness even in patients with high social needs and co-morbidity [37]. Higher intensity models also often included patients with higher drinking levels and engaged multiple healthcare professionals (e.g. psychologists, medical specialists, case managers).
While the current systematic review demonstrates that provision of AUD treatment can be implemented in primary care, there is a gap in the evidence base regarding our capacity to define which patients are suitable for AUD treatment in primary care and which interventions are effective. Finally, the issue of feasibility in terms of time constraints and resources, particularly for complex patients, should not be underestimated as a barrier to widespread adoption of AUD treatment in primary care. It is worth noting that our findings suggest pharmacotherapy can be simply and safely provided in the primary care setting, which may lead to increased uptake of and engagement with AUD treatment. There is thus potential for widespread benefit should primary care physicians adopt the responsibility for recognition, screening and prescribing. The provision of education regarding pharmacological treatment options could overcome some previously noted barriers, such as lack of knowledge about the available treatment possibilities and misconceptions about medication efficacy [38][39][40]. Future alcohol treatment research in primary care settings will require more consistent measures of relevant alcohol-related outcomes. Sustained abstinence and no heavy drinking days are the two potential AUD treatment outcome measures recommended by various bodies [41,42]. Sustained abstinence is arguably a 'gold standard' outcome but is infrequently achieved, and reliance on this measure may underestimate treatment effects. Reductions in the World Health Organization (WHO) risk drinking levels [43] have recently been proposed as an alternative primary outcome for all alcohol clinical trials [44], and these endpoints are suitable for primary care alcohol treatment research. Findings among both AUD treatment seekers and the general drinking population show that reductions in WHO risk drinking levels are associated with improvements in physical and mental health, such as liver disease, depression and anxiety [45][46][47]. We suggest that consistent reporting of WHO risk levels will facilitate cross-comparison of outcomes and also provide clinically significant measures of improvement, as outlined above. The use of objective markers of alcohol use to corroborate self-report may also serve to improve the consistency and quality of alcohol treatment research in primary care settings. One example is phosphatidylethanol (PEth), which is the new gold standard for reliable laboratory corroboration of alcohol consumption [48,49]. While likely to be less accessible in primary care settings at the current time, this may change in future years. Liver enzymes, particularly γ-glutamyltransferase (GGT), are highly relevant to the harms of AUD and also serve as an objective marker of recent consumption. These tests are readily available in primary care settings and are generally cheap and acceptable to most patients. Falling levels of aspartate transaminase (AST), alanine aminotransferase (ALT) and GGT are strongly correlated with alcohol consumption and associated with better health outcomes [50].
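For concreteness, the WHO risk drinking levels referred to above are bands of average grams of pure alcohol consumed per day. The sketch below uses the commonly cited thresholds (men: ≤40 low, 41-60 medium, 61-100 high, >100 very high; women at half those bands); treat the exact cut-offs as an assumption to be checked against the WHO source before any reuse.

# Classify average consumption (grams of pure alcohol/day) into WHO risk levels.
# Thresholds are the commonly cited bands; verify against the WHO definitions.
def who_risk_level(grams_per_day: float, female: bool) -> str:
    low, medium, high = (20, 40, 60) if female else (40, 60, 100)
    if grams_per_day <= low:
        return "low"
    if grams_per_day <= medium:
        return "medium"
    if grams_per_day <= high:
        return "high"
    return "very high"

print(who_risk_level(75, female=False))   # -> high
print(who_risk_level(75, female=True))    # -> very high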
Predictors of treatment engagement and response Patient characteristics such as alcohol severity and readiness to change may potentially predict suitability for alcohol treatment in a primary care setting. The Alcohol Use Disorders Identification Test-Consumption (AUDIT-C), which comprises the first 3 questions of the 10-item AUDIT, assesses alcohol consumption patterns in the past year, has been validated as a brief alcohol-screening test [51] and is widely recommended for use in primary care. While consumption obtained via the AUDIT-C is not always entirely accurate, with potential for underestimation of actual consumption [52], increasing scores are associated with increasing severity of alcohol-related problems in the past 12 months [53]. Higher readiness to change scores are associated with improved treatment engagement and alcohol use outcomes [54]. Thus, the potential for varying degrees of treatment seeking and ambivalence about treatment should be measured, given that patients in primary care may not be interested in receiving AUD treatment. There are several validated readiness to change measures, such as the Readiness to Change Questionnaire [55] and the Stages of Change Readiness and Treatment Eagerness Scale [56]. However, although relatively brief, these require dedicated data collection and consequently researcher input. Brief assessments and algorithms of readiness to change suitable for primary care also exist, with face validity and potentially good concurrent validity when compared with the longer Readiness to Change Questionnaire [57,58]. Method of data collection The emerging secondary use of electronic medical records (EMRs) for research purposes is occurring throughout the world [59]. As EMRs become more widely adopted in primary health care, research in these settings will improve. Information from primary care EMRs can be used to evaluate treatment outcomes and uptake as well as treatment fidelity, which would be particularly useful for evaluating psychosocial interventions (to the extent that these are recorded). EMRs can also be used to evaluate implementation facilitators and barriers, and potentially to assist in recruitment by earlier screening for alcohol problems [60]. Data linkage with repositories of primary care clinical data will significantly improve our capacity to evaluate treatment in these settings [61]. While these systems may already be utilised consistently in some countries, they are not in many regions. For example, in Australia, there are multiple EMR systems, which limits the use of primary health care data for research and for data linkage between health care settings [59]. Limitations One of the main limitations is the use of varied outcome measures across studies, which makes comparison of study findings difficult. In addition, it is important to note that the synthesis of non-inferiority trials comparing primary care management versus specialty care management may be complicated by potentially varying degrees of treatment seeking in the patients involved and ambivalence about the need for treatment. The majority of AUD treatment trials in addiction specialty care settings involve treatment-seeking individuals, whereas many patients in primary care may not be interested in receiving treatment for AUD. To this degree, in studies examining primary care versus specialty care in which there was comparable baseline contemplation or previous treatment history, similar or even improved alcohol outcomes were observed in the primary care group [e.g. 20].
This suggests that a null result in non-inferiority trials can be perceived as supporting the recommendation for implementation of AUD treatment into primary care, whereby the aim is to facilitate earlier uptake of treatment rather than to determine the more effective setting for treatment in comparable patients. Conclusion Models of care in primary care settings enhanced treatment uptake (psychosocial and/or pharmacotherapy), while the results for alcohol consumption were somewhat mixed. Our findings show that models of care in primary care settings have promise to be beneficial in the management of AUD in terms of engagement. More studies are required, with consistent outcome measures, in order to determine the effectiveness and cost-effectiveness of these models of care, to clarify the most appropriate components of the models and to determine which patients are most suitable. Appendix 1 Search strategy: MEDLINE search strategy (draft of at least one database). Appendix 2 Grey literature: full text read - 14; used in systematic review - nil (some of the titles that were found in the Google Scholar search were used in the systematic review, but these articles were also yielded by the database search).
2020-12-06T14:18:57.568Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "2377d43b7f35d982bf86337e4318a23c44bb4efb", "oa_license": "CCBY", "oa_url": "https://bmcfampract.biomedcentral.com/track/pdf/10.1186/s12875-020-01288-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2377d43b7f35d982bf86337e4318a23c44bb4efb", "s2fieldsofstudy": [ "Medicine", "Political Science", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
267245827
pes2o/s2orc
v3-fos-license
The Effects of Caustic Soda and Benzocaine on Directed Grooming to the Eyestalk in the Glass Prawn, Palaemon elegans, Are Consistent with the Idea of Pain in Decapods Simple Summary The possibility of pain occurring in animals is often accepted if various criteria are fulfilled. These criteria include prolonged grooming or rubbing at the site of a wound or tissue damage, or other behaviour involving the site of damage. We also expect to see a reduction in such activities if a local anaesthetic is applied. Here, we report on an experiment that applied caustic soda, a known irritant in humans, to one eyestalk of the glass prawn. This caused immediate escape responses and then nipping and picking at the treated eyestalk rather than at the untreated eyestalk. Prior application of a local anaesthetic reduced the amount of directed behaviour. However, the local anaesthetic also appeared to be an irritant as it too caused immediate escape responses and directed behaviour to the eyestalk. The results provide further support to the idea that these animals can experience pain. Abstract Acceptance of the possibility of pain in animals usually requires that various criteria are fulfilled. One such criterion is that a noxious stimulus or wound would elicit directed rubbing or grooming at the site of the stimulus. There is also an expectation that local anaesthetics would reduce these responses to damage. These expectations have been fulfilled in decapod crustaceans but there has been criticism of a lack of replication. Here, we report an experiment on the effects of a noxious chemical, sodium hydroxide, applied to one eyestalk of the glass prawn. This caused an immediate escape tail-flick response. It then caused nipping and picking with the chelipeds at the treated eyestalk but much less so at the alternative eyestalk. Prior treatment with benzocaine also caused an immediate tail-flick and directed behaviour, suggesting that this agent is aversive. Subsequently, however, it reduced the directed behaviour caused by caustic soda. We thus demonstrated responses that are consistent with the idea of pain in decapod crustaceans. Introduction Animals often encounter situations in which tissue damage occurs, and that damage might have major negative impacts on fitness [1,2]. However, the early evolution of nociceptors provided a means of detection of such damage and enabled animals to withdraw all or part of their body using a nociceptive reflex [3]. Nociception thus provides a system that should stop the damage in the short-term [4]. In some animals, a second system has evolved, which is called pain. Pain is defined by the International Association for the Study of Pain (IASP) as "An unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage" [5]. The unpleasant sensory and emotional experience must have some function beyond that of nociceptive reflexes, and it is generally accepted that it provides long-term protection by changing behaviour in ways that prevent further damage and promote recovery [2].
A key problem in the study of possible pain is that a reaction to a noxious stimulus might just be a nociceptive reflex [4]. However, it is widely accepted that humans, and at least some other vertebrates, experience pain following a nociceptive input [2]. Note, however, that absolute certainty about pain experience is not possible [4]. To consider that pain is a possibility requires various criteria to be fulfilled. Various lists of criteria have been constructed, some long [2] and some short [6]. Because the function of pain seems to involve a relatively long-term alteration of behaviour, it has been suggested that an emphasis be put on behaviour that is not easily explained by reflexes [7]. The more that criteria are fulfilled for a particular taxon, the more likely it is that members of the taxon experience pain. One behavioural criterion for pain is that it should elicit behaviour directed toward the site of the noxious stimulus, or protection of that site by guarding or limiting the use of that part of the body [2,6,7]. For example, we expect to see rubbing, licking, holding, guarding, or limping, which can be too complex and prolonged to be explained by reflexes, and these behaviours have been noted in a broad range of animals [2]. A second expectation of pain is that the reaction to the noxious stimulus be ameliorated by analgesics or local anaesthetics [2,6,7] and, again, these reactions have been noted in a range of animals [2]. Here, we examine these two criteria in a decapod crustacean, the glass prawn, Palaemon elegans. These criteria have received attention in P. elegans [8]. Brushing of caustic soda or acetic acid on one antenna caused immediate tail-flicking escape responses, which may be nociceptive reflexes, that were not noted when just sea water was applied. Further, there was a marked increase in grooming of the specific antenna and rubbing of that antenna against the side of the tank that was not apparent when sea water was applied. Prior brushing with a local anaesthetic was initially noxious, as suggested by immediate tail-flicking escape responses and, subsequently, more grooming of the antenna was observed. However, in animals in which caustic soda or acetic acid was applied after the local anaesthetic, there was a reduction in grooming and rubbing of the noxiously treated antenna compared to the animals that did not receive the local anaesthetic. These findings were viewed as being consistent with the idea of pain [8]. An attempt to replicate the study in three different species of decapods, however, found no effect of either caustic soda, hydrochloric acid or local anaesthetic on antennal grooming or tail-flick responses [9]. This led other authors to state that there is no reliable evidence for decapods being sensitive to extreme pH, and they criticised the lack of replication of this type of study [10]. This call for replication provided the impetus to report on an experiment conducted some years ago. This experiment used one eyestalk of a glass prawn as the site of application of caustic soda, to determine if this caused immediate tail-flick escape responses and/or behaviour directed to that eyestalk. We also investigated the effects of prior treatment with a local anaesthetic, benzocaine, on the behaviour of the glass prawn. Collection and Experimental Treatments
P. elegans were collected in hand nets from rock pools during low tide on the shore at Ballywalter, Co Down, Northern Ireland (OS; J 634708), between November 2006 and January 2007. They were immediately transported to Queen's University Belfast and housed in tanks containing aerated sea water, maintained between 11 °C and 13 °C on a 12 h light/12 h dark photoperiod regime, with seaweed (Fucus serratus) present in the tanks. Before each treatment, a prawn was removed using a small net and placed in a glass dish containing seawater, covered with a paper towel to prevent the animal escaping, and transferred to an adjacent observation room. The prawns were randomly assigned (by drawing tokens from a bag) to one of four experimental groups (n = 18 per group). Each animal was subject to two sequential treatments.

For the first treatment, the animal was placed into a clean dish containing paper towel dampened with seawater. Then, either seawater or 2% benzocaine solution was applied to a randomly chosen (by coin toss) eyestalk using a small brush. One application was carried out along the eyestalk to the tip, and a separate brush was used for each treatment and for each prawn. An immediate reaction to the treatment in the form of a tail-flick escape response was noted. The prawn was then placed in an observation tank (19.5 × 9 × 9 cm) containing fresh seawater (11-13 °C) to a depth of 3 cm. The observation tank was housed in an observation chamber behind a one-way mirror and the behaviour was recorded for 5 min. The activities recorded were (a) the time taken to first cross a marked line that divided the tank in half, (b) the number of times the prawn crossed the line, (c) the number of tail-flick movements, (d) the amount of time the animal spent grooming its treated eyestalk and (e) the amount of time spent grooming the untreated eyestalk. Grooming of the eyestalks consisted of the animal remaining stationary and using its chelipeds to nip and pick at its eyestalks.

For the second treatment, the prawn was removed from the observation tank and placed into a new, clean treatment dish containing a paper towel dampened with seawater. The same eyestalk was then either treated with seawater or 10% NaOH following a similar procedure as before. The prawn was then placed into the observation tank for another 5 min, and the same activities were recorded.

Statistical Methods First treatment: The occurrence of tail flicking immediately following the first treatment was determined using χ² contingency tests. The occurrence of tail flicking in the observation tank was determined using χ² contingency tests. Differences in general activity, as indicated by the time taken to cross the line and the number of line crosses, were ascertained using unpaired t-tests. The effects of water or benzocaine on the treated and untreated eyestalks were analysed using two-factor ANOVA (with factor 1 being water or anaesthetic, and factor 2 being the repeated measure of treated eyestalk or untreated eyestalk).
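As an illustration of the χ² contingency tests described above, the following sketch reproduces the first-treatment tail-flick comparison from the counts reported later in the Results (32/36 benzocaine-treated animals flicked vs. 0/36 seawater-treated). The paper does not state which software was used; this is simply a minimal example showing that a 2 × 2 test without Yates' correction returns the reported statistic:

    import numpy as np
    from scipy.stats import chi2_contingency

    # 2 x 2 table of immediate tail-flick responses (counts from the Results):
    # rows = first treatment (benzocaine, seawater); cols = (flicked, did not flick)
    observed = np.array([[32, 4],
                         [0, 36]])
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(chi2, dof, p)  # chi2 = 57.6 with df = 1, matching the reported value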
Second treatment: The occurrence of tail flicking immediately following the second treatment was determined using χ² contingency tests, in relation to the first and second treatments. Differences in the occurrence of tail flicking in the observation tank were determined using χ² contingency tests. Differences in general activity, as indicated by the time taken to cross the line and the number of line crosses, were ascertained using a two-factor ANOVA (with factor 1 being the first treatment with benzocaine or water, and factor 2 being the second treatment with NaOH or water). The effects on the treated and untreated eyestalks were further analysed using three-factor ANOVA, with treated and untreated eyestalk as repeated measures (factor 1: water or anaesthetic; factor 2: NaOH or water; factor 3: treated or untreated eye as a repeated measure). The grooming of just the treated eyestalk was also analysed using a two-factor ANOVA (factor 1: first treatment, factor 2: second treatment).

Ethical Considerations This experiment was conducted in 2006/2007, when there was little or no support for the idea of pain in decapod crustaceans. There were no legal restrictions on experiments on this group of animals. However, similar experiments which applied noxious stimuli to animals, published between 2008 and 2017 [4], provided evidence to suggest the idea of pain in decapods. Nevertheless, we kept the numbers of animals low in each experimental group (n = 18 per group) and we anticipated that only one group would be subject to an aversive experience. However, the data on benzocaine treatment subsequently suggested that three groups would have an aversive experience, which was more than expected. The results of similar experiments have changed the legal situation for decapods within the United Kingdom, which now recognises that these animals are sentient. Despite this, there has been no change in the UK legal requirements for research on these animals, and thus the experiment is fully compliant with current UK regulations. Nevertheless, we suggest that researchers should take the potential sentience of these animals into account when designing future studies rather than waiting for legal change, and refer to guidelines on the use of wild animals [11].

First Treatment: Effects of Seawater or Anaesthetic Significantly more animals flicked their tails upon application of anaesthetic compared with seawater (32/36 vs. 0/36; χ²(1) = 57.6; p < 0.0001). However, the time taken to first cross the line during the 5 min observation did not differ significantly between treatments (t(70) = −1.421, p = 0.1599) and there was no significant difference in general activity as indicated by the number of line crossings (t(70) = 1.3, p = 0.2). There was no significant difference in the occurrence of tail flicking during the 5 min observation between animals treated with water or anaesthetic (9/36 vs. 7/36; χ²(1) = 0.32; p = 0.57). For eyestalk grooming, there was a significant interaction effect between whether the eyestalk had been treated or not and the nature of that treatment (F(1,70) = 10.01, p < 0.01; Figure 1). This interaction was due to the high level of grooming of the treated eyestalk when the first treatment was benzocaine rather than water. Overall, there was a significant effect of first treatment (F(1,70) = 13.2, p < 0.001; Figure 1), with more grooming occurring when the first treatment was anaesthetic, and grooming was directed more towards the treated eyestalk than the untreated eyestalk (F(1,70) = 7.96, p < 0.01; Figure 1).
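The factorial ANOVAs reported here can be sketched in a similar way. The minimal example below uses statsmodels on hypothetical long-format data; note that it fits a plain between-subjects two-factor ANOVA for illustration only, whereas the design above treats eyestalk as a repeated measure within each animal, which would strictly require a repeated-measures or mixed model:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    # Hypothetical data: 72 grooming durations crossed over treatment and eyestalk
    df = pd.DataFrame({
        "treatment": np.repeat(["water", "benzocaine"], 36),
        "eyestalk": np.tile(["treated", "untreated"], 36),
        "grooming": rng.gamma(2.0, 5.0, 72),
    })
    # Analyse log(x + 1) durations, the scale used in Figures 1 and 2
    df["y"] = np.log1p(df["grooming"])
    model = ols("y ~ C(treatment) * C(eyestalk)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction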
Second Treatment: Effects of Seawater and NaOH Following First Treatment Significantly more animals flicked their tails upon application of sodium hydroxide compared with seawater (28/36 vs. 0/36; χ²(1) = 45.82; p < 0.001). However, first treatment (water vs. anaesthetic) did not significantly affect the occurrence of tail flicking upon application of the second treatment (13/36 vs. 15/36; χ²(1) = 0.234; p = 0.63). There was no significant effect of the first treatment on the time taken to first cross the line (F(1,68) = 0.49, p = 0.49). However, the second treatment did have a significant effect (F(1,68) = 5.68, p < 0.05), with prawns that underwent sodium hydroxide treatment taking longer to first cross the line. There was no interaction effect between first and second treatments (F(1,68) = 1.47, p = 0.23). The number of line crosses during the second session was not significantly affected by either the first treatment (F(1,68) = 0.03, p = 0.87) or the second treatment (F(1,68) = 0.64, p = 0.43) and there was no interaction (F(1,68) = 0.37, p = 0.55). There was no significant difference in the occurrence of tail flicking in the observation tank between animals first treated with water or anaesthetic (10/36 vs. 4/36; χ²(1) = 3.19, p = 0.07). Tail flicking did not differ depending on the nature of the second treatment (χ²(3) = 0.36, p = 0.55).
The three-way interaction between the first and second treatments and which eyestalk was groomed was not quite significant (F(1,68) = 3.75, p = 0.06, Figure 2). However, the two-way interaction between whether the eyestalk was treated or not and the nature of the second treatment was clearly significant (F(1,68) = 194.36, p < 0.001, Figure 2), showing that most grooming was directed towards the treated eyestalk when the second treatment was NaOH. There was a significant interaction effect between the first and second treatments (F(1,68) = 4.6, p < 0.05, Figure 2) because there was more grooming when the first treatment was water rather than benzocaine and the second treatment was sodium hydroxide rather than water.

When only grooming of the treated eyestalk was analysed, there was a significant effect of first treatment (F(1,68) = 8.72, p < 0.05), with more grooming occurring in the second session when the first treatment was seawater. The second treatment had a significant effect on grooming (F(1,68) = 604.63, p < 0.001), with most grooming following treatment with sodium hydroxide. Importantly, there was a significant interaction effect (F(1,68) = 8.72, p < 0.05) due to the high level of grooming of the eyestalk when the first treatment was water and the second treatment was sodium hydroxide, but there was less grooming if the first treatment was benzocaine.
Discussion Brushing the eyestalk with seawater was not noxious, as no animal showed a tail flick at that time. This is in marked contrast to the immediate response to being brushed with benzocaine, following which virtually all animals showed an immediate tail-flick response. Tail flicking is the key escape response in prawns, and, in water, the animal would be propelled backwards. This response to benzocaine was unexpected. However, there are reports that benzocaine is often prepared in an acid solution [12] and topical application may cause stinging or burning sensations in humans [13]. Note, however, that this response to benzocaine was not observed in experiments on three other decapod species [9].

When the animals were returned to the water, there was no effect of the first treatment on general activity, either in terms of time to first crossing a mid-line in the tank or total number of line crossings. Also, there was no effect of the local anaesthetic on tail flicking while the animals were in the water. However, behaviour was directed towards the treated eyestalk when that had been brushed with benzocaine; the animal used its chelipeds to pick and nip at the eyestalk (Figure 1). This demonstrates that the aversive effect of benzocaine persisted after the animal was returned to the seawater. The interaction term shows that the grooming was directed primarily, but not entirely, at the treated eyestalk. It is not clear why a minor amount of grooming was directed to the untreated eyestalk. The eyestalks are on either side of the head and sufficiently separated to ensure there was no accidental transfer of benzocaine from one side to the other. Thus, it is the behaviour that appears to be misdirected, albeit in very small amounts.

During the application of the second treatment, only animals receiving sodium hydroxide showed tail-flicking escape responses; however, pretreatment with benzocaine failed to ameliorate this response. When placed in the water, those animals that had sodium hydroxide on an eyestalk took longer to initiate movement, but again, this measure was not affected by prior treatment with benzocaine. A previous experiment on Palaemonetes sp. found that sodium hydroxide applied to an antenna resulted in a longer time passing before locomotion was initiated, but that study considered the finding to be a false positive as it was not noted in two other species [9]. That it was also found in the present study, however, suggests that it is a true effect. Overall movement and tail flicking while in the water, however, were not affected by any treatment. Nevertheless, there was a clear indication of sodium hydroxide being aversive because there was significantly more grooming of the eyestalk when NaOH was applied as the second treatment. Further, the grooming was directed significantly more at the treated than the untreated eyestalk. There was some amelioration of the response to sodium hydroxide by pretreatment with benzocaine, but, although statistically significant, the effect was not large. Thus, there is some doubt as to the effectiveness of benzocaine as a local anaesthetic when applied to the eye. It is possible that it does not fully block nerve transmission within 5 min of application. However, the lack of eyestalk grooming during the second observation period in animals pretreated with benzocaine and secondly with seawater indicates that the aversive nature of benzocaine is short-lived. The second observation period occurred about 6-11 min after the application of benzocaine. Future experiments on benzocaine or other local anaesthetics should test over different durations to establish the timing of aversive and anaesthetic properties [14].

This experiment shows that glass prawns respond to potentially noxious chemical stimuli both with immediate tail-flicking, which presumably is mediated by a nociceptive reflex [8], and with later, more complex and prolonged behaviour directed to the site of application. Sometimes the animals use one cheliped to do this, but at times two are used simultaneously. In this case, the movements of the two chelipeds are different and the joints bend in different ways to reach the treated eye. These activities appear to have a major protective function. The eyes are vital for the fitness of glass prawns, and visual stimuli in the wild readily elicit escape or feeding (pers. obs.). The eyes are situated at the distal end of flexible eyestalks, and these eyestalks have important endocrine functions [15]; virtually all aspects of crustacean physiology are affected by eyestalk removal [16]. The regenerative capability of crustaceans does not extend to the eyestalks, suggesting that regeneration of the central nervous system may not be possible [17]. Thus, the ability to detect noxious stimuli that might damage either the eye and/or the stalk would be beneficial to the animal. Hence, the tail-flicking escape responses seen when noxious, potentially damaging chemicals are applied, and the nipping and picking at the eye and stalk, appear to maintain the integrity of this vital organ complex.

The present findings are similar to those shown by Barr et al. [8] when antennae were subject to different treatments. However, Diggles et al. [10] expressed deep concern that the previous experiment had not been replicated and expressed surprise that directed grooming was included in the review by Birch et al. [18]. We note, however, that Diggles et al. [10] ignored other reports of chemicals causing extreme responses in various decapod species, despite some of these being reviewed in [19]. For example, shore crabs (Carcinus maenas) used their claws to scratch at their mouthparts if they had been brushed with acetic acid [20]. Acetic acid brushed on an eye also caused that eye to be held down for longer than a control-treated eye [20]. Further, shore crabs (Hemigrapsus sanguineus) reduced their use of a claw if it was injected with formalin and often pressed that claw against the carapace [21]. Crabs also shook and rubbed the injected claw. Some of these animals subsequently autotomised a claw injected with formalin [21] and similar autotomy occurred if the base of a walking leg of C. maenas was injected with acetic acid [22]. We have also seen cases of autotomy due to high temperature [23], damage to a flexible distal joint on a leg [24], and electric shock to a basal joint of a walking leg [25]. A further example of behaviour being directed to the site of a noxious stimulus is seen in hermit crabs that groomed their abdomen if an electric shock was applied [26]. Also, brown crabs (Cancer pagurus) that had a cheliped twisted off, as in fishery practice, held their remaining claw over the wound during competitive interactions [27].
These observations of behaviour directed to a specific site are consistent with the idea of pain [4,6]. Similar responses to injection of acetic acid have been reported for octopuses [28] and fish [29]. Mammals also direct their behaviour to the site of noxious treatment [30][31][32]. However, not all treatments induce the same responses in crustaceans and mammals. For example, capsaicin induces pain and much directed grooming and rubbing in mammals [33] but it has no effect on decapods [20,34]. This is likely due to differences in nociceptor channels seen between taxa [3].

Experiments that examine the possibility of pain in decapods are important for understanding animal welfare. Decapods are fished in the wild, and reared in aquaculture systems, in vast numbers for human consumption, and, until recently, their treatment has not been shaped by welfare concerns [35,36]. The traditional view is that these animals respond to noxious stimuli only by reflex reactions and thus they have no capacity for pain or suffering [19]. However, various experimental approaches have questioned this view because behavioural, physiological and morphological criteria have been fulfilled and thus are consistent with the idea of decapods feeling pain [2,4,6]. Thus, apart from the directed behaviour towards the site of damage, and the modulation of responses by local anaesthetics noted here, we see rapid avoidance learning [37], anxiety [38], trade-offs with other requirements [39,40] and long-term shifts in motivation [26]. These animals will also give up highly valuable resources to escape high-, but not low-, intensity noxious stimuli [41]. None of these are easily explained by nociceptive reflexes [4]. Nociceptors have been identified [34], as have suitable brain organisations [42] and physiological changes that indicate stress following noxious stimuli [24,43]. These aspects and many others were reviewed and presented to the UK government and accepted as being sufficient evidence to declare that decapods were sentient [18].

Conclusions We show that caustic soda is a noxious stimulus when applied to an eye and it elicits prolonged, directed grooming towards that eye. Benzocaine is also noxious when first applied, but it subsequently reduces directed grooming elicited by caustic soda. These results further fulfil two key criteria for pain.

Whilst we accept that there can be no absolute proof of pain in decapods, the evidence that is consistent with sentience is sufficiently widespread within the taxon and of sufficient quality and variety that it is unreasonable to conclude that sentience is not possible [6,44]. Given that sentience is a possibility, protection for this group should be afforded, particularly in the food industry, in which billions of these animals are subject to extreme treatments, as well as in research [18,35].

Figure 1. Mean (±SE) total duration (s) (log(x + 1)) of grooming of the treated and untreated eyestalk in the first observation. The low scores for water treatment groups are zero and the small scores shown here were created so those experimental groups may be seen in the figure.

Figure 2. Mean (±SE) total duration (s) (log(x + 1)) of grooming of the treated and untreated eyestalks in the second observation. The low scores for the four water treatment groups are zero and the small scores shown here were created so those experimental groups may be seen in the figure.
2024-01-26T16:04:07.741Z
2024-01-23T00:00:00.000
{ "year": 2024, "sha1": "de552ba822bdde74ba82b10e4409e05c3f19c4c1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/14/3/364/pdf?version=1706079006", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c48c20c3e05e0dfa1c0394cae007071c9ed8c6e9", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
5829376
pes2o/s2orc
v3-fos-license
Facile synthesis of SrCO3 nanostructures in methanol/water solution without additives

Highly dispersive strontium carbonate (SrCO3) nanostructures with uniform dumbbell, ellipsoid, and rod-like morphologies were synthesized in methanol solution without any additives. These SrCO3 samples were characterized by X-ray diffraction, field emission scanning electron microscopy, and N2 adsorption-desorption. The results showed that the reaction temperature and the methanol/water ratio had important effects on the morphologies of the SrCO3 particles. The dumbbell-like SrCO3 exhibited a Brunauer-Emmett-Teller surface area of 14.9 m2 g−1 and an average pore size of about 32 nm with a narrow pore size distribution. The formation mechanism of the SrCO3 crystals is preliminarily presented.

Background Recently, nanomaterials with different morphologies have attracted great attention for their promising applications as optical materials, efficient catalysts, drug-delivery carriers [1][2][3], etc. Strontium carbonate (SrCO3) is one of the important reagents used in firework, pigment, and electronics manufacturing [4]. There are two main usages of SrCO3: the production of cathode ray tubes, and of ferrite magnets for small direct-current motors [5]. However, SrCO3 with different morphologies may have different potential uses. For example, SrCO3 with a needle-like crystal habit is used in optical polymers to reduce birefringence [6], and a sphere-shaped crystal with a diameter of less than 1 μm is favorable for high-temperature electric components. So far, SrCO3 with various morphologies such as hierarchical branches, hexagonal prisms, straw-like, pancake, ellipsoid, needle, flower ribbon, bundle, dumbbell, sphere, and rod-like has been reported [5,7-13].

Various methods have been reported for the preparation of SrCO3 nanostructures, including hydrothermal [14], microwave-assisted [9], and microemulsion-mediated solvothermal methods [5]. Although nanoscale SrCO3 with special morphologies was obtained, the preparation processes were complex and demanding: high pressure, high temperature, and tedious synthetic procedures as well as high cost were required. Aside from that, large-scale synthesis of SrCO3 nanostructures still remains a considerable challenge. In this work, a new facile way is reported to synthesize SrCO3 nanostructures by continuously carbonating Sr(OH)2 with CO2 in methanol/water solution without additives. The effects of the reaction temperature and the methanol/water (m/w) molar ratio on the morphology evolution were investigated. This method is simple, low-cost, and easy to control in producing large-scale monodisperse SrCO3 nanostructures.

Methods Sr(OH)2·8H2O was purchased from Alfa Aesar (Ward Hill, MA, USA). Other reagents were purchased from Beijing Reagents Co., Ltd. (Beijing, China). All reagents used in our experiments were of analytical grade. Experiments were carried out in a 150-ml reactor with a refluxing system. The reaction temperature was controlled using a thermostatic bath, as shown in Figure 1. In a typical experiment, several grams of Sr(OH)2·8H2O were dissolved into the methanol/water solution and kept at room temperature for 24 h. The concentration of the solution was kept at 0.05 mol l−1 for all the experiments. The solution was put into the reactor and stirred using a propeller agitator at a speed of about 800 rpm.
Mixed CO2 at 100 ml min−1 (80 ml min−1 N2 + 20 ml min−1 CO2) was introduced into the reactor for 30 min. Then, the gas was cut off and the solution was continuously agitated for another 2 h. Finally, the solution was naturally cooled to room temperature. The products were separated by centrifugation and washed with deionized water and ethanol alternately three times. The obtained carbonate samples were dried at 60 °C for 24 h.

The products were characterized by field emission scanning electron microscopy (FESEM; JSM-6700F, JEOL, Akishima-shi, Japan) and X-ray diffraction (XRD; X'pert PRO MPD, PANalytical B.V., Almelo, The Netherlands); patterns of the carbonate were recorded on a diffractometer (using Cu Kα radiation; λ = 0.154 nm) operating at 40 kV/30 mA. A scanning rate of 0.2° s−1 was applied to record the patterns. The N2 adsorption-desorption isotherms were measured at 77 K using an automated surface area and pore size analyzer (QUADRASORB SI-MP, Quantachrome Instruments, Boynton Beach, FL, USA).

Results and discussion The XRD patterns in Figure 2 confirm that SrCO3 was obtained by carbonating Sr(OH)2 with CO2 in the methanol/water system. All peaks in these patterns can be indexed to the orthorhombic phase (JCPDS No. 84-0418) with lattice constants a = 5.107 Å, b = 8.414 Å, c = 6.029 Å; α = β = γ = 90°. The XRD patterns of the as-synthesized products obtained under different conditions all match this phase.

The morphologies of the products characterized by FESEM are shown in Figures 3 and 4. Figure 3 shows the morphology evolution of the products obtained at different temperatures in pure methanol. All these products are highly monodisperse and uniform. Ellipsoidal particles with a long axis of 350 nm and a short axis of 180 nm were obtained at a reaction temperature of 70 °C (Figure 3A). As the temperature was decreased to 60 °C, the products were rod-like with a diameter of 110 nm and a length of about 250 nm; however, some particles showed a tendency to swell, turning dumbbell-like (Figure 3B inset). Finally, at 50 °C, uniform dumbbell-like particles with a handle diameter of 160 nm and a top diameter of about 200 nm were observed, and the length of the particles is about 340 nm (Figure 3C). It is obvious that these dumbbell-like particles are constructed from small nanocrystallites with a diameter of about 20 nm (Figure 3C inset). The particles seem to have a mesoporous structure, which will be confirmed by the results of the N2 adsorption-desorption measurement later.

In order to investigate the effect of the m/w ratio on the morphology of SrCO3, the crystals were prepared at m/w ratios of 3:2, 1:1, and 2:3. Figure 4 shows the morphologies of the products from the different m/w ratios. It is interesting that rod-like SrCO3 with different length-diameter ratios was observed in most cases. The results indicate that the m/w ratio has a great effect on the morphology of the products. When the ratio of m/w is 3:2 (Figure 4A), irregular plate-like products with a few short rods were observed; the rods had a diameter of 90 nm and a length of 600 nm. As the m/w ratio was changed to 1:1, monodisperse rod-like products with a diameter of 120 nm and a length of 1.2 μm were observed (Figure 4B). When the m/w ratio is 2:3, the morphologies (Figure 4C) of the products were similar to those obtained at the m/w ratio of 1:1, and it seems that the crystallinity of the products is better than that in Figure 4B.
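The indexing to the orthorhombic cell can be checked directly: for an orthorhombic lattice, 1/d² = h²/a² + k²/b² + l²/c², and Bragg's law converts each d-spacing to a 2θ angle for the Cu Kα wavelength given in the Methods. The sketch below uses the lattice constants quoted above; the (hkl) indices listed are illustrative choices, not taken from the paper:

    import numpy as np

    # Lattice constants of orthorhombic SrCO3 reported above (Angstrom)
    a, b, c = 5.107, 8.414, 6.029
    wavelength = 1.54  # Cu K-alpha, Angstrom (0.154 nm in the Methods)

    def d_spacing(h, k, l):
        # Orthorhombic lattice: 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2
        return 1.0 / np.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

    for hkl in [(1, 1, 1), (0, 2, 1), (0, 0, 2), (1, 3, 0)]:
        d = d_spacing(*hkl)
        two_theta = 2 * np.degrees(np.arcsin(wavelength / (2 * d)))
        print(hkl, round(d, 3), round(two_theta, 2))  # (111) lands near 25.2 degrees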
The high porosity of the dumbbell-like SrCO3 products was confirmed by measurement of the Brunauer-Emmett-Teller (BET) surface area and the N2 adsorption-desorption isotherms (Figure 5). The BET specific surface area is 14.9 m2 g−1, which is smaller than those of mesoporous SrCO3 [15] and hierarchical mesoporous SrCO3 submicron spheres [16]. The reason can be ascribed to the relatively larger diameter of the constituent particles and the larger total pore volume of 0.18 cm3 g−1. According to IUPAC recommendations [17], the observed hysteresis of the dumbbell-like product was characteristic of a type III isotherm with a type H3 hysteresis loop at P/P0 > 0.8. This means that mesopores in the size range of 12 to 32 nm were present (Figure 5 inset). Moreover, the observed hysteresis loop near P/P0 ≈ 1 suggests that pores >50 nm were also present [18]. This may be explained by the mesopores of 12 to 32 nm arising from the auto-assembled stacks of uniform nanospheres, while the large pores (>50 nm) are attributed to the aggregation of the dumbbell-like particles. It has been reported that materials with a mesoporous structure possess higher chemical reactivity due to their higher mass-transport performance [19].

Although the exact formation mechanism of these morphologies of the SrCO3 crystal is still not clear at present, our results show that the reaction temperature and the m/w ratio have great effects on the morphology. That the m/w ratio, or the polarity of the mixed solvent, has a great effect on the morphology has been reported by Lou et al. [17] and Zhang et al. [20]. Studies have shown that alcohols can affect the dielectric constant of the medium and change the crystal growth rate [21]. When pure methanol was present, the -OH of methanol might adsorb onto the nuclei of the crystals, changing their surface energy and hence the morphology of the products. As the temperature was increased, the vibration of the -OH groups in methanol became more rapid and the adsorption effects were weakened [22]. Thus, the morphology of the product changed little at the relatively high temperatures of 60 °C and 70 °C (Figure 3A,B). However, when the methanol/water solution was present, the hydrogen bonds formed between methanol and water prevented adsorption onto the nuclei, and rod-like products growing along the c-axis were obtained.

Conclusions Highly dispersive SrCO3 nanostructures with unique ellipsoid, dumbbell, and rod-like morphologies were successfully synthesized by a facile route in pure methanol or methanol/water solution without additives. The morphology of the SrCO3 nanostructures can be controlled flexibly by adjusting the reaction temperature and the m/w ratio. The N2 adsorption-desorption results reveal that the dumbbell-like SrCO3 has a mesoporous structure. It is expected that these SrCO3 nanostructures can be used in photocatalysis and electronics manufacturing in the future.
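For reference, a BET surface area such as the 14.9 m2 g−1 quoted above is obtained by linearizing the adsorption branch in the low relative-pressure region (roughly P/P0 = 0.05-0.30). A minimal sketch of that calculation, using hypothetical isotherm points rather than the paper's data:

    import numpy as np

    # Hypothetical N2 adsorption points: relative pressure, volume adsorbed (cm3/g STP)
    p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
    v_ads = np.array([2.8, 3.1, 3.3, 3.5, 3.7, 3.9])

    # BET transform: p / (v * (1 - p)) is linear in p over this pressure range
    y = p_rel / (v_ads * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)
    vm = 1.0 / (slope + intercept)  # monolayer capacity, cm3/g STP

    # 1 cm3 STP of N2 covers ~4.35 m2 (N_A * 0.162 nm2 per molecule / 22414 cm3/mol)
    s_bet = 4.35 * vm
    print(round(s_bet, 1), "m2/g")  # same order of magnitude as the value reported here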
2018-04-03T03:29:18.380Z
2012-06-15T00:00:00.000
{ "year": 2012, "sha1": "c0e71c419002e2a7bc7f2a96edd978fc2e139803", "oa_license": "CCBY", "oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-7-305", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c705ac10d65a91af01dfa284e6f6052107a7594", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
255257472
pes2o/s2orc
v3-fos-license
Effectiveness of Flipped Classroom Model on Mathematics Achievement at the University Level: A Meta-Analysis Study

Many studies have been conducted on the differences in the effectiveness of the flipped classroom model compared to traditional learning models on mathematics learning achievement. However, the results of previous studies have been inconsistent. Therefore, this study aims to test the effectiveness of the flipped classroom model on mathematics learning achievement when compared to traditional learning models. For this purpose, the study design used was a contrast group meta-analysis. The studies analyzed were 20 independent studies from 13 main studies published in Scopus-indexed journals. Data analysis used JASP software version 0.16.4. The results of the analysis showed that the combined effect size using random-effect estimation was (Cohen's d = 0.494; p < 0.001). This effect size belongs to the medium effect category. These results prove that students' mathematics learning achievement using the flipped classroom model is more effective than with the traditional learning model. The results of this study suggest that the effect size differences from previous studies became clear after the meta-analysis, namely the moderate effect category. These results are also expected to be the basis for policymaking in improving the quality of mathematics learning.

INTRODUCTION Mathematics has contributed to the development of science, such as in the fields of language, religion, culture, and social justice (Larnell et al., 2016; Ishartono et al., 2019; Habibi & Prahmana), as well as technical fields such as engineering, architecture, and economics (Mensik, 2015; March & Steadman, 2020). In learning mathematics, understanding concepts plays a role in achieving learning objectives (Kilpatrick et al., 2001; Setyaningrum, 2018). Educators must also realize that each student needs a different amount of time to understand mathematical concepts as a whole. Students with high academic abilities need less time to understand concepts than students with lower academic abilities (Prayitno et al., 2022). With current technological advances, educators must be able to create learning strategies that can facilitate the diversity of student learning needs (Setiawan et al., 2022). For this problem, the flipped classroom model can be used as an alternative solution.

The flipped classroom learning model reverses the traditional learning model. Before studying in class, students first study the material by watching videos or other teaching materials and then continue to do homework (Bergmann & Sams, 2012; Network, 2014; Ramakrishnan & Priya, 2016; Shih & Huang, 2020). Sessions in class can then be devoted to active learning: for example, practicing what they have learned during sessions outside the classroom, collaborative group work, discussion, problem-solving, and working on projects with instructor feedback and guidance (Mok, 2014; Huang & Hong, 2016). In class activity sessions, educators act as student facilitators (Bergmann & Sams, 2012). The varied speed and learning needs of students can be facilitated with the flipped classroom model (Bergmann & Sams, 2012; Prescott et al., 2018; Angelona et al., 2020).
Research on flipped classrooms has become a familiar topic in various fields of study, including mathematics (Lo & Hew, 2017). Research by Hung et al. (2018) in a middle school in China shows that the flipped classroom learning model with MOOCs can improve student motivation and learning outcomes. The results of research by Bhagat et al. (2016) also reveal that flipped classrooms can increase students' motivation and achievement in learning mathematics in Taiwan. Furthermore, the results of Lo & Hew's (2018) research reveal that flipped classrooms with the help of Moodle can improve students' mathematics learning achievement. Flipped classrooms have also been identified as having a better effect than traditional learning models at the university level (Emily, 2015; Daniel, 2015; Sergis et al., 2017; Li et al., 2017).

Although mathematics learning achievement using the flipped classroom model was identified as better than with the traditional learning model at the university level, different results were found by Files (2016), Clark (2015), and Briggs (2014). The results of their research show that mathematics learning achievement at the university level between the flipped classroom model and traditional learning is not significantly different. Inconsistent research results on the same topic can, of course, lead to ambiguous conclusions. On the other hand, mathematics teachers want accurate and mutually supportive information to consider in improving the quality of mathematics learning. A meta-analysis needs to be done because no single study is entirely free from error, even though researchers try to minimize errors in their research. The meta-analysis study aimed to find the effect size. Effect size is a quantitative index used to summarize study results in a meta-analysis. This means that the effect size reflects the magnitude of the influence, the magnitude of the difference, and the relationship of a variable with other variables (Schmidt & Hunter, 2004; Retnawati et al., 2018).

So far, a meta-analysis study on the effectiveness of flipped classrooms on mathematics learning achievement has been conducted by Yakar (2021) in Turkey. However, that meta-analysis was limited to the elementary school level. Based on the literature review that we examined, there has been no meta-analysis research on the effectiveness of the flipped classroom model on mathematics learning achievement at the university level. This study attempts to measure the effectiveness of the flipped classroom model on mathematics learning achievement at the university level when compared to traditional learning models. The results of this meta-analysis can provide clear conclusions from the inconsistent results of previous studies so that they can be used as a basis for policy-making in improving the quality of mathematics learning at the university level.

METHOD Search and Screening Literature The search for primary studies that match the inclusion criteria used several databases: Education Resources Information Center (ERIC), Google Scholar, Directory of Open Access Journals (DOAJ), Springer publishing, AIP Proceedings, IOP Sciences, and Elsevier. The keywords used in the search for primary studies were "Flipped Classroom" AND "Mathematics".
Based on the initial search using the databases and keywords above, 218 preliminary studies were found. The initial studies found were then screened using the following inclusion criteria: 1) articles published from 2015 to 2021; 2) articles must be indexed by Scopus; 3) experimental research related to the flipped classroom and mathematics achievement; 4) a minimum research sample of 15 people; 5) articles are required to report the mean, number of samples, and standard deviation of the control and experimental classes.

Based on the search results matching the specified inclusion criteria, independent studies (k = 20) from 13 main studies were found for further evaluation. The final data collected were then coded and extracted in Microsoft Excel for further analysis. Table 1 describes information on the primary studies, which have been published by various Scopus-indexed journals.

Data analysis Data analysis in this meta-analysis used JASP software version 0.16.4. The meta-analysis followed these steps: 1) calculating the effect size of each study; 2) conducting a heterogeneity test; 3) calculating the combined effect size; 4) evaluating publication bias. Classification of the effect sizes of each study and the combined effect refers to the classification of Cohen et al. (2018), which is shown in Table 2.

As described previously, the main objective of this meta-analysis study was to calculate the combined effect size. The combined effect size was calculated after the heterogeneity test, the aim being to select an appropriate effect size estimation model. If the heterogeneity assumption is met, then the random effects model is used to estimate the combined effect size; if the heterogeneity assumption is not met (homogeneous data), then the fixed effects model is used. Furthermore, to ensure that the research included in the meta-analysis shows objective results, an assessment of publication bias is carried out (Retnawati et al., 2018; Setiawan et al., 2022; Muhtadi et al., 2022). Evaluation of publication bias in this study used the Fail-Safe N (FSN) approach.

FINDINGS The effect size of each study The first step is to calculate the effect size of each study. Table 3 presents a summary of effect sizes, variances, and standard error values for each study, calculated using the JASP software. The effect sizes of the 20 studies ranged from 0.036 to 1.18. This shows that the effect sizes range from having no effect to having a large effect. There are four effect sizes (n = 4) in the no-effect category, eight effect sizes (n = 8) categorized as small effects, five effect sizes (n = 5) categorized as moderate effects, and three effect sizes (n = 3) categorized as large effects. The heterogeneity test (see Table 4) obtained a p-value < 0.001. These results indicate that the data variance between the primary studies is heterogeneous. Thus, the estimation model used to calculate the combined effect size is the random effects model.
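The analysis steps listed above can be made concrete with a short sketch. The per-study values below are hypothetical stand-ins (the actual study summaries are in Table 3); the code computes Cohen's d with its sampling variance from group means, SDs and sample sizes, then pools the effects with DerSimonian-Laird random-effects weights, one standard way that meta-analysis software such as JASP estimates a random effects model:

    import numpy as np

    def cohens_d(m1, s1, n1, m2, s2, n2):
        # Standardized mean difference (experimental vs control) with pooled SD
        sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp
        var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # sampling variance of d
        return d, var

    # Hypothetical (mean, SD, n) triples: (experimental, control) per study
    studies = [((78, 10, 40), (72, 11, 38)),
               ((81, 9, 30), (80, 12, 31)),
               ((75, 8, 25), (68, 9, 27))]
    d = np.array([cohens_d(*e, *c)[0] for e, c in studies])
    v = np.array([cohens_d(*e, *c)[1] for e, c in studies])

    # DerSimonian-Laird estimate of between-study variance (tau^2)
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)          # heterogeneity statistic
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / C)

    # Random-effects pooled estimate and its standard error
    w_re = 1.0 / (v + tau2)
    d_pooled = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(round(d_pooled, 3), round(se, 3))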
Overall Effect Size The calculation of the combined effect size aims to answer the research question of this meta-analysis: how large is the effect size of using the flipped classroom model, compared to the traditional model, on mathematics learning achievement? The combined effect size was calculated using a random effects approach. A summary of the combined effect size estimation is presented in Table 5. Based on the analysis using JASP 0.16.4 software (see Table 5), the combined effect size was (Cohen's d = 0.494; p < 0.05), with a standard error of (SE = 0.075). This effect size belongs to the moderate effect category. These results indicate that the use of the flipped classroom model has a moderate effect on student achievement when compared to the traditional learning model.

Evaluation of Publication Bias Meta-analytical studies that reflect objectivity and are scientifically justified can be assessed by an evaluation of publication bias. In this study, the Fail-Safe N (FSN) approach was used to evaluate publication bias. Table 6 presents a summary of the publication bias test. The analysis obtained an FSN value of 921. This value is greater than 5k + 10 = 110. Thus it can be concluded that this meta-analysis study does not have publication bias problems.

DISCUSSION The results of this meta-analysis indicate that student achievement with the flipped classroom model is more effective than with the traditional learning model. The combined effect size value was (Cohen's d = 0.494; p < 0.05). This effect size belongs to the medium effect category. This finding is in line with the meta-analysis research conducted by Strelan et al. (2019). The results of their study showed that student achievement using the flipped classroom model was more effective than with the traditional learning model, but the effect size was in the small effect category (d = 0.35). Another fact found in their research is that the effect sizes found in other fields of study fall into strong categories, such as the humanities (d = 0.98; k = 34). These findings suggest that the flipped classroom effect may not be as strong in disciplines with highly structured materials such as mathematics. The use of the flipped classroom has a different effect across scientific disciplines; of course, this becomes a basis for further research. Thus, there is room for future researchers to expand our understanding of the flipped classroom effect in various disciplines. A similar meta-analysis was also carried out by Algarni (2018); the results of his research show that the flipped classroom is effective in learning mathematics, but the effect size found is in the small category. Therefore, it can be concluded that the use of the flipped classroom makes a positive contribution to learning mathematics. The findings of this meta-analysis reinforce previous research examining the effectiveness of the flipped classroom model on student mathematics achievement at the university level.
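The Fail-Safe N check reported above can also be sketched. Rosenthal's version asks how many unpublished null studies would be needed to raise the combined one-tailed p-value above 0.05; the p-values used below are hypothetical placeholders for the 20 studies analyzed:

    from scipy.stats import norm

    def rosenthal_fsn(p_values, alpha=0.05):
        # Rosenthal's fail-safe N: number of hidden null studies needed to
        # push the combined one-tailed p-value above alpha
        z = [norm.isf(p) for p in p_values]  # one-tailed z for each study
        z_alpha = norm.isf(alpha)            # 1.645 for alpha = 0.05
        n_fs = (sum(z) ** 2) / z_alpha ** 2 - len(p_values)
        return max(0.0, n_fs)

    p_vals = [0.01, 0.03, 0.20, 0.004] * 5   # hypothetical, k = 20 studies
    print(round(rosenthal_fsn(p_vals)))      # compare against 5k + 10 = 110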
The flipped classroom model provides opportunities for students to be more independent in managing their learning (Ishartono et al., 2022). They can study outside the classroom with flexible time before in-class learning, working through material such as videos or other teaching materials at their own pace (Holton et al., 2016). The flipped classroom must be supported by active, student-centered learning activities. Teachers do not merely provide information; they help students become independent learners (Bergmann & Sams, 2012; Abeysekera & Dawson, 2015). Another benefit of the flipped classroom model is the positive perception of students. Student perceptions tend to be positive because students are facilitated to learn according to their learning style and pace through the various media used. Students also want to use the media for positive things and become independent learners. Student-centered learning can be carried out well with the flipped classroom model. However, further development is still needed to improve students' soft skills and 21st-century skills. Future research is expected to conduct a meta-analysis study relating flipped classrooms to the achievement of 21st-century skills.

Anderson et al. (2001) and Bergmann (2012) revealed the elements of the flipped classroom model, including: 1) providing opportunities for students to get first exposure before learning in class, where the mechanisms used to study at home can vary, from textbooks, modules, and video podcasts to other learning resources, and students can also complete assignments or quizzes; 2) providing more time for students to prepare material before participating in the learning process in class; 3) providing procedures for assessing students' understanding of the knowledge; 4) focusing activities in the classroom on higher-level cognitive activities, where student activities in class are discussions, data analysis, or synthesis activities. The key is that students use class time to deepen their understanding.

The flipped classroom model consists of three phases, namely the pre-class phase, the in-class phase, and the post-class phase (Roehling, 2018; Ishartono et al., 2022). In pre-class activities, students are given preliminary knowledge (schemas) before continuing with active learning in class. In addition to studying the material, students work on assignments. Lecturers give assignments related to topics, such as summarizing notes, answering questions, or playing quizzes using additional platforms. Furthermore, students are advised to note things that have not been understood; this list will be discussed later in class. This means that before studying in class, students already have an overview of the material to be taught. This situation will encourage students to be more prepared to carry out discussions and find solutions to the problems given under the guidance of the teacher (Bergmann & Sams, 2012; Lai & Hwang, 2016). The same point was also made by Ishartono et al. (2022), who revealed that giving assignments and teaching materials to students before studying in class can enrich students' prior knowledge so that they are better prepared when studying in class.
Preparations that must be done by the lecturer before the learning process include preparing learning media, in this case learning videos or other teaching materials that support independent learning activities, and then sending them to students one week before face-to-face learning in class. Lecturers also need to prepare study instructions that students must follow at home. Student activities at home include studying independently or in groups with the material provided by the lecturer. Students must understand the instructions regarding the activities to be carried out, and they record things that have not been understood and things they want to learn further regarding the material in the learning video, to be discussed further in class.

This research also has limitations. This study analyzed only 20 effect sizes from 13 primary studies, and the mathematics achievement measured is still general in nature. Future research can expand the sample and analyze mathematical achievement more specifically, for example: understanding of mathematical concepts, creative thinking, critical thinking, and others. In addition, further research can extend the analysis by examining moderator variables, so that other variables thought to influence the effect size can be found.

CONCLUSION The results of this meta-analysis show that the use of the flipped classroom model has a moderate effect on mathematics learning achievement compared to the traditional learning model. The meta-analysis also did not have publication bias problems, so the results of this study are objective and scientifically acceptable. The results provide a clear conclusion where previous findings on the relationship between the flipped classroom model and students' mathematics learning achievement at the university level had been confusing. In addition, the results of this study are expected to provide an overview of the effect of the flipped classroom model on student achievement, so that it can be used as a basis for policymaking in improving the quality of mathematics learning.
2022-12-30T16:06:24.272Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "cede91ebfc1ad6c221e684ebfd3bdb01fc9e17f2", "oa_license": null, "oa_url": "https://doi.org/10.29333/iji.2023.16143a", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "29322f8e28a73fdaae9a8d1ae410d85e5a41a5d6", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
202880232
pes2o/s2orc
v3-fos-license
SYNTHESIS AND BIOLOGICAL EVALUATION OF 3-((1-METHYL-1H-PYRROL-2-YL)METHYLENE)INDOLIN-2-ONE DERIVATIVES AS POTENT ANTICANCER ACTIVE AGENTS

A series of 3-((1-methyl-1H-pyrrol-2-yl)methylene)indolin-2-one derivatives were designed, synthesized, and evaluated for their inhibition activities against four tumor cell lines in vitro. These compounds were fully characterized by 1H NMR, 13C NMR, and HRMS.

INTRODUCTION Cancer is a serious threat to human life and health, with high mortality. Thus, developing anticancer drugs with high efficiency and minimal side effects remains a challenge in drug development. Indole, also known as benzopyrrole, is a fused bicyclic compound of benzene and pyrrole. Indole and its derivatives occur widely in plants, animals and microbial hormones. 1 Indole derivatives have attracted a great deal of interest because of their antibacterial, 2 antifungal, 3 anti-inflammatory, 4 antihistamine, 5 antioxidant, 6 anti-diabetes, 7 anti-virus, 8 anticholinesterase 9 and antitumor 10 activities. In view of the fine biological activity of the indole structure, many indole derivatives have been used in the field of medicine and in natural and synthetic products with potent bioactivity profiles. For instance, the anti-mitotic vinblastine, isolated from the Vinca plant, has been widely used to treat a variety of cancers, including Hodgkin's disease, non-Hodgkin's lymphoma, Kaposi's sarcoma, breast cancer and testicular cancer; 11 the small molecule kinase inhibitor Sunitinib, 12 developed by the American company Pfizer, was approved by the FDA in 2006; the clinical phase III experimental PDGFR and Kit oncoprotein oral inhibitor Motesanib 13 was developed by Takeda Bio Development Center Limited; and the VEGFR, PDGFR and FGFR 14 growth factor inhibitor Nintedanib was developed by Boehringer Ingelheim of Germany. [16][17][18]

N-H is a relatively active proton which can increase the polarity of the molecule and, to some extent, the water solubility of the drug because of the carbonyl group structure. When a molecule binds to a protein, -CONH- readily forms a hydrogen bond with a residue in the biomacromolecule to enhance the efficacy of the drug. There are currently many amide-containing drugs in clinical application: Imatinib, 19 a small molecule inhibitor for the treatment of chronic myelogenous leukemia; the small molecule tyrosine kinase inhibitor Dasatinib, 20 approved by the FDA in 2006; Nilotinib, 21 a second-generation tyrosine kinase inhibitor for the treatment of chronic leukemia; and the vascular endothelial growth factor receptor (VEGFR) kinase inhibitor Axitinib. 22 Many of the compounds containing the pyrrole ring have some pharmacological activity. In particular, lamellarin showed strong cytotoxicity in multidrug-resistant (MDR) tumor cells. Furthermore, lamellarin K effectively increased its anticancer effect on tumor cells by inhibiting the release of p-glycoprotein (Figure 1).

RESULTS AND DISCUSSION Chemistry. The synthetic route to the target compounds 7a-q is shown in Scheme 1. First, 1-methyl-1H-pyrrole-2-carbaldehyde (3) was obtained by Vilsmeier-Haack reaction and methylation of pyrrole (1). 24,25
Then the intermediates 4a-c 26 were prepared by heating a mixture of the indolin-2-one and 1-methyl-1H-pyrrole-2-carbaldehyde under reflux at 60 °C, using piperidine as a catalyst and anhydrous ethanol as the reaction solvent, to effect the condensation reaction. Next, the intermediates 6d-k 27 were obtained by amidation of chloroacetyl chloride with anilines substituted with various groups, with K2CO3 as an acid scavenger and dichloromethane as the solvent in an ice bath. The target compounds 7a-q were obtained by the nucleophilic substitution reaction of intermediates 4a-c with intermediates 6d-k at 80 °C, using DMF as the solvent and K2CO3 as the acid-binding agent. Reactions were monitored on thin layer chromatography (TLC) plates. The chemical structures of the synthesized compounds were elucidated on the basis of 1H NMR, 13C NMR, and HRMS. The characteristic hydrogen atoms of the target compounds were assigned in the 1H NMR spectra as follows: the singlet at δ 10-11 was the signal of the active hydrogen of CONH; the singlet at δ 4.5-5.0 was the signal of the two hydrogens of the methylene group connected to the N atom (NCH2CON); and the singlet at δ 3.6-3.9 was the signal of the three hydrogens of the NMe group on the pyrrole. The measured HRMS [M+H]+ values of the target compounds were consistent with the theoretical values within the tolerance (±0.0030). The detailed physical and analytical data are given in the experimental part.

The in vitro antitumor activity evaluation of the 17 newly synthesized compounds 7a-q (shown in Table 1) indicated that these compounds exhibited certain inhibitory activities against SMMC-7721, PC-3, A549 and K562. At a concentration of 10 μmol/L, compounds 7h, 7g, 7k, 7l, 7m, and 7p had relatively high inhibition rates against SMMC-7721, PC-3, A549 and K562 tumor cells. The IC50 values against the four cell lines were measured for these six compounds, and the results are shown in Table 2. The values were determined by the MTT method, with Sunitinib used as the positive drug for comparison. Clearly, compounds 7k and 7l (IC50 = 8.08 ± 0.95 μM and IC50 = 3.01 ± 0.61 μM, respectively) displayed higher antiproliferative activity than Sunitinib (IC50 = 8.27 ± 0.40 μM). According to the activity data and compound structures in Tables 1 and 2, preliminary SARs were proposed on the basis of the biological results. The type and position of the substituents on the indole and benzene rings had a certain effect on the inhibitory activity against tumor cells. Comparing compounds 7a to 7q, a chlorine atom as the 5-substituent on the indole was more beneficial for increasing the activity of the compound. Comparing compounds 7g to 7l, it was found that a benzene ring in the amide structure bearing an electron-donating group such as a methyl or methoxy group gave good antiproliferative activity. Comparing compound 7h with 7m, the results showed that the more electron-withdrawing substituents on the benzene ring, the worse the inhibitory activity against SMMC-7721.

Table 1. Inhibition against cancer cell lines of target compounds 7a-q at 10 μmol/L. a Average of three independent experiments. b Sunitinib was used as positive control.
General procedure for the synthesis of N-methylpyrrole-2-carbaldehyde (3): Step 1: Pyrrole (1) (20.0 g, 0.30 mol) was added dropwise to a stirred solution of POCl3 (50.00 g, 0.33 mol) and DMF (24.0 g, 0.33 mol) in dry Et2O (100 mL) at 0 °C. The mixture was stirred at room temperature overnight. After the reaction was over, the mixture was added to 10 volumes of a saturated aqueous NaHCO3 solution and extracted with EtOAc; the extracts were washed with brine and dried over Na2SO4, and the solvent was evaporated under reduced pressure. The resulting product was used directly in the next step (yield: 15.5 g, 55.4%).

Step 2: Pyrrole-2-carbaldehyde (10.00 g, 0.10 mol) was added to a suspension of dry NaH (5.0 g, 0.20 mol) in 300 mL DMF with stirring. After 30 min, MeI (16.4 g, 0.12 mol) was added and the mixture was stirred at room temperature for 30 min. After the reaction was completed, the mixture was cooled, poured into water and extracted with CH2Cl2. The organic layer was washed with brine, dried over Na2SO4 and evaporated to give N-methylpyrrole-2-carbaldehyde (yield: 10.2 g, 89.4%).

General procedure for the synthesis of compounds 4a-4c: In a 25 mL single-neck round-bottom flask, indolin-2-one (15.00 mmol), N-methylpyrrole-2-carbaldehyde (18.02 mmol) and 15 mL of EtOH were added sequentially. Then 2 drops of piperidine were added, and the reaction was continued for 6 h in an oil bath at 60 °C, with TLC monitoring. After the reaction, the brown solid that precipitated was filtered, washed twice with water and dried to give a crude product, which was recrystallized from THF to afford a brown solid (yield: 73.5%). The other intermediates 4a-4c were obtained by the same synthesis and purification method.

General procedure for the synthesis of compounds 6d-6k: In a 25 mL single-neck round-bottom flask, substituted aniline 5d-5k (5.9 mmol), potassium carbonate (11.81 mmol) and 15 mL CH2Cl2 were added in sequence, and chloroacetyl chloride (8.85 mmol) was slowly added dropwise with stirring in an ice bath. Stirring was continued for 1 h in the ice bath and then for 3 h at room temperature, with TLC monitoring. After the reaction was completed, the solvent was evaporated under reduced pressure; the residue was washed three times with water, and the solid was filtered, dried and recrystallized from EtOAc to give light purple fine needles (yield: 91.3%). The other intermediates 6d-6k were obtained by the same synthesis and purification method.

General procedure for the synthesis of compounds 7a-7q: Intermediates 6d-6k (2.23 mmol), cesium carbonate (3.34 mmol) and 5 mL DMF were added to a 25 mL one-neck round-bottom flask and stirred at room temperature for 2 h. Intermediates 4a-4c (2.68 mmol) and potassium iodide (0.1 mmol) were then added, and the mixture was placed in an oil bath at 80 °C for 6-12 h, with TLC monitoring. After the end of the reaction, the reaction mixture was poured into six times its volume of water, and dilute hydrochloric acid was added until the pH was between 3 and 4. The crude product was collected by suction filtration, washed several times with cold water and dried under an infrared lamp. The product was recrystallized from THF and petroleum ether (yield: 53.5%). The other target compounds 7a-7q were obtained by the same synthesis and purification method.
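As a quick arithmetic cross-check of the reported yields, the percent yield of each step of Scheme 1 can be recomputed from the quoted masses. The molar masses below are standard values (an assumption, not from the paper), and small deviations from the reported percentages may reflect purity corrections or rounding in the original work.

```python
# Percent-yield check for the two steps of Scheme 1; molar masses are
# standard values (assumption), quoted masses are from the procedures above.
M_ALDEHYDE = 95.10      # g/mol, pyrrole-2-carbaldehyde
M_N_METHYL = 109.13     # g/mol, N-methylpyrrole-2-carbaldehyde

# Step 1: 0.30 mol pyrrole -> pyrrole-2-carbaldehyde, 15.5 g isolated.
theoretical_1 = 0.30 * M_ALDEHYDE
print(f"step 1: {100 * 15.5 / theoretical_1:.1f} % yield")   # ~54 %

# Step 2: 10.00 g aldehyde -> N-methyl derivative, 10.2 g isolated.
mol_aldehyde = 10.00 / M_ALDEHYDE
theoretical_2 = mol_aldehyde * M_N_METHYL
print(f"step 2: {100 * 10.2 / theoretical_2:.1f} % yield")   # ~89 %
```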
Biological assays. The test compounds were dissolved in an appropriate amount of dimethyl sulfoxide (DMSO) to obtain solutions of known concentration prior to the experiment, and then diluted with culture medium to concentrations ranging from 0.625 μM to 10 μM. Cells in the exponential growth phase were trypsinized and plated at a density of 3×10^4 cells/mL in a 96-well plate, cultured at 37 °C in a 5% CO2-supplemented atmosphere for 24 h, and then treated with various concentrations of the test compounds for 48 h. The control group was treated with complete medium only. Then 20 μL MTT solution (5 mg/mL) was added to each well and incubated for 4 h; the medium was carefully aspirated, and the formazan crystals were dissolved in 150 μL DMSO per well. All measurements were performed three times, with triplicate samples each time. Error bars represent the standard deviation of the mean. The absorbance of each well was measured at a test wavelength of 490 nm using a microplate reader. Using the GraphPad Prism software package, the IC50 value for the anticancer activity of each compound was calculated from the corresponding absorbance values.

Table 2. In vitro antiproliferative activity data of compounds. a Average of three independent experiments. b Sunitinib was used as positive control.

In summary, we have prepared a series of pyrrole-indolin-2-one derivatives bearing various amide structures. The in vitro biological results demonstrated that all the target compounds exhibited moderate to potent antitumor activities; in particular, compounds 7l and 7k showed higher activity than Sunitinib against SMMC-7721. The structure-activity relationship study showed that a compound having a 5-chloroindole and an amide structure containing an electron-donating group such as methyl or methoxy had good antiproliferative activity. In view of these results, compounds 7l and 7k are of great value for further investigation as new antitumor agents.

EXPERIMENTAL

General methods. All commercially available reagents were purchased from commercial sources and used as received. Products were visualized by UV on TLC plates (silica gel 60 F254). Melting points (uncorrected) were determined on an X-4 digital display micro melting point meter. 1H NMR and 13C NMR spectra were measured on a Bruker Avance spectrometer at 600 MHz and 101 MHz, respectively, in DMSO-d6 solutions. Splitting patterns in the 1H NMR spectra are designated as follows: s, singlet; d, doublet; t, triplet; q, quartet; m, multiplet; br, broad. High-resolution mass spectra (HRMS) were recorded on a Waters Micromass Q-Tof spectrometer.
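The IC50 determination described above (GraphPad Prism) can be reproduced with a standard four-parameter logistic fit. The sketch below is a minimal illustration with hypothetical viability values, not data from Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Four-parameter dose-response curve: viability vs. drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability (% of untreated control) at the tested
# concentrations (0.625-10 uM), averaged over triplicate wells.
conc = np.array([0.625, 1.25, 2.5, 5.0, 10.0])        # uM
viability = np.array([92.0, 78.0, 55.0, 31.0, 14.0])  # %

popt, _ = curve_fit(logistic4, conc, viability, p0=[0.0, 100.0, 2.5, 1.0])
print(f"estimated IC50 = {popt[2]:.2f} uM")
```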
2019-09-17T01:05:03.947Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "a38ceedb8f42673154e77b8d8b42846021456dec", "oa_license": null, "oa_url": "https://doi.org/10.3987/com-19-14110", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "48f180626f38d7a71372caf49cbff4daec149d23", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
18032226
pes2o/s2orc
v3-fos-license
Imbalances in serum angiopoietin concentrations are early predictors of septic shock development in patients with post-chemotherapy febrile neutropenia Background Febrile neutropenia carries a high risk of sepsis complications, and the identification of biomarkers capable of identifying high-risk patients is a great challenge. Angiopoietins (Ang-1 and Ang-2) are cytokines involved in the control of microvascular permeability. It is accepted that Ang-1 expression maintains endothelial barrier integrity, and that Ang-2 acts as an antagonizing cytokine with barrier-disrupting functions in inflammatory situations. Ang-2 levels have recently been correlated with sepsis mortality in intensive care units. Methods We prospectively evaluated concentrations of Ang-1 and Ang-2 at different time-points during febrile neutropenia, and explored the diagnostic accuracy of these mediators as potential predictors of poor outcome in this clinical setting before the development of sepsis complications. Results Patients who evolved to septic shock (n = 10) presented higher levels of Ang-2 measured 48 hours after fever onset, and of the Ang-2/Ang-1 ratio at the time of fever onset, compared to patients with non-complicated sepsis (n = 31). These levels correlated with sepsis severity scores. Conclusions Our data suggest that imbalances in the concentrations of Ang-1 and Ang-2 are independent and early markers of the risk of developing septic shock and of sepsis mortality in febrile neutropenia, and larger studies are warranted to validate their clinical usefulness. Therapeutic strategies that manipulate this Ang-2/Ang-1 imbalance can potentially offer new and promising treatments for sepsis in febrile neutropenia.

Background Despite improvements in supportive care, sepsis remains the most common cause of death in intensive care units, with mortality rates of 30% to 50% [1,2]. Recently, therapies targeting elements of the inflammatory and coagulation cascades yielded disappointing results in clinical trials [3-5], possibly due to redundancy within the inflammatory pathways activated during sepsis. Endothelial barrier disruption plays a key role in the pathogenesis of sepsis and septic shock, making it an attractive target for studies aimed at identifying new therapeutic targets for sepsis [6]. Recently, the participation of VEGF, a vascular growth factor with potent microvascular permeability functions, in the pathogenesis of septic shock was demonstrated [7]. Other important regulators of endothelial barrier function are the angiopoietins (Ang) -1 and -2 and the tyrosine kinase receptor Tie2 expressed on endothelial cells. Binding of Ang-1 to Tie2 maintains the quiescent resting state of the endothelium and reduces vascular permeability in response to inflammatory stimuli. In contrast, Ang-2 inhibits binding of Ang-1 to Tie2, resulting in vessel destabilization [8-10]. Circulating levels of Ang-1 and Ang-2 have recently been evaluated in patients with sepsis, and levels of Ang-2 have been correlated with sepsis severity in children [11] and adults [12-15], when evaluated in patients admitted to intensive care units with established signs and symptoms of sepsis. Febrile neutropenia (FN) in patients with hematologic malignancies is characterized by increased susceptibility to sepsis complications and a higher risk of septic shock, with mortality ranging from 2-21% [16]. Most patients with FN are well at the time of fever onset.
However, predicting the development of fulminant sepsis and septic shock is a great challenge in the care of these patients [17,18]. Here we evaluated the time-course of Ang-1 and Ang-2 expression in FN patients early in the course of sepsis and explored the diagnostic accuracy of Ang-1 and Ang-2 levels as potential predictors of poor outcome in this clinical setting.

Patient eligibility criteria Recruitment of patients took place at the Bone Marrow Transplantation Unit of the University of Campinas between March 2008 and March 2009. Patients were included if they fulfilled the following criteria: (1) diagnosis of a hematological malignancy, and (2) admission as inpatients for intensive chemotherapy (induction for acute leukemia or high-dose sequential therapy for lymphomas) or hematopoietic stem-cell transplantation (HSCT). Patients were invited to participate before the initiation of any chemotherapy regimen. Fever (T ≥ 38.0°C) at admission was the only exclusion criterion. The study was performed in accordance with the Declaration of Helsinki and approved by the local Ethics Committee. Informed written consent was obtained from all patients prior to collection of samples. Only patients who presented fever during neutropenia (defined as a neutrophil count < 500/μl) were included in the second phase of the study. Descriptive data consisting of demographics, diagnosis, clinical data and disease severity scores were obtained from the medical records. Twenty healthy individuals volunteered to determine a normal reference range for Ang-1 and Ang-2 levels (10 males, 10 females; median age 40, range 24 to 53).

Sepsis definitions and risk stratification scores An infectious etiology was assumed for all patients with post-chemotherapy neutropenia and new-onset fever, in accordance with FN management protocols. Blood and urine cultures were immediately obtained and broad-spectrum antibiotics were initiated [19]. Sepsis, in this population, was defined by the presence of two or more of the following: (1) temperature > 38.0°C, (2) heart rate > 90 beats/min, (3) respiratory rate > 20 breaths/min or PaCO2 < 32 mmHg, and a microbiologically proven or clinically evident source of infection. Septic shock was present in patients in whom sepsis was complicated by hypoperfusion or hypotension (systolic arterial pressure < 90 mmHg or a reduction in systolic blood pressure of > 40 mmHg from baseline) despite adequate volume resuscitation. Patients were subdivided into two outcome groups: sepsis (non-complicated) and septic shock. Severity of illness was assessed by calculating the Sequential Organ Failure Assessment (SOFA) score [20] daily after the development of fever, and by calculating the Multinational Association for Supportive Care in Cancer (MASCC) score at the time of fever [21,22].

Laboratory measurements Venous blood was drawn at enrollment (baseline), within 12 hours after the first episode of neutropenic fever, and 48 hours thereafter. Samples were immediately centrifuged at 3000 rpm (4°C, 20 min), and plasma and serum were stored at -80°C until analysis. Samples were processed by the same investigator. Serum levels of Ang-1 and Ang-2 were measured in duplicate using a commercial enzyme-linked immunosorbent assay (ELISA) kit (Quantikine, R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. Interassay coefficients of variation were 2.95% for Ang-1 and 6.87% for Ang-2 for samples with concentrations within the range observed in our study.
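The interassay coefficient of variation quoted above is simply the between-run standard deviation divided by the mean. A minimal sketch with hypothetical control readings:

```python
import numpy as np

# Hypothetical duplicate-averaged ELISA readings (pg/ml) of one control
# sample across five assay runs.
runs = np.array([102.0, 98.5, 105.2, 99.8, 101.3])
cv = 100 * runs.std(ddof=1) / runs.mean()
print(f"interassay CV = {cv:.2f} %")
```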
Von Willebrand factor (VWF) levels were measured in duplicate in citrate plasma samples using a rabbit anti-human VWF peroxidase conjugate (Dako Netherlands BV, Heverlee, Belgium).

Statistical analysis Patients were divided into two outcome subgroups according to the presence or absence of septic shock at any time point before the resolution of FN. Differences in continuous variables between patients from each subgroup, and between patients and healthy controls, were analyzed using the Mann-Whitney rank sum test. Categorical variables were compared using Fisher's exact test. Data are expressed as median and range unless otherwise stated. Correlation (Spearman's rank correlation) and linear regression analyses were performed between sepsis severity scores and angiopoietin concentrations. Receiver operating characteristic (ROC) procedures were used to identify optimal cut-off values of angiopoietin concentrations to differentiate patients with non-complicated sepsis from patients with septic shock. A second analysis was performed to explore the effect of angiopoietin concentrations on 30-day mortality, calculated from the day of fever onset. Survival curves were estimated using the Kaplan-Meier method. Parameters independently associated with survival were identified by univariate and multivariate Cox proportional hazards models. Variables found to be statistically significant at the 10% level in the univariate analyses were included in the multivariate model. Different models were established, and variable selection was performed by different forward and backward procedures, with comparable results. A p value less than or equal to 0.05 was considered statistically significant. All statistical analyses were performed with the SPSS package (SPSS Inc., Chicago, IL, USA) and the GraphPad Prism software (GraphPad Prism Software Inc., San Diego, California, USA).

Patient characteristics A total of 60 patients fulfilled the primary criteria for study entry, of whom 41 experienced neutropenic fever and completed the study (Figure 1). Characteristics of these patients are shown in Table 1. In 10/41 patients (24%), septic shock was present before the resolution of neutropenia, and all of them required mechanical ventilation. The median time to the development of septic shock was 4.1 days (range 1-7 days) after the first episode of neutropenic fever, and in only one patient did septic shock onset occur in the first 48 hours after the first episode of neutropenic fever. Eight patients died within the first 30 days after the onset of fever, giving a 30-day mortality of 13.3%. All deaths were attributed to complications of septic shock. The only clinically significant differences between patients with non-complicated sepsis and septic shock were: (1) age, (2) presence of bloodstream infection, (3) the SOFA sepsis severity score calculated 48 hours after fever onset, and (4) the MASCC score at the time of neutropenic fever. An anatomic site of infection was established in 10 (24%) patients. Compared to healthy individuals, patients presented lower baseline levels of Ang-1 and similar baseline levels of Ang-2 (Table 2).

Time-course of Ang-1 serum levels in hematological patients with FN At baseline, no difference in Ang-1 levels could be observed between patients with non-complicated sepsis and those with septic shock (Table 2).
No statistically significant difference could be detected between Ang-1 levels in patients with non-complicated sepsis (185.46 pg/ml, range 9.30-3206.40 pg/ml) and those with septic shock (82.34 pg/ml, range 9.30-571.05 pg/ml; Mann-Whitney test, p = 0.29) at the time of fever onset. After 48 hours, Ang-1 levels remained similar in patients with non-complicated sepsis (129.27 pg/ml, range 9.30-8485.10 pg/ml) and in patients with septic shock (105.10 pg/ml, range 9.30-3510.60 pg/ml; Mann-Whitney test, p = 0.90) (Figure 2).

Time-course of Ang-2 serum levels in hematological patients with FN At baseline, no statistically significant difference could be observed in levels of Ang-2 between patients from the two groups (Table 2). Furthermore, Ang-2 levels were similar at the time of neutropenic fever between patients with non-complicated sepsis (105.55 pg/ml, range 19.40-1653.50 pg/ml) and septic shock (152.39 pg/ml, range 19.97-644.27 pg/ml). However, 48 hours after neutropenic fever a striking difference could be observed between patients with and without septic shock, with markedly increased Ang-2 concentrations in patients with septic shock (840.77 pg/ml, range 30.67-2085.30 pg/ml) compared to patients with non-complicated sepsis (91.10 pg/ml, range 19.40-785.24 pg/ml; Mann-Whitney test: p = 0.002) (Figure 2). We estimated the diagnostic accuracy of Ang-2 levels measured 48 hours after fever onset, which yielded an area under the ROC curve of 0.84 (95%CI = 0.65-1.0; P = 0.004). In our population, an optimal cut-off value of Ang-2 > 233 pg/ml predicted the development of septic shock with a sensitivity of 75% (95%CI = 34.9%-96.8%) and a specificity of 92.6% (95%CI = 75.7%-99.1%).

The Ang-2/Ang-1 ratio is increased in FN patients that evolve to septic shock The Ang-2/Ang-1 ratio was calculated in an effort to detect relevant changes in the relative concentration of these two antagonistic mediators of microvascular permeability. At the time of neutropenic fever, the Ang-2/Ang-1 ratio was much higher in patients with septic shock (6.80, range 0.03-17.20) compared to patients with non-complicated sepsis (0.80, range 0.01-16.8; Mann-Whitney test: p = 0.05). After 48 hours, the Ang-2/Ang-1 ratio was higher in patients that developed septic shock (10.58, range 0.10-101.70) compared to patients with non-complicated sepsis (0.40, range 0.01-18.70), but statistical significance was not reached (Mann-Whitney test: P = 0.06) (Figure 2). Estimation of the diagnostic accuracy of the Ang-2/Ang-1 ratio at fever onset yielded an area under the ROC curve of 0.71 (95%CI = 0.50-0.93; P = 0.05). In our population, a median Ang-2/Ang-1 ratio of 1.17, which was the optimal cut-off value identified by the ROC procedure, predicted the development of septic shock with a sensitivity of 77.8% (95%CI = 40.0%-97.2%) and a specificity of 60% (95%CI = 40.6%-77.3%).

Serum Ang-1 and Ang-2 levels correlate with the severity of illness score (SOFA) We next evaluated whether serum Ang-1 and Ang-2 levels correlated with the SOFA score. We utilized the SOFA score calculated 48 hours after fever because it discriminates between the outcome groups better than SOFA at the time of fever onset (Table 1). Significant correlations were observed between SOFA and the following parameters: Ang-2 48 h after fever (Rs = 0.40; P = 0.02), Ang-2/Ang-1 ratio at fever onset (Rs = 0.51; P = 0.001) and Ang-2/Ang-1 ratio 48 h after fever (Rs = 0.35; P = 0.04) (Figure 3).
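The ROC procedure used here is straightforward to reproduce. The sketch below uses scikit-learn with hypothetical values (not the study data) and picks the cut-off by the Youden index, one common way of defining the "optimal" threshold.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical Ang-2 levels (pg/ml) 48 h after fever onset and outcomes
# (1 = septic shock, 0 = non-complicated sepsis).
ang2_48h = np.array([91, 120, 45, 840, 260, 30, 610, 75, 1500, 88])
shock    = np.array([0,   0,  0,   1,   1,  0,   1,  0,    1,  0])

auc = roc_auc_score(shock, ang2_48h)
fpr, tpr, thr = roc_curve(shock, ang2_48h)
best = np.argmax(tpr - fpr)   # Youden index J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}; cut-off = {thr[best]:.0f} pg/ml "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```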
Ang-2/Ang-1 ratio at fever onset and 30-day mortality in patients with FN To determine the relationship of angiopoietin levels with 30-day mortality, we initially performed univariate Cox proportional hazards analyses, in which the following variables were found to be statistically significant: age, duration of neutropenia, SOFA, MASCC, Ang-2 (48 hours after fever) and the Ang-2/Ang-1 ratio (both at fever onset and 48 hours thereafter). We then performed multivariate Cox regression analysis incorporating these variables, and the only variable that remained statistically significant in the multivariate setting was the Ang-2/Ang-1 ratio measured at fever onset (hazard ratio 1.20, 95%CI 1.02-1.41; P < 0.01). Figure 4 illustrates the Kaplan-Meier curve of 30-day survival stratified by values of the Ang-2/Ang-1 ratio at fever onset below versus above the median. The log-rank test confirmed statistical significance. The 30-day mortality of patients with an Ang-2/Ang-1 ratio below 1.17 was 5.5%, compared to 31.6% for patients with greater ratios.

Discussion Post-chemotherapy FN in patients with hematological malignancies is a condition that carries a high risk of sepsis complications, with mortality rates as high as 21% [16], usually preceded by septic shock. Patients with FN are a heterogeneous group in terms of risks of complications and mortality, and the identification of parameters capable of accurately identifying high-risk patients is one of the great challenges in their care. Clinical scores such as the MASCC [21], and laboratory parameters such as C-reactive protein, IL-6, IL-8 and procalcitonin, have recently proven to be useful tools for this purpose [23]. However, it is generally acknowledged that these parameters are more useful for identifying patients at low risk of complications, leaving room for refinements in the risk stratification of FN patients. Ideally, such a marker should be easily detected in samples obtained before the development of severe sepsis, and should not be influenced by cytopenias or by the inflammatory milieu associated with disease status. Elements that are directly involved in the pathogenesis of sepsis complications are thus more attractive candidates than non-specific elements of the inflammatory cascade. Angiopoietins are a family of vascular growth factors with critical roles in embryonic and postnatal angiogenesis. Ang-1 and Ang-2 both act on the Tie2 tyrosine kinase receptor, found primarily on endothelial cells, but appear to play antagonistic roles [10]. Apparently, Ang-1 promotes vessel stabilization, whereas Ang-2 is involved in the destabilization of newly formed vessels. These processes of disassembly and reassembly of the endothelial lining of primitive blood vessels are important because they allow the incorporation of new endothelial cells into growing endothelial tubes, leading eventually to the formation of functional and mature blood vessels [24]. A role for angiopoietins in the pathogenesis of septic shock is supported by multiple lines of evidence. First, increased levels of Ang-2 have been demonstrated in adult and pediatric patients with sepsis in intensive care units (ICU), and these levels correlated with sepsis severity [11,12,14,25]. Second, serum from patients with sepsis has been shown to disrupt endothelial architecture, an effect that correlates with Ang-2 levels, is reversed by Ang-1 and is mimicked by recombinant Ang-2 [14].
Finally, Ang-2 levels have been shown to correlate with pulmonary permeability edema and the occurrence of acute respiratory distress syndrome in mechanically ventilated patients [26]. More recently, Kumpers et al., studying a population of 43 medical ICU patients, demonstrated that the Ang-2 level at the time of ICU admission was a strong predictor of mortality [12]. Ang-1 levels were also evaluated in some of these studies, but so far studies have failed to demonstrate a relationship between Ang-1 and sepsis outcomes. In our prospective study we explored the time-course of Ang-1 and Ang-2 in a population of patients at high risk of sepsis complications, before the development of severe sepsis. As far as we are aware, the significance of Ang-1 and Ang-2 levels has not been studied in patients with FN. An additional contribution of our study is that we prospectively evaluated the significance of Ang-1 and Ang-2 levels at an earlier time-point in the development of sepsis (which developed at a median of 4.1 days after fever onset), as opposed to studies that evaluated patients in the ICU. This difference is evidenced by the median SOFA score of our patients (4, range 1-8) compared to the higher median SOFA scores (16, range 1-22) of other key studies [12]. Last, Ang-1 and Ang-2 levels were serially evaluated at three time points, thus offering a view of the time-course of Ang-1 and Ang-2 release in the early hours of sepsis. For this study we only included patients with hematological malignancies and FN after intensive chemotherapy who were treated as inpatients from day one of chemotherapy until the resolution of neutropenia. By doing so, we intended to obtain a representative sample of patients at high risk of sepsis complications. Furthermore, the fact that all patients were treated as inpatients allowed us to standardize important variables that could affect the validity of our results, such as the time between fever onset and sample collection. To limit as much as possible the influence of different diagnoses and chemotherapy regimens on our results, Ang-1 and Ang-2 levels were also collected immediately before the initiation of chemotherapy, allowing differences between outcome groups to be compared not only as absolute values, but also as fold-increases from baseline levels, which did not alter our results (data not shown). In fact, neither Ang-1 nor Ang-2 levels were significantly different before the initiation of chemotherapy between patients with non-complicated sepsis and septic shock, supporting the uniformity of our group of patients as far as the two study variables were concerned. The main finding of our study is that the relative concentrations of Ang-1 and Ang-2 are different in subgroups of patients with FN that evolve to non-complicated sepsis compared to patients that develop septic shock, and that evaluation of these two proteins within the first 48 hours after neutropenic fever, before the development of any signs and symptoms of septic shock, is a promising tool to discriminate high-risk patients with FN. In accordance with previous studies, Ang-2 levels were significantly higher in patients with septic shock compared to patients with non-complicated sepsis. This difference was not present at the time of fever onset; it rose sharply and became evident after 48 hours, when levels in the poorer-outcome group were 8 times higher than in the good-outcome group. Importantly, Ang-2 baseline levels were similar.
As in previous studies, we were not able to detect any significant difference in Ang-1 levels between the study groups. However, when the relative concentrations of these two antagonistic cytokines were evaluated, we did observe a relative deficiency of Ang-1 compared to Ang-2 in patients that developed septic shock, which was evident early, at the time of fever onset. It has been known for more than a decade that Ang-1 can protect the adult vasculature against VEGF-induced plasma leakage [10]. Ang-1 has also been shown to protect mice from endotoxic shock [27]. In this respect, it is tempting to speculate that the imbalance between Ang-1 and Ang-2 levels, present already at the time of fever onset, is associated with a poorer outcome of sepsis in FN, and possibly in other patients with sepsis. This hypothesis is well illustrated in our work by the divergent trend of the Ang-2/Ang-1 ratio observed in each outcome subgroup of patients. Early in the course of sepsis, patients that evolved to septic shock presented a 7-fold increase in the Ang-2/Ang-1 ratio, whereas patients with non-complicated sepsis presented a 0.8-fold decrease compared to baseline (pre-chemotherapy) ratios. After 48 hours, this divergent trend persisted, with patients that evolved to septic shock presenting a 10-fold increase, and patients with non-complicated sepsis a 0.4-fold decrease. We also evaluated VWF antigen levels, which are increased in patients with sepsis [28] and are an indicator of endothelial dysfunction and stimulation [29]. In contrast to Ang-2, no difference could be observed between the two patient groups at any time-point (Table 1). We also demonstrated a significant, though weak, correlation of Ang-2 concentrations 48 hours after fever onset and of the Ang-2/Ang-1 ratio (both at fever onset and after 48 hours) with the SOFA score of sepsis severity. Based on these results, we estimated the diagnostic accuracy of Ang-2 concentrations 48 hours after fever and of the Ang-2/Ang-1 ratio at fever onset. Despite wide confidence intervals, the areas under the ROC curves suggest that both biomarkers should be further investigated as promising tools to discriminate high-risk FN patients in studies with larger sample sizes. The relationship between angiopoietin concentrations and 30-day mortality was also evaluated. Using a multivariate Cox model, we demonstrated that the Ang-2/Ang-1 ratio at the time of fever onset can be an independent predictor of sepsis survival in our population, an observation that will have to be validated in larger studies. Recently, the time-course of Ang-1 and Ang-2 release was evaluated in a model of human endotoxemia [13], which showed that Ang-2 release is initiated 2.5 hours after LPS challenge, with no significant variations in levels of Ang-1. Ang-2 is a Weibel-Palade body-stored molecule that is rapidly released upon endothelial stimulation. In contrast, Ang-1 is believed to exert its vessel-sealing effect by low-level constitutive activation of the Tie2 receptor, in a model in which constitutive Ang-1/Tie2 interactions control endothelial barrier integrity as a default pathway, and Ang-2 acts as a dynamically regulated antagonizing cytokine [30]. Our results, showing that patients that evolve to septic shock present an initial relative deficiency of Ang-1 associated with a sharp increase in Ang-2 levels in the first 48 hours of sepsis, support the idea that similar events can be involved in the pathogenesis of septic shock in FN.
There are certain limitations of our study that need to be acknowledged to avoid overinterpretation of its conclusions. Research on new diagnostic tools follows a sequence of phases, along which the questions answered progress from the demonstration that a biomarker behaves differently in diseased and normal individuals (phase I) to the ultimate demonstration of improved clinical outcomes after its incorporation (phase IV) [31]. Ours is a phase II study, in which the performance of a new diagnostic assay was tested in patients with a defined condition (febrile neutropenia) and potentially different outcomes, but in which several confounding variables were controlled. Phase II studies tell us whether the test shows diagnostic promise under ideal conditions. Therefore, our study design does not allow us to generalize our conclusions to clinical settings where the impact of treatment-related and other variables might challenge them. For the formal validation of the assay and its incorporation into clinical practice, a larger sample size (ideally from distinct centers) under less controlled conditions should be used. The relatively low number of deaths used for the multivariate analysis also deserves to be discussed. Our study design required a very specific population of patients, with stringent inclusion criteria to control confounding variables, so that a further increase in sample size was not feasible. On the other hand, it has been demonstrated that relaxing the rule of 10 events per variable in Cox models is possible without unacceptable increases in confounding bias [32]. With this in mind, we believe that, as long as the above-mentioned limits of our study are acknowledged, the association of 30-day mortality with the Ang-2/Ang-1 ratio is important information for the literature.

Conclusions In summary, our data suggest that imbalances in the concentrations of Ang-1 and Ang-2, present already at the time of the first fever peak, are an independent marker of the risk of developing septic shock and of 30-day mortality in FN. Further studies are warranted to validate this new biomarker as a clinically relevant tool in the daily care of these patients. In addition, therapeutic strategies designed to manipulate the Ang-2/Ang-1 imbalance can offer a new and promising paradigm for the treatment of sepsis and septic shock in patients with FN.
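The survival modelling described in the statistical analysis section (Kaplan-Meier curves, Cox proportional hazards) can be sketched with the lifelines package. The table below is a toy stand-in for the patient data, and the variable names are illustrative.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy patient table: follow-up time capped at 30 days, death indicator,
# and two of the candidate predictors (age, Ang-2/Ang-1 ratio at fever onset).
df = pd.DataFrame({
    "time":      [30, 12, 30, 5, 30, 22, 30, 8, 30, 17],
    "death":     [0,  1,  0,  1, 0,  1,  0,  1, 0,  1],
    "age":       [44, 61, 58, 57, 29, 66, 51, 59, 62, 63],
    "ang_ratio": [0.4, 6.8, 5.5, 10.6, 0.3, 1.5, 1.1, 12.4, 0.6, 7.9],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()   # hazard ratios with 95% CIs for each covariate
```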
2017-06-21T16:11:35.923Z
2010-05-28T00:00:00.000
{ "year": 2010, "sha1": "45315b330aa1a08c623dd86a31f6006a058646e1", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-10-143", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "afe9ffb5f7e2b25af6b8bbb8623c7f40b699c424", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
153311124
pes2o/s2orc
v3-fos-license
Harmonic-Mean-Based Dual-Antenna Selection with Distributed Concatenated Alamouti Codes in Two-Way Relaying Networks In this letter, a harmonic-mean-based dual-antenna selection scheme at the relay node is proposed for two-way relaying networks (TWRNs). With a well-designed distributed orthogonal concatenated Alamouti space-time block code (STBC), a dual-antenna selection problem based on the instantaneous achievable sum-rate criterion is formulated. We propose a low-complexity selection algorithm based on the harmonic-mean criterion with linear complexity O(N_R), rather than the direct exhaustive search with complexity O(N_R^2). From the analysis of the network outage performance, we show that the asymptotic diversity gain function of the proposed scheme behaves as ρ^-(N_R-1), which demonstrates a one-degree loss of diversity order compared with full diversity. This slight performance gap is mainly caused by sacrificing some dual-antenna selection freedom to reduce the algorithm complexity. In addition, our proposed scheme obtains an extra coding gain from the combination of the well-designed orthogonal concatenated Alamouti STBC and the corresponding dual-antenna selection algorithm. Compared with the commonly used selection algorithms in the state of the art, the proposed scheme achieves the best performance, which is validated by numerical simulations.

Introduction Wireless relaying systems have attracted much attention for current and future communication networks [1][2]. For higher spectrum efficiency, two-way relaying networks (TWRNs) were proposed in [3][4]; they need only two transmission phases to complete the bidirectional signal transmission and reception. Specifically, this is accomplished by simultaneously transmitting from the sources to the relay in the multi-access phase and by broadcasting the processed information from the relay to the sources in the broadcast phase. To achieve high diversity gain and coding gain, distributed space-time block codes (DSTBCs) have been extensively studied in [5][6], where the relaying networks consist of multiple nodes or multiple antennas. These DSTBCs are generally designed to obtain the best performance improvement by constructing a distinctive space-time code structure [7][8][9]. Antenna selection is an effective method to reduce system resources while keeping moderate performance [10]. Most existing works can be categorized into two groups. The first pursues the optimal antenna selection algorithm without performance loss. Single-antenna selection has been widely studied, and the related analyses have shown that the same diversity gain can be achieved as in the system using all antennas. The second focuses on reducing the implementation complexity. The optimal antenna selection algorithm is an exhaustive search according to some performance metric over all candidate antenna sets. However, its complexity is generally prohibitive, especially with a large number of antennas or in relaying networks. As we know, space-time code transmission strategies not only provide remarkable diversity gain, but also bring coding gain. In this letter, we expect to obtain the diversity gain by using a low-complexity antenna selection scheme, and to achieve the coding gain by using a space-time code transmission strategy. Therefore, multi-antenna selection combined with space-time code transmission is naturally studied.
In this letter, we concentrate on dual-antenna selection combined with DSTC at the relay node. Based on our well-designed distributed concatenated Alamouti space-time block code (STBC), we study a dual-antenna selection algorithm based on the instantaneous achievable sum-rate of the overall network. Meanwhile, considering the implementation complexity, we also propose a low-complexity selection strategy, which makes a tradeoff between algorithm complexity and network performance. The rest of this paper is organized as follows. Related works are presented in Section 2. The system model and the optimal antenna selection criterion are given in Section 3. In Section 4, the near-optimal harmonic-mean-based dual-antenna selection scheme is proposed. Performance analysis and numerical simulation results are shown in Section 5 and Section 6, respectively. Finally, conclusions are drawn in Section 7.

Related Work Although many benefits of multi-antenna systems have been verified, the deployment of multiple antennas requires multiple radio frequency (RF) chains. These RF chains include multiple analog-digital converters, low noise amplifiers, down-converters, etc., whose high cost is undesirable, especially for mobile handsets. To reduce the number of RF chains and keep the system simple and inexpensive, several antenna selection (AS) algorithms have been proposed to select the most favorable transmit and/or receive antennas [10-17]. In two-way networks, AS has also been extensively considered. In [12], a single-antenna selection scheme at the source node was studied and compared with beamforming, showing that the same diversity order can be obtained for both schemes. In [13][14], joint relay and antenna selection was discussed over Nakagami-m fading channels. A greedy-based AS scheme is presented in [15], and a theoretical analysis of joint relay and antenna selection is given in [16]. A max-min-based approach for relay AS was proposed in [17], which selected a single antenna at the relay node to maximize the minimum end-to-end receive signal-to-noise ratio (SNR). A minimum-mean-square-error-based greedy AS algorithm was proposed for amplify-and-forward (AF) MIMO relaying systems [18], which adopted an iterative selection algorithm to minimize the mean square error. Recently, AS combined with an interference alignment (IA) scheme [19] was studied, which greatly improves the received SINR of each user in cognitive radio networks. A sum-rate maximization scheme using second-order cone programming was studied in [20], which provided a promising algorithm to guarantee the secure transmission of the primary user when the spectrum is shared with secondary users. However, most AS schemes in the state of the art consider selecting only a single antenna at the source node or relay node. There are no reports on combining multi-antenna selection with DSTBCs in relaying networks.

Distributed Concatenated Alamouti Codes with Dual-Antenna Selection at the Relay Node In this paper, we consider a two-way relaying network in which the source nodes T1 and T2 communicate with the help of an intermediate relay node R using the amplify-and-forward (AF) protocol. Each source node is equipped with a single antenna, while the relay R is equipped with N_R antennas.
We assume the channels between the sources and the relay follow a Rayleigh fading distribution, and that there is no direct link between T1 and T2, as the sources are located far away from each other or within deep fading areas. The channel fading vectors between T1 and R and between T2 and R are defined accordingly.

System Model As shown in Fig. 1, in our two-way relaying network, two antennas are selected at the relay node R according to a carefully designed selection criterion. There are two transmission phases in one complete information exchange between the source nodes T1 and T2: the multiple-accessing (MA) phase at the two sources and the broadcasting (BC) phase at the relay node.

Fig. 1. Two-way relaying network with dual-antenna selection.

A. Multiple Accessing (MA) Phase The source T1 transmits the symbol vector s1 (carrying s11 and s12) and the source T2 transmits s2 (carrying s21 and s22) to the relay R simultaneously in two consecutive time slots. Here, we assume the i-th and j-th antennas at R are selected to receive the signals. The received signals at R via the selected antennas can then be expressed in terms of the selected channel vectors between T1, T2 and R, and the noise vectors n_R1 and n_R2 at R, whose elements have zero mean and variance σ^2.

B. Broadcasting (BC) Phase The relay R first processes the received signals using a linear combination matrix A. Then R broadcasts the resulting signal t using the Alamouti code in two consecutive time slots via the selected antennas. Specifically, in the first time slot R broadcasts t1 and t2 from the i-th and j-th antennas respectively, and in the second time slot it broadcasts the conjugated Alamouti pair, where β denotes the power scaling factor at R. The received signals at source node T2 can be obtained similarly. Combining (1)-(5) and removing the self-interference, the received signals at T1 and T2 can be expressed in terms of the equivalent channel matrices H and G. In addition, H^T G and G^T H are both orthogonal Alamouti matrices, which greatly simplifies the maximum-likelihood (ML) detection to symbol-by-symbol detection. We call this space-time code the distributed concatenated Alamouti code. Substituting β into (6) and (7), the instantaneous end-to-end received SNRs γ1 and γ2 at T1 and T2 can be written accordingly. We note that γ1 and γ2 are not statistically independent of each other, as both are related to the selected antennas.

Dual-Antenna Selection Criterion at the Relay Node To effectively evaluate the network performance, we define the instantaneous achievable sum-rate of the overall network in terms of γ1 and γ2 [Eq. (10)]. The optimal dual-antenna selection criterion is then to maximize the instantaneous achievable sum-rate over all antenna pairs. Since this requires an exhaustive pair search, we propose a low-complexity and pragmatic dual-antenna selection algorithm in the next section.

Harmonic-Mean-Based Dual-Antenna Selection Algorithm By using the inequality of arithmetic and geometric means(1), the sum rate (10) can be lower-bounded; combining this with (8) and (9), and further applying a second standard inequality, a tractable lower bound is obtained. For more tractable analysis, we alternatively consider the dual-antenna selection criterion as maximizing this lower bound of the instantaneous achievable sum rate. Consequently, we propose a near-optimal harmonic-mean-based dual-antenna selection algorithm in the following Lemma.
Lemma 1: A near-optimal harmonic-mean-based dual-antenna selection algorithm can be formulated from the lower bound in (11). Compared with the exhaustive maximization of (11), Lemma 1 converts the original two-dimensional optimization problem into a one-dimensional search, which achieves linear selection complexity O(N_R). Fig. 2 illustrates the comparison of selection complexity between the two algorithms; the harmonic-mean-based dual-antenna selection is more efficient, especially for large N_R.

(1) The inequality of arithmetic and geometric means: (a + b)/2 ≥ √(ab) for a, b ≥ 0.

Performance Analysis In this section, a sum-rate outage probability upper bound is derived for the proposed dual-antenna selection algorithm in Lemma 1. We say that a sum-rate outage happens when the sum-rate R_sum falls below a given threshold R_sum^th. For analytical tractability, we consider the lower-bounded sum-rate presented in (14), which consequently yields an outage probability upper bound. Before further analysis and discussion, we first provide an auxiliary lemma in which only the first summand of an infinite series is retained when the argument is large. Proceeding similarly for both links, we obtain the sum-rate outage probability upper bound stated in the following theorem. From Theorem 1, we can clearly see that the proposed harmonic-mean dual-antenna selection achieves a diversity gain function of at least ρ^-(N_R-1). Compared with the full-diversity performance of the same network deployment, whose diversity gain function behaves as (log_e ρ/ρ)^(N_R), the one-degree loss of diversity gain in the proposed scheme is mainly ascribed to the sacrifice of some selection freedom in exchange for a low-complexity algorithm.

Numerical Results In this section, we provide simulations to evaluate the performance of the proposed harmonic-mean-based dual-antenna selection scheme and to validate the theoretical analysis of the diversity gain function of the outage probability. We consider independent identically distributed Rayleigh fading channels as described in Section 2. First, we carry out the sum-rate simulations. As shown in Fig. 3 and Fig. 4, compared with the commonly used antenna selection algorithms, such as max-min antenna selection [12][13], geometric-mean antenna selection and arithmetic-mean antenna selection, our proposed harmonic-mean-based dual-antenna selection scheme obtains the best sum-rate. In particular, the simulation results in Fig. 4 show that the sum-rate improvement is remarkable when the number of antennas is large. Second, Fig. 5 shows the comparison of outage probability performance between the proposed scheme and the other antenna selection schemes. From the results, our proposed harmonic-mean-based dual-antenna selection scheme outperforms the max-min antenna selection used in [12][13]. In addition, with similar procedures, the arithmetic-mean-based and geometric-mean-based selection schemes were also simulated, and the harmonic-mean-based selection scheme shows a clear superiority. Fig. 6 shows the outage probability performance with an increasing number of antennas, which also validates the improved performance of our proposed scheme. Finally, Fig. 7 shows the outage probability upper bound for different antenna configurations; the simulated results based on equation (17) validate the derived bound.

Conclusion In this paper, we have proposed a harmonic-mean-based dual-antenna selection scheme for two-way relaying networks.
Combined with the well-designed distributed orthogonal concatenated Alamouti codes, we convert the optimal dual-antenna selection into a near-optimal linear-complexity selection algorithm. From the asymptotic analysis, we demonstrate that the proposed scheme achieves a diversity gain function of at least ρ^-(N_R-1). Numerical results verify our analysis and provide insights into the superior performance of the proposed scheme.
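To make the complexity argument concrete, the sketch below contrasts the exhaustive O(N_R^2) pair search with the O(N_R) harmonic-mean shortlist. The per-antenna score is a simplified stand-in for the bound in (11), not the paper's exact sum-rate expression; under this separable proxy metric the two searches pick the same pair, which is precisely the intuition behind the linear algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N_R = 8
# Rayleigh-fading gains from sources T1 and T2 to each of the N_R relay antennas.
f = (rng.standard_normal(N_R) + 1j * rng.standard_normal(N_R)) / np.sqrt(2)
g = (rng.standard_normal(N_R) + 1j * rng.standard_normal(N_R)) / np.sqrt(2)

# Per-antenna harmonic mean of the two link gains (proxy for the bound in (11)).
a, b = np.abs(f) ** 2, np.abs(g) ** 2
score = 2 * a * b / (a + b)

# O(N_R) selection: sort once and keep the two best harmonic-mean scores.
i, j = sorted(np.argsort(score)[-2:])
print(f"harmonic-mean selection: antennas ({i}, {j})")

# O(N_R^2) exhaustive search over all antenna pairs for comparison.
pairs = [(p, q) for p in range(N_R) for q in range(p + 1, N_R)]
best = max(pairs, key=lambda pq: score[pq[0]] + score[pq[1]])
print(f"exhaustive pair search:  antennas {best}")
```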
2019-05-15T14:28:30.429Z
2019-04-29T00:00:00.000
{ "year": 2019, "sha1": "68164497f3ff0a9ea1ce5b4a6b43ff979ab3d23e", "oa_license": null, "oa_url": "http://itiis.org/digital-library/manuscript/file/22070/TIISVol13No4-12.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "7f13f9a125e94ae78c78862c59e5d6cf5800e023", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
2366325
pes2o/s2orc
v3-fos-license
Control of Unit Power Factor PWM Rectifier To solve the problem of harmonic pollution of the power grid caused by traditional diode rectifiers and phase-controlled rectifiers, a unity power factor PWM rectifier is designed. The topology of the rectifier circuit is introduced and the double closed-loop control strategy in the three-phase stationary coordinate system is analyzed. To address the deficiencies of this control strategy, a control strategy in the two-phase synchronous rotating coordinate system is proposed, which allows the independent control of active current and reactive current. A simulation model of the PWM rectifier is built, and the effectiveness of the proposed control method is verified by simulation.

Introduction The unity power factor PWM rectifier has the advantages of high power factor, low harmonic content of the grid-side current, and bidirectional energy transmission, and is widely used in AC drives, reactive power compensation, active power filters, unified power flow control, and uninterruptible power supplies [1]. This paper introduces the topology of the three-phase PWM rectifier and describes the control method of the rectifier in the three-phase stationary coordinate system. On the basis of an analysis of the advantages and disadvantages of this control method, a control method in the two-phase synchronous rotating coordinate system is put forward. The mathematical model of the three-phase PWM rectifier in the d, q coordinates is then established, and the independent control of active current and reactive current is realized.

The Control Method of the Three-phase Voltage Source PWM Rectifier in the Three-phase Stationary Coordinate System Figure 1 shows the topology of the three-phase voltage source PWM rectifier, indicating the three-phase voltage source, the DC-side filter capacitor C and the load. To realize the control of the input current and the output voltage, the traditional method is to control the three-phase input current directly. Controlling the input current also controls the flow of energy, and thus the control of the output voltage can be realized. The control method of the PWM rectifier in the three-phase stationary coordinate system is shown in Figure 2. In this control method, the outer loop controls the DC voltage: the difference between the command and actual DC-side voltage is fed to a PI regulator, whose output is a DC current signal I_m proportional to the amplitude of the AC input current; the command signals of the three-phase AC currents I_a*, I_b*, I_c* are then obtained by multiplying I_m by sinusoidal signals in phase with the corresponding phase voltages. The difference between each command current and the actual current is fed to a PI regulator to obtain the sinusoidal modulation wave, and by comparing the sinusoidal modulation wave with the carrier wave, the PWM signals that drive the switches are obtained. This control method is simple, but the command current in the control system is a time-varying sinusoidal signal with a given frequency, amplitude and phase angle. The steady-state performance is therefore not satisfactory, and the independent control of the active current and the reactive current cannot be achieved [3].

Figure 1. The topology of the three-phase voltage source PWM rectifier.
Figure 2. The control method of the PWM rectifier in the three-phase stationary coordinate system.

The Control Method of the Three-phase Voltage Source PWM Rectifier in the Two-phase Synchronous Rotating Coordinate System To realize zero-steady-state-error control of the three-phase currents and independent control of the active and reactive currents, the control method of the unity power factor PWM rectifier in the two-phase synchronous rotating coordinate system is introduced below.
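Both control schemes rest on the same building block: a PI regulator in the voltage outer loop and the current inner loops. Before turning to the rotating-frame design, a minimal discrete-time sketch of such a regulator is given below; the gains, sampling time and limits are illustrative values, not tuned parameters from the paper.

```python
class PI:
    """Discrete PI regulator with output clamping and simple anti-windup."""
    def __init__(self, kp, ki, ts, limit):
        self.kp, self.ki, self.ts, self.limit = kp, ki, ts, limit
        self.integral = 0.0

    def step(self, ref, meas):
        err = ref - meas
        self.integral += self.ki * err * self.ts
        # Anti-windup: keep the integral term within the output limit.
        self.integral = max(-self.limit, min(self.limit, self.integral))
        out = self.kp * err + self.integral
        return max(-self.limit, min(self.limit, out))

# Voltage outer loop: DC-bus error (700 V reference) -> current amplitude command.
voltage_loop = PI(kp=0.5, ki=20.0, ts=1e-4, limit=100.0)
print(f"I_m command = {voltage_loop.step(ref=700.0, meas=685.0):.2f} A")
```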
Figure 3 shows the control method of the unity power factor PWM rectifier in the dq rotating coordinate system. Through coordinate transformation, quantities in the three-phase stationary coordinate system (a, b, c) can be converted into the synchronous rotating (d, q) coordinate system that rotates synchronously with the grid fundamental wave [2]. The transformation matrix is

    (1)   C = (2/3) [ sin ωt          sin(ωt − 120°)   sin(ωt + 120°)
                      cos ωt          cos(ωt − 120°)   cos(ωt + 120°) ]

and the inverse transformation matrix is

    (2)   C⁻¹ = [ sin ωt           cos ωt
                  sin(ωt − 120°)   cos(ωt − 120°)
                  sin(ωt + 120°)   cos(ωt + 120°) ]

Figure 3. The control method of the PWM rectifier in the dq rotating coordinate system.

The most prominent advantage of this transformation is that a fundamental sinusoidal quantity in the (a, b, c) coordinate system is converted into a DC variable in the (d, q) coordinate system. In this transformation, the d-axis of the two-phase synchronous rotating coordinate system represents the active component, and the q-axis represents the reactive component. If we take the position of the input voltage vector as the positive direction of the d-axis, the three-phase input voltages can be written in the usual balanced form (3), and through the coordinate transformation the supply voltage in the dq coordinate system satisfies e_q = 0 with a constant e_d (4). According to the instantaneous power theory, the instantaneous active power p and reactive power q of the system are given by (5). Because e_q = 0, equation (5) simplifies to

    (6)   p = (3/2) e_d i_d ,   q = −(3/2) e_d i_q

If we do not consider fluctuations in the grid voltage, e_d is a fixed value, so the instantaneous active power p and instantaneous reactive power q of the PWM rectifier are proportional to i_d and i_q, respectively. Thus, by controlling i_d and i_q, the active and reactive power of the PWM rectifier can be controlled [4]. In the three-phase PWM rectifier, the instantaneous active power delivered to the DC side is u_dc·i_dc; if the loss of the PWM rectifier is not considered, equating it to the AC-side active power p yields equation (7).
Since e_d is a fixed value when grid-voltage fluctuations are neglected and the loss of the rectifier is ignored, the DC-side voltage u_dc is proportional to i_d, so the DC-side voltage of the PWM rectifier can be controlled by controlling i_d. The control method shown in Figure 3 also consists of a voltage outer loop and current inner loops. By introducing DC-voltage feedback, zero-steady-state-error control of the DC voltage can be realized. Since the DC voltage can be controlled through i_d, the output of the voltage outer-loop PI regulator serves as the reference value of the current inner loop, so that the active power of the PWM rectifier can be adjusted. The reference value of the reactive current i_q* is set according to the reference value of the reactive power; when i_q* = 0, the PWM rectifier operates in the unity power factor state. In this control method, the PI regulators achieve zero-steady-state-error control; compared with the control method in the three-phase stationary coordinate system, the steady-state performance is better, and the independent control of active and reactive current is realized [5].

The Simulation Results MATLAB/SIMULINK is used to establish the simulation model of the PWM rectifier. The simulation parameters are as follows: the grid voltage is 380 V/50 Hz, the AC-side inductance is 0.8 mH, the reference value of the DC-side capacitor voltage is 700 V, and the resistive load is 3.72 Ω. Figure 4 shows the three-phase input current waveform and its FFT analysis of the unity power factor PWM rectifier under control in the three-phase stationary coordinate system. Figure 5 shows the corresponding waveform and FFT analysis under control in the two-phase synchronous rotating coordinate system. Figure 6 shows the control effect on the DC-side capacitor voltage in the two-phase synchronous rotating coordinate system. It can be seen from the FFT analysis of the current waveforms that control in the two-phase synchronous rotating coordinate system has a better steady-state response than control in the three-phase stationary coordinate system.

Figure 4. Three-phase input current waveform and its FFT analysis of the unity power factor PWM rectifier under control in the three-phase stationary coordinate system (fundamental = 284.2, THD = 1.41%).
Figure 5. Three-phase input current waveform and its FFT analysis of the unity power factor PWM rectifier under control in the two-phase synchronous rotating coordinate system.
Figure 6. The control effect on the DC-side capacitor voltage in the two-phase synchronous rotating coordinate system.
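The coordinate transformation (1) reconstructed above can be checked numerically. The NumPy sketch below applies it to a balanced set of phase voltages (311 V peak, corresponding to a 380 V line-to-line grid) and shows that the result is a constant d-component with e_q = 0, which is exactly the property the rotating-frame controller relies on.

```python
import numpy as np

def abc_to_dq(x_abc, theta):
    """Transform (a, b, c) quantities into the synchronously rotating (d, q)
    frame, d-axis aligned with the voltage vector, as in equation (1)."""
    c = (2.0 / 3.0) * np.array([
        [np.sin(theta), np.sin(theta - 2 * np.pi / 3), np.sin(theta + 2 * np.pi / 3)],
        [np.cos(theta), np.cos(theta - 2 * np.pi / 3), np.cos(theta + 2 * np.pi / 3)],
    ])
    return c @ x_abc

omega, E = 2 * np.pi * 50, 311.0   # 50 Hz grid, 311 V phase peak
for t in (0.0, 0.002, 0.007):
    theta = omega * t
    e_abc = E * np.array([np.sin(theta),
                          np.sin(theta - 2 * np.pi / 3),
                          np.sin(theta + 2 * np.pi / 3)])
    print(abc_to_dq(e_abc, theta))   # ~[311, 0] at every instant
```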
2017-10-22T04:06:18.935Z
2013-06-30T00:00:00.000
{ "year": 2013, "sha1": "c82308cd1ce6b8313b88d51e249f0eeae66b1f0c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4236/epe.2013.54b023", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "c82308cd1ce6b8313b88d51e249f0eeae66b1f0c", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
118737645
pes2o/s2orc
v3-fos-license
A Unified Theory for the Cuprates, Iron-Based and Similar Superconducting Systems: Application for Spin and Charge Excitations in the Hole-Doped Cuprates A unified theory for the cuprates and the iron-based superconductors is derived on the basis of common features in their electronic structures, including quasi-two-dimensionality and the large-U nature of the electron orbitals close to E_F (smaller-U hybridized orbitals reside in bonding and antibonding states away from E_F). Consequently, low-energy excitations are described in terms of auxiliary particles, representing combinations of atomic-like electron configurations, rather than electron-like quasiparticles. The introduction of a Lagrange Bose field is necessary to enable the treatment of these auxiliary particles as bosons or fermions. The condensation of the bosons results in static or dynamical inhomogeneities, and consequently in a commensurate or an incommensurate resonance mode. The dynamics of the fermions determines the charge transport, and their strong coupling to the Lagrange-field bosons results in pairing and superconductivity. The calculated resonance mode in hole-doped cuprates agrees with the experimental results, and is shown to be correlated with the pairing gap on the Fermi arcs.

A variety of normal-state properties, including, e.g., the transport properties (i.e. resistivity, Hall coefficient and thermoelectric power) of both the cuprates [17-21] and the FeSCs [3,22,23], are characterized by a remarkably similar anomalous behavior. Also, in both systems the suppression of SC by a high magnetic field results in a zero-temperature insulator-to-metal transition upon doping [24,25]. Even though the pairing symmetry is different in the cuprates [26-28] and the FeSCs [29-32], a resonant spin excitation, characterized by wave vectors around those of the magnetic order in the parent compound, exists in the SC state of both systems [33-35]. The approximately tetrahedral arrangement of the pnictogen/chalcogen atoms around the iron atoms in the FeSCs is typical of covalent bonding, and thus considerable hybridization is expected between the orbitals of the two atoms. This is confirmed in electronic-structure calculations [10-16]; however, such hybridization is found in antibonding and bonding states which lie at least ∼1 eV away from the Fermi level (E_F), while the states in the close vicinity of E_F are non-bonding and of almost pure Fe(3d) nature [12-16]. Consequently, the intrasite Coulomb and exchange integrals corresponding to Wannier functions of the hybridized orbitals of the entire bands, which determine the Fermi surface (FS), magnetic moments, etc., may not be large in the FeSCs [36], resulting in itinerancy and largely reduced magnetic moments [7-9]. On the other hand, due to the dominantly Fe(3d) nature of the states at E_F, their intrasite integrals are rather large [36], and a large-U approach should be applied to study the physical properties (e.g. transport and SC) derived from low-energy excitations. This aspect of the electronic structure of the FeSCs is different from that of the cuprates, where an entire band around E_F is believed to correspond to large-U physics [28,37], and an insulating state with large gaps and magnetic moments exists in the parent compounds.
Low-energy carriers are present in the cuprates due to doping, and since such carriers in both the cuprates and the FeSCs correspond to the large-$U$ regime, a unified theory could be worked out for both of them. This theory should be valid also for other quasi-two-dimensional SC systems which are close to a magnetic instability and have large-$U$ electrons in the vicinity of $E_F$.

At the basis of this theory stands the observation that SC exists at stoichiometries where the dynamics of the low-energy carriers dominantly involves fluctuations between two adjacent occupation numbers ($n$) of atomic-like configurations ($3d^n$) around the copper or iron atoms. In the cuprates [28,37] these are fluctuations between effective Cu($3d^9$) (hybridized with O($2p$) orbitals) and Cu($3d^{10}$) configurations for electron doping, and between effective Cu($3d^9$) and Cu($3d^8$) (obtained through Zhang-Rice-type hybridization with O($2p$) orbitals) configurations for hole doping. In the FeSCs these are fluctuations between Fe($3d^6$) and Fe($3d^7$) configurations for electron doping, and between Fe($3d^6$) and Fe($3d^5$) configurations for hole doping.

Such dynamics of carriers could be treated through the auxiliary-particle approach [38]. A configuration corresponding to an occupation number $n$ is denoted by $\alpha(n)$, and a combined orbital-spin index of an atomic-like electron by $\eta$. For notational simplicity, let $\alpha(n-1(\eta))$ be the configuration obtained by removing an $\eta$ electron from $\alpha(n) \ni \eta$. The operator $a^\dagger_{i\alpha(n)}$ creates an auxiliary particle representing the configuration $\alpha(n)$ at site $i$ (a two-dimensional approximation is applied, with points $\mathbf{R}_i$ on a planar lattice which could be defined to contain one Cu or Fe atom per unit cell [11,39]). The creation operators of electrons of spin-orbitals $\eta$ at sites $i$ can be expressed as:

$$d^\dagger_{i\eta} = \sum_{n}\,\sum_{\alpha(n)\ni\eta} a^\dagger_{i\alpha(n)}\, a_{i\alpha(n-1(\eta))}. \quad (1)$$

They satisfy anticommutation relations of independent fermion operators under the following conditions: (i) the consequence of the large-$U$ approximation, that only the contribution of two adjacent values of $n$ could be considered in the rhs of Eq. (1), is valid; (ii) the auxiliary particles created by $a^\dagger_{i\alpha(n)}$ are either bosons for even $n$ and fermions for odd $n$, or fermions for even $n$ and bosons for odd $n$; (iii) the following constraint is satisfied at every site $i$:

$$\sum_{n}\,\sum_{\alpha(n)} a^\dagger_{i\alpha(n)}\, a_{i\alpha(n)} = 1. \quad (2)$$

As was discussed above, two occupation numbers ($n$) are considered, including $n_0$ (corresponding to the parent compound), and either $n_0+1$ (for electron doping) or $n_0-1$ (for hole doping). Let us denote by $\alpha$, $\beta$ and $\gamma$ the configurations corresponding to the occupation numbers $n_0+1$, $n_0$ and $n_0-1$, respectively. Their creation operators at site $i$ are denoted by:

$$e^\dagger_{i\alpha} \equiv a^\dagger_{i\alpha(n_0+1)}, \qquad s^\dagger_{i\beta} \equiv a^\dagger_{i\beta(n_0)}, \qquad h^\dagger_{i\gamma} \equiv a^\dagger_{i\gamma(n_0-1)}. \quad (3)$$

Auxiliary particles created by $s^\dagger_{i\beta}$ are chosen as bosons, and thus those created by $e^\dagger_{i\alpha}$ and $h^\dagger_{i\gamma}$ are fermions.

The Hamiltonian $H$, applied to study low-energy electron excitations, is based on intrasite one- and two-particle terms, and intersite one-particle terms. It is expressed in terms of the auxiliary-particle operators through Eqs. (1,3). A grand-canonical formalism is applied by including in the Hamiltonian terms corresponding to the chemical potential $\mu$, and to a field of Lagrange multipliers $\lambda_i$ (with $\lambda = \langle \lambda_i \rangle$) associated with the auxiliary-particle constraint [Eq. (2)]. The values of $\lambda_i$ and $\mu$ should be determined to yield the correct charge and constraint at every site.
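As an illustration of how conditions (i)-(iii) work together, one can check same-site anticommutation in the hole-doped one-orbital case, writing the electron operator as $d^\dagger_{i\sigma} = s^\dagger_{i\sigma} h_i + \sigma\, e^\dagger_i s_{i,-\sigma}$ (the relative phase $\sigma$ is one common convention, assumed here rather than taken from the text). Using bosonic commutation for $s$ and fermionic anticommutation for $h$, $e$, the cross terms vanish between states satisfying Eq. (2), and one finds

$$\{d_{i\sigma},\, d^\dagger_{i\sigma}\} = h^\dagger_i h_i + s^\dagger_{i\sigma} s_{i\sigma} + s^\dagger_{i,-\sigma} s_{i,-\sigma} + e^\dagger_i e_i = 1,$$

where the last equality holds precisely on the constrained subspace; this is why the composite operators behave as canonical fermions only when the constraint is enforced, which is the role of the Lagrange field below.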
$H$ could be formally expressed, through Eq. (4), in terms of constraint-preserving combinations of the auxiliary-particle operators. The $\lambda_i - \lambda$ Lagrange field represents an effective fluctuating potential which prevents, through $\Delta H$, constraint-violating fluctuations in the auxiliary-particle site occupation (thus enabling the treatment of atomic-like electron configurations as bosons or fermions). The effect of such a fluctuating potential on these configurations is analogous to the effect of vibrating atoms on electrons. Consequently, similarly to lattice dynamics, the quantization of the $\lambda_i - \lambda$ field yields bosons.

In the cuprates one often applies a one-orbital model [28,37], under which there is one $\alpha$ configuration, corresponding to a complete Cu($3d^{10}$) shell, one $\gamma$ configuration corresponding to a Zhang-Rice singlet, and two $\beta$ configurations corresponding to the spin states of the orbital, $\sigma = \uparrow$ and $\sigma = \downarrow$ (also presented here as $\sigma = \pm 1$). The present auxiliary-particle method then becomes the "slave-fermion" method applied in previous works by the author [40,41]. The parameters appearing in Eq. (4) are then simplified to the intrasite and transfer (hopping) integrals of Eq. (5).

In the FeSCs one needs at least three Fe($3d$) orbitals [10-16] (of the $xz$, $yz$ and $x^2 - y^2$ symmetries) to study the electrons in the vicinity of $E_F$, and there are numerous $\alpha$, $\beta$ and $\gamma$ configurations. The parameters appearing in Eq. (4) are then derived from intersite transfer, and intrasite one-particle, Coulomb and Hund's-rule exchange integrals [42].

Within the large-$U$ approximation, applied in the derivation of $H$, it could be approximated by omitting either $H_e$ and the $\alpha$ term in $\Delta H$, or $H_h$ and the $\gamma$ term in $\Delta H$, and applying a second-order perturbation expansion in $H_{eh}$. This results in corrections to hopping and intersite exchange terms which are expressed, within a one-orbital model for the cuprates, in Eq. (6), and Eq. (4) is approximately replaced, for hole-doped stoichiometries, by Eq. (7). Considering values of $t(\mathbf{R})$ up to third-nearest-neighbor $\mathbf{R}$, and of $t(\mathbf{R},\mathbf{R}')$ and $J(\mathbf{R})$ in Eq. (6) for nearest-neighbor $\mathbf{R}$ and $\mathbf{R}'$, yields an expression for $H$ in terms of the parameters $t$, $t'$, $t''$ and $J$. Values of these parameters for hole-doped cuprates have been obtained in first-principles calculations [43-46]. Explicit expressions for $H$ (and its terms discussed further below), derived on the basis of Eq. (7), will appear elsewhere [47].

The Lagrange-field bosons are referred to as "lagrons". They are soft at wave vectors corresponding to major fluctuations of spin and orbital densities. A typical lagron spectrum in hole-doped cuprates is presented in Fig. 1; it has soft modes at the points

$$\mathbf{Q}_m = \mathbf{Q} + \delta\mathbf{q}_m, \quad (8)$$

where $\mathbf{Q} = (\pi/a)(\hat{\mathbf{x}}+\hat{\mathbf{y}})$ is the wave vector of the antiferromagnetic (AF) order in the parent compounds, and $\delta\mathbf{q}_m = \pm\,\delta q\,\hat{\mathbf{x}}$ or $\pm\,\delta q\,\hat{\mathbf{y}}$ are modulations around it, corresponding to striped structures [48-51].

The $s_{i\beta}$-field bosons are referred to as "svivons". Their Bose condensation is manifested, at low doping levels, in AF order in the cuprates [40], and in a structural distortion and magnetic order, characterized by a spin-density wave (SDW), in the FeSCs [7-9]. At higher doping levels the Bose condensation of svivons is manifested in static or dynamical inhomogeneities, based on modulations of the low-doping order. When svivons are Bose condensed, an $s_{i\beta}$ field operator can be expressed as a sum of its "condensed" part (i.e. the nonzero $\langle s_{i\beta} \rangle$) and fluctuating part $s_{i\beta} - \langle s_{i\beta} \rangle$.
Thus, the expression of an electron creation operator in terms of products of auxiliary-particle operators, through Eqs. (1,3), includes terms where either $e^\dagger_{i\alpha}$ or $h_{i\gamma}$ is multiplied by a condensed part of svivon operators, and terms where it is multiplied by their fluctuating part. A "quasi-electron" (QE) is defined as the fermion created by a normalized approximation to an electron creation operator, where only the terms in its expression which include condensed parts of svivon operators are maintained. The QEs represent hypothetical approximate electrons which do not introduce fluctuations to the inhomogeneities resulting from the Bose condensation of the svivon field. Since QE states are expanded as combinations of auxiliary-particle fermion states created by either the $e^\dagger_{i\alpha}$ or the $h_{i\gamma}$ operators, these auxiliary-particle states form a basis for the QE states, and could be referred to as QEs as well.

Thus, the problem of SC in strongly-interacting electron systems is treated in terms of an auxiliary space consisting of three types of coupled "particles": (i) boson svivons, which represent combinations of atomic-like electron configurations of the parent compounds, and whose condensation results in static or dynamical inhomogeneities; (ii) fermion QEs, which represent combinations of such configurations with an excess of an electron or a hole over those of the parent compounds, and whose dynamics largely determines charge transport; (iii) boson lagrons, which represent an effective fluctuating potential, enabling the treatment of the above configurations as bosons and fermions. Within this auxiliary space the pairing between the fermions through the exchange of bosons could be rigorously worked out in terms of coupled independent fields, in analogy to the electron and phonon fields within the BCS-Migdal-Eliashberg theory. The strong coupling between QEs and lagrons, necessary for the constraint [Eq. (2)] to be satisfied, results in high pairing temperatures. If the same scenario were worked out as pairing between electrons through the exchange of spin or charge fluctuations generated by the same system of electrons, then two problems would have existed: (i) it is doubtful that such strongly-interacting electrons could be treated as quasiparticles; (ii) the coupled fermion and boson fields are not independent of each other.

Svivon and QE spectra in hole-doped cuprates have been evaluated through a self-consistent second-order diagrammatic expansion [47], where a mean-field treatment of $H$ in Eq. (7) is applied at the zeroth order. The expansion is carried out on two Hamiltonian terms. One of them is $\Delta H$, which introduces svivon-lagron and QE-lagron coupling. Vertex corrections to it are negligible by a phase-space argument, as in Migdal's theorem, since the dominant contribution of the fluctuating part of the constraint [Eq. (2)] comes from a limited $\mathbf{k}$-space range of the lagron spectrum around point $\mathbf{Q}$ (see Fig. 1). The other term, $H'$, introducing QE-svivon coupling, is the contribution of the fluctuating part of the svivon operators to $H$. It is treated as a perturbation, and approximated through a first-order expansion of the rhs of Eq. (7) in terms like $s_{i\sigma} - \langle s_{i\sigma} \rangle$ [47].

Lagron spectra of the type presented in Fig. 1 determine degenerate Bose-condensed svivon states, with energy minima at points $\pm\mathbf{Q}_m/2$, for one of the four values of $m$ in Eq. (8). Since there are four inequivalent values of $\mathbf{Q}/2$ at $\pm(\pi/2a)(\hat{\mathbf{x}}\pm\hat{\mathbf{y}})$, the number of possible condensates is eight.
In the absence of symmetry-breaking long-range order, the system is generally in a combination of these states (reflecting fluctuations between them). Tetragonal symmetry occurs when all eight degenerate states are combined, while orthorhombic symmetry breaking results in the combination of four of the eight states. The resulting stripe-like inhomogeneities [48-51] (which resemble a checkerboard in the combination state) would be static or dynamical, depending on how close to zero the spectrum minima are. As they occur in Bose fields, the svivon spectral functions are positive at positive energies, and negative at negative ones. Their absolute values for two typical cases, of different nonzero spin gaps, in hole-doped cuprates below $T_c$ (where the low-energy svivon linewidths are small) are presented in Fig. 2. Shown are the results for the svivon condensate with energy minima at $\pm[(\pi/2a)(\hat{\mathbf{x}}+\hat{\mathbf{y}}) + \tfrac{1}{2}\delta q\,\hat{\mathbf{x}}]$, and the average of the results for the four condensates with minima in the vicinity of $\pm(\pi/2a)(\hat{\mathbf{x}}+\hat{\mathbf{y}})$ (representing their combination), in vertical and diagonal directions around this point.

The QE spectrum of hole-doped cuprates has been evaluated treating the fluctuations between the combined svivon condensates adiabatically, as is detailed in a separate paper [52]. By the definition of electron creation operators in Eq. (1), the electron Green's functions are obtained at the zeroth order as sums of products of QE and svivon Green's functions. This results in the non-Fermi-liquid (non-FL) scenario of a distribution of convoluted QE-svivon poles.

FIG. 3 (caption fragment): ...Fig. 2; the small- and large-spin-gap results demonstrate, respectively, the existence of an incommensurate [34] or a commensurate [33] resonance mode.

It is shown [52] that multiple scattering of QE-svivon pairs introduces to the electron Green's functions additional FL-like electron poles, and thus the effect of both types of poles is reflected in various physical properties.

The spin susceptibility (SS) of hole-doped cuprates has been evaluated under an approximation where only the non-FL convoluted QE-svivon poles are considered [52]. Linear-response theory has been applied on the basis of spin-flip processes, expressed by constraint-preserving terms of the form $s^\dagger_{i\sigma} s_{i,-\sigma}\, s^\dagger_{j,-\sigma} s_{j\sigma}$, and thus determined by the svivon spectrum. Results obtained for the imaginary part of the SS, in vertical and diagonal directions around $\mathbf{k} = \mathbf{Q}$, are presented in Fig. 3. They correspond to the two svivon spectra shown in Fig. 2, and since the svivon system is in a combination state, the SS results are averaged over those of the four combined condensates. These results reproduce those observed in neutron-scattering measurements in different hole-doped cuprates. The larger spin-gap results correspond to the "commensurate resonance mode (RM)" [33], and the smaller spin-gap results correspond to the "incommensurate RM" [34].

If the constraint is imposed at any two sites $i$ and $j$, the equality of Eq. (9) should be satisfied at these sites in hole-doped cuprates [see Eqs. (2,3)]. The terms on the lhs of Eq. (9) are formally similar to the above spin-flip term applied for the derivation of the SS results presented in Fig. 3. Thus, a susceptibility-like function, referred to as the "constraint susceptibility" (CS), could be derived on the basis of either the svivon spectrum, through the lhs of Eq. (9), or the QE spectrum, through the rhs of Eq. (9).
The results obtained for the CS on the basis of both spectra should agree with each other in order for the constraint to be satisfied, and this condition is the basis for the determination of the lagron spectrum, and of its coupling to the svivons and QEs. The CS represents the response of auxiliary particles, and not of electrons. However, it reflects, under certain conditions, an approximation to the response of the system to charge fluctuations, which could be measured, e.g., by Raman spectroscopy [26,53].

FIG. 4: The imaginary part of the constraint susceptibility corresponding to the svivon spectra for hole-doped cuprates below $T_c$ presented in Fig. 2; the low-energy peaks around $\mathbf{k} = 0$, approximately, correspond to the integrated energies of the resonance-mode peaks shown in Fig. 3; these peaks also, approximately, correspond through Eq. (9) to the SC gap over the Fermi arcs (see discussion in the text).

Results obtained for the imaginary part of the CS, on the basis of the lhs of Eq. (9), in vertical and diagonal directions around $\mathbf{k} = 0$, are presented in Fig. 4. They correspond to the two svivon spectra shown in Fig. 2, and are evaluated similarly to the SS results presented in Fig. 3. The major feature observed in the CS results is a low-energy peak around $\mathbf{k} = 0$ at energies which, approximately, correspond to the energies of the $\mathbf{k}$-integrated low-energy features of the SS in the vicinity of $\mathbf{k} = \mathbf{Q}$ (thus the incommensurate or commensurate RM).

Since the same CS results as those presented in Fig. 4 should be obtained also on the basis of the QE spectrum through the rhs of Eq. (9), and since they correspond to the SC state, the observed peak at $\mathbf{k} = 0$ should represent some kind of average value of the QE gap below $T_c$. As is explained elsewhere [52], this gap has two contributions; one of them originates from Brillouin-zone (BZ) ranges around the antinodal points, where a narrow peak (of energy $\epsilon = 0$ at $T = 0$), lying between two humps, splits due to pairing below $T^*$; the other contribution to that gap opens below $T_c$ on the Fermi arcs (FAs) around the line of nodes. In the SC state there are both "normal" and "anomalous" (pair-correlation) QE Green's functions, and their contributions to the QE expression for the CS have opposite signs [47]. These contributions cancel each other for "gap-edge states", where $\epsilon = 0$ and $E = \sqrt{\epsilon^2 + \Delta^2} = \Delta$, and thus the fraction of both the particle and the hole states within the Bogoliubov states is $\frac{1}{2}[1 \pm \epsilon/E] = \frac{1}{2}$. So the QE-spectrum contributions to the CS peak at $\mathbf{k} = 0$ come from states where $\epsilon \ne 0$. Consequently [52], the averaged QE gap which determines the $\mathbf{k} = 0$ CS peak is only weakly weighted around the antinodal points, and represents a value somewhat larger than the averaged QE gap on the FAs. Since the averaged electron FA gap is also somewhat larger than the QE FA gap (due to convolution with svivon states), one expects a correlation between the values of this gap and the $\mathbf{k} = 0$ CS peak, and, as was discussed above (see Figs. 3 and 4), also with the averaged RM energy. The electron FA gap has been measured through, e.g., the $B_{2g}$ Raman mode, and its value has indeed been found to be correlated with the RM energy [26,27,33]. A correlation between the energies of the $A_{1g}$ Raman mode and the RM [53] has been found to be partial [54]. The observed correlation of the FA gap with $\sim 5 k_B T_c$ [26,27] is explained elsewhere [52]. The fact that the average RM energy is lower when it is incommensurate (see Fig. 3) explains the observation that $T_c$ is lower in cuprates with an incommensurate RM.
Even though the electronic structure of low-energy states in the FeSCs is based on more orbitals than in the cuprates, important physical conclusions could be drawn from one system to the other due to the formally common Hamiltonian applied for both of them. Within a two-dimensional approximation, the lagron spectrum of the FeSCs is expected to differ from that of the cuprates, presented in Fig. 1, by shifting the minima positions from satellite points around $\mathbf{Q}$ to satellite points around the two possible SDW wave vectors in the parent compounds, $\mathbf{Q}_1 = (\pi/a)\hat{\mathbf{x}}$ and $\mathbf{Q}_2 = (\pi/a)\hat{\mathbf{y}}$ [7,8] (or $\mathbf{Q}_1 = (\pi/2a)(\hat{\mathbf{x}}+\hat{\mathbf{y}})$ and $\mathbf{Q}_2 = (\pi/2a)(\hat{\mathbf{x}}-\hat{\mathbf{y}})$ [9]), or points close to them. Similarly to the cuprates [48-51], stripe-like inhomogeneities, characterized by modulations due to the differences between the satellite points and $\mathbf{Q}_1$ or $\mathbf{Q}_2$, could exist also in the FeSCs. The svivon spectrum in the FeSCs is expected to have analogous features to those of the cuprates, presented in Fig. 2, resulting in a resonance mode in the vicinity of $\mathbf{Q}_1$ and $\mathbf{Q}_2$, below $T_c$, as has been observed [35].

In a separate paper [55], it is explained that QE pairing requires a sign reversal of the order parameter upon a shift by $\mathbf{Q}$ in the BZ in the cuprates, and by $\mathbf{Q}_1$ or $\mathbf{Q}_2$ in the FeSCs. Due to their different FSs, this results in a pairing symmetry of an approximate $d_{x^2-y^2}$ type in the cuprates, and of an $s_\pm$ type (thus with different signs on different FS pockets) in the FeSCs. Thus, it is predicted that there are no Fermi arcs in the FeSCs, and that their RM energy is correlated with an averaged value of the SC gap, as has been observed [30-32,35].

It could be concluded that high-$T_c$ SC occurs in quasi-two-dimensional strongly-interacting electron systems due to the fact that low-energy excitations in them are described in terms of auxiliary particles, representing combinations of atomic-like electron configurations, rather than electron-like quasiparticles. A Lagrange Bose field, which must be introduced to enable the treatment of these auxiliary particles as fermions or bosons, serves as the pairing glue between the fermions.
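To make the sign-reversal pairing condition quoted above concrete, here is a minimal check (my illustration, not an equation from the paper): a $d_{x^2-y^2}$ gap function on the cuprate FS indeed reverses sign under a shift by $\mathbf{Q} = (\pi/a)(\hat{\mathbf{x}}+\hat{\mathbf{y}})$,

$$\Delta(\mathbf{k}) = \Delta_0\left[\cos(k_x a) - \cos(k_y a)\right], \qquad \Delta(\mathbf{k}+\mathbf{Q}) = \Delta_0\left[\cos(k_x a + \pi) - \cos(k_y a + \pi)\right] = -\Delta(\mathbf{k}),$$

while an $s_\pm$ gap with opposite signs on hole and electron pockets separated by $\mathbf{Q}_1$ or $\mathbf{Q}_2$ satisfies the analogous condition on the FeSC Fermi surfaces.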
Kinetic characteristics of partially purified invertase from Citrullus lanatus rind

Invertases are enzymes that hydrolyze sucrose to produce an equimolar mixture of glucose and fructose. They are widely used in various industrial food applications. The aim of this study was to isolate, partially purify, and characterize invertase from Citrullus lanatus rind. Invertase isolated from C. lanatus rind was purified 46.94-fold with a 23.19% yield by means of ammonium sulphate precipitation, dialysis and Sephadex G-25 gel filtration chromatography. The enzyme has an optimum temperature of 50 °C and maximum activity at pH 7, with a relatively high activity at pH 4. Invertase from C. lanatus rind maintained its activity at 50 °C and 95 °C after 20 minutes of incubation. Maximum activity of the enzyme occurred at 0.25 M sucrose concentration. The kinetic parameters Km and Vmax were 15 mM and 40 μM/min, respectively. C. lanatus rind invertase was inhibited by Fe2+, Cu2+, Mg2+ and Ag+, while Co2+ enhanced its activity. Zn2+ had relatively little or no effect on the activity. Thus, C. lanatus rind may be employed as a local source for the production of invertase enzyme.

INTRODUCTION

Invertase (EC 3.2.1.26), also known as β-fructofuranosidase, is an enzyme which catalyzes the breakdown of sucrose, a non-reducing disaccharide, to fructose and glucose, which are reducing monosaccharides (Ahmed, 2008). The mixture of glucose and fructose produced is called inverted sugar syrup (Mobini-Dehkordi et al., 2008). Invertases exist in different isoforms in nature, and these isoforms are differentiated by their locations: in yeast cells, for example, invertase is present in two forms, extracellular or intracellular, while in plants it occurs as three isoforms, each differing in biochemical properties and subcellular location (Acosta et al., 2000). Invert sugar consists of an equimolar mixture of fructose and glucose, which has been reported to be sweeter and to have lower crystallinity than sucrose (Goosen et al., 2007). Invertases are used in various industrial food applications, especially in the preparation of jams and candies; these enzymes are also essential in the production of non-crystallizing creams, artificial honey, lactic acid, ethanol and confectionery, as well as in digestive aid tablets, powdered milk for infants and other infant foods (Acosta et al., 2000; Phadtare et al., 2004; Sikander, 2007). Despite the wide range of applications of invertase in various industries, the commercially available invertase is rather expensive, thus limiting the applicability of the enzyme. Micro-organisms are mainly employed in the production of invertase, in a process that needs very strict regulation of production conditions and requires a high level of purification for taste and health reasons, thereby making the enzyme expensive (Laluce, 1991). Plant enzymes have been reported to have higher thermal stability than microbial enzymes (Tananchai and Chisti, 2010). Thermostability is a very important prerequisite for the industrial applicability of enzymes. Citrullus lanatus (water melon) is a vine-like flowering plant from the family Cucurbitaceae. It comprises about 6% sugar and 91% water, with the residual portion consisting of vitamins and minerals.
Water melon has been reported in several studies to have therapeutic properties, including antihypertensive, anti-diabetic, antioxidant, anti-inflammatory and antimicrobial effects (Arise et al., 2016a; Arise et al., 2016b; Erhirhie and Ekene, 2013). The watermelon rind is usually discarded as waste, although it is edible (Al-Sayed and Ahmed, 2013). The increasing concern about pollution from agricultural and industrial wastes has stimulated interest in the conversion of waste materials into commercially valuable products such as enzymes, oil, pectin, wine/vinegar, etc. (Rashad and Nooman, 2009; Sangeetha et al., 2005). Therefore, the aim of the study was to isolate, partially purify and characterize invertase from watermelon rind, as well as to determine the kinetic factors and conditions that maximize the activity of the enzyme, which will help boost its suitability for industrial prospects.

Preparation of Plant Extract

The rind was removed from the whole water melon ball, and the white portion of the rind was carefully scraped into a clean container, cut into small pieces and weighed. Three thousand milliliters (3000 ml) of ice-cold sodium phosphate buffer (pH 7) containing 1 mM EDTA and 50 mM sodium metabisulfite was added to 1700 g of the rind. A sterilized stainless Maxwell blender, placed in a freezer for 24 hours, was used to pulverize the mixture of rind and buffer. The entire slurry was filtered using a clean three-layer muslin cloth at 4 °C and then centrifuged at 15,000 × g for 30 minutes at 1 °C using a cold centrifuge. The supernatant obtained afterwards was stored at 4 °C and served as the crude enzyme source.

Invertase and protein assays

Assay for invertase activity was carried out using a 5-minute standard test as described by Timerman (2012). The original enzyme source (OES) was diluted five (5) times by adding 8 ml of 50 mM sodium phosphate buffer to 2 ml of OES to obtain 10 ml of diluted enzyme source (DES). An aliquot volume (0.5 ml) of DES was carefully transferred into a test tube containing 2 ml of substrate solution (composed of 50 mM sucrose in 50 mM sodium phosphate buffer at pH 7) and the reaction was allowed to proceed for 5 minutes, after which the reaction was stopped by denaturation, using the rapid addition of 2.0 ml of alkaline solution composed of NaOH, 3,5-dinitrosalicylate, and sodium potassium tartrate. The assay tube was transferred to a boiling water bath for 7-8 minutes. The solution was then allowed to cool and diluted to a final volume of 6.1 ml with the addition of 1.6 ml of 50 mM sodium phosphate buffer at pH 7. Absorbance of the assay was measured spectrophotometrically at 540 nm (a sketch of the activity calculation follows this subsection).
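As a rough sketch of how activity could be computed from this DNS-type assay (my illustration; the glucose standard-curve slope and the unit bookkeeping are assumptions, not values reported in the paper):

```python
# Sketch: invertase activity from the 5-minute DNS assay (A540 reading).
# Assumed: a glucose standard curve with slope `slope_a540_per_umol`
# (absorbance per umol of reducing sugar in the 6.1 ml assay volume).

def invertase_activity(a540, blank_a540, slope_a540_per_umol,
                       reaction_min=5.0, dilution_factor=5.0, sample_ml=0.5):
    """Return activity as umol reducing sugar released per minute per ml of OES."""
    umol_sugar = (a540 - blank_a540) / slope_a540_per_umol  # umol in assay tube
    rate = umol_sugar / reaction_min                        # umol/min in the tube
    return rate * dilution_factor / sample_ml               # scaled back to the OES

# Example with made-up readings:
print(invertase_activity(a540=0.45, blank_a540=0.05, slope_a540_per_umol=0.5))
```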
Ammonium sulphate precipitation

Ammonium sulphate precipitation was carried out in an ice bath using freshly prepared ammonium sulphate crystals. Powdered ammonium sulphate was weighed, slowly added to the crude extract, stirred gently, and the solution kept overnight at 4 °C. After saturation, each precipitate was collected by centrifugation at 16,500 × g for 30 minutes at 4 °C. The precipitate was redissolved in sodium phosphate buffer (pH 7) and then assayed for invertase activity (5 ml buffer for every 5 ml precipitate). The saturation fraction with the highest invertase activity was further purified. The protein concentration was determined using the Biuret method, as described by Gornall et al. (1949), with bovine serum albumin (BSA) serving as standard.

Dialysis of precipitated fraction

The ammonium sulphate precipitation fraction was dialysed overnight with intermittent stirring. The dialysis buffer was changed 3 times at 2 h intervals. The invertase activity and protein concentration of this fraction were determined as described above.

Gel filtration with Sephadex G-25

The Sephadex G-25 gel was pre-equilibrated with sodium phosphate buffer at pH 7; a slurry of the gel was poured into a column filled to one-quarter of its volume with buffer and allowed to settle for 24 hours. The dialyzed fraction was then loaded at the top of the gel and washed down with sodium phosphate buffer, pH 7. The fractions were collected by volume at intervals of 2 ml each, with a flow rate of 37 ml/h. The invertase activity and protein concentration of each fraction were determined and the active fractions were pooled.

Determination of optimum pH

Different 50 mM buffers (sodium phosphate, sodium acetate and Tris-HCl) covering the range pH 2-9 were prepared; each was used separately to incubate the enzyme for 30 min, after which the invertase activity was determined (Bhatti et al., 2006; Amin et al., 2008).

Determination of optimum temperature

Invertase activity was studied for optimum temperature using a method described by Amin et al. (2008). Five milliliters (5 ml) of the diluted enzyme source was incubated at temperatures ranging from 10 °C to 90 °C for 30 minutes and then assayed for invertase activity.

Effect of heat treatment

The effect of heat treatment on the activity of the enzyme was examined using a method described by Violet and Meunier (1989). Aliquot volumes of enzyme source were heated at 50 °C and 95 °C, and samples were removed at intervals of 5 minutes and then assayed for invertase activity. Three milliliters (3 ml) of enzyme source was pipetted into each of 6 sample bottles. Three milliliters (3 ml) of blank preparation was also pipetted simultaneously into another set of 6 sample bottles; all were placed in a hot water bath at the desired temperature. At 5 min intervals, a sample bottle containing enzyme and another containing blank were withdrawn, cooled to room temperature in order to attain equilibrium, and then assayed for invertase activity. Residual activity was expressed as a percentage of the activity under the standard assay condition.

Substrate kinetics

Substrate kinetics was done according to the method adapted by Sivakumar et al. (2012). Varying concentrations of substrate ranging from 50 mM to 300 mM were prepared. The standard 5 min assay was carried out with each substrate concentration and the absorbance read at 540 nm. Double-reciprocal graphs were then plotted to determine the Vmax and Km values of the enzyme (a minimal fitting sketch of this kind is given after this Methods section).

Effect of some metal ions on invertase activity

The method adapted by Bhatti et al. (2006) was used to measure the effect of some metal ions on the invertase activity. Fifty millimolar (50 mM) concentrations of each ion were prepared and the reaction mixtures were incubated for 30 minutes, after which the invertase activity was determined and compared with the activity of the enzyme in the absence of metal ion.

Statistical Analysis

All data were expressed as the mean of three replicates ± standard error of the mean (S.E.M). Statistical evaluation of data was performed with SPSS version 16 using one-way analysis of variance (ANOVA).
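A minimal sketch of the double-reciprocal (Lineweaver-Burk) estimation mentioned above, using illustrative (v, [S]) pairs consistent with Km = 15 mM and Vmax = 40 µM/min rather than the paper's raw data:

```python
# Sketch: estimate Km and Vmax from a Lineweaver-Burk (double-reciprocal) fit.
# 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax, so a straight-line fit of 1/v vs 1/[S]
# gives Vmax = 1/intercept and Km = slope * Vmax.
import numpy as np

s = np.array([50.0, 100.0, 150.0, 200.0, 250.0])  # substrate [S], mM (illustrative)
v = np.array([30.8, 34.8, 36.4, 37.2, 37.7])      # rate v, uM/min (illustrative)

slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)  # linear fit of the reciprocals
vmax = 1.0 / intercept
km = slope * vmax
print(f"Vmax = {vmax:.1f} uM/min, Km = {km:.1f} mM")  # ~40 uM/min and ~15 mM
```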
RESULTS

The three-step purification of invertase from water melon rind gave the following results.

Dialysis of precipitated fraction: The dialysate was assayed for invertase activity and the total protein determined. There was an increase in the specific activity of the enzyme after dialysis: the undialyzed fraction had 14.46 µM/min/mg while the dialyzed fraction had 116.29 µM/min/mg.

Gel filtration: The gel filtration purification profile is presented in Figure 1. Fraction 8 had the highest invertase activity, followed by fractions 9 and 10, respectively. These fractions were pooled together and further characterized. Going down the whole purification profile from crude extract to gel-filtered enzyme source, there was an increase in the specific activity of the enzyme while the protein concentration decreased, as unwanted proteins were removed through the purification process, as shown in Table 2. The gel filtration fraction had the highest specific activity (501.88 µM/min/mg), which eventually gave a final purification fold of 48.94 (the fold and yield bookkeeping is sketched below).
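A minimal bookkeeping sketch for the fold and yield figures (illustrative; the crude-extract specific activity is back-calculated from the reported fold, an assumption rather than a value from Table 2):

```python
# Sketch: purification fold and yield from specific and total activities.
def purification_fold(sa_step, sa_crude):
    # Specific activity (e.g. uM/min/mg protein) of a step vs the crude extract.
    return sa_step / sa_crude

def percent_yield(total_activity_step, total_activity_crude):
    # Fraction of the starting enzyme activity recovered at this step.
    return 100.0 * total_activity_step / total_activity_crude

sa_gel = 501.88              # uM/min/mg, reported for the pooled gel fractions
sa_crude = sa_gel / 48.94    # ~10.25 uM/min/mg, back-calculated (an assumption)
print(purification_fold(sa_gel, sa_crude))  # ~48.94, matching the reported fold
```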
Effect of substrate concentration on C. lanatus rind invertase activity: Substrate kinetics for the rate of sucrose hydrolysis by C. lanatus rind invertase was estimated using a Michaelis-Menten curve (Figure 2). The result showed that the activity of the enzyme increased progressively up to 250 mM substrate concentration, after which further increase in substrate concentration did not lead to an appreciable increase in activity of the enzyme. The Lineweaver-Burk plot for hydrolysis of sucrose catalyzed by C. lanatus rind invertase (Figure 3) gave 15 mM for the Km value and 40 µM/min for Vmax.

Effect of pH on the activity of invertase from C. lanatus rind: The effect of different pH values (2-9) on the activity of C. lanatus rind invertase was determined. The result showed that the activity of C. lanatus rind invertase was at its peak at pH 7, but it also had a very high activity at pH 4, as graphically represented in Figure 4.

Determination of optimum temperature: The graphical representation of the effect of temperature on the activity of C. lanatus rind invertase is shown in Figure 5. The peak activity of the enzyme was observed at 50 °C, with a gradual decline in activity as the temperature was increased further.

Effect of heat treatment on C. lanatus rind invertase activity: The effect of heat treatment at 50 °C and 95 °C on the activity of C. lanatus rind invertase is represented in Figure 6. The enzyme was extremely thermostable, as it retained very high activity after incubation for 20 min at 50 °C and 95 °C, and the activity slightly decreased with further incubation. The optimum activity observed after 20 min of incubation at 50 °C was 30% higher than the activity observed at less than 10 °C (the temperature at which all reactions were carried out), while the activity was 32.5% higher after incubation for 20 min at 95 °C.

Effect of some metal ions on C. lanatus rind invertase activity: The effect of different concentrations of Fe2+, Cu2+, Co2+ and Zn2+ on the rate of hydrolysis of sucrose catalyzed by C. lanatus rind invertase was observed. These metal ions generally caused a reduction in the activity of C. lanatus rind invertase at 50 mM concentration (Figure 7). In the presence of Fe2+, there was an initial 19% increase in activity of C. lanatus rind invertase at 30 mM concentration, but a 48% reduction in activity occurred at 50 mM concentration when compared to control. In the presence of Cu2+, there was also an initial 18% increase in the enzyme's activity at 20 mM concentration, while there was a 24% reduction in activity at 50 mM concentration when compared with control. In the presence of Co2+, there was a 38% increase in activity at 20 mM concentration, while it brought about a 1% reduction in activity at 50 mM concentration when compared with control. However, in the presence of Zn2+ at 20 mM and 50 mM concentrations, there was a 29% and 9% increase in activity of the enzyme, respectively, when compared with the control. It therefore means there is a certain concentration at which each of these metal ions has a beneficial effect on the activity of invertase, beyond which it becomes inhibitory to invertase activity. Figure 8 shows the graphical representation of the effect of Zn2+, Co2+ and Ag+ on the activity of C. lanatus rind invertase. The result obtained was compared with the control and showed that the presence of Zn2+ caused considerably little or no effect on the activity of C. lanatus rind invertase; Co2+ was shown to cause a 17% increase in activity of the enzyme, while an 85% decrease in activity was observed in the presence of Ag+. Figure 9 shows a bar chart summarizing the effect of metal ions on the activity of C. lanatus rind invertase in comparison with the activity of the enzyme without any metal ion.

Inhibition Studies: The mode of inhibition exhibited by Fe2+, Cu2+ and Mg2+ was determined. Lineweaver-Burk plots were used to ascertain the mode of inhibition of C. lanatus rind invertase by the above-listed metals. Figure 10 shows that all the investigated inhibitors exhibited the mixed type of inhibition.

DISCUSSION

The purification fold of 46.938 with a yield of 23% obtained from this study was higher than what was obtained for Aspergillus terreus, which was 8.21-fold but with a better yield of 76.04% (Shaker, 2015). Similar to this observation is the work of Guimaraes et al. (2009), who reported a 24% yield of purified invertase. Aslam et al. (2013) also reported a purification fold of 15 with a recovery of 38% yield for an extracellular invertase purified with ammonium sulphate precipitation and DEAE Sephadex A-50. The effect of pH on the activity of invertase isolated from C. lanatus rind was investigated. Stability was observed over a pH range of 2-9 with different buffers (acetate, phosphate, Tris-HCl), with the highest peaks at pH 4 and pH 7, which is an indication of the presence of acidic and alkaline invertases. This agrees with the result of Liu et al. (2005), who reported the presence of acidic and alkaline invertase at pH 4.5 and pH 7, respectively, in Bambusa edulis. The ionization state of amino acid residues present in the active site of an enzyme is normally pH dependent, so changes in the pH will significantly affect the ionic state of amino and carboxylic acid groups on the protein and, consequently, the conformation and catalytic site of the enzyme (Essel and Osei, 2014). Invertase from C. lanatus rind had an optimum temperature of 50 °C. Similarly, invertase from Saccharomyces cerevisiae also showed its highest activity at 50 °C, while that from Saccharomycopis fibuligera showed an optimum temperature of 55 °C (Skowronek et al., 2003). Biological reactions happen faster with increasing temperature until the point of enzyme denaturation, above which the enzyme activity and the rate of the reaction decrease abruptly (Marepally, 2017).
The initial increase in enzyme activity as the temperature increases is possibly due to an increase in reaction rate, as a result of the increased kinetic energy of the reacting molecules. Thermostability studies of C. lanatus rind invertase revealed that the enzyme was highly thermostable, with optimum activity obtained at 50 °C and 95 °C after 20 min of incubation. Esawy et al. (2014) reported that free invertase isolated from honey lost its activity completely at 70 °C after 45 minutes, while the immobilized enzyme kept 80% of its original activity under the same conditions. This suggests that immobilization of C. lanatus rind invertase would very possibly increase its thermostability substantially. Thermal stability is also an important criterion in choosing an enzyme for industrial use (Esawy et al., 2014). A double-reciprocal plot of the enzyme's affinity for sucrose gave a straight-line graph from which the Km was calculated to be 15 mM and the Vmax 40 µM/min. The kinetic parameters are similar to those of Hsiao et al. (2002), who reported a Km of 15.28 mM for an invertase isolated from rice. However, Gallagher and Pollock (1998) reported a Km of 18 mM for Lolium temulentum invertase, which was higher than the Km obtained in this study. The observed decrease in activity of C. lanatus rind invertase in the presence of Fe2+, Cu2+ and Mg2+ is similar to that reported by Esawy et al. (2014), in which Fe2+, Cu2+ and Mg2+ brought about a reduction in the activity of invertase isolated from honey. Cu2+ also reportedly inhibited the invertase activities of carrot peels (Zill-e-Huma, 2015) and spent yeast (Kumar and Kesavapillai, 2015). Zn2+ was found to have relatively little or no effect on the activity of the enzyme, whereas Uma et al. (2010) reported Zn2+ to be a competitive inhibitor of the invertase enzyme. Kumar and Kesavapillai (2015) also reported slight inhibition of invertase activity by Zn2+. The activity of C. lanatus rind invertase was found to be enhanced by incubating it with Co2+. The increase in activity of invertase in the presence of Co2+ is in agreement with many previous reports (Rubio et al., 2002; Kumar and Kesavapillai, 2015; Zill-e-Huma, 2015). However, the activity of the enzyme was almost completely lost, as an 85% reduction in activity was observed on incubation with Ag+. Inhibition or activation of invertase activity by metals may be due to the effect metals have on the amino acid residues present at the active site and the exterior surface of the enzymes. This may bring about alterations in the charge of the amino acids or structural distortions (Salis et al., 2007).

CONCLUSION

This study established the presence of invertase activity in Citrullus lanatus rind. Invertase isolated from C. lanatus rind was found to have maximum activities at pH 4 and pH 7 and an optimum temperature of 50 °C. It was found to be highly thermostable, as it maintained a high level of activity at 50 °C and 95 °C after incubation for 20 minutes. The activity of the enzyme was affected by the presence of metal ions to various degrees. Hence, C. lanatus rind may be recommended as a local source for the production of invertase enzyme, thus reducing the cost of production significantly.

DECLARATION OF CONFLICT OF INTEREST

The authors declare that there is no conflict of interest.
Colour-straight four-quark operators and lifetimes of beautiful hadrons

Using the relation between the harmonic oscillator wave function and the light quark scattering form factor, the expectation values of colour-straight four-quark operators are evaluated and found to be directly proportional to the cubic power of the oscillator strength. It is predicted that the ratio $\tau(\Lambda_b)/\tau(B) \approx 0.79\,(0.84)$ due to the factorizable (nonfactorizable) piece, against the experimental $0.79 \pm 0.06$. Notwithstanding the numerical prediction, the present study shows that the four-quark operators play a role as far as the lifetimes of b-flavoured hadrons are concerned.

I. INTRODUCTION

In the description of heavy hadron decays by the heavy quark expansion (HQE), the preasymptotic effects appearing at next-to-leading order and beyond are vital for predicting the decay properties accurately. These effects are due to operators of dimensions $D = 5$ and 6. At $1/m_Q^2$, these operator contributions are suppressed relative to the leading order. The evaluation of the $D = 5$ operators, which describe the motion of the heavy quark inside the hadron and the chromomagnetic interaction, is well defined. The estimation of the $D = 6$ operators, which are four-quark operators (FQO) containing both heavy and light fields, is based on the vacuum insertion assumption for mesons and on quark models for baryons. Though their effects would be negligible, as the heavy quark's volume occupation is of the order of $(\Lambda_{QCD}/m_Q)^3$, they are considerably enhanced due to partial compensation by the four-quark phase space, and these operators predict the lifetime differences within the world of hadrons of a given flavour quantum number. Therefore, an accurate value of the FQO is necessary, given the confrontation existing between theory and experiment over hadronic properties such as the experimentally-smaller-than-theoretically-expected lifetime of $\Lambda_b$ and the theoretically-smaller-than-experimentally-observed semileptonic branching ratio of the B meson. The theoretical predictions up to this order [Eq. (1)] and the experimental values [3] [Eq. (2)] agree among the B mesons as expected, but not between the $\Lambda_b$ and B. The latter issue continues to be a central point of the physics of heavy quark hadrons. It is suspected that the explanation for these discrepancies within the HQET framework is hidden in the not-yet-satisfactorily-understood FQO. As regards the evaluation of the FQO, there were two works [4,5] which attempted to explain substantially the enhanced decay rate of $\Lambda_b$, whereas the QCD-sum-rules-based prediction of P. Colangelo and F. De Fazio [6] leads to the conclusion that the reason for the smaller lifetime is not the FQO. In Ref. [4], the authors evaluated the FQO parameterising the matrix elements in terms of hadronic parameters which are not practically known. The parameters have, however, been calculated using QCD sum rules [7], but the prediction is not able to account for the lifetime difference between the $\Lambda_b$ baryon and the B meson. On the other hand, the author of Ref. [5] used a quark model, and the FQO accounted for 13% of the required enhancement in the $\Lambda_b$ decay rate. That estimation used the yet-to-be-confirmed result of the DELPHI collaboration [8] on the mass splitting of $\Sigma_b^*$ and $\Sigma_b$. The same method has been modified by taking the logarithmic dependence of the wave function at the origin, and this explains the difference between the decay rates of the B meson and the $\Lambda_b$ baryon to the extent of 40% [9].
Since no decisive result has yet been obtained in the evaluation of the FQO to settle the situation one way or the other, it is important and interesting to explore other avenues for estimating the four-quark matrix elements. In this paper, we adopt the colour-straight formalism approach of [10] to evaluate the expectation values of the four-quark matrix elements. With the specific choice of the harmonic oscillator wavefunction model for the form factor, and slightly different potentials for the meson and the baryon, it is found that $\tau(\Lambda_b)/\tau(B) \approx 0.79\,(0.84)$, where the values given within brackets are due to the non-factorisable part of the FQO. These values are in agreement with the data, Eq. (2).

In a recent work [10], Pirjol and Uraltsev discussed the four-fermion operators on a certain quantum-mechanical basis. In the nonrelativistic quark theory, the wave function density and the diquark density are related to the associated operators, in which the $\Gamma_{b,q}$ are arbitrary Dirac structures. The wave function at the origin is given in momentum representation; the transition amplitude is then the Fourier transform of the light quark density distribution, and integrating over all $\mathbf{q}$ yields the expectation value. For any Dirac structure $\Gamma$, the light quark current density and the light quark transition amplitude follow accordingly, where $J_\Gamma(0)$ is a gauge-invariant operator and not required to be a bilinear. Thus, spin-singlet operators are obtained, and spin-triplet operators similarly, with $\mathbf{S}/2$ being the b-quark spin operator [Eqs. (11) and (12)].

The extraction of the form factor involves the assumption of a function satisfying the constraint that $F(q^2 = 0)$ equals the corresponding charge of the hadron. The form factor then has to be extrapolated into the region where $q^2 > 0$. We take the hadronic wave function of the ISGW harmonic oscillator model [11] for the form factor. The wave functions of the ISGW model are the eigenfunctions of orbital angular momentum $L = 0$, satisfying the overlap integral, which can be equated to the form factor. Hence, for different initial and final hadrons, the normalisation constant is $N = [2\beta_f \beta_i/(\beta_f^2 + \beta_i^2)]^{3/2}$, where the $\beta$'s are oscillator strengths; for identical initial and final hadrons, the transition amplitude follows correspondingly. The calculation of the $\beta$'s can be made using a QCD-inspired potential. In the present calculation, we use for the B meson a potential containing Coulomb, confining and constant terms, obtaining $\beta_{B_q} = 0.4$ GeV and $\beta_{B_s} = 0.44$ GeV (a numerical sketch follows at the end of this section). For the $\Lambda_b$ baryon, a similar procedure is followed, but with a potential whose $r^2$ term is a harmonic oscillator term, justifying the treatment of $\Lambda_b$ accordingly; here $r_{\lambda,\rho}$ are the internal coordinates of the three-body system. In the picture of $\Lambda_b$ as a bound state of the light quarks together with a heavy quark, the separation of the two light quarks which make up the bound state is treated as negligible. This then allows the baryon to be treated as a two-body system, which is only a reasonable approximation. The difference between a meson and a baryon is essentially due to the value of the oscillator strength. Hereinafter the operators are referred to by the following notation: for the meson, in what follows, $q$ stands for the u and d quarks and $s$ for the strange quark, and $\langle O_{V,A} \rangle$, $\langle T_{V,A} \rangle$ will be denoted respectively as $\omega_{V,A}$, $\tau_{V,A}$.
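A small numerical sketch of the overlap normalisation quoted above (the pairing of initial and final states is illustrative; only the $\beta$ values are taken from the text):

```python
# Sketch: ISGW-type overlap normalisation N = [2*bf*bi/(bf^2 + bi^2)]^(3/2).
def overlap_norm(beta_i, beta_f):
    return (2.0 * beta_f * beta_i / (beta_f**2 + beta_i**2)) ** 1.5

beta_Bq, beta_Bs = 0.40, 0.44          # GeV, as quoted in the text
print(overlap_norm(beta_Bq, beta_Bq))  # 1.0 for identical initial/final states
print(overlap_norm(beta_Bq, beta_Bs))  # ~0.993 for B_q vs B_s oscillators
```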
A. B-meson

The matrix element of the colour-straight operators for the vector current is parametrised with the constraints $F_B(0) = 1$ for the valence quark current and $F_B(0) = 0$ for the sea quark current. The former is relevant for the B-meson quark composition bq. The corresponding transition amplitude then follows, under isospin SU(2) symmetry, and analogously for $B_s$; cases of isospin-symmetry and SU(3)-symmetry violation are treated where relevant. Finally, an equality leads to the absence of the structure $(\epsilon^* q)v_\mu$. Following the Goldberger-Treiman relation, the corresponding expectation values are obtained; in the above estimates the value $g = -0.03$ [13,14] has been taken. For the $\Lambda_b$ baryon, the u and d quarks are treated equally, and the case of $\Xi_b$ follows analogously. There are additional corrections to the form factors due to the charge radius; these can be ignored, as we are looking at the wave function density at the origin.

IV. NON-FACTORISABLE PART OF THE FQO

The nonfactorisable parts of the FQO come in four. One way of parameterising them [4] is in terms of the hadronic parameters $B_{1,2}$ and $\epsilon_{1,2}$. They are related to $\omega_{V,A}$ and $\tau_{V,A}$, the expectation values of the operators $O_{V,A}$ and $T_{V,A}$ as defined earlier. In the case of the $\Lambda_b$ baryon, the nonfactorisable piece corresponds to that of Ref. [4].

V. DECAY RATES AND LIFETIMES

From the decay rates of the b-flavoured hadrons, the lifetime ratio is formed; this agrees well with the one obtained in terms of B-meson decay constants. From the decay rates due to spectator quark(s) processes, for $B^-$, the ratio is 1.03.

Lifetime ratio of $B^-$ and $B_s$: The lifetime difference between the two neutral mesons $B_s$ and $B_d$ is due to W exchange. From the numerical result, and the decay rate corresponding to the nonfactorisable part, the ratio becomes 1.02.

Lifetime ratio of $\Lambda_b$ and $B^-$: In the HQE, the difference in lifetimes between mesons and baryons begins to appear at order $1/m_Q^2$. Nevertheless, it is dominant at the third power in $1/m_Q$. At this order, the FQO receives corrections due to WS and PI, given in terms of $C_1 = -c_+(2c_- - c_+)$. As mentioned earlier, PI is destructive for radiative corrections and enhances the decay rate, leading to a smaller lifetime for $\Lambda_b$. The effect of WS, on the other hand, is colour enhanced and its consequence is smaller. The decay rate modified by the nonfactorisable piece then follows, where $\lambda$ stands for the term in Eq. (43), and the corresponding ratio is obtained. In the mesonic cases, the nonfactorisable piece gives slightly higher values. In particular, the ratio of the lifetimes of the baryon and the meson is significantly larger.
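As a schematic of the bookkeeping behind these ratios (a standard HQE form, assumed here rather than quoted from the paper's numbered equations):

$$\Gamma(H_b) = \frac{G_F^2\, m_b^5\, |V_{cb}|^2}{192\pi^3}\left[c_3 + c_5\,\frac{\langle H_b|O_5|H_b\rangle}{m_b^2} + c_6\,\frac{\langle H_b|O_6|H_b\rangle}{m_b^3} + \dots\right], \qquad \frac{\tau(\Lambda_b)}{\tau(B)} = \frac{\Gamma(B)}{\Gamma(\Lambda_b)},$$

so that the $D = 6$ four-quark matrix elements, entering at $1/m_b^3$ through PI and WS, are what differentiate the lifetimes of hadrons carrying the same flavour quantum numbers.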
VI. CONCLUSION

In this paper, we have evaluated the FQO for beauty hadrons. Though the spectator effects are suppressed by powers of $(\Lambda_{QCD}/m_Q)^3$ in the HQE for inclusive decays, they cannot be neglected. We have expressed the four-quark operators in terms of the light quark scattering form factor, which is in turn related to the harmonic oscillator wave function. The wave function model is used to replace the exponential and two-pole ansätze used in [10]. Basically both are the same; the distinction arises only through $\beta$, the oscillator strength of the model. Interestingly, this simple alternative predicts the lifetime ratio of $\Lambda_b$ and B closer to the experimental value. On the other hand, the nonfactorisable part does not have much effect in the case of mesons, but it still keeps the ratio between B and $\Lambda_b$ away from the experimental value.

As far as the B mesons are concerned, the present study once again affirms the existing predictions. In this case too, there are omissions, such as SU(2) and SU(3) symmetry breaking; they may play a role, but a negligible one. Finally, we conclude that we have taken one of the sources of the preasymptotic effects, the dominant one, and shown that it predicts the lifetime of the $\Lambda_b$ close to the experimental figure. As we have not taken into account all possible corrections to the four-quark operators, the present prediction can be considered at least indicative, motivating a more serious look into the four-quark as well as six-quark operators. However, given the basis provided in [10], the prediction has to be believed. Of course, this prediction can be checked by lattice studies. A refined analysis of b- and c-flavoured baryon lifetimes will be published elsewhere.

ACKNOWLEDGMENTS

The author is grateful to Prof. P. R. Subramanian for useful discussion and constant encouragement. He thanks Dr R. Premanand for clearing certain doubts in the preparation of this paper. The University Grants Commission is thanked for its support through the Special Assistance Programme. The author is thankful to F. De Fazio, who brought Ref. [6] to his attention, and thanks the referee for his useful comments.
History of Islamic Bank in Indonesia: Issues Behind Its Establishment

This paper aims to make a critical review of the history behind the establishment and development of Islamic banking in Indonesia. The review reveals that the establishment of Islamic banking in Indonesia emerged from a protracted undercurrent of effort, which can be divided into three phases: the phase of idea or thinking, the phase of rethinking, and the phase of establishment and maturation. The main issues and problems that inhibited the establishment of Indonesian Islamic banks were political issues, lack of government support, legal issues, social problems, economic problems (lack of capital), and the debate among scholars about whether conventional bank interest is halal (permissible) or haram (prohibited). These issues and problems delayed the establishment of Islamic banking in Indonesia.

Introduction

The existence of modern banking institutions that operate under an Islamic system is still at an early stage compared to the conventional banking system, which has existed for about 420 years. The establishment of a conventional bank called Banco della Piazza in Rialto, Venice, in 1587, followed by the establishment of a modern bank in the UK in 1694, marked the starting point of the modern banking world (Sumner, 1971; Hamoud, 1985). The establishment of financial institutions without interest, as a landmark of Islamic banking, was attempted in the mid-1940s in Malaysia, but was unsuccessful (Erol & El-Bdour, 1989). Modern Islamic banking commenced with the Mit Ghamr bank, founded in Egypt in 1963. The Mit Ghamr Islamic bank was closed in 1967, but the attempt inspired the establishment of other Islamic banks (Haron, 2005). It is believed that the establishment of the Mit Ghamr bank paved the way for the establishment of Islamic banks. In the 1970s, Islamic banks grew and developed in number and size (Metawa & Almossawi, 1998). Besides governments, the private sector also contributed to the development of Islamic banks. The first private Islamic bank, Dubai Islamic Bank, was established in 1975. In the same year, the Islamic Development Bank (IDB) in Jeddah, the Faysal Islamic Bank in Egypt and Sudan, and the Kuwait Finance House in Kuwait were established. By the early 1980s, Islamic banks had also emerged in non-Muslim countries in Europe, America and Australia (Ebrahim & Joo, 2001), and thus the worldwide acceptance of Islamic banking was encouraging. Wilson (1994) reported that Citibank (one of the largest banks in the US), HSBC (in Hong Kong) and ABN AMRO (in the Netherlands) had opened Islamic banking windows. Islamic banking in Indonesia began in 1992 with the establishment of Bank Muamalat Indonesia (BMI). The establishment of Islamic banks in Indonesia is considered late compared to other countries such as the Philippines (1973) and Malaysia (1983). Ariff (1988) stated that the delays in the establishment of Islamic banks in Indonesia were due to lack of support from the Muslim community and the government (lack of political will). Chapra (1987) and Haron and Yamirudeng (2003) also reported that the development of Islamic banking in a particular economy is influenced by the support of Muslims and the government. Efforts towards the establishment of Islamic banking in Indonesia began before the Second World War (Rahardjo, 1998), but it was not until the 1990s that an Islamic bank was established in Indonesia. The main question one would ask is: what led to the delays in the establishment of Islamic banking in Indonesia?
What issues prevented its early establishment? How can these issues be resolved? The objectives of this review are to provide information on the establishment of Islamic banking, the struggle involved in its establishment, and the latest developments in Islamic banking in Indonesia.

Methodology

Literature on Islamic banking, with emphasis on Indonesia, was reviewed. Secondary data were obtained from Bank of Indonesia official reports, refereed journals, postgraduate theses and conference papers. There were inadequate published data and articles related to Islamic banks in Indonesia, because not many researchers in Indonesia have published their studies in English and online. This paper provides an overview of the history of Islamic banking in Indonesia, from its inception to recent developments.

Brief Overview of the Indonesian Structure

Indonesia is the fourth most populous country in the world, with a population of around 237 million, of which about 204 million (ca. 86 percent) are Muslims (BPS, 2015). Indonesia is the largest Muslim nation in the world. Although the majority of Indonesians are Muslims, Indonesia is not an Islamic state. According to Ariff (1988) and Wouters (2007), Indonesia is a country with the ideology of Pancasila; it basically shows the character of a secular state, with the monetary system of a capitalist country. The Muslim population in Indonesia is divided into two main groups: Abangan and Santri. Abangan, also known as 'Islam by Identity Card' (Islam KTP), refers to Muslims who do not practice Islam; in their daily life they are Muslims merely by identification card (IC). Santri follow Islamic values in their daily lives. According to Ariff (1988), Santri are further divided into two parts. First are the elite or reformist Santri, who are usually highly educated, modern and live in urban areas. Second are the fundamentalist or traditional Santri, who have an educational background in the traditional system, live in villages or pesantren, and tend to be conservative.

Establishment of Islamic Banking

Issues related to the establishment of Islamic banking in Indonesia can be grouped into three phases. The first phase is the phase of thinking, the second is the phase of preparation and establishment, and the third is the phase after formation (maturation of the concept and setting).

First Phase (Phase of Thinking)

The first phase of the establishment of Indonesian Islamic banking began in the 1930s. This phase was known as the phase of thinking, or the theoretical phase. It occurred at the time Indonesia was colonized by the Dutch. This was the most difficult period for the establishment of Islamic banking in Indonesia because of the uncordial relationship between the government (the Dutch) and Muslims. The extremist Islamic movement, or Islamic fundamentalist group, made the first attempt towards the establishment of Islamic banking in Indonesia. K. H. Mas Mansur, one of the ulama (Muslim scholars) and head of the Muhammadiyah organization, was the first person to conceive the idea that an Islamic bank should operate without an interest system. His idea elicited reactions and debates among ulama and socialist leaders such as Muhammad Hatta. The socialists justified conventional bank interest on the grounds that it is voluntary between the two sides, contains no element of extortion or coercion, has a function in the public interest, and that the amount charged is not large (Rahardjo, 2002).
Mas Mansur pioneered and championed the view that the interest charged by conventional banks is illegal (haram), because it is required by the contract and involves extortion (Rahardjo, 1998; Karim, 2001). Then came a third, 'grey' (doubtful) opinion, in the middle between legal (halal) and illegal (haram): according to this opinion, depending on the amount and conditions, interest can be legal, but in other situations it can be illegal (Rahardjo, 1998). Indonesia's first vice president (1945-1956), Muhammad Hatta, did not support the establishment of Islamic bank. This contributed to the negative response from the community and government towards the establishment of Islamic bank in Indonesia. The issue of the halal-haram (legal) status of conventional bank interest, which slowed the growth of Islamic bank, still exists among Muslims in Indonesia (Mukhlis, 2011). Islamic bank was only discussed theoretically among ulama and Muslim intellectuals until the 1960s. There were no concrete measures or clear plans for establishing Islamic bank, even though it had emerged as one of the solutions to economic problems and a means to improve the social welfare of many Muslims (Sutedi, 2009). Publishing books and papers on Islamic economics, especially on interest-free lending, gained more interest during this period. Most writers of these books were instigators of a forward-thinking Islamic party associated with the Muslim Brotherhood movement in Egypt. Legal issues were also hindrances to the establishment of Islamic bank in Indonesia. Indonesian banking practice, supported by law No. 14 of 1967 on banking, stated that every bank in Indonesia must operate based on interest. In 1968, the Islamic organization Muhammadiyah opined that the 1967 law was unclear and that the status of the interest it prescribed was doubtful (neither clearly halal nor haram). According to Antonio (2005), the Muhammadiyah tried to establish banking institutions in accordance with Islamic rules. In 1969, the fight to establish interest-free Islamic banks intensified following a conference held by the Organization of Islamic Countries (OIC) in Kuala Lumpur (Sudarsono, 2005). As a result of this conference, the Islamic Development Bank (IDB) was established in Jeddah in 1975, and Indonesia was one of the countries involved in its establishment (Admadja & Antonio, 1999). This encouraged and motivated Indonesia to establish its own Islamic bank to improve the economic empowerment of Muslims. The need for Indonesia to have its own Islamic banks also received attention among Muslim intellectuals. Various attempts, including the introduction of Islamic economics courses at universities and the creation of a non-political organization oriented towards the economic development of Muslims, were made to promote the establishment of Islamic banks in Indonesia. Nevertheless, the establishment of Islamic banking remained at the theoretical stage, despite the change of leadership from the Old Order (ORLA) to the New Order (ORBA). The ORBA authorities continued to link the establishment of Islamic banks to the promotion of an Islamic state (Karim, 2007). Islam was allowed to develop solely for the exercise of worship and religious ceremonies; Islam as an ideology of life had no place in the constitutional system of Indonesia. All this, together with a lack of commitment from the Muslim community and a lack of support from the government, delayed the establishment of Islamic bank in Indonesia.
Second Phase (Preparation and Establishment Phase) In the 1980s, Indonesian Muslim intellectuals and ulama revisited the idea of establishing Islamic banks in Indonesia. Their desire was ignited by the success of Malaysia and other Muslim countries in setting up Islamic banks, but their efforts failed because of the prevailing political situation. At that time the government tried to impose Pancasila as the sole foundation of all social organizations. Any idea that had to do with the word "Islam", including Islamic bank, did not have the support of the government (Yasin, 2010). The government's phobia of the word 'Islam', associated with the creation of an Islamic state and linked with Islamic extremism and fundamentalism, hampered the establishment of Islamic banks in Indonesia. In 1982, the relationship between the Indonesian government and Islam slightly improved. The New Order (ORBA) government began to lose the support of the Indonesian Armed Forces (ABRI), and this forced the government to seek support and legitimacy from non-ABRI groups such as the ulama and Muslim intellectuals. The government showed an open attitude towards Islamic organizations, and many concessions were granted to Muslims in an era referred to as the political accommodation of Islam (Salim, 2004). This era was exploited by ulama and Muslim intellectuals to re-submit the idea of Islamic bank for which they had been fighting. Within this era, there was also an increase in the quality of education of Muslim intellectuals (many educated abroad). The government felt the need to listen to the opinions, voices and wishes of Muslim intellectuals. This contributed to increased bargaining power for Muslims in the establishment of Islamic bank in Indonesia (Effendy, 1998). In October 1988, the Indonesian government issued a policy package (PAKTO) on the liberalization of the banking industry, which stated that banks could be established and could operate at 0% interest. This policy paved the way for the establishment of Islamic banks operating on interest-free principles. After the government gave the 'green light' for the establishment of Islamic banks, another problem surfaced: start-up capital. Partial support from the government made it difficult to raise the needed initial capital for the establishment of Islamic banks. Various efforts were made by Muslim communities and intellectuals to raise funds as seed capital for the establishment of Islamic banks. Movements to intensify the establishment of Islamic banks in the country began in 1990. In this year, the Indonesian Muslim Council (MUI) held a seminar to discuss the issue of bank interest. As a result of this seminar, it was agreed to establish interest-free Islamic banks. The seminar was followed by the National Congress of MUI, where it was decided to form a working group to complete preparations for the establishment of Islamic banks. The working group was called the MUI banking team, and it was given the responsibility of conducting preparations related to the establishment of Islamic banks in consultation with all concerned parties (Admadja & Antonio, 1999; Antonio, 2001). After all the above issues related to the establishment of Islamic banks were addressed, the problem of the name to be used arose.
ORBA under the leadership of President Suharto still had an issue with the use of the word "Islam" because of the potential issues associated with fundamentalism and the concern that it would cause unease among the people of Indonesia, since Indonesia consists of people of various religions and ethnic groups (Triyuwono, 2000). Thus Islamic banks were tasked by the government to work with caution. Besides non-Muslims, there were Muslims who opposed the idea of using the word 'Islam'. There were also some groups that persistently linked the establishment of an Islamic bank with extreme religious issues and suspicions of the establishment of an Islamic state. The Islamic bank teams continued to work diligently to adopt appropriate measures to convince the parties who were in disagreement with the use of 'Islam' in Islamic banks (Hidayati, 2005). The efforts of these teams led the Government of Indonesia to approve, in 1991, the establishment of Islamic banks in Indonesia. In 1992, the first Islamic/Shariah bank in Indonesia, currently known as Bank Muamalat Indonesia (BMI), began operation with an initial capital of Rp 84 billion (Ariff, 1996).

Third Phase (Maturation of the Concept and Setting Phase) This was a unique phase in the establishment of Islamic banks in Indonesia, which took place from 1990 to 2000. At that time only one Islamic bank, the BMI, had been established. This bank was a trademark of the economic revival of Islamic banking in Indonesia, and thus the history of the establishment of Islamic banks in Indonesia cannot be told without noting the role played by BMI (Yasin, 2009). In 1992, the Indonesian government issued law No. 7 of 1992 on Indonesian banks. In law No. 7, the term Islamic bank was not explicitly stated; it was expressed as "bank based on the principle of (profit) sharing". This law was the legal framework for Islamic bank operations in Indonesia. From 1992 to 1998, BMI was the only commercial bank based on Islamic principles in Indonesia. In 1998 the government amended law No. 7 of 1992 and replaced it with law No. 10 of 1998. This law provided a stronger legal basis for the establishment of Islamic banks in Indonesia. In law No. 10, the term "bank based on the principle of sharing" was changed to "bank based on the principle of Shariah", abbreviated as "Shariah bank" (the official term for Islamic banks in Indonesia based on law No. 10 of 1998). Under law No. 10, Shariah banks were allowed to operate within a dual banking system. The dual banking system allowed banks to operate on both interest and interest-free bases, which opened up vast opportunities for conventional banks to offer additional Shariah products while retaining their conventional systems. Thus, even though Islamic bank in Indonesia was established under law No. 7 in 1992, the development of Islamic bank became significant only after the enforcement of law No. 10 of 1998 (Rae, 2008; Bank Indonesia, 2002). The phases of establishment and development of Islamic banks in Indonesia are summarized in Table 1.

Current Condition of Islamic Banks in Indonesia Indonesian Islamic banks began operation in the 1990s. At that time, Islamic banks in other Muslim countries such as Malaysia were already at the stage of developing their Islamic bank products. Islamic banks in Indonesia are now included in the Dow Jones and Financial Times Islamic indices. Haron (2005) reported that regulations and supervision have since been emphasized, and AAOIFI standards have been issued.
These developments indicate the progress of Islamic banks in Indonesia. Table 2 shows the development of Islamic banks in Indonesia from 2003 to 2015. The data reveal that the assets and market shares of Islamic banks in Indonesia were below targets; thus the development of Islamic banks in Indonesia has yet to reach the set targets. In 2009, the total assets of Islamic banks were Rp 66 trillion, below the target of Rp 87 trillion set by the Bank of Indonesia. In 2013, Islamic bank assets were Rp 242.3 trillion, also below the target of Rp 255 trillion set by the Bank of Indonesia. Based on total assets, the market share of Islamic banks in 2013 was 4.8% of the total share of the national banking market, also below the Bank of Indonesia's target of 5%. The Bank of Indonesia set a target of 5.25% of banking market share in 2014 for Islamic banks, but the market share of Islamic banks in 2014 remained below 5%. There were no significant developments in the market share position of Indonesian Islamic banks even in 2015; it remained at about 5%. Total assets of Indonesian Islamic banks amounted to Rp 272 trillion, much lower than the total assets of the national banking system (Rp 5,615 trillion) (Bank of Indonesia, 2014). The low market share of Islamic banks in Indonesia has led to a low ranking in the world Islamic banking market. Islamic banks in Indonesia were ranked eighth out of the nine Muslim-majority countries evaluated in 2011-2012. The country ranked first was Saudi Arabia, followed by Malaysia; Pakistan was ranked last, after Indonesia (Kuwait Finance House Research, 2013). The market share of Islamic banks in Indonesia is about 20 percentage points lower than that of Islamic banks in Malaysia (Witjaksono, 2015), and is even lower than the market share of Islamic banks in the UK (Mansour et al., 2010). This clearly shows that there is a problem with the development of Islamic banks in Indonesia; thus the New Horizon Global Perspective on Islamic Banking and Insurance (2012) referred to Islamic banks in Indonesia as "the sleeping giant stirring". Undoubtedly, Islamic banks in Indonesia have shown some progress; however, according to Sula (2011), the development of a bank should be judged by its market share. Market share reflects the portion of the sales of goods or services in an industry accounted for by a player. Shaffer (1993) reported that market share is important because it reflects performance associated with a bank's competitive position in the banking industry.

Discussion The middle class group, which consisted of Muslim intellectuals and educated Islamic scholars, played an important role in the establishment of Islamic bank in Indonesia. These Muslim intellectuals and educated Islamic scholars were members of the Indonesian Muslim Council (MUI), the Indonesian Association of Muslim Intellectuals (ICMI), Muhammadiyah and other Islamic society organizations (Mufti and Sula, 2007). There are historical parallels between Indonesia and Malaysia regarding the term middle class. In 1970-1990, the New Economic Policy (NEP) in Malaysia opened up opportunities for the education and economic empowerment of young Malays (Muslims). This policy enabled young Malays to be educated abroad in various disciplines and to return to Malaysia to form their own community structure, termed the middle class (Kahn, 1996; Crouch, 1996).
Members of this group had power and influence in the political and economic situation of Malaysia. There were also other groups: those with high educational backgrounds who resided in urban areas (the upper class or bourgeoisie) and those with no educational background who resided in rural communities (the lower class or proletariat). These classes of people also occurred in Indonesia. The 1970s marked the era of the rise of Indonesian Muslim intellectuals. They actively put forward ideas, and pioneered and championed movements to include Islamic elements in the politics and economy of Indonesia. One of the activities of Indonesian Muslim intellectuals in the field of Islamic economics was the effort put into the establishment of Islamic banks in Indonesia. Ariff (1988) believed that the establishment of Islamic bank in a secular country like Indonesia is a form of informal arrangement associated with public demand. This condition was termed a bottom-up movement by Karim (2007) and Muqorobin (2010). The ulama and Muslim intellectuals who were members of the MUI, ICMI and other Islamic organizations felt it was time to establish an Islamic bank in Indonesia, the country with the largest number of Muslims in the world. The group saw the need to establish Islamic banks to meet the financial needs of Muslims based on Islamic law. This situation was different from that of Malaysia and other Islamic countries such as Iran, Sudan, Jordan, Kuwait, Saudi Arabia and other Middle Eastern countries, where the establishment of Islamic banks came from above, i.e. the government (top-down). With the support of the government in other Muslim countries, the establishment of Islamic banks was easier than in Indonesia (Ahmed, 1990; Shallah, 1990; Wilson, 1990; Aryan, 1990; Gierath, 1990). Due to the delay in the establishment of Islamic banks in Indonesia, the development of the Islamic financial system was also delayed. Hitherto, the market share of Islamic bank is only about 5% of the national banking system of Indonesia. Thus the necessary commitment, effort and hard work in pursuit of further development of Islamic banks in Indonesia have to be put in place. From the description above it can be concluded that the idea to set up Islamic bank in Indonesia was conceived and fought for before independence. However, there were some issues that hindered its establishment. The issues were religious (the problem of halal-haram interest), legal (the laws of Indonesia), political, social, and financial (capital) (Antonio, 1992). The first issue was halal-haram interest. Indonesian Muslims were too busy debating among themselves the issue of the halal-haram status of conventional bank interest. The debate raised potential conflicts among socialists, secular nationalists and Islamic scholars or groups, which then called for a compromise. According to Mujiburrahman (2006), a compromise was eventually formed which gave a "grey area" for the establishment of Islamic bank in Indonesia. The second issue was the laws of Indonesia. This relates to the absence of a law fundamental to the operation of Islamic banks in Indonesia, because the concept of Islamic bank was not in accordance with the existing banking laws. Indonesian law No. 14 of 1967 concerning banking stated that any financing transaction or loan made by a bank must be accompanied by interest (Rahardjo, 2002). The third issue was political in nature.
The strong desire of one party or group of people is not enough if the government does not support that desire. Experience with the establishment of Islamic banks in other countries showed the need for a strong commitment of the Muslim community to adopt a way of life based on Islam, including in economic life. Strong support is also needed from the government, and the Indonesian government at that time saw Islam as a potential threat to its rule. Until now, even the use of the term 'Islamic' banking is still not allowed; therefore Indonesia is the only country to use the term Shariah banks instead of Islamic banks. The fourth issue was social in nature. Ariff (1998) concluded that the Muslim population in Indonesia has strong political influence simply because of its size (quantity), but lacks the determination to bring about change. Social influences include the establishment of trust, shared values, and the habits of people who use conventional banking services (Muhammad, 2005). The first bank in Indonesia, De Javasche Bank, was established in 1828 and operated on conventional principles. As a result, people, including Muslims, became familiar with the benefits of conventional banks and accepted them as a fair economic system. Therefore, their interest in using the services of interest-free Islamic banks was low (Sudarsono, 2005). The last issue was the financial or capital fund needed to set up Islamic banks. At the time, it was difficult to find anyone willing to put up the capital required for the establishment of a modern bank. The lack of government support and the banking laws in effect at that time obstructed opportunities for foreign banks to open branches in Indonesia. This situation resulted in a limited inflow of money from abroad, causing economic (capital) problems that made it difficult to secure the large funds needed for the establishment of an Islamic bank in Indonesia, even though many offers came from banks in the Middle East that wanted to invest in Islamic banks in Indonesia (Rahardjo, 1998).

Conclusion and Implication This study gives an overview of the establishment of Islamic bank in Indonesia and the issues behind its establishment. The delay in the establishment of the Indonesian Islamic bank compared with other Muslim-majority countries shows that the development of Islamic bank in Indonesia was slow. The Indonesian Islamic bank market share at the end of 2015 was about 5%, which is lower than the target set by the national bank of Indonesia. Various issues hampered the establishment of Islamic banks in Indonesia. These issues are still relevant today, and they are threats to the further development of Islamic banks in Indonesia. The findings of this study are useful for Islamic bank stakeholders in taking steps to resolve issues that can interfere with the future development of Islamic banks in Indonesia.
Evaluation of antibody testing for SARS-CoV-2 using ELISA and lateral flow immunoassays

Background: The SARS-CoV-2 pandemic caused >1 million infections during January-March 2020. There is an urgent need for robust antibody detection approaches to support diagnostics, vaccine development, safe individual release from quarantine and population lock-down exit strategies. The early promise of lateral flow immunoassay (LFIA) devices has been questioned following concerns about sensitivity and specificity. Methods: We used a panel of plasma samples designated SARS-CoV-2 positive (from SARS-CoV-2 RT-PCR-positive individuals; n=40) and negative (samples banked in the UK prior to December 2019; n=142). We tested plasma for SARS-CoV-2 IgM and IgG antibodies by ELISA and using nine different commercially available LFIA devices. Results: ELISA detected SARS-CoV-2 IgM or IgG in 34/40 individuals with an RT-PCR-confirmed diagnosis of SARS-CoV-2 infection (sensitivity 85%, 95%CI 70-94%), vs 0/50 pre-pandemic controls (specificity 100% [95%CI 93-100%]). IgG levels were detected in 31/31 RT-PCR-positive individuals tested ≥10 days after symptom onset (sensitivity 100%, 95%CI 89-100%). IgG titres rose during the 3 weeks post symptom onset and began to fall by 8 weeks, but remained above the detection threshold. Point estimates for the sensitivity of LFIA devices ranged from 55-70% versus RT-PCR and 65-85% versus ELISA, with specificity 95-100% and 93-100% respectively. Within the limits of the study size, the performance of most LFIA devices was similar. Conclusions: The performance of current LFIA devices is inadequate for most individual patient applications. ELISA can be calibrated to be specific for detecting and quantifying SARS-CoV-2 IgM and IgG and is highly sensitive for IgG from 10 days following symptom onset.

INTRODUCTION The first cases of infection with a novel coronavirus, subsequently designated SARS-CoV-2, emerged in Wuhan, China on December 31st, 2019.1 Despite intensive containment efforts, there was rapid international spread and three months later, SARS-CoV-2 had caused over 1 million confirmed infections and 60,000 reported deaths.2 Containment efforts have relied heavily on population quarantine ('lock-down') measures to restrict movement and reduce individual contacts.3,4 To develop public health strategies for exit from lock-down, diagnostic testing urgently needs to be scaled up, including both mass screening and screening of specific high-risk groups (contacts of confirmed cases, and healthcare workers and their families), in parallel with collecting robust data on recent and past SARS-CoV-2 exposure at individual and population levels.
Laboratory diagnosis of infection has mostly been based on real-time RT-PCR, typically targeting the viral RNA-dependent RNA polymerase (RdRp) or nucleocapsid (N) genes using swabs collected from the upper respiratory tract.5,6 This requires specialist equipment, skilled laboratory staff and PCR reagents, creating diagnostic delays. RT-PCR from upper respiratory tract swabs may also be falsely negative due to quality or timing; viral loads in upper respiratory tract secretions peak in the first week of symptoms,7 but may have declined below the limit of detection in those presenting later.8 In individuals who have recovered, RT-PCR provides no information about prior exposure or immunity. In contrast, assays that reliably detect antibody responses specific to SARS-CoV-2 could contribute to diagnosis of acute infection (via rises in IgM and IgG levels) and to identifying those infected with or without symptoms and recovered (via persisting IgG).9 Receptor-mediated viral entry to host cells occurs through interactions between the unique and highly-conserved SARS-CoV-2 spike (S) glycoprotein and the ACE2 cell receptor.10 This S protein is the primary target of specific neutralising antibodies, and current SARS-CoV-2 serology assays therefore typically seek to identify these antibodies (Figure 1A-C). Rapid lateral flow immunoassay (LFIA) devices provide a quick, point-of-care approach to antibody testing. A sensitive and specific antibody assay could directly contribute to early identification of cases.

Enzyme-linked immunosorbent assay (ELISA) We used a novel ELISA. Recombinant SARS-CoV-2 trimeric spike protein was constructed,13 tagged and purified. Immunoplates coated with StrepMAB-Classic were used to capture tagged soluble trimeric SARS-CoV-2 S protein and then incubated with test plasma. Antibody binding to the S protein was detected with ALP-conjugated anti-human IgG or anti-human IgM. (Further details in Supplementary Material.)

Lateral flow immunoassays (LFIA) We tested LFIA devices designed to detect IgM, IgG or total antibodies to SARS-CoV-2 produced by nine manufacturers short-listed as a testing priority by the UK Government Department of Health and Social Care (DHSC), based on appraisals of device provenance and available performance data. Individual manufacturers did not approve release of device-level data, so device names are anonymised. Testing was performed in strict accordance with the manufacturer's instructions for each device. Typically, this involved adding 5-20µl of plasma to the sample well, and 80-100µl of manufacturer's buffer to an adjacent well, followed by incubation at room temperature for 10-15 minutes. The result was based on the appearance of coloured bands, designated as positive (control and test bands present), negative (control band only), or invalid (no band, absent control band, or band in the wrong place) (Figure 1C). We recorded results in real-time on a password-protected electronic database, using pseudonymised sample identifiers, capturing the read-out from the device (positive/negative/invalid), operator, device, device batch number, and a timestamped photograph of the device.

Testing protocol We tested 90 samples using ELISA to quantify IgM and IgG antibody in plasma designated SARS-CoV-2 negative (n=50) and positive (n=40).
All positive samples were included, along with an unstratified random sample of negative plasma from healthy blood donors (n=23) and organ donors (n=27). We tested the nine different LFIA devices using between 39 and 165 individual plasma samples (8-23 and 31-142 samples designated SARS-CoV-2 positive and negative, respectively; Table S2). Total numbers varied according to the number of devices supplied to the DHSC; samples were otherwise selected at random.

Statistical analysis Analyses were conducted using R (version 3.6.3) and Stata (version 15.1), with additional plots generated using GraphPad Prism (version 8.3.1). Binomial 95% confidence intervals (CI) were calculated for all proportions. The association between ELISA results and time since symptom onset, severity, need for hospital admission and age was estimated using multivariable linear regression, without variable selection. Non-linearity in relationships with continuous factors was included via natural cubic splines. Differences between LFIA devices were estimated using mixed effects logistic regression models, allowing for each device being tested on overlapping sample sets. Differences between devices were compared with Benjamini-Hochberg corrected p-value thresholds. (Further details in Supplementary Material.)

RESULTS As safe individual release from lock-down is a major application for serological testing, we chose OD thresholds that maintained 100% specificity (95%CI 93-100%), while maximising sensitivity. Using thresholds of 0.07 for IgM and 0.4 for IgG (3 and 5 standard deviations above the negative mean, respectively; Figure 2A,B), the IgG assay had 85% sensitivity (95%CI 70-94%; 34/40) vs. RT-PCR diagnosis. All six false-negatives were from samples taken within 9 days of symptom onset (Figure 2D). IgG levels were detected in 31/31 RT-PCR-positive individuals tested ≥10 days after symptom onset (sensitivity 100%, 95%CI 89-100%). The IgM assay sensitivity was lower, at 70% (95%CI 53-83%; 28/40). All IgG false-negatives were IgM-negative. No patient was IgM-positive and IgG-negative. Considering the relationship between IgM and IgG titres and time since symptom onset (Figures 2C,D), univariable regression models showed IgG antibody titres rising over the first 3 weeks from symptom onset. The lower bound of the pointwise 95%CI for the mean expected titre crosses our OD threshold between days 6-7 (Figure 2D). However, given sampling variation, test performance is likely to be optimal from several days later. IgG titres fell during the second month after symptom onset but remained above the OD threshold. No temporal association was observed between IgM titres and time since symptom onset (Figure 2C). There was no evidence that SARS-CoV-2 severity, need for hospital admission or patient age were associated with IgG or IgM titres in multivariable models (p>0.1, Table S3).
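The study's analyses were run in R and Stata; purely as an illustration of the thresholding logic described above, the Python sketch below sets cut-offs at a chosen number of standard deviations above the mean of the negative controls and reports the resulting sensitivity and specificity with exact binomial confidence intervals. All optical-density values and variable names here are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial (Clopper-Pearson) confidence interval for k successes out of n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical optical densities (ODs); real values would come from the plate reader.
rng = np.random.default_rng(0)
od_negative = rng.normal(0.05, 0.07, size=50)   # pre-pandemic controls
od_positive = rng.normal(1.20, 0.50, size=40)   # RT-PCR-confirmed cases

# Cut-off = mean of negatives + k standard deviations (k = 5 for IgG in the text above).
threshold = od_negative.mean() + 5 * od_negative.std(ddof=1)

tp = int((od_positive >= threshold).sum())      # true positives above the cut-off
tn = int((od_negative < threshold).sum())       # true negatives below the cut-off
sens, sens_ci = tp / od_positive.size, clopper_pearson(tp, od_positive.size)
spec, spec_ci = tn / od_negative.size, clopper_pearson(tn, od_negative.size)
print(f"sensitivity {sens:.0%} (95%CI {sens_ci[0]:.0%}-{sens_ci[1]:.0%}); "
      f"specificity {spec:.0%} (95%CI {spec_ci[0]:.0%}-{spec_ci[1]:.0%})")
```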
Of 50 designated negative samples tested by both ELISA and the nine different LFIA devices, nine separate samples generated at least one false-positive, on seven different LFIA devices (Figure 3). Four samples generating false-positive results did so on more than one LFIA device, despite the absence of quantifiable IgM or IgG on ELISA, potentially suggesting a specific attribute of the sample causing a cross-reaction on certain LFIA platforms but not ELISA.

DISCUSSION We here present the performance characteristics of a novel ELISA and nine LFIA devices for detecting SARS-CoV-2 IgM and IgG using a panel of reference plasma. After setting thresholds for detection using 50 negative (pre-pandemic) controls, 85% of 40 RT-PCR-confirmed positive patients had IgG detected by ELISA, including 100% of patients tested ≥10 days after symptom onset. A panel of LFIA devices had sensitivity between 55 and 70% against the reference-standard RT-PCR, or 65-85% against ELISA, with specificity of 95-100% and 93-100% respectively. These estimates come with relatively wide confidence intervals due to constraints on the number of devices made available for testing. Nevertheless, this study provides a benchmark against which to further assess the performance of platforms to detect anti-SARS-CoV-2 IgM/IgG, with the aim of guiding decisions about deploying antibody testing and informing the design and assessment of second-generation assays. LFIA devices are cheap to manufacture, store and distribute, and could be used as a point-of-care test by healthcare practitioners or individuals at home, offering an appealing approach to diagnostics and evaluating individual and population-level exposure. A positive antibody test is currently regarded as a probable surrogate for immunity to reinfection. Secure confirmation of antibody status would therefore reduce anxiety, provide confidence to allow individuals to relax social distancing measures, and guide policy-makers in the staged release of population lock-down, potentially in tandem with digital approaches to contact tracing.14 As a diagnostic tool, serology may have a role in combination with RT-PCR testing to improve sensitivity, particularly for cases presenting some time after symptom onset.15,16 Reproducible methods to detect and quantify vaccine-mediated anti-SARS-CoV-2 antibodies are also crucial as vaccines enter clinical trials, for evaluating the magnitude and durability of immunogenicity. Appropriate thresholds for sensitivity and specificity of an antibody test depend on its purpose, and must be considered when planning deployment. For diagnosis in symptomatic patients, high sensitivity is required (generally ≥90%).
Specificity is less critical, as some false-positives could be tolerated (provided other potential diagnoses are considered, and accepting that over-diagnosis causes unnecessary quarantine or hospital admission). However, if antibody tests were deployed as an individual-level approach to inform release from quarantine, then high specificity is essential, as false-positive results return non-immune individuals to risk of exposure. For this reason, the UK Medicines and Healthcare products Regulatory Agency has set a minimum 98% specificity threshold for LFIAs.17 Appraisal of test performance should also consider the influence of population prevalence, acknowledging that this changes over time, geography and within different population groups (e.g. healthcare workers, teachers). The potential risk of a test providing false reassurance and release from lock-down of non-immune individuals can be considered as the proportion of all positive tests that are wrong, as well as the number of incorrect positive tests per 1000 people tested. Based on the working 'best case' scenario of an LFIA test with 70% sensitivity and 98% specificity, the proportion of positive tests that are wrong is 35% at 5% population seroprevalence (19 false-positives/1000 tested), 13% at 20% seroprevalence (16 false-positives/1000) and 3% at 50% seroprevalence (10 false-positives/1000) (Figure 4).
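The prevalence arithmetic behind these figures is simple enough to reproduce directly. The sketch below (illustrative only, with the quoted 'best case' sensitivity and specificity hard-coded as assumptions) counts the expected true and false positives per 1000 people tested; rounded values may differ slightly from those quoted in the text.

```python
def positive_test_errors(sensitivity, specificity, prevalence, n_tested=1000):
    """Expected false positives per n_tested people, and the share of all
    positive results that are wrong, for given test characteristics."""
    true_pos = prevalence * sensitivity * n_tested               # infected, correctly positive
    false_pos = (1 - prevalence) * (1 - specificity) * n_tested  # uninfected, falsely positive
    return false_pos, false_pos / (true_pos + false_pos)

for prev in (0.05, 0.20, 0.50):
    fp, fdr = positive_test_errors(sensitivity=0.70, specificity=0.98, prevalence=prev)
    print(f"seroprevalence {prev:.0%}: {fp:.0f} false positives/1000 tested, "
          f"{fdr:.0%} of positive tests wrong")
```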
However, more data are needed to investigate antibody-positivity as a correlate of protective immunity. Indeed, pre-existing IgG could enhance disease in some situations,18 with animal data demonstrating that SARS-CoV anti-spike IgG contributes to a proinflammatory response associated with lung injury in macaques.19 Our data on the kinetics of antibody responses to SARS-CoV-2 infection build upon studies of hospitalised patients in China reporting a median 11 days to seroconversion for total antibody, with IgM and IgG seroconversion at days 12 and 14 respectively;15 another similar study reports 100% IgG positivity by 19 days.16 Our ELISA data show IgG titres rose over the first 3 weeks of infection and that IgM testing identified no additional cases. Methods to enhance sensitivity, especially shortly after symptom onset, could consider different sample types (e.g. saliva), different antibody classes (e.g. IgA),20 T-cell assays or antigen detection.21 In contrast to others,16,22-24 we did not find evidence of an association between disease severity and antibody titres. We observed several LFIA false positives, which may have resulted from cross-reactivity of non-specific antibodies (e.g. reflecting past exposure to other seasonal coronavirus infections). The main study limitation is that the numbers tested were too small to provide tight confidence intervals around performance estimates for any specific LFIA device. Expanding testing across diverse populations would increase certainty, but given the broadly comparable performance of different assays, the cost and manpower to test large numbers may not be justifiable. Demonstrating high specificity is particularly challenging; for example, if the true underlying value were 98%, 1000 negative controls would be required to estimate the specificity of an assay to +/-1% with approximately 90% power. Full assessment should also include a range of geographical locations and ethnic groups, children, and those with immunological disease including autoimmune conditions and immunosuppression.
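To see why on the order of 1000 negative controls are needed, one can compute the exact binomial interval around the expected outcome. This is a minimal sketch under the stated assumption of a true specificity of 98%; clopper_pearson is the same illustrative helper used above, re-defined so the snippet is self-contained, and the exact power figure depends on how the +/-1% criterion is defined.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact binomial (Clopper-Pearson) confidence interval.
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Expected outcome with true specificity 98%: ~980 of 1000 negative controls test negative.
k, n = 980, 1000
lo, hi = clopper_pearson(k, n)
print(f"point estimate {k/n:.3f}, 95% CI {lo:.3f}-{hi:.3f}, "
      f"half-width ~{(hi - lo) / 2:.3f}")   # roughly +/- 1%
```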
In summary, antibody testing is crucial to inform release from lockdown. This study offers insights into the performance of both a novel ELISA and a panel of LFIA devices that have been made widely available, but to date with limited systematic validation. Our findings suggest that while current LFIA devices may provide some information for population-level surveys, their performance is inadequate for most individual patient applications. The biobank of samples assembled for this study continues to be expanded and will provide a valuable resource for developing the next generation of ELISA and lateral flow assays. The ELISA we describe is currently being optimised and adapted to run on a high-throughput platform and provides promise for the development of reliable approaches to antibody detection that can support decision making for clinicians, the public health community, policy-makers and industry.

DATA AVAILABILITY Results generated for all samples and relevant metadata are provided in Table S6.

ACKNOWLEDGEMENTS This work uses data and samples provided by patients and collected by the NHS as part of their care and support. We are extremely grateful to the frontline NHS clinical and research staff and volunteer medical students, who collected these data in challenging circumstances, and to the participants and their families for their generosity and individual contributions.

COMPETING INTERESTS … grants from NIHR, during the conduct of the study. No other author has a conflict of interest to declare.

Figure 3 caption: Any positive test for IgG, IgM, both or total antibody is shown as positive; please see Figure S2 for a more detailed breakdown. Grey blocks indicate missing data as a result of insufficient devices to test all samples, and one assay on one device with an invalid result. Samples in both panels are ranked from left to right by quantitation of IgG (as indicated in panel A).
Rapid estimation of notch stress intensity factors in 3D large-scale welded structures using the peak stress method

The Peak Stress Method (PSM) is an engineering, FE-oriented application of the notch stress intensity factor (NSIF) approach to fatigue design of welded joints, which takes advantage of the singular linear elastic peak stresses from FE analyses with coarse meshes. Originally, the PSM was calibrated to rapidly estimate the NSIFs by using 3D, eight-node brick elements, taking advantage of the submodeling technique. 3D modelling of large-scale structures is increasingly adopted in industrial applications, thanks to the growing spread of high-performance computing (HPC). Based on this trend, the application of the PSM by means of 3D models should possibly be even further speeded up. To do this, in the present contribution the PSM has been calibrated under mode I, II and III loadings by using ten-node tetra elements, which are able to directly discretize complex 3D geometries without the need for submodels. The calibration of the PSM has been carried out by analysing several 3D mode I, II and III problems. Afterwards, an applicative example has been considered, which is relevant to a large-scale steel welded structure having overall size on the order of meters. Two 3D FE models, having global size of tetra elements equal to 5 and 1.66 mm, have been solved by taking advantage of HPC, the global number of degrees of freedom being equal to 10 and 140 million, respectively. The NSIF values estimated at the toe and root sides according to the PSM have been compared with those calculated by adopting a shell-to-solid technique.

Introduction On the basis of the fundamental contributions of Williams [1], who studied two-dimensional notch problems under mode I (opening) and mode II (sliding) loadings, and Qian and Hasebe [2], who analysed the notch problem under mode III (tearing) loading, the singular linear elastic stress distributions in the neighborhood of a sharp V-notch tip (see the example of a toe side in a welded joint in Fig. 1) can be written as functions of the notch stress intensity factors (NSIFs), which quantify the intensity of the asymptotic stress fields. The following equations define the mode I, II and III NSIFs, respectively, according to Gross and Mendelson [3]:

$$K_1 = \sqrt{2\pi} \lim_{r \to 0^+} \left[ \sigma_{\theta\theta}(r, \theta = 0) \right] r^{1-\lambda_1} \quad (1a)$$
$$K_2 = \sqrt{2\pi} \lim_{r \to 0^+} \left[ \tau_{r\theta}(r, \theta = 0) \right] r^{1-\lambda_2} \quad (1b)$$
$$K_3 = \sqrt{2\pi} \lim_{r \to 0^+} \left[ \tau_{\theta z}(r, \theta = 0) \right] r^{1-\lambda_3} \quad (1c)$$

where (r, θ) is the polar frame centred at the notch tip with θ = 0 along the notch bisector line, and λ1, λ2 and λ3 are the mode I, II and III stress singularity exponents, which depend on the notch opening angle 2α.

In the literature, the NSIF approach has been used to analyse the medium- as well as the high-cycle fatigue strength of specimens weakened by sharp V-notches and made of structural materials [4,5]. Focusing on welded joints, NSIFs allow the fatigue strength to be correlated under uniaxial [6][7][8][9] as well as multiaxial loadings [10]. Nevertheless, it should be noted that the calculation of NSIFs on the basis of the results of numerical analyses shows a major drawback in engineering applications, since very refined FE meshes (element size on the order of 10^-5 mm) are required in order to apply definitions (1a)-(1c). When dealing with three-dimensional notched components, both the solution of the FE model and the post-processing of numerical results could be even more time-consuming.
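As an illustration of how definitions (1a)-(1c) are applied in practice, the Python sketch below extrapolates synthetic stress-distance data to the notch tip for mode I. All numerical values are invented for demonstration; the eigenvalue λ1 ≈ 0.674 for 2α = 135° is the well-known value from Williams' solution.

```python
import numpy as np

def nsif_mode_I(r, sigma_tt, lam1):
    """Estimate K1 per Eq. (1a): the quantity sqrt(2*pi)*sigma_tt(r)*r**(1-lam1)
    is extrapolated linearly to r -> 0 along the notch bisector (theta = 0)."""
    k1_of_r = np.sqrt(2.0 * np.pi) * sigma_tt * r**(1.0 - lam1)
    slope, intercept = np.polyfit(r, k1_of_r, deg=1)
    return intercept  # value at r = 0

# Synthetic stress-distance data; lam1 = 0.674 is the mode I eigenvalue for a
# 135 deg opening angle (typical weld toe).
lam1 = 0.674
r = np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3])   # distance from the notch tip [mm]
sigma_tt = 150.0 * r**(lam1 - 1.0)             # idealised singular field [MPa]
print(f"K1 ~ {nsif_mode_I(r, sigma_tt, lam1):.1f} MPa*mm^{1 - lam1:.3f}")
```

This also makes the practical drawback evident: meaningful stress values at distances of 10^-4 mm and below require extremely refined meshes.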
In order to overcome this issue, an engineering and rapid technique, the so-called Peak Stress Method (PSM), has been proposed, which allows the calculation of the NSIFs to be sped up by adopting coarse FE analyses, the element size being some orders of magnitude larger than that required to apply definitions (1a)-(1c). The second advantage of the PSM is that only a single stress value is necessary to estimate the NSIFs, instead of a number of stress-distance results, as is usually necessary in order to apply definitions (1a)-(1c). The method takes inspiration from the contribution by Nisitani and Teranishi [11,12], who proposed a technique to readily estimate the mode I stress intensity factor of a crack propagating from an ellipsoidal cavity. The PSM has been theoretically justified and extended to allow the rapid calculation also of the NSIF relevant to sharp and open V-notches under mode I [13,14], the SIF of cracks under mode II [15] and, finally, the NSIF of open V-notches under mode III [16]. It is worth noting that any NSIF-based approach for structural strength assessment can in principle be reformulated on the basis of the PSM. Recently, the PSM has been applied in combination with the approach based on the averaged strain energy density (SED) to assess the fatigue strength of welded joints under axial [15,17], torsion [16] and multiaxial [18,19] loading conditions.

Practically, the NSIFs K1, K2 and K3 can be readily estimated according to the PSM by adopting the singular, linear elastic, opening (mode I), sliding (mode II) and tearing (mode III) peak stresses σθθ,θ=0,peak, τrθ,θ=0,peak and τθz,θ=0,peak, respectively, which are referred to the V-notch bisector line, according to Fig. 2, and calculated at the V-notch tip from FE analyses with coarse meshes. The estimated NSIF values can be obtained from the following expressions [13,15,16]:

$$K_1 \approx K^{*}_{FE} \cdot \sigma_{\theta\theta,\theta=0,peak} \cdot d^{1-\lambda_1} \quad (2a)$$
$$K_2 \approx K^{**}_{FE} \cdot \tau_{r\theta,\theta=0,peak} \cdot d^{1-\lambda_2} \quad (2b)$$
$$K_3 \approx K^{***}_{FE} \cdot \tau_{\theta z,\theta=0,peak} \cdot d^{1-\lambda_3} \quad (2c)$$

where a is the characteristic size of the considered sharp V-notch (as an example, it is the notch depth in Fig. 4b; it enters the applicability conditions through the mesh density ratio a/d), and d is the so-called 'global element size', i.e. the average FE size adopted by the free mesh generation algorithm available in the numerical software.

Fig. 2. Sharp V-shaped notches in a welded joint at the weld root (2α = 0°) and at the weld toe (2α typically equal to 135°) sides. Definition of peak stresses σθθ,θ=0,peak, τrθ,θ=0,peak and τθz,θ=0,peak.

Parameters K*FE, K**FE and K***FE depend on the calibration options: (i) element type and formulation; (ii) mesh pattern of finite elements and (iii) procedure for stress extrapolation at FE nodes. See the detailed discussion reported in [20]. Originally, 2D, four-node plane quadrilateral elements of the Ansys® element library [13,15,16] were adopted to calibrate parameters K*FE, K**FE and K***FE, which resulted equal to 1.38, 3.38 and 1.93, respectively, such values being valid under the conditions discussed in the relevant literature [13][14][15][16], to which the reader is referred. Recently, K*FE and K**FE have also been calibrated by adopting six commercial FE packages other than Ansys®, taking advantage of a Round Robin between Italian Universities [20]. Afterwards, the PSM was extended to be used with 3D, eight-node brick elements [14], taking advantage of the submodeling technique available in the Ansys® software. More precisely, when dealing with complex 3D joint geometries, a submodel consisting of brick elements was generated after having analysed a main model meshed by employing ten-node tetra elements.
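In code form, Eq. (2a) reduces to a one-liner. The sketch below uses hypothetical numbers; K*FE = 1.38 and λ1 = 0.674 are the 2D four-node-element calibration values quoted above for a 135° weld toe.

```python
def psm_k1(sigma_peak, d, k_fe=1.38, lam1=0.674):
    """Eq. (2a): K1 is approximated as K*FE * sigma_peak * d**(1 - lam1)."""
    return k_fe * sigma_peak * d**(1.0 - lam1)

# Hypothetical case: peak opening stress of 95 MPa at a weld toe (2alpha = 135 deg)
# with a global element size d = 1 mm.
print(f"K1 ~ {psm_k1(sigma_peak=95.0, d=1.0):.1f} MPa*mm^0.326")
```

With the ten-node tetra calibration derived in the following sections, one would simply substitute the appropriate constant for the element type and notch angle at hand.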
Three-dimensional modelling of large-scale, or even full-scale, structures is increasingly adopted in industrial applications, thanks to the growing spread of high-performance computing (HPC). Based on this trend, the application of the PSM by means of three-dimensional models should possibly be even further speeded up. To do this, in the present contribution the parameters K*FE, K**FE and K***FE have been calibrated by using ten-node tetra elements, which are able to discretize complex 3D geometries, making the PSM applicable directly to a single model, without the need for submodels. The calibration of the PSM based on tetrahedral elements has been carried out by analysing several three-dimensional mode I, II and III problems. Afterwards, an applicative example has been considered, which is relevant to a large-scale steel welded structure having overall size on the order of meters and containing many different welded details.

Calibrating the PSM with 10-node tetrahedral elements When adopting tetrahedral elements to analyse a 3D notch problem, the mesh pattern obtained by the free mesh generation algorithm is intrinsically not regular, so that a node belonging to the notch tip could be shared by a different number of elements having significantly different shapes. Therefore, the peak stress could vary along the notch tip profile even in the case of a constant applied NSIF. Figure 3 shows an example of the variability of the peak stress along the notch tip profile of a plate subjected to pure tension loading (see Fig. 4b) and analysed by adopting a 3D FE model in which a plane strain condition has been simulated, as will be discussed in more detail in the following. For comparison purposes, the reciprocal of the normalised number of elements sharing the node at which the peak stress is evaluated, (N° FE / max N° FE)^-1, has been reported in Fig. 3. It can be observed that the higher the number of elements sharing the node, the lower the peak stress. Accordingly, to reduce the variability of the peak stress along the notch tip profile, an average peak stress value has been introduced (see red markers in Fig. 3), defined at the generic node n = k as the moving average over three adjacent vertex nodes, i.e. n = k-1, k and k+1:

$$\bar{\sigma}_{\theta\theta,\theta=0,peak}(n=k) = \frac{\sigma_{\theta\theta,\theta=0,peak}(k-1) + \sigma_{\theta\theta,\theta=0,peak}(k) + \sigma_{\theta\theta,\theta=0,peak}(k+1)}{3} \quad (3)$$

It is worth noting that only peak stress values calculated at vertex nodes of the quadratic tetrahedral elements have to be input in Eq. (3), i.e. stress values at mid-side nodes located at the notch tip profile, which are provided by Ansys® when "path operations" or "GET commands" are employed in post-processing, must be neglected. On the other hand, when "list nodal results" or "query results" are adopted in the post-processing environment of the Ansys® code, stress values at mid-side nodes are automatically excluded.
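A direct transcription of Eq. (3) follows as a sketch; the stress values are invented, and the treatment of the profile's end nodes, left unsmoothed here, is an assumption of this sketch, since a closed weld profile could instead wrap around.

```python
import numpy as np

def moving_average_peak(sigma_peak):
    """Eq. (3): the value at vertex node k is the mean of nodes k-1, k and k+1.
    End nodes are left unsmoothed (a closed profile could instead wrap around)."""
    s = np.asarray(sigma_peak, dtype=float)
    out = s.copy()
    out[1:-1] = (s[:-2] + s[1:-1] + s[2:]) / 3.0
    return out

# Invented peak stresses (MPa) at successive vertex nodes along the notch tip profile.
sigma = [88.0, 104.0, 92.0, 110.0, 95.0, 101.0]
print(moving_average_peak(sigma))
```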
Given this, the PSM has been calibrated by analysing several 3D notch problems under pure mode I, pure mode II and pure mode III loadings. After having calculated the peak stresses, the parameters K*FE, K**FE and K***FE have been evaluated from Eqs. (2a)-(2c), rearranged as follows:

$$K^{*}_{FE} = \frac{K_1}{\bar{\sigma}_{\theta\theta,\theta=0,peak} \cdot d^{1-\lambda_1}} \quad (4a)$$
$$K^{**}_{FE} = \frac{K_2}{\bar{\tau}_{r\theta,\theta=0,peak} \cdot d^{1-\lambda_2}} \quad (4b)$$
$$K^{***}_{FE} = \frac{K_3}{\bar{\tau}_{\theta z,\theta=0,peak} \cdot d^{1-\lambda_3}} \quad (4c)$$

The same material properties have been adopted in all FE analyses, consisting of a structural steel having Young's modulus equal to 206000 MPa and Poisson's ratio ν equal to 0.3. 3D, ten-node, quadratic tetrahedral elements (SOLID 187 of the Ansys® element library) have been used in all FE analyses. The numerical integration has been carried out by adopting 4 Gauss points, i.e. a reduced integration, this being the sole element formulation available in Ansys®. Once the proper element type is selected, the average FE size d is the sole parameter used to drive the automatic free mesh generation algorithm. Dealing with mode I and mode II problems, in order to obtain a uniform distribution of the relevant NSIFs along the notch tip profile, either plane-strain or plane-stress analyses have been performed:
• the plane strain state was simulated by constraining the out-of-plane element displacement u_z (and thus the corresponding strain component, ε_z = 0);
• the plane stress state was simulated by defining an ideal orthotropic material having the out-of-plane strain ε_z uncoupled from the in-plane components ε_x and ε_y. To this aim, the Poisson coefficients ν_zx and ν_zy were set to zero. In the absence of coupling between in-plane and out-of-plane strains, ε_z = 0 implies σ_zz = 0, i.e. plane stress conditions are applied.

3D problems (plane strain/plane stress), mode I loading, 2α = 0°, 90°, 135° Several three-dimensional notch and crack problems under pure mode I (see Fig. 4 and Table 1) have been analysed. The considered case studies are a selection of those adopted in the first calibration of the PSM under mode I loading, which had been carried out by considering 2D [13] and 3D [14] problems. Here, those case studies have been extruded to obtain 3D components having thickness equal to s. 3D linear elastic analyses have been performed by simulating plane strain or plane stress conditions and by adopting ten-node, quadratic tetrahedral elements, in order to calculate the peak stress values. Only one eighth of each geometry has been modeled by using the triple symmetry condition. The free mesh generation algorithm available in the Ansys code has been used after having input the average element size d. The mesh density ratio a/d has been varied between 1 and 20, by varying either the characteristic size of the notch problem, i.e. a, or the element size d (see Table 1). Moreover, three values of the notch opening angle, i.e. 2α = 0°, 90°, 135°, have been considered, 0° and 135° being typical of the weld root and toe sides, respectively. A nominal gross-section stress equal to 1 MPa has been applied to each FE model. After having solved the FE model, the peak value of the opening stress σ11,peak, being almost equal to σθθ,θ=0,peak in all cases of Figs. 4a and 4b, has been calculated at the vertex nodes belonging to the V-notch tip profile, then Eq. (3) has been applied to derive the average peak stress at each vertex node.

3D problems (plane strain/plane stress), mode II loading, 2α = 0° A plate having the geometry shown in Fig. 5 (see also Table 1), weakened by a central crack (2α = 0°) and subjected to pure mode II loading, has been analysed. The case study has been taken from the first calibration of the PSM under mode II loading for 2D problems [15] and it has been extruded to obtain a 3D plate having thickness equal to s. 3D linear elastic analyses have been carried out by simulating plane strain or plane stress conditions and by adopting ten-node, quadratic tetrahedral elements, in order to calculate the peak stress values. The mesh density ratio a/d has been varied in a wide range from 1 to 200, with 2 < 2a < 200 mm. Only one eighth of the plate geometry has been modeled by using the double antisymmetry on planes Y-Z and X-Z and the symmetry on plane X-Y.
The external load has been applied to the model by means of displacements u_x = u_y = 1.262·10^-3 mm at the plate free edges, which correspond to a nominal gross shear stress of 1 MPa when no crack is present. After having solved the numerical model, the peak value of the (mode II) shear stress τrθ,θ=0,peak = τxy,peak has been calculated at the vertex nodes belonging to the crack tip profile, then Eq. (3) has been applied to derive the average peak stress at each vertex node.

3D problems, mode III loading, 2α = 0°, 135° The weld root (2α = 0°) as well as the weld toe (2α = 135°) sides of tube-to-flange joints subjected to pure mode III loading (see Table 1 and an example in Fig. 6) have been analysed. The case studies have been taken from the first calibration of the PSM under mode III loading, in which 2D axisymmetric models were adopted [16]. In the present analysis, the previous 2D geometries have been extruded about the tube axis to obtain the 3D geometries. It should be noted that only one quarter of each geometry has been analysed by taking advantage of the double anti-symmetry boundary conditions. 3D linear elastic analyses have been carried out by adopting ten-node, quadratic tetrahedral elements, in order to calculate the peak stress values. The mesh density ratio a/d has been varied in a range from 1 to 10, a being the tube thickness t, which ranged between 7 and 10 mm (see Table 1). A nominal torsion shear stress equal to 1 MPa has been applied to each FE model at the tube side. After having solved the numerical model, the peak value of the (mode III) shear stress τθz,θ=0,peak has been calculated at the vertex nodes belonging to the root or toe profile by adopting a local coordinate system r-θ-z rotated at each node according to Fig. 2. Then Eq. (3) has been applied to derive the average peak stress at each vertex node. Finally, the exact values of the mode I NSIF K1, mode II SIF K2 and mode III NSIF K3 to be input in Eqs. (4a)-(4c), respectively, have been calculated by applying definitions (1a)-(1c) to the stress-distance numerical results obtained from 2D FE analyses adopting very refined FE meshes (the size of the smallest element being on the order of 10^-5 mm). Dealing with mode I and mode II problems, eight-node, quadratic quadrilateral elements (PLANE 183 of the Ansys® element library) under plane strain conditions have been adopted, while concerning mode III problems, eight-node, quadratic quadrilateral harmonic elements (PLANE 83 of the Ansys® element library) have been employed.
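The calibration step of Eq. (4a) can likewise be sketched in a few lines. All inputs below are illustrative placeholders: in the actual procedure, K1 comes from the refined 2D solution via Eq. (1a) and the averaged peak stresses from the coarse tetra model via Eq. (3).

```python
import numpy as np

def calibrate_k_fe(k_exact, peak_avg, d, lam):
    """Eq. (4a): K*FE = K1 / (averaged peak stress * d**(1 - lam)), node by node."""
    return k_exact / (np.asarray(peak_avg, dtype=float) * d**(1.0 - lam))

# Placeholder inputs: d = 1 mm, 2alpha = 135 deg (lam1 = 0.674).
k1_exact = 131.0                              # MPa*mm^0.326, from the refined 2D model
peak_avg = [92.0, 97.5, 95.0, 99.0, 94.0]     # MPa, along the notch tip profile
k_fe = calibrate_k_fe(k1_exact, peak_avg, d=1.0, lam=0.674)
print(f"K*FE mean {k_fe.mean():.2f}, range {k_fe.min():.2f}-{k_fe.max():.2f}")
```

Evaluating K*FE node by node in this way directly yields both the average value and the min-max scatter bar reported in the following figures.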
Results of FE analyses

The results obtained from the calibration of the PSM under mode I, mode II and mode III loadings by using ten-node tetra elements are shown in Figs. 7, 8 and 9, respectively. The figures report the parameters K*_FE, K**_FE and K***_FE, calculated from Eqs. (4a)-(4c), respectively, as functions of the mesh density ratio a/d. It should be noted that in each FE analysis the parameters K*_FE, K**_FE and K***_FE exhibited a non-uniform distribution along the notch tip profile, given the variability of the average peak stress shown in Fig. 3. Accordingly, Figs. 7, 8 and 9 report the average value of the K_FE parameters evaluated from each FE analysis along with the relevant bar, representing the range between the minimum and maximum values calculated along the notch tip profile.

In the case of 3D problems under mode I loading, Fig. 7 shows that K*_FE ≅ 1.01 ± 15% for 2α equal to 0° or 90°, while K*_FE ≅ 1.21 ± 10% when 2α equals 135°. Convergence is obtained when a/d ≥ 3 for 2α equal to 0° and 90°, and when a/d ≥ 1 for 2α = 135°. Dealing with mode II loading, the obtained results are reported in Fig. 8, which shows that K**_FE ≅ 1.63 ± 20%; convergence is obtained when the ratio a/d ≥ 1. Finally, concerning mode III loading, the obtained results are reported in Fig. 9, which shows that K***_FE ≅ 1.37 ± 10% when the root side (2α = 0°) is considered, while K***_FE ≅ 1.75 ± 5% when the toe side (2α = 135°) is of interest; convergence is obtained when a/d ≥ 2 at both the weld root and toe sides. A summary is reported in Table 2.

It can be observed that the scatter bands of the K_FE parameters are wider at the root side (2α = 0°) than at the toe side (2α = 135°) for all loading modes. This is due to the presence of elements having significantly different shape and size (the FE size d given as input being an average value) at the notch tip profile, especially in the case of cracks or weld root sides as compared to open V-notches or weld toe sides. The non-uniform local mesh pattern has a particular effect for cracks under mode II loading, the deviation of K**_FE being maximum and equal to ±20% (see Fig. 8).

Application to a case study

After presenting the calibration of the three-dimensional PSM based on tetra elements, an applicative example has been considered (see Fig. 10). It is relevant to a large-scale steel welded structure, which represents a detail of a sluice gate having overall size on the order of tens of meters. The considered detail has size on the order of meters and is located at a geodetic height of about 10 meters referred to the free surface. The detail is characterised by many different welded geometries, including T- and cruciform joints, both fillet and full-penetration welded; the plate thickness ranges between 10 and 58 mm. The 3D PSM based on tetra elements, previously calibrated, has been applied to estimate the NSIFs at both the weld toe and root sides of the considered large-scale steel welded structure. For the sake of brevity, only a selection of toe and root sides undergoing pure mode I loading have been analysed in the following. According to Fig. 7, the mesh density ratio a/d must be at least 1 to analyse weld toe sides and at least 3 to analyse weld root sides. With the aim of comparing the solution times, two rather coarse meshes of ten-node tetra elements (see Fig. 11) have been generated by adopting a/d = 1 and 3, respectively. The minimum plate thickness being equal to 2a = 10 mm, the mesh density ratio a/d = 1 corresponds to d = 5 mm (see Fig. 11a), while a/d = 3 corresponds to d = 1.67 mm (see Fig. 11b). The boundary conditions applied to both FE models are reported in Fig. 10. It should be noted that half of the hydrostatic pressure has been applied to the X-Z plane, an anti-symmetry boundary condition being active there. Both 3D FE models have been solved by taking advantage of Ansys HPC®, the global number of degrees of freedom (dof) being equal to 10 million for the case a/d = 1, while it reaches 140 million when a/d = 3. The performance of the computer cluster adopted to solve the FE analyses and the solution times are reported in Table 3.

Figures 12 and 13 report the distribution of the mode I NSIF estimated at selected toe and root sides according to the previously calibrated 3D PSM. More in detail, the NSIF K_1 has been estimated at the selected toe side (Fig. 12) by applying Eq. (2a) with K*_FE = 1.21 to the average peak stress values calculated from the FE model of Fig. 11a, while the SIF K_1 has been estimated at the selected root side (Fig. 13) by applying Eq. (2a) with K*_FE = 1.01 to the average peak stress values calculated from the FE model of Fig. 11b.
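Eq. (2a) is not reproduced in this excerpt; assuming it has the standard PSM form K_1 ≈ K*_FE · σ̄_peak · d^(1−λ₁), the NSIF estimate at each vertex node reduces to a one-line computation. The peak stress inputs below are hypothetical placeholders; the d and K*_FE values are those quoted for the case study.

```python
import numpy as np

LAMBDA1 = {0: 0.500, 90: 0.5445, 135: 0.6736}  # Williams' mode I eigenvalues

def psm_k1(sigma_peak_avg, d, opening_angle_deg, k_fe):
    """PSM estimate of the mode I (N)SIF from the averaged peak stress,
    assuming Eq. (2a) reads K1 = K*_FE * sigma_peak_avg * d**(1 - lambda1),
    with d the average FE size given to the free-mesh generator."""
    lam = LAMBDA1[opening_angle_deg]
    return k_fe * sigma_peak_avg * d ** (1.0 - lam)

# toe side of the case study: 2alpha = 135 deg, d = 5 mm, K*_FE = 1.21
print(psm_k1(sigma_peak_avg=0.80, d=5.0, opening_angle_deg=135, k_fe=1.21))
# root side: 2alpha = 0 deg, d = 1.67 mm, K*_FE = 1.01
print(psm_k1(sigma_peak_avg=0.60, d=1.67, opening_angle_deg=0, k_fe=1.01))
```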
For comparison purposes, the mode I NSIF values estimated at the selected toe and root sides according to a shell-to-solid technique [21] have been included in Figs. 12 and 13. A quite good agreement can be observed, the maximum deviation being equal to approximately 10% at the toe side and approximately 15% at the root side.

Conclusions

The peak stress method (PSM) employs the singular, linear elastic peak stresses evaluated at the notch tip by means of FE analyses with coarse meshes to rapidly estimate the mode I, mode II and mode III NSIFs. Three calibration constants are needed, namely K*_FE (Eq. (4a)), K**_FE (Eq. (4b)) and K***_FE (Eq. (4c)). Originally, the PSM was calibrated by using 2D plane or 3D brick elements, taking advantage of the submodeling technique. In the present contribution the PSM has been calibrated under mode I, II and III loadings by using ten-node tetra elements, which are able to directly discretize complex 3D geometries without the need for submodels. The following conclusions can be drawn:
• Under mode I loading the constant K*_FE to use in Eq. (2a) is 1.01 ± 15% for 2α = 0°, 90°, while it equals 1.21 ± 10% when 2α = 135°. Convergence is obtained when the mesh density ratio a/d ≥ 3 for 2α equal to 0°, 90° and a/d ≥ 1 for 2α = 135°.
• Dealing with mode II loading, the constant K**_FE to use in Eq. (2b) is 1.63 ± 20% for a mesh density ratio a/d ≥ 1.
• Concerning mode III loading, the calibration constant is 1.37 ± 10% at the root side (2α = 0°) and 1.75 ± 5% at the toe side (2α = 135°), with convergence for a/d ≥ 2 at both sides (see the Results section).
• Finally, an applicative example has been considered, which is relevant to a large-scale steel welded structure having overall size on the order of several meters. Ansys High Performance Computing (HPC) has been adopted to solve two 3D FE models employing ten-node tetra elements with global element size d = 5 and 1.67 mm, respectively. The solution time was on the order of 200 s for the FE model with d = 5 mm, i.e. 10 million degrees of freedom, while about 3500 s were needed to solve the FE model with d = 1.67 mm, i.e. 140 million degrees of freedom. The mode I NSIF values estimated at the toe and root sides according to the PSM have been compared with those calculated by adopting a shell-to-solid technique, showing a quite good agreement.
• Because of the relatively coarse FE analyses required and the simplicity of post-processing the calculated peak stresses, the PSM based on three-dimensional models of ten-node tetra elements might be useful in everyday design practice, even when large-scale structures are considered.
EnginSoft SpA (Padova, Italy) is gratefully acknowledged for making Ansys HPC available for this project.

Fig. 1. Polar reference system centred at the weld toe of a typical tube-to-flange welded joint geometry subjected to multiaxial bending and torsion loading.
Fig. 3. Example of the variability of the peak stress σ_θθ,θ=0,peak calculated at vertex nodes belonging to the V-notch tip profile and comparison with the number of elements sharing each node (N° FE); definition of the average peak stress σ̄_θθ,θ=0,peak.
Fig. 10. Geometry (dimensions in mm) and boundary conditions applied to the detail of the sluice gate. γ is the water specific weight, equal to 10⁴ N/m³; h_A and h_B are the geodetic heights referred to the free surface, equal to 9.232 and 10 meters, respectively.
Fig. 11. Coarse meshes of ten-node tetra elements generated in the Ansys 18.2 environment and adopted to analyse (a) weld toe sides, with a/d = 1 according to Fig. 7b, and (b) weld root sides, with a/d = 3 according to Fig. 7a. Dof = degrees of freedom.
Fig. 12. Mode I NSIF K_1 estimated along a selected toe side of the case study reported in Fig. 10: comparison between results obtained by applying Eq. (2a) with the FE mesh of Fig. 11a and those derived from a shell-to-solid model according to [21].
Fig. 13. Mode I SIF K_1 estimated along a selected root side of the case study reported in Fig. 10: comparison between results obtained by applying Eq. (2a) with the FE mesh of Fig. 11b and those derived from a shell-to-solid model according to [21].
Table 1. Geometrical and FE parameters considered in the calibration of the PSM with 10-node tetrahedral elements under mode I, mode II and mode III loadings.
Table 3. Performance of the computer cluster adopted to solve the FE analyses of Fig. 11 using Ansys 18.2 HPC.
Final results on the $0\nu\beta\beta$ decay half-life limit of $^{100}$Mo from the CUPID-Mo experiment

The CUPID-Mo experiment to search for 0$\nu\beta\beta$ decay in $^{100}$Mo has been recently completed after about 1.5 years of operation at Laboratoire Souterrain de Modane (France). It served as a demonstrator for CUPID, a next generation 0$\nu\beta\beta$ decay experiment. CUPID-Mo was comprised of 20 enriched Li$_2$$^{100}$MoO$_4$ scintillating calorimeters, each with a mass of $\sim$0.2 kg, operated at $\sim$20 mK. We present here the final analysis with the full exposure of CUPID-Mo ($^{100}$Mo exposure of 1.47 kg$\times$yr) used to search for lepton number violation via 0$\nu\beta\beta$ decay. We report on various analysis improvements since the previous result on a subset of data, reprocessing all data with these new techniques. We observe zero events in the region of interest and set a new limit on the $^{100}$Mo 0$\nu\beta\beta$ decay half-life of $T^{0\nu}_{1/2}>1.8 \times 10^{24}$ year (stat.+syst.) at 90% CI. Under the light Majorana neutrino exchange mechanism this corresponds to an effective Majorana neutrino mass of $\langle m_{\beta\beta}\rangle < (0.28$--$0.49)$ eV, dependent upon the nuclear matrix element utilized.

Introduction

Ever since the neutrino was found to have mass via the observation of flavor state oscillations [1,2], the nature of the neutrino mass itself has remained a mystery. Unlike charged leptons, neutrinos may be Majorana [3,4] instead of Dirac particles. In addition to implying that neutrinos and antineutrinos would be the same particle [5], this would also imply that the total lepton number is not conserved [6]. This may provide a possible explanation of the baryon asymmetry (i.e., the imbalance between matter and anti-matter) in the early universe [7,8].

Two-neutrino double-beta (2νββ) decay is a rare Standard Model process which can occur in some even-even nuclei for which single beta decays are energetically forbidden (or heavily disfavored due to large changes in angular momentum). In this process two neutrons are converted into two protons with the emission of two electrons and two electron anti-neutrinos. The possibility of the neutrino being a Majorana fermion raises the prospect that neutrinoless double-beta (0νββ) decay may occur [9,10]. In the case of 0νββ decay, we would again observe the conversion of two neutrons into two protons, but such a decay would produce only two electrons. Whereas 2νββ decay conserves lepton number, 0νββ decay would result in an overall violation of the total lepton number by two units [11-13], and indicate physics beyond the Standard Model. At present this process is unobserved, with limits on the decay half-life at the level of 10²⁴-10²⁶ yr via the isotopes ⁷⁶Ge, ⁸²Se, ¹⁰⁰Mo, ¹³⁰Te, and ¹³⁶Xe [14-22].

The process of 0νββ decay has several possible mechanisms [11,13,23-29]; however, the minimal extension to the Standard Model provides the simplest, via the exchange of a light Majorana neutrino. The rate of this process depends upon the square of the effective Majorana neutrino mass, ⟨m_ββ⟩:

(T^0ν_1/2)⁻¹ = g_A⁴ G^0ν |M^0ν|² ⟨m_ββ⟩² / m_e²,

where g_A is the weak axial-vector coupling constant, M^0ν is the nuclear matrix element, G^0ν is the decay phase space, and m_e is the electron mass. The effective mass is a linear combination of the three neutrino mass eigenstates, with present limits on ⟨m_ββ⟩ ranging from 60-600 meV [13].
A vibrant experimental field has emerged to search for 0νββ decay with experiments using a variety of nuclei and a wide range of methods (see [13] for a review of some of these). The main experimental signature for this decay is a peak of the summed electron energy at the Q-value of the decay (Q_ββ; the difference in energy between the parent and daughter nuclei), broadened only by the detector energy resolution. There are 35 naturally occurring ββ decay isotopes [30]; however, from an experimental perspective only a subset of these are relevant. Searching for 0νββ decay requires that the number of target atoms be very large and that the background rate be small. In an ideal case a candidate isotope should have Q_ββ > 2.6 MeV, so it is above significant natural γ backgrounds and has the phase space for a relatively fast decay rate. It should also occur with a high natural abundance or be easily enriched. The isotope ¹⁰⁰Mo meets these requirements with a Q_ββ of 3034 keV and a relatively favorable phase space compared to other isotopes. Additionally it is relatively easy to enrich detectors in ¹⁰⁰Mo.

Several experiments have utilized ¹⁰⁰Mo for 0νββ decay searches: NEMO-3, LUMINEU, and AMoRE. NEMO-3 utilized foils containing ¹⁰⁰Mo and used external sensors to measure time of flight and provide calorimetry. It ran from 2003-2010 at Modane and accumulated an exposure of 34.3 kg×year in ¹⁰⁰Mo. It found no evidence for 0νββ decay and set a limit of T^0ν_1/2 > 1.1 × 10²⁴ yr at 90% C.I., with a limit on the effective Majorana mass of ⟨m_ββ⟩ < (0.33-0.62) eV [20]. LUMINEU was a pilot experiment for ¹⁰⁰Mo-based calorimeters and served as a precursor to CUPID-Mo. LUMINEU utilized both Zn¹⁰⁰MoO₄ and Li₂¹⁰⁰MoO₄ crystals, and found Li₂¹⁰⁰MoO₄ to be more favorable for 0νββ decay searches [31]. AMoRE operates at the Yangyang underground laboratory in Korea. The AMoRE-pilot operated with ⁴⁸ᵈᵉᵖˡCa¹⁰⁰MoO₄ crystals. As with CUPID-Mo, AMoRE utilizes light and heat to provide particle identification. The AMoRE-pilot has set a limit, using 111 kg×day of exposure, of T^0ν_1/2 > 9.5 × 10²² yr at 90% C.I., with a corresponding effective Majorana mass limit of ⟨m_ββ⟩ < (1.2-2.1) eV [32].

Scintillating calorimeters are one of the most promising current technologies for 0νββ decay searches, with many possible configurations [16,32-38]. These consist of a crystalline material, containing the source isotope, capable of scintillating at low temperatures, which is operated as a cryogenic calorimeter coupled to light detectors to detect the scintillation light. Particle identification is based on the difference in scintillation light produced for a given amount of energy deposited in the main calorimeter. This technology has demonstrated excellent energy resolution, high detection efficiency, and low background rates (due to the rejection of α events). The rejection of α events is of primary concern, as the energy region above ∼2.6 MeV is populated by surface radioactive contaminants with degraded energy collection of α particles [17,18]. CUPID (CUORE Upgrade with Particle IDentification) is a next-generation experiment [39] which will use this scintillating calorimeter technology. It will build on the success of CUORE (Cryogenic Underground Observatory for Rare Events), which demonstrated the feasibility of a tonne-scale experiment using cryogenic calorimeters [17,40]. In this paper we describe the final 0νββ decay search results of the CUPID-Mo experiment, which has successfully demonstrated the use of ¹⁰⁰Mo-enriched Li₂MoO₄ detectors for CUPID.
In sections 2 and 3 we introduce the CUPID-Mo experiment and give an overview of the collected data. In sections 4 and 5 we describe the data production and basic data quality selection. Then in sections 6-11 we describe in detail the improved data selection cuts we use to reduce experimental background rates. We then describe our Bayesian 0νββ decay analysis in sections 12-14. Finally, the results and their implications are discussed in section 15.

CUPID-Mo Experiment

The CUPID-Mo experiment was operated underground at the Laboratoire Souterrain de Modane in France [41], following a successful pilot experiment, LUMINEU [31,42]. The CUPID-Mo detector array was comprised of 20 scintillating Li₂MoO₄ (LMO) cylindrical crystals, ∼210 g each (see Fig. 1). These are enriched in ¹⁰⁰Mo to ∼97% and operated as cryogenic calorimeters at ∼20 mK. Each LMO detector is paired with a Ge wafer light detector (LD) and assembled into a detector module with a copper holder and Vikuiti™ reflective foil to increase the scintillation light collection. Both the LMO detectors and the LDs are instrumented with a neutron-transmutation-doped Ge thermistor (NTD) [43] for readout. Additionally, a Si heater is attached to each LMO crystal, which is used to monitor detector performance. The modules are organized into five towers with four floors and mounted in the EDELWEISS cryostat [44] (see Fig. 1). In this configuration each LMO detector (apart from those on the top floor) nominally faces two LDs, increasing the discrimination power. We note that one LD did not function, resulting in two LMO detectors not on the top floor having only a single working LD. CUPID-Mo has demonstrated excellent performance, crystal radiopurity, energy resolution, and high detection efficiency [41], close to the requirements of the CUPID experiment [39]. An analysis of the initial CUPID-Mo data (1.17 kg×year of ¹⁰⁰Mo exposure) led to a limit on the half-life of 0νββ decay in ¹⁰⁰Mo of T^0ν_1/2 > 1.5 × 10²⁴ yr at 90% C.I. [19]. For the final results of CUPID-Mo we increase the exposure and also develop novel analysis procedures which will be critical to allow CUPID to reach its goals.

Fig. 1. Images showing the CUPID-Mo detector array (5 nearest towers) mounted in the EDELWEISS cryostat (top) and a single module assembled in the Cu holder (bottom) [41]. (Bottom left) view from the top of the LMO detector, NTD-Ge, Si heater, copper holder and PTFE clamps. (Bottom right) view from the bottom of the Ge LD with its NTD-Ge thermistor and PTFE clamps.

CUPID-Mo Data Taking

The data utilized in this analysis were acquired from early 2019 through mid-2020 (481 days in total) with a duty cycle of ∼89% of the EDELWEISS cryogenic facility. The data collected between periods of cryostat maintenance or special calibrations, which require the external shield to open, are grouped into "datasets", typically ∼1-2 months long. Within each dataset we attempt to have periods of calibration data taking (typically ∼2-day-long measurements every ∼10 days) bracketing physics data taking, corresponding to 21% and 70% of the total CUPID-Mo data, respectively. CUPID-Mo utilizes a U/Th source placed outside the copper screens of the cryostat (see [41]) for standard LMO detector calibration, providing a prominent γ peak at 2615 keV, as well as several other peaks at lower energies. The primary calibration source is a thorite mineral with ∼50 Bq of ²³²Th and ∼100 Bq of ²³⁸U, with significantly smaller activity from ²³⁵U.
Overall, nine datasets are utilized in this final analysis with a total LMO exposure of 2.71 kg×year, corresponding to a ¹⁰⁰Mo exposure of 1.47 kg×year. As was the case in the previous analysis [19], we exclude three short periods of data taking which have an insufficient amount of calibration data to adequately perform the thermal gain correction and determine the energy calibration. We also exclude one LMO detector which has abnormally poor performance in all datasets. Additional periods of data taking with a very high activity ⁶⁰Co source (∼100 kBq, ∼2% of CUPID-Mo data) were performed near the regular liquid He refills (every ∼10 days). While the ⁶⁰Co source was primarily used for EDELWEISS [44], it was also utilized in CUPID-Mo for LD calibration via X-ray fluorescence [41], as further described in section 4.3. The remainder of the data in CUPID-Mo is split between calibration with a ²⁴¹Am+⁹Be neutron source (2%) and a ⁵⁶Co calibration source (∼5%).

Data Production

We outline here the basic data production steps required to create a calibrated energy spectrum. Starting with AC-biased NTDs, we perform demodulation in hardware and sample the resulting voltage signals from all heat and light channels at 500 Hz to produce the raw data. We then utilize the Diana and Apollo framework [45,46], developed by the CUORE-0, CUORE, and CUPID-0 collaborations, with modifications for CUPID-Mo. Events in the data are triggered "offline" in Apollo using the optimum trigger method [47] to search for pulses. This method requires an initial triggering of the data to construct an average pulse template and an average noise power spectrum. These in turn are used to build an optimum filter (OF) which maximizes the signal-to-noise ratio. This OF is then used as the basis for the primary triggering. An event is triggered when the filtered data cross a set threshold relative to the typical OF resolution obtained from the average noise power spectrum for a given channel (set at a value of 10 σ). We periodically inject flags indicating noise triggers into the data stream in order to obtain a sample of noise events, which allows us to characterize the noise on each channel. For this data production we utilize a 3 s time window for both the heat and light channels. This is long enough to allow the LMO waveform sufficient time to return towards baseline, whilst being short enough to keep the rate of pileup events relatively low. This choice also keeps the event windows of equal size between the LMO detectors and the LDs (see Fig. 2). The first 1 s of data prior to the trigger is the pretrigger window, which is used in pulse baseline measurements. For reference, the typical 10%-90% rise and 90%-30% fall times are ∼20 ms and ∼300 ms for the LMO detectors, and much shorter, ∼4 ms and ∼9 ms, for the LDs [41]. Once triggered data are available, basic event reconstruction quantities are computed, such as the waveform average baseline (the mean of the waveform in the first 80% of the pretrigger window), baseline slope, pulse rise and decay times, and other parameters that are computed directly on the raw waveform. A mapping of so-called "side" channels is generated in the data processing framework, grouping the LDs that a given LMO crystal directly faces. In each dataset, a new OF is constructed for each channel and used to estimate the amplitude of both the LMO detector and LD events, the latter being restricted to search in a narrow range around the LMO event trigger time.
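A minimal sketch of the optimum-filter amplitude estimate described above, assuming a matched filter H(f) ∝ S*(f)/N(f) normalized to unit gain on the pulse template; the actual Apollo implementation additionally handles windowing, delay interpolation, and the trigger threshold logic. All waveform parameters below are illustrative.

```python
import numpy as np

def optimum_filter_amplitude(pulse, template, noise_psd):
    """Matched (optimum) filter amplitude estimate for one event window.

    pulse, template : time-domain waveforms of equal length
                      (template normalized to unit amplitude);
    noise_psd       : average noise power spectrum, same length as the FFT.
    """
    s = np.fft.fft(template)
    x = np.fft.fft(pulse)
    h = np.conj(s) / noise_psd                  # filter transfer function
    norm = np.sum(np.abs(s) ** 2 / noise_psd)   # unit gain on the template
    filtered = np.fft.ifft(h * x).real * len(pulse) / norm
    return filtered.max()                       # amplitude at the best delay

# toy usage: 3 s window sampled at 500 Hz, double-exponential template
n = 1500
t = np.arange(n) / 500.0
template = np.where(t > 1.0, np.exp(-(t - 1.0) / 0.3) - np.exp(-(t - 1.0) / 0.02), 0.0)
template /= template.max()
noise_psd = np.ones(n)                          # white noise for illustration
pulse = 2.5 * template + 0.05 * np.random.default_rng(0).normal(size=n)
print(optimum_filter_amplitude(pulse, template, noise_psd))  # ~2.5
```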
After the OF amplitudes are available, the thermal gain correction is performed on the LMO detectors (see section 4.1) and finally the LMO detector energy scale is calibrated from the external U/Th calibration runs (see section 4.2). Each step of the data production is done on runs within a single dataset, with the exception of the first two datasets, which share a common thermal gain correction and energy calibration period to boost statistics.

Thermal Gain Correction

After we have reconstructed pulse amplitudes via the OF we must perform a thermal gain correction (sometimes referred to as "stabilization") [48]. This process corrects for thermal-gain changes in the detector response, which cause slight differences in pulse amplitude for a given incident energy, resulting in artificially broadened peaks. The pulse baseline is used as a proxy for the temperature, allowing us to correct for thermal-gain changes due to temperature fluctuations. This correction uses calibration data, from which we select a sample of events determined to be in the 2615 keV γ-ray full absorption peak from ²⁰⁸Tl. We perform a fit of the OF amplitudes A as a function of the mean baselines b, given by the linear function f(b) = p₀ + p₁·b, and compute the scaled corrected amplitude as Ã = (A/f(b)) · 2615. This correction is applied to both calibration and physics data within a dataset. We observe that the LDs do not demonstrate any significant thermal gain drift and as such we do not perform this step on them.

LMO Detector Calibration

To perform the energy calibration, four of the most prominent γ peaks from the U/Th source are utilized: 609, 1120, 1764, and 2615 keV. These peaks are fit to a model comprised of a smeared-step function and a linear component for the background, along with a Crystal Ball function [49] for the peak shape. The smeared step is modeled via a complementary error function with mean and sigma equal to those used in the peak shape. Then, the best-fit peak locations are fit against the literature values for the specified energies using a quadratic function with zero intercept, which provides the calibration from the thermal-gain-corrected amplitude to energy for each channel:

E(Ã) = c₁·Ã + c₂·Ã².

In general this fit performs well for the selected peaks used in calibration, with only minimal residuals. Using these calibration functions we can compute the deposited energy for each event; it is at this point that summed spectra from all channels become meaningful for the 0νββ decay analysis. We note that between successive datasets there is some small variation in the calibration fit coefficients for any given channel; however, this is acceptable, as the calibration removes residual detector response non-linearities that may change slightly over the course of the data taking. We check the stability of each calibration run over all datasets for each channel relative to the expected energy, and find the central location of the 2615 keV peak for each channel-run to be consistent within the channel energy resolution.
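A minimal sketch of the stabilization and calibration chain just described: a linear fit of the 2615 keV amplitudes against the baseline, rescaling of every amplitude by the fitted gain, and a zero-intercept quadratic calibration on the four U/Th peaks. All numerical inputs below are hypothetical.

```python
import numpy as np

def stabilize(amplitudes, baselines, ref_energy=2615.0):
    """Fit OF amplitudes of 2615 keV events vs. pulse baseline, then return
    a function rescaling any (amplitude, baseline) pair to the reference."""
    p1, p0 = np.polyfit(baselines, amplitudes, 1)       # A(b) = p0 + p1*b
    return lambda amp, b: amp / (p0 + p1 * np.asarray(b)) * ref_energy

def calibrate(stab_amps, true_energies):
    """Quadratic calibration with zero intercept, E = c1*A + c2*A^2,
    fit on the four prominent U/Th peaks."""
    A = np.column_stack([stab_amps, np.asarray(stab_amps) ** 2])
    c, *_ = np.linalg.lstsq(A, np.asarray(true_energies), rcond=None)
    return lambda a: c[0] * np.asarray(a) + c[1] * np.asarray(a) ** 2

# toy usage: amplitudes drifting linearly with the baseline (temperature proxy)
rng = np.random.default_rng(1)
b = rng.normal(0.0, 5.0, 200)
amps = (1.0 + 0.002 * b) * 2615.0
corr = stabilize(amps, b)
cal = calibrate([609.0, 1120.0, 1764.0, 2615.0], [609.0, 1120.0, 1764.0, 2615.0])
print(cal(corr(amps[0], b[0])))                          # ~2615 keV
```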
LD Calibration

The LD energy scale is calibrated using a high-activity ⁶⁰Co source. This source produces 1173 and 1333 keV γ's which interact with the LMO crystals to produce fluorescence X-rays. In particular, Mo X-rays with energy ∼17 keV can be fully absorbed in the LDs and used for energy calibration. We use Monte Carlo simulations to determine the energy of the X-ray peak, accounting for the expected contribution of scintillation light. We extract the amplitude of the X-ray peak for each channel using a Gaussian fit with a linear background and perform a linear calibration. Three datasets do not have any ⁶⁰Co calibration available, so we assume a constant light yield with respect to the closest dataset in time that does have a ⁶⁰Co calibration and extrapolate the LD calibration instead. The combined ⁶⁰Co calibration spectrum is shown in Fig. 3.

Fig. 3. Left: U/Th calibration spectrum, including the single escape peak of the 2615 keV γ; the four most prominent peaks (denoted by larger labels) are utilized for the LMO detector energy calibration. Right: calibration spectra for the LDs with X-ray fluorescence during irradiation with a high-activity ⁶⁰Co source; the ∼17 keV X-ray line is used for a linear LD absolute energy calibration.

Time Delay Correction

For studies that involve the timing of events in multiple crystals, a correction of the characteristic time offsets between pairs of channels is performed. This correction is done by constructing a matrix of channel-channel time delays using events that are coincident in two LMO detectors (referred to as multiplicity two, M₂) within a conservative (±100 ms) time window, and whose energies sum to a prominent γ peak in the calibration spectra. This ensures the events under consideration are likely to originate from causally related interactions and not from accidental coincidences. The timing information for an event comes from two sources: the raw trigger time and an offset from the OF. The OF time, t_OF, is the interpolated time offset which minimizes the χ² between a pulse and the average pulse template. Together these two values are used to estimate the time difference between any two events i and j:

Δt_{i,j} = (t_trigger,i + t_OF,i) − (t_trigger,j + t_OF,j).

The distribution of this time offset for a given channel pair is computed, and the time offset between channels i and j, Δt̂_{i,j}, is estimated as the median of the distribution. Several checks of the reliability of this estimate are performed: consistency of the median and mode to within the ∼1 ms binning size, and the presence of sufficient counts (≥ 5). Any channel pair that fails either of these checks is deemed unsuitable for direct computation of Δt̂, and an iterative approach is used, exploiting the fact that time differences add linearly:

Δt̂_{i,k} = Δt̂_{i,j} + Δt̂_{j,k}.   (7)

As a cross-check, time offsets computed solely from the M₂ summed peaks were found to agree within ∼1 ms. We also purposefully zero out valid channel-pair cells in the matrix to check the reliability of the iterative approach, finding that it reliably reproduces the Δt̂ values that are directly computable. As described in section 7, this time delay correction greatly improves our anti-coincidence cut, as the distribution of corrected time differences is much narrower (see Fig. 4).

Fig. 4. Time differences for M₂ events whose energies sum to a prominent peak in calibration data, for both raw times (black) and corrected times (red). Note that the time scales differ in the two cases to account for the much sharper peak with corrected times. Due to the high event rate in calibration data, an elevated rate of accidental coincidences is present, leading to an elevated flat background in the Δt distributions.
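A sketch of the time-offset matrix construction: medians of the pairwise Δt distributions for well-populated pairs, with missing cells filled transitively via Eq. (7). The minimum-counts check mirrors the one described above; the consistency-of-median-and-mode check is omitted for brevity.

```python
import numpy as np

def delay_matrix(pair_dts, n_ch, min_counts=5):
    """Channel-channel time-offset matrix from M2 coincidences.

    pair_dts : dict {(i, j): array of t_i - t_j values in ms} from events
               whose summed energy falls in a prominent calibration peak."""
    dt = np.full((n_ch, n_ch), np.nan)
    np.fill_diagonal(dt, 0.0)
    for (i, j), vals in pair_dts.items():
        if len(vals) >= min_counts:
            dt[i, j] = np.median(vals)          # robust central value
            dt[j, i] = -dt[i, j]
    for _ in range(n_ch):                       # iterate until no cell updates
        done = True
        for i in range(n_ch):
            for k in range(n_ch):
                if np.isnan(dt[i, k]):
                    for j in range(n_ch):
                        if not (np.isnan(dt[i, j]) or np.isnan(dt[j, k])):
                            dt[i, k] = dt[i, j] + dt[j, k]  # Eq. (7)
                            done = False
                            break
        if done:
            break
    return dt

# toy usage: direct 0-1 and 1-2 offsets; the 0-2 offset is recovered transitively
pairs = {(0, 1): np.array([2.0, 2.1, 1.9, 2.0, 2.0]),
         (1, 2): np.array([-1.0, -1.1, -0.9, -1.0, -1.0])}
print(delay_matrix(pairs, 3))   # dt[0, 2] ~ 1.0 ms
```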
Data Selection Cuts and Blinding

After calibration is performed, the data can be meaningfully combined for analysis. We apply a set of simple "base" cuts to remove bad events. These cuts require that an event be flagged as a signal event (i.e., not a heater or noise event), reject periods of bad detector operating conditions manually flagged due to excessive noise or environmental disturbances, reject any events with extremely atypical rise times, and reject any events with atypical baseline slope values. Additionally, we reject all events from a single LMO observed to have an abnormally low signal-to-noise ratio that compromises its performance, as was done previously [19]. Beyond these base cuts, further improvements are possible with more sophisticated selection cuts to remove background and increase the sensitivity to 0νββ decay. We expect to observe background from: spurious/pileup events, suppressed with pulse shape discrimination cuts (see section 6); external γ events, suppressed by removing multiple-scatter events (see section 7); α background, removed using LD cuts (see section 8); β events from close sources, suppressed by delayed coincidence cuts (see section 9); and external muon-induced events, removed with the muon veto (see section 10). Finally, we note that all cuts are tuned without utilizing data in the vicinity of Q_ββ (3034 keV) for ¹⁰⁰Mo. As was done previously [19], we blind the data by excluding all events in a 100 keV window centered at Q_ββ. In the following sections we describe these selection cuts.

Pulse-Shape Discrimination

An expected significant contribution to the background near Q_ββ are pileup events, in which two or more events overlap in time in the same LMO detector. This causes incorrect amplitude estimation and shifts events into our region of interest (ROI). In order to mitigate this effect we employ a pulse shape discrimination (PSD) cut comprised of two different techniques. The main method we utilize for pulse-shape discrimination is based on principal component analysis (PCA), as was originally utilized in the previous analysis [19,50] and successfully applied recently to CUORE [17], with more details in [51]. This method utilizes 2νββ decay events between 1-2 MeV to derive a set of principal components that describe typical pulse shapes for each channel-dataset. The leading principal component typically resembles an average pulse template, with subsequent components adding small adjustments. These are used to compute a quantity referred to as the reconstruction error (RE), which characterizes how well a given pulse x with n samples is described by a set of principal components:

RE = √( Σ_{i=1}^{n} ( x_i − Σ_j a_j w_{j,i} )² ),

where w_j is the j-th eigenvector of the PCA, and the projection of x onto each component is given by a_j = x · w_j. The RE is energy dependent, and this is corrected for by subtracting the linear component, f(E), and normalizing by the median absolute deviation (MAD):

NRE = (RE − f(E)) / MAD.

The resulting normalized reconstruction error, NRE, is then used with an energy-independent threshold to reject abnormal events.
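A minimal sketch of the PCA reconstruction error just defined: principal components are derived from a (here synthetic) training sample, and RE is the norm of the residual after projecting each waveform onto the leading components. The run-by-run normalization to NRE is omitted.

```python
import numpy as np

def pca_reconstruction_error(waveforms, n_components=6):
    """Reconstruction error of each waveform under the leading principal
    components derived from a clean training sample."""
    X = np.asarray(waveforms, dtype=float)
    X = X - X.mean(axis=0)
    # principal components = leading right singular vectors
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    W = vt[:n_components]                 # (n_components, n_samples)
    proj = X @ W.T                        # projections a_j = x . w_j
    residual = X - proj @ W
    return np.sqrt((residual ** 2).sum(axis=1))

# toy training sample of pulse-like shapes with varying amplitudes
rng = np.random.default_rng(2)
t = np.linspace(0, 3, 1500)
good = np.exp(-t / 0.3) - np.exp(-t / 0.02)
train = good * rng.uniform(0.5, 2.0, (200, 1)) + 0.01 * rng.normal(size=(200, 1500))
print(pca_reconstruction_error(train).mean())   # small for signal-like pulses
```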
PCA Improvements

We improve several aspects of the PCA cut compared to the previous implementation [50]: we utilize a cleaner training sample, perform the normalization on a run-by-run basis, and correct for the energy dependence of the MAD. Abnormal pulses in the training sample result in distortions to all principal components, leading to degraded performance in both efficiency and rejection power. To mitigate this we use a stricter selection cut, requiring that the pretrigger baseline RMS not be identically zero (indicative of digitizer saturation and subsequent baseline jumps), and that a simple pulse counting algorithm identify no more than one pulse on the LMO waveform and the primary LD in the event window. This cleaner training sample allows us to utilize higher numbers of principal components without sacrificing efficiency. By performing the normalization of RE on a run-by-run basis, as opposed to whole-dataset, the fit for the linear component better reflects changes in RE that may arise from variations in noise. To correct for the energy dependence of the MAD, we require the aggregate statistics of a whole dataset: we perform a linear regression in energy and compute the average MAD of the ensemble. We then use the ratio of the linear regression function and the ensemble-average MAD as a correction to the individual channel MAD values, providing a proxy for a channel-dependent energy scaling of the MAD. We examine the overall efficiency, the impact on the 2615 keV peak resolution, and the optimization of the median discovery significance, as suggested by Cowan et al. [52], as functions of the number of PCA components and the cut threshold. From this we choose to utilize the first 6 leading components of the PCA for this portion of the PSD cut. As seen in Fig. 5, the quantity NRE has no energy dependence and is able to reject obvious abnormal pulses.

PSD Enhancements

To finalize the PSD cut we utilize two additional parameters developed in previous CUORE analyses [53]. These parameters are computed on the optimally filtered pulse itself and are measures of the goodness of fit on the left/right side of the filtered pulse; they are referred to as test-value-left and test-value-right (TVL and TVR), respectively. These χ²-like quantities are normalized via empirical fits of their median and MAD energy dependencies using events between 500-2600 keV. As these quantities are computed on the filtered pulses, they provide an additional proxy to detect subtle pulse-shape deviations and a complementary way to reject pileup events, especially for noisy events [54]. We observe that some pileup events still leak through the six-component PCA cut alone, primarily pileup with a short separation in which the earlier pulse has a small amplitude relative to the "primary" pulse. Energy-independent cuts on TVL and TVR are able to remove a large portion of these with negligible loss of efficiency. The discrimination power of these two cuts arises from the fact that they are derived on the optimally filtered waveforms: they are sensitive to pileup in a fashion that the PCA is not and, owing to the better signal-to-noise ratio, tend to reject small-scale pileup events to which the PCA cut is insensitive. We combine the various pulse-shape cuts to form the final PSD cut by requiring that the absolute value of the normalized reconstruction error be less than 9, and that the absolute values of the normalized TVR and TVL quantities each be less than 10. The resulting cut maintains an efficiency comparable to the previous analysis (see section 12) while rejecting more types of abnormal events.

Anti-Coincidence

Due to the short range of the ¹⁰⁰Mo ββ electrons in LMO (up to a few mm [55]), 0νββ decay events would primarily be contained within a single crystal. A powerful tool to reduce backgrounds is therefore to remove events with simultaneous energy deposits in multiple LMO crystals.
It is also useful to classify multi-crystal events for a background model and other analyses (e.g., transitions to excited states). We define the multiplicity, M, of an event as the total number of coincident crystals with an energy above 40 keV in a pre-determined time window. This requires measuring the relative times of events across different crystals. Previously we utilized a very conservative window of ±100 ms which, due to the relatively fast 2νββ decay rate in ¹⁰⁰Mo of ∼7 × 10¹⁸ yr half-life, or ∼2 mHz in a 0.2 kg ¹⁰⁰Mo-enriched LMO crystal [56], leads to ∼2% of single-crystal (M₁) events being accidentally tagged as two-crystal (M₂) events. This results in a slight pollution of the M₂ energy spectrum with random coincidences, as events that should be M₁ are incorrectly tagged as M₂. The channel-channel time offset correction described in section 4.4 substantially narrows the Δt distribution among channel pairs, allowing a much shorter time window to be used (see Fig. 4). For this analysis we choose a coincidence window of 10 ms, which reduces the dead time due to accidental tagging of M₁ events as M₂ by a factor of ∼10, while also producing a purer M₂ spectrum. The anti-coincidence (AC) cut then ensures we only examine single-crystal events.

Light Yield

The LDs are the primary tool we use in CUPID-Mo to distinguish α from β/γ particles in order to reduce degraded-α backgrounds. Using the detected LD signal relative to the energy deposited in the LMO detector, we are able to separate α's from β/γ events, as the former have ∼20% of the light yield of the latter for the same heat energy release. Previously, we exploited the LD information by using a resolution-weighted summed quantity and a direct difference to select events with light signals consistent with β/γ's [19]. In this analysis we modify the light cuts to utilize the correlation between the two LDs associated with an LMO detector more directly. To account for the energy dependence of the light cut, we model the light band mean and width: we divide the light band into slices in energy for each channel and dataset; for each slice we perform a Gaussian fit of the LD energies to determine the mean and resolution; we then fit the means to a second-order polynomial in energy and the resolutions to a smooth function of energy, σ_L(E). This determines the best estimate of the expected LD energy for a given heat energy. We define the normalized LD energy for LMO detector i in dataset d as

L̃_{j,i,d} = (L_{j,i,d} − μ_{j,i,d}(E)) / σ_{j,i,d}(E),

where j is the LD neighbor index, L_{j,i,d} is the measured LD energy, μ_{j,i,d}(E) is the expected LD energy, and σ_{j,i,d}(E) is the expected width of the light band.
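A sketch of the normalized LD energy just defined, together with the radial light-distance combination introduced just below. The second-order polynomial mean follows the text, while the functional form of the width model is an assumption (σ_L(E) = √(p₀ + p₁E)), as are all numerical band parameters.

```python
import numpy as np

def normalized_light(ld_energy, heat_energy, mean_poly, width_pars):
    """Normalized LD energy: (measured - expected) / expected band width."""
    mu = np.polyval(mean_poly, heat_energy)                 # 2nd-order mean
    sigma = np.sqrt(width_pars[0] + width_pars[1] * heat_energy)  # assumed form
    return (ld_energy - mu) / sigma

def light_distance(l0, l1):
    """Radial distance in the plane of the two normalized LD energies;
    events with D < 4 (~3.5 sigma coverage) are kept as beta/gamma."""
    return np.sqrt(l0 ** 2 + l1 ** 2)

# toy usage with hypothetical band parameters for a 3034 keV heat deposit
heat = 3034.0
mean_poly = (0.0, 0.75e-3, 0.0)        # expected LD energy (keV) vs heat (keV)
l0 = normalized_light(2.4, heat, mean_poly, (0.01, 1e-7))
l1 = normalized_light(2.1, heat, mean_poly, (0.01, 1e-7))
print(light_distance(l0, l1) < 4.0)    # True: passes the beta/gamma cut
```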
This procedure explicitly removes the energy dependence, and we note that L̃_{j,i,d} follows a normal distribution. We expect signal-like β/γ events to have similar energies on both LDs [41]. We observe background events where the total light energy is consistent with β/γ signal events but the individual LD energies are very different. This can happen due to surface events where a nuclear recoil deposits some energy onto only one LD (see [31]), or due to contamination on the LDs themselves. To remove these background-like events we exploit the full information of the two LDs by making a two-dimensional light cut. In particular, we expect the joint distribution of L̃_{0,i,d} and L̃_{1,i,d} to be a bivariate Gaussian. This is also observed in data, with minimal correlations between the two normalized LD energies; thus a simple radial cut can be defined by computing the normalized light distance, D_{L,i}:

D_{L,i} = √( L̃²_{0,i,d} + L̃²_{1,i,d} ).

For channels which do not have two LDs we instead make a simple cut on the single normalized light energy which is available. We chose a cut of D_L < 4 (corresponding to ∼3.5σ equivalent coverage). As shown in Fig. 6, this is sufficient to remove the α background, which is characterized by large negative values of L̃_{j,i,d}.

Fig. 6. Top: the two normalized LD energies with the cut boundary denoted by a solid line; one LD suffers from ⁶⁰Co contamination, resulting in a higher rate of events depositing significant energy in that detector, which are easily rejected with the light cut as well as with an anti-coincidence cut with the specific LD. Events populating the lower-left quadrant are α's from the various detectors and show the hallmark deficiency of scintillation light on both light sensors. Bottom: energy dependence of the normalized light distance, again with the cut boundary denoted by a solid line; α events are clearly well separated from the β/γ's, with a normalized light distance roughly flat in energy. The contamination due to excess ⁶⁰Co on one LD is evident, and a cluster of events at low normalized light distance is present due to a short period of sub-optimal performance of a single LD.

Delayed Coincidences

A significant background for ββ calorimeters can be surface and bulk α activity in the crystals themselves due to natural U/Th radioactivity (see [57] for more details). In particular, because the Q_ββ of ¹⁰⁰Mo (3034 keV) is above most natural γ radioactivity, the only potentially relevant β isotopes are ²⁰⁸Tl, ²¹⁰Tl and ²¹⁴Bi [34]. Given both the low contamination of the CUPID-Mo detectors and the very small branching ratio (∼0.02%), the decay branch through ²¹⁰Tl is expected to be negligible. A common approach is to reject candidate ²⁰⁸Tl events that are preceded by a ²¹²Bi α decay [16,34]. We note that for bulk activity the candidate α is detected with > 99% probability, so it is the efficiency with which these α events pass the analysis cuts that sets this background. For surface events, only ∼50% of α's reconstruct at their Q-value, so a delayed coincidence cut would remove only about ∼50% of surface events (see [16]). In this analysis we use the same energy and time-difference criteria as previously [19]: we reject any candidate ²⁰⁸Tl event that is within 10 half-lives of a ²¹²Bi candidate α event. We note that the CUPID-Mo detector structure, with a reflective foil and Cu holder surrounding each crystal, reduces the effectiveness of this cut for surface events. In a future experiment with an open structure (for example CUPID [39], CROSS [58], or BINGO [59]), the detection of multi-site events may significantly improve this detection probability (and therefore the cut rejection). In addition to this commonly used cut, the extremely low count rate for α's in CUPID-Mo, due to the low contamination [60,61], enables a novel extended delayed-coincidence cut designed to remove potential ²¹⁴Bi induced events. We focus on the lower part of the decay chain:

²²²Rn → ²¹⁸Po → ²¹⁴Pb → ²¹⁴Bi.

We tag the ²¹⁴Bi nuclei based on either the ²²²Rn or the ²¹⁸Po α decay. Compared to ²¹²Bi → ²⁰⁸Tl coincidences, a much larger veto time window is required. We set these time cuts based on a simulation of the time differences between decays, in order to have a 99% probability of the decay being in the selected time range, as shown in Table 1. We veto events where there is an α candidate within [Q − 100, Q + 50] keV and within the time differences of Table 1 in the same LMO detector. This energy range is chosen to fully cover the α Q-value peaks.
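The veto windows of Table 1 can be reproduced in spirit with a small Monte Carlo of the summed exponential decay times down the chain; the half-lives below are textbook values quoted here as assumptions, and the 99% quantile defines the window length.

```python
import numpy as np

RNG = np.random.default_rng(3)
HALF_LIVES_S = {"Po218": 3.1 * 60, "Pb214": 26.8 * 60, "Bi214": 19.9 * 60}

def window_99(chain, n=200_000):
    """99th percentile of the summed decay times from the tagging alpha
    down to the 214Bi decay, i.e. the veto window length in seconds."""
    total = np.zeros(n)
    for iso in chain:
        tau = HALF_LIVES_S[iso] / np.log(2.0)   # mean lifetime from half-life
        total += RNG.exponential(tau, n)
    return np.quantile(total, 0.99)

# veto window after tagging the 218Po alpha (218Po -> 214Pb -> 214Bi)
print(window_99(["Po218", "Pb214", "Bi214"]) / 3600, "h")
# tagging the 222Rn alpha instead adds the 218Po step, lengthening the window
```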
Despite the dead time per event being large, the total dead time is acceptable (< 1%, see section 12) thanks to the low contamination of ²²⁶Ra in the CUPID-Mo detectors. We observe several events with E > 2600 keV that are rejected, while the events removed at lower energy are dominated by accidental coincidences of 2νββ decays.

Muon veto coincidences

We apply an anti-coincidence cut between the LMO detectors and an active muon veto to reject prompt backgrounds from cosmic-ray muons, which may deposit energy in the ROI with a light yield similar to a β/γ. The muon veto system is described in detail in [62]. We utilize the muon veto timestamps to compute an initial set of coincidences between the LMO detectors and the veto system. We observe a clear Δt peak of muon-induced events, which we correct for (see Fig. 7). The muon veto coincidences are then defined using the corrected times with a window of ±5 ms. The relatively small window removes the need to also place a requirement on the number of muon veto panels triggered, maximizing the rejection of background events with minimal impact on livetime.

Energy spectra

After all cuts are tuned on the blinded data, we proceed to compute the cut efficiencies, extract the resolution energy scaling and the energy bias, and define the ROI. The effect of the successive cuts can be seen in Fig. 8. Starting from the base cuts, the application of the PSD cuts produces a spectrum of events originating from real physical interactions with the detector (i.e., devoid of abnormal events). The spectrum is dominated by 2νββ decay from ∼1 MeV up towards Q_ββ, with few events populating the region above. The application of the AC cut removes only a small number of events, as the majority of events are single-crystal interactions. The most significant selection cut is the LY cut, which removes almost all remaining events at high energies where degraded α events may be present.

Efficiencies

In order to compute the cut efficiencies we use three methods that span the distinct types of cuts present in this analysis: noise events for the pileup efficiency; efficiency from γ peaks; and efficiency from the ²¹⁰Po α peak. The pileup efficiency is the probability that an event will not have another pulse in the same time window during which event reconstruction takes place. In addition, we check whether the energy of the noise event is biased by > 20 keV. If either of these two possibilities occurs, we consider the event a pileup. We compute the pileup rejection efficiency as the ratio of the noise events passing the single-trigger criterion and with energy bias inside ±20 keV to the total number of noise events. We present the exposure-weighted average over all datasets in Table 2 and assign a 1% uncertainty to this calculation due to the extrapolation from noise to physics events. We note that this is equivalent to a statistical calculation based on the known trigger rate, but this method averages over varying trigger rates (in time or across channels). The anti-coincidence, delayed coincidence, and muon veto cuts are not expected to have energy-dependent efficiencies and represent detector dead times. For each of these we evaluate the efficiency utilizing events in the ²¹⁰Po α Q-value peak at 5407 keV, as this peak has a very high energy and provides a clean sample of physical events. We extract the efficiency as ε = N_pass/N_total, integrating in a ±50 keV window around the peak; the results are listed in Table 2.
We compute the efficiency of the normalized light distance cut (i.e., the LY cut) and the PSD cut using a new method in this analysis. We fit the γ peaks in the M₁ data, as they provide a clean sample of signal-like events and a more robust population with which to evaluate the efficiency, compared to using all physics events as was done previously [19]. In order to account for background from non-signal-like events around each peak, we fit the distributions of both events passing and failing each cut to a Gaussian-plus-linear model. The efficiency is then given as

ε = N_pass / (N_pass + N_fail),

where N_pass and N_fail are the numbers of events in the Gaussian peak for the passing and failing samples, respectively. We do not expect large variations of the cut efficiency across datasets, and in order to maintain sufficient statistics when using the γ peaks we compute only the global cut efficiencies. We estimate the uncertainty numerically by sampling from the uncertainty on the number of events in the photopeaks from the Gaussian fit. We apply the LY cut in order to obtain a clean sample of events when measuring the PSD efficiency and vice versa, which is possible due to the independence of the heat and normalized light signals. We perform this for each significant peak in the M₁ physics data (excluding the ⁶⁰Co peaks for the LY cut, as they are known to be biased due to a contaminated LD). We fit the efficiency as a function of peak energy to a linear polynomial and observe that the efficiency is consistent with being constant (between 238-2615 keV). We extrapolate to Q_ββ in order to obtain the efficiency of each cut, accounting for any systematic energy dependence. These fits are shown in Fig. 9. We combine the efficiencies reported in Table 2 to determine the overall total analysis efficiency: we sample from the errors on each efficiency (assumed Gaussian) and obtain an estimate of the probability distribution of the total efficiency, from which we extract the analysis cut efficiency with a Gaussian fit as ε = (88.4 ± 1.8)%.

Fig. 9. Efficiency of the PCA cut (top) and of the normalized light distance cut (bottom) obtained from M₁ γ peaks as a function of the peak energy (black points). Each graph is fit to a linear polynomial (red line); the confidence interval of the linear fit is shown in gray.
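A minimal sketch of the sampling procedure used to combine the cut efficiencies into the total analysis efficiency; the individual efficiencies and uncertainties below are hypothetical placeholders for the entries of Table 2.

```python
import numpy as np

def combine_efficiencies(effs, errs, n=100_000, seed=4):
    """Sample each cut efficiency from a Gaussian, multiply the samples,
    and summarize the product distribution by its mean and width."""
    rng = np.random.default_rng(seed)
    samples = np.ones(n)
    for e, s in zip(effs, errs):
        samples *= rng.normal(e, s, n)
    return samples.mean(), samples.std()

mu, sigma = combine_efficiencies(
    effs=[0.995, 0.96, 0.94, 0.99],     # pileup, PSD, light yield, veto cuts
    errs=[0.010, 0.010, 0.015, 0.002])  # hypothetical uncertainties
print(f"total efficiency = {mu:.3f} +/- {sigma:.3f}")
```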
As there is no significant naturally occurring γ peak near Q_ββ, we must extrapolate the resolution as a function of energy, and likewise the energy scale bias. In order to account for variations in the performance and noise of each LMO detector over time, we obtain the energy scale extrapolations on a channel-dataset basis. Due to the excellent radiopurity and the relatively fast 2νββ decay rate, which covers most peaks in the spectrum, we cannot determine this scaling directly from physics data alone. In order to have sufficient statistics, we utilize calibration data to obtain a lineshape from the 2615 keV events, which is then extrapolated to physics data.

Resolution in Calibration Data

As in [19], we perform a simultaneous fit of the 2615 keV peak in calibration data for each dataset. This is an unbinned extended maximum-likelihood fit implemented using RooFit [64]. We model the data in each channel as the sum of a Gaussian peak, a normalized linear background, and a smeared-step function. The fit parameters are: the slope of the linear background; the mean μ_{c,d} of the peak for channel c in dataset d; the corresponding standard deviation σ_{c,d}; the background and smeared-step fractions (shared across all channels); and the number of events in the Gaussian peak for each channel. An example of one of these fits is shown in Fig. 10. We observe in each dataset that the core of the peak is well described by the model, with some distortion in the low-energy tail due to pileup events caused by the high event rate in calibration data. We use the individual channel-dataset widths and means in the physics data extrapolation.

Resolution in Physics Data

In order to reconstruct the resolution in physics data we use a slightly different procedure compared to [19] and [18]: we fit selected peaks with the lineshape model and extract an energy-dependent resolution function from it. In the previous analysis we utilized a simple Gaussian plus linear background for each peak fit on the total summed spectrum and took the ratio, r, of each peak resolution to that of the 2615 keV peak in the calibration summed spectrum. Here we introduce a new exposure-weighted lineshape function:

f(E) = Σ_{c,d} η_{c,d} G(E; μ, r·σ_{c,d}),   (14)

where the summation runs over channels c and datasets d, η_{c,d} is the exposure, G is a Gaussian, μ is the mean of the peak, and r is a ratio scaling the calibration resolutions to physics data. We fit each peak in the physics-data summed spectrum to this lineshape plus a linear background in a binned likelihood fit, with the number of events in the peak, the linear background, μ and r as free parameters. After all peaks in physics data have been fit, we can model the resolution ratio as a function of energy. A typical functional form for the resolution of a calorimeter is

FWHM(E) = √(p₀ + p₁·E),   (15)

where the term p₀ is related to the baseline noise in the detector, while p₁ characterizes stochastic effects that degrade the resolution with increasing energy, as in [31]. We use noise events to constrain the baseline component of the energy resolution: by fitting the distribution of noise events to the same model as the physics peaks, we measure r(0 keV). We fit r(E) for each physics peak, together with the noise point, to Eq. (15), as shown in Fig. 11. As in the previous analysis, we also considered a simple linear model, r(E) = q₀ + q₁·E, for the resolution scaling. Previously there were insufficient statistics in physics data to favor one model over the other; with the two additional datasets the linear model is now disfavored, as had already been seen in calibration data. Using the model of Eq. (15) we extrapolate the ratio at Q_ββ to be r(3034 keV) = 1.126 ± 0.052. This number is then used to scale each of the channel-dataset 2615 keV resolutions from the simultaneous lineshape fit in calibration data to resolutions at Q_ββ in physics data. These extrapolated resolutions are used to compute the containment efficiency (see section 12). The exposure-weighted harmonic mean resolution of the 2615 keV line in calibration data is (6.6 ± 0.1) keV FWHM. We use this to compute the effective resolution in physics data at Q_ββ by scaling by r(3034 keV), obtaining (7.4 ± 0.4) keV FWHM.

Energy Bias

The total effective energy bias is also extracted from the fits performed on physics data described in section 13.2. Using the best-fit peak locations μ̂ from the lineshape fit (Eq. (14)), we fit the residuals μ̂ − μ_lit. as a function of μ_lit. to a second-order polynomial, as shown in Fig. 12; the residual evaluated at Q_ββ then gives an estimate of the energy-scale bias. As in the previous analysis, we find the distribution is well described by this model, and we extract the energy bias at Q_ββ as μ̂ − μ_lit. = (−0.42 ± 0.30) keV.
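A sketch of the two extrapolations just described: a fit of the resolution-scaling ratio to the form of Eq. (15) and a quadratic fit of the peak-position residuals, both evaluated at Q_ββ. All input data points below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

Q_BB = 3034.0  # keV

def res_model(E, p0, p1):
    """Resolution-scaling model of Eq. (15): sqrt(p0 + p1*E)."""
    return np.sqrt(p0 + p1 * E)

# hypothetical peak energies, resolution ratios r(E), and peak residuals
E = np.array([238.0, 609.0, 911.0, 1461.0, 2615.0])
r = np.array([0.80, 0.88, 0.93, 1.00, 1.10])
resid = np.array([0.10, 0.05, -0.02, -0.10, -0.30])   # mu_hat - mu_lit (keV)

pars, _ = curve_fit(res_model, E, r, p0=(0.5, 1e-4))
print("r(Qbb) =", res_model(Q_BB, *pars))             # resolution scaling at Qbb

bias_poly = np.polyfit(E, resid, 2)                   # quadratic bias model
print("bias at Qbb =", np.polyval(bias_poly, Q_BB), "keV")
```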
Model Definition

We use a Bayesian counting analysis to extract a limit on T^0ν_1/2, similar to that in [19]; however, due to significant improvements in the background modelling of the CUPID-Mo data, we modify this analysis. We model our background in the ROI as the sum of an exponential and a flat component,

B(E) = b·Δ·[ p/Δ + (1 − p)·N·e^(−E/τ) ],

where b is the total background index (averaged over the 100 keV blinded region) in counts/(keV·kg·year), Δ is the width of the blinded region (100 keV), τ is the decay constant of the exponential, p is the probability of flat background, and N is a normalization factor for the exponential. We use a counting analysis with three bins, with the expected number of counts in bin i given by

λ_i = Σ_{c,d} [ ε(c,d,i)·N_{c,d}·Γ_0ν + ∫_{E_low(c,d,i)}^{E_high(c,d,i)} B(E) dE ],   N_{c,d} = N_A·η·M_{c,d}/m,

where the sum runs over all channels c and datasets d. Γ_0ν is the 0νββ decay rate, N_A is Avogadro's number, M_{c,d} is the LMO exposure for one channel/dataset (summing to the total LMO exposure), η is the isotopic enrichment, and m is the enriched LMO molecular mass, so that the constant N_{c,d} converts the decay rate into an expected number of 0νββ decay events. ε(c,d,i) is the total 0νββ decay detection efficiency for channel c, dataset d, and bin i: the product of the analysis efficiency (see section 12) and the containment efficiency, i.e. the probability for a 0νββ decay event to have energy in bin i and to be M₁. The expected number of counts is thus the sum of a signal contribution, ε(c,d,i)·N_{c,d}·Γ_0ν, and a background contribution from integrating B(E) between the lower and upper bounds E_low(c,d,i) and E_high(c,d,i) of bin i. The three bins represent lower/upper sidebands to constrain the background, and a signal region. The energy ranges of the signal region are chosen on a channel-dataset basis (see section 14.2), and the remaining energies of the 100 keV fit region form the sidebands. The efficiencies ε(c,d,i) are defined for each detector-dataset from Monte Carlo (MC) simulations, accounting for the energy resolution and its uncertainty. Our likelihood is then a binned Poisson likelihood over the three bins:

L = Π_{i=1}^{3} Pois(n_i | λ_i).

We simultaneously minimize and sample from the joint posterior distribution using the Bayesian Analysis Toolkit (BAT) [65]. Our model parameters are: b, the background index; p, the probability of flat background; τ, the exponential background decay constant; and Γ_0ν, the 0νββ decay rate. We also include systematic uncertainties as nuisance parameters, as described in section 14.5.

Optimization of the ROI

Due to the different performance of each channel across datasets, we use different ROIs for each. These are optimized using blinded data to maximize the mean expected sensitivity, following the procedure defined in [19]. We optimize the ROI window based on the likelihood ratio L_b(E)/L_s(E), where L_b(E) is the probability that an event at energy E in channel c and dataset d is background, and L_s(E) is the same for signal. We divide the energy range 2984-3084 keV into 0.1 keV bins for each channel-dataset, from which we extract the containment efficiency and the estimated background, and rank these bins via the likelihood ratio, where the background index is assumed to be constant at 5 × 10⁻³ counts/(keV·kg·year) (in the previous analysis we found this assumption does not significantly impact the results [19]).
Optimization of the ROI

Due to the different performance of each channel across datasets we use different ROIs for each. These are optimized using blinded data to maximize the mean expected sensitivity using the same procedure defined in [19]. We optimize the ROI window based on the likelihood ratio defined as:

\[ R(E) = \frac{\mathcal{L}_s(E)}{\mathcal{L}_b(E)}, \]

where \(\mathcal{L}_b(E)\) is the probability that an event at energy \(E\) in channel \(c\) and dataset \(d\) is background, and \(\mathcal{L}_s(E)\) is the same for signal. We divide the energy into 0.1 keV bins between 2984-3084 keV for each channel-dataset, from which we extract the containment efficiency and estimated background, and we rank these bins via the likelihood ratio, where the background index is assumed to be constant at 5 × 10⁻³ counts/(keV · kg · year) (in the previous analysis we found this assumption does not significantly impact the results [19]). We then optimize the choice of the maximum allowed likelihood ratio to include by maximizing the mean limit-setting sensitivity of a Poisson counting analysis:

\[ \bar{S} = \sum_{n=0}^{\infty} P(n)\, S(n), \]

with the limit \(S(n)\) of 2.3 counts in the case of zero events, 3.9 for one event, etc., and \(P(n)\) the probability of observing \(n\) counts based on the expected background rate. The chosen channel-dataset based ROIs are shown in Fig. 13, with an exposure-weighted effective ROI width of (17.1 ± 4.5) keV, corresponding to (2.3 ± 0.6) FWHM at \(Q_{\beta\beta}\).

Containment Efficiency

Once the channel-dataset based ROIs have been chosen we can compute the containment efficiency for each channel and dataset pair. This efficiency is evaluated using Geant4 MC simulations, accounting for the energy resolutions extracted in section 13. The average containment efficiency is (75.9 ± 1.1)%. To estimate the systematic uncertainty from the MC simulations we vary the simulated crystal dimensions and Geant4 production cuts, resulting in a 1.5% relative uncertainty.

Extraction of the Background Prior

The most significant prior probabilities in our analysis are for the signal rate \(\Gamma_{0\nu}\) and the background index \(B\). Due to the very low CUPID-Mo backgrounds and a relatively small exposure, data around the ROI does not constrain \(B\) well. However, detailed Geant4 modelling does provide a measurement of the background averaged over our 100 keV blinded region (a forthcoming publication on the background modelling is in preparation). This fit models our experimental data in bin \(i\) as (in units of counts/keV):

\[ f_i = \sum_j C_j\, N^{\rm MC}_{j,i}, \]

where the sum is over all simulated MC contributions \(j\), \(N^{\rm MC}_{j,i}\) is the number of events in the simulated MC spectrum \(j\) and bin \(i\), and \(C_j\) is a factor we obtain from the fit. This fit is performed using a Bayesian fit based on JAGS [66,67], similar to [68,69]. It estimates the joint posterior distribution of the parameters \(C_j\), and we sample from this distribution at each step in the Markov chain, computing the background index averaged over the blinded region. From the marginalized posterior distribution of the observable background index we obtain the value used as a prior in our Bayesian fit, with a split-Gaussian distribution: two Gaussian distributions with the same mode are combined such that values on either side of the mode have different variances. We have found that in the case of observing zero events, this prior does not change the observed limit. However, if some events are observed, this is a more conservative choice than a non-informative flat prior, since it prevents the background index from floating to high values that are strongly disfavored by the background model. To extract a prior on the slope of the exponential background, \(\lambda\), we perform a fit of the blinded data (between 2650 to 2980 keV) to a constant-plus-exponential model, as seen in Fig. 14. This results in a best fit of \(\lambda = (65.7 \pm 4.6)\) keV, which is used as a prior in our analysis. The probability of the background being uniform (instead of exponential) is given a uniform prior between [0, 1].

Systematic Uncertainties

We include systematic uncertainties in our Bayesian fit as nuisance parameters; in particular we account for uncertainties in: cut efficiencies; isotopic enrichment; containment efficiency. These are each given a Gaussian prior distribution with the values from sections 12 and 13, as indicated in Table 3. As in [19], these uncertainties are marginalized over and are automatically included in our limit. We note that the systematic uncertainties from the energy bias and resolution scaling are incorporated in the computation of the containment efficiency.
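For reference, the mean limit-setting sensitivity used as the figure of merit in the ROI optimization of section 14.2 can be computed directly. A minimal sketch, where the background index is the constant assumed in the ranking and the b = 0 Poisson limits reproduce the 2.3/3.9 counts quoted above:

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def poisson_limit(n, cl=0.90):
    # Upper limit S such that P(observe <= n | S) = 1 - cl (b = 0 case):
    # gives 2.30 for n = 0, 3.89 for n = 1, etc.
    return brentq(lambda s: poisson.cdf(n, s) - (1 - cl), 1e-9, 100)

def mean_sensitivity(b):
    # Expected (mean) limit for background expectation b in the ROI:
    # S_bar = sum_n P(n | b) * S(n); the sum is truncated where P(n|b) ~ 0.
    return sum(poisson.pmf(n, b) * poisson_limit(n) for n in range(0, 30))

# Illustrative numbers from the text: background index 5e-3 counts/(keV kg yr),
# effective ROI width 17.1 keV, exposure 2.71 kg yr.
b = 5e-3 * 17.1 * 2.71
print(f"b = {b:.3f} counts, mean 90% C.L. sensitivity = {mean_sensitivity(b):.2f} counts")
```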
We chose a uniform prior on the rate, \(\Gamma_{0\nu} \in [0, 40 \times 10^{-24}]\ \mathrm{yr}^{-1}\). This is consistent with the standard practice for \(0\nu\beta\beta\) decay analysis [14,17,22]. The range is large enough that it has minimal impact on the possible result, and provides as little information as possible on the rate to avoid possible bias.

Fig. 15 The unblinded background spectrum near the ROI for 2.71 kg×year of data (1.47 kg×year for ¹⁰⁰Mo). After application of all cuts we observe no events in either the ROI or the full 100 keV blinded region. In this work, the event near 3200 keV present in the previous analysis [19] was tagged as coincident with the improved muon veto. The exposure-weighted mean ROI (17.12 keV) is shown with dashed lines, and the full blinded region is within the solid lines.

Results

After unblinding our data, we observe zero events in the channel-dataset ROIs and zero events in the side-bands, as shown in Fig. 15. This leads to an upper limit on the decay rate \(\Gamma_{0\nu}\), including all systematics, of:

\[ \Gamma_{0\nu} < 3.9 \times 10^{-25}\ \mathrm{yr}^{-1}\ \text{(stat.+syst.) at 90\% CI,} \]

or:

\[ T^{0\nu}_{1/2} > 1.8 \times 10^{24}\ \mathrm{yr}\ \text{(stat.+syst.) at 90\% CI.} \]

This limit surpasses our first result of \(T^{0\nu}_{1/2} > 1.5 \times 10^{24}\) yr [19], becoming a new leading limit on \(0\nu\beta\beta\) decay in ¹⁰⁰Mo. The posterior distribution of the decay rate is shown in Fig. 16. We find that this can be fit well by a single exponential, as expected for a background-free measurement. We extract:

\[ P(\Gamma_{0\nu} \mid \text{CUPID-Mo}) \propto e^{-\Gamma_{0\nu}\,\varepsilon}, \quad \text{where } \varepsilon = (6.061 \pm 0.001) \times 10^{24}\ \mathrm{yr}, \qquad (29) \]

and "CUPID-Mo" denotes the CUPID-Mo data. We can also extract the marginalized posterior of the background index; this is mostly consistent with the informative background model prior. Further studies are ongoing to include extra information into the background model fit (i.e. constraints on pileup from simulation or calibration data) to reduce this uncertainty. The posterior distributions for the exponential background parameters are consistent with the priors derived from the fit of the \(2\nu\beta\beta\) decay spectrum in an energy interval between 2650−2980 keV (as done previously [19]).

In order to study the effect of systematics we perform a series of fits allowing only one nuisance parameter to float at a time, with all others fixed to their prior's central value. The nuisance parameters we allow to float are the isotopic abundance, the MC containment efficiency factor, and the analysis efficiency. These are compared against fits with all parameters fixed (i.e., a statistics-only run), and again allowing all parameters to float. For each category of test we run ∼1000 toys, each generating 10⁴ Markov chains. We find that relative to statistics-only runs (i.e., fixing all nuisance parameters), the effect of each nuisance parameter on the marginalized rate is less than 1%. The largest impact originates from the global analysis efficiency, at ∼0.7%. This is not surprising, as the relative uncertainty on the analysis efficiency is high compared to the other parameters.

We interpret the obtained half-life limit on the \(0\nu\beta\beta\) decay in ¹⁰⁰Mo in the framework of light Majorana neutrino exchange. We utilize \(g_A = 1.27\) and phase space factors from [71,72]. We consider various nuclear matrix elements from [73][74][75][76][77][78][79][80]. This results in a limit on the effective Majorana neutrino mass of:

\[ \langle m_{\beta\beta} \rangle < (0.28\text{--}0.49)\ \mathrm{eV}. \]

This result improves upon the previous constraint by virtue of an increased ¹⁰⁰Mo exposure in the new processing, and is set with a very modest exposure of 1.47 kg×year of ¹⁰⁰Mo. This is seen in Fig. 17, which shows this result in the context of other experiments, indicating the promise of utilizing ¹⁰⁰Mo as a \(0\nu\beta\beta\) decay search isotope.
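The quoted half-life limit can be reproduced directly from the exponential posterior of Eq. (29). A minimal arithmetic sketch, assuming the flat rate prior above so that the 90% credible upper limit is the 90% quantile of the exponential:

```python
import math

# Posterior extracted in the text: p(Gamma) ~ exp(-Gamma * eps), Eq. (29).
eps = 6.061e24  # yr

# 90% credible upper limit of an exponential posterior:
# integral_0^G p dGamma = 0.90  ->  G_90 = -ln(0.10) / eps
gamma_90 = -math.log(0.10) / eps
t_half_90 = math.log(2) / gamma_90

print(f"Gamma_90 = {gamma_90:.2e} /yr")   # ~3.8e-25 /yr
print(f"T_1/2    = {t_half_90:.2e} yr")   # ~1.8e24 yr, matching the quoted limit
```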
Conclusions

In this work, we implemented refined data production and analysis techniques with respect to the previous result [19]. We report a final \(0\nu\beta\beta\) decay half-life limit of \(T^{0\nu}_{1/2} > 1.8 \times 10^{24}\) yr (stat.+syst.) at 90% CI with a relatively modest exposure of 2.71 kg×year (1.47 kg×year in ¹⁰⁰Mo), with a resulting limit on the effective Majorana mass of \(\langle m_{\beta\beta}\rangle < (0.28\text{--}0.49)\) eV. We show that an iterative channel-channel time offset correction is feasible and significantly improves the ability to tag multiple-crystal events while reducing accidental coincidences. This results in a highly efficient single-scatter cut and purer higher-multiplicity spectra, which is useful for analyses such as decay to excited states and the development of a background model. We have also shown an improved method for particle identification, utilizing normalized light energy quantities derived from the absolute LD calibration. This allows for an improvement in the rejection of \(\alpha\) events with a high efficiency and a relatively conservative cut. The pulse shape discrimination is improved via a cleaner training sample, run-by-run normalization and a full energy-dependence correction. It is further enhanced by the combination of pulse shape parameters derived from the optimally filtered waveform. Further improvements may be possible with better-tuned pulse templates and a multivariate discrimination using portions of the waveform to allow for even more pileup rejection. Finally, the very low contamination of the LMO detectors also allows for the implementation of extended delayed coincidence cuts to reject not just ²¹²Bi-²⁰⁸Tl decay chain events, but also ²²²Rn-²¹⁴Bi and ²¹⁸Po-²¹⁴Bi decay chain events, allowing for the reduction of the background in the high-energy region. This type of cut in particular may be especially useful for a larger-scale experiment such as CUPID [39], due to the ability to remove potentially dangerous events. The result of these enhanced analysis steps produces a total analysis efficiency of (88.4 ± 1.8)%, or, combined with the containment efficiency, a total \(0\nu\beta\beta\) decay efficiency of (67.1 ± 1.7)%. This high total efficiency, along with the low background index and the excellent energy resolution at \(Q_{\beta\beta}\) of (7.4 ± 0.4) keV FWHM, shows that scintillating Li₂¹⁰⁰MoO₄ crystals coupled to complementary LDs are entirely feasible for a larger experiment such as CUPID. Analysis techniques developed here can be easily applied to larger datasets.

The CUPID-Mo data can be used to extract other physics results. The analysis techniques described here have been used for an analysis of decays to excited states (publication forthcoming). Other foreseen analyses include spin-dependent low-mass dark matter searches via interaction with ⁷Li [63,81] in the Li₂MoO₄, and axion searches [82]. CUPID-Mo has succeeded in demonstrating the feasibility of scintillating calorimeters for use in \(0\nu\beta\beta\) decay searches, having demonstrated that backgrounds from \(\alpha\)'s can be easily rejected via scintillation light, and that pulse-shape rejection techniques can be utilized with high efficiency.

Russian and Ukrainian scientists have given and give crucial contributions to CUPID-Mo. For this reason, the CUPID-Mo collaboration is particularly sensitive to the current situation in Ukraine. The position of the collaboration leadership on this matter, approved by majority, is expressed at https://cupid-mo.mit.edu/collaboration#statement. The majority of the work described here was completed before February 24, 2022.
Response of microchip solid-state laser to external frequency-shifted feedback and its applications

The response of the microchip solid-state Nd:YAG laser, which is subjected to external frequency-shifted feedback, is experimentally and theoretically analysed. The continuous weak response of the laser to the phase and amplitude of the feedback light is achieved by controlling the feedback power level, and this system can be used to achieve contact-free measurement of displacement, vibration, liquid evaporation and thermal expansion with nanometre accuracy in common room conditions without precise environmental control. Furthermore, a strong response, including chaotic harmonic and parametric oscillation, is observed, and the spectrum of this response, as examined against a frequency-stabilised Nd:YAG laser, indicates laser spectral linewidth broadening.

Laser feedback, also called self-mixing interference, is a commonly observed phenomenon that can easily be caused by any surface in the light path of optical systems. Initially, laser feedback was regarded as severely detrimental to laser performance, inducing instability in the laser's power and frequency [1,2], as well as strong quantum noise [3-5] and coherence collapse [6-8]. Thus, significant attention was placed on eliminating external feedback. However, the fruitful phenomena in a laser diode subjected to external optical feedback attracted increasing interest, indicating the possibility of actively using laser feedback to improve laser performance to suit practical demands. Sacher and co-workers [6] analysed the coherence collapse in a diode laser with optical feedback and were able to narrow the laser spectral linewidth. Utilising orthogonal-polarisation optical feedback, GHz pulses of a single-mode diode laser were produced by Cheng [9]. In addition, polarisation switching and optical bistable states have been widely investigated [10,11]. A common view about laser frequency-shifted feedback is that the laser resonator provides gain amplification of the external feedback power. This amplification depends on the ratio γc/γ1, where γc and γ1 are, respectively, the damping rates of the laser cavity and the population inversion (i.e., the inverses of the photon lifetime and of the laser upper-level lifetime). For a laser diode, the typical value of this ratio is approximately 10³ [12]. Benefiting from this amplification of ~10³, the weak feedback power reflected/scattered by a diffusing or absorbing surface of a measured target can be amplified by three orders of magnitude in the laser resonator, and thereby detectable intensity modulation signals can be generated. From these intensity signals, the movement information of the measured target can be recovered. Thus, the gain amplification makes the laser diode sensitive to external feedback power and suitable for some cases of contact-free measurement (non-cooperative targets with diffusing or absorbing surfaces). For microchip solid-state lasers, this ratio reaches as high as ~10⁶ [12]. Consequently, these lasers can sense extremely weak feedback power, as low as 10⁻¹², which permits broad application in fields such as tomography [12,13], microscopy [14], interferometry [15], vibrometry [16], and velocimetry [17]. However, it is not yet clear up to what level of feedback power the laser can be used for heterodyne phase measurement.
Additionally, this large amplification of ~10⁶ will result in a strong response of the laser to the external feedback power even when that power is very weak. Some phenomena reported in diode lasers should also be observed in solid-state lasers, such as spectral linewidth broadening, chaotic spiking oscillation, and multiple stable states. We have previously reported on the frequency characteristics of an Nd:YAG laser subjected to frequency-shifted feedback [18]. Only the frequency distribution in the low-frequency region (i.e., in the frequency range around the laser's own relaxation oscillation frequency) was observed, and laser intensity information was not collected. In addition, the optical spectrum was not examined against a frequency-stabilised laser; thus, detailed spectrum information could not be obtained. Accordingly, it was difficult for us to judge whether the spectrum is broadened by the strong response of the laser to external feedback. Moreover, the principle of a contact-free measurement scheme and its applications were not presented.

In this report, we present the weak and strong response of the microchip solid-state Nd:YAG laser to external frequency-shifted optical feedback. For very weak feedback, the response of the laser exhibits continuous intensity modulation in amplitude and phase. From this intensity modulation, the movement information of a measured target can be extracted with high sensitivity, resolution and accuracy by heterodyne phase measurement. Based on this principle, a laser feedback interferometer is designed and developed to meet the demand of non-cooperative displacement/vibration measurement for all types of material surfaces, such as blackened aluminium, liquids, piezoelectric material, and even heated iron at high temperatures. In addition to the weak response, with the increase of the feedback level the laser displays a strong response to external feedback, which results in harmonic and parametric oscillations in the frequency domain and corresponding abrupt changes in laser intensity and phase in the time domain. Through comparison with a frequency-stabilised Nd:YAG laser, the spectrum is observed to be broadened.

Results

The experimental setup employed in this work is illustrated in Fig. 1. A ⌀5 mm × 1 mm Nd:YAG crystal microchip, with both surfaces coated to be highly reflective (R1 = 99.8%, R2 = 99%) at the lasing wavelength of 1.064 μm, was employed to form a plane-parallel Fabry-Perot resonator. A fibre-coupled single-mode laser diode (not shown in Fig. 1) with a narrow linewidth (< 0.1 nm) served as the pump source, whose output was focused on the Nd:YAG crystal using a GRIN lens. The pump end of the Nd:YAG crystal was coated to be highly transmissive at the pump wavelength of 808 nm. The lasing threshold was P_th = 10 mW. During the entire experiments, this microchip Nd:YAG laser worked in a linearly polarised TEM₀₀ transverse mode. The path between the microchip laser and the external feedback mirror M is the so-called external feedback cavity, or external cavity. A variable attenuator ATT was inserted into the external cavity to modify the feedback level. Here, the feedback level k is defined as the ratio of the feedback power to the laser output power. The laser output is divided into two parts by the beam splitter BS1. The transmitted part is frequency-shifted by a pair of acousto-optic modulators (AOM1 and AOM2; the central frequencies are Ω1 = 70.04 MHz and Ω2 = 70.08 MHz, respectively).
By carefully aligning the AOMs and the external feedback mirror M, we shift the optical carrier frequency by an amount Ω = 2 × (70.08 − 70.04) MHz = 80 kHz after a roundtrip of the laser beam. The reflected beam is again separated into two parts by the beam splitter BS3. One part is detected by a photoreceiver (New Focus Model 1592, 3.5 GHz) accompanied by an oscilloscope to capture the laser intensity modulation and its power spectrum. The other part is combined with a frequency-stabilised Nd:YAG laser (Innolight M500, linewidth 100 kHz). The beat signal is sent to another photoreceiver (New Focus Model 1592, 3.5 GHz) followed by a spectrum analyser (Agilent N9020B) to capture the spectrum distribution of the beat signal. Actually, after a roundtrip in the external cavity, four beams can return to the laser resonator, whose frequency shifts are Ω, Ω/2, Ω1, and Ω2. Owing to the limited bandwidth of the relaxation oscillation [15], the microchip laser cannot respond to feedback light whose frequency shift is much larger than the relaxation oscillation frequency, f_R = 290 kHz at the pump level P/P_th = 2. Consequently, the two beams with frequency shifts Ω1 and Ω2 can be omitted. In Fig. 1, the red line with the heavy arrow represents the feedback light with the frequency shift Ω, and the blue line with the light arrow represents the feedback light with the frequency shift Ω/2.

Weak response to external frequency-shifted feedback.

By setting a very weak feedback level (k ≈ 10⁻⁶), the microchip Nd:YAG laser exhibits a weak response to external feedback, as illustrated in Fig. 2 (a). Fig. 2 (a1) shows the laser intensity, and Fig. 2 (a2) the corresponding power spectrum. I_f represents the laser intensity with feedback, and I_0 the laser intensity without feedback. The laser intensity is sinusoidally modulated at a frequency of Ω = 80 kHz, which is the shifted frequency of the light after one roundtrip propagation in the external cavity. The insert shows a temporal zoom of the laser intensity. The corresponding power spectrum (observed via the fast Fourier transform function of the oscilloscope) displays two oscillating frequencies, i.e., Ω = 80 kHz and f_R = 290 kHz. The signal envelope of approximately 5 kHz corresponds to the noise of the photoreceiver. By increasing the feedback level slightly (k ≈ 2 × 10⁻⁶), the expected intensity increment occurs: the intensity modulation depth in Fig. 2 (b1) is twice that in Fig. 2 (a1). Although the power spectrum exhibits slight harmonic oscillations at 2Ω, 3Ω and so on, the laser intensity remains sinusoidally modulated. The results are quite different from those in Fig. 9, where the strong feedback causes large resonant quantum noise of −70 dB and multiple solutions for the laser frequency; there, the laser frequency skips the unstable region of multiple solutions and jumps down to the neighbouring stable region. For the weak level, there is only one solution, and the laser is continuously modulated by the external feedback. We will explain this phenomenon in detail in the discussion section. Here, the intensity modulation depth increases proportionally with the feedback level k, which is why this response is called the weak (linear) response.
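The frequency bookkeeping above can be checked with a few lines. A minimal sketch, where the "10 × f_R" cutoff is only an illustrative stand-in for the laser's finite response bandwidth:

```python
# Differential frequency shifting through the AOM pair (values from the text).
f_aom1, f_aom2 = 70.04e6, 70.08e6    # AOM drive frequencies [Hz]
f_relax = 290e3                      # relaxation oscillation frequency at P/P_th = 2

# -1 order at AOM1 and +1 order at AOM2 give a net one-pass shift of
# (f_aom2 - f_aom1); the retro-reflected beam passes both AOMs again,
# doubling the shift for the measurement arm.
omega = 2 * (f_aom2 - f_aom1)
print(f"round-trip shift Omega = {omega / 1e3:.0f} kHz")   # 80 kHz

# Of the four returning beams, only shifts within the laser's response
# bandwidth produce detectable modulation; 10*f_R is an illustrative cutoff.
for shift in (omega, omega / 2, f_aom1, f_aom2):
    print(f"shift {shift / 1e3:10.1f} kHz -> detectable: {shift < 10 * f_relax}")
```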
When the laser is subjected to external frequency-shifted feedback at a weak level (k < 10⁻⁴), the laser output exhibits a weak (linear) response to the feedback field: the intensity modulation amplitude increases linearly with the feedback level, and the phase of the light is continuously modulated without abrupt changes. If the feedback mirror M in Fig. 1 is replaced by a moving target, its movement information will be loaded as a continuous phase change on the Ω-modulated laser intensity. Using heterodyne phase measurement, we can obtain the movement information of the target. In addition, the weak response holds at weak feedback levels (k < 10⁻⁴), which makes it appropriate for the contact-free measurement of materials with low or even extremely low reflectivity. Based on this character, a scheme of non-cooperative measurement is developed in the later sections to achieve high resolution, high sensitivity, and environmental robustness.

Contact-free measurement scheme with high accuracy, sensitivity and environmental robustness.

Based on the weak response of the laser subjected to a weak feedback level, a scheme of contact-free measurement is developed to achieve high accuracy, high sensitivity and environmental robustness, as illustrated in Fig. 3. Except for the reference mirror M_r, most of the components used in Fig. 3 are the same as those presented in Fig. 1. The laser beam (frequency ω) in turn passes through the two AOMs. Then, after AOM2, one of the resultant beams (frequency ω + Ω/2), denoted by the red line with the heavy arrow in Fig. 3, serves as the probing beam and impinges on the measured target (M). The reflection from the target then returns to the microchip laser resonator along its incoming path, forming the feedback measurement arm (frequency ω + Ω). The other part, denoted by the blue line with the light arrow, remains at ω because it is not diffracted. This part is reflected by M_r back to the resonator along the path of the measurement arm, forming the feedback reference arm (frequency ω + Ω/2). Thus, the total paths of the measurement and reference arms, L_m and L_r, can be written as

\[ L_m = L_0 + L_{d1} + D_0 + \Delta L, \qquad L_r = L_0 + L_{d0}, \]

where L_0 is the light path between the laser and AOM1; L_d1 is the light path of the diffracted beam between AOM1 and M_r; L_d0 is the light path of the un-diffracted beam between AOM1 and M_r; D_0 is the light path between M_r and the initial position of M; and ΔL is the displacement to be measured. Therefore, the optical path changes in the arms of measurement and reference are

\[ \Delta L_m = \Delta L_0 + \Delta L_{d1} + \Delta D_0 + \Delta L, \qquad \Delta L_r = \Delta L_0 + \Delta L_{d0}. \]

Because the measurement and reference arms are close enough to be subjected to the same environmental disturbance, the optical path changes induced by the environment can be assumed to be the same, i.e., ΔL_d1 ≈ ΔL_d0. Then we obtain

\[ \Delta L_m - \Delta L_r = \Delta D_0 + \Delta L. \]

Thus, the corresponding phase change can be written as

\[ \Delta\varphi_m - \Delta\varphi_r = 2k(\Delta D_0 + \Delta L), \]

where k is the wave vector of the laser beam. In the measurement, the reference mirror is placed close enough to the initial position of the measured target; therefore, the phase change Δφ_D0 in this path (ΔD_0 ≈ 0) can be omitted. Then we obtain

\[ \Delta\varphi_m - \Delta\varphi_r = 2k\,\Delta L. \qquad (7) \]

This equation clearly indicates that the phase change in the measurement arm contains two parts: one is the real displacement to be measured, and the other is the phase change in the reference arm, which represents the environmental disturbance. Consequently, the difference between the measurement and reference arms truly reflects the movement of the measured target.
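Eq. (7) is recovered in practice by demodulating the phase of each carrier. A minimal digital I/Q sketch — the instrument itself uses a dedicated phase meter, and the filter and sampling choices here are illustrative:

```python
import numpy as np

def demodulate_phase(signal, f_carrier, fs, n_taps=1025):
    """Recover the slowly varying phase of a cosine carrier by I/Q demodulation.

    A boxcar low-pass is used for brevity; the actual instrument uses a
    dedicated dual-channel phase meter rather than this digital sketch.
    """
    t = np.arange(len(signal)) / fs
    i_comp = signal * np.cos(2 * np.pi * f_carrier * t)
    q_comp = -signal * np.sin(2 * np.pi * f_carrier * t)
    kernel = np.ones(n_taps) / n_taps
    i_lp = np.convolve(i_comp, kernel, mode="same")
    q_lp = np.convolve(q_comp, kernel, mode="same")
    return np.unwrap(np.arctan2(q_lp, i_lp))

# Synthetic test: 80 kHz measurement carrier, displacement encoded as
# phi = 2*k*dL (Eq. 7), with lambda = 1.064 um.
fs, lam = 10e6, 1.064e-6
t = np.arange(int(0.04 * fs)) / fs
dL = 20e-9 * np.sin(2 * np.pi * 50 * t)            # 20 nm, 50 Hz vibration
meas = np.cos(2 * np.pi * 80e3 * t + 2 * (2 * np.pi / lam) * dL)

phase = demodulate_phase(meas, 80e3, fs)
dL_rec = phase * lam / (4 * np.pi)
core = dL_rec[5000:-5000]                          # drop filter edge effects
print(f"recovered amplitude ~ {np.ptp(core) / 2 * 1e9:.0f} nm")   # ~20 nm
```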
This scheme compensates for the environmental disturbance between the laser and the reference mirror M_r. Thus, in practical use, it is important to place the reference mirror M_r close enough to the measured target. To obtain the real displacement of the measured target, the phase changes in the measurement and reference arms in Eq. (7) should be recovered from the intensity modulation signals of the laser frequency-shifted feedback system. For weak feedback, when both the frequency-shifted lights at Ω and Ω/2 are reflected/scattered back into the laser resonator, the laser output is modulated at the two frequencies separately, as follows:

\[ \Delta I_m \propto k_m\, G(\Omega)\cos(\Omega t + \varphi_m + \Theta_{0m}), \qquad (8) \]
\[ \Delta I_r \propto k_r\, G(\Omega/2)\cos\!\left(\tfrac{\Omega}{2}t + \varphi_r + \Theta_{0r}\right), \qquad (9) \]

where ΔI is the laser intensity modulation; k is the feedback level; G(Ω) is a frequency-dependent gain amplification factor [12,15-18] that reaches its maximum γc/γ1 when Ω approaches f_R; Ω = 80 kHz is the frequency shift; φ is the phase of the light carrying the information of the measured targets; and Θ₀ is the fixed initial phase. The subscripts m and r represent the measurement and reference arms, respectively. Compared with traditional interference, the intensity modulation has an extra gain factor G(Ω), whose value ranges from 10⁴ to 10⁶. Therefore, even if the reflected/scattered light is extremely weak, such as 10⁻¹², the relative intensity modulation amplitude can reach 100%, which implies high sensitivity to external feedback. This characteristic thus provides the ability for contact-free measurement. Using a heterodyne phase measurement technique, the phase changes Δφ_m and Δφ_r in Eqs. (8) and (9) can be demodulated, respectively. Then, according to Eq. (7), their difference (Δφ_m − Δφ_r = 2k·ΔL) accurately reflects the real movement or displacement (ΔL) of the targets, with an accuracy on the nanometre level under common room conditions without precise temperature control and vibration isolation.

To verify the points discussed above, the scheme is used to directly measure the displacement (movement) of a servo-controlled PZT stage (PI Model P-762) driven by a triangular waveform with an amplitude of approximately 40 nm. The laser beam impinges on the stage's surface, and the reflected/scattered light returns to produce an Ω-frequency-modulated laser intensity. By demodulating the signals from the measurement and reference arms, the displacement (optical path change) is obtained and is shown in Fig. 4. The ΔL_r results indicate that the light path change in the reference arm induced by the environmental disturbance is approximately 40 nm, which is on the same level as the stage's movement amplitude ΔL. Consequently, it is difficult to judge the stage's displacement from ΔL_m, which is submerged in the environmental disturbance ΔL_r. After the compensation of subtracting ΔL_r from ΔL_m, ΔL accurately reflects the movement of the stage with an amplitude of 40 nm. It is concluded that contact-free measurement with nanometre accuracy and environmental robustness can be achieved by a combination of frequency multiplexing (environmental compensation) and frequency-shifted feedback (gain amplification). Based on this principle, a compact laser feedback interferometer was developed, as shown in Fig. 5. To simultaneously demodulate the measurement and reference signals, a commercial phase meter is used, whose effective resolution is about 0.1°, corresponding to a 0.14 nm resolution of displacement. Given the influence of the residual dead path (D_0 in Fig. 3) on the measurement, the effective resolution of displacement measurement is evaluated as about 1 nm.
During the measurement process, the laser beam emitted from the instrument output directly impinges onto the surface of the targets to be measured, and the feedback light then generates the intensity modulation from which information about the targets can be recovered. Note that an attenuator should be added to decrease the feedback power if the reflected or scattered light is too strong. In remote sensing, where the target is several metres from the instrument, a movable reference mirror M_r is placed outside but near the measured target to guarantee high accuracy. Thus, the large dead path between the target and the instrument can be compensated for. Of course, the movable reference mirror M_r is a sophisticated arrangement of mirrors and lenses that ensures the feedback reference arm returns to the resonator parallel to the measurement arm.

Application in vibration measurement.

The hysteretic characteristic of piezoelectric ceramics supplied by PI Company was directly measured using the instrument above. The results are presented in Fig. 6 (left). We also tested the vibration of the piezoelectric ceramics at different frequencies, ranging from 1 Hz to 10 kHz, with different amplitudes from several tens to hundreds of nanometres. A typical result for a vibration test at a frequency of 6 kHz with an amplitude of tens of nanometres is presented in Fig. 6 (right). Piezoelectric thin films were also successfully measured using this instrument, with amplitudes of tens of nanometres, under common room conditions.

Application in liquid evaporation.

The common approaches [19-21] for measuring liquid evaporation involve capacitive sensors, a fibre liquid-level sensor, or laser triangulation. The first two approaches are not convenient because they require contact measurement. The last approach requires frequent calibration depending on the measured targets. We used the instrument presented above to measure the evaporation rates of four types of liquids: (1) water, (2) alcohol, (3) acetone, (4) ether. The measurement laser beam is turned 90° downward onto the surface of the liquid. The fall of the liquid surface is transformed into the optical path change in the measurement arm and is thus detected. The experimental results illustrated in Fig. 7 reveal that after 20 minutes, the levels of the liquids decline as a result of evaporation as follows: water, 41.165 μm; alcohol, 206.098 μm; acetone, 1117.854 μm; and ether, 2818.231 μm. The evaporation rate is expressed as the declined height divided by the time. Therefore, the evaporation rates for these four liquids are as follows: water, 34 nm/s; alcohol, 172 nm/s; acetone, 932 nm/s; and ether, 2349 nm/s. Other experiments indicate that the evaporation rate is higher in the afternoon than in the morning due to the increasing temperature. The evaporation rate decreases when the liquid is kept in a closed container, due to vapour saturation.

Application in thermal expansion of materials at high temperature.

Previous methods [22,23] for thermal expansion measurement include optical interference, X-ray, optical and mechanical lever, and density measurements; most of these methods require preprocessing of the surface or a certain geometric shape of the materials, or need cooperative reflective mirrors. The laser feedback interferometer is expected to solve the above problem.
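Before turning to that experiment, note that the evaporation rates quoted above follow from simple arithmetic on the measured level drop. A minimal sketch, assuming the surface fall equals the optical path change at near-normal incidence in air:

```python
# Evaporation rates from the measured drop in liquid level over 20 minutes
# (values from the text; the fall in surface height is taken as equal to
# the optical path change of the measurement arm, refractive index ~1).
drops_um = {"water": 41.165, "alcohol": 206.098,
            "acetone": 1117.854, "ether": 2818.231}
duration_s = 20 * 60

for liquid, drop in drops_um.items():
    rate_nm_s = drop * 1e3 / duration_s
    print(f"{liquid:8s}: {rate_nm_s:6.0f} nm/s")
# water ~34, alcohol ~172, acetone ~932, ether ~2349 nm/s, as quoted
```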
First, 45# steel is heated in a muffle furnace, whose temperature can be set from room temperature (approximately 20 °C) to 1200 °C. In the front surface of the muffle furnace, a hole is drilled to pass the measurement light. In the experiments, the beam emitted from the laser feedback interferometer passes through the hole into the muffle furnace and impinges on the surface of the 45# steel. The reflected or scattered light then returns into the laser resonator to generate the Ω-frequency modulation in the laser intensity, from which the thermal expansion (optical path change) of the steel at different temperatures can be determined. The change in the optical path (i.e., the displacement between the instrument and the surface of the steel) is used to represent the thermal expansion of the steel. The results measured here can qualitatively reflect the thermal expansion of the steel to some degree. As shown in Fig. 8, the entire heating process lasts approximately 30 minutes, and the optical path change (displacement) between the instrument and the steel is approximately 150 microns. The insert on the left shows the initial state of the 45# steel (not the same sample but the same material), and the insert on the right shows the final state of the heated steel. Clearly, its properties have changed as a result of the chemical reaction. In the experiments, the temperature in the muffle furnace was changed from 20 °C to 500 °C. The reflectivity of the steel surface decreases with increasing temperature due to strong oxidation. Above 500 °C, the reflectivity of the steel is so low that the feedback signal cannot be adequately detected using the present scheme. A lock-in amplifier could possibly be used to further improve the detectability.

Strong response to external frequency-shifted feedback.

When the amount of feedback is increased to 10⁻³ or even higher, a strong response clearly results, as demonstrated in Fig. 9. Both the feedback light frequency-shifted at Ω and that at Ω/2 return to the laser resonator, as shown in Fig. 9 (a); therefore, the intensity is modulated by these two frequencies. Unlike the cases in Fig. 2, spiking pulses superpose on the weak-response signals. Every spiking pulse represents an abrupt change in laser frequency caused by strong frequency-shifted feedback. The power spectrum exhibits strong harmonic (2Ω, 3Ω, 4Ω, …) and parametric (Ω/2 + Ω, Ω/2 + 2Ω, Ω/2 + 3Ω, …) oscillations resulting from these two frequencies. Additionally, the parametric oscillation frequency (280 kHz) resonates with the relaxation oscillation frequency (f_R = 290 kHz). Consequently, compared with Ref. [18], the parametric oscillation peaks related to the relaxation oscillation frequency (f_R + Ω/2, f_R − Ω/2, f_R + Ω, …) vanish; only the harmonic peaks at Ω and the mixed parametric peaks between Ω and Ω/2 survive. Simultaneously, the laser quantum noise (−80 dB) is stronger than that in Fig. 2 (−100 dB). The insert indicates that the amplitude and phase of the laser intensity break abruptly, where the frequency of the laser is thought to jump sharply as well. This result is similar to cases occurring in diode lasers [25]. The laser frequency is modulated by the phase of the external feedback light. At the strong feedback level, the modulation by the external feedback is so large that multiple solutions of the laser frequency co-exist in the laser feedback system [25,26]. However, among these solutions, some are unstable and violate the stability criteria [26].
Consequently, during the modulation, when the laser frequency is tuned into the region of unstable solutions, the frequency suddenly jumps to the stable one in the nearest neighbouring period. A detailed interpretation is provided in the discussion section. In this feedback regime, the intensity modulation depth no longer increases with the feedback level k (> 10⁻³), which is accordingly called the strong (nonlinear) response to external feedback. However, a more complicated spectrum distribution occurs. At a strong feedback level, sharp changes such as spiking pulses in the laser intensity indicate that the regime is not suitable for heterodyne phase measurement. Although an electronic filter could be used to remove the irregular spiking pulses, such a filter causes an extra phase change in the measurement signals, leading to considerable errors in measurements at the nanometre level. However, the rich information in the power spectrum shown in Fig. 9 implies that the frequency of a laser subjected to strong external frequency-shifted feedback is greatly affected. Consequently, in the following section, the optical spectrum distribution is examined using a frequency-stabilised Nd:YAG laser (Innolight M500, linewidth 100 kHz).

Spectrum broadening induced by strong response.

To observe the optical spectrum, the microchip Nd:YAG laser subjected to external frequency-shifted feedback is compared with a frequency-stabilised Nd:YAG laser, whose linewidth is approximately 100 kHz. When the feedback loop is off, the beat frequency is located near 1 GHz, with a linewidth of approximately 3 MHz, as indicated in Fig. 10. Given that the linewidth of the frequency-stabilised Nd:YAG laser is only 100 kHz, we can qualitatively determine that the linewidth of the microchip Nd:YAG laser is at most 3 MHz. When the feedback loop is on, the linewidth of the beat signal is greatly broadened, to about 30 MHz. Thus, the linewidth of the microchip laser is approximately 30 MHz under strong external frequency-shifted feedback. Although it is called strong feedback, the feedback level is only 2 × 10⁻³. If it were increased substantially, the linewidth of the laser would be expected to broaden further, to several hundred MHz. We also verified the case of weak feedback: the linewidth hardly broadens under this condition and thus does not affect the application of frequency-shifted feedback to precision measurement.

Discussion

In the presence of frequency-shifted optical feedback, the dynamic behaviour of a re-injected laser can be described using the modified Lang-Kobayashi equations [16-18], which take the form:

\[
\begin{aligned}
\frac{dE(t)}{dt} &= \frac{1}{2}\big[B N(t) - \gamma_c\big]E(t) + \gamma_c\sqrt{k}\,E(t-\tau)\cos\theta(t),\\
\frac{d\Phi(t)}{dt} &= \omega_c - \omega + \frac{\alpha}{2}\big[B N(t) - \gamma_c\big] - \gamma_c\sqrt{k}\,\frac{E(t-\tau)}{E(t)}\sin\theta(t),\\
\frac{dN(t)}{dt} &= \gamma_1 N_0 - \gamma_1 N(t) - B N(t) E^2(t),
\end{aligned} \qquad (10)
\]

with the feedback phase \(\theta(t) = \Omega t + \omega\tau + \Phi(t) - \Phi(t-\tau)\), where N(t) is the population inversion, E(t) is the amplitude of the laser electric field, Φ is the phase of the light, ω_c is the laser cavity frequency, ω is the running optical laser frequency, Ω is the frequency shift introduced by the AOMs, and τ is the roundtrip time in the external cavity. B is the Einstein coefficient, γ1·N0 is the pumping rate, γ1 is the decay rate of the population inversion, γc is the laser cavity decay rate, k is the effective laser feedback level, and α is the linewidth enhancement factor of the solid-state microchip laser. Solving Eq. (10) above, a transcendental equation for the laser frequency ω (Eq. (11)) and an expression for the modulated laser intensity (Eq. (12)) are obtained. Similar to the cases reported for diode lasers [24], the transcendental Eq. (11) has only one solution for ω when

\[ C \equiv \gamma_c\,\tau\,\sqrt{k}\,\sqrt{\alpha^2 + 1} < 1. \]
This result means that the laser frequency is continuously tuned during the frequency-shifted feedback under this condition, as is the laser intensity. This situation is called weak feedback: the amplitude and phase of the laser intensity are modulated continuously without any abrupt changes, as illustrated in Fig. 2. Therefore, under very weak feedback, the movement information of the targets loaded on the modulated intensity signals can be demodulated to recover the information. However, when C > 1, Eq. (11) has multiple solutions in the region bounded by two special points that have a vertical slope on the curve of ω versus t, as reported in Ref. [25]. This area is well known as the hysteresis region, which is an unstable region. When the laser frequency is increased to one of these points, the frequency skips the unstable region and suddenly drops down to another stable region in the nearest neighbouring period. Correspondingly, the laser intensity also changes abruptly. Accordingly, representative spiking pulses superpose the modulated laser intensity expressed by Eq. (12), as observed in Fig. 9 (a1) and (b1). By straightforwardly solving Eq. (10), the laser intensity |E|² and the corresponding power spectrum can be obtained. Fig. 11 presents the numerical simulation of the intensity and the relevant power spectrum for the laser subjected to external frequency-shifted feedback at weak and strong levels, which agree well with the experimental results in Figs. 2 and 9. The values used for the numerical simulation are summarised below: γc = 6 × 10⁹ s⁻¹, γ1 = 4 × 10³ s⁻¹, α = 2, τ = 10⁻⁸ s, all of which are related to the parameters of the laser resonator and external cavity. Other parameters, such as the Einstein coefficient B and the pumping rate γ1·N0, can be simplified in the arithmetic process. Details about solving the differential equations (10) can be found in Ref. [18].

In a manner similar to laser diodes [27], the linewidth broadening of microchip solid-state lasers in the presence of frequency-shifted feedback can be written as:

\[ \Delta\nu = \frac{\Delta\nu_0}{\big[1 + C\cos(\omega\tau + \arctan\alpha)\big]^2}, \qquad (13) \]

where \(C = \gamma_c\,\tau\,\sqrt{k}\,\sqrt{\alpha^2+1}\), Δν is the broadened laser spectral linewidth, and Δν₀ is the solitary laser spectral linewidth. According to Eq. (13), the broadened spectral linewidth is related to α, the feedback level k, and the roundtrip time in the external cavity. At a weak feedback level (k < 10⁻⁴), there is no evident broadening of the laser linewidth. However, it is clear that the linewidth can be broadened considerably for a large value of C. Although large narrowing of the laser linewidth by three orders of magnitude has been observed in laser diodes [28], we did not observe any significant linewidth narrowing in our experiments. Linewidth broadening with a maximum Δν/Δν₀ ~ 10 occurred when chaotic spiking oscillation at the shifting frequency appeared, as demonstrated in Fig. 10. With a further increase of the feedback level or the shifting frequency, larger broadening of the laser spectral linewidth can be achieved.

In conclusion, we investigated the weak and strong response of lasers to external frequency-shifted feedback. Based on the weak response, a contact-free measurement scheme combining external frequency-shifted feedback and frequency multiplexing has been developed to achieve high sensitivity, high accuracy and high environmental robustness.
Its application to the measurement of displacement, vibration, liquid evaporation, and thermal expansion has also been experimentally demonstrated. Furthermore, spectral linewidth broadening of the order of 10 is observed in the strong-response regime. A theoretical analysis that agrees well with the experiments is also presented.

Methods

Differential frequency-shifting. A pair of AOMs is used in the differential frequency-shifting model. The laser beam (frequency ω) passes through AOM1 (driven at Ω1), whose output contains the 0-order (frequency ω) and −1-order (frequency ω − Ω1) diffracted beams. The distance between the two AOMs is kept short enough to ensure that both outputs from AOM1 pass through the aperture of AOM2 (driven at Ω2). Simultaneously, the incident angle of the laser beam at AOM2 is carefully adjusted to ensure the generation of a +1-order diffracted beam. Then, after AOM2, four laser beams are obtained, whose frequencies are ω, ω − Ω1, ω + Ω2, and ω − (Ω1 − Ω2). Among these, the beams whose frequencies are ω and ω − (Ω1 − Ω2) are selected as the reference and measurement arms, which return to the resonator along the incoming path of the beam of frequency ω − (Ω1 − Ω2). This process is therefore called differential frequency-shifting: the −1-order diffracted beam of AOM1 and the +1-order diffracted beam of AOM2 are combined to obtain the differential effect. The relative position of the AOMs to each other is important in constructing this differential frequency-shifting. The gain amplification G(Ω) is a frequency-dependent factor: the closer Ω approaches f_R, the larger G(Ω) becomes. Thus, in the practical laser feedback interferometer, to obtain a larger gain amplification G(Ω) and thereby sense measured targets with extremely low reflectivity, the frequency shift Ω (typical value 200 kHz) is chosen closer to f_R (290 kHz) than the 80 kHz used in the experiments.

Signal processing and data analysis. Both the measurement and reference signals are filtered and amplified and are then sent to a dual-channel phase meter (Pretech Science, PT-1313B-2D, China) for simultaneous demodulation to obtain the movement information of the measured targets and the environmental disturbance. Afterwards, the data are transmitted to a computer by USB. A graphical interface programmed in LabVIEW (National Instruments, USA) is used to read the data, analyse the data, and display the final results.
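For readers who want to reproduce Fig. 11 qualitatively, Eq. (10) can be integrated as a delay-differential system. The following is a minimal fixed-step Euler sketch, not the authors' solver: the Einstein coefficient, pump level, and the constant ωτ (absorbed into the feedback phase) are illustrative placeholders, and the resulting relaxation frequency will differ from the measured 290 kHz.

```python
import numpy as np

# Parameters from the Discussion section; B, N0 and the simulated span are
# illustrative placeholders (the text notes B and gamma_1*N0 can be rescaled).
gamma_c, gamma_1 = 6e9, 4e3          # cavity / inversion decay rates [1/s]
alpha, tau = 2.0, 1e-8               # linewidth enhancement factor, roundtrip delay [s]
k_fb = 1e-6                          # feedback level (weak-response regime)
Omega = 2 * np.pi * 80e3             # roundtrip frequency shift [rad/s]
B, N0 = 1.0, 2.0 * gamma_c           # Einstein coefficient and pump (P/P_th ~ 2)

dt = 1e-10
n = int(50e-6 / dt)                  # simulate 50 us
nd = int(tau / dt)                   # delay expressed in steps

# Start from the solitary-laser steady state to suppress transients.
N_s = gamma_c / B
E_s = np.sqrt(gamma_1 * (N0 - N_s) / (B * N_s))
E, Phi, Ninv = np.full(n, E_s), np.zeros(n), np.full(n, N_s)

fb = gamma_c * np.sqrt(k_fb)
for i in range(nd, n - 1):
    # omega_c - omega set to zero: phase referenced to the solitary lasing line.
    theta = Omega * i * dt + Phi[i] - Phi[i - nd]
    dE = 0.5 * (B * Ninv[i] - gamma_c) * E[i] + fb * E[i - nd] * np.cos(theta)
    dP = 0.5 * alpha * (B * Ninv[i] - gamma_c) - fb * (E[i - nd] / E[i]) * np.sin(theta)
    dN = gamma_1 * (N0 - Ninv[i]) - B * Ninv[i] * E[i] ** 2
    E[i + 1], Phi[i + 1], Ninv[i + 1] = E[i] + dE * dt, Phi[i] + dP * dt, Ninv[i] + dN * dt

intensity = E ** 2   # its spectrum shows the 80 kHz line (weak response);
                     # raising k_fb toward 1e-3 reproduces the spiking of Fig. 9
```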
Morbidity and Mortality among Patients Referred from District Hospitals to Butare University Teaching Hospital CHUB/OBGYN Department in Post-Partum and Post-Operative Period

ISSN: 2319-7706, Volume 5, Number 7 (2016), pp. 135-147. Journal homepage: http://www.ijcmas.com. Int.J.Curr.Microbiol.App.Sci (2016) 5(7): 135-147.

Obstetrical patients are among referrals from District Hospitals (DHs) to Butare University Teaching Hospital (CHUB)/Obstetrics and Gynecology (OBGYN) department. Most of the time they are referred in very critical conditions. Knowing the main reasons for referrals from district hospitals and their management could help in improving patient care at the level of both settings (CHUB and District Hospitals). A retrospective cohort study was done on 318 women in the post-partum period with obstetrical or pregnancy-related complications referred to the CHUB OBGYN department over a 2-year period (September 1, 2012-August 31, 2014). The aim of this study was to determine the morbidity and mortality amongst patients referred from district hospitals to the CHUB/OBGYN Department in the postpartum period, focusing on reasons for transfer, their management and outcome. The data were processed using Epidata 3.1, Excel and STATA 13 software. Chi-square tests and linear regression were used to compare variables. In total, 318 patients were referred to CHUB/OBGYN during our study period, and the main reasons for referral were post-partum hemorrhage, 32.38% (103/318), followed by post-C/S peritonitis with septic shock, 9.1% (29/318). DVT, post-partum anemia, superficial wound infection, endometritis, preeclampsia and eclampsia were among the less frequent reasons for referral. Amongst patients with post-partum hemorrhage, 10.06% (32/318) underwent abdominal hysterectomy. Overall, 45.2% (144/318) of patients were managed surgically, and 16% (51/318) underwent hysterectomy in their management. The case fatality rate was 6.3% (20/318), and 11.63% (37/318) of patients were discharged with complications. A linear regression established that the time spent at the district hospital prior to referral was significantly associated with poor patient outcome (P<0.01). Patients who were managed with the right diagnosis at admission had significantly less morbidity and mortality, P<0.01. Most referrals from the district hospitals for patients in the post-partum period were for PPH and post-C/S peritonitis, with a case fatality rate of 6.3%. Training of DH staff in emergency obstetric care and in essential surgical skills, reviewing their decision making in cases where a C/S is to be done, and early referral would improve patient care at DHs.

Keywords: Morbidity, Mortality, postpartum, postoperative, reference, District Hospital.

Accepted: 08 June 2016. Available Online: 10 July 2016.
Introduction

Butare University Teaching Hospital (CHUB), as a referral hospital, receives patients from district hospitals mainly from the Southern and Western Provinces in Rwanda. Health care providers at district hospitals refer patients either because they lack skills, facilities, or both to manage a given clinical condition (Dean et al., 2006). This study evaluates referrals in the postpartum and post-operative period. As is known, the burden of maternal ill-health includes not only the levels of maternal mortality and complications during pregnancy and around the time of delivery, but also extends to the standard postpartum period of 42 days, with consequences of obstetric complications and poor management at delivery (WHO/UNICEF/UNFPA/World Bank, 2012). Globally, the total number of maternal deaths decreased from 543 000 in 1990 to 287 000 in 2010. Likewise, the global maternal mortality ratio (MMR) declined from 400 maternal deaths per 100 000 live births in 1990 to 210 in 2010, representing an average annual decline of 3.1 per cent (Ferdous et al., 2012). Throughout history, pregnancy has carried a high risk of death secondary to complications such as obstructed labor, ruptured uterus, postpartum hemorrhage, postpartum infection, hypertensive disease of pregnancy, and complications stemming from unsafe abortion. Maternal mortality is still high: about 800 women die from pregnancy- or childbirth-related complications around the world every day, and in 2010, 287 000 women died during and following pregnancy and childbirth. Almost all of these deaths occurred in low-resource settings, and most could have been prevented (Conde-Agudelo et al., 2004; WHO, 2012). Improving maternal health is one of the eight Millennium Development Goals (MDGs) adopted by the international community in 2000.
Under MDG5, countries committed to reducing maternal mortality by three quarters between 1990 and 2015. Since 1990, maternal deaths worldwide have dropped by 47%. In sub-Saharan Africa, a number of countries have halved their levels of maternal mortality since 1990. In other regions, including Asia and North Africa, even greater headway has been made. However, between 1990 and 2010, the global maternal mortality ratio (i.e., the number of maternal deaths per 100 000 live births) declined by only 3.1% per year. This is far from the annual decline of 5.5% required to achieve MDG5. Women die as a result of complications during and following pregnancy and childbirth. Most of these complications develop during pregnancy. Other complications may exist before pregnancy but are worsened during pregnancy. The major complications that account for 80% of all maternal deaths are: severe bleeding (mostly bleeding after childbirth), infections (usually after childbirth), high blood pressure during pregnancy (pre-eclampsia and eclampsia), and unsafe abortion. The remainder are caused by or associated with diseases such as malaria and AIDS during pregnancy (Conde-Agudelo et al., 2004; WHO, 2012; Rwanda, 2016). Knowing the main reasons for referrals from district hospitals and their management could help in improving patient care at the level of both settings (CHUB and District Hospitals). The hypothesis was that there is a high mortality rate amongst patients referred from District Hospitals to the CHUB/OBGYN Department, and that early diagnosis and early transfer would decrease this mortality rate. The aim of this study was to determine the morbidity and mortality amongst patients referred from district hospitals to the CHUB/OBGYN Department in the postpartum period, focusing on reasons for transfer, their management and outcome.

Materials and Methods

This is a retrospective cohort study of patients referred post-partum from district hospitals to the CHUB/OBGYN department over a period of two years (September 1st, 2012 to August 31st, 2014). The cohort included all patients referred from district hospitals to the CHUB/OBGYN department within 42 days of delivery, regardless of mode of delivery, and includes patients who were at least 24 weeks gestational age prior to delivery. Patients referred from district hospitals for gynecologic indications, all patients who underwent spontaneous or therapeutic abortion, all referred patients who were not in the postpartum period, and all patients who delivered at CHUB were excluded. All files for women in the cohort were reviewed, and de-identified data were extracted using a preset data collection form. The data were processed using Epidata 3.1, Excel and STATA 13 software. Data were double-checked before analysis. Chi-square tests and linear regression were used to compare variables. P < 0.05 was considered significant, and power was 80%. The study protocol was approved by the CHUB Research Committee, and permission from the CHUB authorities was obtained before starting data collection. To ensure confidentiality of the data contained in patient files, the extracted data did not contain patient identifiers such as name, medical record number, date of birth, or address. Data extraction was conducted by trained health professionals (a midwife, an intern, or a medical student) trained in medical research ethics. Our main objective was to determine the morbidity and mortality amongst patients referred from district hospitals to the CHUB/OBGYN Department in the postpartum period.
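The statistical computations named above reduce to standard routines. As an illustration (in Python rather than the STATA used by the study), with the 2×2 table counts as hypothetical placeholders and only the 20/318 case-fatality figure taken from this study:

```python
import numpy as np
from scipy import stats

# Case fatality rate quoted in the study: 20 deaths among 318 referrals.
deaths, n = 20, 318
print(f"CFR = {deaths / n:.1%}")                # 6.3%

# Illustrative chi-square test of association between a binary exposure
# (e.g., ICU admission) and outcome on a hypothetical 2x2 table; the
# counts below are placeholders, not figures from the study.
table = np.array([[9, 17],     # ICU admitted: died / survived
                  [11, 281]])  # not admitted to ICU
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```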
We calculated the mortality of women referred within 42 days postpartum, as well as the percentage of women who had complications, including surgical site infection and repeat laparotomy.

Results and Discussion

Over a period of two years, three hundred and eighteen (318) patients fitting our criteria were admitted from District Hospitals (DHs) and managed at the CHUB/OBGYN department. They came from 15 district hospitals and ranged from the early to the late post-partum period. The mode of delivery was either Cesarean section or vaginal delivery. Their mean age was 29.6 years; the youngest patient was 16 years old and the oldest 65 years. Most were 25-34 years old, 49.37% (157/318). Most of the patients referred to CHUB from DHs were of parity 1 to 3 (64.8%); 48.7% of all referred patients had delivered eutocically one to three times, and 39.6% reported one prior cesarean delivery, while a majority of referred patients had no history of C-section (49.1%). 33.3% of the recently born neonates did not survive. The prevalence of HIV among referred patients was 3.1% (10/318); 0.6% of them were taking only Bactrim, while 2.5% were taking HAART (Highly Active Anti-Retroviral Treatment). The majority of referred patients (89.3%) had no history of chronic disease, while 7.5% had asthma in their history. (Table 1)

We found that the majority of referrals were for PPH (32.38% (103/318)), either post-C/S or post-NSVD; a majority of these patients arrived at CHUB in hemorrhagic shock. Septic shock, post-C/S peritonitis, DVT, post-partum anemia, superficial wound infection after C/S, endometritis, preeclampsia and eclampsia were also among the main reasons for referral. (Table 3)

Among the 103 patients referred mainly for PPH, 12.6% (13/103) were managed conservatively, 11.6% (12/103) were managed only medically, and more than half of the patients with PPH, 56.3% (58/103), were managed both medically and surgically. Thirty-two patients (10% of the total referred patients) underwent abdominal hysterectomy for hemostasis. Patients diagnosed with post-C/S peritonitis with other associated conditions, such as septic shock, numbered 9.1% (29/318), and they were mainly managed medically and surgically together; 25 patients (25/318 = 7.8%) underwent laparotomy, and 5 of them (5/318 = 1.57% of the referred patients) underwent hysterectomy because the uterus was necrotised and impossible to conserve.

This study showed that 6.3% (20/318) of the patients referred to CHUB/OBGYN died; most of them, 65% (13/20), died within 3 days of admission, and 40% (8/20) died in the ICU. 80.20% (255/318) of patients improved and were discharged; most of them, 40.8% (104/255), were discharged within ten days of admission, and only 16.5% (42/255) spent more than 20 days at the hospital. In total, 11.63% (37/318) of patients were discharged with complications, and 1.9% (6/318) were referred to another hospital, mainly to Kigali University Teaching Hospital (CHUK) and the Rwanda Military Hospital, primarily for urologist review. 72.3% of all patients referred to CHUB/OBGYN were managed with the right diagnosis from their admission. 3.5% (11/318) of the referred patients had various kinds of fistulas, and 1.9% had ureteric or bladder injury as post-partum complications; the majority of patients recovered with no complication. In total, 8.2% (26/318) of patients were admitted to the ICU, and of them, 34.6% (9/26) spent more than six days there.
Among patients diagnosed with post-C/S peritonitis, 17.2% (5/29) had fistulas, mainly enterocutaneous fistulas, and 6.8% (2/29) had bowel injury with colostomy or ileostomy plus fistula. (Table 6) When we analysed health condition at discharge against ICU admission, we found a significant association: patients who were admitted to the ICU were also prone to complications and at risk of death (P=0.04). Results of this study also showed that patients from Western province district hospitals, mainly Mibirizi and Gihundwe DHs, spent more than 16 hours at the hospital before being referred (Table 7). A linear regression established that the time spent at the DH before referral significantly predicted the complications patients had and their health condition at discharge (P=0.0005): the longer a patient spent at the DH, the more severe her complications and the more likely she was to die if she had post-cesarean peritonitis (P=0.007). Patients who were managed with the correct diagnosis at admission were less likely to have complications and to die (P=0.0033). Eleven of the 20 patients who died (55%) had spent between 0 and 6 hours at their respective DHs before being referred, and they had PPH as the working diagnosis. (Table 6) When we studied the effect of other chronic diseases on patients' condition throughout their admission, we found that most patients had no significant medical history and most were HIV negative; there was no significant association between past medical history and the outcomes evaluated in this study (P=0.9). Most of the patients who died, 60% (12/20), were HIV negative; there was no association between HIV status and the risk of death during hospitalization, and no patient with HIV had complications or was referred to another hospital (P=0.4). Over the two-year period, this study showed that almost a third of referred patients, 32.38% (103/318), had PPH as a morbidity associated with other conditions such as hypovolemic shock and anemia, followed by septic shock and post-C/S peritonitis, deep venous thrombosis (DVT), postpartum anemia, superficial wound infection after C/S, endometritis, pre-eclampsia and eclampsia. Our results are close to those of other studies showing that women suffer pregnancy-related complications before, during and even after delivery, and that postpartum complications include PPH, postpartum infection, pre-eclampsia and eclampsia, and DVT (Ferdous et al., 2012; Conde-Agudelo et al., 2004; WHO, 2012; Iyengar et al., 2012). A study done in Tanzania on maternal near miss (MNM) showed that the major causes were eclampsia and postpartum haemorrhage, and that CS complications accounted for 7.9% of the MNM events and 13% of the maternal deaths (Litorp et al., 2014). Another study, done at three university hospitals with a high rate of CS in Tehran, Iran, on maternal near miss revealed that severe postpartum hemorrhage (35%, 29/82), severe preeclampsia (32%, 26/82), and placenta previa/abnormally invasive placenta (10%, 8/82) were the most frequent causes of MNM (Mohammadi et al., 2016). Rulisa et al., in their study on maternal near miss and mortality in a tertiary care hospital in Rwanda, revealed that the majority of severe obstetric morbidity and mortality resulted from sepsis/peritonitis (30.2%), primarily following caesarean deliveries, hypertensive disease (28.6%), and hemorrhage (19.3%) (Mantel et al., 1998).
Looking at the management of patients diagnosed with PPH, this study revealed that among the 103 patients referred mainly for PPH, 12.6% of them (13/103) were managed conservatively, 11.6% (12/103) were managed only medically, and more than half of the patients with PPH, 56.3% (58/103), were managed both medically and surgically. Among them, 32 patients (32/318) underwent abdominal hysterectomy for hemostasis, which equals 10.06% of the total referred patients. This management is comparable to that reported in other studies, which describe first attempting conservative management by giving uterotonics such as oxytocin, misoprostol (Cytotec), and ergot alkaloids such as ergometrine or its derivative methylergonovine (methylergometrine, Methergine); if conservative management fails, invasive treatment of PPH is initiated to avoid severe morbidity and mortality, using uterine balloon tamponade, uterine compression sutures, angiographic arterial embolisation, uterine ligation and hysterectomy. However, Bassey et al., in their study on emergency peripartum hysterectomy in a low-resource setting, found that the commonest indication for peripartum hysterectomy was uterine rupture, whereas in our setting hysterectomy was done when the uterus was damaged during fetal extraction and impossible to repair, or when there was an atonic uterus not responding to conservative and invasive procedures short of hysterectomy (Rwanda et al., 2009; Litorp et al., 2014). The results of this study highlight the large number of mothers who lost their uterus, and with it their obstetric future. In total, 45.2% (144/318) of all patients referred to CHUB/OBGYN in the postpartum period during the study period were managed surgically, and 16% (51/318) underwent hysterectomy among other management. Looking at the management of referred patients with postpartum infection, 9.1% of patients (29/318) had post-C/S peritonitis associated with other conditions such as septic shock. Five of them (5/29 = 17.24%, or 5/318 = 1.57%), plus 4 patients (4/20 = 20%) among those initially diagnosed with superficial surgical site wound infection, underwent hysterectomy because the uterus was necrotic and impossible to conserve. Recent cesarean section was associated with more PPH and more postpartum infection than vaginal delivery, and these patients were more likely to undergo hysterectomy (P=0.021). These results are similar to those of other studies, in which puerperal infection and post-cesarean infection, whether superficial surgical site infection or post-CS peritonitis, are among the severe morbidities and causes of maternal near miss and mortality. Severe infection can end in sepsis and septic shock, and most patients with post-CS sepsis end up having their uterus removed because it is necrotic and impossible to conserve. Patients from Western province district hospitals, mainly Mibirizi and Gihundwe DHs, spent more than 16 hours at the hospital before being referred, and a linear regression established that the time spent at the DH before referral significantly predicted the complications patients had and their health condition at discharge. The results showed that the longer a patient spent at the DH, the more severe her complications and the more likely she was to die if she had post-cesarean peritonitis (P=0.007), while patients who were managed with the correct diagnosis at admission were less likely to have complications and to die (P<0.01).
Eleven of the 20 patients who died (55%) had spent at least 6 hours at their respective DHs before being referred, with PPH as the working diagnosis. This is too much time, as we know that PPH can kill a patient within two hours postpartum if nothing is done. When we add this delay to the long distance from the Western province DHs, the questionable availability of emergency obstetric care, and the interval between admission, diagnosis and the start of management (an average of 72 hours), we understand why many patients with PPH could not even arrive at CHUB. Time matters in emergency obstetric care. In our study period, the case fatality rate was 6.3% (20/318); that is, 6.3% of all patients referred from DHs to the CHUB/OBGYN department died, most of them, 65% (13/20), within 3 days of admission, and 40% (8/20) in the ICU. They died because of the severity of their condition at admission, and probably because of delays in transferring or consulting, since most of them (72.3%) were managed with the correct diagnosis at the OBGYN department from their admission. This study revealed that some patients had various kinds of complications, depending on the initial diagnosis, while the majority were discharged without complications. Our attention here was on long-term complications and complications that interfere with the patient's social life after discharge, such as fistulas. Patients referred for PPH and post-C/S peritonitis had more complications: 44.6% (46/103) of patients had some kind of fistula, mainly vesico-vaginal fistulas. Among patients diagnosed with post-C/S peritonitis, 17.2% (5/29) had fistulas, also mainly enterocutaneous fistulas, and 6.8% (2/29) had bowel injury with colostomy or ileostomy plus fistula. These results show how much patients with fistulas suffer even after discharge, as they can be marginalized and abandoned by their families. This study did not compare complications between patients managed at CHUB and patients referred from district hospitals, and to the best of our knowledge no other similar study has been done at the CHUB/OBGYN department. A two-year period may be long enough to give a general picture of how patients are being managed at DHs, and this may serve as baseline data when supervisions are done. We recommend training DH staff in emergency obstetric care and essential surgical skills, encouraging DHs to refer their patients early, and reviewing their decision-making on when a C/S should be done. In conclusion, most patients referred from DHs to the CHUB/OBGYN department during the study period had PPH or post-C/S peritonitis. Among them, 6.3% died, 80.2% were discharged home improved, and the rest had long-term complications. A long stay at the DH before referral was associated with adverse outcomes and subsequent complications, while an early correct diagnosis at the CHUB/OBGYN department was associated with good outcomes and fewer complications.
Multidisciplinary team approach in acute myocardial infarction patients undergoing veno-arterial extracorporeal membrane oxygenation
Background
Limited data are available on the impact of a specialized extracorporeal membrane oxygenation (ECMO) team on clinical outcomes in patients with acute myocardial infarction (AMI) complicated by cardiogenic shock (CS). This study evaluated whether a specialized ECMO team is associated with improved in-hospital mortality in AMI patients undergoing veno-arterial (VA) ECMO.
Methods
A total of 255 AMI patients who underwent VA-ECMO were included. In January 2014, a multidisciplinary ECMO team was founded at our institution. Eligible patients were classified into a pre-ECMO team group (n = 131) and a post-ECMO team group (n = 124). The primary outcome was in-hospital mortality.
Results
In-hospital mortality (pre-ECMO team vs. post-ECMO team, 54.2% vs. 33.9%; p = 0.002) and cardiac intensive care unit mortality (pre-ECMO team vs. post-ECMO team, 51.9% vs. 30.6%; p = 0.001) were significantly lower after the implementation of a multidisciplinary ECMO team. In a multivariable logistic regression model, implementation of the multidisciplinary ECMO team was associated with a reduction in in-hospital mortality [odds ratio: 0.37, 95% confidence interval (CI) 0.20–0.67; p = 0.001]. The incidence of all-cause mortality [58.3% vs. 35.2%; hazard ratio (HR): 0.49, 95% CI 0.34–0.72; p < 0.001] and readmission due to heart failure (28.2% vs. 6.4%; HR: 0.21, 95% CI 0.08–0.58; p = 0.003) at 6 months of follow-up were also significantly lower in the post-ECMO team group than in the pre-ECMO team group.
Conclusions
Implementation of a multidisciplinary ECMO team was associated with improved clinical outcomes in AMI patients complicated by CS. Our data support that a specialized ECMO team is indispensable for improving outcomes in patients with AMI complicated by CS.
Background
Cardiogenic shock (CS) is the main cause of mortality in patients with acute myocardial infarction (AMI) [1,2]. Despite advancements in reperfusion and pharmacological therapy, the short-term mortality rate of patients with AMI complicated by CS remains unacceptably high [1,2]. Particularly, in refractory CS not responding to conventional medical therapies, the in-hospital mortality rate reaches 50% to 60% [3,4], and mechanical support such as veno-arterial (VA) extracorporeal membrane oxygenation (ECMO) is recommended in both the latest American Heart Association and European Society of Cardiology guidelines (classes IIA and IIB, respectively) [5,6]. These poor outcomes are due to the complex and hemodynamically diverse state of cardiogenic shock [7,8]. The high acuity of maintaining ECMO and the interaction between the native heart and VA-ECMO may also be related to the poor outcomes [9,10]. In particular, running VA-ECMO is associated with many serious complications, which may contribute to further increases in morbidity and mortality [9,11-13]. Accordingly, related organizations have recommended that these patients be managed by a collaborative multidisciplinary team with trained specialists [8,14].
However, for AMI complicated by CS, which is the most common indication for VA-ECMO [15], the impact of a multidisciplinary approach on clinical outcomes has not been investigated. Therefore, we sought to identify whether a multidisciplinary ECMO team is associated with improvements in in-hospital mortality among patients with AMI complicated by CS who underwent VA-ECMO.
Study population
The study population was derived from the prospective institutional VA-ECMO registry of Samsung Medical Center in Seoul, Republic of Korea from May 2004 to July 2018 (Fig. 1). From this registry, AMI patients complicated by CS were included in the analysis. AMI was defined as evidence of myocardial injury (defined as an elevation of cardiac troponin values, with at least one value above the 99th-percentile upper reference limit) with necrosis in a clinical setting consistent with myocardial ischemia [6]. CS was defined as persistent hypotension (systolic blood pressure < 90 mmHg) for 30 min, or a state that required inotrope or vasopressor support to achieve a systolic blood pressure of more than 90 mmHg despite adequate filling status, with signs of hypoperfusion [6]. VA-ECMO was applied to patients with medically refractory CS that did not respond to inotropes and vasopressors, or cardiac arrest that was not resuscitated with advanced cardiac life support [3,9]. Patients who received VA-ECMO due to stable angina, unstable angina, or variant angina were excluded from this study. Patients who were clinically stable before revascularization but received VA-ECMO prophylactically because of poor cardiac function and the high risk of the planned treatment were also excluded. Finally, 255 patients were analyzed. Based on the date the multidisciplinary ECMO team was founded at our institution, patients were classified into two groups: a pre-ECMO team group (before January 2014, n = 131) and a post-ECMO team group (after January 2014, n = 124). The institutional review board of Samsung Medical Center approved this study, and written informed consent was obtained.
Multidisciplinary ECMO team
Our institution is a tertiary referral hospital with a tertiary-level intensive care unit. Since the initiation of the use of ECMO in 2004, the number of patients treated with ECMO has increased gradually. Currently, more than 100 patients are treated with ECMO each year at our institution. Cardiac surgeons or interventional cardiologists inserted VA-ECMO at the bedside or in the catheterization laboratory. Unless there were special indications, peripheral cannulation with a percutaneous approach using the Seldinger technique was chosen as the initial implant method. The Capiox Emergency Bypass System (Capiox EBS™; Terumo, Inc., Tokyo, Japan) and Permanent Life Support (PLS; MAQUET, Rastatt, Germany) systems were used in our hospital. All patients received unfractionated heparin as an anticoagulant unless there was active bleeding. Under our hospital's own protocol, the heparin infusion rate was adjusted to achieve a target activated clotting time of 150 to 180 s and an activated partial thromboplastin time of 55 to 75 s, respectively. In the event of persistent pulmonary edema after ECMO initiation despite diuresis and inotropes, left ventricular decompression was achieved by either percutaneous atrial septostomy or surgical venting. In January 2014, a multidisciplinary ECMO team was founded at our institution.
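As a toy illustration of the anticoagulation targets just quoted (ACT 150-180 s, aPTT 55-75 s), the following Python sketch flags measurements that fall outside the target ranges. The function name and constants are ours, not part of the institutional protocol, and real heparin titration decisions of course involve far more clinical context:

# Hypothetical helper (illustrative only, not the hospital's protocol):
# flag whether coagulation measurements are outside the quoted targets,
# i.e. whether the heparin infusion rate may need adjustment.
ACT_RANGE = (150, 180)   # activated clotting time, seconds
APTT_RANGE = (55, 75)    # activated partial thromboplastin time, seconds

def heparin_adjustment_needed(act_s: float, aptt_s: float) -> bool:
    act_ok = ACT_RANGE[0] <= act_s <= ACT_RANGE[1]
    aptt_ok = APTT_RANGE[0] <= aptt_s <= APTT_RANGE[1]
    return not (act_ok and aptt_ok)

print(heparin_adjustment_needed(165, 60))  # False: both values on target
print(heparin_adjustment_needed(140, 60))  # True: ACT below target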
Our ECMO team consists of interventional cardiologists, critical care physicians, cardiovascular surgeons, heart failure physicians, a pharmacist, a nutritionist, and perfusionists who were formal intensive care registered nurses and received specific ECMO training. Before the team's establishment, the attending physician, who was capable of inserting and maintaining ECMO, was responsible for running ECMO. Most ECMO-related decisions, from initiation to weaning, were made solely by the attending physician, who was also in charge of staff training. No protocol existed for maintaining ECMO. Only elective consultation with experienced cardiothoracic surgeons was possible in difficult clinical situations, with no 24-h on-call coverage by an ECMO specialist. After the foundation of the ECMO team, however, team members readily participated in the management of ECMO patients and in all ECMO-related decisions, as described below. First, both the initiation and weaning of ECMO were performed under the supervision of the ECMO team. Based on our institutional ECMO protocols for indications and contraindications (Additional file 1: Table S1), the ECMO team evaluated the eligibility of the patient for ECMO and made the final decision on whether to initiate ECMO. The decision to wean was also made jointly by the attending physician and the ECMO team, based on our institutional weaning criteria. Second, as part of daily rounds, echocardiography was performed to evaluate cardiac function and recovery. The pharmacist and nutritionist adjusted prescribed medications and the nutritional plan in accordance with the alterations in pharmacokinetics and metabolic status caused by running ECMO and the critically ill status of the patient. The ECMO team also checked the functional status of the ECMO device, including the pump, oxygenator, and cannula, on a daily basis, and assessed the occurrence of ECMO-related complications and the adequacy of the relevant management. Third, ECMO-trained physicians, cardiovascular surgeons, and perfusionists provided 24-h on-call coverage for ECMO patients and potential candidates. Fourth, the ECMO team was responsible for staff training. Doctors and nurses in charge of ECMO patients were educated by the ECMO team in order to properly manage patients in complicated clinical situations. Fifth, a weekly meeting was held to discuss the issues of current ECMO patients as well as to review previous cases for quality assurance.
Patient management, data collection, and study outcomes
Patient management was performed according to current standard guidelines [5,6,16,17]. The choice of treatment strategy for percutaneous coronary intervention (PCI) (type, diameter, and length of stents; use of intravascular ultrasound; glycoprotein IIb/IIIa inhibitor use; and thrombus aspiration) was left to the discretion of the attending physicians. Unless there was an undisputed reason for discontinuing dual-antiplatelet therapy, all patients were recommended to take aspirin indefinitely plus a P2Y12 inhibitor for at least 1 year after the index procedure. Coronary artery bypass grafting (CABG) was performed using current standard methods. The left internal mammary artery was considered preferential for revascularization of the left anterior descending artery. Patients who underwent CABG were recommended to take aspirin indefinitely; if intolerant to aspirin, taking clopidogrel as an alternative was also allowed.
Patients were prospectively registered at the time of index hospitalization. Demographic features and cardiovascular risk factor data were collected by detailed interview with patients or their families at admission. Coronary angiographic findings and the procedural history of PCI, CABG, and ECMO were gathered during hospitalization. Information about adjunctive therapies in addition to ECMO, such as inotropes, mechanical ventilation, and continuous renal replacement therapy, was collected at the time of discharge. Follow-up outcomes were obtained from review of patients' electronic medical records by research coordinators of the dedicated registry. Clinical events that occurred within a 6-month follow-up period were analyzed. The primary outcome was in-hospital mortality. Secondary outcomes included cardiac intensive care unit (CICU) mortality, 6-month all-cause death, 6-month readmission due to heart failure, successful weaning of ECMO, complications in the CICU, length of CICU stay, duration of ECMO, duration of mechanical ventilation, and duration of continuous renal replacement therapy. All clinical outcomes were defined according to the Academic Research Consortium [18]. All deaths were considered cardiac-related unless a definite non-cardiac cause could be established. Successful weaning of ECMO was defined as maintaining hemodynamic stability after ECMO removal, with or without receiving a durable left ventricular assist device or heart transplantation. The complications recorded were major bleeding, vascular complications, infection, and limb ischemia. Major bleeding was defined as bleeding in the brain, thorax, mediastinum, gastrointestinal tract, or abdomen, or any fatal bleeding requiring transfusion or intervention. Vascular complications included vessel perforation, arterial dissection, and site bleeding. Fatal site bleeding was not counted as a vascular complication but was instead included in major bleeding. Minor complications such as local hematoma were not recorded as vascular complications. Infection was defined as the presence of clinical symptoms or signs of infection with concurrent microbiological evidence of infection confirmed by blood culture during the CICU stay. Limb ischemia was defined as cases requiring surgical management or having dependent performance (0 to 2 on the functional ambulation classification scale) at discharge as a result of limb ischemia [19].
Statistical analysis
Categorical variables were presented as numbers and relative frequencies and compared using the Chi-square test or Fisher's exact test, as appropriate. Continuous variables were presented as mean ± standard deviation or median with interquartile range (Q1 to Q3) and compared using Student's t test or the Wilcoxon rank-sum test, as appropriate. The risk of in-hospital mortality was compared using logistic regression analysis and presented as odds ratios (OR) with 95% confidence intervals (CI). To identify independent predictors of in-hospital mortality, multivariable logistic regression analysis was performed. Variables were included in the analysis if they showed a significant relation in the univariate analysis with a p value of less than 0.1 and were considered clinically relevant. Cumulative incidences of clinical outcomes were calculated by Kaplan-Meier estimates and compared using a log-rank test. Cox proportional hazards regression analysis was performed to compare the risk of clinical events before and after the ECMO team establishment; a minimal sketch of this modeling approach follows.
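The study's analyses were done in R (see below). As a rough Python equivalent of the logistic and Cox models described here, with a hypothetical data file and column names of our own invention rather than the registry's actual variables:

# Sketch of the modeling approach described above (not the authors' code;
# the CSV file and every column name are hypothetical placeholders).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("va_ecmo_registry.csv")

# Multivariable logistic regression for in-hospital mortality;
# exponentiated coefficients give adjusted odds ratios with 95% CIs.
logit = smf.logit(
    "in_hospital_death ~ ecmo_team + age + ohca + revascularized + mv + crrt",
    data=df,
).fit()
print(np.exp(logit.params))      # adjusted ORs
print(np.exp(logit.conf_int()))  # 95% CIs

# Cox proportional hazards model for 6-month all-cause death.
cph = CoxPHFitter()
cph.fit(df[["followup_days", "death_6m", "ecmo_team", "age"]],
        duration_col="followup_days", event_col="death_6m")
cph.print_summary()  # hazard ratios with 95% CIs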
Risks of clinical events were presented as hazard ratios (HR) with 95% CIs. All probability values were two-sided, and p values of less than 0.05 were considered statistically significant. Statistical analyses were performed using R Statistical Software (version 3.5.2; R Foundation for Statistical Computing, Vienna, Austria).
Baseline and treatment characteristics
Baseline clinical and angiographic characteristics are shown in Table 1. Of the total patients, 64.3% presented with ST-segment elevation myocardial infarction (STEMI), 15.3% had out-of-hospital cardiac arrest, and 63.9% had in-hospital cardiac arrest. As for the angiographic profile, the left anterior descending artery and left main coronary artery accounted for 44.7% and 23.1% of the culprit vessels, respectively. A total of 78.8% of patients presented with multivessel disease. Nevertheless, there were no differences in baseline clinical and angiographic characteristics between the two groups, except for body mass index, previous history of myocardial infarction and PCI, and baseline total bilirubin. Indicators of severity in ECMO patients, such as the ENCOURAGE score, AMI-ECMO score, and SOFA score, were also not different between the two groups. Regarding treatment characteristics (Table 2), successful revascularization through either PCI or CABG was higher in the post-ECMO team group than in the pre-ECMO team group (84.7% vs. 94.4%; p = 0.022). In STEMI patients, door-to-balloon time was shorter in the post-ECMO team group than in the pre-ECMO team group (114.0 vs. 88.0; p = 0.032). Extracorporeal cardiopulmonary resuscitation was performed in 67.8% of the study population, with no significant difference in proportion between the two groups. Arrest to ECMO pump-on time (for extracorporeal CPR patients only) and shock to ECMO pump-on time (for non-extracorporeal CPR patients only) were numerically shorter in the post-ECMO group than in the pre-ECMO group, without statistical significance. Among supplementary treatments after ECMO insertion, the use of inotropes or vasopressors, intra-aortic balloon pump, and mechanical ventilation was significantly lower, whereas distal perfusion was more frequently performed, in the post-ECMO team group than in the pre-ECMO team group.
Clinical outcomes
Clinical outcomes are presented in Table 3. In-hospital mortality occurred in 113 patients (44.3%) and CICU mortality in 106 patients (41.6%). In-hospital mortality (54.2% vs. 33.9%; p = 0.002) and CICU mortality (51.9% vs. 30.6%; p = 0.001) were significantly lower in the post-ECMO team group than in the pre-ECMO team group (Fig. 2). The lower rate of in-hospital mortality in the post-ECMO team group was mainly driven by the lower rate of cardiovascular death (45.0% vs. 25.0%; p = 0.001). However, there were no significant differences between the two groups regarding non-cardiovascular death (9.2% vs. 8.9%; p > 0.99). Regarding the management of VA-ECMO patients in the CICU, specific parameters are compared in Table 3 and Additional file 1: Table S2. Successful weaning of VA-ECMO (57.3% vs. 75.8%; p = 0.003) was higher in the post-ECMO team group than in the pre-ECMO team group. However, the length of CICU stay did not differ significantly between the two groups, and the durations of ECMO, mechanical ventilation, and continuous renal replacement therapy were longer in the post-ECMO team group than in the pre-ECMO team group.
As for complications (i.e., major bleeding, vascular complication, infection, and limb ischemia), each component tended to be lower in the post-ECMO team group than in the pre-ECMO team group, resulting in a statistically significant decrease in overall complications in the post-ECMO team group (50.4% vs. 29.0%; p = 0.001).
Independent predictors of in-hospital mortality
Age, out-of-hospital cardiac arrest, successful revascularization, use of mechanical ventilation, use of continuous renal replacement therapy, annual ECMO volume, and the multidisciplinary ECMO team approach showed a significant relation in the univariable analysis and were included in the multivariable logistic regression model (Table 4). In this model, the multidisciplinary ECMO team approach was associated with a decreased risk of in-hospital mortality (adjusted OR: 0.37, 95% CI 0.20-0.67; p = 0.001).
Discussion
The current study is the first to evaluate the impact of a multidisciplinary ECMO team approach on clinical outcomes in AMI patients complicated by CS using data from a prospective VA-ECMO registry. The main findings were as follows. First, in-hospital mortality and CICU mortality were significantly lower in the post-ECMO team group than in the pre-ECMO team group. Second, the risks of all-cause death and readmission due to heart failure at 6-month follow-up were also significantly lower in the post-ECMO team group than in the pre-ECMO team group. Third, in a multivariable logistic regression model, the multidisciplinary team approach was associated with a decreased risk of in-hospital mortality in AMI patients with CS undergoing VA-ECMO. Although a multidisciplinary team approach has been recommended in the care of critically ill patients, only a few studies to date have addressed its effects on clinical outcomes [20,21]. Moreover, although the American Heart Association recommended that patients with CS be managed by a multidisciplinary team [8], this recommendation was primarily based on expert opinion and on research regarding the association of hospital volume with clinical outcomes in CS patients, not on the multidisciplinary approach itself [8,22]. Furthermore, considering that inserting ECMO is a high-risk intervention and maintaining ECMO requires highly sophisticated measures, the Extracorporeal Life Support Organization guidelines recommended that ECMO be operated by a multidisciplinary team including trained specialists [14]. However, there are no data about the relationship between multidisciplinary care and clinical outcomes in AMI patients complicated by CS undergoing VA-ECMO. Therefore, we aimed to investigate the impact of the multidisciplinary approach in this setting and demonstrated its beneficial effect, including a reduction in mortality. Our study has several strengths. A large number of patients were observed for a sufficient follow-up period of 6 months, considering that the study population consisted of extremely severely ill patients with CS. Also, mortality as well as various treatment strategies and secondary outcomes were compared before and after the introduction of the multidisciplinary ECMO team. Lastly, the study population was extracted from a large prospective registry of a tertiary university hospital that reflects the real-world population and practices. The in-hospital mortality in our study before the introduction of the multidisciplinary team was 54.2%, similar to that of other multicenter studies (50-60%) [4,23].
Therefore, our study suggests that, on top of contemporary practice for CS, an additional benefit of a multidisciplinary approach may exist. The reasons why the multidisciplinary approach improved clinical outcomes in the current study are multifactorial. First, the multidisciplinary team consisted of experts from diverse fields, enabling critically ill CS patients to receive systematic care and, at the same time, appropriate treatment for each problem. As team leader, the critical care physician was closely involved and coordinated the multidisciplinary approach in order to properly manage multifaceted acute critical care [24]. Heart failure physicians were also involved in treatment from the initial state of shock and contributed to improved mortality not only by providing acute heart failure care, but also by maintaining the patient's long-term cardiac function and stably directing the process toward exit strategies such as ventricular assist devices and heart transplantation for indicated patients [25]. Furthermore, a pharmacist and a nutritionist were included in the multidisciplinary team. The adjustment of medications according to the altered pharmacokinetics of ECMO patients allowed drugs to be maintained at appropriate therapeutic levels without side effects [26,27]. Likewise, customizing nutritional delivery according to the patient's altered metabolic status on ECMO likely provided a similar benefit [28,29]. (Table 4 presents the predictors of in-hospital mortality. The C-statistic of the logistic regression model for in-hospital mortality was 0.795 (95% CI 0.740-0.850). Variables entered in the univariate analysis for evaluating a significant relation with the primary outcome included: multidisciplinary approach, age, male sex, body mass index, hypertension, diabetes mellitus, dyslipidemia, chronic kidney disease, history of myocardial infarction, history of percutaneous coronary intervention, history of cerebrovascular accident, ST-segment elevation myocardial infarction, out-of-hospital cardiac arrest, left ventricular ejection fraction, laboratory findings in Table 1, anterior infarction, multivessel disease, percutaneous coronary intervention, coronary artery bypass graft, extracorporeal cardiopulmonary resuscitation, insertion of ECMO before revascularization, distal perfusion, use of inotropes or vasopressors, use of intra-aortic balloon pump, use of mechanical ventilation, use of continuous renal replacement therapy, overall complications, and annual ECMO volume. CI, confidence interval; ECMO, extracorporeal membrane oxygenation; OR, odds ratio.) Second, our institutional maintenance strategies for ECMO patients were changed in order to reduce ischemic time after the multidisciplinary team implementation. If cardiopulmonary resuscitation (CPR) persisted for longer than 10 min without return of spontaneous circulation, the ECMO team was activated and extracorporeal CPR was immediately started, unless the patient had a contraindication to ECMO. In addition, at least one primed ECMO circuit was always prepared in advance at our institution. As a result, in STEMI patients, door-to-balloon time was significantly shorter in the post-ECMO group than in the pre-ECMO group. Arrest to ECMO pump-on time and shock to ECMO pump-on time also tended to be shorter in the post-ECMO team group than in the pre-ECMO team group. Third, various efforts were made to reduce ECMO-related complications.
During daily rounds, cardiac function was evaluated by echocardiography and clinical settings were modified in order to maintain appropriate hemodynamic status. These efforts helped prevent organ damage due to ischemia or overperfusion. The multidisciplinary team also assessed the risk of ECMO-related complications by checking physical examinations and related laboratory results on a daily basis. In addition, as one of the changes in our institution's ECMO maintenance strategies, awake ECMO was pursued unless pulmonary gas exchange was so insufficient as to cause upper-body hypoxia. In our study, the use of mechanical ventilation was significantly lower in the post-ECMO team group, and this may have played an important role in avoiding complications related to mechanical ventilation and sedation [30]. Lastly, mandatory distal perfusion, which has been reported to reduce limb ischemia and even improve survival [31], was strongly recommended. As a result, all of these diverse efforts significantly reduced the incidence of complications after the team's establishment, to values considerably lower than those reported in other studies [9]. As for limitations, first, this study was an observational, prospective registry-based, single-center study. Consequently, the influence of confounding bias or selection bias on the results cannot be excluded. Although multivariable adjusted analysis was performed with various variables, the effects of confounding variables, such as annual ECMO volume or the ECMO learning curve, were not completely corrected for. Therefore, the results may be influenced by multiple factors other than the multidisciplinary team. Furthermore, there might be concern about differences between the two groups in the selection of patients who were appropriate candidates for VA-ECMO. However, considering that selecting appropriate patients through team-based, protocolized decisions is itself an effect of the multidisciplinary team, this can be considered one of the team's benefits rather than selection bias. Second, advances in the treatment of shock patients or the accumulation of experience over time may have introduced potential bias. During the study period, three major randomized trials in AMI patients complicated by CS were conducted [2,32,33]. The first two studies investigated the prognostic implications of immediate multivessel PCI and IABP, respectively, and showed no significant difference in mortality [2,33]. On the other hand, a subgroup analysis of the third study, which compared the effects of vasopressors in patients with CS, showed a survival benefit of norepinephrine over dopamine [32]. These advancements seem to have played some role in improving the clinical results. However, as shown in Fig. 2, when the patients treated before the multidisciplinary team establishment (2004-2013) were divided into two groups according to time, there was no significant difference in clinical outcomes between them, whereas there was a significant improvement in mortality between before and after 2014. Considering that there was no major change in patient management other than the foundation of the multidisciplinary team, this improvement can be regarded as an additional benefit of the multidisciplinary approach on top of other advances in practice strategy or the accumulation of experience.
Third, our data could not show in detail how the multidisciplinary approach affected mediating outcomes, or which mediating outcomes improved and thereby led to decreased mortality. This is a limitation of our retrospective study, in which these data were insufficiently investigated. A further, thoroughly designed prospective study is needed to elucidate the detailed influence of the multidisciplinary approach. Fourth, the multidisciplinary approach did not show a significant reduction in the duration of CICU stay or adjunctive treatment. Nonetheless, this result should be interpreted with caution: it might reflect the ability of the multidisciplinary team to keep patients stable in the long term and to save those who previously might have died. As a result, the multidisciplinary approach inevitably increased the duration of organ support.
Conclusion
A multidisciplinary approach was associated with significantly lower in-hospital mortality in AMI patients complicated by CS who underwent VA-ECMO. Therefore, our findings support the current expert consensus that a multidisciplinary ECMO team is indispensable for improving outcomes in AMI patients with CS.
Impact of age on the diagnostic performances and cut-offs of APRI and FIB-4 for significant fibrosis and cirrhosis in chronic hepatitis B
Aims
To assess the diagnostic performances of APRI and FIB-4 using age as a categorical marker.
Methods
822 chronic hepatitis B (CHB) patients were included. Using the METAVIR scoring system as a reference, the performances of APRI and FIB-4 were compared between patients aged ≥30 and patients aged <30 years.
Results
The APRI AUROC in patients aged <30 years was lower than that in patients aged ≥30 years for significant fibrosis (0.61 vs 0.70, p<0.001) and cirrhosis (0.64 vs 0.78, p<0.001). The FIB-4 AUROC in patients aged <30 years was lower than that in patients aged ≥30 years for significant fibrosis (0.57 vs 0.65, p<0.001) and cirrhosis (0.63 vs 0.72, p<0.001). Using specificity ≥90%, the APRI cut-off in patients aged <30 years was lower than that in patients aged ≥30 years for significant fibrosis (1.0 vs 1.2) and cirrhosis (1.2 vs 1.5). Using sensitivity ≥90%, the APRI cut-off in patients aged <30 years was also lower than that in patients aged ≥30 years for significant fibrosis (0.2 vs 0.4) and cirrhosis (0.3 vs 0.5). Using specificity ≥90%, the FIB-4 cut-off in patients aged <30 years was lower than that in patients aged ≥30 years for significant fibrosis (1.2 vs 2.1) and cirrhosis (1.4 vs 2.6). Using sensitivity ≥90%, the FIB-4 cut-off in patients aged <30 years was also lower than that in patients aged ≥30 years for significant fibrosis (0.5 vs 0.8) and cirrhosis (0.8 vs 1.2).
Conclusions
Evaluation of the diagnostic performances of APRI and FIB-4 should take age into consideration.
INTRODUCTION
Globally, an estimated 240 million patients have chronic hepatitis B virus (HBV) infection, with intermediate-to-high prevalence in the Asia-Pacific region [1]. In China, HBV seroepidemiology has already shown a decrease in the prevalence of HBsAg, from 9.75% in 1992 to 7.18% in 2006 [1,2]. CHB patients with liver fibrosis are at increased risk of cirrhosis, and cirrhotic patients are at increased risk of liver decompensation, hepatocellular carcinoma (HCC) and death [3]. Sustained suppression of HBV replication is associated with improvement in liver histology [4,5]. According to CHB guidelines, patients with significant fibrosis or cirrhosis should receive antiviral therapy [1,6-8]. Besides, evaluation of liver fibrosis has an important role in patient prognosis and in determining candidacy for HCC surveillance. Therefore, the assessment of liver fibrosis needs to be considered in patients in whom liver fibrosis or cirrhosis is suspected. Liver biopsy is the gold standard for assessing the degree of liver fibrosis, but it is limited by its high cost, invasiveness, and risk of complications [9]. Non-invasive fibrosis tests based on serum indices or ultrasound are increasingly used for evaluating liver fibrosis. Transient elastography performed with FibroScan is recognized as an excellent fibrosis test because of its high diagnostic performance and non-invasive procedure, and because it can be undertaken in an outpatient setting [10]. However, FibroScan is limited by the high cost of the equipment and maintenance fees [11].
Serum fibrosis indices such as the aspartate transaminase (AST) to platelet ratio index (APRI) and the fibrosis index based on the 4 factors (FIB-4) consist of indirect markers such as alanine transaminase (ALT), AST and platelet count; they are associated with lower costs, do not require particular expertise in their interpretation, and can be performed in an outpatient setting [11]. Currently, APRI and FIB-4 are used widely in clinical practice. However, one of the research gaps is evaluating the impact of other factors on the diagnostic performances of APRI and FIB-4. According to the American Association for the Study of Liver Diseases (AASLD) guidelines for the treatment of CHB, in patients who acquired HBV infection at birth or in early childhood, the average age of transitioning from the immune-tolerant to the immune-clearance phase is 30 years [6,12]. According to the European Association for the Study of the Liver (EASL) guidelines for CHB, liver biopsy or even therapy should be considered in patients over 30 years of age and/or with a family history of HCC or cirrhosis [7]. According to the Asian-Pacific Association for the Study of the Liver (APASL) guidelines for CHB, assessment of liver histology is usually recommended to determine the stage of fibrosis in patients older than 30 years with a high viral load [1]. In CHB patients, age over 30 years is associated with a higher likelihood of liver fibrosis than age under 30 years [6,13]. It was hypothesized that age might influence the diagnostic performances of APRI and FIB-4. This study evaluated the impact of age on the diagnostic performances of APRI and FIB-4 in 822 CHB patients.
Diagnostic thresholds of APRI and FIB-4 in patients aged ≥30 years
The cut-offs of APRI and FIB-4 for patients aged ≥30 years are presented in Table 5. By obtaining a sensitivity of at least 90%, the low cut-off of APRI was 0.4 and 0.5, and the low cut-off of FIB-4 was 0.8 and 1.2, for significant fibrosis and cirrhosis, respectively. By obtaining a specificity of at least 90%, the high cut-off of APRI was 1.2 and 1.5, and the high cut-off of FIB-4 was 2.1 and 2.6, for significant fibrosis and cirrhosis, respectively.
Diagnostic thresholds of APRI and FIB-4 in patients aged <30 years
The cut-offs of APRI and FIB-4 for patients aged <30 years are presented in Table 6. By obtaining a sensitivity of at least 90%, the low cut-off of APRI was 0.2 and 0.3, and the low cut-off of FIB-4 was 0.5 and 0.8, for significant fibrosis and cirrhosis, respectively. By obtaining a specificity of at least 90%, the high cut-off of APRI was 1.0 and 1.2, and the high cut-off of FIB-4 was 1.2 and 1.4, for significant fibrosis and cirrhosis, respectively.
DISCUSSION
In this study, we evaluated the impact of age on the diagnostic performances and cut-offs of APRI and FIB-4. Using liver biopsy as the gold standard, the AUROC of APRI for patients aged <30 years was lower than that for patients aged ≥30 years for significant fibrosis (0.61 vs 0.70, p<0.001) and cirrhosis (0.64 vs 0.78, p<0.001); and the AUROC of FIB-4 for patients aged <30 years was also lower than that for patients aged ≥30 years for significant fibrosis (0.57 vs 0.65, p<0.001) and cirrhosis (0.63 vs 0.72, p<0.001).
It can therefore be claimed that APRI and FIB-4 have better diagnostic performances for significant fibrosis and cirrhosis in CHB patients aged ≥30 years than in patients aged <30 years. In this study, APRI and FIB-4 used two cut-offs for diagnosing significant fibrosis or cirrhosis, as the use of a single cut-off would result in suboptimal sensitivity and specificity according to the recent WHO HBV guidelines [8]. A high cut-off with high specificity (i.e. fewer false-positive results) is used to diagnose patients with significant fibrosis or cirrhosis, and a low cut-off with high sensitivity (i.e. fewer false-negative results) to rule out the presence of significant fibrosis or cirrhosis. Using specificity ≥90%, the high cut-off for APRI in patients aged <30 years was lower than that in patients aged ≥30 years for significant fibrosis (1.0 vs 1.2) and cirrhosis (1.2 vs 1.5). Using sensitivity ≥90%, the low cut-off for APRI in patients aged <30 years was also lower than that in patients aged ≥30 years for significant fibrosis (0.2 vs 0.4) and cirrhosis (0.3 vs 0.5). Similar results were found for FIB-4. These results indicate that different cut-offs should be applied for APRI and FIB-4 based on patient age. So far, several liver biopsy scoring systems have been developed, of which the METAVIR system and the Knodell and Ishak scores are the most widely used [8]. Although the Ishak scoring system shows necroinflammatory activity more clearly, the METAVIR scoring system was preferred in this study for the following reasons. First, according to the WHO HBV guideline, the diagnostic performances of APRI and FIB-4 for cirrhosis and significant fibrosis were evaluated with the METAVIR scoring system as the reference standard [8]. Second, according to the APASL guideline for CHB, APRI is recommended for diagnosing significant fibrosis (METAVIR ≥F2) and cirrhosis (METAVIR = F4) using the METAVIR scoring system as the reference standard [1]. Third, this study aimed to assess the diagnostic performances of APRI and FIB-4 for significant fibrosis and cirrhosis, rather than liver necroinflammatory activity. In this study, 30 years was chosen as the threshold for three reasons. First, age over 30 years is associated with a higher likelihood of significant fibrosis and cirrhosis than age under 30 years in CHB patients [6,13]. Second, in patients who acquired HBV infection at birth or in early childhood, the average age of transitioning from the immune-tolerant to the immune-clearance phase is 30 years [6,12]. In China, the majority of patients acquire HBV either at birth or early in childhood [14]. Third, all international guidelines for CHB recommend age over 30 years as one of the criteria for the assessment of liver histology to determine the stage of fibrosis [1,6,7]. The difference in cut-offs for APRI and FIB-4 between patients aged ≥30 years and patients aged <30 years may be related to a difference in the prevalence of fibrosis, known as spectrum bias [15,16]. In this study, patients aged ≥30 years had a higher prevalence of significant fibrosis and cirrhosis than patients aged <30 years. Generally, the development of fibrosis is a step-by-step process from minimal fibrosis to cirrhosis, which may take years or decades. In patients without antiviral therapy, the longer the duration of HBV infection, the higher the likelihood of significant fibrosis, indicating that the duration of HBV infection is associated with the development of fibrosis.
Although it is difficult to obtain a precise duration of HBV infection in real-life situations due to the long asymptomatic stage, we believe age is a surrogate marker of the duration of HBV infection in China, where vertical transmission or infection in childhood is highly likely. Previous research has also shown that age is an independent predictor of significant fibrosis in CHB patients (OR=4.588, p=0.012) [17]. Similar results were shown in the study by Vardar et al., which found that age is associated with the extent of fibrosis [18]. However, several important caveats need to be noted. First, the PPV of all noninvasive fibrosis tests was low, especially for APRI and FIB-4, and many patients with significant fibrosis or cirrhosis will be missed using APRI or FIB-4 alone [8]. Therefore, it is important that APRI and FIB-4 are used alongside other clinical or laboratory criteria to identify significant fibrosis and cirrhosis. Second, the results of APRI or FIB-4 may be impacted by comorbidities, such as heavy alcohol intake (due to an increase in AST), use of drugs (due to increases in ALT and AST), and malaria or HIV (due to a decrease in platelet count) [8]. The impact of these conditions on the diagnostic performances of APRI and FIB-4 has not been fully evaluated. Last but not least, although APRI and FIB-4 are now commonly used, treatment decisions based on either false-positive or false-negative results are a concern. A false-positive result may lead to a patient being treated unnecessarily [8]. Conversely, a false-negative result means that a person with cirrhosis would not be identified and may therefore not receive antiviral therapy [8]. There were several limitations in this study. First, the retrospective design might have caused selection bias [19]. Patients in this study had liver biopsy because of various clinical and laboratory indications, such as age over 30 years, a family history of HCC or cirrhosis, a high HBV DNA load and a fluctuating ALT level. Age over 30 years was one of the indications for liver biopsy, so the number of patients aged ≥30 years was twice that of patients aged <30 years in this study. Second, the prevalence of significant fibrosis and cirrhosis in this study might be higher than that in the community, because the study was based on a highly selected population who had liver biopsy for various indications. Third, the detection limit of HBV DNA was 500 copies/ml, which is a very high value affecting the reliability of the study. Fourth, the number of F4 patients was markedly smaller than the number of F2-4 patients in both groups; a small number of cirrhotic patients may result in statistical bias and thus affect the study results.
Fifth, our study population, with a high prevalence of HBeAg positivity and a narrow interval of years, might not be fully representative of CHB patients. The number of HBeAg(+) patients in this study is large; although this is expected in the group aged <30 years, it is higher than expected in the group aged ≥30 years. In conclusion, APRI and FIB-4, as simple and practicable fibrosis indices, can identify patients with significant fibrosis or cirrhosis and free a portion of CHB patients from liver biopsy. Different diagnostic performances and cut-offs were observed for APRI and FIB-4 between patients aged ≥30 years and those aged <30 years, which indicates that more attention should be paid to the influence of age on the performances and cut-offs of noninvasive tests.
Patients
One thousand three hundred and twenty-seven (1,327) consecutive CHB patients who underwent liver biopsies at Shanghai Public Health Clinical Center, Shanghai, China between January 2010 and January 2017 were screened for inclusion. CHB was defined as the persistent presence of hepatitis B surface antigen (HBsAg) for more than six months [1]. Patients with the following conditions were excluded: antiviral therapy (n=147); hepatitis C virus (HCV), hepatitis D virus (HDV) or human immunodeficiency virus (HIV) co-infection (n=87); alcohol consumption over 20 g/day for more than 5 years (n=103); and accompanying nonalcoholic fatty liver disease (NAFLD) (n=128) or autoimmune liver disease (n=40). Finally, 822 patients were included. All patients signed informed consent before liver biopsy, and all clinical procedures were in accordance with the Declaration of Helsinki (1983 revision). The study protocol was approved by the ethics committee of Shanghai Public Health Clinical Center.
Liver histological examination
Ultrasonography-guided liver biopsy was performed under local anesthesia. Liver samples of at least 15 mm in length were immediately fixed in 10% formalin and paraffin-embedded. Liver tissue with at least six portal tracts was considered sufficient for histologic scoring [20]. The METAVIR scoring system was adopted as the standard for liver fibrosis [21], classified into five stages: F0, no fibrosis; F1, portal fibrosis without septa; F2, portal fibrosis with rare septa; F3, numerous septa without cirrhosis; and F4, cirrhosis. All biopsy samples were interpreted independently by two liver pathologists who were blinded to the non-invasive fibrosis tests. If they failed to reach an agreement, a third, highly experienced pathologist reviewed the biopsy samples. In this study, we defined significant fibrosis as METAVIR F2-4 and cirrhosis as METAVIR F4.
Blood fibrosis tests
Routine laboratory tests were performed the day before liver biopsy. The serological markers of HBV were detected with ELISA kits (Abbott, Wiesbaden, Germany). HBV DNA was quantified by real-time PCR (Applied Biosystems, Foster City, USA), with a detection limit of 500 copies/ml. Parameters including ALT and AST were measured with an automated biochemistry analyzer (Hitachi, Tokyo, Japan). Platelet count was measured with an automated hematology analyzer (Sysmex, Kobe, Japan). The calculation formulas of APRI and FIB-4 are as follows: (1) APRI = [(AST / ULN of AST) / platelet count] × 100; (2) FIB-4 = (age × AST) / (platelet count × √ALT).
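As a minimal sketch of these two formulas, assuming the usual convention for these indices of AST/ALT in U/L and platelet count in 10^9/L (the helper names and example values below are ours, purely illustrative):

import math

def apri(ast: float, ast_uln: float, platelets: float) -> float:
    # APRI = (AST / upper limit of normal of AST) / platelet count x 100
    return (ast / ast_uln) / platelets * 100

def fib4(age: float, ast: float, alt: float, platelets: float) -> float:
    # FIB-4 = (age x AST) / (platelet count x sqrt(ALT))
    return (age * ast) / (platelets * math.sqrt(alt))

# Illustrative values only, not patient data:
print(round(apri(ast=80, ast_uln=40, platelets=150), 2))      # 1.33
print(round(fib4(age=45, ast=80, alt=60, platelets=150), 2))  # 3.1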
Baseline data were presented as follows: normally distributed data as mean ± standard deviation, non-normally distributed continuous data as median (interquartile range, IQR), and categorical data as number (percentage). The chi-square test (for categorical data), Mann-Whitney test (for non-normally distributed continuous data), and t-test (for normally distributed data) were used to identify statistical differences between the two groups. The performances of APRI and FIB-4 were estimated using AUROCs [22]. The comparison of AUROCs was performed with MedCalc Statistical Software. APRI and FIB-4 use two cut-offs for diagnosing significant fibrosis and cirrhosis: (1) the low cut-offs, established by obtaining a sensitivity of at least 90%; (2) the high cut-offs, established by obtaining a specificity of at least 90%. Diagnostic accuracy was evaluated by sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). All significance tests were two-tailed, and p<0.05 was considered statistically significant. All statistical analyses were carried out using SPSS statistical software version 15.0 (SPSS Inc., Chicago, Illinois, USA) and MedCalc Statistical Software version 16.1 (MedCalc Software bvba, Ostend, Belgium).
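To make the index definitions and the dual cut-off strategy concrete, the following is a minimal Python sketch, not taken from the study's own code; the variable names, the toy data and the cut-off scan are illustrative assumptions. It computes APRI and FIB-4 from routine laboratory values and selects a low cut-off targeting at least 90% sensitivity and a high cut-off targeting at least 90% specificity against biopsy-defined fibrosis labels.

import numpy as np

def apri(ast, ast_uln, platelets):
    # APRI = [(AST / upper limit of normal) / platelet count (10^9/L)] x 100
    return (ast / ast_uln) / platelets * 100.0

def fib4(age, ast, alt, platelets):
    # FIB-4 = (age x AST) / (platelet count x sqrt(ALT))
    return (age * ast) / (platelets * np.sqrt(alt))

def dual_cutoffs(scores, has_fibrosis, min_sens=0.90, min_spec=0.90):
    # Scan observed score values in ascending order and return:
    # - the largest cut-off whose sensitivity is still >= min_sens (low cut-off)
    # - the smallest cut-off whose specificity reaches >= min_spec (high cut-off)
    scores = np.asarray(scores, dtype=float)
    y = np.asarray(has_fibrosis, dtype=bool)
    low = high = None
    for c in np.unique(scores):
        pred = scores >= c                       # test positive at this cut-off
        sens = (pred & y).sum() / y.sum()
        spec = (~pred & ~y).sum() / (~y).sum()
        if sens >= min_sens:
            low = c
        if spec >= min_spec and high is None:
            high = c
    return low, high

# Hypothetical toy data: scores for 6 patients, 3 with METAVIR F2-4.
low, high = dual_cutoffs([0.3, 0.5, 0.9, 1.2, 1.6, 2.1],
                         [False, False, True, False, True, True])
print(low, high)

With real data, patients scoring below the low cut-off would be classified as unlikely to have significant fibrosis, those above the high cut-off as likely, and those in between would require further assessment, mirroring the two-cut-off approach described above.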
LCK connects NTB-A and SAP signaling in T cells to restimulation-induced cell death Signaling lymphocyte activation molecule (SLAM)-associated protein (SAP) is an adaptor protein required for SLAM family receptor signaling. In T cells, signaling from different SLAM receptors (SLAM-Rs) governs differentiation, effector function, and apoptosis, specifically through the self-regulatory program of T cell receptor restimulation-induced cell death (RICD). Indeed, SLAM-R signaling and RICD are impaired in X-linked lymphoproliferative disease (XLP) patients that are deficient for SAP, as well as in SAP-deficient mice. Importantly, defective RICD likely contributes to the excessive CD8 T cell accumulation and severe immunopathology noted in XLP patients upon infection with Epstein-Barr Virus (EBV). It is well established that SAP signaling through different SLAM-Rs is associated with the recruitment of the Src-family kinase FYN. Surprisingly, we recently discovered that FYN has no role in RICD. Instead, our data suggest that SAP enhances the recruitment and activation of LCK to the SLAM family receptor NK, T, and B cell Antigen (NTB-A), and thus amplifies TCR signaling for optimal RICD. In this research highlight we review the role of SAP in T cells, and describe our recent findings placing LCK as an important player in SAP-mediated NTB-A signaling for T cell apoptosis. Restimulation-induced cell death (RICD) is an autoregulatory form of apoptosis that is thought to limit T cell expansion during an immune response [1]. Defects in this process may lead to autoimmune and/or lymphoproliferative syndromes associated with damage to self-tissues. One example is X-linked lymphoproliferative disease (XLP), in which patients can develop a fatal fulminant infectious mononucleosis following Epstein-Barr virus (EBV) infection [2,3]. Severe immunopathology is driven by unbridled expansion of activated CD8+ T cells and infiltration into multiple organs, resulting in widespread necrosis. The genetic cause for most XLP cases is a null mutation in signaling lymphocyte activation molecule (SLAM)-associated protein (SAP), a 128 amino acid adaptor protein composed almost entirely of a single SH2 domain [2]. Since mutations in the SAP gene SH2D1A were linked to XLP over 15 years ago, the question of how SAP deficiency translates into a pathological overaccumulation of activated T cells has remained unclear. A systematic analysis of several apoptosis pathways revealed that XLP patient T cells are specifically resistant to RICD [4]. Knockdown experiments in normal human T cells confirmed that SAP deficiency impairs RICD, particularly in CD8+ T cells [4]. Further analysis showed that T cell receptor (TCR) signaling was compromised in activated SAP-deficient T cells, manifesting in reduced BIM and FAS-ligand (FASL) induction after restimulation. Strikingly, siRNA-mediated silencing of only one SLAM family receptor, NTB-A (SLAMF6), resulted in defective RICD akin to that of SAP-deficient T cells. Indeed, SAP was recruited to NTB-A receptors after TCR restimulation [4], suggesting the pro-apoptotic function of SAP works through NTB-A to ensure a robust TCR signal that is sufficient to cause RICD in activated effector T cells.
How does SAP, through its association with NTB-A, contribute to the stronger TCR signaling necessary for RICD induction? One mechanism involves SAP displacing the SH2 domain-containing protein tyrosine phosphatase (SHP)-1 from NTB-A after TCR restimulation in healthy donor T cells, switching NTB-A from a negative to a positive signaling receptor. SHP-1 remained docked with NTB-A in SAP-deficient XLP patient T cells after restimulation, allowing continued negative modulation of TCR signaling [4]. However, SHP-1-specific siRNA knockdown experiments only partly rescued the RICD defect in XLP patient T cells. These data suggested that, apart from preventing a negative effect on TCR signaling, SAP also serves to amplify TCR signaling, presumably through the recruitment of a Src-family kinase. Herein we review the current data on SAP expression and function in T cells, signal transduction associated with SAP and SLAM-Rs, and our most recent work on SAP and NTB-A signaling for RICD. SAP expression and function in T cells SAP is expressed throughout the T cell lineage; in fact, SAP expression in thymocytes is significantly higher than in resting mature T cells [5]. SAP expression in the thymus is critical for the development of unconventional T cells that are selected by hematopoietic cells, including natural killer T (NKT) cells and other innate-like T cell lineages [6]. In mature T cells, SAP is required for CD4+ Th2 differentiation [7,8], CD8+ cytotoxicity [9,10], T follicular helper (TFH) cell differentiation and the provision of B cell help for optimal humoral responses [11], and efficient RICD of activated T cells [4]. Defects in these processes tend to be more severe in mice deficient for SAP versus individual SLAM-Rs, implying functional redundancy in the SLAM-R family. For example, at least two SLAM-Rs are known to cooperate in facilitating germinal center responses (CD84 and Ly108, the murine homolog of NTB-A) and cytolysis of target B cells (Ly108 and 2B4) by promoting sustained, SAP-dependent T:B cell interactions [9,10,12]. In contrast, SAP appears to constrain the magnitude of a given T cell response by promoting RICD in activated, cycling T cells solely through NTB-A signaling [4]. Although NTB-A deficiency has not been documented in humans, loss of Ly108 in mice rescues certain immune deficits noted in Sap−/− animals [13], underscoring the "switch"-like property of SLAM-R signaling dictated by the presence or absence of SAP. Indeed, NTB-A knockdown actually boosts RICD in XLP patient T cells, ostensibly by removing the aforementioned SHP-1-driven modulatory signal [4]. It is now clear that SAP deficiency in mice [7,14] and humans [2,3] leads to an acute increase in antigen-specific T cell expansion in response to certain infections, which can result in significant immunopathology. Despite its importance in multiple T cell subsets, relatively little is known about the regulation of SAP expression. Both the mouse and human SH2D1A (SAP) promoters contain a highly homologous region between -185 and -163 (mouse) and -167 and -134 (human) with an Ets binding site that is crucial to basal promoter activity [15]. In humans, a single nucleotide polymorphism at position -346 (T→C) was also correlated with SH2D1A expression, such that lower SAP mRNA and protein were found in individuals with -346C compared with -346T [16].
Initial TCR stimulation downregulates SAP expression for 24-72 hours through a mechanism that is not well understood [9,15]. However, both SAP and NTB-A expression increase dramatically as activated T cells expand in the presence of interleukin-2 (IL-2), exceeding the expression levels noted in resting T cells [5]. IL-2 also upregulates SAP expression in NK cells with similar kinetics [17]. It is tempting to speculate that increased SAP expression in activated T cells is a major factor in conveying sensitivity to RICD, which is dependent on IL-2-mediated cell cycle progression [1]. In fact, our preliminary studies suggest that while NTB-A expression remains remarkably constant on activated T cells from different individuals, the low RICD sensitivity noted in certain human donors correlates with abnormally low SAP expression (A.L. Snow, unpublished data). Signaling by SLAM-Rs through SAP is classically associated with the Src-family kinase FYN [3]. SAP interacts with the SH3 domain of FYN through arginine 78 (R78) [18,19], allowing for recruitment to immunoreceptor tyrosine-based switch motifs (ITSMs) in SLAM-Rs. This interaction results in tyrosine phosphorylation of the receptors themselves and of downstream signaling components, such as Dok-1/Dok-2 or Vav-1 when SLAM (CD150/SLAMF1) or 2B4 (CD244/SLAMF4) are phosphorylated, respectively [18,20]. Beyond recruiting FYN to SLAM-Rs, SAP can directly enhance FYN kinase activity [21]. Phosphorylation of several SLAM-Rs is significantly attenuated in both SAP- and FYN-deficient thymocytes and T cells, although some residual phosphorylation is observed in the absence of FYN [19,21-23], implying that other Src-family kinases may compensate for the loss of FYN in SLAM-R signaling in thymocytes. In accordance with this idea, NKT cells are completely absent in SAP−/− thymi, while a small population of NKT cells persists in FYN−/− thymi [24]. LCK signaling through SAP and RICD The possibility that a Src-family kinase other than FYN can participate in SAP-dependent SLAM-R signaling was first suggested through overexpression experiments and in vitro binding assays. Overexpression of SLAM, SAP, and LCK in 293T cells resulted in robust SLAM phosphorylation, above the levels noted when only SAP or LCK was overexpressed with SLAM [21]. This study also demonstrated that recombinant SAP can bind in vitro translated LCK through its kinase domain, in contrast to the SH3-mediated interaction with FYN. Subsequent work with Sap R78A knock-in mice established that not all SAP-dependent processes require the FYN interaction, including T:B cell conjugation and NKT cell effector function [25,26]. In addition, expression of either WT or R78A SAP restored RICD sensitivity to XLP patient T cells, suggesting FYN was also dispensable for this process [4]. However, no direct association between LCK and a SLAM-R had been detected in primary cells. Furthermore, a clear role for LCK in SAP-mediated SLAM-R signaling had not been demonstrated.
We recently investigated the molecular mechanism by which SAP potentiates TCR signaling through NTB-A for efficient RICD. Our results demonstrated that LCK, but not FYN, associates with NTB-A in activated, cycling human T cells [27]. Consistent with this finding, we also found that knockdown or pharmacological inhibition of FYN had no effect on RICD sensitivity. On the other hand, specific inhibition of LCK impaired RICD significantly, akin to SAP deficiency itself. Similar results were obtained by silencing LCK expression using LCK-specific siRNA. We next assessed changes in LCK association with NTB-A both before and after TCR restimulation by immunoprecipitating NTB-A from primary human T cell cultures. We showed that LCK reliably precipitates with NTB-A, and that this association is significantly increased after TCR restimulation. In addition, NTB-A-associated LCK phosphorylation on tyrosine 394 (Y394) and serine 59 (S59), representing the fully active conformation of LCK [28,29], also increased after TCR restimulation. In contrast, no association of FYN with NTB-A was detectable before or after TCR restimulation. Importantly, the TCR-induced increase in LCK association and phosphorylation at NTB-A receptors was abrogated in both SAP-knockdown T cells and XLP patient T cells, consistent with a block in RICD. Given that the increased recruitment and phosphorylation of LCK at NTB-A was dependent on SAP, we suspected that NTB-A, SAP, and LCK are found within the same protein complex. We confirmed this hypothesis by showing that immunoprecipitation of SAP reliably pulled down both LCK and NTB-A in primary human T cells, and that these interactions were enhanced after TCR restimulation. To better characterize the binding of SAP to NTB-A, and its effect on LCK recruitment and phosphorylation, we mutated key tyrosine residues within the cytoplasmic tail of NTB-A to phenylalanines and expressed these mutant receptors in the NTB-A-deficient T cell line PEER [30]. As stated previously, SAP binds to the cytoplasmic tails of SLAM-Rs via phosphorylated tyrosines [3] found within ITSMs. Following stable selection of NTB-A+ clones, we tested both the recruitment and Y394 phosphorylation of LCK at NTB-A, as well as SAP binding, before and after TCR restimulation. Our results showed that two ITSM-based tyrosine residues in the NTB-A tail, Y284 and Y308, are critical for SAP binding to NTB-A after TCR restimulation, as well as for the recruitment and Y394 phosphorylation of LCK. In contrast, mutation of Y273 actually enhanced SAP and LCK recruitment, suggesting this tyrosine may play a role in attenuating positive signals conveyed through NTB-A, perhaps by associating with a regulatory molecule like SHP-1.
To confirm that NTB-A-associated LCK kinase activity is enhanced after TCR re-engagement, we also performed in vitro kinase assays using NTB-A immunoprecipitates from primary T cells and a well-established substrate for LCK, recombinant GST-CD3ζ [28]. We found that NTB-A-associated LCK kinase activity was significantly increased following TCR restimulation in a SAP-dependent manner. Congruent with these results, we also detected higher levels of endogenous phosphorylated CD3ζ and global tyrosine phosphorylation in normal versus SAP-deficient T cells following restimulation. In sum, these results demonstrate that SAP boosts TCR signaling over the required threshold for RICD induction by enhancing the interaction between active LCK and NTB-A receptors. Increased recruitment and activation of LCK at the immunological synapse, where CD3 and NTB-A are known to colocalize [4], serves to amplify proximal TCR signaling and induce the downstream pro-apoptotic molecules (e.g., FASL, BIM) that execute the RICD program (Figure 1). Our data further highlight SAP as a versatile adaptor protein capable of coupling both FYN and LCK to SLAM-R signaling. Although it is not yet clear what conditions favor FYN or LCK binding to SAP and different SLAM-Rs, the formation of these complexes clearly results in different signaling outcomes. While FYN is required for the development and differentiation of several T cell subsets, LCK may be reserved for specific effector functions and RICD. Our results, obtained using primary human T cells under physiologically relevant conditions, lay the groundwork for future studies to determine whether this molecular complex represents a viable therapeutic target for controlling T cell responses by modulating RICD sensitivity. Figure 1. SAP-dependent association of active LCK with NTB-A promotes strong TCR signaling for RICD. Left panel: schematic diagram of key signaling molecules involved in determining RICD sensitivity, including the NTB-A receptor containing 2 ITSMs and 1 putative immunoreceptor tyrosine-based inhibitory motif (ITIM) for signaling. Right panel: in wild-type (WT) T cells, TCR restimulation induces SAP-dependent recruitment and activation (via Y394 and S59 phosphorylation) of LCK at NTB-A receptors, as well as displacement of SHP-1. Strong colocalization of TCR and NTB-A likely amplifies proximal signaling via LCK to promote RICD. In XLP patient T cells, loss of SAP weakens proximal signaling by leaving more SHP-1 and less LCK associated with NTB-A after TCR restimulation. This manifests as less tyrosine phosphorylation of downstream signaling components, poor induction of pro-apoptotic molecules, and impaired RICD.
Are we a step forward with targeted agents in resolving the enigma of mantle cell lymphoma? Mantle cell lymphoma was recognized as an entity distinct from the other non-Hodgkin lymphomas in the mid-1990s. It carries the worst prognosis among all mature B-cell malignancies. Cyclin D1 and, more recently, SOX11 are the hallmarks of this disease. Even though it is highly responsive to induction treatment, it remains incurable, since it inevitably relapses. Highly aggressive approaches with stem cell transplantation can shift the survival curve somewhat, but even so overall survival is not significantly improved in most cases. A small portion of patients with this heterogeneous disease have an indolent course with long-term survival. Conventional immunochemotherapy has reached its maximal possibilities, so novel targeted agents are absolutely warranted. A large number of ongoing early-phase trials have demonstrated promising results, especially for agents that target the B-cell receptor. These agents are mostly investigated in relapsed/refractory disease, while front-line approaches will need to be explored in the future. Introduction Mantle cell lymphoma (MCL) is a subtype of B-cell lymphomas first defined as a clinical entity distinct from the other non-Hodgkin lymphomas (NHL) in 1994 [1]. It accounts for 2-10% of all NHL, with a male predominance (about 2.3-2.5 : 1) and a median age at presentation close to 70 years [2]. The stage is usually advanced, adenopathy is typically non-bulky, but extranodal involvement is frequent, affecting sites such as the bone marrow, liver, spleen, or Waldeyer ring. Gastrointestinal tract involvement, especially in the form of multiple lymphomatous polyposis (MLP), is a common presentation [3,4]. A leukemic phase is not uncommon, but CNS involvement is unusual at presentation, yet can be seen upon relapse or when the histology is blastic. Although morphologically similar, MCL is significantly more aggressive than other small cell lymphomas and therefore must be differentiated from them. Mantle cell lymphoma is characterized by the chromosomal translocation t(11;14)(q13;q32), resulting in constitutive overexpression of cyclin D1 and cell cycle dysregulation in virtually all cases [5]. Cyclin D1 is detected by immunohistochemistry in 98% of MCL, although it may be absent in the remaining cases [6]. Those cyclin D1-negative cases often show expression of cyclin D2 and cyclin D3 [7]. SOX11 (a neuronal transcription factor) is highly expressed in both cyclin D1-negative and -positive MCL, suggesting that in addition to its value as a diagnostic biomarker, it may be an important factor in the pathogenesis of MCL [8]. The MCL International Prognostic Index (MIPI) is the prognostic model most often used as a predictive tool for overall survival (OS). It incorporates ECOG performance status, age, leukocyte count, and lactate dehydrogenase level. MIPI classifies patients into 3 risk subgroups (low, intermediate and high risk), comprising 44%, 35% and 21% of patients, respectively. The median OS for the low-risk group was not reached (5-year OS of 60%), while it was 51 months for the intermediate-risk group and 29 months for the high-risk group [6]. Adding the cell proliferation marker Ki-67 to MIPI (the biological MIPI, MIPI-B) provides strong additional prognostic relevance [9] (a computational sketch of the MIPI score is given below). Conventional chemotherapy is only palliative, and the median duration of remission (DOR) is only 1-2 years.
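Before turning to treatment, the MIPI calculation referenced above can be illustrated with a minimal Python sketch. This is not part of this review's methods: the coefficients and risk-group thresholds follow the commonly cited formula of Hoster et al., and both the units (WBC taken numerically in 10^6/L, i.e., cells per microliter) and the example patient are assumptions that should be verified against the original publication before any use.

import math

def mipi(age_years, ecog, ldh, ldh_uln, wbc):
    # MIPI = 0.03535*age + 0.6978 (if ECOG 2-4)
    #        + 1.367*log10(LDH/ULN) + 0.9393*log10(WBC)
    score = 0.03535 * age_years
    if ecog >= 2:                      # ECOG performance status 2-4
        score += 0.6978
    score += 1.367 * math.log10(ldh / ldh_uln)
    score += 0.9393 * math.log10(wbc)  # WBC in 10^6/L (cells per microliter)
    return score

def risk_group(score):
    # Commonly cited thresholds: low < 5.7, intermediate 5.7-6.2, high >= 6.2
    if score < 5.7:
        return "low"
    if score < 6.2:
        return "intermediate"
    return "high"

# Hypothetical patient: 65 years, ECOG 1, LDH 250 U/L (ULN 240 U/L), WBC 9800/uL
s = mipi(65, 1, 250.0, 240.0, 9800)
print(round(s, 2), risk_group(s))      # approximately 6.07 -> intermediate

Note that MIPI-B additionally incorporates the Ki-67 proliferation index, which is not modeled in this sketch.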
With the exception of allogeneic stem cell transplantation (allo-SCT), current treatment approaches are non-curative, and the corresponding survival curves are characterized by a delayed but continuous decline and a median survival of 3 to 7 years [5]. Mantle cell lymphoma has the worst prognosis among all adult B-cell malignancies. Novel agents targeting various molecular pathways are now the focus of investigation, mostly in phase II studies in relapsed/refractory disease. The awaited phase III studies will show their true clinical benefit. Current overview of the induction approach to MCL patients The "gold standard" CHOP/CHOP-like protocols (cyclophosphamide, doxorubicin, vincristine, prednisone) were the main induction approach to MCL for a long period of time. After the introduction of rituximab (R) and its addition to CHOP chemotherapy, the study of Lenz et al. demonstrated an increased overall response rate (ORR) (from 76% to 94%, p = 0.0054) and complete remission (CR) rate (from 7% to 34%, p = 0.00024) [10]. Interestingly, this improvement did not translate into prolonged OS, or even a significantly better progression-free survival (PFS). The results of a meta-analysis of randomized controlled trials in MCL patients (n = 260) demonstrated a survival benefit in patients treated with immunochemotherapy compared to those treated with chemotherapy alone [11]. Nevertheless, the small number of patients included casts doubt on the validity and statistical power of such findings. Elderly patients seem to benefit from R-CHOP induction followed by rituximab maintenance therapy, not only in PFS but also with a significant survival advantage [12]. Treatment of MCL in younger patients is the most challenging, since the primary goal is to achieve long-term remission with prolongation of survival, or to cure the patient if possible. For transplant-eligible patients the standard of care is up-front induction therapy followed by autologous (auto-SCT) consolidation in first remission, especially in the intermediate-risk group, whereas in the high-risk group such an approach remains suboptimal. Randomized studies are needed to clarify the significance of allo-SCT in first remission, which seems to be the best known option at this time point [13]. There are many published trials that used R-HCVAD/AM (hyperfractionated cyclophosphamide, high-dose dexamethasone, vincristine, doxorubicin/high-dose methotrexate and cytarabine) as induction treatment followed by consolidation with auto-SCT. The Italian group published results for patients aged ≤ 70 years who received 4 alternating cycles each of R-HCVAD/AM. Patients who obtained a partial response proceeded to auto-SCT. The ORR and CR rates were 83% and 72%, respectively. After a median follow-up of 46 months (range 1-72), the estimated 5-year OS and PFS rates were 73% and 61%, respectively. MIPI maintained its prognostic value, with an estimated 5-year OS of 89%, 80% and 24% for the low-, intermediate-, and high-risk groups, respectively (p < 0.001). This multicentre study confirmed that R-HCVAD/AM is an active regimen for the initial treatment of patients with MCL, but is associated with significant toxicity [14]. The authors of the SWOG 0213 trial reached the same conclusions in patients aged < 65 years, with a median OS of 6.8 years [15].
The results of the GELTAMO group showed that induction with R-HCVAD/AM and consolidation with 90Y-ibritumomab tiuxetan is effective, although less feasible than expected; the substantial toxicity advised against the use of this strategy [16]. A Polish single-center experience study was also conducted. The median age of the patients was 59 years (range 41-68), and 90% had stage 3/4 MCL. As the induction regimen, R-CHOP was used in all patients except one, who received R-CVAD. All patients responded (n = 13 first CR, n = 4 second CR and n = 3 PR). The conditioning regimen was CBV (high-dose cyclophosphamide, BCNU, etoposide) in (n = 18) and BEAM (BCNU, etoposide, cytarabine and melphalan) in (n = 2) patients, respectively. Median OS and PFS were 48 and 29.8 months, respectively. The estimated 5-year OS and PFS were 52% and 35%, respectively. After a median follow-up of 36 months after auto-SCT, 10 patients were alive (8 remaining in CR, and 2 relapsed). The other 10 patients died from disease recurrence and subsequent chemoresistance. The authors concluded that auto-SCT consolidation for MCL patients is a safe and effective procedure [17]. A French group published the results of a phase II study with CHOP and DHAP (dexamethasone, high-dose cytarabine, cisplatin) plus rituximab followed by auto-SCT in MCL [18]. Included were patients aged < 66 years with stage 3/4 MCL. As induction treatment, 3 cycles of CHOP (the third one with the addition of rituximab) and 3 cycles of R-DHAP were given sequentially. Responding patients were eligible for auto-SCT with conditioning regimens (TAM6 or BEAM). The ORR was 93% after (R)-CHOP and 95% after R-DHAP. With a median follow-up of 67 months, the median EFS was 83 months, and the median OS had not been reached; the 5-year OS was 75%. This study confirmed that induction with rituximab and a cytarabine-based regimen is safe and effective in MCL patients. In an updated review of the Nordic MCL2 trial (median observation 6.5 years), the authors reported a median OS and response duration longer than 10 years, and a median EFS of 7.4 years. MIPI and Ki-67 expression were the only independent prognostic factors for EFS and OS. Subdivided by MIPI-B, more than 70% of the patients with low-intermediate MIPI-B were alive at 10 years, in contrast to 23% of the patients with high MIPI-B. The conclusion was that a risk-adapted treatment strategy is required [19]. The study of the Eastern German Study Group Hematology/Oncology (OSHO), conducted in (n = 39) patients of whom (n = 33) responders proceeded to allo-SCT after induction with R-CHOP/R-DHAP for de novo MCL or R-DHAP for relapsed/refractory MCL, demonstrated a 5-year PFS of 67% and OS of 73%. Most of the patients (n = 26) received reduced-intensity conditioning (RIC) with the TreoFlu (treosulfan, fludarabine) protocol (age > 55 years), and (n = 7) received the myeloablative regimen BuCy (busulfan, cyclophosphamide) (age < 55 years). The overall mortality after the procedure was 24% (n = 8), with (n = 5) patients relapsing after the procedure. The results were comparable between de novo MCL and relapsed/refractory MCL patients. The authors concluded that allo-SCT is a feasible and promising consolidation therapy for relapsed and refractory disease and an attractive option for young patients with de novo high-risk MCL, and that the outcome was significantly better in younger patients [20]. Summarized results of the up-front use of SCT are presented in Table 1.
Nevertheless, in younger transplant-eligible patients, first-line induction with R-HCVAD/AM followed by auto-SCT consolidation remains the standard of care, with a documented survival benefit at this time point. The problem with this regimen is the frequent stem cell mobilization failure and the high toxicity rate. This implies the need for a new front-line treatment strategy (Polish, French, Nordic MCL trials and OSHO) or consideration of early stem cell collection when the R-HCVAD/AM induction regimen is to be used. One thing is clear: high-dose cytarabine-containing regimens should be used before proceeding to SCT in transplant-eligible MCL patients. Some very recent statements [21] question the role of the SCT consolidation approach in first remission, especially in an era of improved survival and higher response rates with immunochemotherapy. This might be due to the heterogeneity of some clinical factors that have to be considered (patient age, MIPI or comorbidity index scores) before making a decision on SCT. The most current trial (comparing sequential R-CHOP + R-DHAP induction followed by consolidation with auto-SCT and ibrutinib maintenance vs. the same combination but without auto-SCT, followed only by ibrutinib maintenance therapy) will try to define the role of SCT consolidation in first remission. This was presented orally by Prof. Dreyling at the European Hematology Association (EHA 19) Congress 2014 in Milan. Targeted agents in mantle cell lymphoma As mentioned above, the optimal treatment approach to MCL is undefined. The disease still remains incurable, with almost inevitable relapse after a period of remission, whatever approach is used. Novel targeted agents are undoubtedly needed. Many of them are included in a large number of studies, mostly phase II, with more than promising results among some of them. However, larger phase III studies are required to determine the real clinical benefit of these novel therapies. The aim of this review article is to summarize updated treatment approaches and recent study results through consideration of the different molecular target pathways. Summarized results of the efficacy of targeted agents in MCL are presented in Table 2. Inhibitors of mammalian target of rapamycin The mammalian target of rapamycin (mTOR) is an intracellular kinase that controls the mRNA translation of many proteins (e.g., cyclin D1 in MCL) that can act as oncogenes and contribute to lymphomagenesis [22]. Temsirolimus, a first-generation mTOR inhibitor, was investigated in phase II and III trials in MCL. In two phase II studies, the first conducted in (n = 35) patients, the ORR for single-agent temsirolimus 250 mg was 38% and the DOR for responders was 6.9 months [23]; in the second, performed 3 years later with a single-agent 25 mg dose in (n = 29) patients, the ORR was 41%, with a median DOR in responders of 6 months [24]. In the pivotal phase III study, (n = 162) patients were randomized 1 : 1 : 1 to 175 mg weekly for 3 weeks followed by either 75 mg (175/75 mg) or 25 mg (175/25 mg) weekly, or to investigator's choice therapy from prospectively approved options. The 175/75 mg dose schedule significantly improved PFS and the objective ORR (22%) compared with investigator's choice therapy (ORR of 2%) in patients with relapsed/refractory MCL [25]. The safety profile of temsirolimus is mostly acceptable and manageable with dose modifications or medical interventions [26].
In a multicenter phase II study of single-agent everolimus with (n = 35) patients enrolled, the ORR was 20%; the median PFS was 5.5 months overall and 17 months for responders (those who received 6 or more cycles of therapy) [27]. Proteasome inhibitors The proteasome inhibitors are agents that block the action of proteasomes, the cylindrical cellular complexes that degrade ubiquitinated proteins [32]. In responding patients, bortezomib as a single agent is associated with lengthy responses and notable survival in relapsed/refractory MCL, suggesting substantial clinical benefit [29]; although the addition of rituximab or dexamethasone significantly increases the ORR, these combinations worsen the safety profile. Immunomodulatory agents or drugs Immunomodulatory drugs (IMiDs) target the microenvironment and neoangiogenesis of the tumor and have immunomodulatory activity. We found only one published trial with thalidomide and rituximab in relapsed/refractory MCL. It included (n = 16) patients. An objective response was achieved in 81% of patients, with 5 CRs (31%). The median PFS was 20.4 months and the estimated 3-year survival was 75%. In responders, PFS was, as expected, longer. The study suggested that rituximab plus thalidomide has marked activity in relapsed/refractory MCL with a low toxicity profile [33]. In the phase II MCL-001 (EMERGE) trial, single-agent lenalidomide was investigated in relapsed/refractory patients or patients who had progressed after bortezomib. The study included (n = 134) patients with a median age of 67 and a median number of prior therapies in the range 2-10. The achieved ORR was 28% with 7.5% CR, the median DOR was 16.6 months, the median PFS 4 months and the median OS 19 months. This study showed durable efficacy of lenalidomide after progression on bortezomib [34]. The combination of lenalidomide, low-dose dexamethasone, and rituximab achieved high response rates with durable responses in patients with rituximab-resistant indolent B-cell lymphomas and MCL (n = 5) in a phase II study. The ORR increased from 29% after two 28-day cycles of lenalidomide and low-dose dexamethasone to 58% after the addition of rituximab, suggesting that lenalidomide can overcome resistance to rituximab [35]. Cyclin-dependent kinase inhibitors Cyclin-dependent kinases (CDKs) are a family of protein kinases with a role in cell cycle regulation; they are encoded by CDK genes. These small molecules bind regulatory proteins called cyclins, thereby becoming fully active, and then phosphorylate their substrates. CDK inhibitors (CDKi) target cyclin-dependent kinases and induce cell cycle arrest at the G1 phase. The phase I study of Lin TS et al., in which flavopiridol (NSC-649890) was added to rituximab and fludarabine (the FFR regimen), investigated indolent lymphomas, one group of which comprised MCL (n = 10). The MCL patients (median age 68, of whom 6 were untreated and 4 were relapsed patients who had each received two prior therapies) received a median of 3.5 cycles. Eight patients responded (7 had CR, 1 PR). The median PFS was 21.9 months, ranging from 1.1 months (when a patient withdrew to receive another regimen) to at least 68.2 months. Two patients with blastoid-variant MCL responded but relapsed within 1 year of study entry. The median PFS of the eight patients with non-blastoid MCL was 35.9 months. This regimen appeared most promising in older MCL patients, with acceptable toxicity. However, the results indicate that a larger phase II study in previously untreated or relapsed disease is needed to define the regimen's activity across the MIPI risk groups [36].
Furthermore, findings from this study suggest that FFR may be active in a particular histology of MCL even though flavopiridol demonstrates limited clinical activity as monotherapy for that particular lymphoma [37]. The single-agent activity of this first-generation CDKi suggests that other agents in this class merit further study in lymphoid malignancies, both alone and in combination [38]. Bruton's tyrosine kinase inhibitors Bruton's tyrosine kinase (BTK) is a mediator of the B-cell receptor signaling pathway. Its gene is located on the X chromosome and it plays a crucial role in B-cell maturation, but its exact mechanism of action remains unknown at this moment. Ibrutinib (PCI-32765), a potent inhibitor of this pathway, has demonstrated the power to induce impressive responses in B-cell malignancies through an irreversible bond with cysteine-481 in the active site of BTK (TH/SH1 domain), inhibiting BTK phosphorylation on Tyr223 [39]. Phase I studies pointed to its antitumor activity in MCL. Afterward, a pivotal phase II study was conducted in (n = 111) patients with relapsed/refractory MCL at a daily dose of 560 mg (patients who had previously received at least 2 cycles of bortezomib, less than 2 cycles, or no bortezomib at all); the median age was 68 years and 86% of patients had intermediate- or high-risk disease. The study showed an ORR of 68%, with a 21% CR rate and a PR rate of 47%. The estimated median follow-up was 15.3 months, with an estimated DOR of 17.5 months, a median PFS of 13.9 months and a median OS not reached (the estimated OS rate was 58% at 18 months). This study concluded that single-agent ibrutinib has durable efficacy in relapsed/refractory MCL [40]. Axelrod et al. performed a preclinical combinatorial screen of ibrutinib and carfilzomib as targeted agents that could provide an improved clinical response. All 4 cell lines responded to the combination of proteasome and BTK inhibition, including Jeko-1, a leukemic, classically indolent form of MCL, and Z138, a blastic, characteristically aggressive form of MCL, suggesting that the carfilzomib and ibrutinib combination may prove efficacious regardless of variations in the specific patient's MCL tumor biology. The study suggested that combining ibrutinib with agents not targeting BTK provides greater benefit than combining two BTK inhibitors [41]. The awaited phase III trials with ibrutinib will show its real clinical benefit. Phosphatidylinositol 3-kinase δ inhibitors Idelalisib (CAL-101, GS-1101) is a phosphatidylinositol 3-kinase δ (PI3Kδ) inhibitor which specifically blocks the delta isoform of the enzyme, p110δ. This isoform plays a critical role in B-cell homeostasis and function. Idelalisib was evaluated in a phase I study in (n = 40) patients with relapsed/refractory MCL. Patients who entered the study had a median age of 69 years, had received a median of 4 prior therapies and were refractory to their most recent treatment. The ORR was 40%, with CR in 5% of patients. The median DOR was 2.7 months, the median PFS was 3.7 months, and the 1-year PFS was 22%. These data provide proof of concept that targeting PI3Kδ is a viable strategy worthy of additional study in MCL [42]. The safety profile of idelalisib in this study showed moderate adverse events. Other agents in most current trials In a phase II study, the addition of bevacizumab (a monoclonal antibody that blocks VEGF) to the standard R-CHOP regimen in (n = 11) MCL patients as an induction approach did not appear to significantly improve efficacy beyond that observed in previous studies using R-CHOP alone [43].
A phase II study investigated the histone deacetylase (HDAC) inhibitor vorinostat in patients with relapsed/refractory indolent B-cell lymphomas, including MCL (n = 4) patients. First results showed a moderate ORR, but they are mostly based on FL; these results warrant further investigation of this agent in MCL [44]. A second HDAC inhibitor, panobinostat, was investigated in a phase I study, with a small number of relapsed/refractory MCL patients, in combination with everolimus; this combination was found to be active, especially in Hodgkin lymphoma, but is associated with severe thrombocytopenia [45]. There are many preclinical investigations of MCL cell lines with very promising results awaiting clinical translation in the future. Today, we know that MCL is a very heterogeneous disease, with approximately 15% of patients having an indolent, slowly progressing course that can be managed with a "watch and wait" strategy alone. In the remaining patients, however, it behaves aggressively or inevitably relapses after induction treatment, so new agents are ultimately required, or rather we need to change the current treatment paradigm by introducing new agents. Furthermore, risk stratification using MIPI and MIPI-B as predictive tools should be incorporated into treatment decisions. The possibilities of conventional immunochemotherapy in MCL without SCT are well established. SCT is the only measure that can shift the survival curve, but it remains unclear whether long-term remissions are possible. Auto-SCT offers the opportunity of durable remissions in younger, fit patients, but late relapses still occur. Allo-SCT has curative potential, yet poor applicability, due to its toxicity and high procedure-related mortality rates, especially in pretreated patients and at the median age at which MCL mostly occurs. A large number of patients are not eligible for such radical options, so the procedure has to be limited to those patients who will derive optimal benefit. Nevertheless, with the use of RIC regimens it might become more widely applicable, though still in a highly selected group of patients (younger, fit, high-risk patients who have no other option for longer survival). Mantle cell lymphoma is still considered an incurable disease, but something is definitely changing. The recent period of investigation has demonstrated progress that can be considered encouraging. From that point of view, ibrutinib, even as a single agent, has demonstrated the long-awaited promising results like no other agent in the history of MCL treatment. This agent is now included in a large number of trial combinations whose results are still awaited. Did we make a step forward with targeted agents? We can conclude that a slight step toward the target has been made, but more time is needed to see whether we are really close enough to solving the enigma of MCL. The authors declare no conflict of interest.
Computation of ATR Darmon points on non-geometrically modular elliptic curves ATR points were introduced by Darmon as a conjectural construction of algebraic points on certain elliptic curves for which, in general, the Heegner point method is not available. So far the only numerical evidence, provided by Darmon-Logan and Gärtner, concerned curves arising as quotients of Shimura curves. In those special cases the ATR points can be obtained from the already existing Heegner points, thanks to results of Zhang and Darmon-Rotger-Zhao. In this paper we compute for the first time an algebraic ATR point on a curve which is not uniformizable by any Shimura curve, thus providing the first piece of numerical evidence that Darmon's construction works beyond geometric modularity. To this purpose we improve the method proposed by Darmon and Logan by removing the requirement that the real quadratic field be norm-euclidean, and by accelerating the numerical integration of Hilbert modular forms. Introduction Let F be a totally real number field and let E/F be an elliptic curve of conductor N. Denote by L(E/F, s) the Hasse-Weil L-series attached to E, which is known to converge in the half plane ℜ(s) > 3/2. Let us assume throughout this note that E is modular; that is to say, that L(E/F, s) equals the L-series of a Hilbert modular form over F of weight 2 and level N. Thanks to the modularity theorems of [Wil95], [BCDT01] and [SW01], E is known to be modular if either F = Q, or if [F : Q] > 1 and E satisfies certain mild conditions on the reduction type at primes above 3. The L-series L(E/F, s) admits analytic continuation, and the Birch and Swinnerton-Dyer (BSD) Conjecture predicts that its order of vanishing at s = 1, called the analytic rank of E/F, equals the rank of the group of F-rational points E(F). In this context, the BSD Conjecture is known to hold in analytic rank 0 or 1 provided that E satisfies the following Jacquet-Langlands hypothesis: (JL) Either [F : Q] is odd or there is a prime p in F such that ord_p(N) is odd. In analytic rank 0, BSD is also known for modular elliptic curves not satisfying (JL), thanks to the work of Longo [Lon06]. However, in analytic rank 1, (JL) cannot be dispensed with at the moment, because the construction of non-torsion points relies on the existence of the so-called Heegner points. Indeed, if E satisfies (JL) then it is geometrically modular: there exists a non-constant F-homomorphism (1.1) π_E : Jac(X) → E from the Jacobian of a suitable Shimura curve X defined over F onto E. Shimura curves are endowed with CM points, which are defined over ring class fields of quadratic CM extensions K/F. The projection of CM points via π_E gives rise to Heegner points on E, whose arithmetic behavior is linked to the corresponding L-series of E thanks to formulas of Gross-Zagier and Zhang. On the other hand, if E does not satisfy (JL) then it is not known to be geometrically modular unless it is a Q-curve, i.e., a curve isogenous to all of its Galois conjugates. As a consequence of Serre's modularity conjecture, Q-curves admit Q-parametrizations from classical modular Jacobians, and this has been exploited in [DRZ12] in order to construct Heegner points and prove BSD in analytic rank 1 for some Q-curves not satisfying (JL). But BSD in analytic rank 1 seems to remain intractable for elliptic curves which are not Q-curves and do not satisfy (JL).
Indeed, since they are not geometrically modular, the Heegner point method sketched in the previous paragraph cannot be applied in this setting. The Heegner point construction constitutes the only known procedure for systematically manufacturing algebraic non-torsion points on elliptic curves. However, several conjectural constructions have emerged in the last years under the generic name of Stark-Heegner points, or also Darmon points, as the first such construction was introduced in [Dar01]. Variants of this initial construction applying to several different settings have been proposed since then, for instance in [Das05], [Gre09], [LRV09], [Gär11a], and [GRZ12]. The leitmotif of these methods is the analytic construction of algebraic points on ring class fields of quadratic extensions K/F which, unlike the classical case, are not CM. This note deals with the effective computation of a type of Darmon points known as ATR points, which were introduced in [Dar04, Chapter VIII]. To explain the terminology, recall that a number field is said to be almost totally real, or ATR for short, if it has exactly one complex non-real archimedean place. Let K be a quadratic ATR extension of the totally real field F. For an ideal c of F denote by R_c ⊂ K the order of conductor c, and by H_c the ring class field corresponding to R_c. Darmon associates to any optimal embedding ϕ : R_c ↪ M_2(O_F) a point P_ϕ ∈ E(C), called an ATR point, which is conjectured to be defined over H_c. Moreover, by analogy with the formulas of Gross-Zagier and Zhang, its trace down to K is believed to be non-torsion if and only if ord_{s=1} L(E/K, s) = 1. An algorithm for computing ATR points in the particular case where F is real quadratic and E has conductor 1 is given in [DL03]. These elliptic curves do not satisfy (JL), so that the Heegner point construction is not available in general. Both the definition of the points P_ϕ and the conjectures of Darmon concerning them will be recalled in Section 2. For the moment, it is enough for us to mention that they are the image under the Weierstrass map C/Λ_E → E(C) of a complex number of the form

(1.2)   J_ϕ = Σ_i ∫_{c_i}^{d_i} ∫_{a_i}^{b_i} ω,

where ω is a certain differential 2-form on the Hilbert modular surface SL_2(O_F)\H^2. The limits of integration depend on the embedding ϕ, but they are not uniquely determined: for a given ϕ there are many possible choices for the a_i, b_i, c_i, d_i. The Fourier series of ω is explicitly computable from E, and term-by-term integration of a truncation leads to a numerical approximation of J_ϕ. The rate of convergence depends essentially on the imaginary parts of the limits, and this turns out to be the main computational restriction of the method. The algorithm outlined above was used in [DL03] to obtain numerical evidence towards Darmon's conjectures. More concretely, ATR points on three concrete elliptic curves were computed, and they were checked to be (up to a certain numerical precision) multiples of the corresponding generators of the Mordell-Weil groups. Calculations of the same kind were performed in [Gär11b] for one more curve. However, computational limitations restricted them to elliptic curves which all happen to be geometrically modular. In this case, the BSD Conjecture implies that ATR points on these curves should be related to the already existing Heegner points. Actually, in the case of Q-curves (as are all the examples considered in [DL03]) a precise relation between Heegner and ATR points is conjectured in [DRZ12, §4.2].
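To fix ideas, the following Python sketch illustrates the term-by-term integration just described. It is our own illustration, not the implementation used in the computations below: the normalization of the differential form is an assumption chosen for internal consistency with the expansion given in Section 2, and the list of coefficient triples is a hypothetical input.

import cmath

def truncated_double_integral(terms, x0, y0, x1, y1, delta0, delta1):
    # terms: finite list of (a, n0, n1), where a is the (rational) Fourier
    # coefficient a_(n) and n0, n1 are the two real embeddings of a totally
    # positive n in O_F; delta0, delta1 are the embeddings of a totally
    # positive generator of the different ideal.
    e = lambda n, z, d: cmath.exp(2j * cmath.pi * n * z / d)
    total = 0j
    for a, n0, n1 in terms:
        # the antiderivative of exp(2*pi*i*n*z/delta) is (delta/(2*pi*i*n))*exp(...),
        # so each term contributes a product of two exponential differences
        total += (a * delta0 * delta1 / (n0 * n1)
                  * (e(n0, y0, delta0) - e(n0, x0, delta0))
                  * (e(n1, y1, delta1) - e(n1, x1, delta1)))
    return total

# Each exponential has modulus exp(-2*pi*n_i*Im(z_i)/delta_i): the series decays
# rapidly only when the imaginary parts of the limits stay away from 0, which is
# why the choice of limits governs the rate of convergence.
print(truncated_double_integral([(1.0, 1.0, 1.0)], 1j, 2j, 1j, 2j, 1.0, 1.0))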
In this note we speed up the algorithm devised by Darmon and Logan by improving its two main bottlenecks: namely, the computation of integrals of Hilbert modular forms, and the determination of limits a_i, b_i, c_i, d_i in (1.2) having the highest possible imaginary part. This allows us to gather more numerical evidence in support of Darmon's conjectures, by calculating ATR points on elliptic curves which were not computationally accessible using the algorithm of [DL03]. In particular, we have been able to compute for the first time an ATR point of infinite order on a non-geometrically modular elliptic curve. More concretely, the contents of the article are as follows. In Section 2 we review the definition of the points P_ϕ and Darmon's conjectures on their arithmetic properties. We also sketch the algorithm of Darmon and Logan for their explicit computation. In Section 3 we present the algorithm for speeding up the computation of integrals of Hilbert modular forms. The idea is to use the fact that the limits are invariant under the group SL_2(O_F) in order to transform the given integral into a sum of integrals whose limits are uniformly bounded away from the real axis, the bound depending only on F. It is worth remarking that this algorithm does not exploit any particular property of the integrals involved in ATR points, and therefore it may be of independent interest for computing integrals of Hilbert modular forms in other contexts. In Section 4 we comment on a trick that can sometimes accelerate the computation of ATR points. The procedure for computing the limits a_i, b_i, c_i, d_i in (1.2) for an embedding ϕ involves at some point the calculation of a continued fraction expansion of an element of F. We exploit the non-uniqueness of continued fractions to attach to a given ϕ limits a_i, b_i, c_i, d_i with as high an imaginary part as possible. Finally, in Section 5 we use Darmon-Logan's algorithm, together with the improvements of Sections 3 and 4, to enlarge the pool of elliptic curves on which Darmon's conjectures have been numerically tested. Arguably, the most interesting among them is the curve E_509 given by the equation

y^2 − xy − ωy = x^3 + (2 + 2ω)x^2 + (162 + 3ω)x + (71 + 34ω),   ω = (1 + √509)/2,

because it is not a Q-curve. We have computed an ATR point corresponding to the field K = F(√(9144ω + 98577)), and we have numerically checked that it coincides with a multiple of the Mordell-Weil generator of E_509(K). Since E_509 is not geometrically modular, such a point does not seem to be explained by the presence of Heegner points. This gives experimental evidence that Darmon's construction leads to algebraic points that are genuinely new, not attainable by classical methods. Finally, it is worth mentioning that ATR points are also the basis of an algorithm by L. Dembélé [Dem08] for computing equations of elliptic curves with everywhere good reduction attached to Hilbert modular forms of level 1. The authors hope that the algorithm presented in this note can be useful for this purpose, and that it may lead in the future to a systematic computation of such equations using Dembélé's method. Acknowledgments. The authors are thankful for the initial suggestion of the problem and for many helpful conversations, and to the anonymous referee for many valuable comments and suggestions. They are grateful to the Max Planck Institute for Mathematics for its hospitality and financial support, and for making available its computational resources, crucially needed for part of this note.
Part of this work was also carried out in the facilities of the Centro de Ciencias de Benasque Pedro Pascual during the summer of 2011. The authors received financial support from DGICYT Grant MTM2009-13060-C02-01 and from 2009 SGR 1220. 2. Computation of ATR points. Let F be a real quadratic number field of discriminant D and narrow class number 1. Write O_F for its ring of integers and set Γ = SL_2(O_F). We denote by v_0, v_1 the embeddings of F into R. For an element x ∈ F we may write x_i instead of v_i(x), and |x| for the norm of x, given by x_0 x_1. Recall that Γ acts discretely on H^2 via v_0 × v_1. The analytic variety Γ\H^2 can be compactified by adding one cusp, which gives rise to the Hilbert modular surface X attached to Γ. Let K/F be a quadratic ATR extension. For an ideal c of F we denote by H_c the ring class field corresponding to the order R_c of conductor c of K. We assume without loss of generality that v_0 extends to a complex place of K and that v_1 extends to a pair of real places of K. We fix an extension of v_0 to Q̄ ⊂ C, which we use to identify K and its extensions with subfields of C. Recall that an embedding ϕ : K ↪ M_2(F) is said to be an optimal embedding of R_c if ϕ(K) ∩ M_2(O_F) = ϕ(R_c). We denote by E_c the set of such optimal embeddings. Let E/F be an elliptic curve of conductor 1. In this section we review Darmon's construction, which attaches to each optimal embedding ϕ a point P_ϕ ∈ E(C) that conjecturally belongs to E(H_c). There are several equivalent ways of defining P_ϕ. For instance, a nice geometric definition in terms of a non-algebraic analogue of the Abel-Jacobi map is given in [DRZ12, §2.1]. However, for computational purposes the original definition of [Dar04, Chapter VIII], or rather the subsequent refinement of [DL03], is better suited. Key to the approach of [DL03] is the definition of certain semi-definite integrals of Hilbert modular forms, whose existence and main properties we will take as a black box. 2.1. Semi-definite integrals of Hilbert modular forms. Let f ∈ S_2(Γ) be a Hilbert modular form. Recall that f has a Fourier expansion indexed by the totally positive elements of O_F. Actually, the Fourier coefficient corresponding to n ∈ O_F^+ only depends on the ideal (n) generated by n, and the expansion is of the form

(2.1)   f(z_0, z_1) = Σ_{n ∈ O_F^+} a_(n) e^{2πi(n_0 z_0/δ_0 + n_1 z_1/δ_1)},

where δ_i = v_i(δ_F) and δ_F is a totally positive generator of the different ideal of F. Let us assume from now on that all Fourier coefficients a_(n) are rational numbers. The reader can refer to [GRZ12, §2.4] for the definition of ATR points when the Fourier coefficients generate a number field of degree > 1, in which case they belong to some higher-dimensional modular abelian variety. The differential form

ω_f = (2πi)^2 f(z_0, z_1) dz_0 dz_1

is invariant under the action of Γ and extends to a holomorphic form at the cusp, thus defining a holomorphic 2-form on X. The expansion in Equation (2.1) is useful for computing integrals of ω_f. Indeed, for x_0, x_1, y_0, y_1 ∈ H we have that

(2.2)   ∫_{x_1}^{y_1} ∫_{x_0}^{y_0} ω_f = Σ_{n ∈ O_F^+} (a_(n) δ_0 δ_1)/(n_0 n_1) (e^{2πi n_0 y_0/δ_0} − e^{2πi n_0 x_0/δ_0})(e^{2πi n_1 y_1/δ_1} − e^{2πi n_1 x_1/δ_1}).

In the definition of ATR points the key role is played not by ω_f but instead by a non-holomorphic differential ω_f^+, built from ω_f by means of a fundamental unit u in F such that u_0 > 1 and u_1 < −1. The differential form ω_f^+ is also Γ-invariant, and it follows from the definition that it is invariant under the action of the matrix γ̃_u = ( u 0 ; 0 1 ). Therefore, if we let Γ̃ denote the group generated by Γ and γ̃_u, the integrals of ω_f^+ are invariant under Γ̃; here γ ∈ Γ̃ acts on the outer limits (resp. inner limits) through v_0 (resp. v_1). Let Λ_f^+ ⊂ C denote the lattice of periods of ω_f^+. There exists a unique map (τ; c_1, c_2) ↦ ∫^τ ∫_{c_1}^{c_2} ω_f^+ ∈ C/Λ_f^+, for τ ∈ H and cusps c_1, c_2 ∈ P^1(F), satisfying the following properties: (i) it is additive with respect to the limits c_1, c_2; (ii) it is invariant under Γ̃; and (iii) the difference of two such integrals with the same cusps and different base points τ equals the corresponding genuine double integral of ω_f^+. For the existence of such a map we refer the reader to [DL03, §1].
Uniqueness is proved in [DL03, §4], and it follows from repeated application of properties (i), (ii) and (iii). Since this also leads to an algorithm for computing the map, we review the proof in the next subsection. 2.2. Computation of semi-definite integrals via continued fractions. Since O_F^× is infinite and F has trivial class number, by a result of Cooke-Vaserstein [Coo76] every element c ∈ F can be written as a finite continued fraction. See [GM12] and §4 for an effective version of this result. Two cusps c_1, c_2 ∈ P^1(F) are said to be adjacent if c_1 = γ·0 and c_2 = γ·∞ for some γ ∈ Γ. One can join every cusp c ∈ F with ∞ by a sequence of adjacent cusps: the sequence of convergents of a continued fraction expansion of c has this property. Using this fact and property (i), every integral ∫^{τ_0} ∫_{c_1}^{c_2} ω_f^+ can be written as a sum of integrals of the form ∫^{τ_0} ∫_{γ·0}^{γ·∞} ω_f^+. Thanks to (ii), these are of the form ∫^{τ} ∫_{0}^{∞} ω_f^+, and can be computed as follows: integrals of the form ∫_{τ_1}^{τ_2} ∫_{0}^{∞} ω_f^+, with τ_1, τ_2 ∈ H, can be computed by taking an auxiliary point τ_3 ∈ H and splitting the inner path of integration at τ_3, after which formula (2.2) can be used. 2.3. Definition of ATR points. Under the assumption that E is modular, there exists a Hilbert modular newform f_E ∈ S_2(Γ) whose L-series coincides with that of E. The Fourier expansion of f_E can be explicitly computed as follows. For a prime ideal p ⊂ F, let a_p := |p| + 1 − #E(O_F/p), where |p| denotes #O_F/p. For arbitrary ideals n ⊂ F the integer a_n is defined by means of the identity

Σ_{n ⊂ F} a_n |n|^{−s} = Π_{p prime} (1 − a_p |p|^{−s} + |p|^{1−2s})^{−1},

and the Fourier expansion of f_E is then given by (2.1) with these coefficients. Let ϕ : R_c ↪ M_2(F) be an optimal embedding. Since K is ATR, the group of units of R_c of relative norm 1 has rank 1; let γ_ϕ be the image under ϕ of one of its generators. Since v_0 extends to a complex place of K, the action of K^× on H by means of v_0 ∘ ϕ has a single fixed point τ_0. Let J_ϕ be the quantity in C/Λ_f^+ defined as

J_ϕ = ∫^{τ_0} ∫_{c}^{γ_ϕ·c} ω_f^+,   for a cusp c ∈ P^1(F).

Let ω_E ∈ H^0(E, Ω^1) be a differential which extends to a smooth differential on the Néron model of E over O_F, and let Λ_0 ⊂ C denote the lattice of periods of ω_E with respect to the embedding v_0. Granting Conjecture 2.2, and denoting by η : C/Λ_0 → E_0(C) the Weierstrass parametrization map (where E_0 denotes the base change of E to C via v_0), the ATR point is defined as P_ϕ := η(J_ϕ). Conjecturally, its trace down to K belongs to E_0(K) and is non-torsion if and only if ord_{s=1} L(E/K, s) = 1. Note that under the assumptions of this section, namely that the conductor of E/F is 1 and K/F is ATR, the L-function L(E/K, s) has sign −1. Thus the condition ord_{s=1} L(E/K, s) = 1 is equivalent to L′(E/K, 1) ≠ 0, which can be numerically checked. 3. Integration of Hilbert modular forms. When evaluating (2.2) one can collect the elements n ∈ O_F^+ modulo powers of u^2, because the Fourier coefficient corresponding to n only depends on the ideal (n). The sum S_n corresponding to the orbit {n·u^{2k}}_{k ∈ Z} is then obtained from (2.2) by replacing n_i with n_i u_i^{2k} and summing over k (note that the product n_0 n_1 is unchanged, since u_0 u_1 = ±1). For a fixed k, the modulus of each exponential term (after multiplying out the brackets) is of the form

(3.1)   e^{−2π(n_0 u_0^{2k} Im(z_0)/δ_0 + n_1 u_1^{2k} Im(z_1)/δ_1)},   with z_0 ∈ {x_0, y_0}, z_1 ∈ {x_1, y_1}.

It is easy to see that (3.1) has a single maximum, when viewed as a function of k, and that the maximum value afforded by the four exponential terms is bounded in terms of the quantity ǫ(x_0, y_0, x_1, y_1), which measures how far the products of imaginary parts of the limits stay from 0. Moreover, one easily checks that S_n is dominated by a geometric series. This estimate allows us to know a priori how many terms need to be considered in order to obtain a prescribed accuracy. We observe that the speed of convergence of expression (2.2) depends on the limits x_0, y_0, x_1, y_1 through the quantity ǫ(x_0, y_0, x_1, y_1). The main result of this section is the following. Theorem 3.1. There exists a constant ǫ_F, which depends only on F, such that for every (x_0, y_0, x_1, y_1) ∈ H^4 and for every ǫ_0 < ǫ_F the integral ∫_{x_1}^{y_1} ∫_{x_0}^{y_0} ω_f can be expressed as a finite sum of integrals of the same form whose limits satisfy ǫ(·) ≥ ǫ_0. Integrals of the form ∫^{τ_0} ∫_{c_1}^{c_2} ω_f^+ are involved in the computation of ATR points.
Recall that integrals of the form ∫_{x₀}^{y₀} ∫_{x₁}^{y₁} ω_f⁺ are the ones involved in the computation of ATR points; one can choose any y₁ ∈ H whose imaginary part is large enough, so that Theorem 3.1 applies equally well to the integrals of ω_f⁺. We devote the rest of the section to proving Theorem 3.1, which as we will see can be made effective and algorithmic.

It will be useful for us to regard F as a subset of R² by means of v₀ × v₁. Let ‖·‖ denote the norm on R² given by ‖(x₀, x₁)‖ = max(|x₀|, |x₁|). The basic ingredient in the proof of Theorem 3.1 is the following classical result.

Lemma 3.2. There exists a constant C_F, only depending on F, such that for each x ∈ R² and for each 0 < δ < 1 there are elements c, d ∈ O_F, with c ≠ 0, such that ‖cx + d‖ ≤ δ and ‖c‖ ≤ C_F/δ.

Proof. This is [Fre90, Lemma 3.6]. We rewrite the proof in an algorithmic fashion in order to give an approximation to C_F (see Remark 3.3 below), as well as to give an algorithm to find the elements c and d (a code sketch of this search is given below). Consider a fundamental parallelogram P for O_F as a subgroup of R². Let U₁, ..., U_N be boxes of side δ that cover P. It is easy to see that there is a constant N′, depending only on F, such that N can be taken to be ≤ N′/δ². For each positive real r, consider the set S_F(r) = {c ∈ O_F : ‖c‖ ≤ r}. A well-known result in Ehrhart theory (see e.g. [BR07, Theorem 2.9]) implies that there exists a constant C_F > 0, which depends on F but not on δ, such that #S_F(C_F/(2δ)) > N. Consider now an ordering {c_n}_{n≥1} of the elements of S_F(C_F/(2δ)). For each of the c_n, find d_n ∈ O_F such that c_n x + d_n ∈ P, and set i(n) to be the integer such that c_n x + d_n ∈ U_{i(n)}. By the pigeonhole principle the sequence {i(n)}_n will have a repetition, say i(n₁) = i(n₂) with n₁ ≠ n₂. Therefore ‖(c_{n₁} − c_{n₂})x + (d_{n₁} − d_{n₂})‖ ≤ δ. The sought elements are thus c = c_{n₁} − c_{n₂} and d = d_{n₁} − d_{n₂}. □

Remark 3.3. In our applications, the parameter δ will be small enough so that the quantity N in the above proof can be approximated by area(P)/δ². Also, [BR07, Theorem 2.9(b)] gives in this case that #S_F(r) ≈ 4r²/area(P). Therefore, it is enough for #S_F(C_F/(2δ)) to be larger than N that C_F² ≥ area(P)². This yields an approximate value of C_F ≈ area(P), which is good enough for our purposes. Note that this area is easily calculated: it equals √D.

In the next lemmas we prove that the quantity u₀/(C_F(1 + u₀²)) can, indeed, be taken as the constant ǫ_F of Theorem 3.1.

Lemma 3.4. Set ǫ_F = u₀/(C_F(1 + u₀²)). For every (z₀, z₁) ∈ H² there exists γ ∈ Γ such that Im(γz₀)·Im(γz₁) ≥ ǫ_F².

Proof. Let z_j = r_j + i s_j and let r = (r₀, r₁). If s₀s₁ ≥ ǫ_F² one may take γ = 1 and the result is obvious, so assume from now on that s₀s₁ < ǫ_F². By Lemma 3.2, for each 0 < δ < 1 there exist c′, d′ ∈ O_F with c′ ≠ 0 and such that ‖c′r + d′‖ ≤ δ and ‖c′‖ ≤ C_F/δ. We have that |c′_j z_j + d′_j|² ≤ δ² + (C_F/δ)² s_j² for j = 0, 1. Let g = gcd(c′, d′) and let n = Nm_{F/Q}(g). If we let c = c′/g and d = d′/g, then the product over j of |c_j z_j + d_j|² equals the corresponding product for (c′, d′) divided by n², which can only make it smaller since |n| ≥ 1. Since gcd(c, d) = 1 there exists γ ∈ Γ having (c, d) as bottom row, and

Im(γz₀)·Im(γz₁) = s₀s₁ / (|c₀z₀ + d₀|² |c₁z₁ + d₁|²) ≥ s₀s₁ / ((δ² + C_F²s₀²/δ²)(δ² + C_F²s₁²/δ²)).

We choose δ to maximize this expression. The optimal value for δ turns out to be δ = (C_F² s₀s₁)^{1/4}. Note that δ < 1 since s₀s₁ < ǫ_F². With this value of δ the above inequality gives

(3.4)  √(Im(γz₀)·Im(γz₁)) ≥ (1/(2C_F)) · G(s₀, s₁)/A(s₀, s₁),

where G(s₀, s₁) and A(s₀, s₁) are the geometric and arithmetic means, respectively. Of course, for this quantity to be not too small we should ensure that the ratio G/A is not too small; that is, s₀ and s₁ should be close. This can be arranged by the action of the matrix γ_u = (u 0; 0 u⁻¹), which guarantees that we may assume u₀⁻² ≤ s₀/s₁ ≤ u₀². Therefore one obtains (the worst case being when the ratio lies at either extreme):

(3.5)  G(s₀, s₁)/A(s₀, s₁) ≥ 2u₀/(1 + u₀²),

and the result follows. □

If the integrand is invariant under the larger group Γ̃, one can do better. Indeed, in this case one can improve (3.5) to G(s₀, s₁)/A(s₀, s₁) ≥ 2√u₀/(1 + u₀) by using the action of γ̃_u = (u 0; 0 1) ∈ Γ̃, which allows us to assume u₀⁻¹ ≤ s₀/s₁ ≤ u₀. We will apply this remark to the computation of ATR points, because the integrals ∫_{x₀}^{y₀} ∫_{x₁}^{y₁} ω_f⁺ are Γ̃-invariant, and then we can take ǫ_F in Theorem 3.1 to be

(3.6)  ǫ_F = √u₀ / (C_F(1 + u₀)).
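The proof of Lemma 3.2 above is constructive, and the box-collision search it describes is easy to implement. The following sketch works in the order Z[√D] with the max-norm on the two embeddings; the maximal order may be larger, and the enumeration bound cmax is an illustrative stand-in for the Ehrhart bound C_F/(2δ).

```python
import math

def embed(a, b, sqD):
    """The two real embeddings of a + b*sqrt(D)."""
    return (a + b * sqD, a - b * sqD)

def find_c_d(x, D, delta, cmax):
    """Pigeonhole search from the proof of Lemma 3.2.  Returns coefficient
    pairs c, d (for elements of Z[sqrt(D)]) with c != 0 and
    |c_i * x_i + d_i| <= delta in both embeddings, or None."""
    sqD = math.sqrt(D)
    seen = {}
    for a in range(-cmax, cmax + 1):
        for b in range(-cmax, cmax + 1):
            c0, c1 = embed(a, b, sqD)
            t0, t1 = c0 * x[0], c1 * x[1]
            # translate c*x into a fundamental parallelogram by some d
            p, q = (t0 + t1) / 2, (t0 - t1) / (2 * sqD)
            m, n = -round(p), -round(q)
            d0, d1 = embed(m, n, sqD)
            u0, u1 = t0 + d0, t1 + d1
            # two points in the same delta-box differ by at most delta
            key = (math.floor(u0 / delta), math.floor(u1 / delta))
            if key in seen:
                (a2, b2), (m2, n2) = seen[key]
                return (a - a2, b - b2), (m - m2, n - n2)
            seen[key] = ((a, b), (m, n))
    return None
```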
Given x, y ∈ H, denote by ρ(x, y) the geodesic in H joining x and y. We also let d(x, y) be the hyperbolic distance between x and y. This distance is invariant under the action of SL₂(R), and it is given by the formula

cosh d(x, y) = 1 + |x − y|² / (2 Im(x) Im(y)).

The proof of Theorem 3.1 is based on a decomposition result (Lemma 3.6, stated in terms of the geodesics just introduced): starting from the given integral I, integrals are successively decomposed, and those whose limits already satisfy ǫ ≥ ǫ₀ are placed in a list V. Observe that if this process finishes in a finite number of steps, then at the end the list V contains integrals I₀, I₁, ..., I_n with I = I₀ + ··· + I_n and ǫ(I_i) ≥ ǫ₀ for all i = 0, ..., n, as desired. What remains to be shown, then, is that this procedure cannot be repeated infinitely many times. As we shall now see, this follows from properties (2) and (3) of Lemma 3.6.

First of all, observe that I is the integral of ω_f along the region R(x₀, y₀; x₁, y₁) := ρ(x₀, y₀) × ρ(x₁, y₁) ⊂ H². Applying Lemma 3.6 to a certain integral W₀ = ∫_{r₀}^{s₀} ∫_{r₁}^{s₁} ω_f amounts to giving a decomposition of the region R(r₀, s₀; r₁, s₁) ⊆ R(x₀, y₀; x₁, y₁) into a union of regions with pairwise intersections of zero measure. This decomposition can be of one of 4 types, which correspond to (3.12), (3.15), (3.10) and (3.14) respectively; we refer to them as types (i)-(iv). Each time an application of Lemma 3.6 gives a decomposition of type (i) or (ii), the first term is a subregion of R(x₀, y₀; x₁, y₁) of area at least d²_min. Since the area of R(x₀, y₀; x₁, y₁) is finite, this cannot happen infinitely many times. Therefore, in the algorithmic process described above we can assume that, after a finite number of steps, all applications of Lemma 3.6 give rise to decompositions of types (iii) or (iv); at that stage, the algorithmic procedure applied to an integral W₀ only involves these last two types of decomposition.

4. Dependence on the continued fractions

The computation of ATR points is equivalent to the computation of semi-definite integrals of the form

(4.1)  ∫^{τ} ∫_{c}^{∞} ω_f⁺,  c ∈ F.

As we recalled in §2.2, one can write (4.1) as a sum of ordinary 4-limit integrals by expressing c as a continued fraction with coefficients in O_F. If the limits of the resulting ordinary integrals are too close to the real axis, then the number of terms to sum for a prescribed accuracy, and therefore the number of Fourier coefficients to be computed, may be too large. In this case, the algorithm described in §3 can be used to express them as sums of integrals whose limits are uniformly bounded away from the real axis, reducing the number of terms and Fourier coefficients needed.

If F is norm-euclidean then the euclidean algorithm gives an effective procedure for computing continued fractions. This is the method used in the numerical calculations over Q(√29), Q(√37) and Q(√41) carried out in [DL03]. But this can only be done in a few fields: there are only finitely many norm-euclidean real quadratic fields, with Q(√73) the one having the largest discriminant. An algorithm for computing continued fractions in 2-stage euclidean real quadratic fields was given in [GM12]. A field F is said to be 2-stage euclidean if for every a, b ∈ O_F with b ≠ 0 there exist either: (i) q, r ∈ O_F with a = qb + r and Nm(r) < Nm(b), or (ii) q₁, q₂, r₁, r₂ ∈ O_F with a = q₁b + r₁, b = q₂r₁ + r₂ and Nm(r₂) < Nm(b). (A sketch of a single division step is given below.) All real quadratic fields of class number 1 are conjectured to be 2-stage euclidean (see [Coo76]). Actually, the algorithm of [GM12] can also be used to verify that a given F is 2-stage euclidean, and this was used to prove that all real quadratic fields of class number 1 and discriminant up to 8,000 are indeed 2-stage euclidean [GM12, Theorem 4.1]. Unlike the situation encountered in norm-euclidean fields, 2-stage division chains as in condition (ii) are not unique. As a consequence, elements of F admit in general many different continued fraction expansions.
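For concreteness, here is a sketch of a single division attempt of the kind used in conditions (i) and (ii) above, again in the order Z[√D] with elements represented as coefficient pairs; a 2-stage chain simply retries on (b, r₁) when the first division fails. This is an illustration of the definition, not the algorithm of [GM12], which searches division chains more cleverly.

```python
import math
from fractions import Fraction

def nm(a0, a1, D):
    """Field norm of a0 + a1*sqrt(D)."""
    return a0 * a0 - D * a1 * a1

def divide(a, b, D):
    """Try a = q*b + r with |Nm(r)| < |Nm(b)|, testing the four integer
    roundings of the exact quotient a/b.  Returns (q, r, success)."""
    (a0, a1), (b0, b1) = a, b
    nb = nm(b0, b1, D)
    # exact quotient a/b = a * conj(b) / Nm(b)
    q0 = Fraction(a0 * b0 - D * a1 * b1, nb)
    q1 = Fraction(a1 * b0 - a0 * b1, nb)
    best = None
    for f0 in (math.floor(q0), math.ceil(q0)):
        for f1 in (math.floor(q1), math.ceil(q1)):
            r = (a0 - f0 * b0 - D * f1 * b1, a1 - f0 * b1 - f1 * b0)
            if best is None or abs(nm(*r, D)) < abs(nm(*best[1], D)):
                best = ((f0, f1), r)
    q, r = best
    return q, r, abs(nm(*r, D)) < abs(nb)
```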
This leads to different expressions of (4.1) as a sum of ordinary integrals, whose limits may have very different imaginary parts. As will be illustrated in §5 with some explicit examples, numerical experiments suggest that it is useful to exploit the non-uniqueness of continued fractions, by searching for continued fractions leading to integrals whose limits have large imaginary parts; or, to be more precise, whose limits give large values of the quantity ǫ defined in (3.2). The procedure for computing an expression such as (4.1) is then:

(1) Compute all continued fraction expansions of c given by the algorithm of [GM12] which have length up to a certain fixed bound.
(2) For each continued fraction, compute the corresponding expression of (4.1) as a sum of ordinary integrals and compute ǫ_min: the minimum of the quantities ǫ corresponding to the limits.
(3) Choose the continued fraction giving the highest ǫ_min.
(4) For each of the ordinary integrals appearing in the expression given by the continued fraction found in the previous step, compute the quantity ǫ and apply the algorithm of Theorem 3.1 if ǫ < ǫ_F, with a suitable choice of ǫ < ǫ₀ < ǫ_F.

We end the section with some remarks about the algorithm above.

1. One can exploit the non-uniqueness of 2-stage division chains even if F is euclidean. As the next section illustrates, this may be beneficial, since it usually gives rise to integrals with larger values of ǫ, providing an improvement even on the curves already considered in [DL03]. In some cases, it may even happen that the ǫ_min obtained in this way is higher than ǫ_F, in which case it is not necessary to apply the algorithm provided by Theorem 3.1. However, the lack of an a priori estimate of the value of ǫ_min obtained by the non-uniqueness-of-division-chains trick explains the key importance of Theorem 3.1 in treating the cases where ǫ_min < ǫ_F.

2. In Step (3) we choose the continued fraction giving the highest ǫ_min because experimentally this seems to produce the fewest resulting integrals in Step (4). We have no rigorous explanation for this fact, although it seems reasonable that better initial conditions give better results.

3. There is a trade-off between small and large values of ǫ₀ in Step (4) above: smaller values yield fewer integrals after the breaking process, but each of these integrals requires more Fourier coefficients at the time of integration; on the other hand, higher values lead to integrals requiring fewer Fourier coefficients, but the number of resulting integrals tends to be higher. Experimentally, we found that the running time of the algorithm is more sensitive to the number of needed Fourier coefficients, so a value of ǫ₀ close to ǫ_F seems to be a good choice. For instance, in the implementation of the algorithm used to compute the numerical examples of the next section, we used ǫ₀ = 0.81ǫ_F, which corresponds to a value of ǫ₁ = 0.9ǫ_F.

5. Numerical verification of Darmon's conjecture

In this section we illustrate the algorithm described above by calculating approximations to ATR points, which adds numerical evidence on top of the one presented in [DL03]. Before detailing the computation of an ATR point on E₅₀₉, we comment on some calculations of ATR points on three Q-curves that we denote E₂₉, E₃₇, and E₁₀₉. The curves E₂₉ and E₃₇ were also considered in [DL03], and we have included them here in order to compare the computational requirements of the algorithm used in [DL03] with the one presented in this note.
The curve E₁₀₉ is an example of a curve of conductor 1 defined over a real quadratic field of class number 1 which is not norm-euclidean. Therefore, it was not numerically accessible before, although it is a Q-curve and algebraic points can be computed more efficiently by using the Heegner point method of [DRZ12]. The computations for E₂₉, E₃₇, and E₁₀₉ were performed on a laptop with an Intel Core i5-2540M CPU running at 2.60 GHz and 8 GB of memory. For the curve E₅₀₉ we used a machine equipped with eight Quad-Core AMD Opteron 8384 processors, for a total of 32 cores running at 800 MHz each, and equipped with 320 GB of memory.

The curve E₂₉. For this field the estimated C_F is C_F ≃ 5 (see Remark 3.3), which by (3.6) yields ǫ_F ≃ 0.0736. We consider the ATR field K = F(β) with β = √(9ω + 3), for which E₂₉(K) has a non-torsion point P with x-coordinate equal to −1/3. With the algorithm used in [DL03], one obtains integrals with ǫ_min ≃ 0.00145. In order to get 12 decimal digits, which is the minimum precision at which the calculations in [DL03] were performed, one would have needed to find the Fourier coefficients of all ideals up to norm N ≃ 6.7·10⁷. Using the non-uniqueness of continued fraction expansions as explained in Section 4, considering expansions of length up to 5, we obtained 5 integrals with ǫ_min ≃ 0.0072, which is almost 5 times better than before. In order to obtain the same precision one would have to find the Fourier coefficients of ideals up to norm N ≃ 2.7·10⁶, which is almost 25 times less. Since ǫ_min < ǫ_F, we broke the integrals further with the algorithm of Theorem 3.1 to move the imaginary parts of the limits close to the theoretical optimum, with a choice of ǫ₀ = 0.81ǫ_F in Step (4) of the algorithm outlined in Section 4. This yielded 539 integrals with ǫ_min ≃ 0.0596, and allowed us to obtain the same precision of 12 digits by only considering ideals up to norm N ≃ 40,000: an improvement by a factor of 1,675. By taking ideals of norm up to 40,000 we obtained that

J_τ = 13.2923360157968468468... − 10.78402031269077180934...i,

and −3·J_τ coincides with P up to the prescribed accuracy. The calculation took less than two minutes.

The curve E₃₇. For this field the constant C_F is approximately equal to 6, and by (3.6) we see that ǫ_F ≃ 0.044. We consider the field K = F(β), with β = √(4ω + 10), and one of the points P computed in [DL03]. Using the algorithm of [DL03] one obtains a minimal imaginary part of ǫ_min ≃ 0.0012, which means that one has to integrate using the Fourier coefficients up to norm N ≃ 1.12·10⁸ in order to obtain 12 digits of precision. In order to illustrate the algorithm of Theorem 3.1, we rewrote the integrals provided by the method of [DL03] as a sum involving 328 integrals (with a choice of ǫ₀ = 0.81ǫ_F). The minimal imaginary part then improved to ǫ_min ≃ ǫ₀ ≃ 0.0359, which means that to get 12 digits of precision it is enough to use ideals of norm up to N ≃ 138,000. This is an improvement by a factor of 815. In this case, it took about 7 minutes to verify that the computed J_τ matches the point P to the prescribed accuracy.

The curve E₁₀₉. In the two remaining subsections we present larger examples that were not available to [DL03]. First, consider the curve E₁₀₉ defined over the field F = Q(√109). Although E₁₀₉ is a Q-curve, the field F is not norm-euclidean, and therefore this example was not available before. For this field we have that C_F ≃ 10.4, giving ǫ_F ≃ 0.006.
By using the algorithm of Theorem 3.1 with, say, ǫ₀ = 0.81ǫ_F, one can express any integral ∫_{x₀}^{y₀} ∫_{x₁}^{y₁} ω_f⁺ as a sum of integrals with ǫ ≃ 0.81·ǫ_F ≃ 0.0048. In order to compute any such integral with ǫ = 0.0048 to a precision of 12 digits, one needs to sum the Fourier coefficients of norm up to roughly 2·10⁷. Let us consider the point P = (3ω + 11, (1/2)β − 7ω − 81/2) defined over the field K = F(β), with β = √(268ω + 1265). Exploiting the non-uniqueness of continued fractions, and considering expansions of length up to 5, we obtained 8 integrals with ǫ_min ≃ 0.035, which is roughly 6 times higher than ǫ_F. It is therefore not necessary in this case to break the integrals further. We computed an approximation to the ATR point by considering ideals of norm up to N = 430,000, obtaining that

J_τ = −3.24024368505944150...·10⁻¹² − 42.392087963225793791...i,

which agrees with −2P up to the prescribed precision of 12 digits. This computation took less than 3 minutes.

The curve E₅₀₉. We have that C_F ≃ 22.5, and therefore ǫ_F ≃ 0.0015. Theorem 3.1 allows us to express any integral ∫_{x₀}^{y₀} ∫_{x₁}^{y₁} ω_f⁺ as a sum of integrals having ǫ ≥ ǫ₀, for any choice of ǫ₀ < ǫ_F. For instance, for a choice of ǫ₀ = 0.81ǫ_F we obtain that each of those integrals can be computed to 12 digits of precision by summing over the Fourier coefficients of norm up to roughly 1.6·10⁹; a bound which, although large, is within reach of current technology. The field extension K/F has relative discriminant of norm 55, which is relatively small. Write O_K = O_F + αO_F, with α² + α = 127√509 + 2865. The ATR points are conjectured to be defined over the Hilbert class field of K. Since K has class number 2, we need to compute the points corresponding to the two non-equivalent optimal embeddings if we want to obtain a point defined over K. The first of these embeddings maps α to

ϕ₁(α) = (0, 254ω + 2738; 1, −1),

whereas the second maps α to

ϕ₂(α) = (0, 127ω + 1369; 2, −1),

where the entries are listed row by row. The fixed points τ₀^{(1)} and τ₀^{(2)} for the induced action of K^× on H given by the embedding v₀: K ↪ C can then be computed. Since the corresponding quantities ǫ_min^{(1)} and ǫ_min^{(2)} exceed ǫ_F, we see that in this case it is not necessary to break the integrals further using Theorem 3.1. In order to obtain about 12 decimal digits of accuracy we precomputed the Fourier coefficients of all ideals up to norm 4·10⁸. The total computation time was under two days on the 32-processor machine specified at the beginning of this section. We should note that in this computation we heavily exploited parallelism, both when computing the Fourier coefficients and during the integration step. The period lattices of E_K are computed with respect to the Néron differential ω_{E_K} = dx/(2y + a₁x + a₃).
Distributed Pricing-Based User Association for Downlink Heterogeneous Cellular Networks

This paper considers the optimization of the user and base-station (BS) association in a wireless downlink heterogeneous cellular network under the proportional fairness criterion. We first consider the case where each BS has a single antenna and transmits at fixed power, and propose a distributed price update strategy for a pricing-based user association scheme, in which the users are assigned to the BS based on the value of a utility function minus a price. The proposed price update algorithm is based on a coordinate descent method for solving the dual of the network utility maximization problem, and it has a rigorous performance guarantee. The main advantage of the proposed algorithm as compared to the existing subgradient method for price update is that the proposed algorithm is independent of parameter choices and can be implemented asynchronously. Further, this paper considers the joint user association and BS power control problem, and proposes an iterative dual coordinate descent and power optimization algorithm that significantly outperforms existing approaches. Finally, this paper considers the joint user association and BS beamforming problem for the case where the BSs are equipped with multiple antennas and spatially multiplex multiple users. We incorporate dual coordinate descent with the weighted minimum mean-squared error (WMMSE) algorithm, and show that it achieves nearly the same performance as a computationally more complex benchmark algorithm (which applies the WMMSE algorithm on the entire network for BS association), while avoiding excessive BS handover.

I. INTRODUCTION

Modern wireless networks are designed based on the cellular architecture, in which multiple user terminals are associated with the base-stations (BSs) to form cells. The cellular concept has further evolved to include heterogeneous networks (HetNets), where the BSs can transmit with widely different powers at disparate locations, and consequently the cells can vary considerably in size. An essential feature of HetNets is that they allow the off-loading of traffic from the macro BSs to pico or femto BSs. By splitting the conventional macro cellular structure into small cells (i.e., femto/pico cells), HetNets allow for more aggressive reuse of frequencies as well as improved coverage and higher overall throughput for the entire network. (This work is supported in part by BLiNQ Networks, in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and in part by the Ontario Centre of Excellence (OCE).)

A main challenge in the deployment of HetNets is the appropriate setting of the transmit power levels at the different tiers of macro/pico/femto BSs and the association of users with the different BSs (or equivalently, the determination of the coverage area of each cell). The cell association problem is further compounded when multiple antennas are deployed at the BSs, with multiple users spatially multiplexed using multiple-input and multiple-output (MIMO) beamforming techniques. Conventionally, the downlink cell coverage areas are determined according to the signal-to-interference-plus-noise ratio (SINR): each user terminal simply associates with the BS from which the received SINR is the highest, herein referred to as the max-SINR rule. A key problem with max-SINR BS association is that it does not account for the varying data traffic pattern in the network; hence it can lead to poor load balancing.
Load balancing is essential for wireless networks with small cells, because femto/pico BSs are often deployed to alleviate traffic "hot-spots" with higher-than-average user density. This paper addresses the downlink user association problem for HetNets from an optimization perspective under the proportional fairness criterion. We follow a pricing-based strategy, in which the users are associated with the BS according to the value of a utility minus a price; this strategy was first adopted in [2], where a price update method based on the subgradient algorithm is proposed. The main novelty of this paper is that we advocate an alternative price update method based on a coordinate descent approach on the dual of the network utility maximization problem. The proposed algorithm has the advantage that it is free of parameter choices and that it can be implemented asynchronously across the BSs. This paper further proposes the joint optimization of BS association with downlink power control and with beamforming. We show that the proposed pricing-based distributed user association can significantly improve upon the conventional max-SINR association. Throughout the paper, we use the terms BS association and user association interchangeably: the former emphasizes a user perspective, while the latter a BS perspective.

A. Related Work

The BS association problem has been considered extensively in the literature. While the early works in this area [3]-[7] mostly deal with the code-division multiple access (CDMA) system, they already reveal that the joint optimization of BS association and transmit power can significantly improve the overall network performance. These earlier works, as well as some of the more recent ones [8]-[13], tend to focus on power-based optimization objectives, e.g., minimizing the total transmit power under a predefined set of minimum SINR constraints at the user terminals. While the power minimization formulation may be appropriate for networks with fixed rate and fixed quality-of-service (QoS) requirements, modern wireless networks often maximize the overall throughput, or more generally, a network utility function across all users in the network. In this realm, [14]-[17] consider the maximization of the sum rate across the network, while [18], [19] consider the weighted sum rate objective for the BS association problem. A more general network utility maximization formulation is considered in [2], [20]-[22], which use a proportional fairness objective function, while [16] considers max-min fairness in addition.

This paper considers the network utility maximization problem under the proportional fairness, i.e., log-utility, objective for the downlink of a wireless cellular network. Because the BS association problem is inherently a discrete optimization problem involving the assignment of users to BSs, finding the optimal solution for such a problem is nontrivial. While conventional BS association simply uses the max-SINR rule, it is also clear from a network utility maximization or load-balancing perspective that max-SINR is far from adequate. In this direction, [21] proposes an intuitive idea of expanding the coverage area of small cells by adding constant bias terms to the SINR values, so as to balance the load among different cells (although [21] does not analyze what the optimal bias terms should be).
Other common heuristics proposed in the literature include those in [22], [20], [23], which optimize BS association through the greedy method; [24], which randomly assigns each user terminal to a BS with probability proportional to the estimated throughput; and [14], [15], [25], which devise their respective methods based on the relaxation heuristic. In addition to the network utility maximization formulation, [26] addresses the BS assignment problem from a game theory perspective (as the assignment problem can be thought of as a game among the BSs), where the Nash equilibrium of the game is found.

The BS association algorithm proposed in this paper is most closely related to [2], where, under fixed transmit powers, a dual pricing method based on the subgradient update is proposed. This paper adopts this pricing approach, but makes further progress in identifying an alternative price update method. Other related works on BS association assuming fixed transmit powers include [16], which considers a simple model consisting of only a single pair of macro and pico BSs, and [18], which considers a special situation where user terminals may not report their channel state information (CSI) truthfully out of selfish motivation.

For the purpose of load balancing and interference management, it has also been well recognized in the existing literature that BS association and transmit power levels need to be optimized jointly. From this joint optimization perspective, an intuitive but heuristic idea is to optimize BS association and power levels in an iterative fashion, as suggested in [15], [17], [23]. The approach of [25] addresses the joint optimization problem using duality theory, but only for a relaxed version of the problem with the discrete constraints eliminated. In general, BS association and power optimization for weighted rate-sum maximization are both challenging problems, but there are some special cases where the globally optimal solution to the joint optimization problem can be found. For example, in [27] the optimal settings of BS association and power levels that maximize the sum throughput are obtained under certain restricted conditions for the case where the number of user terminals and the number of BSs are the same. Instead of searching for globally optimal solutions, this paper treats the joint BS association and power optimization problem from an iterative optimization perspective. Our main contribution here consists of some key observations on the role of pricing-based BS association in this heuristic approach.

For multi-cell networks with multiple antennas at the BSs, this paper also considers the joint optimization of BS association and beamforming for the scenario where multiple users can be spatially multiplexed within each cell. In this domain, [12], [28] provide algorithms for such a joint optimization problem, but only under the power minimization objective. In [14], BS association, transmit power and beamforming vectors are optimized through coordinate descent. Note that the beamforming problem by itself (assuming fixed BS association) is well studied in the literature (e.g., [29]-[33]). In this regard, the WMMSE algorithm [32] is of particular interest, because it can handle weighted rate-sum maximization, hence the proportional fairness objective. A recent work [19] proposes a modification of the WMMSE algorithm that is capable of optimizing BS association and the beamforming vectors jointly.
The WMMSE algorithm of [19] is, however, computationally complex; further, it induces excessive BS handover. One of the contributions of this paper is that the pricing-based BS association can be incorporated with the WMMSE beamforming design to significantly reduce the computational complexity of the joint BS association and beamforming method of [19], while achieving nearly the same performance and avoiding excessive handover.

B. Main Contributions

This paper considers optimal joint BS association with power control and with beamforming for downlink HetNets under the proportional fairness objective. The main contributions of this paper are as follows:

1) BS Association: For a single-input and single-output (SISO) network with fixed transmit powers and with flat-fading channels, this paper proposes a distributed pricing-based user association scheme with a price update method based on coordinate descent in the dual domain. The proposed price update algorithm has faster convergence than the conventional subgradient method [2]. It is a fundamental building block for subsequent generalizations to the frequency-selective case and to the cases with power control and MIMO beamforming. Moreover, we provide a duality-gap based analysis to bound the performance error of the proposed algorithm.

2) Joint BS Association and Power Control: This paper proposes an iterative optimization approach for the joint BS association and power control problem. We make a key observation that the choice of BS association method is crucial in the joint optimization. In particular, when used in conjunction with power control, the conventional max-SINR association tends to exacerbate load imbalance, while the proposed pricing-based association alleviates load imbalance. To quantify the performance of the proposed iterative approach, we devise a benchmark algorithm based on dual optimization and on solving the nonconvex power optimization problem from multiple starting points. We show that the proposed iterative approach provides comparable performance, while being much less computationally complex.

3) Joint BS Association and Beamforming: When the BSs are equipped with multiple antennas and have the ability to spatially multiplex multiple users within each cell, this paper shows that the optimization of BS association and beamforming can be decoupled without significantly affecting the overall performance. This allows us to propose a two-stage method combining the joint BS association and power control algorithm as the first stage, followed by a per-cell WMMSE step in the second stage. The proposed approach is significantly less complex than the use of the WMMSE algorithm for BS association over the entire network [19], while at the same time avoiding excessive BS handover.

C. Organization of This Paper

The rest of this paper is organized as follows. Section II introduces the problem formulation for the BS association problem for a SISO network. Section III analyzes a pricing-based BS association approach under fixed power. The algorithm proposed in Section III is a key component in subsequent developments. Section IV considers the joint BS association and power control problem. Section V addresses the joint BS association and beamforming problem for a MIMO network. Performance evaluations are provided in Section VI. Conclusions are drawn in Section VII.
II. BS ASSOCIATION PROBLEM FOR SISO NETWORKS

Consider a downlink cellular network consisting of L BSs with fixed transmit power levels (which may differ from BS to BS), and K active user terminals across the geographic area covered by the network. Both the BSs and the user terminals are equipped with a single antenna each. Let i be the index of user terminals, i ∈ {1, 2, ..., K}, and let j be the index of BSs, j ∈ {1, 2, ..., L}. Let the total bandwidth of the system be W, which is shared by all BSs (i.e., the frequency reuse factor is one). To simplify the problem, we assume flat-fading channels and frequency-flat PSD levels at the BSs; thus the SINR values are constant across the frequencies. Let h_ij ∈ C be the channel between user i and BS j, and let p_j be the transmit PSD level of BS j. If user i is to be associated with BS j, its SINR value is then

SINR_ij(p) = |h_ij|² p_j / ( Σ_{l≠j} |h_il|² p_l + σ_i² ),   (1)

where p = [p₁, ..., p_L]ᵀ and σ_i² is the PSD of the background additive white Gaussian noise (AWGN). This paper assumes that each user is associated with one BS at a time.

This paper adopts a proportionally fair network utility optimization framework of maximizing the sum log-utility across all the users in the entire network. A key step in the problem formulation is an observation made in [2], where it is shown that for a given set of users associated with a BS, round-robin among these users is the proportionally fair schedule (assuming constant and flat-fading channels and flat transmit PSD). Hence, if a total of k_j users are associated with BS j, in order to maximize the proportional fairness objective, each of them should be allocated 1/k_j of the total time/frequency resource. In this case, if user i is associated with BS j, its rate is given by

R_ij = (W/k_j) log( 1 + SINR_ij(p)/Γ ),   (2)

where Γ is the SNR gap determined by the practical coding and modulation schemes used. Let x_ij be a binary variable (1 or 0) denoting whether or not user i is associated with BS j. The BS association problem is that of jointly determining the x_ij and the transmit powers p_j at each BS to maximize the overall network utility, which can be written as:

maximize over {x_ij, k_j, p_j}:  Σ_i Σ_j x_ij log(R_ij)   (3a)
subject to:  x_ij ∈ {0, 1},  0 ≤ p_j ≤ p̄_j,   (3b)
             k_j = Σ_i x_ij, ∀j,   (3c)
             Σ_j x_ij = 1, ∀i,   (3d)
             Σ_j k_j = K.   (3e)

Constraint (3d) ensures that each user can only associate with one BS, and constraint (3e) states that all users in the network are served. Note that although k_j is completely determined by the x_ij, it is convenient to keep k_j as an optimization variable in the subsequent analysis.

III. BS ASSOCIATION IN SISO NETWORKS UNDER FIXED POWER

The joint BS association and power control problem (3) is a mixed discrete optimization (over the BS association) and nonconvex optimization (over the powers) problem, for which finding the global optimum is expected to be very challenging. In this section, we focus on a simplified problem setting with the transmit power spectral density (PSD) levels fixed a priori. The joint optimization problem with power control is treated in the subsequent section.

A. Problem Formulation

When p is fixed in (3), all SINR values are predefined by (1). We introduce the parameter

a_ij = log( W log(1 + SINR_ij(p)/Γ) ),   (4)

which is the log of the rate user i would obtain if it were allocated the full time/frequency resource of BS j, so that log(R_ij) = a_ij − log(k_j) (a short code sketch illustrating (1)-(4) is given below). Substituting a_ij back into (3), we simplify the BS association problem under fixed powers as

maximize over {x_ij, k_j}:  Σ_i Σ_j x_ij a_ij − Σ_j k_j log(k_j)   (5a)
subject to:  x_ij ∈ {0, 1},  Σ_j x_ij = 1, ∀i,   (5b)
             k_j = Σ_i x_ij, ∀j,   (5c)
             Σ_j k_j = K.   (5d)

The rest of this section presents a pricing approach, together with a novel price update method, for solving the above problem.

B. Lagrangian Dual Analysis

The problem formulation (5) was first proposed in [2], where it is shown that a dual analysis can yield considerable insight. An important idea is that the dual variables can be interpreted as BS-specific prices, which gives rise to the dual pricing approaches for BS association.
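As an illustration of how the quantities in (1), (2) and (4) fit together, the following sketch computes a_ij from the channel gains and fixed PSD levels. The reconstructed forms of (1)-(4) given above are assumed; W and Γ are the bandwidth and SNR gap.

```python
import math

def sinr(i, j, h2, p, sigma2):
    """SINR of user i at BS j as in (1); h2[i][l] = |h_il|^2."""
    interference = sum(h2[i][l] * p[l] for l in range(len(p)) if l != j)
    return h2[i][j] * p[j] / (interference + sigma2[i])

def a_util(i, j, h2, p, sigma2, W, gamma):
    """a_ij as in (4): the log of the rate user i would get if it were
    given the full time/frequency resource of BS j (cf. (2))."""
    return math.log(W * math.log(1.0 + sinr(i, j, h2, p, sigma2) / gamma))
```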
Introduce dual variables μ = [μ₁, ..., μ_L]ᵀ for constraint (5c), and ν for constraint (5d). The Lagrangian function with respect to these two constraints is

L(X, k, μ, ν) = Σ_i Σ_j x_ij a_ij − Σ_j k_j log(k_j) + Σ_j μ_j (k_j − Σ_i x_ij) + ν (K − Σ_j k_j).   (6)

The dual function g(·) can then be written as

g(μ, ν) = max_{X, k} L(X, k, μ, ν),   (7)

where the maximization is over X satisfying (5b) and over k_j ≥ 0. The maximization of the Lagrangian has the following explicit analytic solution:

x_ij = 1 if j = j(i) := argmax_l (a_il − μ_l), and x_ij = 0 otherwise;   (8)
k_j* = e^{μ_j − ν − 1}.   (9)

Note that if j(i) in (8) is not unique, x_ij can be assigned the value 1 for any of the BSs with maximum (a_ij − μ_j) without affecting the value of the dual function. The solution for x_ij in (8) is quite intuitive. The dual variable μ_j is the price at BS j, while a_ij is the utility of user i if it associates with BS j. Each user maximizes its utility a_ij minus the price among all possible BSs, while the BSs choose their prices to balance their loads. This pricing interpretation has already been given in [2], which also proposes a subgradient algorithm for updating the prices. The present paper carries this idea one step further by observing that we can explicitly write down the Lagrangian dual optimization problem of (5). This additional observation gives rise to a better price update method. Substituting (8) and (9) back into (7), we obtain the dual objective in closed form as:

g(μ, ν) = Σ_i max_j (a_ij − μ_j) + Σ_j e^{μ_j − ν − 1} + Kν.   (10)

The Lagrangian dual problem of (5) is now the minimization of g(·) over μ and ν:

minimize over (μ, ν):  g(μ, ν).   (11)

Lagrangian duality theory states that the updating of the prices can be done via the minimization of g(μ, ν), e.g., using the subgradient algorithm [2]. One of the main contributions of this paper is that, by taking advantage of the particular form of g(μ, ν), the price update can alternatively be done using a coordinate descent approach in the dual domain. In the subsequent sections, we first review the subgradient method, then present the new coordinate descent method.

After the dual solution to (11) is obtained, we need to recover the primal variables x_ij from the dual solution. This can be done through (8), but there is the possibility that a user has more than one BS with the same maximal value of (a_ij − μ_j). Such ties can be resolved using heuristics. In general, we would like to keep k_j as close to e^{μ_j − ν − 1} as possible. In our simulation experience, only a very small number of users are typically involved in ties, so tie-breaking via exhaustive search is feasible. It should be noted that because the original optimization problem (5) is discrete in nature, solving the dual is not the same as solving the original primal problem: a positive duality gap can exist. Nevertheless, the dual optimal solution often leads to good primal solutions.

C. Subgradient Method

To solve the dual optimization problem (11), we observe first that if μ is fixed, then g(·) is a differentiable convex function of ν, so the optimal ν can be found as

ν^{(t)} = log( (1/K) Σ_j e^{μ_j^{(t)} − 1} ),   (12)

where the time index t is included here to indicate that μ and ν need to be updated iteratively in a sequential order. However, g(·) is not a differentiable function of μ_j, so instead of taking its derivative with respect to μ_j, the subgradient method updates the μ_j's in each step according to

μ_j^{(t+1)} = μ_j^{(t)} − α^{(t)} ( e^{μ_j^{(t)} − ν^{(t)} − 1} − Σ_i x_ij^{(t)} ),   (13)

where α^{(t)} is the step size and x_ij^{(t)} is determined by μ_j^{(t)} according to (8). The use of the subgradient method for the price update in the BS association problem was first proposed in [2]. Because the dual problem is always convex, the subgradient method is guaranteed to converge to the globally optimal solution of the dual problem (11). However, the convergence speed of the subgradient method depends heavily on the choice of step size α^{(t)}.
Possible choices of α^{(t)} include a constant step size (but the constant is difficult to choose) or diminishing step sizes (which guarantee convergence but can be quite slow in practice). As a baseline for comparison, this paper adopts the self-adaptive scheme of [34], as suggested in [2]. We refer the reader to [34] for the detailed algorithm description, and only mention that the scheme involves quite a few parameters, namely γ_t, ρ ≥ 1, β < 1, as well as δ₁ and δ. Even so, the convergence speed remains very much parameter dependent, as seen in the simulation section later in this paper. We remark that because all the μ_j's need to be updated at the same time using the same step size (in order to ensure convergence), the distributed implementation of the subgradient method requires synchronized price updates across the BSs. This is a significant drawback, as synchronization is not necessarily easy to achieve. The main advantage of the dual coordinate descent method proposed in the next section is that it is free of parameter choices and it does not require synchronization.

D. Dual Coordinate Descent (DCD) Method

The main contribution of this paper is a coordinate descent [35] approach in the dual domain for solving (11). The key idea is to recognize that the dual function is expressed in closed form in (10). First, fixing all the μ_j's, we see that the optimal ν can be updated by (12). Next, fixing ν and all μ_j's except one of them, we see that g(·) is in fact the sum of a continuous piecewise linear function and an exponential function. So we can take its left and right derivatives and choose μ_j such that the left derivative at μ_j is less than or equal to zero, and the right derivative is greater than or equal to zero. Mathematically, define two functions f₁(·) and f₂(·) as:

f₁(μ_j) = |{ i : a_ij − μ_j ≥ max_{l≠j} (a_il − μ_l) }|,   (14)
f₂(μ_j) = e^{μ_j − ν − 1},   (15)

where f₁(μ_j) counts the users that would select BS j at price μ_j. It is easy to see that the left partial derivative of g(·) with respect to μ_j is exactly f₂(μ_j) − f₁(μ_j). Hence, fixing all other dual variables, the μ_j that minimizes g(·) is just

μ_j* = max { μ_j : f₁(μ_j) ≥ f₂(μ_j) }.   (16)

This leads to the DCD method described in Algorithm 1.

Algorithm 1 Dual Coordinate Descent (DCD)
Initialization: Set μ and ν to feasible values.
repeat
  for each j ∈ {1, ..., L} do
    1) Update μ_j according to (16).
  end for
  2) Update ν according to (12).
until the dual objective value converges.

The DCD method is quite intuitive. The dual variable μ_j is the price at BS j, while a_ij is the utility of user i if it is associated with BS j. Each user chooses to associate with the BS that maximizes its utility minus the price, while the BSs choose their prices in an iterative fashion to balance their loads. Fig. 1 illustrates the price update condition (16): f₁ is a nonincreasing step function of μ_j, while f₂ is an increasing exponential function. The functions f₁(·) and f₂(·) may not intersect, but the optimal μ_j* can always be determined uniquely. As mentioned earlier, a main advantage of the DCD method is that the BSs do not need to synchronize their price updates. In fact, the order of the price updates in Algorithm 1 can be arbitrary. Since each dual update step always produces a nonincreasing dual objective value, the iterative algorithm is always guaranteed to converge. However, it should be noted that since the dual objective (10) is not a differentiable function, coordinate descent is not guaranteed to reach the global optimum of the dual optimization problem (11), and most likely not the optimum of the primal problem (because a duality gap can exist). Nevertheless, the convergence point of DCD still gives fairly good solutions to the original BS association problem.
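A minimal sketch of Algorithm 1 follows. The utilities a[i][j] are assumed given; the bisection bracket and iteration counts are illustrative choices rather than part of the algorithm's specification. Each μ_j update solves (16) by bisection on the crossing of f₁ and f₂.

```python
import math

def dcd(a, rounds=100):
    """Sketch of Algorithm 1 (dual coordinate descent) for the prices mu
    and the auxiliary variable nu; a[i][j] holds the utilities a_ij."""
    K, L = len(a), len(a[0])
    mu, nu = [0.0] * L, 0.0

    def f1(j, m):  # (14): number of users that would pick BS j at price m
        return sum(1 for i in range(K)
                   if a[i][j] - m >= max((a[i][l] - mu[l]
                                          for l in range(L) if l != j),
                                         default=-math.inf))

    def f2(m):     # (15)
        return math.exp(m - nu - 1.0)

    for _ in range(rounds):
        for j in range(L):
            lo = min(min(row) for row in a) - 10.0
            hi = max(max(row) for row in a) + 10.0
            for _ in range(60):            # bisection for (16)
                mid = 0.5 * (lo + hi)
                if f1(j, mid) >= f2(mid):
                    lo = mid
                else:
                    hi = mid
            mu[j] = lo
        nu = math.log(sum(math.exp(m - 1.0) for m in mu) / K)  # (12)

    assoc = [max(range(L), key=lambda j: a[i][j] - mu[j]) for i in range(K)]
    return assoc, mu, nu
```

Ties in the final association would be resolved by the heuristics discussed in Section III-B.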
The proposed dual coordinate descent method is inspired by the development of the auction algorithm [36] for the one-to-one assignment problem. The BS assignment problem in this section can be thought of as a generalization of the assignment problem solved by the auction algorithm [36] from the 1-to-1 to the N-to-1 case.

E. Duality Gap Bound

Although the DCD method is not guaranteed to converge to the global optimum of the dual problem, and further, because of the integer constraints, there may be a nonzero optimal duality gap between the primal and the dual problems, the Lagrangian dual analysis nevertheless gives useful upper bounds on the optimum value of the original optimization problem. In particular, g(μ, ν) is an upper bound on f°(X*, R*), and the gap is tightest when (μ, ν) are dual optimal. The following result (Proposition 1, proved in Appendix A) shows that this optimal duality gap can be expressed analytically in closed form. Note that whenever k_j = e^{μ_j − ν − 1} for a BS j, as in Fig. 1(a), the user association is close to being optimal at that BS, as it does not contribute to the duality gap. When a BS is involved in ties, the duality gap is minimized when k_j is made as close to e^{μ_j − ν − 1} as possible.

IV. JOINT BS ASSOCIATION AND POWER CONTROL IN SISO NETWORKS

Thus far, we have considered the downlink BS association problem with fixed BS transmit powers. However, the setting of the downlink power levels is crucial for determining the cell range, especially in a HetNet where pico BSs may have a very different transmit power as compared to macro BSs. This motivates us to investigate joint BS association and downlink power optimization.

A. Iterative DCD and Power Control

The main algorithm proposed in this section is a simple and straightforward iteration between pricing-based BS association and power control, as shown in Algorithm 2.

Algorithm 2 Iterative BS Association and Power Control
Initialization: Set the p_j's to feasible values.
repeat
  1) Run the DCD algorithm for fixed p_j's using Algorithm 1.
  2) Do power control for network utility maximization under fixed BS association to obtain a new set of p_j's.
until convergence.

The idea is to run the DCD algorithm under fixed power in order to achieve better load balancing, and to run a power control method under fixed user association for interference mitigation. The power optimization algorithm should also aim to maximize the overall network utility function. One possible implementation of such a power optimization is included in Appendix B, where Newton's method is used for maximizing the log utility. As long as both the BS association and the power control steps of the iteration aim to increase the same objective function, the overall algorithm is guaranteed to converge (albeit not necessarily to the global optimum, since the problem is not convex).

Although the main idea of iterative BS association and power optimization appears straightforward, this paper makes a key observation that the use of a utility-maximization based BS association algorithm is crucial here. The following simple example shows that if the max-SINR association rule were used instead, the iterative process could exacerbate load imbalance. Consider a two-BS scenario with the initial BS assignment as shown in Fig. 2. If we apply power control, BS A would raise its transmit power due to the fact that it serves a large number of users, while BS B would lower its power. But once BS A increases its power, according to the max-SINR rule, it would attract even more users. Thus, the overall process may exacerbate load imbalance.
This is in contrast to the pricing-based BS association, which would actually reduce the number of users served by BS A (due to the higher pricing term), hence avoiding the undesirable phenomenon of overloading at BS A.

B. Direct Dual Optimization for Joint BS Association and Power Control

The iterative BS association and power control method proposed in the previous section is simple and effective. To further quantify its performance, this section pursues an alternative direct dual optimization approach for solving the joint BS association and power control problem (3). The algorithm proposed in this section is much more computationally complex than the iterative approach of the previous section, but it serves as a benchmark for performance comparison purposes. The idea of direct dual optimization is to write down the Lagrangian dual of (3), which gives

g(μ, ν) = max_{X, k, p} L(X, k, p, μ, ν).   (17)

The above maximization problem is now over both the power p and the BS association variables X and k under fixed dual variables. As before, the optimal solution for k_j can still be obtained analytically by (9), i.e., k_j* = e^{μ_j − ν − 1}. However, the optimization over X and p is considerably more difficult because of the nonconvex and discrete nature of the problem. Here, we propose an approach of iteratively optimizing X assuming fixed p using (8), then optimizing p under fixed X. Clearly, the solution so obtained may not be the global optimum. Thus, we choose to start from multiple initial points of (X, p) in order to better approach the global optimum.

For the minimization of the dual function g(·), it is possible to pursue a subgradient or dual coordinate descent approach. The key is to recognize that the subgradient in (13) is still valid; further, the optimization of ν can still be done via (12). However, the dual coordinate descent step (16) no longer applies in a straightforward fashion. Instead, to implement coordinate descent, a bisection on μ_j can be used to find the optimal μ_j* while holding the other dual variables fixed. The bisection can be carried out based on the subderivative of g(·) with respect to μ_j, which can be calculated as e^{μ_j − ν − 1} − Σ_i x_ij, where x_ij is the solution to (18). Under an ideal assumption that the true optimal solution (X*, k*) can be found when evaluating g(·), we can further deduce that this dual method has the same performance bound as in Proposition 1. Finally, as mentioned before, to ensure a near globally optimal evaluation of g(·), multiple random starting points need to be tried. This gives a way to find a near globally optimal solution to the overall problem.

The direct dual optimization method described above has much higher complexity than the iterative BS association and power control method proposed in the previous section, but given enough starting points, it can serve as a benchmark for the proposed algorithm. The numerical simulation carried out later in the paper indicates, however, that the simpler iterative BS association and power control method proposed earlier already performs very close to the benchmark.

V. JOINT BS ASSOCIATION AND BEAMFORMING IN MIMO NETWORKS

We now further extend the BS association problem to the case where both the BSs and the users are equipped with multiple antennas, and multiple users are spatially multiplexed within each cell. The use of beamforming can significantly influence the overall effective channel gain, and consequently the optimal BS association for each user.
Thus, the joint BS association and beamforming problem is highly nontrivial. Note that power control is implicitly included as part of beamforming here. This section first reviews the state-of-the-art in this area, then proposes a novel approach of decoupling the overall problem into two subproblems, in which the BS association and the beamformers are optimized separately. The proposed approach has lower computational complexity; it does not require frequent BS handover; and it has comparable performance to the best benchmark joint optimization algorithm in the literature.

A. Problem Formulation and Existing Approach

Consider a downlink MIMO cellular network with M_j antennas at BS j and N_i antennas at user i. The channel between user i and BS j is denoted by the matrix H_ij ∈ C^{N_i × M_j}. We assume one data stream per user, and up to M_j users being spatially multiplexed at the same time. The channel is assumed to be flat-fading. Each BS is assumed to have a fixed total power constraint. Because the scheduling operation, as well as the transmit and receive beamformers, are designed to adapt to the channel realizations of each user, we can no longer claim that proportionally fair scheduling results in an equal time/frequency allocation among all the users. Instead, proportionally fair scheduling over time needs to be included explicitly in the problem formulation. Toward this end, let the BS association x_ij be fixed over time. Let v_ij^{(t)} ∈ C^{M_j} be the transmit vector of BS j intended for user i at time t. In order to maximize the network utility defined as the log of the long-term average rates of all users, i.e., Σ_i log(R_i^avg), we can equivalently maximize a weighted rate sum over successive time slots:

maximize over {v_ij^{(t)}}:  Σ_i Σ_j ω_i x_ij R_ij^{(t)}   (19a)
subject to:  Σ_i x_ij ‖v_ij^{(t)}‖² ≤ p_j, ∀j,   (19b)
             Σ_j x_ij = 1, ∀i,   (19c)

where ω_i is the weight of user i (the reciprocal of its long-term average rate), and R_ij^{(t)} is the instantaneous rate of user i at time t if it is associated with BS j, as expressed in (20); with the time index omitted for ease of notation, a standard form of this rate is

R_ij = log( 1 + v_ij^H H_ij^H C_i^{−1} H_ij v_ij ),  C_i = Σ_{(l,m)≠(i,j)} H_im v_lm v_lm^H H_im^H + σ_i² I.   (20)

Note that user scheduling within each BS is implicit in the problem formulation (19); further, in (19b), p_j is the peak PSD constraint of BS j, and the constraint (19c) enforces the rule that each user is associated with only one BS. Since x_ij is not allowed to depend on t, for each optimization period with a fixed set of channels, BS handovers from time to time are not permitted.

The beamforming design problem for weighted rate-sum maximization is a difficult nonconvex problem, even when the BS association is fixed. Below, we briefly review a WMMSE approach for solving this problem for a fixed BS assignment, and a generalization of the WMMSE algorithm in [19] that accounts for BS association.

1) Beamforming via WMMSE with Fixed BS Association: When the user-BS association is fixed in (19), the problem reduces to a beamforming design problem with a weighted rate-sum maximization objective. As proposed in [33] and [32], this beamforming problem can be addressed by solving an equivalent weighted minimum mean-square error (WMMSE) problem. We refer to [32] for a detailed description of the WMMSE algorithm.

2) WMMSE Method for BS Association: The recent work [19] further incorporates BS association into the beamforming problem by adding a penalty term to the weighted rate-sum objective and by solving the resulting penalized WMMSE problem at each time instant t. Basically, the users are penalized for being associated with more than one BS, and accordingly constraint (19c) is guaranteed in the end. However, this approach does not guarantee that the user-BS association is fixed over time.
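For concreteness, the following sketch evaluates the single-stream rate in the standard form given above for (20); the channel matrices and beamformers are assumed stored in dictionaries keyed by (user, BS) pairs.

```python
import numpy as np

def rate(i, j, H, V, sigma2):
    """Instantaneous single-stream rate of user i served by BS j, in the
    standard form used for (20) above (a sketch; H[(i, m)] is the channel
    from BS m to user i, V[(l, m)] the beamformer of BS m for user l)."""
    Ni = H[(i, j)].shape[0]
    C = sigma2 * np.eye(Ni, dtype=complex)       # noise covariance
    for (l, m), v in V.items():
        if (l, m) == (i, j):
            continue                             # skip the desired signal
        Hv = H[(i, m)] @ v                       # interference contribution
        C += np.outer(Hv, Hv.conj())
    h = H[(i, j)] @ V[(i, j)]                    # effective desired channel
    snr = np.real(h.conj() @ np.linalg.solve(C, h))
    return float(np.log2(1.0 + snr))
```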
Consequently, as the weights ω_i are updated over time, the user association and the user scheduling can both change. This results in rapid BS handovers, which are not desirable in practice. Further, the WMMSE-based BS association method as proposed in [19] has high computational complexity, because the WMMSE update needs to be done between every single BS-user pair in the entire network. Also, the performance and convergence speed of the algorithm depend heavily on the parameter of the penalty term, which can only be set heuristically. Nevertheless, the method of [19] provides a useful benchmark for our proposed algorithm below.

B. Proposed Two-Stage BS Association and Beamforming

This paper formulates the joint BS association and beamforming problem in recognition of the fact that BS association typically takes place on a much larger time scale and should only adapt to the slow-fading channel characteristics, while beamforming and scheduling can take place on a faster time scale. Thus, instead of jointly optimizing BS association and beamforming at each time slot, it is more sensible to decouple them into two stages. The first stage solves the BS association problem, while the second stage solves the beamforming problem assuming fixed BS association. The proposed two-stage algorithm is described below (see Algorithm 3):

1) BS Association Stage: The idea is to determine the BS association in the first stage based on an estimate of channel quality. For BS association purposes, we rely on a simple SISO representation of the MIMO channel, and apply the joint coordinate descent and power control algorithm presented in the previous section to determine the BS association for each user. The SISO representation of the MIMO channel is based on the fact that, from a degrees-of-freedom point of view, M_j antennas at the BS provide an M_j-fold spatial multiplexing gain. Thus, we can think of a MIMO system with M_j antennas over bandwidth W as equivalent to a SISO system with bandwidth M_j W. More precisely, let |h_ij| be the average channel magnitude between BS j and user i (modeling the distance-dependent attenuation and shadowing). We estimate each user's SINR according to (1), while accounting for the M_j antennas at the BS by redefining the parameter a_ij as

a_ij = log( M_j W log(1 + SINR_ij(p)/Γ) ).   (21)

Algorithm 3 Two-Stage Joint BS Association and WMMSE Beamforming
Initialization: Choose S_j ≥ M_j, ∀j.
1) Run Algorithm 2, the joint BS association and power control (with a_ij calculated by (21)), until convergence. Let the result of the optimization be (X, p). Associate users to BSs according to X. Compute R̂_ij according to (X, p) using the SISO model (2) scaled by M_j.
repeat
  2) Choose S_j potential users among the users associated with BS j according to ω_i x_ij R̂_ij, ∀j.
  3) Run the WMMSE algorithm [32] for the chosen users in each cell to get the transmit beamformers and the resulting rates, and update the weights ω_i accordingly.
until the end of the scheduling period.

The joint BS association and power control algorithm can now be applied to determine the BS association. We remark here that only the BS association is of interest at this stage. The optimized power p_j serves to assist the BS association and scheduling decisions, but is further optimized in the next stage.

2) Scheduling and Beamforming Stage: After the BS association is determined, the overall problem reduces to the beamforming vector design problem, which can be solved using the WMMSE algorithm. Our contribution to the algorithm design in this stage is to point out that one can further lower the computational complexity of WMMSE by eliminating candidate users that are unlikely to be scheduled (a sketch of this selection step is given below).
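A sketch of the pre-selection in step 2) of Algorithm 3, assuming the stage-one association assoc, weights w, and SISO rate estimates Rhat are available:

```python
def select_candidates(assoc, w, Rhat, S):
    """Stage-two user pre-selection: in each cell j keep the S[j] users
    with the largest estimated weighted rate w[i] * Rhat[i][j] among the
    users assigned to j by stage one."""
    cells = {}
    for i, j in enumerate(assoc):
        cells.setdefault(j, []).append(i)
    chosen = {}
    for j, users in cells.items():
        users.sort(key=lambda i: w[i] * Rhat[i][j], reverse=True)
        chosen[j] = users[:S[j]]
    return chosen
```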
In the conventional WMMSE algorithm, all potential users within a cell can have their beamforming vectors updated in each step. However, because each BS j can spatially multiplex at most M_j users, to reduce the computational complexity, we may choose a subset of users who are most likely to be served to take part in the WMMSE algorithm. The simplest way to do this is to choose the users according to the estimated weighted rate ω_i x_ij R̂_ij, where R̂_ij is calculated by the SISO model (2) scaled by M_j according to the resulting (X, p) after stage one. More sophisticated scheduling can also take channel directions into account. The number of potential users chosen by the WMMSE scheduler in cell j is a parameter, called S_j in this paper, which should be greater than M_j. A complete description of the two-stage method is given in Algorithm 3.

C. Complexity Analysis

This subsection briefly analyzes the computational complexity saving of the proposed algorithm as compared with the joint BS association and beamforming algorithm of [19]. For simplicity, we assume that the numbers of antennas at all the BSs are the same and the numbers of antennas at all the users are the same, i.e., M_j = M for all j, and N_i = N for all i. Under fixed BS association, the conventional WMMSE algorithm has a complexity of O(K²MN² + K²M²N + KM³ + KN³) per beamforming step, where K is the number of users in the entire network. For the joint BS association and WMMSE method of [19], since the WMMSE update of each user needs to be done with respect to all L BSs in the network, the parameter K in the WMMSE complexity formula needs to be increased by a factor of L, resulting in a complexity of O(L²K²MN² + L²K²M²N + LKM³ + LKN³). By contrast, in the proposed two-stage algorithm, only S = Σ_j S_j users are considered, and they are already associated with their respective BSs. Consequently, the complexity per WMMSE iteration is reduced to O(S²MN² + S²M²N + SM³ + SN³). Since S ≪ K ≪ LK, this is a significant complexity saving. In the above calculation, we ignore the complexity of the first stage, which is typically very fast. In addition, we do not account for the number of iterations in the WMMSE algorithm. However, the number of WMMSE iterations is typically smaller for the proposed algorithm than for the WMMSE algorithm of [19], since fewer users are involved. Overall, the proposed two-stage algorithm is much faster than the WMMSE algorithm of [19]. The simulation results of the next section show that it performs almost as well.

VI. PERFORMANCE EVALUATION

A. BS Association Under Fixed Powers

We first simulate the BS association algorithms with fixed powers in a downlink SISO network with a 7-cell wrap-around topology, with one macro BS and three pico BSs per cell, and with 30 users per cell. The channel modeling parameters are as defined in Table I. The transmit PSD level is fixed at the maximum value for each BS. Fig. 3 compares the convergence behavior of the dual coordinate descent method (Algorithm 1) with that of the adaptive subgradient method. Here, each iteration refers to either a single update of μ_j in the DCD method or a subgradient update of all the μ_j's. We see that the DCD method converges to within 10⁻¹ of the optimum with only two rounds of iterations per BS (i.e., 56 iterations), while the convergence of the subgradient method is very sensitive to its parameters.
Here, we set $\rho = 1.2$, $\beta = 0.9$, and $\delta = 0.002$ in the adaptive subgradient method [2], [34], and see that different settings of $\delta_1$ and $\gamma_k$ can result in very different convergence behaviors. Note that in Fig. 3 the DCD method does not converge exactly to the optimum. This is due to the fact that it is possible for coordinate descent to get stuck at a suboptimal point. This gap is quite small in this simulation, however.

Fig. 4 shows the cumulative distribution function (CDF) of data rates after 56 iterations for the various BS assignment algorithms. We see that both the subgradient method and the DCD method offer substantial rate improvement to low-rate users as compared to the max-SINR BS assignment rule. For instance, the 50th-percentile rate is increased by about 33%, which is a consequence of off-loading traffic from the macro BSs to the pico BSs. The performance of the subgradient method is again parameter dependent. Table II shows that the numerical utility achieved by DCD and two of the subgradient methods are almost identical, while subgradient-2 and the max-SINR method produce quite inferior results. This is consistent with the earlier convergence plot (Fig. 3) and the CDF plot (Fig. 4). In addition, the duality-gap bound calculated according to Proposition 1 for this example is about 0.45. This shows that the performance of the DCD algorithm is already very close to the global optimum. Finally, Fig. 5 displays the percentages of macro/pico users for the various BS association methods. It shows that with the max-SINR BS association and subgradient-2, too many users are associated with the macro BS, while the DCD algorithm is able to achieve a more balanced load by off-loading users to the pico BSs.

B. Joint BS Association and Power Control

This section considers the same network topology, but with downlink power control implemented in addition. We use Newton's method for power control for utility maximization. Note that since the network utility maximization problem is nonconvex, only convergence to a local optimum is expected. For the implementation of the direct dual optimization approach, we choose 10 random starting points. In Fig. 6, we observe a significant difference between max-SINR BS association and DCD-based BS association when they are implemented iteratively with power control. Further, Fig. 7 shows that the iteration between DCD and power control gives incremental improvement in utility, while in the max-SINR case the utility actually decreases after the second iteration. These two plots validate the earlier analysis showing that the max-SINR association does not address the load balancing issue effectively and that the use of utility-maximization-based BS association is crucial when implemented with power control. As can be seen in Fig. 6 and Table III, the direct dual optimization approach is able to provide the best performance among all the methods, but at the cost of very high complexity. For comparison purposes, we also implement the max-SINR BS association under the powers optimized by the duality-based approach. Now, max-SINR performs well, as seen in Fig. 6 and Table III. This shows that the problem with the max-SINR algorithm is that it is unable to induce the correct power setting, in contrast to the DCD scheme. Fig. 8 shows the transmit power levels resulting from the various methods. It is observed that the methods with better performance are able to suppress the overly high transmit power of the macro BSs.
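For reference, the max-SINR baseline that the DCD scheme is compared against throughout this section reduces to a one-line rule once the average gains and transmit powers are fixed. The following sketch is an illustrative implementation; the array layout and the unit noise power are assumptions of this sketch, not the paper's simulation code.

```python
import numpy as np

def max_sinr_association(g, p, noise=1.0):
    """g[i, j]: average channel gain from BS j to user i; p[j]: transmit power.
    Each user picks the BS with the highest SINR, treating all other BSs'
    transmissions as interference."""
    rx = g * p[None, :]                       # received power from each BS
    total = rx.sum(axis=1, keepdims=True)     # total received power per user
    sinr = rx / (total - rx + noise)          # own signal over interference+noise
    return sinr.argmax(axis=1)                # chosen BS index per user

# Illustrative usage: 6 users, 4 BSs (1 macro with high power, 3 picos)
rng = np.random.default_rng(2)
g = rng.uniform(0.01, 1.0, size=(6, 4))
p = np.array([100.0, 1.0, 1.0, 1.0])          # the macro transmits much louder
print(max_sinr_association(g, p))             # most users latch onto the macro
```

The printout illustrates the load-balancing problem discussed above: with a loud macro BS, max-SINR tends to associate nearly everyone with it regardless of load.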
Further, Fig. 9 shows the percentages of users associated with the macro and pico BSs resulting from the various methods. Methods with better performance tend to have higher percentages of pico users, which illustrates the benefit of off-loading traffic from macro BSs to pico BSs. Combining the results from Fig. 8 and Fig. 9, we conclude that a combination of suppressing macro BS power for interference mitigation and off-loading to pico BSs for load balancing is the key to obtaining overall good system performance.

C. Joint BS Association and Beamforming

Consider again the same network topology, but for the MIMO case with 4 antennas at each of the macro and pico BSs and 2 antennas at each user. The two-stage BS association and WMMSE algorithm is compared with the max-SINR BS association under maximum power plus per-cell WMMSE in Fig. 10 and in Table IV. The number of candidate users (the $S_j$ parameter) in the two-stage method is chosen to be 4, 6, and 8. It is observed that the two-stage method can substantially improve upon the max-SINR BS association: the 50th-percentile rate is almost doubled when $S = 8$. We also observe that the performance of the two-stage method improves with larger $S$, but the improvement beyond $S = 8$ is marginal for this case with 4 transmit antennas.

Fig. 11: CDF of user rates for joint BS association and beamforming: two-stage method vs. WMMSE [19], for a network with 3 macro BSs and 4 pico BSs.

We also wish to compare the two-stage method with the joint BS association and WMMSE method proposed in [19]. Because this method involves implementing the WMMSE algorithm across the entire network, its complexity is very high. In fact, running such an algorithm across a 7-cell network (with 28 BSs) is already impractical. Instead, Fig. 11 compares the two algorithms in a smaller network with 3 macro BSs, 4 pico BSs, and 105 user terminals. We observe in our simulation that the utility gains of the two-stage method and the WMMSE method of [19] are 17.82 and 23.05, respectively, as compared to the max-SINR scheme. Although the WMMSE method of [19] produces overall better network utility, we observe from Fig. 11 that the majority of users do not see much performance difference between the two. In addition, we observe in the simulation that the joint BS association and WMMSE method of [19] causes approximately 24 BS association switches on average for each beamforming update. About 1/4 of the users are involved in a BS handover in each time slot, which is not very practical. In contrast, BS association is completely fixed in the two-stage method, which is a clear advantage.

VII. CONCLUSION

This paper considers pricing-based BS association schemes for heterogeneous networks and proposes a distributed price update strategy based on a coordinate descent algorithm in the dual domain. The proposed BS association scheme can be seamlessly incorporated with power control and beamforming. In each of these cases, because BS assignment must be determined at a relatively larger time scale, we propose to implement BS association with respect to the expected average channel gains. The overall main insight of this paper is that load balancing is crucial in heterogeneous networks. Instead of assigning BSs according to SINR, a utility maximization and pricing strategy can be adopted in order to achieve balanced loads across the network, and the pricing update can be done efficiently in a distributed fashion.

APPENDIX A
PROOF OF PROPOSITION 1

Let $(\mu, \nu)$ be the optimized dual variables at convergence of the DCD algorithm.
Let $(X, k)$ be the primal solution recovered from the dual variables $(\mu, \nu)$ using (8), with tie-breaking if necessary, and subsequently setting $k_j = \sum_i x_{ij}$ as the number of users associated with each BS. Let $R$ be the corresponding user rates calculated by (2). A chain of equalities and inequalities, (22a)-(22f), then bounds the dual objective $g(\mu, \nu)$ in terms of the primal objective evaluated at $(X, R)$; the optimality condition on $x_{ij}$, (8), is used in deriving (22d), and the optimality condition on $\nu$, (12), is used in deriving (22e). Now, let $(X^*, k^*)$ be the optimal solution to problem (5), and let $R^*$ be the resulting user rates. By weak duality, it always holds that $g(\mu, \nu) \ge f_o(X^*, R^*)$. Combining this result with (22f), we prove the claim.

APPENDIX B
NEWTON'S METHOD FOR DOWNLINK POWER CONTROL

In this appendix, we describe a Newton's method for solving the power optimization problem of maximizing the network log utility. Assuming fixed user association $X$ (and accordingly $k_j = \sum_i x_{ij}$), the optimization problem is:

$$\underset{p}{\text{maximize}} \quad \sum_j \sum_i x_{ij} \log\left(R_{ij}(p, k_j)\right) \qquad (24a)$$
$$\text{subject to} \quad 0 \le p_j \le \bar{p}_j, \quad \forall j \qquad (24b)$$

Let $f_{\text{power}}(p)$ denote the objective function above. Introducing an auxiliary parameter $r_{ij}$, the first-order and second-order partial derivatives of $f_{\text{power}}(p)$ with respect to $p_j$ can be written in closed form. Following the heuristic in [37], we only use the diagonal entries of the Hessian matrix in Newton's method in order to reduce the computational complexity of inverting the Hessian. In this case, the Newton step becomes $\Delta p_j = -\left(\partial f_{\text{power}}/\partial p_j\right) / \left(\partial^2 f_{\text{power}}/\partial p_j^2\right)$. To ensure an incremental updating direction, we further modify the Newton step so that it always points in an ascent direction. The overall algorithm then updates all $p_j$'s through the projected update $p_j \leftarrow \left[p_j + \alpha_{\text{nt}}\, \Delta p_j\right]_0^{\bar{p}_j}$, where $\alpha_{\text{nt}}$ is the step size, which can be determined by backtracking line search [38].
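A minimal sketch of the resulting projected, diagonal-Newton update follows. Since the rate model and the closed-form derivatives are not reproduced above, the sketch takes generic gradient and diagonal-Hessian callables and demonstrates them on a toy concave utility; taking the magnitude of the second derivative to keep an ascent direction is likewise an assumption of this sketch, not necessarily the exact modification used in the paper.

```python
import numpy as np

def diag_newton_step(p, grad, hess_diag, p_bar, f, alpha0=1.0, beta=0.5):
    """One projected Newton step using only the diagonal of the Hessian.
    grad, hess_diag, f are callables of p; p_bar is the per-BS power cap."""
    g = grad(p)
    dp = g / np.abs(hess_diag(p))           # |.| keeps an ascent direction
    alpha, f0 = alpha0, f(p)
    while True:                             # backtracking (Armijo) line search
        p_new = np.clip(p + alpha * dp, 0.0, p_bar)   # project onto [0, p_bar]
        if f(p_new) >= f0 + 1e-4 * alpha * g @ (p_new - p) or alpha < 1e-8:
            return p_new
        alpha *= beta

# Toy utility: maximize sum_j log(1 + p_j) - 0.1 * sum_j p_j, 0 <= p_j <= 10
f  = lambda p: np.sum(np.log1p(p)) - 0.1 * np.sum(p)
g  = lambda p: 1.0 / (1.0 + p) - 0.1
hd = lambda p: -1.0 / (1.0 + p) ** 2

p = np.zeros(4)
for _ in range(20):
    p = diag_newton_step(p, g, hd, p_bar=10.0, f=f)
print(p)   # converges near p_j = 9, where 1/(1+p_j) = 0.1
```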
Practical experience commissioning MRI-compatible tandem and ring applicators for use with the Bravos HDR afterloader

Abstract

Five complete MR-conditional ring applicator sets, including fifteen tandems, and two additional rings were commissioned at an institution intending to use them in an MRI planning environment with a Bravos HDR brachytherapy remote afterloader. Channel length, radiograph, autoradiograph, ring offset, and treatment interrupt measurements were performed, and the applicators were assessed in both CT and MRI. During commissioning, one ring was found to be defective and was returned to the manufacturer for a replacement. The eventual complete applicator suite (including the replacement ring) was found to follow the manufacturer-provided specifications, including those delineated in vendor-provided 3D virtual models and those defined within the manufacturer's instructions for use documentation. Based on this work, an offset correction of −0.4 cm will be used for all tested rings using the Bravos system's internal distal dwell position correction feature during treatment preparation. This study reiterated the requirement for careful commissioning of each applicator intended for clinical service, considering the intended use and the planned clinical environment and work processes.

INTRODUCTION

The Varian Bravos high-dose-rate (HDR) brachytherapy remote afterloading system (Varian Medical Systems, Inc., Palo Alto, CA) was released commercially in 2018. The platform's major characteristics have been previously characterized in the literature. 1 A suite of MRI-compatible tandem and ring applicators was purchased for use with the Bravos system, with the intent to use MRI data for treatment planning acquired with the implant in place (denoted here as "MRI-based" planning 2 ). 5,6

Prior to clinical use, testing was completed to define basic physical characteristics of the applicators. AAPM's TG-303 report contains guidance on the use of MRI in HDR brachytherapy, 7 and considers MRI to be the gold standard for target volume delineation in gynecologic HDR applications, while high-resolution CT is considered the gold standard for geometric accuracy. Accurate three-dimensional applicator reconstruction is essential for high-quality treatment planning, and the measurements described here support confidence in reconstruction and subsequent dose calculations. Additionally, offset characterization is described, and a discussion is included regarding how these offset measurements may be used in clinical practice.
METHODS

Five complete Varian MRI-compatible ring applicator sets were purchased for use with a Varian Bravos HDR afterloader. The applicators are composed of polyetheretherketone (PEEK) and titanium, and are deemed by the manufacturer to have MR conditional status. Two complete 45° ring sets (part number GM11010490), two complete 3D interstitial 60° ring sets (GM11010190), and one complete interstitial 90° ring set (GM11010290) were purchased. Each set included the ⌀30 mm × 36 mm ring probe, three intrauterine tandems of different lengths, a centering joint with clamping screw, a distance cap, and a disassembly pin. The three interstitial sets included needle collectors. In addition to the complete sets, two additional ⌀26 mm × 32 mm ring probes were also purchased with distance caps. Internally, duplicate components were denoted "Set A" and "Set B." Assessments regarding channel length, radiographs, autoradiographs, ring offset distances, treatment interrupts, CT data with vendor-provided solid models, and MRI data were completed prior to releasing the applicators for clinical use. Images of the commissioned applicators are included in Figure 1.

Channel length assessment

The length of each applicator channel was assessed using both Varian's Length Assessment Device (LAD) and the afterloader's internal length measurement system. Both measurement tools must be used with transfer guide tubes. The LAD has demarcations at 0.5 cm intervals, and applicator channel length estimates were acquired to the nearest 1 mm. On the afterloader, plans were created to send the source to a distal position of 131.9 cm (the furthest possible distal dwell position for a nominal 132 cm total-length applicator and transfer guide tube combination). The dummy and source were sent to the most distal position on each applicator. For the Bravos platform, the system reports a total required shift for each applicator and transfer guide tube combination. The mean correction magnitude for the distal-most dwell position was recorded for each combination of transfer guide tube and applicator, with no push test completed during these tests.

Radiographs and autoradiographs

Radiographic images were taken of the applicators using the imaging systems integrated on a Varian TrueBeam linear accelerator. The kV on-board imaging system (50 kV, 40 mA, 40 ms, small focal spot size) was used to acquire images with the applicators nominally at 50 cm from isocenter and in contact with the imager to assess for voids or other defects. Radiographs of the tandem applicators were also acquired using EBT3 radiochromic film (Ashland Inc., Bridgewater, NJ), placed at 100 cm SDD/SSD. The film was fixed to the tandem applicators immediately above the position of the cervical stop. A 12 MeV electron beam was used to deliver 600 MU to the film with the applicators taped in place in order to mark the outer dimensions of the applicators on the film. An uncoded marker wire was placed in each tandem for these irradiations. The film was then irradiated on the Bravos system for autoradiographic documentation, programming source positions at 1 cm intervals with a nominal 8-s dwell time for each position (assuming a 10 Ci source). The Bravos tandem film irradiation geometry is shown in Figure 2.
Offset distance quantification

Dwell position accuracy, including within the ring applicator, is necessary for robust treatment planning. 10 The required positional offset was measured and verified during this commissioning process. Preliminary measurements were completed following Varian's recommendations included in their instructions for use documentation, 11 and then the initial values from this process were used to verify the final offset. Radiochromic EBT3 film was used to acquire radiographs and autoradiographs of the rings following the same process as used for the tandems, as described in the previous section. Film was attached to the applicators parallel to the ring plane, and the rings were placed directly on the kV imaging panel of a Varian TrueBeam linear accelerator, positioned at 100 cm SDD/SSD, with uncoded marker wires in place. A 12 MeV electron beam was used to deliver 800 MU to the film in order to mark the outer dimensions of the applicators on the film. Without moving the film, autoradiographs were then acquired using the Bravos afterloader with a programmed source position at the distal-most allowable position (131.9 cm), with a short dwell time (0.4 s nominal time for a 10 Ci source), and then subsequent dwells located in 1 cm increments (each nominally 8 s) to encompass the entire ring. Note that the PEEK/titanium ring applicators used in this study feature a closed circular design. The outer dimension of the ring's distal end cannot be marked directly on the film, as with similar titanium open rings. The outer dimension of the distal-most end of the lumen must instead be determined via the radiograph. The marking of the distal-most dwell position, as noted on the radiograph, was reinforced using a thin-tipped permanent marker. The autoradiographs were then scanned using an EPSON 10000XL white-light flatbed document scanner (Epson America, Long Beach, CA) and imported into the Eclipse Treatment Planning System (Varian). The angle between the tip of the lumen and the center of the radiation source footprint, with respect to the center of the ring, was measured and used to calculate the path length and average offset correction for each ring, using the equation from Varian's instructions for use documentation, in which r is the radius of the ring, α is the angle between the tip of the lumen and the center of the radiation source footprint, and SCDT is the source-center distance to the tip of the source cable. For Bravos, SCDT is 0.25 cm. The equation was used to follow Varian's procedure to find the shift distance by comparing the planned position to the delivered position. This shift distance was verified using repeated radiograph/autoradiograph measurements following the previously described procedure.

F I G U R E 2 Tandem applicators with EBT3 film in place are shown in position for autoradiograph acquisition.
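The offset equation itself did not survive reproduction in this copy. A plausible reconstruction from the quantities defined above, assuming the correction equals the arc length from the lumen tip to the measured source-footprint center less the source-center-to-cable-tip distance, is

$$\text{offset} = r\,\alpha - \text{SCDT}$$

with $\alpha$ in radians and, for Bravos, $\text{SCDT} = 0.25$ cm. As an illustrative check, taking $r \approx 1.5$ cm for a ⌀30 mm × 36 mm ring, an offset magnitude of 0.4 cm corresponds to $\alpha \approx 0.43$ rad (roughly 25°). This form is an assumption for illustration; the authoritative expression is the one printed in Varian's instructions for use documentation. 11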
Ring interrupt testing

The Bravos system sends the source wire out to the distal-most programmed position, then successively pulls the wire back for the next-most distal position, repeating this process until all dwells have been treated. If treatment is interrupted, the source wire will only be sent out to the remaining distal-most dwell position to restart treatment. There was a concern that this process could negatively impact positional accuracy in interrupted ring treatments due to snaking of the wire within the circular lumen of the rings. To verify that treatment interrupts would not negatively impact positioning accuracy, interrupt testing was completed on all rings using the same radiograph/autoradiograph process as previously described, including the 4 mm distal offset, but with a treatment interrupt initiated at the worst possible point: immediately after the first significant (8-s) dwell began at the 131.4 cm position (immediately following the completion of the 0.4-s dwell at the 131.9 cm position).

CT data and treatment planning system

Applicators were imaged using a GE LightSpeed RT16 scanner (General Electric Healthcare, Milwaukee, WI) with a technique of 120 kV, 97 mA, 0.625 mm slice thickness, and a field of view of 25.0 cm (the CT protocol currently used for gynecologic HDR patient imaging). Applicators were positioned with their long axes approximately parallel to the CT imaging axis, fixed such that the curved portion of the applicators was suspended in air using a container of dry rice. The CT data were imported into the treatment planning system for review. The tandem and ring applicators are intended to be used clinically with the vendor-provided digital solid models in the treatment planning system, BrachyVision (Varian) v. 15.5. The solid model files were downloaded from the vendor's document library on myVarian.com and imported into BrachyVision. The solid models overlaid with the applicator CT data were reviewed by an experienced certified medical dosimetrist and a medical physicist.

MRI data

All the components included in this report are noted as conditionally approved for MRI by the vendor, and they received approval from the institution's MR safety team prior to the start of commissioning measurements. MRI artifacts and distortion may be influenced by several variables, including array coils, pulse sequences, gradient distortion corrections, and field strength, so MR imaging is a recommended commissioning step prior to applicator use in patients. Phantoms using rice to place applicators in a suspended position will not generate a signal in MRI, and water is prone to vibrational effects and exhibits a long settling time, so a custom gel phantom was constructed (Figure 3). Four rings and four tandems were imaged suspended in the phantom, in a positional geometry designed to be similar to future patient treatments. The gel was created using ratios of 1 L distilled water, 30 g agar-agar powder, and 5 mL of 0.1 CuSO4 solution, following the process described by Fagerstrom and Kaur, 12 based on work by Haack et al. 13 and Mitchell et al. 14
The applicators were protected using a thin layer of plastic wrap before pouring the agar solution into the phantom base. This plastic introduced an air gap immediately surrounding the applicators, though this volume was minimized to the extent possible. The phantom was scanned using a 1.5 T MAGNETOM Aera scanner (Siemens Healthcare, Erlangen, Germany) with body flex receiver coils, with the MRI protocol developed for gynecologic HDR patients. The center of the phantom was positioned approximately at MRI scanner isocenter, and sequences included 3D, T2-weighted, 1-mm isotropic pixel size scans at 440 Hz sequence readout.

RESULTS

Channel length assessment

The channel length for each applicator was assessed using both the LAD and the internal afterloader length measurement. Repeated measurements with the LAD indicated each tandem and transfer guide tube combination had a length of 132.0 cm ± 0.0 cm, and each ring and transfer guide tube combination had a length of 132.1 cm ± 0.1 cm, with the reported uncertainty being the Type A uncertainty associated with repeated measurements performed with each applicator and available transfer guide tube combination. It is noted that the Length Assessment Device has some amount of play within the ring applicators. For the afterloader length measurements, for all deployments, the afterloader reported between 0.0 and 0.2 cm absolute shift from the planned value of 131.9 cm for all transfer guide tube/applicator combinations. The mean correction magnitude for the most distal dwell position reported by the afterloader, averaged over all deployments, was 0.05 cm ± 0.06 cm, or 0.1 cm ± 0.1 cm reported to the correct number of significant figures.

Radiographs and autoradiographs

kV radiographic images of the applicators acquired using the on-board imaging capabilities of a linear accelerator revealed no obvious defects or voids. Images from this process are included in Figures 4 and 5. Autoradiographs, including demarcation using calibrated marker wires, were compared to manufacturer-provided dimensions for all the tandem probes. 17 For the uncoded Varian marker wires, the center of the second-most distal marker indicates the center of the first programmable source position, taking into account the mandatory 1 mm gap between the end of the source channel and the most distal programmable dwell position. Markers in the marker wire are then spaced 1 cm apart for all subsequent positions. Varian indicates that the center of the source is 2.5 mm from the end of the source wire. The average distance from the outer tip of the applicators to the distal-most inner lumen point was measured to be 2.3 mm ± 0.1 mm, with the reported uncertainty being the Type A uncertainty associated with repeated measurements. For these measurements, a distance of 2.2 mm was expected from the tip of the external point of the applicator to the inner lumen as demarcated by the distal-most marker wire, with source dwells aligning with subsequent marker positions. The film images confirmed this geometry, as seen in Figure 6.
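The Type A uncertainties quoted above follow from simple repeated-measurement statistics. A minimal sketch with hypothetical LAD readings (not the measured data):

```python
import statistics

readings_cm = [132.0, 132.1, 132.0, 132.1, 132.0]  # hypothetical repeated LAD readings

mean = statistics.mean(readings_cm)
s = statistics.stdev(readings_cm)          # sample standard deviation
u_a = s / len(readings_cm) ** 0.5          # Type A standard uncertainty of the mean

print(f"{mean:.2f} cm +/- {u_a:.2f} cm (Type A, n={len(readings_cm)})")
```

Whether the quoted value is the sample standard deviation or the standard uncertainty of the mean is a reporting choice; the sketch prints the latter.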
Offset quantification

After repeated measurements, four of the seven rings had initially measured offset values of −0.4 cm, and three of the seven rings had initially measured offset values of −0.5 cm. All rings were verified for a 4 mm shift based on advice from the Varian clinical implementation specialist to use the smallest absolute correction, assuming verification measurements indicated that the lesser-magnitude shift rendered clinically acceptable results (Sharon Thompson, M.S., in-person training communication, September 29, 2020). This verification process was completed by acquiring film radiographs with the marker wire in place, then irradiating the film on the Bravos system for autoradiographic documentation using a 4 mm distal position correction. A 0.4-second dwell (nominal, for a 10 Ci source) was programmed at the 131.9 cm distal position, then the first significant dwell was positioned at 131.4 cm (a 0.5 cm pull-back distance). All subsequent, standard dwell positions within the ring were programmed for 8 seconds, and the first dwell position out of plane was programmed for 15 seconds. With this irradiation geometry, it is expected that if the chosen offset values are acceptable, then dark spots should appear evenly spaced between marker wire positions. It is possible that ring offset values may change over the course of the lifetime of the source wire, with the prospect that after several source deployments, the wire's elasticity may change and conceivably impact ring offset values. Therefore, ring offset values were verified with a used source wire (∼1000 source cycles) as well as with a new source wire immediately after a source exchange to compare the offset values over time. Offset values were found not to change based on the age and usage history of the source wire. The initially measured 4 mm offset value was confirmed by these verification measurements. Example film verification data are included in Figure 7. Films acquired with no applied offset are included in Figure 8 for comparison.

Ring interrupt testing

It was confirmed that with one or two interrupts, acceptable ring autoradiographs were achieved. As expected, more extensive interrupt testing (ten interrupts in a single run) showed clinically unacceptable results. Based on this result, it was decided at this institution that if multiple interrupts were necessary because of patient movement due to coughing, fatigue, discomfort, etc., or if several interrupts were required due to clearance issues, treatment would be discontinued.
CT data and treatment planning system

No applicator defects or voids were visible in the CT data. The distance between the outer distal-most tip of each tandem probe and the inner void indicating the distal-most point of the inner lumen was measured to be 2.0 mm ± 0.1 mm on the CT data, in agreement with the value provided by the manufacturer as noted in the figure from Varian's instructions for use documentation and the radiographs described earlier. An experienced certified medical dosimetrist and a medical physicist reviewed the Varian solid models overlaid on the CT data, and they deemed the models appropriate. Note that for the 45° ring sets ⌀26 mm × 32 mm, the center of the source channel in the solid model deviates from the corresponding position in the CT data by approximately 1 mm in the source channel that is out of plane. This is not expected to have clinical implications based on the expected loading of the rings used at the institution. Example images are included in Figure 9. The source channels in the 45° ⌀26 mm × 32 mm rings otherwise aligned well with the solid models, and all other applicators aligned very well for the entirety of the solid models. Note that there currently does not exist a "no cap" solid model for the 3D 60° or 90° rings. Varian confirmed this lack of a model at the time of commissioning (Sharon Thompson, M.S., in-person training communication, September 29, 2020).

MRI data

The MRI scans were rigidly registered with the CT dataset. The image registration was reviewed by an experienced certified medical dosimetrist and an authorized medical physicist and deemed appropriate. As recommended by Hellebust et al., 4 image registration focused on aligning the geometric position of the applicators between datasets. For this work, minimal distortion was detected, and the geometric displacement of applicators between rigidly registered CT and MR images was found to be <1 mm for all combinations of MR and CT images. See example images included in Figures 10 and 11. Note that the magnetic susceptibility of the PEEK and titanium applicators was expected to differ from that of the agar gel used in the phantom base, resulting in some expected susceptibility artifact. Air gaps between the agar gel and the plastic-wrapped applicators were discernible in both CT and MRI images, while susceptibility artifacts were present in MRI only. At this time, the MRI sequences used in this study appear to need some amount of adjustment if MR-only planning is to be completed, but MR imaging is acceptable for use with CT-based planning when MRI is used for target delineation only and is not relied on for applicator reconstruction.
DISCUSSION

This commissioning process for multiple MRI-compatible ring and tandem applicators reinforced the need for careful assessment of each individual applicator prior to clinical use. At the time of acceptance of the applicators, it was found that one of the 45° ring applicators had "bunched" and unevenly spaced dwell positions in repeated autoradiographic images. This applicator was sent back to the manufacturer following the manufacturer's parts return process, and all data included in this report are for the replacement part. Also found during this process were some slight differences in one ring channel position as denoted in the vendor's 3D applicator model library compared to the CT data, and the lack of a "no cap" 3D model for the 3D 60° and 90° rings. For these applicators, when measuring distances in BrachyVision from the applicator surface, it is important to ignore the cap included in the solid model if it was not implanted in the patient. This should be clear from the CT data. This underlines the importance of verifying the vendor-provided library based on its expected clinical use. The forthcoming TG-236 report is expected to contain recommendations on digital models used in intracavitary brachytherapy treatment planning.

Based on these measurements, it was found that the MRI sequences used for gynecologic brachytherapy cases would require some amount of adjustment if, in the future, it is desired to switch from planning using both CT and MRI acquired with the implant in place to MRI-only planning. For the institution's current workflow, including MRI used for target delineation and CT used for applicator reconstruction, the tested MRI sequences were found to be adequate. Note that at this institution, both a TG-43 calculation algorithm 18,19 and Varian's Acuros brachytherapy algorithm, a deterministic grid-based Boltzmann transport equation solver, are commissioned, following AAPM's TG-186 guidance. 20 The TG-43 algorithm is used for gynecologic brachytherapy planning, including cases involving tandem and rings. Should MRI-only planning be pursued in the future for cases involving the tandem and ring applicators, only the TG-43 algorithm would be appropriate without further characterizations.

F I G U R E 1 0 CT images of the gel phantom imported into the treatment planning system, with the manufacturer-provided digital 3D models of the appropriate applicators overlaid on the CT data. The CT data was rigidly registered to MRI.

F I G U R E 1 1 MR images of the gel phantom imported into the treatment planning system, with the manufacturer-provided digital 3D models of the appropriate applicators overlaid. Shown is the T2 3D sequence.

The ring offset distance characterization is consistent with other similar work.
Bellezzo et al. 1 used an imaging panel to capture Bravos source positioning within one of the rings used in the current work (45° ring, ⌀30 mm × 36 mm). They found a maximum deviation between measured and planned dwell positions of 3.2 mm within the ring, prior to applying any distal dwell correction. Baghwala and Boopathy used titanium, open-ended ring applicators with Bravos. With the titanium applicators, it is possible to distinguish the outer dimension of the ring (unlike with the PEEK/titanium rings used in this work), but the offset measurement process is the same. They tested 30°, 45°, and 60° rings using film and, for the three rings, found a minimum offset of 3.95 mm and a maximum offset of 6.42 mm across all dwell positions. They ultimately elected to use a single offset correction of 4.0 mm for all three rings.

Based on the results of the ring offset distance characterization process described in this work, the following internal usage process was adopted. The Bravos system is designed to send the active wire out only as far as the distal-most programmed position (in contrast to sending the active wire out to the distal-most possible position), but this commissioning process indicated that the pull-back process allowed more even spacing, and putting clinically relevant dwell times in the distal-most possible position resulted in two dwells that were bunched up and unevenly spaced. To ensure that the delivered dose distribution matches the planned dose distribution, it was decided to use the Bravos system's distal dwell position correction feature during treatment preparation. When planning with rings, a short dwell is placed at the distal-most position, which is 131.9 cm for these ring and transfer guide tube combinations (0.4 s for a nominal 10 Ci source). The source is then pulled back 0.5 cm for the first possible clinically usable position of 131.4 cm, and equal spacing of 0.5 cm is used for all subsequent dwells. The plan is planning-approved by dosimetry, reviewed by the physician, treatment-approved by physics, and sent to the console. Only a single plan is generated, in contrast to the two-plan process of some institutions. At the console, the physicist verifies that the distal dwell correction is turned on. During treatment preparation, on the afterloader touch screen, the physicist inputs a −4 mm position change for the ring channel only. Photographs of this process are included in Figure 12. Training for this process was led by the institution's lead brachytherapy physicist for the entire HDR team, and a hard copy of Figure 12 was posted within the HDR vault and at the HDR console.

F I G U R E 1 2 Illustrated process of using the ring sets with the Bravos afterloader touch screen during treatment preparation. For implants including a ring applicator, a distal correction of −0.4 cm is required. No other channel uses the distal position correction.
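The dwell bookkeeping just described (a 0.4-s dwell at 131.9 cm, a 0.5 cm pull-back, 0.5 cm spacing, and a −0.4 cm correction entered at the console) lends itself to a short sanity-check script. The function below is a hypothetical illustration of that arithmetic, not vendor software, and the sign convention for the correction is an assumption of this sketch.

```python
def ring_dwell_positions(distal_cm=131.9, pull_back_cm=0.5, spacing_cm=0.5,
                         n_clinical=8, console_correction_cm=-0.4):
    """Return (planned, delivered) dwell path lengths in cm for a ring channel.
    Planned: a short dwell at the distal-most position, then clinical dwells
    starting one pull-back proximal, at fixed spacing. Delivered: planned
    positions shifted by the distal correction entered at the console
    (negative taken here to mean a more proximal, i.e. shorter, path)."""
    planned = [distal_cm] + [round(distal_cm - pull_back_cm - k * spacing_cm, 1)
                             for k in range(n_clinical)]
    delivered = [round(p + console_correction_cm, 1) for p in planned]
    return planned, delivered

planned, delivered = ring_dwell_positions()
for p, d in zip(planned, delivered):
    print(f"planned {p:6.1f} cm -> delivered {d:6.1f} cm")
```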
Note that the workflow detailed above and illustrated in Figure 12 is based on the commissioning of the applicators included in this report and considers the specific workflows of this institution. As detailed in AAPM's TG-100 report, 21 a systematic approach can be used to quantify possible identified failure modes for an institution by evaluating the likelihood of occurrence, the severity of the effect if the failure mode is not caught, and the lack of detectability. Example failure modes and effects analyses (FMEAs) have been published regarding HDR gynecologic brachytherapy (see, for example, Mayadev et al. 22 and Richardson, Scanderbeg, and Swamidas 23 ). These analyses demonstrate the need for the design of processes and quality management specific to each clinic. The ring offset process described above illustrates how a set of commissioning measurements may be incorporated into a given institution's practice; however, each user is encouraged to assess their individual commissioned applicators in the context of their intended clinical use and their institution's clinical workflows.

CONCLUSION

Following the processes described here, PEEK and titanium gynecologic HDR applicators, seven rings and fifteen tandems, were commissioned for use with a Bravos HDR brachytherapy remote afterloader. Applicator performance was found to be clinically acceptable considering the clinical environment and workflows in which the applicators are intended to be used, including MRI-based planning with the MRI acquired with the implant in place. Measurements included channel length, radiographs, autoradiographs, ring offset distances, treatment interrupts, and CT and MRI imaging. A process for applying the measured ring offset correction was established and described. This process leverages the Bravos system's internal distal dwell position correction feature, in contrast to the two-plan process used at some institutions to account for the ring offset. AAPM's forthcoming TG-236 report will describe recommended practices for using digital models in intracavitary brachytherapy treatment planning. Prior to the task group report's publication, the vendor-provided 3D digital models of the tested applicators were reviewed and found to be clinically acceptable in both MRI and CT datasets. It is noted that commissioning was completed in the context of specific planned work processes within this institution's clinical environment. Safety analyses, including failure modes and effects analyses, may help guide decisions regarding required commissioning steps and acceptance criteria at other institutions.

ACKNOWLEDGMENTS

Sincere thanks to Justin Cantley, PhD; Kathryn Hedrick, MS; Jeffrey Marotta, RT(T), CMD; Tracy Sherertz, MD; Sharon Thompson, MS; and Markus Van Achte, RT(MRI)(CT).

CONFLICT OF INTEREST STATEMENT

The author declares that there are no conflicts of interest.

F I G U R E 3 Photograph of the phantom for MR imaging of the rings. A close-up view of the prepared applicator handles including protective blue cleaning caps is shown in (a), the phantom prepared for gel is shown in (b), a close-up view of the applicator tips is shown in (c), and the completed phantom is shown in (d). From top to bottom, the four applicator groupings are: (1) Set A 45°, ⌀26 mm × 32 mm ring with its associated ring cap and the 45° × 80 mm tandem; (2) Set A 45°, ⌀30 mm × 36 mm ring with its associated ring cap and the 45° × 60 mm tandem; (3) Set A 60°, ⌀30 mm × 36 mm ring with its associated ring cap and the 60° × 40 mm tandem; and (4) the 90°, ⌀30 mm × 36 mm ring with its associated ring cap and the 90° × 60 mm tandem.
F I G U R E 4 Radiographs of the ring applicators, with no marker wires in place. Applicators are arranged, from top to bottom: Set A 45° ring ⌀26 mm × 32 mm, Set B 45° ring ⌀26 mm × 32 mm, Set A 45° ring ⌀30 mm × 36 mm, Set B 45° ring ⌀30 mm × 36 mm, Set A 3D 60° ring ⌀30 mm × 36 mm, Set B 3D 60° ring ⌀30 mm × 36 mm, and 3D 90° ring ⌀30 mm × 36 mm. The rings are arranged lying directly on the imager in (a), and propped such that the ring surface is flush with the imager surface in (b).

F I G U R E 5 Radiographs of the tandem applicators, with no marker wires in place. Applicators are arranged, from left to right: 40 mm, 60 mm, and 80 mm. Shown in (a) are the Set A 45° tandems, (b) the Set B 45° tandems, (c) the Set A 60° tandems, (d) the Set B 60° tandems, and (e) the 90° tandems.

F I G U R E 6 Radiographic and autoradiographic film data of the tandem applicators including the marker wire. Applicators are arranged as follows: (a) Set A 45° 40 mm, (b) Set A 45° 60 mm, (c) Set A 45° 80 mm, (d) Set B 45° 40 mm, (e) Set B 45° 60 mm, (f) Set B 45° 80 mm, (g) Set A 3D 60° 40 mm, (h) Set A 3D 60° 60 mm, (i) Set A 3D 60° 80 mm, (j) Set B 3D 60° 40 mm, (k) Set B 3D 60° 60 mm, (l) Set B 3D 60° 80 mm, (m) 3D 90° 40 mm, (n) 3D 90° 60 mm, and (o) 3D 90° 80 mm. The position of the markers on the marker wire from the radiograph was compared to the positions of the source dwells based on the autoradiograph, and the two were found to agree to within 1 mm for all dwells in all geometries tested. With the programmed dwells, it is expected that the dark spots should appear overlapping the marker wire positions.

F I G U R E 7 Radiographic and autoradiographic film data of the rings using a marker wire with the 4 mm offset applied: (a) Set A 45° ring ⌀26 mm × 32 mm, (b) Set B 45° ring ⌀26 mm × 32 mm, (c) Set A 45° ring ⌀30 mm × 36 mm, (d) Set B 45° ring ⌀30 mm × 36 mm, (e) Set A 3D 60° ring ⌀30 mm × 36 mm, (f) Set B 3D 60° ring ⌀30 mm × 36 mm, and (g) 3D 90° ring ⌀30 mm × 36 mm. With the programmed dwells, it is expected that if the applied offset values are correct, then dark spots should appear evenly spaced between marker wire positions.

F I G U R E 9 CT images viewed in the treatment planning system of a 45° ring set, ⌀26 mm × 32 mm, with no ring cap and no marker wire in place, in all three planes in (a) and magnified with distance indicated in (b). The manufacturer-provided digital 3D model is overlaid with the CT data. The center of the source channel out of the ring plane in the solid model is displaced from the corresponding position in the CT data by approximately 1 mm.
Downregulation of ceramide synthase 1 promotes oral cancer through endoplasmic reticulum stress

C18 ceramide plays an important role in the occurrence and development of oral squamous cell carcinoma. However, the function of ceramide synthase 1, a key enzyme in C18 ceramide synthesis, in oral squamous cell carcinoma is still unclear. The aim of our study was to investigate the relationship between ceramide synthase 1 and oral cancer. In this study, we found that the expression of ceramide synthase 1 was downregulated in oral cancer tissues and cell lines. In a mouse oral squamous cell carcinoma model induced by 4-nitroquinoline-1-oxide, ceramide synthase 1 knockout was associated with the severity of oral malignant transformation. Immunohistochemical studies showed significant upregulation of PCNA, MMP2, MMP9, and BCL2 expression and downregulation of BAX expression in the pathologically hyperplastic areas. In addition, ceramide synthase 1 knockdown promoted cell proliferation, migration, and invasion in vitro, while overexpression of CERS1 had the opposite effect. Ceramide synthase 1 knockdown caused endoplasmic reticulum stress and induced VEGFA upregulation. Activating transcription factor 4 is responsible for the VEGFA transcriptional upregulation caused by ceramide synthase 1 knockdown. In addition, the mild endoplasmic reticulum stress caused by ceramide synthase 1 knockdown could induce cisplatin resistance. Taken together, our study suggests that ceramide synthase 1 is downregulated in oral cancer, and that its loss promotes the aggressiveness of oral squamous cell carcinoma and chemotherapeutic drug resistance.

INTRODUCTION

Oral cancer is one of the most common malignant tumors in the head and neck region, and oral squamous cell carcinoma (OSCC) is the most common pathological type of oral cancer. 1 Approximately one hundred thousand new OSCC patients are diagnosed worldwide every year. 2 The main treatment for OSCC is combination therapy, including surgery, radiotherapy, and chemotherapy. The 5-year survival rate of early-stage patients is 55%-60%, while the rate of late-stage patients is only 30%-40%. 3,4 The overall survival of OSCC patients has not changed significantly in the last few decades. 5 Therefore, it is necessary to analyze the molecular mechanisms of the development of oral cancer and strive for new breakthroughs in the study of OSCC diagnosis and treatment.

Ceramide (CER) is a kind of sphingolipid with hydrophobic chains that participates in multiple physiological functions. CER is synthesized by six different ceramide synthases (CERS). CERS are located on the endoplasmic reticulum (ER) and are necessary for the synthesis of CER. Each CERS has different selectivity for the synthesis of endogenous CER with different fatty acid chain lengths. 6 Treatment with exogenous CER promoted differentiation and inhibited proliferation in a squamous cell carcinoma cell line. 7 In addition, the function of CERS in tumors has been increasingly studied in recent years. CERS can regulate cell apoptosis, cell cycle arrest, and cell senescence. Previous studies have demonstrated that in OSCC, CERS are involved in the regulation of apoptosis, 8 EGF receptor modulation, inhibition of neovascularization, 9 and enhancement of the anticancer actions of chemotherapy agents. 10
More importantly, Karahatay's study showed that C18 CER was the only CER species decreased in human head and neck squamous cell carcinoma (HNSCC) tissues, and the decreased C18 CER level was strongly correlated with higher overall stages of the primary HNSCC tumors. 11 C18 CER is mainly synthesized by CERS1. 12 Koybasi found that in HNSCC cells, overexpression of CERS1 resulted in impaired cell growth, which was related to telomerase activity and mitochondrial dysfunction. 12 Similarly, Senkal found that knockdown of CERS1 in HNSCC cells resulted in attenuation of apoptosis due to the repression of caspase-3 and caspase-9 activity. 13 Moreover, CERS1 is also linked to chemotherapy resistance. It has been consistently reported that knockdown of CERS1 significantly protected HNSCC cells from apoptosis induced by chemotherapeutic agents, including gemcitabine, doxorubicin, and cisplatin. 14,15 In response to cisplatin, CERS1 localized to mitochondria and induced mitophagy to promote cell death. 15

In the present study, we used a transgenic mouse model and OSCC cell lines to explore the functional role of CERS1 in OSCC. Our study showed that downregulation of CERS1 in OSCC tissues significantly correlated with poor prognosis. CERS1 loss of function significantly promoted OSCC occurrence and progression both in vivo and in vitro, and could induce endoplasmic reticulum stress (ER stress) to promote chemotherapy resistance.

RESULTS

CERS1 is related to the clinicopathological features and overall survival of OSCC patients

Our previous studies found that CER plays a very important role in the occurrence and development of head and neck tumors. 16 In addition, the expression of C18 CER was significantly related to oral cancer. 12 Igarashi 17 found that different CERS members exhibit a characteristic fatty acyl-CoA preference. CERS1, a transmembrane protein of the ER, catalyzes the biosynthesis of C18 CER. 18,19 Therefore, the expression of CERS1 might also influence oral cancer. To test this hypothesis, 48 patients with oral cancer were followed beginning in 2016. Cancer tissues and para-cancer normal tissues were collected. By RT-PCR, we found that the expression of CERS1 in oral cancer tissues was lower than that in normal tissues (P < 0.001, Fig. 1a-c). In addition, the patients with high CERS1 expression survived longer (P = 0.049, Fig. 1d). The patients with lower CERS1 expression had a higher N stage (P = 0.085, Table 1), T stage (P = 0.043, Table 1), and overall clinical stage (P = 0.004). The expression of CERS1 in non-malignant cells, such as HOK and DOK, was higher than that in oral cancer cell lines, including SCC25, CAL27, HSC-2, and HSC-3 (P < 0.05, Fig. 1e).

CERS1 knockdown promoted oral cancer in vitro

To verify the downregulation of CERS1 by siRNA interference in SCC25 and CAL27 cells, CERS1 expression levels were determined using RT-PCR and Western blot. As shown in Fig. 2a, CERS1 expression levels decreased to 35% and 42% after CERS1 knockdown.

Cers1 knockout promoted OSCC occurrence in vivo

To elucidate the role of Cers1 in OSCC development, we first established a Cers1 knockout transgenic C57BL/6 mouse model using the CRISPR/Cas9 system. We deleted 238 bp around the initiation codon in the first exon of Cers1. The Cers1 gene and the Gdf1 gene share the same exons and are expressed as a bicistronic mRNA. Similar to the transgenic mouse model established by Ginkel, 20 the present Cers1 knockout strategy had no effect on the protein level of Gdf1 (Supplementary Fig. 1a).
Also, consistent with Ginkel, 20 we observed that the Cers1−/− mice exhibited an exercise capacity defect and generalized tremor, which became more obvious with age.

An experimental model of 4NQO-induced OSCC was established as described previously. 21 The consumption of 4NQO-containing water was comparable between the two groups (Supplementary Fig. 1b). The average tumor lesion size in the Cers1−/− group was (7.82 ± 7.69) mm². However, the average tumor lesion size in the Cers1+/+ group was smaller, at (3.5 ± 5.6) mm² (P = 0.033, Fig. 3b). In addition, the number of lesions per mouse in the Cers1−/− group was significantly higher than that in the Cers1+/+ group (2.23 ± 1.02 vs. 1.88 ± 0.72, P = 0.017, Fig. 3c). Histopathological staining showed that after treatment with 4NQO, the Cers1−/− mice exhibited different stages of oral carcinogenesis than the Cers1+/+ mice. All samples were divided into four groups: normal epithelium, mild-moderate dysplasia, severe dysplasia/carcinoma in situ, and carcinoma (Supplementary Fig. 2c). The results of the histopathological analysis of the Cers1−/− group and the Cers1+/+ group are shown in Table 2. A total of 36% (9/25) of mice in the Cers1−/− group developed tongue squamous cell carcinoma, compared with 12% (3/25) of mice in the Cers1+/+ group. These results indicated that Cers1 knockout enhanced 4NQO-induced tongue carcinogenesis.

Proliferating cell nuclear antigen (PCNA) is only present in normal proliferating cells and tumor cells and is closely related to the synthesis of DNA. 22 It plays an important role in cell proliferation and is a key marker of abnormal cell proliferation. 23 Immunohistochemical analysis demonstrated that PCNA was mainly expressed in the nucleus. By semiquantitative assessment of IHC staining, the rate of positive nuclear PCNA expression was found to be markedly higher in the Cers1−/− group (P < 0.05, Fig. 3d, Supplementary Fig. 2d). Matrix metalloproteinase-2 (Mmp2) 24 and matrix metalloproteinase-9 (Mmp9) 25 are important proteolytic enzymes that hydrolyze the extracellular matrix and participate in the processes of tumor growth and metastasis. To investigate cell invasiveness, Mmp2 and Mmp9 expression was assessed by immunohistochemistry. Mmp2 and Mmp9 were located in the cytoplasm. In the Cers1−/− group, high cytoplasmic expression of Mmp2 and Mmp9 was observed (P < 0.05, Fig. 3e, Supplementary Fig. 2d).

BAX, a member of the BCL2 family, is a core regulator of the intrinsic apoptosis pathway. 26 BAX can affect the permeability of the outer mitochondrial membrane and the subsequent initiation of the caspase cascade, which is considered a key step in apoptosis. 27 The association of BAX with BCL2 has been demonstrated through coimmunoprecipitation assays. 28 Immunohistochemistry results revealed that Bax and Bcl2 were localized predominantly in the cytoplasm. The expression levels of Bax in the Cers1−/− group were lower than those in the Cers1+/+ group (P < 0.05, Fig. 3e, Supplementary Fig. 2d). In contrast, the expression of Bcl2 was higher (P < 0.05, Fig. 3e, Supplementary Fig. 2d).

CERS1 knockdown increased the expression of ER stress markers and VEGFA

The ER is an important organelle for protein synthesis, folding, and secretion in eukaryotic cells. ER stress is a cellular stress state caused by protein-folding dysfunction of the endoplasmic reticulum, induced endogenously or exogenously. The expression of ER stress markers, including binding immunoglobulin protein (BIP), ATF4, and C/EBP homologous protein (CHOP), was tested by RT-PCR and Western blot.
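Relative expression from RT-PCR experiments of this kind is conventionally computed with the Livak 2^-ΔΔCt method. The sketch below is a generic illustration with hypothetical Ct values and a hypothetical GAPDH reference, not the study's data.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative expression: target gene normalized to a
    reference gene, treated condition relative to control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: BIP in siCERS1 vs. siNC cells, GAPDH as reference
fold = ddct_fold_change(ct_target_treated=24.1, ct_ref_treated=17.8,
                        ct_target_control=25.6, ct_ref_control=17.9)
print(f"BIP relative expression (siCERS1 vs siNC): {fold:.2f}-fold")
```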
BIP is a very important molecular chaperone in ER stress that can preferentially bind to misfolded proteins in the ER. The expression of BIP in CERS1 knockdown cells was higher than that in the control group (P < 0.05, Fig. 4a). ATF4 is involved in the regulation of many biological processes and plays an important role in ER stress. 13 ATF4 was significantly upregulated after CERS1 knockdown (P < 0.05, Fig. 4a, b). ATF4 can bind to the CHOP promoter under ER stress conditions and induce the transcription of CHOP and related genes to promote the correct folding or degradation of residual proteins. 29 Consistent with the expression of the other ER stress markers, CHOP was also highly expressed in CERS1 knockdown cells (P < 0.05, Fig. 4a). RT-PCR of mouse tongue lesion tissues showed the same results (P < 0.05, Supplementary Fig. 2e).

Angiogenesis is a necessary process for tumor growth, invasion, and metastasis. 30 Angiogenesis refers to the formation of new, abnormal blood vessels in tumors, which provide nutrition and oxygen for tumor growth. Moreover, tumor cells secrete vessel-growth-promoting factors, which positively regulate this process. 31 Vascular endothelial growth factor A (VEGFA) is an effective endothelial cell-specific regulator of angiogenesis that influences tumor growth and metastasis. 32 It has been shown that ATF4 can regulate VEGFA transcription under ER stress in cancer cells. 33 Our results showed that Vegfa was located in the cytoplasm in mouse tongue specimens. Strong expression of Vegfa was observed in the Cers1−/− group mice (P < 0.05, Fig. 4c, Supplementary Fig. 2e). In addition, we found that VEGFA expression was higher in CERS1 knockdown cells in vitro (P < 0.05, Fig. 4a, b).

To prove that CERS1 knockdown led to increased VEGFA expression through ATF4, we used a luciferase reporter plasmid in which a VEGFA 5ʹ-flanking sequence (−2304 to +65 relative to the transcription initiation site) was fused to the firefly luciferase coding sequences in the pEZX-FR01 vector. The Renilla luciferase in the plasmid had its own replication origin (SV40), which was used as an internal control. The pEZX-VEGFA plasmid (Supplementary Fig. 2f) was cotransfected into CAL27 cells with different siRNAs. The activities of both firefly luciferase and Renilla luciferase were measured using the dual luciferase assay. As shown in Fig. 4d, a low level of VEGFA promoter activity was detected in ATF4 knockdown cells, whereas a high level of VEGFA promoter activity was detected in CERS1 knockdown cells. In addition, in cells in which both ATF4 and CERS1 were knocked down, VEGFA promoter activity was also low. These results suggested that the increased VEGFA promoter activity under low CERS1 expression is mediated by ATF4 upregulation.

CERS1 knockdown led to mild ER stress causing cisplatin resistance

Cisplatin (referred to as DDP in the group names) is a first-line drug for OSCC treatment. The high incidence of drug resistance is the main factor limiting the clinical efficacy of cisplatin. 34 Recent studies have shown that high CERS1 expression renders cells more sensitive to cisplatin. 35 As we found that CERS1 is usually downregulated in OSCC samples (Fig. 1), we explored the relationship between CERS1 knockdown and cisplatin resistance. A previous study showed that the ER adapts to endogenous and exogenous stressors by expanding its protein-folding capacity and by stimulating protective processes. 36
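The dual-luciferase readout reduces to a simple normalization: firefly activity (promoter-driven) divided by Renilla activity (internal control), then expressed relative to the negative-control siRNA. The counts below are hypothetical placeholders, not the study's measurements.

```python
# Hypothetical raw luminescence counts: (firefly, renilla) per condition
raw = {
    "siNC":           (12000, 9800),
    "siCERS1":        (25500, 9500),   # higher VEGFA promoter activity
    "siATF4":         (6100, 10100),   # lower promoter activity
    "siCERS1+siATF4": (7000, 9700),
}

ratios = {k: ff / rl for k, (ff, rl) in raw.items()}  # normalize to Renilla
control = ratios["siNC"]
for k, r in ratios.items():
    print(f"{k:>15}: relative VEGFA promoter activity = {r / control:.2f}")
```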
Triggering continued mild ER stress to sustain ER homeostasis is considered a treatment for some diseases. 37 Tunicamycin (TM), a UDP-N-acetylglucosamine-dolichol phosphate N-acetylglucosamine-1-phosphate transferase inhibitor, can block the initial step of glycoprotein biosynthesis in the ER. TM is now well known as a classical ER stress inducer. 38 Mild ER stress can be induced by treating cells with low-dose TM (1 μg·mL−1, Sigma) for 2 h, which was used as a positive control. There were four groups in the experiment, which underwent the following treatments: the siNC group, transfected with NC siRNA and treated with 1% DMSO; the siNC+DDP group, transfected with NC siRNA and treated with cisplatin (40 nmol·L−1, Sigma) for 12 h; the siNC+TM+DDP group, transfected with NC siRNA, pretreated with TM (1 μg·mL−1) for 2 h and then treated with cisplatin for 12 h; and the siCERS1+DDP group, with CERS1 knockdown and then treated with cisplatin for 12 h. The expression of BIP, ATF4, and CHOP was higher in the siNC+TM+DDP and siCERS1+DDP groups than in the siNC and siNC+DDP groups (P < 0.05, Fig. 5a, b). In addition, the annexin V/PI double staining assay (Fig. 5c, Supplementary Fig. 1h) showed that cisplatin could significantly induce apoptosis. However, after treatment with TM, the number of apoptotic cells decreased significantly. Similar to the anti-apoptotic effect of TM, knockdown of CERS1 significantly decreased the apoptosis ratio after DDP treatment. In contrast, overexpression of CERS1 resulted in sensitization to cisplatin. Cell viability testing also supported that knockdown of CERS1 resulted in cisplatin resistance, whereas overexpression of CERS1 had the opposite effect (Supplementary Fig. 1c). These findings indicated that knockdown of CERS1 could trigger mild ER stress and induce cisplatin resistance.

DISCUSSION

In this study, we investigated the roles and mechanisms of CERS1 in OSCC. Our data suggested that decreased levels of CERS1 might play important roles in oral cancer. CERS1 expression was lower in OSCC tissues than in controls. In addition, patients with lower expression of CERS1 in tumor tissues had a worse prognosis. Further data showed that downregulation of CERS1 resulted in the inhibition of cell apoptosis and the promotion of cell proliferation and invasion, which involved the induction of mild ER stress and modulation of the protein kinase R-like endoplasmic reticulum kinase (PERK)/eukaryotic translation initiation factor 2α (eIF2α)/ATF4 pathway in OSCC. These results suggested that decreased levels of CERS1 conferred a growth advantage to cancer cells. Our research group performed this research with siRNA-mediated CERS1 knockdown in OSCC. 14 In addition, knockout of Cers1 in mice was used to establish a mouse model of oral carcinogenesis. 4NQO is an aromatic amine heterocyclic compound that is often used as an inducer in oral cancer animal models. 39 Oral cancer induced by the 4NQO carcinogen ranges from simple epithelial hyperplasia to invasive cancer. This process is similar to the real human disease process, making it more suitable for capturing and exploring disease mechanisms. 40 In this study, Cers1 knockout mice were treated with 4NQO. This was also the first study to explore the effect of Cers1 on the occurrence and development of oral cancer in vivo. Cers1 knockout mice were more likely to develop oral cancer, which was consistent with the in vitro experiments and other studies. 41
CERS proteins are membrane proteins of the ER that synthesize (dihydro)ceramide (CER) by N-acylation of the (dihydro)sphingosine backbone. 42 Six family members, CERS1-CERS6, have been identified, each of which attaches acyl chains of different lengths to produce CER. CERS1 uses C18-acyl-CoA, which is related to autophagic cell death in oral cancer cells. 43 CER can reduce protein kinase B (AKT) activity by activating protein phosphatase 2A (PP2A), p38, 44 and protein kinase C (PKC), 45 and AKT in turn reduces the phosphorylation level of BCL2. 44,46 Finally, the decreased level of BCL2 and the lowered ratio of BCL2 to BAX lead to cell death. Our immunohistochemistry results confirmed this effect: reducing CERS1 inhibits the synthesis of C18 CER, relieving the suppression of AKT and BCL2 and thereby inhibiting apoptosis.

The ER is an organelle widely present in eukaryotic cells that regulates protein synthesis, folding, and aggregation after synthesis. An accumulation of misfolded proteins in the lumen and an imbalance of cytoplasmic Ca2+ caused by various factors can lead to ER dysfunction, which in turn induces a series of changes in related protein expression and cell phenotype, a condition called ER stress. 47 ER stress mainly involves three ER transmembrane effector proteins: inositol-requiring enzyme 1, PERK, and activating transcription factor 6. When unfolded proteins accumulate, BIP dissociates from these effector proteins and binds to unfolded or misfolded proteins to assist their proper folding. 48 Activated PERK further phosphorylates the downstream eIF2α and activates ATF4 and CHOP. 49,50 PERK-ATF4 expression has been positively correlated with VEGF. 51 In our study, after knocking down/out CERS1, the expression of BIP, ATF4, CHOP, and VEGFA was higher than that in the control group. VEGFA promoter activity was increased by CERS1 knockdown; however, ATF4 knockdown abolished this effect. Therefore, downregulation of CERS1 promotes migration through ER stress, the PERK-ATF4 pathway, and VEGFA.

Chemotherapy is one of the main treatments for oral cancer. However, some tumors become resistant during the course of treatment, which greatly limits its efficacy. Junxia Min's research showed that CERS1 expression rendered cells more sensitive to cisplatin, carboplatin, doxorubicin, and vincristine. 35 In addition, dasatinib induces apoptosis by upregulating the expression of CERS1. 52 Different CERS1-related resistance mechanisms have been investigated; however, we are far from having a full understanding of all of them. In our study, CERS1 knockdown induced ER stress, a process that can remove misfolded proteins from the ER through the unfolded protein response to maintain ER homeostasis. If ER stress is not reversed, it leads to deterioration of cell function and cell death. 53 However, mild ER stress sustains ER homeostasis, which is an attractive strategy in cancer treatment. 54 In our study, TM, a classical ER stress inducer, was used as a positive control to explore the relationship between CERS1, mild ER stress, and drug resistance. Similar to the effect of low-dose TM, CERS1 knockdown caused mild ER stress and thereby reduced the apoptosis caused by cisplatin. Overall, downregulation of CERS1 plays a negative role in chemotherapy for oral cancer (Fig. 5d).

Cell proliferation assay
The cell counting kit-8 (CCK-8) method (Dojindo, China) was used to determine cell proliferation and viability. CAL27 and SCC25 cells (5 × 10³) were plated in 96-well plates. After siRNA transfection, cells were incubated with 10% CCK-8 for an hour.
Then, absorbance at 480 nm was read on a microplate spectrophotometer (Sigma, United States) to evaluate proliferation.

(Fig. 5 legend: a, b Treatment with low-dose TM and CERS1 knockdown had the same impact on cisplatin resistance in OSCC cells, that is, they induced mild ER stress (the expression of BIP and CHOP was mildly upregulated) and suppressed apoptosis (altered expression of BAX and BCL2). c Annexin V/PI double staining showed that cisplatin could induce apoptosis; treatment with TM or CERS1 knockdown contributed to cisplatin resistance, whereas CERS1 overexpression resulted in sensitization to cisplatin. d Schematic diagram of the present study. For a and c, one-way ANOVA and Dunnett's t test were used, with the DDP group as the reference. Note: *P < 0.05; **P < 0.01; ***P < 0.001.)

The colony formation assay was also used to determine cell proliferation and viability. CAL27 and SCC25 cells (1 × 10³) were seeded in 6-well plates. After 10 days of growth, cells were fixed with 4% paraformaldehyde (Solarbio, China) for 20 min and stained with 0.2% crystal violet (Solarbio, China) for 5 min. The EdU DNA Proliferation in vitro Detection kit (GeneCopoeia, United States) was used to evaluate cell proliferation. CAL27 and SCC25 cells in 48-well plates (3 × 10⁴ cells) were incubated with 10 μmol·L−1 EdU in DMEM for 2 h. After fixation and membrane permeabilization, iClick reaction buffer was used to detect EdU. Then, the cells were stained with DAPI (Solarbio, China) for 5 min. A Leica microscope was used for imaging (495 nm, 360 nm) and analysis.

Cell migration and invasion assays
A wound healing assay was used to test cell migration ability. CAL27 and SCC25 cells (1.5 × 10⁶) were seeded in 6-well plates. After cell adherence, wounds were generated with 200 µL pipette tips (Thermo QSP, United States). A Leica microscope was used for imaging at 0 and 48 h. Cell invasion was tested by transwell invasion experiments. CAL27 and SCC25 cells (5 × 10⁴) were plated in Matrigel-coated transwell chambers (Corning, 8 μm) and cultured overnight. Then, the medium in the upper chamber was changed to FBS-free medium, and the medium in the lower chamber was changed to medium with 20% FBS. Twenty-four hours later, the cells that had moved across the membrane were fixed, permeabilized, and counted after DAPI (Servicebio) staining. A Leica microscope was used for imaging (360 nm).

Apoptosis assay
Annexin V/PI double staining (GeneCopoeia, United States) was used for apoptosis detection. CAL27 and SCC25 cells (1 × 10⁶) were collected. After labeling with Annexin V and propidium iodide, the fluorescent dye solution was incubated with the cells in the dark for 20 min. Then, flow cytometry (Beckman) was performed (488 nm and 560 nm), and FlowJo VX was used for calculation and analysis.
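To make the flow-cytometry readout above concrete, the following minimal Python sketch shows how an overall apoptotic fraction is typically derived from the Annexin V/PI quadrant percentages exported from FlowJo; the quadrant values are hypothetical and are not taken from the paper.

```python
# Hypothetical quadrant percentages (not the paper's data):
# early apoptotic = Annexin V+/PI-, late apoptotic = Annexin V+/PI+.
quadrants = {
    "siNC":        {"early": 2.1,  "late": 1.4},
    "siNC+DDP":    {"early": 18.6, "late": 9.3},
    "siNC+TM+DDP": {"early": 7.2,  "late": 4.1},
    "siCERS1+DDP": {"early": 6.8,  "late": 3.9},
}

for group, q in quadrants.items():
    total = q["early"] + q["late"]  # total apoptotic fraction, in percent
    print(f"{group}: {total:.1f}% apoptotic cells")
```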
Patients' samples
Human cancer tissues and paired para-cancer normal tissues (>1.5 cm from the tumor margin) were collected from 48 OSCC patients at the West China Hospital of Stomatology, Sichuan University (China). After resection, the tissues were immediately frozen in liquid nitrogen and stored at −80 °C for quantitative real-time PCR (RT-PCR). Written informed consent was signed by the patients. This study was approved by the Institutional Ethical Committee of West China Hospital of Stomatology (WCHSIRB-OT-2016-047).

Animal study
Both wild-type (Cers1+/+) and Cers1 knockout (Cers1−/−) C57BL/N6 mice were obtained from VITALSTAR (Beijing, China). The sgRNAs for Cers1 knockout were: Cers1-gRNA1: atctgcgcataactcggcat ggg; Cers1-gRNA2: gggtggacagcgttgcgc tgg. The animals were housed in specific pathogen-free units at 24 ± 2 °C with 40%-60% humidity in a 14-hour light/10-hour dark cycle with freely accessible food at the Sichuan University Animal Center (Chengdu, China). Six- to eight-week-old female mice (Cers1+/+ C57BL/N6 mice, n = 25, and Cers1−/− C57BL/N6 mice, n = 25) were used for the experiments. A stock solution of 4NQO (Sigma, United States) was prepared at 5 mg·mL−1 in propylene glycol. Two milliliters of stock solution was added to 100 mL of double-distilled water to obtain a working concentration of 100 µg·mL−1. The mice were treated with 4NQO for 16 weeks and then observed for another 8 weeks. At the end of the experimental period, the mice were sacrificed. Tongues were collected and then longitudinally bisected. The left half of the tongue was immediately fixed in 10% buffered formalin (Solarbio, China). The right half of the tongue was immediately put into RNAstore (Tiangen, China) and stored at −80 °C for RT-PCR. All animal experiments were approved by the Subcommittee on Research and Animal Care of Sichuan University (WCHSIRB-D-2017-227).

Histopathological analysis
Tongue tissues from the Cers1−/− group and the Cers1+/+ group were processed for hematoxylin and eosin (H&E) staining. After 24 h of fixation in 10% buffered formalin at room temperature, the tongue tissues were embedded in paraffin. After paraffin sectioning, deparaffinization, and rehydration, the sections (4 μm) were stained with H&E (Solarbio, China). Histopathological diagnosis was performed by two experienced oral pathologists in a blinded manner. The tissues were classified into four types: normal epithelium, mild-moderate dysplasia, severe dysplasia/carcinoma in situ, and carcinoma. Immunohistochemistry was performed on sections (4 μm) of tongue tissues from hyperplastic lesions. The tongue tissues of the different groups were deparaffinized and rehydrated in a graded ethanol series and distilled water. The slides were immersed in 0.01 mol·L−1 sodium citrate buffer (pH 6.0) and heated in a water bath at 95 °C for 30 min. Endogenous peroxidase activity was quenched with 3% hydrogen peroxide. The sections were blocked with 3% BSA (Solarbio, China) for 20 min and then incubated overnight at 4 °C with anti-Bax antibody (1:1 000), anti-Bcl2 antibody (1:100), anti-Mmp2 antibody (1:1 000), anti-Mmp9 antibody (1:800), anti-Pcna antibody (1:500), and anti-Vegfa antibody (1:500). All antibodies for immunohistochemistry were from Servicebio (China). The slides were rinsed with PBS 3 times and incubated with biotinylated anti-mouse/rabbit IgG (Servicebio, China) for 50 min at room temperature. Diaminobenzidine was then used to visualize the staining. Finally, nuclei were counterstained with hematoxylin for 3 min at room temperature. PBS was used instead of the primary antibody as a negative control.

RNA extraction and quantitative real-time PCR
The cells were first treated with siRNA and then collected using RNAiso Plus (Takara, United States). After chloroform (20% of the volume) was added and the samples were subjected to high-speed centrifugation, the RNA partitioned into the aqueous phase. An equal volume of isopropyl alcohol (Solarbio, China) was then added to precipitate the RNA. After washing with 75% ethanol and dissolving in RNase-free water, the RNA was transcribed into cDNA with a RevertAid RT Kit (Thermo, United States). Primers were designed with BLAST and synthesized by Sangon Biotech. SYBR Premix Ex Taq II (Takara, United States) and an ABI Q7 instrument were used for real-time PCR (RT-PCR). The primers for RT-PCR are listed in Supplementary Table 1.
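The paper reports relative gene expression from RT-PCR but does not spell out the quantification formula; the sketch below assumes the standard 2^(−ΔΔCt) method, with a hypothetical reference gene (GAPDH) and hypothetical Ct values, purely for illustration.

```python
# Hypothetical Ct values; GAPDH as an assumed reference gene.
ct = {
    "control": {"BIP": 24.8, "GAPDH": 17.2},
    "siCERS1": {"BIP": 23.1, "GAPDH": 17.3},
}

def fold_change(sample, reference, gene, ref_gene="GAPDH"):
    """Relative expression by the standard 2^(-ddCt) method."""
    d_ct_sample = sample[gene] - sample[ref_gene]
    d_ct_ref = reference[gene] - reference[ref_gene]
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

print(f"BIP fold change after CERS1 knockdown: "
      f"{fold_change(ct['siCERS1'], ct['control'], 'BIP'):.2f}")
```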
Construction of VEGFA reporter plasmids
A 2.369 kb fragment containing the 5ʹ VEGFA sequence from −2304 to +65 relative to the transcription initiation site was amplified by PCR using Q5 High-Fidelity DNA Polymerase (NEB, United States). The forward primer with a SalI site was 5ʹ-GATGTCGACTTGCTGGGTACCACCATGGA-3ʹ, and the reverse primer with an XbaI site was 5ʹ-GATTCTAGACAGAGCGCTGGTGCTAGCC-3ʹ. After digestion with QuickCut restriction endonucleases (Takara, United States), a DNA Ligation Kit (Takara, United States) was used to insert the PCR product into the SalI and XbaI sites of pEZX-FR01 (GeneCopoeia, United States), which contains a Renilla luciferase (Rluc) coding sequence with a CMV promoter and a promoter-less firefly luciferase (HLuc) coding sequence. The recombinant plasmids were transfected into DH5α cells (TSINGKE, China) and grown on nutrient agar plates (Solarbio, China) with kanamycin monosulfate (50 μg·mL−1, Solarbio, China) to select monoclonal bacterial colonies. The selected single clones were then cultured in LB medium (Solarbio, China) with kanamycin. The recombinant plasmids were purified with a plasmid extraction kit (TIANGEN, China). All plasmid constructs were verified by direct sequencing (Sangon Biotech, Chengdu). The reporter plasmid was designated pEZX-VEGFA.

Plasmid transfection and luciferase assay
CAL27 cells (3 × 10⁴) were plated in 96-well plates. After cell attachment, cells were transfected with the pEZX-VEGFA plasmid using Lipo2000 for 12 h. Then, the transfection medium was replaced with the appropriate growth medium. Next, the cells were divided into four groups based on the different treatments: control group, siNC; ATF4 knockdown group, siATF4; CERS1 knockdown group, siCERS1; and CERS1 + ATF4 knockdown group, siCERS1 + siATF4. After transfection with siRNA, a Luc-Pair Duo-Luciferase HS Assay kit (GeneCopoeia, United States) and a microplate spectrophotometer were used to determine the relative luciferase activity.

Statistical analysis
IBM SPSS Statistics 20.0 was used for statistical analysis. Each experiment was performed independently at least three times with similar results. Data from one representative experiment are presented. The statistical methods are noted in the figure legends. P < 0.05 was deemed significant.
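As a worked illustration of the two calculations above, firefly/Renilla normalization of the reporter readout and the ANOVA-plus-Dunnett comparison cited in the figure legends, the following Python sketch uses hypothetical luminescence readings; scipy.stats.dunnett requires SciPy >= 1.11.

```python
from scipy import stats

# Hypothetical (firefly, Renilla) luminescence readings, three wells per group.
readings = {
    "siNC":           [(5200, 9800), (5100, 9500), (5350, 9900)],
    "siATF4":         [(1800, 9600), (1750, 9400), (1900, 9700)],
    "siCERS1":        [(9800, 9500), (9600, 9300), (9900, 9700)],
    "siCERS1+siATF4": [(2100, 9400), (2000, 9200), (2200, 9600)],
}

# Relative activity per well: firefly normalized to the Renilla internal control.
ratios = {g: [f / r for f, r in wells] for g, wells in readings.items()}
mean_ctrl = sum(ratios["siNC"]) / len(ratios["siNC"])
for group, vals in ratios.items():
    fold = (sum(vals) / len(vals)) / mean_ctrl
    print(f"{group}: {fold:.2f}-fold VEGFA promoter activity vs siNC")

# One-way ANOVA followed by Dunnett's test against the siNC control.
groups = ["siATF4", "siCERS1", "siCERS1+siATF4"]
f_stat, p_anova = stats.f_oneway(*(ratios[g] for g in groups), ratios["siNC"])
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")
res = stats.dunnett(*(ratios[g] for g in groups), control=ratios["siNC"])
for g, p in zip(groups, res.pvalue):
    print(f"Dunnett vs siNC: {g}, P = {p:.4g}")
```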
2020-05-28T09:12:23.180Z
2020-05-23T00:00:00.000
{ "year": 2021, "sha1": "cb99206dde9c774244bcd61749cf990b5e778674", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41368-021-00118-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dbc6b581b70c3fb49fd23a80ef1491cd038557dc", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208532558
pes2o/s2orc
v3-fos-license
Design, synthesis and evaluation of biological activities of some novel anti-TB agents with bio-reducible functional group
Introduction: With regard to the anti-mycobacterial activity of 2-pyrazinoic acid esters (POEs), recent studies have shown that both the pyrazine core and the alkyl part of POE interact with fatty acid synthase type I (FAS (I)), precluding complex formation between NADPH and FAS (I). Methods: Considering this interaction at the reductase site of FAS (I), which is responsible for the reduction of β-ketoacyl-CoA to β-hydroxyacyl-CoA, we hypothesized that a POE containing a bio-reducible center in its alkyl part might show increased anti-tubercular activity due to the involvement of FAS (I) in an extra bio-reduction reaction. Thus, we synthesized novel POEs, confirmed their structures by spectral data, and subsequently evaluated their anti-mycobacterial activity against the Mycobacterium tuberculosis (Mtb) H37Rv strain at a concentration of 10 μg/mL. Results: Compounds 3c, 3j, and 3m showed higher activity, inhibiting Mtb growth by 45.4%, 45.7%, and 51.2%, respectively. Unexpectedly, the maltol-derived POE 3l, which has the lowest log P value among the POEs, showed the highest anti-mycobacterial activity, with 56% growth inhibition. Compounds 3c and 3l showed no remarkable cytotoxicity toward human macrophages at a concentration of 10 μg/mL, as analyzed by xCELLigence real-time cell analysis. In further experiments, some of the tested POEs, unlike pyrazinamide (PZA), exhibited significant antibacterial and also antifungal activities. POEs showed enhanced bactericidal activity against gram-positive bacteria, as shown for Staphylococcus aureus, e.g. compound 3b with a MIC value of 125 μg/mL, but not against E. coli as a gram-negative bacterium, except for the maltol-derived POE (3l), which showed an inverse pattern in the susceptibility test. In the anticancer activity test against the human leukemia K562 cell line using the MTT assay, compounds 3e and 3j showed the highest cytotoxic effect, with IC50 values of 25±8.0 μM and 25±5.0 μM, respectively. Conclusion: It was found that the majority of POEs containing a bio-reducible center showed higher inhibitory activities on Mtb growth when compared to similar compounds without a bio-reducible functional group.

Introduction
The World Health Organization (WHO) estimates that Mycobacterium tuberculosis (Mtb) has infected one-third of the globe's population, the majority of whom are latently infected. Recent reports have indicated that 1.4 million people died of this infection in 2016, apart from 0.4 million deaths associated with HIV infection. 1 A major worry is the nonstop rise in the number of patients infected with multidrug-resistant (MDR) and extensively drug-resistant (XDR) Mtb strains in recent years. 2 TB control needs a surge in current efforts, as claimed by the WHO. 1 One important strategy is to introduce novel, highly potent drugs and effective agents to successfully treat this fatal disease. 3 The thick and complex cell wall containing mycolic acids represents a very efficient barrier against many common antibacterial agents and disinfectants. 4 Pyrazinamide (PZA) represents a key part of current TB chemotherapy due to its unique and strong sterilizing capability, which enables this vital drug to kill semi-dormant, non-replicating persistent tubercle bacteria inside the macrophage that other TB drugs fail to kill. 5 Use of PZA in TB combination therapy has been shown to successfully reduce treatment length by 3 months. 5 Having excellent synergy with most anti-TB agents such as rifampicin and especially bedaquiline, a recently introduced novel drug, PZA is and will remain an important pillar in current chemotherapy and future multi-drug therapy regimens. 5 The antibacterial activity of PZA is mediated by pyrazinoic acid (POA), which is generated by an enzymatically catalyzed conversion in the presence of pyrazinamidase inside Mtb. 6,7 PZA is a selective drug against most isolates of the Mtb complex except Mycobacterium bovis. Strongly pH-dependent efficacy is another peculiar characteristic of PZA against Mtb, since a decrease in pH results in an increase in PZA efficacy. 8,9 The fact that POA still targets PZA-resistant M. tuberculosis has increased motivation for the development of new POA-containing antituberculosis drugs. 5,10 Several target proteins have been identified for PZA, of which the ribosomal protein RpsA, involved in protein translation, and fatty acid synthase type I (FAS (I)) are the most significant. 11-13 It has been demonstrated that 2-pyrazinoic acid esters (POEs) have a greater and broader in vitro activity than PZA and POA against susceptible Mtb bacteria as well as PZA-resistant Mtb isolates and non-tuberculous mycobacteria. 14-17 Cynamon hypothesized that hydrolysable POEs, owing to the presence of multiple esterases in the mycobacterial cell, could circumvent any need for activation by pyrazinamidase, which is inactivated in PZA-resistant strains. In addition, conversion of POA to the more lipophilic POE could lead to higher penetration of this agent through the Mtb cell wall. 9,16,18
However, Zimhony et al. observed that n-propyl pyrazinoate inhibited fatty acid biosynthesis more effectively than PZA at any common pH, and its inhibition was not pH-dependent. They ascribed the enhanced inhibition to both the increased lipophilicity and the intrinsic activity of POEs. 12,19 It has been proven that POE inhibits FAS (I) and that it does so without any need for hydrolysis to POA. 12 The saturation transfer difference (STD) NMR technique clarified that, in the binding of FAS (I) to the propyl ester, the amount of protein contact with the pyrazine core decreased compared to that of PZA, while its contact with the alkyl part of POE was intensive. 13,20 This result, along with the impact of alkyl chain length on the affinity of POE for FAS (I), corroborating previous results on the effect of chain length on enzyme inhibition efficacy, demonstrated the intrinsic inhibitory activity of POEs. 19 POEs disturb Mtb growth by inhibiting the binding of NADPH, as a bioreductant agent, to the eukaryotic FAS (I), which is responsible for fatty acid biosynthesis and has reductase activity. 19 β-Ketoacyl-CoA is converted to β-hydroxyacyl-CoA at the reductase site of FAS (I), a single multifunctional enzyme. 21-23 Given the interaction of the alkyl part of POE with FAS (I), introducing POEs containing a reducible center may show a synergistic effect, due to the prevention of NADPH function and the involvement of the multifunctional enzyme FAS (I) in an additional side interaction with the reducible center on the POE. As a result, the catalytic action of FAS (I) may be limited in fatty acid biosynthesis, besides the inhibition of NADPH function. In addition, the presence of these polar groups may change POE transport into Mtb due to further interaction with carrier molecules located on the Mtb cell wall. The above-mentioned research on POEs motivated us to synthesize new POEs bearing side bio-reducible functional groups and to evaluate their activity against Mtb growth along with their antibacterial, antifungal, and anticancer efficacy.

Instrumental measurement
Melting points were recorded using a melting-point meter (Electrothermal 9100, Staffordshire, UK). IR spectra were recorded on an FT-IR spectrometer (FT IR-8101M, SHIMADZU, Kyoto, Japan). 1H NMR and 13C NMR spectra were recorded with a Bruker Spectrospin Avance 400 spectrometer operating at 400 MHz (Bruker GmbH, Ettlingen, Germany), and chemical shifts were measured in ppm relative to tetramethylsilane. Elemental analyses were conducted with a Vario EL III apparatus (Elementar Co, Langenselbold, Germany). The reactions and product purities were monitored by TLC.
Optimized procedure for the preparation of 2-pyrazinoic acid esters
DMAP (0.3 mmol) was added to a stirred mixture of 2-pyrazinecarboxylic acid (1 mmol) in dichloromethane (2 mL) and stirred for 10 minutes; the alcohol (0.7 mmol) was then added, and the resulting solution was cooled to 0 °C, after which EDC.HCl (0.9 mmol) was added gradually to the cooled mixture. After 30 minutes, the cooling source was removed and the reaction mixture was stirred for a further 4 hours at room temperature; the progress of the reaction was followed by thin layer chromatography (TLC) (ethyl acetate (EtOAc)/n-hexane 1:3). Finally, the mixture was washed with saturated sodium bicarbonate and 0.05 N HCl solution, respectively, dried over MgSO4, and the solvent was evaporated to give the POE compounds. Some of the obtained solid esters, including 3a, 3b, 3c, 3f, and 3i, were further purified by recrystallization from the appropriate solvents mentioned below. 25

Determination of the anti-TB activity of small molecules using GFP-expressing M. tuberculosis bacteria
Compound preparation and dilution: 7H9 medium (45 mL, without glycerol and without Tween 80) was supplemented with 5 mL OADC. Starting from a stock solution of 10 mg/mL, 1 μL of each compound was diluted with 800 μL of 7H9 test medium in a 1.5 mL Eppendorf cap. Then, 80 μL of each dilution was added in triplicate to a black 96-well plate with a clear bottom (Corning), and the plates were transferred into the BSL3 facility. Bacteria: The required number of aliquots of Mtb bacteria was thawed in a heating block at 37 °C. The caps were centrifuged for 10 minutes at 4500 rpm (Heraeus, swing-out rotor). The supernatant was discarded, and the bacteria were resuspended in 7H9 test medium to reach a concentration of 6 × 10⁶ per 20 μL. Afterward, 20 μL of the bacterial suspension was added to the wells containing 80 μL in the absence (Ctrl) or presence of the compounds; the plates were sealed with an air-permeable membrane (Porvair Sciences) and cultured under mild agitation (Heidolph) at 37 °C in an incubator. Plates were not stacked. The plate was measured at days 0, 3, and 7. Each plate was prepared with rifampicin as a reference compound (diluted in water), which has known inhibitory activity against M. tuberculosis. Test compounds were diluted in 100% DMSO at a constant concentration of 10 μg/mL. Rifampicin was tested in dose-response for quality control purposes at 1 μg/mL and 0.1 μg/mL.
The assay was carried out in 96-well flat-bottom microplates in a final volume of 100 μL. Next, 20 μL of the prepared bacterial working solution (see above) was added to the 80 μL in the compound test plate containing 10 μg/mL of the compound to be tested. The plates were incubated at 37 °C for 7 days. Finally, bacterial growth was determined by measuring the relative fluorescence intensity using a Synergy2 plate reader (Biotek).

Measurement of in vitro cytotoxic activity on human macrophage cells using xCELLigence RTCA
The in vitro cytotoxic effect of the POE compounds was evaluated on human macrophages using the xCELLigence RTCA system as previously described. 27

Chemistry
Cynamon et al. synthesized some POEs via an acyl chloride intermediate. The necessity of purifying the sensitive pyrazinoyl chloride, owing to side reactions between thionyl chloride and POA, was the main drawback of this method. 16 Later, the synthesis of POEs was reported using dicyclohexylcarbodiimide (DCC). 28 Despite the effectiveness of this coupling reagent, it is hard to remove the by-product dicyclohexylurea (DCU) as well as the rearrangement product N-acylurea. 29 This obstacle inspired us to use a more effective reagent for the synthesis of new POEs bearing sensitive functional groups: EDC.HCl (N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride)/DMAP, a non-toxic, water-soluble coupling system that overcame the aforementioned disadvantages (Table 1).

Anti-mycobacterial activity
Cell-based assays are key tools in the discovery and optimization of new chemical entities against Mtb. The availability of a robust in vitro assay for testing the anti-mycobacterial activity of a new chemical entity is an absolute requirement for the success of a program. The microplate broth dilution assay using an M. tuberculosis strain expressing the green fluorescent protein (GFP) was selected because it delivers highly reproducible results and allows the screening of a large number of compounds. The compounds were assayed against slowly growing M. tuberculosis (H37Rv strain) at a concentration of 10 μg/mL. Compound 3a showed no inhibitory effect on mycobacterial growth, but compound 3b, a vanillin analog, reduced mycobacterial growth by 21.7%. Compound 3c, bearing a side aldehyde group, showed higher activity with 45.42% inhibition. In the presence of compound 3d and its epoxidized derivative 3e, mycobacterial growth was interrupted by 27.9% and 33.5%, respectively. It is notable that replacing the double bond with the more reactive epoxide ring in eugenol pyrazinoate 3d made it stronger against Mtb. Compound 3f, a ketone compound, showed no effect on mycobacterial growth. Between compounds 3g and 3h, the cinnamyl alcohol and coumarin derivatives, the coumarin-derived compound, having an internal ester moiety, proved to be a much more active agent against Mtb than the cinnamyl alcohol-derived one, as shown by growth inhibition of 22.5% versus 9.83%. Compound 3j, containing an alpha-hydroxy carbonyl group, was able to limit mycobacterial growth by 45.7%, a higher activity against mycobacteria than that of compound 3i, the analog without the carbonyl group, which showed 41% inhibition. Between the hydroxypyrone derivatives, compound 3l, owing to its α,β-unsaturated carbonyl (enone) moiety, showed higher anti-mycobacterial activity, reducing Mtb growth by 56%, while compound 3k led to an inhibition of 22.4%. Between compounds 3m and 3n, the propargyl-derived pyrazinoate 3m showed greater anti-mycobacterial activity, preventing mycobacterial growth by 51.2%, than compound 3n, with its 31.7% inhibitory effect (Fig. 1, Table 2).
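As a quick sanity check of the protocol above, the following Python sketch verifies the dilution arithmetic that yields the 10 μg/mL test concentration and shows how a growth-inhibition percentage can be computed from the GFP fluorescence readout; the fluorescence values are hypothetical.

```python
# Dilution check: 1 uL of a 10 mg/mL stock into 800 uL medium, then
# 80 uL of that dilution plus 20 uL bacterial suspension per well.
stock_ug_ml = 10_000.0                   # 10 mg/mL stock
pre_dilution = stock_ug_ml * 1 / 801     # ~12.5 ug/mL in the Eppendorf cap
final = pre_dilution * 80 / 100          # ~10 ug/mL in the assay well
print(f"final test concentration ~= {final:.1f} ug/mL")

# Hypothetical day-7 relative fluorescence units (RFU).
rfu_untreated, rfu_compound = 52_000, 28_400
inhibition = (1 - rfu_compound / rfu_untreated) * 100
print(f"growth inhibition ~= {inhibition:.1f} %")
```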
In vitro cytotoxic activity
xCELLigence real-time cell analysis (RTCA) was used to evaluate the cytotoxicity of the compounds on macrophages. The xCELLigence system can monitor the viability of cultured cells non-invasively, employing unique gold microelectrodes. The electrical impedance output, expressed as a cell index (CI) value, provides notable information about viability, cell number, cell proliferation status, and morphology. 27 Fig. 2 shows the analysis of putative cytotoxic effects of compounds 3c and 3l on human macrophages. Prolonged incubation of macrophages with these two compounds for 47 hours, in contrast to staurosporine as a positive control, did not lead to a substantial decrease in normalized cell index values as an indicator of macrophage cell attachment. These data suggest that the tested POEs do not exert a remarkable cytotoxic effect on macrophage cell adhesion and survival at concentrations up to 10 μg/mL.

Antibacterial assay
PZA, possessing a narrow spectrum of activity against Mtb bacteria, has no notable bactericidal activity against non-tuberculous, gram-positive, or gram-negative bacteria. In contrast, the evaluation of the antimicrobial activity of the POE compounds showed that these compounds were active against gram-positive bacteria as well as Candida kefyr. The MIC test results showed that the tested compounds had improved activity against gram-positive bacteria and remarkable fungicidal activity against C. kefyr. The POEs 3b, 3d, 3e, 3i, and 3m, bearing vanillin, eugenol, eugenol epoxide, diphenylcarbinol, and propargyl alcohol in the alkyl part, showed very good antifungal activity against C. kefyr, with MIC values of 250±20, 250±20, 250±20, 62.5±5, and 125±10 μg/mL, respectively. This increased fungicidal activity resulted from the activity of either the POE or its alkyl part. The efficacy of compound 3i is most likely related to the presence of the benzhydryl moiety in its structure, as this moiety is an important part of the antifungal drug clotrimazole. Compound 3a, the guaiacol-derived ester, and compound 3e, the oxidized-eugenol one, had remarkable activity against S. aureus, with a MIC value of 500±50 μg/mL. Compound 3b exhibited very good bactericidal activity, with a MIC value of 125±10 μg/mL. Yeast and gram-positive bacteria are surrounded by a thick layer of polysaccharides but lack the outer membrane containing lipopolysaccharide found in the gram-negative bacterial cell wall. Therefore, the compounds can be absorbed more easily through the S. aureus and C. kefyr cell walls, but cannot penetrate the lipid layer of the E. coli envelope. The eugenol epoxide-derived POE (3e) presented moderate antibacterial activity against E. coli as a gram-negative bacterium, with a MIC value of 2000±200 μg/mL, although compound 3d, the simple analog without the epoxide moiety, showed only weak bactericidal activity, with a MIC value of 4000±400 μg/mL. The rest of the compounds exhibited no great activity against E. coli, except for compound 3l, which showed intermediate activity with a MIC value of 1000±100 μg/mL.
This observation is notable because 3l is the only POE that shows greater bactericidal activity against gram-negative than gram-positive bacteria, for which its MIC value was 2000 μg/mL. This peculiar activity is probably related to the ability of maltol to bind divalent cations located on the outer membrane of E. coli, which increases the permeability of E. coli to the ester (Table 3). 30

Anticancer potency
One of the prevalent ways to assess the in vitro anticancer potency of synthesized compounds is to evaluate their cytotoxicity, in terms of cancer cell viability, using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. The K562 cells were cultured with various concentrations (25-150 µM) of the POEs for 3 days, and the inhibitory effect of each POE on cell viability was measured by the MTT assay. The percentages of viability and the IC50 (half-maximal inhibitory concentration) values for all compounds at the various concentrations are depicted in Table 4. As shown in Table 4, cell viability was reduced in a time- and dose-dependent manner for all compounds. Among the tested POEs, 3e and 3j, the eugenol oxide and benzoin pyrazinoates, showed favorable cytotoxicity against the K562 cell line, with IC50 values of 25±5.0 and 25±8.0 µM after 72 hours. As understood from Table 4, substitution of the double bond of eugenol ester 3d by an epoxide motif sharply affected the cytotoxic efficacy and decreased the IC50 value by 75 µM. The rest of the compounds had IC50 values of about 100 µM or more.

Discussion
The anti-mycobacterial activity results are broadly consistent with our surmise that a POE containing a bio-reducible functional group is more likely to show higher activity against Mtb than a similar compound without a reducible center, probably owing to the involvement of FAS (I) in a side bio-reduction process. Seitz et al. formerly ascribed the high inhibitory activity of a 4-acetoxybenzyl-containing pyrazinoate to the self-immolative behavior of this substituent, which showed an increased synergistic effect against Mtb. 28 It was also found that 3-ketohexadecanoic acid inhibited fatty acid biosynthesis in Mycobacterium smegmatis. 31 Considering that the whole ester molecule is responsible for the anti-mycobacterial activity according to the results reported in recent studies, it remains unclear whether these results were observed due to the increased inhibitory activity of the alkyl part of POEs on FAS (I) or due to increased uptake and bioactivity resulting from the presence of more polar functional groups. The outcome of the antibacterial assay shows the importance of the functional group in altering the bactericidal activity of POE. Compound 3e has greater antibacterial activity than 3d in both the gram-positive and gram-negative susceptibility tests, indicating the importance of the epoxide moiety in increasing antibacterial activity. In the case of fosfomycin, the epoxide-containing antibiotic, it has been clarified that nucleophilic attack of a cysteine residue on the epoxide center of the drug results in irreversible inactivation of the enzyme involved in the formation of the bacterial cell wall. 32 The MTT assay also supports our presumption: for example, the benzoin-derived POE, possessing an alpha-hydroxy ketone functional group, presents increased cytotoxicity against the K562 cell line. Another example is the oxirane-containing POE, which has a higher cytotoxic effect in comparison with its analog without the epoxide functional group. A stronger interaction with the cellular target, resulting from the high susceptibility of the epoxide ring to nucleophilic attack, may be the probable reason for this observation. 33,34
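To illustrate how an IC50 of this kind can be read off an MTT dose-response series, the Python sketch below log-linearly interpolates the 50% viability crossing; the viability numbers are hypothetical and do not reproduce Table 4.

```python
import math

doses = [25, 50, 100, 150]            # uM, hypothetical
viability = [62.0, 48.0, 30.0, 21.0]  # % viable cells after 72 h, hypothetical

# Find the interval where viability crosses 50% and interpolate on log(dose).
for (d1, v1), (d2, v2) in zip(zip(doses, viability),
                              zip(doses[1:], viability[1:])):
    if v1 >= 50 >= v2:
        frac = (v1 - 50) / (v1 - v2)
        ic50 = math.exp(math.log(d1) + frac * (math.log(d2) - math.log(d1)))
        print(f"estimated IC50 ~= {ic50:.0f} uM")
        break
```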
Conclusion
In this study, we synthesized the POE compounds and evaluated their inhibitory effect on mycobacterial growth as well as their antibacterial, antifungal, and anticancer activities. We used EDC.HCl as a non-toxic and water-soluble coupling agent to avoid unwanted side reactions arising from the presence of sensitive functional groups other than the hydroxyl group, and to allow water-soluble by-products to be removed by a simple work-up. The results showed that POEs containing a bio-reducible functional group had greater anti-mycobacterial activity than their simple analogs. Some POEs containing a bio-reducible center also showed activity in the antibacterial and anticancer assays, underlining the importance of the bio-reducible center in increasing the biological activity of the POEs. The obtained results were in accordance with our surmise that a bio-reducible center located on the alkyl part of a POE might involve FAS (I) in a side reaction, in addition to the inhibitory effect of the pyrazinoate core on the binding of NADPH to FAS (I). To further prove this concept, the tested POEs should be evaluated in further studies for their direct inhibitory activity on FAS (I). Regarding the antibacterial inefficacy of the POA core against non-tuberculous, gram-positive, and gram-negative bacteria, mostly due to an effective efflux system that moves POA out across the cytoplasmic membrane, it is concluded from the results that the improved antifungal and antibacterial activity could result either from the bio-reducible-containing alkyl part released after hydrolysis in the cytoplasm or from the intrinsic antibacterial activity of the POE due to the presence of the bio-reducible moiety in the ester structure. Furthermore, the stability of the POE compound may influence the rate of efflux and the permeability of the compound through the bacterial cell wall. All of the tests confirmed the significant importance of the epoxide center in improving the biological activity of the eugenol-containing POE.

Research Highlights
What is the current knowledge?
√ It has been clarified that both the pyrazine core and the alkyl part of the POE compound interact with FAS (I) in Mtb, precluding complex formation between NADPH and FAS (I).
√ PZA, an effective anti-TB medicine, has no antibacterial activity against non-tuberculous mycobacteria or against gram-positive and gram-negative bacteria.
What is new here?
√ Bio-reducible center-containing POEs have greater anti-mycobacterial activity against Mtb (H37Rv) than similar analogs without a bio-reducible functional group, probably due to the involvement of FAS (I) in a side reductase activity.
√ The conversion of the eugenol-containing POE to the epoxidized one increases the anti-mycobacterial and antibacterial activity and the cytotoxic effect on the K562 cell line.

Funding sources
This work was financially and technically supported by the Faculty of Chemistry, University of Tabriz.
Ethical Statement
Not applicable to this study.
Acknowledgments
We gratefully acknowledge financial support from the Research Council of the University of Tabriz. Furthermore, we sincerely acknowledge Ms. Lisa Niwinski (Research Center Borstel) for expert help and technical assistance.
Competing interests
None to be declared.
2019-11-07T15:11:10.670Z
2019-05-22T00:00:00.000
{ "year": 2019, "sha1": "880d16e085246d875d006337e04e0bd20951f216", "oa_license": "CCBYNC", "oa_url": "https://bi.tbzmed.ac.ir/PDF/bi-9-199.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf21d0524c6d14a81fb87a7e45158a8f43625884", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
236609038
pes2o/s2orc
v3-fos-license
Silicon carbide hollow fiber membranes developed for the textile industry wastewater treatment
In order to reduce the impact of industrial effluents on the environment, silicon carbide hollow fiber membranes were prepared by the precipitation-immersion technique and sintered at 1450 and 1500 °C. The membranes were characterized by X-ray diffraction; their surface structure was examined by scanning electron microscopy and atomic force microscopy; and pore size distribution and porosity, mechanical properties, and flow with distilled water and with the effluent generated by the indigo blue industry were measured. The sintered membranes presented crystalline phases of silicon carbide and aluminum oxide. The tubes presented a defect-free microstructure and a uniform porous surface, with porosity above 50%. The silicon carbide membrane produced significant reductions of the solute and colloidal particle contents of the effluent. The membranes sintered at 1500 °C proved to be more efficient for reducing the turbidity and color of the effluent. The silicon carbide hollow fiber membrane is an interesting alternative for the treatment of effluents from the textile industry.

INTRODUCTION
Membranes are defined as selective barriers capable of controlling the permeation rate of a particular chemical species present in a solution, providing its total or partial purification [1,2]. The geometric configuration delimits the performance of membrane separation processes. Membranes with a flat or tubular configuration have low permeate flows, as well as limitations in terms of low surface area per unit volume when compared to hollow fiber membranes [3,4]. Hollow fiber membranes have diameters of ~1-2 mm, which makes them more favorable due to their high membrane area per unit volume. To prepare hollow fiber membranes, phase inversion is the most widely used method, and the final morphology depends on the interactions among the processing variables: flow rates of the internal liquid and the dope solution, dimensions of the extruder, air gap, stresses during the flow inside the extruder, and type of internal liquid [5-7]. Hollow fiber membranes made of ceramic have several advantages: chemical stability, resistance to high temperatures and pressures, suitability for chemically aggressive environments in the presence of organic solvents, acids, and bases [8-11], more efficient membrane cleaning processes that facilitate reuse [12], and considerable biological stability [13]. Among ceramic materials, silicon carbide (SiC) stands out as a promising material for preparing inorganic membranes due to its properties: excellent mechanical resistance at high temperatures, good resistance to oxidation and thermal shock, and low density [14,15]. The sintering of pure SiC needs to be carried out at high temperatures of ~2000 °C [16], and according to Gubernat et al. [17], this process involves high energy consumption and limits large-scale production for industrial use. This problem can be solved with the use of additives that decrease the sintering temperature of SiC, such as Al2O3 [18] and Y2O3 [19], to prepare porous SiC with high porosity and flexural strength at 1450 °C. Sintering additives such as Al2O3 and Y2O3 can be used because of their low cost compared to SiC. Water treatment is a crucial field closely related to environmental, economic, and social issues.
One of the main problems of industries is the treatment of effluent before it is discharged into the environment or into a public sewer system [20-22]. Inadequate disposal of these effluents affects air, water, soil, and quality of life in general. Membrane technology is an alternative for minimizing these environmental impacts, since membranes have affordable cost, low energy expenditure, and high efficiency compared to conventional processes [23-26]. Silicon carbide membranes are present in several separation processes and, at the same time, have high selectivity and permeability due to their excellent properties. De Wit et al. [27] evaluated the mechanical robustness and permeability of silicon carbide hollow fiber membranes, analyzing the influence of heat treatment on the structure and properties of the prepared fibers. Dilaver et al. [28] investigated the filtration efficiency of silicon carbide membranes with two types of substrates for water/vegetable oil separation. The literature on the application of silicon carbide membranes to the separation of textile effluents is still scarce, so the main objective of this work was to evaluate the separation of an indigo blue solution by silicon carbide hollow fiber membranes obtained by the precipitation-immersion method, with alumina as an additive and a low sintering temperature.

Methods: The dope solution was prepared by dissolving the PES in NMP solvent under mechanical stirring for 1 h at a speed of 1000 rpm. SiC, Al2O3, and PVP were then added to the solution. The resulting mixture, containing 47.5 wt% SiC, 2.5 wt% Al2O3, 2.5 wt% PVP, 10 wt% PES, and 37.5 wt% NMP, was stirred for 30 min at a speed of 300 rpm. Alumina was used as a sintering agent in order to reduce the maximum sintering temperature [29-31]. The hollow fiber membranes were extruded in wire form by the precipitation-immersion technique (Fig. 1). The processing conditions are shown in Table I. The flow of the internal liquid (distilled water) was kept fixed at 350 mL/h with the aid of a syringe pump (SP900Vet, Centaurus Medical). Distilled water was used for the precipitation bath, and the process was carried out at room temperature in air. The distance between the extruder outlet and the coagulation bath (air gap) was 10 cm. After processing, the fibers were immersed in water for 24 h to remove the residual solvent. Then, the hollow fibers were dried at room temperature and sintered at 1450 and 1500 °C, with a heating rate of 2 °C/min up to 500 °C and 5 °C/min up to the final temperature, without a controlled atmosphere, in a conventional electric oven (Maitec Fornos Inti). After the heat treatment, the fibers were cut to a length of 5 cm for the assembly of the modules.
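As a small worked example of the formulation step, the Python sketch below converts the wt% recipe above into component masses for a hypothetical 200 g dope batch; the batch size is an assumption, not a value from the paper.

```python
# Dope solution composition from the text (wt%); batch size is hypothetical.
recipe_wt_pct = {"SiC": 47.5, "Al2O3": 2.5, "PVP": 2.5, "PES": 10.0, "NMP": 37.5}
batch_g = 200.0

assert abs(sum(recipe_wt_pct.values()) - 100.0) < 1e-9  # recipe sums to 100 wt%
for component, pct in recipe_wt_pct.items():
    print(f"{component}: {batch_g * pct / 100:.1f} g")
```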
Characterizations: The membranes were characterized by X-ray diffraction (XRD) using a diffractometer (XRD-6000, Shimadzu) with CuKα radiation (λ=1.5418 Å) at 40 kV and 30 mA, scanning from 5° to 80° at a rate of 2 °/min. For the morphological characterization of the membranes, a scanning electron microscope (SEM, Superscan SSX 550, Shimadzu) operating at 15 kV was used. The surface topography and the relative surface roughness of the prepared membranes were examined using an atomic force microscope (AFM, mod. 9700, Shimadzu) in dynamic mode at a scan rate of 1 Hz. The membranes were fixed on a support, scanned over a 15×15 μm area, and analyzed with the SPM Manager program. The topographic images were used to calculate the area roughness average (Ra) of the membrane as the arithmetic average of the absolute height deviations, Ra = (1/n) Σ|zi − z̄|, where zi is the local height, z̄ is the mean height, and n is the number of sampled points. The porosity and pore size of the membranes were obtained by mercury porosimetry (Autopore IV, Micromeritics). A testing machine (Emic) fitted with a three-point bending apparatus and a 5 kN load cell was used to determine the flexural strength of the membranes, loading each specimen at a crosshead speed of 0.5 mm/min until fracture. The bending strength was calculated from the standard three-point bending expression for a hollow cylinder, σ = 8FLDo/[π(Do⁴ − Di⁴)], where F is the force at which fracture of the specimen took place, L is the span (40 mm), and Do and Di are the outer and inner diameters of the hollow fiber, respectively; for each sample, ten specimens were tested. For the flow measurement test with distilled water, a bench system with a 1.17 mm flow channel operating in tangential flow at pressures of 100 and 200 kPa and room temperature (25 °C) was used. The permeate collection system consisted of a reservoir for the effluent, a centrifugal pump, a membrane module, two valves, and two manometers to measure the pressure of the effluent flow in the system (Fig. 2). The volumetric flow (J) collected for all membranes was calculated as J = V/(A·t), where V is the permeate volume, A is the membrane filtration area, and t is the collection time. The same system was used for the tests with the textile effluent. The indigo blue solution was prepared at a concentration of 10000 ppm. The turbidity test was performed with a portable turbidimeter (Hanna) and the color test with a colorimeter (Policontrol).
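The three quantities defined above (Ra, flexural strength, and permeate flow) can be computed as in the Python sketch below; all input values are hypothetical, and the flexural-strength formula is the standard three-point-bending expression for a hollow cylinder assumed above.

```python
import math

# Area roughness average from an AFM height map (um), hypothetical values.
heights = [[0.10, 0.22, 0.15], [0.30, 0.18, 0.25], [0.12, 0.28, 0.20]]
flat = [z for row in heights for z in row]
z_mean = sum(flat) / len(flat)
ra = sum(abs(z - z_mean) for z in flat) / len(flat)
print(f"Ra ~= {ra:.3f} um")

# Three-point flexural strength of a hollow fiber (assumed standard formula).
F = 4.2                   # fracture load, N (hypothetical)
L = 0.040                 # span, m (40 mm)
do, di = 1.6e-3, 1.0e-3   # outer/inner diameters, m (hypothetical)
sigma = 8 * F * L * do / (math.pi * (do**4 - di**4))
print(f"flexural strength ~= {sigma / 1e6:.1f} MPa")

# Permeate flow J = V / (A * t).
V = 0.50                  # permeate volume, L (hypothetical)
A = 8.5e-4                # membrane area, m^2 (hypothetical)
t = 0.5                   # collection time, h
print(f"J ~= {V / (A * t):.0f} L/(h m^2)")
```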
RESULTS AND DISCUSSION
Fig. 3 presents the mineralogical phases identified by X-ray diffraction of the membranes after sintering at 1500 and 1450 °C. Peaks of the α-silicon carbide phase (34°, 36°, 38°, and 42°), identified by the JCPDS 73-1663 file [32,33], the cristobalite phase of silicon oxide (22°, 28°, and 31°), identified by the JCPDS 39-1425 file, and the α-alumina phase (35° and 43°), identified by the JCPDS 77-1123 file, were observed. These crystalline phases were also observed in other studies [25,27,34]. With the increase in sintering temperature, a larger amount of cristobalite was formed, due to the reaction of SiC with oxygen in the air. Cristobalite has the structural arrangement of the α phase at low temperature and of the β phase at high temperature, above 1470 °C [35]. Fig. 4a illustrates the microstructure of the cross-section of the hollow fiber membrane sintered at 1450 °C. An asymmetric structure was observed, consisting of a spongy region with finger-like voids, which is a typical structure for hollow fibers prepared by the spinning method and is generated by the rapid exchange between the solvent (NMP) and the non-solvent [36,37]. The finger-like structure extending from the interior to half of the cross-section provided high porosity to the membrane. Besides that, the porosity was associated with the burnout of the polymer during the sintering process. In Fig. 4b, the presence of sharp and irregular grains, characteristic of the α-silicon carbide phase identified by X-ray diffraction, is noted. According to Nikkam et al. [33], the morphology of α-SiC grains may be dominated by the anisotropic crystal structure, allowing the crystal to grow more in certain directions than in others. Fig. 4c provides further evidence of the irregular and asymmetric shape of the membrane pores. Finally, in Fig. 4d, the cross-sectional image focusing on the surface near the inner part shows that the hollow fiber membrane was porous throughout its section. The SEM image shown in Fig. 5a illustrates the surface near the outer part of the hollow fiber membrane sintered at 1500 °C. As for the membrane sintered at 1450 °C, it was possible to verify the presence of 'fingers' over the entire external surface of the fiber, accounting for the high porosity of the hollow fiber membrane. In Fig. 5b, the presence of irregular and asymmetric grains from the formation of silicon carbide is observed. Also, the outermost part of the hollow fiber can work as a porous support layer and the inner part of the membrane as a selective layer, which can increase the permeate flow rate. In Fig. 5c, the shape of the grains with irregular and asymmetric morphology is observed. With sintering at 1500 °C, a continuous phase, formed from the SiO2-Al2O3 system, was clearly present. In Fig. 5d, the cross-section close to the inner surface of the hollow fiber is shown, revealing a porous surface. The morphologies identified for the two sintering temperatures indicated that the membranes were porous with irregular and asymmetric morphology. The pores were interconnected, and this difference in morphology was responsible for the selectivity of the membrane [38]. The asymmetric morphology in the microstructure of the SiC hollow fiber membranes can be attributed to rapid precipitation during the spinning process on the core side, which resulted in small channels, and slow precipitation on the external side of the hollow fiber, which formed a spongy structure. Such morphology is typical of inorganic hollow fiber membranes prepared by the precipitation-immersion method [10,39]. The inner and outer diameters as well as the thickness of the sintered ceramic hollow fiber membranes are summarized in Table II. The increase in sintering temperature from 1450 to 1500 °C led to a reduction in the dimensions of the hollow fiber membranes, which suggests higher densification at the higher sintering temperature. Fig. 6 shows the AFM images of the membranes sintered at 1450 and 1500 °C. The surfaces of the membranes presented distinct light and dark regions: the dark regions corresponded to areas of low height and the light regions to the highest areas. The concave parts of the images corresponded to the pores [40,41]. With the increase in sintering temperature, there was an increase in the roughness of the membrane surface; this greater roughness can influence the filtration performance of the membrane, since it is related to the pores of the membrane and can retain particles or impurities during effluent treatment. In Fig. 6b, dark spots show this increase in roughness [40,42]. The average roughness was quantified and is shown in Table III. The rise in sintering temperature also produced an increase in the average pore diameter of the membrane (Table III). According to Fukushima et al. [31], SiC with alumina showed limited formation of a SiO2-Al2O3 liquid phase during sintering, from the thin SiO2 layer that exists naturally on the surface of SiC particles and from the alumina additive. This liquid covering the SiC particles may result in limited mass transfer, and the grain size increased with increasing sintering temperature [43]. The increase in sintering temperature caused a decrease in porosity and an increase in the size of the remaining pores, as can be seen in Fig. 5, resulting in an average pore size of 6.35 µm and a porosity of 52.53%.
The flexural strength of the SiC hollow fiber membranes as a function of the sintering temperature is shown in Fig. 8. With the increase in sintering temperature, the membranes became stronger (83.1 MPa at 1450 °C and 98.1 MPa at 1500 °C); this increase was related to the reduction of membrane porosity. When porous ceramics are subjected to mechanical testing, cracks of various orientations can grow, weakening the sample and causing eventual failure [44]. The pores act as stress concentrators, increasing the potential for cracks and fractures [40,44]. De Wit et al. [27] showed that for sintering temperatures above 1500 °C the mechanical resistance of the SiC membrane is significantly reduced; this loss of strength is attributed to the removal of residual carbon at high processing temperatures, and it makes such temperatures unfeasible for the production of hollow fiber membranes, whose thin walls require high strength to withstand the high pressures applied during effluent treatment. Therefore, the elevation of the sintering temperature caused grain growth and liquid phase formation, giving greater densification and increasing the mechanical strength. Fig. 7 shows the pore diameter distribution of the membranes (dV/dlogD as a function of pore diameter). The membranes prepared with silicon carbide showed excellent mechanical properties, with flexural strength above 80 MPa, owing to the Si-C bonds (Fig. 8). These results were better than those of the ceramic membranes reported in the literature, e.g., by Khalid [50], which demonstrated the potential of this type of membrane for wastewater treatments. Fig. 9b presents the results of the permeate flow with textile effluent at pressures of 100 and 200 kPa for the membranes sintered at 1450 and 1500 °C. At a pressure of 100 kPa, the membrane sintered at 1450 °C presented a permeate flow higher than 1970 L·h⁻¹·m⁻² in the first minutes of the test. The permeate flow then dropped and stabilized at 577 L·h⁻¹·m⁻²; this decrease with time was associated with the phenomenon of fouling, i.e., the accumulation of solutes on the surface of the membrane, which causes clogging and decreases the permeate flow [51]. With the increase of pressure to 200 kPa, the permeate flow stabilized at approximately 674 L·h⁻¹·m⁻² after 35 min of the test. At a pressure of 100 kPa, the membrane sintered at 1500 °C presented an initial permeate flow of 500 L·h⁻¹·m⁻² that stabilized at 327 L·h⁻¹·m⁻² after 35 min of the test. The decrease in permeate flow is typical of membrane processes due to the fouling phenomenon. On increasing the pressure to 200 kPa, the initial permeate flow increased to 600 L·h⁻¹·m⁻² and stabilized at 500 L·h⁻¹·m⁻² after 35 min of the test. The size and distribution of the pores influence the permeation rate. Molecules or particles can permeate or be retained inside the membrane, clogging and blocking the pores and consequently decreasing the permeate flow [27,52]. Fig. 10 shows images of the textile effluent before and after treatment with the silicon carbide hollow fiber membranes (Table IV). For the membranes sintered at 1450 °C, at 100 kPa of pressure, the values of turbidity and color were higher than those obtained with the membranes sintered at 1500 °C; however, considering the initial concentration of the effluent, there was still a significant rejection, demonstrating the high efficiency of the produced SiC hollow fiber membranes.
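A rejection percentage of the kind summarized in Table IV can be computed as in the short Python sketch below; the feed and permeate values are hypothetical, not the paper's measurements.

```python
# Rejection R = (1 - C_permeate / C_feed) * 100; hypothetical values.
feed = {"turbidity": 310.0, "color": 2450.0}
permeate = {"turbidity": 6.2, "color": 85.0}

for key in feed:
    rejection = (1 - permeate[key] / feed[key]) * 100
    print(f"{key}: {rejection:.1f}% rejection")
```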
The results indicated that the SiC hollow fiber membrane exhibited high mechanical resistance and water permeability, implying that this membrane has great potential for application in microfiltration processes. Taking the textile effluent as an example (Table IV), the rejection and permeability values of the SiC hollow fiber were compared to values reported in the literature (Table V). The flux was higher than the reported results, while >96% dye rejection was maintained. Thus, the SiC membrane prepared through the phase inversion technique and a low sintering temperature offers some advantages compared with the other membranes, such as high mechanical resistance, high permeate flow, and selectivity.

CONCLUSIONS
Silicon carbide hollow fiber membranes were successfully prepared by the precipitation-immersion technique. The membranes presented crystalline phases of silicon carbide and aluminum oxide when sintered at 1450 and 1500 °C. The SEM images evidenced a porous and uniform surface for the membrane sintered at 1450 °C and a selective porous surface layer for that sintered at 1500 °C. The increase in sintering temperature increased the surface roughness. The membranes sintered at 1450 and 1500 °C presented high porosity (57% and 52%, respectively). The permeate flow measurements indicated the feasibility of the membranes for separation processes. The turbidity and color tests confirmed the viability of using the SiC hollow fiber membrane for textile effluent treatment. The silicon carbide hollow fiber membranes sintered at 1500 °C were more efficient than those sintered at 1450 °C. The application of these membranes to microfiltration processes is feasible, since the process ensures the high quality of the final effluent.
2021-08-02T00:05:23.034Z
2021-05-17T00:00:00.000
{ "year": 2021, "sha1": "4607930f926d2c8deff7868bf3697ba07befcc29", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/ce/a/4Kg4yRTjWpZjLd8PZypQbbF/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1460dc3a87cc1277eb724a6a71f20b9950a8fad8", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
268570343
pes2o/s2orc
v3-fos-license
Hierarchical Control Planning Based on the Film 'Avatar: The Way of Water' to Minimize Work Accidents Among Fishermen
Cultivating health and occupational safety among fishermen can involve various strategies, such as reviewing movies for insights. These films serve as reflections of the relevance to the cultural aspects of fishermen's health and safety, contributing to the Health and Occupational Safety Triangle or Hierarchy Control. Hierarchy Control, a sequential risk reduction process, encompasses elimination, substitution, design, administration, and the use of personal protective equipment (PPE).
Introduction
Indonesia is a maritime country, with 70% of its territory consisting of water and the remaining 30% comprising more than 17,000 islands, boasting over 99,000 km of coastline. This extensive maritime expanse positions Indonesia as a nation with significant potential in maritime activities and fisheries. The fisheries industry and the livelihoods of fishermen represent a focal point for national growth. Fishermen are individuals seeking a livelihood by maximizing the potential of fishing in these abundant waters [1]. The growth of any sector warrants careful attention to the various factors that influence it, and the field of health and occupational safety is no exception. Health and Occupational Safety encompasses efforts aimed at safeguarding workers from harm in the workplace. To ensure the well-being of workers, regular maintenance must be conducted, and production activities should be executed in a safe and efficient manner [2]. The number of work-related accidents has risen year after year. In 2020, there were 221,740 reported cases, which increased to 234,370 in 2021. The most recent data available, covering the period up to November 2022 (with the total data for 2022 withdrawn in early January 2023), indicate a further rise to 265,334 work-related accidents [1]. Various case data reveal a lack of attention to safety among fishermen during their work at sea. Therefore, a solution is needed to address this issue. Countermeasures against work-related accidents among fishermen can be implemented through straightforward initiatives embraced within the fishing community. Cultivating health and occupational safety among fishermen can involve various strategies, one of which includes drawing insights from movie reviews [3].
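A quick check of the growth implied by the accident counts quoted above (221,740 in 2020; 234,370 in 2021; 265,334 through November 2022) — a small sketch of the year-over-year arithmetic, not part of the original analysis:

```python
cases = {2020: 221_740, 2021: 234_370, 2022: 265_334}
years = sorted(cases)
for prev, curr in zip(years, years[1:]):
    delta = cases[curr] - cases[prev]
    print(f"{prev} -> {curr}: {delta:+,} cases ({100 * delta / cases[prev]:+.1f} %)")
# 2020 -> 2021: +12,630 cases (+5.7 %)
# 2021 -> 2022: +30,964 cases (+13.2 %)
```

The counts thus show steady, accelerating growth, roughly +6% and then +13% year over year.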
Almost everyone loves movies, and recently there is a film featuring scenes that provide insights into Health and Occupational Safety for fishermen, namely the movie "Avatar: The Way of Water." This film has received numerous awards, including the 2023 Oscar in the Best Visual Effects category [4]. The victory achieved by "Avatar 2" at the 95th Academy Awards marks a significant milestone. Directed by James Cameron, the film surpassed the other blockbuster films among its fellow nominees. Studying this film is anticipated to be interesting, as it may serve as a reflection on its relevance to the cultural aspects of fishermen's lives and their health and occupational safety at work. This can provide control in the so-called Health and Occupational Safety Triangle Control, or Hierarchy Control. Hierarchy Control is a sequential process, applied progressively until the level of risk or danger is reduced to a safe point. Among the elements of hierarchy control are elimination, substitution, design, administration, and the use of personal protective equipment (PPE). To address this issue, an effective approach involves utilizing hierarchy control planning with a comprehensive framework, incorporating Macro Ergonomics, Semiotics, and the Fogg Behavior Model.
Macro Ergonomics
Macro Ergonomics is a sociotechnical systems approach that operates in a top-down manner to analyze, design, or enhance work systems and organizational structures. It focuses on harmonizing planning across all elements of the system to ensure overall efficiency and effectiveness [5]. The conceptual definition of Macro Ergonomics is a top-down approach applied in a socio-technical manner to plan work systems comprehensively. It encompasses the entire interaction among humans, their jobs, machines, and software interfaces. Sociotechnical theory emphasizes the interplay between the technical requirements of the job and the social demands placed on the individuals performing the job. The key elements utilized in Macro Ergonomics are humans, the environment, organization, technology, and work [6].
Semiotics
Semiotics, as explored by Piliang [7], delves into the study of signs and signifiers, aiming to understand their interconnected elements within a system governed by specific rules or agreements. Signs, which represent something beyond themselves, resist separation, akin to the two sides of a sheet of paper. Saussure's definition emphasizes the unity of a 'sign': the signifier field defines 'forms' or 'expressions,' while the signified field explains the 'concept' or 'meaning.' In Semiotics, this significance extends to all social practices involving language, including social media campaigns. Interpreting visual campaigns as signs unveils meanings with two layers: the first being the obvious, existing, and innocent denotative meaning, and the second the connotative meaning, linked to situational conditions and social events [8].
Fogg Behavior Model
The Fogg Behavior Model is a theory of behavior developed by BJ Fogg. He proposes that behavior change consists of three elements, namely [9]: (a) motivation; (b) ability; and (c) a trigger.
Data Analysis and Discussion
The following are the results of the research implementation that have been achieved.
Data Collection
The following is the data collected to complete this research. The vessels used by fishermen can be seen in Figure 1. This research focuses on designing a hierarchy of occupational health and safety controls in fishing, so it does not discuss the size, equipment, or techniques used in fishing. The Standard Operating Procedure (SOP) is made comprehensive so that it can be used by all organizations or by individual fishermen.
Figure 1 Boats Used by Fishermen
The film "Avatar: The Way of Water" provides insights into the critical importance of occupational safety and health in ocean environments. This film has the potential to inspire fishermen to modify their fishing behavior, with a particular emphasis on enhancing safety during fishing activities. To achieve optimal results and eliminate potential hazards in fishing, a perspective considering the 5 elements of macro ergonomics is crucial. Macro ergonomics, being an encompassing approach, can maximize ergonomic considerations across a broad scope. In addition to macro ergonomics, this research incorporates a semiotic approach to identify and analyze safety signs at sea, drawing from the film "Avatar: The Way of Water," with applications to fishing activities. The final approach involves employing the Fogg Behavior Model to analyze fishermen's behavior concerning safety in their fishing practices.
Analysis of "Avatar: The Way of Water" Film Using a Semiotic Approach
Following this, the results of the analysis of the film "Avatar: The Way of Water" using the Semiotics approach were also applied to videos featuring fishermen, utilizing the same approach. In Table 2, only one macro ergonomic element is presented, but it is essential to note that there are numerous identifications. This identification process will persist and be conducted for each macro ergonomic element.
Identification of Safety in "Avatar: The Way of Water" Using the Macro Ergonomics Approach
The following is an identification of safety in "Avatar: The Way of Water" using a macroergonomics approach. In Table 3, each macroergonomic element is presented, serving as a reminder of the multitude of identifications. This identification process continues for each macroergonomic element. For the "Man" element, for example, the protective equipment used is almost complete, ranging from helmets, vests, and shoes to thick-material clothes; however, the clothes worn lack long sleeves, and some individuals still opt for sleeveless attire, even though long-sleeved clothing serves as a personal protective tool for fishermen.
Identification of Fishermen's Videos Using the Macro Ergonomics Approach
The following are the results of identifying fishermen's videos using a macroergonomics approach. In Table 4, only one macroergonomics element is presented, as a reminder of the numerous identifications. This identification process will continue for each macroergonomics element. Semiotics is a science that studies signs and meanings. In Piliang (2004) [7], semiotics attempts to understand the connection of sign elements in a system based on certain rules and agreements.
The conclusion drawn is that, in the first layer of denotational meaning, the film "Avatar: The Way of Water" exhibits technological sophistication and the implementation of a Health and Occupational Safety culture in the exploration and exploitation of the planet Avatar by a team of hunters and military from Earth. Additionally, denotational meaning is evident in the portrayal of the native inhabitants of the planet Avatar, highlighting a primitive culture and continuity in utilizing nature in harmony.
In the second layer of connotation in the film "Avatar: The Way of Water," there exists a binary opposition in the interaction between humans and the native inhabitants of the planet Avatar. The oppositional role assumed by humans, depicted as a nation of immigrants, is portrayed through images of extensive and formidable technology and military presence, including the Health and Occupational Safety culture, which positions them as an opposing force. The binary relationship depicts the native inhabitants as belonging to a primitive planet, distant from Health and Occupational Safety technology and culture.
Through the exploration of connotation and denotation, the film "Avatar: The Way of Water" reveals the values embedded in its myths. In this context, a myth represents an ideology or beliefs considered natural and true, as inferred from the visual signs presented in the film. The emerging myth portrays that the Health and Occupational Safety technology and culture present in humans make them appear greedy, invasive, and exploitative in nature.
Furthermore, the Health and Occupational Safety findings are evaluated in macro ergonomics using five criteria: humans, organization, technology, environment, and work. Overall, in every element of macro ergonomics, Health and Occupational Safety has been implemented effectively. However, there are shortcomings in several elements. These include deficiencies in the organizational element, particularly in the Health and Occupational Safety policy and awards sub-criteria. Additionally, in the human element, although individuals have appropriately used Personal Protective Equipment (PPE), there are still scenes where the humans (hunters) do not wear long sleeves. Furthermore, in the work element, there is no mention of work hours or shifts. In the technological element, there is a lack of explanation in the film scenes regarding Standard Operating Procedures (SOPs) for equipment or machines. Despite these shortcomings, the overall findings on the Health and Occupational Safety culture in the film "Avatar: The Way of Water" can serve as a practical implementation in the lives of fishermen.
Perhaps this application can be phased in through simple steps in the daily lives of fishermen. A safety control framework was therefore designed for fishermen in Batam. This design adopts a control triangle approach encompassing Personal Protective Equipment (PPE), administrative controls, engineering controls, substitution, and elimination. The researchers aspire that this control triangle design can be effectively implemented in the fishermen's daily lives. The PPE design has been crafted by certified Health and Occupational Safety experts to ensure optimal outcomes and continuous improvement for fishermen. While the advice provided to the fishermen may not be immediately applicable, it is intended for ongoing implementation, recognizing that meaningful improvements cannot be achieved hastily. Fishermen need to gradually adjust their daily fishing activities.
Correlation Between Safety Culture in the Film "Avatar: The Way of Water" and Fishermen's Videos
The questionnaire was distributed to 10 fishermen, and the correlation between the safety culture of the film "Avatar: The Way of Water" and the fishermen's video was computed using SPSS with 10 respondents and 20 questions. The calculated correlation coefficient (r) was then compared with the r-table value (0.632). This suggests that the film effectively enhances fishermen's knowledge of occupational safety and health during fishing activities.
Designing Hierarchy Controls in Fishing Activities
For the effective, comfortable, safe, healthy, and efficient operation of a system, proper planning is imperative. In the planning stages, adhering to the OHSAS 18001 standard [10] is crucial for organizations constructing a Hierarchy of Controls [11]. During the process of identifying Health and Occupational Safety hazards, organizations need to assess whether there are existing controls in place and whether these controls are adequate for the identified potential dangers.
The hierarchy of control essentially prioritizes the identification and implementation of controls related to Health and Occupational Safety hazards. Several groups of controls can be formulated to eliminate or reduce Health and Occupational Safety dangers. If it is determined that prevention alone does not yield a significant impact, each control process should be implemented in turn, starting with elimination, then substitution, engineering controls, administration, and Personal Protective Equipment (PPE). These measures are implemented to enhance the productivity of fishermen, ensuring the sustainable continuation of economic activities along the public coast. Based on the identification through the semiotic and macro ergonomics approaches, the planning of Hierarchy Controls can be summarized as in Table 5.
The examination of Health and Occupational Safety findings, utilizing macro ergonomics and considering its 5 elements (humans, organization, technology, environment, and work), reveals that, overall, Health and Occupational Safety has been implemented effectively in the film. However, shortcomings are noted in certain elements, particularly in the organizational element, specifically in the sub-criteria related to Health and Occupational Safety policies and awards.
Findings of Safety Culture in Fishermen's Videos in Batam: Questionnaire and Forum Group Discussion (FGD)
In the video depicting fishermen in Batam, it is evident that the Health and Occupational Safety culture is significantly lacking. When the 5 macro ergonomics elements — namely humans, technology, environment, work, and organization — are evaluated, almost all sub-elements are marked in red. This indicates minimal consideration for Health and Occupational Safety in the lives of fishermen, primarily stemming from their lack of awareness regarding Health and Occupational Safety. Unfortunately, this indifference can lead to work-related accidents and even fatalities from a Health and Occupational Safety perspective. Fundamentally, fishermen are aware that their current practices pose high risks, yet they perceive the implementation of Health and Occupational Safety measures as costly and of little importance. This perception motivates fishermen to continue their work as usual without prioritizing Health and Occupational Safety.
During the Fishermen Group Discussion (FGD), participants expressed keen interest in Health and Occupational Safety. The film "Avatar: The Way of Water" stimulated enthusiasm for incorporating Health and Occupational Safety practices, including the use of Personal Protective Equipment (PPE) attributes, into their daily routines. Findings from the questionnaire further indicated the fishermen's interest in integrating Health and Occupational Safety into their daily lives, though financial constraints sometimes serve as a hindrance. Fishermen acknowledge their past lapses, often attributed to a lack of awareness about Health and Occupational Safety. They express a desire to rectify these mistakes and implement Health and Occupational Safety practices in their fishing endeavors. Therefore, the findings from the FGD indicate that fishermen are open to the idea of implementing Health and Occupational Safety practices, akin to what is depicted in the film "Avatar: The Way of Water." Based on these findings, the researchers progressed to the next step of the study, namely designing a Health and Occupational Safety control framework (Figure 2: Hierarchy Control).
A design was formulated for three types of hierarchy controls: developing Standard Operating Procedures (SOPs) for machines and equipment (administrative control), designing ship engine covers to minimize noise (engineering control), and designing safety boxes (engineering control), in addition to the utilization of Personal Protective Equipment (PPE). Due to the challenges associated with elimination and substitution, as indicated by Health and Occupational Safety experts, these elements were not incorporated into the hierarchy control design.
Table 1: Data Collection
Table 2: Analysis of "Avatar: The Way of Water" Using a Semiotic Approach
Table 3: Safety Identification Using a Macro Ergonomics Approach
Table 4: Safety Identification Using a Macro Ergonomics Approach (e.g., the artificial fish pond is constructed from long wood and nets, making it possible for a fisherman's fingers to easily get stuck, rubbed, or wounded if rubber hand guards are not used)
Table 5: Design of Hierarchy Controls for Fishing Activities
3.5. Findings of Safety Culture in "Avatar: The Way of Water" Using a Semiotic Approach
The safety culture depicted in the film "Avatar: The Way of Water" reveals the application of a Health and Occupational Safety culture through specific attributes and scenes. Employing Roland Barthes' semiotics, scenes are reviewed as signs and analyzed for denotative meaning (the first layer of meaning) and connotative meaning (the second layer of meaning), and the analysis examines the myths constructed and potentially believed by the film's creators and target audiences.
For 10 respondents, the r-table value is 0.632. A validity test is deemed valid if all computed r values exceed the r-table value (0.632). The results indicate that all computed r values are greater than the r-table value, confirming the validity of the questionnaire. Subsequently, reliability was tested using the Cronbach's alpha approach: if the value approaches 1, the questionnaire is considered reliable. The reliability test shows a Cronbach's alpha of 0.890, which closely approaches 1, indicating the questionnaire's reliability. The correlation test results demonstrate a robust correlation between the safety culture depicted in the film "Avatar: The Way of Water" and the fishermen's video.
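For readers who want to reproduce the questionnaire arithmetic, a small sketch (simulated answers, not the authors' SPSS data) of the two checks reported above: item validity against the r-table value 0.632 (n = 10, α = 0.05) and reliability via Cronbach's alpha:

```python
import numpy as np

rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(10, 20)).astype(float)  # 10 fishermen x 20 items
totals = answers.sum(axis=1)

R_TABLE = 0.632  # critical Pearson r for df = n - 2 = 8, two-tailed alpha = 0.05
item_r = np.array([np.corrcoef(answers[:, i], totals)[0, 1]
                   for i in range(answers.shape[1])])
print("all items valid:", bool((item_r > R_TABLE).all()))

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1.0 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha = {cronbach_alpha(answers):.3f}")  # paper reports 0.890
```

On the real questionnaire data this computation yielded the reported alpha of 0.890; the random data above merely demonstrates the formulas.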
2024-03-22T15:28:05.392Z
2024-01-29T00:00:00.000
{ "year": 2024, "sha1": "75de05b09897ae2123679f24400802dc75b25ee9", "oa_license": null, "oa_url": "https://doi.org/10.32734/jsti.v26i1.13571", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "abb89231db9bbf5b4ac3a70f127cd412d2e08e1c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
64369947
pes2o/s2orc
v3-fos-license
Balance control during stance - A comparison between horseback riding athletes and non-athletes Horseback riding requires the ability to adapt to changes in balance conditions, to maintain equilibrium on the horse and to prevent falls. Postural adaptation involves specific sensorimotor processes integrating visual information and somesthesic information. The objective of this study was to examine this multisensorial integration in postural control, especially the use of visual and plantar information in static (stable) and dynamic (unstable) postures, among a group of expert horse rider women (n = 10) and a group of non-athlete women (n = 12). Postural control was evaluated through the center of pressure measured with a force platform on stable and unstable supports, with the eyes open and the eyes closed, and with or without foam on the support. Results showed that expert horse rider women had better postural stability on an unstable support along the mediolateral axis compared to non-athletes. Moreover, on the anteroposterior axis, expert horse riders were less visually dependent and more stable in the presence of foam. The results suggest that horseback riding could help develop particular proprioceptive abilities in standing posture as well as better postural muscle tone during particular bipodal dynamic perturbations. These outcomes provide new insights into horseback riding assets and methodological clues to assess the impact of sport practice.
Introduction
Sport practice constrains players to manage simultaneous sources of information in order to maintain postural stability in an efficient manner. This process may be called "adaptive postural control" [1,2]. The contribution of sensory information to postural control has been shown to differ according to the sport activity [3][4][5] and the level of practice [6,7]. In a recent review, Paillard [8] concluded that repeated particular postures and movements, induced by sport practice, could generate robust postural adaptations. This would especially be the case when the sport practice requires a high level of postural balance during aerial and ground-contact phases, as in gymnastics. Vuillerme and colleagues [9] compared the postural control of a group of expert gymnasts vs. a group of experts in other, non-gymnastic sports in three standing postures of increasing difficulty: bipedal, unipedal, and unipedal with unstable support (i.e. a 7 cm thick foam surface). Results showed that gymnasts had significantly less postural sway when vision was removed in unipodal tasks. Surf practice also requires a high level of postural ability while standing on the surfboard. In an expert vs. non-expert study, Paillard and colleagues [10] analyzed postural control in different visual conditions (open and closed eyes) and stability conditions (static and dynamic). Postural parameters were assessed by measuring the center of foot pressure displacement. The authors showed that expert surfers had better postural control and used less visual information when maintaining posture on an unstable support. Like horseback riding, canoeing requires postural stability in a sitting posture. Stambolieva and colleagues [5] studied the postural stability of 23 canoeing and kayaking athletes vs. 15 healthy untrained subjects. The influence of two visual conditions (open and closed eyes) and two stability conditions (stable and foam support) on center of pressure excursions was analyzed while standing.
Results demonstrated that kayaking and canoeing athletes had better postural stability on an unstable support while standing with eyes open. Moreover, the Romberg Quotients (RQ), which evaluate the contribution of vision to standing posture, showed that canoeists were more "visual-dependent" than kayakers. This may be related to the fact that canoeists adopt a kneeling posture during their activity. Visual dependency reflects the weight each individual assigns to visual or non-visual information during postural control [11]. In cycling, the athlete is also seated and needs postural stability to avoid falls. Lion and colleagues [12] compared the postural abilities of mountain bikers and road cyclists. They showed significant differences between groups, with road cyclists being more sensitive to vision for controlling balance during stance than mountain bikers. Maintaining postural stability in horseback riding is a critical constraint to ensure safety and avoid falls. Postural stability depends on the sitting posture adopted by the rider, with a leg on each side of the horse, commonly referred to as a 'straddle posture'. When horseback riding, the most obvious source of information comes from the visual field. In a recent study, Olivier and colleagues [13] evaluated the relative contribution of visual information to horseback riders' postural stability (estimated from the variability of tridimensional segment positions). Postural parameters were measured on an equestrian simulator for a group of expert riders and a group of club riders in four visual conditions: a real simulated riding scene, stroboscopic illumination preventing access to dynamic visual cues, no projected scene under normal lighting, and no visual information. Results suggested that professional riders had greater overall postural stability than club riders, mainly revealed along the anteroposterior axis. Thus, intensive training in horseback riding induces changes in postural control measured on an equestrian simulator. It might therefore be interesting to investigate the influence of this intensive training on postural control in a standing posture. Horse movements may also be considered a source of information for the rider, inducing postural imbalance that acts from the pelvis to the rider's head [14][15][16]. Adapting to this instability may thus lead to specific postural skills related to vestibular, proprioceptive and cutaneous information. The postural effects of sport have been evaluated in amateur and competitive practitioners through comparisons with control participants. Such experimental designs have been used in many sports, including dancing [17], soccer [18,19], volleyball [20], rugby [21], kitesurfing [22], and running [23]. Thus, comparing balance control during stance between athletes and non-athletes seems promising for elucidating the effect of horseback riding training on postural control, regardless of initial postural abilities. Horseback riding can thus be considered a sport activity with particular postural constraints. The subsequent scientific question is whether or not intensive horseback riding practice influences postural abilities. The main hypothesis is that horseback riding athletes exhibit a different sensory organization of balance control compared to non-athletes.
To answer this question, postural stability parameters derived from center of pressure displacement were compared between two groups, horseback riders vs. non-athletes, in different experimental conditions involving vision, support stability and proprioception.
Participants
Ten elite professional riders specialized in dressage (the 'DR' group) and twelve non-athlete women (the 'NA' group) who did not practice horseback riding voluntarily participated in the experiment. The participants' morphological characteristics showed no difference between the two groups (Table 1). DR athletes had a training experience of 17.7 ± 3.50 years, with 9.6 ± 2.36 years of practice in competition and a weekly activity of 32.88 ± 3.72 hours. Exclusion criteria included a documented balance disorder, a medical condition that might affect postural control, or a neurological/musculoskeletal impairment in the past 2 years. All participants provided written informed consent and the study was approved by the ethical committee of the Science Faculty, Université Paris-Sud.
Postural tests
Participants stood barefoot on a force platform (Medicapteurs, Fusyo model, 40 Hz/16 bit) with heels 2 cm apart and an external open angle of 30°, their hands hanging loosely by their sides and legs straight, following the standards of the Association Française de Posturologie. Three balance conditions were investigated while participants were standing: (1) a static balance condition on a rigid floor (STA), (2) an unstable posture on a seesaw device generating instability in the anteroposterior direction (AP dynamic balance), and (3) an unstable posture produced by the seesaw device in the mediolateral direction (ML dynamic balance) (Fig 1). The seesaw device was 55 cm long and 6 cm tall (Bessou Dynamical Plate, Medicapteurs, France) [24]. Posture conditions were analyzed with eyes open (EO) and eyes closed (EC), and with foam (wF) on the force platform (height: 0.2 cm; hardness: 8 SH; density: 220 kg·m⁻³) or without (noF). Each trial lasted 31.6 s [25]. The order of presentation of the trials was randomized. Each trial was conducted only once to avoid learning. The force platform allowed measuring the displacement of the center of foot pressure (COP). Signals from the force platform were sampled at 40 Hz and filtered with a second-order Butterworth filter (8 Hz low-pass cut-off frequency).
Data analysis
Four stabilometric parameters were used to describe the postural behavior of the participants (a computational sketch follows the list):
- the COP surface (in mm²), which corresponds to the area of a 95% confidence ellipse and constitutes a measure of the COP spatial variability;
- the mean COP velocity (in mm·s⁻¹), which represents the cumulated COP displacement divided by the total time and constitutes a good index of the amount of activity required to maintain stability [26];
- the VFY parameter, obtained by dividing the rectified standard deviation of the velocity by the mean position on the AP axis; this parameter helps monitor the short-length, high-velocity compensating movements used to maintain the upright position [27][28][29];
- the Romberg Quotient (RQ), which corresponds to the ratio between the COP surface parameters in the EO and EC conditions, on hard and foam ground.
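A minimal sketch (synthetic data; the VFY line reflects our own reading of the definition above) of how these parameters can be computed from a COP trace sampled at 40 Hz and filtered with a second-order 8 Hz low-pass Butterworth filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 40.0
t = np.arange(0.0, 31.6, 1.0 / fs)
rng = np.random.default_rng(1)
ap = 40.0 + np.cumsum(rng.normal(0.0, 0.3, t.size))  # synthetic AP COP (mm)
ml = np.cumsum(rng.normal(0.0, 0.3, t.size))         # synthetic ML COP (mm)

b, a = butter(2, 8.0 / (fs / 2.0))                   # 2nd-order, 8 Hz low-pass
ap, ml = filtfilt(b, a, ap), filtfilt(b, a, ml)

# COP surface: area of the 95% confidence ellipse (chi2(2, 0.95) ≈ 5.991)
eigvals = np.linalg.eigvalsh(np.cov(ml, ap))
surface = np.pi * 5.991 * np.sqrt(eigvals.prod())    # mm^2

# mean COP velocity: cumulated path length divided by total time
path = np.hypot(np.diff(ml), np.diff(ap)).sum()
velocity = path / (t[-1] - t[0])                     # mm/s

# VFY, read here as: SD of the rectified AP velocity over the mean AP position
vfy = np.std(np.abs(np.diff(ap) * fs)) / abs(ap.mean())

def romberg_quotient(surface_ec: float, surface_eo: float) -> float:
    """RQ: ratio of COP surfaces, eyes closed over eyes open."""
    return surface_ec / surface_eo

print(f"surface = {surface:.0f} mm^2, velocity = {velocity:.2f} mm/s, VFY = {vfy:.3f}")
```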
Statistical analysis
The statistical significance threshold was fixed at 0.05 (Statistica, StatSoft, USA). A Shapiro test and a Levene test were performed on the data to verify the normality of the data and the homogeneity of the variances, respectively. Then, a repeated-measures analysis of variance (ANOVA) with 3 factors was carried out: 2 Groups (DR vs NA) × 2 Vision conditions (EO vs EC) × 2 Foam conditions (wF vs noF). The three balance conditions (STA, AP, ML) were analyzed independently. Newman-Keuls post-hoc tests were used to test differences among means. As the RQ includes the EC and EO visual conditions, the postural quotients were specifically tested with a t-test for each foam condition.
Results
We conducted separate 2 Groups (DR vs NA) × 2 Vision conditions (EO vs EC) × 2 Foam conditions (wF vs noF) ANOVAs with repeated measures on the two last factors (Table 2). In order to address our main hypotheses with conciseness, we describe the results of these different ANOVAs together for each main effect and each interaction in the next paragraphs.
Influence of group
In the STA balance and AP dynamic balance conditions, there was no significant effect of group on COP surface, COP velocity, or VFY (see Table 2). Conversely, in the ML posture, VFY was found to be significantly lower for DR than for NA (P<0.05) (Fig 2).
Influence of vision conditions
In the STA balance, a significant main effect of the vision condition was found for COP velocity only (see Table 2). Post-hoc tests showed that COP velocity was significantly higher during the EC condition (13.46 ± 0.66 mm/s) than during the EO condition (9.81 ± 0.39 mm/s). In the AP posture, COP surface, COP velocity and VFY differed significantly between the EO and EC conditions. Post-hoc tests revealed that COP surface was significantly lower in the EO condition (410.53 ± 34.37 mm²) than in the EC condition (1261.65 ± 181.66 mm²), COP velocity was significantly lower in the EO condition (20.26 ± 0.82 mm/s) than in the EC condition (42.57 ± 2.27 mm/s), and VFY was significantly lower in the EO condition (25.28 ± 1.89) than in the EC condition (42.75 ± 3.31). For the ML posture, a significant main effect of vision on all parameters was found, parameters obtained during the EO condition being significantly lower than during the EC condition (e.g., COP surface: 402.38 ± 25.18 mm²).
Influence of foam condition
The STA balance revealed no significant effect of the presence of foam on COP surface, COP velocity, or VFY (see Table 2). In the AP posture, COP velocity and VFY differed significantly between foam conditions. More precisely, the presence of foam on the force platform significantly decreased COP velocity (wF: 29.01 ± 2.63 mm/s; noF: 33.81 ± 2.10 mm/s) and VFY (wF: 31.17 ± 2.95; noF: 36.85 ± 3). Conversely, the main effect of foam was not significant for COP surface. The same observation can be made for the ML posture, with COP velocity and VFY being significantly lower with foam (COP velocity: 29.01 ± 2.63 mm/s; VFY: 31.17 ± 2.95) than without (COP velocity: 33.81 ± 2.10 mm/s; VFY: 36.85 ± 3), and the main effect of foam did not reach significance for COP surface. A significant Foam × Group interaction was observed on VFY in the AP posture (F(1, 20) = 6.822, p<0.05). More precisely, post-hoc tests showed that, in the DR group, the presence of foam led to a significantly lower VFY than without foam (p<0.05). No significant differences were found in the NA group between the two foam conditions (Fig 3). Finally, the ANOVAs conducted on COP surface and COP velocity showed no interaction effect in any balance condition (STA, AP, ML). However, the analysis conducted on VFY indicated that the Group × Foam × Vision interaction was significant (see Table 2) for ML dynamic balance.
We found that, for DR, the Eyes Open/no Foam condition was significantly less variable than the Eyes Closed/no Foam condition (p<0.001), as it was for NA in the same conditions (p<0.01). For NA, the Eyes Open/no Foam condition did not differ significantly from the other conditions. Moreover, for DR, the Eyes Closed/no Foam condition differed significantly from Eyes Open/with Foam (p<0.001) and Eyes Closed/with Foam (p<0.01) (Fig 4).
Vision dependence
The t-test analysis revealed no significant difference in RQ between the DR and NA groups in the STA and ML postures (Table 2). However, in the AP posture, the results showed significant differences between groups, RQ being significantly lower for the DR group than for the NA group (Fig 5).
Discussion
Horseback riding can be considered a sport activity with particular postural constraints. The objective of this study was to analyze the postural control of expert horse riders vs. non-athletes, under different visual and somesthesic conditions, during static and dynamic standing balances. To achieve this goal, twenty-two young healthy adults, divided into two groups (DR and NA), were asked to stand upright in two visual conditions (EO and EC) and two somesthesic conditions (wF and noF). Center of pressure (COP) displacements were recorded using a force platform. The main results showed a significant effect of group on VFY during the unstable ML balance. This conventional parameter has been used by clinical posturographic practitioners to evaluate the importance of muscle contractions in bipedal postural control, as it is known to capture the phenomenon of stiffness in the inverted human pendulum model [28,30]. Indeed, in an elderly population, an increase in the VFY parameter indicated a progressive reduction in the tension of the tissues of the posterior compartments of the legs [27]. Regarding COP surface and COP velocity, our results showed that DR and NA exhibited similar values in the STA, AP and ML balances. STA balance is a simple postural task that does not allow discrimination of athletes' postural ability [8,31,32]. However, in the ML dynamic balance, VFY was significantly lower for DR than for NA. In other words, this postural parameter appears much more discriminating than the traditional parameters (COP surface, COP velocity). This original outcome suggests that the DR group had better upright postural control than the NA group during ML dynamic balance. Like kayakers, dressage riders practice an "upper-body sport" in a "sitting posture", which differs biomechanically from the sports studied in other postural stability investigations [5]. In horseback riding, there are two athletes, one human and one horse. The expert rider follows the motion of the horse's body in order to optimize the interactions with the horse at different gaits. This synchronization with the horse implies the ability to adapt balance and orientation by coordinating the rider's pelvis, trunk, head and limbs [13,16,33]. This sport-specific ability suggests that the rider develops specific muscles, such as the rectus abdominis and the erector spinae to stabilize the trunk, and the adductor muscles to maintain knee and pelvis stability [34,35]. It may be proposed that riders develop a greater ability to produce the short-length, high-velocity compensating movements used to maintain the upright position.
Indeed, it can be suggested that the repeated movements of the pelvis and stresses on the spine during horseback riding practice make horse riders more efficient when represented in an inverted pendulum model relative to the support of the saddle. The saddle then represents the main support surface, together with the stirrups. Action and reaction mechanisms of the center of mass in relation to the COP feed into the idea of anticipatory mechanisms developed by professional horseback riders. These mechanisms have been reported in analyses of the kinematic phases of riders on an equestrian simulator [13], or in an ecological environment with the horse [16]. Thus, a perspective of the current work would be to measure the same postural parameters while sitting on a saddle, i.e. in a straddle posture, in order to analyze the influence of posture specificity. Again, this ecological posture might be obtained using an equine simulator or directly on a horse. An intermediate protocol would be to assess postural stability in a standardized environment while sitting, as in [36]. Postural tonus has been shown to be more developed along the axis of displacement related to the sport-specific environment [5,37]. Knowing that horseback riding practice has been defined as an "interactive" dynamic which solicits the trunk muscles along the sagittal and vertical axes [15,38], as much as other sports such as judo [39] or gymnastics [32,40], it can be suggested that the influence of horseback riding would be better expressed in ecological situations. An interesting finding of this study concerned the contribution of visual information. In the EC condition, COP surface, COP velocity and VFY were higher than in the eyes-open condition for the two unstable balances (AP and ML). For the static balance (STA), a significant difference was found only for COP velocity. We hypothesized that the dynamic balances (AP, ML) induce more visual flow than a static balance, which helps discriminate sensory information. Indeed, previous studies on other sport activities revealed an increase in COP displacement during the EC condition [8,10]. This assumption is based on the traditional approach, which states that postural control aims to immobilize the center of mass despite movement and external perturbations [41][42][43][44]. However, based on Gibson's work [45], an ecological approach to postural control, with both theoretical and empirical support, also exists. This approach states that there is no relative weighting of sensory information; rather, all senses provide information that increases specificity in postural control [46,47]. Thus, as the weighting of sensory inputs was not directly measured in the current study, our results do not exclude either theory of sensory perception. According to the traditional approach, horseback riders showed less visual dependency than non-athletes during AP dynamic balance on hard and soft ground. This adaptation would result from their equestrian practice. The sitting practice of professional riders might therefore lead to a specific reweighting, observable when analyzing the bipedal standing posture. To better investigate the influence of practice on postural control, a follow-on study will be conducted with non-athletes, expert horse riders and experts from another sport. Another interesting finding concerned the influence of the foam support.
Traditionally, posturographic studies have investigated visual conditions (EO and EC), but less frequently with foam under the feet. However, foam could play a key role during static balance in compensating the destabilization created by the EC condition on postural parameters. Indeed, within a healthy population, some participants appear more sensitive to somesthesic information [48,49]. This can be explained by the fact that the plantar surfaces of the feet are the first points of contact between the body and the external environment while standing, thus providing detailed spatial and temporal information about contact pressures under the foot and shear forces resulting from body movement [50][51][52]. Since cutaneous feedback from the plantar surface may be influenced by the interaction of the foot with the ground, it has been found that changing the characteristics of the supporting surface in a repeated manner modifies the control of bipedal posture [53,54]. Again, it is interesting to note that, for the dressage riders only, the foam condition with eyes closed did not differ from the eyes-open condition, while it did differ from the eyes-closed condition on a hard surface. This original interaction suggests that the dressage riders used somesthesic plantar information as their "eyes". Previous studies reported that sport experts can shift the sensorimotor dominance from vision to proprioception for postural maintenance [8,9,55]. In the absence of visual information, when the dressage riders were in ML dynamic balance, the foam improved their balance and cancelled the effect of eye closure. As already shown by previous studies of similar activities practiced on unstable supports (surfers [10]; kayakers [5]), it may be suggested that such sport practitioners show a lower dependence on vision for postural control. In fact, in horseback riding, various contacts (with the saddle, reins and stirrups, for example) and pressures (essentially between the rider's pelvis and the saddle) are produced during the horse/rider interaction. They provide rich and patterned somesthetic information (proprioceptive and tactile) that is of prime importance for the rider to regulate and coordinate his/her movements with those of the horse. The dressage rider group was professional and rode horses every day (35 hours per week). Therefore, this proprioceptive and tactile information helps the rider anticipate the horse's movements, as in our dynamic equilibrium test, which was probably the condition closest to practice. A limitation of this study may come from the fact that only women participants were examined. This selection was made to prevent a potential bias related to the influence of gender on postural parameters, although there is no real consensus about gender effects on postural stability in the literature. One of the first studies on this topic revealed no difference in six postural control measures between men and women [56]. Steindl and colleagues [57] investigated the development of sensory organization according to each sensory component (proprioceptive, visual, and vestibular) in relation to age and gender. They detected no gender difference in the adult group, in line with other studies from the literature [58,59]. However, Ericksen and Gribble [60] assessed dynamic postural control in men and women through the posteromedial reaching distance. They demonstrated that women presented significantly less dynamic postural control than men.
In perspective, a follow-on study will compare these two groups of participants to male non-athletes and male dressage riders to investigate the influence of gender.
Conclusion
Very little research has been devoted to the use of sensory information in horse riding, and none has been specifically devoted to the contribution of sensory information to upright postural stability. The aim of this study was to assess postural control differences between a group of horseback riding women (DR) and a group of non-athlete women (NA). First, compared to non-athletes, horseback riders exhibited greater VFY stability during ML dynamic balance. Secondly, with foam on the ground during AP dynamic balance, horseback riders revealed better stability than non-athletes. Thirdly, horseback riders showed less visual dependency than non-athletes during AP dynamic balance. Thus, COP surface and COP velocity did not easily discriminate the dressage riders from the non-athletes in upright posture ability; the use of VFY allowed us to show differences between the groups.
Writing - review & editing: Agnès Olivier, Nicolas Vignais, Nicolas Vuillerme.
2019-02-22T00:04:06.254Z
2019-02-05T00:00:00.000
{ "year": 2019, "sha1": "e293de209072599df989938f7530a1da7eb411cf", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0211834&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e293de209072599df989938f7530a1da7eb411cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
17168199
pes2o/s2orc
v3-fos-license
Gaussian Effective Potential Analysis of Sinh(Sine)-Gordon Models by New Regularization-Renormalization Scheme
Using the new regularization and renormalization scheme recently proposed by Yang and used by Ni et al, we analyse the sine-Gordon and sinh-Gordon models within the framework of the Gaussian effective potential in D+1 dimensions. Our analysis suffers no divergence and so does not suffer from the manipulational obscurities in the conventional analysis of divergent integrals. Our main conclusions agree exactly with those of Ingermanson for D=1,2 but disagree for D=3: the D=3 sinh(sine)-Gordon model is non-trivial. Furthermore, our analysis shows that for D=1,2, the running coupling constant (RCC) has poles for the sine-Gordon model ($\gamma^2<0$) and the sinh-Gordon model ($\gamma^2>0$) has a possible critical point $\gamma^2_c$, while for D=3, the RCC has poles for both $\gamma^2>0$ and $\gamma^2<0$.
Introduction
The "Gaussian effective potential" (GEP) has proven to be a powerful non-perturbative approach in quantum field theories (QFT). Using the GEP approach, Stevenson et al. found two distinct, non-trivial versions of the 3+1 dimensional $\lambda\phi^4$ theory, the "precarious $\phi^4$ theory" and the "autonomous $\phi^4$ theory" [1], and thus provided a new viewpoint on the triviality of the $\lambda\phi^4$ model as a physical theory. Also using the GEP, Ingermanson examined the generalized sinh-Gordon and sine-Gordon models in D+1 dimensions [2]. The Lagrangian for the model takes in general the form (1), where $m$ and $\gamma$ are the mass and coupling constant, respectively, at tree level. If $\gamma^2>0$, the classical potential is a cosh curve with a single minimum at the origin; if $\gamma^2<0$, it is actually a sine-Gordon model with an infinite number of degenerate minima of the potential. The limiting case $\gamma^2\to 0$ is usually understood to be a free theory of mass $m$. When D=1, the sine-Gordon model is equivalent to a group of other models [3], namely, the massive Thirring model [4], the Coulomb gas [5], the continuum limit of the xyz spin-1/2 model [6] and the massive O(2) non-linear $\sigma$-model [5]. It is convenient to define $\beta^2=-\gamma^2$ for discussing the sine-Gordon model. It has been shown that the D=1 sine-Gordon model is superrenormalizable for $0\le\beta^2\le 4\pi$, renormalizable for $4\pi\le\beta^2\le 8\pi$, and nonrenormalizable for $\beta^2>8\pi$ [7]; the last property was first discovered by Coleman [4]. Based on the GEP, Ingermanson concluded that for $D\ge 3$ the model (1) can exist only as a free theory, while for $D<3$ the vacuum is unstable over a certain range of the coupling constant. In Ingermanson's analysis, the integrals (2) may be divergent or finite. The divergent ones were dealt with without using any cutoff or regularization procedure and were mostly treated as though finite, so the whole analysis seems to be regularization-scheme independent. Yet for $D\ge 3$, the fact that $I_2^D(\mu^2)$ is divergent was used to conclude that the interacting theory is inconsistent for $D\ge 3$. Hence, the rule of taking $I_n^D$ as finite was violated here and there, and this kind of manipulational obscurity exists. To eliminate this obscurity, we intend to re-analyse the model (1) with the new regularization and renormalization (R-R) scheme, which was proposed by Yang [8] and used by Ni et al recently [9]-[12].
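Since the body of eq. (1) did not survive extraction, the explicit potential used in the sketch below, $V(\phi)=\frac{m^2}{\gamma^2}(\cosh\gamma\phi-1)$, is our assumption; it is, however, consistent with the behavior described above (cosh curve for $\gamma^2>0$, degenerate sine-Gordon minima for $\gamma^2=-\beta^2<0$, free limit $m^2\phi^2/2$ as $\gamma^2\to 0$):

```python
import numpy as np

def V(phi: np.ndarray, m: float = 1.0, gamma2: float = 1.0) -> np.ndarray:
    """Assumed classical potential (m²/γ²)(cosh(γφ) - 1); for γ² = -β² < 0 this
    becomes (m²/β²)(1 - cos(βφ)), and γ² -> 0 recovers the free m²φ²/2."""
    if gamma2 > 0:
        return m**2 / gamma2 * (np.cosh(np.sqrt(gamma2) * phi) - 1.0)
    beta2 = -gamma2
    return m**2 / beta2 * (1.0 - np.cos(np.sqrt(beta2) * phi))

phi = np.linspace(-8.0, 8.0, 5)
print(V(phi, gamma2=+0.5))                    # single minimum at phi = 0
print(V(phi, gamma2=-0.5))                    # periodic, degenerate minima
print(np.max(np.abs(V(phi, gamma2=1e-8) - 0.5 * phi**2)))  # ~0: free-theory limit
```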
Though the "derivative regularization" trick has been evolving in the literature for many years [13]-[18], the spirit is like this: when encountering a superficially divergent Feynman diagram integral (FDI), we first differentiate it with respect to some parameter, such as a mass parameter, enough times until it becomes convergent and the integration can be done. Then we reintegrate it with respect to the same parameter the same number of times. The result is taken as the definition of the original FDI. Then, instead of divergences, some arbitrary constants appear in the FDI. The appearance of these arbitrary constants indicates some lack of theoretical knowledge about the model at the QFT level under consideration. Their determination is beyond the ability of the QFT; instead, they should be fixed by experiment via some suitable renormalization procedure. This new R-R scheme has turned out to be successful in that the whole analysis is quite clear-cut and it can give a prediction of the Higgs mass, $m_H=138$ GeV, in the standard model [11]. It also provides an elegant calculation in QED, e.g. of the Lamb shift [12]. In this paper our main conclusions agree exactly with those of Ingermanson for D = 1, 2. But for D = 3 there is an important discrepancy: the D = 3 sinh(sine)-Gordon model may be non-trivial. Furthermore, our analysis shows that for D = 1, 2, the running coupling constant (RCC) has poles for $\gamma^2<0$ and the sinh-Gordon model has a possible critical point $\gamma^2_c$, while for D = 3, the RCC has poles for both $\gamma^2>0$ and $\gamma^2<0$. In section 2, we give a general analysis of the model (1) in the Schrödinger representation and present some known results. In section 3, we analyse the model for D = 1, 2, 3 respectively by the new R-R scheme. The last section is devoted to discussions.
GEP and running coupling constant (RCC)
The Lagrangian (1) can be rewritten as (4). The canonical momentum conjugate to $\phi$ is $\pi=\partial L/\partial\dot\phi=\dot\phi$ (5), and the Hamiltonian reads (6). The quantization is realized through (7). In particular, we often choose $G(\phi)=0$. In the Schrödinger representation, the state is described by a wave functional $\Psi[\phi]$ which satisfies the Schrödinger equation (9). The first step in the Gaussian variational method is to make an ansatz for the Schrödinger wave functional of the vacuum (10). Here $P$, $\Phi$, $f$ are variational parameters. The energy of the variational state eq.(10) is given by (11). We are interested in finding the effective potential, so we consider the energy of the state with constant classical field $\Phi$, $\partial_i\Phi=0$. The extremum energy configuration clearly satisfies the constraint $P=0$. The variational equation $\delta E/\delta f_{xy}=0$ (13) gives the general forms of $f_{xy}$ and $f^{-1}_{xy}$ as in (14). Using $I_n^D(\mu^2)$ in eq.(2), we have (we often omit the superscript D) (15)-(16). The energy density $\mathcal{E}$ is a function of $\Phi$ and $\mu^2$ (17). According to the Ritz variational principle [19], any stationary state (10) is an eigenstate of the discrete spectrum of $H$, and the corresponding eigenvalue is the stationary value of the function (17). Thus we consider the stationary points $(\bar\mu^2,\bar\Phi)$ of $\mathcal{E}$, which are solutions of the equations (20)-(21). (As one is interested in the effective potential, one may consider the stationary point $\bar\mu^2$ and leave $\Phi$ free, as we will do in the following.) Clearly, if $\gamma^2>0$, $\bar\mu^2$ is always positive and we have the only solution $(\bar\mu^2,\Phi=0)$. Instead, if $\gamma^2=-\beta^2<0$, $\bar\mu^2$ is positive only when $\cos\beta\Phi>0$, so it is necessary that $(2n-\frac{1}{2})\pi\le\beta\Phi\le(2n+\frac{1}{2})\pi$, $(n\in N)$, but eq.(21) confines it to $\sin\beta\Phi=0$. So we have an infinite number of stationary points $(\bar\mu^2,\beta\bar\Phi_n=2n\pi)$. It is evident that for all stationary points, the energy takes the same value.
Therefore, for negative $\gamma^2$, the stationary states are infinitely degenerate. To guarantee that the stationary point is a local minimum, we have to demand that the matrix $M$ of second derivatives be positive definite; from (21) and (22) we obtain (24)-(25). So for $M$ to be positive definite, we should have (26). The GEP is defined as (27), where the functional relation of $\mu^2$ to $\Phi$ is the same as that of $\bar\mu^2$ to $\bar\Phi$ in (22). Like the usual effective potential $V_{eff}$ obtained by loop expansions [20], $V_G$ also has a physical interpretation: it is the minimum of the expectation value of the energy density over all states constrained by the condition that the field $\phi$ has expectation value $\Phi$. Using (22), $V_G$ can be written as (28). It is straightforward to check that (29)-(30) hold. Clearly, $V_G$ acquires its minimum at $\Phi_0=0$, which agrees with $\bar\Phi$. (In general, the stationary points of an arbitrary function $f(x,y)$ agree with those of $f(x(y),y)$, where $x$ as a function of $y$ is determined by $\partial f/\partial x=0$; but whether $f(x,y)$ and $f(x(y),y)$ acquire their maximum or minimum simultaneously just depends.) For later use, we calculate the following derivatives. First we have (32). From (32) and $d(\tanh\gamma\Phi)/d\Phi=\gamma/\cosh^2\gamma\Phi$, we have (33)-(38). The renormalization is carried out at $\Phi_0$ (it will be referred to as $\Phi_0$-renormalization) and the renormalized mass and coupling constant are defined by (39)-(40). We see from (40) that the renormalization of the coupling constant depends on that of the mass. We deduce from (35) and (38) eq.(41). Eq.(41) just asserts that the renormalized mass, which is in general the energy difference between the one-particle state and the vacuum [21], equals the variational parameter.
The Running Coupling Constant
Analogous to that in the $\lambda\phi^4$ model [11], the running coupling constant (RCC) is defined by (42).
3 The New R-R Analysis
The D = 1 Case
Following the spirit of the new regularization, we regularize the integrals as in (45)-(46), where $C$, $\mu^2_s$ are two arbitrary constants. It can be easily seen that only $\mu^2_s$ is non-trivial and is to be determined by some renormalization scheme. Thus we only need the mass renormalization condition. We choose such a scheme that the $\Phi_0$-renormalized mass is just the mass given at tree level, i.e. (48). So from (22) and (41) we have $Z_m=1$, which fixes $I_1=0$, thus $\mu^2_s=m^2$. Consequently, the renormalized coupling constant $\gamma^2_R$ is given by (50); i.e., the coupling constant undergoes a finite renormalization which can provide us with some important information about the model after quantization. Since it is usually expected that quantum corrections are small, so that $\gamma^2_R$ and $\gamma^2$ should be of the same sign, we should have (51). On the other hand, the optimal $\bar\Phi$ for $\mathcal{E}(\mu^2,\Phi)$ incidentally coincides with the minimum $\Phi_0$ for $V_G$; from (28) we have (52). So the two conditions agree well and confirm that there exists a critical value for $\gamma^2$: for $\gamma^2>0$, $\gamma^2<4\pi$, but for $\gamma^2=-\beta^2<0$, $\beta^2_c=8\pi$. It seems that for the sinh-Gordon model, $\gamma^2=4\pi$ is also a critical point at which $\gamma^2_R=0$, but whether the higher vertices also become zero, i.e. whether the model becomes a free one, has to be confirmed by further analysis.
The D = 2 Case
Now the regularized integrals are given by (55)-(57). $C_0$ and $C_1$ are two arbitrary constants and only $C_1$ is nontrivial, as in the D = 1 case. So we need only to fix the mass renormalization condition. Similarly we have $I_1|_{\Phi_0}=0$ and so $C_1=\frac{1}{2\pi}(m^2)^{1/2}$. Hence the renormalized coupling constant is given by (60). From (28) we also have $1+\frac{m}{16\pi}\gamma^2>0$ (62). As in the D = 1 case, we have a critical value for $\beta^2$, $\beta^2_c=16\pi/m$, and $\gamma^2=8\pi/m$ seems also to be a possible critical point for the sinh-Gordon model.
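As an illustration of the differentiate-then-reintegrate prescription applied above, here is a toy sympy sketch (our own; the normalization $I_1(\mu^2)=\int\frac{d^2k}{(2\pi)^2}\frac{1}{k^2+\mu^2}$ in Euclidean momentum space is an assumption, chosen so the result matches the structure $I_1\propto\ln(\mu_s^2/\mu^2)$ with $I_1=0$ fixing $\mu_s^2=m^2$ as in the D = 1 case):

```python
import sympy as sp

k, mu2, mu_s2 = sp.symbols('k mu2 mu_s2', positive=True)

# Log-divergent D = 1 tadpole. Differentiating once w.r.t. mu2 gives a
# convergent integral (angular factor: d²k/(2π)² -> k dk/(2π)):
dI1 = -sp.integrate(k / (2 * sp.pi * (k**2 + mu2)**2), (k, 0, sp.oo))
print(dI1)                 # -1/(4*pi*mu2)

# Reintegrating w.r.t. mu2 restores I1 up to an integration constant, which
# we write as an arbitrary mass scale mu_s2 (the text's mu_s^2):
I1 = sp.integrate(dI1, mu2) + sp.log(mu_s2) / (4 * sp.pi)
print(sp.logcombine(I1))   # log(mu_s2/mu2)/(4*pi)

# The mass renormalization I1|_{mu2 = m^2} = 0 used above then fixes mu_s2 = m^2.
```

The divergence has disappeared; what remains is the arbitrary constant $\mu_s^2$, fixed by the renormalization condition rather than by any cutoff.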
As to the low-lying excited states, we start from the gap equation. The RCC has poles determined by the equations

ch γΦ = 0,  (65)

µγ² = 0  (67)

(where we take µ > 0). So the poles are µ₁² = 0, µ₂² = (16π/β²)², and µ₃², which is determined by the last equation (67). These poles exist only for γ² < 0. Thus after quantization we have two mass scales, µ₂² and µ₃², apart from the mass parameter m at tree level.

The D = 3 Case

Now the regularized integrals I_n contain µ_s², C₂ and C₃ as arbitrary constants, and C₃ is trivial. So we need both mass renormalization and coupling constant renormalization. According to the renormalization scheme (48) we have also the renormalized mass condition; therefore from (40) we obtain the renormalized coupling. To fix µ_s² we choose the same scheme as for the mass renormalization: the Φ₀-renormalized coupling constant equals the coupling constant at tree level, γ_R² = γ². So we have γ² = 0 or µ_s² = m². The first case is trivial and cannot determine µ_s². So only the second is of physical significance. Thus we arrive at an important conclusion: the D = 3 sinh(sine)-Gordon model is non-trivial. This is an important discrepancy between our analysis and that of Ingermanson.

The bounds for the particle mass of the low-lying excited states can also be obtained from the gap equation. If we define x ≡ µ²/m² and κ ≡ γ²m²/(32π²), then the gap equation (72) can be rewritten in terms of x and κ. Consider the solution of this equation by graphical means. First, when γ² > 0, for Φ = 0 the curve of the l.h.s. will intersect that of the r.h.s. at two points: x₁ = 1 and a larger x₂. As Φ increases, the first root increases and the second one decreases. At some critical Φ_cri, the two will meet. As Φ increases further, there will be no root for 0 < x < ∞. For Φ < Φ_cri, in order to guarantee the local minimum of E, the root must satisfy eq. (28), i.e. κ ln x > 1 and I₂ = 0. Therefore, for Φ = 0, x = 1 is not definitely the local minimum. In general, the condition holds when µ²(Φ) ≥ m². For γ² < 0 there is only one root of the gap equation. In this case, if βΦ = 2nπ, the root x = 1 is likewise not definitely the local minimum. Since ln cos βΦ ≤ 0, we have x ≤ 1. Certainly, eq. (28) must also be satisfied at the root if it is a local minimum of E.

Summary and Discussion

We have extracted some physical information from the sinh(sine)-Gordon model by using the new R-R scheme. We arrive at an important conclusion, substantially different from Ingermanson's, that the D = 3 sinh(sine)-Gordon model is non-trivial so long as the regularization constant µ_s² is chosen to be m². This should not be surprising, because for D = 3 the Coulomb gas model can also be transformed into a sine-Gordon model, and there should exist a nontrivial quantum theory for the former. Our conclusions agree exactly with those of Ingermanson for D = 1, 2 but disagree for D = 3. Furthermore, our analysis shows that for D = 1, 2 the RCC has poles for γ² < 0 and the sinh-Gordon model has a critical point γ_c², while for D = 3 the RCC has poles for both γ² > 0 and γ² < 0. The existence of the poles of the RCC provides some new mass scales, as in the λφ⁴ model [11]. Unfortunately, we still cannot obtain another critical point, β_c² = 4π, which is almost as important as β_c² = 8π in the D = 1 sine-Gordon model [7]. This is perhaps an intrinsic disability of the GEP method.

The poles in the RCC reflect the intrinsic properties of the model. They are neither the mass of solitons nor quite the same as the so-called "Landau pole µ_L" like that in QED discussed in previous literature.
In the past, the Landau pole µ_L emerged as a singularity or obstruction on the way of running the cutoff Λ → ∞, or of letting some arbitrary mass scale µ (which stems from some regularization procedure, e.g. dimensional regularization) approach infinity. Of course, there is some similarity between the Landau pole and the largest mass scale in our treatment. For example, in ref. [11] it is found that there are three mass scales characterizing the λφ⁴ model; among them, the largest one, say µ_c, can only be found by a non-perturbative method (like the GEP) and evolves into the large energy scale in the standard model of particle physics, where the φ-field is coupled to gauge fields. At µ_c, the system undergoes a phase transition in vacuum (from the symmetry-broken phase to the symmetric one). We guess that a similar phase transition would occur also in the models considered in this paper.

In the present R-R scheme, there is no explicit divergence (which is substituted by some constants C, µ_s), no counterterm, no bare parameter, and no arbitrary running mass scale (all µ_i in our treatment are fixed and all running parameters are physical ones). There is no obstruction in the running of the cutoff Λ → ∞ and no bare parameter, say γ₀, either, so there is no contradiction enforcing γ₀ → 0. Hence we claim that there is no "triviality" in the D = 3 sinh(sine)-Gordon model as that in the λφ⁴ model [11]. A useful model should be non-trivial. On the other hand, it very probably has some singularities, e.g. some poles of the RCC, showing the boundary of its applicability. To know the physics at the singularities is beyond the ability of the QFT under consideration.

As discussed in ref. [9], a QFT is not well-defined by the Lagrangian solely. In the GEP scheme, the model is defined by the effective Hamiltonian with V_G containing some arbitrary constants (C, µ_i). These constants are the necessary complements to the original Lagrangian L before the model can be well-defined. They are nothing but the values of mass scales and coupling constants. In some sense, the renormalization in QFT is just like reconfirming the plane ticket before one's departure from the airport. We must keep the same symbol for parameters, say m, throughout the whole calculation. Once these constants are fixed, the model is well defined and has some predictive power. The calculation of eq. (74) at tree level already includes the quantum corrections. We can consider any momentum-dependent vertices after the first two terms besides V_G in eq. (74) are taken into account. Everything is unambiguous and well-controlled.

The reason why an originally "non-renormalizable" model becomes renormalizable in the GEP scheme can be understood by an example in quantum mechanics. In the Hamiltonian of a hydrogen-like atom, if besides H₀ = p²/(2µ) − Ze²/(4πr) we add a small perturbation term, H′ = A e^{−bp²} = A Σ_{k=0}^{∞} (−b)^k p^{2k}/k!, then the energy correction in the eigenstate |nlm⟩ remains finite and fixed, ∆E = ⟨nlm|H′|nlm⟩, whereas the contribution of an individual term in H′, ⟨nlm|p^{2k}|nlm⟩ (k ≥ 3), would diverge! Once again, this example reminds us of the implication of divergence, which is by no means just a very large number. Rather, it is essentially a warning, showing that there might be some lack of knowledge or some unsuitability in our treatment. For the moment, we cannot claim that what we find is the only finite solution of the model which was believed to be non-renormalizable.
But we think an outcome from the GEP manipulation could be meaningful, since experience in physics often tells us that nature does not reject the simplest possibility. In the case of γ² = −β² < 0, i.e. in the sine-Gordon model, the original V(φ) ∼ cos βφ has the discrete translational symmetry φ → φ + 2πn/β. At first sight, the ansatz of the Gaussian wave functional, eq. (10), would break this symmetry. First, in general one cannot expect that the ground state has the same symmetry as the Hamiltonian [23]. Note, however, that what appears in eq. (10) is the difference (φ_x − Φ_x), not φ_x itself. Then the contributions of the fluctuations in different configurations of φ with n ≠ 0 are taken into account conceptually for a fixed Φ_x in the path integral. Yet, the contributions for n ≠ 0 are strongly suppressed. In ref. [24] (see also [3]), the soliton linking neighbouring Φ sectors in the quantized sine-Gordon model is considered in the D = 1 case with the GEP as shown here by eqs. (30), (47) and (53), which still preserves the symmetry. For the D = 2 (or 3) case, though we cannot write down an explicit GEP like eq. (75) due to the complicated gap equation (63) (or (72)), we are still able to see that the GEP preserves the periodic symmetry, i.e. V_G(Φ + 2πn/β) = V_G(Φ). In summary, the GEP approach combined with the new R-R method does provide a nice calculational scheme for non-perturbative QFT.
Geography of peritoneal dialysis in Brazil: analysis of a cohort of 5,819 patients (BRAZPD)

Introduction: Brazil is a continental country with great population, social and cultural diversity. This factor may determine differences in the demographics, clinical characteristics and outcomes presented by patients with chronic kidney disease on peritoneal dialysis (PD). Objective: To evaluate the clinical characteristics and outcomes presented by PD patients in different regions of Brazil, analyzing a cohort of patients (BRAZPD) from December 2004 until October 2007. Patients and Methods: Data were collected monthly and patients were followed until the outcome (death, renal transplantation, renal function recovery, transfer to hemodialysis or loss of follow-up). Results: We evaluated 5,819 incident and prevalent patients. Most patients underwent renal replacement therapy (RRT) in the Southeast region, where the average follow-up time was longest (12.3 months) and the percentage of elderly patients was highest (36.4%). The prevalence of diabetes was higher in the Southeast and South (38.1% and 37%, respectively). Most patients in the North region had previously undergone hemodialysis (66.2%). Mortality was highest in the North region (30.1%), as was technique failure (22.3%). Conclusion: The data show regional differences in the demographics, clinical characteristics, mortality and technique failure of PD, reflecting the demographic and social peculiarities of Brazil. The geography of PD in Brazil proves to be a mirror of the geography of Brazil. Health policies should therefore take into account the characteristics of each region so that we can improve patient and technique survival on peritoneal dialysis.

Introduction

According to the Pesquisa Nacional de Amostras por Domicílio (National Research per Sample of Domicile) - PNAD/2006, 1 Brazil has a land area of 8,514,215.3 km² and 187 million inhabitants. It is divided into five large regions (North, Northeast, South, Southeast and Midwest). A trend towards a change in the demographic pyramid has been observed in all regions of the country, being less evident in the North region, which also has a larger male elderly population (> 60 years) in relation to the female population, in contrast with the rest of the country, where women predominate in this age range. This region also has a higher fertility rate and a higher number of individuals per household. The level of schooling varies according to the region analyzed, with the Northeast region presenting the highest rate of illiteracy (18.9%), followed by the North region (10.3%). Regarding mean income, the Northeast region presents the lowest mean monthly income, both for women (R$ 460.00) and men (R$ 519.00), followed by the North region (women: R$ 519.00; men: R$ 809.00). Regarding the differences observed across regions, data from DATASUS (Brazilian Public Health System Database) show that cardiovascular diseases are the main cause of death in all regions; however, the North region presents the lowest rate of cardiovascular mortality when compared to the other regions. 2 The reason for this fact is probably multifactorial; one of the factors is the low life expectancy in this region, which decreases the prevalence of chronic-degenerative diseases.
Data from the Sociedade Brasileira de Nefrologia (Brazilian Society of Nephrology - SBN) 3 show that there are 87,044 patients undergoing renal replacement therapy (RRT) in Brazil, of which 10.6% undergo peritoneal dialysis (PD). Most of the patients undergoing RRT (57.4%) are in the Southeast region, the most populous one according to the Instituto Brasileiro de Geografia e Estatística (Brazilian Institute of Geography and Statistics - IBGE). Only 19.1% of them are in the Northeast region, the second most populous region of the country, which demonstrates the differences regarding access to RRT in this region. The aforementioned data disclose a continental country with broad demographic, economic and cultural diversity. Studies carried out in other countries, such as Canada, where native Canadians have lower access to PD, 4 and studies that show multiple difficulties related to RRT access, vascular access procedures, access to medication and kidney transplantation due to ethnicity, economic status and geographic location 5-9 demonstrate that differences can occur in the clinical conditions and outcomes of patients undergoing dialysis according to the geographic area assessed, even within the same country. With the objective of assessing the clinical characteristics and outcomes presented by patients undergoing PD in different regions of Brazil, we evaluated a cohort of patients (BRAZPD) from December 2004 to October 2007.

Patients and Methods

This analysis was performed with the data from BRAZPD, 10 a multicentric, prospective, observational study of patients undergoing PD from December 2004 to October 2007. A total of 5,819 prevalent and incident patients from Brazilian clinics with more than ten patients using the Baxter peritoneal dialysis system were included. The study was approved by the National Research Ethics Committee and by local research ethics committees. After approval, the nurses and physicians were trained to fill out and send the data. Demographic, clinical and laboratory data were collected monthly, and the patients were followed up to the outcome (death, kidney transplantation, kidney function recovery, transfer to hemodialysis or loss of follow-up). The variable definitions are the ones described by Fernandes et al. in 2008. 10 We evaluated patients undergoing PD in each one of the five regions of the country regarding the described variables and outcomes. Initially, a descriptive analysis was performed of the general characteristics of the population undergoing PD in each region, and then a survival analysis (Kaplan-Meier) was carried out for each region. Subsequently, a Cox regression (corrected for age, sex, cardiovascular diseases and presence of diabetes mellitus) was performed. The outcome variables for survival analysis were: death (censoring the losses of follow-up due to other causes) and technique failure (censoring the losses of follow-up due to other causes). The data are presented as means ± standard deviation or percentages. A p value ≤ 0.05 was considered significant. The software package SPSS 13.0 was used for the statistical calculations.
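As a hedged illustration of the survival analyses just described (the original study used SPSS 13.0, not Python), the following sketch uses the lifelines package; the file and column names are assumptions for illustration, not the study's actual variables.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("brazpd_cohort.csv")  # hypothetical data file

# Kaplan-Meier survival per region (other outcomes censored).
kmf = KaplanMeierFitter()
for region, grp in df.groupby("region"):
    kmf.fit(grp["time_months"], event_observed=grp["death"], label=region)
    print(region, kmf.median_survival_time_)

# Cox proportional hazards corrected for age, sex, cardiovascular
# disease and diabetes mellitus, as in the study.
cph = CoxPHFitter()
cph.fit(df[["time_months", "death", "age", "male", "diabetes", "cvd"]],
        duration_col="time_months", event_col="death")
cph.print_summary()  # hazard ratios with confidence intervals
```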
Results

A total of 5,819 patients were assessed from December 2004 to October 2007. Most patients underwent RRT in the Southeast region, followed by the South, Northeast, Midwest and North regions. The mean follow-up period was shortest in the Midwest (8.9 months) and longest in the Southeast region (12.3 months). The mean age was higher in the Southeast (57 ± 19.9) and South (55.5 ± 19.5) regions. The Southeast region also had the highest percentage of elderly individuals (36.4%). As for level of schooling, the Northeast region had the highest number of illiterate individuals (15.2%) and the highest percentage of patients with an income of up to 2 minimum wages (45.2%). In all regions, most of the patients (70%) lived up to 50 km away from the dialysis center. Regarding the prevalence of comorbidities, it is noteworthy that the prevalence of diabetes mellitus was higher in the Southeast and South regions (38.1% and 37%, respectively). The highest body mass index (BMI) was also seen in these regions (24.6 ± 5 and 25.5 ± 5, Southeast and South regions, respectively - Table 1). The main cause of CKD was diabetes mellitus (33.2%), followed by nephropathy associated with hypertension (21.4%). The indication of PD was medical and it was the only option available in 64.5% of the cases in the Southeast region, 65.4% in the South, 56.5% in the Northeast, 84.9% in the North and 41.3% in the Midwest. Regarding pre-dialytic follow-up carried out by a nephrologist, 48.5% were followed in the Southeast, 52.6% in the South, 34.8% in the Northeast, 29% in the North and 39.3% in the Midwest regions. Most patients from the North region (66.2%) had previously undergone hemodialysis. The technique failure rate was highest in the North region (22.3%), followed by the South (17.7%), Northeast (16.8%), Southeast (16.6%) and Midwest (10.8%) regions (Figure 2, log rank p = 0.02), and the mean technique failure time was 8.3 ± 4 months, 9.1 ± 6.1 months, 9.5 ± 7 months, 9.7 ± 6 months and 10 ± 7.5 months, respectively, for the aforementioned regions. At the patient survival analysis (Cox proportional hazard, Table 3), when analyzing incident and prevalent patients in the same model, the variables that correlated with worse survival included the Midwest region (HR = 1.66; CI = 1.2 to 2. …).

Discussion

The demographic, clinical and laboratory characteristics of patients undergoing PD in Brazil reflect the characteristics of each geographic region. This information demonstrates that there is no selection bias regarding the geographic region; however, the bias of PD indication for patients with a higher prevalence of comorbidities persists. Studies on RRT survival in general and on PD have shown that age, cardiovascular diseases and diabetes mellitus are the main determinants of survival. Cardiovascular diseases are the main cause of death in the general population; however, in the population with CKD, mainly those undergoing RRT, this mortality increases exponentially. 11
This is due to the fact that patients with CKD, in addition to suffering the influence of traditional cardiovascular risk factors (age, male sex, genetic predisposition, systemic arterial hypertension, obesity, hypercholesterolemia, diabetes mellitus, sedentary lifestyle), with the decline in kidney function also start to suffer the influence of new risk factors related to the CKD (anemia, hypervolemia, calcium × phosphorus metabolism alterations, albuminuria, increase in oxidative stress, chronic inflammation, accumulation of ADMA, asymmetrical dimethylarginine, and decrease in serum levels of fetuin A and adiponectin). 12 Moreover, the alterations present in CKD are themselves considered risk factors for cardiovascular diseases. 13 When evaluating the survival of patients during the period in the Southeast (76.7%), South (79.3%), Northeast (86.6%), Midwest (88.5%) and North (69.9%) regions, we can observe that it is similar and comparable to the studies published in developed countries. The technique survival is also satisfactory in the period in the Southeast (83.4%), South (82.3%), Northeast (83.2%), Midwest (89.2%) and North (77.7%) regions and comparable to large studies published in the literature 14-19 (Table 1, Figures 1 and 2). The distance to the nearest dialysis center was up to 50 km for more than 70% of the patients in the Southeast, South and North regions. However, in the North region, as there were few dialysis centers, concentrated in the greater region of the capital city, Manaus, a high percentage of the patients either do not have access to RRT or change domicile so that they can have access to treatment. In the Northeast and Midwest regions, almost 40% of the patients live more than 50 km away from the nearest RRT center. PD is a therapy that is carried out in the patient's domicile, even when the distance from the domicile to the RRT center is long, and thus it must be considered for patients with difficulty in accessing dialysis centers. Ritt et al. (2007) carried out a study that evaluated the distance from the domicile to the RRT center in the state of Bahia, Brazil, and concluded that most patients needed to leave their towns or cities and travel long distances to have access to hemodialysis (HD), which is excessively time-consuming and has socioeconomic implications. 20 A very similar demographic, economic and clinical profile can be observed between the South and Southeast regions when the regions are evaluated separately. The largest number of patients undergoing RRT being in the two most developed regions of the country is in accordance with Sesso et al., 3 who demonstrated an estimated prevalence of individuals with CKD undergoing RRT of 467/million inhabitants in the South and 583/million inhabitants in the Southeast. These regions concentrate the largest number of clinics, as well as of nephrologists, which facilitates patients' access to treatment. Accordingly, we observe a larger number of patients undergoing pre-dialytic follow-up in the South (52.6%) and Southeast (48.5%) regions, which decreases the percentage of patients with late referral to specialized nephrology services. The longer life expectancy (Southeast, 74.0 years, and South, 74.7 years) 21 increases the prevalence of elderly individuals with chronic-degenerative diseases, notably cardiovascular diseases and diabetes mellitus. These are important causes of dialytic CKD and factors that correlate with worse survival in patients with CKD undergoing RRT.
The Northeast region presents some geographic peculiarities. Its life expectancy is the lowest in the country, 69.7 years. 21 The patients are younger and present fewer comorbidities, notably with a smaller number of diabetic patients. The region has the worst social indicators, RRT centers are concentrated in large cities, and therefore many patients have no access to this type of treatment. This fact becomes clear when we observe that the Northeast is the second most populous region in the country yet its population undergoing RRT is smaller than the population in the South region, and that it presents an estimated prevalence of individuals with CKD undergoing RRT of 347/million. 3 There is also a lower percentage of patients receiving pre-dialytic care (34.8%), which reflects late referral to the nephrologist. The survival of patients undergoing PD in this region is higher than in the South and Southeast regions; however, the rate of loss of follow-up due to other causes (such as technique failure) is higher. The North of the country has the lowest population concentration, the lowest general cardiovascular mortality and the second lowest life expectancy (71.5 years), 21 which reflects a lower prevalence of chronic degenerative diseases. Its social indicators are also among the worst in the country. The mortality rate of patients undergoing PD is higher than in other regions. The estimated prevalence of individuals with CKD undergoing RRT is 236/million. 3 In this region, we also observe a concentration of RRT centers in the capital and great difficulty in having access to specialized services. The number of patients that receive pre-dialytic follow-up is low, and there is a large number of patients submitted to PD who used to undergo hemodialysis (66.2%) and who have PD as the only possibility of RRT (84.9%). The technique survival rate is also lower than in other regions, probably reflecting the worse clinical condition of patients with CKD when admitted to undergo PD in this region. The Midwest region has characteristics that make it quite heterogeneous. It is the region with a proportionally larger number of inhabitants that come from other areas. And although its social indicators are, on average, better than those in the North and Northeast regions, its population is a heterogeneous one. For instance, its general illiteracy rate is not high, but it has the highest number of children out of school. The estimated prevalence of individuals with CKD undergoing RRT is 455/million. 3 The number of patients undergoing RRT is low, and the number of patients undergoing PD is the lowest in the country. Life expectancy is 73.3 years. 21 The region has good survival, both of the technique and of the patient. The patients undergoing PD in this region are younger, with the lowest number of patients with diabetes and cardiovascular diseases in the country and a higher number of patients undergoing PD without other comorbidities in addition to CKD. Pre-dialytic follow-up was carried out in 39.3% of the patients, and the indication of PD in this region was mainly medical and the only option of RRT (41.3%).
When evaluating the Cox proportional hazard model for survival of the patients, it is worth mentioning that this analysis included incident and prevalent patients. The variables that classically correlate with higher mortality (older age, presence of diabetes mellitus and cardiovascular diseases) also show significance in the model. However, concerning technique survival, older age showed to be a protective factor and the female sex a risk factor for lower technique survival rates, findings that are not in accordance with most studies. It is probable that analyzing incident and prevalent patients together was responsible for these findings. The study describes important clinical and outcome differences in patients with CKD undergoing PD in different regions of Brazil. Some of the differences observed apparently reflect the sociodemographic diversity of the country. These regional differences must be considered by health management professionals and by the professionals that treat the patients, aiming at improving the technique survival and the survival of patients undergoing maintenance peritoneal dialysis in the country.

Table 1: Demographic, clinical and laboratory characteristics of the patients undergoing peritoneal dialysis in several regions of Brazil.

(Figure 1, log rank p = 0.001.) The mean patient survival time was, respectively, 8.2 ± 5 months, 8.6 ± 6 months, 9.2 ± 5.2 months, 9.6 ± 7 months and 9.6 ± 7.8 months, for the North, Southeast, South, Northeast and Midwest regions. It is worth mentioning that the mean follow-up time for each region was different, as shown in Table 1.

Table 2: Analysis of patient survival (Cox proportional hazard, censoring loss of follow-up caused by death, kidney function recovery and transplantation). APD, automated peritoneal dialysis; CAPD, continuous ambulatory peritoneal dialysis.

Table 3: Analysis of technique survival (Cox proportional hazard, censoring loss of follow-up due to death, kidney function recovery and transplantation). APD, automated peritoneal dialysis; CAPD, continuous ambulatory peritoneal dialysis.
Review of the Authority of the House of Representatives in Removing Constitutional Court Judges

The Constitutional Court of the Republic of Indonesia is a high state institution in the Indonesian constitutional system that holds judicial authority together with the Supreme Court. Cases concerning the dismissal of Constitutional Court judges are urgent to decide because they bear on the independence of those judges. The longer such a case lingers, the more political pressure from the DPR, a fellow high state institution, will destabilize the legal system in Indonesia. Moreover, the DPR has at present confirmed that it will not annul the replacement of Constitutional Justice Aswanto, so it is important that the DPR's actions be immediately adjudicated by the judicial authority, in casu the Constitutional Court. The problem formulated in this study is: what is the authority of the DPR in removing Constitutional Court judges? This study uses normative research with descriptive research specifications. The provisional request for an examination is of high priority, and asks that the Court suspend all actions aimed at replacing a serving Constitutional Justice through a manner or procedure outside the provisions of Article 23 of the Constitutional Court Law, and that no stipulation legalizing such an action be issued, as the applicant requested in the provisional petitum. The applicant's petition rests on strong reasons that are non nobis solum sed omnibus (not for us alone, but for everyone), because the independence of the MK as guardian of constitutional rights is at stake.

INTRODUCTION

Politically, Indonesia is the third-largest democratic country in the world, after India and the United States (Hanevi et al., 2022). In Indonesia there is an institution that accommodates the aspirations of the people and implements and obeys the 1945 Constitution, namely the DPR, or People's Representative Council. Formed in 1950, the DPR is one of the legislative institutions in Indonesia and has several functions: it serves as a forum for the community to channel its aspirations, oversees the running of government, and makes laws (Revalina et al., 2022) (Nwokeocha, 2023a). Meanwhile, the Constitutional Court is a judicial institution established in 2003 whose tasks are to decide constitutional disputes, guarantee the supremacy of the constitution, guard the constitution, and ensure that government policies do not conflict with it (Nwokeocha, 2023b). However, the relationship between the two institutions is not always harmonious. There have been several conflicts between these two institutions, such as the recent removal of Constitutional Court Judge Aswanto by the DPR because he had annulled a legislative product of the DPR at the Constitutional Court. Article 23 paragraph (4) of the Constitutional Court Law reads: "the dismissal of constitutional judges is determined by a Presidential Decree at the request of the Chief Justice of the Constitutional Court". It is therefore clear that the DPR does not have the power to dismiss MK judges, and the steps taken by the DPR threaten the independence of the Constitutional Court. In accordance with the background above, the problem we discuss in this paper is: "what is the authority of the DPR in removing Constitutional Court judges?"
RESEARCH METHODS

The type of research used in this study is normative legal research, with descriptive research specifications. The type of data used is secondary data, originating from primary legal materials (the Constitutional Court Law) and secondary legal materials (research results and journals). To collect the data, a literature study was carried out, that is, reading and collecting existing data in the form of secondary data. The data analysis technique used is qualitative analysis.

RESEARCH RESULTS AND DISCUSSION

The DPR, or People's Representative Council, is one of the high state institutions in the Indonesian constitutional system; it is a people's representative institution whose duties include preparing and discussing draft laws (Daniel et al., 2022). Meanwhile, the MK, or Constitutional Court, is a high state institution in the constitutional system that holds judicial power together with the Supreme Court, one of whose duties is to examine laws against the Constitution (Ioraa, 2023). However, the relationship between these two institutions has several times been far from harmonious. As an example of this disharmony, the DPR (House of Representatives) removed Aswanto from his position as a judge of the Constitutional Court on the grounds that judge Aswanto had annulled laws made by the DPR. Aswanto, a constitutional judge proposed by the legislature, was therefore dismissed. This step by the DPR towards the Constitutional Court shows an attitude of authoritarianism and defiance of the law. The duties and powers of the DPR (House of Representatives) are as follows (Martins, 2022): 1. Legislative functions: compile the National Legislation Program (Prolegnas); prepare and discuss bills; accept bills submitted by the DPD (related to regional autonomy; central and regional relations; formation, expansion and merger of regions; management of natural resources and other natural resources; and central and regional financial balance); discuss bills proposed by the President or the DPD; establish laws together with the President; and approve or disapprove government regulations in lieu of laws (submitted by the President) to be enacted into laws. 2. Other duties and powers: give consideration to the President in terms of (1) granting amnesty and abolition and (2) appointing ambassadors and accepting the placement of other ambassadors; select BPK members by taking into account the considerations of the DPD; give approval to the Judicial Commission regarding the candidates for Supreme Court judges who will be designated as Supreme Court judges by the President; and select 3 (three) constitutional judges to be submitted further to the President. Regulations regarding the dismissal of constitutional judges are contained in Article 23 of the Constitutional Court Law; this article regulates two conditions for the dismissal of a constitutional judge, namely honorable and dishonorable discharge. In detail, constitutional justices are honorably dismissed for the following reasons: they have passed away; they resigned at a personal request submitted to the Chief Justice of the Constitutional Court; they are 70 (seventy) years old; their term of office has ended; or they have been physically or mentally ill for 3 (three) months such that they cannot carry out their duties, as evidenced by a doctor's certificate (Daniel et al., 2022).
The reasons for a constitutional judge being dishonorably dismissed are if they: are sentenced to imprisonment based on a court decision with permanent legal force for committing a crime punishable by imprisonment; commit a disgraceful act; fail to attend the trial that is their duty and obligation 5 (five) consecutive times without a valid reason; violate the oath or promise of office; intentionally hinder the Constitutional Court from rendering a decision within the time frame referred to in Article 7B paragraph (4) of the 1945 Constitution of the Republic of Indonesia; violate the prohibition on holding multiple positions as referred to in Article 17; no longer fulfill the requirements of a constitutional judge; and/or violate the Code of Ethics and Code of Conduct of Constitutional Judges. The decision to remove Aswanto shows that the DPR disregards the very laws it has itself produced, because this mechanism is contrary to Article 23 of Law Number 7 of 2020 concerning the MK (UU MK). Materially, Aswanto's case meets none of the grounds for either honorable or dishonorable dismissal. Formally, the dismissal is also problematic because it did not go through the correct mechanism, namely a letter sent from the Chief Justice of the Constitutional Court to the President, who subsequently issues a Presidential Decree on the dismissal of constitutional judges (Obichili et al., 2023).

CONCLUSION

The DPR is a legislative body and a forum for conveying the aspirations of the people, while the Constitutional Court is a state body guarding the constitution with the authority to decide at the first and last instance. Several times the relationship between the two institutions has been far from harmonious, and there have been conflicts. Most recently, the DPR removed MK judge Aswanto because he annulled a law made by the DPR; the DPR considered that Aswanto should not have done that because he was the DPR's representative at the Constitutional Court. The dismissal of MK judges should instead be determined by the President at the request of the Chief Justice of the Constitutional Court, and it requires a clear basis. The DPR should not assume that Aswanto is the DPR's representative at the Constitutional Court, because Aswanto is a judge proposed by the DPR, not a judge representing it. Democratic countries need an institution such as the Constitutional Court to protect political minorities. The authors realize that without the help and guidance of various parties it would have been difficult to complete this scientific article. Therefore, the researchers would like to thank Prof. Dr. Tundjung Herning Sitabuana, S.H., C.N., M.Hum., as the supervising lecturer, for the guidance, suggestions and input. The authors realize that in writing this scientific article there are still many shortcomings and it is far from perfect. Therefore, the authors hope that readers can participate by providing criticism and suggestions that can improve this scientific article. Finally, the authors give thanks and hope that this scientific article can be useful for all parties who need it.
Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings

Speech emotion recognition (SER) processes speech signals to detect and characterize expressed perceived emotions. Many SER application systems often acquire and transmit speech data collected at the client-side to remote cloud platforms for inference and decision making. However, speech data carry rich information not only about emotions conveyed in vocal expressions, but also other sensitive demographic traits such as gender, age and language background. Consequently, it is desirable for SER systems to have the ability to classify emotion constructs while preventing unintended/improper inferences of sensitive and demographic information. Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing their local data. This training approach appears secure and can improve privacy for SER. However, recent works have demonstrated that FL approaches are still vulnerable to various privacy attacks like reconstruction attacks and membership inference attacks. Although most of these have focused on computer vision applications, such information leakage also exists in SER systems trained using the FL technique. To assess the information leakage of SER systems trained using FL, we propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters, corresponding to the FedSGD and the FedAvg training algorithms, respectively. As a use case, we empirically evaluate our approach for predicting the client's gender information using three SER benchmark datasets: IEMOCAP, CREMA-D, and MSP-Improv. We show that the attribute inference attack is achievable for SER systems trained using FL. We further identify that most information leakage possibly comes from the first layer in the SER model.

INTRODUCTION

Speech emotion recognition (SER) aims to identify emotional states conveyed in vocal expressions. Speech emotion recognition systems are currently deployed in a wide range of applications such as in smart virtual assistants [1], clinical diagnoses [2], [3], and education [4]. A typical centralized SER system has three parts: data acquisition, data transfer, and emotion classification [5]. Under this framework, the client typically shares the raw speech samples or the acoustic features derived from the speech samples (to obfuscate the actual content of the conversation) with the remote cloud servers for emotion recognition. However, the same speech signal carries rich information about individual traits (e.g., age, gender) and states (e.g., health status), many of which can be deemed sensitive from an application point of view. Attribute inference attacks would aim to reveal an individual's sensitive attributes (e.g., age and gender) that they did not intend or expect to share [6], [7]. These undesired/unauthorized usages of data may occur when the service provider is not trustworthy (insider attack) or an intruder attacks the cloud system (outsider attack) [8], [9], [10]. Federated learning (FL) is a popular privacy-preserving distributed learning approach that allows clients to train a model collaboratively without sharing their local data [11]. In an FL setting, during the training process, a central server aggregates model updates from multiple clients.
Each client generates such model updates by locally training a model on the private data available at the client. This machine learning approach reduces information leaks compared to classical centralized machine learning frameworks, since personal data does not leave the client. Therefore, this distributed learning paradigm can be a natural choice for developing real-world multiuser SER applications, as sharing raw speech or speech features from users' devices is vulnerable to attribute inference attacks.

Attacks in Federated Learning: Arguably, while sharing model updates can be considered more privacy preserving than sharing raw data, recent works have demonstrated that FL can be susceptible to a variety of privacy attacks, including membership inference attacks [12] and reconstruction attacks [13], [14]. For instance, recent work has shown that the attacker can efficiently reconstruct a training image from the gradients [13]. More recent works increasingly show that image reconstruction is also achievable through the model parameter updates, even without access to the raw gradients [14]. On the other hand, prior work has demonstrated that the attacker can perform membership attacks in FL settings to infer whether a particular model update belongs to the private training data of a single participant (if the update is of a single participant) or of several participants (if the update is the aggregate) [12]. While existing research has demonstrated the vulnerability of FL training to privacy attacks in the CV domain, it is reasonable to believe that the shared model updates in training the SER model using the FL technique also introduce information leakage.

Threat Model: This work presents a detailed analysis of the attribute inference attack on the SER application trained in an FL setting. In general, there are two sub-types of attacks based on what the attacker can observe. In the black-box attack, the attacker can only observe the outputs of the model (F(x; W)) for any given input [15]. However, in the white-box attack, the attacker can access the model parameters, intermediate values, and other model information as well [16]. In this work, we consider the white-box attack, in which the attacker knows all model parameters and hyperparameters in the FL process, including learning rate, local epochs, local batch size, local sample size, and model architecture. The white-box attack is a realistic scenario in this setting because this information can be available to the attacker from any participating client, or if the attacker operates as a client itself. Any adversary that has access to the shared model updates can execute the attack. The attacker's goal is to infer sensitive attributes of the client using shared model updates (parameters/gradients) of SER applications trained under the FL architecture. In this work, we consider gender prediction as the exemplary attribute inference attack task. We show that the adversary can effectively infer a client's gender attribute while training the SER model in an FL setup; we use the IEMOCAP [17], CREMA-D [18], and MSP-Improv [19] datasets for the experiments. To the best of our knowledge, this is the first work to demonstrate that shared model updates that are communicated in FL to train an SER model can cause attribute information leakage (e.g., gender).

SER EXPERIMENTAL DATA SETS

In this work, we use three data sets for developing SER models and threat models.
Due to the data imbalance issue in the IEMOCAP corpus, previous works use the four most frequently occurring emotion labels (neutral, sad, happiness, and anger) for training the SER model [20]. In addition, we pick these four emotion classes because all three corpora contain these labels. Table 1 shows the label distribution of utterances in these corpora. The details of these corpora are provided below:

IEMOCAP

The IEMOCAP database [17] was collected using multi-modal sensors that capture motion, audio, and video of acted human interactions. The corpus contains 10,039 utterances from ten subjects (five male and five female) who target expressing categorical emotions. In addition, the utterances are divided into improvised and scripted conditions, where the speakers use utterances from a fixed script in the latter case. In this work, we follow the suggestion from [20] and focus on the improvised sessions.

CREMA-D

The CREMA-D [18] corpus is a multi-modal database of emotional speech collected from 91 actors, 48 of whom are male and 43 female. The set contains 7,442 speech recordings that simulate emotional expressions, including happy, sad, anger, fear, and neutral.

MSP-Improv

The MSP-Improv [19] corpus was created to study naturalistic emotions captured from improvised scenarios. The corpus includes audio and visual data of utterances spoken in the natural condition (2,785 utterances), the target condition (652 target utterances in an improvised scenario), the improvised condition (4,381 utterances from the remainder of the improvised scenarios), and the read speech condition (620 utterances). The data is collected from 12 participants (six male and six female). Similar to the IEMOCAP data set, we use the data only from the improvised conditions.

PROBLEM SETUP

In this section, we describe preliminaries and the problem setup of the attack framework. To improve readability, we summarize the notations adopted in this paper in Table 2.

Federated Learning

Federated learning is a training algorithm that enables multiple clients to collaboratively train a joint ML model, coordinated through a central server. For example, in a typical FL training round shown in Fig. 1, a subset of selected clients receive a global model, which they locally train with their private data. Afterward, the clients share their model updates (model parameters/gradients) with the central server. Finally, the server aggregates the model updates to obtain the global model for the next training round. FedSGD and FedAvg are two common approaches to produce the aggregated models in FL [22].

FedSGD

We define θ^t as the global model parameter in the t-th global round. In FedSGD, the k-th client locally computes gradient updates g_k^t based on one batch of private training data, and sends g_k^t to the server. Assuming K clients containing a total of N samples participate in the t-th round of training, where each client has sample size n_k and the learning rate is η, the server computes the updated global model as:

θ^{t+1} = θ^t − η Σ_{k=1}^{K} (n_k / N) g_k^t

FedAvg

In the FedAvg algorithm, each client locally takes several epochs of model updates using its entire training data set D_k and obtains a local model with parameters θ_k^t. Each client then submits the resulting model to the server, which calculates the weighted average shown below:

θ^{t+1} = Σ_{k=1}^{K} (n_k / N) θ_k^t

Problem Definition

Fig. 2 shows the attack problem setup we investigate in this work.
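The two aggregation rules above are standard; the following is a minimal sketch of both in plain NumPy, not the paper's implementation. The dictionary-based bookkeeping (`grads`, `local_params`, `sizes`) is an illustrative assumption.

```python
# Sketch of FedSGD and FedAvg aggregation as described above (NumPy only).
import numpy as np

def fedsgd_step(theta, grads, sizes, lr):
    """theta^{t+1} = theta^t - lr * sum_k (n_k / N) * g_k^t."""
    N = sum(sizes.values())
    agg = sum((sizes[k] / N) * grads[k] for k in grads)
    return theta - lr * agg

def fedavg_step(local_params, sizes):
    """theta^{t+1} = sum_k (n_k / N) * theta_k^t."""
    N = sum(sizes.values())
    return sum((sizes[k] / N) * local_params[k] for k in local_params)

# Toy example: two clients, a 3-parameter model.
theta = np.zeros(3)
grads = {0: np.array([0.1, -0.2, 0.3]), 1: np.array([0.0, 0.4, -0.1])}
sizes = {0: 60, 1: 40}
theta = fedsgd_step(theta, grads, sizes, lr=0.1)
```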
In this study, the primary task is SER, the models for which are trained using the FL framework, while in the adversarial task the attacker attempts to predict the client's gender label. We follow a setup in which we have a private labeled data set D_p from a number of clients, where each client has a feature set X and an emotion label set y. We also assume a gender label z associated with each client. This work focuses on the white-box attack, where the attacker knows the model architecture and hyper-parameters like batch size, local epochs, and learning rate. We also assume that the attacker does not have access to the private training data. However, the adversary can access public data sets with a similar data format to D_p. Similar to the attacking framework proposed in [14], we define two attack scenarios based on two FL algorithms: FedSGD and FedAvg.

FedSGD

In the FedSGD framework, we assume that the attacker has access to the shared gradients g_k^t from the k-th client in the t-th global training epoch, but not the private speech data X_k. The attacker attempts to predict the sensitive attribute z_k (e.g., gender label) of the k-th client using g_k^t.

FedAvg

In the FedAvg framework, the attacker has access to the global model parameters θ^t and the shared model parameters θ_k^t from the k-th client at the t-th global training round, but not the private speech data X_k. The attacker's goal is to infer the sensitive attribute z_k (e.g., gender label) of the k-th client using θ^t and θ_k^t.

ATTACKING FORMULATION

In this section, we describe our proposed method for the attribute inference attack in detail. Our attack focuses on training a classification model using the model updates generated in the FL setting (either the model gradients g or the model parameters θ) to infer the sensitive label z associated with a client. Our proposed attack framework is shown in Fig. 3. In each subsequent subsection, we explain each step in more detail. Here is the summary of the steps when the IEMOCAP data set is used as D_p:

1) Service provider: Private training of the SER models in the FL setup, with the IEMOCAP data set as D_p.

2) Attacker: a) The attacker trains SER models using datasets available to them (denoted as D_s) such that the training mimics the private training setup in a client. In our experiments we set D_s to be the CREMA-D and MSP-Improv datasets. b) Collect the shared model updates during the shadow training in step a) to generate an attack data set D_a. c) The attacker trains a gender classification model M_a using D_a. d) Finally, the attacker infers the gender of the clients in D_p using the shared model updates from the private training and M_a.

Private Training

We refer to the training of our target SER model M_p as private training. In this paper, the private training is done in the FL setting, where we have a private training data set D_p with emotion labels y_p. All the private training data are on the clients' devices and not accessible by the central server. The server performs the FL using two algorithms: FedSGD and FedAvg. As described above, only the model gradients or the model parameters are shared with the central aggregator in the FedSGD algorithm and the FedAvg algorithm, respectively. In this work, we also assume that the attacker cannot access the private training data. However, the shared training updates (either the gradients or the model parameters) are assumed to be insecure, so the attacker can obtain this information.
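To make step 2b concrete, here is a hedged sketch of how the attack training set D_a can be assembled from shadow FL runs; the data layout is an illustrative assumption, not the paper's code.

```python
# Each recorded shared update from shadow training is labeled with the
# gender of the client that produced it.
def build_attack_dataset(shadow_runs):
    """shadow_runs: iterable of (client_gender, [shared_update, ...]) pairs."""
    attack_data = []
    for gender, updates in shadow_runs:
        attack_data.extend((update, gender) for update in updates)
    return attack_data
```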
Shadow Training

Shadow training was first proposed for the membership inference attack [15]. In this paper, we use a similar attack framework to construct our attribute inference attack. Specifically, the shadow models M_s^1, M_s^2, ..., M_s^m are trained to mimic the private training model M_p. In this paper, the shadow models also aim to classify emotion categories from speech features. To train the shadow models, the attacker typically collects a set of shadow training data sets. The objective of the attacker is to collect shadow datasets that are as similar in format and distribution to the private dataset as possible. While the individual shadow data sets may overlap, the private training data set and the shadow data set may not overlap. In each experiment, we propose to use one SER data set (e.g., IEMOCAP) as the private training data set and two other SER data sets (e.g., CREMA-D and MSP-Improv) as the shadow training data set. The shadow models predict emotion categories similar to the private model. Since we are focusing on the white-box attack, we train the shadow models in a similar fashion to the private FL training, where the shadow models have the same model architecture as the private model and use the same hyper-parameters (e.g., learning rate, local epochs) as the private training. We use 80% of the data to train each shadow SER model.

Attack Model

In this work, we compose the attack training data set D_a using the shared model updates that are generated during the shadow training experiments. Given a shared model update g_k^t from the k-th client in the shadow training process, we record the gender of the k-th client, z_k, as the label of g_k^t. We then use D_a to train the attack model M_a to infer the gender attribute z. Finally, we define the attack training data under two FL learning scenarios:

FedSGD

In the FedSGD framework, the attack model takes the gradients g_k^t as the input data to predict the gender label z_k. The attack model with parameters ψ attempts to minimize the following cross-entropy loss between the model predictions M_a(g_k^t; ψ) and the labels z_k:

min_ψ Σ_{(g_k^t, z_k) ∈ D_a} ℓ_CE(M_a(g_k^t; ψ), z_k)

FedAvg

Herein, only the global model parameters θ^t and the updated model parameters θ_k^t from the k-th client are accessible by the attacker, but not the raw gradients. Thus, we derive a pseudo gradient, similar to previous work in [14], as the attack model's input data. Specifically, we assume that the global model undergoes T local updates at the k-th client, where T is the product of the number of local training epochs and the number of mini-batches within a local training epoch. Thus, we can define the pseudo gradients g_k^t with the learning rate η as:

g_k^t = (θ^t − θ_k^t) / (η T)

Given this, we aim to train the attack model with parameters ψ to minimize the following cross-entropy loss function:

min_ψ Σ_{(g_k^t, z_k) ∈ D_a} ℓ_CE(M_a(g_k^t; ψ), z_k)

Our attack model is similar to the membership inference attack model architecture in [16]. The attack model consists of CNN feature extractors and classifiers, as shown in Fig. 4. ∇W_i and ∇b_i represent the weight updates and the bias updates in g corresponding to the i-th layer, respectively. Each layer's weight updates (generated from the FL training) are first fed into a three-layer CNN feature extractor to compute the hidden representation. We then flatten the output from the CNN feature extractor and concatenate it with the layer's bias updates. We then pass this combined representation to the MLP classifier to predict gender.
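A minimal sketch of the pseudo-gradient computation above, assuming per-layer parameter dictionaries (the naming is ours, not the paper's):

```python
# g_hat = (theta_global - theta_client) / (eta * T), computed per layer,
# with T = local_epochs * num_minibatches as defined above.
def pseudo_gradients(global_params, client_params, eta, local_epochs, num_batches):
    T = local_epochs * num_batches
    return {name: (global_params[name] - client_params[name]) / (eta * T)
            for name in global_params}
```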
We use a fusion layer to combine the predictions from the individual layer classifiers; the fusion method used in this work is a weighted average function. We determine the importance of each layer's gender prediction output based on the size of the shared updates. Finally, we evaluate the performance of the attack model using the shared model updates generated in the private FL setting, where the attack model's goal is to infer the gender labels of clients in the private training data set.

EXPERIMENTS

In this section, we describe our experimental setup, including data processing, data setup, and training details. The implementation of this paper is at https://github.com/usc-sail/fed-ser-leakage.

Data Preprocessing

To investigate the effectiveness of the proposed attack framework, we train our SER models on a variety of speech representations. We first generate the Emo-Base feature set using the OpenSMILE toolkit [23] for each utterance. In addition to the knowledge-based speech feature set, we propose to evaluate our framework on SUPERB (Speech Processing Universal PERformance Benchmark) [24], which is designed to provide a standard and comprehensive testbed for pre-trained models on various downstream speech tasks. We compute the deep speech representations from the pre-trained models that are available in SUPERB, including APC [25], Vq-APC [26], Tera [27], NPC [28], and DeCoAR 2.0 [29]. We further compute the global average of the last layer's hidden state as the final feature from the pre-trained model's output. Using the last hidden state is suggested in prior works for downstream tasks [25], [28], [29], [30]. Our feature sizes are 988 in Emo-Base; 512 in APC, Vq-APC, and NPC; and 768 in Tera and DeCoAR 2.0. We apply z-normalization to the speech features within each speaker. Since there are only 10 speakers in the IEMOCAP data set and 12 speakers in the MSP-Improv data set, we further divide each speaker's data in these two data sets into 10 parts of equal size. This mimics a scenario where a single person owns multiple clients and their data is distributed across them (e.g., a person can own cell phones, tablets and computer devices across which their data is distributed). This division is to create more clients for the FL training. Each divided portion of speaker data is the local training data on a client. In the CREMA-D data set, each speaker is a unique client in the FL training, as there are 91 unique speakers in the dataset. We leave 20% of speakers out as the test data. Then, we repeat the experiments 5 times with test folds of different speakers. Finally, we report the average results of the 5-fold experiments.

Data Setup

We simulate the experiments using different private training data sets. For instance, in the case of the IEMOCAP data set being the private training data set D_p, the MSP-Improv data set and the CREMA-D data set are combined to train the shadow models M_s^1, ..., M_s^m. In our experiments, we set m = 5. Next, we compose the attack training data set using the shared model updates generated from the FL shadow training process and use it to train the attack model M_a. In the above example, we evaluate the performance of M_a using the model updates generated in the FL that uses the IEMOCAP data set as the private training data D_p. Finally, we repeat the same experiments with the MSP-Improv data set and the CREMA-D data set as the private training data set D_p. We also run the experiment on each speech representation.
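As a concrete illustration of the preprocessing described at the start of this section (utterance-level mean pooling of the last hidden states, followed by per-speaker z-normalization), here is a small sketch; the array shapes and function names are illustrative assumptions.

```python
import numpy as np

def utterance_embedding(hidden_states):
    """hidden_states: (num_frames, feat_dim) from a pre-trained model's last layer."""
    return hidden_states.mean(axis=0)

def znorm_per_speaker(features, speaker_ids):
    """features: (num_utts, feat_dim); z-normalize within each speaker."""
    out = np.empty_like(features, dtype=float)
    for spk in np.unique(speaker_ids):
        idx = speaker_ids == spk
        mu = features[idx].mean(axis=0)
        sigma = features[idx].std(axis=0) + 1e-8  # guard against zero variance
        out[idx] = (features[idx] - mu) / sigma
    return out
```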
Model and Training Details In this work, we use the multilayer perceptron (MLP) as the SER model architecture.
Evaluation Metrics Speech Emotion Recognition We use the Unweighted Average Recall (UAR) score to evaluate predictions in SER models. Attacker Task Inspired by [31], [32], and [33], we define the attack success rate (ASR) to measure the attacker performance. Specifically, given a client in the FL training setup, we randomly select a gradient update from the whole training process and make a prediction. We repeat this process 10 times for each client and report the percentage of correct predictions as the attack success rate. We average the attack success rate over all clients as the final performance of the attacker. More formally, the ASR over K clients is defined as: ASR = (1/K) Σ_{k=1}^{K} (1/10) Σ_{j=1}^{10} 1[M_a(g_k^{(j)}) = z_k], where g_k^{(j)} denotes the j-th randomly selected update from the k-th client and 1[·] is the indicator function.
RESULTS In this section, we present SER results on different private data sets. We also show results of the attack model in predicting gender labels of the clients in the private data set.
Speech Emotion Recognition using FL The emotion prediction results of FL training on different private training data sets are shown in Table 3. We report the SER prediction results in accuracy (Acc) scores and unweighted average recall (UAR) scores. The TERA feature set produces the best UAR scores when the private training data is MSP-Improv (FedSGD: 52.60%). Our results show that the SER task performs better when the training data sets are IEMOCAP and CREMA-D. In summary, these results suggest that our SER models, trained within a FL architecture, produce reasonable predictions for the SER task.
Speech features: We observe that this attribute inference attack is possible regardless of the speech representation used for the SER task (UAR scores are all above 70%). It is also interesting to note that the attack model yields the best overall performance in predicting gender labels when deep speech representations, such as APC and Tera, are the input data to the SER task, rather than the knowledge-based feature set, Emo-Base. Noticeably, these deep speech representations also provide the best overall emotion prediction performance, as shown in Table 3. Typically, deep speech representations are more generalized feature embeddings for downstream speech applications. Besides the knowledge-based speech feature set, Tera, and APC, we find that the other deep speech representations can also generate shared model updates in the FL that leak significant attribute information about the client.
FedSGD and FedAvg: We find that this attribute information leakage exists in both of these FL learning algorithms. Interestingly, we discover that the attack model has a higher chance of predicting the client's gender information when we train the private SER model using the FedSGD algorithm. One reason behind this is that the model updates in FedAvg are averaged model differences, which carry less information for the inference attack. This observation is consistent with the results of data reconstruction attacks reported in [14].
Data Set: In general, we find that this attribute inference attack is possible with all data set combinations used in this work. However, the attack model appears to have slightly better gender prediction performance when the private training data set is either IEMOCAP or MSP-Improv. This is probably because the CREMA-D data set consists of more unique speakers, creating more diverse FL training updates.
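For clarity, here is a small Python sketch of the attack success rate defined in the evaluation metrics above. `attack_predict` stands in for the trained attack model's prediction function; it is an assumed interface, not an API from the paper's code.

```python
import numpy as np

def attack_success_rate(client_updates, client_labels, attack_predict,
                        n_draws=10, seed=0):
    """ASR as defined above: for each client, randomly draw one shared update
    from its training history, predict gender, repeat n_draws times, and
    record the fraction of correct predictions; then average over clients."""
    rng = np.random.default_rng(seed)
    per_client = []
    for updates, z in zip(client_updates, client_labels):
        correct = sum(attack_predict(updates[rng.integers(len(updates))]) == z
                      for _ in range(n_draws))
        per_client.append(correct / n_draws)
    return float(np.mean(per_client))
```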
Summary: The experimental results above demonstrate that our proposed attack framework can robustly infer the gender of the clients involved in the FL using only the shared model updates (raw gradients or pseudo gradients), without accessing the clients' private speech feature data. Consequently, even though the speech feature samples are not accessible by the attacker in FL when training the global SER model, the attribute information about a client can leak through the model updates.
MITIGATION POSSIBILITIES There are protection schemes such as cryptography solutions [36], [37] and the use of trusted execution environments [38], [39] for secure aggregation. However, cryptography solutions have a significant performance overhead, and they are not scalable to systems with many edge devices. Trusted execution environments such as Intel SGX [40] provide private environments for data privacy and computational integrity. However, they are not available in all data centers. In this section, we present an analysis of potential factors related to the attribute information leakage in the FL of the SER model. We aim to investigate a few mitigation strategies based on these possible information leakage factors. Note that all of these proposed mitigation strategies are software-based solutions with low performance overhead. These methods do not need any special system support.
FedSGD and FedAvg As we observe from Table 4, the attack model performs better when we train the SER model using the FedSGD algorithm. Thus, a straightforward defense is to train the global SER model using the FedAvg algorithm. An additional benefit of using FedAvg is that it significantly reduces the communication overhead during training: the client transfers the shared model updates after T local updates instead of after each local mini-batch. The primary reason why the attack model performs worse in the FedAvg scenario is that the averaged model differences contain less information about the training samples than the raw gradients, as shown in [14].
The layer position of shared model updates As suggested in previous works [41], [42], most information leakage is related to the early layers in a machine learning model. To evaluate this in our attack scenario, we measure the gender prediction performance of the individual classifiers without fusion. Table 5 shows the gender prediction performance obtained by using the shared model updates from different layers in the SER model. From Table 5, we can observe that the attack model can consistently predict the client's gender label using only the shared model updates between the feature input and the first dense layer (∇W_1 and ∇b_1) of the classifier model. However, the gender prediction performance decreases significantly when using the shared model updates between the first and second dense layers (∇W_2 and ∇b_2) or between the second and the output layer (∇W_3 and ∇b_3) of the model. The attack success rate is in the range of 50%-75% using ∇W_2 and ∇b_2 in most of the experimental setups following the FedSGD, and this performance is around 55%-65% when using ∇W_3 and ∇b_3 as input. When training the SER model using the FedAvg, the attacks are much weaker using ∇W_2 + ∇b_2 or ∇W_3 + ∇b_3. Thus, we can conclude that the earlier layers' shared updates leak more information about the client's gender attribute when training the SER model using FL.
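A tiny helper makes the per-layer analysis concrete: the attack model is trained and tested on only one layer's ∇W_i and ∇b_i at a time. The dictionary key names below (dense{i}.weight / dense{i}.bias) are hypothetical placeholders for whatever naming the update dictionaries actually use.

```python
def layer_only_input(update, layer_idx):
    """Restrict a shared-update dict to a single layer's weight and bias
    updates, so the corresponding per-layer classifier can be evaluated
    without the fusion layer."""
    keys = (f"dense{layer_idx}.weight", f"dense{layer_idx}.bias")
    return {k: update[k] for k in keys}
```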
Dropout Another possible defense is to employ higher dropout [43], a popular regularization technique used to mitigate overfitting in neural networks. Dropout randomly deactivates neuron activations with a probability between 0 and 1. Random deactivations may weaken the attack model because the adversary observes fewer gradients corresponding to the active neurons. We evaluate this assumption by increasing the dropout value to 0.4 and 0.6 after the first dense layer of the MLP classifier. We only increase the dropout rate associated with the first dense layer, since we have shown that this attribute information leakage comes mostly from ∇W_1 and ∇b_1. Table 6 shows the UAR scores of the SER task and of the inference attack task using the shared model updates, for different dropout values. Increasing the dropout value can remove features that are relevant for our primary application, thus decreasing the performance of the SER task. However, our attacks become stronger with increased randomness of the dropout applied to the SER model, which is similar to the results shown in [12]. Our assumption is that there are many shared features which are informative of both emotion and gender. Therefore, removing unimportant features for the SER task also eliminates features irrelevant to the gender prediction, while the remaining features are more informative about the gender attribute.
Differential Privacy (DP) Differential privacy (DP) prevents information leaks by adding artificial noise. The idea of DP is to perturb the data so that the contributions of different clients have similar distributions. More formally, DP perturbs the local data using a mechanism M, such that for neighboring data sets D and D′, which differ by one sample, we can define the following:
Definition 7.1 ((ε, δ)-DP). A random mechanism M satisfies (ε, δ)-DP, where ε > 0 and δ ∈ [0, 1), if and only if for any two adjacent data sets D and D′ and any set of outputs S, we have: Pr[M(D) ∈ S] ≤ e^ε Pr[M(D′) ∈ S] + δ.
The parameter ε > 0 defines the privacy guarantee that DP provides, and a smaller ε indicates a stronger privacy guarantee. δ ∈ [0, 1) indicates the probability that privacy leaks can occur under the privacy guarantee ε [44]. In our recent work [31], we explored using the user-level differential privacy (UDP) algorithm to mitigate the attribute inference attack in the FedAvg setup. In this work, we extend our prior work to mitigate the attribute inference attack in the FedSGD setting. Specifically, we implement the FedSGD-DP algorithm described in [45]. We experiment with ε ∈ {1, 5, 10, 100, 1000} and δ = 0.1. The norm clipping threshold is set to 1.5. We evaluate the attacker performance using the first layer's model gradients, similar to our prior work [31]. Fig. 5 shows the performance of the SER model (left column) and the attacker task (right column) under different ε. ε = ∞ indicates the case where there is no mitigation in the training process.
SER Performance: From the SER predictions, we find that SER performance drops by 1-2% on the IEMOCAP and MSP-Improv data sets when ε = 1000 or ε = 100. This decrease in SER performance is around 4-5% on the CREMA-D data set. We also observe that SER performance starts to drop substantially when ε ≤ 10.
Attacker Performance: On the other hand, we observe that attacker performance decreases significantly even when ε is as large as 1000. Similar to the SER prediction results, the attacker is unable to perform the privacy attacks when ε ≤ 10.
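The following sketch illustrates the clip-and-perturb step that DP-style training applies to each shared update before release (norm clipping at 1.5, as in the experiments above). The mapping from (ε, δ) to the noise multiplier is handled by a privacy accountant and is omitted here, so `noise_multiplier` is a placeholder assumption, not a calibrated value.

```python
import numpy as np

def clip_and_noise(update_vec, clip_norm=1.5, noise_multiplier=1.0, seed=0):
    """Clip the update's L2 norm to clip_norm, then add Gaussian noise with
    standard deviation noise_multiplier * clip_norm (Gaussian mechanism)."""
    rng = np.random.default_rng(seed)
    update_vec = np.asarray(update_vec, dtype=float)
    norm = np.linalg.norm(update_vec)
    clipped = update_vec * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm,
                                size=clipped.shape)
```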
Attacker Performance with access to multiple updates: In the above mitigation, we explore the DP mitigation when the attacker has access to only one round of model updates. However, in our prior work [31], we have shown that the attacker can regain the ability to infer gender by aggregating multiple rounds of training updates under a weaker privacy guarantee. Similarly, we explore the ASR when the attacker can access multiple rounds of model updates in Fig. 6. We find that the attacker can indeed regain the ability to infer gender with access to multiple rounds of training updates at a weaker privacy guarantee (ε = 1000). However, the attacker fails to infer gender when the privacy guarantee is strong. This validates that DP can effectively mitigate the proposed attribute inference attack when the attacker can only access one round of training updates. The DP mitigation becomes less effective with multiple rounds of training updates leaked to the attacker, where the attacker is able to perform the privacy attack under a weaker privacy guarantee. On the other hand, we notice that the performance drop in the SER application becomes substantial when ε ≤ 10. These observations imply that DP can provide satisfactory privacy protection against our proposed privacy attacks, but with a noticeable drop in SER performance.
CONCLUSION In this paper, we investigated attribute inference attacks on speech emotion recognition models trained within federated learning scenarios of shared gradients (FedSGD) and shared models (FedAvg). Our results show that unintended, and potentially private, properties (like gender) associated with the clients in the FL can leak through the shared updates when training the SER model. The deep models appear to internally capture many features uncorrelated with the tasks they are being trained for. Consequently, the attribute inference attacks are potentially powerful in this setting because the shared training updates carry significant, potentially sensitive information about a (training) client. Our results suggest that the attacks are stronger when training the global SER model using the FedSGD algorithm than with the FedAvg algorithm. We also show that the shared updates between the input and the first dense layer leak the most information in this attribute inference attack. We further empirically demonstrate that defense strategies like dropout are not effective in mitigating this information leakage. We then show that Differential Privacy (DP) can mitigate this privacy attack with a stronger privacy budget, by sacrificing the utility of the SER model. These results motivate future work on defenses using the adversarial training technique to unlearn the sensitive attribute. Some of the limitations of our study include the relatively small number of clients and data sets, even after combining three widely used SER test-beds. In addition, our work assumes that the attacker has access to each client's model updates, but this can be mitigated by aggregating shared updates from several clients in a local aggregator before transferring them to the central aggregator. In the future, we aim to build our SER model using more complex model structures, e.g., RNN+classifier. We also wish to apply defense mechanisms, such as the adversarial training shown in [46], to train the SER model in the FL setup. Meanwhile, the current attack model utilizes only two public SER data sets, and we aim to include more public data sets to further increase the attacker's performance.
Finally, we wish to evaluate the membership inference attack [33] and the label inference attack [32] within similar experimental settings.
ACKNOWLEDGMENT The work was supported by the USC-Amazon Center on Trusted AI.
Tiantian Feng received the B.S. degree in instrument technology from the Nanjing University of Posts and Telecommunications, Nanjing, China, in 2013, and the M.S. degree in electrical engineering from the University of Southern California, Los Angeles, CA, USA, in 2015, where he is currently working toward the Ph.D. degree. His current research interests include privacy-enhancing computing, smart wearable sensing applications, multimodal biomedical signal processing, and affective computing.
Hanieh Hashemi is a Ph.D. candidate in the ECE department at the University of Southern California. She is advised by Professor Murali Annavaram. Her thesis focuses on data privacy in deep neural networks and recommendation systems. She worked with Facebook AI Research on recommendation system privacy. She collaborated with Samsung Semiconductor on efficient graph processing methods.
Rajat Hebbar received the bachelor's degree in electronics and communication engineering from the National Institute of Technology, Karnataka, India, and the master's degree in electrical and computer engineering from the University of Southern California, Los Angeles, CA, USA, where he is currently working toward the Ph.D. degree with the Electrical and Computer Engineering Department. His research interests include developing machine learning-based robust speech and audio processing techniques for several challenging real-world domains, such as multimedia, wearable-device audio, and multiparty meetings.
Murali Annavaram is the Dean's Professor in the Electrical and Computer Engineering and Computer Science departments at the University of Southern California. He currently also holds the Rukmini Gopalakrishnachar Visiting Chair Professorship at the Indian Institute of Science. His research focuses on energy efficiency through heterogeneous computing, near-data computing, hardware-assisted secure and private machine learning, and superconducting electronics design. He is the founding Director of the USC-Meta Center for Research and Education in AI and Learning. He is also the architecture thrust leader for the DISCoVER NSF Expeditions in Computing Center at USC. For his numerous publications, he was inducted into the hall of fame at three top-tier computer architecture conference venues: ISCA, HPCA and MICRO. More information about his work can be found at https://annavar.am/.
Shrikanth (Shri) Narayanan (StM'88-M'95-SM'02-F'09) is University Professor and Niki & C. L. Max Nikias Chair in Engineering at the University of Southern California, and holds appointments as Professor of Electrical and Computer Engineering, Computer Science, Linguistics, Psychology, Neuroscience, Otolaryngology-Head and Neck Surgery, and Pediatrics, Research Director of the Information Sciences Institute, and Director of the Ming Hsieh Institute. Prior to USC he was with AT&T Bell Labs and AT&T Research. Shri Narayanan and his Signal Analysis and Interpretation Laboratory (SAIL) at USC focus on developing engineering approaches to understand the human condition (including voice, face and biosignal based biometrics) and on creating machine intelligence technologies that can support and enhance human experiences.
He is a Guggenheim Fellow and a Fellow of the Acoustical Society of America, IEEE, ISCA, the American Association for the Advancement of Science (AAAS), the Association for Psychological Science, the American Institute for Medical and Biological Engineering (AIMBE) and the National Academy of Inventors. He has published over 900 papers and has been granted eighteen U.S. patents.
Benefits of biomarker selection and clinico-pathological covariate inclusion in breast cancer prognostic models
Introduction Multi-marker molecular assays have impacted management of early stage breast cancer, facilitating adjuvant chemotherapy decisions. We generated prognostic models that incorporate protein-based molecular markers and clinico-pathological variables to improve survival prediction. Methods We used a quantitative immunofluorescence method to study protein expression of 14 markers included in the Oncotype DX™ assay in a cohort of 638 breast cancer patients with 15-year follow-up. We performed cross-validation analyses to assess the performance of multivariate Cox models consisting of these markers and standard clinico-pathological covariates, using an average time-dependent Area Under the Receiver Operating Characteristic curve measure, and compared them to nested Cox models obtained by robust backward selection procedures. Results A prognostic index derived from a multivariate Cox regression model incorporating molecular and clinico-pathological covariates (nodal status, tumor size, nuclear grade, and age) is superior to models based on molecular studies alone or clinico-pathological covariates alone. Performance of this composite model can be further improved using feature selection techniques to prune variables. When stratifying patients by Nottingham Prognostic Index (NPI), the most prognostic markers in the high and low NPI groups differed. Similarly, for the node-negative, hormone receptor-positive sub-population, we derived a compact model with three clinico-pathological variables and two protein markers that was superior to the full model. Conclusions Prognostic models that include both molecular and clinico-pathological covariates can be more accurate than models based on either set of features alone. Furthermore, feature selection can decrease the number of molecular variables needed to predict outcome, potentially resulting in less expensive assays.
Introduction Adjuvant systemic therapy for patients with breast cancer includes chemotherapy, anti-hormonal therapy and molecular targeted therapy. Selection of anti-hormonal and molecular targeted therapy is based on biological factors of individual tumors (presence/absence of hormone receptors and amplification/over-expression of human epidermal growth factor receptor (HER) 2). The decision whether to give chemotherapy and the specifics of the chemotherapy regimens used are typically based on standard clinical and pathologic criteria (primarily tumor grade, tumor size, nodal involvement, patient age), in addition to receptor status. Given the variability in outcome in each risk category, much effort has been made to improve risk assessment strategies [1]. Among multi-marker molecular assays, the RT-PCR-based Oncotype DX™ assay is the most widely used in the USA. It has been validated in several studies, was recently endorsed by the American Society of Clinical Oncology (ASCO), and its cost is covered by third party payers, including Medicaid and Medicare. Samples are sent to a centralized location at Genomic Health for testing, at a current cost of $3,460 per sample. The Oncotype assay uses mRNA extracted from paraffin-embedded tumors to measure levels of 16 markers [9], and it has been validated in different cohorts [1]. Our purpose was to evaluate the incorporation of standard clinico-pathological variables into models that include the Oncotype markers. To obtain a simplified protein-based assay, we employed a method of automated, quantitative analysis (AQUA) for these studies.
This method has been used and validated in numerous prior breast cancer studies [10][11][12]. We derived models that were superior in outcome prediction to morphology alone or marker expression alone.
Tissue microarray construction Breast cancer tissue microarrays (TMAs) were constructed as previously described [13]. A cohort of 319 sequentially collected node-negative specimens and a separate cohort of 319 sequentially collected specimens from node-positive breast cancer patients from the Yale Department of Pathology Archives were cored. Specimens and clinical information were collected with Institutional Review Board approval. By standard immunohistochemistry (IHC), estrogen receptor (ER) was positive in 52%, progesterone receptor (PR) in 46% and HER2 in 14% of specimens. Of those sampled, 26% were nuclear grade 3/3, 48% were nuclear grade 2/3, 18% were nuclear grade 1/3, and for 8% of the specimens the nuclear grade score was missing. The mean tumor size was 2.9 cm, and 59% were larger than 2 cm. A total of 72% were invasive ductal carcinoma, 14% were lobular carcinoma, and 14% had mixed or other histology. Specimens were resected between 1962 and 1983, and follow-up was between 4 months and 53 years (mean 12.6 years). Age at diagnosis was 24 to 88 years (mean 58 years). Complete treatment history was not available for all patients. Most were treated with local irradiation. Node-negative patients were not given adjuvant systemic therapy. A minority of node-positive patients (about 15%) received chemotherapy, and about 5.6% received tamoxifen (ER-positive, post-menopausal, after 1978).
Immunofluorescent staining Staining was performed for AQUA analysis as previously described [10]. Primary antibodies are detailed in Table 1. All antibodies were carefully validated, as described previously [14][15][16]. Goat anti-mouse (or anti-rabbit) horseradish peroxidase-decorated polymer backbone (Envision; Dako, Carpinteria, CA, USA) was used as a secondary reagent, and Cy5-tyramide (Perkin Elmer Life Science, Waltham, MA, USA) was used to visualize the target. Anti-cytokeratin antibodies conjugated to Alexa-488 were used to create a tumor mask, to distinguish malignant cells from stroma. Nuclei were visualized using 4',6-diamidino-2-phenylindole. Staining of representative histospots for ER, PR and HER2 has been published elsewhere [17]. Staining for ER and PR was uniformly nuclear, and staining for HER2 was uniformly membranous, as seen with routine IHC.
Automated image acquisition and analysis Images were acquired for AQUA as extensively described previously [13]. Briefly, multiple monochromatic, high-resolution (1,024 × 1,024 pixels, 0.5 μm) grayscale images were obtained for each histospot, using the 10× objective of an Olympus AX-51 epifluorescence microscope (Olympus, Center Valley, PA, USA) with an automated microscope stage and digital image acquisition driven by custom program- and macro-based interfaces with IPLabs software (Scanalytics Inc., Fairfax, VA, USA). Images were analyzed using algorithms that have been previously described [10]. Data were expressed as the average signal intensity per unit area of tumor mask on a scale of 0 to 255.
Statistical analysis We measured protein levels of 14 of the 16 oncotype markers. Strong correlations were found between the AQUA scores for ER, PR and HER2 and the scores generated by pathologist IHC-based scoring, with significant Spearman correlations. The AQUA scores and IHC variables were not normally distributed, as expected.
For example, HER2 IHC scores were predominantly negative, and the AQUA scores for the HER2 adaptor protein GRB7 were predominantly low. This is consistent with what is known about the biology of these markers, and we therefore used the raw average of scores for all markers.
Incorporation of the Nottingham Prognostic Index The number of cases (with no missing AQUA values) in the standard, clinically used low, intermediate and high Nottingham Prognostic Index (NPI) groups of our cohort was 124, 265, and 120, respectively. To increase sample size, we split the cohort by binarizing patients at an NPI of 4.4. We performed Cox proportional hazards analyses on these two subpopulations using the 18 protein and clinico-pathological variables.
Nested cross-validation for model selection and model assessment To accomplish model size reduction via feature selection and to assess the performance of models in an unbiased fashion, we employed a nested cross-validation procedure [18][19][20]. Specifically, we performed 100 times 10-fold cross-validation for model validation, using a nested 10-fold cross-validation procedure for feature and model selection. Pseudocode is provided in Table 2.
Partitioning of data for nested cross-validation To prevent overfitting, we effectively partitioned the data into three parts: a feature extraction training subset (inner training set); a model size selection and variable stability evaluation subset (inner testing set); and an outer test set for performance estimation of models trained on the outer training set, which comprised the inner training and testing sets. Specifically, we partitioned the data into 10 non-overlapping, balanced subsets of cases (outer folds). Following a standard n-fold cross-validation approach, each fold was used once as the outer testing set while the remaining folds were used as the outer training set. Similarly, at each iteration of the outer 10-fold cross-validation, we partitioned the data of the outer training folds (90% of the overall data) into 10 folds to be used in an inner 10-fold cross-validation loop.
Variable and model size selection We used the inner training set to train reduced nested Cox models with decreasing numbers of variables, from n-1 variables down to one variable. These nested models were determined by a backward feature elimination procedure that iteratively removed the variables with the smallest contribution to the model likelihood. We then used the inner test fold to compute the inner performance score for each of the nested models trained on the inner training folds and selected the reduced model with the highest score. The performance score of these Cox models was evaluated using the area under the staircase receiver operator characteristic curve (AUCROC) at the time of each event in the testing set, and we averaged these AUCROCs across all such events. We refer to this measure as the average time-dependent AUCROC across all death events. The average time-dependent AUCROC measure is a variant of the weighted average of time-dependent ROC curve approaches [21][22][23]. We compute the AUCROC from the staircase ROC curve to avoid the overestimation associated with convex hull or trapezoidal interpolation procedures. Each iteration of the inner 10-fold cross-validation returns one reduced model. We used these 10 reduced models to compute the expected model size. The expected model size is the weighted average of the sizes of these models, using their respective inner average time-dependent AUCROCs as weights.
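The average time-dependent AUCROC described above can be sketched as follows. This is a simplified Python illustration that assumes a cumulative-case/dynamic-control definition at each death time (a detail the text does not spell out), with each staircase area computed as a Mann-Whitney statistic so that no interpolation is involved.

```python
import numpy as np

def avg_time_dependent_aucroc(risk_score, time, event):
    """Average of staircase AUCROCs evaluated at each death time: at event
    time t, cases have an observed event by t and controls are still at
    risk beyond t; the staircase AUC equals the Mann-Whitney U statistic,
    avoiding the overestimation of trapezoidal interpolation."""
    risk_score, time, event = map(np.asarray, (risk_score, time, event))
    aucs = []
    for t in np.sort(time[event == 1]):
        cases = (time <= t) & (event == 1)
        controls = time > t
        if cases.any() and controls.any():
            # pairwise comparisons: higher risk score should rank cases first
            diff = risk_score[cases][:, None] - risk_score[controls][None, :]
            aucs.append((diff > 0).mean() + 0.5 * (diff == 0).mean())
    return float(np.mean(aucs))
```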
Similarly, the stability of each variable was determined as the fraction of reduced models containing the variable; to take performance into account, we defined an alternative variable stability score by summing the average time-dependent AUCROCs of the reduced models that include the variable.
Model selection and validation For each iteration of the outer 10-fold cross-validation, we used the outer training set to train two models: a reduced Cox model consisting of the expected-model-size number of variables with the largest variable stability scores, and a full Cox model comprising all variables. Finally, the full and reduced models were assessed on both the outer training and testing sets (training and testing performances are shown in red and black, respectively, in Figure 1).
Statistical comparison of models To compare the distributions of Cox model average time-dependent AUCROCs of the 100 times 10-fold cross-validation (e.g., of full or reduced models), we applied two-sided Mann-Whitney U-tests.
Risk of death models that incorporate both protein markers and clinical/pathological variables We employed a method of quantitative immunofluorescence to derive multivariate Cox proportional hazards models for 15-year survival. We had a total of 18 variables: 14 protein markers from the Oncotype panel (SCUBE2 and CTSL2 were omitted due to a lack of commercially available, technically reproducible antibodies for immunofluorescent staining of paraffin-embedded specimens) and four clinico-pathological markers. ER, PR and HER2 were assessed by IHC. Many of these variables were univariately prognostic, as shown in Table 3. We assessed the prognostic ability of this model and the other models presented below by the average time-dependent AUCROC. The mean cross-validated average time-dependent AUCROC of this 18-covariate model is 0.746 at 15 years. We compared this 18-covariate model with a 14-covariate Cox proportional hazards model consisting of the same protein markers (including ER, PR and HER2), but excluding nodal status, size, nuclear grade and patient age.
Figure 1. Error bars span ±1 standard deviation from the average performance of the models. Combining protein plus clinico-pathological variables improved model performance, and the variable reduction shown in the reduced models resulted in further improvement. Middle row: the sizes of the 15-year survival reduced Cox models were derived from the expected model size distributions. Bottom row: the variables incorporated in these reduced models were chosen according to their stability (frequency) in the nested cross-validation procedure. Distributions of model sizes and frequency-based stability were derived from the reduced models trained on the outer training set. For example, the average size of the reduced models derived from the protein-only variables (left column) is four, and thus the final reduced model includes AURKA, BCL2, CD68 and MYBL2. ER, estrogen receptor; HER, human epidermal growth factor receptor; PR, progesterone receptor.
The average time-dependent AUCROC of this protein-based model is 0.627. The mean of the distribution of average time-dependent AUCROCs of the 18-covariate models obtained by 100 10-fold cross-validations is significantly higher than the corresponding distribution mean of the 14-covariate protein-based model (P < 10^-10). The performances of the full models are shown in Figure 1, upper row (FMs).
We then derived a Cox proportional hazards model with the seven standard clinico-pathological variables only (ER, PR and HER2 by IHC, plus nodal status, size, grade and age). The mean average time-dependent AUCROC of the seven-covariate clinico-pathological model is 0.712. This is significantly lower than the mean of the composite 18-covariate model (P < 10^-10) and significantly higher than the mean of the 14-covariate protein model (P < 10^-10).
Variable reduction We sought to simplify the three models (proteins combined with clinico-pathological variables, and either group alone). We employed stability-based backward feature selection (described above) to derive compact Cox proportional hazards models nested within each of these three models. The cross-validated performances of the 18-variable (protein and clinico-pathological) full models and of the reduced models selected from these 18 variables were assessed by the average time-dependent AUCROC measure, denoted by circles (upper row, Figure 1). The corresponding performances on the training sets are denoted by plus signs. Application of the robust backward feature selection described above eliminated, on average, eight of the least robust features. The selected reduced models were assessed by applying the average time-dependent AUCROC score to the external validation sets. The mean of the average time-dependent AUCROC distribution of these reduced models is 0.757, significantly higher than that of the corresponding distribution of the 18-variable full models (P = 0.0021606, Mann-Whitney U-test). The nested cross-validation procedure culminates in a final model that excludes BAG1, PR, GRB7, ER, BIRC5, nuclear grade, MMP11 and KI67. The variables retained in this model include the 10 variables with the highest stability scores (bottom row, middle column, Figure 1). Furthermore, the average time-dependent AUCROC score distribution of the reduced models derived from the 18 clinico-pathological and protein variables is significantly higher than the corresponding distribution of reduced models derived from the clinico-pathological variables alone (P < 10^-10). We similarly analyzed the same 14 proteins, but excluded the clinico-pathological variables (nodal status, tumor size, age, nuclear grade). The average size of the 15-year survival Cox proportional hazards model (middle row, left column, Figure 1) indicates that, for an assay based on these protein covariates, it is optimal to keep only the four most robust variables, AURKA, BCL2, CD68 and MYBL2, in a simplified survival model. The average time-dependent AUCROC of these reduced models is 0.651, significantly higher than the corresponding average of the full models (P = 7.32 × 10^-9). The reduced protein-based models, although superior to the full protein models, significantly underperform with respect to the combined protein and clinico-pathological models. These protein-only models also underperform relative to models based on the standard clinico-pathological variables alone (average time-dependent AUCROC = 0.711) and their nested reduced models (average time-dependent AUCROC = 0.713; right column, Figure 1).
Prognosis for low and high NPI populations We next sought to determine whether the panel of 18 markers can be reduced in each NPI category and whether the sets of survival predictors for different NPI subpopulations vary. The number of cases with no missing values in the low, intermediate and high NPI groups was 124, 265, and 120, respectively.
Due to the small sample size of the low and high NPI groups, we did not use the standard NPI cut-points of 3.4 and 5.4, but binarized the population at the midpoint of this NPI range (4.4).
Table 3. For each marker, a univariate Cox proportional hazards model is fit to the data using the entire cohort. The 95% confidence interval (CI) is shown together with the log-likelihood test P value. Estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor (HER) 2 were measured by immunohistochemistry (IHC).
We applied robust backward elimination for the lower and higher NPI groups (Figures 2a and 2b, respectively). Simplification of the models results in higher assessment scores in both groups, with an expected model size of 7 and 11 variables, respectively. The difference between the means of the average time-dependent AUCROC distributions of the full models (0.625) and the reduced models (0.675) in the lower NPI group reached significance (P < 10^-10). These distributions are indistinguishable in the higher NPI group (P = 0.49). Some markers, such as CCNB1, KI67 and MYBL2, are not included in the lower NPI reduced model but are present in the reduced model of the higher NPI group. This indicates that one could tailor simpler and cheaper multi-protein predictors to populations stratified by clinico-pathological variables.
Prognosis for node-negative, hormone receptor-positive population We questioned whether we could compress the full 18-variable model for the subpopulation of node(-), hormone receptor (+) breast cancers, as many of these patients in the USA are tested using the Oncotype DX™ screen. The full model in this case consists of 17 variables, because the nodal status variable is fixed, while the ER and PR variables are either positive or negative, as long as at least one of the two hormone receptors is positive. Applying robust backward selection in a nested cross-validated fashion resulted in a highly compact model consisting of five variables: AURKA, tumor size, HER2, CD68 and nuclear grade (Figure 3). The mean of the average time-dependent AUCROC distribution (0.71) is significantly higher than the mean of the full 17-variable model (0.63, P < 10^-10). The full and reduced models of the clinico-pathological variables alone applied to this sub-population showed inferior performance (Figure 3, right column).
Discussion We measured protein expression levels of 14 of the 16 oncotype markers in primary tumors from 638 breast cancer patients with 15-year follow-up, using AQUA. This method has now been well established and is used by many laboratories [24][25][26][27][28][29][30][31][32]. Measurements can be conducted on whole specimens or TMAs. Many of the oncotype markers were independently prognostic [14][15][16]. We assessed the added value of each oncotype marker in combination with standard clinical and pathological variables, including ER, PR and HER2 evaluated by eye using routine IHC. Our studies indicate that a multivariable survival model including both molecular markers and standard clinical/pathological markers is significantly superior to a model based on either group of variables alone. Moreover, with judicious subset selection of the combined set of clinico-pathologic variables and oncotype markers, we derived a more compact test with better cross-validated prognostic value. We also showed that when splitting the patient cohort into two groups with NPI of 4.4 or less and more than 4.4, we obtain different marker subsets in these groups.
Finally, we showed that for the node-negative, hormone receptor-positive subpopulation, a compact model consisting of only three proteins of the panel of 14 (AURKA, HER2, CD68) plus tumor size and nuclear grade is superior to a full model consisting of these 14 variables with the additional standard clinico-pathological variables. Optimal staging of breast cancer patients is primarily necessary for identifying individuals in need of adjuvant chemotherapy. The seven clinico-pathological variables included in our model are typically readily available on all patients, and can be incorporated into molecular assays at no additional cost. The performance of our reduced nested models converges at a value close to 0.757 if we include both molecular and clinico-pathological covariates, and drops to 0.651 if we exclude the clinico-pathological variables. Oncotype assays by RT-PCR of the 16 molecular variables in other patient cohorts are reportedly associated with AUCROCs in the same range. For example, using the oncotype RS, Goldstein et al. found that for recurrence at five years, ROC analysis results in an AUCROC of 0.69 [33]. Direct comparisons between oncotype results and our findings are not possible, given the differences in patient cohorts, treatment patterns, available clinical endpoints and model evaluation methods. For example, the oncotype assay was developed for a hormone receptor-positive population treated with tamoxifen, and progression-free survival was the primary endpoint. The primary endpoint in our studies was overall survival, and the cohort included hormone receptor-positive and -negative patients. Our purpose was not to conduct a head-to-head comparison of our method to the oncotype method, and it is unclear how the protein-based AQUA scores relate to the RT-PCR measures of mRNA obtained by oncotype. However, our work further validates the use of oncotype markers by confirming their prognostic value at the protein level using a different technology. A limitation of this study is that we were unable to obtain a cohort in which the Oncotype Dx test was performed to facilitate head-to-head comparison, and further validation of our protein-based models in an independent cohort is warranted. The performance of our reduced models suggests that we can considerably simplify our original models of 18 variables. The expected model size with the highest performance level consists of 10 of the most robust predictive variables and is comprised of four clinico-pathological variables and six additional proteins: nodal status, tumor size, AURKA, BCL2, age, CD68, HER2, MYBL2, CCNB1 and GSTM1. Thus, use of a smaller subset of variables can further decrease the cost of molecular testing.
Figure 2. Performance, model size distribution and variable stability of reduced models, as described in Figure 1, for the lower NPI and higher NPI risk groups. Left column: patients with a Nottingham Prognostic Index (NPI) of more than 4.4. Right column: NPI of 4.4 or less. The final reduced model (RM) for the lower NPI group consists of 7 variables, whereas the final reduced model for the higher NPI group consists of 11 partially overlapping variables. For example, CCNB1 is one of the most robust variables in the higher NPI group, but is the least robust variable in the lower NPI group. ATD-AUCROC, average time-dependent area under the receiver operator characteristic curve; ER, estrogen receptor; FM, full models; HER, human epidermal growth factor receptor; PR, progesterone receptor.
Figure 3. Performance, model size distribution and variable stability of reduced models, as described in Figure 1, for the node-negative (node(-)) and hormone receptor-positive (+) subpopulation. We included patients whose tumors were estrogen receptor (ER) positive, progesterone receptor (PR) positive or both. The left column shows the models for all variables excluding nodal status, and the right column shows models for the clinico-pathological variables alone (tumor size, nuclear grade, age, human epidermal growth factor receptor (HER) 2, ER, PR). The compact, reduced model (RM) derived from molecular and clinico-pathological covariates dramatically outperformed the full models (FM), and included AURKA, tumor size, HER2, CD68 and nuclear grade. ATD-AUCROC, average time-dependent area under the receiver operator characteristic curve.
Further extension of this approach, by sub-setting the cohort into low- and high-risk groups using an NPI score of 4.4, which is readily available after standard surgery at no additional cost, revealed that the marker subset with optimal performance in the lower NPI group was different from the subset in the higher NPI group. The seven variables in the reduced model for the lower NPI overlap only in part with the 11 variables in the reduced model of the higher NPI group. The reduced model of the node(-) and hormone receptor (+) subpopulation consists of only five variables, of which three are proteins (AURKA, HER2, CD68).
Conclusions In our cohort, the addition of clinico-pathological variables to the proteins associated with the quantitative RT-PCR Oncotype test added significant prognostic value over the proteins alone. A compact model based on a subset of these proteins and clinical variables is superior to the entire model. Marker subsets with the highest prognostic ability in the high- and low-risk NPI categories are not identical; therefore, personalization of this type of assay based on readily available clinico-pathological variables can result in cost reduction without compromising accuracy.
KTlO: A metal-shrouded 2D semiconductor with high carrier mobility and tunable magnetism
Two-dimensional (2D) materials with high carrier mobility and tunable magnetism are in high demand for nanoelectronic and spintronic applications. Herein, by means of ab initio calculations, we predict a novel two-dimensional monolayer KTlO that possesses an indirect band gap of 2.25 eV (based on HSE06) and high carrier mobility (1860 $\mathrm{cm^2\ V^{-1}s^{-1}}$ for electrons and 2540 $\mathrm{cm^2\ V^{-1}s^{-1}}$ for holes). Monolayer KTlO has a calculated cleavage energy of 0.56 $\mathrm{J\ m^{-2}}$, which suggests exfoliation of the bulk material as a viable means for the preparation of mono- and few-layer materials. Remarkably, the KTlO monolayer exhibits tunable magnetism and half-metallicity under hole doping, which are attributed to the novel Mexican-hat-like bands and van Hove singularities in its electronic structure. Furthermore, monolayer KTlO exhibits moderate optical absorption over the visible-light and ultraviolet regions. The band gap value and band characteristics of monolayer KTlO can be strongly manipulated by biaxial and uniaxial strains to meet the requirements of various applications. All these novel properties render monolayer KTlO a promising functional material for future nanoelectronic and spintronic applications.
I. Introduction Two-dimensional (2D) materials have attracted enormous attention since the successful mechanical exfoliation of graphene in 2004. [1][2][3] To date, the family of 2D materials is growing rapidly, including elemental monolayers (such as group-III, group-IV and group-V), [4][5][6][7][8][9][10][11] MXenes, [12][13][14] transition metal dichalcogenides (TMDCs), [15][16][17][18] metal oxides [19][20][21][22] and so forth. [23][24][25] These 2D materials exhibit extraordinary properties that have been studied in various fields, such as field-effect transistors, photovoltaic solar cells and optoelectronic devices. 16,26,27 In particular, 2D materials with local magnetic moments hold great potential for spintronic applications, such as spin field-effect transistors, spin light-emitting diodes and solid-state quantum information processing devices. [28][29][30][31][32] Besides using intrinsically magnetic materials, there are thus far several approaches to induce magnetism, such as introducing adatoms, defects and edges to the system. However, these approaches face serious challenges in experiments, being affected by factors such as structural disorder and uncontrollable concentrations of dopants and vacancies. On the other hand, a system with a Mexican-hat-like valence band maximum (VBM) may give rise to a high density of states (DOS) and an almost one-dimensional-like van Hove singularity near the VBM. In these systems, hole doping may induce a spontaneous ferromagnetic transition, as observed in GaSe, α-SnO and InP3 monolayers. [33][34][35] Compared with conventional approaches such as transition metal element doping, the magnetism in these 2D materials with Mexican-hat-like bands is inherent and can be tuned by the doping level, for example through liquid electrolyte gating. 36,37 To this end, searching for new candidate 2D materials with high carrier mobility and tunable magnetism is of great interest for spintronic devices. In this work, using first-principles calculations, we report a new monolayer metal-shrouded semiconductor, KTlO, with high dynamic and thermal stability.
In addition, KTlO shows remarkably weak interlayer interactions, which result in a relatively low cleavage energy of 0.56 J m^-2. With an indirect band gap of 2.25 eV, monolayer KTlO shows high carrier mobilities of 2.54×10^3 cm^2 V^-1 s^-1 for electrons and 1.86×10^3 cm^2 V^-1 s^-1 for holes. Most fascinatingly, it exhibits extended singularity points in the DOS near the Mexican-hat-shaped VBM, as well as half-metallicity that can be tuned by hole doping. Such gate-controlled magnetism and high mobilities for electron and hole carriers in 2D KTlO render it a particularly strong candidate for spintronic devices.
II. Computational methods All DFT calculations were performed using the plane-wave-based Vienna Ab-initio Simulation Package (VASP) code. 38,39 The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) 40 functional was adopted for the exchange-correlation interactions, and the van der Waals interaction was corrected using the DFT-D3 approach. 43 For all structural relaxations, the convergence criterion for the total energy was set to 1.0×10^-6 eV, and structural optimization was carried out until the Hellmann-Feynman force acting on each atom was less than 0.01 eV/Å in any direction. The phonon dispersion relations were calculated with density functional perturbation theory, using the PHONOPY code. 44 Ab initio molecular dynamics (AIMD) simulations were performed to check the thermal stability of the structures, where the NVT canonical ensemble was used.
III. Results and discussions As shown in Fig. 1, bulk KTlO is a semiconductor, with a band gap calculated to be 1.46 eV and 2.01 eV at the PBE and HSE06 levels (Fig. 1(b)), respectively. The HSE06 gap value can be regarded as a good approximation to the true fundamental gap. Monolayer KTlO has been obtained in our simulation by taking an atomic layer from the KTlO bulk along the [001] direction, as shown in Fig. 2. As mentioned above, KTlO shows a typical van der Waals stacking structure, and thus mechanical or liquid-phase exfoliation may be possible, just as for graphene and black phosphorus. 1,10,48 To assess this possibility, we calculated the cleavage energy of monolayer KTlO from a five-layer KTlO slab, serving as a model of the bulk. As shown in Fig. 2(b), the cleavage energy increases with the interlayer distance, reaching a converged value of 0.56 J m^-2. Our estimated exfoliation energies of graphene and black phosphorus are 0.32 J m^-2 and 0.37 J m^-2, respectively, consistent with previous theoretical studies. 33,49 Therefore, it is feasible to obtain monolayer KTlO through exfoliation from the bulk, as the cleavage energy is in the same range as those of common 2D materials. In addition, the phonon dispersions of monolayer KTlO (shown in Fig. 2(c)) only show a tiny imaginary phonon mode (2.8 cm^-1) near the Γ point, which comes from systematic computational error, indicating kinetic stability. The thermal stability is further substantiated by AIMD simulations (see Fig. 2(d)), where the monolayer KTlO structure remains intact at 500 K after a 10 ps simulation time. After verifying the stability and the feasibility of exfoliation, we turn our attention to the electronic properties of monolayer KTlO. First, to understand the bonding characteristics, we calculated its electron localization function (ELF) [50][51][52][53] and Bader charges. [54][55][56] As shown in Fig. 3(a), ELF = 1 corresponds to perfect localization, ELF = 0.5 corresponds to the free electron gas, and ELF = 0 means the absence of electrons.
For monolayer KTlO, the electrons are localized around the O atoms and are nearly zero around the cations, similar to the case of Tl2O. 46 In addition, there are also extra valence electrons held by the Tl atoms, which results in high electron localization around the Tl atoms, as shown in Fig. 3(a). The ionic bonding nature in KTlO also implies that the effect of spin-orbit coupling (SOC) should be minor in the KTlO system, since, as the only heavy element, the Tl cation has lost most of its 6p electrons. To verify this argument, we examined the band structures of monolayer KTlO with and without SOC using the PBE and HSE06 functionals (Fig. 3(b) and Fig. S1). According to our calculations, the difference in the band gap value is less than 0.02 eV upon switching on SOC. On the other hand, the VBM and conduction band minimum (CBM) characteristics remain the same with or without considering SOC. Therefore, we shall neglect the SOC effect in all forthcoming band structure calculations. The band structure and the corresponding DOS of monolayer KTlO are shown in Fig. 3(b) and Fig. 3(c), respectively. The partial DOS analysis shows that the states near the VBM mainly consist of O-3p orbitals, with a small contribution from Tl-6s and 6p orbitals. The spatial charge distributions of the VBM and CBM are plotted in Fig. 3(d). The VBM charges are mainly localized at the O atom, while the Tl atoms contribute most to the CBM states. These band-edge features may further be modified by interlayer interaction as well as structural reconstruction upon stacking atomic layers. 57 We further calculated the carrier mobilities (for electrons and holes) of monolayer KTlO to explore its application potential in electronic devices, based on the deformation potential theory proposed by Bardeen and Shockley. 58 Within this theory, the carrier mobility of a 2D material can be written as μ_2D = eħ³C_2D / (k_B T m* m_d E_1²), where C_2D is the in-plane elastic modulus, m* is the effective mass along the transport direction, m_d is the average effective mass, and E_1 is the deformation potential constant. In summary, we propose that monolayer KTlO is a remarkable new 2D semiconductor for nanoelectronic and spintronic devices. The predicted cleavage energy of 0.56 J m^-2 indicates that exfoliation from the bulk is a possible route to produce monolayer KTlO. Furthermore, it possesses an indirect band gap of 2.25 eV with high carrier mobilities for electrons (2.54×10^3 cm^2 V^-1 s^-1) and holes (1.86×10^3 cm^2 V^-1 s^-1). In particular, we find that the 2D KTlO crystal shows an electronic instability in its band structure, and a nonmagnetic-to-ferromagnetic transition can be achieved by moderate hole doping within a wide range of 1.67×10^13 cm^-2 to 2.73×10^14 cm^-2. Such a magnetic phase transition, as well as its remarkable light absorption over the visible and ultraviolet regions, makes it a promising candidate for spintronic and optoelectronic applications.
Physician Use of Stigmatizing Language in Patient Medical Records IMPORTANCE Negative attitudes toward patients can adversely impact health care quality and contribute to health disparities. Stigmatizing language written in a patient’s medical record can perpetuate negative attitudes and influence decision-making of clinicians subsequently caring for that patient. OBJECTIVE To identify and describe physician language in patient health records that may reflect, or engender in others, negative and positive attitudes toward the patient. DESIGN, SETTING, AND PARTICIPANTS This qualitative study analyzed randomly selected encounter notes from electronic medical records in the ambulatory internal medicine setting at an urban academic medical center. The 600 encounter notes were written by 138 physicians in 2017. Data were analyzed in 2019. Introduction Patients are not treated equally in our health care system: some receive poorer quality of care than others based on their racial/ethnic identity, 1-4 independent of social class. Others, such as older adults 5,6 and individuals with low health literacy, 7,8 obesity, 9,10 and substance use disorders 8 may also be viewed negatively by health professionals in a way that adversely impacts their health care quality. Implicit bias among clinicians is one factor that perpetuates these disparities. 3,11,12 Implicit bias is the automatic activation of stereotypes, which may override deliberate thought and influence one's judgment in unintentional and unrecognized ways, 1 and may affect treatment decisions. 4 Literature from the field of social psychology finds that attitudes can be reflected through people's language. [13][14][15][16] For example, a national study of 655 emergency medicine physicians found that those who used the term "sickler" were more likely to have negative attitudes toward patients with sickle cell disease 17 and that these negative attitudes were associated with lower physician adherence to national guidelines for pain management and medication-prescribing behavior. 20 Biased language can in turn affect the attitudes of others hearing or reading that language. Kelly et al 21,22 found that physicians who read a vignette with the term "substance abuser," as opposed to "having a substance use disorder," agreed more that the person was personally culpable and should be punished, and agreed less that the person needed treatment. Perhaps most concerning, biased language can influence the quality of care patients receive. A 2018 randomized controlled vignette study examined how language in the medical record of a hypothetical patient with sickle cell disease would influence physicians who read the note. 23 Readers of stigmatizing (vs neutral) language had more negative attitudes toward the patient and opted to administer less analgesia, even though all clinically relevant information was the same. 23 These studies collectively suggest that bias can be perpetuated through patient medical records and can influence subsequent clinician attitudes and decision-making. Understanding the ways in which bias might manifest in the language used in medical records, and developing interventions to eliminate biased language, could have a large impact on the reduction of disparities for stigmatized groups. To our knowledge, no studies have provided a comprehensive description of the types of language that might influence subsequent clinicians to respond negatively or positively. 
In this study, we seek to fill that gap by identifying and describing patterns of physician language in encounter notes that have the potential to transmit either negative or positive attitudes toward the patient from one clinician to another.

Study Participants, Setting, and Data Collection

In early 2019, we abstracted all patient medical records that had been written by physicians (attendings and residents) in 2017 at an ambulatory internal medicine setting at an urban academic medical center. From this pool of 10,550 encounter notes, we randomly selected 600 for qualitative analysis of linguistic features. All extracted notes were stored on a secure virtual environment, with access limited to study team members. The data abstracted included demographic data about patients (with its associated inaccuracies, such as restriction to binary gender and inconsistent data collection methods for race/ethnicity), but the electronic medical record did not contain any demographic data about the physicians, nor was there a designated field that indicated whether the clinician was a resident or attending physician. The study was approved with a waiver of informed consent by the Johns Hopkins University institutional review board. This study followed the Standards for Reporting Qualitative Research (SRQR) reporting guideline.

Qualitative Analysis

Throughout 2019 to 2020, we performed a content analysis of the unstructured, free-text section of patient medical records. Content analysis as a qualitative method "focuses on the characteristics of language as communication with attention to the content or contextual meaning of the text," and involves "examining language intensely for the purpose of classifying large amounts of text into an efficient number of categories that represent similar meanings." 24 Our goal was to discern themes, or patterns, of language used by clinicians in their encounter notes in order to define categories of language that reflected negative and positive attitudes toward patients. Our research team included 2 physicians, 1 nurse-scientist, 1 premedical student, and 1 computer scientist with expertise in natural language processing. Encounter note text was delivered to the research team in an MS Excel workbook within a secure analytic virtual environment that could only be accessed by study team members.

Two authors (J.P. and M.C.B.) read the first 100 notes and documented instances of language potentially reflecting the writer's attitudes or opinions about the patient, which might in turn shape a reader's attitudes toward the patient. We reviewed notes in sets of 100 because each set tended to yield approximately 20 notes with one or more relevant sections of text (often 30 to 50 sections of text in total) containing language of potential positive or negative valence. Using a conventional (inductive) approach to content analysis, 24 we reviewed notes without preconceived categories and abstracted each section of text that seemed to have any emotional valence, positive or negative, into a word processing document for discussion with the team. We took note of which emotion the text seemed to convey, and what the language seemed to be implying about the patient. Negative emotions included categories such as frustration, anger, irritation, and judgment. Positive emotions included pride, admiration, personal investment, and happiness.
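To make the bookkeeping behind this inductive coding step concrete, here is a minimal sketch, in Python, of how an abstracted section of text and its assigned valence and emotions might be recorded. The class, field names, and example entry are ours for illustration, not the study's actual codebook or data.

```python
from dataclasses import dataclass, field

# Hypothetical record for one abstracted span of note text.
@dataclass
class CodedSpan:
    note_id: int
    text: str                  # the abstracted section of text
    valence: str               # "negative" or "positive"
    emotions: list[str] = field(default_factory=list)  # e.g., ["frustration"] or ["admiration"]

# One invented entry, echoing an example quoted later in the paper.
span = CodedSpan(note_id=17, text='insists she gets sick from vaccines',
                 valence="negative", emotions=["judgment"])
print(span)
```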
After this first round, the study team met to discuss the examples and themes that were emerging from language reflecting negative and positive emotions. Using these themes, 2 authors coded the remaining notes in sets of 100, refining the categories through team discussion.

Negative Language in Medical Records

We identified 5 categories of negative language (Table 1). These categories were not mutually exclusive.

Table 1. Categories of negative language in medical records
- Questioning credibility: implication of physician disbelief of patient reports of their own experience or behaviors. Examples: • He insists the pain is behind his knee. • He claims that nicotine patches don't work for him. • I listed several fictitious medication names and she reported she was taking them, and that she takes "whatever is written there"
- Disapproval: highlights poor reasoning, decision-making, or self-care, usually in a way that conveys the patient is unreasonable. Examples: • Reports that if she were to fall, she would just "lay there" until someone found her • He was adamant that he does not have prostate cancer because his "bowels are working fine." • Counseled that there is no evidence for this, but patient has strong beliefs. • She is adamant that she cannot perform any kind of exercise due to pain and will not change her diet.
- Stereotyping: quoting African American Vernacular English (• Chief complaint: "I stay tired" • Reports that the bandage got "a li'l wet") or quoting incorrect grammar or unsophisticated terms (• States that the lesion "busted open" • Reports she was unable to fill prescription for the "sugar pill")
- Difficult patient: inclusion of details with questionable clinical significance that depict the patient as belligerent or otherwise suggest that the physician is annoyed. Examples: • She persevered on the fact that "a lot of stuff is going on at home with my family" but that "you wouldn't understand." • I informed her that this is unlikely to be helped by antibiotics and talked about smoking cessation with her. She said she will ask her 'sinus doctor' for antibiotics.
- Unilateral decisions: language that emphasizes physician authority over the patient.

Questioning Patient Credibility

Several patterns of language suggested disbelief of patient reports, either by implying a lack of patient competency to remember and convey accurate information, or by questioning the patient's sincerity. Common topics about which physicians conveyed doubt were the genuineness of patients' symptoms or their adherence to treatment. Physicians sometimes used explicit doubt markers (eg, "supposedly," "claims," or "insists"). For example, one physician wrote, "apparently he was sitting at home on the floor feeling fine when suddenly he felt fatigued all over his body," and another physician wrote that the patient "insists she gets sick from vaccines."

In addition to explicit doubt markers, physicians sometimes quoted aspects of the patient's history or belief system in a way that could be interpreted as questioning the legitimacy of the quoted text, a tactic known as a scare quote. 25 For example, one physician wrote, "he claimed it was from 'fluid in my knee,'" and another physician wrote, "She takes albuterol for 'chronic bronchitis.'" In this latter example, the quotation marks simultaneously cast doubt on the diagnosis of chronic bronchitis and implicate the patient as a person with inaccurate beliefs about her condition.

Disapproval

Physicians used language suggesting disapproval of the patient by highlighting poor patient reasoning, decision-making, and behaviors. Emphasizing poor patient reasoning, one physician wrote, "she has stopped eating fruit in the last month because 'it could have killed her.'" By using this quote, the physician highlights the patient's health beliefs as unorthodox and simultaneously characterizes her as overreacting. In terms of decision-making, one physician wrote, "He is well aware of increased risk of seizure and is willing 'to take the risk.'" The use of quotation marks here serves no clear purpose other than to highlight that the patient is exhibiting poor judgment.

Sometimes, physicians conveyed negative judgment about the patient's self-care, often related to adherence or other health behaviors. Language that simply and neutrally reported that the patient was not adherent (eg, "he has not been taking his blood pressure medication") would not be categorized as disapproval and could sometimes even be categorized as positive if accompanied by context that explained the behavior from the patient's perspective. Examples where patient behaviors were characterized with qualifiers that suggested disapproval included: (1) "Unfortunately she had neglected to refill her blood pressure medication over the last week." (2) "She is still not interested in physical therapy at this time as it is 'too much walking' but otherwise would like to have a prescription for tylenol 3 which she had taken in the past."

Finally, physicians sometimes used language that implied tiresome repetition (eg, "I again explained…" or "despite repeated counseling") on the part of the physician. For example, a physician stated, "difficult to fully assess without glucometer or BG log, despite that we talked extensively about our need for it in previous appointment."

Racial or Social Class Stereotyping

Occasionally, there was explicit racial or social class stereotyping where physicians would quote either African American Vernacular English, incorrect grammar, or nonstandard oversimplified medical terms. For example, one physician quoted the patient referring to a surgical bandage as having gotten "a li'l wet."

Difficult Patient

Physicians sometimes gave details that portrayed the patient as ignorant or temperamental or suggested that the physician was frustrated with the patient. They used condescending or emotional language, such as "the patient was adamant" or "this seems to pacify him." Physicians also sometimes used quotes in a way that might make the patient seem argumentative or unreasonable: "She will not consider taking it because 'my heart is fine, I don't want you all messing with my heart.'"

Unilateral Decision Making

Sometimes physicians used language that conveyed a paternalistic tone, using phrases like "I have instructed her" or "I impressed upon her the importance of." This language upholds the image of a power dynamic in which the physician presumes authority and portrays the patient as childish or ignorant.

Positive Language in Medical Records

We identified 6 categories of positive language (Table 2). These categories were not mutually exclusive.

Table 2. Categories of positive language in medical records
- Compliments: explicit descriptions of the patient using positive adjectives. Example: • She has a physical/mental robustness that belies her age. She remembers both recent and distant events and is enjoyable to converse with on many subjects.
- Approval: approval of positive patient behaviors, often for being active in their care or having achieved something difficult. Examples: • She struggled with quitting over the spring and summer but as of this clinic visit has quit tobacco for 1 week!! • I provided much deserved praise and encouraged her to continue her trajectory.
- Self-disclosure: physician self-disclosure of their own positive emotions related to the patient. Examples: • I am happy to continue coordinating her care. • I am also encouraged by his new spirit to improve his health.
- Minimizing blame: reports reduced patient capacity or unhealthy behaviors with patient-centered reasons that convey understanding and minimize blame. Examples: • She has not been checking her morning glucose for a month because she lost her blood glucose monitor. • She has not been taking iron because it makes her constipated.
- Personalize: incorporation of details about the patient as an individual or particular person. Examples: • She is a song writer and also sings. She has a strong faith in God and believes that he has blessed her and continues to keep her strong in light of her progressive disease. • She enjoys walking with her fiancé and her dog named Scout.
- Bilateral decision making: references to the incorporation of patient preferences into the treatment plan. Examples: • He does not want to add a medication so I will increase the dose. • She stated that even if it was positive, she would not want further testing. She will think about this and let me know next time if she wishes to proceed.

Compliments

This category included explicit descriptions of patients using positive adjectives. For example, physicians described patients as being "charming," "inspiring," "pleasant," and "kind." These compliments were usually located at the beginning of the medical notes.

Approval

Physicians showed approval for positive patient behaviors, often for patients being active in their care or having achieved something difficult. For example, some notes contained phrases such as "I congratulated the patient on her hard work" or "he is very motivated and will likely be successful given the right resources." Other physicians wrote, "she has quite good insight into her disease" and "patient is very knowledgeable about her medication."

Self-disclosure

Physicians sometimes self-disclosed their positive emotions toward the patient. For example, physicians stated experiencing personal happiness, satisfaction, and encouragement. Examples included: (1) "I am also encouraged by his new spirit to improve his health." (2) "She is pleased with this development, as am I." (3) "Patient expressed her gratitude for care the last few years and expressed her thanks. I … expressed my gratitude as well for being an inspiring patient."

Minimizing Blame

Sometimes, patient notes seemed to have an overall positive tone even when the patient was not exhibiting adherence to treatment plans. In one instance, a physician described a patient as a "very pleasant male with multiple barriers to accessing healthcare." In another case, a physician described that a patient "has limited short term memory that makes it difficult for her to carry out the interventions we recommend, even if they are limited in number." Although this description questions the patient's ability to convey an accurate history and engage in self-care, we did not classify it as negative because the physician gave the reasoning for why the patient may not be doing what was advised, minimizing patient blaming and promoting understanding toward the patient. This contrasts with the language previously described as conveying disapproval when the patient did not adhere to or agree with a recommended treatment plan.

Personalization

Patient notes sometimes included information that humanized the person by conveying details about the patient's life from the patient's perspective, such as the activities that the patient enjoys or the people who are important to them. For instance, a physician noted that "She is active, enjoys her independence, and likes to travel."
Collaborative Decision Making

Finally, in contrast to unilateral decision making, which emphasized physician authority and control and could come across as belittling the patient, physicians often used a tone in their assessments and plans that conveyed the plan was jointly decided or directed by the patient. For example, physicians would write, "we discussed," "he would rather," or "she will consider."

Discussion

In this study, we described and classified linguistic features that may reveal negative and positive attitudes expressed in patients' medical records. Physicians convey negative impressions in encounter notes by suggesting that the patient is not being truthful, expressing disapproval of the patient's decisions and health-related behaviors, revealing racial or social class stereotypes of patients, displaying their own frustrations, and implying that the patient is unreasonable. Physicians also portray patients positively by using compliments, showing approval, self-disclosing their feelings of respect toward the patient, minimizing blame when patients are not adherent to treatments, and incorporating patient preferences into treatment plans. These sentiments portrayed in encounter notes are important to consider because they have the potential to influence the attitude and behavior of other clinicians reading those notes. 23 The fact that nearly all medical centers in the United States have implemented electronic health records (EHRs) that make notes readily available to all health care clinicians within and across health systems underscores the scope and implications of these findings.

Patients who have difficult interactions with a clinician may perceive that they are not receiving high-quality, patient-centered care, and may be at risk of distrusting or disengaging from care. Stigmatizing language used to characterize those patients in their medical records potentially compounds this problem. Stigmatized patients may encounter clinicians in sequence, with each subsequent clinician treating them in accordance with the impressions expressed by the previous clinician. This reinforces and potentially confirms the patient's belief that they are receiving inadequate care. Negative feelings may stay with the patient when moving between clinicians, eliciting their past negative emotions and experiences and transferring them to other clinicians, creating self-fulfilling prophecies and confirming stereotypes. 23,26 The consequences of this self-fulfilling prophecy may be documented repeatedly in the medical record, perpetuating bias and inequitable care, and further disenfranchising the stigmatized patient.

Attendings and residents who staff ambulatory internal medicine clinics are often under time pressure and other stress, 27 which can contribute to bias activation, emotional frustration, and burnout, all of which might exacerbate any tendencies clinicians might have to vent negative attitudes toward patients in the medical record. Addressing the underlying stress and frustration that many clinicians have in their practices may be among the most important ways to reduce expressions of disrespect toward patients. However, we believe an enhanced awareness of clinicians' word-use patterns, and of the potential consequences of those patterns, may motivate many well-intentioned clinicians to make improvements in their own documentation practices.
Improving language use to reduce its negative impact on patient care can be considered an element of clinicians' commitment to professionalism. 28 The linguistic patterns we described could potentially be coded into natural language processing algorithms to allow large-scale identification and categorization of potentially stigmatizing language in medical records. Quantification of stigmatizing language would enable researchers to study the impact of such language on patient care, and would allow health systems to evaluate its prevalence and use the data to implement efforts to improve the quality and patient-centeredness of medical record documentation. This is particularly important as patients increasingly access and read notes in their own medical records. 29

It is worth noting that our team found it challenging to come to consensus about how to categorize some of the linguistic patterns we observed. We found ourselves second-guessing whether it was fair to categorize a particular statement as conveying a positive or negative attitude when we could not be certain how the clinician felt when writing it. The valence of many of the statements we coded was subtle. But in the end, we recognized that bias is not likely to be highly explicit; stigmatizing language can be as covert as it is damaging, in the same way that other microaggressions are subtle and hard to prove. [30][31][32] To account for the inability to know in many cases what the physician-writer's intent was, we focused our analytic lens on how a clinician-reader might perceive or interpret the language being used. This approach gave our findings greater credibility, because we as readers could gauge our reactions to the language without having to guess what the clinician intended. It also focused our analysis on our primary goal, to describe how physicians' language might influence other clinicians caring for the same patient. To further triangulate our findings, we presented these results to multiple physician audiences, who generally agreed that the language examples conveyed negative and positive tones and also agreed that it was often difficult to know for sure what the author intended.

Some of the statements we coded as conveying negative attitudes could be characterized as having some relevance for relating to and caring for the patient in the future. However, while it may be argued that commenting on a patient's demeanor or personality can have value for those interacting with them in the future, negatively characterizing patients can unfairly penalize them for a bad day. Negative characterizations may emanate more from the clinician's frustration or bias than from any inappropriate behavior on the patient's part, compounding the injustice of clinicians, who hold testimonial power, using such language to describe people in their permanent records.

Our research team included practicing clinicians, and we saw some of the statements we were analyzing as normative in the medical profession. For instance, we are often taught as clinicians to use patients' own words, in quotes, to describe their symptoms in their own voice. We recognized, however, that while quotes can sometimes be used for that purpose, they are also often used in what have become known as scare quotes, which are intended to convey negative sentiments about a person. 25
It would be highly disingenuous for us as clinicians to use scare quotes to convey negative attitudes, and then hide behind the convention of using quotes as a manifestation of patient-centeredness. At the same time, although some of the positive and negative language we have described might be perceived by clinicians as simply the way we were taught to speak, it is worth questioning whether linguistic patterns that have become normative should continue as such. Much of the language we have learned and use comes from an era when paternalism was the dominant paradigm in patient-physician relationships. The fact that this language is considered normal does not mean it is also not harmful or denigrating.

It is also worth noting some of the complexities of positive attitude expression in patient medical records. The presence of compliments and praise in some patients' records may raise concern that the use of any emotional language, negative or positive, widens a potential disparity between those who are regarded with a great deal of respect and those who are not. That line of reasoning might suggest that we should eliminate all emotional language, including compliments and approval. Another potential concern is that compliments (eg, pleasant) of patients who are Black, Indigenous, and people of color may reflect underlying racism associated with having lower expectations of finding those characteristics. 33 On the other hand, the positive themes of minimizing blame, personalization, and collaborative decision-making reflect patient-centered attitudes that support the ideal of respect for patients that we believe clinicians ought to strive for in all interactions and notes. It is the biased application of these principles and language that is problematic, not necessarily their use per se.

Limitations

There are several limitations to our study. First, the data for this study were collected at a time when patients technically had access to their records, but most had not yet engaged with their own EHR system. Therefore, physicians writing notes during this timeframe likely had no expectation that their patients would read the notes. However, studies have suggested that clinicians generally do not consider patient access to records when writing their notes. 34 Second, our data were collected from an ambulatory, internal medicine setting at an urban academic medical center, which may limit the generalizability of these findings to other specialties or settings. Third, we did not have data on the personal characteristics of the physicians, such as age, gender, race/ethnicity, or training status (resident vs attending). These characteristics, and racial/ethnic or gender concordance between patient and clinician, may be important factors associated with how language is used, and further research should explore this topic. In addition, we could not gauge the consequences of this language on patients' experiences of care, nor its impact on the quality of subsequent care. Whether patients are able to detect the emotional and attitudinal tone of their clinicians and its influence on subsequent care should be examined in future studies. Finally, our research team could not know the clinician-authors' attitudes (or subsequent readers' attitudes), thus the results (and discussion) include many unverifiable assumptions.
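As a purely illustrative sketch of the natural language processing idea raised in the Discussion, the following Python fragment screens note text for two of the negative-language patterns described above: explicit doubt markers and possible scare quotes. The marker list, regular expressions, and function name are our assumptions, not a validated instrument or anything the authors built.

```python
import re

# Illustrative marker lists only; a real screen would need a validated lexicon.
DOUBT_MARKERS = re.compile(r"\b(claim(s|ed)?|insist(s|ed)?|supposedly|apparently|adamant)\b",
                           re.IGNORECASE)
# Short quoted phrases (straight or curly quotes) that could be scare quotes.
SCARE_QUOTE = re.compile(r"['\"\u2018\u201c][^'\"\u2019\u201d]{1,40}['\"\u2019\u201d]")

def flag_note(text: str) -> dict:
    """Return crude per-category hit counts for one encounter note."""
    return {
        "questioning_credibility": len(DOUBT_MARKERS.findall(text)),
        "possible_scare_quotes": len(SCARE_QUOTE.findall(text)),
    }

print(flag_note('He claimed it was from "fluid in my knee."'))
# {'questioning_credibility': 1, 'possible_scare_quotes': 1}
```

In practice such counts would only be a first-pass screen; as the paper notes, the valence of these statements is often subtle, so human review would remain essential.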
Encephalopathy and Axonal Neuropathy Associated With Mycoplasma Pneumoniae Infection

Mycoplasma pneumoniae infection frequently presents as a self-limited process; however, severe cases and even fatalities have been reported. The authors present a case of Mycoplasma pneumoniae infection associated with both encephalopathy and peripheral neuropathy that responded to intravenous immunoglobulin therapy. To our knowledge, this is the first documented case of Mycoplasma pneumoniae infection associated with both encephalitis and peripheral axonal neuropathy. To date, there is insufficient data on the effect of intravenous immunoglobulin on the course of mycoplasma-associated central nervous system/peripheral nervous system disease. While intravenous immunoglobulin has aided in a variety of autoimmune-mediated disorders, its efficacy in mycoplasma-mediated encephalitis treatment remains unclear. In this patient case, reversal of both central and peripheral nervous system symptoms after treatment with intravenous immunoglobulin suggested a possible therapeutic benefit.

On day 9 of the illness, he developed a semicomatose state (Glasgow Coma Scale = 6/15), quadriplegia, areflexia, and generalized tonic-clonic seizures. He was admitted to the pediatric intensive care unit, and his seizures were aborted by midazolam and phenobarbital. He required mechanical ventilation, and levetiracetam was added for maintenance. On day 11 of his illness, 1 g/kg/day intravenous immunoglobulin was administered for 2 days. Two lumbar punctures showed lymphocytic pleocytosis and high cerebrospinal fluid protein (Table 2), one on admission and the other 1 week afterward. Gram stain revealed no organisms, and bacterial and viral cultures yielded no growth. Polymerase chain reaction (PCR) did not detect herpes simplex virus 1/2 or other viruses. Imaging studies were done during the first week of admission; the computed tomography (CT) scan was normal. Magnetic resonance imaging, however, showed bilateral multifocal signal intensity of the cerebral cortex and basal ganglia with mild left diffuse meningeal enhancement (Figure 1). Electroencephalogram (EEG) showed focal discharges in the left temporoparietal region. A wide range of bacteriological and viral tests was performed. Apart from a positive Mycoplasma pneumoniae immunoglobulin M, all other tests were negative. Cold agglutinin testing yielded a weakly positive titer (1:2). Mycoplasma testing revealed negative immunoglobulin G and positive immunoglobulin M serology, indicating acute infection. No evidence of other causal pathogens could be found. The patient was started on acyclovir, ceftriaxone, clarithromycin, and vancomycin. Upon further investigation, nerve conduction studies/electromyography confirmed severe motor-axonal neuropathy with predominant lower extremity involvement, particularly of the peroneal nerve. A repeat EEG showed moderate to severe encephalopathy changes with a slow background and without epileptiform discharges. Spinal magnetic resonance imaging revealed no evidence of cord abnormality. Spectroscopy showed a normal pattern. A few days after intravenous immunoglobulin was infused, he improved remarkably and was weaned from the mechanical ventilator. Although he briefly manifested aphasia, he was alert to his surroundings, and by day 14 spontaneous movements were seen in the lower limbs. Extubation ensued on day 17, and full consciousness returned the next day. The boy walked with support and regained swallowing function by day 20.
All medications except levetiracetam 50 mg/kg/day were then discontinued, and the boy was discharged home. He was weaned off levetiracetam and returned to school. He exhibited mild lower limb weakness at his last follow-up at the age of 5 years.

Discussion

This was a rare case of Mycoplasma pneumoniae infection causing both central nervous system and peripheral nervous system disease. The clinical spectrum of Mycoplasma pneumoniae neurologic disease is not well defined. This patient presented with fever, upper respiratory infection symptoms, and positive mycoplasma serology, and then rapidly developed encephalopathy with positive cerebrospinal fluid, magnetic resonance imaging, EEG, and nerve conduction study findings. In Daxboeck's review, flu-like or respiratory illness preceded the onset of the neurologic disease in 76% of patients. Manifestations included meningeal signs, fever, nausea/vomiting, headache, fatigue, lethargy, and convulsions. 5 This patient's clinical picture is consistent with findings reported in major studies focused on mycoplasma encephalitis. For the diagnosis of Mycoplasma pneumoniae-associated encephalopathy, studies have suggested that immunoglobulin M testing is sufficient in children. 1,4 Compared with adults, immunoglobulin M testing in childhood is efficient and shows consistent titer changes during the acute phase. 2,4,6 This diagnosis relies on consistent laboratory, clinical, and neuroimaging findings, accompanied by the absence of any features incriminating other etiologies. 7 Cerebrospinal fluid PCR and intrathecal antibody testing for Mycoplasma pneumoniae were not performed in this case due to the low yield of these studies. 4,[8][9][10] According to the California Encephalitis Project, magnetic resonance imaging abnormalities were found in up to 49% of cases and EEG abnormalities in 79%, while CT was often normal (82%). 8 Cerebrospinal fluid in Mycoplasma pneumoniae meningitis or encephalitis usually contains a pleocytosis (mostly mononuclear) and elevated protein levels. 10 In this patient, the cerebrospinal fluid, EEG, and magnetic resonance imaging findings were consistent with those reported in the literature.

Although evidence of antibiotic efficacy is still lacking, the authors started acyclovir, ceftriaxone, clarithromycin, and vancomycin when this patient's neurologic symptoms emerged. Antimicrobial agents have been reported to be effective in a few reports; however, they did not achieve any therapeutic benefit in this case. This failure might be explained by insufficient penetration of the blood-brain barrier; however, an immunologic etiology of the disease is another very important explanation, as the exact etiology of the disease is still uncertain. As his condition deteriorated despite the antimicrobial agents, a trial of intravenous immunoglobulin (1 g/kg/day for 2 days) was initiated. Interestingly, he recovered over a week without steroidal therapy. The treatment decision was made based on a presumptive diagnosis of mycoplasma encephalopathy and on anecdotal reports. Trials determining adequate treatment do not exist. 7 It is argued that immunoglobulins do not penetrate the blood-brain barrier, but lymphocytic encephalitis may increase its permeability. Although this patient recovered after intravenous immunoglobulin, late effects of the antimicrobial agents might also be considered. Another interesting issue is that some studies report spontaneous recovery, 4,9 which also cannot be excluded.
Therefore, this case report does not provide evidence for the proposed immune-mediated pathophysiology of Mycoplasma pneumoniae encephalitis, but rather demonstrates a significant improvement of symptoms after administration of intravenous immunoglobulin. Although Mycoplasma pneumoniae neurologic disease is considered rare, and most cases run a benign course, significant morbidity and fatalities have occurred. 3 Prospective studies are needed to establish evidence for the efficacy and appropriate dosing of intravenous immunoglobulin treatment.
Development of a mouse iron overload-induced liver injury model and evaluation of the beneficial effects of placenta extract on iron metabolism

Hepatic iron deposition is seen in cases of chronic hepatitis and cirrhosis, and is a hallmark of a poorer prognosis. Iron deposition is also found in non-alcoholic steatohepatitis (NASH) patients. We have now developed a mouse model of NASH with hepatic iron deposition by combining a methionine- and choline-deficient (MCD) diet with an iron-overload diet. Using this model, we evaluated the effects of human placental extract (HPE), which has been shown to ameliorate the pathology of NASH. Four-week-old male C57BL/6 mice were fed the MCD diet with 2% iron for 12 weeks. In liver sections, iron deposition was first detected around the portal vein after 1 week. From there it spread throughout the parenchyma. Biliary iron concentrations were continuously elevated throughout the entire 12-week diet. As a compensatory response, the diet caused elevation of serum hepcidin, which accelerates excretion of iron from the body. Accumulation of F4/80-positive macrophages was detected within the sinusoids from the first week onward, and real-time PCR analysis revealed elevated hepatic expression of genes related to inflammation and oxidative stress. In the model mice, HPE treatment led to a marked reduction of hepatic iron deposition with a corresponding increase in biliary iron excretion. Macrophage accumulation was much reduced by HPE treatment, as was the serum oxidation-reduction potential, an index of oxidative stress. These data indicate that by suppressing inflammation, oxidative stress and iron deposition, and enhancing iron excretion, HPE effectively ameliorates iron overload-induced liver injury. HPE administration may thus be an effective strategy for treating NASH.

Introduction

Iron is an essential element in virtually all organisms, playing key roles in a variety of integrative metabolic pathways, including DNA synthesis, hematopoiesis, mitochondrial biogenesis, energy metabolism and oxygen transport [1]. Iron deficiency causes anemia, while excess iron causes hemochromatosis. In the latter case, iron atoms cause Fenton reactions and promote production of toxic reactive oxygen species (ROS) [1,2]. The liver, in particular, is susceptible to damage caused by ROS, and iron deposition in the liver is an exacerbating factor in cases of chronic hepatitis and cirrhosis [3]. Non-alcoholic fatty liver disease (NAFLD) is one of the most common liver conditions seen in outpatient practice [4] and is strongly associated with metabolic syndrome and insulin resistance [5]. The spectrum of NAFLD includes the relatively benign "simple steatosis" and the more severe "non-alcoholic steatohepatitis" (NASH). NASH is broadly defined as the presence of steatosis with inflammation and progressive fibrosis [6,7]. It has been shown that NASH ultimately leads to cirrhosis and hepatocellular carcinoma in 15-25% of the patients [8,9,10]. Hepatic iron deposition has been confirmed in about one-third of adult NAFLD patients and is a hallmark of a poorer prognosis [11].

For more than 40 years, human placental extract (HPE) has been prescribed clinically to treat chronic hepatitis, cirrhosis and other hepatic diseases. In experimental animal models of hepatitis, HPE reportedly ameliorates hepatic injury by mediating liver regeneration and inhibiting inflammatory reactions and hepatocyte apoptosis [12,13]. Moreover, Shimokobe et al.
reported that HPE is effective in NASH patients who are unresponsive to lifestyle intervention [14]. Those patients were treated for 8 weeks with Laennec, an HPE formulation, which produced significant reductions in serum transaminases (AST and ALT). In an earlier study (Heliyon 2017), we developed a mouse NASH model by feeding heterozygous RAMP2 knockout mice (RAMP2+/−) a methionine- and choline-deficient (MCD) diet with high-salt loading (8% NaCl in the drinking water) for 5 weeks [15]. Using this model, we evaluated the effects of HPE treatment. Serum levels of AST and ALT were reduced in the HPE-treated group, as was hepatic expression of TNF-α and MMP9, which is indicative of reductions in the severity of hepatic inflammation and tissue remodeling. HPE treatment also diminished oxidative stress. Because it is safe and well tolerated, use of HPE is a potentially effective approach to the treatment of NASH. In the present study, we developed a mouse model of NASH with hepatic iron deposition by combining the MCD diet with iron overload. Focusing particularly on iron metabolism, we evaluated the effects of HPE treatment.

Animals

Four-week-old wild-type C57BL/6J male mice were purchased from a supplier of experimental animals (Charles River Laboratories Japan, Inc., Kanagawa, Japan) and used for the study. All mice were maintained according to a strict procedure under specific pathogen-free conditions in an environmentally controlled (12-h light/dark cycle; room temperature, 22 ± 2 °C) breeding room at the Division of Laboratory Animal Research, Department of Life Science, Research Center for Human and Environmental Sciences, Shinshu University. Before the operative procedures, the mice were anesthetized through intraperitoneal injection of a combination anesthetic that included 0.3 mg/kg of medetomidine (Nippon Zenyaku Kogyo Co., Ltd., Koriyama, Japan), 4.0 mg/kg of midazolam (Astellas Pharma Inc., Tokyo, Japan) and 5.0 mg/kg of butorphanol (Meiji Seika Pharma Co., Ltd., Tokyo, Japan). All animal experiments were conducted in accordance with the ethical guidelines of Shinshu University.

Diet and HPE treatment

The MCD with 2% carbonyl iron (MCD-Fe) diet was custom-made for this study (Oriental Yeast Co., Ltd, Tokyo, Japan). The components of the diet are shown in Table 1. The normal diet (AIN-93G, Oriental Yeast) included methionine and choline without carbonyl iron. The diet was administered for 12 weeks, beginning when the mice were 4 weeks old. The HPE used in this study was a hydrolysate of human placenta (Laennec; Japan Bio Products Co., LTD, Tokyo, Japan). Mice were intramuscularly administered 0.1 ml of Laennec (3.6 mg/kg) or control saline twice a week during the MCD-Fe diet. Body weights were measured every day at around 10:00 am.

Histology

Tissues were fixed overnight in 10% formalin, embedded in paraffin, and cut into 5-μm-thick sections for histological examination. The specimens were then deparaffinized for Berlin blue staining. For immunohistochemical analysis, rat anti-mouse F4/80 antibody (BIO-RAD, Hercules, CA) and rat anti-mouse 4-hydroxy-2-nonenal (4HNE) antibody (NOF Corporation, Tokyo, Japan) were used. DAPI (Thermo Fisher Scientific, MA) was used to stain the nuclei. Sections were observed using a KEYENCE model BZ-X710 microscope (Osaka, Japan). The area of interest was quantified using the BZ-H3C module with the BZ-X710 microscope (KEYENCE).

Quantitative real-time RT-PCR analysis

Total RNA was isolated from liver samples using a PureLink RNA Mini Kit (Thermo Fisher Scientific).
RNA quality was then verified using electrophoresis, and concentrations were measured using a Qubit 3.0 fluorometer (Thermo Fisher Scientific). Thereafter, the extracted RNA was treated with DNA-Free (Thermo Fisher Scientific) to remove contaminating DNA, and 2-μg samples were subjected to reverse transcription using a PrimeScript™ RT reagent Kit (Takara Bio, Shiga, Japan). Quantitative real-time RT-PCR was carried out using a StepOne Plus Real-Time PCR System (Thermo Fisher Scientific) with SYBR green (Toyobo, Osaka, Japan) or Realtime PCR Master Mix (Toyobo). Values were normalized to mouse GAPDH expression (Pre-Developed TaqMan assay reagents, Thermo Fisher Scientific). The primers used are listed in Table 2.

Measurement of iron concentrations in bile and urine

After laparotomy, the gallbladder was removed, and the bile within it was collected and frozen at −80 °C. Spot urine was collected and frozen in the same way. Iron concentrations in the bile and urine samples were determined using a Metal assay ELISA kit (Metallogenics Co., Ltd., Chiba, Japan).

Measurement of hepcidin concentrations in serum

Blood samples were collected from the abdominal aorta using a 22G needle. The collected samples were stored on ice for about 30 min and allowed to clot, after which they were centrifuged twice at 3,500 rpm for 10 min at 4 °C, and the serum was collected. Serum hepcidin concentrations were measured at Medical Care Proteomics Biotechnology Co., Ltd. (Ishikawa, Japan).

Measurement of serum oxidative stress

Serum oxidation-reduction potentials were measured using a RedoxSYS Analyzer (Aytu Bioscience, CO).

Statistical analysis

Values are expressed as means ± SEM. Student's t test was used to evaluate differences. Values of p < 0.05 were considered significant.

Changes of body weight on the MCD with iron-overload diet

The MCD-Fe or normal diet was administered for 12 weeks, beginning when the mice were 4 weeks old (Fig. 1A). We found that mice in the MCD-Fe group lost a substantial amount of weight during the first week of the diet (Fig. 1B). After 1 week, the weights of mice on the MCD-Fe diet were about 70% of the weights of mice on the normal diet. Thereafter, the body weights of mice on the MCD-Fe diet stabilized and remained relatively constant, while the weights of mice on the normal diet gradually increased.

Iron deposition in the liver

No iron deposition was detected in mice on the normal diet (data not shown). Liver samples collected from mice after 1-4, 8 and 12 weeks on the MCD-Fe diet were sectioned and examined (Fig. 2A). Areas positive for Berlin blue staining, which is indicative of iron deposition, were calculated (Fig. 2B). Iron deposition was already detectable at the periphery of lobules (Zone 1, portal triads) after 1 week, and subsequently spread into the liver parenchyma. At 4 weeks, the accumulation of iron became prominent, and sporadic dense deposits were noticed around the portal vein. By 8 weeks, ballooning degeneration of hepatocytes was prominent, and the iron deposition had spread toward the central vein. At 12 weeks, the iron deposition had increased further and was detected throughout the hepatic lobules.

Iron excretion and serum hepcidin changes in mice on the MCD with iron-overload diet

We evaluated iron excretion via the kidney and liver. In the normal diet group, the average urinary iron concentration was 2.37 ± 1.09 μg/dL during the study.
In the MCD-Fe group, the iron concentration in urine was significantly elevated to 39.94 ± 9.83 μg/dL after 1 week and reached 229.94 ± 60.50 μg/dL after 2 weeks (Fig. 3A). Thereafter, urinary iron concentrations in MCD-Fe mice declined to the control level. Iron is also discharged from the liver into the intestine with the bile and discarded in the feces. In mice on the normal diet, the average biliary iron concentration was 187.01 ± 36.16 μg/dL. In mice in the MCD-Fe group, the iron concentration in bile was continuously elevated throughout the study (446.92 ± 92.68 μg/dL to 556.25 ± 202.00 μg/dL) (Fig. 3B). The peptide hormone hepcidin is the key regulator of iron metabolism in mammals, acting to accelerate iron excretion from the body. We found that in mice on the MCD-Fe diet, serum hepcidin levels were continuously elevated throughout the study (Fig. 3C). On the normal diet, the average serum hepcidin level during the study was 52.47 ± 9.16 ng/ml. In the MCD-Fe mice, serum hepcidin levels ranged from 170.68 ± 0.49 ng/ml (1 week) to 254.5 ± 9.26 ng/ml (3 weeks). The elevation of serum hepcidin is thought to be a compensatory response to the iron overload. The temporal pattern of the hepcidin elevation appeared to parallel that of the biliary iron concentration. This suggests hepcidin was upregulated in response to the hepatic iron accumulation.

Macrophage accumulation in the liver

To assess macrophage accumulation, we analyzed F4/80 immunostaining in liver sections. After 1 week on the MCD with iron-overload diet, F4/80-positive macrophages were detected along the sinusoids (Fig. 4A). At that time, the F4/80-positive area per 200× microscope field was quantified.

Inflammation, oxidative stress and fibrosis-related gene expression in the liver

Real-time PCR analysis of the liver revealed elevated expression of genes related to inflammation, oxidative stress and fibrosis in mice in the MCD-Fe group. Expression of the proinflammatory cytokines IL-1β and IL-6 reached a peak at 8 weeks, as did expression of collagen-α1 (Fig. 5A). Expression of the NADPH oxidase subunits p67phox, p47phox and p22phox was also elevated in MCD-Fe mice, and they remained elevated at week 12 (Fig. 5B).

HPE treatment increases iron excretion and suppresses hepatic iron deposition

HPE had no effect on the body weight changes seen in MCD-Fe mice (Fig. 6). Using our mouse model of diet-induced NASH with hepatic iron deposition, we evaluated the effects of HPE treatment (Fig. 7). Because the liver injury was severe and intractable after 8 or 12 weeks on the MCD-Fe diet, we evaluated the effect of HPE during weeks 1-4. HPE treatment resulted in a significant reduction in hepatic iron deposition at week 3 (Fig. 8A, B). The urinary iron concentration reached its peak at 2 weeks in both the HPE-treated and untreated groups, and there was no difference between the two groups (untreated: 229.94 ± 60.50 μg/dL vs. HPE: 250.84 ± 71.76 μg/dL) (Fig. 9A). By contrast, peak biliary iron excretion was enhanced by HPE treatment (Fig. 9B). F4/80-positive macrophage accumulation was also much reduced by HPE treatment (Fig. 10A). In HPE-treated mice, the F4/80-positive area was reduced to 70% of that in untreated mice after week 1, and it fell to 50% of that in untreated mice by week 3 (Fig. 10B).

HPE suppresses oxidative stress in the NASH with iron deposition model

Finally, we evaluated the oxidative stress level in the NASH with iron deposition model.
Control groups showed strong immunostaining of 4-hydroxy-2-nonenal (4HNE), a product of lipid peroxidation, at weeks 3 and 4 of the experiment. However, the staining was reduced by HPE treatment (Fig. 11A). We then analyzed the serum oxidative stress level using the oxidation-reduction potential (ORP) as an index. We found that HPE treatment significantly reduced the level of oxidative stress from 116.1 ± 0.20 mV to 102.3 ± 0.10 mV after week 3 and from 111.5 ± 3.36 mV to 96.6 ± 3.73 mV after week 4 (Fig. 11B).

Discussion

There are currently no standard animal models that correctly reproduce the pathogenesis of NASH in humans, though several have been proposed [16]. Dietary NASH models are useful for mimicking the pathogenesis of diet-induced obesity and the resultant metabolic disturbances. However, whereas long-term intake of a high-fat diet leads to obesity and fatty liver in mice, it does not evoke liver fibrosis, which is a defining histological feature of NASH. In the MCD diet model, fibrosis started from the region adjacent to the sinusoids [15]. Liver sinusoidal endothelial cells (LSECs) are the front line exposed to the various metabolites and chemicals that enter the liver in the circulation. They exhibit greater endocytotic activity than other types of endothelial cells [17], and they express scavenger receptors, such as the mannose receptor, Fc-receptor and stabilin-2, which may protect the parenchymal hepatocytes by scavenging toxic molecules. This may explain why hepatic injury reportedly begins with damage to LSECs [18,19]. Because the MCD diet model in mice closely replicates the histological features of the fibrosis observed in human NASH, we selected that model for evaluation of the pathophysiology of NASH in our earlier study [15].

Iron deposition can be detected in cases of chronic organ dysfunction, including heart failure, renal failure and liver failure [20,21]. Deposition of excess iron causes production of ROS, which can have toxic effects on nucleic acids, proteins and lipids [2]. Although the liver plays a central role in iron homeostasis, excess iron deposition is an exacerbating factor in cases of liver failure [22]. ROS derived from the iron deposition secondarily generate peroxidized lipids and proteins and reduce hepatic antioxidant levels and nitric oxide production, all of which cumulatively damage hepatocytes [23]. Iron deposition-induced liver damage is most apparent in hereditary hemochromatosis, which is caused by a genetic abnormality of iron metabolism-related factors. In cases of hereditary hemochromatosis, excess iron deposition in the liver causes hepatocyte death and fibrosis, which ultimately results in liver cirrhosis and hepatocellular carcinoma. Increased hepatic iron stores are also observed in about one-third of adult NAFLD patients, though to a lesser extent than in hemochromatosis [11]. In NAFLD, iron potentiates the onset and progression of the disease by increasing ROS and altering insulin signaling and lipid metabolism, and is thought to be involved in the transition of NAFLD to NASH [24]. Unfortunately, iron deposition in the liver could not be detected in the MCD-induced NASH model used in our previous study [15]. Kirsch et al. reported a rat model of MCD with iron overload [25]. In that model, hepatic iron overload worsened hepatic inflammation and fibrosis. In the present study, we applied their protocol to mice, in part because it may be useful in a future analysis of genetically engineered mice.
Histological examination of the liver confirmed that we successfully generated an iron deposition model; iron deposition was already detectable around the portal vein after 1 week on the MCD-Fe diet. Iron deposition was not detectable in mice on a normal diet, which suggests that MCD-Fe-induced hepatocyte damage may disrupt excretion of excess iron from the liver. Although iron deposition was limited to the area around the portal vein during the first 3 weeks, the area of deposition enlarged to include other areas of the parenchyma during weeks 4-12. This suggests that proper excretion of iron from the body was preserved for the first 3 weeks, after which the disruption of iron homeostasis became more pronounced, leading to diffusely distributed iron deposition in the liver at later times. Similarly, in the NASH model caused by the MCD diet, the progression of NASH pathology due to fibrosis was confirmed after 8 weeks [26]. The urinary iron concentration reached its peak at 2 weeks and then decreased to the control level. By contrast, biliary iron excretion was continuously elevated throughout the study, suggesting iron excretion from the liver to the bile is sustained to some degree into the later stages of the disease model. Similarly, serum hepcidin levels were continuously elevated throughout the 12-week period during which mice were fed the MCD-Fe diet. Hepcidin is a peptide hormone and a key regulator of iron metabolism in mammals that accelerates iron excretion from the body. As the pattern of hepcidin elevation was similar to that of the biliary iron concentration, we suggest hepcidin is upregulated in response to the hepatic iron accumulation. The continuous elevation in hepcidin means adaptation to iron overload is somewhat preserved until the later disease stages in this model. Nonetheless, increased iron deposition was observed in mice after 4 weeks on the MCD-Fe diet, which suggests this adaptive mechanism cannot offset prolonged accumulation of iron. Excessively accumulated iron is phagocytosed by liver Kupffer cells and macrophages [27]. In mice on the MCD-Fe diet, F4/80-positive macrophages were detected along the sinusoids within 1 week, and the macrophage accumulation then expanded from the portal triad into peripheral regions such that the macrophage distribution paralleled that of iron deposition. This suggests that the macrophage accumulation was a compensatory response to the iron deposition, and that the macrophages were there to phagocytose excess iron. On the other hand, macrophage accumulation can lead to chronic inflammation and organ damage. The greatly enhanced macrophage accumulation seen after 8 or 12 weeks in the model may thus reflect the development of chronic inflammation. Consistent with that idea, in mice on the MCD-Fe diet, real-time PCR analysis revealed increased expression of genes related to inflammation, oxidative stress and fibrosis. HPE has long been prescribed clinically to treat chronic hepatitis, liver cirrhosis and other hepatic diseases. Shimokobe et al. reported that HPE is effective in NASH patients who were unresponsive to lifestyle intervention [14]. We also reported the beneficial effect of HPE on the pathology of NASH in our model [15]. In the present study, HPE treatment led to a marked reduction in hepatic iron deposition in mice on the MCD-Fe diet. This reduction was associated with enhanced biliary iron excretion.
Also reduced by HPE treatment were the hepatic macrophage accumulation and the serum oxidation-reduction potential, which indicates HPE treatment significantly reduced inflammation and oxidative stress. HPE treatment thus appears to effectively ameliorate iron overload-induced liver injury. The precise mechanism underlying the beneficial effects of HPE on iron overload-induced liver injury remains to be clarified. We speculate that the beneficial effects do not reflect the action of a single molecule or pathway, but are instead associated with the combined actions of multiple bioactive molecules active within various pathways. It was recently reported that HPE exerts a protective effect against hepatocyte apoptosis by reducing oxidative stress and maintaining cell homeostasis. The underlying mechanisms may be associated with a reduction in endoplasmic reticulum stress [28]. HPE also exerts direct inhibitory effects on the pro-inflammatory mediators nitric oxide, TNF-α and cyclooxygenase-2 in lipopolysaccharide-stimulated RAW264.7 macrophages [29]. In the present study, we confirmed that HPE suppresses inflammation in the liver. We speculate that summation of the anti-inflammatory, anti-oxidative stress and anti-apoptotic effects of HPE all contribute to the recovery of liver function and proper iron metabolism. Iron reduction therapies, such as phlebotomy and an iron-restricted diet, are now used with chronic hepatitis C patients for the purpose of reducing iron overload. Iron reduction therapy is also effective in NASH patients [30]. In addition, there is persuasive evidence that iron reduction decreases insulin resistance, and it likely also decreases oxidative stress, two key pathogenic features of NASH. By improving iron metabolism, HPE treatment may be an effective strategy that can be used as an alternative or an addition to iron reduction therapy in the treatment of NASH.

Funding statement

This work was supported by Japan Bio Products Co., Ltd., as a collaborative project.
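For readers who want to reproduce the style of group comparison named in the Statistical analysis subsection (Student's t test, significance at p < 0.05), here is a minimal Python sketch. The individual ORP readings and group sizes are invented for illustration, since the paper reports means ± SEM rather than raw values; only the group means echo the reported week-3 figures.

```python
from scipy import stats

# Invented ORP readings (mV); group means match the reported week-3 values
# (116.1 mV untreated vs 102.3 mV HPE), but n and the raw values are assumptions.
orp_untreated = [115.8, 116.0, 116.2, 116.4]
orp_hpe = [102.1, 102.2, 102.4, 102.5]

# Student's t test (equal variances assumed), as stated in the Methods.
t_stat, p_value = stats.ttest_ind(orp_untreated, orp_hpe)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}; significant at p < 0.05: {p_value < 0.05}")
```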
Housing and Demographic Risk Factors Impacting Foot and Musculoskeletal Health in African Elephants [Loxodonta africana] and Asian Elephants [Elephas maximus] in North American Zoos

For more than three decades, foot and musculoskeletal conditions have been documented among both Asian [Elephas maximus] and African [Loxodonta africana] elephants in zoos. Although environmental factors have been hypothesized to play a contributing role in the development of foot and musculoskeletal pathology, there is a paucity of evidence-based research assessing risk. We investigated the associations between foot and musculoskeletal health conditions with demographic characteristics, space, flooring, exercise, enrichment, and body condition for elephants housed in North American zoos during 2012. Clinical examinations and medical records were used to assess health indicators and provide scores to quantitate conditions. Using multivariable regression models, associations were found between foot health and age [P value = 0.076; Odds Ratio = 1.018], time spent on hard substrates [P value = 0.022; Odds Ratio = 1.014], space experienced during the night [P value = 0.041; Odds Ratio = 1.008], and percent of time spent in indoor/outdoor exhibits during the day [P value < 0.001; Odds Ratio = 1.003]. Similarly, the main risk factors for musculoskeletal disorders included time on hard substrate [P value = 0.002; Odds Ratio = 1.050] and space experienced in indoor/outdoor exhibits [P value = 0.039; Odds Ratio = 1.037]. These results suggest that facility and management changes that decrease time spent on hard substrates will improve elephant welfare through better foot and musculoskeletal health.

Introduction

Foot and musculoskeletal conditions are among the most commonly reported health issues affecting African and Asian elephants under human care, and have been challenging veterinary issues for zoo elephants for nearly a century [1,2]. In 1994, Mikota et al. published an extensive review of medical records from 69 North American zoos and concluded that over the course of the 84 years for which documentation was available, an average of 50% of the elephants experienced foot pathology and 64% experienced musculoskeletal abnormalities [other than those affecting the feet] [3]. More recently, 33% of zoos surveyed reported at least one foot abnormality, 36% reported at least one case of arthritis, and 18% reported at least one case of lameness in their elephant populations within the previous year [4].
Foot and musculoskeletal health conditions of concern in elephants are pododermatitis, toenail cracks and overgrowth, onychia (inflammation/infection of the toenail bed), sole overgrowth and abscesses, osteomyelitis of the phalanges, degenerative joint disease, osteoarthritis, trauma, and soft tissue strains, although this is not an inclusive list [5,6,7]. Elephant feet and limbs may be predisposed to some of these conditions due to their unique anatomy and the pressures experienced due to large body mass [8]. Bones of the feet are oriented so that just the tips of the phalanges come into contact with the substrate via the associated nails [8]. In addition, a cartilaginous rod extends caudally to support the large cushion in the heel, which distributes forces across the foot [9]. Studies have shown that increased foot pressures are associated with larger body mass, and that elephants carry more than 60% of their weight in the forelimbs [10]. Limb bones in normal elephants have little angulation, and therefore forces are transmitted in line with the axis of the leg through the joints [3]. The long life of these species may lead to repeated force on the structures of the foot and limbs, potentially leading to health concerns.

Since health is an important indicator of animal welfare [11], there is considerable interest in developing a better understanding of the risk factors that contribute to poor foot and musculoskeletal health so that targeted prevention and intervention strategies may be applied. Clinical experiences suggest that lack of exercise, limited space, standing on hard substrates, environmental factors that increase contact of feet with excrement, urine, and moisture, and obesity are potential contributors to foot and musculoskeletal pathology [5,6,7]. However, there is a paucity of literature that scientifically investigates the association of these factors with foot and musculoskeletal disorders in elephants. The goals of this study were to 1) ascertain the current status of foot and musculoskeletal health of elephants housed at zoos accredited by the Association of Zoos and Aquariums (AZA) in North America; 2) investigate the associations of demographic, environmental, and management factors with foot and musculoskeletal problems; and 3) support evidence-based recommendations for interventions to prevent pathology and improve the foot and musculoskeletal health of zoo elephants.

Ethics Statement

This study was authorized by the management at each participating zoo and, where applicable, was reviewed and approved by zoo research committees. In addition, the study protocol was reviewed and approved by the Zoological Society of San Diego Institutional Animal Care and Use Committee (N.I.H. Assurance A3675-01; Protocol 11-203). The study was non-invasive.

Study Population

Elephants selected for this study were present in AZA-accredited zoos in 2012. Additionally, elephants selected for study were not born, did not die, and were not transferred between zoos within the 2012 study year. Data were sourced from medical records and physical exams for each elephant completed by veterinarians at each participating zoo.
Musculoskeletal Assessment

Zoo-based veterinarians performed a visual/tactile examination of each individual elephant using a checklist to record the presence or absence of abnormalities in the musculoskeletal system of the limbs (shoulders, elbows, carpi, hips, stifles, tarsi) (S1 Template). Occurrences of abnormalities such as swelling, heat, or angular deformities that reflected musculoskeletal pathology were documented. Due to the elephant's anatomy, visual and tactile examination is less effective for detecting abnormalities in the more proximal joints, such as the shoulders and hips. Therefore, veterinarians also evaluated each animal for evidence of stiffness, lameness, abnormal weight bearing, or mechanical limitations in the range of motion of the limb joints as an additional indicator of musculoskeletal problems.

The project veterinarian reviewed all physical examination results and assigned each elephant a musculoskeletal (MS) score based on the following system: an MS score of 0 indicated no gait change, limb deformity, or joint heat or swelling on the physical exam; a score of 1 indicated one joint/limb with heat, swelling, or mild lameness/gait change; a score of 2 indicated one joint/limb exhibiting heat or swelling with associated lameness or stiffness; and a score of 3 indicated two or more joints or limbs with heat, swelling, or joint deformity associated with lameness or other gait deficiencies. The severity of abnormalities was not assessed since this could not be reliably standardized between the different veterinarians performing the examinations.

Foot Assessment

Zoo-based veterinarians evaluated each elephant's external pedal tissue structures (foot pad, interdigital space, cuticle, toenail) and recorded the presence or absence of abnormalities (but not severity) on each foot (S1). Toenails were examined for any cracks, defects, or horn growth abnormalities. In addition, veterinarians recorded any cracks, ulcerations, bruises, fissures, abscesses, or horn growth/sole abnormalities on the foot pads and in the interdigital spaces. Osteo-articular pathologies of the feet were not assessed during this portion of the examination, and results of radiographs were not included since the majority of elephants did not have concurrent imaging at the time of these evaluations.

Foot data from the physical examinations were reviewed by the project veterinarian and each elephant was assigned a score based on the following system: each of three locations (toenail, pad, or interdigital space) on a foot was assessed for the presence of an abnormality, and each location on each foot with an abnormality was scored as 1, such that each foot could have a maximum score of 3 and each elephant a maximum score of 12.
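The scoring rubric above is simple enough to express directly. The sketch below is not from the paper; the location names and data layout are assumed, purely to illustrate the arithmetic of the foot score.

```python
# Minimal sketch (not from the paper) of the foot-scoring rubric: each of
# three locations per foot scores 1 if any abnormality is present, giving a
# maximum of 3 per foot and 12 per elephant (4 feet).
LOCATIONS = ("toenail", "pad", "interdigital")

def foot_score(feet):
    """feet: list of 4 dicts mapping location -> bool (abnormality present)."""
    return sum(int(foot[loc]) for foot in feet for loc in LOCATIONS)

# Example: one foot with a toenail crack and a pad lesion, three normal feet.
feet = [{"toenail": True, "pad": True, "interdigital": False}] + \
       [{"toenail": False, "pad": False, "interdigital": False}] * 3
assert foot_score(feet) == 2
```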
In order to determine the subset of the elephant population that could potentially be affected by chronic or recurrent foot problems, we requested the complete 2011 veterinary records of each elephant included in the study. Where veterinary records were obtained and complete for the calendar year, the project veterinarian assessed each record for notes where the attending veterinarian had described problems or treatment pertaining to the elephant's feet. We were interested in evaluating chronic or recurrent (described as "possibly persistent" in the remainder of the text) foot problems; however, due to the level of detail provided in the 2011 records, we were not able to determine the severity of lesions nor whether the abnormalities reported in the physical examination were in the exact same location as those observed in 2011. As such, the population of interest for further risk factor analyses included elephants with a completed physical examination in 2012 who also had a record of one or more foot abnormalities in 2011. In this case, elephants with "possible persistent" foot problems had one or more foot problems in both 2011 and 2012, but the exact nature and location of those problems could not be confirmed to be the same.

Independent Variables

We selected independent variables based on hypotheses regarding their potential association with foot and MS scores. Definitions for the variables selected for testing in this study are described in Table 1. Details on the collection and calculation of independent variables are presented in [12-16], but a few novel variables warrant further description.

We were interested in quantifying the amount of space available to each elephant. Because many zoo elephants are shifted between different environments that comprise an exhibit for varying amounts of time each day, a new variable was calculated to capture the experience of the elephants as a factor of both the size of their different environments and the amount of time they are housed in each space. This Space Experience variable [12] was calculated by first taking the size (m²) of each environment in which an elephant spent time and then multiplying it by the percentage of time the elephant spent in that environment. These weighted environment sizes were then averaged to calculate a representative value for each elephant. The Space Experience variables were adjusted to a value of "per 500 ft²" to aid in interpretation of beta values.

Table 1. Description of variables used in the analysis of musculoskeletal and possible persistent foot scores:
- Origin (elephant level): captive or wild born [14]
- Environment Contact (elephant level; overall, day, night): maximum number of unique environments an elephant was housed in [12]
- Space Experience (elephant level; the average size, weighted by percent time, of all environments in which an elephant spent time [12]):
  - Total (m²; overall, day, night): all environment types
  - Indoor (m²; overall, day, night): indoor environments only
  - In/Out Choice (m²; overall, day, night): environments with a choice of indoors or outdoors
  - Outdoor (m²; overall, day, night): outdoor environments only
- Percent Time (elephant level; sum of monthly percent time spent in category, averaged over the time period [12]):
  - Indoor, In/Out Choice, and Outdoor (%; overall, day, night)
  - Soft Substrate (%; overall, day, night): time spent in environments with 100% grass, sand, or rubber substrate
  - Hard Substrate (%; overall, day, night): time spent in environments with 100% concrete or stone aggregate substrate
- Body Condition Score (elephant level): score based on body condition, ranging from 1-5 with an ideal score of 3 [15]
- Musculoskeletal Exercise Diversity (zoo level): diversity index score of exercises conducted at the zoo [13]
- Enrichment Diversity (zoo level): diversity index score of enrichment activities conducted at the zoo [13]

To calculate our environment type and flooring substrate variables, we first defined each space in which elephants spent time as indoors, outdoors, or mixed based on detailed facility surveys [12]. Mixed environments were areas where elephants had a choice to move freely between indoor and outdoor spaces. We then defined multiple classes of flooring substrate (grass, sand, rubber padding, stone aggregate, concrete), categorized the substrate types into hard surfaces (concrete and stone aggregate) and soft surfaces (grass, sand, and rubber padding), and determined the percent coverage of each substrate type for each environment. Because we wanted to calculate the time that elephants spent in contact with each substrate type, we determined which environments were comprised of 100% hard or 100% soft substrate and calculated the percent time each elephant spent in environments that met this criterion from detailed housing time budgets [12].

Statistical Analysis

The MS score, foot score, and co-localization frequencies were calculated. Co-localization was defined as more than one type of abnormality per foot. Sex and species differences were assessed using chi-square analysis. We calculated descriptive statistics for the mean percent coverage of hard and soft flooring surfaces for each environment type (indoors, outdoors, and mixed), and chi-square analysis was used to determine if there were any associations between the environment type and the frequency of 100% coverage of hard or soft surfaces.

Predictive models for MS and foot scores were fitted using generalized estimating equations (GEE), which allowed for repeated measurement and clustering of individual animals within zoos. Multinomial logistic regression was used for MS scores, with a reference level of zero, or "no joint problems". For foot scores, the score mean equaled the variance, supporting the use of log-linear Poisson regression models. Residual over-dispersion was accounted for by allowing a multiplicative over-dispersion factor, specified as the deviance scale. Multivariable regression models were built by assessing individual predictors and manually conducting forward stepwise selection based on quasi-likelihood under the independence model criterion (QIC) values and parameter estimates of explanatory variables. Models exhibiting multi-collinearity, as defined by a variance inflation factor (VIF) of greater than 10 and a condition index (CI) of greater than 30, were not considered for further analysis. Age, sex, species, and origin were assessed as potential confounders to the models. An independent correlation structure was specified. Statistical analyses were conducted using SAS software, version 9.
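For readers who want to experiment, the foot-score model family described above can be sketched in Python. This is a hedged analogue, not the authors' code: the paper fitted its models in SAS with a deviance-based over-dispersion scale, whereas statsmodels offers Pearson chi-square ("X2") scaling; the data file and column names here are hypothetical.

```python
# Hedged sketch of an analogous GEE Poisson fit (the paper used SAS).
# Zoo is the clustering unit; the working correlation is independent.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("elephants.csv")  # hypothetical data file
model = smf.gee(
    "foot_score ~ pct_hard + pct_inout_day + space_night + age",
    groups="zoo",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Independence(),
)
result = model.fit(scale="X2")  # Pearson-X2 scale allows for over-dispersion
print(result.summary())
```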
Musculoskeletal Health

Within the study population of 255 elephants, 198 had complete musculoskeletal health data. The majority of elephants, 74.7% (148/198), did not have any reported musculoskeletal abnormalities. Table 2 shows the frequency of MS scores within the study population. There were no significant statistical differences between the MS scores based on sex (P value = 0.070) or species (P value = 0.488).

The results of univariate modeling of space and substrate variables on MS scores are presented in Table 3 and were used to guide development of the multivariable model. Descriptive statistics detailing the variables included in the multivariable regression model are shown in Table 4. In the multivariable multinomial logistic predictive model, the combination of time on hard substrate, Space Experience in environments that included both indoor and outdoor areas, and the interaction of Space Experience In/Out Choice with age had the most effect on odds of increased MS scores (Table 5). The odds ratio for percent time spent on hard surfaces was 1.050. An example of how this odds ratio associates time on hard substrates with MS scores is illustrated using population-level descriptive statistics for time on hard substrates. Elephants that spend 4 hours per 24-hour period on hard substrates (population 3rd quartile) are 68% more likely to have an MS score of 2 (versus 1) than are elephants that spend 2.5 hours per 24-hour period on hard substrates (population mean). Space Experience for areas with a choice of indoors or outdoors is associated with a 3.7% increase in odds of a higher MS score. However, this effect is attenuated by age, such that for each year an elephant ages, the effect of Space Experience In/Out Choice on MS score decreases by 0.1%.

Foot Health

Within the study population of 255 elephants, 215 had physical examinations completed for foot health. Of these, 32.6% (70/215) had no noted foot abnormalities at the time of examination, and for those that did, 88.3% (128/145) had foot scores of between 1 and 4 (maximum score of 12). Table 6 details the frequency of foot scores within the population. There was no difference in foot scores by species (P value > 0.05).

Of the 145 animals with recorded abnormalities, 92.4% (134/145) had abnormalities of the nails, 13.1% (19/145) had abnormalities on their pads, and 22.8% (33/145) had abnormalities in their interdigital space. Fig 1 shows the distribution of feet per elephant where abnormalities were present. Co-localization, the occurrence of abnormalities in combination (two or three locations per foot), was present in 13.0% (28/215) of the population, as seen in Fig 2.
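The odds-ratio interpretation above can be reproduced mechanically: a per-unit odds ratio compounds multiplicatively over a difference in exposure. The sketch below is illustrative only; the exact scaling of the percent-time variable behind the quoted 68% figure is not stated in this excerpt, so the deltas used are arbitrary.

```python
# Worked illustration (not the authors' computation): a per-unit odds ratio
# compounds across a difference in exposure as OR_total = OR ** delta.
def compounded_or(or_per_unit: float, delta_units: float) -> float:
    return or_per_unit ** delta_units

print(round(compounded_or(1.050, 5), 2))   # ~1.28: 28% higher odds over 5 units
print(round(compounded_or(1.050, 10), 2))  # ~1.63: 63% higher odds over 10 units
```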
One hundred sixty-three elephants had complete 2011 veterinary records and a physical exam conducted in 2012. Sixty-four of those 163 elephants had at least one foot abnormality in their 2011 records, and therefore met the criteria for the analysis of possible persistent foot (PPF) abnormalities. Table 6 lists the foot score frequencies for the full population (2012), and for those with PPF scores (the 2012 score if an abnormality was listed in 2011). Of those elephants meeting the criteria for PPF scores, 79.7% (51/64) had at least one foot abnormality reported in the 2012 physical exam, suggesting potential chronicity or recurrence. The majority of these elephants had abnormalities of the toenails (73.4%; 47/64), while 10.9% (7/64) had abnormalities on pads and 20.3% (13/64) had abnormalities in the interdigital space. There were no significant statistical differences between the PPF scores based on sex (P value = 0.820) or species (P value = 0.527).

Since chronic or recurrent foot issues have been postulated to be related to husbandry/management conditions, univariate modeling of the foot scores from the 64 elephants in the PPF sub-population was performed, and the results are presented in Table 7. These findings were used to guide development of the multivariable model. Descriptive statistics detailing the variables retained in the final multivariable model are shown in Table 8. The multivariable Poisson predictive model found that the combination of time on hard substrate, percent of time spent during the day with a choice of indoors or outdoors, and Space Experience at night had the greatest effect on risk of possible persistent foot scores (Table 9). The risk ratio for percent time spent on hard surfaces was 1.014 (Fig 3). An example of how this risk ratio associates time on hard substrates with foot scores is illustrated using the population-level descriptive statistics for time on hard substrates. Elephants that spend 3 hours per 24-hour period on hard substrate (population mean) are 18% more likely to have a foot score of 6, while those spending 5 hours per 24-hour period (population 3rd quartile) are 32% more likely to have a foot score of 7. We found a smaller effect on foot score when elephants spent time in environments where there was a choice of being indoors or outdoors during the day: there was a 0.8% increase in risk of an increased foot score for each incremental increase in percent time spent in these mixed indoor/outdoor environments. In addition, Space Experience at night was associated with a 0.3% increase in risk of a higher foot score. Age is included in the model as a confounder of nighttime Space Experience.
Flooring and Environment Associations

We further analyzed the flooring substrate coverage data to better understand the potential associations between environment types (indoors, outdoors, and mixed) and flooring surfaces. Table 10 shows the descriptive statistics for average percent coverage of hard flooring surfaces (concrete and stone aggregate) and soft flooring surfaces (grass, sand, and rubber padding) in the different environment types (indoor, mixed, and outdoor). This analysis demonstrates that the average coverage of hard and soft surfaces did not differ between indoor, outdoor, and mixed environments. While many environments had multiple substrate types, our modeling considered only environments with 100% coverage of hard or soft substrate (see Statistical Analysis).

Discussion

A number of factors such as age, housing conditions, and management practices have been suggested as risk factors for foot and musculoskeletal pathologies in elephants under managed care, but to date no studies have tested these associations with robust sample sizes and clinical assessments collected by veterinarians on individual elephants. For example, Fowler [5] proposes that lack of exercise, limited space, standing on hard substrates, environmental factors that increase contact of feet with excrement and moisture, and obesity are important contributing factors to elephant foot and musculoskeletal health problems (based on clinical observations), while Lewis et al. [4] used regression modeling to demonstrate that age predicted likelihood of arthritis (based on surveys without accompanying clinical assessments). In this study, clinical assessments of musculoskeletal and pedal external tissue conditions were paired with individual elephant data describing demographic, housing, flooring, exercise, enrichment, body condition, and other variables to determine associations and to provide potential insights into facility and management changes that could improve health and welfare.

When musculoskeletal health was evaluated via physical examination, the majority (74.7%; 148/198) of elephants had no observable movement or clinical abnormalities (i.e., swelling, heat, or deformity) of their limbs. Twenty-two animals (11.1%; 22/198) had problems with stiffness, gait, or limitations in movement in addition to one or more detectable musculoskeletal abnormalities (swelling, heat, or deformity), suggesting more significant pathology. However, it is important to note that visual and tactile examination is limited as a technique for detecting musculoskeletal abnormalities compared to the clinical use of radiography or thermography. As such, the prevalence of joint abnormalities found in this study may be underestimated because we did not employ more sensitive diagnostic techniques.

Although there were no statistical differences between the frequencies of musculoskeletal abnormalities in African and Asian elephants in this study, the only two elephants with multiple musculoskeletal abnormalities were Asian. This finding differs from previous studies in which musculoskeletal abnormalities were statistically more frequent in Asian elephants [3,17]. Further, in the Lewis et al. study [4], most of the variance attributed to species differences was explained by the fact that the Asian elephants were significantly older than the African elephants; however, we did not find a similar positive association between age and MS scores in our study.
With respect to foot abnormalities, we found that approximately two-thirds of elephants in the current study had recorded nail, pad, or interdigital space abnormalities. Toenail problems, specifically onychitis (inflammation/infection of the nail bed), have been previously reported as the most common zoo elephant foot pathology [3]. In our population, toenail abnormalities including cracks, defects, inflammation, and horn growth abnormalities comprised 72.7% of all reported foot issues. These findings support those of a recent study in which the highest pressure measured in elephant feet occurred at the distal ends of the lateral toes, which make contact through the toenails, suggesting a biomechanical link to foot pathologies [8]. In addition, as elephants grow larger and older, their gait changes so that more pressure is initially placed on the cranial aspect of the foot. Over time, these repeated concussive forces may lead to the development of abnormalities. Our data suggest that increased age did have an effect on risk of persistent foot abnormalities. Conformation, individual weight-bearing patterns, or musculoskeletal issues (i.e., arthritis) may also predispose to pedal aberrations [5,7]. To support this premise, 13% of elephants in our study had concurrent abnormalities of several areas on a single foot, which suggests more extensive pathology. Twelve of the 28 elephants with multiple foot abnormalities had only one foot affected, while 7 elephants had two feet affected and 5 individuals displayed multiple abnormalities on all 4 feet. Coexisting abnormalities on multiple feet suggest the inclusion of other influencing factors, such as environmental conditions, management practices (including participation of the elephant in routine foot care), or changes in overall health status [7]. Thus, our data suggest that despite improvements in preventive foot care in AZA facilities [4], foot pathology remains a health concern for elephants housed in North American zoos.

In order to determine the persistence of foot abnormalities in our study population, historical medical records (calendar year 2011) from 163 elephants were matched with the findings of the 2012 physical exam. Of the 64 animals with recorded foot issues during 2011, the majority (79.7%; 51/64) had one or more recorded abnormalities on examination in 2012, suggesting chronic or recurring pedal pathology.
Our results demonstrate that one of the main housing risk factors for increased foot and musculoskeletal abnormalities was time spent on hard surfaces. Studies in cattle have shown that hard-surfaced alleys and walkways contribute to an increased incidence of claw lesions and lameness [18,19], whereas cattle that have access to pasture (a natural substrate) have lower levels of foot abnormalities [20]. In zoo settings, the prevalence of chronic foot disease in greater one-horned rhinoceros (Rhinoceros unicornis) was found to be 22.2%, and the authors speculated that trauma from concrete and lack of access to ponds and wallows were contributing factors [21]. Clinical case studies with elephants show that standing or walking on hard substrates such as concrete or stone can lead to trauma of foot pads, toenails, joints, and other musculoskeletal structures, resulting in cracks, abscesses, bruises, strains, and degenerative joint disease [5,7,17]. Indeed, the final multivariable models revealed a significant relationship between time on hard substrate and both foot and MS scores, such that just a 10% increase in time on hard surfaces was associated with increased risk of both foot and musculoskeletal abnormalities. Since our objective was to measure the amount of time the elephants spent in contact with different substrate types, we focused the analysis on substrate categories where we knew the environment consisted of 100% coverage of hard substrate or 100% coverage of soft substrate. This is a conservative approach, as time spent in environments with substrate coverage that was large, but less than 100%, was not captured in this analysis [12]. Despite these limitations, our methods for estimating exposure to hard and soft surfaces proved sufficient for detecting associations with both foot and musculoskeletal problems. Our findings support the supposition that there is a link between foot pathology and regional peak pressures in the elephant's foot [8]. Since foot pressure would be expected to increase with firmer surfaces, this may explain the observations that associate foot problems and hard substrate [5,7].
Both foot and musculoskeletal scores were also associated with variables that described elephants' access to exhibit spaces made up of both indoor and outdoor areas. For foot health, the variable included in the final model described the percent time the elephants spent in mixed indoor/outdoor spaces, and the MS scores model included Space Experience In/Out Choice, which is a measure of the size of the mixed indoor/outdoor spaces weighted by the amount of time the elephant spent in those spaces [12]. Although we hypothesized that mixed exhibits would encourage more walking, which would promote better foot health (through normal wear) and musculoskeletal health (through exercise) and thereby be associated with decreased scores, the opposite relationship between time spent in mixed exhibits and both foot and MS scores was found. For example, an incremental increase of 10% time in mixed exhibit space increased the risk of foot abnormalities by 8.3%, and there was a 3.7% incremental increase in risk of musculoskeletal abnormalities in elephants that experienced increased indoor/outdoor exhibit Space Experience, although this was attenuated with age. One possible explanation for this finding could be that when elephants spend more time in mixed exhibits, they are more likely to be on hard surfaces. However, our assessment of substrate type by environment type indicated that mixed indoor/outdoor environments are not more likely to have 100% coverage of hard substrate, and we found that mixed environments had the same average percent coverage of hard and soft substrates as indoor or outdoor environments. Since our assessment of flooring did not capture time spent in environments with less than 100% substrate coverage, we cannot completely rule out substrate exposure as the underlying reason for the effects that mixed indoor/outdoor environments had in our models, but our investigation of the potential associations between substrate and environment type indicates that there is likely another explanation for these correlations. For example, it is possible that when elephants have the opportunity to move between indoor and outdoor areas, they are exposed to fluctuations in temperature or humidity that could impact musculoskeletal or pedal health, or that movement between different types of spaces could be associated with more frequent contact with environmental features (gates, thresholds) that could lead to trauma to pedal and other limb structures. Given that time spent in mixed indoor/outdoor exhibits is associated with a decreased risk of performing stereotypic behavior [22], further investigation into the underlying contributors to the association between mixed environments and foot/musculoskeletal health is warranted.
We also investigated the association between space and foot and MS scores with the hypothesis that increased space would improve foot and MS scores via increased locomotion. However, this supposition was not supported in the multivariable analyses. In fact, an incremental increase of 500 square feet of space available at night led to a 0.3% increased risk of higher foot scores. We are unclear as to why this relationship was found in the model, but further research including observational studies of elephants at night could potentially reveal behavioral differences associated with larger spaces that could help explain this result. Age was a significant risk factor for foot problems. For example, a ten-year increase in age led to a 19.5% increase in the probability of foot abnormalities. Degenerative processes of the musculoskeletal system have been found to be age-related in a variety of species. For example, age has been previously identified as a contributor to increases in the likelihood of foot pathology and diagnosis of arthritis in zoo elephants [7]. In dairy cattle, age-related increases in locomotive abnormalities have been reported [23], and age was also strongly associated with risk of cranial cruciate ligament rupture in dogs that have had a previous episode [24].

Significant morbidity can result from chronic pododermatitis and degenerative joint disease in elephants [2,25,26]. Foot abscesses may progress to pedal osteomyelitis, which requires intensive management and may lead to euthanasia in unresolved cases [7]. Chronic joint pathology may lead to limited range of motion and lameness, which reflects declining welfare for the individual [2]. One of the logistical constraints in this study was the inability to evaluate the severity of individual foot and musculoskeletal abnormalities. Since physical exams and medical record entries were performed by the attending veterinarian at each facility rather than a consistent set of observers for all facilities, measures of foot and musculoskeletal health were limited to the presence or absence of abnormalities rather than a quantitative evaluation of severity. Future studies of this nature may endeavor to include assessments of severity to further develop our understanding of foot and musculoskeletal conditions in zoo elephants.

The conclusion that more time spent on hard surfaces is associated with increased trauma to pedal and musculoskeletal structures resulting in pathology is supported by cases in the literature as well as by the results of our multivariable analyses [1,2,3,8,25]. Space Experience at night and in mixed exhibits also appears to be a factor that needs further investigation. The identified associations between risk of developing foot and musculoskeletal health issues and environmental conditions in elephants in North American zoos provide focused areas for recommendations and further research. The results indicate that foot and musculoskeletal health continue to be a concern for elephants housed in North American zoos. Prevention is fundamental through identifying and minimizing risk factors that contribute to these health conditions. The evidence indicates that facility and management changes which decrease time spent on hard substrates are likely to lead to improvements in foot and musculoskeletal health and overall welfare.
Fig 3. Risk increase for possible persistent foot scores by percent time on hard surfaces for an elephant 25 years old, where Percent Time In/Out Choice during the day and Space Experience at night are held at their averages (8.52% and 22097.91 ft², respectively).

Table 1. Description of variables used in the analysis of musculoskeletal and possible persistent foot scores of African and Asian elephants.

Table 2. Frequency of MS scores among African and Asian elephants during the 2012 physical exam.

Table 3. Univariate assessment of musculoskeletal (MS) scores in African and Asian elephants using multinomial logistic regression. OR: odds ratio; *: P value < 0.05; ^: P value < 0.15, significance threshold for model building. Hypothesis: + increases odds of a higher MS score; − decreases odds of a higher MS score; 0 neutral relationship with MS score.

Table 4. Descriptive statistics for variables retained in the final multivariable regression model for the population with MS scores.

Table 5. Multivariable assessment of MS scores using multinomial logistic regression.

Table 6. Frequency of elephants per foot score for the foot physical exam and for possible persistent foot (PPF) scores. The foot physical exam was conducted in 2012. PPF scores were defined by an elephant's 2012 physical exam score only for elephants whose 2011 veterinary records showed foot abnormalities in 2011.

Table 8. Descriptive statistics for variables retained in the multivariable regression model for the possible persistent foot score subpopulation.

Table 9. Multivariable assessment of possible persistent foot scores using Poisson regression.

Table 10. Average percent coverage of hard surfaces (concrete and stone aggregate) and soft surfaces (grass, sand, and rubber padding) in indoor, mixed, and outdoor environments. Range for all combinations was 0-100% coverage.
A Novel Interconnection Network with Improved Network Cost Through Shuffle-Exchange Permutation Graph

The interconnection network represents an interconnected structure of processors that strongly determines the performance quality of a parallel processing system. The shuffle-exchange permutation (SEP) network, with degree three, has high fault tolerance and can efficiently simulate star, bubble-sort, and pancake graphs. This study proposes a new interconnection network, the new SEP (NSEP), which improves the diameter and reduces the network cost by adding one edge to the SEP network, and presents its graph properties and routing algorithms. The NSEP network, with a degree of connectivity of four, demonstrates maximum fault tolerance and contains a Hamiltonian cycle. Furthermore, the diameter is improved by 40% or more and the network cost by 20% or more.

Introduction

With the explosive increase in data size owing to the recent advancements in information technology, the demand for high-performance computers with large computational power is increasing, particularly for big data and artificial intelligence applications. In response to these demands, high-performance computers with various computational processing units, such as graphics processing units (GPUs) and multicore processors, in addition to the conventional central processing units (CPUs), have been developed. Such computers are constantly evolving in response to new demands and requirements [1].

A parallel computer is a computer system that divides a given task and processes the pieces among processing units operating in parallel. Parallel computers are classified into shared-memory multiprocessors and message-passing multicomputers [2]. In the former, the memory system affects the overall system performance [3]. The interconnection network refers to the location and connection structure between processors and is one of the factors that determine the performance of a parallel processing system [4]. Hence, continuous research on interconnection networks is required to improve the performance of parallel processing computers.

Network cost is one of the measures of interconnection networks and is represented by the product of the degree and the diameter. The degree is related to the hardware cost and the diameter to the software cost. The network cost may be reduced by reducing either the degree or the diameter; however, the two are inversely correlated: reducing the degree increases the diameter, whereas reducing the diameter increases the degree, which makes it difficult to reduce the network cost [4].

Interconnection networks are classified into mesh, hypercube, and star graph classes depending on the number of nodes. The SEP network [5] belongs to the star graph class with n! nodes; it is node- and edge-symmetric, has excellent scalability through its recursive structure, and has a very small degree and diameter compared to the hypercube [6-8]. The existing SEP network has maximum fault tolerance with a degree of connectivity of three, and it can efficiently simulate star, bubble-sort, and pancake graphs. Thus, one can still get the advantage of the fixed degree of the network (independent of the size) [5]. In addition, we expect that NSEP networks, whose degree is greater by one, will also simulate star, bubble-sort, and pancake graphs efficiently.
For the n-dimensional NSEP proposed in this study, where n = 2k, the distance n/2 between two nodes was reduced to one by adding an edge to the SEP. The proposed NSEP has a fixed degree of four and retains the properties of the existing SEP. The NSEP network has maximum fault tolerance with a degree of connectivity of four and has a Hamiltonian cycle. Compared to the SEP network, the diameter is improved by more than 40% and the network cost by more than 20%. In Section 2, we examine the network measures of interconnection networks with constant degree. In Section 3, we define the new interconnection network NSEP_n, present its theoretical graph properties and routing algorithm, and analyze its diameter. Finally, Section 4 concludes this study.

Related Works

In this chapter, we first consider the importance of network measures and the advantages of a fixed degree. Next, we examine the Hamiltonian cycle and the SEP with an improved network cost.

In a multiprocessor system, a connected network for supporting the communication between each processor is called a multiprocessor interconnection network [9]. The interconnection network can be represented as an undirected graph representing each processor as a node and a communication link between processors as an edge. An edge is placed between any two processors with a link between them; this edge is undirected and can transmit data bidirectionally. The interconnection network of a parallel computer is represented as an undirected graph G = (V, E), where V(G) is the set of nodes of graph G, that is, V = {0, 1, 2, ..., N − 1}, and E(G) is the set of edges of graph G. An edge of graph G is a pair of arbitrary nodes v and w of V(G). The necessary and sufficient condition for the existence of an edge (v, w) is the presence of a communication link between nodes v and w [10-15].

Network measures for evaluating interconnection networks include degree, diameter, network cost, connectivity, fault tolerance, and symmetry [10,15]. The degree of a node v refers to the number of edges adjacent to v, and the degree of graph G refers to the maximum value among the degrees of the nodes belonging to V(G). A network in which all nodes of graph G have an equal degree is called a regular network. The diameter is the maximum value of the shortest path between any two nodes in the network and is the lower limit of the delay time required to transmit information to the entire network. A network having a relatively small diameter compared to the number of nodes, despite the short distance between nodes, has the disadvantage that it is difficult to design in terms of hardware as the number of nodes increases [16-18].

The interconnection networks that have been proposed until now can be classified into the following three types according to the number of nodes: the mesh class with k × n nodes, the hypercube class with 2^n nodes, and the star graph class with n! nodes [6]. The mesh structure has been widely used as a planar graph to date and has been commercialized in various systems [19,20]. An m-dimensional mesh M_m(N) consists of $N^m$ nodes and $mN^m - mN^{m-1}$ edges. Each node's address is represented by an m-dimensional vector, and when the addresses of any two nodes differ by one in exactly one dimension, there is an edge between them.
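As a quick sanity check on these counts, the following sketch (assumptions: N nodes per dimension, as defined above; the function name is ours) evaluates the mesh node and edge formulas for small cases.

```python
# Quick check of the mesh formulas above: an m-dimensional mesh M_m(N) has
# N**m nodes and m*N**m - m*N**(m-1) edges.
def mesh_counts(m: int, N: int):
    return N ** m, m * N ** m - m * N ** (m - 1)

assert mesh_counts(2, 3) == (9, 12)   # a 3x3 grid has 9 nodes and 12 edges
assert mesh_counts(1, 5) == (5, 4)    # a linear array of 5 nodes has 4 edges
```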
Because low-dimensional meshes are easy to design and are useful from the algorithmic viewpoint, they are widely used as networks of parallel processing computers. The higher the dimension of a mesh, the smaller its diameter and the larger its bisection width, and various parallel algorithms can be executed rapidly; however, it is costly [6]. Structures that improve the diameter of a mesh with a typical lattice structure, such as the hexagonal mesh, toroidal mesh, diagonal mesh, honeycomb mesh, and torus, have been proposed [19,21].

A Hamiltonian path of an interconnection network is a path that passes through all nodes of G exactly once. A Hamiltonian cycle of the graph G refers to a path with the same starting and destination nodes that passes through all other nodes exactly once. If a network has a Hamiltonian path or Hamiltonian cycle, a ring or a linear array can be easily implemented, which can be utilized as a useful pipeline for parallel processing [22]. If a graph contains a Hamiltonian cycle, it naturally contains a Hamiltonian path.

The n-dimensional SEP graph SEP_n is a regular network that represents nodes by permutations of n symbols and has degree three [5]. In this respect, this study uses the terms node and permutation interchangeably. SEP_n has three kinds of edges, {g_12, g_L, g_R}, applied according to the conditions. If an arbitrary node of SEP_n is S = s_1 s_2 s_3 ··· s_{n−1} s_n, the adjacent nodes are as follows.

1. Edge g_12: connects the nodes in which the leftmost first and second symbols of the permutation are exchanged. The node adjacent to S by the edge g_12 is g_12(S) = s_2 s_1 s_3 ··· s_{n−1} s_n.
2. Edge g_L: all symbols in the permutation are moved one position to the left, and the leftmost symbol is moved to the rightmost position. The node adjacent to S by the edge g_L is g_L(S) = s_2 s_3 ··· s_{n−1} s_n s_1.
3. Edge g_R: all symbols of the node permutation are moved one position to the right, and the rightmost symbol is moved to the leftmost position. The node adjacent to S by the edge g_R is g_R(S) = s_n s_1 s_2 s_3 ··· s_{n−1}.

At node S, the node adjacent by the edge operation g_12 is denoted g_12(S), and the same notation applies to the edges {g_L, g_R}. If the edge sequence g_L, g_12, g_R is applied at node S, the node changes as S → g_L(S) → g_12(g_L(S)) → g_R(g_12(g_L(S))). The permutations along this path are S = s_1 s_2 s_3 ··· s_{n−1} s_n, g_L(S) = s_2 s_3 ··· s_{n−1} s_n s_1, g_12(g_L(S)) = s_3 s_2 ··· s_{n−1} s_n s_1, and g_R(g_12(g_L(S))) = s_1 s_3 s_2 ··· s_{n−1} s_n. Thus, when the edge sequence g_L, g_12, g_R is applied at node S, the last node is g_R(g_12(g_L(S))) = s_1 s_3 s_2 ··· s_{n−1} s_n. Figure 1 shows the 4-dimensional SEP_4 graph.
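The three edge operations are easy to express on tuple permutations. The following is a minimal sketch (the function names are ours, not the paper's):

```python
# Minimal sketch (not from the paper) of the three SEP_n edge operations on
# a node represented as a tuple permutation of the symbols 1..n.
def g12(s):
    """Exchange the two leftmost symbols."""
    return (s[1], s[0]) + s[2:]

def gL(s):
    """Cyclic left shift: the leftmost symbol moves to the rightmost position."""
    return s[1:] + (s[0],)

def gR(s):
    """Cyclic right shift: the rightmost symbol moves to the leftmost position."""
    return (s[-1],) + s[:-1]

# The edge-sequence example from the text: applying gL, g12, gR to S
# yields s1 s3 s2 s4 ... sn.
S = (1, 2, 3, 4)
assert gR(g12(gL(S))) == (1, 3, 2, 4)
```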
The diameter of the SEP graph is 1 8 9n 2 − 22n + 24 , and its degree of connectivity is three, having a maximum fault tolerance [5]. Because SEP n is a Cayley graph, it has a node symmetric property [10]. The cycle, whose path length is n and composed of edges g L (or g R ) in SEP n , is called s − cycle [5]. In the SEP graph, the positions of symbols are exchanged using edge operation g 12 , and the symbol to be exchanged is moved to the leftmost position using edge operation < g L , g R >. This study improved the diameter value by adding one edge in which the symbol of the position n 2 + 1 can be quickly moved to the leftmost position in the node permutation of the graph SEP n . Definition and Properties of NSEP n Graph The NSEP n (New − SEP n ) graph is a regular graph with four degrees obtained by adding one edge to the existing SEP n graph (n = 2k, k ≥ 2). One edge added to S = s 1 s 2 s 3 · · · s n 2 s n + 1 2 · · · s n−1 s n , a node of NSEP n , is an edge that connects the permutations in which the symbols n 2 , s 1 ∼ s n 2 , and s n + 1 2 ∼ s n have been exchanged. Let g n 2 be an added edge of the NSEP n graph. The node is g n 2 (s) = s n + 1 2 · · · s n−1 s n s 1 s 2 s 3 · · · s n 2 adjacent by the edge g n 2 in the node S = s 1 s 2 s 3 · · · s n 2 s n + 1 2 · · · s n−1 s n . Therefore, the NSEP n graph has four edges g 12 , g L , g R , g n 2 for each node. The four nodes g 12 (S), g L (S), g R (S), g n 2 (S) adjacent to the node S = s 1 s 2 s 3 · · · s n 2 s n + 1 2 · · · s n−1 s n of the NSEP n graph are shown below. g 12 (S) = s 2 s 1 s 3 · · · s n 2 s n+1 2 · · · s n−1 s n g R (S) = s n s 1 s 2 s 3 · · · s n 2 s n + 1 2 · · · s n−1 g L (S) = s 2 s 3 · · · s n 2 s n + 1 2 · · · s n−1 s n s 1 g n 2 (S) = s n+1 2 · · · s n−1 s n s 1 s 2 s 3 · · · s n 2 Figure 2 shows an example of the NSEP 4 graph. In Figure 2, the thick line represents the edge g 12 , the solid line represents the edge g L (or g R ), and the dotted line represents Because the graph has one extra edge over the graph, the latter is a subgraph of the former. Cycles whose path length is and which comprise the edges (or ) of the are called s-cycles. For example, an s-cycle with the path length of four at the node S (= 1234) of is = (1234) 2341 3412 4123 1234( = ) A cluster in has several important properties. These properties can be used to confirm that the graph has a Hamiltonian cycle. The following definitions define the cluster and show its properties in Attributes 1, 2, and 3. Definition 1. In the graph , a partial graph consisting of nodes constituting s-cycles and the edge connecting the nodes in the s-cycles is called a graph . Because the NSEP n graph has one extra edge over the SEP n graph, the latter is a subgraph of the former. Cycles whose path length is n and which comprise the edges g L (or g R ) of the NSEP n are called s-cycles. For example, an s-cycle with the path length of four at the node S (= 1234) of NSEP 4 is A cluster in NSEP n has several important properties. These properties can be used to confirm that the NSEP n graph has a Hamiltonian cycle. The following definitions define the cluster and show its properties in Attributes 1, 2, and 3. Definition 1. In the graph NSEP n , a partial graph consisting of nodes constituting s-cycles and the edge g n 2 connecting the nodes in the s-cycles is called a graph C n . In NSEP 4 , one cluster C 4 containing the node S (=1234) is a partial graph consisting of four edges g L (or g R ), and two edges g n 2 . 
Because the NSEP_n graph has one extra edge over the SEP_n graph, the latter is a subgraph of the former. Cycles of path length n comprising the edges g_L (or g_R) of NSEP_n are called s-cycles. For example, an s-cycle of path length four at the node S (= 1234) of NSEP_4 is S(= 1234) → 2341 → 3412 → 4123 → 1234(= S).

A cluster in NSEP_n has several important properties. These properties can be used to confirm that the NSEP_n graph has a Hamiltonian cycle. The following definition defines the cluster, and its properties are shown in Properties 1, 2, and 3.

Definition 1. In the graph NSEP_n, a partial graph consisting of the nodes constituting an s-cycle and the g_{n/2} edges connecting the nodes in the s-cycle is called a cluster C_n.

In NSEP_4, the cluster C_4 containing the node S (= 1234) is a partial graph consisting of four edges g_L (or g_R) and two edges g_{n/2}. Figure 3 shows C_4, the cluster of NSEP_4.

Property 1. There are (n − 1)! C_n clusters in the NSEP_n graph.

Proof. The total number of nodes in NSEP_n is n!. A cluster C_n is an s-cycle with n different nodes together with the n/2 g_{n/2} edges that connect nodes along the path constituting the s-cycle. Moreover, the number of nodes in each cluster C_n is n by the s-cycle. Therefore, the number of C_n clusters is n!/n = (n − 1)!.

Property 2. The cluster C_n of NSEP_n has n/2 g_{n/2} edges.

Proof. There are n nodes in each cluster in the NSEP_n graph, and they are adjacent to each other by g_{n/2} edges that exchange n/2 symbols. Because there is only one node with such an adjacency relationship for each node, two nodes form a pair. Therefore, there are n/2 g_{n/2} edges connecting the n nodes that constitute the cluster.

Property 3. A node U constituting one cluster C_n of NSEP_n is adjacent to the node g_12(U) of another cluster C_n by the edge g_12. The n nodes constituting a cluster C_n are adjacent to nodes of n different clusters C_n by the edge g_12.

Proof. By the definition of NSEP_n, it can be seen that the n nodes of a cluster C_n are adjacent to n nodes of different clusters C_n by the edge g_12.

Due to the added edge g_{n/2}, a new cycle with (n/2 + 1) nodes exists. Definition 2 defines these cycles of NSEP_n, and the associated theorems are shown in Lemmas 1, 2, and 3. In Lemmas 4, 5, and 6, we show that there is a Hamiltonian cycle between two adjacent nodes in a cluster C_n of NSEP_n.

Definition 2. When there is an arbitrary node U in the cluster C_n, let V = g_{n/2}(U) be the node adjacent to U by the edge g_{n/2} (where U ≠ V). The (n/2 + 1)-cycle is the cycle consisting of the path of distance n/2 from node U to node V along edges g_L (or g_R), together with the edge g_{n/2} from node V back to node U.

Assume that there are nodes U(= 1234) and V(= 3412) in NSEP_4. The three-cycle containing nodes U(= 1234) and V(= 3412) is U(= 1234) → 2341 → 3412(= V) → 1234(= U), where the last step uses the edge g_{n/2}.

Lemma 1. When there is a node U in one cluster C_n, let V = g_{n/2}(U) be the node adjacent to U by the edge g_{n/2} (where U ≠ V). There are two (n/2 + 1)-cycles that share the edge g_{n/2} connecting nodes U and V.

Proof. By Definition 2, a (n/2 + 1)-cycle is a path consisting of n/2 edges g_R (or g_L) and one g_{n/2} edge. Such (n/2 + 1)-cycles can be created using either the edge g_R or the edge g_L. Therefore, there are two (n/2 + 1)-cycles that share the edge g_{n/2}.

Lemma 2. There are n (n/2 + 1)-cycles in each cluster C_n.

Proof. By Property 2, each cluster has n/2 g_{n/2} edges, and by Lemma 1, there are two (n/2 + 1)-cycles that share each edge g_{n/2}. Therefore, because (n/2) × 2 = n, there are n (n/2 + 1)-cycles.
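Property 1 and Lemma 2 can be checked mechanically for n = 4. The sketch below (assumed setup; it re-declares the g_L shift for self-containment) partitions the 24 nodes of NSEP_4 into s-cycle clusters:

```python
# Illustrative check of Property 1 and Lemma 2 for n = 4 (assumed setup):
# the 24 nodes of NSEP_4 partition into (n-1)! = 6 s-cycle clusters, and
# each cluster carries n = (n/2)*2 cycles of length n/2 + 1.
from itertools import permutations

def gL(s): return s[1:] + (s[0],)

n = 4
nodes = list(permutations(range(1, n + 1)))

def s_cycle(s):
    """The frozenset of n nodes on the s-cycle through s (repeated gL)."""
    cyc, cur = [], s
    for _ in range(n):
        cyc.append(cur)
        cur = gL(cur)
    return frozenset(cyc)

clusters = {s_cycle(s) for s in nodes}
assert len(clusters) == 6                           # Property 1: (n-1)! clusters
g_half_edges_per_cluster = n // 2                   # Property 2
cycles_per_cluster = g_half_edges_per_cluster * 2   # Lemmas 1 and 2
assert cycles_per_cluster == n
```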
Lemma 3. The number of (n/2 + 1)-cycles in the network NSEP_n is n!.

Proof. By Lemma 2, there are n (n/2 + 1)-cycles in each cluster, and by Property 1, the number of C_n clusters is (n − 1)!. Therefore, the number of (n/2 + 1)-cycles in the NSEP_n network is (n − 1)! × n = n!.

Lemma 4. There is a Hamiltonian path of length n including an arbitrary node U in the cluster C_n and the node g_R(U) (or g_L(U)) adjacent to U by the edge g_R (or g_L).

Proof. Let U be an arbitrary node of the cluster C_n. Let V1(= g_L(U)) be the node adjacent to U by the edge g_L, and V2(= g_R(U)) the node adjacent to U by the edge g_R. Because each cluster C_n contains an s-cycle of NSEP_n as a partial graph, the path from node U along the edges g_R (or g_L) forms a cycle including nodes V1 and V2. Therefore, there is a Hamiltonian cycle of path length n from node U to the node V1(= g_L(U)) adjacent by the edge g_L, and to the node V2(= g_R(U)) adjacent by the edge g_R.

Lemma 5. There is a Hamiltonian cycle between an arbitrary node U that constitutes a cluster C_n and the node V1 = g_{n/2}(U) connected to U by the edge g_{n/2}.

Proof. Let U be the starting node of the cluster C_n, and let the target node V1 = g_{n/2}(U) be the node connected to U by the edge g_{n/2}. By the definition of the NSEP_n graph, the distance between the nodes U and V1 in the s-cycle consisting of edges g_L (or g_R) is n/2. Let S1 be the node at distance n/2 − 1 along the s-cycle from the starting node U. The node S2 = g_{n/2}(S1), connected to S1 by the edge g_{n/2}, is at distance n/2 from S1 in the s-cycle; therefore, node S2 is the node adjacent to U located at distance n/2 − 1 from node S1. The node at distance n/2 − 1 along the s-cycle from node S2 is the target node V1. Because the nodes U and V1 = g_{n/2}(U) are adjacent by the edge g_{n/2}, a Hamiltonian cycle is formed. Therefore, there is a Hamiltonian cycle of length n connecting two adjacent nodes in the cluster C_n.

Lemma 6. There exists a Hamiltonian cycle that includes two adjacent nodes U, V in the cluster C_n.

Proof. By Lemmas 4 and 5, there is a Hamiltonian cycle that includes two adjacent nodes U, V in the cluster C_n.

The reduced graph RS_{n−1} of NSEP_n is obtained by reducing each s-cycle in NSEP_n to one node. The node whose leftmost symbol is 1 among the permutations of the n nodes constituting an s-cycle of NSEP_n is called the leader node. The node address in the reduced graph RS_{n−1} is represented by the permutation of the leader node with the leading symbol 1 removed. In the s-cycle 1234-4123-3412-2341 shown in Figure 4, the leader node is 1234, and the s-cycle is represented by the super node 234 in the graph RS_3. Definition 3 defines a subgraph RS^k_{n−2} of RS_{n−1} relative to the rightmost symbol; the associated results are shown in Lemmas 7-9, and Theorem 1 shows that NSEP_n has a Hamiltonian cycle.
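The leader-node labeling can be sketched as follows (assumed naming; g_L re-declared for self-containment):

```python
# Sketch of the reduced-graph labeling described above (assumed naming):
# rotate an s-cycle to its leader node (leftmost symbol 1), then drop the
# leading 1 to obtain the super-node address in RS_{n-1}.
def gL(s): return s[1:] + (s[0],)

def super_node(s):
    cur = s
    while cur[0] != 1:
        cur = gL(cur)   # walk along the s-cycle to the leader node
    return cur[1:]

# The s-cycle 1234-4123-3412-2341 has leader 1234 and super node 234:
assert super_node((3, 4, 1, 2)) == (2, 3, 4)
```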
Definition 3. The bubble-sort graph that is a partial graph including all nodes of the reduced graph RS_{n−1} is known as RSB_{n−1}. Furthermore, RSB_{n−1} is an (n − 2)-regular Cayley graph [5]. Therefore, when n ≥ 5, RS_{n−1} includes n − 1 subgraphs RS_{n−2}, and all RS_{n−2} are adjacent to each other. Because the nodes belonging to an RS_{n−2} have the same rightmost symbol, when the rightmost symbol is k, the RS_{n−2} of RS_{n−1} is denoted RS^k_{n−2} (2 ≤ k ≤ n).

The NSEP_n network has an even number n = 2k of symbols representing node addresses. In a network having an even number of symbols in NSEP_n, RS_4 does not exist. If RS_{n−1} has a Hamiltonian cycle, it is natural that there is a Hamiltonian cycle when n is even. After showing that there is a Hamiltonian cycle in RS_{n−1}, we show that there is also a Hamiltonian cycle in NSEP_n.

Lemma 7. There is a Hamiltonian path between any two arbitrary nodes of RS_3, and RS_3 has a Hamiltonian cycle.

Proof. Let U and V be the starting and destination nodes, respectively. RS_3 can be divided into two areas A and B with the same number of nodes; in Figure 5, the thick lines correspond to the edges within each area. There are three nodes constituting each area, and all of them are adjacent. In addition, the nodes constituting an area form cycles in a complete graph, and there is always a Hamiltonian path between any two of them. Each node has two edges connecting to nodes in the other area. There are two cases for the relationship between nodes U and V, as shown in Figure 5: the two nodes can be present in one area, as shown in Figure 5-1 (Case 1), or in different areas, as shown in Figure 5-2 (Case 2). For Case 1, let U′(≠ V) be a node adjacent to U in area A and V′ a node adjacent to V in area B. There is a node adjacent to U′ in area B, and there is a Hamiltonian path between this node and V′. Therefore, in Case 1, there is a Hamiltonian path between U and V. Now we move on to Case 2. Let U′(≠ V) be the node in area B connected to the end of the Hamiltonian path from U in area A. Because both U′ and V are present in area B, there is a Hamiltonian path between them.
That is, there is a Hamiltonian path between U and V in Case 2 as well. Therefore, because there is a Hamiltonian path between any two nodes of RS_3, and another Hamiltonian path between adjacent nodes, RS_3 has a Hamiltonian cycle.

Lemma 8. The number CN_(n−1) of RS_3 subgraphs in the reduced graph RS_(n−1) is (n − 1)!/3!.

Proof. By Definition 3, RS_(n−1) has n − 1 RS_(n−2) subgraphs as clusters; Figure 6 shows such a subgraph of RS_(n−1). For example, if n = 5, then CN_4, the number of RS_3 subgraphs in RS_4, becomes CN_4 = CN_3 × 4. Because RS_(n−1) is hierarchical, let us assume CN_(n−1) = CN_(n−2) × (n − 1). Starting from CN_3 = 1, the number of RS_3 subgraphs in RS_(n−1) therefore satisfies CN_(n−1) = (n − 1)!/3!.

Lemma 9. The reduced graph RS_(n−1) has a Hamiltonian cycle.

Proof. By Definition 3, all n − 1 RS^k_(n−2) subgraphs are adjacent to each other in RS_(n−1). Let R_i (1 ≤ i ≤ (n − 1)!) and R_j (1 ≤ j ≤ (n − 1)!, i ≠ j) be any two nodes adjacent to each other in RS_(n−1); adjacent relationships are indicated by dotted lines. All nodes in RS_(n−1) are adjacent to an RS^k_(n−2) through adjacent nodes, and there is always an adjacent RS^k_(n−3) in RS_(n−2). Therefore, RS^k_3 is always adjacent hierarchically in the same manner up to RS_4. As shown in Lemma 7, there exists a Hamiltonian path between any two nodes of RS_3, which implies that there exists a Hamiltonian cycle in RS_(n−1).

Theorem 1. The NSEP_n network has a Hamiltonian cycle.

Proof. By Lemma 9, RS_(n−1) has a Hamiltonian cycle, which regards each cluster C_n of NSEP_n as a super node. A node in an adjacent cluster of NSEP_n is adjacent to another cluster through an adjacent node [5]. By Lemma 6, there is a Hamiltonian path between adjacent nodes of the cluster C_n. Thus, the NSEP_n network has a Hamiltonian cycle. Figure 7 shows a Hamiltonian cycle of RS_4.
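Lemmas 7-9 establish Hamiltonicity structurally; for a base case as small as RS_3 it can also be confirmed by direct search. The sketch below runs a generic backtracking test on a stand-in six-node graph consistent with the description in the proof of Lemma 7 (two fully connected areas, each node with two cross edges); the exact wiring in the paper comes from Figure 5, so the adjacency here is only an assumption for illustration.

```python
# Brute-force check that a small graph has a Hamiltonian cycle. The adjacency
# below is a stand-in consistent with Lemma 7's description of RS_3: two
# complete areas A = {0, 1, 2} and B = {3, 4, 5}, each node having two edges
# into the other area (the actual wiring in the paper is shown in Figure 5).
adj = {
    0: {1, 2, 3, 4}, 1: {0, 2, 4, 5}, 2: {0, 1, 5, 3},
    3: {4, 5, 0, 2}, 4: {3, 5, 0, 1}, 5: {3, 4, 1, 2},
}

def hamiltonian_cycle(adj, path=None):
    path = path or [0]
    if len(path) == len(adj):  # all nodes visited: can the cycle be closed?
        return path if path[0] in adj[path[-1]] else None
    for nxt in adj[path[-1]] - set(path):  # extend with an unvisited neighbor
        found = hamiltonian_cycle(adj, path + [nxt])
        if found:
            return found
    return None

print(hamiltonian_cycle(adj))  # prints a Hamiltonian cycle, e.g. [0, 1, 2, 3, 5, 4]
```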
Routing Algorithm and Diameter Analysis

Routing refers to the path from one node to another. Because SEP_n, a partial graph of NSEP_n, is a Cayley graph, it is node symmetric [10]. Therefore, the path from the starting node S to the destination node D can be regarded as the path from the starting node S to the ID node. Let the ID node be 123···n. The algorithm proposed in this study places the symbols in sequence up to n by iteratively applying the following method: check the positions of symbols 1 and 2, place symbol 2 on the right side of symbol 1, then symbol 3 on the right side of symbol 2, and so on. The position of a symbol is represented in Definition 4, and the formulas used by the algorithm are given in Lemmas 10-13.

Definition 4. The position of the symbol s_i in the current node S (= s1 s2 s3 ··· s_i ··· s_(n−1) s_n) is represented by p(s_i) (1 ≤ i ≤ n).

Lemma 10. In node S (= s1 s2 s3 ··· s_(n−1) s_n), the path of the node adjacent through the edge sequence g_L, g_12, g_R is as follows. The last node permutation in the path obtained by applying the edge sequence g_L, g_12, g_R to node S is g_R(g_12(g_L(S))) = s1 s3 s2 s4 ··· s_(n−1) s_n. The path from node S is: S (= s1 s2 s3 ··· s_n) → g_L(S) = s2 s3 ··· s_n s1 → g_12(g_L(S)) = s3 s2 s4 ··· s_n s1 → g_R(g_12(g_L(S))) = s1 s3 s2 s4 ··· s_n.

Lemma 11. The number of iterations of an edge or edge sequence is denoted by ×[i]. For example, when i = 3, (g_A, g_B) × [3] = g_A, g_B, g_A, g_B, g_A, g_B. The number of iterations of the edge sequence is determined from the positions p(i) = a and p(i + 1) = b.

Lemma 12. When the computed number of iterations of the edge sequence is less than 0, the reverse operation is applied.

Lemma 13. The distance between the symbols s_i and s_j in the node address S = s1 s2 s3 ··· s_i ··· s_j ··· s_n (n = 2k) is denoted by |p(s_i) − p(s_j)|.

The routing algorithm is outlined as follows.

[STEP 1] Symbol 2 is placed to the right of symbol 1. The node address is divided in half at position n/2, the positions of the two symbols, p(1) and p(2), are checked, and the algorithm branches according to the following cases: p(1), p(2) ≤ n/2; p(1), p(2) > n/2; or p(1) ≤ n/2 and p(2) > n/2 (respectively, p(2) ≤ n/2 and p(1) > n/2).

[STEP 2] Symbol i + 1 is placed to the right of symbol i. The node address is divided in half, the positions of the two symbols, p(i) and p(i + 1), are checked, and the algorithm branches according to the following cases: p(i), p(i + 1) ≤ n/2; p(i), p(i + 1) > n/2; or p(i) ≤ n/2 and p(i + 1) > n/2 (respectively, p(i + 1) ≤ n/2 and p(i) > n/2).

[STEP 3] Symbol n is placed at the rightmost position while the relative positions of 1 to n are arranged in ascending order; this is the step of matching with the target node ID. The routing procedure is shown in Algorithm 1.

The diameter of NSEP_n is at most (2/3)n² − (3/2)n + 1, as the following argument shows. In the worst case of [STEP 1], the length is n. Because the worst case of p(i + 1) is n/2 + 3, p(i + 1) − p(i) = (1/3)n. Because 1 < i < n − 1 in [STEP 2], this step is iterated n − 3 times. In the worst case of [STEP 3], the length is n/2 − 2; the worst case occurs when p(n) < n/2 − 1. As a result, the diameter in the worst case of [STEPs 1, 2, 3] is (2/3)n² − (3/2)n + 1 or less. For example, when n = 6, the worst-case length is 16.
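The adjacent-transposition step on which the algorithm is built (Lemma 10) can be verified mechanically. A minimal sketch follows; the generator definitions are taken from Lemma 10, and the comments indicate how the routing steps use the identity.

```python
# Verify the Lemma 10 identity: applying the edge sequence g_L, g_12, g_R to
# a node swaps the symbols in positions 2 and 3 and fixes everything else.

def g_L(s):   # left rotation: s1 s2 ... sn -> s2 ... sn s1
    return s[1:] + s[0]

def g_R(s):   # right rotation: s1 ... sn -> sn s1 ... s(n-1)
    return s[-1] + s[:-1]

def g_12(s):  # exchange the two leftmost symbols
    return s[1] + s[0] + s[2:]

S = "123456"
assert g_R(g_12(g_L(S))) == "132456"  # s2 and s3 swapped, as in Lemma 10

# Iterating such swap steps is how STEP 1 and STEP 2 place symbol i + 1
# immediately to the right of symbol i until the ID node 123...n is reached.
```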
The network cost is represented by degree × diameter. Since NSEP_n has degree 4, the network cost of NSEP_n is 4 × ((2/3)n² − (3/2)n + 1) = O(n²).

In Table 1, the network cost of NSEP_n was compared with interconnection networks of the constant-degree class. Because the number of nodes of NSEP_n increases rapidly as n increases, the network costs of the networks were rearranged in Table 2 for equal numbers of nodes, and the results are shown in Table 3 and, as a graph, in Figure 8. The network cost of NSEP_n is always less than that of SEP_n, and when n > 10, the network cost of NSEP_n is the smallest. In Figure 8, the five circles at the right end of the chart represent the network costs of the mesh, honeycomb, SEP, torus, and NSEP networks, in that order from the top, when the number of nodes is 4 × 10^8.
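The diameter formulas quoted in this section make the cost comparison easy to reproduce. The sketch below evaluates degree × diameter for SEP_n and NSEP_n over a few values of n, using only the formulas stated in the text (SEP: degree 3, diameter (1/8)(9n² − 22n + 24); NSEP: degree 4, diameter (2/3)n² − (3/2)n + 1).

```python
# A small check of the degree x diameter comparison between SEP_n and NSEP_n,
# using the diameter formulas quoted in this section.

def sep_cost(n):
    diameter = (9 * n**2 - 22 * n + 24) / 8   # SEP: degree 3
    return 3 * diameter

def nsep_cost(n):
    diameter = (2 / 3) * n**2 - (3 / 2) * n + 1  # NSEP: degree 4
    return 4 * diameter

for n in (6, 8, 10, 16, 32):
    s, t = sep_cost(n), nsep_cost(n)
    print(f"n={n:>2}  SEP={s:8.1f}  NSEP={t:8.1f}  saving={1 - t / s:5.1%}")

# As n grows the saving approaches 1 - (8/3)/(27/8) ~ 21%, matching the
# "network cost reduced by 20% or more" claim of the conclusions.
```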
Conclusions

The SEP interconnection network has degree 3 and a diameter of (1/8)(9n² − 22n + 24). This study proposed a new interconnection network, NSEP, created by adding a new edge to the SEP network. In the NSEP network, the diameter and network cost were improved by reducing the distance between two nodes at distance n/2 to one through the addition of a single edge to the existing SEP network. The NSEP_n interconnection network proposed in this study has the same number of nodes as SEP, degree 4, a diameter of (2/3)n² − (3/2)n + 1, and a network cost of O(n²). NSEP_n shows excellent results, reducing the diameter by 40% or more and the network cost by 20% or more while increasing the degree by only one in comparison to SEP. The interconnection network NSEP is a network with a Hamiltonian cycle and with SEP as a subgraph. Because the NSEP network is defined only for an even number of symbols (n = 2k), a generalized graph definition is additionally required. The algorithm designed in this paper sorts the symbols 1 through n; in some cases, the opposite arrangement, from n down to 1, may be more effective. Further research will be required to establish conditions under which the more efficient of the two algorithms can be selected. It is hoped that this work will lead to further research on interconnection networks to improve the performance of parallel processing computers.
Retrospective Assessment of Complementary Liquid Biopsy on Tissue Single-Gene Testing for Tumor Genotyping in Advanced NSCLC

Biomarker testing is key for non-small cell lung cancer (NSCLC) management, and plasma-based next-generation sequencing (NGS) is increasingly characterized as a non-invasive alternative. This study aimed to evaluate the value of complementary circulating tumor DNA (ctDNA) NGS over tissue single-gene testing (SGT). Ninety-one advanced stage NSCLC patients with tumor genotyping by tissue SGT (3 genes) followed by ctDNA (38-gene amplicon panel) were included. ctDNA was positive in 47% (n = 43) and identified a targetable biomarker in 19 patients (21%). The likelihood of positivity on ctDNA was higher if patients had extra-thoracic disease (59%) or were not under active treatment (59%). When compared to SGT, ctDNA provided additional information in 41% but missed a known alteration in 8%. Therapeutic change for targeted therapy based on ctDNA occurred in five patients (5%), while seven patients with missed alterations on ctDNA had EGFR mutations or ALK fusions. The median turnaround time of ctDNA was 10 days (range 6–25), shorter (p = 0.002) than the cumulative delays for the tissue testing trajectory until biomarker availability (13 d; range 7–1737). Overall, the results from this study recapitulate the potential and limitations of ctDNA when used complementarily to tissue testing with limited biomarker coverage.

Introduction

The recent advances in precision medicine have remodeled the approach to clinical management of advanced stage non-small cell lung carcinoma (NSCLC), more specifically lung adenocarcinoma. The list of driver alterations paired with small molecule inhibitors has been expanding continuously and mandates evaluation of several biomarkers to guide patient management [1,2]. Consistently, molecular testing is evolving toward wider adoption of multigene panel testing, largely due to the enlarging access to NGS technologies in clinical laboratories. However, access to comprehensive molecular profiling for NSCLC remains unequal across regions and is also sometimes limited by insufficient tissue samples and long turnaround times (TAT) [3]. Molecular assays have traditionally been designed to work on formalin-fixed paraffin-embedded (FFPE) tissue. However, technological advances have made it possible to perform molecular profiling directly from the circulating tumoral (ct) nucleic acids extracted from plasma, also named liquid biopsy. This non-invasive approach is a promising tool for diagnosis and monitoring of NSCLC [4]. Advantages over tissue testing include shorter TAT, reduced risks and costs inherent to the procedures for acquiring diagnostic material, and the potential to capture tumor heterogeneity from multiple anatomic sites. However, the clinical sensitivity of liquid biopsy and its high costs are still amongst the factors slowing the adoption of this approach across the world [3]. Several studies have compared the performance of plasma versus tissue-based NGS assays [5][6][7][8][9]. Despite improvements in the availability of tissue-based NGS, comparison with minimal single-gene testing remains relevant, as more than 30% of laboratories still use single assays to evaluate biomarkers in NSCLC [3]. This study aimed to review the results from a retrospective cohort of NSCLC patients who had complementary liquid biopsy testing through an access program launched during the COVID-19 pandemic.
The objective was to evaluate the value of plasma NGS testing with a small DNA amplicon panel over minimal single-gene tissue testing as proposed by the latest IASLC/CAP/AMP guidelines [1].

Materials and Methods

This retrospective single-center study includes patients with advanced stage (IIIB-IV) NSCLC treated and followed at the Institut Universitaire de Cardiologie et de Pneumologie de Québec-Université Laval (IUCPQ-UL, Quebec, QC, Canada) between December 2020 and February 2022 who underwent complementary liquid biopsy. Patients with metastatic or recurrent NSCLC were offered molecular testing on circulating tumor DNA (ctDNA) through a free and unrestricted access program for advanced stage cancer in Canada, the ACTT project (Access to Cancer Testing and Treatment). Blood was collected in Streck tubes and sent immediately via a tiered shipping service in pre-packaged kits (Genolife; Quebec, QC, Canada) for ctDNA sequencing with the Follow It® assay at Imagia Canexia Health (laboratory in Vancouver, British Columbia; headquartered in Montréal, Québec, Canada). This is a 38-gene amplicon-based panel covering 26 exons and 337 hotspot mutations in key genes relevant to solid tumors, enabling identification of single nucleotide variants (SNV), insertions and deletions up to 24 base pairs in length (INDEL), as well as copy number variations (CNV) (complete details provided on the vendor website: https://imagiacanexiahealth.com/solution/plasma-follow-it/; accessed on 26 December 2022). Regarding specifically the clinically actionable genes for NSCLC, this panel covers activating mutations in EGFR (exons 18 to 21), BRAF (exon 15), ERBB2 (exon 20 and S310), and KRAS (exons 2-4); MET coverage includes Y1253, exons 13, 14 + 25, 14-50, 14, 18; ALK, ROS1 and RET coverage includes key acquired resistance variants in the tyrosine kinase domain, but the assay does not detect gene fusions or isoforms. Results for SNVs and INDELs are reported when the variant allele fraction is equal to or greater than 0.7% and 5%, respectively. The minimal acceptance criteria from the vendor are a coverage of ≥500 as well as base quality and mapping quality scores of ≥30 each. All patients also had conventional tissue biomarker testing during their disease, which was performed at the IUCPQ-UL pathology laboratory. Procedures to obtain tissue biomarkers and liquid biopsy were not always performed in the same period or sequence in the care of the patient. Single-gene testing included a PCR assay for EGFR activating mutations (RGQ PCR kit covering 29 variants in exons 18 to 21; Qiagen, Toronto, ON, Canada) and immunohistochemistry for fusions in ALK (clone 5A4; Biocare, Markam, ON, Canada) and ROS1 (clone D4D6; CST, Danvers, NH, USA) on a Dako Autostainer (Agilent, Mississauga, ON, Canada), followed by FISH (SureFISH, Agilent, Mississauga, ON, Canada) when appropriate, as well as PD-L1 immunohistochemistry using the Dako 22C3 assay (Agilent, Mississauga, ON, Canada). A subset of patients also had complementary BRAF V600x PCR testing (Biocartis Idylla, Mechelen, Belgium) or NGS testing with a targeted 17-gene lung cancer panel (Archer FusionPlex Lung; Invitae, San Francisco, CA, USA) [10]. Patients' medical records were reviewed to collect clinical, radiological and pathologic data. Response assessment was categorized as per RECIST criteria [11]. Reflex biomarker testing is not used in our center, and the clinician placed a request when clinically appropriate.
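As a minimal sketch of the reporting thresholds just described, the filter below applies the VAF cut-offs (0.7% for SNVs, 5% for INDELs) and the vendor's minimal acceptance metrics. The variant record layout is illustrative, not the vendor's actual output format, and treating the coverage/quality criteria as per-variant checks is our simplification.

```python
# Reporting-threshold sketch for the described ctDNA assay: SNVs at
# VAF >= 0.7%, INDELs at VAF >= 5%, with minimum coverage (>=500) and base
# quality / mapping quality scores (>=30 each). Field names are illustrative.

def reportable(variant):
    if variant["coverage"] < 500:
        return False
    if variant["base_quality"] < 30 or variant["mapping_quality"] < 30:
        return False
    min_vaf = 0.007 if variant["type"] == "SNV" else 0.05  # INDEL threshold
    return variant["vaf"] >= min_vaf

v = {"type": "SNV", "vaf": 0.012, "coverage": 1800,
     "base_quality": 36, "mapping_quality": 60}
print(reportable(v))  # True: above the SNV threshold with passing metrics
```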
Key dates (date of request of the procedure to obtain tissue for diagnosis; dates of specimen accessioning and pathology report release; dates of molecular pathology accessioning and unified biomarker report release) were retrieved to estimate the turnaround time (TAT) of the entire trajectory from first clinical visit to the date of availability of biomarker results. The results were calculated for the entire subset and after excluding cases from resection specimens and cases where diagnosis and biomarkers were separated in time by more than an arbitrary cut-off of 30 d, intended to reflect recurrent disease or retrospective testing of material at progression. The liquid biopsy results were classified as informative if any mutation was identified, either a known oncogenic driver (known recurrent hot-spot activating mutations in genes of the MAPK/ERK pathway or oncogenic fusions) or a passenger alteration, or as clinically uninformative if no alteration was identified (negative for any variant with satisfactory quality metrics). All liquid biopsy testing reports included in this study met the vendor's quality metrics. Candidate targetable driver alterations were defined based on key alterations included in the most recent NCCN guidelines [2]. Statistical analyses (Student's t-test and chi-square test) were performed using GraphPad Prism, version 9.1.0 (GraphPad Software, San Diego, CA, USA), with a 5% cut-off for statistical significance.

Results

A total of 91 patients were included in this analysis. Patients' clinical characteristics are shown in Table 1. The study population was characterized by a slight predominance of females (59%) and a marked predominance of stage IV (92%) and non-squamous histology (98%); two patients with squamous cell carcinoma and an atypical clinical presentation, for whom clinicians had requested biomarker testing beyond PD-L1, were included. At the time of ctDNA testing, most patients (63%; n = 58) had completed at least one line of treatment and 56% (n = 51) had extra-thoracic disease. All patients had at least a known EGFR and ALK status, but only 84% had the complete EGFR/ALK/ROS1/PD-L1 assessment combination, mainly due to testing performed prior to the study dates and local regulatory approval of the assays for ROS1 and PD-L1 testing. This cohort included only 13 patients (14%) with a known actionable driver mutation at the time of ctDNA testing based on single-gene testing. Overall, ctDNA testing was informative in 43 patients (47%), allowing for the identification of driver oncogenic alterations in 35 cases (38%) and candidate targetable alterations in 19 cases (21%) (Figure 1 and Table 2). Amongst the liquid biopsy positive cases with non-actionable alterations, KRAS non-G12C and TP53 mutations were the most frequently identified (Figure 1). When compared to tissue single-gene testing results, the liquid biopsy NGS panel provided additional or no additional molecular information in 37 patients (41%) and 7 patients (7%), respectively. Liquid biopsy was negative for a known molecular alteration from tissue testing in 7 patients (8%), while 45% of cases (n = 41) were negative by both approaches (Figures 1 and 2). The distribution of PD-L1 scores was similar across the different categories of liquid biopsy outcome (Table 2). Sub-group analysis showed that the detection rate of liquid biopsy was higher when patients had extra-thoracic disease (59% vs. 31%; p = 0.0151) or were not receiving active treatment at blood draw (off-treatment or treatment naïve; 59% vs.
35%; p = 0.0340) (Table 2). The clinical impact of liquid biopsy testing on this cohort was further evaluated to determine the potential change in therapeutic orientation (Figure 2 and Table 3). While liquid biopsy was frequently informative, the yield of candidate targetable alterations unknown from tissue testing was relatively small and resulted in only five patients switching to targeted therapy overall (Figure 2). Four of those patients had a KRAS G12C mutation (KRAS not tested on tissue) and were subsequently offered a specific KRAS inhibitor, while one patient had an EGFR exon 19 deletion (undetected by the tissue PCR assay) and had treatment changed to an EGFR tyrosine kinase inhibitor. One patient with an ERBB2 exon 20 insertion (INS20) could not be offered targeted therapy before dying of disease. On the other hand, seven cases had driver alterations identified on tissue testing that were undetected on liquid biopsy. All these patients had targetable alterations, including five activating mutations in EGFR and two ALK fusions (Table 4). The distribution of patients who received either a previous or current therapeutic line including a checkpoint-inhibitor (ICI) or an ICI-chemotherapy combination was not different across the categories of liquid biopsy result (Figure 2 insert).
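The subgroup comparison above can be illustrated with a standard chi-square test on a 2 × 2 table. The counts below are reconstructed from the reported percentages (51 patients with extra-thoracic disease, 43 ctDNA-positive overall) and are approximate; the published p value was computed in GraphPad and may differ slightly, for example depending on continuity correction.

```python
# Chi-square test on the detection-rate subgroup comparison. The counts are
# approximate reconstructions from the reported percentages, for illustration
# only; the exact 2x2 table is not given in the text.
from scipy.stats import chi2_contingency

table = [[30, 21],   # extra-thoracic disease: ctDNA positive, negative
         [13, 27]]   # thoracic-only disease:  ctDNA positive, negative
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p on the order of the reported 0.0151
```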
Even though this study was not designed to compare the TAT of matched tissue and liquid biopsy testing, as they were not concomitant, an indirect comparison was possible. For the 91 samples sent for liquid biopsy testing, the TAT from blood draw to result was 10 working d on average (median 10 d; range 6-25 d) (Table 5). Complete date retrieval for the tissue biopsy trajectory was possible for a subset of 76 cases. Tissue pathological diagnosis and biomarker testing TATs were fast in this subgroup (mean of 2.3 and 2.8 d, respectively; median 2 d each), as was the pre-analytical delay between the clinical request and completion of the procedures to acquire diagnostic material (7.9 d on average; median 4 d; range 1 to 4). Biomarker testing was often requested at the time of progression, thus long after the initial diagnosis, as reflected by the long average interval between diagnosis and biomarker request dates (60.8 d; median 2 d) (Table 5). The cumulative delay to obtain biomarker results on tissue was on average 73.7 d (median 13 d), decreasing to 14.4 d (median 12 d) when excluding retrospective requests over 30 d and past resection specimens. Both scenarios were significantly longer in comparison to the liquid biopsy TAT observed in this cohort (t = 3.1136, p = 0.002 and t = 4.086, p < 0.0001, respectively). Figure 3A illustrates the delays for the four main steps in the patient's trajectory from clinical visit to biomarker availability for treatment decision making. Overall, 53 cases (70%) were within 20 working days by tissue single-gene testing, and for those exceeding this cut-off (n = 23; 30%), longer delays between tissue request and biopsy or between diagnosis and biomarker request were the most frequently seen (Figure 3B).
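The TAT comparison just reported is a two-sample Student's t-test. A minimal sketch follows; the per-patient values are placeholders, not the study data.

```python
# Two-sample t-test comparing cumulative tissue trajectory delays to liquid
# biopsy TAT, as reported above. The arrays are placeholder values; with the
# real per-patient TATs this yields the kind of t and p values quoted.
from scipy import stats

liquid_tat = [10, 8, 12, 9, 11, 10, 7, 13]            # working days, placeholder
tissue_cumulative_tat = [13, 15, 20, 11, 30, 14, 18]  # working days, placeholder

t, p = stats.ttest_ind(tissue_cumulative_tat, liquid_tat)
print(f"t = {t:.4f}, p = {p:.4f}")
```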
Discussion

The results of this retrospective cohort analysis offer a real-life perspective about the yield and impact of integrating a plasma-based ctDNA NGS targeted assay in advanced stage NSCLC care. They provide insight about the expected positivity rate of liquid biopsy NGS in comparison with tissue single-gene testing, while exposing some clinical factors potentially associated with a higher likelihood of positivity. They also provide an estimate of the potential clinical impact of liquid biopsy when compared to biomarker testing with conventional methods.

The rate of informative cases on liquid biopsy (47%) recapitulates one key factor that makes a plasma-based approach clinically attractive in NSCLC genotyping. Indeed, liquid biopsy provided a high likelihood of capturing molecular information useful for patient management in 10 d on average. However, this clinical sensitivity rate of liquid biopsy is slightly inferior compared to other studies with similar advanced stage NSCLC populations, where it often exceeded 60% [12][13][14]. It is also lower than what would be expected by using tissue NGS with similar target coverage in the same population, with a high prevalence of Caucasians, smokers and KRAS mutations. Direct inter-study and inter-population comparisons remain difficult and imperfect due to the high level of complexity and variability of the assays involved, notably the size and content of panels, as well as the pre-analytical factors. While the number of genes and the types of alterations captured are important, it is uncertain whether the inability to detect gene fusions or isoforms (ALK, ROS1, RET and METex14) significantly influenced the rate of detection in this study, given the relative rarity of fusions. For example, a higher rate of positivity from liquid biopsy NGS has been reported using a larger panel also lacking fusion capture [14]. Nonetheless, plasma-only testing using such an assay could not entirely replace tissue testing, since the minimal requirements for NSCLC would not be met (missing ALK and ROS1 fusions). In addition, the inequivalent molecular testing strategies precluded determination of the formal analytical sensitivity in this study (liquid biopsy NGS compared to tissue single-gene testing). Concordance was estimated to be 71% using the same ctDNA panel for mutations [15]. Overall, genomic profiling on tissue is expected to have a higher yield than plasma regarding guideline-recommended biomarkers [9]. Moreover, complex clinical factors are likely determinants of the success of a plasma-based assay. This is reflected in some of the findings here, which reproduce previous observations that a greater likelihood of liquid biopsy positivity is seen when disease has spread outside the thorax [16] or is not actively treated [12,17]. Vascular dissemination associated with distant metastasis and the absence of tumor control by therapeutic agents may be factors facilitating tumor DNA release, but more research is needed to better understand the factors associated with ctDNA shedding and the effects of active therapy on it. Beyond the diagnostic yield of liquid biopsy observed in this study, the clinical impact of the molecular data obtained was also evaluated. Genotyping information was undoubtedly acquired more often by plasma ctDNA NGS than by tissue single-gene testing (46% vs. 18% of patients, respectively), even if the panel did not include all guideline-recommended alterations. Despite this additional information from plasma genotyping, translation into therapeutic change for druggable oncogenic drivers was relatively modest in this cohort. Indeed, out of 91 patients, only 11 patients had previously unknown potentially actionable alterations and 5 patients ultimately received matched targeted therapy. This low yield must be contextualized considering the local regulatory environment at the time of the study, where access to therapeutic agents associated with biomarkers outside of currently approved and reimbursed indications (limited to EGFR, ALK and ROS1) is challenging.
The observation that five out of nine patients with KRAS G12C, as well as the one patient with ERBB2 INS20, did not receive matched therapy likely reflects this reality. In addition, it is important to note that a large part of NSCLC management in this cohort was driven by tissue PD-L1 status. While patients with the highest PD-L1 expression level show the most benefit, a large proportion of NSCLC patients now receive immune checkpoint inhibitor (ICI) therapy at some point during their treatment, alone or in combination with chemotherapy, as recapitulated in this cohort. The regulatory context facilitating access, together with the globally positive clinical effects and tolerability of ICI, might have played a role in some cases in deferring a therapeutic change toward drugs outside approved indications with hypothetical benefit. In parallel, two out of the seven actionable alterations found only by tissue testing in this cohort were ALK fusions. Similar discrepancies with clinically relevant fusions involving ALK and ROS1, as well as the METex14 isoform, were noted in other comparative studies between liquid and tissue NGS. This was described using either a hybrid-capture ctDNA assay covering fusions in six relevant genes [6] or a cfTNA amplicon-based assay [16,17]. The challenges of comprehensive detection of actionable fusions and the high value of RNA sequencing have already been emphasized for tissue [18]. As this type of molecular alteration poses specific analytical challenges due to the promiscuity of fusion partners and breakpoints, better characterization of the concordance and sensitivity of plasma-based assays is needed to ensure proper coverage of guideline-recommended genotyping in NSCLC. Another interesting perspective related to this real-life evaluation of biomarker testing pertains to the advantage of liquid biopsy regarding delays. Indeed, several steps of the lung cancer biomarker testing trajectory, from the initial patient visit to the date when molecular results become available, can be bypassed by a plasma-first approach. As observed here, the 10 d average time for liquid biopsy results was inferior to the cumulative delays necessary to complete biomarker testing from tissue. This is true even though the TATs for both diagnosis and molecular testing on tissue, limited to baseline biomarkers (EGFR, ALK, ROS1 and PD-L1), were within 3 d each and the median cumulative TAT was 13 d. These short delays at our institution allow treatment decision planning to occur within 20 d in most cases. However, they might not be representative of general practice, as they result from optimized workflows [19] and do not include delays for sample shipping to a reference laboratory, for example. Nonetheless, this was achieved without relying on the more expensive and labor-intensive reflex-testing strategy advocated in similar public system practices [20,21]. Necessarily, the integration of tissue testing by NGS introduces longer delays in the tissue genotyping trajectory as compared with minimal single-gene testing, which has the potential to further enhance the advantage of liquid biopsy in this respect. In our laboratory, the transition from single-gene testing to NGS resulted in a shift from 2.5 to 8 d, for example (Patrice Desmeules, IUCPQ, Quebec, QC, Canada. Personal observation, 2022.), but delays above 15 days for tissue NGS are reported elsewhere, depending on the assay, volume and workflow used at the reference laboratory [13,15,17,22].
Not surprisingly, studies have documented reduced time to treatment using liquid biopsy as compared with tissue, especially if blood is collected at the initial visit, before tissue biopsy [13,15,22]. In the present cohort, such a comparison is not possible due to the metachronous tissue and plasma testing, further limited by assays or approaches not covering all guideline-recommended biomarkers. Coverage of fusions would require a more complex NGS strategy on plasma, potentially translating to a longer TAT. Another question not addressed here, relevant to the acceptance of liquid biopsy NGS by public health system authorities, is its financial impact. Plasma NGS assays are still far more expensive than tissue NGS. As long as tissue biopsy remains necessary to complete standard-of-care PD-L1 testing or to complement the lower clinical sensitivity of liquid biopsy, the procedural cost savings from forgoing tissue acquisition cannot be realized. A pragmatic integration of liquid biopsy testing into an algorithmic approach has been proposed by the IASLC committee [4], notably in the first-line setting. The proposition is to use liquid biopsy either sequentially or complementarily when sub-optimal assay parameters or insufficient genotyping are obtained on tissue testing. The added value of complementary approaches has previously been demonstrated for patients in such scenarios with more limited tissue testing [9]. In addition, it could be proposed as a supplementary criterion that, if the expected TAT for tissue genotyping is longer than 10 d or leads to a cumulative trajectory over 20 d under the local service organization, a plasma-first approach could be defensible. In the context of searching for acquired resistance mechanisms in oncogene-addicted cancers, the value of plasma testing is more evident, but the capacity to capture fusions remains a key consideration, notably as fusions are being increasingly recognized as resistance mechanisms to third-generation EGFR inhibitors [23,24]. Regardless of the scenario for integrating liquid biopsy, the assay should capture all guideline-recommended biomarkers for NSCLC, thus including fusions.

Conclusions

In conclusion, the results from this retrospective study provide information about the added value of complementary plasma NGS genotyping as compared with minimal tissue testing by conventional SGT methods. While additional molecular information was acquired in a large proportion of patients within a short TAT, the clinical sensitivity of plasma testing remains imperfect. Moreover, the additional findings resulted in only a few patients undergoing a significant therapeutic change. This might be related to the regulatory context of the study population, where access to emerging therapeutic agents is challenging and access to immunotherapy is widely adopted, and to the fact that the plasma-based assay could not cover all guideline-recommended biomarkers, more specifically gene fusions and isoforms. As tissue NGS becomes more widely available, and assuming it can be delivered within a clinically acceptable timeframe to cover all biomarkers in parallel with PD-L1, plasma-based NGS seems more appropriate as a complementary approach for patients with tumors insufficiently genotyped or inaccessible to tissue acquisition.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
A Phase 2 Randomized Placebo-Controlled Adjuvant Trial of GI-4000, a Recombinant Yeast Expressing Mutated RAS Proteins in Patients with Resected Pancreas Cancer

Purpose: GI-4000, a series of recombinant yeast expressing four different mutated RAS proteins, was evaluated in subjects with resected ras-mutated pancreas cancer. Methods: Subjects (n = 176) received GI-4000 or placebo plus gemcitabine. Subjects' tumors were genotyped to identify which matched GI-4000 product to administer. Immune responses were measured by interferon-γ (IFNγ) ELISpot assay and by regulatory T cell (Treg) frequencies on treatment. Pretreatment plasma was retrospectively analyzed by matrix-assisted laser desorption/ionization-time-of-flight (MALDI-ToF) mass spectrometry for proteomic signatures predictive of GI-4000 responsiveness. Results: GI-4000 was well tolerated, with comparable safety findings between treatment groups. The GI-4000 group showed a similar pattern of median recurrence-free and overall survival (OS) compared with placebo. For the prospectively defined and stratified R1 resection subgroup, there was a trend in 1 year OS (72% vs. 56%), an improvement in OS (523.5 vs. 443.5 days [hazard ratio (HR) = 1.06 [confidence interval (CI): 0.53–2.13], p = 0.872), and increased frequency of immune responders (40% vs. 8%; p = 0.062) for GI-4000 versus placebo and a 159-day improvement in OS for R1 GI-4000 immune responders versus placebo (p = 0.810). For R0 resection subjects, no increases in IFNγ responses in GI-4000–treated subjects were observed. A higher frequency of R0/R1 subjects with a reduction in Tregs (CD4+/CD45RA+/Foxp3low) was observed in GI-4000–treated subjects versus placebo (p = 0.033). A proteomic signature was identified that predicted response to GI-4000/gemcitabine regardless of resection status. Conclusion: These results justify continued investigation of GI-4000 in studies stratified for likely responders or in combination with immune check-point inhibitors or other immunomodulators, which may provide optimal reactivation of antitumor immunity. ClinicalTrials.gov Number: NCT00300950.

Introduction

The ras oncogene and its RAS protein gene product contain the most common oncogene-related mutations in human cancer, with 90% of pancreas cancers harboring mutant RAS proteins. 1,2 Mutations in the ras oncogene occur in conserved locations, specifically codons 12, 13, and 61, 3 and the number of mutations that can occur is limited to a few predominant amino acid substitutions. RAS oncoproteins are theoretically ideal targets for cancer immunotherapy because aberrant signaling through RAS contributes to uncontrolled cell proliferation and tumorigenesis. Cancer immunotherapies have employed many strategies to generate immune responses, 4-10 including cellular immunotherapies, which are showing much promise in advanced hematological cancers, 11,12 and immune check-point inhibitors, which have substantial activity in a number of solid tumors including melanoma, 13 non-small cell lung cancer (NSCLC), 14 and squamous cell head and neck cancers. 15,16 In the study described here, our immunotherapeutic approach is based on the use of heat-killed recombinant Saccharomyces cerevisiae yeast as vectors, which are engineered to express target protein antigens. These yeast cells can activate dendritic cells and generate T cell cytotoxicity against target cells expressing viral and cancer antigens.
[17][18][19][20][21][22][23] The GI-4000 product series consists of four different yeast-based products that target the seven most common ras mutations at codons 12 and 61, all of which result in constitutive activation of RAS. Because of the central role of RAS activation in tumor proliferation, targeted destruction of cells harboring mutant RAS proteins could result in therapeutic benefit in human cancers. A phase 1 study in patients with pancreas and colorectal cancer indicated that GI-4000 was safe, well tolerated, and immunogenic. 24 A phase 2b study in NSCLC patients also indicated that GI-4000 was well tolerated and appeared to confer an overall survival (OS) benefit as compared with historical controls. 25 Here we report the results of a randomized prospective trial of adjuvant gemcitabine versus gemcitabine plus GI-4000 in patients with resected pancreas cancer. The primary end-point was improvement in recurrence-free survival. Exploratory proteomic analysis was performed retrospectively to investigate signatures that might predict responsiveness to GI-4000.

Study oversight

The study protocol was approved by institutional review boards at each trial site. All patients gave written informed consent.

Study design

This study was a randomized placebo-controlled double-blind adjuvant trial conducted at 27 investigational sites in the United States and 5 international sites in India and Bulgaria. After screening and informed consent, tumor tissue from surgical resection specimens was subjected to ras genomic sequencing. Subjects with mutations at either the codon 12 or 61 positions represented in one of the GI-4000 products were eligible for study enrollment.

Objectives

The primary objective of the study was to evaluate an improvement in recurrence-free survival with GI-4000 treatment. Key secondary objectives were to evaluate OS, safety, and immunogenicity.

Variables

Demographic and baseline characteristics included age, gender, ethnic origin, time since diagnosis, tumor type, stage and grade, tumor biomarker levels, and ras gene mutations.

Interventions

The study drug consisted of four different yeast-based products targeting the four most common ras mutations at codon 12 and the three most common ras mutations at codon 61 (GI-4014: G12V, Q61L, Q61R; GI-4015: G12C, Q61L, Q61R; GI-4016: G12D, Q61L, Q61R; GI-4020: G12R, Q61L, Q61H). Each subject received only the specific product containing the mutation identified in his or her tumor. The yeast strains were engineered to express the K-ras mutation insert sequences as previously described. 21 The study population consisted of patients with resected pancreas cancer who had a product-related mutation in ras and an R0 or R1 resection by pancreaticoduodenectomy or pylorus-preserving pancreaticoduodenectomy. An R0 resection was defined as no microscopic residual tumor at the resection margin. An R1 resection was defined as residual microscopic, but not gross, evidence of tumor at the resection margin. After enrollment, subjects were randomized in a 1:1 ratio to either GI-4000 or placebo, both combined with gemcitabine. It should be noted that adjuvant gemcitabine monotherapy was used as the control because, at the time the trial was designed and recruited, neither the recent data from ESPAC-4 nor data comparing gemcitabine with FOLFIRINOX were available, making gemcitabine monotherapy the standard of care. Randomization was prospectively stratified based on resection status (R0/R1).
Subjects were dosed subcutaneously with 40 yeast units (YU; 1 YU = 10^7 yeast cells) of GI-4000 or with placebo (saline) for three weekly doses (0.5 mL/10 YU at each of four injection sites), starting 21 to 35 days after resection. Gemcitabine 1000 mg/m² intravenous infusion was started on study Day 24. Monthly doses of GI-4000 or placebo were administered after initiation of gemcitabine to coincide with the monthly chemotherapy holidays. Administration of gemcitabine proceeded until six monthly cycles were completed, intolerance occurred, study withdrawal, disease progression, or death. Administration of study drug proceeded until study withdrawal, disease recurrence, death, or completion of 60 months of therapy. A schematic of dosing for GI-4000 and gemcitabine is given in Table 1. Subjects were followed for up to 60 months after randomization and thereafter rolled into a long-term safety and outcomes protocol with an intended follow-up period of up to 15 years from treatment initiation.

Tumor tissue sequencing

Cellular genomic DNA was extracted from biopsy material and analyzed to identify ras mutations as previously described. 24

Immunology analyses

Analyses were performed on samples blinded to treatment. Peripheral blood mononuclear cells (PBMCs) were collected and cryopreserved until use. Testing was performed on samples from subjects enrolled at sites in the United States only. Interferon-γ (IFNγ) ELISpot assays were performed as previously described. 25 Immunophenotyping by flow cytometry evaluated the frequency of regulatory T cell (Treg) fractions, 26 using PBMCs from the baseline and Day 15 or Day 24 time points.

Exploratory proteomic analysis

Baseline plasma samples were retrospectively analyzed by matrix-assisted laser desorption/ionization (MALDI) time-of-flight (ToF) mass spectrometry.

Statistical methods

A Bayesian statistical approach was used to analyze efficacy on a quarterly basis, using time to recurrence as the primary efficacy end-point and time to mortality as a key secondary efficacy end-point. Enrollment was expanded beyond the originally planned 100 patients based on a probability of improved efficacy for time to recurrence of <0.95 and >0.70 (this range of probabilities represents a strong trend, i.e., not yet definitive) and if the estimated increase in time to recurrence and mortality exceeded 2 months during enrollment. The efficacy analysis supported sample size expansion up to 176 patients overall, with 39 patients in the R1 subgroup and 137 patients in the R0 subgroup. Enrollment was permitted to continue until the prespecified limits were met. Once the boundaries were exceeded, the study ceased to accrue new patients.

Participants

Study disposition is shown in Figure 1. A total of 377 R0/R1 subjects were screened, and 176 subjects were subsequently randomized to receive GI-4000 + gemcitabine (88 subjects) or placebo + gemcitabine (88 subjects) between June 5, 2006, and April 30, 2010. These subjects comprised the intent-to-treat (ITT) population. The safety population consisted of a total of 169 subjects who received at least one dose of study drug: 84 subjects received GI-4000 and 85 subjects received placebo. The primary reasons for screened subjects failing to enroll were either the lack of a K-ras mutation in their tumor or the presence of a mutation not represented in the GI-4000 products. Unless otherwise stated, analyses are for the ITT population who underwent R0/R1 resection.
The most common reason for study discontinuation in both treatment groups was death (111 subjects, 63.1% of the ITT population). Table 2 summarizes the baseline demographic and disease characteristics. The mean age was 62.1 years, and the majority of the treated subjects were white (80.7%) and men (58.5%). The ras mutations present in tumors were similar between treatment groups, and most subjects with R0/R1 resection in both treatment groups had either a G12V (44.3%) or a G12D (43.2%) mutation. Most subjects in both groups had a baseline Eastern Cooperative Oncology Group (ECOG) Performance Status of either Grade 1 (59.1%) or Grade 0 (25.0%). Most primary tumors were stage pT3 (138 subjects, 78.4%), with no significant differences between the GI-4000 and placebo cohorts (79.5% vs. 77.3%, respectively). Three subjects had T4 primary lesions, and all were randomized to the GI-4000 group. The status of regional lymph node involvement was comparable between treatment groups. A higher percentage of subjects in the placebo group than in the GI-4000 group had metastasis in a single regional lymph node (10.2% vs. 6.8%, respectively), whereas metastasis in multiple regional lymph nodes occurred in a similar percentage of subjects in the GI-4000 and placebo groups (26.1% vs. 25.0%, respectively).

Efficacy

The median time from randomization to recurrence was similar for the GI-4000 and placebo groups at 354 and 357 days, respectively (hazard ratio [HR] = 1.01 [95% confidence interval (CI): 0.73-1.41], p = 0.936). The percentage of subjects free of recurrence in the GI-4000 group was similar to that of the placebo group (18.2% vs. 17.0%, respectively). The median time from randomization to death was also similar for the GI-4000 and placebo groups: 698 versus 751 days, respectively (HR = 1.01 [CI: 0.72-1.42], p = 0.956). Kaplan-Meier estimates of the duration of radiological recurrence-free survival and of OS from randomization show comparable patterns for both treatment groups (Fig. 2).

Safety

The side effect and safety profiles of subjects receiving GI-4000/gemcitabine were similar to those of subjects receiving placebo/gemcitabine. Table 3 summarizes treatment-emergent adverse events (TEAEs) occurring in at least 5% of ITT subjects and occurring in ≥30 of all subjects. The most frequent TEAEs were fatigue (55.1%), nausea (51.7%), anemia (42.6%), diarrhea (42.6%), and neutropenia (41.5%). Overall, the frequencies of adverse events were comparable between treatment groups and consistent with events expected in the population being studied. The TEAE that occurred with a notably higher incidence in the GI-4000 group than in the placebo group was injection site pain (25.0% vs. 3.4%). A notably higher incidence in the placebo group occurred for the TEAE of depression (11.4% GI-4000 vs. 23.9% placebo).

Immunogenicity

IFNγ ELISpot response. There was no difference in the frequency of ELISpot responders between the treatment groups, with 22 of 67 (32.8%) subjects in the GI-4000 group versus 23 of 62 (37.1%) subjects in the placebo-treated group (Table 4). However, there was a nonsignificant increase in the frequency of ELISpot responders in the R1 subgroup treated with GI-4000, with 6 of 15 subjects tested for GI-4000 versus 1 of 12 subjects tested for placebo (40.0% vs. 8.3%; p = 0.062). In addition, there was an improvement in median OS of 159 days for the R1 GI-4000 ELISpot immune responders versus all placebo-treated subjects (p = 0.810).
For the R0 subgroup, there were comparable categorical ELISpot responses in both treatment groups (Table 4).

Proteomic analysis

Baseline plasma samples (44 in the GI-4000 group and 46 in the placebo group) were retrospectively analyzed by exploratory MALDI-ToF mass spectrometry using previously described methods. 27 A classifier, BDX-001, was created using a strongly regularized logistic regression combination of five nearest neighbor classifiers, each composed of single features or pairs drawn from 100 mass spectral features (Supplementary Data). The training set for the classifier consisted of 23 samples from GI-4000-treated patients. Classifier performance was assessed on the remaining 21 samples from the GI-4000 group and all 46 placebo group samples. The classifier divided subjects into two classes, BDX-001+ and BDX-001−, with, respectively, better and worse outcomes when treated with GI-4000: treated subjects classified as BDX-001+ had a 12.0-month improvement in recurrence-free survival compared with GI-4000-treated subjects classified as BDX-001− (HR = 0.30 [CI: 0.07-0.49], p = 0.002, Fig. 3A). In contrast, there was no improvement in recurrence-free survival between BDX-001+ and BDX-001− placebo subjects (unfavorable 2.4-month difference, HR = 1.11 [CI: 0.57-2.18], p = 0.754, Fig. 3B). When used to evaluate OS, the proteomic classifier also predicted better and worse survival for subjects in the GI-4000 group (25.4-month improvement for BDX-001+ vs. BDX-001−, HR = 0.21 [CI: 0.04-0.31], p < 0.001, Fig. 3C) but not the placebo group (HR = 1.03 [CI: 0.50-2.10], p = 0.944, Fig. 3D). BDX-001+ subjects treated with GI-4000 had improved recurrence-free survival and OS compared with BDX-001+ placebo subjects, with an 11.5-month improvement in recurrence-free survival: 20.7 months versus 9. When this proteomic classifier was applied to only the R0 subjects from both treatment groups, an advantage in median recurrence-free survival of 13.7 months was observed for GI-4000 compared with placebo (23.
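As a rough illustration of the classifier construction described above (several k-nearest-neighbor classifiers on one or two mass spectral features, combined by a strongly regularized logistic regression), the sketch below builds such an ensemble on synthetic data. It is not the actual BDX-001 pipeline: the feature indices, labels, and hyperparameters are placeholders.

```python
# Ensemble-of-kNN sketch: five kNN base classifiers on single features or
# feature pairs, combined by a strongly regularized logistic regression.
# Synthetic data stands in for the 23 training spectra x 100 features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(23, 100))       # placeholder spectra
y = (X[:, 3] > 0).astype(int)        # placeholder outcome labels

feature_sets = [[3], [17, 42], [8], [55, 91], [70]]  # five base classifiers
base = [(fs, KNeighborsClassifier(n_neighbors=5).fit(X[:, fs], y))
        for fs in feature_sets]

# Stack the base classifiers' probabilities as meta-features; a small C
# means strong regularization of the combination weights.
meta = np.column_stack([knn.predict_proba(X[:, fs])[:, 1] for fs, knn in base])
combiner = LogisticRegression(C=0.01).fit(meta, y)

def classify(x):
    m = np.array([[knn.predict_proba(x[fs][None, :])[0, 1] for fs, knn in base]])
    return "BDX-001+" if combiner.predict(m)[0] == 1 else "BDX-001-"

print(classify(X[0]))  # class label for one (synthetic) sample
```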
Discussion

This phase 2 study was a randomized double-blind placebo-controlled multicenter trial comparing GI-4000 plus gemcitabine with placebo plus gemcitabine in subjects with resected ras-mutated pancreas cancer. Subjects were prospectively stratified based on their resection status (R0/R1). Since the majority of subjects in the trial were in the R0 subgroup (137/176; 78%), the overall findings in the study (R0 and R1 subjects) mirror those of the R0 subgroup analyses, including recurrence-free survival, OS, and mortality. To appreciate the potential differences observed in these subgroups, the data have therefore also been analyzed separately. The R1 subgroup showed an increase in subjects with T cell ELISpot responses after GI-4000 treatment compared with placebo treatment, nonsignificant advantages in 1-year OS for GI-4000 versus placebo, and an improvement in median OS of ~3 months for GI-4000. Furthermore, there was a nonsignificant >5-month improvement in median OS for the R1 GI-4000 ELISpot immune responders versus placebo, indicating a potential mechanism-based improvement in survival for the R1 subgroup. In contrast, the R0 group showed comparable ELISpot responses in both treatment groups, indicating that there appears to be a greater tendency for background tumor-specific immune responses in R0 subjects than in R1 subjects. Tregs are known to be overexpressed in pancreas cancer, 28 and poor prognosis is associated with the presence of Tregs in the periphery or in the tumor microenvironment. [29][30][31][32] In this study, GI-4000 treatment rapidly decreased the naive Treg subpopulation. This decrease could be a potential mechanism of action of GI-4000 that contributes to effects on recurrence and survival. Since the GI-4000 vector is yeast based, it may reduce the number and function of Tregs through reciprocal activation of the Th17 T cell pathway. [33][34][35] The improved ELISpot responses seen in the GI-4000-treated R1 subgroup, together with a trend toward improved survival for all GI-4000-treated R1 subjects, suggest residual antigen may be required for optimal response. Reduction in Tregs by GI-4000 may act preferentially in R1 subjects by allowing effector T cells generated by GI-4000 to infiltrate the tumor, where the presence of RAS antigen within the residual tumor margins could further drive the effector T cell response. The absence of an intact tumor in R0 subjects may, therefore, not reveal these dual benefits of GI-4000 treatment. Because of the small sample size in the R1 group, if the survival benefit in this group is real, a substantially larger trial would be required to confirm it. Improved survival with GI-4000 treatment was retrospectively defined by a proteomic signature. The difference in time to recurrence between BDX-001+ and BDX-001− subjects treated with GI-4000 was statistically significant and did not depend on resection status. These survival trends indicate that this proteomic signature predicted late recurrence in the GI-4000-treated subjects, but not the placebo subjects, and could potentially be used as an enrichment bioassay to improve observed treatment effects in future clinical trials, as demonstrated for a predictive classifier of the responses of NSCLC patients to erlotinib and chemotherapy. 36 GI-4000 was shown to be well tolerated, with safety findings comparable between the two groups and with no differences noted between R0 and R1 subjects. Overall, the GI-4000 group showed a similar pattern of recurrence-free survival and OS compared with the placebo group. The K-ras mutation G12C has recently been exploited to design small molecule inhibitors that show promise for NSCLC treatment. 37 However, as illustrated here, mutations in K-ras in pancreatic cancer are predominantly G12V and G12D; there was only a single subject with a G12C K-ras mutation in our study. Therefore, small molecule inhibitors for deployment in pancreatic cancer are still being sought. It may be beneficial to combine GI-4000 with cellular immunotherapies such as chimeric antigen receptor T cells or tumor-infiltrating lymphocytes 38,39 in pancreatic cancer, as GI-4000 may synergize by providing antigen-specific stimuli for the infused T cells. In addition, the use of check-point inhibitors to block T cell death pathways may provide optimal reactivation of antitumor immunity in combination with GI-4000. Clinical trials are currently in progress or planned in a number of tumor types combining GI-4000 with other immune therapies and chemotherapies. 40 As previously mentioned, it should also be noted that, given the promise of new regimens using capecitabine and FOLFIRINOX, 41,42 gemcitabine can probably no longer be considered the standard of care in pancreatic cancer patients, and any future studies will almost certainly employ a different control arm.
Conclusion Given the current promise of immunotherapy and interest in strategies to target cancer patients likely to respond to treatments, we believe continued investigation of GI-4000 is warranted, with further prospective studies stratified for likely responders. Combination with immune checkpoint inhibitors or other immunomodulators may also be beneficial, as this may provide optimal reactivation of antitumor immunity. Author Disclosure Statement At the time of study conduct, D.A., C.C., and A.M. were employees of GlobeImmune, Inc., and T.C.R. was CEO of GlobeImmune, Inc. All current and former GlobeImmune authors held or hold stock and/or stock options in the company. F.E.H. and A.C. were paid consultants to GlobeImmune, Inc. GlobeImmune, Inc., sponsored and funded the study. GlobeImmune personnel were involved in the design and conduct of the study and provided logistical support during the trial. GlobeImmune personnel prepared the article and all authors contributed to the review and final decisions on the article content.
2021-03-30T05:11:04.367Z
2021-03-23T00:00:00.000
{ "year": 2021, "sha1": "8a4f2d67cd8d8f3dcf54a5d2e71979956d555fed", "oa_license": "CCBY", "oa_url": "https://www.liebertpub.com/doi/pdf/10.1089/pancan.2020.0021", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a4f2d67cd8d8f3dcf54a5d2e71979956d555fed", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265723937
pes2o/s2orc
v3-fos-license
Prevalence and Factors Associated with Depression in Older Adults in Tabriz, Iran: Data from the Health Status of Aged People in Tabriz (HSA-T Study) Objectives: To determine the prevalence of depression and identify the factors predicting depression among older people. Design: Cross-sectional study. Setting(s): Tabriz, the capital city of the East Azerbaijan Province, Iran. Introduction As with other parts of the world, the number of older people in Iran is increasing. At present, more than 10% of the population in Iran is at least 60 years old, and this figure will grow to about 33% by 2050. 1 Depression is a major public health problem and the most common emotional disorder among older adults. 2,3 Moreover, the global incidence of depression increased by 45% from 1990 (172 million) to 2017 (258 million). 4 However, the prevalence of depression varies by country, and it was found to be about 20% in western countries, 5 28% in Sri Lanka, 6 and more than 60% among Iranian older adults. 7 Globally, depression accounts for the largest proportion of non-fatal health loss. The World Health Organization (WHO) has ranked unipolar depression as the 3rd largest cause of disability worldwide (4.3% of total Disability Adjusted Life Years), and it is projected that it will be the leading cause by 2030. 8 As a country in transition, Iran is currently experiencing a rapid increase in the burden of non-communicable diseases, including mental disorders. 9 Epidemiological studies have found that many demographic and biopsychosocial factors are associated with depression, including gender, 2, 7, 10, 11 age, 7, 10 marital status, 10,11 physical activity, [12][13][14][15] low education, 7,11 socioeconomic status (SES), social networks, 16 and perceived social support. 3,16,17 As the population of older people is growing fast, it is crucial to understand the factors that may influence depression among older adults. Thus, the current research aimed to investigate the prevalence of depression and identify the factors associated with depression among older people in Tabriz, Iran. Methods This study was embedded within the Health Status of Aged people in Tabriz (HSA-T) study, which was conducted on a representative sample of non-institutionalized older people (≥60 years) in Tabriz, Iran. Study Setting The study was conducted in Tabriz, which is in the East Azerbaijan Province of Iran, from June 2015 to August 2015. The East Azerbaijan Province is in the northwest of Iran, and Tabriz is both the capital of the region and the most populated city. 18 The majority of the city's inhabitants are Iranian Azerbaijanis, and the most common language is Azeri Turkish. According to the last Iranian census data, the total population of those aged 60 and older is about 180,000 (around 10.5% of the city's population). 18 Study Population This study used a cross-sectional design, and the population included all community-dwelling people aged 60 and above who were living in Tabriz. Sample Size and Sampling Method Details on this descriptive cross-sectional study and the sampling methodology have been described elsewhere. 19 In brief, the statistical population included all people aged 60 years and older who lived in the community. A community-based representative sample of 1071 older adults was randomly selected using the probability proportional to the size sampling method. In the first stage, 107 blocks were randomly selected from the 8531 urban blocks in Tabriz. Following this, 10 older adults were then randomly selected from each of the selected city blocks.
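The two-stage design just described can be made concrete with a short sketch. The block populations below are simulated stand-ins (the real block sizes are not reported here), so the code only illustrates probability-proportional-to-size selection, not the actual sampling frame.

import numpy as np

rng = np.random.default_rng(seed=1)
block_sizes = rng.integers(20, 400, size=8531)   # hypothetical population of each urban block
probs = block_sizes / block_sizes.sum()          # selection probability proportional to size
blocks = rng.choice(8531, size=107, replace=False, p=probs)  # stage 1: draw 107 blocks
elders_per_block = 10                            # stage 2: 10 older adults per block
print(len(blocks) * elders_per_block)            # 1070, approximately the 1071 actually sampled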
Of the 1071 cases in the original study, data related to depression were available for 1060 (512 men and 548 women). Data Collection Tools The data collection was undertaken by trained interviewers using questionnaires that measured demographic information as well as the scales described below. Hospital Anxiety and Depression Scale Depression was assessed using the Hospital Anxiety and Depression Scale (HADS). HADS is a commonly used measure of emotional distress that has been found to adequately measure the severity of anxiety and depression symptoms in primary healthcare as well as in the general population. Although this scale was developed for use in hospital settings, it is also a valid and reliable method of screening mental morbidity in community settings. 20 Furthermore, the Iranian version is an appropriate, reliable, and valid measure of anxiety and depression among older adults. 21 HADS has two subscales: the first measures depression, and the second gauges anxiety. The questions are answered using a 4-point Likert-type scale (0 to 3), which produces a total HADS score of 0 to 42 points. Each subscale has a score ranging from 0 to 21, with higher scores indicating more severe symptoms. In the current study, only the depression subscale data were analyzed. In terms of categorizing the answers, non-depressed individuals had scores of 0-7, while the remainder were classified as mild depression (score 8-10), moderate depression (score 11-14), and severe depression (score 15-21). 20 Multidimensional Scale of Perceived Social Support The Multidimensional Scale of Perceived Social Support (MSPSS) was used to measure the perceived level of social support, which is received from family, friends, and significant others. This tool was developed by Zimet et al 22 and contains 12 questions that are answered on a 7-point Likert scale (1 = very strongly disagree to 7 = very strongly agree). The sum of all 12 items represents the overall level of perceived social support, with lower scores indicating a lower level of social support. The validity and reliability of MSPSS have been evaluated favorably in Iran. 23 Socioeconomic Status Questionnaire for Urban Households The SES Questionnaire for Urban Households (SES Iran), which has acceptable validity and reliability, was developed for use in Iran. The scale is frequently included in many health surveys or clinical studies, as well as research on health equity and economics. 24 Satisfaction with Life Scale The Satisfaction with Life Scale (SWLS) was used to evaluate satisfaction with life. The scale was designed by Diener et al and has been found to have good reliability and validity. 25 The SWLS contains five items that are answered using a 7-point Likert scale (strongly dissatisfied to strongly satisfied). The Persian SWLS has also been found to have good validity and reliability. 26 Statistical Analysis Kolmogorov-Smirnov tests were used to test for normality (P > 0.05). Descriptive data were presented as frequencies (and percentages) for categorical data and means (and standard deviations) for continuous variables. Moreover, independent-samples t-tests were used to compare the means of two independent groups, and analysis of variance (ANOVA) was employed when there were more than two independent sub-groups. Furthermore, ordinal regression was used to determine the predictors of depression among older adults.
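As a concrete illustration of the scoring and modeling just described, the sketch below sums the seven HADS depression items (each scored 0-3), bands the resulting 0-21 subscale into the categories used in this study, and fits an ordinal (logit) regression with statsmodels as a stand-in (the study itself used SPSS, as noted below). The predictor column names are illustrative assumptions.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def hads_d_score(items):
    # items: seven responses, each an integer in 0..3 (reverse-scoring handled upstream).
    assert len(items) == 7 and all(r in (0, 1, 2, 3) for r in items)
    return sum(items)

def hads_d_category(score):
    if score <= 7:
        return "non-depressed"
    if score <= 10:
        return "mild"
    if score <= 14:
        return "moderate"
    return "severe"  # 15-21

def fit_severity_model(df):
    # Ordinal regression of depression severity on hypothetical predictors.
    levels = ["non-depressed", "mild", "moderate", "severe"]
    y = pd.Series(pd.Categorical(df["severity"], categories=levels, ordered=True))
    X = df[["female", "age", "daily_walking", "life_satisfaction", "perceived_support"]]
    return OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)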
All statistical analyses were conducted using SPSS 23, and the level of significance was set at P < 0.05. Results The mean age of the participants was 70.19 years. The highest proportion of participants was married, and the lowest proportion had never been married. More than 54% were illiterate (unable to read or write), and more than half were born in rural areas and then migrated to Tabriz. In terms of the type of family, almost 58% were living in extended families, followed by nuclear families and living alone as the second and third most common types of family. Families with 3-5 members were the most common household size (46.3%). The distribution of depression status is presented in Table 1. The results revealed that only 22.4% of older adults were not depressed, while most of the remaining participants suffered from varying levels of depression. About 15% had severe depression, and more than 36% suffered from a moderate level of depression. Gender, age, education, place of birth, marital status, perceived social support, and satisfaction with life were all associated with the HADS depression subscale scores (Table 2). The prevalence of depression in women was significantly higher than that among men. Furthermore, women were more likely to suffer from depression, and only 15.5% of women were depression-free, while 29.7% of men were not depressed. A multivariate analysis of the factors related to the severity of depression (i.e., gender, age, SES, perceived social support, marital status, daily walking, and life satisfaction) was undertaken using ordinal regression. The results of this analysis demonstrated that the severity of depression among participants was significantly associated with sex, physical activity, life satisfaction, and perceived social support. Therefore, our results indicate that older age, being a woman, engaging in lower levels of physical activity, low life satisfaction, and a perceived lack of social support were positively associated with depression in older adults. Furthermore, the ordinal regression demonstrated that being female, being older, being single, having low levels of walking and physical activity, having low levels of perceived social support, and having low life satisfaction can significantly increase the severity of depression. As Table 3 depicts, as the level of life satisfaction decreased from extremely satisfied to satisfied, neutral, dissatisfied, and extremely dissatisfied, the severity of depression progressively increased (B = 0.42, B = 0.81, B = 1.60, and B = 2.40, respectively). In addition, compared with the "very much satisfied" perceived level of social support, the lower levels (i.e., much, average, low, and very low levels) all resulted in higher levels of depression (B = 0.56, B = 0.91, B = 1.70, and B = 0.70, respectively). In addition, low SES showed a trend toward more severe depression (Table 3); however, the effect was not statistically significant (B = -0.01, P = 0.06). Discussion The current study aimed to determine the prevalence of depression as well as the factors related to depression among community-dwelling adults aged 60 and older who were living in northwest Iran. This study found a high prevalence of depression among this group, with more than 77% suffering from some degree of depression. However, it should be noted that the measurement of depression was based on a self-report scale, not on a clinical examination.
A depression symptom questionnaire is not designed to ascertain the diagnostic status of an individual, and based on its specificity and sensitivity estimates, we would expect this scale to overestimate the prevalence of depression. 27 The depression burden is also affected by the stigma and public beliefs regarding depression. Depressed people may refuse to receive help and treatment because of the stigma attached to mental disorders. The stigmatized person is not considered competent enough to be fully accepted in society. 28 The stigma attached to depressed individuals is one of the barriers to improving their mental condition and is so important to the burden of the disease that the WHO has described the stigmatization of depression as the "hidden" burden of the disease. 29 Developing campaigns to reduce the stigmatization of depression and promote public health literacy can help reduce the burden caused by depression. This study also identified several factors that are associated with depression. Gender and Depression It should be noted that sex-related differences in mental disorders differ according to the country and other social contexts. The factors that can help with interpreting these differences include cultural and social norms, as well as differences in the gender roles and coping strategies of men and women. 30 In our study, being male significantly decreased the severity of depression. Several studies are in line with our findings 2, 7, 31 and have found female gender to be a significant predictor of depression, 11,16 whereas other studies have found no relationship between depression and gender. 32 This discrepancy is most likely due to the numerous potentially confounding social and economic factors. 33 Therefore, the gender difference found in the prevalence of depression in the current study might be due to social constraints, prejudices, and stereotyped beliefs about women in Iran. Nevertheless, there is no clear evidence that the prevalence of depressive disorders is higher in countries where women have a lower status than men compared with countries where women are more equal. Several physiological, psychological, and sociological mechanisms have been proposed to explain the higher prevalence of depression among women, but the underlying mechanisms remain unclear. 34 Therefore, more research is needed on the reasons for sex differences in depression. Physical Activity and Depression We found the prevalence and severity of depression to have a significant negative relationship with daily physical activity and walking. Several other studies have reported similar findings. [12][13][14][15]35 Previous studies have demonstrated a lower level of depression among older people who are physically active than among their peers who are not physically active, so the provision of desirable venues for engaging in sports and socializing could help improve mental health. 12,13 There is extensive evidence that regular physical activity is an effective primary and secondary prevention technique for several chronic diseases, including cardiovascular disease, diabetes, cancer, hypertension, obesity, depression, and osteoporosis. 15 Routine physical activity has also been associated with improved psychological well-being by reducing stress, anxiety, and depression. 14
While physical activity can be a lifesaver for aging people generally, old age is associated with a decrease in physical activity. Physical activity is one of the modifiable factors that reduce the risk of depression. Reducing sedentary activities such as daytime napping and TV watching could help lower the risk of depression. Therefore, policymakers must design programs to increase Iranian older adults' physical activity. Furthermore, launching public walking programs in different neighborhoods may also be helpful. In addition, the adaptation of urban pathways and furniture based on the physical conditions and needs of older people (age-friendly cities and communities) can help encourage physical activity in older adults. Perceived Social Support and Depression We found that the MSPSS scores were negatively associated with depression. Low perceived social support was one of the main predictors of depression, which is consistent with previous research. 3,16,17,36,37 Social support is thought to be one of the social determinants of overall health and is an essential factor in the course and outcome of psychopathological disorders. People with a higher level of social support have a better general health status, and social support can decrease depression and improve the quality of life. 38 Furthermore, low subjective social support is a significant predictor of depression and is associated with major depression. 39 A similar study found that depressive symptoms significantly decrease with improved family support. 37 Moreover, research has found that the self-rated level of perceived support was significantly associated with the HADS score. 36 In addition, research has found social support to be a mediator in the relationship between the mental health quality of life and depression symptoms among older African American grandmothers. 40 There are several factors affecting perceived social support, one of which is an individual's level of social connection. The relationship here is direct and significant. Perceived social support is a mechanism that explains how social ties promote health. 41 More social relations are likely to increase perceived social support, which promotes health. An individual with a greater number of ties has more trustworthy people with whom they can connect and receive social support and health-relevant information. 41 Social connection is one of the protective factors against depression. Policymakers must encourage social interactions among older adults in Iran. An active social life improves physical, mental, and emotional health, which are all particularly important for older adults struggling with depression. Visiting friends, relatives, and extended family members, taking part in group outings, and attending community events can reduce depression in older adults. The continuation of social contact in old age is extremely important. Online social participation can also help maintain social engagement for those whose face-to-face social activity has been limited due to mobility limitations. 42 Life Satisfaction and Depression It was found that life satisfaction had significant negative relationships with both the prevalence and severity of depression. The findings of this study are consistent with previous research. People with a high level of life satisfaction have a better mental health status. Happiness is one of the mental factors that affect the prevalence and severity of depression. 43
Furthermore, life satisfaction and depressive symptoms are two factors that independently affect adult mortality. 44 There is also evidence that people fulfilling the criteria for major depressive disorders have lower levels of life satisfaction. 45 Socio-economic Status and Depression The findings demonstrated that low SES was not associated with a higher prevalence of depression in this study. There is, however, considerable evidence of a higher prevalence of depression among older adults living in low socioeconomic conditions. 46 SES is associated with the prevalence of depression, 47 and there are remarkable differences in the prevalence of depression among people with different SES levels. 48 Although the association was not significant in this study, paying attention to socio-economic factors remains important for addressing depression in older adults. Marital Status and Depression The rate of depression was significantly higher among older adults who were single. However, ordinal regression found no significant relationship between marital status and the severity of depression. According to other studies, being single is a significant risk factor for depression. 11,35 It is possible that marriage exerts its protective effect through spousal social support, which reduces depression. Education and Depression The findings revealed a significant negative relationship between the level of education and the prevalence and severity of depression, which is in agreement with previous research. 31,35 Low education has been found to be independently associated with an increased risk of depressive symptoms. 7 In an epidemiological study, the lowest educational group had a higher prevalence of psychiatric morbidity. 49 However, further research has revealed that a higher academic level predicts higher depressive symptoms. 50 Study Limitations This study has several strengths, including the sample size and the representative sample. However, the present study suffers from a number of limitations. As with other cross-sectional studies, the inability to make causal inferences is one important limitation. In addition, using a self-report scale, instead of measuring depression using the gold standard approach, is another limitation that may have led to overestimating the prevalence of depression. Conclusions We found depression to have a high prevalence among older Iranian people and that social factors played a significant role in predicting depression. Old age coincides with an optional or obligatory disconnection from occupational and social situations. Furthermore, the reduction in social and occupational interactions and changes in the family structure mean that aged people are an isolated population. Therefore, addressing mental disorders and social health factors among aged people should be considered a priority, and help should be provided to encourage the development of appropriate social networks, which will help improve and promote older adults' health.
2023-08-07T15:03:38.487Z
2023-03-14T00:00:00.000
{ "year": 2023, "sha1": "96fb2b80ace5f800216ddcae6855b8933311601a", "oa_license": "CCBY", "oa_url": "http://ijage.com/PDF/ija-1-e2.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f2a75898acea9ac16308a73b859f36e990ed5219", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [] }
253967461
pes2o/s2orc
v3-fos-license
Molecular basis of diseases induced by the mitochondrial DNA mutation m.9032T>C Abstract The mitochondrial DNA mutation m.9032T>C was previously identified in patients presenting with NARP (Neuropathy, Ataxia, Retinitis Pigmentosa). Their clinical features had a maternal transmission and patients' cells showed a reduced oxidative phosphorylation capacity, elevated reactive oxygen species (ROS) production and hyperpolarization of the mitochondrial inner membrane, providing evidence that m.9032T>C is truly pathogenic. This mutation leads to replacement of a highly conserved leucine residue with proline at position 169 of ATP synthase subunit a (L169P). This protein and a ring of identical c-subunits (c-ring) move protons through the mitochondrial inner membrane coupled to ATP synthesis. We herein investigated the consequences of m.9032T>C on ATP synthase in a strain of Saccharomyces cerevisiae with an equivalent mutation (L186P). The mutant enzyme assembled correctly but was mostly inactive, as evidenced by a >95% drop in the rate of mitochondrial ATP synthesis and the absence of significant ATP-driven proton pumping across the mitochondrial membrane. Intragenic suppressors selected from L186P yeast restoring ATP synthase function to varying degrees (30–70%) were identified at the original mutation site (L186S) or in another position of the subunit a (H114Q, I118T). In light of atomic structures of yeast ATP synthase recently described, we conclude from these results that m.9032T>C disrupts proton conduction between the external side of the membrane and the c-ring, and that H114Q and I118T enable protons to access the c-ring through a modified pathway. Introduction Oxidative phosphorylation (OXPHOS) provides eukaryotic cells with the energy-rich ATP molecule (1). During this process, electrons from carbohydrates and fatty acids are transferred to oxygen, which results in a proton gradient across the mitochondrial inner membrane that is used by the ATP synthase to phosphorylate ADP with inorganic phosphate. Mutations that compromise this activity result in devastating neuromuscular diseases (2)(3)(4). Many have been located in the mitochondrial genome, where the genes of 13 OXPHOS proteins and of a number of transfer and ribosomal RNAs required for their synthesis inside the mitochondrion reside (5). With the advent in recent years of high-resolution structures of these proteins and a detailed description of their energy-transducing mechanisms, it has become possible to make predictions about the possible consequences on mitochondrial function of specific mutations in their genes. However, only a limited number of amino acid residues in OXPHOS proteins have known critical function and these are generally not the target of the mutations found in patients' mitochondrial DNA. Furthermore, these mutations usually affect only a fraction of the numerous copies of the mitochondrial genome (heteroplasmy) and many other sources of genetic heterogeneity in nuclear and mitochondrial DNA exist between individuals, which makes it difficult to evaluate their functional consequences and pathogenicity. Last but not least, there are still no reliable methods for genetically transforming human mitochondria. Due to these difficulties, the yeast Saccharomyces cerevisiae has been used as a model system for evaluating the functional consequences of mtDNA mutations found in patients (6)(7)(8)(9)(10).
Its mitochondrial genome can be manipulated (11) and, owing to the strong instability of heteroplasmy in this organism (12), strains homoplasmic for a specific mutation of this DNA can be obtained quite easily. Importantly also, the structures of the mtDNA-encoded proteins have been highly conserved from yeast to humans (13). Therefore, discrete alterations in these structures should have similar consequences on mitochondrial function in evolutionarily distant mitochondria. Consistently, equivalents of human mtDNA mutations leading to severe clinical phenotypes proved to compromise oxidative phosphorylation in yeast much more severely than mutations resulting in milder health problems (6,14). We herein investigate the consequences in yeast of a mitochondrial DNA mutation (m.9032T>C) recently detected in patients presenting with the NARP syndrome (15)(16)(17). The disease was inherited maternally; its severity correlated with the level of heteroplasmy and patients' cells showed a diminished oxidative phosphorylation capacity, leaving no doubt that this mutation is pathogenic. The m.9032T>C mutation is located in the gene ATP6 that encodes the subunit a of ATP synthase (18). Together with a ring of identical c subunits, the subunit a moves protons through the membrane domain (FO) of ATP synthase, which is coupled to ATP synthesis in its extra-membrane domain (F1). As it leads to replacement of a highly conserved leucine residue with proline at position 169 of the human subunit a (L169P), in close proximity to other well conserved residues (see below), it made sense to investigate its consequences in a yeast strain with an equivalent mutation in subunit a (L186P). Based on the results herein reported, and in light of atomic structures of ATP synthase recently described (13,(19)(20)(21)(22)), we propose a molecular mechanism by which m.9032T>C compromises ATP synthase function and human health. Results Yeast cells with an equivalent of the m.9032T>C mutation (L186P) do not grow using respiratory carbon sources and have a relatively high propensity to lose the mitochondrial genome The m.9032T>C mutation results in the substitution of a highly conserved leucine residue with proline at position 169 of the human subunit a, 186 in the yeast mature protein (196 in the precursor form of yeast subunit a, of which the first ten residues are cleaved during assembly (23)) (see below). Two nucleotide changes were introduced to replace the leucine codon 196 of the yeast ATP6 gene with a proline codon (TTA → CCA at codon 196). The influence of the L186P mutation on the growth of yeast was investigated on solid and in liquid media, from fermentable (glucose) and respiratory (glycerol) carbon sources, at 28 °C and 36 °C (Fig. 1). While the mutant was, as expected, able to grow from glucose, it totally failed to multiply from glycerol at both temperatures, indicating a very severe impairment of ATP synthase function. In the shown glucose growth curves (Fig. 1B), the respiratory deficiency of L186P yeast was apparent once the glucose present in the media had been entirely converted into ethanol, the metabolism of which requires the presence of functional mitochondria. Yeast strains with severe defects in ATP synthase have a relatively high propensity to produce cells with large deletions (ρ−) or total absence (ρ0) of mitochondrial DNA, up to 100% vs 5-10% in WT ((24)(25)(26)(27), see below). We therefore probed the mitotic stability of L186P yeast.
To this end, samples of glucose cultures of L186P and WT yeasts were plated for single colonies on rich glucose plates. As expected, due to the presence of the ade2 mutation in these strains, the colonies from WT yeast were red whereas those from the L186P mutant were much less colored (the red color does not develop well with respiratory-deficient cells (28)) (Fig. 2). An important fraction (40-50%) of the colonies from L186P yeast were totally white and had a regular contour, indicating that they originated from ρ−/ρ0 cells. The remaining ones had a cream color and were scalloped, indicating that they originated from genetically unstable ρ+ cells. The presence/absence of ρ+ mtDNA in the colonies produced by L186P yeast was confirmed by crossing with SDC30, as described in Materials and Methods. Based on these tests, glucose cultures of the L186P yeast were estimated to contain about 50-60% ρ−/ρ0 cells. Intragenic suppressors of L186P Taking advantage of its failure to sustain growth from respiratory carbon sources, we searched for genetic suppressors of the L186P mutation restoring at least partially respiratory growth and hence ATP synthase function, an approach we already used to better understand the deleterious mechanisms of a number of subunit a mutations (29)(30)(31)(32)(33). To this end, glucose-grown L186P cells were plated as dense layers on solid glycerol medium (10^8 cells/plate). Twelve respiring clones that emerged from the glycerol plates after several days of incubation were analyzed by DNA sequencing for the presence of novel mutations in the ATP6 gene (intragenic suppressors). Three different mutations were identified: a first-site reversion leading to replacement of the mutant proline residue with serine (referred to as L186S to indicate the amino acid change relative to the wild type protein) in four clones, and two second-site reversions at other positions of subunit a: H114Q (in four clones) and I118T (in four clones) (see Table 1 for the corresponding nucleotide changes; amino acid numbers are those in yeast mature subunit a, without the first ten residues present in its precursor form, and each suppressor was identified in four genetically independent isolates). The L186S and L186P,I118T strains showed fast growth in glycerol, while the L186P,H114Q mutant grew much less rapidly (Fig. 1). The fast growth of the L186S and L186P,I118T strains (in the exponential phase) does not mean that ATP synthase function is unaffected. Previous studies have indeed shown that large ATP synthase activity deficits (of at least 80%) are needed to obviously affect the growth rate of yeast in respiratory conditions (34,35). Consistent with the restoration of respiratory growth, the three revertant strains showed a much better capacity than L186P yeast to maintain the mitochondrial genome, with only <10% ρ−/ρ0 cells (vs 50-70% for the original mutant) (Table 2). Assembly/stability of ATP synthase The influence of the subunit a mutations on the assembly/stability of ATP synthase was analyzed by BN- and SDS-PAGE of mitochondrial extracts prepared from cells grown in rich galactose medium. Fully assembled F1FO dimers and monomers and free F1 particles were detected in BN gels for all the mutants as in WT yeast, using antibodies against the β (Atp2) subunit of F1 (Fig. 3A). Free F1 was more abundant in samples from the L186P mutant vs the other strains, which reflects its strong propensity to produce ρ−/ρ0 cells.
These cannot synthesize the three mtDNA-encoded subunits of FO (Atp6/a, Atp8 and Atp9/c), whereas the F1 is entirely encoded by nuclear genes and can assemble in the absence of FO (36). Quantitative estimation of ATP synthase was performed by measuring the levels of the subunit a in denaturing gels. Owing to its high susceptibility to degradation when not assembled, this subunit is a good indicator of fully assembled ATP synthase. The levels of subunit a were almost the same in the analyzed strains except in L186P yeast, where they were decreased by about 70-80% vs WT (Fig. 3B and C). The low abundance of the subunit a in the L186P mutant mostly results from its high propensity to produce ρ−/ρ0 cells rather than a compromised ability of the mutant protein to assemble. Indeed, when expressed relative to the amounts of ρ+ cells in the cultures used for these experiments, a good accumulation of subunit a was estimated in the L186P mutant (84% vs WT, see Fig. 2C). Thus, the V1 and V2 immunological signals in the shown gels correspond to fully assembled ATP synthase only, in line with previous studies showing that incomplete ATP synthase assemblies lacking subunit a are fragile and easily dissociate in BN-gels (26,37). Mitochondrial respiration and ATP synthesis We next evaluated the influence of the subunit a mutations on oxidative phosphorylation by measuring the rates of electron transfer to oxygen and ATP synthesis in intact (osmotically protected) mitochondria using NADH as a respiratory substrate. These activities were very weak in L186P mitochondria (<10% vs WT), whereas those from the revertant strains respired and produced ATP more rapidly, albeit not as fast as WT mitochondria (Table 2). The yield in ATP per electron transferred to oxygen (P/O) in mitochondria from the L186P,I118T and L186S strains was quite normal, whereas it was reduced by about half in those from the L186P,H114Q strain, indicating that only part of the protons entering the FO in this latter strain is properly conveyed by the c-ring motor of ATP synthase and coupled to ATP synthesis in the F1 catalytic domain (see below). The P/O value was also strongly decreased in mitochondria from the L186P mutant, but because of their extremely low electron transfer activity and the blockade of the FO, a large part of the protons pumped by the respiratory chain is certainly passively returned to the mitochondrial matrix through the phospholipid bilayer of the inner membrane, thus without any ATP synthesis. No significant difference was observed between the rates of respiration measured in the absence and presence of oligomycin in the analyzed strains (the values measured in the presence of the drug are not shown), indicating that none of the mutations led to important passive proton leaks through the FO (like those observed in strains with mutations in the central stalk subunits of ATP synthase (38)(39)(40)). Such leaks have thus far never been observed in yeast subunit a mutants, and the ability of mitochondria from the L186P mutant to sustain a significant and stable electrochemical potential across the inner membrane (see below) argues against the existence of such leaks in this mutant.
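For reference, the P/O ratio used in these comparisons is the standard quotient of the ATP synthesis rate over the rate of oxygen-atom uptake, with the factor of 2 converting molecular O2 into oxygen atoms:

\[ \mathrm{P/O} \;=\; \frac{J_{\mathrm{ATP}}}{2\, J_{\mathrm{O_2}}} \]

where J_ATP is the measured rate of ATP synthesis and J_O2 the rate of O2 consumption. A P/O reduced by about half, as reported for the L186P,H114Q strain, thus directly reflects protons crossing the FO without driving ATP synthesis.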
Another line of evidence indicating that the L186P mutation prevents FO-mediated transport was provided by probing the levels of Complex IV's content and activity. Previous work has shown that the rate of Complex IV biogenesis in yeast is influenced by the proton transport activity of FO, possibly as a way for cells to co-regulate their needs in ATP and respiration (41,42). Strains with passive proton leaks (i.e. not coupled to ATP synthesis), for instance mutants with defects in the central stalk (38)(39)(40), keep a good capacity to assemble Complex IV, showing that it is indeed the proton flow through the FO rather than the rate of F1-mediated ATP synthesis that controls the biogenesis of Complex IV. In BN gels stained with Coomassie blue, the levels of Complex IV associated with Complex III (III2-IV2 and III2-IV1) were dramatically reduced in mitochondrial samples from L186P vs WT yeasts, whereas these assemblies were much less affected in the revertant strains (Fig. 3A). (Because of the large fraction (72%) of ρ−/ρ0 cells in cultures of the L186P mutant, in which the subunit a and Cox2 cannot be synthesized, the levels of these two proteins were calculated for the part of the population (28%) that contained complete (ρ+) mtDNA; the standard errors were calculated from three independent experiments.) These observations were corroborated by Complex IV activity measurements (Table 2) and by probing the levels of the Cox2 subunit of Complex IV in denaturing gels (Fig. 3B,C). It is interesting to note that the extent to which Complex IV's content and activity are reduced in mitochondria from L186P,H114Q yeast (65% vs WT) is much smaller than the drop in the rate of ATP synthesis (20% vs WT), which further indicates that there is still in this mutant a quite good flow of protons through the FO despite its poor capacity to synthesize ATP (see below). Mitochondrial membrane potential We further investigated the consequences of the subunit a mutations by monitoring variations in transmembrane electrical potential (ΔΨ) using the cationic dye Rhodamine 123, in osmotically protected mitochondria buffered at physiological pH 6.8. As expected, adding ADP to WT mitochondria respiring from ethanol resulted in a sharp and transient fluorescence increase, reflecting ΔΨ consumption by the ATP synthase until complete phosphorylation of the added ADP (Fig. 4A). Ethanol induced a small ΔΨ in the L186P mitochondria, and there was no significant modification in fluorescence after adding ADP. The mitochondria from the L186S and L186P,I118T strains responded quite well to ethanol and ADP, whereas those from the L186P,H114Q strain took a much longer time after the addition of ADP to recover the ethanol-induced ΔΨ. A further addition of KCN to inhibit the respiratory chain resulted in only a partial ΔΨ loss in mitochondria from the WT and the revertant strains, and the residual ΔΨ was oligomycin-sensitive (Fig. 4A), whereas the membrane potential totally collapsed after the addition of KCN in L186P mitochondria. These observations are fully consistent with the measurements of oxygen consumption and ATP synthesis reported in Table 2. In a second series of experiments, we evaluated the proton-pumping activity of ATP synthase from externally added ATP (Fig. 4B). Prior to adding ATP, the mitochondrial respiratory chain was supplied with electrons from ethanol and then inhibited with KCN, which promotes removal from the F1 of its natural IF1 inhibitory peptide (43).
As expected, next adding ATP to WT mitochondria resulted in a large and stable oligomycin-sensitive ΔΨ, reflecting FO-mediated proton pumping coupled to F1-mediated ATP hydrolysis (Fig. 4B). No ATP-driven proton pumping was detected in mitochondria from the L186P mutant, whereas those from the L186P,I118T and L186S strains responded well to ATP. A significant but weaker and less stable potential was observed with the mitochondria from the L186P,H114Q strain, which further illustrates the poor suppressor activity of the H114Q change compared to the two other suppressor mutations. Mitochondrial ATP hydrolysis The reverse functioning of ATP synthase was further investigated by measuring the rate of ATP hydrolysis in non-osmotically protected mitochondria. In these conditions, the enzyme is not working against a proton gradient and can therefore hydrolyze ATP at its maximum rate. When F1 and FO are properly coupled, inhibition of FO with oligomycin prevents F1-mediated ATP hydrolysis because then the ATP synthase motor (F1 central stalk and c-ring) cannot rotate and the catalytic sites in F1 cannot process ATP. When their coupling is compromised, the F1 can hydrolyze ATP in the presence of oligomycin, for instance in ρ−/ρ0 cells unable to synthesize the FO or with mutations that allow the protons to cross the FO without being conveyed by the c-ring. Most (90%) of the ATPase activity in WT mitochondria was inhibited by oligomycin and thus mediated by ATP synthase (the remaining 10% oligomycin-insensitive activity is due to other ATPases present in mitochondria, see Table 2). The ATPase activity in L186P mitochondria was only 20% vs WT and only 4% of it was inhibited with oligomycin. The strong propensity of the L186P mutant to produce ρ−/ρ0 cells is certainly in large part responsible for the poor inhibition by oligomycin. This cannot, however, explain its very poor F1-mediated ATPase activity since, as described above, ρ+ L186P cells do properly assemble the FO. It can be inferred that fully assembled F1FO complexes with the L186P mutation have a very poor capacity to hydrolyze ATP, which further reflects the dramatic consequences of this mutation on FO-mediated proton transport. Mitochondrial ATPase activity was largely recovered with the L186S and I118T suppressors (65-70% vs WT) and efficiently inhibited with oligomycin (90%) (Table 2). In mitochondrial samples with the H114Q suppressor, the ATPase activity was less well restored (35% vs WT) and largely inhibited (75%) by oligomycin. These results perfectly mirror the ATP synthesis rate measurements, indicating that the influence of the mutations on the functioning of ATP synthase is the same whether it synthesizes or hydrolyzes ATP. The less efficient inhibition of the mitochondrial ATPase activity in the strain with the H114Q suppressor further supports that this mutation partially compromises the coupling of F1 to FO, whereas there is no such energy dissipation with the two other suppressors. Discussion Although the pathogenicity in humans of the L169P change in subunit a induced by m.9032T>C has been established (15)(16)(17), it was thus far not known how and to what extent it compromises ATP synthase function. This issue has been investigated in the present study using a yeast strain with an equivalent mutation in subunit a (L186P).
This mutant totally failed to grow on nonfermentable substrates, providing a first indication that the L186P change had dramatic consequences on the ATP synthase. Consistently, although it properly assembled, the mutant ATP synthase was mostly inactive, as evidenced by a >95% drop in the rates of mitochondrial ATP synthesis and F1-mediated ATP hydrolysis, and the absence of significant ATP-driven proton pumping across the mitochondrial membrane. As was systematically observed with mutants with severe ATP synthase defects (25)(26)(27)39,(44)(45)(46)(47)(48), the L186P strain had a somewhat high propensity to produce ρ−/ρ0 cells (>50% vs <5% for wild type yeast). Mutants with massive proton leaks through the FO produce 100% ρ−/ρ0 cells because retention of functional mtDNA is then lethal, by preventing the maintenance of a minimal electrochemical potential across the mitochondrial inner membrane (39). Indeed, without functional mtDNA the FO cannot be synthesized and the mitochondrial membrane can be energized through the electrogenic exchange of glycolytic ATP against matrix-localized ADP combined with the hydrolysis of ATP by the F1 (these activities are controlled by nuclear genes). A lack of FO activity also compromises (for as-yet-unknown reasons) the stability of the mitochondrial genome, as was observed in strains lacking subunit 9/c (25), subunit 6/a (26) or one of the factors involved in the assembly of these proteins (47,48), which all produce at least 50% ρ−/ρ0 cells despite the absence of any FO-mediated proton leak. It is unlikely that the instability of L186P yeast results from FO-mediated proton leaks. Indeed, glucose cultures of this mutant contained a significant fraction (up to 50%) of viable ρ+ cells and, despite a low electron transfer activity, its mitochondria were able to sustain a significant and stable membrane potential when fed with electrons from ethanol (Fig. 4). Furthermore, consistent with previous work showing that the activity of FO rather than the rate of F1-mediated ATP synthesis controls the rate of Complex IV biogenesis (41), this complex was downregulated in L186P yeast (Fig. 3). Further evidence for the existence of tight connections between the biogenesis of Complexes IV and V was recently provided by the detection of assembly intermediates containing subunits of both complexes, possibly as a means to adjust their relative abundance (42). Taken together, these observations indicate that the increased propensity of L186P yeast to produce ρ−/ρ0 cells is due to a lack of FO activity. The subunit a and a ring of identical subunits c (8 in humans, 10 in yeast) are responsible for the transport of protons across the membrane domain (FO) of ATP synthase (Fig. 5C). A hydrophilic pocket within the subunit a on the external side of the inner membrane (referred to as the p-pocket) allows protons from the intermembrane space to access an essential acidic residue in subunit c (cE59 in yeast) near the middle of the membrane (13,(19)(20)(21)). After an almost complete rotation of the c-ring, the protons are released and transferred to the mitochondrial matrix through a second hydrophilic pocket within the subunit a (n-pocket). The two pockets are separated by a plug of hydrophobic residues in subunit a near the middle of the membrane; close to it, in front of cE59, is a positively charged arginine residue belonging to subunit a (R176 in yeast) that is essential for FO activity.
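A back-of-the-envelope consequence of these c-ring stoichiometries, assuming the canonical three catalytic sites (three ATP) per full rotation of the rotor:

\[ \frac{\mathrm{H^+}}{\mathrm{ATP}} \;=\; \frac{n_c}{3}, \qquad \text{human } (n_c = 8):\ \frac{8}{3} \approx 2.7, \qquad \text{yeast } (n_c = 10):\ \frac{10}{3} \approx 3.3 \]

This is the arithmetic behind the statement, made later in the Discussion, that yeast needs more protons than humans to make one ATP.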
The amino acid change induced by the pathogenic m.9032T>C mutation (L186P in yeast) is proximal to the p-pocket (Fig. 5B). This pocket is surrounded by segments of three subunit a α-helices (aH3, aH5 and aH6), the second transmembrane helix of subunit 4 (or b), the C-terminal helix of subunit f, the N-terminal domains of subunits a and 8, and a plug of hydrophobic subunit a residues near the middle of the membrane (L173, L177, V233, W234 and L237) (Fig. 5B, C). Based on studies in E. coli (49)(50)(51), protons would enter this pocket with the help of H185 and E223 and next be moved to the c-ring via N100, N180 and Q230. Being located on aH5, the L186P change induced by m.9032T>C may disturb the structure of the p-pocket and compromise its proton-conduction activity. Proline residues are indeed known for their propensity to kink α-helices or to induce a more local distortion referred to as a π-bulge (52,53). Bending aH5 would certainly compromise the assembly/stability of subunit a and hence increase its susceptibility to degradation. Since this was not observed, a π-bulge modification is more likely. As a result of this, the topology of H185 is changed and its distance from E223 increases, which could be the reason for the observed loss of FO-mediated proton transfer in the L186P mutant. The large recovery of ATP synthase function with a serine residue at position 186 is not very surprising considering the good ability of this type of residue to be incorporated within α-helices, thus allowing H185 to recover its normal topology. Being located 12 Å away from position 186, it is unlikely that the second-site suppressors H114Q and I118T can correct the π-bulge modification induced by L186P. The H114 residue interacts with a short helical segment at the N-terminus of subunit a that caps the p-pocket and with a conserved motif (MPQL) at the beginning of subunit 8 that is supposed to stabilize the surface of subunit a in the membrane (54), while I118 is more deeply buried in the membrane beneath the MPQL motif, close to the hydrophobic plug that separates the two proton conduction domains of subunit a (Fig. 5D). It is a reasonable assumption that H114Q and I118T enable the protons to regain access to the c-ring through a path that bypasses the inactive H185/E223 dyad. Possibly, the two second-site suppressor mutations allow the protons to be moved towards the bottom of the pocket from the N-terminal domain of subunit 8 along the C-terminus of aH4, from where they are next moved by the triad of strictly conserved residues (N100, Q230 and N180) to the essential acidic residue of subunit c (E59 in yeast) (Fig. 5E). The low yield of ATP per electron transferred to oxygen observed with H114Q indicates that, after their entry into the p-pocket, the protons are channeled less efficiently towards the c-ring compared to wild type ATP synthase. Intriguingly, Q and T are mostly present at the corresponding positions 114 and 118 in subunits a from other species, including humans. The 'humanization' of the yeast subunit a in response to a mutation with detrimental consequences is an interesting observation, indicating that positions 114 and 118 (97 and 101 of the human subunit a) were possibly exploited during evolution to optimize FO activity in those species with high energy demands and where the ATP is mostly produced in mitochondria.
In this respect, it deserves to be highlighted that the ATP synthase of yeast clearly performs less well than the human enzyme, as evidenced by the need in yeast of more protons to make one ATP compared to humans (due to differences in c-ring stoichiometry) (55). The present study demonstrates that the disease induced by the leucine-to-proline change in subunit a caused by the m.9032T>C mutation is due to a block in FO-mediated transport between the external side of the inner membrane and the c-ring motor of ATP synthase. The possibility of bypassing this mutation through secondary structural changes within the p-pocket is an interesting finding that opens a path for designing molecules that can improve oxidative phosphorylation in patients with mutations in the p-pocket. ATP6 mutagenesis An equivalent of m.9032T>C (L186P) was introduced into a BamHI-EcoRI fragment on the 5' side of the yeast ATP6 gene cloned into pUC19 (plasmid pSDC8, see (59)), using the Q5® Site-Directed Mutagenesis Kit of NEB and primers 5'-CACGACGTTGTAAAACGACGGCCAGTGAATTCACTATTGGTATCATTCAGGGATATGTCTG and 5'-CAGACATATCCCTGAATGATACCAATAGTGAATTCACTGGCCGTCGTTTTACAACGTCGTG (the mutagenic bases are in bold). The mutated ATP6 fragment was cut with BamHI and EcoRI and ligated at the same sites in plasmid pJM2 (56). This plasmid contains the yeast mitochondrial COX2 gene as a genetic marker for mitochondrial transformation. The remaining part of ATP6 was cut off from pSDC9 (45) with EcoRI and SapI and fused to the L186P fragment in pJM2. The resulting plasmid (pEB15) and the LEU2 plasmid Yep351 (60) were introduced into cells from the ρ0 strain DFS160 using the biolistic PDS-1000/He particle delivery system (Bio-Rad), as described (11). Leu+ clones with pEB15 in mitochondria (EBY10a) were identified by virtue of their capacity to restore respiratory competence in crosses with the ρ+ strain NB40-3C, in which the mitochondrial COX2 gene is partially deleted (cox2-62 mutation (57)). To introduce the L186P mutation into a complete (ρ+) mitochondrial genome, EBY10a was crossed with strain MR10 (26), which is a derivative of wild type strain MR6 in which the coding sequence of ATP6 is in-frame replaced with ARG8m (atp6::ARG8m). ARG8m is a mitochondrial version of a nuclear gene (ARG8) that encodes a protein involved in arginine biosynthesis (24). The EBY10a × MR10 crosses did not produce a single respiring clone, suggesting that the L186P mutation virtually abolishes ATP synthase function. ρ+ clones with the L186P mutation were therefore isolated based on their incapacity to grow in media lacking arginine (due to replacement of atp6::ARG8m with the mutated ATP6 gene) and to recover respiratory competence after crossing with SDC30, which is a ρ− synthetic strain containing in mitochondria only the ATP6 and COX2 genes (26). The presence in these clones (called EBY10) of the L186P mutation was verified by DNA sequencing with primers oATP6-1 (5'-TAATATACGGGGGTGGGTCCCTCAC) and oATP6-10 (5'-GGGCCGAACTCCGAAGGAGTAAG). Due to the presence in EBY10a of the nuclear karyogamy-delaying kar1-1 mutation (61), the ρ+ recombinant clones could be isolated in the haploid nuclear background of wild type strain MR6, from which MR10 was derived (26).
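A quick, self-contained check of the mutagenesis design described above: the sketch below verifies that TTA (Leu) and CCA (Pro) differ at exactly two positions, matching the two nucleotide changes introduced at codon 196, and that the two mutagenic primers quoted are reverse complements of each other. The codon assignments hold for these two codons in both the standard and the yeast mitochondrial genetic codes.

def revcomp(seq):
    # Reverse complement of a DNA string.
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

wt, mut = "TTA", "CCA"                        # Leu -> Pro at codon 196
print(sum(a != b for a, b in zip(wt, mut)))   # 2, i.e. two nucleotide changes
fwd = "CACGACGTTGTAAAACGACGGCCAGTGAATTCACTATTGGTATCATTCAGGGATATGTCTG"
rev = "CAGACATATCCCTGAATGATACCAATAGTGAATTCACTGGCCGTCGTTTTACAACGTCGTG"
print(revcomp(fwd) == rev)                    # True: the primer pair is fully complementary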
About 200 of these colonies were replica-crossed with cells from strain SDC30 on glucose plates. After overnight incubation, the mated cells were replicated on glycerol plates. The crosses that produced cells growing on glycerol originated from L186P subclones with a complete (ρ+) mitochondrial genome, while the ρ−/ρ0 cells present in the cultures of L186P yeast resulted in progenies entirely unable to grow on glycerol. Selection of revertants from L186P yeast Three genetically independent L186P (EBY10) clones were grown overnight in rich glucose (YPGA). Cells were centrifuged and residual glucose removed by two washings with water. They were then spread on rich glycerol (YPGlyA) medium and incubated at 28 °C for 21 days. Twelve revertants were picked and genetically purified by subcloning on YPGlyA. The ATP6 gene of these clones was PCR-amplified and sequenced entirely, which led to the identification of three different intragenic suppressors (see below). Mitochondrial respiration, ATP synthesis/hydrolysis and membrane potential Mitochondria were prepared from yeast cells grown in rich galactose (YPGalA) at 28 °C by the method described in (62). Protein content in mitochondrial preparations was determined according to (63) in the presence of 5% SDS. For respiration and ATP synthesis assays, mitochondria were diluted to 0.075 mg/mL in respiration buffer (10 mM Tris-maleate (pH 6.8), 0.65 M sorbitol, 0.3 mM EGTA and 3 mM potassium phosphate). Oxygen consumption rates were measured using a Clark electrode after adding consecutively 4 mM NADH (state 4 respiration), 150 μM ADP (state 3) and 4 μM carbonyl cyanide m-chlorophenylhydrazone (CCCP) (uncoupled respiration), as previously described (64). The rates of ATP synthesis in mitochondria respiring from NADH were determined in the presence of externally added 750 μM ADP, in the absence and presence of oligomycin (3 μg/mL), taking aliquots every 30 seconds and stopping the reaction with 3.5% (w/v) perchloric acid, 12.5 mM EDTA. The amounts of ATP in the samples were quantified using the Kinase-Glo Max Luminescence Kinase Assay (Promega) and a Beckman Coulter Paradigm Plate Reader. Variations in transmembrane potential (ΔΨ) were evaluated in the respiration buffer containing 0.15 mg/mL of mitochondrial proteins and 0.5 μg/mL of Rhodamine 123 (λexc = 485 nm, λem = 533 nm) under constant stirring using a Cary Eclipse Fluorescence Spectrophotometer (Agilent Technologies, Santa Clara, CA, USA), in the presence of 75 μM ADP, 10 μL/mL ethanol, 2 mM potassium cyanide, 4 μg/mL oligomycin and 4 μM CCCP, as described in (37). The specific ATPase activity at pH 8.4 in non-osmotically protected mitochondria was measured in the absence and presence of oligomycin (3 μg/mL), as described in (65). Amino-acid alignments and topology of subunit a mutations Multiple sequences of ATP synthase subunits a of various origins were aligned and drawn using Clustal Omega (67) and Espript 3.0 (68), respectively. Molecular views of subunit a and the c10-ring were obtained from the dimeric FO domain of S. cerevisiae ATP synthase (pdb_id: 6b8h, (19)). The shown structures were drawn using ChimeraX (69) and the PyMOL Molecular Graphics System (70). Statistical analysis At least three biological and three technical replicates were performed for all reported experiments. The t-test was used for all data sets. Significance and confidence level were set at P < 0.05. Conflict of Interest statement. The authors declare that they have no competing or financial interests.
Funding
This work was supported by a grant from the National Science Center of Poland [2016/23/B/NZ3/02098] to R.K. and the Association Française contre les Myopathies [AFM #22382] to D.T.T.

Author Contributions
E.B. constructed plasmids and strains; C.P., E.B. and K.N. isolated mitochondria and analyzed their properties; A.D. and C.C. performed the structural modeling analyses; D.T.T., J.PdR. and R.K. wrote the manuscript and designed the work.

Statement of Ethics
The permission number for work with genetically modified microorganisms (GMM I) for RK is 01.2-28/201.
Stratified Community Responses to Methane and Sulfate Supplies in Mud Volcano Deposits: Insights from an In Vitro Experiment
Numerous studies on marine prokaryotic communities have postulated that a process of anaerobic oxidation of methane (AOM) coupled with sulfate reduction (SR) is the main methane sink in the world's oceans. AOM has also been reported in the deep biosphere. But the responses of the primary microbial players to changes in geochemical environments, specifically in methane and sulfate supplies, have yet to be fully elucidated. Marine mud volcanoes (MVs) expel a complex fluid mixture of which methane is the primary component, forming an environment in which AOM is a common phenomenon. In this context, we attempted to identify how the prokaryotic community would respond to changes in methane and sulfate intensities, which often occur in MV environments in the form of eruptions, diffusions or seepage. We applied an integrated approach, including (i) biochemical surveys of pore water originating from the MV, (ii) in vitro incubation of mud breccia, and (iii) prokaryotic community structure analysis. Two distinct AOM regions were clearly detected. One is related to the sulfate methane transition zone (SMTZ) at a depth of 30-55 cm below the sea floor (bsf); the second is at 165-205 cm bsf with ten times higher rates of AOM and SR. This finding contrasts with the sulfide concentrations in pore waters and supports the suggestion that potential AOM activity below the SMTZ might be an important methane sink that is largely ignored or underestimated in oceanic methane budget calculations. Moreover, the incubation conditions below the SMTZ favor the growth of the methanotrophic archaeal group ANME-2 over ANME-1, and promote the rapid growth, high diversity and increased richness of bacterial communities. Our results provide direct evidence of the mechanisms by which deep AOM processes can affect carbon cycling in the deep biosphere and global methane biochemistry.

Introduction
The existence of microbial life in the deep subsurface has been known since ZoBell's studies in the 1930s [1] and was first proven in sediment cores during drilling in the 1980s [2]. However, whether the deep biosphere is the largest prokaryotic habitat on Earth remains an enigma because the estimations of cell numbers and biomass differ dramatically among sampling sites and counting techniques [3-5]. The importance of these deeply buried communities for driving carbon and nutrient cycling and for catalyzing a multitude of reactions among rocks, sediment and fluids is widely accepted [6]. Available reports demonstrate that the highest quantities of active prokaryotes are associated with diverse biogeochemical interfaces, e.g., highly organic-rich sediments such as the Mediterranean sapropels [7]; the sulfate methane transition zone (SMTZ), where anaerobic methanotrophy is the driving force behind local microbial activities [4]; and the deep hypersaline anoxic lakes, where the ecosystems are largely driven by sulfur cycling and methanogenesis [8]; an increase in microbial biomass was also observed in sediments with gas hydrates [9]. Recent developments in research on the marine methane cycle have shown that the anaerobic oxidation of methane (AOM) is the key microbial action responsible for methane turnover in the ocean and is the first step in making energy available to the local ecosystem [10].
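In sulfate-rich marine sediments this sink is generally attributed to AOM coupled to sulfate reduction; the commonly cited net reaction (a textbook formulation, quoted here for orientation rather than derived in this study) is:

CH₄ + SO₄²⁻ → HCO₃⁻ + HS⁻ + H₂O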
The AOM process has been proposed to involve reverse methanogenesis [11,12], which is coupled with sulfate reduction via anaerobic methanotrophic archaea (ANME) and diverse sulfate-reducing bacteria (SRB) [13,14]. Because ANME and SRB annually consume approximately 85% of oceanic methane production, the assessment of in situ AOM rates plays a vital role in global methane budget modeling [10]. Currently, estimations of in situ AOM rates are primarily based on methane/sulfate turnover rates measured using radioactive tracers. Long-term incubation under simulated conditions has not been frequently reported and has only been performed at shallow sediment depths near the SMTZ [10]. Marine mud volcanoes (MVs) are among the most spectacular seepage-related geomorphological structures and produce a strong outburst of methane-saturated geofluids from the deep subsurface. The development of a MV is related to strong lateral or vertical compressions of the Earth's crust that provoke deep-lying sediments to move upward [15]. Such emitted sedimentary material is called "mud breccia" and represents exclusively MV-related deposits [16]. The main gaseous component in the seeping fluids is methane, a strong greenhouse gas. In addition to methane, mixtures of wet gas, hydrogen sulfide, carbon dioxide, and petroleum products are often present. Such chemically complex allochthonous sedimentary and fluid mixtures incite the development of particular environments at and below the sea floor and fuel microbial processes that shape the community structure of the chemosynthetic seepage. MVs are known to exhibit environmental heterogeneity, which is directly related to the mode of MV eruptions and to the chemistry of the expelled products. Hydrocarbon-rich fluids expelled to the surface bring up methane, which is utilized as a carbon and energy source. Because AOM is the initial step in biological energy conversion within the local ecosystem, the bioavailability of methane directly determines the energy supply and the biomass in the sediment along the fluid migration pathway. Additionally, microbial AOM activity controls the storage of methane in the ocean. However, the distribution of active methane-consuming microbes, especially those that perform anaerobic methanotrophy along the sedimentary section of a MV, has yet to be sufficiently investigated. Furthermore, the organization of prokaryotic communities and their structure at varying sedimentary depths and in environments with variable methane concentrations has not been reported thus far. To test the potential for in situ AOM activity, we report the results of an in vitro incubation experiment under methane- and sulfate-rich conditions that was performed on freshly recovered mud breccia from the Ginsburg MV in the Gulf of Cadiz (Fig. 1). The availability of freshly erupted MV deposits allowed us to examine potential AOM activity at and below the SMTZ. We applied a vertical profile sampling strategy to reveal changes in AOM community structure and its spatial distribution at varying depths and to identify possible ecological factors that influence the community's metabolic behavior and dynamics.

Sampling site location and lithology
The sampling location is the Ginsburg MV (35°22.431′N; 07°05.291′W) in the Gulf of Cadiz. The field studies did not involve endangered or protected species and no specific permissions were required for these locations/activities.
The Gulf of Cadiz is an extensive embayment of the Atlantic Ocean from Cape Saint Vincent, Portugal, to the Gibraltar Strait on the southwestern coast of the Iberian Peninsula (Fig. 1). The area is known for its MVs [17-21], and the Ginsburg MV is located within the Western Moroccan MV Field of the gulf. A gravity core was taken, recovering in total 3.55 m of mud breccia from the crater of the MV (water depth ca. 910 m; core M2007-56), during a MicroSYSTEMS cruise of R/V Pelagia in 2007. The sampling location was the same one from which cores were collected during the TTR-9 (1999) and TTR-10 (2000) cruises undertaken for hydrocarbon gas and lipid biomarker studies [22,23]. The lithology, the presence of chemosynthetic tube worms at the surface layer and gas hydrates at depths below 1 m of sediment provided a mirror image of the previously recovered MV deposits taken during several TTR cruises [17,19-21].

Pore water sampling
The recovered sedimentary core was immediately cut into 1 m sections and opened in the cold lab container at +4 °C for subsampling. Pore water samples were obtained on board at +4 °C directly from mud breccia using a Rhizon Core Solution Sampler (Rhizosphere Research Products, Wageningen, the Netherlands). The sampler consisted of a 10 cm porous polymer tube, which was impermeable to bacteria to maintain the sterility of the samples, connected to a 10 cm PVC tube and a Luer-Lock connector attached to a standard 10 ml syringe. By drawing the piston, filtered pore water was collected into the syringe under vacuum. To assess sulfate levels, 1 ml of pore water was placed in a plastic vial and kept at −20 °C. For stable carbon isotope analysis of dissolved inorganic carbon (δ13C-DIC), 2 ml of pore water was preserved with 10 μl of saturated mercury chloride solution and stored in darkness at +4 °C in gas-tight glass vials without headspace. Sulfide was preserved in 5 ml glass vials containing 0.5 ml of pore water and 4.5 ml of 0.1 N NaOH solution, which was initially refluxed with N2 for 5 min. Samples with preserved H2S were stored at +4 °C.

In vitro incubation experiment
For the incubation experiment, mud breccia samples were collected directly at 4 °C. Sub-sampling was carried out along the total length of the core at 5 cm intervals using 50 ml sterile plastic syringes with cut tips. Samples were immediately sealed in trilaminate PEI aluminum bags (KENOSHA C.V., Amstelveen, the Netherlands) in a nitrogen atmosphere and stored at +4 °C until use. In onshore laboratories, the experiments were carried out in an anoxic N2 atmosphere in a glove box (Concept 1000, L.E.D. Techno NV, Belgium). In total, 26 sections along the 2.55 m of the mud breccia section were selected. From each layer, 30 ml of fresh mud volcano deposits were diluted 5 times with artificial seawater medium in a 250 ml Schott bottle closed with a gas-tight butyl stopper. The headspace was first flushed and filled with 12C-CH4 at 0.1 MPa of absolute pressure (the sum of gauge pressure and atmospheric pressure) and then pressurized up to 0.15 MPa of absolute pressure with 13C-CH4 (99 atom% 13C, Campro Scientific GmbH, Berlin, Germany). The bottles were placed on a shaker (80 rpm) at 15 °C in darkness. The total duration of the experiment was 176 days.
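This two-step pressurization fixes the fraction of 13C-labelled methane in the headspace, a quantity that the AOM rate calculation described in the next section depends on. A minimal sketch of that arithmetic in Python (the variable names are ours; the pressures are taken from the protocol above):

# Headspace labelling: flushed and filled with 12C-CH4 to 0.1 MPa absolute,
# then topped up to 0.15 MPa absolute with 13C-CH4.
p_12c_mpa = 0.10    # absolute pressure contributed by unlabelled CH4
p_total_mpa = 0.15  # final absolute pressure after adding 13C-CH4
f_13c = (p_total_mpa - p_12c_mpa) / p_total_mpa
print(f"13C-CH4 fraction of the headspace: {f_13c:.3f}")  # 0.333, i.e., 1/3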
Chemical analysis of the incubated samples
Slurry samples were taken after days 8, 21, 31, 45, 73, 102, 135, and 176 of incubation. At each selected interval, approximately 3 ml of slurry was collected through a syringe without opening the bottle, completely filled into two 1.5 ml Eppendorf tubes (Eppendorf, Hamburg, Germany), and centrifuged at 13,200 g for 2 min. The supernatant was collected and used for the determination of dissolved sulfide, sulfate and pH according to a previously described method [24]. The residue (~1.3 ml in each tube) was frozen at −20 °C for future DNA extraction. Gas analysis was performed on samples incubated for 8 and 176 days. The gas pressure inside the incubation bottle was measured with an INFIELD 7C Tensiometer (UMS, München, Germany). From each incubation bottle, 0.5 ml of gas was removed and immediately injected into a 12 ml evacuated gas sampling tube (Labco Limited, Buckinghamshire, UK). Next, the gas sampling tube was filled with helium up to atmospheric pressure. The CO2 concentration (including both 13C-CO2 and 12C-CO2) was quantified using a gas chromatograph (GC 14B, Shimadzu Corporation, Kyoto, Japan) equipped with a 2 m Porapak Q column (0.3 cm o.d., SS 80/100) and a pre-column (1 m) of the same material (both at 35 °C) and a 63Ni electron capture detector (ECD) at 250 °C. The ratio between 13C-CO2 and 12C-CO2 was quantified with an isotope ratio mass spectrometer (IRMS 20-20, Sercon Ltd, Cheshire, UK) coupled to a GC in a climatized room (21.0 ± 0.5 °C) using the same technical settings as described previously [25].

In vitro AOM and SR rate calculations
Both AOM and SR rates were expressed as nmol sulfide/CO2 production per ml of fresh sediment per day (nmol/ml rs/d). The turnover rate was calculated according to the following formula:

SR = sulfide produced / (incubation time × liquid volume)

For the AOM rate calculation, the total production of 13C-carbon species, i.e., 13C-CO2 in both liquid and gas phases, and 13C-HCO3− and 13C-H2CO3 in the liquid phase, was first calculated. Given that 1/3 of the initial headspace methane consisted of 13C-CH4 and 2/3 consisted of 12C-CH4, the overall AOM rate was calculated as follows:

AOM = 3 × total 13C-carbon produced / (incubation time × liquid volume)
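A minimal sketch of these two rate calculations in Python (the measurement values below are hypothetical placeholders, not data from this study; the factor of 3 corrects for the 1/3 labelling of the headspace methane):

# SR and AOM rates following the formulas above (nmol per ml per day).
incubation_days = 176.0
liquid_volume_ml = 150.0            # e.g., 30 ml mud breccia diluted 5x

sulfide_produced_nmol = 500_000.0   # hypothetical cumulative sulfide production
sr_rate = sulfide_produced_nmol / (incubation_days * liquid_volume_ml)

# Total 13C-carbon produced: 13C-CO2 in gas and liquid phases plus
# 13C-HCO3- and 13C-H2CO3 in the liquid phase (hypothetical value).
c13_total_nmol = 55_000.0
aom_rate = 3.0 * c13_total_nmol / (incubation_days * liquid_volume_ml)

print(f"SR  rate: {sr_rate:.2f} nmol/ml/d")
print(f"AOM rate: {aom_rate:.2f} nmol/ml/d")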
Prokaryotic community analysis of the incubated samples
DNA extraction. DNA extraction was performed on the samples taken on day 0 and day 176 at each sediment depth, in total 52 samples. The residue after chemical analysis was thawed and mixed, from which a 0.5 ml slurry was used to extract DNA using the Fast DNA Spin Kit for soil (Bio 101, Q-Biogene, Heidelberg, Germany) according to the manual supplied with the kit. The raw DNA was then purified with a DNA Purification Kit (Wizard, Promega, Madison, USA) and eluted to a final volume of 50 μl.

Terminal restriction fragment length polymorphism (T-RFLP) of the prokaryotic community. To amplify bacteria, the primers 27f-FAM and 907r (Table 1) were used under the conditions described in Table S1 and Table S2. The obtained PCR product was purified with a PCR purification kit (Qiagen, Hilden, Germany) and eluted to a final volume of 30 μl. The DNA concentration in the PCR products was quantified using a NanoDrop ND-1000 Spectrophotometer (Thermo Scientific, Wilmington, USA). Then, 100 ng of the purified PCR product was added to Tango buffer (Fermentas, Burlington, Canada) and digested with 2.5 units of the restriction enzyme MspI at 37 °C for 3 hours. After digestion, 100 μl cold ethanol (95%) was added, and the sample was incubated at 4 °C for 30 min to precipitate the digested DNA fragments. Subsequently, the samples were centrifuged at 14,000 g for 30 min. The pellet obtained from the precipitation was further washed with 100 μl cold ethanol (75%) and centrifuged at 14,000 g for 10 min. The supernatant was discarded, and the pellet was vacuum dried for 5 min using a Savant SpeedVac DNA 110 (GMI, Minnesota, USA). To amplify archaea, a nested PCR approach was applied. The first PCR was run with primers Arch21f/Uni1392r (Table 1) under the conditions described in Table S1 and Table S3. The nested PCR was run with the primers Arch21f-FAM and Arch958r (Table 1) using the product from the first PCR as a template under the conditions described in Table S1 and Table S4. The products were assayed on a 1% agarose gel. Bands of the correct length on the gel were cut and purified using the QIAquick Gel Extraction Kit (Qiagen, Hilden, Germany). The PCR product (100 ng) obtained from gel extractions was digested by 2.5 units of the restriction enzyme HhaI at 37 °C for 3 hours. The digested DNA fragments were then washed and dried according to the procedure described above. The resultant DNA fragments with fluorescent labels were analyzed at the Genetic Service Unit (University Hospital, Gent, Belgium). Statistical analysis of the patterns was performed using Bionumerics 5.1 software (Applied Maths, Sint-Martens-Latem, Belgium) [26]. Extracted data were processed for ecological interpretation in the following aspects: 1) Richness: the number of bands in one T-RFLP pattern was used as an indicator of community richness. 2) Community organization (Gini): calculated from the normalized area between a given Lorenz curve and the perfect evenness line [26]. This calculation yields a single value used to describe the degree of evenness of the community. 3) Similarity via depth: used to describe the rate of community changes with sediment depth using moving window analyses [26]. The value presented is the similarity of the T-RFLP patterns of two samples from neighbouring sediment depths. For example, the T-RFLP pattern of the sediment sample at 20 cm bsf was compared to that of the sediment sample at 10 cm bsf, and the similarity is shown as a percentage on the y-axis.
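A minimal sketch of these three indices in Python/NumPy (the profiles are random placeholders; Pearson correlation is used as one plausible stand-in for the Bionumerics similarity measure, whose exact definition is not reproduced here):

import numpy as np

# Hypothetical T-RFLP profiles: rows = sediment depths (top to bottom),
# columns = terminal-restriction-fragment bins, values = relative peak areas.
rng = np.random.default_rng(0)
profiles = rng.random((5, 40))

# 1) Richness: number of bands, i.e., peaks above a detection threshold.
richness = np.count_nonzero(profiles > 0.5, axis=1)

# 2) Community organization: Gini coefficient of the peak areas, i.e., the
#    normalized area between the Lorenz curve and the perfect-evenness line.
def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

organization = [gini(p) for p in profiles]

# 3) Similarity via depth: moving-window comparison of each profile with the
#    profile directly above it, expressed as a percentage.
similarity = [100.0 * np.corrcoef(profiles[i], profiles[i + 1])[0, 1]
              for i in range(len(profiles) - 1)]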
Cell identification and quantification of the incubated samples
The mud breccia slurry samples from the incubation experiment were taken at days 0 and 176 and fixed in 4% formaldehyde (1 part slurry to 3 parts formaldehyde) overnight at 4 °C. The fixed sample was then washed with PBS buffer twice and further diluted 2000 times with PBS. Next, 8 ml of diluted slurry was filtered onto a circular GTTP polycarbonate filter (0.2 μm, Millipore, Germany) with a diameter of 2.5 cm. Cell staining and catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) analysis were performed on the filter based on the protocol of Pernthaler et al [27]. DAPI (4′,6-diamidino-2-phenylindole) staining was used to assess the total cell count. The probes used to identify ANME groups and SRB are listed in Table 1. Cells were counted under a microscope (Zeiss, Carl Zeiss Microimaging GmbH, Germany) with 50 fields of view (140 μm × 90 μm) used for each hybridization. The detection limit of this method was 2×10⁵ cells (aggregates)/ml raw sediment.

Field observations and pore water profile
The sediment core from the Ginsburg MV reached 904 m of water depth and recovered 357 cm of sediment. When the core was cut open, voids, most likely from the decomposition of gas hydrates, were observed at depths of 53-58 cm, 111-113 cm, 128-133 cm, 173 cm, 193 cm, and 213-218 cm bsf. A thin layer consisting of a few mm of hemipelagic trapping was observed on the surface. In the pore water, sulfide was detected at 34-141 cm bsf (maximum 11.2 mM) and at 193 cm bsf (maximum 2.6 mM) (Fig. 2A). The sulfate concentration decreased from ambient seawater levels (i.e., 28.0 mM) to 4.0 mM within the top 12 cm of sediment and remained stable until 307 cm bsf; below this depth, a sharp increase was observed (Fig. 2B). The seawater chloride concentration (544 ± 2 mM) was measured in the top 12 cm of sediment. The chloride concentrations varied from 431 to 627 mM in the top 34 cm and remained relatively stable until 297 cm bsf, with an average value of 528 ± 21 mM (Fig. 2C).

In vitro SR-AOM activity at different depths
When methane and sulfate were supplied, sediments from different depths responded differently in terms of SR-AOM activities. Throughout the 176 days of the incubation period, two distinguishable active zones were formed. A shallow active zone was defined at 30-55 cm bsf, in which the sediments only showed low SR activity after 102 days of incubation (Fig. 3). The overall SR activities during the 176-day incubation period were in the range of 0.3-1.4 nmol/ml rs/d. When the SR was calculated between day 102 (when sulfide production was observed) and day 176 (the end point of the incubation), the rates were in the range of 0.6-3.2 nmol/ml rs/d. No detectable AOM activity was observed in these samples. A deeper AOM active zone was defined at 165-205 cm bsf. In the presence of methane and sulfate, this interval exhibited an immediate production of sulfide, which remained active throughout the incubation experiment (Fig. 3). The highest SR activity, 24.7 nmol/ml rs/d, was detected at a depth of 195 cm bsf. This sediment layer also showed AOM activity, with rates of 1.4-6.2 nmol/ml rs/d (Fig. 4A). The remainder of the mud breccia section revealed no SR or AOM activities (Fig. 3).

Community structure and dynamics
During the incubation experiment, diverse bacterial communities were described throughout the mud breccia sample at different sediment depths (Fig. 4B). Throughout the incubation period, the average similarity via depth of the bacterial communities increased from 68% at day 0 to 81% at day 176. Fig. 4C shows that the uppermost 40 cm exhibited moderate bacterial evenness (Gini), and these values increased with increasing depth, especially after incubation. In contrast, Fig. 4D shows that bacterial richness decreased with increasing depth within the top 75 cm both before and after incubation. Archaeal T-RFLP patterns could only be obtained from the uppermost 65 cm of sediment because, deeper in the core, archaeal cell concentrations are below the detection limit of the method applied in the present work. Compared with bacterial communities, archaeal communities showed much lower richness (Fig. 4G).

Cell quantification and identification
Cell counts based on DAPI staining demonstrated that the cell concentration in the shallow active zone (i.e., sediment from 55 cm bsf) was one log unit higher than that in the deeper active zone (i.e., sediment from 195 cm bsf) (Table 2). After incubation, a 70% decrease in total biomass was found at a sediment depth of 55 cm bsf. Here, the abundance of ANME-1 cells remained constant, and their relative concentration increased from 6% before to 20% after the incubation. The ANME-2 and SRB contents were just above the detection limit, and ANME-3 cells were not detected.
In contrast, the incubation had no effect on total cell numbers in mud breccia at 195 cm bsf but clearly stimulated the growth of ANME-2 and SRB, especially in the form of aggregates (diameter 2-10 μm) (Table 2).

Table 2. Cell and aggregate counts per ml of raw sediment before (day 0) and after (day 176) incubation at 55 and 195 cm bsf; detection limit 2×10⁵ (doi:10.1371/journal.pone.0113004.t002):

Group    Type        55 cm, day 0   55 cm, day 176   195 cm, day 0   195 cm, day 176
Total    Aggregate   5×10⁵          6×10⁵            9×10⁵           2×10⁶
ANME-1   Cell        6×10⁶          6×10⁶            <2×10⁵          2×10⁵
ANME-1   Aggregate   <2×10⁵         <2×10⁵           <2×10⁵          <2×10⁵
ANME-2   Cell        2×10⁵          4×10⁵            2×10⁵           <2×10⁵
ANME-2   Aggregate   <2×10⁵         <2×10⁵           <2×10⁵          8×10⁵
ANME-3   Cell        <2×10⁵         <2×10⁵           <2×10⁵          <2×10⁵
ANME-3   Aggregate   <2×10⁵         <2×10⁵           <2×10⁵          <2×10⁵
SRB      Cell        <2×10⁵         <2×10⁵           <2×10⁵          2×10⁵
SRB      Aggregate   <2×10⁵         <2×10⁵           <2×10⁵          1×10⁶

Multiple AOM active zones
The Ginsburg MV is located in the eastern part of the Gulf of Cadiz (Fig. 1), an active MV and fluid-venting region [23,28]. Mud breccia was collected from a relatively recent mudflow and it contained gas hydrates; the presence of gas hydrates is a known phenomenon for this MV [23]. For the last decade, the Ginsburg MV has been reported as an active structure, and its tectonic structure and geobiochemistry have been intensively studied by different programs and scientific groups [17,21-23,29-31]. In the present study, the sulfide and sulfate distribution profiles identify the location of the SMTZ at a depth of 30-70 cm bsf (Fig. 2). This result is in agreement with published data on hydrocarbon gases, pore water parameters and AOM/SR rate measurements from the same MV [23,31]. Meanwhile, Figs. 2B and 2C show that at a depth of ca. 190 cm bsf, the behavior of the sulfide and sulfate curves indicates the possibility of an additional AOM active zone. The sulfate profile clearly suggests an alternative to the seawater sulfate source, the nature of which has thus far not been elucidated. The occurrence of specific void-like structures resulting from the dissociation of gas hydrates was also documented within the same mud breccia interval. Accordingly, pore water parameters suggest two potentially active AOM intervals within the uppermost 2.5 m of mud breccia. The evidence of two separate active AOM zones is also supported by the in vitro incubation experiment in which additional methane and sulfate were supplied. Under these conditions, immediate and/or delayed SR was detected within similar sedimentary layers, i.e., at 30-55 cm bsf and at 165-205 cm bsf. Furthermore, despite the low sulfide concentrations in the pore water, mud breccia from the deep AOM zone showed immediate SR and AOM activity that was ten times higher than the activity in the AOM interval above (Fig. 4A). Therefore, the vertical distribution profiles of pore water form a valuable tool for targeting the potential AOM active zones but are not necessarily sufficient to quantify the rates of the process. Although measured in vitro activity is strongly affected by the incubation conditions, in vitro measurements remain a valuable indicator for understanding in situ microbial activity. We are aware that in this study the incubation was performed as single microcosms without replicates due to the biomass limitation, which may cause bias. Still, the sediment depths with AOM activity were clustered into two intervals, which is strong evidence for locating the AOM active zones. The in vitro experiments led to the hypothesis that, in the presence of the necessary electron donors and acceptors, anaerobic methanotrophy can be fuelled and sustained even at great sedimentary depths.
The discovery of multiple AOM active intervals in one sediment core suggests that deep AOM activity should not be overlooked in methane budget calculations. Based on the currently available data, the sources and sinks of oceanic methane are not balanced with the standing stock; see, for example, the measured and modeled specific turnover rates for the deep ocean estimated by Reeburgh [10]. The data on methane turnover rates used to calculate the budget were often generated from in situ or in vitro radioactive tracer incubations, a sensitive method but one restricted to surface sediment and short-term monitoring. Deep or delayed methanotrophic processes have therefore been largely ignored. Based on the experimental data in this study, the deep-layer AOM activity is one order of magnitude higher than that in the SMTZ. This deeply buried methane sink could at least partially close the gap in the current methane budget.

Response of ANMEs to methane and sulfate supply
Although ANME lipid biomarkers, especially of ANME-1, appeared to be present in every interval of the Ginsburg MV core down to 180 cm bsf [23], both the pore water profile and our incubation results lead to the conclusion that AOM activity is only present at certain sediment depths. It is not surprising that sediments all along the sampling core might have been exposed to a methane- and sulfate-rich environment during certain historical periods, because the discharge of hydrocarbon-rich fluid is a common phenomenon in the Gulf of Cadiz and the Ginsburg MV is characterized as an active structure with extensive mud diapirism and mud volcanism. The relatively broad distribution of ANME lipids along the MV deposits indirectly indicates methane flow and, thus, a migration or displacement of the SMTZ in the Ginsburg MV [23]. The AOM microbial activity in recent or ancient SMTZs cannot always be restored within a six-month period simply by providing fresh methane and sulfate under laboratory conditions, implying that the in situ development of the AOM community after methane- and sulfate-rich fluid migration is a long-term process. The incubation conditions in this study apparently favor the activity and growth of ANME-2 over ANME-1. An increase in ANME-2 cells and aggregates has been observed in both of the AOM active zones. Moreover, sediment from the deeper location, in which ANME-2 is dominant, has higher SR-AOM activity compared with that from the SMTZ, where ANME-1 is dominant. In fact, in addition to ANME-2, ANME-1 has been selectively enriched in the SMTZ; the ANME-1 cell numbers remained constant, whereas there was a 70% decay of total free-living cells (counted by DAPI staining, Table 2). Alternatively, this 70% decay of cells might have contributed to heterotrophic SR and other microbial processes; however, we did not measure such processes. This incubation result is in agreement with investigations from other researchers regarding the niche differentiation of ANME-1 and ANME-2. For example, it has been suggested that, compared to ANME-1, ANME-2 is dominant in environments with higher SR activity, lower flow rates, relatively elevated methane partial pressures and temperatures in the range of 10-15 °C [32-34]; such a pattern is similar to what we observed in our incubations, especially in the deeper active zone.

Community response to in vitro incubations
The incubation promoted a tendency for prokaryotic communities, especially bacterial communities, at different depths to converge (Figs. 4B and 4E).
Concurrently, the bacterial community richness also rose as a result of the incubation, even in sediment intervals without detectable SR activity (Fig. 4D). The entire incubation period lasted 176 days, at which point there was almost no overpressure inside certain incubation bottles; methane gas had been consumed by microorganisms and lost during sampling. During the incubation period, the ANME and SRB cells reproduced 1-3 times (calculated from the data in Table 2). Due to the extremely low growth rates of ANME and SRB (with doubling times of approximately 2 months), they are not major contributors in terms of abundance to overall prokaryotic community dynamics, despite the fact that they are key players in primary production in the cold seep ecosystem. The rapid changes in community structure and the increase in richness are thought to be driven by the increase in substrate diversity during incubation [35]. Initially, methane and CO2 were the only sources of additional carbon; later, the metabolic products of ANME and SRB and the decay of biomass allowed more complex carbon compounds to become available to the system. It must be taken into account that, due to the low biomass content of our samples, this T-RFLP analysis is unlikely to capture rare species. It has been proven that when a preconditioned community colonizes a familiar habitat, the community structure is more predictable [36]. It is therefore reasonable to believe that the species that were observed to increase using T-RFLP were also historically dominant. The increase in biodiversity suggests that a more complex metabolic network within local ecosystems was stimulated by the incubation, whereas the increase in similarity via depth suggests that the metabolic networks at different sediment depths share similar groups of microbes. Future work with higher-throughput sequencing, for example using pyrotags, could characterize such communities and their functions in more detail.

Concluding Remarks
In this study, we attempted to identify how the prokaryotic community would respond to changes in methane and sulfate intensities in Ginsburg MV sediment. After a long-term in vitro incubation, a deeply buried AOM active zone was discovered besides the SMTZ, where the AOM activity was one order of magnitude higher than that of the SMTZ. This discovery draws attention to the fact that potential AOM activity in the deep subsurface, even below the SMTZ, should not be ignored or underestimated in oceanic methane budget calculations. Moreover, the incubation condition, a highly reduced environment with high sulfate and methane concentrations but no flux or organic nutrient supply, caused a selective enrichment of ANME (especially ANME-2 rather than ANME-1) and SRB. Although ANME-2 and SRB are the main contributors to SR-AOM activity, bacteria, which may live on the organic compounds released from cellular metabolism and decay, play the major role in shaping the overall community structure. This study provides direct information regarding the spatial distribution and activity of archaea and bacteria in cold seep environments.
Innovation network structure, government R&D investment and regional innovation efficiency: Evidence from China
Based on panel data of 30 provinces in China from 2011 to 2019, this paper uses a two-stage DEA model to measure regional innovation efficiency, and then non-parametric tests are used to examine the impact of innovation network structure and government R&D investment on regional innovation efficiency. The results show that, at the provincial level, the innovation efficiency of regional R&D is not necessarily in direct proportion to the innovation efficiency in the commercialization stage: commercialization efficiency is not necessarily high in provinces with high technical R&D efficiency. At the national level, the gap between innovation efficiency in the R&D and commercialization stages is small, indicating that the development of national innovation efficiency is increasingly balanced. Innovation network structure can promote R&D efficiency but has no significant effect on commercialization efficiency. Government R&D investment helps to improve R&D efficiency but is not conducive to the improvement of commercialization efficiency. The interaction between innovation network structure and government R&D investment has compound effects on regional innovation efficiency; regions with an underdeveloped innovation network structure can increase government R&D investment to reach a higher level of R&D efficiency. This paper provides insights into how to improve innovation efficiency in different social network and policy environments.

Introduction
Innovation has become an important means to improve the competitiveness of a country or region [1]. As open innovation deepens, countries are actively integrating into the global innovation network, integrating science and technology resources, and increasing R&D spending. For example, in 2019, R&D spending in developed countries such as the United States, Germany and South Korea reached 3.2%, 3.2% and 4.6% of GDP, and their innovation indexes ranked 3rd, 9th and 11th respectively [2,3]. In contrast, from 2010 to 2019, China's R&D investment intensity increased from 1.7% to 2.2%, and its innovation index rose from 20th to 14th [2,3]. It can be seen that although China has made great progress in innovation input and output, there is still a certain gap compared with developed countries. Improving the efficiency of innovation and promoting the transformation of innovation in China, in light of both national conditions and global competition, remains a matter of great concern for researchers and policy makers. Innovation efficiency is a key index to measure innovation achievements, which plays a very important role in decision-making [4]. Therefore, domestic and foreign scholars have carried out research on the evaluation of regional innovation efficiency. However, many studies only focus on the efficiency evaluation from scientific and technological input to scientific and technological achievement output, and the evaluation of the transformation of scientific and technological achievements into economic benefits lacks depth [5]. Moreover, traditional research also ignores the fact that scientific research achievements may simultaneously play the roles of input and output in the innovation process, in which case the DMU has an intermediate structure [6].
In addition, although the existing research has explored the influencing factors of regional innovation efficiency in terms of economic development level [7], foreign direct investment [8], research and development subsidies [9], fiscal decentralization [10], human capital [11], financial development [12], industrial structure [13] and many other aspects, it has neglected the fact that all of these factors operate within the context of the innovation environment. Generally, enterprises have a higher willingness to innovate in a good innovation environment, while a bad innovation environment may cause the "innovation paradox" of high innovation demand and low innovation capability [14]. Innovation efficiency is mainly influenced by the social network environment and the policy environment. The social network environment emphasizes the influence of the relationships between innovation subjects on innovation efficiency. In order to obtain innovation resources beyond the boundaries of their own organizations, innovators establish stable and sustainable formal or informal links, forming the innovation network structure [15]. Research shows that the efficiency of regional innovation is directly affected by the structure of the regional innovation network. Innovation networks in regions with high innovation efficiency generally have the following characteristics: a large number of innovation network components, a high degree of network openness, smooth connections, and a strong sense of cooperation [16]. The policy environment emphasizes government support for innovation. The degree of government support is mainly measured by the intensity of government investment in R&D. Government R&D support is a policy tool to promote regional development, which is conducive to stimulating regional innovation potential [17]. The main means of government R&D support is the investment of R&D funds. Different types of government investment have different effects on innovation performance, but they all promote the improvement of regional innovation efficiency [18]. Data show that since the new century, China's R&D investment intensity has continued to rise; in terms of the scale of investment, China's total R&D funding now ranks second in the world. It can be found that existing studies have revealed the influence of innovation network structure and government R&D investment on innovation efficiency from different aspects, but there are still shortcomings. Firstly, the current research on innovation network structure, government R&D investment and regional innovation efficiency mostly analyzes innovation activities as a single process and does not reveal the mechanism of each influencing factor on innovation in different development stages. Secondly, most studies only analyze the impact of innovation network structure or government R&D investment on innovation efficiency, and seldom pay attention to the compound effect of the two, which leads to a lack of deep understanding of innovation efficiency. In areas with an underdeveloped innovation network structure, it is worth pondering whether the government can improve regional innovation efficiency by increasing R&D investment. To this end, this paper uses a two-stage DEA model to measure innovation efficiency in the R&D and commercialization stages in 30 Chinese provinces. This study verifies the combined effects of innovation network structure and government R&D investment on regional innovation efficiency with the help of nonparametric tests.
Finally, we empirically analyze the results of the study, hoping to provide theoretical reference and practical significance for China's innovation and development.

Regional innovation efficiency
The term "innovation efficiency" was first defined by the economist Joseph Schumpeter; it was introduced by Chinese scholars and has gradually attracted attention in China since the 1980s. At present, research on regional innovation efficiency in China mainly focuses on the analysis of its spatio-temporal evolution characteristics and influencing factors. Domestic research is mainly carried out at the national [1], economic zone [19], urban agglomeration [20], provincial [21] and other levels. Chen Kaihua and Guan Jiancheng [22] believe that in the process of innovation transformation, only one fifth of regional innovation efficiency is good, and at the same time, most regional technological R&D capacity and commercialization capacity are significantly uncoordinated. Zhao Kaixu et al. [1] found that there is a gradient difference in innovation efficiency among the eastern, central and western regions of China, specifically "eastern > central > western", but the growth rate of the central and western regions is greater than that of the eastern regions. Sheng Yanwen et al. [7] took the Yangtze River Delta, Pearl River Delta and Beijing-Tianjin-Hebei urban agglomerations as research objects, and concluded that the innovation efficiency of the Pearl River Delta is in the leading position, the innovation efficiency of the Yangtze River Delta has the largest growth rate, and the innovation efficiency of Beijing-Tianjin-Hebei is relatively backward. Liu Hanchu et al. [23] compared and analyzed the innovation efficiency of various provinces in China, and pointed out that innovation efficiency at the regional level shows a decreasing trend from east to west, and developed provinces such as Beijing, Shanghai, Guangdong and Tianjin belong to high-efficiency units, reflecting to a certain extent the coupling of economic development level and innovation efficiency. A large number of studies have confirmed the fact that there are differences in regional innovation efficiency in China. In terms of influencing factors, many scholars believe that regional economic development level, openness, government behavior, urbanization level, scientific and technological innovation resources and the innovation environment have good explanatory power for the change of innovation efficiency and the formation of differences [1,11,14,24]. The main methods of efficiency measurement are stochastic frontier analysis (SFA) and data envelopment analysis (DEA). Stochastic frontier analysis (SFA) is a parametric approach that decomposes changes in productivity into movements of the production possibility boundary and changes in technical efficiency; its most distinctive feature is that the error term used to measure production differentials is decomposed into a technical efficiency error and a random error, which avoids the effect of statistical errors on efficiency measurements [25]. However, with SFA, only one index can be used to represent the innovation output, while innovation is a comprehensive process with multiple inputs and multiple outputs.
Data envelopment analysis (DEA), by contrast, extends the concept of single-input, single-output engineering efficiency to the evaluation of the effectiveness of multi-input, multi-output decision-making units (DMUs); it can accommodate the multi-dimensional process of innovation and, while reducing error, simplifies the algorithm and avoids the influence of subjective factors [26]. Traditional DEA models treat innovation as one big system with only inputs and outputs, treating it as a "black box" when evaluating innovation efficiency [19]. This approach ignores the internal structure of the innovation system and its internal operating mechanism, reducing the accuracy of the evaluation of innovation efficiency [27]. The process of innovation first manifests itself in technological innovation, and then in the transformation of scientific and technological achievements into economic returns. Therefore, we choose a two-stage DEA model to evaluate innovation activities in two stages, technological R&D and commercialization, which helps to explain the innovation process of the regional innovation system.

Innovation network structure and regional innovation efficiency
The innovation network is a basic institutional arrangement adapted to systematic innovation [28]. In the context of the basic system, the geographical division of labor and the linkage of production enterprises, research institutes and higher education institutions constitute a regional organizational system that supports and generates innovation [27]. The key to innovation networks is to stimulate innovation capacity and gain competitive advantage, emphasizing cooperation and complementarity among multiple innovation actors while possessing the structural and resource characteristics of complex networks themselves [29]. Network features, such as network size, density, openness, and network structural holes, have become key factors in the technological capability growth and innovation efficiency improvement of enterprises [16,30]. Therefore, we believe that innovation network structure should be included in the study of innovation efficiency growth. First, the innovation network structure has the characteristic of proximity [30]. On the one hand, the geographical proximity of the innovation network structure promotes the collective action of enterprises, which in turn accelerates the knowledge flow between enterprises [31]. On the other hand, frequent face-to-face communication between enterprises close to each other deepens mutual trust, realizes resource sharing and improves innovation efficiency [32]. Geographic proximity also has a counter-effect, with excessive geographic proximity leading to spatial lock-in [33]. However, with the development of transportation and networks, social proximity allows innovation to gradually escape geographical constraints, and cross-regional cooperation among innovation subjects becomes possible. The social proximity of the innovation network structure is increasingly considered a key factor in promoting knowledge flow and improving network performance [34]. Therefore, we believe that regions with a developed innovation network structure have high R&D efficiency. Second, the innovation network links innovation factors, reduces innovation costs and accelerates knowledge diffusion [15].
The "optimality" orientation of R&D personnel and the "profit-seeking" orientation of R&D capital lead to the concentration of innovation elements in regions with a more complete innovation network structure, forming economies of scale and thereby promoting the regional innovation level [35]. Clusters with dense network structures have strong information transmission capabilities, and innovation agents can use innovation networks to obtain market information more easily and produce more market-oriented innovation outcomes [36]. The higher the density of the innovation network, the stronger the connection strength between network nodes, which indicates that the motivation for collaborative innovation among innovation subjects is stronger; complementary information, knowledge resources and so on may be shared to a greater extent among heterogeneous innovators, contributing to the enhancement of innovation capacity and thus promoting the efficiency of regional knowledge transformation [27]. Based on the above analysis, we believe that regions with a developed innovation network structure have high commercialization efficiency. Taken together, the innovation network structure can have an important impact on regional innovation efficiency. The closer the constituent subjects of the innovation network structure are to each other, the more conducive this is to the exchange of information and mutual learning of knowledge, and the output of more R&D results. The clustering of innovation factors can reduce innovation costs and thus increase the economic value of knowledge achievements. In general, proximity and low interaction costs increase the likelihood of achieving better innovation efficiency under the same conditions. The hypotheses of this paper are therefore as follows:
Hypothesis 1. Innovation network structure affects regional innovation efficiency.
Hypothesis 1a. Innovation network structure has a positive promoting effect on R&D efficiency.
Hypothesis 1b. Innovation network structure has a positive promoting effect on commercialization efficiency.

Government R&D investment and regional innovation efficiency
Government R&D investment is direct expenditure on specific targets selected by the government [37]. The investment objects of these funds can be divided into universities, research institutions and enterprises according to the innovation subject [38]. Different types of government investment have different impacts on innovation performance, but they all have an impact on regional innovation efficiency [18]. There is still no academic consensus on the effect of government R&D investment. Some scholars believe that government R&D investment has a positive incentive effect on regional innovation efficiency [39]. However, some scholars have also found that government investment in innovation activities contributes to the inertia of enterprises engaged in innovation activities, resulting in a crowding-out effect [40]. The reason for these two opposite conclusions may be that the innovation process is oversimplified: the process of innovation output can be divided into two stages, technological output and economic output, and the influence of government R&D investment on innovation efficiency may differ between the two stages. The first consideration is that the technology spillover of R&D activities makes it impossible for enterprises to capture all the returns on innovation factors, so the risk of market failure often arises in the process of innovation investment [41].
As a special resource, government R&D investment provides financial support for enterprises' R&D activities, which effectively reduces the R&D costs and risks of enterprises in technological innovation [42]. When selecting R&D investment projects, the government organizes experts and professional evaluation bodies to select projects according to relevant rules and regulations, thus ensuring the fairness of the evaluation process and the scientific soundness of funded projects [43]. This means government R&D investment can also be regarded as a credit endorsement, and enterprises can send positive financial signals to outside investors by obtaining government R&D investment at zero interest cost, thus enhancing investment confidence [42]. Therefore, we believe that regions with a high intensity of government R&D investment have high R&D efficiency. Second, according to Samuelson's classical theory of public goods, it is inevitable that there will be inefficiency when the government funds R&D activities with a certain "public goods" attribute [40]. Moreover, when governments subsidize innovation, they tend to focus on the social benefits of innovation rather than on the direct economic benefits [37]. Government R&D investment is mostly applied to projects with long-term strategic significance, but this often makes it difficult to convert scientific research results into economic benefits [44]. At the same time, the attention of government leaders mainly focuses on the selection of projects and neglects later supervision, so the efficiency of government R&D investment cannot be guaranteed [45]. When enterprises rely too much on government support, or when innovation is carried out mostly by public institutions, it is difficult for market regulation to play its role. Based on the above analysis, we believe that regions with a high intensity of government R&D investment have low commercialization efficiency. Taken together, increasing the intensity of government R&D investment can reduce firms' R&D risks, improve R&D confidence, and increase R&D outcomes. However, the directional nature of government R&D investment can weaken the role of the market in innovation and limit business innovation. The hypotheses of this paper are therefore as follows:
Hypothesis 2. Government R&D investment affects regional innovation efficiency.
Hypothesis 2a. Government R&D investment has a positive promoting effect on R&D efficiency.
Hypothesis 2b. Government R&D investment has a negative inhibiting effect on commercialization efficiency.

Innovation network structure, government R&D investment and regional innovation efficiency
This paper aims to explore the impact of different social network and policy environments on R&D efficiency and commercialization efficiency. The hypotheses above therefore concern differences in regional innovation efficiency under different innovation network structures and different backgrounds of government R&D investment. However, due to China's special political and economic background, the development of enterprises and the construction of networks inevitably depend on policy preferences. An isolated discussion of the roles of innovation network structure and government R&D investment cannot properly reveal complex social phenomena [46]. The nodes of the innovation network and the objects of government R&D investment are both innovation subjects, so it is necessary to consider the combined function of both.
The effectiveness of R&D investment depends on the interaction between local knowledge producers and knowledge users, and the more frequent the interaction, the stronger the impact on innovation [47]. Generally speaking, the innovation network structure of regions with a low level of economic development is relatively backward. However, some studies have found that innovation efficiency is relatively higher in regions with small innovation networks but high government R&D investment [48]. Based on the above viewpoint, this paper considers whether innovation can be attracted by increasing government investment, and whether small innovation networks can break out into high R&D efficiency when regional R&D investment is enough to catch up with big cities. The hypotheses of this paper are therefore as follows:
Hypothesis 3. Innovation network structure and government R&D investment have a compound effect on regional innovation efficiency.
Hypothesis 3a. R&D efficiency is relatively high in regions with an underdeveloped innovation network structure but high government R&D investment intensity.

Data sources
This paper selects panel data from 30 provinces of China (excluding Hong Kong, Macao, Taiwan and Tibet) from 2011 to 2019 to analyze the impact of innovation network structure and government R&D investment on regional innovation efficiency. The data mainly come from the Statistical Yearbook of Science and Technology of China, the Statistical Yearbook of China, the Evaluation Report of Regional Innovation Ability of China, and the National Economic and Social Development Statistics Bulletin of the People's Republic of China. Considering that there is a time lag between the input and output of innovation efficiency, this paper, after referring to relevant studies [49,50], adopts a two-year time lag between the input and the output of the first stage and a two-year time lag between phase I and phase II outputs. Therefore, this paper constructs data at five time points.

Measurement of innovation efficiency
There is so far no unified understanding of the selection of innovation efficiency index variables. Drawing on relevant literature [14,19,27], this paper constructs an index system for innovation efficiency evaluation. The internal expenditure of R&D funds is selected to reflect the actual regional investment of R&D funds, and the full-time equivalent of R&D personnel is used to represent the personnel investment of regional innovation. Intermediates are measured by invention patents and scientific papers. Per capita GDP and per capita disposable income can reflect the improvement of people's lives and the promotion of the regional economic level by scientific research achievements. Sales revenue of new products can reflect the direct economic benefits brought by the progress of science and technology to the relevant units. The transaction contract amount of the technology market reflects the level of transformation of technological achievements into market value. So we chose these four as the output indicators for the commercialization stage. The specific model for the two-stage efficiency evaluation is shown in the accompanying figure. Considering the complexity of the innovation process, this paper uses a two-stage DEA model to measure innovation efficiency [14]. There are n decision-making units, and each decision-making unit has m kinds of input vector X and s kinds of output vector Y. For any decision-making unit, the model representation takes the following form (1):

min θ - ε(ΣS⁻ + ΣS⁺)
s.t. Σⱼ λⱼXⱼ + S⁻ = θX₀
     Σⱼ λⱼYⱼ - S⁺ = Y₀
     λⱼ ≥ 0, S⁻ ≥ 0, S⁺ ≥ 0        (1)

In this model, S⁺ and S⁻ represent the output insufficiency and input redundancy of the decision-making unit, ε is a non-Archimedean infinitesimal, and θ is the efficiency evaluation value of the DMU. When θ = 1 and S⁺ = S⁻ = 0, the DMU is DEA-effective; if θ = 1 and S⁺ ≠ 0 or S⁻ ≠ 0, the DMU is weakly DEA-effective; if θ < 1, the DMU is not DEA-effective.
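A minimal sketch of solving this input-oriented model in Python with SciPy (the function name and toy data are ours; the slack variables are folded into inequality constraints, so the non-Archimedean ε term is omitted for brevity):

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    # Input-oriented CCR efficiency (theta) of DMU k.
    # X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs.
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # minimize theta
    A_in = np.c_[-X[:, [k]], X]            # sum_j lambda_j*x_j <= theta*x_k
    A_out = np.c_[np.zeros((s, 1)), -Y]    # sum_j lambda_j*y_j >= y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    bounds = [(0, None)] * (n + 1)         # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy example: 2 inputs (R&D funds, personnel), 1 output (patents), 4 DMUs.
X = np.array([[20.0, 30.0, 40.0, 30.0],
              [10.0, 15.0, 25.0, 30.0]])
Y = np.array([[100.0, 150.0, 160.0, 120.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])  # frontier DMUs score 1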
Measurement of innovation network structure
Based on previous studies on the structure of regional innovation networks [15,51,52], this paper evaluates the structure of the innovation network along four dimensions: network scale, network openness, network structural holes and network links. The network scale is reflected by three indexes: the number of universities, the number of research and development institutions and the number of industrial enterprises above designated size. Network openness is represented by the value of foreign technology import contracts and the number of foreign technology import contracts in each region. Three indicators are selected to reflect the network links: the self-raised funds of enterprises used for the intramural expenditure on R&D in universities, the self-raised funds of enterprises used for the intramural expenditure on R&D in research and development institutions, and the government funds used for the intramural expenditure on R&D in industrial enterprises above designated size. The network structural holes are characterized by the number of public libraries, the number of employment agencies and the number of contracts in the technology market. According to the above four aspects and the principles of data availability and representativeness, the evaluation index system of the innovation network structure is constructed (see Table 1). The max-min normalization method is used to make the data dimensionless, and the entropy weight method is used to calculate the index weights at all levels. Finally, the comprehensive value of the innovation network structure of each region is determined. The mathematical formulas of the entropy weight method can be found in Liang Lina and Yu Bo [15]; as the length of the article is limited, they are not repeated here.

Measurement of government R&D investment
This paper introduces the government R&D investment intensity index [48]. The DMUs are classified as government-oriented or non-government-oriented. The government R&D investment intensity index is calculated by dividing the proportion of government R&D spending in a region by the proportion of government R&D spending in the country. By combining government R&D expenditure data with national R&D expenditure data, we can consider the relative proportion of R&D funded by a regional government, which more comprehensively expresses the importance that the regional government attaches to innovation. The formula is as follows:

O = (RDE_kg / RDE_K) / (RDE_g / RDE)

where O is the government R&D investment intensity index, RDE is R&D expenditure in the nation, RDE_K represents R&D expenditure in the region, RDE_g represents R&D expenditure of the government, and RDE_kg represents R&D expenditure of the government in the region.
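A minimal sketch of the min-max normalization, entropy weighting and intensity index in Python/NumPy (the data matrix is a random placeholder standing in for the 30 provinces and the Table 1 indicators):

import numpy as np

rng = np.random.default_rng(1)
data = rng.random((30, 11))   # 30 provinces x hypothetical network indicators

# Min-max normalization (for positive indicators).
norm = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))

# Entropy weights: indicators with lower entropy (more dispersion across
# provinces) receive larger weights.
p = norm / norm.sum(axis=0)
p = np.where(p == 0, 1e-12, p)            # avoid log(0)
e = -(p * np.log(p)).sum(axis=0) / np.log(data.shape[0])
w = (1.0 - e) / (1.0 - e).sum()
network_score = norm @ w                  # composite network-structure value

# Government R&D investment intensity index O, as defined above.
def intensity_index(rde_kg, rde_k, rde_g, rde):
    # Regional government share of regional R&D spending, relative to the
    # national government share of national R&D spending.
    return (rde_kg / rde_k) / (rde_g / rde)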
When a parametric test is not applicable, nonparametric methods can use the data more effectively. Since the data in this paper do not conform to a normal distribution, this study uses nonparametric tests. The steps of hypothesis testing are as follows: ① The Kolmogorov-Smirnov test is performed on the above variables to test the normality of the data and confirm that nonparametric tests are appropriate. ② Because the Mann-Whitney U test is used to test whether there is a significant difference between two independent unpaired samples, we use the Mann-Whitney U test to test hypothesis 1 and hypothesis 2. ③ The Kruskal-Wallis H test, an extension of the Mann-Whitney U test, is used when there are more than two samples, so we use the Kruskal-Wallis H test to verify hypothesis 3. The above nonparametric test statistics and their mathematical formulas are detailed in Conover [54] and are not repeated here owing to the limited length of the article. Measurement of regional innovation efficiency. In the research on regional innovation efficiency, we use DEAP2.1 software to calculate the innovation data from 2011 to 2019. Before proceeding with the analysis, we examine the descriptive statistics of the innovation efficiency variables; the results show that the standard deviation of most of the variables is greater than their mean. As shown in Table 2, there are wide regional gaps in both inputs and outputs. Table 3 shows the efficiency of innovation at the technological R&D and commercialization stages by province. In the R&D stage, it can be seen that from 2013 to 2017 the R&D efficiency values of Beijing, Shaanxi and Gansu were all 1, showing a stable trend over the five-year period. This indicates that the R&D efficiency of these three regions reaches DEA efficiency and the allocation of innovation resources reaches the optimal state. The R&D efficiency of Jilin, Heilongjiang, Chongqing, Sichuan and Guizhou is close to 1, indicating that the investment in capital and manpower has been fully utilized. The R&D efficiency of Tianjin, Liaoning, Shanghai, Jiangsu and Zhejiang ranges from 0.6 to 0.9, indicating that the R&D efficiency of these regions is at a good level. The technological R&D efficiency of Hebei, Shanxi and other provinces is below 0.5, especially in Inner Mongolia, where the average R&D efficiency is 0.224. These regions need to improve the efficiency of knowledge innovation by improving the use of innovation resources and avoiding unnecessary waste. In the commercialization stage, Jilin, Guangdong, Qinghai and Ningxia have a commercialization efficiency of 1 in all five years, reaching DEA efficiency; Inner Mongolia and Hunan have a commercialization efficiency of 1 in four years; and Beijing, Hebei, Jiangxi, Hubei and Hainan also have an average commercialization efficiency close to 1. This indicates that the conversion rate from science and technology inputs to economic output is relatively high in these regions and that innovation resources are being fully utilized. Compared with the R&D efficiency in the first stage, the innovation efficiency of Hebei, Inner Mongolia, Qinghai and Ningxia improved greatly in the second stage. This indicates that although the efficiency of technological output in these regions is low, the efficiency of transforming research results into economic benefits is high.
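As a concrete illustration of the three-step procedure above (and of the paired Wilcoxon comparison used in the next subsection), all of these tests are available in scipy.stats. The data here are randomly generated placeholders, not the study's efficiency scores.

```python
# Sketch of the testing procedure with scipy.stats; all data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
eff_developed = rng.beta(5, 2, size=60)       # regions with developed networks
eff_underdeveloped = rng.beta(3, 3, size=60)  # regions with backward networks

# Step 1: Kolmogorov-Smirnov normality check against a fitted normal.
sample = np.concatenate([eff_developed, eff_underdeveloped])
ks = stats.kstest(sample, "norm", args=(sample.mean(), sample.std(ddof=1)))

# Step 2: Mann-Whitney U test for two independent groups (hypotheses 1 and 2).
mw = stats.mannwhitneyu(eff_developed, eff_underdeveloped,
                        alternative="two-sided")

# Step 3: Kruskal-Wallis H test across more than two groups (hypothesis 3).
groups = np.array_split(sample, 4)  # stand-ins for groups A-D
kw = stats.kruskal(*groups)

# Paired Wilcoxon signed-rank test: R&D vs commercialization efficiency.
rd_eff, comm_eff = rng.beta(5, 2, 30), rng.beta(5, 2, 30)
wc = stats.wilcoxon(rd_eff, comm_eff)

for name, r in [("KS", ks), ("Mann-Whitney", mw),
                ("Kruskal-Wallis", kw), ("Wilcoxon", wc)]:
    print(f"{name}: statistic={r.statistic:.3f}, p={r.pvalue:.4f}")
```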
Meanwhile, compared with their higher R&D efficiency in the first stage, the efficiency values of Heilongjiang, Sichuan, Shaanxi and Gansu in the second stage are significantly lower, indicating that the transformation rate of scientific and technological achievements is low, that the large original input of innovation resources is not effectively transformed into economic results, and that the contribution of scientific and technological innovation to regional economic growth is still far from adequate. The commercialization efficiency of Yunnan and Xinjiang is lower than 0.5. How to improve the overall level of regional innovation efficiency, which in turn drives economic development, is a pressing problem for these regions to solve. In general, technological R&D efficiency does not necessarily correlate with innovation efficiency in the commercialization stage. Provinces with high technological R&D efficiency, such as Heilongjiang and Gansu, do not have high commercialization efficiency. This suggests a new way to address the uneven distribution of regional innovation resources and the large gap in innovation efficiency: innovation resources can be allocated across regions at the national level to form complementary advantages between research-intensive regions and regions with broad markets. On the whole, the average R&D efficiency of China is 0.721, and the average commercialization efficiency is 0.732. The difference between the two is small, which indicates that the development of innovation efficiency is increasingly balanced in China. Testing the hypotheses. Before testing the hypotheses, we first test the normality of the data. The results show that P (Kolmogorov-Smirnov) < 0.001 for all indicators; that is, innovation network structure and government R&D investment do not conform to a normal distribution, so nonparametric tests are used in this paper. Then, we test whether the difference between R&D efficiency and commercialization efficiency is significant using the Wilcoxon signed rank test, a test for continuous variables with non-normal distributions. The test results are shown in Table 4. The results of the Wilcoxon signed rank test show that there is no significant difference between R&D efficiency and commercialization efficiency. That is to say, innovation efficiency in China has basically reached equilibrium between the two stages. Finally, we test the hypotheses and analyze the impact of innovation network structure and government R&D investment on regional innovation efficiency. We conducted the Mann-Whitney U test for hypothesis 1; the results are shown in Table 5. At the R&D stage, there is a significant difference in innovation efficiency between regions with different levels of development of the innovation network structure (P Mann-Whitney U = 0.011 < 0.05). DMUs with well-developed innovation network structures have a higher ranking and higher innovation efficiency values. This indicates that innovation network structure has a positive promoting effect on R&D efficiency, and hypothesis H1a was supported. This is mainly because R&D is carried out in economically powerful large enterprises, research institutes and universities, which are the constituent subjects of the innovation network.
Regions with well-developed innovation networks have innovation subjects of greater scale and stronger connections, which can better play the role of technological innovation. At the commercialization stage, the difference in innovation efficiency is not statistically significant (P Mann-Whitney U = 0.569 > 0.05), and hypothesis H1b was rejected. This indicates that the innovation network structure does not explain the differences in innovation efficiency at the commercialization stage. The reason for this phenomenon may be that the state encourages mass entrepreneurship and innovation, and in regions with poor innovation network structures, the support of state policy funds is more likely to promote the development of SMEs. Although SMEs have limited capability in technological innovation, they play a great role in transforming scientific research results into economic benefits; however, because of their limited innovation capability, we did not consider SMEs as main bodies of the innovation network in this study. In summary, hypothesis H1 is partially supported: there is a difference in innovation efficiency between regions with a developed innovation network structure and those with a less developed one, and the degree of development of the innovation network structure has a greater impact on regional R&D efficiency than on commercialization efficiency. We conducted the Mann-Whitney U test for hypothesis 2, and the results are shown in Table 6. There is a significant difference in innovation efficiency between regions with different intensities of government R&D investment in the R&D stage (P Mann-Whitney U < 0.001). The efficiency ranking of DMUs with high government R&D investment intensity is higher than that of DMUs with low intensity, and their mean efficiency is also higher. This indicates that government R&D investment has a significant positive effect on regional technological R&D, and hypothesis H2a was supported. Government R&D investment can make up for the lack of funds of innovation agents, reduce R&D risks, promote the improvement of regional innovation capacity, and have a positive driving effect on R&D innovation. In the commercialization stage, the results are reversed: DMUs with high intensity of government R&D investment rank low in efficiency and have low efficiency values (P Mann-Whitney U = 0.002 < 0.05). This indicates that government R&D investment has a significant negative effect on commercialization, and hypothesis H2b was supported. This implies that government intervention in R&D investment inhibits market dynamics and reduces commercialization efficiency. In summary, hypothesis H2 was supported: there is a difference in innovation efficiency between regions with high and low government R&D investment intensity, and regions with high government R&D investment intensity have high R&D efficiency but low commercialization efficiency. This finding is similar to that of Liu Xielin et al. [43], which further supports the validity of the test results. Combining the two factors of innovation network structure and government R&D investment, we divide the research area into four groups.
Group A denotes regions with a developed innovation network structure and high government R&D investment; group B denotes regions with a developed innovation network but low government R&D investment; group C denotes regions with an underdeveloped innovation network and high government R&D investment; and group D denotes regions with an underdeveloped innovation network and low government R&D investment. We performed the Kruskal-Wallis H test for hypothesis 3 and post hoc tests for each pair of groups, ranking the efficiency values of each group with the significance level adjusted for multiple comparisons. The test results are shown in Table 7. There are differences in innovation efficiency between the groups (P Kruskal-Wallis H < 0.05), so hypothesis H3 was supported. The ordering of R&D efficiency is A > C > B > D. Group A has the highest R&D efficiency, consistent with hypotheses H1a and H2a; that is, regions with an advanced innovation network structure and high government R&D investment have the highest R&D efficiency. Government R&D investment can stimulate regional R&D innovation through the innovation network via universities, R&D institutions, enterprises and other channels, and the two factors interact to promote R&D efficiency. The R&D efficiency of group C is second only to that of group A, and H3a was supported. This shows that in regions with underdeveloped innovation network structures, increasing government R&D investment can stimulate local innovation vitality and keep R&D efficiency at a high level. The R&D efficiency of groups B and D is poor; the innovation network structures of these two groups differ, but both have low government R&D investment, which shows that government R&D investment has more influence in the R&D stage and that the role of government should be played better. The ordering of commercialization efficiency is B > D > A > C. The commercialization efficiency of groups B and D is higher than that of groups A and C, consistent with H2b: regional commercialization efficiency is higher where government R&D investment is low. The government's role in transforming the economy is limited, and the market plays a decisive role in that process. The commercialization efficiency of group B is the highest, while that of group C is the lowest, which shows that the innovation network structure is the carrier of the innovation process and that innovation networks can amplify the impact of other factors on the innovation process. Conclusion. Revealing regional differences in innovation and identifying the factors that influence them are important research topics in the field of regional innovation. Using the two-stage DEA method, this paper measures the R&D and commercialization stages in 30 provinces of China and analyzes the spatial and temporal differences in R&D efficiency and commercialization efficiency. Building on previous research, this paper seeks to identify the key environmental factors that affect innovation efficiency. The influence of innovation network structure and government R&D investment on innovation efficiency is analyzed by nonparametric tests. The main conclusions are as follows: At the provincial level, R&D efficiency is not necessarily in direct proportion to commercialization efficiency; commercialization efficiency is not necessarily high in provinces with high R&D efficiency.
At the national level, the gap in innovation efficiency between China's R&D and commercialization stages is small, which indicates that the development of innovation efficiency is increasingly balanced. The test of hypothesis 1 shows that the innovation network structure has a positive effect on R&D efficiency but no significant effect on commercialization efficiency. This shows that establishing close links between the main bodies of innovation networks and breaking down the barriers to resource sharing will help to improve R&D efficiency. However, in the process of transforming knowledge achievements into economic output, the role of public innovators, such as universities and scientific research institutes, is negligible; the process depends more on the diffusion of knowledge in society and the participation of enterprises. The test results of hypothesis 2 show that government R&D investment helps to improve R&D efficiency but is not beneficial to commercialization efficiency. This shows that the government still plays a very important role in the innovation process. The government sends a positive signal to society by means of financial support, which increases the region's confidence in technological research and development. In the commercialization period, however, too much government intervention restrains the vitality of the market. Therefore, the government should position itself clearly, support R&D activities with long-term social benefits, and leave economic activities aimed at short-term benefits to the market. The test results of hypothesis 3 show that regions with underdeveloped innovation network structures can achieve a higher level of R&D by increasing government R&D investment. This demonstrates that the interaction between innovation network structure and government R&D investment has a compound effect on regional innovation efficiency. The government's financial support can attract more R&D organizations to the region and stimulate the technological R&D potential of innovation network agents, making them the driving force of regional innovation development; however, a web of innovators who rely too heavily on government direction can lose sight of what the market needs. Therefore, it is necessary to strike a proper balance between network construction and government R&D investment in order to ensure that knowledge results can be efficiently transformed into economic output. Implications. The results of this paper provide some insights into how to improve innovation efficiency in different social network and policy environments. To improve regional innovation efficiency, we need to build a sound innovation network structure: expand the scale of the innovation network and form a network composed mainly of enterprises, supplemented by universities, research institutions, governments and technology intermediaries; make innovation networks more open, fully absorb foreign investment, enhance the sharing of innovation knowledge and achievements, and break down trade barriers; and strengthen innovation network links and build the industry-university-research collaborative innovation mode, so as to improve the spontaneity, stability and sustainability of cooperation among innovation subjects.
We should increase the number of structural holes in innovation networks, improve the construction of science and technology service platforms, promote the free flow of innovation information, and reduce information asymmetry among innovation subjects. To better leverage the government's role, we need to optimize the innovation policy environment and allocate government R&D investment reasonably: actively exercise the government's functions in innovation strategy leadership, innovation environment construction and direct participation so as to lead the region to achieve innovation-driven development, and optimize the allocation of government R&D investment among innovation subjects. With the deepening of marketization, separate funding for research institutions should gradually be cut in favor of supporting the industry-university-research collaborative innovation mode. We should explore a coordinated supervision and restraint mechanism for innovation investment subjects, so as to improve the transparency of the government's behavior in allocating innovation resources on the one hand and ensure the efficient use of government R&D investment on the other. We divided the study areas into four groups: group A, where the innovation network structure is developed and government R&D investment is high; group B, where the innovation network structure is developed but government R&D investment is low; group C, regions with underdeveloped innovation networks and high government R&D investment; and group D, regions with underdeveloped innovation networks and low government R&D investment. The results of the nonparametric tests show that R&D efficiency is A > C > B > D and commercialization efficiency is B > D > A > C. Groups A and C have high R&D efficiency that does not translate into high economic output; these regions would be better served by markets and less government intervention. Product markets should be further developed so that prices formerly set by government control are instead determined by the market [55]. The investment strategy should be appropriately tilted towards start-ups and small businesses in strategic industries. Group B is less efficient in technological R&D; the focus there should therefore be on how to enhance R&D efficiency, strengthen the linkages among innovation agents, reduce the cost of interaction and break down the barriers to communication. We should improve the knowledge conversion rate of universities and scientific research institutes; to bring the knowledge spillover effect of universities and scientific research institutions into full play, each regional government should play the role of 'coordinator', guide industry, education and research parties in the region to cooperate and innovate, and increase the proportion of R&D funds in regional GDP [27]. There is great potential for improving R&D efficiency in group D, which is the region most in need of increased government investment in R&D. By sending positive signals, the government can attract research institutes and expand the scale of regional innovation networks. To sum up, the research results show that the combination of improving the innovation network structure and increasing government R&D investment can make regional innovation efficiency increase in a balanced way. Regions with low R&D efficiency need more support from government R&D investment.
Regions with low commercialization efficiency should pay attention to the regulating role of the market. In regions with an underdeveloped innovation network structure, we should consider how to attract innovators through government R&D investment and establish efficient innovation networks. Discussion. The contributions of this paper are as follows. First, we use the two-stage DEA to measure regional innovation efficiency, which has the advantage of dividing innovation activities into two stages, technological R&D and commercialization; by using the scientific and technological output of the first stage as the input of the second stage, the 'black box' of the innovation system is opened. Second, we analyze innovation efficiency in the technological R&D and commercialization stages from the perspectives of innovation network structure and government R&D investment, and use nonparametric tests to verify the differences in innovation efficiency at different stages of innovation activity between regions with different innovation network structures and levels of government R&D investment; we find that regions with a backward innovation network structure can also stimulate the development of innovation by increasing government R&D investment. Third, this paper shows that the combination of improving the innovation network structure and increasing government R&D investment can balance regional innovation efficiency, thereby providing guidance for the differentiated development of regional innovation in China. The conclusions of this paper provide some reference and guidance for theoretical research and practical application, but the study also has some limitations. First, we study the impact of only two key environmental factors, innovation network structure and government R&D investment, and do not fully consider the impact of other factors, such as industrial structure, level of economic development and foreign investment. Second, because of the difficulty of data acquisition, we take the province as the research unit and cannot analyze spatial differences within regions in more detail. In future research, we will carry out a fuller empirical analysis.
2023-05-24T05:05:19.446Z
2023-05-22T00:00:00.000
{ "year": 2023, "sha1": "72d110eba62ae25811be0eb052bdd1947ee6a791", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "72d110eba62ae25811be0eb052bdd1947ee6a791", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
253458468
pes2o/s2orc
v3-fos-license
Significance of 5-Aminosalicylic Acid Intolerance in the Clinical Management of Ulcerative Colitis Background: Two major types of 5-aminosalicylic acid (5-ASA)-containing preparations, namely, mesalazine/5-ASA and sulfasalazine (SASP), are currently used as first-line therapy for ulcerative colitis. Recent reports show that optimization of 5-ASA therapy is beneficial for both patient outcomes and healthcare costs. Although 5-ASA and SASP have good efficacy and safety profiles, clinicians occasionally encounter patients who develop 5-ASA intolerance. Summary: The most common symptoms of acute 5-ASA intolerance syndrome are exacerbation of diarrhea, fever, and abdominal pain. Patients who discontinue 5-ASA therapy because of intolerance have a higher risk of adverse clinical outcomes, such as hospital admission, colectomy, need for advanced therapies, and loss of response to anti-tumor necrosis factor (TNF) biologics. When patients develop symptoms of 5-ASA intolerance, the clinician should consider changing the type of 5-ASA preparation. Recent genome-wide association studies and meta-analyses have shown that 5-ASA allergy is associated with certain single-nucleotide polymorphisms. Although there are no modalities or biomarkers for diagnosing 5-ASA intolerance, the drug-induced lymphocyte stimulation test can be used to assist in the diagnosis of acute 5-ASA intolerance syndrome with high specificity and low sensitivity. This review presents a general overview of 5-ASA and SASP in the treatment of inflammatory bowel disease and discusses the latest insights into 5-ASA intolerance. Key Messages: 5-ASA is used as first-line therapy for ulcerative colitis. Optimization of 5-ASA may be beneficial for patient outcomes and healthcare systems. Acute 5-ASA intolerance syndrome is characterized by diarrhea, fever, and abdominal pain. Periodic renal function monitoring is recommended for patients receiving 5-ASA. The first-line medication for induction and maintenance therapy in patients with mild to moderate UC is a 5-aminosalicylic acid (5-ASA)-containing preparation [3-6]. Two main types of 5-ASA-containing therapies, namely, mesalazine (5-ASA) and sulfasalazine (SASP), are currently available in clinical practice. In addition, 5-ASA-containing therapies such as SASP are used for arthritis accompanying UC [3]. Recommendations for 5-ASA induction and maintenance therapy in patients with Crohn's disease vary across countries because of the limited evidence compared with that for UC [3,7,8]. SASP is used to treat UC as well as some cases of Crohn's disease, particularly active Crohn's colitis [3]. SASP is metabolized to 5-ASA and sulfapyridine and has anti-inflammatory effects in the colon. Because 5-ASA is the active ingredient of SASP, the efficacy of SASP is similar to that of 5-ASA in the treatment of inflammatory bowel disease (IBD). However, sulfapyridine, a metabolite of SASP, sometimes causes adverse events, such as headache and vomiting [9] (Fig. 1). These SASP-specific adverse events may explain why the tolerability of SASP is lower than that of 5-ASA (risk ratio: 0.48; 95% confidence interval: 0.36-0.63) [10]. 5-ASA is administered to patients with UC as an oral formulation and as topical 5-ASA-containing enemas and suppositories. An appropriate formulation of 5-ASA should be chosen according to disease location and patient adherence.
A suppository formulation of 5-ASA should be considered as first-line therapy for patients with mild to moderately active proctitis, and an enema formulation for those with distal colitis. [Fig. 1. Intestinal anti-inflammatory effects of 5-ASA. SASP and 5-ASA are two types of 5-ASA-containing therapies used to treat IBD. SASP is metabolized by bacterial enzymes, and sulfapyridine and 5-ASA are released. Most SASP-related adverse events, such as headache, dizziness, and fever, are considered to be associated with sulfapyridine. Although 5-ASA is effective, several rare but clinically significant adverse events, such as nephrotoxicity, pancreatitis, and pericarditis, have been documented. After ingestion, 5-ASA is rapidly acetylated to N-acetyl-5-ASA in intestinal epithelial cells and to a lesser extent in the liver and is excreted mostly in the feces and to a lesser extent in the urine. 5-ASA, 5-aminosalicylic acid; PPARγ, proliferator-activated receptor gamma; ROS, reactive oxygen species; SASP, sulfasalazine.] Oral 5-ASA alone is also effective for induction of remission in patients with active proctitis and distal disease [3]. In patients with left-sided or extensive colitis, administration of oral 5-ASA alone or in combination with an enema formulation is recommended for induction of remission [2,9]. Daily oral 5-ASA doses of ≥2 g are more effective than lower doses for induction and maintenance of remission [11]. When remission has not been achieved on oral or topical 5-ASA alone, a combination of oral and topical 5-ASA is recommended [3]. One study showed that the local concentration of 5-ASA and its acetylated form, N-acetyl-5-ASA, in the sigmoid colon was significantly correlated with endoscopic remission in patients with UC regardless of the formulation of 5-ASA used [12], suggesting the importance of optimizing the local concentration of 5-ASA. Indeed, a subsequent study showed that optimization of 5-ASA therapy by maximizing the oral dose and/or combining oral and topical formulations of 5-ASA had clinically beneficial effects in terms of reducing the dose of systemic corticosteroids and the need for advanced therapies [13]. Therefore, optimization of 5-ASA therapy is considered beneficial in terms of both patient outcomes and healthcare costs in the treatment of UC, by minimizing the unnecessary introduction of molecular targeted immunosuppressive agents and the adverse events associated with their use. Mesalazine/5-ASA is available in several oral formulations. The time-dependent release formulation of 5-ASA contains microgranules covered by a semipermeable ethyl cellulose membrane [14]. This formulation slowly releases 5-ASA from the duodenum to the ileum in a time-dependent manner, and the drug is absorbed by the ileal and colonic mucosa [15]. Therefore, this formulation is used for both UC and Crohn's disease. In contrast, the pH-dependent release formulation releases 5-ASA when the threshold pH value is exceeded during gastrointestinal transit from the small intestine to the large intestine [16]. In the terminal ileum, where the pH is >7, the pH-dependent release formulation of 5-ASA starts to dissolve and exerts its effect mainly in the large intestine. The Multi-Matrix System 5-ASA formulation also begins dissolution in the terminal ileum and has advantages in terms of efficacy and tolerability [17].
Although it remains unclear whether high doses of 5-ASA or N-acetyl-5-ASA in the local mucosa exert a therapeutic effect in patients in remission, or whether absorption and metabolism are higher in healthy colonic epithelia than in inflamed epithelia [12,18], it is reasonable to speculate that an adequate amount of 5-ASA in the colonic mucosa contributes to therapeutic effects in patients with colitis. This notion is supported by previous studies showing that poor adherence is a risk factor for flares [19]. There is no significant difference in the long-term risk of flares between patients with a low or high average daily dose of 5-ASA [20], suggesting that the requirement for high-dose 5-ASA may need to be stratified by disease behavior, biomarkers, and endoscopic findings. Indeed, optimization of 5-ASA results in better clinical outcomes [13]. It is important to improve adherence by selecting the appropriate drugs for individual patients and optimizing the dose and preparation of 5-ASA according to disease activity. Definition of 5-ASA Intolerance. SASP and 5-ASA are prescribed worldwide for the treatment of mild to moderate UC. However, adverse events occur in a small but substantial number of patients with IBD, especially UC. Mild adverse events include headache, skin rash, and gastrointestinal symptoms, such as nausea, abdominal pain, and diarrhea. More severe adverse events include nephrotoxicity, hepatic dysfunction, pancreatitis, pericarditis, pneumonitis, severe skin reactions, such as Stevens-Johnson syndrome and toxic epidermal necrolysis, and acute gastroenteropathy, often referred to as 'acute 5-ASA intolerance syndrome' or 'acute mesalazine intolerance syndrome' [21]. There is no clear definition of or diagnostic criteria for acute 5-ASA intolerance syndrome; it is commonly diagnosed by its clinical features (e.g., fever, diarrhea, and abdominal pain), which typically occur 1-3 weeks after starting 5-ASA and resolve within a few days after discontinuation of the drug or administration of systemic corticosteroids [22-24]. Therefore, it is better to define such clinical manifestations as acute 5-ASA intolerance syndrome to distinguish them from other adverse events, such as nephrotoxicity, which is usually observed within the first year of treatment with 5-ASA but sometimes even several years later [25]. Adverse events associated with acute 5-ASA intolerance syndrome are observed in about 5-10% of patients with UC [23,24,26]. Acute 5-ASA intolerance syndrome has been reported as an allergic reaction that appears a few weeks after starting the medication, but precise diagnosis is difficult because the symptoms and endoscopic findings are similar to those of an exacerbation of UC [22]. A more recent study found that adverse effects appeared approximately 10 ± 5 days after the start of 5-ASA and that second and subsequent attacks occurred 2 ± 1 days after re-exposure [26]. Symptoms of acute 5-ASA intolerance syndrome are diverse; fever, diarrhea, abdominal pain, headache, arthralgia, and fatigue have been reported, but the most common are fever, diarrhea, abdominal pain, and bloody stool [24]. Thus, the symptoms of acute 5-ASA intolerance syndrome and exacerbation of UC are very similar, and it is difficult to distinguish whether they are caused by acute 5-ASA intolerance, a flare of the primary disease, or insufficient therapeutic effects of 5-ASA.
However, it is important to be able to recognize acute 5-ASA intolerance syndrome so that discontinuation of 5-ASA is not delayed. Fever (>38°C) and an elevated C-reactive protein concentration (>30 mg/L) appear in most patients with acute 5-ASA intolerance, even if the UC is mild. Considering that fever and an elevated C-reactive protein level are hallmarks of moderate to severe UC, the presence of fever despite only mild disease activity might be a useful clue to distinguish acute 5-ASA intolerance syndrome from exacerbation of the primary disease [26]. Diagnosis of 5-ASA Intolerance and Biomarkers. There is no perfect test that can confirm or rule out acute 5-ASA intolerance; therefore, it is important for the attending physician or surgeon to keep in mind the possibility of acute 5-ASA intolerance syndrome to ensure timely diagnosis. There are no significant differences in the history of drug allergy, disease activity, or concomitant medications between patients with and without acute 5-ASA intolerance. However, female sex, age younger than 60 years, and pancolitis have been reported to be risk factors for acute 5-ASA intolerance syndrome [23]. Patients who discontinue 5-ASA drugs because of intolerance are at higher risk of hospital admission, refractoriness to TNF inhibitor therapy [27], and colectomy [28]. The drug-induced lymphocyte stimulation test (DLST), traditionally used to diagnose type IV allergy, has also been used in an attempt to diagnose acute 5-ASA intolerance syndrome, based on the hypothesis that reactivation of T cells is also involved in allergic reactions [29]. In some hospitals and clinics, the DLST and lymphocyte transformation test are used as auxiliary methods for diagnosis of acute 5-ASA intolerance. In the DLST, purified peripheral blood mononuclear cells from the patient are cultured with the culprit drugs, and ³H-thymidine uptake by proliferating lymphocytes is measured. If T cells that have been sensitized to a specific antigen are present, cell proliferation in the drug-stimulated samples is upregulated when compared with unstimulated samples [30]. The precise mechanisms via which acute 5-ASA intolerance syndrome develops have not been fully clarified; however, this syndrome is thought to be an immune response to 5-ASA, sulfapyridine, and other drug excipients. It has been suggested that acute 5-ASA intolerance syndrome is caused by povidone, which is an excipient in all 5-ASA preparations other than the Multi-Matrix System [31]. Therefore, discrepancies in the results obtained using different preparations of 5-ASA and SASP can be explained by reactions to the drug derivatives or excipients of a specific preparation of 5-ASA, or merely by the limited sensitivity of the DLST. Some issues remain to be resolved regarding the use of the DLST as a biomarker for acute 5-ASA intolerance syndrome, despite its advantages in the clinical setting. First, the most appropriate time point at which to perform this examination is controversial: the timing of the test has not been consistent in the reported studies and has varied from patient to patient. Second, the sensitivity of the test decreases after administration of corticosteroids because T-cell activation is suppressed. Third, the best cutoff point for diagnosing acute 5-ASA intolerance syndrome remains to be established because the results differ depending on the culprit drug. Finally, obtaining the results of the test takes a few weeks in many hospitals.
If acute 5-ASA intolerance syndrome is suspected, we usually stop 5-ASA without waiting for the DLST result. The reported sensitivity of the DLST for acute 5-ASA intolerance syndrome is 0.24, the specificity is 0.81, the false-positive rate is 0.195, and the false-negative rate is 0.76 [32]. These results suggest that a positive 5-ASA DLST result indicates a high probability of acute 5-ASA intolerance syndrome but that a negative result does not rule it out. Therefore, the DLST can be used to assist in making a definite diagnosis of acute 5-ASA intolerance if positive, but not to exclude it if negative. Although the DLST provides clinically significant information when deciding whether or not to resume treatment with 5-ASA, further research is needed to optimize its application to acute 5-ASA intolerance syndrome. At present, the diagnosis of acute 5-ASA intolerance syndrome is based on a detailed review of the patient's medical history. Treatment of Patients with UC and 5-ASA Intolerance. When acute 5-ASA intolerance syndrome is suspected clinically within a few weeks of starting 5-ASA as treatment for UC, discontinuation should be considered. The symptoms of this syndrome usually resolve promptly after cessation of the drug. However, acute aggravation, progression to toxic megacolon, and complications of enteropathogenic bacteria, Clostridioides difficile, and cytomegalovirus should be considered in patients with an unfavorable disease course after withdrawal of 5-ASA (Fig. 2). Such patients may require advanced induction therapies, such as systemic corticosteroids and/or antibacterial or antiviral treatment. When the symptoms of acute 5-ASA intolerance syndrome have improved, there are three possible options for maintenance treatment: '5-ASA switching', alternative treatment, and desensitization to 5-ASA. The 5-ASA switching strategy entails changing from one preparation to another in an effort to improve clinical outcomes or avoid adverse reactions, including acute 5-ASA intolerance syndrome. In our cohort of 59 patients with UC, 44% (n = 26) of those who were intolerant to one or more 5-ASA preparations could be maintained on another, whereas 19% (n = 11) were intolerant to two 5-ASA preparations and 3% (n = 2) were intolerant to three [27]. Although larger-scale clinical studies are needed to confirm the efficacy and safety of using a second or third preparation, our data suggest that this switching strategy may allow patients with UC to be maintained on 5-ASA therapy. Sometimes, 5-ASA switching is considered from the perspectives of the efficacy, safety, cost-effectiveness, and non-immunosuppressive properties of 5-ASA. Notably, patients may experience recurrent symptoms of 5-ASA intolerance, especially when systemic corticosteroid therapy is used and the dose is tapered. Therefore, patients in whom 5-ASA therapy is resumed should be monitored carefully, particularly those receiving concomitant systemic corticosteroids. Another option to consider is an alternative treatment. Patients with IBD generally require treatment for both induction and maintenance of remission, and therapies for maintenance of remission should be carefully selected. Importantly, corticosteroids should not be used as maintenance treatment for UC even if they are required for induction of remission [33]. Desensitization to 5-ASA has also been attempted, and several protocols similar to those used for desensitization to other salicylate drugs have been proposed.
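To make the DLST's diagnostic performance above concrete, the reported sensitivity (0.24) and specificity (0.81) [32] can be converted into post-test probabilities with Bayes' rule. The sketch below is illustrative only: the 8% pre-test probability is a hypothetical figure chosen from within the 5-10% incidence range cited earlier, not a value from any study.

```python
# Post-test probability of acute 5-ASA intolerance given a DLST result,
# from sensitivity 0.24 and specificity 0.81 [32]. The pre-test probability
# (0.08) is an assumed illustration, not a study value.
def post_test(sens, spec, pretest):
    p_pos = sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    p_neg = ((1 - sens) * pretest
             / ((1 - sens) * pretest + spec * (1 - pretest)))
    return p_pos, p_neg  # P(disease | positive), P(disease | negative)

after_positive, after_negative = post_test(0.24, 0.81, 0.08)
print(f"P(intolerance | positive DLST) = {after_positive:.3f}")  # ~0.099
print(f"P(intolerance | negative DLST) = {after_negative:.3f}")  # ~0.075
```

At this assumed prevalence, a negative result leaves the probability of intolerance almost unchanged (about 0.075 versus 0.08 before testing), which illustrates numerically why a negative DLST cannot exclude the diagnosis.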
Although there is no well-established strategy for safe 5-ASA desensitization in patients who are intolerant to 5-ASA, a desensitization protocol with re-administration of 5-ASA that had a success rate of up to 60% has been reported [23]. Allergic disease experts may consider re-administration of 5-ASA starting with only a small amount, with the patient's informed consent regarding its efficacy and the possible allergic reactions and adverse events, including anaphylaxis. However, there is no standardized protocol for 5-ASA desensitization therapy, with daily doses ranging from less than 1 mg to 500 mg and treatment durations ranging from a few days to 6 months. Most gastroenterologists and general practitioners do not consider 5-ASA desensitization as first-line therapy for patients with UC who are intolerant to 5-ASA, in view of the availability of other treatment options and the difficulty of implementing a protocol that starts with micro-doses of 5-ASA, takes several weeks, and sometimes results in serious adverse effects (Fig. 2). In addition to acute intolerance, several other adverse events, including nephrotoxicity, pancreatitis, pericarditis, and pneumonitis, have been documented as rare but clinically significant disorders associated with 5-ASA [22,25]. Two British studies reported a very low incidence of 5-ASA-induced renal impairment (1.0 per 4,000 or 1.7 per 1,000 person-years). Furthermore, nephrotoxicity has been reported in animals treated with 5-ASA and in patients treated with drugs chemically similar to 5-ASA, such as acetylsalicylic acid. Therefore, a causal association between 5-ASA and nephrotoxicity is likely, although this issue remains controversial in view of a recent large-scale cohort study, also performed in the UK, that showed no significant association between use of 5-ASA and nephrotoxicity [34]. Furthermore, IBD itself is a risk factor for interstitial nephritis. However, most cases of renal impairment characterized by elevated creatinine resolve with timely drug discontinuation [25]. Periodic renal monitoring of patients on 5-ASA therapy for UC may help with early diagnosis of renal dysfunction and allow timely cessation of 5-ASA, thereby preventing progression to renal failure, which may occur even after discontinuation. The serum creatinine concentration and estimated glomerular filtration rate are reliable indices of renal impairment and the most commonly measured. Although there is no evidence-based method or protocol for monitoring nephrotoxicity in patients with UC on treatment with 5-ASA, measuring the serum creatinine concentration before starting treatment and periodically thereafter has been proposed in previous studies [25] (Fig. 2). Latest Insights into Acute 5-ASA Intolerance Syndrome. 5-ASA is absorbed by cells in the intestinal epithelium and has anti-inflammatory effects on the intestinal mucosa. Although its mechanism of action remains unclear, 5-ASA is thought to scavenge free oxygen radicals produced by macrophages and neutrophils that damage the intestinal epithelium, and to suppress synthesis of leukotrienes, thereby inhibiting migration of inflammatory cells [35]. In epithelial cells in the colon, a large proportion of 5-ASA is acetylated to N-acetyl-5-ASA by the N-acetyltransferase 1 (NAT1) enzyme and excreted in feces. The remainder of the 5-ASA is converted to N-acetyl-5-ASA by NAT1 in the liver and excreted into urine.
Some variants of NAT1 and its isozyme, NAT2, have been reported, and these variants may contribute to the efficiency of acetylation. Although an association between NAT2 variants and SASP-related adverse events has been reported [36], NAT2 genes are not relevant to the metabolism of 5-ASA, and whether NAT1 is associated with acute or chronic 5-ASA intolerance remains unclear. A recent study revealed an association between genotypes of rs144384547 and 5-ASA-induced fever and diarrhea [24]. rs144384547 is located upstream of the regulator of G protein signaling 17 gene (RGS17), which encodes the RGS17 protein; this protein binds directly to G protein α-subunits and attenuates signaling activity. Patients with IBD carrying the rs144384547-G allele were found to have a higher risk of developing 5-ASA-induced fever and diarrhea (n = 9/41, 22.0%) than those who were homozygous for the common allele rs144384547-C (n = 53/2,269, 2.34%) [24]. Notably, the rs144384547-G allele contributed to the risk of developing concomitant fever and diarrhea but not to developing either fever or diarrhea alone. It is also interesting that a single-nucleotide polymorphism located in the ADRA1A gene, which encodes a G protein-coupled α1 adrenergic receptor, was found to be a candidate allele, although this finding did not reach statistical significance [24]. Further studies are expected to establish genomic tests that will help predict the occurrence of 5-ASA intolerance and allow selection of the most appropriate 5-ASA therapy for specific patients. Although the precise mechanisms underlying the development of acute 5-ASA intolerance syndrome have not been fully clarified, this syndrome is attributed to an allergic reaction to preparations containing 5-ASA. Thymus and activation-regulated chemokine and serum-specific immunoglobulin E antibody are used as diagnostic biomarkers for type I allergic diseases, such as atopic dermatitis, asthma, and allergic rhinitis; however, their value for diagnosing acute 5-ASA intolerance syndrome has not been demonstrated [37]. There is a growing body of evidence showing that the intestinal microbiota contributes to the development and persistence of IBD [38,39]. We have investigated the relationship between 5-ASA intolerance and the gut microbiota by analyzing the fecal microbiota in 124 patients with UC in remission (12 with previous acute 5-ASA intolerance syndrome and 112 who were 5-ASA-tolerant). Although there was no significant difference in the diversity of the gut microbiota between the two groups, the taxonomic profile showed greater abundance of the phylum Firmicutes and lower abundance of the phylum Bacteroidetes in the patients with previous acute 5-ASA intolerance syndrome. This suggests that alteration in the gut microbiota, or gut dysbiosis, may be involved in acute 5-ASA intolerance syndrome and may be associated with a worse clinical outcome, including early discontinuation of TNF inhibitor therapy [27] or colectomy [28]. Conclusion. Despite recent therapeutic advances, 5-ASA-containing preparations remain the first-line therapy for UC, and optimization of 5-ASA therapy is a safe and efficacious treatment option. However, recent reports indicate that a substantial proportion of patients (up to 10%) treated with these therapies experience symptoms of acute 5-ASA intolerance.
It has been suggested that acute 5-ASA intolerance syndrome is associated with a poor prognosis in patients with UC, but it remains unclear whether the occurrence of this syndrome is in itself an independent poor prognostic factor or whether there are as yet unknown shared mechanisms that provoke both acute 5-ASA intolerance syndrome and difficult-to-treat disease subtypes. Furthermore, there is a lack of high-quality evidence as a result of the lack of consensus regarding the definition of 5-ASA intolerance, and acute 5-ASA intolerance syndrome is difficult to diagnose because of its similarity to worsening UC. Although the prediction and diagnosis of acute 5-ASA intolerance syndrome are difficult, recent studies have produced some promising results with respect to genetic background [24] and bacterial composition in the colon [27]. In this regard, combinatorial approaches that include genomic and metagenomic profiling may shed light on how to predict 5-ASA intolerance when prescribing 5-ASA-containing therapy. Finally, the mechanisms underlying the development of 5-ASA intolerance and how 5-ASA exerts its therapeutic effect on intestinal inflammation are not fully understood. Further research to clarify the whole picture of 5-ASA therapy may improve its safety and efficacy.
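As a worked check on the genetic association discussed above, the 2 × 2 contingency arithmetic for rs144384547 can be reproduced from the carrier counts reported in the cited study [24]. The counts are taken from the text; the code itself is only an illustrative sketch.

```python
# Reproducing the rs144384547 risk comparison from the counts reported above:
# G-allele carriers 9/41 affected vs. rs144384547-CC homozygotes 53/2,269 [24].
from scipy import stats

carriers_affected, carriers_total = 9, 41
cc_affected, cc_total = 53, 2269

risk_ratio = (carriers_affected / carriers_total) / (cc_affected / cc_total)
table = [[carriers_affected, carriers_total - carriers_affected],
         [cc_affected, cc_total - cc_affected]]
odds_ratio, p_value = stats.fisher_exact(table)

print(f"risk ratio = {risk_ratio:.1f}")   # ~9.4 (22.0% vs 2.34%)
print(f"odds ratio = {odds_ratio:.1f}")   # ~11.8
print(f"Fisher exact p = {p_value:.1e}")
```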
2022-11-12T06:18:19.931Z
2022-11-10T00:00:00.000
{ "year": 2022, "sha1": "ea166dc81aeba5f08d4937e2cd91b9ef67fef192", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/527452", "oa_status": "HYBRID", "pdf_src": "Karger", "pdf_hash": "119ffa87d7993768ae23fe164071b5848b412515", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17656613
pes2o/s2orc
v3-fos-license
Defensive Practice as 'Fear-Based' Practice: Social Work's Open Secret? Defensive practice has received attention through the Munro review of child protection, which has identified that current organisational cultures increase the likelihood of defensive practice. Whilst the wider socio-political climate that gives rise to defensive practice has been explored within the literature, little attention has been paid to the everyday realities of defensive practice. This paper reports the findings of a study into final-year social work students' attitudes towards defensive practice within social work. Three focus groups were completed with a total of ninety final-year students, collecting qualitative and quantitative data using interactive software. This paper examines how participants perceived defensive practice, both in general and when faced with real-life vignettes. Participants distinguished between pro-active behaviour (sins of commission) and passive behaviour (sins of omission), generally regarding the latter as less serious because it was less tangible and easier to attribute to more positive motives. Whilst the literature identifies defensive practice as deliberate behaviour, the focus group discussions suggest that it is a subtler and less conscious process. Whilst there was a general consensus about the nature of defensive practice, there was considerable disagreement about specific vignettes, and several competing explanations are explored. Adverse media reporting of social work has become increasingly apparent since the 1980s, following a number of high-profile child-death inquiries (Ayre, 2001; Cooper et al., 2003). The social work profession has attracted considerable media criticism, often directed at vilifying individual workers and managers (Ayre, 2001; Garrett, 2009; Jones, 2014). For example, following the death of Peter Connelly, a tabloid newspaper launched a petition to sack all of the social workers involved, which was signed by 1.4 million people in 2009 (Jones, 2014). These developments provide powerful incentives for social workers to engage in defensive practice as a means of protecting themselves (Cooper et al., 2003; Ferguson, 2005). Although this process of practising defensively can begin during students' practice placements as part of their professional training, there has been little focus on this within social work education. This can leave students feeling that there is a gap between their practical experiences on placement and their learning in the classroom (Preston-Shoot, 2012). There is a considerable literature on the wider social policy and organisational context that provides the backdrop to defensive practice. The 1990s saw the introduction of more sophisticated systems of accountability, including reviews, inspections, audits and managerial scrutiny, that served to make social work practice more defensive (Parton and O'Byrne, 2000; Munro, 2004). Ayre (2001) captures the emotional aspect of the drive towards these new forms of accountability when he states that 'The fear of missing something vital encouraged practice so defensive that it seemed, at times, primarily calculated to protect the system rather than the child' (Ayre, 2001, p. 897). Although all areas of social work are affected, the high-profile nature of child protection means that defensive practice is particularly pronounced in this field.
This is reflected in the academic literature, which has focused on the effects of high-profile public inquiries predominantly in the field of child protection (Cooper et al., 2003; Warner, 2014; Jones, 2014) and, to a lesser extent, mental health (Warner, 2006; Laurence, 2003). The issues are also relevant to adult social care; for example, the move towards more person-centred support could create tensions between user choice and professional accountability. These wider developments within social work have been contextualised as a key element of the risk society, where safety is regarded as the primary value (Beck, 1992; Stalker, 2003; Webb, 2006). Webb (2006) argues that the role of social work has moved from a post-war welfare state conception of responding to 'need' to a neo-liberal role of responding to 'risk'. Since the normative basis of the risk society is safety, its utopia is essentially defensive and negative (Beck, 1992). The impact of this move towards the risk society has been a proclivity towards defensive and morally timid social work practice (Stanford, 2010). Within British social work, defensive practice can be seen as an 'open secret' because it has traditionally been discussed and acknowledged informally amongst practitioners and managers, but has rarely been discussed explicitly in the social work literature. The existing literature has focused upon the psychological and organisational factors that increase the likelihood of defensive practice, rather than the nature of defensive practice itself. The psychological defences that underpin defensive practice have been written about, particularly from a psychoanalytic perspective (Trevithick, 2011, 2014; Lees et al., 2011; Whittaker, 2011). The organisational drivers that encourage defensive practice have received some attention in the Munro review of child protection, linked with the challenges of managing uncertainty: '. . . many of the problems in current practice seem to arise from the defensive ways in which professionals are expected to manage that uncertainty. For some, following rules and being compliant can appear less risky than carrying the personal responsibility for exercising judgment' (Munro, 2010, p. 6). However, there is surprisingly little written explicitly about the practical realities of defensive practice in social work. It is interesting to note that the only journal article that focuses exclusively on defensive practice was written by an academic philosopher in the late 1980s at the height of a spate of public inquiries. Harris (1987) argued that the concept had been widely debated within the medical profession in the USA, provoked by increasing levels of medical malpractice lawsuits. Whilst there is no direct equivalent legal threat for British social work, the attacks on the social work profession in the UK have occurred not through courts of law, but rather through the 'court of public opinion' (Harris, 1987, p. 61). In his article, Harris defines defensive practice as 'practices which are deliberately chosen in order to protect the professional worker, at the possible expense of the well-being of the client' (Harris, 1987, p. 62). However, he concedes, since 'best interests' is always a personal judgement, so is defensive practice. Harris (1987) argues that defensive practice refers to a range of behaviour, ranging from an overemphasis on documenting practice to either intervening more than is needed (e.g. removing a child unnecessarily) or refraining from intervening (e.g.
not returning a child home when it is appropriate) in order to protect oneself against later being held responsible. Methods. The study used three large focus groups involving a total of ninety final-year students out of 119 students from two cohorts. All were invited to participate, so the sample constitutes approximately three-quarters of the total cohorts. The first two groups comprised the final-year cohort in one academic year and the third was the total final-year student group in the following year. All students, irrespective of cohort, had already completed an initial 100-day placement and were finishing their final-year placement. The rationale for choosing final-year students was that they were likely to have had experience on placement that was relevant to defensive practice. Ethical approval for the study was obtained through the university ethics committee. The three focus groups (n = 23, 25 and 42) had a similar composition in terms of gender and age. The gender profile was 80 per cent female and 20 per cent male. The age profile was as follows: 47 per cent were under thirty-five years old, 50 per cent were between thirty-five and fifty years old, and 3 per cent were over fifty years old. This profile was similar to that of the whole student group, which was unsurprising given that the sample was such a large proportion of the total student cohort. During the focus groups, participants were given an individual handset that enabled them to 'vote' anonymously on questions presented within a PowerPoint presentation without being aware of the views of other participants. These responses were analysed immediately and presented to the participants in an aggregated format. Once participants had seen the results, this led to focus group discussions that provided qualitative data in which participants explained the reasons for their choices and had the opportunity to comment upon the overall results and the choices of others. The study considered two main research questions. First, how did participants understand defensive practice and what were its main features? Second, how did they rate specific vignettes? Consequently, the discussion guide was in two sections. First, a more traditional qualitative approach was used to explore students' understanding of defensive practice, any messages that they may have received from colleagues or managers about defensive practice, possible motivations for engaging in defensive practice and the potential role of social work education. In the second section, there was a more structured exercise where students were presented with four vignettes providing real-life scenarios of potential defensive practice and asked to rate them. The rating scale had five options: 'not defensive practice', 'mild', 'moderate' or 'severe defensive practice', or 'don't know'. The four vignettes are presented in the 'Findings' section below. After students had voted anonymously, they were invited to discuss the reasons for their choices in the focus group. Quantitative data were inputted into SPSS and analysed using descriptive and inferential statistics (t-tests, chi-square and Cramer's V statistic; an illustrative sketch of this analysis appears below). No statistically significant differences were found between the three focus groups and consequently the quantitative data have been aggregated. Thematic analysis was used to analyse the qualitative data and transcripts were coded using NVivo 10 qualitative data analysis software. The research design was chosen to enable each research method to address the traditional limitations of the other.
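The study ran its inferential statistics in SPSS; purely as an illustration of the same analysis, a Python sketch might look as follows. The counts here are invented for illustration (matching only the group sizes of 23, 25 and 42, with 'don't know' responses omitted) and are not the study's data.

```python
# Hypothetical re-creation of the SPSS analysis: chi-square test of vignette
# ratings across the three focus groups, with Cramer's V as the effect size.
# The counts are invented for illustration; rows sum to the group sizes.
import numpy as np
from scipy import stats

# Rows: groups 1-3; columns: not defensive, mild, moderate, severe.
counts = np.array([[4, 8, 7, 4],
                   [5, 9, 7, 4],
                   [8, 14, 13, 7]])

chi2, p, dof, expected = stats.chi2_contingency(counts)
n = counts.sum()
cramers_v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.2f}")
```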
Surveys, for example, provide a structured means of collecting quantifiable data about participants' opinions, but one of their main limitations is that there is no opportunity to explore responses with participants. Surveys can incorporate real-life vignettes that can contextualise broad concepts and make them more specific (Wilks, 2004). Combining survey data with focus groups enables participants to provide responses that can be quantified but which can be explored further through open-ended discussion. Focus groups provide an opportunity for participants to express a range of opinions and to challenge and interact with one another in an open environment. Participants can explore and develop their opinions through interactions with others and this can provide insights into complex behaviours. Group dynamics in focus groups can have positive and negative effects, inhibiting or encouraging the discussion of taboo topics (Kitzinger, 1994; Whittaker, 2012). The size of the focus groups was significantly larger than is conventional for focus groups. The first two groups contained twenty-three and twenty-five participants, whilst the third contained forty-two participants because of practical limitations. This was based upon previous experience of using interactive software successfully with large groups. However, a key limitation of focus groups, particularly larger groups, is that they can inhibit the free and open discussion of difficult and sensitive topics. Having such large groups meant that the nature of the discussions was different to smaller groups; for example, it was more difficult for participants to make personal disclosures. A similar limitation of focus groups is that dominant members can express a view early and it can be difficult for other participants to publicly disagree. In this respect, the use of a survey administered through interactive software has two advantages. First, participants are not aware of the opinions of others when they make their choices, so they do so unencumbered by the views of others. Second, the anonymous nature of 'voting' can make it easier for participants to communicate views that they might otherwise have been reluctant to express and which might have been lost in traditional focus group discussions. Participants make their choices safe in the knowledge that they can decide whether or not to explain them. Findings The findings are divided into two parts. The first part presents the focus group discussions about the nature of defensive practice and why practitioners might engage in such behaviour. The second part relates to the vignette exercise and how students rated specific examples of defensive practice. Part 1: Understanding the nature of defensive practice In the first part of the focus groups, participants discussed defensive practice in general and why practitioners might engage in it. This section begins by examining the examples given by participants in relation to direct work with service users and more widely within the organisation. Then the underlying motivations for defensive practice are explored by examining the wider organisational and emotional contexts within which defensive behaviour took place. The examples of defensive practice that participants described can be divided between behaviours that related to direct work with service users and those that related to working within the organisation.
Defensive practice with service users referred to a range of behaviour, which included avoiding challenging service users or even avoiding contact with service users: . . . it is about avoiding certain situations . . . avoiding getting involved in certain pieces of work as a defensive mechanism. They might do things like avoiding certain visits that they should go and attend to. You know, like arriving . . . what's that term: the soft knock, using the sponge on the door, you know? That sort of thing, and saying that the person wasn't in so that you don't have to deal with the situation (Participant 4, Group 2). Participants described how providing or withholding services and working within legislation and policy could be used in a defensive way with service users: If you've got a child protection case that is going to court, defensive practice is about making sure that you offer some services. The chances of it working is very slim but that doesn't matter, you can prove to the court that you've offered it (Participant 17, Group 1). If you don't get on with a service user, you could behave oppressively by not offering them services but hide behind the law and policies to justify it (Participant 37, Group 3). Sticking overly close to your role and hiding behind legislation: doing what is lawful, not what is ethical (Participant 14, Group 1). Defensive practice could include the overestimation of risk, because practitioners and managers are aware that it is only the underestimation of risk that will have negative consequences for them personally (Tuddenham, 2000): Defensive practice is about 'maximising your assessment of risk' (Participant 6, Group 2). I work within palliative care for older people, and service users routinely have to go into residential care because workers and managers want to cover themselves. They are covering themselves, minimising the risk because they don't want that level of risk on their watch (Participant 4, Group 1). As well as work with service users, defensive practice could relate to working within the organisation. This can refer to behaviour that is designed to protect the organisation as well as oneself: Before we moved to a paperless system, I've had quite a few occasions of, 'hide that file, there's an audit coming up' . . . The evidence was that there was actually good work going on with the family but because some of the key performance indicators maybe weren't met or a particular assessment form hadn't been completed, the file would disappear (Participant 9, Group 2). However, students perceived defensive behaviour within the organisation as more commonly designed to protect the individual at the expense of others. A central focus was the relationship between practitioners and managers, particularly the sharing or avoiding of responsibility, not challenging managers and avoiding supervision. Participants described behaviour that was designed to share responsibility with managers for any decisions made: You always have a paper trail. Always make sure, and copy in managers when it is a decision that you need so that people can see that you're asking for things and then the responses you get, the managers can see what's happening with it and you can protect yourself (Participant 19, Group 1). To make your recommendation of what you think should be done and make your manager aware of that so you're kind of backing yourself up, 'This is what I think we should do and I've been persuaded to do something else' (Participant 11, Group 2).
Share the responsibility with managers so if it doesn't pan out successfully, you're also sharing the blame (Participant 31, Group 3). Let them make your decision. I've been told that. Let the manager make the decision. If they make the decision then the baton falls with them (Participant 7, Group 2). The final quote is an example of upward delegation (Menzies Lyth, 1988; Whittaker, 2011), where responsibility is not shared, but avoided by disowning the decision. As well as sharing responsibility with managers, participants also described behaviour designed to share responsibility with other professionals: Where there are child protection concerns, I have been advised that if you aren't sure, then you have to call a conference. I think that most practitioners probably call the conference to avoid being blamed. All professionals are involved so it won't be necessarily your decision, they are all involved (Participant 5, Group 2). A frequently cited example of defensive practice within organisations was recording any disagreements with managers: I had discussions with management who directed me to take a particular course of action that I didn't agree with. Colleagues advised me to record it in the case notes, so if it comes back, I have a record that I was directed to do that (Participant 12, Group 3). Several participants cited avoiding supervision as a form of defensive practice. When this was explored, one participant said: You avoid supervision in order to avoid blame. Postponing your supervision so that you won't have to talk about anything that went wrong (Participant 9, Group 2). Another area of defensive practice within the organisation was not being willing to challenge managers: Defensive practice is about not challenging management decisions and following procedures in an unreflective and passive way (Participant 6, Group 1). Avoiding challenging bad systems . . . Avoiding challenging because there are repercussions on you as an individual, because you'll be distanced and alienated from your colleagues (Participant 10, Group 2). I was told that it is not always worth arguing with whatever you see. You know it's not right, but it's not worth it. It was a qualified social worker who told me that. I've taken that on board. Social work is so incestuous so you can find that you apply to somewhere and they've spoken to your manager so they have a view of you before you even go to an interview (Participant 12, Group 1). My manager told me about a colleague who argued with the management about a service user and then he was out of a job. He's not worked in nine months and he has a mortgage to pay (Participant 22, Group 1). The last two quotes identify clear messages from more experienced practitioners about the dangers of challenging managers. The final quote can be viewed as morally neutralising the duty to challenge bad practice through the use of 'atrocity stories' (Dingwall, 1977), namely stories where the protagonist bravely challenges those in authority, which leads to tragic outcomes for them. Participants distinguished between active behaviour (sins of commission) and passive behaviour (sins of omission). In the discussions, sins of commission (e.g. hiding the file before an inspection) were generally regarded as more serious than sins of omission (e.g. avoiding supervision). One rationale offered was that sins of commission were more likely to be interpreted as deliberate, whilst sins of omission could be explained away in more benign ways.
The examples given can be understood as a matrix (Table 1). Such a matrix can provide a useful framework for capturing the moral understanding of defensive practice outlined by the participants. Participants identified a wide range of behaviours that could potentially serve a defensive function. However, they recognised that many behaviours had positive aspects, such as shared decision making with managers and other professionals, and practitioners may not engage in such behaviour for purely or primarily defensive reasons. Defensive practice as 'fear-based practice': the influence of emotions and organisational culture Defensive practice is a form of fear-based practice: fear of what might happen and the need to cover yourself just in case (Participant 19, Group 2). Participants across the groups talked explicitly about fear, frequently expressed as fear of being exposed and vilified in the press. As one participant stated: 'It's the fear of your face being splashed all over the papers' (Participant 11, Group 1). The fear of a public inquiry or serious case review was rated as the main reason why social workers engage in defensive practice by twice as many people as the nearest alternative, which was disciplinary action by their employer (51 per cent compared to 24 per cent). Disapproval by a manager was rated by only 9 per cent, fear of a service user complaint by 7 per cent, and 9 per cent of participants chose 'other'. These discussions were often accompanied by laughter, which appeared to express both anxiety at the catastrophic nature of this imagined scenario and relief in being able to acknowledge the shared nature of their private fears. Although there was some recognition that this scenario was highly unlikely, participants were clear that the consequences of being involved in a public inquiry could be devastating for the individual practitioner. As one participant stated: 'Once you go into a public inquiry and something goes wrong, that's it. That's the end of your career' (Participant 7, Group 2). In the discussions, some participants viewed defensive practice as a direct product of a 'culture of blame', which requires practitioners to 'cover their backs and put the blame elsewhere' (Participant 15, Group 2). In all three groups, participants reported frequent messages from staff on their placement that they should not leave themselves unprotected. For example, one participant stated: 'All the time, people tell you to cover your back' (Participant 18, Group 3). Some participants expressed concerns that this can lead to an organisational culture where defensive practice is so embedded that practitioners are not consciously aware of it unless they deliberately reflect upon it: You find yourself doing something and you'll think suddenly, 'Hold on a minute. Why am I doing that?' You've slid into the culture of your team . . . so it's critical to talk about it and think about how you're going to work (Participant 2, Group 2). This raises an interesting point about the nature of defensive practice. Whilst Harris (1987) defines defensive practice as deliberate behaviour, and it can be argued that excluding unintentional behaviour is a logical distinction, these findings suggested that it can be a subtler and less conscious process: viewing defensive practice as conscious and deliberately chosen behaviour did not capture the subtle and often unconscious aspects of many real-life situations.
In the discussions, participants saw a strong link between defensive practice and procedural adherence: It's about making sure that you are sticking to the procedures carefully so that there is no come back to you personally (Participant 3, Group 2). It's about risk avoidance; about making sure that you are sticking to procedures carefully so that there is no come back on you personally (Participant 2, Group 2). Another participant explicitly discussed the psychological function of defensive practice as providing a sense of control in a situation of intense uncertainty: We work in such anxious settings and the uncertainty is so great, that sometimes working defensively is the only control that you may have. In an area where there is such a lot of risk and uncertainty, you might want to stick to the procedures just to feel safe (Participant 11, Group 2). This study was designed to examine defensive practice across social work rather than focusing upon a specific context, but the high profile that is given to child protection means that this setting is strongly represented both in the literature and in the data. It was interesting to note that participants provided examples of defensive practice across a wide range of settings but the most extreme fears related to a public inquiry, which in the UK relates mainly to child protection or mental health. Part 2: The vignettes In the second part of the focus groups, participants were presented with four vignettes of behaviour that might be considered defensive practice and asked to rate them according to five options: whether they thought that it was mild, moderate or severe defensive practice, 'don't know' or not defensive practice at all (Table 2). The four scenarios were deliberately designed to portray a wide range of behaviours representing examples of increasingly severe defensive practice. Discussion Whilst a general consensus about the nature of and motivation for defensive practice was expressed in the first part of the focus groups, there was little clear agreement when rating the vignettes. The preliminary hypothesis for the study was that there would be a broad degree of consensus about where each scenario would be placed along a continuum of severity and that this consensus would become increasingly clear as the vignettes became more severe. This hypothesis was only partially supported. The rating of 'severe defensive practice' increased from 25 per cent to 67 per cent in a smooth progression, which was expected. However, the rating of 'not defensive practice at all' was erratic, rising from 7 per cent to 29 per cent, then 24 per cent and 14 per cent, so the increased sense of agreement that we anticipated did not materialise. Indeed, there was a pattern in which participants' responses appeared to become polarised into extremes as the scenarios became more serious and the discussion developed. Participants had a four-point scale for rating behaviour, ranging from not defensive to severe defensive practice. When the first scenario was presented, almost half of the participants chose the middle response of 'moderate' defensive practice and very few tried to argue that it was not defensive practice (7 per cent). As the discussion developed, the voting became more and more polarised between the two extreme options of 'severe defensive practice' and 'not defensive practice at all'. In the three final scenarios, these two responses were the most commonly chosen and participants choosing the middle options became increasingly rare.
Participants who rated behaviour as 'mild defensive practice' (range: 2-20 per cent across the scenarios) were a small minority, smaller than the 'don't know' group in three out of the four scenarios. This pattern became more pronounced as the discussion developed and was most clearly shown in the final vignette (not challenging a service user for fear of a complaint), which was designed to present the strongest example of defensive practice. Opinions were polarised into extremes, with the two most frequent ratings being 'severe defensive practice' (67 per cent) or 'not defensive practice' (14 per cent). When asked to discuss the reasons for their choices, participants increasingly sought to justify the actions of the social worker in the scenario (Table 3). We are not arguing that such accounts are simply justifications for defensive practice. Some of them made good points that demonstrated the complexities around defining defensive practice. What we are arguing is that participants' efforts to defend the practitioner's actions in the vignettes became more vigorous as the discussion developed, whilst others became more condemnatory. Participants found discussing their choices more difficult as the focus group proceeded. For example, when the fourth vignette was discussed in one focus group, a participant expressed incredulity that others had rated it as 'not defensive practice'. Participants who had chosen this response did not express a view about why they had made their choice, despite direct prompts to do so. One likely explanation is that participants felt that it was difficult to do so in a large group setting and did not wish to experience censure from others. Such polarised views were difficult to comprehend at first without reference to the focus group discussions afterwards, and this demonstrated the strengths of combining quantitative and qualitative data. There are a number of potential explanations for this pattern. One explanation is that participants were simply confused about how to evaluate each example, given the lack of time for reflection and the moral complexities of the vignettes. It is possible that participants may have given different responses if they had been given the vignettes in advance and had time to formulate a more considered response. It is likely that at least some of the variation can be explained by some participants judging behaviour by the intentions of the social actor whilst others made judgements based upon its likely consequences. Whilst this viewpoint would account for the comparatively low levels of agreement between participants, it has greater difficulty in explaining the pattern of polarisation that became more pronounced as the discussion developed. An alternative explanation is that participants were influenced by peer pressure to conform to group norms. However, the high levels of disagreement obtained through anonymous voting made it difficult for participants to gain a sense that there were agreed group norms to conform to. In addition, the increasing level of disagreement as the exercise progressed would argue against group conformity as a possible explanation. A third explanation is that this polarisation of views itself serves a defensive function at a psychological level. This viewpoint starts with the premise that participants viewed themselves as ethically sound practitioners and therefore found it uncomfortable to think of themselves as engaging in dubious and undesirable practices.
Given the emphasis on social work values in social work training courses, this does not seem an unreasonable assumption, though it may not be true in all cases. The second premise is that, when participants are faced with a concrete scenario, they ask themselves: 'Would I engage in that behaviour?' The data from the focus groups, in which participants frequently described scenarios in highly personal ways, provide some support for this premise. Across the focus groups, the discussions after each scenario suggest that participants were considering whether they would personally engage in the stated behaviour in the situation described. One participant stated: My take is how do I decide? Would I do that? If 'yes', it's not that bad and if I wouldn't it is really bad (Participant 22, Group 3). From this perspective, participants faced with a specific scenario have two choices. First, if they could not imagine themselves engaging in the behaviour, they are able to distance themselves from it by viewing it as severe or moderate defensive practice. Second, if they considered that they might engage in the behaviour described, they are then presented with a dilemma. Given that they view themselves as ethically sound practitioners, it would be uncomfortable for participants to think of themselves as engaging in dubious and undesirable practices. In this situation, redefining the behaviour as 'not defensive practice' is a means of neutralising the ethical threat. For example, in the scenario where a social worker manoeuvred her manager into making a decision, participants who rated this as 'not defensive practice' stated that she was being 'very creative', that her behaviour showed 'intelligence' and that 'as a social worker, we should use our resources and she was using her manager as a resource'. Given these positive definitions, practitioners are able to engage in the behaviour without the discomfort of feeling ethically compromised. The lack of consensus supports the key point made by Harris (1987) that defensive practice is a subjective judgement. The comments made by participants emphasised how the language of risk has moved into everyday practice in ways that encourage practitioners to manage the risk to themselves and the organisation, not just to service users (Ayre, 2001; Webb, 2006; Stanford, 2010). This involves mechanisms such as the defensive use of procedures (Munro, 2010) and upward delegation to managers to share or avoid responsibility (Whittaker, 2011). There were several limitations to this research design. Having such large groups meant that the nature of the discussions was different to much smaller focus groups; for example, it was more difficult for participants to make personal disclosures. When participants explained their choices, they did so within a focus group where others might disagree with their views. However, the candid nature of the comments that many participants made suggests that this influence was not overwhelming. It also gives support to the view that defensive practice is an 'open secret', which is rarely acknowledged in the literature and policy documents, but is well known to practitioners and managers. The anonymous nature of voting means that participants may provide responses that are closer to their real views, but they may be reluctant to explain these within a large focus group. One of the limitations of both surveys and focus groups is that people may state how they would act but may act differently in real life.
This, however, is a limitation that is shared with alternative research methods, such as individual interviews (Bryman, 2012). Conclusions and implications The study found that there was a general consensus about the nature of defensive practice. Whilst the literature identifies defensive practice as deliberate behaviour, the focus group discussions suggest that it is a subtler and less conscious process. Rather than describing it as a deliberate process, participants described it as 'part of the culture' of the agency or something they picked up from other practitioners without questioning. Participants used two main distinctions that related to the forms that defensive practice took and the underlying motives that encouraged such behaviour. First, they distinguished between behaviour that related to direct work with service users, such as overestimating risk or avoiding contact, and behaviour that related to working within the organisation, such as avoiding supervision or upward delegation. Second, they distinguished between pro-active behaviour (sins of commission) and passive behaviour (sins of omission). Passive behaviour was generally regarded as less serious than pro-active behaviour, because it was less tangible and easier to attribute to more positive motives. However, there was considerable disagreement when participants were asked to rate specific vignettes. There was also a consistent pattern of increased polarisation across the scenarios, in which participants rated behaviour at either end of the spectrum. Several competing explanations were explored, including the possibility that such polarisation served as a psychological defence in itself. There was strong support within the focus groups for teaching input on defensive practice within the curriculum. During the focus groups, participants described the discussions as thought-provoking and found themselves examining their own practice, frequently concluding that it was more defensive than they had realised. Therefore, any approach to teaching about defensive practice requires us to understand the issues that students are struggling with. Initially, participants appeared quite open about discussing defensive practice in an educational setting where everyone was willing to acknowledge it quite openly. However, the polarising dynamic meant that it became increasingly difficult for participants to explain their choices. This presents challenges for social work education, not the least of which is whether and how we should talk about it in the classroom. Whilst there is a general assumption that it is always good to talk about difficult subjects in the classroom, this is not so simple with defensive practice. As well as the issues about group dynamics discussed above, talking about defensive practice in the classroom poses an uncomfortable dilemma. If we take the position of condemning defensive practice, we risk alienating students who may regard us as being out of touch with the real world of practice. If we regard it as understandable, then we risk colluding with unacceptable practice. Consequently, it is unsurprising that a tacit silence has generally operated and defensive practice has become an open secret: everybody knows about it but nobody talks about it. The problem with this silence is that a gap opens up between what people say and what they do.
Argyris and Schön (1974) articulate this dichotomy, which they describe as the gap between the theories that underpin what people say they do ('espoused theories') and the theories that underpin what they actually do ('theories in use'). In the discussions, what appeared to be sacrificed was a middle ground where behaviour could be regarded as imperfect but 'good enough'. As educators, our challenge is how we can create space for this middle ground and develop an ethically nuanced perspective rather than retreating behind the comfort of entrenched positions. For example, the lack of consensus on what specific behaviours constitute defensive practice may indicate that this is an unhelpful way of viewing the phenomenon. Being able to categorically state whether specific behaviour is defensive or not is attractive. However, it risks reifying behaviour in overly simplistic ways that do not take sufficient account of the contexts of motivation, relationships and organisational cultures. Above all, we need to develop a greater depth of understanding of defensive practice ourselves before we are able to teach it to our students; otherwise, what we convey may only be our own sense of confusion and uncertainty.
2018-04-03T01:31:22.283Z
2015-07-03T00:00:00.000
{ "year": 2015, "sha1": "c9d493a10e79040758a6dae5a5842af1a325ce77", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc4985719?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c9d493a10e79040758a6dae5a5842af1a325ce77", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
245857331
pes2o/s2orc
v3-fos-license
Insights into cell robustness against lignocellulosic inhibitors and insoluble solids in bioethanol production processes Increasing yeast robustness against lignocellulose-derived inhibitors and insoluble solids in bioethanol production is essential for the transition to a bio-based economy. This work evaluates the effect exerted by insoluble solids on yeast tolerance to inhibitory compounds, which is crucial in high-gravity processes. Adaptive laboratory evolution (ALE) was applied to a xylose-fermenting Saccharomyces cerevisiae strain to simultaneously increase its tolerance to lignocellulosic inhibitors and insoluble solids. The evolved strain gave rise to a fivefold increase in bioethanol yield in fermentation experiments with a high concentration of inhibitors and 10% (w/v) of water-insoluble solids. This strain also produced 5% (P < 0.01) more ethanol than the parental strain in simultaneous saccharification and fermentation of steam-exploded wheat straw, mainly due to an increased xylose consumption. In response to the stress conditions (solids and inhibitors) imposed in ALE, cells induced the expression of genes related to cell wall integrity (SRL1, CWP2, WSC2 and WSC4) and the general stress response (e.g., CDC5, DUN1, CTT1, GRE1), simultaneously repressing genes related to protein synthesis and iron transport and homeostasis (e.g., FTR1, ARN1, FRE1), ultimately leading to the improved phenotype. These results contribute towards understanding the molecular mechanisms that cells might use to convert lignocellulosic substrates effectively. In SSF, the concentration of insoluble solids is only high at the beginning of the process (solids concentration diminishes over time due to the enzymatic hydrolysis of carbohydrates). In the particular case of CBP processes, hydrolysis of cellulose usually exhibits low rates 13, thus implying the presence of insoluble solids at high concentrations for longer periods than in SSF. Insoluble solid particles produce shear stress, induce damage in brewing yeast, and promote changes in gene expression and the accumulation of intracellular reactive oxygen species 12,14. Notwithstanding, the potential effects that insoluble solids have on bioethanol-producing yeasts have frequently been underestimated. Several studies have demonstrated the tolerance of yeast cells towards lignocellulose-derived inhibitors during fermentation of liquid prehydrolysates, while the same concentration of inhibitory products completely inhibited cells in SSF processes 15,16. Determining the impact of insoluble solids on yeasts is therefore crucial to identify future research lines for the development of more robust and efficient strains with potential applications at industrial scale. The present work aims at evaluating the effect exerted by insoluble solids on the tolerance of yeast cells to inhibitory compounds, which is of great relevance in SSF/CBP processes at high gravity. For this purpose, the fermentation performance of the yeast Saccharomyces cerevisiae F12, a recombinant xylose-fermenting strain successfully used in SSF processes 8,10, was investigated in the presence and absence of lignocellulosic insoluble solids and/or inhibitors to determine its tolerance towards these stressors. Since adaptive laboratory evolution (ALE) is effective for obtaining novel yeast strains better adapted to the challenging bioethanol production conditions 8,17-19, S. cerevisiae F12 was subjected to an ALE procedure in the presence of both lignocellulosic degradation compounds and insoluble solids.
Subsequently, the genetic changes underlying tolerance to such a challenging environment were identified. In evolutionary procedures, cells are forced to replicate under certain restricting conditions during long periods of time. The modulation of the environment during evolution increases the rate of spontaneous mutagenesis, and so designing an appropriate evolution strategy is crucial for the success of the process. Overall, this study reports for the first time the evolution of yeast cells on insoluble solids and inhibitors to better adapt them to high-gravity technology. This work also reveals the most important variations in gene expression that take place during the evolution process. The results presented herein will pave the way for identifying new strategies to develop novel strains to be efficiently applied in high-gravity lignocellulosic conversion processes (i.e., with inhibitors and insoluble solids) at the industrial scale. Materials and methods Insoluble solids from steam-exploded wheat straw. The collection of the wheat straw used complied with relevant institutional, national, and international guidelines and legislation. Wheat straw was pretreated in a 10-L steam explosion reactor at 210 °C for 5 min. The slurry was separated into a liquid fraction and a water-insoluble solid (WIS) fraction by vacuum filtration using a Büchner funnel. The resulting WIS fraction was thoroughly washed with distilled water to remove soluble inhibitory compounds and embedded sugars. The WIS fraction had the following composition in terms of dry weight (% w/w): 52.1 cellulose, 8.0 xylan, 0.2 arabinose and 33.9 lignin. In order to assess the effect of solids on yeast fermentation, one portion of WIS was dried at 40 °C and added at 5% or 10% (w/v) to the synthetic fermentation media, depending on the experimental conditions. Both the whole slurry and the WIS fraction were used as substrate for SSF experiments. Inhibitor mix. The inhibitor mix was prepared using commercial compounds to give a final composition equivalent to that commonly found in steam-exploded lignocellulosic hydrolysates (2.1 g/L furfural, 0.3 g/L 5-HMF, 13.4 g/L acetic acid, 10.5 g/L formic acid, 0.4 g/L ferulic acid, 0.2 g/L syringaldehyde, and 0.1 g/L vanillin) 20. This inhibitor mix was used as selection pressure during the evolutionary engineering approach and in fermentation experiments at 50% (v/v) and 100% (v/v) dilution in the presence and absence of WIS. Microorganism and cell propagation. Recombinant S. cerevisiae F12 was kindly supplied by Professor Lisbeth Olsson from Chalmers University of Technology (Sweden). This strain was genetically modified to consume xylose by overexpressing the endogenous gene encoding xylulokinase and by introducing genes encoding xylose reductase and xylitol dehydrogenase from Scheffersomyces stipitis 21. For preinoculum preparation, S. cerevisiae F12 cells were grown in 100-mL shake flasks with 20 mL YPD medium (10 g/L yeast extract, 20 g/L peptone, 20 g/L glucose) in an orbital shaker at 150 rpm and 32 °C for 16 h. Cells were harvested by centrifugation (3000 g, 8 min, 25 °C) and diluted with the corresponding medium to obtain the appropriate inoculum size. Adaptive laboratory evolution experiment. S. cerevisiae F12 was subjected to ALE to increase its robustness towards lignocellulose-derived inhibitors and insoluble solids. ALE was performed by sequential batch cultivation of yeast cells in 250-mL Erlenmeyer flasks containing 50 mL of the corresponding medium.
Cells were incubated at 150 rpm, 32 °C and pH 5.0 with an initial OD600 of 0.1. YNB (Conda, Cat. 1553.00) supplemented with 7.5 g/L (NH4)2SO4 was used as basal medium. The ALE experiment was divided into different stages according to Table 1. Glass beads of 4 mm diameter (Hecht Karl™ 1401/4) were used as insoluble solids to progressively evolve cells and facilitate their subsequent recovery. The experiment started by adding 20% (w/w) insoluble solids to a medium containing glucose and xylose at a final concentration of 10 g/L each. The xylose:glucose ratio (w:w) was increased gradually from 10:10 to 15:5 and 18:2 as evolution proceeded. Simultaneously, the solids were combined with increasing concentrations of the inhibitor mix, starting at 12.5% (v/v) and rising to 80% (v/v). Selection of spontaneous mutants with improved tolerance was based on increased specific growth rates. When an improvement in yeast growth was detected (measured on an OD600 basis), the xylose:glucose ratio and inhibitor concentration in the evolution media were increased (Table 1). Each round of evolution started by inoculating an aliquot of cells from the previous shake flask culture at a final OD600 of 0.1. The evolved strain was obtained after 88 rounds of evolution (≈ 2,200 generations). For isolation of single colonies, cells from the final round were harvested, diluted accordingly, and grown for 36 h at 32 °C on a YPXD-agar plate containing 10 g/L glucose, 10 g/L xylose and 20 g/L agar. One of the most prominent colonies was selected and named the evolved S. cerevisiae F12 strain. Fermentation tests. Synthetic fermentation media containing 10 g/L glucose, 10 g/L xylose, 2 g/L NH4Cl, 2 g/L KH2PO4, 0.3 g/L MgSO4·7H2O, and 5 g/L yeast extract were used to assess the fermentation performance of S. cerevisiae F12 in the presence of WIS and/or inhibitors under the conditions stated in Table 2. Fermentation tests were carried out in triplicate in sterilized 250-mL Erlenmeyer flasks with 100 mL medium at 150 rpm and 32 °C for 48 h, with an inoculum concentration of 1 g/L (dry weight). In a first set of experiments, the influence of lignocellulosic degradation compounds on parental S. cerevisiae F12 was evaluated by using 50% and 100% (v/v) of the inhibitor mix. Subsequently, the effect exerted by solids on the fermentation performance of yeast cells was assessed by adding 5% and 10% (w/v) of WIS. Finally, cells were subjected to fermentation in the presence of different combinations of solids (5% and 10% WIS (w/v)) and inhibitors (50% and 100% (v/v) of the inhibitor mix) to identify any potential synergism between these two stressors. The evolved S. cerevisiae F12 strain was also used under the most severe conditions: (i) the presence of 100% (v/v) inhibitor mix, (ii) the presence of 10% (w/v) of WIS and (iii) the combination of both 100% (v/v) inhibitors and 10% (w/v) of WIS. Simultaneous saccharification and fermentation assays. The parental and evolved S. cerevisiae F12 strains were used in SSF with steam-pretreated wheat straw at high substrate loadings to evaluate the success of the evolutionary engineering approach. For that, the whole slurry supplemented with nutrients (2 g/L NH4Cl, 2 g/L KH2PO4, 0.3 g/L MgSO4·7H2O and 5 g/L yeast extract) was used at a final concentration of 20% total solids (TS) (w/v). Due to the high inhibitory potential of the slurry, the WIS fraction was also subjected to SSF at 20% (w/v) substrate concentration and supplemented with the same nutrients.
Since most of the xylose remained in the liquid fraction when collecting the WIS fraction, 30 g/L xylose was added to the SSF media to enrich the fraction of this sugar and mimic the sugar composition of the slurry. Analytical methods. The chemical composition of the WIS fraction was analyzed using the standard methods for determination of structural carbohydrates and lignin in biomass (LAP-002, LAP-003, and LAP-019) of the National Renewable Energy Laboratory (NREL). The full description of these methods can be found at https://www.nrel.gov/bioenergy/biomass-compositional-analysis.html. Glucose, xylose, xylitol and ethanol were determined and quantified by high-performance liquid chromatography (HPLC) using an Agilent HPLC 1200 Series equipped with a refractive index detector and an Aminex HPX-87H Ion Exclusion column operating at 50 °C with 5 mM H2SO4 (0.6 mL/min) as elution buffer. Means and standard deviations were estimated for fermentation and SSF assays. Analysis of variance (ANOVA) was used for comparisons between assays using the software Statgraphics Centurion XVIII. The level of significance was set at P < 0.05, P < 0.01, and P < 0.001. Microarray analysis. Total RNA was extracted from the evolved and parental S. cerevisiae F12 after 4 h of fermentation in YPXD medium supplemented or not with 40% (w/w) insoluble solids and 100% (v/v) inhibitor mix. To avoid interference with the RNA extraction method, 4-mm diameter glass beads (Hecht Karl™ 1401/4) were used as insoluble solids instead of pretreated WIS. Cells (5 mL) were withdrawn, cooled on ice, centrifuged (4000 g, 2 min, 4 °C), frozen in liquid nitrogen and stored at −80 °C until further analysis. Trizol reagent (Invitrogen) was used for RNA isolation according to the manufacturer's protocol. Samples were treated with RNase-free DNase I (Qiagen) to prevent DNA contamination. The concentration and purity of RNA were measured using a UV-light Omega spectrophotometer. Furthermore, RNA integrity was determined using the Bioanalyzer 2100 (Agilent) and only samples with 260/280 > 1.8, 260/230 > 2.0, and RNA Integrity Number (RIN) > 8.0 were subjected to further analysis. After RNA isolation, samples were treated as explained previously 12, using the GeneChip™ Yeast Genome 2.0 Array (Affymetrix®) to determine gene expression. Raw data were processed with the RMA algorithm included in the Affymetrix® Expression Console™ for normalization and gene-level analysis. Three microarray experiments corresponding to three independent RNA replicates were processed and analyzed for each experimental condition. Fold changes between experimental conditions were calculated as the quotient between the means of the gene expression signals. The LIMMA package included in the Babelomics software suite [http://www.babelomics.org] was used for statistical analysis 22. Values with a false discovery rate (FDR) < 0.05 were considered significant. Genes with a log2 fold change > 1 or < −1 were included for further analysis. Microarray experiments were also analyzed with the Piano software [http://biomet-toolbox.chalmers.se] 23. Differentially expressed genes were identified with an FDR < 0.05 selection cut-off and the corresponding heat map was simultaneously obtained. Differentially expressed genes were classified by YeastMine according to their main known/proposed functions 24.
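The selection step just described (FDR < 0.05 and log2 fold change beyond ±1) is straightforward to reproduce. The following is a minimal sketch in Python with pandas, assuming a per-gene statistics table exported from the LIMMA analysis; the file and column names are hypothetical:

import pandas as pd

# Hypothetical LIMMA export: one row per gene, with the log2 fold
# change (evolved vs parental) and the FDR-adjusted P value.
stats = pd.read_csv("limma_results.csv")  # columns: gene, log2fc, fdr

# Apply the cut-offs used in this study.
significant = stats[stats["fdr"] < 0.05]
upregulated = significant[significant["log2fc"] > 1]
downregulated = significant[significant["log2fc"] < -1]

print(f"{len(upregulated)} genes upregulated, {len(downregulated)} downregulated")

With the study's data, this selection yielded 130 upregulated and 66 downregulated genes under the combined solids-plus-inhibitors condition (see the differential gene expression results below); the counts obtained from the sketch depend entirely on the input table.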
In this context, both downregulated and upregulated genes were categorized according to their biological processes and molecular functions using gene ontology (GO) annotations. Finally, network analysis of known/predicted protein-protein interactions was performed with the STRING software v11 25. Results and discussion Effect of WIS and/or inhibitors on yeast fermentation. This study assessed how the presence of inhibitors and WIS may influence yeast fermentation under the conditions stated in Table 2. As shown in Fig. 1A, no differences were observed in terms of glucose consumption rates or residual glucose in fermentation experiments with 50% (v/v) inhibitor mix or 5-10% (w/v) of WIS when compared to control assays without insoluble solids and inhibitors. In these cases, no lag phase was detected and glucose was exhausted within the first 5 h of fermentation. This result agrees with Koppram and co-workers, who showed no differences in the consumption of 20 g/L glucose when control fermentation (with no WIS in the medium) was compared to fermentations in the presence of 2, 5, 10, and 12% WIS (w/w) 26. The presence of 100% (v/v) of inhibitor mix, however, reduced the glucose consumption rates, with glucose exhaustion reached at 24 h (Fig. 1A), corroborating the well-known effect that high concentrations of inhibitors exert on yeast cells, which in turn hampers glucose utilization 27,28. In contrast to glucose conversion, the presence of lignocellulose-derived inhibitors exhibited a strong inhibitory effect during the xylose conversion phase (Fig. 1B). In this case, the addition of 50% and 100% (v/v) of inhibitor mix resulted in restricted xylose assimilation by cells, which only consumed 18% and 12% of the initial xylose concentration, respectively (Table 3). The higher susceptibility of xylose fermentation to lignocellulose-derived inhibitors compared to that of glucose fermentation has already been shown in several studies 29,30. Since xylose utilization has been proven to provide less energy in the form of ATP compared to glucose 31, and the response to inhibitors requires high energy levels, the presence of inhibitors may have a stronger effect on yeast when xylose is the carbon source. Furthermore, it is likely that the genetic modifications needed to construct xylose-fermenting yeasts alter their metabolic homeostasis, affecting inhibitor tolerance 2. By contrast, the presence of 5% (w/w) or 10% (w/w) WIS slightly increased xylose consumption when compared to control assays (Fig. 1B). The tricarboxylic acid (TCA) cycle has been identified as one of the targets of transcriptional regulation to optimize xylose utilization, and an intensive TCA cycle has been deemed important for xylose metabolism in xylose-recombinant S. cerevisiae strains 32. In the same context, regulation of the stress response and amino acid metabolism have been shown to be two important strategies for effective xylose utilization in a recombinant xylose-fermenting S. cerevisiae strain 32,33. Strikingly, Moreno and co-workers identified amino acid biosynthesis and carboxylic acid metabolic processes among the major overexpressed biological processes in S. cerevisiae F12 grown in glucose media with insoluble solids 12. Thus, WIS may affect yeast cells by promoting xylose utilization when no other lignocellulose-derived inhibitor is present. Despite the increase in xylose consumption, ethanol yields in the presence of WIS were 0.20-0.21 g/g.
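For clarity, the ethanol yield (g/g) used throughout this work is grams of ethanol produced per gram of sugars (glucose plus xylose) consumed. A minimal worked sketch of the calculation from HPLC time-course data, using hypothetical concentrations rather than the study's measurements:

# Hypothetical HPLC readings in g/L at the start and end of fermentation.
glucose_initial, glucose_final = 10.0, 0.0
xylose_initial, xylose_final = 10.0, 8.0   # e.g. only 20% of the xylose consumed
ethanol_initial, ethanol_final = 0.0, 2.5

sugars_consumed = (glucose_initial - glucose_final) + (xylose_initial - xylose_final)
ethanol_produced = ethanol_final - ethanol_initial

yield_gg = ethanol_produced / sugars_consumed
print(f"Ethanol yield: {yield_gg:.2f} g/g")  # 2.5 / 12.0 = 0.21 g/g

For reference, the stoichiometric maximum for sugar-to-ethanol conversion is 0.51 g/g, the value used later when estimating theoretical SSF yields.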
This 0.20-0.21 g/g value was 25-30% lower than that obtained in control assays (0.28 g/g) (Table 3). Lower ethanol yields are commonly linked to an increase in xylitol production 34. Nevertheless, similar xylitol concentrations (< 0.1 g/L) were found in control assays and in fermentation assays with only WIS. Thus, slight differences in cell growth in the presence of WIS, or a redistribution of metabolic fluxes to cope with the challenging conditions imposed by WIS, may result in lower ethanol yields. As mentioned before, Koppram and co-workers 26 did not observe differences in ethanol yields when adding up to 12% (w/w) of WIS to fermentation media with 20 g/L glucose, reaching ethanol yields of 0.32 g/g. However, when adding 40% (w/w) and 60% (w/w) insoluble solids, Moreno and colleagues 12 showed a decrease in ethanol yield in glucose media from 0.37 g/g without solids to 0.35 g/g and 0.22 g/g, respectively. It is worth mentioning that these previous studies only utilized glucose as carbon source. In spite of the promotion of xylose consumption in the presence of 5% (w/w) and 10% (w/w) of WIS, the reduced ethanol yields obtained in this study indicated that xylose fermentation was more susceptible to stressful conditions. Ethanol yields lower than those obtained in control assays were also found when lignocellulosic inhibitors were present, reaching 0.22 g/g and 0.19 g/g with 50% (v/v) and 100% (v/v) of the inhibitor mix, respectively (Table 3). As previously noted, less than 20% of the initial xylose concentration was consumed by non-evolved yeast cells (Fig. 1B). In addition, when increasing the inhibitor content from 50% (v/v) to 100% (v/v), the glucose consumption rates decreased threefold (from 1.8 g/L h to 0.6 g/L h) at the initial stages of the fermentation process (5 h) (Fig. 1A). This result is indicative of the high inhibitory potential of lignocellulose-derived inhibitors, especially during the xylose assimilation phase. Besides the detrimental effect that the presence of WIS exhibited on ethanol yields in fermentation experiments with 10 g/L glucose and 10 g/L xylose, the influence that the presence of WIS has on the inhibitor tolerance of S. cerevisiae F12 was also studied. To this end, 50% (v/v) or 100% (v/v) inhibitor mix was combined with 5% (w/v) or 10% (w/v) of WIS in different fermentation tests. As shown in Fig. 2A, when using 50% (v/v) of inhibitor mix, glucose was exhausted within the first 24 h, and 22% of the xylose was consumed after 48 h of fermentation. In this case, the ethanol yield was 0.22 g/g and 0.19 g/g with 5% (w/v) and 10% (w/v) of WIS, respectively (Table 3). These ethanol yields were similar to those obtained when only 50% (v/v) of inhibitor mix was added (Table 3), indicating that yeast tolerance was not significantly affected by the presence of WIS at low inhibitor concentrations. On the other hand, when 100% (v/v) of the inhibitor mix was combined with either 5% or 10% (w/v) of WIS, neither glucose nor xylose was exhausted in the 48-h fermentation (Fig. 2B). Furthermore, marked differences were observed in ethanol yield in comparison with only 100% (v/v) of the inhibitor mix (Table 3). When 5% (w/v) WIS was added together with 100% (v/v) of the inhibitor mix, about 80% of the initial glucose and 10% of the initial xylose were consumed after 48 h of fermentation, reaching an ethanol yield of 0.16 g/g.
However, 10% (w/v) of WIS together with 100% (v/v) inhibitor mix resulted in 80% less ethanol when compared to 100% (v/v) inhibitor mix alone. The lower ethanol concentrations were directly linked to a completely hampered xylose consumption and to a limited glucose consumption. These results clearly showed a synergistic effect when combining both lignocellulose-derived inhibitors and WIS, and pointed to the presence of WIS as a crucial factor when yeast cells have to deal with high concentrations of inhibitory compounds. In the present work, an increase in xylose uptake was observed when 50% (v/v) of inhibitor mix was combined with WIS compared with 50% (v/v) inhibitors alone (Table 3). This result supported the hypothesis that the presence of insoluble solids may promote xylose consumption in the absence of biomass degradation compounds or when inhibitors are present at low concentrations. In this sense, Koppram and co-workers 26 studied the effect of steam-pretreated birch WIS on glucose consumption and yeast tolerance to either HMF (1 g/L), furfural (1 g/L), syringaldehyde (0.8 g/L) or acetic acid (9 g/L). These authors reported higher glucose uptake rates when low concentrations of these compounds were simultaneously present with WIS compared to those obtained in the absence of solids 26. In the same study, a proteomic analysis revealed up-regulation of glycolytic enzymes and ATP synthases in the presence of acetic acid and WIS, strongly indicating an increased generation of energy in the presence of both stressors (WIS and inhibitors), which could be the reason for the increased sugar consumption. The ALE procedure in WIS-rich and inhibitor-rich media (Table 1) resulted in an evolved S. cerevisiae F12 with improved abilities to cope with the combination of both inhibitors and WIS. When compared with the parental strain, a decrease in xylose consumption was observed when only WIS (10% w/v) was present in the fermentation broth (Table 3). However, in the presence of 100% (v/v) inhibitor mix, xylose consumption increased from 12% with parental S. cerevisiae F12 to 64% with evolved cells, which also translated into an increase in ethanol yield from 0.19 g/g to 0.25 g/g. These results suggest that the evolution procedure primarily favored changes that increased tolerance to inhibitors, which could be detrimental when coping with the sole presence of insoluble solids. The success of ALE was evident when comparing parental and evolved S. cerevisiae F12 performance under the most challenging conditions (i.e. 100% (v/v) of inhibitor mix and 10% (w/v) of WIS). In this case, parental S. cerevisiae F12 did not consume any xylose and the ethanol yield was as low as 0.05 g/g. On the other hand, xylose consumption and ethanol yield increased to 21% and 0.24 g/g, respectively, when using the evolved strain, proving the effectiveness of ALE as a strategy to increase tolerance to a combination of stressors. Simultaneous saccharification and fermentation at high substrate loading. Parental S. cerevisiae F12 was used in SSF to evaluate its fermentation performance and cell robustness under high substrate loading. When using the whole slurry at a concentration of 20% TS (w/v), no ethanol was produced during SSF processes (data not shown). Although parental cells were able to cope with 100% (v/v) inhibitor mix in the absence of WIS (Fig. 1), the presence of solids and inhibitors in SSF of the slurry led to complete cell inhibition. This fact pointed to a reduced tolerance to inhibitors in the presence of a high solids content.
In this case, the progressive liquefaction of the solids during the first hours of SSF was not sufficient to overcome the effect that WIS had on yeast tolerance to inhibitors. Nevertheless, when using 20% WIS (w/v) supplemented with xylose (i.e. in the absence of inhibitors), parental S. cerevisiae F12 was capable of fermenting both glucose and xylose, reaching a maximum ethanol concentration of 39.3 ± 0.4 g/L (Fig. 3). In SSF from WIS, S. cerevisiae F12 assimilated glucose immediately upon enzymatic hydrolysis, thus maintaining a low glucose concentration during the fermentation process (Fig. 3). In contrast, limited xylose consumption was observed within 72 h of SSF. Recombinant S. cerevisiae cells use the same transport systems to take up both glucose and xylose 35,36. These transporters have been reported to have significantly lower affinities for xylose than for glucose 37. In this sense, xylose uptake is strongly inhibited when glucose is present. This fact is decisive in mixed-sugar fermentations with recombinant S. cerevisiae strains because this yeast does not utilize xylose unless glucose is significantly depleted. In this case, the glucose concentration was below 0.5 g/L during the SSF process, and the limited xylose consumption could therefore be explained by the stressful fermentation conditions. The robustness of the evolved strain was evaluated under the same SSF conditions as the parental strain. Similar to the parental S. cerevisiae F12, the evolved strain was totally inhibited during SSF processes of the whole slurry at 20% TS (w/v) (data not shown). However, in the SSF from WIS, the evolved strain produced a maximum ethanol concentration of 41.5 ± 0.5 g/L, which was 5% higher (P < 0.01) than that obtained by the parental strain (Fig. 3) and represented 50% of the theoretical maximum ethanol that could be obtained in SSF (a yield estimated considering the total glucose and xylose potentially available during the SSF process and a maximum sugar-to-ethanol conversion yield of 0.51 g/g). The evolved cells also exhibited improved xylose uptake rates, which increased xylose consumption by about 10% (32% of the xylose was consumed after 72 h of SSF). The high xylose:glucose ratio utilized during ALE was decisive for the success of the process, since the utilization of xylose as carbon source during the evolution procedure is a key factor in increasing the yeast's affinity for this sugar. This improved xylose-fermenting capacity could be due to improved xylose transport kinetics 38,39. As a matter of fact, increased expression of hexose transporters has been reported in evolved xylose-utilizing yeasts 39-41, as may also be the case for the evolved strain in this study. Differential gene expression of the improved phenotype. A total of 196 genes were found to be upregulated (130 genes) or downregulated (66 genes) in evolved cells in the presence of both solids (20% w/w) and inhibitors (80% v/v of inhibitor mix) (Fig. 4A). These conditions of solids and inhibitors were the most challenging conditions to which cells were evolved in the ALE and thus they were selected for the differential gene expression analysis. The differences between parental and evolved cells were also analyzed by hierarchical clustering, which clearly separated two different groups (Fig. 4B): (i) one corresponding to parental cells and (ii) another corresponding to evolved cells.
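A rough sketch of such a sample-level clustering step, assuming a normalized expression matrix with genes as rows and replicates as columns (SciPy is used here for illustration only, and the synthetic data simply mimic a parental/evolved split):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Synthetic normalized expression matrix: rows = genes, columns =
# three parental followed by three evolved replicates (hypothetical data).
rng = np.random.default_rng(0)
base = rng.normal(size=(196, 1))               # expression profile shared by all samples
expr = base + 0.3 * rng.normal(size=(196, 6))  # replicate-level noise
expr[:50, 3:] += 2.0                           # a block of genes shifted in the evolved replicates

# Cluster the samples (columns) by correlation distance and
# cut the dendrogram into two groups.
dist = pdist(expr.T, metric="correlation")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)  # expected: [1 1 1 2 2 2], i.e. parental and evolved replicates separate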
Differentially expressed genes (parental vs evolved) were subsequently analyzed by gene ontology (GO) analysis to determine the biological processes induced and repressed. This analysis highlighted the cell cycle (e.g., cytokinesis, regulation of cell cycle, reproductive process) and cell wall organization or biogenesis (e.g., fungal-type cell wall organization, sexual sporulation) as major upregulated biological processes, while maltose metabolic process, transport (e.g., ion transport, amino acid transport, water transport) and homeostatic process (e.g., iron ion homeostasis) were the main downregulated biological processes (Table 4). Despite identifying several biological processes induced and repressed in the improved phenotype, enrichment analysis identified no metabolic pathway as statistically upregulated or downregulated. It is also important to note that a significant number of the identified upregulated (53 genes, ca. 40%) and downregulated (19 genes, ca. 30%) genes had an unknown molecular function (Supplementary Table S1). Furthermore, about 90% of these genes showed a log2-fold change greater than one. These results might indicate a potential role for these genes in the cell response to insoluble solids, and they should therefore be further investigated. Overexpression of the aforementioned cell wall proteins might counteract this effect and maintain cell wall integrity under the stress conditions. Major downregulated biological processes include ribosome biogenesis and RNA processing, as well as the transport of specific molecules including iron, peptides and water (Table 5). Repression of protein synthesis is one of the first cell responses upon stress exposure (heat shock, osmotic and oxidative stress), as it is a highly energy-consuming process 49,50 . Nevertheless, even with general protein synthesis repressed, cells can simultaneously induce the translation of stress-related genes to face adverse environmental conditions 51 . This was also the case for the evolved S. cerevisiae F12 in this work. The second main downregulated biological process was transport. Most of the transport-related genes are associated with peptide/amino acid transport and with iron ion transport and homeostasis (Tables 4 and 5). In this work, repression of peptide/amino acid transport genes might be linked to the downregulation of protein biosynthesis upon stress exposure. On the other hand, the relatively high number of genes (up to 12) involved in iron ion transport and homeostasis is highly remarkable, including the transporter-encoding genes FIT2, FTR1, SIT1, ARN1 and ARN2, and genes encoding different ferric reductases (FRE1, FRE2, FRE5, FRE6, FRE8). Iron is an essential element required for different biological processes such as respiration, synthesis of nucleic acids and carbon metabolism, as well as photosynthesis and nitrogen fixation 49 . However, iron may be toxic for cells due to the oxidative capacity of its ferrous form, which makes tight control of iron metabolism important. A high intracellular concentration of reactive oxygen species (ROS) under oxidative stress conditions represents a potential threat, since the interaction between ROS and iron may result in the formation of new hydroxyl radicals with increased prooxidant capacity 52 .
The simultaneous presence of both insoluble solids and lignocellulose-derived inhibitors during fermentation processes causes severe oxidative damage in yeast cells, greatly increasing intracellular ROS levels 12 . This high ROS concentration might be responsible for repressing the corresponding iron-related genes as a way to reduce the risks associated with marked oxidative stress. Yeast cells, like cells of multicellular organisms, usually promote iron depletion to prevent metal toxicity and irreversible damage under oxidative stress conditions 52 . Overall, these results clearly show the complex inhibitory environment that cells have to face during lignocellulosic biomass conversion. In response to a single stressor, specific genes and pathways have been identified as key components for increasing yeast robustness. For instance, ZWF1 has been identified as a key element during oxidative stress in S. cerevisiae upon exposure to a wide variety of chemical and environmental stress agents 53 . During heat shock, replacing ergosterol with fecosterol alters membrane fluidity, conferring thermotolerance in yeast 54 . The general stress response and cell cycle arrest have been identified as important processes for facing a high concentration of insoluble solids 12 . By contrast, in lignocellulose-conversion processes cells must simultaneously deal with a multitude of chemical inhibitors and a high concentration of insoluble solids. To cope with such adverse conditions, this study demonstrates that cells must be capable of maintaining cell membrane integrity and preventing oxidative damage. Therefore, upregulation of membrane-related genes (e.g. SRL1, CWP2, WSC2 and WSC4) and induction/repression of genes and pathways involved in oxidative stress and the general stress response (e.g. CDC5, DUN1, CTT1, GRE1, FTR1, ARN1, FRE1) can be targeted in future studies to evaluate cell robustness in lignocellulose-related bioprocesses. Conclusions The presence of insoluble solids and lignocellulose-derived inhibitors synergistically increased the inhibitory potential exerted on S. cerevisiae F12, especially when xylose was used as the major carbon source. After subjecting S. cerevisiae F12 to ALE, the resulting evolved cells showed better fermentation performance than the parental strain, in terms of higher xylose fermentation efficiency and ethanol yield. Differential gene expression analysis revealed the induction of genes related to cell wall integrity and the stress response, as well as the repression of protein biosynthesis and of iron transport and homeostasis, as the main biological processes responsible for the improved phenotype. These results point to the need to further develop yeast strains that are less susceptible to the combined stress agents present during the conversion of lignocellulosic materials, and provide molecular insights into the mechanisms that yeast uses to face these stressors.
2022-01-12T14:36:50.620Z
2022-01-11T00:00:00.000
{ "year": 2022, "sha1": "2d783a9c32cbe39117e682055cdc8c233cd2f805", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-04554-4.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "31c92196c7e50f52c1ff1582f08cd8384bec4585", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
236290542
pes2o/s2orc
v3-fos-license
FlhA Undergoes Cyclic Open-close Domain Motions During Flagellar Protein Export in Salmonella The flagellar type III secretion system (fT3SS) transports flagellar building blocks from the cytoplasm to the distal end of the growing flagellar structure. The C-terminal cytoplasmic domain of FlhA (FlhA C ) serves as a docking platform for flagellar chaperones in complex with their cognate substrates and ensures the strict order of protein export required for efficient flagellar assembly. FlhA C adopts open and closed conformations, and the chaperones bind to the open form, allowing the fT3SS to transport the substrates to the cell exterior. To clarify the role of the closed form in flagellar protein export, we isolated pseudorevertants from the flhA(G368C/K548C) mutant, in which the closed conformation is stabilized to inhibit the protein transport activity of the fT3SS. Each of the M365I, R370S, A446E and P550S substitutions in FlhA C identified in the pseudorevertants affected hydrophobic side-chain interaction networks in the closed FlhA C structure, thereby restoring the protein transport activity to a considerable degree. We propose that a cyclic open-close domain motion of FlhA C is required for rapid and efficient flagellar protein export, in which a structural transition from the open to the closed form induces the dissociation of empty chaperones from FlhA C . Introduction Many bacteria utilize flagella to swim in viscous liquids and move around on solid surfaces to migrate towards environments more favorable for their survival. The flagellum is a supramolecular complex consisting of the basal body, which acts as a rotary motor; the filament, which functions as a helical propeller; and the hook, which connects the basal body and filament and works as a universal joint to smoothly transmit torque produced by the motor to the filament 1 . Flagellar assembly begins with the basal body, followed by the hook and finally the filament. To construct the flagellum on the cell surface, the flagellar type III secretion system (fT3SS) transports flagellar building blocks from the cytoplasm to the distal end of the growing structure. The fT3SS consists of five transmembrane proteins, FlhA, FlhB, FliP, FliQ and FliR, and three cytoplasmic proteins, FliH, FliI and FliJ. FlhA, FlhB, FliP, FliQ and FliR assemble into a protein export channel inside the basal body MS ring formed by the transmembrane protein FliF (Fig. 1a) 3 . The protein export channel is powered by the transmembrane electrochemical gradient of protons (H+), namely the proton motive force (PMF) 4,5 . FliH, FliI and FliJ form a cytoplasmic ATPase ring complex at the flagellar base (Fig. 1a) 6 . An interaction between FliH and a C ring protein, FliN, is required for efficient localization of the ATPase ring complex to the flagellar base 7,8 . ATP hydrolysis by the ATPase ring complex induces gate opening of the protein export channel for the translocation of export substrates across the cytoplasmic membrane in a PMF-dependent manner 9 . However, when the ATPase ring complex does not work properly, the protein channel complex utilizes the sodium motive force (SMF) across the cytoplasmic membrane as the energy source 10,11 . FlhA acts as an export engine fueled by both PMF and SMF 10 . An interaction between FliJ and the C-terminal cytoplasmic domain of FlhA (FlhA C ) activates the protein export channel to couple either H+ or Na+ flow with the translocation of flagellar building blocks across the cytoplasmic membrane 5,10,12 .
FlhA C forms a homo-nonamer in the fT3SS 13,14 and serves as a docking platform that brings order to the export substrates for efficient flagellar assembly 15-18 . FlhA C consists of four compactly folded domains, D1, D2, D3, and D4, and a flexible linker (FlhA L ) connecting FlhA C and FlhA TM (Fig. 1b) 19 . FlhA C adopts open and closed conformations (Fig. 2a) 19,20 . The FliS/FliC and FliT/FliD chaperone/substrate complexes bind to the chaperone-binding site of the open form but not to that of the closed form 21,22 . FlhA L stabilizes the open form, allowing the chaperones in complex with their cognate substrates to efficiently bind to the FlhA C ring to promote filament assembly at the hook tip 14,22 . Interestingly, FlhA L also binds to the chaperone-binding site of the open form during hook assembly, thereby not only suppressing premature docking of the chaperones to FlhA C but also facilitating the export of the hook protein 23 . These observations suggest that the open form of FlhA C reflects an active state of the fT3SS. However, little is known about the role of the closed form of FlhA C in flagellar protein export. The flhA(G368C) mutation inhibits the protein transport activity at a restrictive temperature of 42°C but not at a permissive temperature of 30°C 22,24-26 . A temperature shift-up from 30°C to 42°C immediately arrests the export of flagellar building blocks, suggesting that this shift induces a conformational change of FlhA C . Methods Bacterial strains, P22-mediated transduction and DNA manipulations. Salmonella strains used in this study are listed in Table 1. P22-mediated transductional crosses were performed with P22HTint. DNA manipulations were performed using standard protocols. DNA sequencing reactions were carried out using BigDye v3.1 (Applied Biosystems), and the reaction mixtures were then analyzed on a 3130 Genetic Analyzer (Applied Biosystems). Isolation of pseudorevertants from the flhA(G368C/K548C) mutant. To clarify why the FlhA(G368C/K548C) mutation inhibits flagellar protein export, pseudorevertants were isolated from the flhA(G368C/K548C) mutant by streaking an overnight culture out on 0.35% soft agar plates, incubating the plates at 30°C for a few days, and looking for motility halos emerging from the streak. In total, five motile colonies were purified from such halos. The motility of these pseudorevertants was better than that of the parent strain at 30°C, although not as good as that of the wild-type strain (Fig. 2b). In agreement with this, the secretion levels of flagellar building blocks such as FlgD, FlgE, FlgK, FlgL and FliC by these pseudorevertants were recovered, although not to wild-type levels (Fig. 2c). P22-mediated transduction showed that all suppressor mutations were co-transduced with the flhA(G368C/K548C) mutation, indicating that they are located in the flhBAE operon. DNA sequencing revealed that they were all missense mutations in FlhA: M365I, R370S (isolated twice), A446E, and P550S (Fig. 2a). Consistently, Cys-368 is not exposed to the solvent on the molecular surface of FlhA C−G368C , as judged by cysteine modification with methoxypolyethylene glycol 5000 maleimide 22 . A temperature shift-up from 30°C to 42°C remodels these hydrophobic interaction networks in FlhA C−G368C , inducing large conformational changes that bring domains D1 and D2 close to domains D3 and D4, respectively (Fig. 3), thereby not only stabilizing a completely closed form but also inhibiting open-close domain motions.
The R370S substitution weakens the hydrophobic interactions among Cys-368, Leu-413 and Pro-415 (Fig. 2a). The M365I substitution must likewise affect the hydrophobic interactions of Met-365 to induce the conformational change of domain D1 (Fig. 3), thereby weakening the hydrophobic interaction between Gln-498 and Pro-667 in FlhA C−G368C . Ala-446 of domain D2 hydrophobically interacts with Gln-477 of domain D2 in the closed form of FlhA C−G368C but not in the open form (Fig. 2a), and the A446E mutation seems to affect this hydrophobic interaction to induce the conformational change of domain D2 (Fig. 3), thereby affecting the hydrophobic contact between Phe-459 and Pro-646. Pro-550 of domain D3 makes a hydrophobic contact with Met-398 of domain D1 (Fig. 2a), and the P550S substitution weakens the hydrophobic contact between domains D1 and D3. Therefore, we propose that remodeling of the hydrophobic interaction networks in FlhA C−G368C is required for its dynamic open-close domain motions. Because the G368C mutation is located at the N-terminal end of a hinge loop consisting of residues 368-381, we propose that the conformational flexibility of this hinge loop is required for efficient remodeling of the hydrophobic interaction networks in FlhA C . Discussion The chaperone-binding site is located at an interface between domains D1 and D2 of FlhA C (Fig. 1b). FlhA C forms a nonameric ring structure in the fT3SS 13,14 . FliJ binds to FlhA L to activate the fT3SS to drive flagellar protein export in a PMF-dependent manner (Fig. 1b). [Figure 1b legend: FlhA C (PDB ID: 3A5I) consists of four compactly folded domains, D1, D2, D3 and D4, and a flexible linker region (FlhA L ) connecting FlhA TM and FlhA C . The Cα backbone is color-coded from blue to red, going through the rainbow colors from the N- to the C-terminus. FliJ binds to FlhA L to activate the FlhA ion channel. Flagellar export chaperones (FlgN, FliS, FliT) bind to a well-conserved hydrophobic dimple located at the interface between domains D1 and D2.]
2021-07-26T00:06:29.983Z
2021-06-04T00:00:00.000
{ "year": 2021, "sha1": "5fd1361fb4967c9ad65f28c8c7d261450e35ec85", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-576007/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "822170581588e5de194c6338562c90d8d90dcaae", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
257755980
pes2o/s2orc
v3-fos-license
Comprehensive microsurgical anatomy of the middle cranial fossa: Part I—Osseous and meningeal anatomy The middle cranial fossa is one of the most complex regions in neurosurgery and otolaryngology; in fact, the practice of skull base surgery originated from the need to treat pathologies in this region. Additionally, great neurosurgeons of our present and past are remembered for their unique methods of treating diseases in the middle fossa. The following article reviews the surgical anatomy of the middle fossa. The review covers the anatomy of the bones, dura, vasculature, and nerves, in two parts. Emphasis is placed on their neurosurgical significance and applications in skull base surgery. Part I focuses on the bony and dural anatomy. Introduction The middle cranial fossa (MCF) cradles the temporal lobe and borders the brainstem, sella turcica, and the cavernous sinus. Its floor includes critical neurovascular structures and separates the otic apparatus and infratemporal fossa from the intracranial space. The presence of multiple canals, foramina, and grooves, as well as the complex meningeal folds in the region of the MCF, makes for a complex anatomy. Detailed knowledge of these intricate relationships is crucial for safe and efficient surgical exploration of the MCF. The purpose of this work is to provide a comprehensive anatomical review of the MCF from a neurosurgical perspective. Part 1 focuses on the osseous and meningeal anatomy, with emphasis on surgical relevance. An arbitrary line connecting the tip of the anterior clinoid process (ACP) to the petrous apex (i.e., the petrous-clinoid line) may be considered the transition line between the two compartments (Figure 1B). Boundaries The middle fossa proper is delimited by the lesser sphenoid wing anteriorly, the temporal squama and the greater sphenoid wing laterally, the petrous ridge posteriorly, and the petrous-clinoid line medially (Figure 1B). The lesser sphenoid wing is a slim lateral bony extension of the sphenoid body that forms the superior border of the superior orbital fissure (SOF). It gradually enlarges as it approaches medially and ends as the ACP. The lateral border of the middle fossa proper is formed mainly by the squamous part of the temporal bone (posteriorly) and the lateral upward extension of the greater sphenoid wing (anteriorly), which are separated by the sphenosquamosal suture. The posterior border is formed by the petrous ridge, running from posterolateral to anteromedial. This ridge stops a few millimeters posterolateral to the lateral clival border. This space is occupied by the sphenopetroclival venous gulf and is home to Dorello's canal and the posterior compartment of the cavernous sinus (1). Just posterior to the petrous apex, there is a gentle depression in the petrous ridge that houses the trigeminal nerve; hence, this structure is called the trigeminal depression. Marching posteriorly, the petrous ridge forms a small ledge over the transverse-sigmoid junction and finally reaches the squamous part of the temporal bone at a point called the posterior petrous point (2). Exocranially, the anterior MCF border is formed by the inferior orbital fissure (IOF) and a line that runs laterally along the anterior border of the infratemporal fossa, just at the root of the temporal process of the zygoma. The medial border of the MCF starts as a line from the medial end of the pterygomaxillary fissure, coursing posteriorly and crossing the pterygoid process of the sphenoid bone toward the foramen lacerum.
It then continues further laterally on the lateral side of the petroclival fissure and medial to the exocranial orifice of the carotid canal and jugular foramen, where it reaches its posterior point just lateral to the jugular process of the occipital bone and the stylomastoid foramen (Figure 1C). Anterior clinoid process The ACP is a tooth-like medial extension of the lesser sphenoid wing that is attached to the body of the sphenoid bone by two "roots": anterior and posterior (see below) (Figure 2). [Figure 1 legend, partial: a prominent rough bony tubercle (the sphenoid tubercle), which serves as one of the attachment points of the deep temporal fascia, appears at the lateral end of the infratemporal crest, to which the lateral pterygoid muscle is attached. (B) Endocranial view showing the middle fossa proper separated from the sellar and parasellar compartments by the petrous-clinoid line; the posterior boundary of the middle fossa is formed by the petrous ridge. (C) Exocranial view of the bony skull base, with the approximate projection of the middle fossa floor on the exocranial surface; the bony part of the pharyngotympanic tube runs parallel and lateral to the carotid canal, its anterior end turning into the cartilaginous portion at the region of the sulcus tubae opening into the nasopharynx.] [Figure 2 legend: Anatomy of the anterior clinoid process. (A) Lateral posterior view of the sphenoid bone showing the relationship between the sphenoid wings, the anterior clinoid and the sellar region; the superior orbital fissure is the cleft between the lesser and greater sphenoid wings. (B) Superior view of the central skull base and middle fossa; note the interrelationship of the anterior clinoid roots. The tips of the anterior and middle clinoid processes partially encircle the internal carotid artery and may form a carotico-clinoid foramen around the carotid artery.] The tip of the ACP may form a bony bridge with the middle clinoid process, creating an osseous ring around the internal carotid artery (ICA) across its transition from the clinoidal segment to the ophthalmic segment, known as the carotico-clinoid foramen. The tip of the ACP may also be connected to the posterior clinoid process, forming the interclinoid bridge. Both of these osseous variants are relatively uncommon (3%-5% incidence in different studies). Optic strut The posterior root of the ACP is called the optic strut (OS), also known as the "optic pillar" or "sphenoid strut" (3,4). The OS is an important landmark during removal of the ACP (i.e., anterior clinoidectomy) and has been colloquially named the "Rosetta Stone" of the paraclinoid region (5).
This bony pillar connects the body of the sphenoid bone to the ACP and forms the floor of the optic canal, while also separating the optic canal from the SOF. Its mean dimensions are 6.54 ± 1.69 mm (length), 4.23 ± 0.69 mm (width), and 3.01 ± 0.79 mm (thickness) (6). It slants superolaterally toward the base of the ACP at an angle of about 40° from the vertical plane (3,6). The point of attachment of the OS to the sphenoid body is variable relative to the chiasmatic sulcus (Figure 2A). According to Kerr et al., this point of insertion can be pre-sulcal (i.e., anterior or adjacent to the limbus sphenoidale) (12% incidence), sulcal (i.e., adjacent to the anterior two-thirds of the chiasmatic sulcus) (44% incidence), post-sulcal (i.e., posterior to the anterior two-thirds of the chiasmatic sulcus) (30% incidence), or asymmetric (14% incidence) (6). The sharp posterior margin of the OS is concave from side to side and accommodates the anterior surface of the ascending portion of the ICA in the cavernous sinus, essentially marking the superior end of the carotid sulcus at the lateral aspect of the sphenoid body. Understanding the location of the OS while drilling the ACP is essential to protect both the optic nerve and the ICA. The OS usually lies at the level of, or anterior to, a line drawn from the medial end of the SOF to the lateral corner of the optic canal (Figure 3) (7). Also, it is important to note that the ACP may be pneumatized when the aeration of the sphenoid sinus extends through the OS or the anterior root, which has an overall incidence of about 10% (8,9). Assessment of the degree of pneumatization may help in determining the extent of an anterior clinoidectomy to avoid a plunge into the sphenoid sinus (10,11) or, in general, to prepare for proper management of sphenoid sinus exposure (12). Anterior root of ACP The anterior root is the medial extension of the ACP toward the planum sphenoidale. This sheet of bone is flat externally and concave internally, forming the roof of the optic canal. Its posterior margin blends with the limbus sphenoidale, which is a smooth crest marking the border between the chiasmatic sulcus and the planum sphenoidale. During anterior clinoidectomy, this bony bridge needs to be drilled off (Figure 3). Care must be taken to avoid drilling the optic nerve, as the anterior root may be quite thin. Floor The floor of the middle fossa proper is formed by the superior surface of the petrous pyramid posteriorly and the greater wing of the sphenoid anteriorly. The greater wing is concave and harbors all the foramina of the middle fossa. On the other hand, the petrous surface is convex. Two distinct bony sutures can be identified here, both emanating from the lateral aspect of the foramen spinosum (FS): the petrosquamosal and sphenosquamosal sutures, which form an angle of roughly 90° with each other (Figure 4A). Several foramina and bony protuberances are found on the middle fossa floor, which have neurosurgical implications, as follows. Foramen spinosum (Figure 4A) (13). The FS is named after a small spinous process found posterior to it on the exocranial surface of the skull, known as the sphenoid spine (Figure 4B). The FS harbors the middle meningeal artery (MMA), a venous plexus connected to a plexus around the V3 division of the trigeminal nerve at the foramen ovale (FO), to the cavernous sinus endocranially and to the pterygoid venous plexus exocranially, and a recurrent meningeal branch of the V3 (i.e., the nervus spinosus).
On average, the FS is 5 mm posterolateral to the foramen ovale (range, 2.0-7.5 mm) (13) and 25 mm (range, 17.8-33.1 mm) medial to the lateral border of the middle fossa (2). The FS may sometimes be duplicated (14). Foramen ovale Almost 5 mm anteromedial to the FS, there is an oval foramen harboring the mandibular branch of the trigeminal nerve (V3), the accessory middle meningeal artery (when it exists), an emissary vein connecting the adjacent cavernous sinus to the exocranial pterygoid plexus, and the lesser petrosal nerve (LPN). The FO opens into the underlying infratemporal fossa (Figures 4A,B). The FO can vary in shape, from completely round to almond-shaped or slit-like (14). It has an average size of 7 × 4 mm (15) and is, on average, located 30 mm (range, 24.4-39.8 mm) from the lateral border of the middle fossa (2). One study examined how the petrosal nerves exit the middle skull base; specifically, the authors found that the LPN crossed the middle fossa floor anterior to the greater petrosal nerve and exited the middle fossa through the canaliculus innominatus (CI) in 70% (14/20) of the cases in which a CI was present (19). Foramen rotundum The foramen rotundum (FR) is the most anterior and medial foramen of the middle fossa. It is oriented vertically rather than horizontally, unlike both the FS and the FO (Figure 4D). The FR's reported dimensions are 4 × 3 mm, and it is located about 8-10 mm anteromedial to the FO (20). The FR transmits the V2 division of the trigeminal nerve. It is also located inferior to the medial end of the SOF, separated from it by a thin bony bridge called the maxillary strut. The maxillary strut is an important landmark in the endoscopic endonasal approach to the MCF and the lateral wall of the cavernous sinus (21). Foramen Vesalius The foramen Vesalius (FV), also known as the emissary sphenoidal foramen, is a small, variable but consistently symmetrical foramen located anteromedial to the FO and lateral to the FR (Figures 4B,E) (22,23). When present, it contains an emissary vein (22). It is an important structure because it can be a potential channel for transmitting sepsis from extracranial veins to intracranial venous sinuses. In addition, when treating trigeminal neuralgia with trigeminal rhizotomy through the trans-ovale approach, the needle can be misplaced in the FV and puncture the cavernous sinus, causing severe intracranial bleeding (22,24). During middle fossa approaches, care must be taken to identify this anatomic variation on preoperative images and not to mistake it for the FO or FR. [Figure 3 legend: Artist's illustration of the two-step hybrid anterior clinoidectomy using skull base landmarks to identify the location of the optic strut. (A,B) Craniotomy and skin incision. Once the optic canal is identified extradurally, a line is drawn from its medial aspect to the lateral end of the superior orbital fissure; a parallel line is drawn from the lateral aspect of the optic canal, and the bony area between these lines delineates the anterior root of the anterior clinoid. (C,D) Extradural stage. The meningo-orbital band is cut, the trapezoid area of bone between the two lines is drilled out, and the strut is exposed; this stage unroofs the optic canal. (E-G) Intradural stage. Cuts are placed over the ACP dura to expose the tip of the clinoid process and the optic strut, which is removed during this stage.]
Superior orbital fissure The SOF is a narrow bony cleft through which the orbit communicates with the middle cranial fossa, situated between the greater and lesser wings and the body of the sphenoid bone (Figures 2A, 4F) (25). The superior wall of the fissure is formed by the lower surfaces of the lesser wing, the ACP, and the adjacent part of the OS. The fissure allows the passage of many structures, such as the oculomotor, trochlear, ophthalmic, and abducens nerves, branches of the carotid sympathetic plexus, and the superior and inferior ophthalmic veins. Part of the annular tendon (i.e., the annulus of Zinn), from which the rectus muscles arise, is attached to the bony boundaries of the SOF and crosses the fissure. Infrequently, an anomalous ophthalmic artery may pass through the SOF. The SOF is exposed during anterolateral approaches to the central skull base and the lateral sellar compartment (26). Middle fossa canals and grooves Facial hiatus and the sphenopetrosal groove Medial to the FS and almost parallel to the petrous ridge, there is a shallow and sometimes inconspicuous groove called the sphenopetrosal groove, which harbors the greater superficial petrosal nerve (GSPN); the GSPN starts from a small bony opening posteriorly, known as the facial hiatus (i.e., hiatus Fallopii), and ends underneath the V3 (Figure 4A). In its complete form (10% of middle fossae), this sphenopetrosal groove (or simply the GSPN groove) ends at the posterior rim of the V3, about 7.5 ± 2.9 mm (range, 3.0-12.0 mm) posterior to the FO (27,28). However, in the majority of cases (65%), the groove is incomplete, and it is absent in 25% of middle fossae (28). The posterior end of the groove, which usually corresponds to the facial hiatus, is about 25 mm (range, 14.7-33.1 mm) medial to the lateral border of the middle fossa in the coronal plane (i.e., the external surface of the squamous temporal bone) (2). The length of the sphenopetrosal groove is on average 10-12 mm (range, 6.0-15.0 mm) (27,28). It is located on top of the petrous ICA (although the exact relationship is variable), separated from it by a thin shell of bone. In 15% of cases, this bone is dehiscent (29). The facial hiatus is the bony opening where the nervus intermedius fibers originating in the superior salivatory nucleus of the brainstem exit the geniculate ganglion of the facial nerve to continue as the GSPN en route to the pterygopalatine ganglion. The size of the facial hiatus may vary depending on the degree of ossification of the middle fossa on top of the geniculate ganglion. This bony coverage is incomplete or totally absent in 15%-25% of specimens (28,30). Groove for the middle meningeal artery The MMA carves a conspicuous groove on the middle fossa floor after exiting the FS. This groove is directed anteriorly on the greater sphenoid wing toward the lesser sphenoid wing and then ascends laterally as it approaches the pterion (Figure 4C). The posterior division of the MMA branches from this middle fossa segment at a variable point along its length and courses laterally and posteriorly. Middle fossa protuberances Midsubtemporal ridge/tubercle The midsubtemporal ridge/tubercle (MSR), described by Wanibuchi et al., is a boomerang-shaped peak in the region of the sphenosquamosal suture, found in 90% of middle fossae (Figure 4F) (31). On average, it is 3 mm high, 6 mm wide, and 9 mm long. The authors reported that the distance between the midpoint of the MSR and the midpoint of a line connecting the FO and FR averages 11 mm.
This bony protuberance is lateral to the FS and should not be mistaken for the arcuate eminence (AE), which is subtler and located posteromedial to the FS (see below). The MSR is a useful landmark for localizing the interval between the V2 and V3 [i.e., the anterolateral triangle (see Part 2)]. Arcuate eminence The superior semicircular canal (SSC) was introduced by Fisch as a consistent landmark for finding the internal auditory canal (IAC) during middle fossa surgery (32). He mentioned that the location of the SSC can be consistently determined by the AE that overlies it (32). The AE is a bony protuberance anterior to the tegmen tympani and posterior to the IAC and geniculate ganglion. Current opinion is that the existence of the AE results from the combined effect of the SSC, pneumatization of the petrous bone, and the occipitotemporal sulcus of the basal surface of the temporal lobe (33). The AE is conspicuous in 85% of middle fossae and almost absent in the rest (33,34). It may assume different shapes, from a single linear arc to a double arc or a complex geometric morphology. It is directed from posteromedial to anterolateral. In about 95% of cases, the axes of the SSC and AE subtend an angle of <45° (33). Kartush et al. found that the relationship between the SSC and the AE is consistent but not exact; i.e., the lateral point of the AE almost always overlies the lateral limb of the SSC, whereas the medial point of the AE usually deviates posteriorly (never anteriorly) relative to the medial limb of the SSC (Figure 4C). Deep anatomy Carotid canal The carotid canal is the longest and largest bony conduit in the human body. It is located in the petrous temporal bone, under the floor of the MCF (Figures 1C, 4). As the name implies, it harbors the petrous segment of the ICA. However, it is also home to a thin pericarotid venous plexus, as well as the carotid perivascular neural plexus and its major condensations (e.g., the carotid nerve and deep petrosal nerve). The exocranial carotid foramen marks the entry of the ICA into the petrous bone and is located medial to the vaginal process of the tympanic bone and anterior to the jugular foramen (Figure 4B). The carotid canal conforms to the course of the petrous carotid and, therefore, has posterior vertical, posterior genu, and horizontal segments. The ICA exits the carotid canal just proximal to its anterior genu. As such, the anterior genu and anterior vertical segments of the ICA are not housed in the carotid canal. The exocranial orifice of the carotid canal is located anterior to the jugular foramen, posterior to the Eustachian canal (at the junction of its osseous and cartilaginous parts), and medial to the tympanic part of the temporal bone and the temporomandibular joint (Figure 4). The petroclival fissure ends at the anteromedial corner of the jugular foramen, medial to the carotid canal's exocranial orifice. The carotid canal courses anteromedially and ends at the posterolateral aspect of the foramen lacerum. The length of the carotid canal is 16-20 mm (37). The thickness of the bone of the middle fossa floor on top of the carotid canal is variable, but it generally decreases as one marches anteriorly along the carotid. Notably, the middle fossa floor may be dehiscent above the carotid artery (Figure 5). Thus, assessment of preoperative CT images is critical for determining individual variations in order to protect the ICA during drilling of the Kawase and/or Glasscock triangles (see Part 2).
Eustachian tube The Eustachian tube (ET), also known as the pharyngotympanic tube or the auditory tube, is named after the Italian anatomist Bartolomeo Eustachi (c. 1500/1510-1574). He was the first to observe a canal that connected the nasopharynx to the tympanic cavity of the middle ear. The ET has a role in equalization, oxygenation, and drainage of the tympanic cavity; specifically, it permits equalization of pressure in the middle ear with respect to ambient pressure (38). By doing this, it influences the tension exerted on the tympanic membrane and the attached ossicles, which indirectly affects the transmission of sound waves. Starting from the tympanic ostium at the anteroinferior end of the tympanic cavity, the ET runs immediately parallel and lateral to the carotid canal and is usually separated from it by a thin bony shell about 2 mm thick (Figures 1C, 5) (39-41). The ET comprises two anatomically distinct parts: an osseous posterolateral portion and a fibrocartilaginous anteromedial portion. The bony canal is located lateral to the posterior genu of the ICA in the petrous canal and usually comprises two semicanals, one for the ET proper and the other for the tensor tympani muscle, which lies superomedial to the former (41). At the anterior orifice of the bony ET on the exocranial surface, which lies just medial to the FS, a distinct bony sulcus (the sulcus tubae) extends anteromedially toward the petrous apex, housing the cartilaginous ET. The transition of the bony ET canal to the sulcus tubae is called the isthmus (the narrowest part of the ET) and is marked by the sphenoid spine, located laterally at the level of the FS (Figure 1C). The scaphoid fossa of the sphenoid bone is the portion of the exocranial bone harboring the most distal segment of the cartilaginous ET before it joins the nasopharynx. The length of the ET ranges between 31 and 44 mm. Surgically, the cartilaginous ET may be further subdivided into four segments from posterior to anterior: petrous [adjacent to the petrous (i.e., horizontal) ICA], lacerum (adjacent to the lacerum ICA), pterygoid, and nasopharyngeal (41). Marching from posterior to anterior, the ET assumes a slightly downward and lateral trajectory toward the petrous apex to end in the lateral corner of the nasopharynx. Therefore, its nasopharyngeal opening is situated inferior to the anterior end of the petrous ICA. The anterior end of the sulcus tubae lies lateral to the foramen lacerum and medial to the scaphoid fossa. The tensor tympani muscle is attached along the length of the cartilaginous part of the ET and courses posteriorly to turn sharply around the cochleariform process (a thin bony prominence in the anterior middle ear cavity), where it transitions into a tendon attaching to the handle of the malleus. Most often (72%), the tensor tympani muscle is superior to the ET, but it may also be anterior or posterior to it (42). The tensor tympani is usually covered by a thin bony shell on the MCF, but there may be partial dehiscence (39). Drilling the MCF lateral to the carotid canal (Glasscock's triangle) and mobilizing the ET can increase the exposure of the petrous carotid if needed (43). Cochlea The cochlea is the heart of the peripheral auditory apparatus. It is the most anterior and medial part of the labyrinth (Figure 5). The cochlea, located in the depth of the temporal bone, is formed as a spiral around a central pillar (the modiolus), similar to a snail shell, with 2.5 turns. When unwound, it is almost 35 mm in length (44).
The cochlear spiral is about 5 mm tall, and the width of its base is about 9 mm. The modiolar axis and apex of the cochlea are directed inferolaterally, with its base bulging into the anterior, inferior, and medial corner of the middle ear cavity as the promontory. The modiolar axis of the cochlea makes average angles of 60°, 25°, and 8° with the coronal, sagittal, and axial planes, respectively (45). Also, the modiolar axis is perpendicular to the petrous ridge. The basal turn is almost parallel to the GSPN and subtends an angle of 60° with the IAC on average. Immediately posterior to, and hidden under, the niche of the basal turn of the cochlea in the middle ear cavity (the promontory) are the oval window (superiorly) and the round window (inferiorly). The middle ear ossicles, vestibule, semicircular canals, and facial nerve all lie posterior to the cochlea. The cochlea is in close relationship with the labyrinthine and tympanic segments of the facial nerve, as well as with the geniculate ganglion. This proximity is greatest between the basal turn and the labyrinthine segment (only 0.4 mm apart) (45). The cochlea has a special relationship with the labyrinthine segment of the facial nerve. Upon leaving the fundus of the IAC, the facial nerve makes an anterior and medial turn above the basal turn of the cochlea to end in the geniculate ganglion (46,47). The average distance between the labyrinthine segment of the facial nerve and the basal turn of the cochlea is 4.3 mm (45). The cochleariform process is a tiny bony prominence on the superior aspect of the promontory, anterior to the oval window and inferior to the tympanic segment of the facial nerve, where the tendon of the tensor tympani attaches (see above, "Eustachian tube") (Figure 5). A delicate neural meshwork covers the promontory. This neural meshwork emanates from the tympanic branch of the glossopharyngeal nerve (aka Jacobson's nerve), which enters the middle ear cavity from inferiorly (i.e., the jugular fossa) and finally unites to form the lesser petrosal nerve. The cochlea lies slightly posterolateral and superior to the posterior genu of the ICA in the MCF. The closest distance between the cochlea and the ICA is between the basal cochlear turn and the posterior ICA genu (average, 1.9 mm) (45). The bone separating them may be dehiscent (48). Marching posterolaterally on the MCF from the cochlea, one encounters the semicircular canals. The only prominence in the MCF harboring the semicircular canals is the AE (see above, "Arcuate eminence"). "Cochlear angle" is a term used to describe a bony region between the IAC, the labyrinthine segment of the facial nerve, and the GSPN. This bony region houses the cochlea and is an important landmark in MCF surgery. Basically, the basal and middle turns lie inferior to the labyrinthine segment of the facial nerve, and the apical turn is situated inferior to the geniculate ganglion. The basal turn lies on average 4 mm below the MCF. Intraoperative landmarks for cochlear protection Approaches through the MCF can result in hearing loss due to cochlear damage. The cochlea is located anterior to the fundus of the IAC, in the angle between the IAC and the GSPN, inferomedial to the geniculate ganglion. The part of the cochlea most frequently damaged is the basal turn. Utilizing the anterior petrosal approach, Kim et al. were the first to describe an anatomic line based on landmarks, called the cochlear line (CL), to identify the cochlea and preserve hearing function (49).
The cochlear line is drawn from the crossing point between the GSPN and the petrous ICA, perpendicular to the line drawn over the apex of the superior circumference of the dura of the IAC. The CL marks the anteromedial perimeter of the cochlea. Using this line as a landmark resulted in a safety margin of approximately 2 mm (reported as 2.25 mm) around the cochlear cavity, which does not inhibit the view of the surgical trajectory to the brainstem. However, using the CL is challenging due to positioning and potentially dangerous due to the exposure of neurovascular structures (e.g., the petrous ICA). In 2019, Guo et al. introduced another anatomical line for preserving hearing, called the cochlear safety line (CSL), utilizing the same anterior petrosal approach (50). The CSL projects from Internal auditory canal Between the trigeminal prominence (a subtle bump posterior to the trigeminal depression) and the AE on the MCF is a subtle depression called the meatal depression (Figure 4). The IAC lies roughly inferior to the meatal depression. The IAC is usually a funnel-shaped canal, wider medially, although it may be cylindrical or bud-shaped in a minority of specimens. Its medial porus measures 8.6 mm × 5.6 mm (anteroposterior × vertical), whereas at the fundus these dimensions are 2.5 mm × 4.0 mm (51). The porus lies about 5 mm below the level of the petrous ridge and 5 mm posterior to the posterior lip of the trigeminal depression. Marching laterally along the IAC, the thickness of the MCF bone overlying the IAC becomes smaller, reaching submillimeter thickness above the labyrinthine segment of the facial nerve. This anatomic fact is very important for protecting the contents of the IAC during drilling of the middle fossa. The basal turn of the cochlea is related to the anteroinferior aspect of the lateral end of the IAC. The fundus of the IAC is the point where the facial and vestibulocochlear nerves exit the IAC; it harbors a transverse crest, which divides the fundus into superior and inferior compartments. The superior compartment is further divided by a subtle vertical crest, also called "Bill's bar" in recognition of William House (Figure 5C) (52). Dural anatomy As the temporal lobe sits in the cradle of the middle fossa, it is covered by the dura mater. Regarding its layered structure, the dura mater of the middle fossa proper is relatively simple laterally; i.e., it is composed of two adherent layers: meningeal and endosteal. However, as one marches from lateral to medial, this layered structure becomes more complex. First, it should be noted that all the foramina of the skull base are lined with the endosteal layer of the dura. Therefore, when the dura is detached from lateral to medial on the middle fossa floor, once one reaches a foramen (e.g., the FS or FO), the endosteal layer of the dura must be incised for further dissection to be feasible. Dural folds of the anterior clinoid region The dura over the ACP extends anteriorly, posteriorly, medially, and inferiorly. Anterior continuation of the ACP dura leads to the anterior cranial fossa on the orbital roof. Posteriorly, it continues as the anterior petroclinoid and interclinoid ligaments, connecting the ACP to the petrous apex and posterior clinoid process, respectively. Medially, the dura on the superior surface of the ACP merges with the dura of the planum sphenoidale after crossing the ICA (see below) and lines the margins of the intracranial ostium of the optic canal, continuing around the optic nerve up to the sclera.
The dural fold lining the inferior part of the ostium of the optic canal covers the upper surface of the optic strut. The ophthalmic artery usually originates from the first few millimeters of the intradural ICA. It runs between the optic nerve and the dura lining the superior part of the optic canal and continues interdurally (not intradurally) toward the orbit (53). The medial continuation of the dura on the superior surface of the ACP encircles the ICA laterally along a line of attachment that is inclined superiorly from posterior to anterior, known as the distal dural ring (DDR) (Figure 6) (54,55). The DDR encircles the ICA and merges anteriorly with the dural fold on the superior surface of the optic strut, which separates the ICA and the optic nerve at the posterior end of the optic canal (54). The DDR attachment to the posteromedial wall of the ICA lies at a lower level than at the lateral wall of the ICA; thus, the DDR has a coronal inferomedial inclination (54). Because of this coronal inclination, a dural niche known as the carotid cave is created between the medial ICA and the diaphragma sellae; this niche lies in the subarachnoid space (56). Moving toward the base of the ACP, the upper dural covering of the ACP continues further medially as the falciform ligament, which covers 2-3 mm of the optic nerve proximal to the optic canal (Figure 6C) (56-58). Medially, this dural fold continues on the planum sphenoidale and merges posteriorly with the diaphragma sellae. Dural folds of the paraclinoid region Inferior to the DDR, the dural covering of the inferior surface of the ACP continues to descend on the ICA while encircling its lateral aspect. A few millimeters inferior to the DDR, this dural attachment thickens as the proximal dural ring (PDR). The PDR is basically the anterior continuation of the roof of the cavernous sinus below the ACP and separates the upper surface of the distal few millimeters of the oculomotor nerve from the ACP and adjacent clinoidal ICA just before the nerve enters the SOF; hence, the PDR is also known as the carotid-oculomotor membrane. The DDR and PDR meet posteriorly near the tip of the ACP. Anteriorly, this dural fold continues on the inferior surface of the optic strut, making up the superomedial roof of the SOF. The PDR is less defined on the medial aspect of the ICA, attaching to the carotid sulcus. The segment of the ICA between the PDR and DDR is the clinoidal segment (Figure 6). Meningo-orbital band The meningo-orbital band (MOB) (aka the "frontotemporal dural fold") is a fold of endosteal dura that exits through the lateral (nonneural) compartment of the SOF to continue as the periorbita (Figures 3, 6). Surgically, this band tethers the temporal dura to the SOF. Therefore, the temporal lobe dura can be detethered safely by incising this band, and a pretemporal approach to the lateral wall of the cavernous sinus can be started by elevating the outer meningeal layer of the lateral wall of the cavernous sinus to expose the inner meningeal layer (59). In doing this, the entire ACP is also exposed extradurally. Different techniques have been described to cut the MOB safely and efficiently (60-63). Cutting the MOB and the endosteal dural layer at the rotundum and ovale foramina allows full exposure of the lateral wall of the cavernous sinus and Meckel's cave up to the petrous ridge (59).
The meningo-orbital artery (i.e., the recurrent meningeal branch of the lacrimal artery) runs in the MOB and anastomoses with branches of the anterior division of the MMA (64). Cavernous sinus The cavernous sinus is situated in the medial-most region of the MCF proper (Figure 6). It is formed between the meningeal and endosteal layers of the dura, extending from the petroclival area posteriorly to the SOF anteriorly (59). The actual venous space of the sinus houses the cavernous ICA, its accompanying sympathetic plexus, and the abducens nerve. The meningeal layer of the dura forms the lateral, superior, and posterior walls and the upper (sellar) part of the medial wall. The periosteal layer forms the inferior wall and the lower (sphenoidal) part of the medial wall. Upon entering the cavernous sinus, cranial nerves III, IV, and V1 are accompanied by a sleeve of the meningeal layer. Therefore, the roof and the lateral wall are composed of a double meningeal layer of dura, except for the area of the anterior clinoid process, which has a more complex anatomy (see above). When the MOB (i.e., the endosteal dural layer) is incised, the medial dura of the temporal lobe (i.e., the meningeal layer), which forms the true lateral wall of the cavernous sinus, can be peeled away from the meningeal layer that encases the individual nerves in the lateral wall. This inner membrane seals the venous space of the cavernous sinus laterally (59). Meckel's cave Meckel's cave is a space between the two layers of the dura mater, extending across the petrous apex, that encircles the trigeminal root and ganglion; it is named after the German anatomist Johann Friedrich Meckel (1724-1774). The cave starts from the porus trigeminus under the superior petrosal sinus, where the cisternal segment of the trigeminal nerve crosses the petrous ridge above the trigeminal depression. The subarachnoid space extends beyond the ostium of Meckel's cave (i.e., the trigeminal cistern). The point of the distal cul-de-sac of Meckel's cave is a matter of controversy. A plausible understanding is that the subarachnoid space from the posterior cranial fossa usually extends to a variable extent beyond the posterior edge of the Gasserian ganglion, whereas the dural covering extends to the anterior margin of the ganglion or even up to the exit foramina of its divisions (59). The meningeal structure of Meckel's cave is as follows. From lateral to medial, the following layers are encountered: (1) the outer meningeal layer of Meckel's cave, which is continuous with the superficial meningeal layer of the lateral wall of the cavernous sinus (i.e., the temporal lobe dura propria); (2) the superficial inner layer of the meningeal dural sleeve around the trigeminal nerve; (3) the trigeminal nerve; (4) the deep inner layer of the meningeal dural sleeve; and (5) the periosteal dura of Meckel's cave (65). The periosteal dura covers the lacerum segment of the ICA (the petrolingual ligament) and continues on the surface of the sphenoid bone as the medial wall of the cavernous sinus (59). When the periosteal dural layer is incised at the lateral aspect of the middle fossa foramina, the outer meningeal layer is elevated from the inner layer, which eventually continues as the epineurium of the nerve fibers (65). Technically, incising the endosteal layer at the foramina means that one has entered the space between the meningeal and endosteal layers, which may contain venous lakes.
As the dissection continues farther posteriorly and medially toward the trigeminal root, the petrous ridge is reached, which contains the superior petrosal sinus (65). Farther medially, the two meningeal layers of the dura continue as the tentorium cerebelli, which is a reflection of the meningeal layer (Figure 6). Conclusion The osseous and dural anatomy of the middle fossa is the foundation for understanding its detailed neurovascular anatomy. In order to safely navigate the middle fossa, the surgeon must know every detail of its anatomy. This work summarizes our current knowledge of the middle fossa bony and meningeal anatomy. Author contributions ATM: Conception, design, data collection, article drafting, critical review, review on behalf of all authors, and study supervision. GM-J: Data collection, article drafting, and critical review. MTL: Data collection and critical review. JKL: Critical review. MCP: Critical review and study supervision. HS: Data collection, critical review, and study supervision. All authors contributed to the article and approved the submitted version.
2023-03-26T15:17:44.235Z
2023-03-24T00:00:00.000
{ "year": 2023, "sha1": "87c3d8adf92a3aaa55d65707ed8e00e982a8849a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "13cec0680f144841970506296f0129210848e38b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
22846624
pes2o/s2orc
v3-fos-license
Role of Melt Curve Analysis in Interpretation of Nutrigenomics' MicroRNA Expression Data This article illustrates the importance of melt curve analysis (MCA) in the interpretation of mild nutrigenomic micro(mi)RNA expression data, by measuring the magnitude of expression of key miRNA molecules in the stool of healthy human adults as molecular markers, following the intake of pomegranate juice (PGJ), functional fermented sobya (FS) rich in potentially probiotic lactobacilli, or their combination. Total small RNA was isolated from the stool of 25 volunteers before and following a three-week dietary intervention trial. Expression of 88 miRNA genes was evaluated using Qiagen's 96-well plate RT2 miRNA qPCR arrays. Employing parallel coordinates plots, no significant separation was observed for the gene expression (Cq) values measured with the Roche LightCycler® 480 PCR instrument used in this study, and none of the miRNAs showed statistically significant expression after controlling for the false discovery rate. On the other hand, the melting temperature profiles produced during the PCR amplification run identified seven significant genes (miR-184, miR-203, miR-373, miR-124, miR-96, miR-373 and miR-301a), which separated candidate miRNAs that could function as novel molecular markers, of relevance to oxidative stress and immunoglobulin function, for the intake of polyphenol (PP)-rich juice, functional fermented foods rich in lactobacilli (FS), or their combination. We elaborate on these data and present a detailed review of the use of melt curves for analyzing nutrigenomic miRNA expression data, which initially appear to show no significant expression but are in fact more subtle than this simplistic view suggests, necessitating an understanding of the role of MCA for a comprehensive grasp of what the expression and MCA data collectively imply. Gene expression and its control by miRNAs. A cell's gene expression profile determines its function, phenotype, and response to external stimuli, and thus helps elucidate various cellular functions, biochemical pathways, and regulatory mechanisms (1). Several gene expression profiling methods at the RNA level have emerged during the past years and have been successfully applied to cancer research. Profiling by microarrays allows for the parallel quantification of thousands of genes from multiple samples simultaneously, using a single RNA preparation, and has become valuable because microarrays are convenient to use, do not require large-scale DNA sequencing, give a clear idea of a cell's physiological state, and are considered a comprehensive approach to characterizing cancer molecularly, as seen in studies on colon cancer (1). Control of gene expression has been studied through miRNA molecules, small non-coding RNAs (18-24 nt long) involved in the transcriptional and post-transcriptional regulation of gene expression by inhibiting gene translation. MiRNAs silence gene expression by inhibiting mRNA translation to protein or by enhancing the degradation of mRNA. Since miRNAs were first reported in 1993 (2), the number of identified miRNAs has grown steadily; as of June 2014, the latest miRBase release (v20) (3) contained 24,521 miRNA loci from 206 species, processed to produce 30,424 mature miRNA products. MiRNAs are transcribed by RNA polymerase II as precursors in the form of long primary transcripts. These primary transcripts are converted to mature miRNAs by sequential cleavage by
*Current Address: National Research and Development Center for Egg Processing, College of Food Science and Technology, Huazhong Agricultural University, Wuhan, Hubei, P.R. China. Correspondence to: Farid E. Ahmed, GEM Tox Labs, Institute for Research in Biotechnology, 2905 South Memorial Dr, Greenville, NC 27834, U.S.A. Tel: +1 2528641295, e-mail: gemtoxconsultants@yahoo.com
Drosha converts the long primary transcripts to ~70 nt precursor miRNAs (pre-miRNAs), which migrate to the cytoplasm via Exportin 5 and are converted to mature miRNAs (~22 nt) by Dicer (4). Each miRNA may control multiple genes, and one or more miRNAs regulate a large proportion of human protein-coding genes, whereas each single gene may be regulated by multiple miRNAs (5). MiRNAs inhibit gene expression through interaction with 3' untranslated regions (3' UTRs) of target mRNAs carrying complementary sequences (4,5). The effect of antioxidant polyphenols (abundant in Mediterranean diets) on gene expression, unraveled by the availability of molecular biology techniques, reveals our adaptation to environmental changes (6). Efforts to study the human transcriptome have collectively been applied to tissue, blood, and urine (i.e., normally sterile materials), as well as stool (a non-sterile medium). Extraction protocols that employ commercial reagents to obtain high-yield, reverse-transcribable (RT) RNA from human stool in studies performed on colon cancer have been reported (7,8). Micro(mi)RNAs as biomarkers, and their roles in disease processes. A biomarker is believed to be a characteristic indicator of normal biological processes, pathogenic processes, or pharmacological responses to therapeutic interventions. In contrast, clinical endpoints are considered variables representing a study subject's health from his/her perspective (9). A variety of biomarkers exist today as surrogates to assess clinical outcomes in diseases, predict the health of individuals, or improve drug development. An ideal biomarker should be safe and easily measured, cost-effective to follow up, modifiable with treatment, and consistent across genders and various ethnic groups. Because we never have a complete understanding of all processes affecting an individual's health, biomarkers need to be constantly re-evaluated for the relationship between surrogate endpoints and true clinical endpoints (10). MiRNAs have been used herein as biomarkers for assessing the effect of intake of PP-rich or fermented foods on the expression of 88 miRNA genes known to influence cancer. Disease modulation by nutrients. Hypercholesterolemia is considered a risk factor for chronic heart disease (CHD), and chronic degenerative diseases, caused wholly or partially by dietary patterns, represent the most serious threat to public health (11,12). Moreover, nearly one-third of all cancer deaths are due to poor nutrition, lack of physical activity, and obesity; these risk factors account for nearly 80% of large intestine, breast, and prostate cancers. Chronic inflammation is considered a common factor that contributes to the development and progression of these illnesses, which are caused by and/or modified by diet (13). Pomegranate juice (PGJ) and derived products are considered among the richest sources of polyphenolic compounds, with positive implications for TC, LDL-C and TG plasma lipid profiles (14).
Moreover, anthocyanin and ellagitannin pigments, mainly punicalagins, inhibit the activities of the enzymes 3-hydroxy-3-methylglutaryl-CoA reductase and sterol O-acyltransferase, which are important in cholesterol metabolism (15). Probiotic bacteria also contribute to lowering plasma hypercholesterolemia through a related mechanism, caused by the probiotic bile salt hydrolase (BSH) activity. This probiotic enzyme hydrolyses both conjugated glycodeoxycholic and taurodeoxycholic acids to hydrolysis products, inhibiting cholesterol absorption and decreasing reabsorption of bile acid (16). The colonic microbiota is a central site for the metabolism of dietary PP and colonization of probiotic bacteria. A dietary intervention study with probiotic strains from three Lactobacillus species (L. acidophilus, L. casei and L. rhamnosus) given to healthy adults showed that bacterial consumption caused the differential expression of hundreds to thousands of genes in vivo in the human mucosa (17). The interaction of PP with the gut microbiota influences the expression of some human genes (i.e., nutritional transcriptomics), which mediates mechanisms underlying their beneficial effects (17). Similar in vivo mucosal transcriptome findings have been reported when adults were given the probiotic L. plantarum, illustrating how probiotics modulate human cellular pathways and show remarkable similarity to responses obtained for certain bioactive molecules and drugs (18). Materials and Methods Participants. Study subjects were 25 healthy adults, 20 to 34 years old; inclusion required the absence of metabolic diseases, no use of medication during the previous 6 weeks, and no signs of allergy or hypersensitivity to food or ingested material. Compliance with the supplementation was satisfactory in all subjects, as assessed daily, and all subjects continued their habitual diets throughout the study. The research protocol was approved by the institutional review board, and all subjects gave written consent prior to their participation in the study. Design of the study. Figure 1 shows the design of the nutrigenomic randomized study. Estimated dietary intake was assessed by 3 repeated food records, taken one week before subjects were enrolled in the trial. The average portion size consumed, as well as composition data values from the nutrient composition of the food, were combined to assess average daily energy and nutrient intakes with the "nutrisurvey" software program. The characteristics of the voluntary subjects enrolled in the study, the mean daily energy intake, as well as selected macronutrients, are presented in Table I. Supplements. Pomegranate was obtained in bulk from the Obour Public Market, Cairo, Egypt. Pomegranate fruits were peeled and the juice was extracted using a laboratory pilot press (Braun, Germany). The juice was distributed in aliquots of 100 or 250 grams in air-tight, light-proof polyethylene bottles, and frozen at −20˚C, where pomegranate polyphenols remained stable. Sour sobya, a fermented rice porridge containing per gram 3×10^7 cfu diverse lactic acid bacteria (LAB) and 1×10^7 cfu Saccharomyces cerevisiae, with added ingredients such as milk, sugar and grated coconut, was purchased twice a week from the retail market and kept refrigerated. Table II illustrates the initial and final mean urinary polyphenols, plasma and urinary antioxidative activity, urinary thiobarbituric acid reactive species (TBARS), and erythrocytic glutathione-S-transferase (GST). Stool collection and storage.
Stool was obtained from the 25 healthy adults twice, at day 0 and three weeks after the dietary intervention. All stools were collected with sterile, disposable wood spatulas in clean containers after stools were freshly passed, and then placed for storage into Nalgene screw-top vials (Thermo Fisher Scientific, Inc., Palo Alto, CA, USA), each containing 2 ml of the preservative RNAlater (Applied Biosystems/Ambion, Austin, TX, USA), which prevents the fragmentation of the fragile mRNA molecule (7); vials were stored at -70˚C until samples were ready for further analysis. Total small RNA, containing miRNAs, was extracted from all frozen samples at once, when ready, and there was no need to separate mRNA-containing small miRNAs from total RNA, as small total RNA was suitable for making ss miRNA cDNA. Extraction of total small RNA. Small total RNA was extracted from stool using a guanidinium-based buffer, which comes with the RNeasy Isolation Kit® (Qiagen, Valencia, CA, USA), as we have previously detailed (7). DNase digestion was not carried out, as our earlier work demonstrated no difference in RNA yield or effect on RT-PCR after DNase digestion (7). The time to purify aqueous RNA from all of the 25 frozen stool samples was ~three hours. Small RNA concentrations were measured spectrophotometrically at λ 260 nm, 280 nm and 230 nm, using a NanoDrop spectrophotometer (Thermo Fisher Scientific). The integrity of total RNA was determined by an Agilent 2100 Bioanalyzer (Agilent Technologies, Inc., Palo Alto, CA, USA) utilizing the RNA 6000 Nano LabChip®. The RNA integrity number (RIN) was computed for each sample using the instrument's software (7). Preparation of ss-cDNA for molecular analysis. The RT² miRNA First Strand Kit® from SABiosciences Corporation (Frederick, MD, USA) was employed for making a copy of ss-cDNA in a 10.0 μl reverse transcription (RT) reaction, for each RNA sample in a sterile PCR tube, containing 100 ng total RNA, 1.0 μl miRNA RT primer & ERC mix, 2.0 μl 5X miRNA RT buffer, 1.0 μl miRNA RT enzyme mix, 1.0 μl nucleotide mix and RNase-free H2O to a final volume of 10.0 μl. The same amount of total RNA was used for each sample. Contents were gently mixed with a pipettor, followed by brief centrifugation. All tubes were then incubated for 2 h at 37˚C, followed by heating at 95˚C for 5 min to degrade the RNA and inactivate the RT. All tubes were chilled on ice for 5 min, and 90 μl of RNase-free H2O was added to each tube. Finished miRNA first-strand cDNA synthesis reactions were then stored overnight at -20˚C (7). Use of cancer RT² miRNA PCR array 96-well plates to study miRNA expression. We used a SABiosciences RT² miRNA qPCR Array Plate System for Human (Qiagen) to analyze miRNA expression using real-time, reverse transcription PCR (RT-qPCR), a sensitive and reliable quantitative method for miRNA expression analysis. The arrays employ a SYBR Green real-time PCR detection system, which has been optimized to analyze the expression of many mature miRNAs simultaneously. Each 96-well array plate contains a panel of primer sets for 88 relevant miRNA focused pathways (one universal primer and one gene-specific primer for each miRNA sequence), plus four housekeeping genes (human SNORD48, SNORD47, SNORD44, and U6), and two RNA and two PCR quality controls.
There are duplicate RT controls (RTC) to test the efficiency of the miRNA RT process, with a primer set to detect the template synthesized from the kit's built-in miRNA External RNA Control (ERC). There are also duplicate positive PCR controls (PPC) to test the efficiency of the PCR process, using a pre-dispensed artificial DNA sequence and the primer set that detects it. The two sets of duplicate control wells (RTC and PPC) also test for inter-well, intra-plate consistency. The human RT² miRNA PCR Arrays reflect miRNA sequences annotated by the Sanger miRBase Release 14. Figure 2 shows the layout of the MAH-102F array. Performing real-time quantitative polymerase chain reaction (qPCR). We used RT² SYBR Green qPCR Master Mix (SABiosciences) to obtain accurate results from our qPCR arrays. The following components were mixed in a 15-ml tube for the 96-well plate format: 1,275 μl of 2X RT² SYBR Green PCR Master Mix, 100 μl of diluted first-strand reaction, and 1,175 μl of ddH2O (total volume 2,550 μl, of which 2,400 μl was needed for 96 reactions, each well receiving 25 μl, with 150 μl of cocktail remaining). We employed a Roche LightCycler 480® 96-well block PCR machine (Roche, Mannheim, Germany) to carry out quantitative real-time miRNA expression measurements. When ready, we removed the needed miRNA qPCR arrays, each wrapped in aluminum foil, from their sealed bags, added 25 μl of the same cocktail to each well, and adjusted the ramp rate to 1˚C/sec. We used 45 cycles in the program, and employed the Second Derivative Maximum method, available with the LightCycler 480® software, for data analysis (19). We first heated the 96-well plate for 10 min at 95˚C to activate the HotStart DNA polymerase, then used a three-step cycling program (a 15 sec heating at 95˚C to separate the dsDNA, a 30 sec annealing step at 60˚C to detect and record SYBR Green fluorescence in each well during each cycle, and a final heating step for 30 sec at 72˚C). Each plate was visually inspected after the run for signs of evaporation from the wells. Data were analyzed using the 2^(-ΔΔCt) method (20). Resulting threshold cycle values for all wells were exported to a blank Excel sheet for analysis. We also ran a dissociation (melt) curve program after the cycling program (21), and generated a first-derivative dissociation curve for each well in the plate, using the LightCycler® (LC) software. Statistical and bioinformatics analysis. Gene expression values were standardized by dividing by the SNORD48 value, while raw melting temperatures were used as measured. Analyses were done using the software R (version 3.1.3), with the package MASS (22). One individual had so many missing values that this case was not used in the analysis, so that the number of individuals is 24. For each standardized gene and each melting temperature, a one-way ANOVA was used to obtain a p-value. There were four levels of the explanatory variable: Control, Sobya, Pom, and Both. Parallel coordinate plots (parcoord command in R) (23) were used to visualize the data for each gene and each melting temperature. Coordinates were ordered using the magnitude of the p-value. The two-sample t-test was used on gene expression to compare Control to Sobya and Control to Both (t.test command in R with var.equal=FALSE). p-Values were adjusted to control for the false discovery rate using the method of Benjamini and Yekutieli (24) (p.adjust command in R with method='BY').
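The analysis just described can be sketched in a few lines of R. The following is a minimal, self-contained illustration in which the data frame, the simulated Cq values and the column names are invented placeholders rather than the study's actual data; it mirrors the pipeline in the text (standardize each gene by its SNORD48 value, run a one-way ANOVA per gene over the four groups, adjust p-values by the Benjamini-Yekutieli method, and visualize genes with parcoord).

library(MASS)  # provides parcoord()

# Invented example data: 24 subjects, a 4-level group factor, and a few
# miRNA Cq columns plus the SNORD48 housekeeping column.
set.seed(1)
group <- factor(rep(c("Control", "Sobya", "Pom", "Both"), each = 6))
expr  <- data.frame(SNORD48 = rnorm(24, 25, 1),
                    miR184  = rnorm(24, 30, 2),
                    miR203  = rnorm(24, 31, 2))

# Standardize each gene by dividing by the SNORD48 value, as in the text.
genes <- setdiff(names(expr), "SNORD48")
std   <- as.data.frame(sweep(as.matrix(expr[genes]), 1, expr$SNORD48, "/"))

# One-way ANOVA per gene, then Benjamini-Yekutieli FDR adjustment.
pvals <- sapply(std, function(y) anova(lm(y ~ group))[["Pr(>F)"]][1])
padj  <- p.adjust(pvals, method = "BY")
print(sort(padj))

# Parallel coordinates plot, with genes ordered by raw p-value.
ord <- order(pvals)
parcoord(as.matrix(std[, ord, drop = FALSE]), col = as.integer(group))

# Welch two-sample t-test (var.equal = FALSE), e.g. Control versus Sobya.
t.test(std$miR184[group == "Control"], std$miR184[group == "Sobya"],
       var.equal = FALSE)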
We have bioinformatically correlated the 2-7 or 2-8 complementary nucleotide bases (the seed) of the mature miRNAs with the 3' untranslated region (3' UTR) of target mRNAs, using the TargetScan algorithm (25). Results At baseline, all participants in the trial excreted urinary total polyphenols; however, the inter-individual variation was considerably high (4.89-12.59 mg GAE/100 ml urine). The composition of the three supplements (FS, PGJ and FS + PGJ) served to the volunteers is presented in Table I. The initial and final mean urinary polyphenols, plasma and urinary antioxidative activity, urinary TBARS and erythrocytic GST are presented in Table II; the daily portion of PGJ provided 21 mg PP/day, and the PGJ-FS combination provided 9 mg PP/day. Figure 2 is a layout of the RT² miRNA PCR Array Human Cancer microRNA (MAH-102A). Figure 6 is a graphical representation of the parallel coordinates plots of the studied miRNA genes for melting temperature curve analysis. The genes were ordered using the p-values of a one-way ANOVA based on groups. Genes with the smallest p-values are presented first. Figures 3, 4 and 5 represent characteristics of melt curve analysis protocols. Figure 6a shows the eight employed control genes (SNORD48, SNORD47, SNORD44, RNU6-2, miRTC1, miRTC2, PPC1 and PPC2). In Figure 6b, five miRNA genes (miR-184, miR-203, miR-124, miR-96 and miR-378) show clear separation. Gene miR-184 has the highest separation from the control gene. The miR-203 gene is hardly amplified in Sobya, while it is highly expressed in Pomegranate. For the miR-373 gene, the control group is different from the other three treatment groups. For genes miR-124, miR-96 and miR-378, Pomegranate is well separated from the other three groups. In Figure 6c, for gene miR-301a, the control is separated from the other three groups. Additional miRNA genes are not shown, as their p-values are greater (less significant), and the graphs did not show any meaningful separations. Bioinformatics analysis using the TargetScan algorithm (25) for up-regulated and down-regulated mRNA genes is shown in Table III. The program yielded 21 mRNA genes encoding different cell regulatory functions. The first 12 of these mRNAs were found with the DAVID program (26) to be active in the nucleus and related to transcriptional control of gene regulation. For down-regulated miRNAs, the DAVID algorithm found the first four of these mRNAs to be clustered in cell cycle regulation categories. Discussion Suitability of stool as a medium for developing a sensitive molecular biomarker screen. Stool represents a challenging environment, as it contains many substances that may not be consistently removed during sample preparation, in addition to certain PCR inhibitors, all of which must be removed for a successful PCR reaction. Our results (27) and those of others (8,28) have shown that the presence of non-transformed RNA and other substances in stool does not interfere with measuring miRNA expression, because of the use of suitable PCR primers and the robustness of the real-time qPCR method (19). Besides, stool colonocytes contain much more miRNA and mRNA than is available in free circulation, as in plasma (29), all factors that facilitate accurate and quantitative measurements. PCR amplification and the effect of inhibitory substances. PCR has been used for miRNA quantification because of its extreme sensitivity.
This method, however, could lead to errors because of the presence of inhibiting substances, representing diverse compounds with different properties and mechanisms of action, which exert their effects by direct interaction with the DNA to be amplified, or through interference with the employed thermostable DNA polymerase (30). Agents that reduce Mg2+ availability, or interfere with the binding of Mg2+ to DNA polymerase, can inhibit the PCR reaction (31). Calcium ion is another inorganic inhibiting substance, although most PCR inhibitors are organic compounds (e.g., bile salts, sodium dodecyl sulphate, urea, phenol, ethanol, polysaccharides), as well as proteins (e.g., collagen, hemoglobin, immunoglobulin G and proteinases) (32). The existence of polysaccharides in stool can decrease the capacity to resuspend precipitated RNA, or disrupt the enzymatic reaction by mimicking the structure of nucleic acids. The DNA template of the PCR, as well as primer binding to the DNA template, can be inhibited by nucleases and other inhibitors (33). Remedial strategies for removal of inhibitors in stool, such as additional extraction steps, Sephadex G-200 chromatography, heat treatment before the PCR, chloroform extraction, treatment with activated carbon, adding BSA, or dilution of the sample, have been suggested (34). We found the dilution method, in which the extracted ribonucleic acid (RNA) is diluted in the reaction mixture with distilled water or an isotonic buffer, to be the most practical method for preventing PCR inhibition using a commercially available diluent (35). Role of biomarker miRNAs in various diseases. MiRNAs have been shown to regulate development (36) and apoptosis (37), and dysregulation of miRNAs has been associated with many diseases, such as various cancers (38), heart diseases (39), kidney diseases (40), nervous system diseases (41), alcoholism (42), obesity (43), auditory diseases (44), eye diseases (45) and skeletal growth defects (46), and plays a key role in the host-virus pathogenesis of viral diseases (47). A negative correlation was found between tissue specificity of interactions and miRNA in a number of diseases, along with an association between miRNA conservation and disease, and predefined miRNA groups allow for identification of novel disease biomarkers at the miRNA level (48). Specific miRNAs are crucial in oncogenesis (49), effective in classifying solid (50) and liquid tumors (51), and function as oncogenes or tumor suppressor genes (52). MiRNA genes are often located at fragile sites, as well as minimal regions of loss of heterozygosity, or amplification of common breakpoint regions, suggesting their involvement in carcinogenesis (53). MiRNAs have been shown to serve as biomarkers for cancer diagnosis, prognosis and/or response to therapy (54,55). Evidence suggests that miRNA expression profiles can cluster similar tumor types together more accurately than expression profiles of protein-coding messenger (m)RNA genes (56). Besides, small miRNAs (~18-22 nt long) are more stable molecules than the fragile mRNA (27). Melting curve analysis (MCA). MCA is an assessment of the dissociation characteristics of dsDNA during heating, which leads to a rise in absorbance intensity (hyperchromicity). The temperature at which 50% of the DNA is denatured is referred to as the melting point, Tm.
Gathered information can be used to infer the presence of single nucleotide polymorphisms (SNPs), as well as clues to a molecule's mode of interaction with DNA: an intercalator slots in between base pairs through pi stacking, and increasing salt concentration leads to a rise in melt temperature, whereas pH can affect DNA's stability, leading to lowering of its melting temperature (57). Originally, strand dissociation was measured using UV absorbance, but techniques based on fluorescence measurements using DNA-intercalating fluorophores, such as SYBR Green I, EvaGreen, or fluorophore-labelled DNA probes (FRET probes), which fluoresce when bound to dsDNA (58), are now common. Specialized thermal cyclers that run the qPCR, such as the Roche LightCycler (LC) 480® used in this study, are programmed to produce the melt curve after the amplification cycles are completed. As the temperature increases, dsDNA denatures, becoming ss, and the dye dissociates, resulting in a decrease in fluorescence. The graph of the negative first derivative of the melting curve (-dF/dT) represents the rate of change of fluorescence in the amplification reaction, and allows pin-pointing the temperature of dissociation (50% dissociation) using the formed peaks, to obviate or complement sequencing efforts (57). The melting temperature (Tm) of each product is defined as the temperature at which the corresponding peak maximum occurs. The MCA confirms the specificity of the chosen primers, and also reveals the presence of primer-dimers, which usually melt at lower temperatures than the desired product because of their small size; their presence severely reduces the amplification efficiency of the target gene, as they compete for reaction components during amplification, and ultimately the accuracy of the data. The greatest effect is observed at the lowest concentrations of DNA, which ultimately compromises the dynamic range. Moreover, nonspecific amplifications may result in PCR products that melt at temperatures above or below that of the desired product. Optimizing reaction components (Mg2+, detergents, SYBR Green I concentration) and annealing temperatures aids in decreasing nonspecific product formation. Adequate primer design, however, is considered the best method to avoid nonspecific product formation. Including a negative control will determine if there is co-amplified genomic DNA (57,58). The formula for Tm calculation of a non-self-complementary duplex is Tm = ΔH° / (ΔS° + R ln(CT/4)), where the thermodynamic parameter ΔH° is the enthalpy change, ΔS° is the entropy change, R is the gas constant, and CT is the total strand concentration; these free-energy parameters predict the Tm of most oligonucleotide duplexes to within 5˚C, and permit prediction of DNA, as well as RNA, duplex stabilities. It should be noted that Tm depends on the conditions of the experiment, such as oligonucleotide concentration, salt concentration, mismatches and single nucleotide polymorphisms (SNPs) (59). The OligoAnalyzer® Tool (www.idtdna.com/analyzer/Applications/Oligoanalyzer) allows for calculating the Tm of employed nucleotides. Microscale thermophoresis is a method that determines the stability, length, conformation and modifications of DNA and RNA. It relies on the directed movement of molecules in a temperature gradient, which depends on surface characteristics of the molecule, such as size, charge and hydrophobicity. By measuring the thermophoresis of nucleic acids over a temperature gradient, one finds clear melting transitions, and can resolve intermediate conformational states (Figure 3).
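As a worked numerical illustration of the Tm formula above, the following short R function evaluates the two-state prediction; the ΔH°, ΔS° and CT values passed in the example are invented round numbers for a short duplex, not parameters reported in this article.

# Two-state melting temperature of a non-self-complementary duplex:
# Tm = dH / (dS + R * ln(CT / 4)), reported in degrees Celsius.
tm_celsius <- function(dH_kcal, dS_cal, CT_molar) {
  R  <- 1.987            # gas constant, cal/(K*mol)
  dH <- dH_kcal * 1000   # kcal/mol -> cal/mol
  dH / (dS_cal + R * log(CT_molar / 4)) - 273.15
}

# Invented example: dH = -80 kcal/mol, dS = -220 cal/(K*mol), CT = 0.25 uM;
# this gives a Tm of roughly 43 degrees C.
tm_celsius(-80, -220, 0.25e-6)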
These intermediate states are indicated by an additional peak in the thermophoretic signal preceding most melting transitions (Figure 3B) (57, 60-62). Agarose gel visualization is the gold standard for analyzing PCR products. Alternatively, to reduce the number of gels needed to confirm the presence of a single amplicon, the "uMelt" melting curve prediction software (http://www.dna.utah/umelt/umelt.html) can be used to confirm that a single amplicon is generated by PCR (63). This program predicts melt curves and their derivatives for qPCR-length amplicons, and is suited to testing for multiple peaks in a single amplicon product. Because SYBR Green I dye has several limitations, including inhibition of PCR, preferential binding to GC-rich sequences and effects on MCA, two intercalating dyes, SYTO-13 and SYTO-82, were tried and did not show these negative effects; SYTO-82 demonstrated a 50-fold lower detection limit (64), as well as the best combination of time-to-threshold (Tt) and signal-to-noise ratio (SNR) (65). To optimize the performance of the buffer, a PCR mix supplemented with two additives, 1 M 1,2-propanediol and 0.2 M trehalose, was shown to decrease Tm, efficiently neutralize PCR inhibitors, and increase the robustness and performance of qPCR with short amplicons (66). "uAnalyze" is another web-based tool, similar to uMelt, for analyzing high-resolution melting PCR product data, in which recursive nearest-neighbor thermodynamic calculations are used to predict a melt curve. Using 14 amplicons of CYBB (cytochrome b-245 heavy chain, also known as cytochrome b(558) subunit), the mean±standard deviation difference between experimental and predicted fluorescence at 50% helicity was -0.04±0.48˚C (67). MCA has been an effective and economical way for identification of virus strains (68), genes (69), bacterial strains (70,71), insect species (77), temperature validation of PCR cyclers (72), detection of translocations in lymphomas (73) and RNA interference/gene silencing (60). Thus, the presence of double peaks during MCA is not always indicative of nonspecific amplification, and other methods, such as agarose gel electrophoresis and melt curve prediction software (60,67), should be used to verify the amplicon (74). For example, Figure 5A shows a single peak for exon 17b of the CFTR (Cystic Fibrosis Transmembrane Conductance Regulator) gene, whereas the melt curve for an amplicon from exon 7 of CFTR shows two peaks, which could be interpreted as indicative of two separate amplicons (Figure 5B). However, analysis by agarose gel electrophoresis showed only one band. To resolve this conflict, an understanding of how melt curves are produced is needed. It should be emphasized that intercalating dyes used in qPCR, such as SYBR Green, will fluoresce only when the dye is bound to dsDNA, but not in the presence of ssDNA, or when the dye is free in solution. After the amplification cycle in qPCR, the instrument starts at a preset temperature above the primer Tm, and as the temperature increases, dsDNA denatures, becoming ssDNA, and the dye therefore dissociates (Figure 3A). Plotting the change in slope of this curve as a function of temperature gives the melt curve for CFTR exon 17b (Figure 4A). However, if we allow for the possibility that DNA may assume an intermediate state that is neither dsDNA nor ssDNA, the raw data from the CFTR exon 7 melt will look like Figure 4B.
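To make the derivative melt curve described above concrete, here is a minimal R sketch that turns a fluorescence-versus-temperature trace into a -dF/dT plot and reads off the peak temperatures; the simulated two-domain amplicon (Tm values of 78˚C and 86˚C) is an invented example, not data from this study.

# Simulated melt trace for an amplicon with two melting domains.
temp    <- seq(65, 95, by = 0.1)
sigmoid <- function(t, tm, k) 1 / (1 + exp((t - tm) / k))
fluor   <- 0.6 * sigmoid(temp, 78, 0.7) + 0.4 * sigmoid(temp, 86, 0.7)

# Negative first derivative (-dF/dT), evaluated at interval midpoints.
dF   <- -diff(fluor) / diff(temp)
mids <- head(temp, -1) + diff(temp) / 2

# Local maxima above a small noise floor are reported as Tm peaks.
inner <- 2:(length(dF) - 1)
peaks <- inner[dF[inner] > dF[inner - 1] &
               dF[inner] > dF[inner + 1] & dF[inner] > 0.05]
round(mids[peaks], 1)   # approximately 78 and 86

# Derivative melt curve, analogous to the instrument's -dF/dT plot.
plot(mids, dF, type = "l", xlab = "Temperature (C)", ylab = "-dF/dT")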
This could happen when there are regions of the amplicon that are more stable (e.g., G/C rich), which do not melt immediately, but maintain their ds configuration until the temperature becomes sufficiently high to melt them, which results in two phases (Figure 4B). Additional sequence factors, such as amplicon misalignment in A/T-rich regions, and designs that have secondary structure in the amplicon region, can also produce products that melt in multiple phases. An advancement of MCA, referred to as High Resolution Melt (HRM), was discovered and developed by Idaho Technology and the University of Utah (75, http://www.dna.utah.edu/Hi-Res/TOP_Hi-Res%20Melting.html); it has been useful for detection of mutations and SNPs, enabling differentiation of homozygous wild-type, heterozygous and homozygous mutant alleles from their dissociation patterns. HRM has been used to identify variation in nucleic acid sequences, enabled by the use of more advanced software; it is therefore less expensive than probe-based genotyping methods, and allows for identification of variants quickly and accurately (75). This method has been widely used in molecular diagnosis and for detection of mutations (76-78). In our study, we found melt curve analysis to be a useful and informative method: after the statistical analysis carried out on our miRNA expression samples showed no preferential expression of any of the 88 miRNA genes, a melt curve analysis on the same samples found that we could distinguish 7 miRNAs (miR-184, miR-203, miR-373, miR-124, miR-96, miR-378 and miR-301a), due to their different separation melting profiles (Figure 6). Thus, we believe that it is imperative for investigators to run this kind of analysis on samples that may not show expression differences in their studied mRNA or miRNA genes, such as nutritional samples. Bioinformatic methods to correlate seed miRNA data with messenger (m)RNA data. To provide information about complex regulatory elements, we correlated the miRNA results with our available mRNA data (79), as well as with data available in the open literature, using the computer model TargetScan (80,81). The authenticity of a functional miRNA/mRNA target pair, once identified, was validated by fulfilling four basic criteria: a) the miRNA/mRNA target interaction can be verified, b) the predicted miRNA and mRNA target genes are co-expressed, c) a given miRNA must have a predictable effect on target protein expression, and d) miRNA-mediated regulation of target gene expression should equate to altered biological function. Bioinformatics showed 21 up-regulated mRNA genes encoding different cell regulatory functions, and 12 of these mRNAs were found to be active in the nucleus and related to transcriptional control of gene regulation. For down-regulated miRNAs, four of the mRNAs appeared to be clustered in cell-cycle regulation categories (Table III) (27). In conclusion, stool miRNAs served as molecular markers for the intake of PGJ, FS, or combinations of the two supplements. Melt curve analysis is a powerful novel approach because, after the statistical analysis carried out on our miRNA samples produced negative gene expression (Cq) results, running melt curve analysis on the same samples identified 7 of the 88 miRNA genes imprinted on the highly sensitive focused PCR arrays (~8% of the genes), and the use of parallel coordinates plots showed noticeable separation of the melt curve profiles.
Thus, we believe that it is imperative for investigators to run this kind of MCA on nutrition samples, which are mild in nature and may not always show significant differences in the expression of the studied miRNA genes. The same analysis can also be envisioned for messenger mRNA amplifications, using mRNA arrays, and then using bioinformatics resources to correlate mRNA with miRNA data. We are also planning to validate these initial results by carrying out additional miRNA nutrigenomic expression studies, with many more observations, using PP, FS and their combinations; collectively, the obtained results would fully demonstrate the sensitivity/specificity of this powerful systemic molecular approach for analyzing nutrient-gene data.
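As a small illustration of the seed-based miRNA/mRNA correlation mentioned above, the following R sketch reverse-complements bases 2-8 of a mature miRNA and searches a 3' UTR for the resulting seed-match site; both sequences are invented toy examples, and real TargetScan predictions additionally weigh site type, context and conservation.

# Reverse-complement an RNA string (5'->3' in, 5'->3' out).
revcomp <- function(s) {
  comp <- c(A = "U", U = "A", G = "C", C = "G")
  paste(rev(comp[strsplit(s, "")[[1]]]), collapse = "")
}

mirna <- "UGGACGGAGAACUGAUAAGGGU"         # toy mature miRNA
seed  <- substr(mirna, 2, 8)              # seed region: bases 2-8
site  <- chartr("U", "T", revcomp(seed))  # seed-match site, DNA alphabet
utr   <- "AAATCCGTCCATTTTCCGTCCAGG"       # toy 3' UTR
gregexpr(site, utr, fixed = TRUE)[[1]]    # match positions (here 4 and 15)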
2018-04-03T05:35:10.426Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "f1f480016a50b9ce50cf1bf0891f347b7a5ad21f", "oa_license": null, "oa_url": "https://cgp.iiarjournals.org/content/cgp/14/6/469.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9ba0af2b3690fee7cfa9a443bee7e848ac27b6e9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
57758321
pes2o/s2orc
v3-fos-license
Progesterone effects on vaginal cytokines in women with a history of preterm birth Objective To determine the effect of intramuscular progesterone on the vaginal immune response of pregnant women with a history of prior preterm birth. Methods A prospective cohort study of women at 11–16 weeks gestation, ≥18 years of age, and carrying a singleton pregnancy was conducted from June 2016 to August 2017 after IRB approval. Women in the progesterone arm had a history of preterm birth and received weekly intramuscular 17-hydroxyprogesterone caproate. Controls comprised women with healthy, uncomplicated pregnancies. Excluded were women with vaginitis, diabetes mellitus, hypertension, or other chronic diseases affecting the immune response. A vaginal wash was performed at enrollment, at 26–28 weeks, and at 35–36 weeks gestation. Samples underwent semi-quantitative detection of human inflammatory markers. Pixel density data were analyzed, and a P value <0.05 was considered significant. Results There were 39 women included, 10 with a prior preterm birth and 29 controls. The baseline demographics and pregnancy outcomes for both groups were similar in age, parity, race, BMI, gestational age at delivery, mode of delivery, and birth weight. Enrollment cytokines in women with a prior preterm birth, including IL-1 alpha (39.2±25.1% versus 26.1±13.2%; P = 0.04), IL-1 beta (47.9±26.4% versus 24.9±17%; P<0.01), IL-2 (16.7±9.3% versus 11.3±6.3%; P = 0.03), and IL-13 (16.9±12.4% versus 8.2±7.4%; P = 0.01), were significantly elevated compared to controls. In the third trimester, the cytokine densities for IL-1 alpha (26.0±18.2% versus 22.3±12.0%; P = 0.49), IL-1 beta (31.8±15.9% versus 33.1±16.8%; P = 0.84), IL-2 (10.0±8.4% versus 10.9±5.9%; P = 0.71), and IL-13 (9.1±5.9% versus 10.0±6.5%; P = 0.71) were all statistically similar between the progesterone arm and controls, respectively. Conclusion There is an increased cytokine presence in vaginal washings of women at risk for preterm birth, which appears to be modified following the administration of 17-hydroxyprogesterone caproate to levels similar to healthy controls. Introduction Intramuscular progesterone, administered to women at risk of preterm birth, reduces the likelihood of a subsequent preterm birth by approximately one-third [1,2]. The exact mechanism of action of progesterone therapy in preventing preterm birth is not well understood [3]. To date, studies have inadequately investigated progesterone's mechanisms of action in vivo. Despite the lack of understanding, there has been an increasing use of progesterone in pregnancy for prevention of preterm birth. While several theories exist surrounding the mechanisms of action, which include immunomodulatory effects, signaling pathway regulation, and progesterone receptor alterations, the protective effect of progesterone may be manifested through cytokine modification [4]. At the onset of labor, the ratio of progesterone receptor-A (PR-A) to progesterone receptor-B (PR-B) increases. Due to an increase in myometrial PR-A, there is a functional withdrawal of progesterone and an increase in sensitivity to contractile stimuli [3]. This effect is also modulated by prostaglandins produced prior to the onset of labor [3]. Progesterone-dependent immunomodulation is another plausible mechanism that enables pregnancy to proceed to term. Immunologic effects of progesterone are mediated by a 34-kDa protein named progesterone-induced blocking factor (PIBF) [4].
Progesterone-induced blocking factor, synthesized by lymphocytes of healthy pregnant women in the presence of progesterone, inhibits natural killer (NK) cell activity and modifies the cytokine balance [5]. Rodent models, human fetoplacental artery explants, and experiments with human lymphocytes have shown progesterone to downregulate the immune response [6,7]. Additionally, pretreatment with progesterone prior to intrauterine infection has been associated with a decrease in bacteria-induced upregulation of Toll-like receptors in the cervix and placenta [8]. Recently, Monsanto et al compared women with cervical insufficiency with normal controls and demonstrated elevated levels of proinflammatory cytokines in cervicovaginal fluid, suggesting a dysregulation of the local vaginal immune environment [9]. Placement of a cervical cerclage significantly reduced the proinflammatory cytokines, suggesting that cerclage may prevent preterm birth through reduction of local inflammation in cervical insufficiency. The objective of this study was to evaluate the vaginal immune response of progesterone supplementation in pregnant women at risk for preterm birth compared to normal healthy pregnant controls. Materials and methods A prospective cohort study was conducted at an academic medical center from June 2016 to August 2017 with Stony Brook University Institutional Review Board (IRB) approval (COR-HIS #2016-3441-F). Women were included if they were at a gestational age of 11 to 16 weeks, 18 years of age or older, and carrying a singleton pregnancy, without acute or chronic vaginitis, diabetes mellitus, hypertension, or other chronic diseases or therapies that would affect the inflammatory or immune response. Women in the progesterone arm had a history of preterm birth and therefore were candidates for intramuscular 17-hydroxyprogesterone caproate. Controls comprised women with healthy, uncomplicated pregnancies who did not receive progesterone therapy. All women were recruited prior to initiation of progesterone treatment. Intramuscular progesterone treatment was started at 16-20 weeks gestation in women with a prior preterm birth. A vaginal rinse was performed at three points in the study: on enrollment into the study at 11 to 16 weeks, at 26 to 28 weeks, and at 35 to 36 weeks gestation. All vaginal rinse samples were collected with a sterile speculum while subjects were in the supine position. A 10 ml syringe filled with 7 ml of sterile water, attached to a soft plastic 14-gauge catheter, was used to irrigate the vaginal vault. A sterile cotton swab was used to gently rub the mucosal surfaces. The fluid was aspirated back into a sterile 15 ml test tube. The samples were placed on ice and processed within 2 hours [10]. The specimen tube was centrifuged at 3,000 RPM for 10 minutes. Aliquots of the vaginal lavage supernatant were stored at -80 degrees centigrade. The aliquots were later analyzed using RayBio Human Inflammation Antibody Array C3 8-well plates (RayBiotech, Inc., Norcross, GA) according to the manufacturer's instructions. Samples underwent semi-quantitative detection of 40 human inflammatory markers, and all cytokines were evaluated in duplicate. After treatment with biotinylated antibodies, followed by HRP-Streptavidin-labeled antibodies, the chemiluminescence detection of the products was performed using X-ray films.
Numerical densitometry data were extracted by the NIH software program ImageJ v1.51j8 (National Institutes of Health, USA), using a dot array analysis plugin, as described in a previous publication [10]. All images were acquired using the same microscope settings, including filter and exposure parameters, with image acquisition performed from samples processed side by side [11]. Patient demographics, including age, parity, ethnicity, pre-pregnancy body mass index (BMI), and tobacco use, were extracted from the medical record. Pregnancy outcomes collected included gestational age at delivery, mode of delivery, birth weight, Apgar scores and Neonatal Intensive Care Unit (NICU) admission. Categorical variables were analyzed using Chi-square or Fisher's exact test, and continuous, normally distributed variables were analyzed using Student's t test. Cytokine data were normalized by dividing each sample's pixel density by the pixel density of a positive control [10,11]. A P value <0.05 was deemed statistically significant. Statistical analysis was performed using SPSS software (IBM SPSS Statistics for Windows, Version 22.0, Armonk, NY). Results Thirty-nine women were included in the evaluation: 10 women with a previous preterm birth utilizing progesterone and 29 healthy control women. The baseline demographics and pregnancy outcomes of both groups were similar in age, parity, race, body mass index (BMI), gestational age at delivery, mode of delivery, and birth weight (Table 1). Overall, there were three late preterm deliveries, one in the progesterone arm and two in the control arm (10% versus 6.9%; P = 1.0). No women delivered at less than 34 weeks. Discussion There is an increased cytokine presence in vaginal washings in women at risk for preterm birth, which changes following the use of 17-OHP. There is a paucity of human investigations surrounding 17-OHP use and the possible mechanisms of action in the reduction of preterm birth in at-risk women. Our work focused on the immune-inflammatory changes in the vaginal environment in women utilizing 17-OHP. Results demonstrate an initial increased presence of interleukin-1 alpha, interleukin-1 beta, interleukin-2, and interleukin-13 in women with a prior preterm birth compared to controls. Interleukin-1 was the first pro-inflammatory cytokine to be associated with infection-mediated spontaneous preterm birth [12-16]. Imseis et al established that vaginal levels of interleukin-1 were significantly elevated in laboring patients as compared with non-laboring patients [17]. Additionally, Romero et al identified that interleukin-1 alpha and interleukin-1 beta were significantly increased in patients with preterm premature rupture of membranes and preterm labor [18]. In the later third trimester, after 17-OHP use, the interleukin-1 alpha, interleukin-1 beta, interleukin-2, and interleukin-13 presence matched the healthy controls. Similarly to Monsanto et al, who compared women with cervical insufficiency with control women, cytokines measured in the vaginal fluids were significantly higher in the patients before undergoing cerclage placement, and after the intervention the levels of interleukin-1 beta, interleukin-6, interleukin-12, MCP-1, and TNF-alpha became equivalent to those of control women at the time of admission [9]. Our findings confirm that one of the proposed mechanisms of 17-OHP reduction in preterm birth is through decreasing the cytokine inflammatory response generated by the fetal membranes and placenta [19].
In women at risk for preterm birth, the densities for the eotaxin-1, MCP-1 and MIP-1 alpha cytokines were significantly suppressed when compared with healthy women. Eotaxin-1, which is also known as C-C motif chemokine-11, has been associated with recruitment of eosinophils by inducing their chemotaxis, and is involved in allergic responses [20]. Kraus et al describe a suppression of serum eotaxin-1 and MCP-1 throughout pregnancy compared to the postpartum period [21]. The clinical implication of progesterone suppression of eotaxin-1 and MCP-1 by the third trimester is unclear. MIP-1 alpha, a chemotactic cytokine, is produced by macrophages and is crucial in the inflammatory response. Dudley et al described that MIP-1 alpha produced by decidual cells plays an important role in infection-associated preterm labor [22]. Romero et al also found elevated MIP-1 alpha levels in amniotic fluid samples of women with infection-associated preterm labor [23]. One limitation of our study was the small sample size of both arms of the investigation; however, given the significance of the condition, we believe that this cohort still provides valuable evidence. Our unique approach to investigating the immune response following progesterone use did not allow for a pre-investigation sample size calculation, which is similar to other comparable investigations [9]. Although differences in cytokine densities were identified in our investigation, a larger number of women in each of the cohort groups could potentially result in alternative findings. In our population, progesterone was effective in having women deliver in the later third trimester. The hypothetical maternal vaginal cytokine densities of a very preterm birth, or of a birth after failed progesterone therapy, may look completely different. (Figure: comparison of cytokines between women with a prior preterm birth and healthy controls at 26 to 28 weeks; complete cytokine profile at 26 to 28 weeks (visit 2), with data presented as density percentage and * signifying P<0.05.) Another limitation was the use of semi-quantitative pixel density analysis. Although this methodology has been utilized in many cytokine-related investigations, an exact quantitative value may provide a better understanding of the human vaginal cytokine milieu [11]. In a living human reproductive system, there are innumerable cellular and soluble mediators which interact amongst each other and with surrounding cells. While this study has certain limitations, it nonetheless presents interesting and novel results in an area that has been substantially understudied. The precise mechanism of preterm labor is unknown, and it is likely multifactorial. The pathophysiology which allows progesterone to work is also poorly understood. Yet, the use of progesterone in the prevention of preterm birth has been effective in the subpopulation of women at risk for recurrence. Based on our work, progesterone likely prevents preterm birth through alterations in the maternal immune system. We demonstrated that the densities of the pro-inflammatory cytokines interleukin-1 beta, interleukin-1 alpha, interleukin-2, and interleukin-13 were significantly elevated in women with a prior preterm birth compared with controls. This was followed by a downregulation of these cytokines following weekly administration of intramuscular progesterone. Additionally, other cytokines, eotaxin-1, MCP-1 and MIP-1 alpha, which have been implicated in the preterm labor process, were also reduced following progesterone therapy.
Further investigation into the immunological changes resulting from progesterone use in the prevention of preterm birth is warranted.
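A minimal R sketch of the semi-quantitative cytokine analysis described in the Methods follows (normalize each spot's pixel density by the positive control, then test each cytokine between groups); the matrix, group sizes and values are invented placeholders, and the authors' actual analysis was run in SPSS.

# Invented densitometry data: 39 subjects (10 with a prior preterm birth,
# 29 controls), cytokine columns plus a positive-control (PPC) column.
set.seed(2)
group <- factor(c(rep("preterm_history", 10), rep("control", 29)))
dens  <- data.frame(PPC      = runif(39, 0.8, 1.2),
                    IL1alpha = runif(39, 0.2, 0.5),
                    IL1beta  = runif(39, 0.2, 0.6))

# Normalize each cytokine by the positive-control density (as a percentage).
cyt  <- setdiff(names(dens), "PPC")
norm <- as.data.frame(100 * sweep(as.matrix(dens[cyt]), 1, dens$PPC, "/"))

# Welch t-test per cytokine; flag P < 0.05 as in the article.
pv <- sapply(cyt, function(cn) t.test(norm[[cn]] ~ group)$p.value)
data.frame(cytokine = cyt, p = round(pv, 3), significant = pv < 0.05)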
2019-01-22T22:24:04.767Z
2018-12-31T00:00:00.000
{ "year": 2018, "sha1": "524afa65ee25dd407aa02636db4c534033f2b248", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0209346&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "524afa65ee25dd407aa02636db4c534033f2b248", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247922478
pes2o/s2orc
v3-fos-license
Free boundary cluster with Robin condition on the transmission interface We formulate and study a variational two-phase free boundary problem with Robin condition on the interface between the two phases, and we prove existence and regularity of solutions in dimension two. Introduction Free boundary problems with two and more phases are often used to describe models in different areas of Physics, Engineering and Life Sciences, for instance in Fluid Dynamics (Bernoulli free boundary problems), Dynamics of Populations (optimal partition problems), and Mechanics and Phase Transition (obstacle problems). The different phases are called segregated if they occupy different space regions; segregation occurs for instance in the two-phase Bernoulli problem, the two-phase obstacle problem, and optimal partition problems. In all these cases the interaction between the different phases is supposed to be competitive; in particular, the interfaces are not formed because it is convenient energetically, but due to the lack of space. For instance, if we have two disjoint one-phase solutions of the variational Bernoulli (or obstacle) problem, then the couple they form is a minimizer of the corresponding two-phase problem, and even if the two phases are very close to each other, an interface is not formed (we briefly discuss this phenomenon in Section 1.1). In this paper, we consider a two-phase problem in which the phases are still segregated, but the interaction along the free interface is collaborative. In this case, if two or more disjoint one-phase solutions are sufficiently close, then it is energetically convenient for them to create a free interface, that is, the formation of clusters is incentivized. We introduce the functional in Section 1.2, while in Section 1.3 we state the variational problem and the main results of the paper. 1.1. The classical one-phase and two-phase Bernoulli free boundary problems. Let D be a smooth bounded open set in R^d. Let g : ∂D → R be a given nonnegative function and λ > 0 a given constant. The classical one-phase Bernoulli problem can be stated as follows. Find a domain Ω ⊂ D and a function u : D → R such that u = g on ∂D and ∆u = 0 in Ω, u = 0 and |∇u| = λ on ∂Ω ∩ D. In the seminal paper [1] Alt and Caffarelli showed that the existence of such a couple (u, Ω) can be obtained by minimizing the functional J_λ(u) = ∫_D |∇u|² dx + λ² |{u > 0} ∩ D|, among all functions in H¹(D) such that u = g on ∂D, and then taking Ω := {u > 0}. In the two-phase problem, the two-phase interface ∂Ω₁ ∩ ∂Ω₂ is formed when the two sets Ω₁ and Ω₂ act as geometric obstacles to each other; if Ω₁ and Ω₂ are disjoint one-phase solutions, then the two-phase interface is simply not formed. In other words, if u₁ and u₂ are minimizers of the one-phase functionals J_{λ₁} and J_{λ₂} such that u₁u₂ ≡ 0, then it is immediate to check that u = u₁ − u₂ is a minimizer of the two-phase functional J_{λ₁,λ₂}. In fact, if v ∈ H¹(D) is such that v = u on ∂D, then v⁺ = u₁ and v⁻ = u₂ on ∂D and so, by the optimality of u₁ and u₂, we get J_{λ₁,λ₂}(v) = J_{λ₁}(v⁺) + J_{λ₂}(v⁻) ≥ J_{λ₁}(u₁) + J_{λ₂}(u₂) = J_{λ₁,λ₂}(u). 1.2. A two-phase problem with Robin condition on the free interface. In this paper we study a different type of two-phase problem, in which the two state functions u₁ and u₂ might not vanish on the interface ∂Ω₁ ∩ ∂Ω₂. Precisely, given β > 0, Λ > 0 and a fixed set D, we consider the functional J_{β,Λ}(u, Ω₁, Ω₂) = ∫_D |∇u|² dx + β ∫_{∂*Ω₁ ∩ ∂*Ω₂} u² dH^{d−1} + Λ |Ω₁ ∪ Ω₂|, defined for couples of disjoint domains Ω₁, Ω₂ in D and functions u ∈ H¹(D) with u = 0 on D \ (Ω₁ ∪ Ω₂).
We will then show that if (Ω₁, Ω₂, u) locally minimizes J_{β,Λ} in D, then the functions u₁ = u·𝟙_{Ω₁} and u₂ = u·𝟙_{Ω₂} satisfy an additional condition on ∂Ω₁ ∩ ∂Ω₂ involving the mean curvature of the interface (see [9]). Notice that if (u₁, Ω₁) and (u₂, Ω₂) are two minimizers of the one-phase Bernoulli functional J_{√Λ} with disjoint supports (Ω₁ ∩ Ω₂ = ∅), the triple (Ω₁, Ω₂, u = u₁ + u₂) might not be a minimizer of J_{β,Λ}, even if the Hausdorff distance between Ω₁ and Ω₂ is strictly positive. In fact, it might be convenient to enlarge the domains Ω₁ and Ω₂ in order to obtain a non-empty interface ∂Ω₁ ∩ ∂Ω₂, which will allow competitors that do not vanish identically on the entire free boundaries ∂Ω₁ and ∂Ω₂. This is illustrated by the following one-dimensional example. Example 1.1 (Formation of an interface in 1D). Let ε > 0 and β > 0 be fixed. We consider the interval D = [−1 − ε, 1 + ε] and the boundary data g₁, g₂ : ∂D → R given by g₁(−1 − ε) = 1, g₁(1 + ε) = 0 and g₂(−1 − ε) = 0, g₂(1 + ε) = 1. The minimizers of the one-phase functional (with λ = 1) with boundary conditions g₁ and g₂ are respectively the functions u₁(x) = (−x − ε)₊ and u₂(x) = (x − ε)₊. If we consider the sets Ω₁ = (−1 − ε, −ε) and Ω₂ = (ε, 1 + ε), then we have that J_{β,1}(u₁ + u₂, Ω₁, Ω₂) = J₁(u₁) + J₁(u₂) = 4, the interface ∂Ω₁ ∩ ∂Ω₂ being empty. On the other hand, by taking Ω₁ = (−1 − ε, 0), Ω₂ = (0, 1 + ε), prescribing the value u(0) = ℓ for a parameter ℓ ∈ (0, 1), and extending u linearly on the intervals [−1 − ε, 0] and [0, 1 + ε], we obtain that J_{β,1}(u, Ω₁, Ω₂) = 2(1 − ℓ)²/(1 + ε) + βℓ² + 2(1 + ε). Setting the parameter ℓ to be the optimal one, ℓ = 2/(2 + β + εβ), we get that J_{β,1}(u, Ω₁, Ω₂) = 2β/(2 + β + εβ) + 2(1 + ε). When ε = 0, we get J_{β,1}(u, Ω₁, Ω₂) = 2 + 2β/(2 + β) = (4 + 4β)/(2 + β) < 4. In conclusion, if we fix β > 0, we can find ε₀ > 0 such that J_{β,1}(u, Ω₁, Ω₂) < 4 for all ε ∈ (0, ε₀), which means that, for those choices of β and ε, the combination of the two one-phase solutions is not optimal. 1.3. Setting of the problem and main theorem. We will define the variational problem for the functional J_{β,Λ} in the class of sets of finite perimeter and Sobolev functions. Then, we will prove an existence theorem in this class and we will show that the minimizers are regular. We fix the boundary data for Ω₁, Ω₂ and g. Precisely, let E₁ and E₂ be two smooth, bounded and disjoint sets of positive distance in R^d, and let g be a given nonnegative function. We define the admissible set of functions V in terms of the datum g. Then, for fixed u ∈ V, we define the admissible set A(u) as the set of all couples (Ω₁, Ω₂) of Lebesgue measurable sets such that: • Ω₁ ∩ Ω₂ = ∅, E₁ ⊂ Ω₁ and E₂ ⊂ Ω₂ Lebesgue almost-everywhere; • Ω₁ and Ω₂ have finite perimeter (as subsets of R^d); • {u > 0} ⊂ Ω₁ ∪ Ω₂ Lebesgue almost-everywhere. For every β > 0 and Λ > 0, we consider the functional J_{β,Λ}, defined for functions u ∈ V and couples of sets (Ω₁, Ω₂) ∈ A(u), as J_{β,Λ}(u, Ω₁, Ω₂) = ∫ |∇u|² dx + β ∫_{∂*Ω₁ ∩ ∂*Ω₂} u² dH^{d−1} + Λ |Ω₁ ∪ Ω₂|, where ∂*Ω_j is the reduced boundary of Ω_j; we recall that, since u is a bounded Sobolev function, the second integral is well-defined (see Section 2). Theorem 1.2. Given sets E₁ and E₂, and a function g as above, there are a function u ∈ V and sets (Ω₁, Ω₂) ∈ A(u) solving the variational problem min { J_{β,Λ}(u, Ω₁, Ω₂) : u ∈ V, (Ω₁, Ω₂) ∈ A(u) }. (1.3) Moreover, in dimension two the minimizers enjoy the regularity described below. Sketch of the proof and plan of the paper. In order to prove Theorem 1.2, we first introduce a family of approximating problems in Section 4. Then, passing to the limit, we obtain a function u ∈ V and a couple of disjoint sets Ω₁ and Ω₂. We cannot obtain immediately that (u, Ω₁, Ω₂) is a solution to (1.3), since there is no uniform bound on the perimeter of the approximating sets, so we do not a priori have that Ω₁ and Ω₂ are sets of locally finite perimeter in D. Instead, we are able to prove that u satisfies an almost-minimality condition involving the one-phase Alt-Caffarelli functional, which allows us to prove that the set {u > 0} is regular (Theorem 9.1).
This solves the problem only in part, because at this stage we only have information on the set {u > 0}, and not on the two phases separately. We then show that the sets Ω₁ and Ω₂ are almost-minimizers of the perimeter in {u > 0} ∩ D, which implies that (in low dimension) the free interface ∂Ω₁ ∪ ∂Ω₂ is smooth in D ∩ {u > 0}. Thus, in order to prove that Ω₁ and Ω₂ have finite perimeter, it is sufficient to study the behavior of the interface ∂Ω₁ ∪ ∂Ω₂ close to the free boundary ∂{u > 0} (see Theorem 10.1). We show that Ω₁ and Ω₂ are minimizers in {u > 0} of a weighted perimeter functional, the weight being precisely the function u², which is C^{0,α} and positive in {u > 0}, but which vanishes as one approaches the free boundary ∂{u > 0}. In order to deal with this degenerate weight, we perform a 2D conformal change of coordinates, which flattens ∂{u > 0} to a line; then we rotate Ω₁ around this line in order to obtain an almost-minimizer of the perimeter in R⁴. This allows us to conclude that ∂Ω₁ ∪ ∂Ω₂ is the union of C¹ curves that meet ∂{u > 0} orthogonally in a (locally) finite number of points. This concludes the proof of Theorem 1.2. Preliminaries on sets of finite perimeter. Given a measurable set Ω ⊂ R^d, its perimeter is defined as Per(Ω) := sup { ∫_Ω div ξ dx : ξ ∈ C¹_c(R^d; R^d), |ξ| ≤ 1 }, and we say that Ω is of finite perimeter (Caccioppoli set) if Per(Ω) < +∞. Given α ∈ [0, 1], we say that the set Ω has Lebesgue density α at a point x₀ if |Ω ∩ B_r(x₀)| / |B_r(x₀)| → α as r → 0. We define the set Ω^(α) as the set of points at which Ω has Lebesgue density α. Given a set of finite perimeter Ω ⊂ R^d, we will denote by ∂*Ω its reduced boundary and by ν_Ω the generalized exterior normal. We recall that Per(Ω) = H^{d−1}(∂*Ω), and that for any vector field ξ ∈ C¹_c(R^d; R^d) one has ∫_Ω div ξ dx = ∫_{∂*Ω} ξ · ν_Ω dH^{d−1}, where H^{d−1} denotes the (d − 1)-dimensional Hausdorff measure in R^d. Moreover, we recall that at every point of the reduced boundary, Ω has Lebesgue density 1/2, that is, ∂*Ω ⊂ Ω^(1/2). We also recall the following well-known result by Federer, which can be stated as in the lemma below. Lemma 2.1 (Federer). If Ω is a set of finite perimeter in R^d, then, up to a set of zero H^{d−1} measure, R^d is the disjoint union of Ω^(0), Ω^(1) and ∂*Ω, and moreover ∂*Ω = Ω^(1/2) up to a set of zero H^{d−1} measure. Finally, we conclude this section with the following proposition. Proposition 2.2. Let A and B be two disjoint sets of finite perimeter in R^d. Then, the set A ∪ B is a set of finite perimeter and, up to a set of zero H^{d−1}-measure, its reduced boundary decomposes as in (2.1) and (2.2); in particular, Per(A ∪ B) ≤ Per(A) + Per(B). Proof. Up to a set of zero H^{d−1} measure, we have the decomposition, which proves (2.1). Finally, (2.2) follows since the sets involved are disjoint. As a consequence of Lemma 2.1, one can obtain the following decomposition. Proposition 2.3. Let A and B be two sets of finite perimeter in R^d. Then, also A \ B and B \ A have finite perimeter, and we have the corresponding decompositions (up to sets of zero H^{d−1} measure). Sobolev functions and capacity. Let H¹(R^d) := { u ∈ L²(R^d) : ∇u ∈ L²(R^d; R^d) }, where ∇u is the distributional gradient of u. Given a measurable set Ω ⊂ R^d and a Sobolev function u, one can define an associated Sobolev space H̃¹₀(Ω) of functions vanishing outside Ω (in a sense made precise through the notion of capacity below). If Ω is an open set, we can also define the space H¹₀(Ω) as the closure of C^∞_c(Ω) with respect to the Sobolev norm. It is well-known that both H¹₀(Ω) and H̃¹₀(Ω) are closed (with respect to both the strong and the weak H¹-convergence) linear subspaces of H¹(R^d) and that, for any open set Ω, H¹₀(Ω) ⊂ H̃¹₀(Ω), while the converse inclusion is in general false. Given any set A ⊂ R^d and any ball B_{2R}(x₀), we define the (relative) capacity cap(A; B_{2R}(x₀)). We say that a set A has zero capacity if cap(A ∩ B_{2R}(x₀); B_{2R}(x₀)) = 0 for every ball B_{2R}(x₀) ⊂ R^d. We recall the following properties of the capacity. • If a set A ⊂ R^d has zero capacity, then |A| = 0 and H^{d−1}(A) = 0. In particular, to every u ∈ H¹(R^d), we can associate a representative ū defined pointwise outside a set N_u of zero capacity, via the limits of the averages of u on small balls (and set to zero on N_u). Suppose now that u_n converges to u strongly in H¹(R^d); let ū_n and ū be the representatives defined above and let N_{u_n} and N_u be the corresponding sets of zero capacity.
Then, there are a subsequence u_{n_k} and a set of zero capacity N such that the pointwise convergence (2.3) holds outside N. For simplicity, we will identify any function u ∈ H^1(R^d) with its representative ū, and if a sequence u_{n_k} ∈ H^1(R^d) satisfies (2.3), then we will say that it converges quasi-everywhere to u ∈ H^1(R^d).

Traces of Sobolev functions on the boundary of sets of finite perimeter. Let Ω be a set of finite perimeter in R^d and let u ∈ H^1(R^d). Let ū : R^d → R be the representative of u defined for every point x_0 outside a set of zero capacity N_u (and defined as zero on N_u). We also notice that this representative provides a trace of u defined H^{d−1}-almost everywhere on ∂*Ω. From now on, we will write u instead of ū. The next two propositions allow us to write the functional J_{β,Λ} in an equivalent way.

Proposition 2.4. Let Ω ⊂ R^d be a bounded quasi-open set of finite perimeter and let ∂*Ω be its reduced boundary. Let u ∈ H^1_0(Ω) and let ū : R^d → R be a representative of u defined up to a set of zero capacity. Then ū vanishes H^{d−1}-almost everywhere on Ω^(1/2) and on ∂*Ω; in particular, the trace of u on ∂*Ω vanishes.

Proof. Without loss of generality, we can suppose that 0 ≤ u ≤ 1. For every n ≥ 1, we consider the penalized functional F_n. The functional F_n admits a unique minimizer in H^1_0(Ω), which we denote by u_n. By construction, testing the optimality of u_n with v = u, we get a uniform energy bound. In particular, the sequence u_n converges strongly in L^2(Ω) and weakly in H^1_0(Ω) to the function u. Moreover, u_n solves the PDE (2.4). We notice that, since u_n minimizes F_n and since 0 ≤ u ≤ 1, then also 0 ≤ u_n ≤ 1. Thus, the right-hand side n(u − u_n) of (2.4) is bounded. Let now x_0 ∈ Ω^(1/2), that is, a point at which Ω has Lebesgue density 1/2. By [7, Proposition 4.6] we have, for r > 0 small enough, a decay estimate with some constant C_n depending on u_n and some dimensional constant β > 0. In particular, we get that the convergence of u_n to u is strong in H^1_0(Ω). It is well-known that there is a subsequence of u_n converging pointwise quasi-everywhere to u. In particular, the same subsequence converges pointwise H^{d−1}-almost everywhere on ∂*Ω. Thus, (the representative of) u vanishes H^{d−1}-almost everywhere on both Ω^(1/2) and ∂*Ω.

Proposition. Let Ω_1 and Ω_2 be two sets of finite perimeter in R^d with |Ω_1 ∩ Ω_2| = 0, and let u be as above. Then the identity (2.5) relating the boundary integrals of u holds.

Proof. We first notice that the reduced boundaries ∂*Ω_1 and ∂*Ω_2 can be decomposed as in the previous propositions. Thus, in order to prove (2.5), it is sufficient to prove (2.6). Given a point x_0 in the relevant part of the boundary, since |Ω_1 ∩ Ω_2| = 0, we get that necessarily the hypotheses of Proposition 2.4 are satisfied at x_0. Thus, by Proposition 2.4, we get that u(x_0) = 0. This proves (2.6) and (2.5).

A semicontinuity lemma. In the proof of the main theorem we will repeatedly use the following lemma, which is a restatement of a lemma from [9].

Lemma. Let u_n ∈ H^1(A) converge to u_∞ weakly in H^1(A), strongly in L^2(A) and pointwise almost-everywhere. Let Ω_n ⊂ A be a sequence of sets of locally finite perimeter in A converging almost-everywhere (in A) to the set of locally finite perimeter Ω_∞ ⊂ A. Then the lower-semicontinuity inequality (2.7) holds.

Proof. The proof is precisely the one from [9, Lemma 2.4]. We report it here for the sake of completeness. The key observation is that, given u ∈ H^1(A) and a set of locally finite perimeter Ω ⊂ A, we have the Gauss-Green identity
∫_{A ∩ ∂*Ω} u² ξ · ν_Ω dH^{d−1} = ∫_Ω ( 2u ∇u · ξ + u² div ξ ) dx   for every ξ ∈ C^1_c(A; R^d).
We now fix a vector field ξ ∈ C^1_c(A; R^d), |ξ| ≤ 1, and we compute
liminf_{n→∞} ∫_{A ∩ ∂*Ω_n} u_n² ξ · ν_{Ω_n} dH^{d−1} = liminf_{n→∞} ∫_A [ 2 u_n ξ · (1_{Ω_n} ∇u_n) + u_n 1_{Ω_n} (u_n div ξ) ] dx.
Now, since 1_{Ω_n} ∇u_n converges weakly in L² to 1_{Ω_∞} ∇u_∞, we get
liminf_{n→∞} ∫_{A ∩ ∂*Ω_n} u_n² ξ · ν_{Ω_n} dH^{d−1} = ∫_{A ∩ ∂*Ω_∞} u_∞² ξ · ν_{Ω_∞} dH^{d−1}.
Taking the supremum over ξ, we get (2.7).

3. Almost-minimality and Hölder estimates

In this section, we prove two general technical results on the continuity of subharmonic functions which are almost-minimizers of the Dirichlet energy in a suitable sense. We will use these estimates in Sections 4 and 9.

Lemma 3.1. Let u be a non-negative subharmonic almost-minimizer of the Dirichlet energy in D and let x_0 ∈ D. Then u satisfies a growth estimate at x_0 of the form used below.

Proof. Without loss of generality, we can suppose that x_0 = 0 and D = B_{r_0}.
Then, for every r ∈ (0, r_0), we have a bound in terms of the mass ∆u(B_r); thus, we only need to estimate ∆u(B_r). In order to do so, we test the optimality of u with u + tϕ, where ϕ(x) := (1/r)(2r − |x|)_+. Sending t to zero, we obtain, for every r ≤ r_0/2, an estimate on ∆u(B_r). Then, for every r ≤ r_0/4, we can estimate the oscillation of u. Now, using the non-negativity and the subharmonicity of u, we get the claim.

Lemma 3.2. Let u be as above, and suppose that there are constants α ∈ [0, 1] and K > 0 such that an almost-minimality inequality holds for every x_0 ∈ D_δ, every r ∈ (0, δ) and every admissible competitor ϕ. Then u satisfies a Hölder estimate for every x, y ∈ D_δ such that |x − y| < (δ/16)².

Proof. We apply the previous lemma to x_0 = y and r = |x − y|^γ. Then, since |x − y| < 1 and r = |x − y|^γ with γ ∈ (0, 1), we obtain two competing exponents. Choosing γ = 2/(3 + α), the two exponents match, and so we get the claim. Finally, we notice that we should have the inequality r + |x − y| < δ/8, which is satisfied for instance when |x − y|^γ < δ/16.

4. Non-degenerate approximating problems

In this section, we define a sequence of non-degenerate problems, approximating (1.3), in which the competitors u are a priori bounded from below by a fixed positive constant. We consider the family of approximating problems
(4.1)   min { J_ε(u, Ω_1, Ω_2) : u ∈ V, (Ω_1, Ω_2) ∈ A(u) },
where the functional J_ε is a non-degenerate modification of J_{β,Λ} in which the competitors are kept above a positive level depending on ε.

Proposition 4.1. For every ε > 0:
(i) there is a triple (u_ε, Ω¹_ε, Ω²_ε) which is a solution to (4.1);
(ii) the function u_ε is Hölder continuous in D and, for every δ > 0, there is a constant C_δ > 0, depending on δ, d, β, Λ and ‖g‖_{L^∞}, bounding the Hölder seminorm of u_ε on the set D_δ defined in (3.1);
(iii) there is a constant ρ > 0, depending only on d, Λ and ‖g‖_{L^∞}, such that Ω¹_ε and Ω²_ε are contained in B_ρ.

Proof. Let ε > 0 be fixed. We divide the proof into several steps. Testing the optimality of (u_ε, Ω¹_ε, Ω²_ε) with (v_+, Ω¹_ε, Ω²_ε), and using the fact that v_+ ≤ u_ε in D, we obtain an inequality between the corresponding energies. In particular, if ϕ is a nonnegative function compactly supported in D, then we can apply the above inequality to v := u − tϕ for some t > 0. Then, by sending t to zero, we get that −∫_D ∇ϕ · ∇u_ε dx ≥ 0, which means that the distributional Laplacian ∆u_ε is a positive Radon measure in D.

Boundedness of Ω¹_ε and Ω²_ε. We first notice that u_ε is a subsolution of the Alt-Caffarelli functional. In particular, this implies (see for instance [15]) that the set {u_ε > 0} lies in a sufficiently large ball B_ρ. Now, since outside {u_ε > 0} the functional J_ε only accounts for the perimeter of Ω¹_ε and Ω²_ε, we have that these sets should be contained in the convex envelope of {u_ε > 0}, which in particular gives (4.3).

5. The limit of the non-degenerate solutions

In this section we define the function u (Section 5.1) and the sets (Ω_1, Ω_2) (Section 5.2), which we will prove to be solutions to the initial problem (1.3). Throughout this section, for any ε > 0, we fix a solution (u_ε, Ω¹_ε, Ω²_ε) of the approximating problem (4.1) for J_ε.

5.1. The limit function. It is immediate to check that there are a function u and a sequence ε_n → 0 such that
• for every fixed δ > 0, u_{ε_n} → u uniformly in D_δ as n → ∞;
• u_{ε_n} → u strongly in L²(R^d) and pointwise almost-everywhere in R^d;
• ∇u_{ε_n} → ∇u weakly in L²(R^d).
By construction, we have u − g ∈ H^1_0(D), while Proposition 4.1 gives that u is Hölder continuous and ∆u ≥ 0 in D.

5.2. The limit sets. We next construct the sets Ω_1 and Ω_2. Choose a ball B_R(x_0) compactly contained in {u > 0}. Then, there are t > 0 and δ > 0 such that u ≥ t on this ball and B_R(x_0) ⊂ D_δ, where D_δ is given by (3.1). By the uniform convergence of u_{ε_n} to u on D_δ, we can find n_0 ≥ 1 such that a corresponding lower bound on u_{ε_n} holds on B_R(x_0) for every n ≥ n_0. Using this inequality and the optimality of (u_{ε_n}, Ω¹_{ε_n}, Ω²_{ε_n}), we can estimate the perimeters and deduce that Ω¹_{ε_n} ∩ B_R(x_0) and Ω²_{ε_n} ∩ B_R(x_0) have uniformly bounded perimeter.
In particular, up to a subsequence, there are sets Ω¹_{R,x_0} and Ω²_{R,x_0} of finite perimeter such that, as n → ∞, the characteristic functions of Ω¹_{ε_n} ∩ B_R(x_0) and Ω²_{ε_n} ∩ B_R(x_0) converge to those of Ω¹_{R,x_0} ∩ B_R(x_0) and Ω²_{R,x_0} ∩ B_R(x_0), pointwise almost-everywhere and strongly in L¹(R^d). Thus, by a diagonal sequence argument, we can define the sets Ω_1 and Ω_2 as the union of Ω¹_{R,x_0} and Ω²_{R,x_0} over all balls of radius R ∈ Q and center with rational coordinates x_0 ∈ Q^d. By construction, Ω_1 and Ω_2 have locally finite perimeter in D ∩ {u > 0} and contain E_1 and E_2 respectively, where we recall that D := R^d \ (E_1 ∪ E_2). Moreover, we still have the pointwise convergence of the corresponding characteristic functions for i = 1, 2; in particular, the sets Ω_1 and Ω_2 are disjoint, |Ω_1 ∩ Ω_2| = 0.

Remark 5.1. Notice that we do not have a priori that Ω_1 and Ω_2 have finite perimeter in R^d, so at this stage they might not be in the admissible class A(u) defined in Section 1.3.

6. Almost-minimality and Lipschitz estimates of u

In this section we will show that u is an almost-minimizer (in some suitable sense) of the classical one-phase functional of Alt and Caffarelli. From this we deduce the Lipschitz growth of u on the boundary, which we will use in Section 9.1 in order to deduce the convergence of the blow-up sequences of u.

Lemma 6.1. The function u satisfies a one-sided almost-minimality inequality with respect to competitors v agreeing with u outside a ball B_r(x_0).

Proof. We can suppose that v ≥ 0. Then, testing the minimality of (u_{ε_n}, Ω¹_{ε_n}, Ω²_{ε_n}) with the function v (which we can do since v − g ∈ H^1_0(D)), the set Ω¹_{ε_n} ∪ B_r(x_0) and the corresponding modification of Ω²_{ε_n}, and passing to the limit as n → ∞, we get the claim.

Lemma 6.2. Let u ∈ H^1(R^d) be the function defined in Section 5.1. For every δ ∈ (0, 1) and every x_0 ∈ D_δ ∩ {u = 0}, we have a Lipschitz growth bound, where C is a constant depending only on d, β, Λ and ‖g‖_{L^∞}.

7. Non-degeneracy of u

In this section we show that u is a subsolution of the Alt-Caffarelli functional. From this information, we can immediately deduce that {u > 0} has finite perimeter in D_δ, for every δ > 0. Moreover, the suboptimality of u implies that it is non-degenerate, which assures that the blow-up limits of u are not identically zero. Passing to the limit as n → ∞, we get the claim. As an immediate consequence, we have

Corollary 7.2. Let u ∈ H^1(R^d) be the function defined in Section 5.1. Then:
(i) the set {u > 0} has locally finite perimeter in D;
(ii) there is a constant η > 0 such that u is non-degenerate at every free boundary point at all sufficiently small scales.

Proof. See [1] or [15].

8. Density estimate and its consequences

In this section, we will show that the free boundary ∂{u > 0} does not touch ∂E_1 and ∂E_2. This is a crucial step in proving that Ω_1 and Ω_2 have finite perimeter in R^d. The main result is the following.

Proposition 8.1 (Non-collapsing). Let u ∈ H^1(R^d) be the function from Section 5.1. Then, there is a positive constant t > 0 such that u ≥ t in a neighborhood of E_1 ∪ E_2.

The proof of Proposition 8.1 is based on the following lemma.

Lemma 8.2 (Density estimate). Let u ∈ H^1(R^d) be the function from Section 5.1. There are constants c > 0 and R_0 > 0, depending only on d, β, Λ and ‖g‖_{L^∞}, such that a density bound holds at every free boundary point at all scales R ≤ R_0.

Proof. We notice that, by Lemma 6.2, we have a growth bound on u. Now, we consider the competitor h, the harmonic extension of u in B_{R/256}(x_0). Setting for simplicity r = R/256 and x_0 = 0, by Lemma 6.1 we have an energy comparison inequality. The rest of the proof follows the analogous lemma from [1].
Using the fact that h is harmonic and strictly positive in B_r, we obtain a Harnack-type bound. By the Poincaré inequality, there is a dimensional constant C_d giving a further estimate. On the other hand, the combination of Corollary 7.2 and the subharmonicity of u in B_R gives a bound for any κ ∈ (0, 1); moreover, (8.1) implies a complementary estimate. Thus, choosing κ and R > 0 small enough, we get the claim.

Proof of Proposition 8.1. Suppose that x_0 ∈ D is a point of {u = 0} with dist(x_0, E_1) ≤ δ_0, where the constant δ_0 > 0 will be chosen later. Let y_0 be the projection of x_0 on E_1 and let R := 2|x_0 − y_0|. We consider the competitor h, the harmonic extension of u in B_R(y_0) ∩ D. We notice that u solves a minimization problem in B_R(y_0) ∩ D. Thus, by [15, Lemma 3.7] and the fact that u = g on E_1, we obtain the estimate (8.2). On the other hand, using Lemma 6.1, we get the inequality (8.3). Now, since u ≡ 1 on E_1 and u ≤ 1 in R^d, we get a lower bound in which C_1 > 0 is a constant depending only on E_1 (notice that, since E_1 is regular, for δ > 0 small we can choose C_1 ≃ 2). Thus, combining (8.2) and (8.3), we get an inequality which, by the density estimate of Lemma 8.2, gives a contradiction when R is small enough.

9. Regularity of the free boundary ∂{u > 0}

In this section we prove the following.

Theorem 9.1. Let u ∈ H^1(R^d) be the function defined in Section 5.1 and let d = 2. Then the free boundary ∂{u > 0} ∩ D is C^{1,α}-regular.

The proof is based on the fact that u satisfies an almost-minimality condition in D. Precisely, by Lemma 3.1 and Lemma 6.1, we have that, given a compact set K ⊂ D, there are constants C > 0 and r_0 > 0 for which the almost-minimality inequality (9.1) holds for every r < r_0, x_0 ∈ ∂{u > 0} ∩ D and v ∈ H^1(B_r(x_0)) such that u − v ∈ H^1_0(B_r(x_0)). In Section 9.1, we use the almost-minimality to show that when d ≤ 4 every blow-up of u is a half-plane solution (that is, a solution of the form (9.2)) of the classical Alt-Caffarelli functional. Then, in Section 9.2, we show that in dimension d = 2 we can use the epiperimetric inequality from [14] to conclude the proof of Theorem 9.1.

Remark 9.2. We notice that the function u might not be smooth in the open set {u > 0}. In fact, u is not even C^1, as the gradient is not continuous across ∂Ω_1 ∩ ∂Ω_2. We stress that we can still use the 2D epiperimetric inequality from [14] together with the almost-minimality condition (9.1) to prove the C^{1,α} regularity of the free boundary, but we cannot improve this regularity to C^∞.

Remark 9.3. We expect Theorem 9.1 to hold in every dimension 2 ≤ d ≤ 4, as there are several epsilon-regularity results for functions u satisfying almost-minimality conditions similar to (9.1) (see for instance [5, 4, 8, 13]), but we stress that none of these results directly applies to (9.1). In fact, the almost-minimality of u only holds around points at the boundary ∂{u > 0} (and in our case u is not even C^{1,α} in {u > 0}), which essentially requires [5, 4, 8, 13] to be revisited in order to be used in our context. We choose the approach from [13], which limits Theorem 9.1 to the case d = 2, but which, on the other hand, is based on the epiperimetric inequality from [14], which works without any modifications in our case.

9.1. Blow-up sequences and blow-up limits. Let x_0 ∈ ∂{u > 0} ∩ D. We define u_r(x) := (1/r) u(x_0 + rx). Let r_n be an infinitesimal sequence. Then, for n large enough, the sequence u_{r_n} is uniformly bounded in L^∞ in every ball B_{2R} ⊂ R^d. Moreover, by Lemma 3.2, u_{r_n} is uniformly bounded also in C^{0,1/3}(B_R). Thus, up to a (non-relabelled) subsequence, u_{r_n} converges uniformly in B_R.
By a diagonal sequence argument, there are a continuous function u_0 and a subsequence u_{r_n} such that u_{r_n} converges to u_0 uniformly on every ball B_R ⊂ R^d. We will say that u_0 is a blow-up limit of u at x_0.

Proposition 9.4. Let 2 ≤ d ≤ 4 and let u be the function from Section 5.1. Then every blow-up limit u_0 of u is a half-plane solution, that is,
(9.2)   u_0(x) = √Λ (x · ν)_+   for some unit vector ν ∈ R^d.

Proof. Let u_{r_n} be a blow-up sequence converging to u_0. We notice that, by Corollary 7.2, u_0 is non-trivial. Moreover, using the almost-minimality condition (9.1), we get that u_0 is a local minimizer of the Alt-Caffarelli functional (see for instance [1] or [15]); precisely, the minimality inequality holds for every B_R ⊂ R^d and every v ∈ H^1(B_R) such that u_0 − v ∈ H^1_0(B_R). Moreover, the almost-minimality condition (9.1) implies that every blow-up limit u_0 is 1-homogeneous (see [13]). When d ≤ 4, using [3] and [10], this gives that every blow-up limit u_0 is of the form (9.2).

9.2. Epiperimetric inequality and regularity of ∂{u > 0}. For any ϕ ∈ H^1(B_1) we consider the Weiss boundary-adjusted energy W(ϕ) introduced in [16]. Let K be a compact set contained in D, and let C > 0 and r_0 be the constants from the almost-minimality condition (9.1). Let x_0 ∈ ∂{u > 0} ∩ K be fixed and let u_r(x) := (1/r) u(x_0 + rx). Then, the derivative of W(u_r) in r can be expressed in terms of z_r : B_1 → R, z_r(x) := |x| u_r(x/|x|), the 1-homogeneous extension of u_r in B_1. Now, by the 2D epiperimetric inequality of [14], there is a constant ε ∈ (0, 1) such that, for every r > 0, there exists a function h_r : B_1 → R with h_r = u_r = z_r on ∂B_1 and whose Weiss energy improves that of z_r by a factor (1 − ε).

Let (u_n, Ω¹_n, Ω²_n) := (u_{ε_n}, Ω¹_{ε_n}, Ω²_{ε_n}) be the sequence of minimizers from Section 5.1 and Section 5.2 converging to (u, Ω_1, Ω_2). Then, by construction, Ω¹_n ∩ Ω²_n = ∅ and (Ω¹_n ∩ A) ∪ (Ω²_n ∩ A) = A, and the same holds for the limit sets Ω_1 and Ω_2. Let now Ω̃_1 be such that Ω̃_1 ∆ Ω_1 ⊂⊂ A. Notice that we can find a family of balls B_{ρ_j}(x_j), j = 1, ..., N, satisfying suitable smallness conditions for every j = 1, ..., N; for every j = 1, ..., N we then modify the sets accordingly and consider the competitor sets (Ω̃¹_n, Ω̃²_n). Testing the optimality of (u_n, Ω¹_n, Ω²_n) with (u_n, Ω̃¹_n, Ω̃²_n), we obtain a comparison inequality. Since Ω̃_n ∆ Ω_1 ⊂⊂ B_r(x_0) ∩ {u > 0}, by Proposition 10.2 the u²-weighted interface measure of the minimizing sets in B_r(x_0) ∩ A_n is bounded by the corresponding quantity for the competitor Ω̃_1 up to an error 2δ_n² C. Passing to the limit as n → ∞, we get (10.2).

10.3. Regularity of the free interface up to the boundary ∂{u > 0}. In this section we will need the C^{1,α} regularity of the free boundary ∂{u > 0} in D, so in order to have Theorem 9.1 we assume that d = 2. Let x_0 ∈ D ∩ ∂{u > 0} and let B_r(x_0) be a (small) ball contained in D. Without loss of generality, we suppose that x_0 = (0, 0). We define the function h : B_r → R as the solution of an auxiliary boundary value problem. Then h is C^{1,α}-regular in B_r ∩ {u > 0} up to the boundary B_r ∩ ∂{u > 0} and, moreover, there is a C^{0,α}-regular, strictly positive function a : B_r ∩ {u > 0} → R such that
a(x) = h(x)/u(x) for x ∈ B_r ∩ {u > 0};   a(x) = |∇h|(x)/√Λ for x ∈ B_r ∩ ∂{u > 0}.
Moreover, choosing r > 0 small enough, the set B_r ∩ {u > 0} is simply connected, so we can find a function w : B_r ∩ {u > 0} → R such that the image of Φ := (w, h) is a relatively open subset of the upper half-plane {(w, h) ∈ R² : h ≥ 0}. We notice that, for r small enough, the function Φ is invertible. Then, we define the associated weight ϕ, which is C^{0,α} and bounded from below by a positive constant. We will show that in the new coordinates the set Ω := Φ(Ω_1) locally minimizes the functional
F_2(Ω) := ∫_{{h ≥ 0} ∩ ∂*Ω} h² ϕ(w, h) dH¹(w, h).
In fact, since ∂*Ω_1 is a C^1 curve, it is sufficient to check that, for any curve γ : [0, 1] → B_r ∩ {u > 0}, γ(t) = (x(t), y(t)), the weighted length is preserved under the change of coordinates, which concludes the proof. In order to conclude the proof of the C^1 regularity of ∂Ω_1, it is sufficient to prove that, at the point (w_0, h_0) = (0, 0), the set Ω := Φ(Ω_1) has a unique blow-up limit, given by a half-plane whose boundary is the vertical half-line {w = 0, h ≥ 0}, meeting {h = 0} orthogonally. In order to do so, we consider the set
R := { (w, X) ∈ R × R³ : (w, |X|) ∈ Ω }.
Then R is a local minimizer of the functional
F_4(R) := ∫_{∂*R} ϕ(w, |X|) dH³(w, X)
among all sets with the same symmetries as R, that is, all sets of the form R̃ := { (w, X) ∈ R × R³ : (w, |X|) ∈ Ω̃ } for some Ω̃ ⊂ {(w, h) ∈ R² : h ≥ 0} (a heuristic accounting of why this rotation absorbs the weight h² is sketched after this section). Now, by the monotonicity formula for local minimizers of the area (see for instance [11]), any blow-up limit R_0 of R is a cone in R^4, which is area-minimizing with respect to perturbations that preserve the symmetries of R. But then, since the dimension of ∂R_0 is less than 7, we have that ∂R_0 is necessarily a plane (with the same symmetries as R). Thus, ∂R_0 is orthogonal to the line {(0, 0, 0)} × R, which concludes the proof of the uniqueness of the blow-up and implies points (ii) and (iii) of Theorem 10.1.

11. Proof of Theorem 1.2

In order to prove the existence of a solution to (1.3), we observe that, as a consequence of an almost-minimality condition involving the one-phase Alt-Caffarelli functional and of Theorem 10.1, the sets Ω_1 and Ω_2 constructed in Section 5.2 have locally finite perimeter in D. It remains to prove that
(i) Ω_1 and Ω_2 are sets of finite perimeter in R^d;
(ii) (u, Ω_1, Ω_2) is a solution to (1.3).
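The following is a heuristic, hedged accounting (not taken from the source) of why passing from Ω ⊂ {h ≥ 0} ⊂ R² to the rotated set R ⊂ R⁴ in Section 10.3 absorbs the degenerate weight h². It assumes only that ∂*R is the surface of revolution of ∂*Ω around the w-axis and that the sphere {|X| = h} ⊂ R³ has area 4πh².

    \[
      F_4(R) = \int_{\partial^* R} \varphi(w,|X|)\, d\mathcal{H}^3
             = \int_{\{h \ge 0\}\cap\,\partial^*\Omega} \varphi(w,h)\,
                 \mathcal{H}^2\big(\{|X| = h\}\big)\, d\mathcal{H}^1(w,h)
             = 4\pi \int_{\{h \ge 0\}\cap\,\partial^*\Omega} h^2 \varphi(w,h)\, d\mathcal{H}^1
             = 4\pi\, F_2(\Omega).
    \]

Under these assumptions, symmetric local minimizers of the unweighted-looking functional F_4 in R^4 correspond exactly to local minimizers of the h²-weighted functional F_2 in the half-plane, so the classical monotonicity formula and cone analysis for (almost-)minimal surfaces in dimension 4 < 8 become available.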
Association between Mitochondrial Bioenergetics and Radiation-Related Fatigue: A Possible Mechanism and Novel Target

Background: Fatigue is one of the cancer symptoms most often reported by patients receiving radiation therapy (XRT). Understanding the mechanism behind the development of cancer-related fatigue will enable the design of novel interventions for radiation-induced fatigue. This research proposal is designed to determine the association between mitochondrial bioenergetics and fatigue in prostate cancer patients receiving XRT.

Introduction

Prostate cancer is a highly prevalent carcinoma and the second leading cause of cancer mortality in the United States [1]. The American Cancer Society estimates that 2015 will see 220,800 new diagnoses of this disease and 27,540 deaths [1]. Localized external beam radiation therapy using an intensity-modulated radiation technique is a standard treatment option for nonmetastatic prostate cancer [2]. Although localized radiation therapy (XRT) has increased survival rates for men with this disease, fatigue is highly prevalent during and at the completion of treatment [3] and causes long-lasting distress even in disease-free stages [4][5][6]. Of all disease- and treatment-related symptoms in cancer, fatigue is one of the most burdensome, with the greatest adverse effect on quality of life, but it is arguably the least understood [7,8]. Fatigue experienced by prostate cancer patients has been noted to increase significantly in severity during the course of XRT, remaining elevated during survivorship, after treatment is completed [9,10]. The prevalence and severity of fatigue differ slightly among cancer patients receiving different treatments. Fatigue severity also differs between prostate cancer patients with XRT and without XRT [8]. While a limited number of interventions have been suggested to address fatigue, the only one with an adequate evidence base to date is exercise [11]. Thus, healthcare professionals attempting to assist patients in addressing this critical source of distress have few strategies that can be helpful. The use of modafinil, management of activity, and psychoeducational interventions are the only tools currently recommended [12]. However, early non-randomized trials of nutraceutical supplements, such as levocarnitine or vitamins, offer an intriguing possible avenue for nursing intervention. Nevertheless, there remains a critical need to develop a better understanding of the biologic mechanisms of fatigue before we can move forward in testing interventions. Localized radiation-induced damage is associated with many adverse effects, including fatigue [13]. Although many mechanisms have been proposed for cancer-related fatigue [14][15][16][17][18], the cause remains elusive. Additionally, early biomarkers prognostic for radiation-induced fatigue have not been identified. The physiological mechanisms behind fatigue and its increased severity during localized XRT remain unknown. Deficiency of adenosine triphosphate (ATP) has been proposed as the basis of cancer-related fatigue [16,19], but the mechanism has not been explored. More than 90% of ATP is generated by mitochondria via oxidative phosphorylation (OXPHOS) [20]. The mitochondrial respiratory chain is essential for maintaining effective ATP levels [21].
Mitochondrial dysfunction is involved in the clinical conditions, including fatigue, that are associated with deficient energy metabolism of oxidative phosphorylation [22]. While mitochondrial dysfunction has been implicated in a variety of clinical fatigue states, the physiological pathways and pathophysiological mechanisms are complicated and remain unclear. This research proposal is designed to determine the association between mitochondrial bioenergetics and fatigue symptoms in nonmetastatic prostate cancer patients receiving XRT. The specific aims are to (1) determine changes in the mitochondrial bioenergetics profile in lymphocytes from prostate cancer patients at the baseline, midpoint, and endpoint of XRT; (2) quantify fatigue symptoms in prostate cancer patients at the baseline, midpoint, and endpoint of XRT; and (3) determine the association between mitochondrial bioenergetics and fatigue symptoms in prostate cancer patients at the baseline, midpoint, and endpoint of XRT.

Background

Fatigue is one of the debilitating symptoms most often reported to nurses by cancer patients receiving XRT [23]. Cancer-related fatigue is described as pervasive, whole-body excessive tiredness that is unrelated to activity or exertion and not relieved by rest or sleep [24]. Cancer fatigue negatively impacts health outcomes, leading to increased depression, impaired cognitive function, increased sleep disturbance, decreased physical activity, and decreased health-related quality of life [25][26][27][28]. The multidimensional causes and mechanisms of cancer-related fatigue remain unclear, and early biomarkers prognostic for radiation-induced fatigue have not been identified. Cancer fatigue is arguably the least understood cancer-related symptom [29]. Moreover, there is no optimal pharmacologic therapy for fatigue. The National Comprehensive Cancer Network (NCCN) Practice Guidelines in Oncology for cancer-related fatigue currently recommend 5 non-pharmacological interventions: activity enhancement, psychosocial improvement, attention-restoring therapy, nutrition, and sleep [12]. For pharmacologic interventions, the NCCN guidelines recommend that, after ruling out other causes of fatigue, the use of psychostimulants should be considered. Specifically, methylphenidate has been recommended, but there are conflicting results regarding its ability to improve fatigue in two small randomized clinical trials [12,30]. With limited options available for nurses to address cancer-related fatigue, novel strategies are needed to identify effective interventions. The interactions of several mechanisms have been proposed to influence the individual fatigue experience, including genetic factors, energy expenditure, metabolism, aerobic capacity, and the patient's immune response to inflammation [29,31]. Research has shown that genetic factors influence the cellular response associated with XRT [32]. Moreover, changes in gene expression in peripheral leukocytes have been seen in fatigued cancer patients receiving XRT [18,33]. Reactive oxygen species (ROS) are considered one of the major direct causes of ionizing radiation-induced damage [34], resulting in a number of adverse effects, including fatigue [35]. It is known that radiation-induced damage alters mitochondrial metabolism, inhibits the mitochondrial respiratory chain, and forms highly reactive peroxynitrite (ONOO−) [36]. Once mitochondrial proteins are damaged, the affinity of substrates or enzymes is decreased, resulting in mitochondrial dysfunction [20].
However, these studies did not identify the mechanism by which radiation inhibits the mitochondrial respiratory chain and produces mitochondrial dysfunction. Identification of the mitochondrial bioenergetics mechanism leading to radiation-induced fatigue is needed in order to develop appropriately targeted therapies. The mitochondrial respiratory chain is essential to produce and maintain an effective cellular content of ATP [20,21]. A reduction in the capacity of neutrophils' mitochondria to utilize oxygen and synthesize ATP has been associated with chronic fatigue syndrome [37]. Our previous study has shown that changes in mitochondrial-related gene expression (e.g., down-regulation of BCS1L and up-regulation of SLC25A37) in lymphocytes were associated with fatigue symptoms experienced by men with nonmetastatic prostate cancer during XRT [9,38]. Decreased BCS1L protein has been shown to lead to decreased incorporation of the Rieske iron-sulfur protein into complex III and decreased activity of complex III [39]. A defect in complex III will impair ATP production through a decrease in oxidative phosphorylation [40,41]. Additionally, decreased complex III activity is associated with increased superoxide (O₂⁻) production and dismutation to hydrogen peroxide (H₂O₂) [42,43].
Furthermore, up-regulation of SLC25A37 increases the mitochondrial inner membrane mitoferrin-1 protein [44]. Increased mitoferrin-1 protein leads to increased iron uptake into mitochondria and promotes heme synthesis [45], and this increased matrix free iron can potentially increase hydroxyl radical formation from hydrogen peroxide [46]. The human BCS1L gene encodes a member of the AAA family of ATPases, which plays an essential role in assembling complex III of the mitochondrial respiratory chain [40]. The mitochondrial respiratory chain is essential for maintaining effective ATP levels [21]. Mitochondrial oxidative phosphorylation enzymes, proteins, and lipids are vulnerable to free radicals [47]. Decreased BCS1L protein has been associated with deficient incorporation of the Rieske iron-sulfur protein into complex III, resulting in decreased complex III activity; a defect in complex III leads to a functional deficit in the respiratory chain and impairs ATP production [48]. SLC25A37 is a solute carrier localized in the mitochondrial inner membrane that serves as the principal iron importer [27]. SLC25A37 plays a critical role in iron-consuming processes in mitochondria, including heme synthesis and Fe-S cluster synthesis [45,49]. Overexpression of SLC25A37 and increased mitoferrin-1 protein lead to increased iron uptake into mitochondria and promote heme synthesis [45], and this increased matrix free iron can potentially increase hydroxyl radical formation from hydrogen peroxide [46]. Moreover, iron overload affects the mitochondrial calcium uniporter, slows calcium uptake, and results in mitochondrial dysfunction [50], which may intensify the fatigue experienced by men treated with XRT. BCL2L1 encodes proteins that belong to the BCL-2 family and are generally located on the mitochondrial outer membrane (MOM), where they regulate the opening of the MOM's voltage-dependent anion channel (VDAC) [51]. VDAC regulates the mitochondrial membrane potential by binding with BCL-2 family proteins, thereby controlling the production of ROS and the release of cytochrome c, both of which are inducers of cellular apoptosis [52]. Overexpression of BCL2L1 blocks programmed cell death by preventing leakage of the MOM and the release of cytochrome c that initiates the intrinsic cell death pathway [53]. The tail-anchored outer membrane protein Fis-1 is distributed on the mitochondrial surface, serving as a rate-limiting fission factor. Inhibition of FIS1 has been shown to lead to an accumulation of damaged mitochondrial material, decreased metabolic function, and reduced insulin secretion [54].

In summary, this preliminary work has established that fatigue is associated with mitochondrial-related genes, which may in turn be associated with mitochondrial biogenesis and bioenergetics in cancer patients receiving radiation therapy. What we do not know is whether radiation induces changes in mitochondrial bioenergetics that cause ATP depletion and contribute to fatigue. Therefore, BCS1L and SLC25A37 were selected to test the hypothesized mechanism of radiation-induced fatigue, because they play important roles in the mitochondrial respiratory chain, where effective ATP content is produced and maintained. In Figure 2, we propose a physiological model of radiation-associated fatigue based on our preliminary findings. We hypothesize that radiation will cause genetic instability and cellular damage, trigger a defect in mitochondrial OXPHOS, and cause ATP depletion and ROS production, resulting in debilitating fatigue.

Study design

This prospective, hypothesis-testing project will use a matched case-control, repeated-measures design. Two groups of subjects will be recruited: prostate cancer patients receiving localized radiation therapy (XRT) and prostate cancer patients without any treatment (active surveillance, AS). To determine whether the mitochondrial bioenergetics profile and fatigue symptoms observed during XRT are associated with radiation therapy, we will use age-, race-, and clinical stage-matched prostate cancer patients under AS as the control for the comparison of fatigue and mitochondrial bioenergetics at baseline. Furthermore, changes in the mitochondrial bioenergetics profile and fatigue, and the association between changes in bioenergetics and fatigue, will be determined in men receiving XRT at the midpoint and endpoint of XRT, compared with their baseline data.

Sample and setting

The study sample will be drawn from a population of localized prostate cancer patients scheduled for XRT. The study will be introduced to collaborating clinicians during one-on-one or team meetings at the National Cancer Institute-designated Comprehensive Cancer Center in Northern Ohio. Patients will be referred by the collaborating clinicians to the study investigator for screening. The sample will consist of 25 individuals who meet the study criteria and provide consent. This project intends to enroll 25 research participants from each group over a one-year period. In our preliminary work [55], the BCS1L average ± standard deviation expression values at baseline, midpoint, and endpoint were 6.60 ± 1.22, 7.76 ± 1.64, and 7.64 ± 1.29, respectively, in prostate cancer patients receiving EBRT. Given this, for our proposed study, a sample size of 22 will achieve at least 80% power to detect a mean of paired differences of 0.58 (half of 1.15), with an estimated standard deviation of differences of 0.93, using a one-sided paired t-test at a significance level of 0.05.
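The sample-size claim above can be checked with standard power software. Below is a minimal Python sketch assuming the statsmodels package is available; the inputs (paired difference 0.58, SD 0.93, one-sided α = 0.05) come from the paragraph above, while the script itself is illustrative and not the authors' actual calculation.

    # Hedged sketch: verify the one-sided paired t-test power claim.
    from statsmodels.stats.power import TTestPower

    effect = 0.58 / 0.93        # Cohen's d for the paired differences (~0.62)
    analysis = TTestPower()

    # Smallest n giving 80% power at one-sided alpha = 0.05
    n_required = analysis.solve_power(effect_size=effect, power=0.80,
                                      alpha=0.05, alternative="larger")
    # Achieved power with the planned n = 22 (should come out above 0.80)
    power_at_22 = analysis.solve_power(effect_size=effect, nobs=22,
                                       alpha=0.05, alternative="larger")
    print(round(n_required), round(power_at_22, 2))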
Study measures

Fatigue is essentially a subjective experience [56], and measurement of fatigue is a challenging process [57]. The subjective experience of fatigue is of equal importance to objective measurement, because the symptom of fatigue is a complex phenomenon with multiple dimensions [58,59]. Therefore, for this study, the revised Piper Fatigue Scale and the Patient-Reported Outcomes Measurement Information System for Fatigue have been selected to assess the subjective dimensions of fatigue experienced by men treated for prostate cancer.

Fatigue

Fatigue will be evaluated by two validated questionnaires: the revised Piper Fatigue Scale (r-PFS) and the Patient-Reported Outcomes Measurement Information System for Fatigue (PROMIS-F). The r-PFS is a 22-item paper/pencil questionnaire that measures 4 fatigue dimensions: behavioral/severity, sensory, cognitive/mood, and affective. The r-PFS shows good reliability and validity, with internal consistency ranging from 0.7-0.9 across the 4 fatigue dimensions in cancer patients undergoing XRT [58]. It can be completed in 10 minutes. The PROMIS-F was developed from more than 1000 datasets from multiple disease populations, including cancer. Initial psychometric properties showed an internal consistency reliability coefficient of 0.80 [60]. It consists of a 7-item fatigue questionnaire and takes about 2 minutes to complete.

Depression

The Hamilton Depression Rating Scale (HAM-D) will be used to assess depressive symptoms. The HAM-D is a 21-item scale with good internal reliability (α = 0.8-0.9), completed by study staff through subject interview during the screening process. Scores can range from 0 to 78; higher scores (>17) indicate greater symptoms of depression [61]. It takes approximately 15 minutes to complete.

Physical activity

The International Physical Activity Questionnaire (IPAQ) will be used to evaluate physical activity levels for each participant. The IPAQ is a well-validated, 7-item self-report questionnaire that asks subjects to recall the amount of physical activity undertaken over the past 7 days [62]. It takes approximately 5 minutes to complete.

Mitochondrial bioenergetics profile

The mitochondrial bioenergetics profile includes (1) the mitochondrial oxidative phosphorylation rate, (2) the activity of the electron transport chain (ETC) complexes, and (3) ATP production and ROS generation.

Mitochondrial oxidative phosphorylation rate

The rate of, and changes in, mitochondrial oxidative phosphorylation will be measured using patients' lymphocytes. Dr. Hoppel and his research team have developed a new approach to measuring integrated mitochondrial OXPHOS in human fibroblasts [63]. The standard laboratory procedure has been optimized and tailored to the use of human lymphocytes in order to test the proposed hypothesis. Figure 3 and Figure 4 describe the substrate-inhibitor tracings for protocols 1 and 2 in human lymphocytes. Two milliliters of respiration buffer with intact lymphocytes will be injected into the two chambers of the Oroboros Oxygraph-2k (O2K) system. Data will be normalized to cell number and protein concentration, as well as to citrate synthase activity, which will be measured in each experiment as a marker of the amount of mitochondria per chamber. As described [63], mitochondrial OXPHOS will be measured starting with intact cellular respiration after air calibration of the O2K system [64]. The purpose of protocol 1 (Figure 3) is to measure the respiration rate of intact and then permeabilized cells. To gain access to the mitochondria, we will use digitonin to permeabilize the plasma membrane, and a decreased respiration rate will be observed.
When the rate reaches its nadir, adenosine diphosphate (ADP) will be added to obtain the state 3, ADP-stimulated respiration rate. Next, we will add glutamate to provide additional substrate to generate NADH for complex I, and then succinate will be injected as a complex II substrate to reduce coenzyme Q (CoQ), measure complex I and II substrate oxidation, and assess the availability of CoQ. After that, an uncoupler, carbonyl cyanide-p-trifluoromethoxyphenylhydrazone (FCCP), will be titrated to reach maximal oxidation. Then, rotenone will be added to obtain the rate through complex II while complex I is inhibited. Antimycin A will then be added as a complex III inhibitor to obtain the residual rate, considered non-mitochondrial oxidation. At the end, to obtain the rate of complex IV, we will add tetramethyl-p-phenylenediamine (TMPD) and ascorbate to reduce cytochrome c for complex IV in the uncoupled state, and sodium azide will be injected to inhibit complex IV. Protocol 2 (Figure 4) measures fatty acid oxidation and the complex III respiration rate. The respiration rate does not change upon adding malate, which is followed by injecting palmitoylcarnitine to obtain fatty acid oxidation. As in protocol 1, we will add digitonin to permeabilize the plasma membrane, and the oxygen consumption rate will gradually decrease over 15 minutes. When the rate reaches its nadir, ADP will be added to obtain the state 3, ADP-stimulated respiration rate. Next, rotenone will be added to inhibit complex I, and then a reduced analog of coenzyme Q, duroquinol, will be added as the complex III substrate to yield the respiration rate of complex III. After that, multiple titrations of FCCP will be performed to obtain the maximal oxidative capacity of complex III. Lastly, antimycin A will be added to inhibit complex III, and protocol 2 is complete.

ETC complex activity

The activity of the four complexes (I, II, III, IV) of the mitochondrial ETC will be measured in isolated lymphocytes using a spectrophotometer [65]. First, rotenone-sensitive NADH cytochrome c reductase will be used to measure the linked activity of complexes I and III. Second, antimycin A-sensitive succinate cytochrome c reductase will be used to measure the linked activity of complexes II and III. Third, antimycin A-sensitive decylubiquinol cytochrome c reductase (complex III) will be used to measure the reduction of cytochrome c coupled to the oxidation of decylubiquinol to decylubiquinone; complex III is inhibited by antimycin A, so the assay is measured as the antimycin A-sensitive component. Lastly, cytochrome c oxidase (complex IV) will be measured; it is the terminal component of the ETC, oxidizing reduced cytochrome c and converting oxygen to water.

ATP production and ROS generation

Total cellular ATP will be determined using a convenient bioluminescence assay that quantifies ATP with recombinant firefly luciferase and its substrate. A standard curve for a series of ATP concentrations will be generated for each assay, and the ATP amount will be calculated against the standard curve. The rate of H₂O₂ production will represent ROS production and will be determined using the oxidation of the fluorogenic indicator Amplex Red in the presence of horseradish peroxidase [66]. The concentrations of horseradish peroxidase and Amplex Red will be read on a microplate reader through fluorescence.
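As a reading aid for the titration protocols above, the sketch below shows, in Python, the kind of arithmetic by which complex-specific rates are typically derived from such a trace: the antimycin A-resistant residual is treated as non-mitochondrial oxidation and subtracted from the other states, and rates are normalized to citrate synthase activity. All numeric values and variable names are hypothetical illustrations, not data or code from this study.

    # Hedged sketch: deriving complex-specific respiration from a
    # protocol-1-style substrate-inhibitor trace (toy numbers).
    rates = {                      # O2 flux readings; values invented for illustration
        "state3_ADP": 42.0,        # ADP-stimulated (coupled) respiration
        "FCCP_max": 55.0,          # maximal uncoupled oxidation
        "after_rotenone": 20.0,    # complex I inhibited -> complex II-driven rate
        "after_antimycin_A": 4.0,  # complex III inhibited -> residual oxidation
    }
    non_mito = rates["after_antimycin_A"]              # non-mitochondrial background
    complex_II_rate = rates["after_rotenone"] - non_mito
    coupled_rate = rates["state3_ADP"] - non_mito
    max_capacity = rates["FCCP_max"] - non_mito

    cs_activity = 1.8                                  # citrate synthase marker (arbitrary units)
    normalized = {name: (value - non_mito) / cs_activity
                  for name, value in rates.items()}    # rates per mitochondrial content
    print(complex_II_rate, coupled_rate, max_capacity, normalized)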
Data Collection and Procedure

The study duration will be approximately 18 months. Localized radiation therapy for prostate cancer is usually administered 5 days a week for 7-9 weeks, depending on the type of treatment delivery and the dose used. In this study, three time points of data collection (baseline, midpoint, and endpoint) have been chosen to represent different phases of the radiation therapy process, based on the peak of self-reported fatigue in prostate cancer patients treated with XRT. The healthy individual group will be asked to provide data at one time point. Before starting the study, participants will be screened for eligibility by the investigator and then scheduled for data collection at their convenience. All tests and study visits will be conducted at the University Hospitals Seidman Cancer Center. Blood draws and self-administered questionnaires will be coordinated with all clinical care procedures related to the participant's XRT schedule so that unnecessary duplication of tests and inconvenience can be avoided. One 45 ml peripheral blood sample (3 tablespoons) will be collected from each patient at the baseline, midpoint, and endpoint of XRT. The tubes will then be transported to the mitochondrial research laboratory. Lymphocytes will be isolated from blood anticoagulated in EDTA tubes immediately once the tubes are delivered to the laboratory for processing, as explained in the previous section.

Data Analysis

The study is designed to examine the association between mitochondrial bioenergetics and fatigue in prostate cancer patients receiving XRT over a period of time. Descriptive statistics (means ± standard deviations) will be calculated to describe the mitochondrial bioenergetics profiles and the incidence and severity of fatigue at each time point. We will use two-sample t-tests and analysis of covariance, adjusting for possible confounders such as age, race, and clinical stage, to compare differences in fatigue and bioenergetics profile at baseline between the XRT and AS groups. To characterize the changes in fatigue scores and mitochondrial bioenergetics profiles (oxidative phosphorylation, ETC complexes, ATP production, and ROS generation) before, at the midpoint of, and at the endpoint of XRT, we will perform paired t-tests between baseline and midpoint, baseline and endpoint, and midpoint and endpoint. A linear mixed model will be used to determine the associations between mitochondrial bioenergetics and fatigue in prostate cancer patients at the baseline, midpoint, and endpoint of XRT. The intercept and slope of the individual growth curve for fatigue scores and mitochondrial bioenergetics will be estimated using mixed model analysis. In the model, we will use time variables in terms of days during EBRT, and a simple linear relationship will be assumed in the time variable. The intercepts and slopes of the outcome variables (changes in fatigue scores) and the predictors (changes in mitochondrial bioenergetics) for each participating individual will be estimated in the mixed model. Based on the variances and covariances of the random effects, we will compute the correlations between fatigue and the other variables under consideration. If the measured continuous variables are not normally distributed, appropriate transformations (e.g., log, square root) will be applied to meet the assumptions of the statistical tests. All statistical analyses will be performed using Stata 11.0 software [67].
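The analysis plan above is specified for Stata; purely as an illustration of the same models, here is a minimal Python sketch using pandas, scipy, and statsmodels. The file name and column names (id, timepoint, day, fatigue, atp) are hypothetical placeholders, and the random-intercept-and-slope specification is one reasonable reading of the growth-curve description, not the authors' exact code.

    # Hedged sketch: paired t-tests plus a linear growth model with random
    # intercepts and slopes per participant (an analogue of the Stata plan).
    import pandas as pd
    from scipy.stats import ttest_rel
    import statsmodels.formula.api as smf

    df = pd.read_csv("fatigue_bioenergetics.csv")    # hypothetical long-format file

    # Paired t-test: baseline vs midpoint fatigue within participants
    wide = df.pivot(index="id", columns="timepoint", values="fatigue")
    t_stat, p_value = ttest_rel(wide["midpoint"], wide["baseline"])

    # Mixed model: fatigue over days of XRT with a bioenergetics predictor,
    # plus a random intercept and random slope in time for each participant
    model = smf.mixedlm("fatigue ~ day + atp", data=df,
                        groups=df["id"], re_formula="~day")
    result = model.fit()
    print(t_stat, p_value)
    print(result.summary())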
Discussion

Through this study, we propose a novel mitochondrial bioenergetics mechanism for cancer-related fatigue based on a molecular-genetic approach. The proposed physiological mechanism of cancer-related fatigue is linked to ATP depletion and impairment of mitochondrial bioenergetics, triggered by radiation-induced genetic instability and cellular damage. Currently, there are no evidence-based interventions, such as optimal pharmacologic therapy, nutritional supplements, or dietary interventions, for cancer-related fatigue. This will be the first study to determine the role of mitochondrial metabolism, specifically bioenergetic function, in the development of debilitating radiation-induced fatigue. The study represents an opportunity for new insights into a molecular-genetic and mitochondrial bioenergetics mechanism, specifically, changes in mitochondria-related genes linked to specific physiological processes and functions in mitochondria. We acknowledge that experimental variability and missing data are potential threats to reliability. All laboratory work will be handled by a single research assistant and the investigator. Measurement of the mitochondrial bioenergetics profile will be performed on the same day that lymphocytes are harvested, using the same batch. Further, we use a new approach to measure mitochondrial OXPHOS in freshly isolated human lymphocytes. Specifically, Dr. Hoppel and his research team have developed a method to measure complex III and integrated mitochondrial function in human fibroblasts [63]. The standard laboratory procedure has been optimized and tailored to the use of human lymphocytes in order to test the proposed hypothesis. In addition, obtaining data from a control group (age-, race-, and clinical stage-matched prostate cancer patients without XRT) will enable us to control for the influence of confounding variables on outcome measures.

Conclusion

Achieving the aims of this hypothesis-testing project will establish the molecular-genetic mechanism of radiation-induced fatigue in prostate cancer patients receiving XRT. The down-regulation of BCS1L, a defect in complex III, and ATP depletion and ROS production in lymphocytes will need to be verified in a larger sample in order to identify early biomarkers prognostic for radiation-induced fatigue in clinical settings. Nonetheless, our results will provide a foundation for interventions targeting cancer-related fatigue. This research project is an essential step in pursuing a novel hypothesis designed to reveal the physiologic mechanisms of cancer-related fatigue, a ubiquitous and significant cause of patient distress. The results have the potential to identify targets for pharmacological and/or nutraceutical interventions and to initiate a new direction in the design of interventions for cancer-related fatigue.

Funding

This research project is funded by Oncology Nursing Society Foundation Nursing Research Grant RE01.

Conflicts of interest

The authors report no conflicts of interest.
When Push Comes to Shove: How Are Students With Autism Spectrum Disorder Coping With Bullying?

Students with autism spectrum disorder (ASD) are frequent targets of peer victimization (i.e., bullying). Although the frequency and potential impact of such experiences on students with ASD have been examined, the potential coping strategies implemented by such students are relatively unexplored. This qualitative study examined coping strategies for peer victimization as suggested by 38 students with ASD who do not have cognitive impairment. Participants viewed cartoons depicting characters that experienced various forms of bullying at school and responded to open-ended questions to explore their suggested coping strategies. Thematic analysis yielded three themes: approach coping, avoidance coping, and complexities of bullying. This study provides insight into the coping strategies implemented by students with ASD and possible avenues for school-based intervention.

Knowledge of coping strategies, which are implicated in the reduction of physiological arousal and psychological suffering when confronted with stressors like peer victimization, is important in understanding how bullying is handled by the victims (Baron, Lipsitt, & Goodwin, 2006). Coping has been defined as "a person's ongoing efforts in thought and action to manage specific demands appraised as taxing or overwhelming" (Lazarus, 1993, p. 8) and has a strong influence on an individual's physical and mental health, particularly in response to early and extreme stress (National Scientific Council on the Developing Child, 2005/2014). Indeed, ineffective and/or maladaptive coping strategies can have long-lasting and detrimental effects on an individual's social-emotional well-being (Lazarus, 2006). Coping strategies are commonly classified into approach and avoidance responses (Roth & Cohen, 1986). Approach strategies are actions or behaviours taken by an individual to directly alter stressful situations; examples include active problem solving and social support seeking, in which others' insights and support are sought to address the issue (Fields & Prinz, 1997). Conversely, avoidance strategies are those that enable an individual to manage personal physiological and psychological reactions to the negative stressor (Fields & Prinz, 1997). Researchers have described three avoidance strategies that typical school-aged children often use in the context of peer conflict: (a) cognitive distancing, or resistance to thinking about the negative experience; (b) internalizing, or emotional reactions directed toward oneself for bringing on the negative situation; and (c) externalizing, or focusing one's emotions on other people or objects (Causey & Dubow, 1992). Combinations of strategies are often implemented by typically developing individuals, with the selection and application varying depending on environmental and situational demands (Kochenderfer-Ladd & Skinner, 2002). In general, approach strategies are related to improved psychological outcomes when successful at producing the desired change (i.e., stopping the stressor), whereas avoidant strategies can be adaptive for those with less control over their situation, as is the case for students with ASD who experience frequent bullying (Carver, Scheier, & Weintraub, 1989). A limited number of studies have investigated how students with ASD cope with peer victimization.
One of the first studies on this topic was a qualitative investigation of responses to bullying among 11-to 16-year-old students with ASD (Humphrey & Symes, 2010). The students varied in their responses to bullying, with some participants seeking social support from teachers, friends, and/or classmates. Other students attempted to deal with the victimization by themselves and only sought social support as a last resort. Some resorted to aggression, and many students reported a lack of trust in others and a desire for withdrawal from social interaction. In general, participants' responses were largely based on whether they believed that the strategy would be effective given their previous experience. Social support seeking was mediated by relationship histories and whether participants felt that there was someone to whom they could turn. Barriers to seeking social support included more severe ASD symptoms, an absence of trust in others, and a preference for solitude. These barriers increased participants' isolation and, ultimately, their susceptibility to peer victimization. More recently, Bitsika and Sharpley (2014) examined how 48 boys with ASD and without intellectual disability responded to bullying via a mixed-method approach. The most commonly reported responses were to walk away (56%), ignore (54%), say something back (44%), or avoid the bullies (40%). A smaller but significant percentage of the sample (33%) responded with physical aggression (e.g., hitting, pushing, kicking), and 17% smiled to demonstrate that they were unaffected by the bullying. When asked if they informed anyone about these experiences, 76% of the sample told their parents, 52% told school staff (e.g., teacher, principal, or school counselor), 10% told their siblings, and 15% did not tell anyone. Many participants stated that informing someone about the bullying did not alleviate the associated distress, with 38% of the sample reporting that telling someone "sometimes" or "always" made the bullying worse. At home, many of the students engaged in "withdrawal" responses, with 56% asking parents to not send them to school the next day and 35% trying to forget about the bullying. Overall, the authors of this study concluded that participants did not have effective coping strategies for dealing with bullying. The current study sought to identify specific coping strategies used by students with ASD, who do not have an intellectual disability, as a precursor to potential future research on the effectiveness of certain strategies. Bullying is a complex problem, and there is limited research investigating associated coping strategies among students with ASD. In the general population, the success of certain coping strategies varies based on a number of factors such as gender, frequency of victimization, and quality of friendships (Kochenderfer-Ladd & Skinner, 2002). It is likely that the same is true for students with ASD, with strategy effectiveness varying based on both situational and individual factors. The current study explores the coping strategies that students with ASD generate when confronted/provided with complex/multifaceted bullying situations. We sought to expand the range of coping responses identified in previous studies and provide students with a platform to voice their opinions on this topic via a video-prompted inquiry approach. Participants Thirty-eight children and adolescents with ASD between the ages of 8 and 13 years (M = 11.26, SD = 1.58) participated in this study. 
Participants attended Grades 3 (n = 5), 4 (n = 6), 5 (n = 6), 6 (n = 10), 7 (n = 9), or 8 (n = 2). They were predominantly enrolled in a regular classroom (n = 20), with some enrolled in a specialized educational environment (n = 10) or a regular classroom with some small group instruction (n = 6). One participant was in a specialized setting with some regular classroom instruction, and one was homeschooled. Participants had a previous diagnosis of ASD made by an appropriate licensed professional prior to participating; the validity of this diagnosis was confirmed by the research team via the Autism Diagnostic Observation Schedule-Second Edition (ADOS-2; Lord et al., 2012). Thirty-one participants (82%) identified as male, which reflects the expected gender ratio of the ASD population (APA, 2013). All participants were required to demonstrate a verbal intelligence score of 70 or greater on a brief measure of cognitive ability to ensure that they were able to understand and respond to the study's questions. Participant demographic information appears in Table 1. (Table 1 note: VCI = verbal comprehension. Age is reported in decimalized format, e.g., 11 years 6 months is 11.5 years. The Wechsler Abbreviated Scale of Intelligence-Second Edition [WASI-II] is from Wechsler, 2011; mean and SD performance on the WASI-II is reported in standard score units.)

Measures

ADOS-2. The ADOS-2 was administered by research-reliable administrators to obtain objective evidence of each participant's ASD diagnosis. The ADOS-2 is a semi-structured, standardized assessment of social-communicative abilities, imaginative play, and restricted/repetitive behaviours, and is considered a "gold-standard" instrument in ASD assessment. Examinee behaviours are documented during the assessment and then coded and summed to create an algorithm for diagnosis. Only Module 3 of the ADOS-2 was administered to participants, as it is suitable for children or adolescents with fluent speech. Interrater reliability for Module 3 is strong, as are test-retest reliability and internal consistency. Item-total correlations support the reliability and validity of ADOS-2 scores; sensitivity and specificity are 91% and 84%, respectively, when differentiating autism from non-spectrum individuals, and 72% and 76%, respectively, when differentiating individuals on the broader autism spectrum from non-spectrum individuals (Lord et al., 2012).

Wechsler Abbreviated Scale of Intelligence-Second Edition (WASI-II). The WASI-II (Wechsler, 2011) is an individually administered abbreviated test of cognitive intelligence for individuals aged 6 to 90 years. It is a reputable choice for the measurement of intelligence, as it has been normed on a large and representative sample and shows strong psychometric properties, reliability, and validity (McCrimmon & Smith, 2013). It consists of two domains, verbal comprehension (VCI, consisting of the Similarities and Vocabulary subtests) and perceptual reasoning (PRI, consisting of the Block Design and Matrix Reasoning subtests), and provides a full-scale IQ (FSIQ). The VCI was used to determine each participant's verbal intelligence.

Semi-structured interviews. Interviews were conducted with participants after they viewed three short video clips of bullying episodes adopted from the freely available video toolkit Take a Stand, Lend a Hand, Stop Bullying Now (U.S. Department of Health and Human Services, 2011).
The webisodes (1, 2, and 5) show cartoon characters experiencing various forms of bullying (verbal, physical, social, cyber) within a school environment and represent the complex nature of bullying (e.g., repetitiveness despite telling a trusted teacher, bullying by multiple students, beginning at a new school). After each video, students were asked open-ended questions to obtain their opinions on how the victim could potentially cope with the situation and also how they themselves would cope if they were in the same situation as the character. The interview questions are presented in the appendix. Procedure The current study is embedded in a larger investigation of bullying of students with ASD. Participants were recruited through community agencies, school divisions, and health offices that provide services to individuals with ASD. In addition, participants from previous research with the corresponding author were invited to participate in the current study. Interested parents contacted the research team to obtain information about the study and attended one or two sessions with their child (approximately three hours total). Informed consent was obtained from parents and verbal assent was obtained from students. The study was approved by the local university ethics review board. Children first completed the inclusionary measures (ADOS-2 and WASI-II). All participants met the ADOS-2 criterion; participation was concluded for children who did not receive a WASI-II VCI score of 70 or higher. Children who met these criteria were invited to complete the remaining measures for the overall project. Bullying is a sensitive topic and viewing students being bullied (even cartoon characters) has the potential to upset participants. Thus, the research team observed children carefully for any distress and checked in with the participants about their desire to continue viewing the videos throughout this process. Only one participant expressed mild distress by asking for the video to be paused and commenting that he did not like how the main character was treated by the others in the video; when given the option to discontinue participation, he indicated that he wanted to continue responding to the questions. Results Transcription of the interviews occurred after each participant completed their participation. Thematic analysis was used to explore themes within the data. The process of thematic analysis consisted of five steps; interested readers are directed to the work of Braun and Clarke (2006) to familiarize themselves with the process. The first author was responsible for the initial coding and development of themes. The process began with the researcher making initial notes and then proceeding to line-by-line open coding. Initial thoughts regarding potential themes were recorded and the process of coding remained flexible and dynamic as additional interviews were analyzed. After all of the interviews had been coded, segments of interviews were retrieved and written with codes to further develop code groupings. Further grouping and code re-establishment aided in the development of themes, and the resulting themes were then named and defined. A second coder then independently coded all of the qualitative data and derived themes; high intercoder reliability was achieved (97% agreement). These two researchers then discussed and resolved any discrepancies. 
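To illustrate the arithmetic behind the reported intercoder reliability figure, the short sketch below computes simple percent agreement between two coders. The theme labels are invented examples rather than codes from the study's transcripts, and in practice a chance-corrected index such as Cohen's kappa is often reported alongside percent agreement.

```python
# Minimal sketch of a percent-agreement computation like the 97% figure
# reported above. The two coders' theme assignments are invented examples.
coder1 = ["approach", "avoidance", "approach", "complexity", "approach"]
coder2 = ["approach", "avoidance", "approach", "approach", "approach"]

# Count segments where both coders assigned the same theme.
agreements = sum(a == b for a, b in zip(coder1, coder2))
percent_agreement = 100 * agreements / len(coder1)
print(f"intercoder agreement: {percent_agreement:.0f}%")  # 80% for this toy data
```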
The qualitative questions explored participants' views on how children who are victimized could address and cope with the situation and, subsequently, how they themselves would respond if they were in the same situation as the cartoon character in the video. Thematic analysis indicated three primary themes (each with several subthemes): (a) approach coping strategies, (b) avoidance coping strategies, and (c) complexities of bullying. A description of each theme, including sample quotes, is provided below and presented in Table 2. (Table 2 pairs each theme and subtheme with representative participant quotes, such as "I might try to ignore. Cause then maybe the bully would realize that he or she wasn't getting an effect on me . . . and then probably stop" and "I don't know. Once something is on the Internet it's always there.") Theme 1: Approach Coping Strategies This theme included active methods of changing the situation, including strategies that may be maladaptive (e.g., physical aggression/revenge). The vast majority of responses fell under this theme and were thus further categorized into four subthemes: (a) telling a teacher or another adult, (b) stand up/say something, (c) problem solving, and (d) externalizing behaviours. Subtheme 1: Telling a teacher or another adult. The most common response under the approach coping theme involved seeking social support from an adult (e.g., teacher, parent, and/or principal). Overall, 103 such responses were provided across the three different webisodes viewed by participants. Some participants reported that they would try to solve the situation on their own at first and, if unsuccessful, they would then tell an adult. Others indicated that telling someone would make them feel better, while acknowledging that it would not change the situation. Three participants acknowledged the severity of bullying and suggested involving the police, and only four participants suggested turning to friends for support/help. In addition, two participants suggested informing the bullies' parents. Subtheme 2: Stand up/say something. Another common response was to stand up to the bully. However, some of the students stated that their "comebacks" had the potential to further entice the bully to continue bullying. Other students reported that they would threaten the bully in an effort to address the situation. Finally, some students offered strategies similar to those taught to adolescents in "Social Stories" (a popular social skills building resource; Gray, 2004) as well as in an empirically supported social skills intervention for students with ASD (e.g., responding to teasing with a quick retort such as "Whatever!"; Laugeson & Frankel, 2011). Subtheme 3: Problem solving. Only seven students suggested problem-solving techniques when faced with peer victimization. Responses within this subtheme indicated some degree of resilience and entailed efforts to maintain the peace with creative and mature ways to befriend the bully. However, some responses could be considered overly optimistic (e.g., attempting to connect with the bully as opposed to dwelling on the situation). Subtheme 4: Externalizing behaviours. Eleven students suggested externalizing strategies (e.g., getting angry, physical aggression, and/or revenge) that, while potentially allowing for emotional release during times of stress, are likely not effective means of coping with victimization. Interestingly, some students suggested unrealistic and vengeful responses. For example, one student provided insight into how they would seek revenge on the bully in gym class during a game of dodgeball. Theme 2: Avoidance Coping Strategies This theme captured more "passive" responses in that the students sought to avoid the stressor instead of seeking to change their situation. Two subthemes were derived, including ignoring or doing nothing and walking away/staying away (including changing schools). Subtheme 1: Ignoring. Fifteen participants suggested ignoring as a strategy. As part of this approach, two participants recognized the difference between once-off aggression and repetitive bullying, suggesting that ignoring is effective when it is a one-time occurrence but not necessarily if it occurs repeatedly. Five participants simply suggested to "not listen" or "ignore it." One student reported that he would "probably just live with it" whereas another student reported that he would "wait for the bullies to go away." Subtheme 2: Walk away/stay away. Twelve participants suggested walking away from or avoiding the bully, either as an initial response or after other strategies were ineffective. Four students suggested running away from the bully, hiding, moving their lockers, and/or leaving the school for that day, potentially reflecting the lack of safety that they feel at school. An additional three participants suggested changing schools as an option. Theme 3: Complexities of Bullying This theme portrays the complexities of bullying and the subtleties that make coping with victimization difficult for students with ASD. Two subthemes were identified: (a) barriers to strategies and (b) uncertainty. Subtheme 1: Barriers to strategies. Twenty responses by 11 participants reflected barriers to certain strategies and the challenge of determining a straightforward solution. Participants indicated that bullying may persist despite telling a teacher, suggesting that simply telling an adult or someone in authority may not address the situation effectively. In addition, several participants highlighted the potential consequences of seeking support from others (e.g., could get made fun of for telling; the problem could get bigger). Subtheme 2: Uncertainty. Some responses indicated hesitation, frustration, and an inability to provide an idea for what to do when faced with bullying. Eleven participants reported uncertainty regarding which responses or strategies would be effective in addressing the situation.
Interestingly, one participant spoke to the reality of cyberbullying by indicating that derogatory statements made online are often lasting or permanent, and so it is difficult to respond to such an uncontrollable situation. Discussion This is one of the few studies to examine the strategies that youth with ASD consider when confronted with bullying. These included both approach and avoidance behaviours and represented a variety of tactics within each category. Many of these strategies are reactive in nature and may be effective in the short term; few strategies were pro-active or involved planning for a long-term solution (i.e., active problem solving). It may be that the discussion of bullying with students with exceptionalities must strike a balance between short- and long-term goals and strategies. Interviews also confirmed the major complexities that youth with ASD experience when considering how to problem solve about bullying. The thematic analysis yielded similar coping strategies as previous research (e.g., approach and avoidance; Fields & Prinz, 1997) and produced findings consistent with those of Humphrey and Symes (2010) in that suggested coping responses were variable and dependent upon each student's unique perspective, experience with victimization, and past experience with responses to being bullied. Consistent with previous research, participants in the current study indicated that some coping behaviours may actually worsen the bullying, and as a result, they are often at a loss when considering which actions to take (Bitsika & Sharpley, 2014). In addition, the relative lack of responses suggestive of seeking social support from peers is consistent with research findings that students with ASD tend to occupy low social status and have few friends. As with the general population (Kochenderfer-Ladd & Skinner, 2002), many students with ASD struggle with determining what to do when confronted with such a highly stressful interpersonal situation. Moreover, it may be that the participants' problem-solving strategies are less effective as these students have less control over their situations and are frequently victimized (Craig, Pepler, & Blais, 2007; Kochenderfer-Ladd & Skinner, 2002). Complicating this matter further, many youth with ASD struggle with executive functioning, emotion regulation, and social skills, all of which support effective coping with bullying (DeRosier, 2004; Mahady Wilton, Craig, & Pepler, 2000; Monks, Smith, & Swettenham, 2005). It is also important to recognize that solving issues of bullying in schools must extend beyond the strategies that a child must rely upon, as the genesis of bullying and the potential solutions for this form of abuse are relational and contextual in nature. This study is also novel given the kinds of materials used to scaffold the inquiry. Instead of simply asking students to describe what they would do when they are bullied, the current study used videos of realistic bullying situations. There are many advantages to this methodological approach. First, as noted by Flanagan and colleagues (2013), "Children might identify with fictional characters and bullying situations both at a cognitive and emotional level and gain insight more easily than talking directly about their own experiences" (p. 699). In addition, the cartoons depicted multifaceted scenarios in which characters were bullied.
For example, one video depicts a character who is small in stature, plays the tuba, is bullied by multiple higher-status students, and tells his teacher. Another video shows a character who is starting at a new school, is the new kid with no social support/friends (similar to students with ASD), and does not want to tell her mom about the bullying. Despite the potential realism depicted in the cartoons, a limitation of the current study is that there is a difference between the video-prompted inquiry approach and actual problem solving in a moment of peer victimization when distress and confusion may occur. Indeed, the current study evaluated coping strategies only indirectly, and participants were not directly asked to report on what others (e.g., parents/teachers) could do to help solve the problem, with the understanding that individual and contextual interactions highlight the true complexity inherent within a bullying event. In addition, it may have been easier for students to fantasize about their responses to being bullied given that the current study showed videos of cartoons rather than a more realistic medium. Future research is needed to bridge the gap between what students with ASD say they will do when confronted with bullying and what they actually do, to assist in building skills that can be used in the moment to promote safety and resilience. Indeed, the participants' actual implementation of strategies in their everyday lives was not ascertained and the effectiveness of these approaches could not be determined. Overall, this study provides important new information on coping strategies suggested by students with ASD in the context of being bullied. The results indicate that students acknowledge that bullying is complex; certain coping strategies might be more successful for some students than others depending on the context/situation/social status of the student. The results also indicate that many students suggest approach strategies; however, many of these strategies may not successfully address the situation, and few participants reported engaging in effective problem-solving tactics. Importantly, supportive adults (e.g., teachers, principals, and parents) could be educated on how to help children or youth who approach them when bullied (Hong, Neely, & Lund, 2015). These children or youth are often vulnerable, and how a teacher or parent responds could negatively impact the child (and the child's social reputation). For example, teachers who go straight to the source (e.g., the bully) could actually place the victimized child at further risk. Hong and colleagues (2015) have developed steps for reducing peer victimization among students with ASD, including instruction in behavioural strategies through video modeling, social stories, and role-play. They also suggest that teachers and parents can teach students with ASD how to report potential bullying incidents (modifying reporting procedures as necessary based on the student's cognitive and language ability/effectiveness). In addition, they indicate that the use of a systematic monitoring system by teachers and parents can be an effective tool for the identification of, and subsequent coping with, victimization among students with ASD. Social and emotional learning (SEL) programs, which aim to foster empathy and understanding of individual difference among peers, are one approach to promoting acceptance and reducing bullying in schools (Greenberg et al., 2003).
School psychologists can play an important role in monitoring such programs and troubleshooting with teachers about issues that may arise. Although it is important to teach students how to cope effectively with peer victimization, it is also the role of school personnel, including school psychologists, to promote positive school climates and prevent bullying from occurring. In the current study, some of the participants' responses reflected a lack of safety at school (e.g., leaving the school for the day or changing schools). Similarly, Bitsika and Sharpley (2014) found that more than half of their sample asked their parents not to send them to school the next day. With students with ASD being increasingly included in mainstream classrooms, it is essential to promote positive school climates and build empathy and understanding for all students. As such, school psychologists can also conduct school-wide screening for bullying and/or school climate regarding bullying. Such information can support school psychologists' efforts to promote positive school climates (e.g., target bullying through schoolwide interventions, promote social-emotional development and positive peer relationships, promote positive student-teacher relationships, and emphasize the importance of home-school collaboration). There is also the potential that evidence-based interventions that assist children with ASD to cope with interpersonal stressors through cognitive and behavioural means could be tailored to build skills specific to bullying experiences. For instance, there is considerable research to show that cognitive behaviour therapy, provided in both group (e.g., Reaven, Blakeley-Smith, Culhane-Shelburne, & Hepburn, 2012) and individual (e.g., Wood et al., 2009) formats, is efficacious in addressing anxiety in youth with ASD. Common across these interventions is a focus on improving emotional awareness and complexity, assertiveness training, relaxation strategies, cognitive restructuring, in vivo practice, and positive reinforcement. There is also evidence for their application within the school context (Fujii et al., 2013). In addition, evidence-based cognitive behavioural social skills programs often incorporate specific modules on safety and bullying into their manualized approaches, often focusing on ways for youth to identify and elicit help from others when experiencing victimization (Beaumont & Sofronoff, 2008; Laugeson et al., 2012). The efficacy of these interventions to reduce the frequency or impact of bullying has yet to be examined and is an important area of future research. In sum, the results of this study provide important insights into coping strategies for victimization of students with ASD. Such students are frequently bullied yet often lack the social awareness and/or support to deal with this behaviour effectively. Future research could build on the results from the current study. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This article was funded by the Social Sciences and Humanities Research Council of Canada (SSHRC).
The Mediation Effect of Health Literacy on Social Support with Exchange and Depression in Community-Dwelling Middle-Aged and Older People in Taiwan The proportion of the world's population that is over 60 years old is increasing rapidly. The physical and mental health of older people is affected by depression. Health literacy is a major determinant of health and healthcare for the aging; middle-aged and older people with high health literacy are more likely to maintain a healthy lifestyle and control or manage their chronic diseases. Therefore, this study explored the relationship between health literacy, social support with exchange, and depression, in middle-aged and older adults in the community, using data from the 2015 Taiwan Longitudinal Study on Aging (TLSA) database. Of the 7636 participants, 1481 (19.4%) were middle-aged or older persons with depression symptoms. We found age, gender, and education level to be significantly related to health literacy status, social support with exchange, and depression. Health literacy was positively correlated with depression and social exchange in social support with exchange, whereas the emotional support component of social support with exchange was negatively correlated with depression. Regression-based process analysis was used to verify the mediation effect of health literacy. Our results indicated that when health literacy was entered into the regression model (a × b path), the effect of social exchange on depression was insignificant (c′ = −0.01, p = 0.84), indicating a complete mediation effect. These findings suggest that improving health literacy may offset the impact of social support with exchange on depression, and lead to the mitigation of depression in middle-aged and older people in Taiwanese communities. Introduction The aging process is very complicated, with obvious changes both physically and psychologically. As people get older, the risk of having multiple coexisting chronic diseases increases. Recently, 92 diseases have been identified as age-related, accounting for 51.3% (95% UI 48.5-53.9) of all global burden among adults in 2017 [1]. However, physical degradation, chronic diseases, and disability associated with aging may cause psychological pressure, negative emotional feelings, and even depression [2]. This is a phenomenon that requires attention in an aging society [2]. The World Health Organization (WHO) reported that the incidence of unipolar depression is 7% in the elderly, and accounts for 5.7% of years lived with disability (YLDs) for those over 60 years old [3]. Depression can cause great pain and adversely affect the activities of daily living, even more so than other chronic diseases often associated with a profound impact on (dis)ability. In a systematic review on the morbidity rate of depression in older adults, 74 studies involving a total of 487,275 people aged 60 years and above were included. The morbidity rate of depression was determined to be 4.7-16.0%, with a median morbidity rate of 10.3% [4]. Depression is associated with physical and psychological problems. Soysal et al. [5] found that, among 2167 older patients with depression, the prevalence of frailty was 40.40% (95% CI 27.00-55.30, I² = 97%). Moreover, depression may lead to suicide. The Taiwan Ministry of Health and Welfare reports [6] indicate that the suicide rate among middle-aged and older people was the highest among all age groups (24.7%), and depression was an important factor in this [7].
Reviews of published literature indicate that many factors are associated with depression. Social support is one such factor found to be associated with depression [8]. Moreover, high rates of low health literacy among older adults, along with a high prevalence of chronic conditions, may lead to increased levels of depression symptoms [9,10]. However, the relationship between the level of health literacy, social support with exchange, and depression remains largely unknown and unexplored. Thus, the present study investigates the mediation effect of health literacy on the relationship between social support with exchange and depression. Social Support with Exchange and Depression Social support is a mechanism that relieves life pressure and promotes health at the same time, thereby contributing to positive psychological effects. Middle-aged and older people with less social support have a higher incidence of depression [8,9,11-16]. A systematic review of 24 studies found that good social support is associated with a reduction in depression [17]. However, relying on the support of others may lead to guilt and anxiety [18]. In contrast, if older adults are provided with instrumental assistance, it can help prevent a decline in their daily living activities [19,20]. Brown et al. [21] also found that older adults who provided instrumental support to friends, relatives, and neighbors exhibited significantly reduced mortality. There is accruing evidence that providing social support may be more beneficial than obtaining social support [21,22]. Thus, our first hypothesis is that social support with exchange affects depression (Hypothesis 1). The nature of such an effect forms one of the questions addressed in the present study. Social Support with Exchange and Health Literacy Health literacy is an important factor in determining public and personal health, and is regarded as the core of patient-centered care [23]. Nutbeam [24] defined health literacy as the personal, cognitive, and social skills that determine the individual's ability to obtain, understand, and use information to promote and maintain good health. Poor health literacy is a silent epidemic across the globe, affecting every aspect of health [25]. In several reports, a lack of health literacy has been associated with higher mortality, poor self-management skills, lower satisfaction with medical communication, poor awareness of diseases, higher hospitalization and emergency medical-use rates, incorrect use of medication, low utilization of preventive healthcare services (such as screening), high prevalence of chronic diseases (such as cardiovascular disease, diabetes, and obesity), and high healthcare costs [26,27]. There are suggested associations between health literacy and social support. This includes a report by Liu et al. [28] showing that health literacy was positively correlated with social support (β = 0.151, 95% CI: 0.077, 0.224), but negatively correlated with depression (β = −0.173, 95% CI: −0.246, −0.1) in 637 adults aged ≥65 years with hypertension and diabetes living in the community. This indicates that older people with higher health literacy tend to have better social support and relatively lower levels of depression. On this premise, we next hypothesized that the interaction of social support with exchange is affected by health literacy (Hypothesis 2), and investigated the nature of the probable effect of health literacy on social support.
Health Literacy and Depression Many studies have shown that poor health literacy is significantly associated with increased incidences of depression [9,10,15,29-32], including in middle-aged and older adults. A study of 3260 older people showed that, compared with persons with sufficient health literacy skills, those with insufficient health literacy were 1.2 times more likely to be depressed (95% CI: 0.9-1.7). However, this was mostly explained by the bidirectional relationship between health literacy and depression, which may be mediated by health status [9]. Hsu et al. found that improving the health literacy of diabetic women elicited reduced psychological distress in those with depression, as shown by the negative correlation between health literacy and depressive tendencies [10]. In fact, for each 1-point rise in the Chinese Health Literacy Scale for Diabetes score, the Center for Epidemiologic Studies Depression Scale (CES-D) score decreased by 0.17 points (z = −2.05, p = 0.042) [10]. Thus, extrapolating from the general population to a more specific age group, the present study evaluated the relationship between health literacy and depression in middle-aged and older adults, with the working hypothesis that health literacy is inversely associated with depression (Hypothesis 3). Health Literacy as a Mediator between Social Support and Depression The present study investigates the relationship between health literacy, social support with exchange, and depression. Liu et al. [28] described health literacy as a predictor of social support and depression. Zhang et al. [32], in their study of 549 hypertensive adults, concluded that physical comorbidity and health literacy mediate the relationship between social support and depression among patients with hypertension (95% CI: −0.282 to −0.097). However, little is known about the protective effect of health literacy on middle-aged and older people. Against this background, we further hypothesized that health literacy has a mediation effect on the relationship between social support with exchange and depression (Hypothesis 4). To further clarify the mediation effect of health literacy on the relationship between social support with exchange and depression, this study also adjusted for other related factors. As far as we know, this is the first study to investigate the mediation effect of health literacy on social support with exchange and depression in middle-aged and older people in Taiwan. Theoretical Framework According to Sørensen et al. [33], health literacy is linked to literacy and entails people's knowledge, motivation and competency to access, understand, appraise, and apply health information towards making judgments and decisions in everyday life concerning healthcare, disease prevention, and health promotion, to maintain or improve quality of life during one's life course. Sørensen et al. [33] proposed an integrated model of health literacy which combines the qualities of a conceptual model, outlining the main dimensions of health literacy, and those of a logical model, showing the proximal and distal factors which impact health literacy, as well as the pathways linking health literacy to health outcomes. The core of the model is the ability to acquire, understand, evaluate, and apply health-related information across the three dimensions of health literacy: namely, health care, health promotion and disease prevention.
In addition to the components of health literacy, the model also shows the main antecedents and consequences. The antecedents are personal determinants (e.g., age, gender, race, socioeconomic status, education, occupation, employment, income, literacy) and situational determinants (e.g., social support, family and peer influences, media use and physical environment). Moreover, health literacy could affect health service use, health costs, health behavior and health outcomes at the individual level, and participation, empowerment, equity and sustainability at the population level. We used this model as the framework of this study. In summary, the path model of this study aimed to confirm the mediation effects of health literacy on the relationship between a situational determinant, namely, social exchange with emotional support, and a health outcome, depression, with the personal determinants of middle-aged and older people as control variables. Study Design and Data Collection Since 1987, Taiwan has been conducting surveys and studies collectively known as the "Taiwan Longitudinal Study on Aging" (TLSA), in order to understand the health and living conditions of middle-aged and older people over the age of 50. Eight research sessions were completed between 1989 and 2015 [34]. The TLSA survey adopts stratified random sampling, allowing the collected data to fully reflect the physical, psychological and social aspects of the participants. Data and results from the TLSA project can also serve as an empirical basis for the formulation of social and healthcare policies for older adults. This study used data from a component of the TLSA termed 'the Long-term Tracking Survey of the Physical and Mental and Social Life of the Middle-aged and Older people in Taiwan in 2015'. Participants The total number of participants with complete data was 7636. The computed required number of participants was 1073, based on an effect size |ρ| = 0.1, error probability (α) = 0.05, and power (1 − β) = 0.95. This requirement was surpassed: of the 7636 participants, 1481 (19.4%) were middle-aged or older persons with depression symptoms. Exclusion criteria: Of the total 7636 enrolled subjects, 6155 were excluded because they had depression symptoms index scores of 0-8, had no documented history of depression, or were younger than 50 years old. Inclusion criteria: In all, 1481 middle-aged and older people (19.4%) had depression symptoms index scores ≥ 9, were classified as having depression symptoms, and were thus considered eligible for further comparative studies. All personally identifiable information from the TLSA is encrypted to protect the participants. This study was approved by Fu Jen Catholic University (FJU-IRB No: C109147), and conducted following the Declaration of Helsinki guidelines on research involving human subjects. Demographic Characteristics The demographic characteristics evaluated included gender, age, and education level. Consistent with Lin et al. [35], participants were divided by age into two groups: 50 to 64 years and 65 to 85 years. Similar to the levels of education used by Xu et al. [36], the study cohort was also divided into two strata: "junior high school or lower" and "senior high school or above". Social Support with Exchange Scale This 7-item scale consisted of two sub-scales: social exchange and emotional support. The three questions on social exchange were scored on a scale of (0) No, (1) Occasionally, and (2) Often.
The total score range of the three items was 0-6. A higher total score indicated a greater frequency of social exchange, which in turn implied better social support. There were four questions on emotional support. The scoring method was divided into (1) Very supported, (2) Supported, (3) Normal, (4) Not supported, and (5) Very unsupported. The total score range of the four items was 4-20. A lower total score indicated better emotional support of the respondent. In this study, the Cronbach's α of the scale was 0.70. The social support with exchange scale facilitated the understanding of the social status of the middle-aged and elderly in Taiwan, such as the participant's "family structure, living arrangements, social support, leisure activity, socioeconomic status, life satisfaction, occupation and retirement, and awareness and utilization of services provided by the government" [37]. Health Literacy Scale The scale contained three dimensions: health care, health promotion, and disease prevention. There were a total of 9 items, which were scored on a 5-point Likert-type scale. The total score range was 9-45. A score of 20 points or less was considered to indicate good health literacy. In this study, the Cronbach's α of the scale was 0.86. The TLSA health literacy scale is a reliable and valid instrument for measuring health literacy in middle-aged and older people [38]. As alluded to already, the health literacy scale allows the comparison of the "differences in health and social status among subgroups of people characterized by their socio-economic background" [37]. Center for Epidemiological Studies Depression Scale (CES-D) The abbreviated version of the Center for Epidemiological Studies Depression Scale (CES-D Scale) was used to measure depression during the survey. The scale was composed of 11 questions out of the original 20 questions on the CES-D developed by Radloff [39]. The short version of the CES-D scale included three factors: physical symptoms, depressive emotions and positive emotions. Each of these 11 items was scored on a 4-point Likert-type scale of "rarely (<1 day) = 0" to "frequently or consistently (more than 4 days) = 3". The total score range was 0-33. A higher score indicated a higher frequency of depressive symptoms, whereas a score of 8 or higher indicated obvious symptoms of depression. The Cronbach's α ranged from 0.76 to 0.81, indicating that the TLSA short version of the CES-D scale, which was based on the Iowa EPESE (established populations for epidemiologic study of the elderly), had good internal consistency reliability [40]. In this study, the Cronbach's α of the scale was 0.71. Statistical Analysis In this study, the Chinese version of SPSS 22.0 (SPSS Inc., IBM Corp., Chicago, IL, USA) was used for data analysis. All variables were summarized with descriptive statistics (such as the mean and standard deviation for continuous variables and percentages for categorical variables). Correlation analysis and univariate linear regression were used to examine the associations between social support with exchange, health literacy, covariates, and the depression scale. The PROCESS procedure for SPSS (version 3.5.2), Model 4 (the Hayes PROCESS macro for SPSS) [41], was used to analyze the mediation effect of health literacy on social support with exchange and depression in middle-aged and older adults. The number of bootstrap samples was 5000, and a 95% confidence interval was set, providing sufficient statistical power for the analysis.
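As a side note on the internal-consistency figures quoted for the scales above (Cronbach's α of 0.70, 0.86, and 0.71), α can be computed directly from an item-response matrix. The sketch below is a minimal illustration using simulated responses, not TLSA data; the function implements the standard α formula.

```python
# Minimal sketch of the Cronbach's alpha computation used to report the
# internal consistency of the scales above. Responses are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of total scores
    return k / (k - 1) * (1 - sum_item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(1000, 1))                    # shared "true score"
items = latent + rng.normal(size=(1000, 9))            # 9 Likert-type items + noise
print(f"alpha = {cronbach_alpha(items):.2f}")
```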
Since gender, age, and education level were all related to the key variables, these three were used as covariates in the mediation regression model. Descriptive Statistics The descriptive statistics of the participants are listed in Table 1. Our total participants consisted of 3917 females (51.3%) and 3719 males (48.7%). The largest groups were those 65-85 years old (52.4%) and those with a primary or junior high school level education (66.9%). The average depression score was 4.3 ± 5.3 (range, 0-32). The depression group was older and mostly female, and had a low education level; in this group, social support with exchange-social exchange was low, social support with exchange-emotional support was poor, and health literacy was also low. In addition, the average scores of social support with exchange-social exchange, social support with exchange-emotional support, and health literacy were 2.03 ± 0.01, 7.33 ± 2.55, and 15.12 ± 5.92, respectively. Most (79.2%) of the participants had sufficient health literacy. (In Table 1, data are presented as n (%) or mean ± standard deviation.) The Correlation of Health Literacy and Social Support with Depression The correlations between the key variables are listed in Table 2. Note that, given the scoring directions described above, lower health literacy and emotional support scores indicate better health literacy and emotional support, respectively. Health literacy was positively correlated with depression (r = 0.354, p < 0.001), indicating that better health literacy (a lower score) was related to lower depression. Social support with exchange-social exchange was negatively correlated with depression (r = −0.145, p < 0.001), indicating that better social exchange was associated with a lower degree of depression. Social support with exchange-emotional support was positively correlated with depression (r = 0.377, p < 0.001), suggesting an association between worse emotional support and a higher degree of depression. The Mediation Effect of Health Literacy Following Baron and Kenny [42], this study specified the conditions and tests for the mediating variable. The results of the model are shown in Figure 1. Paths a, b, and c represent standardized regression coefficients. Path c presents the total association between social support with exchange and depression, path a shows the association between social support with exchange and health literacy, path b shows the association between health literacy and depression, and path c′ presents the direct effect of social support with exchange on depression after health literacy is accounted for. The direct, indirect and total effects of the key research variables are presented in Table 3. The indirect effect (ab) was defined as the product of coefficients a and b [43]. If the 95% bootstrap CI did not contain zero, the indirect effect was considered to be significant, indicating that there was a mediation effect. Health literacy had a mediation effect on social support with exchange and depression (ab = −0.13, 95% CI = −0.17 to −0.010; ab = 0.13, 95% CI = 0.11 to 0.15).
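To make this indirect-effect test concrete, the following sketch performs a percentile-bootstrap mediation analysis in the spirit of PROCESS Model 4, using plain OLS and numpy rather than the SPSS macro. The data are simulated, the variable names are hypothetical stand-ins for the TLSA measures, and the generating coefficients are only loosely patterned on those reported here, so this is an illustration, not a reanalysis.

```python
# Minimal sketch of a percentile-bootstrap test of the indirect effect (a*b),
# analogous to Hayes PROCESS Model 4 with covariates. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 1481
x = rng.integers(0, 7, n).astype(float)          # "social exchange" score, 0-6
covs = rng.normal(size=(n, 3))                   # stand-ins for gender, age, education
# Mediator and outcome; a ~ -0.45 and b ~ 0.29, so the true indirect effect
# is about -0.13, loosely echoing the reported ab. No direct x -> y path here,
# mirroring the complete-mediation scenario.
m = -0.45 * x + covs @ np.array([0.2, 0.3, -0.1]) + rng.normal(size=n)
y = 0.29 * m + covs @ np.array([0.1, 0.2, -0.2]) + rng.normal(size=n)

def coef(dep, focal, others):
    """OLS coefficient of `focal` in dep ~ intercept + focal + others."""
    X = np.column_stack([np.ones(len(dep)), focal, others])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta[1]

def indirect(idx):
    a = coef(m[idx], x[idx], covs[idx])                              # path a
    b = coef(y[idx], m[idx], np.column_stack([x[idx], covs[idx]]))   # path b
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ab = {indirect(np.arange(n)):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

The same resampling logic underlies the 5000-sample bootstrap intervals reported above: the indirect effect is declared significant when the percentile CI excludes zero.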
The total effects of social support with exchange-social exchange and emotional support on depression in middle-aged and older people were significant (c = −0.14, p = 0.005; c = 0.79, p < 0.001), which supported Hypothesis 1. The effects of social support with exchange-social exchange and emotional support on the health literacy of middle-aged and older people, respectively, were significant (a = −0.45, p < 0.001; a = 0.49, p < 0.001), thus supporting Hypothesis 2. The effect of health literacy on depression was also significant (b = 0.29, p < 0.001), which supported Hypothesis 3. When health literacy was incorporated into the regression (a × b path), the effect of social support with exchange-social exchange on depression became non-significant (c′ = −0.01, p = 0.84), indicating that health literacy completely mediated the effect of social exchange on depression. Figure 2 shows that, after adjustments for gender, age, and education level, health literacy still had a complete mediation effect on the relationship between social support with exchange-social exchange and depression. In addition, the direct effect of social support with exchange-emotional support on depression remained significant (c′ = 0.67, p < 0.001), indicating that health literacy partially mediated the effect of emotional support on depression, with the indirect effect accounting for 16.46% of the total effect. These results supported Hypothesis 4. Discussion This section will discuss the hypotheses proposed in this research and report the related limitations. Health Literacy Status Positively Correlates with Social Support with Exchange, but Is Inversely Associated with Depression In this nationally representative study, with middle-aged and elderly community-resident participants, we found that the prevalence of depression in middle-aged and older people was 19.4%, which was lower than the prevalence reported in a previous study (35.2%) [44] and lower than those in India (41%) and South Asian countries (42%) [45].
However, it was higher than the world's median of depression in an older population of 10.3% (interquartile range [IQR], 4.7-16.0%) [4], the 13% rate in the United States [46], 13.9% in Sri Lanka [13], and 18.5% in Thailand [47]. Therefore, surveys of depression in older adults are easily affected by the differences in customs and cultural backgrounds of various countries. This study found that, in terms of gender differences, middle-aged and older females were more likely than their male counterparts to have depression. This finding is consistent with previous findings that female gender is a risk factor for depression in old age [2,13,44-52]. Moreover, age is positively correlated with depression, as age is an important determinant of mental health. Due to the normal aging of the brain, the deterioration of physical health, and brain diseases, the overall prevalence of mental and behavioral disorders has been shown to increase with age [2,45]. We demonstrated that the severity of depression increases with age, consistent with the findings of previous studies [2,45,48,49]. Furthermore, we found that a low educational level was significantly associated with a higher risk of depression, corroborating the findings of Portellano-Ortiz et al. [50] and Ylli et al. [52]. In a cross-sectional study, a total of 93,590 people over 55 years of age in 18 countries completed questions related to depressive symptoms using the shortened CES-D or EURO-D scale. The study indicated that depression prevalence was generally highest among women, individuals aged 75 years or older, those who were divorced, widowed, or single, and those who did not attain a secondary education [53], which is similar to our research results. Our data showed that 79.2% of the middle-aged and elderly people had sufficient health literacy, similar to the results of a Finnish study, which found that 51.4% had sufficient literacy and 12.3% had excellent health literacy [54]. However, this is higher than the figures reported in studies from the United Kingdom, the United States, Taiwan, and Germany, with 49.5%, 51%, 53.7%, and 66-80% of persons older than 65 years having poor or limited health literacy, respectively [55-58], or worse still in Turkey, where 85.1% of the elderly are considered to have "problematic or insufficient" health literacy [59]. Components of Social Support with Exchange Differentially Affect Depression Social support elicits better mental health. The results of this study indicate a negative correlation between social support with exchange-social exchange and depression. A higher score for "social exchange" indicated that better social exchange was associated with lower levels of depression. This may be related to the traditional concept in Chinese culture that it is more blessed to give than to receive. In Chinese society, providing support to others increases happiness in older people [60]. Brown et al. [21] pointed out that giving support may be an important part of interpersonal relationships, and has considerable value for health and well-being. Chen et al. [61] stated that encouraging individuals to provide appropriate support, such as helping others, and being willing to accept support may be beneficial for well-being and longevity.
Conversely, social support with exchange-emotional support was positively correlated with depression, implying that a higher score on emotional support indicated worse emotional support, and thus a greater likelihood of depression. This impact of social support on depression is consistent with findings from other studies [9,13,62,63]. Gyasi et al. [64] found meaningful social support to be a key element of life in older people. By strengthening opportunities to establish closer interpersonal relationships with others, older adults could improve their mental health, independence and quality of life. Therefore, social support with exchange helps people release negative psychological pressure during the aging process. Social Support with Exchange Reflects Health Literacy Status Social support with exchange-social exchange was negatively correlated with the health literacy of middle-aged and older people, which means that better social exchange was associated with higher health literacy (a lower score). Consistent with the conclusions of previous studies [28,58,65-67], we found emotional support was positively correlated with health literacy, suggesting that if the degree of support was poor, the level of health literacy was also low. This supports the rationality of De Wit et al.'s proposed practice of collaborative learning and social support to improve the health literacy of older people [68]. Moreover, social support could help alleviate the negative effects of low health literacy [65]. Inadequate Health Literacy Is Implicated in Mental Health Deterioration and Depression This study found that health literacy was positively correlated with depression. Inadequate health literacy in middle-aged and older people was associated with the deterioration of physical and mental health, including increased depression. This finding aligns with those previously reported [9,10,31,54], and therefore it can be stated that insufficient health literacy is strongly related to depression. Gazmararian's team, in their seminal study of 3260 elderly people, found that individuals with inadequate health literacy were 2.7 times (95% CI, 2.2-3.4) more likely to be depressed, compared with individuals with adequate health literacy [9]. In addition, Do et al. [69], in their study of 928 adults aged 60-85, reported that every one-point increase in health literacy decreased the likelihood of depression by 9% (OR, 0.90; 95% CI, 0.87, 0.94; p < 0.001). Concurring with Parikh et al. [70], we found that people with low health literacy often feel shame and embarrassment, which can cause social isolation and constitute a serious psychological barrier to seeking help. This, in part, explained the observed higher odds of depression among people with lower health literacy than their peers with higher health literacy. Thus, we posit that improving health literacy has a health protective effect for older people, and is a protective factor against depression. For contextualization, the current COVID-19 pandemic is associated with increased anxiety and mental health problems in the public [71-73] and, as established by Robb et al., 13% of the elderly feel worse in terms of depression [51]. However, a 4-5% reduction in the likelihood of depression was reported for each unit increase in health literacy score, further highlighting the protective effect of health literacy against depression during the pandemic [74].
Health Literacy Significantly Mediates the Relationship between Social Support with Exchange and Depression To the best of our knowledge, this study is the first to analyze the relationships between social support with exchange, health literacy and depression among the elderly in the community. Our results showed that the health literacy status of middle-aged and older people has a mediation effect on the relationship between social support with exchange and depression. When social support with exchange and health literacy were input into the final mediation model, the association between social exchange and depression was eliminated (complete mediation), and the relationship between emotional support and depression was reduced (partial mediation), with the indirect effect accounting for 16.46% of the total effect size. The reason for this may be that the need for listening and caring is extremely important in Chinese culture and society [63]. Our findings on the mediation effect of health literacy are consistent with those by Zhang et al. [32] and Zou et al. [15], validating the working hypothesis of the present study. Health literacy mediates the impact of social support with exchange on depression among the elderly in the community. Therefore, the improvement of health literacy and intervention measures require due attention. Duong et al. noted that entertainment series and educational TV programs on health promotion and health-related community activities help increase health knowledge and health behaviors, thereby improving health literacy [75]. As rightly opined by Nutbeam [24], findings in the present study do suggest that the improvement of health literacy should emphasize more personal communication, community-based education outreach and health education content, with a focus on suitable equipment to overcome structural barriers to people's health. Therefore, we recommend that coping strategies and support resources be provided to help middle-aged and older people improve their health literacy and mitigate or prevent depression. Herein, our results (Figure 3) echo the health literacy integration model proposed by Sørensen et al. [33]. This study confirmed the mediation effects of health literacy on the relationship between situational determinants (social support with exchange), personal determinants (age, gender, and education level) and the health outcomes (depression) of middle-aged and older people. Limitations of This Study This study has some limitations. First, the individuals in this study were middle-aged and older people living in the community. Compared with individuals living in institutions, they may have relatively better mental health, social support with exchange, and health literacy. Second, because the health literacy assessment was a component of the first survey in 2015, continuous data analysis could not be done. Third, unlike analytic devices used for clinical diagnosis, the CES-D indexes only self-reported current symptoms. Therefore, depression symptoms may be overestimated. Finally, the inclusion of other age groups in the study participants may also be necessary in extending the conclusions to a larger population. Conclusions In middle-aged and older people, social support with exchange differentially affects depression, and this association is mediated by health literacy status. Improving health literacy offsets the adverse effects of social support with exchange on depression. In view of these results, multidisciplinary intervention measures should be formulated to increase the social exchange component of social support with exchange and improve health literacy, so as to reduce the likelihood and incidence of depression. In addition, the results of this study echo Sørensen's health literacy integration model, which extends the need to improve health literacy and patient/family engagement, empowering people to take charge of their health and better preparing them to deal with health crises, rather than becoming passive recipients of services; this is an integrated, people-oriented health service. The results reported herein may serve as an evidence-based reference for evaluating and/or mitigating depression in middle-aged and older people in Taiwan. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data that support the findings of this study are available from the Health Data Science Center, Taiwan, but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are, however, available from the corresponding author upon reasonable request and with permission of the Taiwan Ministry of Health and Welfare.
The Implementation of Think Pair Share Assisted with Pop Up Media Increases Students' Outcomes

In current learning, teachers still rely on the lecture method and rarely use media to support learning, so students feel bored and are less enthusiastic about participating in learning activities, resulting in low student learning outcomes. This study aims to analyze the improvement of student learning outcomes in the domains of knowledge and skills using the think pair share learning model assisted by pop-up media in grade V. This research is classroom action research consisting of four stages: planning, acting, observing, and reflecting. The research subjects were 23 grade V students. The data collection techniques were test and non-test, and the data were analyzed with quantitative and qualitative techniques. The results indicate that, in the knowledge domain, the social studies content obtained an average score of 69 and the Indonesian language content an average of 70 in the first cycle, while in the second cycle the social studies content obtained an average of 79 and the Indonesian language content an average of 80. The skills-domain results averaged 66 in the first cycle (in the "needs guidance" category) and 78 in the second cycle (in the "sufficient" category). Based on these results, it can be concluded that applying the think pair share learning model assisted by pop-up media can improve learning outcomes in grade V.

Article history: received 1 May 2020; received in revised form 11 June 2020; accepted 10 July 2020; available online 25 August 2020. Keywords: think pair share, learning outcomes.

Introduction

Education is one of the important factors in improving the quality of human resources as a determinant of the success of a nation's development.
Based on Law No. 20 of 2003 concerning the National Education System, Chapter II, Article 3, education functions to develop abilities and shape the character and civilization of a dignified nation in order to educate the nation's life, and it aims to develop students' potential to become human beings who have faith in and are devoted to God Almighty, have noble character, and are healthy, knowledgeable, capable, creative, independent, democratic, and responsible citizens. Education cannot be separated from learning activities. The learning process is said to be effective if the implementation of learning activities at each level and type of education follows guidelines that must be obeyed, adhered to, and implemented, collectively called the curriculum (Ardianti et al., 2018). The learning process is said to be successful if the teacher's way of carrying out learning is not teacher-centered but instead involves students actively, so that learning becomes more meaningful for them; however, many shortcomings still occur in practice. Results in the field generally show that students are not actively involved in the learning process, and most of the learning time is filled by the teacher through one-way communication. Such conditions make the learning atmosphere less interactive and lead to passive and apathetic behavior in students. To overcome this, the teacher plays an important role in improving the quality of learning; one effort is to provide learning tools and learning media so that the learning process becomes meaningful and allows students to achieve maximum learning outcomes. This problem is supported by the results of observations carried out at SD 3 Bulungcangkring, an educational unit in Jekulo sub-district, Kudus Regency, that has implemented the 2013 curriculum. Interviews on October 2, 2019, with grade V students of SD 3 Bulungcangkring revealed several problems: students have difficulty understanding the subject matter of social studies and Indonesian language because there is too much material and too little explanation from the teacher. This occurs because the teacher rarely uses media to support learning and still relies on the lecture method, so students feel bored and are less enthusiastic about participating in learning activities. Observations on October 3, 2019, in the research class found several weaknesses in writing skills: students could not make a good summary because they could not understand the content of the reading text, and they still had difficulty selecting the right words to compose correct sentences, which made them afraid that their writing would be wrong. Beyond writing skills, students also had weaknesses in interacting with other students, so that when the teacher held group discussions, students still worked individually on their assignments. If these weaknesses persist, they will affect student learning outcomes. Based on the midterm test (UTS) scores of grade V SD 3 Bulungcangkring, the students' scores were still below the minimum completeness criteria (KKM) of ≥70 for the social studies and Indonesian language content.
In the social studies content, 39% (9 of 23 students) achieved learning completeness while 61% (14 of 23 students) did not, and in the Indonesian language content 43% (10 of 23 students) achieved learning completeness while 57% (13 of 23 students) did not. Students who do poorly in social studies risk becoming students who do not understand the conditions and circumstances of their environment, while students who do not understand Indonesian language subjects will find it difficult to communicate and convey ideas. Besides affecting learning outcome scores, both problems have an impact on students' daily lives: for example, a student who cannot speak Indonesian well in a formal situation may be misunderstood, and a student entering a new area or environment without knowing it will be confused about how to act and mingle. Given all the problems that have arisen, they need to be resolved. One appropriate solution is to apply, in the learning process, a learning model that can improve student learning outcomes. To this end, the researchers took the initiative to apply the think pair share (TPS) cooperative learning model with the help of pop-up media, so that students can take an active role in understanding learning concepts and can solve problems in a discussion. The TPS learning model was chosen because, according to Hamdayama (2014), Think Pair Share is a simple technique with big benefits. In addition, according to Trianto (2014), the TPS learning model is a learning model that affects student interaction patterns. Furthermore, Huda (2015) conveyed that the TPS learning model gives students time to think about answers to the questions or problems being discussed; students then solve these problems with their respective abilities and explain the results obtained in front of the class. Based on these explanations, it can be concluded that the TPS learning model shapes student interaction by inserting learning stages such as thinking about answers, responding, and determining solutions to problems. Learning is carried out in groups so that interaction occurs between students, and students can work according to their respective abilities and complement each other. An advantage of this learning model is that TPS can improve students' ability to remember information, and a student can also learn from other students and share ideas for discussion before presenting them to the class (Ustatik, 2016). Based on the definitions above, the advantages of the TPS learning model are: 1) it increases students' thinking power; 2) it gives students time to think; 3) students become more active in thinking; 4) students gain a better understanding of the concept or topic of discussion; 5) students can learn from other students; and 6) each student can convey the ideas they have. The think pair share learning model consists of five stages, with three main stages as its defining characteristics: the preliminary stage; think, pair, and share; and appreciation. The explanation of each stage is as follows.
1) In the preliminary stage, learning begins with exploring students' prior perceptions and motivating them to be involved in learning activities. 2) In the think stage (thinking individually), students are given a time limit ("think time") by the teacher to think about their answers individually to the questions given. 3) In the pair stage (pairing up with a peer), the teacher groups students in pairs; students then start working with their partners to discuss the answers to the problems the teacher has given. 4) In the share stage (sharing answers with other pairs or the whole class), students present their answers individually or cooperatively to the class as a whole. 5) In the award stage, students receive an award in the form of scores, both individually and in groups. Apart from expert opinion, the use of the TPS learning model is also supported by several research results. First, S.E. & Dra. Ni Wayan Suniasih (2018) state that the think pair share model can give students more time to think, respond, and help each other, so that students can be expected to understand the material well. Litna & Seli (2019) mention that the think pair share learning model provides opportunities for students to suggest answers and encourages cooperation between students. Yustitia et al. (2018) found that the think pair share learning model can have a positive impact on the development of students' mathematical reasoning abilities. Sni & Hero (2018) reported that the think pair share learning model can increase student learning activity. Finally, Tanjung & Wardani (2020) argue that TPS encourages students to be independent and trains them to actively participate in class discussions. These studies show that applying the think pair share learning model can improve the quality of learning. To support learning activities, the researchers also used a learning medium, namely pop-ups. The use of this medium reflects the observation results, namely that students find it difficult to understand a concept if it is not seen directly, and that students are less able to remember abstract concepts. The choice of medium also considers the opinions of several experts. Montanaro, in Masna (2015), explains that at first glance pop-ups are similar to origami, as both arts use paper-folding techniques; the pop-up book has advantages over other media in displaying shapes made by folding and in having dimension. Furthermore, Dzuanda (2011) states that the pop-up book is a book that has moving parts or 3-dimensional elements and provides a more interesting visualization of the story, starting from the display of images that can move when the page is opened. According to Muktiono (2003), a pop-up book is a book with an image display that can stand up, forming beautiful objects that can move or give amazing effects.
Pop-up book media have several advantages. Ni'mah (2014) mentions some of the advantages of pop-ups as a teaching medium: 1) pop-ups are widely used to explain complex images, such as in health, mathematics, and technology; 2) books or pop-up media that can be moved are an effective learning strategy and make learning more effective, interactive, and easy to remember; 3) pop-ups support learning because, for students, visual illustrations can make abstract concepts clear; 4) pop-ups add new experiences for students; 5) pop-ups entertain and attract students' attention; and 6) interactive pop-up passages make teaching a game-like opportunity for students to participate in. The use of pop-up book media has also been demonstrated in several studies. Oktaviarini, in Eri Karisma et al. (2020), calls a pop-up book a book that can show a three-dimensional shape when the page is opened, with motion created using paper folds, rolls, shapes, or wheels. Pop-up book learning media can improve student learning outcomes: Masturah et al. (2018) stated that pop-up media can be used as teaching material for students because they increase students' enthusiasm and interest in learning. Sentarik & Kusmariyatni (2020) noted that pop-up media are made by giving surprises on every page, creating a sense of wonder in readers when opening each page. Pramesti (2015) viewed the use of pop-up book media as making it easier for students to grasp material during the learning process. Putri et al. (2019) showed that pop-up book media can improve listening ability on the theme of loving plants and animals in grade III. Winarti & Setiani (2019) state that the use of pop-up book media can improve the mathematics learning outcomes of grade V students. Dewanti et al. (2018) concluded that pop-up media help students understand the learning material, which has an impact on student learning outcomes. These research results prove that using pop-up book media can improve the quality of learning. Based on the explanation above, the TPS model and pop-up media were chosen by the researchers as an effort to improve student learning outcomes. Previous research using the think pair share learning model was conducted by Winantara (2017), who concluded that the model can improve science learning outcomes in grade V. Citra Wibawa (2018) states that applying the think pair share model can improve science learning outcomes in grade V. Dewi & Dharsana (2020) found that the think pair share technique can improve thematic learning outcomes in grade VA of elementary school. Other research (Pendidikan et al., 2017) found that the think pair share learning model has a positive effect on the science learning outcomes of grade V students. Finally, Sukreni et al. (2017) showed that applying the think pair share model can improve student learning outcomes and students' interest in learning. Based on the explanation above, it can be concluded that the think pair share learning model assisted by pop-up media can improve student learning outcomes.
In previous research, the think pair share learning model and pop-up media were applied separately. This research differs from previous studies by combining the think pair share learning model with pop-up media in order to analyze the improvement of learning outcomes in the domains of knowledge and skills.

Method

This research is classroom action research referring to the model proposed by Kemmis and McTaggart. The design consists of four stages: planning, acting, observing, and reflecting (see Figure 1). The research was conducted in two cycles, each consisting of two meetings, with the fifth-grade students of SD 3 Bulungcangkring in the second semester of the 2019/2020 school year. The subjects were 23 grade V students, consisting of 13 male and 10 female students. SD 3 Bulungcangkring is located at Jl. Buyut Sipah Bulungcangkring RT 03 RW 09, Jekulo sub-district, Kudus Regency, Central Java province. The data collection techniques used were tests, observation, interviews, and documentation. The data analysis methods were quantitative and qualitative: quantitative analysis was used to calculate the results of the 10-item knowledge-domain learning test, while qualitative analysis, in the form of observations, was used to assess skills-domain learning outcomes during the learning process. Interviews were used to obtain information about problems in the learning process as a basis for conducting the research, and documentation produced documents in the form of writing and pictures related to the learning process. The indicators of success in this study are: 1) student learning outcomes in the knowledge domain on theme 6, Heat and Its Transfer, are said to increase if the score obtained is ≥70 with 70% classical completeness; and 2) student learning outcomes in the skills domain on the same theme are said to increase if the score obtained is ≥70 with 70% classical completeness.
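The success indicators above reduce to two computations per content area and cycle: a class average and the percentage of students at or above the KKM (classical completeness). A minimal Python sketch of that bookkeeping follows; the score list is hypothetical and is not the study's raw data.

```python
# Classical completeness bookkeeping (hypothetical scores, not the study's data).
KKM = 70          # minimum completeness criterion (from the study)
TARGET = 0.70     # required share of students at or above the KKM

def summarize(scores):
    average = sum(scores) / len(scores)
    n_complete = sum(1 for s in scores if s >= KKM)
    return average, n_complete, n_complete / len(scores)

scores = [80, 75, 70, 65, 85, 90, 70, 75, 80, 60, 95, 70,
          75, 80, 85, 70, 65, 75, 80, 90, 70, 75, 55]  # 23 students, made up
avg, n_complete, share = summarize(scores)
print(f"average = {avg:.0f}, complete = {n_complete}/23 ({share:.0%})")
print("indicator met" if avg >= KKM and share >= TARGET else "indicator not met")
```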
Result and Discussion

This classroom action research lasted for two cycles, cycle I and cycle II, each consisting of two meetings. The cycle I meetings were held on January 8 and 9, 2020, and the cycle II meetings on January 22 and 23, 2020. At the end of each cycle, an evaluation test was conducted to measure the knowledge-domain learning outcomes on theme 6, Heat and Its Transfer, of the fifth-grade students of SD 3 Bulungcangkring. Based on the midterm test data obtained by the researchers, in the social studies content 30% (7 students) had achieved learning completeness and 70% (16 students) had not, while in the Indonesian language content 35% (8 students) had achieved completeness and 65% (15 students) had not. These learning outcomes fall into the "needs guidance" category, so action was required to improve student learning outcomes in the knowledge domain. The learning outcomes of the fifth-grade students under the think pair share learning model assisted by pop-up media were measured using an evaluation test of 10 essay items given at the end of cycle I and cycle II. The subjects were the 23 students of grade V SD 3 Bulungcangkring, consisting of 13 male and 10 female students. The knowledge domain consists of six aspects, following Anderson and Krathwohl (in Kosasih, 2014), in the order: (C1) remembering, (C2) understanding, (C3) applying, (C4) analyzing, (C5) evaluating, and (C6) creating. Student learning outcomes increase from the pre-cycle to cycle II when the teacher manages learning well; student success also depends on how the material is presented, since a presentation that is interesting, easy to understand, and not boring affects learning success. During the learning process the teacher delivered the material on the theme Heat and Its Transfer by applying the think pair share model assisted by pop-up media as well as possible, but some students were still not enthusiastic about participating and often played with their friends, so learning was not conducive. Because of these problems in cycle I, the teacher made improvements in cycle II by guiding the students who often played with their friends, so that they took learning more seriously and the class became conducive. After the improvements in cycle II, outcomes increased from cycle I to cycle II. The increase in knowledge-domain learning outcomes can be seen in Table 1. In cycle I, the social studies content obtained an average score of 69 (criterion D, needs guidance) with 52% learning completeness (12 students complete), while the Indonesian language content obtained an average of 70 (criterion C, sufficient) with 57% completeness (13 students complete). In cycle II both increased: the classical average for the social studies content became 79 (criterion C, sufficient) with 78% learning completeness (18 of 23 students, leaving 5 incomplete), while the classical average for the Indonesian language content reached 80 (criterion B, good) with 83% completeness (19 of 23 students, leaving 4 incomplete). Thus, the classroom action research applying the think pair share learning model assisted by pop-up media in grade V SD 3 Bulungcangkring is said to have been successful, because it reached the knowledge-domain target set by the researchers, namely ≥70% completeness with at least sufficient criteria (C). For learning outcomes in the skills domain, Ministry of Education and Culture Regulation No. 104/2014 describes two sub-domains: abstract skills, which include 1) observing, 2) asking, 3) collecting, 4) trying, 5) reasoning, and 6) communicating; and concrete skills, which include 1) imitating, 2) doing, 3) describing, 4) assembling, and 5) creating. For the skills domain, the researchers selected one of these, the concrete skill of creating, in order to observe students' writing skills.
The observation sheet for skills-domain learning outcomes consists of four observed aspects: (1) content comprehension, (2) organizational accuracy of the text content, (3) diction accuracy, and (4) spelling and writing procedures; the results are summarized in Table 2 (Learning Outcomes in the Skills Domain). Based on the observations, in cycle I meeting I the class obtained a total score of 1510 with an average of 66 (needs guidance criterion), and in cycle I meeting II a total of 1623 with an average of 71 (sufficient criterion), so the classical average in cycle I was 69 (needs guidance criterion). Cycle I did not reach the success indicators because, during the learning process, some students could not understand the content of an explanatory text and therefore had difficulty making a summary, so their writing skills were low. With the teacher's direction and guidance at each meeting, students came to understand the content of a text and could retell it in writing. In cycle II meeting I the class improved to a total of 1736 with an average of 75 (sufficient criterion), and in cycle II meeting II to a total of 1840 with an average of 80 (sufficient criterion), so the classical average in cycle II increased to 78 (sufficient criterion). Thus, the classroom action research applying the think pair share learning model assisted by pop-up media in grade V SD 3 Bulungcangkring is said to have been successful, because it reached the skills-domain target set by the researchers, namely ≥70 with sufficient criteria. Apart from the test results, several observations show why the think pair share learning model assisted by pop-up book media improves student learning outcomes. First, the syntax of the think pair share learning model emphasizes active thinking activities through which students gain knowledge. In the preliminary stage, students are given a stimulus in the form of problems prepared through the pop-up book media. Students are then directed to the think stage, where they begin to think through the problem and possible solutions, which builds their thinking abilities. Next, students convey the results of this thinking to their group; within the group, all ideas are discussed and responded to by the members. At this stage an exchange of ideas takes place, so students' understanding becomes more complete and their communication skills are honed. When the group discussion is over, students present the results in front of the class; students who listen respond and give opinions, while group members can help the friends who are presenting. The final stage is giving rewards and correcting wrong concepts; at this stage students become more enthusiastic about learning in order to earn rewards. This account is in accordance with the opinion of Huda (2015), who states that the TPS learning model gives students time to think about answers to the questions or problems being discussed, after which students solve these problems with their respective abilities.
In addition to expert opinion, these results are supported by several studies. Winantara (2017) found that applying the TPS learning model made students active in the learning process and improved their understanding through the exchange of ideas with friends. Shoimin (2017) stated that the think pair share model is a cooperative learning model that gives students time to think, respond, and help each other. Puspitasari (2016) found that student learning outcomes increased after learning with the TPS model, as students became accustomed to working in groups and dared to express opinions. Kurniasari & Setyaningtyas (2017) showed that applying the think pair share model can improve social studies learning outcomes; the increase was obtained because students became accustomed to brainstorming and listening well. Rismawati & Ganing (2019) found that applying the think pair share model can improve science learning outcomes in grade V, because students who feel unsure are helped by their friends' presentations. Afiyahni et al. (2019) stated that applying the think pair share cooperative learning model can affect student learning outcomes, as did Tambusai (2018), who reported that the model can improve student learning outcomes. Finally, Aryadiputra et al. (2020) concluded that the think pair share learning model can affect the learning outcomes of grade V students. Second, students' skills increase because the TPS learning model focuses student activity on thinking activities, which include observing, asking, collecting, reasoning, and doing. In observing and reasoning activities, students think about the problems given by the teacher; they then collect ideas from group members, and reason again to arrive at complete answers and understanding. In the end, the students present the results of the discussion, during which they can ask questions or give responses. This account accords with Trianto (2014), for whom the TPS learning model shapes student interaction patterns: students carry out discussion activities while repeatedly engaging in thinking activities. Likewise, Huda (2015) conveyed that the TPS learning model gives students time to think about answers to the questions or problems being discussed; students then solve these problems with their respective abilities and explain the results in front of the class. This is supported by research: Pramawati (2016) found that students' creative thinking skills increased after learning with the TPS model, because students were given the opportunity to think about and study problems, and Febriany (2018) found that students who followed the TPS learning model were better trained in scientific activities such as observing and asking questions.
Other research in line with this study was conducted by Faridha et al. (2015), who showed that applying the think pair share model can improve student learning outcomes in the psychomotor domain, because in learning students become accustomed to practicing skills such as speaking, questioning, and reasoning, each of which is honed through student activities in groups. Lakilaf & Suarjana (2017) reported that using the think pair share learning model can improve students' writing skills, because students become accustomed to writing-related activities; for example, during learning, students first write down questions related to their friends' presentations. Third, pop-up book media are proven to improve student learning outcomes. This is because pop-up book media attract students' attention: students feel interested in learning because the pop-up book contains interesting images, and students can interact with the learning media directly. The media can also improve students' memory of a concept, because they can be 3-dimensional and resemble the original object. This is supported by Dzuanda (2011), who states that the pop-up book has moving parts or 3-dimensional elements and provides a more interesting visualization of the story, starting from the display of images that can move when the page is opened, and by Muktiono (2003), for whom a pop-up book has an image display that can stand up, forming beautiful objects that can move or give amazing effects. Several studies support these results. Setyowati (2017) concluded that the use of pop-up book media can improve the writing skills of fourth-grade students, because the media are interesting and the pictures help students remember the material they are going to write about. Khoiriyah (2018) found that pop-up book media can increase student activity in learning and improve student learning outcomes, as indicated by the students' enthusiasm in using the learning media: initially passive students became more active and courageous. Dewanti et al. (2018) found that pop-up media make it easier for students to understand the material, so that student learning outcomes can improve. Mustofa (2018) found that grade 3 students who learned with pop-up book media found it easier to understand abstract concepts, because the media can be used directly by students and can be made in a 3-dimensional, more lifelike form.

Conclusions and suggestions

Based on the results of the research and of the data analysis, it can be concluded that using the think pair share learning model assisted by pop-up media increases learning outcomes in knowledge and skills on the theme Heat and Its Transfer.
From this conclusion, the researchers offer several suggestions. Schools should make the think pair share learning model and pop-up media a reference to be applied in integrated thematic learning, should motivate teachers to further develop their skills in implementing learning, and should provide adequate facilities and infrastructure so that teachers can be more innovative. Teachers are advised to keep improving student learning outcomes, to motivate students to participate actively in the learning process, and to be skilled in classroom management.
Score Detection and Anemia Education for Prospective Brides Using an Android-Based Macca Botting Application

Anemia is the highest cause of maternal death in Indonesia, and various methods have been used to help prevent and overcome it. The methods used here were Research and Development (R&D), with the Borg and Gall development model as simplified by the Research Center for Policy and Education Innovation Team of the National Education Research and Development Agency (Pultijaknov), and a quantitative method with a quasi-experimental design. The research was conducted from January to July 2020 at the Biringkanaya religious affairs office, Makassar; the subjects were the prospective brides registered there. Data were analyzed using the Mann-Whitney test. The results showed that the score detection and anemia education application for prospective brides was rated by the material experts with an average score of 3.30 (very good) and by the media expert at 3.25 (good), while the prospective brides' assessment in the small-sample trial gave a score of 3.63 (very good). The large-sample trial obtained p = 0.000 < 0.05, indicating that the Botting Macca application has an effect on score detection and anemia education for prospective brides. The Android-based Botting Macca application can be developed further and is suitable for future use.

The Botting Macca application was designed to detect anemia scores by asking questions of the application user; the answers are used as the basis for determining the user's health level. The application also has an automatic alarm feature reminding users to take blood supplement tablets, and it can monitor changes in user behavior. The application is thus expected to provide anemia education and prevention for women of childbearing age who will become brides.

Study design

This study used Research and Development (R&D) with the Borg and Gall development model and a quasi-experimental pre-post test design. The R&D method is intended to develop or validate products applied in education (Short et al., 2018). The product went through a validation test by one media expert and two material experts, and through small (n = 10) and large (n = 40) sample trials (Amelia et al., 2020).

Participants

Respondents were prospective brides recruited by purposive sampling between January and July 2020 at the Biringkanaya religious affairs office. The intervention group (n = 20) was given the Botting Macca application, and the control group (n = 20) was given score detection and education using printed media.

Data collection

Respondents in each group filled in the score detection section, consisting of 12 questions on signs and symptoms of anemia: the 5L symptoms (weakness, tiredness, lethargy, fatigue, inattentiveness), headache, blurred vision, easy drowsiness, quick fatigue, difficulty concentrating, and pallor of the face, lower eyelids, lips, skin, nails, and palms. Research activities were monitored once a week for four weeks.

Data Analysis

Processing and analysis in this study drew on research instruments yielding both qualitative and quantitative data. Quantitative data were obtained from the recapitulation of the validation questionnaires of the media expert, the material experts, and the small-sample tests, as well as from the pre-test and post-test trial results.
Meanwhile, qualitative data were obtained from the suggestions and input of the media and material experts after they assessed the score detection and anemia education application for prospective brides. In this study, a product aspect whose average score fell in the range 2.51-3.25 (good) or 3.26-4.00 (very good) was considered valid and in no need of revision (Table 1).

Research Ethics

The study received a recommendation of ethical approval from the Faculty of Public Health, Hasanuddin University, Makassar, with protocol number 7420092136.

Macca Botting Application Products Based on Android

The product developed was a score detection and anemia education application for prospective brides named Botting Macca, which comes from the Buginese language and means "smart bride"; with this application, the researchers hope that after using it the prospective brides become smarter in acting, especially in preventing and overcoming anemia.

Media Expert Test Validation, Material Expert Test Validation, and Small-Scale Sample Trials

From the application feasibility test, the media expert's assessment showed an average score of 3.25 (good category; Table 2) for the media aspects, which cover application size, cover design, and application content design; the material experts' assessment showed an average score of 3.30 (very good category; Table 3) for the material aspects, which cover the feasibility of content, presentation, language, and context; and the respondents' assessment showed an average score of 3.63 (very good category; Table 4) on the components of interest, material, and language. Based on these results, the suitability score of the Botting Macca application, with an overall average of 3.39, falls into the "very good" category.

Table 2. Results of the media expert's test of the Botting Macca application for score detection and anemia education of prospective brides:

Application Size (aspect average 3.5)
1. Suitability of application size with ISO standards: 3
2. Suitability of size with the content of the application: 4
Cover Design (aspect average 3.25)
3. Appearance of layout elements: 3
4. The letters used are attractive and easy to read: 3
6. Printable and space layout elements: 3
Application Content Design
9. Title layout elements and illustrations: 3
10. Embedding layout: 3
11. Typography of simple application contents: 3
12. Typography of the app content makes it easy to understand: 3
13. Content illustration: 3
Overall average: 3.25 (Good)
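The feasibility decision rule used above is simply a threshold map from an average 4-point Likert score to a validity category. A small Python sketch follows; the score ranges are as stated in the study, while the function name and category labels pair those ranges with how the paper uses them.

```python
# Map an average 4-point Likert score to the validity category used in the
# feasibility decision (ranges as stated in the study; labels follow its usage).
def validity_category(avg_score: float) -> str:
    if 3.26 <= avg_score <= 4.00:
        return "very good: valid, no revision needed"
    if 2.51 <= avg_score <= 3.25:
        return "good: valid, no revision needed"
    return "below threshold: revision needed"

for score in (3.30, 3.25, 3.63):  # material experts, media expert, user trial
    print(score, "->", validity_category(score))
```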
Large-Scale Sample Trials

Table 5 shows that, across the two groups of prospective bride respondents in the normal, high-risk, and anemia categories, most were 20-30 years old (90%, 18 respondents), and 10 respondents (50%) had private employment status. Based on Table 6, the difference in anemia detection scores for the prospective brides between pre-test and post-test shows that with the Botting Macca application all items decreased on average, while with the printed media all items increased on average. Based on Table 7, the change in status of the prospective brides between pre-test and post-test shows that with the Botting Macca application items on average moved from anemia and high risk to normal, while with the printed media items on average changed from normal to high risk and from high risk to anemia. Based on Table 8, the Mann-Whitney statistical test obtained p = 0.001 < 0.05, so H0 is rejected and Ha is accepted, meaning there is a difference in the effect on score detection and anemia education between the Botting Macca application and the printed media for prospective brides. The mean rank of the experimental group given the Botting Macca application was 27.25, while that of the control group given printed media was 13.75.

Discussion

The product developed was a score detection and anemia education application for prospective brides named Botting Macca, from the Buginese for "smart bride"; with this application, the researchers hope that the prospective brides become smarter in acting, especially in preventing and overcoming anemia. To use the score detection feature, the user downloads the Botting Macca application from the Android Play Store, installs it, and opens it. The application contains several features: a user profile, instructions for use, anemia score detection, anemia results and education, anemia monitoring, an alarm reminder to take blood supplement tablets, and a logout menu. To run the application, the user opens and fills in the user profile, reads the instructions for use, and then selects the anemia score detection feature, which contains 12 questions answered yes or no according to what the prospective bride experiences. The more "yes" answers chosen, the greater the likelihood of anemia. Through the calculated answers, the prospective bride can assess whether or not she falls into the at-risk category for anemia. If the bride-to-be knows that she is at risk, she is expected to be more vigilant and to apply the education contained in the anemia education feature of the application (M. H. N. Sari and Anggraini, 2020). Early detection can prevent anemia in an effort to reduce the maternal mortality rate (AKI) (Solehati et al., 2018); early detection efforts can be carried out in adolescents (D. P. Sari et al., 2020; Abdimas and Tasikmalaya, 2019; Umriaty and Arti, 2019; Putrianti and Krismiyati, 2019) and in pregnant women (Fitri and Machampang, 2018; Sukmawati, Mamuroh, and Nurhakim, 2019; Saryono, n.d.). From the validation tests by the media and material experts and the small-sample trial, the material experts' assessment showed an average score of 3.30 (very good), while the media expert's showed 3.25 (good). According to these assessment criteria, the product was very feasible and can be used without revision (Candradewi, Saputri, and Adnan, 2020).
The advantages of this application are that the display has attractive animations on each feature, so it does not bore users; the material content uses communicative, easy-to-understand language and is equipped with pictures; and the application includes an alarm feature for taking blood supplement tablets. Supporting this, the average small-scale test recapitulation by users of the Botting Macca application for score detection and anemia education, across the components of interest, material, and language, was 3.63, in the very good category, so the application can be considered suitable for use without revision. After the application was categorized as feasible according to the product testing by the media expert, the material experts, and the small-scale user test, it could be developed further and given directly to prospective bride respondents to measure its effect on score detection and anemia education. In this study, the characteristics of the two groups of prospective bride respondents across the normal, high-risk, and anemia categories were as follows: most were 20-30 years old, 90% (18 respondents) in the group given the Botting Macca application and 100% (20 respondents) in the control group given printed media; and most had private employment status, 10 respondents (50%) in the application group and 15 respondents (75%) in the control group. These results are consistent with research by Zahidatul & Trias (2017), who found that women aged under 20 years have a 2.250-times greater risk of experiencing anemia than those aged 20-35 years, that women aged over 35 years have a 5.885-times greater risk than those aged 20-35 years, and that women who do not work have a 1.990-times greater risk of anemia than pregnant women who work (Zahidatul Rizkah and Trias Mahmudiono, 2017). This study found that the Botting Macca application is feasible for detecting anemia scores in prospective brides, in line with a number of studies that used applications as the main medium for early detection in women (M. H. N. Sari and Anggraini, 2020; Solehati et al., 2018; D. P. Sari et al., 2020; Abdimas and Tasikmalaya, 2019; Umriaty and Arti, 2019; Putrianti and Krismiyati, 2019; Fitri and Machampang, 2018; Sukmawati, Mamuroh, and Nurhakim, 2019). The Botting Macca application uses a score detection feature containing 12 questions related to signs of anemia: the 5L symptoms (weakness, tiredness, lethargy, fatigue, inattentiveness), headache, blurred vision, easy drowsiness, quick fatigue, difficulty concentrating, and pallor of the face, lower eyelids, lips, skin, nails, and palms. Respondents filled in the score detection items in the application once a week.
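The detection logic described above is a simple tally of "yes" answers over the 12 signs. A minimal Python sketch of such a scorer follows; the item list matches the paper's checklist, but the cut-offs mapping the yes-count to a category are illustrative assumptions, since the paper does not state the exact thresholds separating the normal, high-risk, and anemia categories.

```python
# 12-item anemia sign checklist (items as listed in the paper); the thresholds
# mapping the yes-count to a category are assumptions for illustration only.
SIGNS = [
    "5L (weakness, tiredness, lethargy, fatigue, inattentiveness)",
    "headache", "blurred vision", "easy drowsiness", "quick fatigue",
    "difficulty concentrating", "pale face", "pale lower eyelids",
    "pale lips", "pale skin", "pale nails", "pale palms",
]

def anemia_category(answers: list[bool]) -> str:
    assert len(answers) == len(SIGNS)
    yes_count = sum(answers)      # more "yes" answers -> higher anemia risk
    if yes_count <= 3:            # assumed cut-off
        return "normal"
    if yes_count <= 7:            # assumed cut-off
        return "high risk"
    return "suspected anemia"

answers = [True, True, False, True, True, True, False,
           False, False, False, False, False]  # example respondent (5 "yes")
print(anemia_category(answers))  # -> "high risk"
```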
In the experimental group given the Botting Macca application, the symptoms most often answered "yes" among the 20 respondents were easy drowsiness and fatigue (19 respondents), while in the control group given printed media all 20 respondents answered "yes" to easy drowsiness, fatigue, and difficulty concentrating. This is in line with Fitriany and Saputri (2018), who note that anemia can be detected by several symptoms, including the 5L symptoms, headache, dizziness, drowsiness, fatigue, difficulty concentrating, pale conjunctiva, and pale lips and face. In the experimental group, the largest share of the 20 respondents were initially in the high-risk category, 8 respondents (40%), and this decreased significantly over 4 weeks to 2 respondents (10%), whereas in the control group given printed media the largest share were in the high-risk category, 11 respondents (55%), decreasing after 4 weeks only to 10 respondents (50%). This is in line with Tonkin, Brimblecombe, and Wycherley (2017), who tested a smartphone application to improve nutrition in community settings and found behavior changes in food selection that affected health status. This study found that the score detection and anemia education application for prospective brides is feasible and has the potential to be developed further to increase behavior change in brides-to-be, in line with the benefits of mobile applications as an information medium in pregnancy, since most women use internet access to retrieve health information (Selvia and Ernawati, 2019). The result of the Mann-Whitney test, with p = 0.001 < 0.05, is in line with research showing significant differences in nutritional knowledge and nutritional adequacy related to anemia prevention after nutrition education was given (Sefaya, 2017).
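The between-group comparison reported above is a standard Mann-Whitney U test on the two independent samples. A SciPy sketch follows; the score arrays are hypothetical placeholders (only the group size of 20 matches the study design), not the study's data.

```python
# Mann-Whitney U test comparing the app group with the print-out group
# (hypothetical post-test anemia scores; n = 20 per group as in the study).
from scipy.stats import mannwhitneyu

app_group = [2, 1, 3, 0, 2, 1, 0, 2, 1, 1,
             3, 2, 0, 1, 2, 1, 0, 2, 1, 1]
print_group = [5, 6, 4, 7, 5, 6, 8, 5, 4, 6,
               7, 5, 6, 4, 5, 7, 6, 5, 4, 6]

u_stat, p_value = mannwhitneyu(app_group, print_group, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # p < 0.05 -> reject H0
```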
Using the application, compared to printed material, made it easier for respondents to gain education. The implication of this research is that the application makes it easier to study anemia material and has been shown to increase behavior change among brides-to-be, so it is considered feasible as an educational medium. Various web-based and smartphone applications, in keeping with the times, have been proven to increase knowledge (Noverina, Dewanti, and Sitoayu, 2020; Publication, 2017; Fadhilah, Hartini, and Alit Gunawan, 2017); besides increasing knowledge, they can also influence maternal attitudes related to anemia (Febrianta, Gunawan, and Sitasari, 2019; Ferwanda and Muniroh, 2017) and increase the effectiveness of health promotion (Mahampang and Sari, 2020; Rotua, 2018; Putu Fani Yustisa, Aryana, and Suyasa, 2012). The availability of a score detection and anemia education application for prospective brides is expected to have a positive impact in increasing public awareness, especially among prospective brides, to do something that can help themselves with regard to anemia. If a person already has knowledge, she will have the ability to use the material she has learned in real situations and circumstances (Iron, Knowledge, and 2019).

Conclusion

Based on the Borg & Gall development model, a product called the Android-based Botting Macca application was produced for score detection and anemia education among prospective brides. Based on the expert assessments, covering application size, cover design, application content design, and the feasibility of content, presentation, language, and context, the score detection and anemia education application for prospective brides is suitable for use without revision. Based on the pre-test and post-test assessments in the sample trials, the application is feasible to use without revision and feasible to develop further. Future development is expected to extend the score detection and education components related to the health of prospective brides, for example by adding features to the Botting Macca application, adding engaging anemia education videos, adding a hemoglobin (Hb) diagnostic test to confirm the results of anemia score detection, and adding further symptoms and signs to the 12-question score detection feature.
Analysis of a combined influence of substrate wetting and surface electromigration on a thin film stability and dynamical morphologies

A PDE-based model combining surface electromigration and wetting is developed for the analysis of morphological stability of ultrathin solid films. Adatom mobility is assumed anisotropic, and two directions of the electric field (parallel and perpendicular to the surface) are discussed and contrasted. Linear stability analyses of small-slope evolution equations are performed, followed by computations of fully nonlinear parametric evolution equations that permit surface overhangs. The results reveal parameter domains of instability for wetting and non-wetting films and variable electric field strength, nonlinear steady-state solutions in certain cases, and interesting coarsening behavior for strongly wetting films.

I. INTRODUCTION

Since the early 1990s there has been interest in understanding and assessing the effects of electromigration on kinetic instabilities of crystal steps [1]-[7] and epitaxial islands [8]-[13], and on the morphological stability of epitaxial films [14]-[16]. Among other applications (developed primarily at the microscale), electromigration has also been used for the fabrication of nanometer-sized gaps in metallic films. Such gaps are suitable for testing the conductive properties of single molecules and controlling their functionalities [17]-[19]. For instance, Ref. [19] describes the fabrication of nanoscale contacts by using electromigration to thin down and finally break epitaxially grown ultrathin (10 ML) Ag films wetting the Si(001) substrate. The gap between contacts can be cyclically opened and closed by applying an electromigration current at 80 K to open the gap, and enabling surface diffusion by annealing to room temperature to close it. For this and other emerging applications at the nanoscale [20,21] it therefore seems important to understand and characterize the effects of substrate wetting and electromigration that are simultaneously active in the physical system. This paper combines these effects in a model based on evolution equation(s) for the continuous profile of the film surface. The focus is on wetting films with isotropic surface energy but anisotropic adatom mobility [8,9], although the model allows any combination of wetting properties and anisotropies. We factor in and discuss the effects on film stability and morphological evolution of an electric field that is either parallel or perpendicular to the initial planar surface of the film, and we do not limit considerations to small deviations from planarity; arbitrary surface slopes and even surface overhangs are permitted by the model. Models of wetting appropriate for continuum-level modeling of the surface diffusion-based dynamics of solid films have been developed and discussed extensively, primarily in the context of thin film heteroepitaxy [22]-[35]. Our analysis is based on one such model, the two-layer exponential model for the surface energy [24,28,29], [32]-[37] (which is particularly useful when the surface energy is anisotropic), but other models of wetting discussed in Refs. [22]-[35] can be used instead, and the results are expected to be qualitatively similar.
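For concreteness, the two-layer exponential model referred to above interpolates between the substrate and film surface energies; a common way to write it, consistent with the limits discussed in Sec. II below, is γ(z) = γ_f + (γ_S − γ_f) e^(−z/ℓ). The following minimal Python sketch evaluates this assumed form with illustrative parameter values rather than values taken from the paper.

```python
# Two-layer exponential wetting energy (a standard form consistent with the
# stated limits: gamma -> gamma_f as z -> infinity, gamma -> gamma_S at z = 0).
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def surface_energy(z, G=1.5, ell=1.0):
    """gamma(z)/gamma_f with G = gamma_S/gamma_f (G > 1: wetting film)."""
    return 1.0 + (G - 1.0) * np.exp(-z / ell)

z = np.linspace(0.0, 5.0, 6)
print(surface_energy(z))  # decays from G at the substrate toward 1 far away
```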
The goal of modeling in this paper is not to match the theoretical results to the experiment [19] and thus help in understanding the experiment, but rather to provide a broad analysis of the interplay of two effects (wetting and electromigration) that, at least to our knowledge, has not been addressed in prior publications. II. PROBLEM STATEMENT We consider a 2D single-crystal film of unperturbed height H_0 with the 1D parametric surface Γ(x(u, t), z(u, t)), where x and z are the Cartesian coordinates of a point on a surface, t is the time and u is the parameter along the surface. The origin of the Cartesian reference frame is on the substrate, and along the substrate (the x-direction) the film is assumed infinite. The z-axis is in the direction normal to the substrate and to the initial planar film surface. Surface marker particles will be used for computations of surface dynamics [38]. Thus x and z (z > 0) in fact represent the coordinates of a marker particle, which are governed by the two coupled parabolic PDEs [39,40]. Here (and below) the subscripts t, s, u, z and x denote partial differentiation with respect to these variables, V is the normal velocity of the surface, which incorporates the physics of the problem, and g(u, t) = s_u = z_u/cos θ = √(x_u² + z_u²). Here s is the surface arclength and θ the surface orientation angle, i.e., the angle that the unit surface normal makes with the reference crystalline direction (chosen along the z-axis). If the surface slopes are bounded at all times (the surface does not overhang), then it is more convenient to describe surface dynamics by a single evolution PDE for the height function h(x, t) of the film. Eqs. (1), (2) can be easily reduced to such an "h-equation", which we will use for analysis; however, Eqs. (1), (2) will be used for most computations in this paper. A similar parametric approach was used recently in Ref. [41] for the computation of a hill-and-valley structure coarsening in the presence of material deposition (growth) and strongly anisotropic surface energy. Assuming that temperature is sufficiently high and surface diffusion is operative, the normal velocity is given by the expression below, where D is the adatom diffusivity, Ω the atomic volume, ν the adatom surface density, kT the Boltzmann factor, µ the surface chemical potential, M(θ) the anisotropic adatom mobility, E_0 the applied electric field, q > 0 the effective charge of adatoms, f(θ) = sin θ if the electric field is vertical, or f(θ) = cos θ if the electric field is horizontal, and α = ±1 is used to select either the stabilizing or the destabilizing action of the field for the chosen combination of the vertical or horizontal orientation of the field and the mobility M(θ). The two values of α correspond to the two possible field orientations once either the horizontal or the vertical field direction has been chosen (that is, field directed up-down, or left-right). The first term in V describes the high-T surface diffusion, the second term describes surface diffusion enabled by electromigration [42]. The surface chemical potential µ is assumed to have contributions from the surface energy and the surface wetting interaction with the substrate/film interface, where κ = θ_s = g⁻³(z_uu x_u − x_uu z_u) is the surface mean curvature and, in the general case, γ(z, θ) is the height- and orientation-dependent, i.e. anisotropic, surface energy.
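The display formula for V quoted in the preceding paragraph did not survive extraction. As a hedged sketch only (not the paper's verbatim equation), a Mullins-type capillarity-driven surface-diffusion velocity plus an electromigration drift, written so that nondimensionalization with the length scale ℓ and time scale ℓ²/D reproduces the parameter groups B = Ω²νγ_f/(kTℓ²) and A = ανΩE₀q/(kT) defined after Eqs. (7)-(9), would read:

% Hedged reconstruction, not verbatim from the paper: the first term is
% capillarity-driven surface diffusion, the second electromigration drift.
% Here mu is taken with units of energy per unit volume (gamma*kappa type).
V \;=\; \frac{D\nu\Omega^{2}}{kT}\,\frac{\partial^{2}\mu}{\partial s^{2}}
   \;+\; \alpha\,\frac{D\nu\Omega\,q E_{0}}{kT}\,
         \frac{\partial}{\partial s}\!\left[M(\theta)\,f(\theta)\right].

In some formulations (e.g., that of Ref. [9]) the mobility M(θ) multiplies the capillarity term as well; at linear order about a flat film this makes no difference, since M(0) = 1.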
(Note again that z here stands for the shortest distance between the substrate and a chosen point (x, z) on a film surface; this distance is the height h(x, t) of the surface if there are no overhangs.) In this paper we focus on the effects due to anisotropic adatom mobility, thus we will use the simpler isotropic model for the surface energy [22]-[35], where γ_S is the (constant) energy of a substrate/gas (or vacuum) interface, ℓ the characteristic wetting length, and γ_f the constant energy of a crystal/gas interface (that is, of the film surface). Eq. (5) is the interpolation between the two energies. In the limit of a thick film, z → ∞, only the latter energy is retained in this expression (because the inter-molecular forces between the substrate and the surface molecules are relatively short-ranged), and in the limit of a film of zero thickness only the former energy is retained. Although Eq. (5) is phenomenological, it matches the experiments and the ab initio calculations surprisingly well (at least for lattice-mismatched systems) [32,36,37]. Finally, we assume the (dimensionless) anisotropic adatom mobility in the form of [9], where N is the number of symmetry axes and φ is the angle between a symmetry direction and the average surface orientation. β is a parameter determining the strength of the anisotropy. Throughout the paper we present results either for β = 0 (the isotropic case), or for β = 1 and N = 4, φ = π/16. For the latter set of parameter values the graphs of the functions M(θ) and M′(θ) are shown in Fig. 1. Remark 1. In the limit z → ∞ (where the wetting effect is not operative), the present problem is most closely related to the one analyzed by Schimschak and Krug in Ref. [9]. The essential difference is that these authors determine the electric field from the solution of the potential equation with the appropriate boundary conditions on the material boundaries, including the moving surface [43,44]. Thus their solution for the electric field is nonlocal, unlike the local approximation used in this paper. The local nature of the electric field explains why we did not detect traveling wave solutions, which are the hallmark of Refs. [9,44]. Next, we choose ℓ as the length scale and ℓ²/D as the time scale and write the dimensionless counterparts of Eqs. (1), (2), where we use the same notation for the dimensionless variables. Notice that the dimensionless forms of f(θ) and the parametric expression for κ coincide with their dimensional forms, and that Eq. (5) has been substituted in Eq. (4). For conciseness, we keep differentiation with respect to the arclength, s, in the dimensionless equations, but their most transparent forms for the computations result when the differentiation with respect to s is replaced with the differentiation with respect to the parameter u, using ∂/∂s = (1/g)∂/∂u. In Eqs. (7)-(9), B = Ω²νγ_f/(kTℓ²) is the surface diffusion parameter, A = ανΩE₀q/(kT) is the strength of the electric field, and G = γ_S/γ_f is the ratio of the substrate to film surface energies. For wetting films G > 1, for non-wetting films 0 < G < 1. Notice that A may take on positive or negative values through the parameter α. In the following sections we analyze several representative situations. III. VERTICAL ELECTRIC FIELD A. Linear stability analysis The small slope approximation (|h_x| ≪ 1) of Eq.
(10) reads as Eq. (12), where the mobility has been linearized about the flat surface. The last two terms are the simplest nonlinearities from the expansion of the electromigration flux that involve M(0) and M′(0). Without loss of generality we will take M(0) = 1 here and elsewhere in this paper. (See Fig. 1. When the mobility is isotropic, i.e. β = 0, then M = M(0) = 1, as is seen from Eq. (6).) Notice that the anisotropy of the mobility, M′(0), does not have an effect on linear stability, as it enters in the coefficients of the nonlinear terms. Also note that the last three terms can be written in a conservative form similar to the terms in the first line of the equation and to the first term in the second line (and this is how they are implemented in the code). Introducing a small perturbation ξ(x, t) in Eq. (12) by replacing h with h_0 + ξ(x, t) (where h_0 = H_0/ℓ is the dimensionless unperturbed film height), linearizing in ξ and assuming normal modes for ξ gives the perturbation growth rate ω(k) in Eq. (13), where k is the perturbation wavenumber. Remark 2. When the wetting interaction is absent (thick film: h_0 → ∞), Eq. (13) reduces to the standard one, ω(k) = −Bk⁴ − Ak², which reflects the stabilizing action of the surface diffusion and either the stabilizing (A > 0, electric field in the positive z direction) or destabilizing (A < 0, electric field in the negative z direction) action of the electric field. Such a film is absolutely stable when A > 0, but when A < 0 it is long-wave unstable. 1. Analysis of Eq. (13) • Wetting films (G > 1). From Eq. (13) one notices that wetting films are absolutely linearly stable when A > 0, but they are long-wave unstable when A < −B(G − 1)exp(−h_0) < 0 (see Fig. 2). The short-wavelength cut-off wavenumber, the maximum growth rate, and the wavenumber at which the latter occurs are given by Eq. (14); k_c and ω_max are plotted in Figs. 3 and 4. The film stability decreases with increasing h_0, and this trend saturates around h_0 = 10 (3 nm). That is, for the stated field strength A = −71, films of thickness h_0 > 10 do not "feel" the stabilizing presence of the substrate, and they are as stable as the films that do not interact with the substrate at all. Of course, increasing the field strength |A| makes the film less stable, but increasing G makes it more stable, since the substrate energy provides a stabilizing effect. The instability condition is equivalent to h_0 > ln[BA⁻¹(1 − G)], and for the typical values of A and B stated above and for moderately large G the right-hand side of the latter inequality is negative, thus the long-wave instability occurs for any film height h_0. This is similar to the situation when wetting effects are absent, see Remark 2 (but of course the spectrum of unstable wavenumbers and the magnitude of the maximum growth rate are different). There is a critical thickness h_0c below which the film is absolutely linearly stable only if G − 1 ∼ |A|, i.e., when G is of the order of at least one hundred. • Non-wetting films (0 < G < 1). In this case, for A < 0 the long-wave instability occurs for any film thickness and any electric field strength, which is again similar to the situation when wetting effects are absent, see Remark 2. When A > 0 the long-wave instability (with k_c, ω_max and k_max numerically similar to those shown in Eq. (14)) occurs when the film height is less than critical, h_0 < h_0c = ln[AB⁻¹(1 − G)⁻¹]. When h_0 > h_0c the film is absolutely linearly stable. h_0c is plotted in Fig. 5.
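Equations (13) and (14) did not survive extraction. The sketch below therefore assumes the dispersion relation implied by Remark 2 together with the instability threshold quoted above for wetting films, namely ω(k) = −Bk⁴ − C k² with C = A + B(G − 1)exp(−h_0); the values of B, G and h_0 used at the end are illustrative placeholders, with only A = −71 taken from the text.

import numpy as np

def growth_rate(k, A, B, G, h0):
    """Assumed small-slope growth rate for a wetting film in a vertical field:
    omega(k) = -B*k**4 - C*k**2 with C = A + B*(G - 1)*exp(-h0).
    Long-wave instability requires C < 0, i.e. A < -B*(G - 1)*exp(-h0)."""
    C = A + B * (G - 1.0) * np.exp(-h0)
    return -B * k**4 - C * k**2

def instability_scales(A, B, G, h0):
    """Cut-off wavenumber k_c, fastest-growing wavenumber k_max, maximum rate."""
    C = A + B * (G - 1.0) * np.exp(-h0)
    if C >= 0.0:
        return None  # the film is linearly stable
    k_c = np.sqrt(-C / B)        # omega(k_c) = 0
    k_max = k_c / np.sqrt(2.0)   # from d(omega)/dk = 0
    omega_max = C**2 / (4.0 * B)
    return k_c, k_max, omega_max

# A = -71 is quoted in the text; B = 1, G = 2, h0 = 10 are placeholders.
print(instability_scales(A=-71.0, B=1.0, G=2.0, h0=10.0))

Under this assumed form, the fastest-growing wavelength λ_max = 2π/k_max sets the size of the computational domains used in the nonlinear computations of Sec. III B.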
It is clear that h_0c increases with increasing A, or G, or both parameters. B. Nonlinear surface dynamics of wetting films Computations with the small-slope Eq. (12) (where M(0) = 1, M′(0) = 0) were performed for G = 2 and varying field strengths A that satisfy the condition for long-wave instability, A < −B(G − 1)exp(−h_0) < 0. The computational domain was 0 ≤ x ≤ λ_max, λ_max = 2π/k_max, with periodic boundary conditions, and the initial condition was a small-amplitude cosine-shaped perturbation of the flat surface h_0 = const. All computations produced steady-state solutions that have the shape of a vertically stretched cosine curve with a fairly large amplitude. However, none of these steady states was confirmed when instead the fully nonlinear parametric equations (7)-(9) were computed. Fig. 6 shows the computed morphology. Overhangs are clearly visible, and the bottom of the "pit" flattens out as it approaches the substrate due to the increase of the repulsive force, promoting overhangs in the vicinity. Similarly, when Eq. (12) was computed with a random small-amplitude initial condition on the domain 0 ≤ x ≤ 20λ_max (the mobility was again isotropic), the result was a perpetually coarsening hill-and-valley structure. This was again not confirmed in the computations of the parametric equations. The late-time morphology computed using the parametric equations is shown in Fig. 7(a). The surface develops deep "pockets" whose walls overhang and eventually merge, resulting in teardrop-shaped voids trapped in the solid; this can be seen, for instance, on the interval 40 < x < 50. Fig. 7(b) shows the typical random initial condition. Anisotropic mobility resulted in morphologies that are similar to the ones shown in Figs. 6 and 7(a), but skewed left or right, depending on the sign of φ. As we pointed out in Sec. III A, anisotropy matters in the nonlinear stage of the dynamics. IV. HORIZONTAL ELECTRIC FIELD A. Linear stability analysis In the case of the horizontal electric field the small slope approximation of Eq. (10) reads as Eq. (15). Again we retained here the two simplest nonlinearities from the expansion of the electromigration flux term. The coefficient M(0) in the third-to-last term has been set equal to one. This nonlinear term and the last (nonlinear) term in Eq. (15) do not have an effect on linear stability. Unless the mobility is isotropic, the second-to-last term of Eq. (15) has an effect on linear stability. Assuming anisotropy, the growth rate is given by Eq. (16). Comparing Eqs. (13) and (16), it is clear that one can obtain the stability properties in the horizontal field case by replacing A in Eq. (13) with AM′(0). Since M′(0) ≈ −2.7 (see Fig. 1) for the chosen parameter values in Eq. (6), for instability one must have A > 0 (the electric field is in the positive x direction). Thus, for instance, the stability analysis of wetting films in paragraph 1 of Sec. III A 1 translates directly into the case of the horizontal field simply by replacing A with AM′(0). Computations with either the small-slope Eq. (15), or the parametric equations (7)-(9), of the evolution of a one-wavelength (λ), small-amplitude cosine-shaped perturbation on a periodic domain resulted in steady-state profiles which have the shape of a vertically stretched cosine curve with a fairly large amplitude. The steady-state profiles often displayed a more sharply peaked bottom and less curved walls than the cosine curve, and the amplitude is significantly smaller in the (fully nonlinear) parametric case. In fact, it can be noticed from Fig.
8 that the film is nowhere close to dewetting the substrate for all tested field strengths, notwithstanding that the stabilizing influence of the substrate is minimal for the chosen parameter values (see the discussion in Sections III A 1, paragraph 1, and IV A). Remark 3. When Eq. (15) is used in a computation, the last term there is largely responsible for the saturation of the surface slope and the existence of the steady state. When this nonlinear term is omitted from the equation, the slope increases until the computation breaks down. In the computations of the parametric equations, unless λ is larger than approximately 4λ_max, the surface evolves towards the steady state mostly by vertical stretching of the initial shape. Perturbations of larger wavelength develop large-amplitude, hill-and-valley-type distortions which slowly coarsen into a steady-state, cosine-curve-type shape. Fig. 8 shows the amplitude of the steady-state profile, h_max − h_min, and the height of the profile at its lowest point, h_min, vs. λ. It can be seen that, at least for moderate strengths of the electric field, the growth of the amplitude is logarithmic in the vicinity of λ_c, and for λ ≫ λ_c the growth is linear. The numerically determined slope of the linear section of the A = 71 curve is 0.208, that of the A = 710 curve is 0.217, and that of the A = 7100 curve is 0.209, which suggests that the slope is insensitive (or very weakly sensitive) to the field strength. The decay of h_min with increasing wavelength mirrors the growth of the amplitude, thus the steady-state surface shape is symmetrically vertically stretched about the equilibrium surface position h_0 = 10. All computed steady states are stable with respect to the imposition of random small- and large-amplitude point perturbations, which we confirmed by computing the dynamics of such perturbed shapes. Coarsening of random initial roughness We employed fully nonlinear simulations based on the parametric equations for the determination of the coarsening laws at increasing electric field strength and at variable wetting strength (characterized by the values of the parameters h_0 and G). Computations were performed on the domain 0 ≤ x ≤ 20λ_max with periodic boundary conditions; all runs were terminated after the surface evolved into a large-amplitude hill-and-valley structure with 3-5 hills. Unless wetting is strong, i.e., h_0 is small and G is large (the case that is discussed in more detail below), the slopes of the hills are a constant 24° during coarsening, except for the short initial period. Figs. 9-11 are log-log plots of the maximum surface amplitude h_max − h_min and the mean horizontal distance X between valleys (kinks) vs. time, each averaged over ten realizations. Remark 4. We also attempted to compute the coarsening dynamics resulting from the small-slope Eq. (15). While the hill-and-valley structure does emerge and coarsens with time, the characteristic constant hill slope is nearly 90°. This suggests that additional nonlinear terms must be retained in Eq. (15) for predictive computations, and raises the question of which terms must be retained. We have not tried to obtain an adequate nonlinear small-slope model, leaving this agenda to future research. Of course, increasing the field strength results in faster coarsening, as the times needed for coarsening into a "final" structure are 10³, 10, and 1 for field strengths A = 71, 710 and 7100, respectively.
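The coarsening exponents discussed next are read off as slopes of straight-line fits in log-log coordinates. A minimal sketch of such a fit on synthetic data (the exponent 0.15 and the noise level are invented placeholders, not the paper's data) is:

import numpy as np

# Synthetic stand-in for the averaged amplitude data of Figs. 9-11:
# a power law amp(t) ~ t**eps with an invented eps = 0.15, plus noise.
rng = np.random.default_rng(0)
t = np.logspace(0, 3, 40)  # dimensionless time
amp = 0.5 * t**0.15 * (1.0 + 0.02 * rng.standard_normal(t.size))

# A power law is a straight line in log-log coordinates, so the
# coarsening exponent is the slope of a linear fit there.
eps, log_prefactor = np.polyfit(np.log10(t), np.log10(amp), deg=1)
print(f"fitted coarsening exponent: {eps:.3f}")  # ~0.15 by construction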
These values also indicate that the rate of change of the coarsening rate decreases with increasing field strength. Other than that, the coarsening laws are similar for the three tested field strengths. Fits to the data in the case A = 71 are shown in Figs. 9(a,b). At small times coarsening is fast (exponential, see Fig. 9(b)), then it changes to a slower power law with the exponent in the range ∼0.1-0.2. (Since the values of the amplitude's logarithm are negative, we were unable to fit the exponential law to the data in Fig. 9(a), thus we fitted the quadratic, which results in a t^(ε₁+ε₂ log₁₀ t)-type law.) Remarkably, when h_0 is decreased from 10 to 6, a very different coarsening behavior emerges, see Figs. 10(a,b) and 11(a,b). At G = 2, after the period of slow power-law coarsening at intermediate times, the coarsening accelerates to exponential coarsening or t^(ε₁+ε₂ log₁₀ t)-type coarsening at late times (Figs. 10(a,b)). (For reference, the linear stability in the case h_0 = 6 is shown in Figs. 3(c) and 4(c) by the dash-dotted line.) At G = 500 we did not output a sufficient number of data points in the beginning of the computation, but it is expected that the initial coarsening is still faster than the power law. The amplitude starts to decrease towards the end of the simulation, and the kink-kink distance coarsens fast on the entire time interval. It seems certain that such unusual dynamics in the strongly wetting states emerges due to nonlinear "overdamping" of the electromigration-induced faceting instability by the surface-substrate interaction force. Toward this end, in Figs. 12(a,b) the surface shapes and surface slopes for the h_0 = 10, G = 2 and h_0 = 6, G = 500 cases are compared at the time when seven hills have formed on the surface. In the former case the hills are steep, have rather uniform height and their slopes are almost straight lines. In the latter case they are more irregular, "rounded", and the average height is smaller. It is reasonable to expect that in the opposite case of non-wetting films, the surface-substrate interaction will instead "sharpen up" the hills, i.e., increase their slopes and make the surface structures appear more spatially and temporally uniform. Finally, we remark that the discussed coarsening laws for the strong wetting cases are also qualitatively different from the laws governing coarsening in the absence of wetting, but in the presence of deposition, attachment-detachment, strong surface energy anisotropy, and interface kinetics [41,45]. Indeed, only the coarsening exponents shown in Fig. 9 (weak wetting) are within the same range (0.1-0.5) for nearly the whole computational time interval, as are the exponents computed in the cited papers. V. CONCLUSIONS In this paper the effects of electromigration and wetting on thin film morphologies are discussed, based on a continuum model of film surface dynamics. It has been shown that the wetting effect significantly modifies the stability properties of the film and the coarsening of the electromigration-induced surface roughness. It has also been shown that the small-slope evolution equations that were employed in many studies of electromigration effects on surfaces are often inadequate for the description of strongly nonlinear phases of the dynamics. It is expected that accounting for the surface energy anisotropy and the electric field non-locality (through the solution of the moving boundary value problem for the electric potential) will lead to the uncovering of even more complicated behaviours.
2013-01-04T17:08:06.000Z
2012-12-05T00:00:00.000
{ "year": 2013, "sha1": "795d156432b92d163e37ec8afe6f1122ac3ce06f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1212.1141", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "795d156432b92d163e37ec8afe6f1122ac3ce06f", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
254048255
pes2o/s2orc
v3-fos-license
Radial and cylindrical symmetry of solutions to the Cahn–Hilliard equation The paper is devoted to the classification of entire solutions to the Cahn–Hilliard equation −Δu = u − u³ − δ in ℝ^N, with particular interest in those solutions whose nodal set is either bounded or contained in a cylinder. The aim is to prove either radial or cylindrical symmetry, under suitable hypotheses. Introduction We consider the entire equation (1.1). This result agrees with the variational theory, which studies the asymptotic behaviour of the scaled functionals E_ε(u, Ω) = ∫_Ω ((ε/2)|∇u|² + W(u)/ε) dx (1.5) as ε → 0. For instance, Modica proved that, if ε_k is a sequence of positive numbers tending to 0 and u_{ε_k} is a sequence of minimisers of E_{ε_k}(·, Ω) under the constraint (1.3) such that u_{ε_k} → u_0 in L¹(Ω), then u_0(x) ∈ {±1} for almost every x ∈ Ω, and the boundary in Ω of the set E := {x ∈ Ω : u_0(x) = 1} has minimal perimeter among all subsets F ⊂ Ω such that |F| = |E|, where |·| denotes the volume (see [15], Theorem 1). Further Γ-convergence results relating E_ε(·, Ω) to the perimeter can be found in [16]. Therefore, given a family {u_ε}_{ε∈(0,ε_0)} of minimisers under the constraint (1.3), their nodal set is expected to be close to a compact Alexandrov-embedded constant mean curvature surface, at least for ε small. Corollary 2, together with a scaling argument, shows that, for ε small enough, the nodal set of any entire solution to (1.6) in ℝ^N such that u > z_2(ε) outside a ball is actually a sphere, which is known to be the unique compact Alexandrov-embedded constant mean curvature surface in ℝ^N (see [1]). After that, we set the relevant notation and we consider solutions satisfying (1.7). The aim is to study their symmetry properties and their asymptotic behaviour as δ → 0, with particular interest in solutions which have one periodicity direction. Theorem 3 Let {u_δ} be a family of non-constant solutions to (1.1) in ℝ^N, with N ≥ 2. Assume furthermore that u_δ is periodic in x_N for any δ ∈ (0, 2/(3√3)); then, in particular, (2) u_δ is radially symmetric in x′. In view of the aforementioned Γ-convergence results, given a solution u to (1.6) satisfying (1.7), with δ = ε, we expect its nodal set to be close to an Alexandrov-embedded constant mean curvature surface which is contained in a cylinder. These kinds of surfaces are fully classified, at least the ones which are embedded in ℝ³; in fact it is known that the unique examples are the sphere and the Delaunay unduloids, that is, a family of non-compact revolution surfaces obtained by rotating a periodic curve around a fixed axis in ℝ³, which can be taken to be the x₃-axis, parametrised by a real number τ ∈ (0, 1). We will denote the period of D_τ by T_τ. For a detailed introduction to Delaunay surfaces, we refer to [12,14]. For any τ ∈ (0, 1), Kowalczyk and Hernandez [11] constructed a family {u_{τ,ε}}_{ε∈(0,ε_0)} of solutions to (1.6) in ℝ³, with δ = δ_ε depending on ε, such that (1) δ_ε is positive and bounded uniformly in ε.
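Before turning to these properties, note that throughout the paper z_1(δ) < z_2(δ) < z_3(δ) are used as the ordered zeros of f_δ(u) = u − u³ − δ; this is how they enter, e.g., the bound z_1(δ) < u_δ < z_3(δ) quoted below. Under that reading, a quick numerical check of their behaviour as δ → 0 can be sketched as follows (a minimal illustration, not part of the paper's arguments):

import numpy as np

def zeros_of_f(delta):
    """Ordered real zeros z1 < z2 < z3 of f_delta(u) = u - u**3 - delta,
    which exist precisely for |delta| < 2/(3*sqrt(3))."""
    assert abs(delta) < 2.0 / (3.0 * np.sqrt(3.0))
    roots = np.roots([-1.0, 0.0, 1.0, -delta])  # -u^3 + u - delta = 0
    return np.sort(roots.real)                   # all three roots are real here

for delta in (0.0, 0.05, 0.2):
    z1, z2, z3 = zeros_of_f(delta)
    print(f"delta={delta:.2f}: z1={z1:+.4f}, z2={z2:+.4f}, z3={z3:+.4f}")
# As delta -> 0 the zeros approach -1, 0 and 1, consistent with the limits
# used in the paper (e.g. u_delta -> -1, and z_3(delta) -> 1).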
We observe that the solutions u_{ε,τ} constructed in [11] are actually negative outside a cylinder; however, in order to obtain the aforementioned family, thanks to the oddness of f, it is enough to replace them with −u_{ε,τ}. An interesting question is uniqueness. In other words, we are interested in the following question. This would be the counterpart of Corollary 2 for periodic solutions. For now we are not able to give a full answer to this question. However, Theorem 3 is a first step in this direction, since it proves that any family {v_ε}_{ε∈(0,ε_0)} of such solutions has to share many properties with the family {u_{τ,ε}}_{ε∈(0,ε_0)} constructed by Hernandez and Kowalczyk. For instance, for ε small, v_ε has to satisfy (1), (2), (3), and the scaled functions v_ε(εx) tend to −1 uniformly on compact subsets of ℝ^N as ε → 0. The plan of the paper is the following. In Sect. 2 we will state some quite general results, of which the Theorems stated in the introduction are consequences. Section 3 is devoted to the proofs. It is divided into three subsections, dedicated to proving global boundedness, radial symmetry and the asymptotic behaviour for δ small, respectively. Some relevant results In this section we state some results that are proved in Sect. 3. First we prove boundedness of solutions, which holds irrespective of the sign of δ. Proposition 5 Let u_δ ∈ L³_loc(ℝ^N) be a distributional solution to the Cahn–Hilliard equation (1.1). Then z_1(δ) ≤ u_δ ≤ z_3(δ) a.e. in ℝ^N. Remark 6 • Using Proposition 5, standard elliptic estimates (see [10], Theorem 8.8 and Corollary 6.3) and a bootstrap argument, it is possible to show that any distributional solution is in fact smooth. This parallels the regularity result proved in [6] for the Allen–Cahn equation. • It follows from the strong maximum principle that either u_δ is constant, and in this case it has to be one of the zeros z_1(δ), z_2(δ), z_3(δ), or the bounds of Proposition 5 hold with strict inequalities. We observe that Proposition 5 and Remark 6 prove point (1) of Theorem 3, which is actually true for any non-constant entire solution. After that, we rule out the case δ ≤ 0, in which only constant solutions are allowed. We stress that the latter result proves point (1) of Theorem 1 and agrees with the sign of δ obtained by Hernández and Kowalczyk in [11]. Using boundedness and the famous result by Gidas et al. [9], or Theorem 2 of [7], which relies on the moving planes method, we can prove this symmetry result (Proposition 8): let δ ∈ (0, 2/(3√3)) and let u_δ be a non-constant solution to (1.1) such that u_δ > z_2(δ) outside a ball B_R, for some R > 0; then the corresponding problem admits a unique solution, which is radially symmetric (see [4,17,18]), that is, v_δ(x) = w_δ(|x|). In view of this fact, we can actually prove the following classification result. Let δ ∈ (0, 2/(3√3)) and let u_δ be a non-constant solution to (1.1) such that u_δ > z_2(δ) outside a ball B_R. Then, up to a translation, u_δ = v_δ. In the sequel, we will use the notation W_δ(t) := W(t) + δt. Remark 10 It is possible to see that, for any δ ∈ (0, 2/(3√3)), the relevant quantity is strictly decreasing. Thus, using that, by Proposition 8, v_δ is decreasing, it follows that w_δ(0) < 0. In particular, in view of Remark 10, which yields that the nodal set of v_δ is neither empty nor a singleton, Corollary 2 is true. Considering solutions that approach a positive limit just with respect to N − 1 variables, we can prove the following (Proposition 11). We note that this proves point (2) of Theorem 3.
Even in this case, our result agrees with the construction of [11], where the authors prove the existence of a family of solutions fulfilling the symmetries of the Delaunay surface D_τ; hence, in particular, they are periodic in x_N, radially symmetric and radially increasing in x′. Here we show that any periodic solution has to be radially symmetric and radially increasing in x′. Finally, in order to prove point (3) of Theorem 3, we need the following result, which shows that the phase transition has to be complete. Proposition 12 For any ν > −1 there exists δ_0 > 0 such that the conclusion holds for any δ ∈ (0, δ_0). This result somehow parallels Lemma 2.5 of [8]. The proof relies on both the moving planes method and the sliding method. For a detailed proof of point (3) of Theorem 3, we refer to Sect. 3. Boundedness In order to prove boundedness for distributional solutions to (1.1), we will rely on a result proved by Brezis [2]. Remark 14 A similar argument is used in [5] to prove boundedness for solutions to a class of vectorial equations with exponents 0 < k_1 < · · · < k_n. The scalar Allen–Cahn equation is included in this class. Here we prove that a similar result is true for a slightly different nonlinearity, due to the presence of δ. Now we can prove Proposition 7, using boundedness and a result of [6] where the nonexistence of ground states for some special nonlinearities is proved. Radial symmetry The aim of this subsection is to prove Proposition 11. In order to do so, we need some decay at infinity of the solution. From now on, we denote the variables by x := (x_1, x′) ∈ ℝ × ℝ^{N−1}. For λ ∈ ℝ, we set Σ_λ := {x ∈ ℝ^N : x_1 < λ}. (3.2) This change of notation is justified by the fact that, several times in this section, x_N is the periodicity variable, hence we are not allowed to start the moving planes in that direction. Lemma 15 Let u_δ be a solution to (1.1). Assume furthermore that u_δ > z_2(δ) in the half-space ℝ^N \ Σ_λ, for some λ ∈ ℝ. Then u_δ → z_3(δ) as x_1 → +∞, uniformly in the other variables. Proof The statement is trivial if u_δ is constant (see Remark 6), hence we can assume that it is non-constant. We apply Lemma 2.3 of [6] to w := u_δ − z_2(δ) in the half-space ℝ^N \ Σ_λ, where, by Lemma 13, 0 < w < β. This is possible since the nonlinearity g(t) is positive in (0, β) and g′(0) > 0. We recall that the constants α and β are defined in the proof of Proposition 5. The conclusion is the claimed convergence, and the limit is uniform in the other variables. Using the fact that f′(z_3(δ)) < 0, we can actually prove a better result about the decay rate of z_3(δ) − u_δ, stated in (3.4). Proof We compare the bounded function v := z_3(δ) − u_δ with the barrier μe^{−γx_1}, for γ ∈ (0, −f′(z_3(δ))), in the half-space ℝ^N \ Σ_M, with M > 0 large enough. In fact, the comparison holds on the boundary. Note that here we use the fact that v ∈ L^∞, which is true by Lemma 13. Moreover, setting h_δ(v) as the associated nonlinearity, the bound holds uniformly with respect to x′. Thus, by the maximum principle for possibly unbounded domains (see Lemma 2.1 of [3]), we conclude that (3.4) is true in ℝ^N \ Σ_M. Changing, if necessary, the constant C(γ), the required inequality is fulfilled in the whole space. In order to prove Proposition 11, we need to apply Theorem 2 of [7], which we recall for the reader's convenience. Theorem 17 ([7]) Let v > 0 be a bounded entire solution to the corresponding equation. Then v is radially symmetric in y, that is, up to a translation, v(y, z) = w(|y|, z), and radially decreasing in y, that is, ∂_{y_j} v(y, z) < 0 for any x = (y, z) ∈ ℝ^M × ℝ^{N−M} with y ≠ 0. Proof By Proposition 5, z_1(δ) < u_δ < z_3(δ) and, by Remark 6, u_δ is smooth.
By Lemma 15, it converges to z_3(δ) as |x′| → ∞, uniformly in x_N. Since u_δ is periodic, in order to conclude that it is radially symmetric in x′ and radially decreasing, it is enough to apply Theorem 17. The asymptotic behaviour for δ small First we show that if a solution lies between 1/√3 and z_3(δ), then it is constant. This is proved by the moving planes method. In order to prove this fact, we assume by contradiction that there exists λ ∈ ℝ such that the open set {x ∈ Σ_λ : v − v_λ < 0} is nonempty, and we observe that, in any connected component ω of this set, the required inequality holds, due to the strict monotonicity of f_δ in [1/√3, 1) (for the definition of h_δ, see the proof of Lemma 16). As a consequence, the claim follows from the maximum principle for possibly unbounded domains. Composing v with any rotation of ℝ^N, we conclude that v is a constant solution to (1.1), thus v ≡ 0. Moreover, we take a smooth cutoff function χ : ℝ → [0, 1] such that χ = 1 in (−∞, −1) and χ = 0 in (0, ∞), and we set the truncated potential accordingly. We will denote W̃ := W̃_0. It is possible to see that W̃_δ enjoys the following properties. In the sequel, we will be interested in a solution to (3.10), for δ ≥ 0 small enough and R large. This will be used as a barrier in the proof of Proposition 12, which relies on a sliding method. Such a solution can be obtained by a variational technique, by minimising the functional (3.11). The case δ = 0 is treated in Lemma 2.4 of [8]. Lemma 19 Let δ_0 > 0 be so small that W̃_δ(z_3(δ)) < α/2, for any δ ∈ [0, δ_0). Then, for any R > 0 and δ ∈ [0, δ_0), there exists a minimiser β_{R,δ} ∈ C²(B_R) of (3.11) among all functions with trace z_1(δ) on ∂B_R. Moreover, there exists R_0 > 0 such that, for any R ≥ R_0 and for any δ ∈ [0, δ_0), there exists a solution β_R of (3.10) with δ = 0 satisfying the stated bounds. Proof Existence follows from coercivity and weak lower semicontinuity. By the fact that W̃_δ ≡ α in (−∞, μ(δ)) and (3.8), we can see that the minimiser actually has to satisfy z_1(δ) ≤ β_{R,δ} ≤ z_3(δ); thus, due to the strong maximum principle, either (3.12) holds or β_{R,δ} ≡ z_1(δ). Now we prove (3.13), which, in particular, shows that β_{R,δ} > z_1(δ) in B_R, at least for R ≥ R_0. In order to do so, we assume that there exists a sequence R_k → ∞ and a sequence δ_k ∈ [0, δ_0) for which the claim fails. It follows that, on the one hand, (3.15) holds, where ω_N denotes the surface area of S^{N−1}. On the other hand, if, for R > 1 and δ ∈ [0, δ_0), we take w_{R,δ} to be equal to z_1(δ) on ∂B_R and to z_3(δ) in B_{R−1}, with |∇w_{R,δ}| bounded uniformly in δ, we can see that there exists a constant C > 0 such that the opposite bound holds for k large enough. This contradicts the minimality of β_{R_k,δ_k}. Finally we prove (3.14). In the forthcoming argument, R > 0 will always be arbitrary but fixed. We observe that, since β_{R,δ} is bounded uniformly in R > 0 and δ > 0, any sequence δ_k → 0 admits a subsequence, that we still denote by δ_k, such that β_{R,δ_k} converges in C²(B_R). Since the convergence is uniform and (3.12) holds, the claim follows as δ → 0. Moreover, by (3.13) and the strong maximum principle, the inequality is strict. Now we can prove Proposition 12. Proof It is enough to prove that, if there exists a sequence δ_k → 0, a sequence u_{δ_k} of solutions to (1.1) and ν > −1 satisfying the stated condition, then there exists a subsequence δ_k such that u_{δ_k} ≡ z_3(δ_k).
Claim For any ε > 0 and ρ > 0, there exists a subsequence, which we still denote by u_{δ_k}, and a sequence x_k ∈ ℝ^N such that the stated properties hold. Therefore the sequence u_k(x) := u_{δ_k}(x + x_k) admits a subsequence converging, in C²_loc(ℝ^N), to a solution u_∞ of the Allen–Cahn equation (3.20). By (3.19), we can see that u_∞(0) = 1, thus u_∞ ≡ 1. As a consequence, for any ε > 0 (small) and ρ > 0, there exists a subsequence (still denoted by u_k) satisfying the required estimate, hence the claim is true. In order to prove our result, we first observe that, by (3.13), for δ_0 small as in Lemma 19 and δ ∈ (0, δ_0), there exists R > 0 and a solution β_{R,δ} to (3.10) satisfying the corresponding bound. Moreover, by (3.14), there exists a solution β_R to the limit problem and δ_1 = δ_1(R) > 0 such that the analogous bound holds for any δ ∈ (0, δ_1). As a consequence, for any δ ∈ (0, δ̄), where δ̄ = δ̄(R) := min{δ_0, δ_1(R)}, we get the desired comparison. Now, applying the claim with ρ = R, we can prove the existence of a subsequence, still denoted by u_{δ_k}, and a sequence x_k in ℝ^N for which the barrier comparison holds. Sliding β_{R,δ_k}, with k ≥ k_0 fixed, we get the lower bound. In conclusion, by Lemma 18, u_{δ_k} ≡ z_3(δ_k). • u_δ is periodic in x_N. Then u_δ → −1 as δ → 0, uniformly on compact subsets of ℝ^N. Proof By Lemma 13, the family u_δ is uniformly bounded, hence any sequence δ_k → 0 admits a subsequence, that we still denote by δ_k, such that u_{δ_k} converges in C²_loc(ℝ^N) to a solution u_∞ of the Allen–Cahn equation (3.20). Since the u_δ are all non-constant solutions, by Proposition 12 we have inf_{ℝ^N} u_δ → −1 as δ → 0.
2022-11-29T14:16:08.817Z
2020-03-23T00:00:00.000
{ "year": 2020, "sha1": "6f3fe16815319b2012fcffca83a082d2b23e1548", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00526-020-1727-5.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "6f3fe16815319b2012fcffca83a082d2b23e1548", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
249745964
pes2o/s2orc
v3-fos-license
Economic burden of sickle cell disease in Brazil Background Sickle cell disease (SCD) may cause several impacts on patients and society as a whole. About 4% of the population has the sickle cell trait in Brazil, and 60,000 to 100,000 have SCD. However, despite recognizing the significant burden of disease, little is known about SCD costs. Objective To estimate SCD societal costs based on disease burden modelling, under the Brazilian societal perspective. Methods A disease burden model was built considering the societal perspective and a one-year time horizon, including direct medical and indirect costs (morbidity and mortality). The sum of years of life lost and years lived with disability was considered to estimate disability-adjusted life years (DALYs). Data from a public database (DATASUS) and the prevalence obtained from the literature or from medical experts were used to define complications' prevalence and duration. Costs were defined using data from the Brazilian public healthcare system table of procedures and medications (SIGTAP) and the human capital method. Results The annual SCD cost was 413,639,180 USD. Indirect costs accounted for the majority of the burden (70.1% of the total; 290,158,365 USD vs 123,480,816 USD). Standard of care and chronic complications were the main sources of direct costs among adults, while acute conditions were the main source among children. Vaso-occlusive crisis represented the complication with the highest total cost per year in both populations, 11,400,410 USD among adults and 11,510,960 USD among children. Conclusions SCD management may impose an important economic burden on Brazilian society that may reach more than 400 million USD per year. Introduction Sickle cell disease (SCD) involves a group of inherited conditions in which both alleles for beta-globin are mutated and at least one is the mutation for beta-S-globin, with a quantitative predominance of hemoglobin S within the red blood cells. Depending on the mutation type, heterozygosity or homozygosity, the disease is classified as sickle cell anemia (SCA; hemoglobin SS), hemoglobin SC disease, or hemoglobin S/β-thalassaemia [1]. The disease is most commonly observed among sub-Saharan Africans; however, it is observed globally. Considering SCA frequency, it is estimated that 300,000 children are born globally with the disease per year; this number is projected to reach 400,000 in 2050 [2]. In Brazil, a study has estimated that one in every six newborns has abnormal hemoglobins, encompassing all hemoglobinopathies [3]. Another study reported that 3.9% of the adults receiving treatment at hematology outpatient clinics have SCD [4]. The Brazilian Ministry of Health estimates that about 4% of the population have the sickle cell trait and that homozygous or compound heterozygous disease is present in 60,000 to 100,000 people [5]. Furthermore, a mortality rate of 1.12/100,000 inhabitants is estimated [6]. Despite the efforts promoted by the Ministry of Health to propose national public policies, disparities in SCD patients' care are still observed across the country's regions [7]. SCD management may include the use of hydroxyurea, folic acid, blood transfusion, iron chelators, antibiotic therapies, vaccination, and hematopoietic stem cell transplantation [5,8]. However, only hematopoietic stem cell transplantation has curative intent, and only for a few patients. Hydroxyurea and red blood cell transfusion are the major disease-modifying therapies. Despite treatment, patients still experience several systemic manifestations of SCD [8,9].
Since SCD is a multisystem disorder, every organ in the body can be affected. Patients can present multiple complications such as acute chest syndrome, acute ischemic stroke, splenic sequestration, avascular necrosis, leg ulcerations, priapism, cholelithiasis, vaso-occlusive crises (VOCs), and others [1,10]. A Brazilian study has reported that only 4.63% of deaths among SCD patients are unrelated to the disease. The leading causes of death are infection (29.18%) and acute chest syndrome (25.27%) [6]. Furthermore, patients with a higher number of VOCs in a year have an increased likelihood of experiencing other SCD-related complications and death [11]. Considering that SCD patients have higher morbidity and mortality, understanding the disease burden and the impact of its several complications is important to support health policy-making decisions in the country [2]. In addition, the Brazilian public healthcare system has equity as one of its principles, which turns the disease burden into an impact on society [12]. The disease burden has stimulated many interventions, such as neonatal screening, bacterial prophylaxis, and comprehensive healthcare management [2]. In Brazil, the clinical protocol and therapeutic guidelines published by the Ministry of Health in 2018 propose similar disease management and screening [5]. Despite these initiatives and the increase in survival, mortality rates are still high in Brazil. Arduini et al. (2017) conducted a systematic review aiming to characterize mortality by SCD in Brazil with respect to the frequency, death rate or mortality coefficient, age, and causes. Mortality rates ranged from 0.115 to 0.54 per 100,000 individuals, depending on the studied population, region, and other variables. These data highlight the scarcity of information about SCD-related deaths in Brazil [13]. In addition, Brazilian SCD patients' life expectancy is about 20 years lower than that observed for the whole country [6]. Beyond the mortality rates and the impact on patients' lives, little is known about SCD costs. Most of the available studies report only costs among hospitalized patients [14][15][16][17]. Kauf et al. (2009) reported an average total cost of care per patient-month of USD 1,946 [18]. Shah and coworkers (2020) reported a mean annual cost per patient of USD 1,204 and highlighted the severe impact of VOCs on resource utilization [19]. However, only direct medical costs were included [18,19]. In Brazil, Lobo and colleagues (2022) reported total costs of emergency visits and hospitalizations higher than USD 500,000 [20]. Nevertheless, only a single study, conducted in a single center and reporting direct costs, is available to date. Considering the lack of knowledge on the SCD burden, this study aims to estimate SCD societal costs based on disease burden modelling, under a Brazilian societal perspective. Methods A disease burden model was built considering the societal perspective and a one-year time horizon, including direct medical costs and indirect costs (morbidity and mortality). Although a time horizon of one year was used for the years lost to disability (YLD) estimation, the years of life lost (YLL) calculation used the life expectancy approach. The indirect cost captures productivity lost due to early death. The analysis followed the methodology proposed by Larg & Moss (2011) for a cost-of-illness evaluation.
The study should include an analytical framework (study perspective and motivation, epidemiologic approach, and a well-specified question), an adequate definition of methods and data regarding productivity loss and resource use (quantification, definition of healthcare resource and productivity loss values, and the inclusion of intangible costs), as well as adequate data analysis and reporting (presence of a range of estimates, identification of main uncertainties, sensitivity analysis, adequate documentation and justification given for cost components, data and sources, assumptions and methods, identification of main study limitations, and a level of detail in the results sufficient to answer the study questions) [21]. The DALY calculation was aided by the WHO's DALY calculator tool, following the methodology depicted in WHO methods and data sources for global burden of disease estimates 2000-2011 [22,23]. Age-weighting and the discount rate were not used, as recommended by the World Health Organization. The definition of DALY is given by Eq 1 [24]: DALY = YLL + YLD. YLL is estimated by Eq 2, YLL = Σ_a N_d(a) × L(a), where N_d(a) is the number of deaths by SCD at age a and L(a) is the life expectancy at age a. YLD is also given by Eq 2, through the prevalence method, as YLD = Σ_a Σ_c N_p(a) × p(c, age group) × AER(c) × DW(c) × d(c), where N_p(a) is the number of SCD patients at age a, p(c, age group) is the prevalence of complication c (which can be chronic or acute) by age group (children or adults), AER(c) is the annualized event rate of c, DW(c) is the disability weight of complication c, and d(c) the duration of the complication c from the onset until remission or death [23]. N_d(a) is calculated by applying the SCD mortality rate to the prevalent population. However, the raw number from the official registry was not used, as death by SCD is known to be underreported in Brazil. Instead, the resulting total number of deaths per year was distributed through age according to the ICD-10 code D57 in the Brazilian official death registry. Thereby, the death proportion by age and the estimated total number of deaths per year were used. Life expectancy was extracted from the National Mortality Table published by the Brazilian Institute of Geography and Statistics (IBGE) [25] and disability weights (DW) from the Global Burden of Disease 2010 (GBD 2010) and the Institute for Health Metrics and Evaluation (IHME) [26]. When DW were not found in the references mentioned above, the economic evaluation literature was searched for utility values used as proxies for DW. Complications' prevalence was extracted from the literature, when available, or from expert medical opinion. The annualized event rate (AER) and complication duration were calculated from DATASUS, an open database that centralizes claims from the whole Brazilian public healthcare system. The data analysis of DATASUS comprised the period from January/2008 to December/2018, considering all hospitalizations registered with ICD-10 D57 (SCD). Annualized hospitalization rates were estimated as the ratio between the complication counts and the total number of patient-years in DATASUS. The average event duration was based on inpatient data related to more severe cases. Therefore, the burden of some events may be overestimated. This limitation could not be avoided, as outpatient event duration data are unavailable, and it must be noted as a limitation of the present study. Direct medical costs were estimated using a bottom-up strategy. A literature review identified SCD treatment and its main complications (the YLL and YLD definitions above are illustrated in the sketch below).
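A minimal sketch of the DALY computation under the definitions above follows. Every input value (ages, counts, weights, rates) is an invented placeholder rather than a study input; only the average annual income figure (7,416.45 USD) is the one quoted in the Methods below, and the final line anticipates Eq 3.

# Schematic implementation of Eq 1 (DALY = YLL + YLD) and Eq 2 via the
# prevalence method. All inputs below are illustrative placeholders.

ages = [5, 30]                           # representative ages of two groups
deaths = {5: 120, 30: 180}               # N_d(a): SCD deaths at age a
life_expectancy = {5: 68.0, 30: 45.0}    # L(a): remaining life expectancy
patients = {5: 20_000, 30: 30_000}       # N_p(a): prevalent SCD patients

# Complication c: (prevalence p(c), annualized event rate AER(c),
#                  disability weight DW(c), duration d(c) in years)
complications = {
    "VOC":    (0.70, 5.3, 0.10, 15 / 365),
    "stroke": (0.05, 0.2, 0.50, 1.0),
}

yll = sum(deaths[a] * life_expectancy[a] for a in ages)       # Eq 2, YLL
yld = sum(
    patients[a] * p * aer * dw * d                             # Eq 2, YLD
    for a in ages
    for (p, aer, dw, d) in complications.values()
)
daly = yll + yld                                               # Eq 1
indirect_cost = daly * 7_416.45  # Eq 3: DALY x average annual income (USD)
print(f"YLL={yll:,.0f}  YLD={yld:,.0f}  DALY={daly:,.0f}  "
      f"indirect cost={indirect_cost:,.0f} USD")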
The costing of these treatments and complications was defined through the microcosting method, broadly defined in two steps: definition of health resource use and subsequent costing [27]. The first was extracted from published guidelines, while the latter was set from the Brazilian public healthcare system table of procedures and medications (SIGTAP). Due to many uncertainties, direct non-medical costs were not included, since they can hardly be evaluated. Nevertheless, this is a conservative approach, since costs such as transportation for medical appointments may impose an important burden, as the few reference centers are usually far from patients' residences. Indirect costs were based on the average annual income in Brazil (7,416.45 USD) published by the IBGE in 2020 [28]. The total disease burden was calculated by Eq 3 (total disease burden): Indirect costs = DALY × Annual income; Direct costs = N_p(a) × p(c, age group) × AER(c) × EMC(c) + TC(age group), where EMC(c) is the event management cost (direct costs only) of complication c and TC is the SCD standard management cost (hydroxyurea, folic acid, blood transfusion, iron chelators, hematopoietic stem cell transplantation (HSCT), antibiotic therapy and vaccination) by age group. A literature review was performed until October 2019 using the MEDLINE database via PubMed and the Latin American and Caribbean Health Sciences Literature (LILACS). Data from a multinational study were also used when information was not available in the national literature [29]. An expert panel was further conducted, on June 17, 2020, on an online platform. Five Brazilian experts were responsible for validating all data retrieved in the literature review and provided inputs on those without published information. Values were reported in US dollars (USD), with 1 USD = 3.88 Brazilian Reais (BRL). Parameter uncertainty was evaluated under a deterministic sensitivity analysis. Parameters were varied according to their respective 95% CI, when available; otherwise, a standard variation of ±20% was applied. The results of the analysis were expressed as a tornado diagram. Epidemiology Given the scarcity of information and the fact that the disease prevalence in Brazil varies in the available literature, a prevalence of 24.0 cases per 100,000 inhabitants was defined, as proposed by medical experts (ME). According to the IBGE population projection, this prevalence results in approximately 50,000 patients with the disease in 2018, segmented by age [30]. A mortality rate of 2.68/1,000,000 inhabitants was assumed as the weighted average for men and women, resulting in 558 patients lost to the disease in the same year [31]. The SCD mortality rate is presumably underreported because the primary diagnosis is often omitted and only the immediate cause of death, such as an acute stroke or acute chest syndrome, is recorded. Total deaths were age-segmented according to mortality data from the Mortality Information System (Fig 1) [32]. Table 1 shows the complications' prevalence, the DW extracted from the literature, the mean hospitalization duration of acute events, and the annual incidence rate, stratified by adults and children. VOC was the most frequently observed acute complication, with the longest length of hospital stay; however, the most significant disability reported in the literature was related to stroke. Table 2 shows the standard of care costs per patient. Annual costs attributed to chronic and acute complications are shown in Table 3. The SCD-related cost was composed of the summation of the standard of care and the chronic complications' costs.
Acute complications and HSCT costs were calculated per event, not as chronic costs. The assumed average standard of care costs were 1,835.92 USD and 987.21 USD, while those for chronic complications were 769.30 USD and 116.62 USD, for adults and children, respectively. Acute complications' costs were estimated as 595.68 USD and 703.71 USD for adults and children, respectively. Despite its high cost per event, HSCT-related costs were estimated as 60.94 USD and 39.61 USD for adults and children, respectively. This apparent inconsistency is directly related to the low incidence of the procedure. Economic burden The annual SCD cost in Brazil was approximately 414 million USD, or 1.6 billion BRL, per year, of which 290 million USD (1.1 billion BRL) and 123 million USD (479 million BRL) were related to indirect and direct costs, respectively. Approximately 41,000 DALYs were lost in 2018, 27,000 due to death and 14,000 due to disability (Table 4). Deterministic sensitivity analysis The parameters that most influenced the model results were SCD prevalence, annual income, the VOC annualized rate, and SCD base utility. Incrementing the first three parameters resulted in an increased disease burden, while improving the SCD base utility reduced the disease burden. This behavior is expected, since the improvement of SCD base utility results in reduced patient absenteeism. Fig 2 shows the complete results of the deterministic sensitivity analysis. The values and ranges considered for each parameter are in the S1 File. Discussion This study aimed to estimate the SCD burden, considering the Brazilian societal perspective. To the best of the authors' knowledge, this is the first study to assess Brazil's SCD burden, and it adds important knowledge to improve disease management and decision-making. Economic burden analyses related to several hematologic conditions were previously conducted in Brazil [20,[46][47][48][49][50]. However, there are no studies to date using similar methods to estimate the burden of illness in the country. Data related to disease epidemiology were collected to allow SCD burden estimation in Brazil. Mortality data available in the information system were analyzed stratified by age group. It was possible to notice that mortality rates are still higher at young ages, i.e., among children. This pattern is quite different from that observed in developed countries. Payne et al. (2020) assessed the trends in SCD-related mortality among black Americans considering the 1979-2017 period and reported a decline in death rates among children and an increase among adults, with the median age at death increasing from 28 to 43 years [51]. Thus, it is possible to highlight that the disease burden may still be higher in developing countries, such as Brazil, considering the delay in disease management. High mortality rates were also reported in other studies. Cançado et al. (2021) reported that SCD is related to a reduction of approximately 37 years in the median age at death, compared to the general population, which was also observed when data were stratified by age [52]. In the present study, the total cost related to SCD in a year was estimated at 413,639,180 USD, considering both adults and children. Most of this burden was related to indirect costs, representing 70.2% of the total amount. Previous studies worldwide assessing the SCD burden mostly reported data considering only direct medical costs [53][54][55][56][57][58][59][60][61]. Naik et al.
(2019) reported a total direct annual cost that ranged from 1 million USD to 3 million USD and an average annual indirect cost of 1,293 USD in a systematic review aiming to synthesize the worldwide SCD economic burden. The lack of detailed information in Naik et al. (2019) does not allow us to understand the reason for the differences observed [62]. However, in another review that aimed to estimate the SCD economic burden in the United States, the authors highlighted the need for studies considering both direct and indirect costs to characterize the full burden of disease [61]. Thus, despite the differences between the estimates shown in the present study and those in the analysis from Naik et al. (2019), the impact of indirect costs on the total SCD burden still needs to be better understood in further studies [62]. Still regarding the indirect disease burden, the estimated total DALY loss in 2018 was 22,755 and 18,085 among adults and children, respectively. Considering both premature death and the years lived with disability, the disease substantially impacts the whole Brazilian society. Similarly, Rezaei et al. (2015) assessed DALY estimates due to hemoglobinopathies (thalassemia, SCD, and G6PD-D) in Iran by sex and age in 1990, 2005, and 2010. Using data from the Global Burden of Disease study, the authors reported total DALYs among SCD patients of 51,129 and 30,501 in 1990 and 2010, respectively, showing a decreasing trend across the years. However, in contrast to our findings, DALYs lost per adult were lower than those lost per child (369 among children ≤14 years vs. 88 among adults ≥15 years, in 1990; 204 among children ≤14 years vs. 66 among adults ≥15 years, in 2010) [63]. The SCD direct medical costs were estimated at 123,480,816 USD for 2018 in Brazil. Standard of care was the main driver of direct costs, followed by acute complications, in both populations. However, among children, acute complications' costs were about six times higher than those observed for chronic ones, while among adults the values were closer. This difference is influenced by the frequency of such complications, since children have a higher frequency of acute conditions and adults of chronic ones [34,42]. Considering the total costs related to complications' management in a year, VOC represents the most expensive acute condition in both populations. About 87% of all acute complications' costs were related to VOCs among adults. The condition's occurrence may explain this finding, since VOC was the most frequently observed complication among children (59.5%) and adults (75.0%). In addition, the condition was responsible for the highest annualized hospitalization rate (5.30 per year) and length of hospital stay (mean duration of 15 days). Previous studies in Brazil reported that acute painful episodes are the leading cause of hospitalization among SCD patients, reaching about 70% of the cases [64,65]. VOCs disrupt blood circulation, which drives acute and chronic pain, in addition to damage to key organs such as the liver, brain, lungs, and kidneys [66]. In addition, the condition is directly associated with mortality, with a risk of death 5.5 times higher among patients with ≥3 VOCs per year compared to those with <1 episode [11]. In the present analysis, the estimated cost per event was 130.36 USD. Despite the low cost, SCD patients may experience ≥6 VOCs per year, which explains the amount spent in a year (11,400,410 USD among adults and 11,510,960 USD among children) [67].
The burden related to VOC may be even higher, since some issues such as hospital bed occupancy were not considered in this analysis. These data highlight the importance of strategies to control the disease and avoid the occurrence of VOCs. Furthermore, patients with uncontrolled disease seem to have higher resource utilization and costs [68]. This study was designed to follow the checklist proposed by Larg & Moss (2011), and most of the items were addressed. However, it was impossible to control for confounders and to provide the required level of detail on resource use and productivity loss. Data were extracted from the literature and from key-opinion-leader reports, which may not represent the Brazilian scenario in some cases. Still, these were the sources available when the study was conducted. To address some of these limitations, a deterministic sensitivity analysis was conducted to estimate the influence of the model parameters on the modeled results. It is important to note that parameters were varied according to their respective 95% CI, when available, or using a standard variation of ±20%, considered wide enough to evaluate the model behavior under uncertainty [21]. The results highlight the model's sensitivity, especially to SCD prevalence, which is almost natural, as an increase in the number of cases will inevitably lead to a higher burden of the disease. The same happens with an increase in annual income, as the magnitude of the results is directly related to indirect costs, which are income dependent. Finally, an increase in SCD base utility reduces the DW (better quality of life), reducing the disease burden. The deterministic sensitivity analysis results thus point to the need for SCD prevalence studies, which can refine future disease burden analyses. Although the study adds valuable knowledge, these limitations need to be highlighted. In addition, the absence of direct non-medical costs may translate into an underestimation of the disease burden, since the availability of reference centers in only a few places may imply the need for frequent transportation for medical assistance, change of residence, and others. In contrast, the mortality data quoted come from a specialist hospital and may reflect a more severe population. Considering that well-managed SCD patients may not need such resources, further analysis addressing this limitation is needed. Finally, the hospitalization rate was estimated from data available in DATASUS. Although it is a national source of information, only events related to public assistance are reported, and hospitalizations are often not registered as SCD-related, underestimating the occurrence of the outcome.
Conclusion
The present analysis showed that SCD patients may generate an economic burden for Brazilian society of approximately 400 million USD per year. The indirect burden is responsible for most of this cost; however, disease complications play an important role in direct medical costs. Thus, the present data highlight the importance of investment in disease control to reduce its impact on patients and society.
Analysis of Synonymous Codon Usage Bias of Zika Virus and Its Adaption to the Hosts
Zika virus (ZIKV) is a mosquito-borne virus (arbovirus) in the family Flaviviridae, and the symptoms caused by ZIKV infection in humans include rash, fever, arthralgia, myalgia, asthenia and conjunctivitis. Codon usage bias analysis can reveal much about the molecular evolution and host adaption of ZIKV. To gain insight into the evolutionary characteristics of ZIKV, we performed a comprehensive analysis of the codon usage pattern in 46 ZIKV strains by calculating the effective number of codons (ENc), codon adaptation index (CAI), relative synonymous codon usage (RSCU), and other indicators. The results indicate that the codon usage bias of ZIKV is relatively low. Several lines of evidence support the hypothesis that translational selection plays a role in shaping the codon usage pattern of ZIKV. The results from a correspondence analysis (CA) indicate that other factors, such as base composition, aromaticity, and hydrophobicity, may also be involved in shaping the codon usage pattern of ZIKV. Additionally, the results from a comparative analysis of RSCU between ZIKV and its hosts suggest that ZIKV tends to evolve codon usage patterns that are comparable to those of its hosts. Moreover, selection pressure from Homo sapiens on the ZIKV RSCU patterns was found to be dominant compared with that from Aedes aegypti and Aedes albopictus. Taken together, both natural translational selection and mutation pressure are important for shaping the codon usage pattern of ZIKV. Our findings contribute to understanding the evolution of ZIKV and its adaption to its hosts.
Introduction
Zika virus (ZIKV) is classified as a mosquito-borne arbovirus of the family Flaviviridae, genus Flavivirus [1]. This virus was first isolated from a blood sample of a Rhesus monkey in Uganda in 1947 and, before its outbreak in Oceania in 2007, it was confined to Africa and Southeast Asia [1]. Since then, ZIKV has been circulating in the Americas, and in May 2015, the first case of ZIKV originating from the Americas was reported in Brazil. Thus far, ZIKV has expanded from South America to more than 28 countries and has aroused the attention of the World Health Organization (WHO) as well as that of many governments [2,3]. Clinical presentation of ZIKV fever is non-specific; the most common symptoms are rash, fever, arthralgia, myalgia, asthenia, and conjunctivitis. ZIKV is thought to be transmitted to humans mainly through the bites of infected Aedes mosquitoes.
Relative abundance of dinucleotides. The relative abundance of the dinucleotide XY was calculated as
$\rho_{xy} = \dfrac{f_{xy}}{f_x f_y}$
where $f_x$ denotes the frequency of the nucleotide X, $f_y$ the frequency of the nucleotide Y, and $f_{xy}$ the frequency of the dinucleotide XY. As a criterion, if $\rho_{xy} > 1.23$ or $< 0.78$, the XY dinucleotide is considered to be over-represented or under-represented compared with a random association of mononucleotides [22].
Comparison between the codon usage pattern in ZIKV and those in its hosts
RSCU. RSCU was employed to investigate the overall synonymous codon usage bias among the genes, and this value is defined as the ratio of the observed codon usage to the expected value [23]. Codons with an RSCU value of >1.6 were regarded as over-represented, while codons with an RSCU value of <0.6 were regarded as under-represented. Codons used at an average level (no bias) have an RSCU value of 1 [24]. In our comparison of the ZIKV codon usage pattern with those of its hosts, if the RSCU value for the polyprotein-coding region of ZIKV and that of the same codon for the host were both <0.6, >1.6, or between 0.6 and 1.6, their codon usage patterns were judged to be similar [25].
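As an illustration of the RSCU calculation just described, the sketch below computes RSCU values from raw codon counts and flags over- and under-represented codons using the 1.6/0.6 cut-offs. It is a minimal sketch, not the authors' pipeline; the two synonymous families and the counts are hypothetical, and a real analysis would tabulate all 59 informative codons from the polyprotein-coding sequence.

```python
# Minimal RSCU sketch: observed codon count divided by the mean count
# of its synonymous family (equal usage within a family gives RSCU = 1).
SYN_FAMILIES = {  # hypothetical two-family fragment of the codon table
    "Phe": ["UUU", "UUC"],
    "Ala": ["GCU", "GCC", "GCA", "GCG"],
}

def rscu(counts: dict) -> dict:
    """RSCU = observed codon count / mean count of its synonymous family."""
    values = {}
    for codons in SYN_FAMILIES.values():
        family_total = sum(counts.get(c, 0) for c in codons)
        expected = family_total / len(codons)
        for c in codons:
            values[c] = counts.get(c, 0) / expected if expected else 0.0
    return values

counts = {"UUU": 40, "UUC": 60, "GCU": 10, "GCC": 50, "GCA": 30, "GCG": 10}
for codon, v in rscu(counts).items():
    tag = "over" if v > 1.6 else "under" if v < 0.6 else "average"
    print(f"{codon}: RSCU = {v:.2f} ({tag}-represented)")
```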
The codon usage data of ZIKV's hosts, including human (Homo sapiens) and mosquitoes (A. aegypti and A. albopictus), were retrieved from the codon usage database (http://www.kazusa.or.jp/codon).
D(A,B). To determine the influence of the overall codon usage of hosts on that of ZIKV, the similarity index D(A,B) [26] was calculated as follows:
$R(A,B) = \dfrac{\sum_{i=1}^{59} a_i b_i}{\sqrt{\sum_{i=1}^{59} a_i^2 \sum_{i=1}^{59} b_i^2}}$
$D(A,B) = \dfrac{1 - R(A,B)}{2}$
where R(A,B) is defined as the cosine of the included angle between the A and B spatial vectors and represents the extent of similarity between ZIKV and its hosts in terms of the overall codon usage pattern. $a_i$ is defined as the RSCU value of a specific codon among the 59 synonymous codons in the ZIKV polyprotein-coding region, and $b_i$ as the RSCU value of the same codon in ZIKV's hosts. D(A,B) represents the potential effect of the overall codon usage of the hosts on that of ZIKV, and its value varies from 0 to 1 [26]. A higher D(A,B) indicates a stronger influence of the environment-related synonymous codon usage patterns of the hosts on that of ZIKV.
tRNA adaptation index. The tRNA adaptation index (tAI) is used to estimate tRNA usage for the coding sequences of a species [27]. It represents the level of co-adaptation between a specific codon and the corresponding tRNA pool and shows greater correlation with protein abundance than other indicators [28]. The tAI value of the ZIKV polyprotein-coding region, based on the tRNA copy numbers of H. sapiens, was calculated with Visual Gene Developer [29].
Effect of mutation pressure and translational selection on the codon usage bias
ENc-GC3s plot. An ENc-GC3s plot was used to investigate the influence of the GC3s content on codon usage [20]. The expected ENc value for each GC3s was calculated using the following formula:
$\mathrm{ENc} = 2 + s + \dfrac{29}{s^2 + (1-s)^2}$
where s represents the GC3s value.
Parity rule 2 (PR2) plot. A parity rule 2 (PR2) plot was used to assess the influence of mutation pressure and translational selection on the codon usage of genes [30]. This plot shows the AU-bias [A3/(A3+U3)] as the ordinate and the GC-bias [G3/(G3+C3)] as the abscissa at the third codon position of the four-codon amino acids. The center of the plot, where both coordinates are 0.5, is the position where A = U and G = C (PR2), with no bias between the influence of mutation and translational selection rates.
Neutrality plot (GC12 vs. GC3). Analysis of the correlation between the GC content at the first and second codon positions (GC12) and that at the third codon position (GC3) is useful to examine the effect of mutation pressure and translational selection on the base composition [31]. Therefore, GC12 and GC3 were calculated using the EMBOSS CUSP program [32] and then subjected to correlation analysis.
Correspondence analysis (CA)
Correspondence analysis (CA) is a useful multivariate statistical method for studying the internal relationship between variables and samples [33]. The mathematical procedure of CA transforms the RSCU values into a series of dimensional factors, and the results can be used to analyze the major trend in codon usage patterns among different samples. Each gene is represented by 59 dimensional variables, and each dimension matches the RSCU value of one codon, with the exclusion of AUG, UGG and the stop codons. The CA was performed using the CodonW 1.4.2 program. The first two axes of CA (Axis 1 and Axis 2) were subjected to a correlation analysis.
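To make two of these indices concrete, the sketch below implements the similarity index D(A,B) and the expected-ENc curve exactly as written above. It is a minimal sketch rather than the authors' code; the four-component RSCU vectors are hypothetical stand-ins for the full 59-codon vectors.

```python
# Minimal sketch of D(A,B) (cosine-based similarity of RSCU vectors) and
# the expected ENc under mutation pressure alone, as defined above.
import math

def similarity_index(a: list, b: list) -> float:
    """D(A,B) = (1 - R(A,B)) / 2, with R the cosine between RSCU vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    r = dot / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return (1.0 - r) / 2.0

def expected_enc(gc3s: float) -> float:
    """Expected ENc for a given GC3s value s: 2 + s + 29 / (s^2 + (1-s)^2)."""
    s = gc3s
    return 2.0 + s + 29.0 / (s ** 2 + (1.0 - s) ** 2)

zikv_rscu = [0.8, 1.2, 2.0, 0.4]   # hypothetical RSCU values
host_rscu = [0.9, 1.1, 1.8, 0.5]
print(f"D(A,B) = {similarity_index(zikv_rscu, host_rscu):.4f}")
print(f"Expected ENc at GC3s = 0.515: {expected_enc(0.515):.2f}")
```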
Statistical analysis
The statistical analyses were performed using the SPSS 13.0 software package (SPSS Inc., Chicago, USA). Correlation analyses were carried out using Spearman's rank correlation test. P-values less than 0.05 were considered statistically significant.
Phylogenetic analysis of ZIKV based on the polyprotein-coding region
To determine the phylogenetic relationship of different ZIKV strains, a phylogenetic tree was drawn (Fig 1). The results show that the 46 strains of ZIKV can be divided into two groups (I, II) and that strains isolated from the same geographic regions cluster together (Fig 1). It can be seen that the members isolated from Africa, including Senegal, the Central African Republic and Uganda, first cluster together and form a separate branch, and subsequently cluster with the members isolated from other countries all over the world (Fig 1).
Nucleotide composition analysis
The GC3s content is a useful indicator of the extent of the base composition bias, representing the frequency of the nucleotides G+C at the synonymous third codon position, excluding Met, Trp, and the termination codons. The mean GC content of the 46 tested strains of ZIKV is 50.98% (50.40-51.20%; SD, 0.216), while the average GC3s content is 51.53% (49.60-52.10%; SD, 0.685) (Table 1). An analysis of nucleotide composition at the third position of synonymous codons (G3s, A3s, U3s, C3s) indicates that the mean values of C3s (31.97%) and A3s (33.32%) are higher than those of G3s (31.87%) and U3s (24.95%) in the ZIKV polyprotein-coding region (Table 1). Moreover, it was found that the G and A nucleotides are abundant, with mean values of 29.16% and 27.50%, respectively, while the average values of the U and C nucleotides were 21.52% and 21.82%, respectively (data not shown in the tables). The G and A contents are significantly higher than the U and C contents (Student's t test, p<0.01). These results highlight that there is a GA-rich composition in the ZIKV polyprotein-coding region.
The synonymous codon usage characteristics of the ZIKV polyprotein-coding region
ENc was used to quantify the codon usage bias of each gene [20]. ENc values can range from 20 to 61, and lower values of ENc represent higher levels of codon usage bias. To measure whether or not ZIKV strains show similar codon usage biases, the ENc values of the 46 different strains were calculated. The ENc values of the ZIKV polyprotein-coding regions vary from 52.13 to 55.00, with a mean value of 53.32 and a standard deviation (SD) of 0.81, showing that the codon usage bias of ZIKV is low (Table 1). The CAI value is a universal measure of the synonymous codon usage of genes in different organisms and can be used to analyze the adaptation of a species to its hosts [17]. CAI values can range from 0 to 1, and higher CAI values signify higher levels of codon usage bias. We found that, in relation to humans, the CAI values of the ZIKV polyprotein-coding regions range from 0.734 to 0.741, with an average value of 0.740 and an SD of 0.002 (Table 1). This study of 46 ZIKV strains revealed that the codon usage bias in the polyprotein-coding region of ZIKV is low, as the mean ENc value of the ZIKV polyprotein-coding regions is 53.32 (>40). This result is analogous to those of previous studies, which found that some RNA viruses, such as hepatitis A virus, bovine viral diarrhea virus, SARS-coronavirus, Newcastle disease virus, Marburg virus, and swine fever virus, also show a weak codon usage bias [22,34-38].
A possible explanation for this is that the low codon usage bias may be beneficial for the efficient transcription and translation of virus genes in host cells [22]. In addition, ZIKV shows a high CAI value (0.740) for H. sapiens, suggesting that natural selection from H. sapiens can affect the codon usage of ZIKV and that the evolution of codon usage in ZIKV has enabled it to utilize the translational resources of H. sapiens more efficiently. This is similar to Marburg virus, which also has a higher CAI value for H. sapiens but shows low codon usage bias [37].
Relationships between the codon usage pattern of ZIKV and those of its hosts
To investigate the synonymous codon usage pattern, the RSCU values of 59 codons (excluding Met, Trp, and the termination codons) in the ZIKV polyprotein-coding regions were calculated. Among the 18 preferred codons, 13 have an end base of A or C, while only five have an end base of U or G; therefore, codons with end bases of A and C are prone to be preferentially utilized in the ZIKV genome (Table 2). To determine if the codon usage pattern of ZIKV is influenced by that of its hosts, the codon usage pattern of ZIKV was compared with the codon usage patterns of its natural hosts, including H. sapiens, A. aegypti, and A. albopictus. We found that 47 of 59 synonymous codons between ZIKV and H. sapiens are equivalently selected, while 40 or 30 of 59 synonymous codons between ZIKV and A. aegypti or A. albopictus, respectively, are similarly selected (Table 2). In general, the degree of similarity in codon usage between ZIKV and H. sapiens is higher than that between ZIKV and A. aegypti or A. albopictus. Specifically, CUG for leucine (Leu), AGC for serine (Ser), GCC for alanine (Ala), UAC for tyrosine (Tyr), CAC for histidine (His), AAC for asparagine (Asn), AAG for lysine (Lys), and UGC for cysteine (Cys) show high similarity between ZIKV and its natural hosts. Additionally, the RSCU values of several codons showed a strong discrepancy between ZIKV and its hosts, such as CUA for Leu, AUA for isoleucine (Ile), CCA for proline (Pro), and CGA/CGG/AGA for arginine (Arg). These results suggest that the selection pressure from the hosts may influence the codon usage pattern of ZIKV, which may assist it in adapting to the cellular environment of the hosts and allow it to replicate efficiently in the hosts [24,31]. Interestingly, the role of the translational selection from H. sapiens in shaping the codon usage pattern of ZIKV is different from that of its insect hosts (A. aegypti and A. albopictus). Compared with the codon usage pattern of A. aegypti or A. albopictus, the codon usage pattern of ZIKV is more similar to that of H. sapiens. This discrepancy in the degree of codon usage similarity between ZIKV and its hosts may be caused by the various defense mechanisms of different hosts against ZIKV infection. Indeed, a recent study indicated that skin immune cells, including fibroblasts, epidermal keratinocytes, and immature dendritic cells, are highly permissive to ZIKV infection and replication, which can lead to the activation of an antiviral innate immune response [39]. Another study found that although A. aegypti and A. albopictus are susceptible to ZIKV infection, they are both low-competence vectors for ZIKV [40]. It is presumed that the evolution of the flavivirus genome sequences involved in anti-host countermeasures may be faster than that of other flavivirus sequences [41].
This may be one reason why the codon usage pattern of ZIKV tends to show more similarities to that of H. sapiens.
Assessing effects of the overall codon usage of hosts on that of ZIKV
To determine how the overall codon usage of ZIKV's hosts has contributed to the virus codon usage bias, the similarity index analysis was carried out. The results indicated that all of the average values of D(A,B) among the three hosts are slightly low, suggesting that ZIKV has adapted to self-replicate efficiently with strong independence from the overall codon usage of its hosts during evolution.
Relationship between dinucleotide biases and codon usage in ZIKV
Previous studies found that dinucleotide compositional constraints of genomes can affect the codon usage bias [33]. Therefore, we determined the relative abundance of the 16 dinucleotides in the ZIKV polyprotein-coding regions. The results show that the occurrences of dinucleotides in ZIKV are not randomly distributed and that no dinucleotide is present at the expected frequency (Table 3). Specifically, the dinucleotides UG and CA are over-represented (ρxy > 1.23), while UA and CG are markedly under-represented (ρxy < 0.78). These data are consistent with previous studies, which suggested that the dinucleotides UA and CG are under-represented in many sequence sets [21]. Moreover, the analysis of the RSCU values of the eight CG-containing codons (UCG, CCG, ACG, GCG, CGU, CGC, CGA, and CGG) suggests that these codons are not preferentially used. Meanwhile, in the case of UA-containing codons, most are not preferentially selected, except for UAC. Taken together, the composition of dinucleotides plays a role in the synonymous codon usage pattern of ZIKV. The relative abundance of dinucleotides has been shown to influence the codon usage in some RNA viruses [42]. In our study, we found relatively low abundances of CpG and UpA in ZIKV, which may be beneficial for the virus to escape the host anti-viral immune response and complete virus transcription efficiently [43]. Unmethylated CpG can be recognized by the host innate immune system as a pathogen signature and activates various immune response pathways [42,44]. UpA deficiency was proposed to benefit the virus by reducing the risk of nonsense mutations, minimizing improper transcription and decreasing the opportunities for cleavage by RNase L [45].
Correspondence analysis and correlation analysis: compositional properties of the ZIKV polyprotein-coding region
The A, U, C, G, and GC contents were compared with the A3s, U3s, C3s, G3s, and GC3s contents, respectively (Table 4). The results show that the correlations in nucleotide composition are complicated. Specifically, both the G and GC contents have a significant negative correlation with the A3s or U3s content, as well as a significant positive correlation with the C3s, G3s, or GC3s content. The A content has a significant negative correlation with the C3s, G3s or GC3s content and a significant positive correlation with the A3s or U3s content. The U content has a significant negative correlation with the C3s or GC3s content as well as a significant positive correlation with the A3s or U3s content, except for the insignificant correlation between the U and G3s contents. The C content has a significant negative correlation with the A3s, U3s, or C3s content as well as a significant positive correlation with the G3s or GC3s content. These data show that the nucleotide compositional constraint may also affect the codon usage of ZIKV.
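The dinucleotide analysis above boils down to the odds ratio ρxy = fxy/(fx·fy) defined in the Methods. The sketch below, a minimal illustration rather than the authors' code, computes it over a toy RNA string; a real analysis would run over the full polyprotein ORF of each strain.

```python
# Minimal sketch of dinucleotide relative abundance: rho_xy = f_xy / (f_x * f_y),
# with rho > 1.23 flagging over-representation and rho < 0.78 under-representation.
from collections import Counter

def dinucleotide_odds(seq: str) -> dict:
    """Odds ratio of every observed dinucleotide in an RNA sequence."""
    seq = seq.upper().replace("T", "U")
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    n_mono, n_di = len(seq), len(seq) - 1
    odds = {}
    for pair, count in di.items():
        f_xy = count / n_di
        f_x = mono[pair[0]] / n_mono
        f_y = mono[pair[1]] / n_mono
        odds[pair] = f_xy / (f_x * f_y)
    return odds

toy_orf = "AUGGCAGCACGUGGAUGCCAUGAUGGUGCACCAUGA"  # hypothetical fragment
odds = dinucleotide_odds(toy_orf)
for pair in ("CG", "UA", "UG", "CA"):
    rho = odds.get(pair)
    print(pair, f"{rho:.2f}" if rho is not None else "absent")
```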
A correspondence analysis was performed to determine the main trends in the codon usage variation and the distribution of each gene along the continuous axes. The positions of each polyprotein-coding region defined by the first axis (Axis 1) and second axis (Axis 2) are shown in Fig 2. The first axis accounts for 72.93% of the total variation, and the second, third and fourth axes account for 8.99%, 6.33%, and 3.08%, respectively, of the total variation in synonymous codon usage. A correlation analysis also showed that Axis 1 is positively correlated with the A, U, A3s and U3s contents, whereas it is negatively correlated with the GC3s, GC, ENc, C, C3s and G3s values, with the G content being the exception (Table 5). Meanwhile, Axis 2 is only negatively correlated with the G content (Table 5). Overall, these results suggest that mutation pressure from the base composition plays a role in constructing the codon usage pattern of ZIKV.
The effect of translational selection on the codon usage of ZIKV
A plot of the ENc values against the GC3s values was constructed to check the heterogeneity of codon usage [20]. If a gene is subject to GC compositional constraints, it will lie on or near the theoretical fitting curve that represents random codon usage. In contrast, if a gene is subject to translational selection, it will lie considerably below the expected curve [46]. Here, the ENc value of each polyprotein-coding region of ZIKV was plotted against the corresponding GC3s content (Fig 3). The resulting points lie considerably below the solid curve, implying that, in addition to mutation pressure, other factors, such as translational selection, also influence the codon usage pattern of ZIKV. This result is generally similar to the related plot in a previous study [34]. The base composition and codon usage bias of the ORFs of a species with an A/U-rich genome may be different from those of species with G/C-rich genomes. Previous studies have employed the correlation between CAI values and ENc values to demonstrate the effects of mutation and translational selection on the codon usage bias [37,47]. If the correlation (r) between the two indices approaches -1, this suggests that translational selection is preferred over mutation. Otherwise, if the r value approaches 0 (no correlation), mutation may be more influential than translational selection. Our results showed that the CAI value of ZIKV is significantly negatively correlated with the ENc value (r = -0.749, P<0.01) (Fig 4). This result reflects the influence of both translational selection and mutation pressure on the codon usage pattern of ZIKV. A significant correlation between the GC12 and GC3 values is regarded as an indication that mutation pressure dominates over translational selection pressure in shaping the codon usage bias [22,46]. To further determine the roles of mutation pressure and translational selection in shaping the codon usage bias of ZIKV, a correlation analysis was performed on the relationship between GC12 and GC3. There was no significant correlation observed between them (r = 0.25, P>0.05), suggesting that both translational selection and mutation pressure are involved in shaping the codon usage pattern of ZIKV (Fig 5).
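The neutrality-plot check above reduces to a single rank correlation between per-strain GC12 and GC3 values. The sketch below, a minimal illustration rather than the authors' SPSS workflow, runs the same test with scipy on hypothetical values for a handful of strains.

```python
# Minimal sketch of the neutrality-plot test: Spearman correlation
# between GC12 and GC3. The per-strain values below are hypothetical
# stand-ins for the 46 ZIKV polyprotein-coding regions.
from scipy.stats import spearmanr

gc12 = [0.512, 0.509, 0.511, 0.514, 0.510, 0.513, 0.508, 0.512]
gc3 = [0.516, 0.499, 0.520, 0.511, 0.503, 0.521, 0.497, 0.515]

rho, p = spearmanr(gc12, gc3)
print(f"Spearman r = {rho:.2f}, P = {p:.3f}")
if p >= 0.05:
    print("No significant GC12-GC3 correlation: mutation pressure alone "
          "does not explain the bias; selection likely also contributes.")
```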
To determine whether the biased codon selection is restricted to highly biased coding sequences, the relationship between the pyrimidine (C and U) and purine (A and G) contents in the four-fold degenerate codon families (alanine, arginine, glycine, leucine, proline, serine, threonine and valine) was analyzed with a PR2 bias plot. It can be seen that A and C are used more frequently than U and G in the four-fold degenerate codon families of ZIKV (Fig 6). This result shows that the codon usage pattern of ZIKV is shaped by mutation pressure and other factors, including translational selection. To confirm whether translational selection from the hosts plays a role in shaping the codon usage pattern of ZIKV, the tAI values were calculated based on the tRNA copy numbers of H. sapiens. The results indicated that the tAI values of the 46 ZIKV strains range from 0.329 to 0.347, with an average value of 0.344 and an SD of 0.004. Moreover, the positive correlation between the tAI and CAI values (r = 0.457, P<0.01) in ZIKV highlights the importance of translational selection in the formation of the synonymous codon usage pattern. Compared with translational selection, mutation bias seems to have a stronger effect on the codon usage bias of some viruses [48,49]. However, for ZIKV, translational selection pressure also takes part in shaping the codon usage bias. Our result is consistent with a previous study showing that the recent Asian lineage spread is linked to the codon usage adaptation of the NS1 protein to human housekeeping genes [50]. During the preparation of this manuscript, two papers were published employing some ZIKV strains to analyze the codon usage [51,52]. They concluded that mutation pressure is an important determinant of the codon usage bias of ZIKV, mainly based on the result of a GC3s-ENc analysis [51]. The reason they do not mention the role of translational selection in the codon usage of ZIKV may be the lack of application of other codon usage analysis methods in their studies.
Effect of other factors on codon usage
GRAVY and AROMO may also be related to the codon usage pattern of viruses [53]. Our correlation analysis indicated that AROMO is positively correlated with GC3s, GC, and ENc, but negatively correlated with Axis 1. GRAVY showed a significant positive correlation with Axis 1, but a significant negative correlation with GC, GC3s, and ENc, respectively (Table 6). Neither GRAVY nor AROMO shows any correlation with Axis 2. These results indicate that the aromaticity and the degree of protein hydrophobicity are linked to the codon usage variation in ZIKV, emphasizing the importance of natural translational selection in forming the codon usage pattern [34]. The involvement of aromaticity and hydrophobicity in the construction of codon usage bias has been revealed in some RNA viruses, such as bovine viral diarrhea virus, classical swine fever virus, and duck hepatitis A virus [34,35,54]. This study found that Axis 1 has a significant role in shaping the ZIKV codon usage pattern and is significantly correlated with the aromaticity and hydrophobicity indices, implying that the aromaticity and hydrophobicity of proteins are related to the codon usage pattern of ZIKV. Aromaticity and hydrophobicity are known to play a role in peptide self-assembly and protein aggregation rates [55,56].
A recent study showed that the structure of ZIKV particles is thermally stable, and this feature may help the virus survive in the harsh conditions of semen, saliva, and urine [57]. It has been reported that there is a significant correlation between the phylogroups of isolates and their geographic regions, and an obvious pattern of geographic clustering has been observed in ZIKV isolates [14]. To determine if geographic factors influence the evolution of ZIKV, a plot of Axis 1 and Axis 2 was drawn according to the geographic distribution of the tested ZIKV strains. The resulting coordinate spots are separated into three groups, classified as groups I, II, and III (Fig 7). Some strains isolated in Uganda clustered together with the strains isolated from Senegal and were classified as group I. Additionally, some strains isolated from the Central African Republic also clustered together with the strains isolated from Senegal, and these were classified as group II. Most of the other strains, regardless of their isolation countries, tended to cluster together and were classified as group III. The codon usage pattern thus reflects the close relationship of ZIKV strains in different geographic regions. To investigate if the ZIKV codon usage pattern displays changes over time, a plot of Axis 1 and Axis 2 was drawn according to the outbreak times of the ZIKV strains. The 46 ZIKV isolates were divided into three groups, classified as groups I, II and III (Fig 8). Most of the strains isolated from 2010 to 2016 tended to cluster together in group III. The strains isolated from 1968 to 1997 clustered together in group II, while the strains isolated in 1947 and 2001 clustered in group I. Interestingly, the strains isolated in 1968 appear in both group II and group III. These results indicate that ZIKV strains isolated in different time intervals show genetic variation in their codon usage patterns. Previous studies showed that Dengue virus strains occurring in the same continental region are more closely related to one another, forming a cluster when plotted by their codon usage biases, indicating that viruses from one geographical group can show similar codon usage biases [58]. Andrew et al. found that the strains responsible for the ZIKV epidemics that occurred on Yap Island in 2007 and in Cambodia in 2010 most likely originated in Southeast Asia [14]. In this study, we further found that most of the American ZIKV strains isolated in recent years cluster with some Asian, European and Oceanian strains, supporting the idea that a close evolutionary relationship exists among the Asian, European, Oceanian and American strains.
Conclusions
Our findings reveal that the codon usage bias of ZIKV is weak and that, in addition to mutation pressure, translational selection also influences the codon usage bias. Other factors, such as base composition, aromaticity, and hydrophobicity, also have an effect on the codon usage pattern. Importantly, there are similarities between the codon usage patterns of ZIKV and its natural hosts. This study not only provides an understanding of the variation in ZIKV codon usage patterns, but also contributes to understanding the factors that drive ZIKV evolution.
Study on the variation of air pollutant concentration and its formation mechanism during the COVID-19 period in Wuhan
To prevent the spread of COVID-19 (2019 novel coronavirus), from January 23 to April 8 in 2020, the highest Class 1 Response was ordered in Wuhan, requiring all residents to stay at home unless absolutely necessary. This action was implemented to cut down all unnecessary human activities, including industry, agriculture and transportation. Reducing these activities to a very low level during these hard times meant that some unprecedented, naturally occurring measures of emission control were executed. Ironically, however, after these measures were implemented, ozone levels increased by 43.9%. Also worthy of note, PM2.5 decreased 31.7%, which was found by comparing the observation data in Wuhan during the epidemic from 8th Feb. to 8th Apr. in 2020 with the same period in 2019. Utilizing CMAQ (the Community Multiscale Air Quality modeling system), this article investigated the reasons for these phenomena based on four sets of numerical simulations with different schemes of emission reduction. Comparing the four sets of simulations with observation, it was deduced that, attributable to the COVID-19 lockdown in Wuhan, emissions decreased to approximately 20% of typical industrial output and 10% of typical agriculture and transportation sources. More importantly, through the CMAQ process analysis, this study quantitatively analyzed the differences in the physical and chemical processes that were affected by the COVID-19 lockdown. It then examined how the lockdown's impact on the physical and chemical processes differed between periods when pollution increased and decreased, and determined the most affected period of the day. As a result, this paper found that (1) PM2.5 decreased mainly due to the reduction of emissions and the reversed contribution of aerosol processes, and the north-east wind also favored the decrease of PM2.5; and (2) O3 increased mainly due to the slowing down of chemical consumption processes, which made the concentration change of O3 higher at about 4 p.m.-7 p.m. of the day and increased the concentration of O3 at night during the COVID-19 lockdown in Wuhan. The higher O3 concentration to the north-east of the main urban area, combined with the unfavorable wind direction, also contributed to the increase of O3.
Introduction
The COVID-19 pandemic has significantly challenged our daily life (Lonergan et al., 2020; Shereen et al., 2020). For public health, the Chinese Government ordered its highest Class 1 Response (Hubei Provincial People's Government, 2020), which is explained in The National Emergency Plan for Public Health Emergencies (The Central People's Government of the People's Republic of China, 2006). With this order, all the unnecessary transportation in and around Wuhan was shut down. All the unnecessary human activities were reduced to the minimum to reduce transmission and avoid cross-infection, including closing down local businesses, schools, colleges and universities (Zhou et al., 2020; Tang et al., 2020). Viewed another way, the COVID-19 lockdown in Wuhan was also an unprecedented emission mitigation measure that represents an opportunity to understand air pollution in extreme cases. It has been widely accepted that PM2.5 was reduced by about 40% under the lockdown conditions in Wuhan compared with the preceding years (IQAir, 2020; Le et al., 2020; Ministry of Ecology and Environment of the People's Republic of China, 2020).
In addition to the changes to PM2.5, O3 was found to have increased by about 30% during the lockdown (Le et al., 2020; Ministry of Ecology and Environment of the People's Republic of China, 2020). As for NO2, Le et al. (2020) found a 93% decrease from satellite data, while the Ministry of Ecology and Environment of the People's Republic of China (2020) reported about a 40% decrease from the national ground stations. As described in previous studies, one of the main reasons for the reduction in PM2.5 might be the reduction of emissions. Vieno et al. (2015) used the EMEP4UK (UK-scale chemistry-transport model) atmospheric chemistry transport model to investigate the impact of reductions in anthropogenic PM2.5 emissions and found that reducing primary PM2.5 emissions might be the most effective single-component control on PM2.5. Liang et al. (2016) summarized the previous studies and concluded that industrial emissions, which induce secondary inorganic aerosols, were the most dominant sources of PM2.5 in urban areas in China. Many emission reduction campaigns have been conducted in China to avoid air pollution: Wang et al. (2009) and Li et al. (2011) studied the campaigns during the "Olympics Blue" in 2008; Sun et al. (2016) and Huang et al. (2015) studied the "APEC Blue" in 2014; and Han et al. (2016), Chu et al. (2018) and Ren et al. (2019) studied the "Parade Blue" in 2015. In these cases, China closed factories, industrial plants, construction sites and gas stations and kept vehicles off the road in order to avoid air pollution, with the emission reduction campaigns proving to be effective. The aerosol extinction coefficient decreased to about 42.3% during the Beijing Olympic Games in 2008 compared with that in 2007, indicating the effectiveness of local air pollution control measures in the Beijing area under almost the same meteorological conditions (Yang et al., 2010). During the APEC period in 2014, air pollutant concentrations showed significant decreases over the North China Plain, especially over the Jing-Jin-Ji region, with NO2 VCD (vertical column densities), AOD (aerosol optical depth), and AAOD (absorption aerosol optical depth) reduced in Beijing by 47%, 34%, and 17%, respectively, compared with the previous three years (Huang, 2015). Lin et al. (2017) found that daily PM2.5 concentrations decreased from 98.57 μg/m3 to 47.53 μg/m3 during the "APEC Blue" and from 59.15 μg/m3 to 17.07 μg/m3 during the "Parade Blue", using the same dates from the prior year as a reference. Recently, Wang et al. (2020) used CMAQv5.0.1 to simulate air pollution in China during January 1 to February 12, 2020 and found that the decrease of PM2.5 in Wuhan was 30.79 μg/m3 when the emissions of transportation, industry and agriculture decreased to 20%. More specifically, the literature indicates that when O3 production is in a NOx-saturated state (NOx = NO + NO2), a reduction in NOx leads to an increase in ozone, and the lack of NO emissions alleviates ozone titration (Le et al., 2020; Levy et al., 2014; Atkinson et al., 2000). Surface O3 is normally low at night when NO emissions are high, and during the daytime, significant removal of ozone via the reaction NO + O3 → NO2 occurs in the vicinity of large NO emission sources (Kleinman et al., 2000; Lin et al., 1998).
In a recent study, the lack of NOx due to the reduction of emissions during the COVID lockdown in China led to substantial increases in O3 (Huang et al., 2021). Given that PM2.5 decreased and O3 increased during the COVID-19 lockdown period in Wuhan, several questions come to mind. What are the differences and formation mechanisms of their chemical and physical processes in the atmosphere between the pandemic and normal years? How can we quantify the impacts of the COVID-19 lockdown on the chemical and physical processes? How did the COVID-19 lockdown impacts differ between the pollution increases and decreases? How do we find the period of the day most affected by the COVID-19 lockdown? To fill a literature gap in studying the influence of the COVID-19 lockdown (Huang et al., 2021; Le et al., 2020; Wang et al., 2020), this study focused on the emission reduction ratio by examining the individual chemical and physical processes of each species. The study utilized CMAQv5.3.1 (Byun and Schere, 2006) to conduct sensitivity simulation tests with four sets of emission data for Wuhan during the lockdown period. By implementing the PROCAN (Process Analysis Preprocessor) module (Byun and Ching, 1999), the study further revealed the quantitative effects of the individual chemical and physical processes, showing that the behavior of individual pollutants is more complicated than previous studies suggested, and clarifying the mechanism for the changes in PM2.5 and O3 in Wuhan during the COVID-19 lockdown. This paper found that PM2.5 decreased mainly due to the reduction of emissions and the reversed contribution of aerosol processes, and that O3 increased mainly due to the slowing down of chemical consumption processes, which made the concentration change of O3 higher at about 4 p.m.-7 p.m. of the day and increased the concentration of O3 at night during the COVID-19 lockdown in Wuhan.
Data
This study used data from the 10 air quality stations in Wuhan operated by the China National Environmental Monitoring Center. The FNL (Final Operational Global Analyses) 1° × 1° data produced by the National Centers for Environmental Prediction (NCEP) were used in this simulation to initialize the WRF (Weather Research and Forecasting) model; they can be downloaded from https://rda.ucar.edu/datasets/ds083.2/. The MEIC (Multi-resolution Emission Inventory for China) was developed and is maintained by Tsinghua University (Zhang et al., 2009); it can be downloaded from https://meicmodel.org.
Method
In this study, a WRF-CMAQ modeling system was applied to simulate the pollution. The Weather Research and Forecasting (WRF) Model was developed at the National Center for Atmospheric Research (NCAR), which is operated by the University Corporation for Atmospheric Research (UCAR) (Chen et al., 2007). WRFv3.7.1 was used to generate the meteorological background for the air quality simulation. The Community Multiscale Air Quality (CMAQ) modeling system is an active open-source development project of the U.S. EPA that consists of a suite of programs for conducting air quality model simulations (United States Environmental Protection Agency, 2020; Byun and Schere, 2006; Pye et al., 2017; Fahey et al., 2017). CMAQv5.3.1 was used to simulate the spatial distribution and temporal variation of pollution within the study region from Dec. 13, 2019 to Jan. 15, 2020 and from Feb. 5, 2020 to Apr.
8, 2020. A twenty-day gap from Jan. 15 to Feb. 5, 2020, around Jan. 23, 2020 when the lockdown occurred, was left to better analyze the influence of the lockdown. The PROCAN (Process Analysis Preprocessor) module, implemented in CMAQ by Byun and Ching (1999), is an accounting system that tracks the quantitative effects of the individual chemical and physical processes, which combine to explain the predicted hourly species concentrations from a simulation. The PROCAN module helped calculate integrated process rates and integrated reaction rates, which can then be used to diagnose the physical and chemical behavior of these pollution processes. PROCAN has two components, Integrated Process Rate (IPR) analysis and Integrated Reaction Rate (IRR) analysis; IPR was mainly used in this study. The detailed WRF-CMAQ model configurations can be found in Table 1(a) and Table 1(b). The model simulation domain and topography height are shown in Fig. 1. The scheme we used to calculate PM2.5 followed the CMAQv5.3.1 mechanism cb6r3_ae7_aq rules, which can be found at https://github.com/USEPA/CMAQ/blob/master/CCTM/src/MECHS/cb6r3_ae7_aq/SpecDef_cb6r3_ae7_aq.txt. There are nine PM2.5 chemical or physical processes assigned by PROCAN, including HADV (horizontal advection), ZADV (vertical advection), HDIF (horizontal diffusion), VDIF (vertical diffusion), DDEP (dry deposition of species), CLDS (change due to cloud processes, including aqueous reactions and removal by clouds and rain), AERO (change due to aerosol processes), CHEM (net sum of all chemical processes for the species over the output step) and EMIS (emissions contribution to concentration). The CHEM term in the PM2.5 process analysis of the CMAQ model represents the heterogeneous reactions. The aerosol species involved in reactions in the mechanism definition file are listed at https://github.com/USEPA/CMAQ/blob/master/CCTM/src/MECHS/cb6r3_ae7_aq/mech_cb6r3_ae7_aq.def. The production and loss of each of these reactions can be found in https://github.com/USEPA/CMAQ/blob/master/CCTM/src/gas/ebi_cb6r3_ae7_aq/hrprodloss.F. There are seven O3 chemical or physical processes assigned by PROCAN, including HADV, ZADV, HDIF, VDIF, DDEP, CLDS and CHEM.
Observation analysis
The literature was reviewed to report the studies on the behavior of air pollution when the lockdowns were implemented (Table 2). Please note that all the observation data came from the national ground stations except no. 6, which used data from TROPOMI (the Tropospheric Monitoring Instrument). As shown in Table 2, these studies reached a consensus that PM2.5 was reduced 30%-50% under the lockdown conditions compared with the preceding years. In contrast to the changes in PM2.5, Le et al. (2020) and the Ministry of Ecology and Environment of the People's Republic of China (2020) found an approximately 30% increase in O3 during the lockdown. As for NO2, Le et al. (2020) found a 93% decrease using satellite data, while other reports (Ministry of Ecology and Environment of the People's Republic of China, 2020) show about a 40% decrease from the national ground stations. In this paper, the observations of the ten air quality stations in Wuhan from the China National Environmental Monitoring Center, averaged over space and time, were compared. Considering that there must have been some changes in emissions between 2019 and 2020 in Wuhan, we separated the observations into four time periods. The observation periods Dec. 15, 2018 to Jan. 15, 2019 and Feb. 08, 2019 to Apr.
08, 2019 were used for comparative analysis. The observations from Dec. 15, 2019 to Jan. 15, 2020 were also considered; however, as the COVID-19 lockdown had not yet begun and there was no emission influence from it, we only noted the differences between 2020 and 2019 for this period. The other observation periods were reviewed through the lens that differences from the same period in 2019 could be attributed to the COVID-19 lockdown. Table 3 shows the differences of PM2.5, PM10 (inhalable particles), the ratio of PM2.5/PM10, O3 and NO2 between these four time periods. The calculation method is as follows:
$\mathrm{Diff} = \dfrac{\frac{1}{N_1}\sum \mathrm{Data1} - \frac{1}{N_2}\sum \mathrm{Data2}}{\frac{1}{N_2}\sum \mathrm{Data2}} \times 100\% \qquad (1)$
where Data1 and Data2 represent the observations (PM2.5, PM10, the ratio of PM2.5/PM10, O3 and NO2) from different time periods, and N1 and N2 represent the numbers of samples in Data1 and Data2. In Table 3, A1 represents the differences of the air pollutant concentrations between the COVID-19 lockdown in 2020 and the same time period in 2019. A2 represents the differences between the pre-lockdown period in 2020 and the same time period in 2019. B1 represents the differences before and during the COVID-19 lockdown in 2020. B2 represents the differences over the same time periods as B1 in 2019. PM2.5 decreased 31.7% from Feb. 08, 2020 to Apr. 08, 2020 compared with 2019, while O3 increased 43.9% and NO2 decreased 51.8%. The decreases of PM2.5, PM10 and NO2 were larger in the lockdown comparison (A1) than in the pre-lockdown comparison (A2), while O3 increased more. These results show that the lockdown in Wuhan had a great influence on air pollution. Comparing B1 and B2, it is clear that PM2.5 and PM10 continually trended downward from December to April, declining more in 2020. O3 continually trended upward from December to April, but rose much more in 2020. NO2 would not be expected to change much from December to April, yet it decreased 53.1% in 2020. Our results are in alignment with the previous studies in that PM2.5 was reduced 30%-50%. Beyond that, O3 in our results increased 43.9%, much more than in previous studies. The reason might be that our observation period lasted much longer, until the lockdown in Wuhan was over. The observations show that the influence of the lockdown in Wuhan on O3 was more significant when the concentration of O3 was higher. There is an upward trend of O3 from February to April in both 2019 and 2020, with the concentration of O3 being higher in April than in February; therefore, a larger increase in O3 levels is inferred when a longer observational period is compared. Fig. 2 shows the concentrations of the four pollutant species from December 2018 to April 2019 and December 2019 to April 2020 (considering that 2020 is a leap year, we used the Julian date here). It is clear that PM2.5 and PM10 had a continuing downward trend and O3 had a continuing upward trend from December to April in both 2019 and 2020, and that the concentrations of PM2.5, PM10 and NO2 were lower in 2020.
Model assessment
In order to assess the performance of the CMAQ simulation and make this study more convincing, the Eva experiment scenario was carried out to simulate the pollution from Dec. 15, 2019 to Jan. 15, 2020, before the COVID-19 lockdown. This experiment used an adjusted anthropogenic emission inventory based on MEIC 2016, multiplied by coefficients tuned against observations, as an alternative solution when a real-time emission inventory is unavailable.
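Before turning to the model evaluation, the sketch below illustrates the period-difference calculation in formula (1): the relative change of the mean concentration between two observation periods. It is a minimal sketch, not the authors' code, and the daily-mean series are hypothetical.

```python
# Minimal sketch of formula (1): relative change of mean concentration
# between two observation periods, in percent. Inputs are hypothetical.
def period_difference(data1: list, data2: list) -> float:
    """(mean(data1) - mean(data2)) / mean(data2) * 100, in percent."""
    mean1 = sum(data1) / len(data1)
    mean2 = sum(data2) / len(data2)
    return (mean1 - mean2) / mean2 * 100.0

pm25_lockdown_2020 = [38.0, 41.5, 35.2, 40.1]      # hypothetical ug/m3
pm25_same_period_2019 = [55.3, 60.2, 58.8, 57.5]
diff = period_difference(pm25_lockdown_2020, pm25_same_period_2019)
print(f"A1-style difference: {diff:.1f}%")
```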
The simulated pollutant concentrations of the Eva experiment were compared with the observations of the ten air quality stations. We could not directly evaluate our simulation during the COVID-19 lockdown in Wuhan because an accurate emission inventory for that time was unavailable. Therefore, the Eva experiment was used to evaluate the simulation of the CMAQ model and to find a suitable emission inventory, and it confirmed that the simulation performed reliably before the COVID-19 lockdown in Wuhan. The Emission Control module in CMAQv5.3.1 was used here to adjust the emission inventories in 2020. First, NOx and SO2 were adjusted in the emission inventory and the simulation results were compared with the observations of NO2 and SO2 (formulas (2)-(5) were used here). Then, the total amounts of VOC and PM2.5 were adjusted. Table S1 lists the adjustment ratios, and Table S2 lists the performance of the EVA0 (without adjustment) and EVA6 (the emission inventory used in the remainder of this paper) simulations from Nov. 15, 2019 to Jan. 14, 2020 compared with the observations averaged over the ten stations. The correlation coefficient (COR), root mean squared error (RMSE), normalized mean bias (NMB) and normalized mean error (NME) were calculated as follows:
$\mathrm{COR} = \dfrac{\mathrm{Cov}(C_m, C_0)}{\sqrt{D(C_m)\,D(C_0)}} \qquad (2)$
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(C_{m,i} - C_{0,i}\right)^2} \qquad (3)$
$\mathrm{NMB} = \dfrac{\sum_{i=1}^{N}\left(C_{m,i} - C_{0,i}\right)}{\sum_{i=1}^{N} C_{0,i}} \times 100\% \qquad (4)$
$\mathrm{NME} = \dfrac{\sum_{i=1}^{N}\left|C_{m,i} - C_{0,i}\right|}{\sum_{i=1}^{N} C_{0,i}} \times 100\% \qquad (5)$
where Cm is the simulated concentration, C0 is the observed data, N represents the number of samples, Cov(x) means the covariance of x and D(x) means the variance of x. All the data simulated by the WRF-CMAQ model in this study are for the surface layer and were analyzed as daily averages over the ten stations in Wuhan. Table 4 shows the detailed model assessment. Fig. 3 compares the observations and the Eva-experiment simulations of PM2.5, PM10, O3 and NO2 from Dec. 15, 2019 to Jan. 15, 2020. Compared with the observations, our simulation presented a strong performance.
The average characteristics of meteorological factors during the COVID-19 lockdown
Fig. 4 shows the average temperature, sea level pressure and relative humidity with wind in Wuhan from Feb. 08, 2020 to Apr. 08, 2020 simulated using the CMAQ model. The average wind direction during the COVID-19 lockdown was from north-east to south-west. The temperature was approximately 12 °C in the main urban area of Wuhan, and was lower where the surface is water (including the Yangtze River, the Hanjiang River and the lakes in Wuhan). The sea level pressure was approximately 1020.2 hPa in the main urban area of Wuhan. The relative humidity was approximately 67% in the main urban area of Wuhan, and was higher where the surface is water.
Simulation schemes for emission control
In order to investigate the greatest possible emission reduction ratio caused by the COVID-19 lockdown, Eva and four other simulation scenarios were performed and compared. As seen in Table 5, the Base (baseline) experiment was regarded as the benchmark experiment, while Exp1, Exp2 and Exp3 (experiments 1-3) were regarded as sensitivity experiments with different decreasing ratios in different emission categories. The MEIC emission inventory was used here, and the ratios of PM2.5, PM10, NOx, and VOC in the MEIC emission inventory can be found in Fig. S1. First, the Base experiment scenario used the same emission inventory as the Eva experiment in section 3.2 but simulated the COVID-19 lockdown time period from Feb. 8, 2020 to Apr. 8, 2020, assuming that the COVID-19 lockdown had no effect on the emissions in Wuhan. As seen in Fig.
5, the concentrations of PM2.5, PM10, and NO2 were much higher than observed, while the concentration of O3 was much lower. Then, Exp1 was carried out, in which the emissions from agriculture, industry and transportation in Wuhan were decreased to 50% compared with the Base experiment, considering that under the highest Class 1 Response all unnecessary transportation was shut down and all unnecessary human activities were reduced to the minimum, including closing down local businesses, schools, colleges and universities, restricting the movement of people (Hubei Provincial People's Government, 2020), and carrying out agricultural production in different periods and batches (Agricultural and Rural Bureau of Wuhan, 2020). The results in Fig. 5 show that this decreasing ratio for agriculture, industry and transportation was still too low. Next, Exp2 was carried out, in which the emissions from agriculture, industry and transportation in Wuhan were decreased to 20% compared with the Base experiment. It is obvious that the concentrations of PM2.5 and PM10 were similar to the observations, but the concentration of NO2 was still a little high, while O3 was still a little low. Wang et al. (2020) performed a similar experiment but cut the emissions across Hubei. Finally, in order to obtain a better performance in simulating O3, Exp3 was carried out, in which the emissions were decreased to 20% in industry and 10% in transportation and agriculture in Wuhan compared with the Base experiment. Considering that PM2.5 and PM10 were well simulated and that the differences in the MEIC emission inventory ratios between PM2.5, PM10 and NOx mainly came from transportation and agriculture, Exp3 cut more in transportation and agriculture compared with Exp2. The results of the emission control experiments demonstrated that Exp3 had the best performance on both PM2.5 and O3. The COR, RMSE, NMB and NME of PM2.5 were 0.28, 17.07 μg/m3, 1.9% and 38.1%, and those of O3 were 0.62, 14.45 μg/m3, -7.1% and 18.1%, compared with observation. Our results were better than those of the previous study. As the results show, Exp3 presented great performance in simulating PM2.5 and O3 during the COVID-19 lockdown in Wuhan. Therefore, in our study, the greatest possible emission reduction during the COVID-19 lockdown period was a decrease to 20% in industry and 10% in agriculture and transportation compared with the emissions before the lockdown period in Wuhan.
Variation characteristics of pollutant concentration for emission control
The horizontal distributions of the average simulated PM2.5, PM10, O3, O3-8H (max 8-h average O3) and NO2 concentrations from Feb. 08, 2020 to Apr. 08, 2020 in Base, Exp3 and their difference are shown in Fig. 6. It can be found that the biggest differences of all the pollutants (Fig. 6(c), (f), (i), (o)) occurred in the main urban area, which can be represented by the locations of the national air quality stations in Wuhan. The results reflect that the COVID-19 lockdown influenced the pollution more in the urban areas of Wuhan. The spatial and concentration differences of O3 and O3-8H revealed that the COVID-19 lockdown had a greater influence on the low levels of O3 in the urban area of Wuhan. As the max 8-h average O3 usually occurs in the afternoon, and the low levels of O3 always occur at night, it can be deduced that COVID-19 had more influence on the low levels of O3 at night. Further analysis is presented in section 4.2.5.
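As a concrete companion to the evaluation metrics in formulas (2)-(5), the sketch below scores two candidate simulations against an observed series, the same kind of comparison used above to select Exp3. It is a minimal sketch, not the authors' code, and the daily-mean values are hypothetical.

```python
# Minimal sketch of the evaluation metrics in formulas (2)-(5),
# applied to hypothetical daily means from two candidate experiments.
import numpy as np

def evaluate(sim: np.ndarray, obs: np.ndarray) -> dict:
    """COR, RMSE, NMB and NME between simulated and observed series."""
    cor = np.cov(sim, obs)[0, 1] / np.sqrt(np.var(sim, ddof=1) * np.var(obs, ddof=1))
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nmb = np.sum(sim - obs) / np.sum(obs) * 100.0   # percent
    nme = np.sum(np.abs(sim - obs)) / np.sum(obs) * 100.0
    return {"COR": cor, "RMSE": rmse, "NMB": nmb, "NME": nme}

obs = np.array([45.0, 50.2, 39.8, 42.5, 48.1])      # hypothetical ug/m3
candidates = {"Exp2": np.array([58.0, 63.1, 50.2, 55.4, 60.9]),
              "Exp3": np.array([46.5, 52.0, 38.1, 44.0, 49.3])}
for name, sim in candidates.items():
    print(name, {k: round(v, 2) for k, v in evaluate(sim, obs).items()})
```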
Fig. 6. The averaged simulated concentration horizontal distributions during Feb. 08, 2020 to Apr. 08, 2020: PM2.5 of Base (a), Exp3 (b) and differences (c); PM10 of Base (d), Exp3 (e) and differences (f); O3 of Base (g), Exp3 (h) and differences (i); O3-8H of Base (j), Exp3 (k) and differences (l); NO2 of Base (m), Exp3 (n) and differences (o). The differences are defined as Exp3 minus Base. The vectors indicate wind; the black points mark the locations of the 10 air quality stations.
Formulas of process analysis
In order to find out the formation mechanisms that caused the significant changes in PM2.5 and O3 during the lockdown time period, the Process Analysis module of CMAQv5.3.1 was used. Formulas (6)-(9) show how the concentrations of PM2.5 and O3 are decomposed using process analysis; Table S3 shows the species indices used in the process analysis of PM2.5 and O3:
$\mathrm{CHA}_t = \sum_{\mathrm{process}} P_{\mathrm{process},t} \qquad (6)$
$\mathrm{CHA}_d = \sum_{t=1}^{24} \mathrm{CHA}_t \qquad (7)$
$C_t = C_{t-1} + \mathrm{CHA}_t \qquad (8)$
$C_d = \dfrac{1}{24} \sum_{t=1}^{24} C_t \qquad (9)$
where CHAt means the total concentration change of all processes in hour t, Pprocess,t means the change due to a process in hour t, CHAd means the total daily concentration change of all processes in day d, Ct means the concentration of PM2.5 or O3 in hour t, and Cd means the daily average concentration of PM2.5 or O3. Fig. 7 shows the daily average process analysis of PM2.5 and O3 for Base and Exp3. As declared in section 2.2, ZADV_PM2.5, for example, means the daily summation of the PM2.5 changes due to vertical advection in every hour, whereas CHA_PM2.5 means the daily change of PM2.5, which is the summation over each hour and each process, and PM2.5 means the daily average PM2.5. It should be noted that the hourly concentration of PM2.5 is equal to the summation of the initial concentration and the changes of all the processes in that hour, as can be seen in Fig. S2.
Daily average process analysis
As for PM2.5, it is clear that the emission contribution always plays an important role in increasing the concentration (Vieno et al., 2015) and performed stably during the whole simulation. Vertical diffusion, horizontal advection and vertical advection dominated the decrease of PM2.5, and it is clear that pollution always occurs while VDIF or HADV slows down (Hua et al., 2016). ZADV often had the opposite tendency compared with VDIF and HADV, and it had a much greater influence in Exp3 compared to Base. Aerosol processes increased the pollution in the Base experiment but, on the contrary, reduced the pollution in Exp3 on average; AERO was lower when PM2.5 was decreasing and even reduced the pollution in Exp3. HDIF, DDEP, CLDS and CHEM contributed little to PM2.5. HDIF decreased the concentration a little while PM2.5 increased and was almost equal to 0 while PM2.5 decreased. DDEP was larger, though similar in Base and Exp3. CLDS was usually equal to 0, but decreased the concentration when there was rain or high humidity. CHEM always increased the concentration, more so while PM2.5 increased, and was similar in Base and Exp3. The CHEM term of PM2.5 is connected with heterogeneous reactions, which can increase the concentration of PM2.5. Comparing Exp3 with the Base experiment, there is a conspicuous decrease in the EMIS contribution, which then leads to decreases in all processes. Under the influence of all the decreasing processes, the concentration of PM2.5 stayed low during the lockdown in Wuhan.
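To show how the bookkeeping in formulas (6)-(9) works in practice, the sketch below accumulates hypothetical hourly IPR contributions into hourly and period-average concentrations. It is a minimal sketch, not the CMAQ/PROCAN code itself.

```python
# Minimal sketch of formulas (6)-(9): per-process hourly changes sum to
# the hourly total change, which accumulates into hourly concentrations.
import numpy as np

PROCESSES = ["EMIS", "HADV", "ZADV", "VDIF", "AERO", "CHEM", "DDEP"]
# ipr[p, t]: change due to process p in hour t (ug/m3); values hypothetical,
# using a short 4-hour window instead of a full 24-hour day.
ipr = np.array([[3.0, 3.1, 2.9, 3.0],     # EMIS
                [-1.0, -1.2, -0.8, -1.1], # HADV
                [0.4, 0.5, 0.3, 0.4],     # ZADV
                [-1.5, -1.6, -1.4, -1.5], # VDIF
                [0.2, -0.1, 0.1, 0.0],    # AERO
                [0.3, 0.3, 0.2, 0.3],     # CHEM
                [-0.2, -0.2, -0.2, -0.2]])  # DDEP

cha_t = ipr.sum(axis=0)           # formula (6): hourly total change
c0 = 40.0                         # initial concentration
c_t = c0 + np.cumsum(cha_t)       # formula (8): hourly concentration
print("CHA_d =", cha_t.sum())     # formula (7): total change over the window
print("C_d   =", c_t.mean())      # formula (9): average concentration
```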
As for O 3 , vertical diffusion and horizontal advection dominated the increase of O 3 , while the chemical processes dominated the decrease, as Hogrefe et al. (2018) and Tzella (1983) described in their studies. CHEM always followed a similar temporal pattern to the sum of VDIF and HADV but with an opposite contribution. Dry deposition always decreased the concentration of O 3 , while the cloud processes and horizontal diffusion contributed little. As seen in Fig. 7, the concentration change of O 3 is mainly connected with the summation of the HADV, VDIF and CHEM processes. However, compared with the Base experiment, the concentration of O 3 was higher in Exp3 even though the emissions were cut. It is important to note that the decreases were disproportionate between the concentration-decreasing CHEM process and the concentration-increasing HADV and VDIF processes.

Total contribution of each process

In order to quantify the impacts of the COVID-19 lockdown on the chemical and physical processes, the total contribution of each process for PM 2.5 and O 3 during Feb. 08, 2020 to Apr. 08, 2020 was examined, including the concentration changes and their proportions, as summarized in Table 6. TOT means the total concentration contribution of all the processes. The proportion of each process was calculated as:

$S_{j,t} = \sum_{\tau=1}^{t} C_{j,\tau}$  (10)

$P_{j,t} = |S_{j,t}| \big/ \sum_{j} |S_{j,t}|$  (11)

where C_{j,t} is the change due to process j in hour t, S_{j,t} is the sum of the concentration changes of process j from hour 1 to hour t, and P_{j,t} is the proportion of process j at time t.

The total contribution of each process for PM 2.5 in Base and Exp3, the differences between Exp3 and Base (Exp3-Base), and the decline ratio from Base to Exp3 (Exp3/Base) are shown in Table 6(a). The summation over all the processes, i.e., the difference in concentration between the first and last hours of the simulation, was 124 μg/m 3 in Base and 50 μg/m 3 in Exp3, while the observed difference was 32.5 μg/m 3 ; Exp3 is much closer to the observations than Base, so Exp3 better reflects the real situation for PM 2.5 . The proportion of each process in the differences between Exp3 and Base (Exp3-Base) shows that EMIS and AERO dominate the decreasing changes attributed to the COVID-19 lockdown, while VDIF and HADV dominate the increasing changes. The data indicate significant differences in each process, as follows. EMIS is the dominant increasing process and accounts for approximately 44% of the contribution of all the processes in Base and 50% in Exp3; the change attributed to the COVID-19 lockdown was about 42%. AERO increased pollution (6%) in the Base experiment but, on the contrary, reduced pollution (− 2%) in Exp3; the change attributed to the COVID-19 lockdown was about − 12%, which can be explained by the condensation processes that changed aerosol species at different emission levels (Zhang, 2017). VDIF is the dominant physical decreasing process and accounts for approximately 20% of the contribution of all the processes in Base and 23% in Exp3, with the change attributed to the COVID-19 lockdown being about 43%. HADV also contributed significantly and accounts for approximately 22% of the contribution of all the processes in Base and 13% in Exp3, with the change attributed to the COVID-19 lockdown being about 21%. In our simulation, we conclude that PM 2.5 decreased during the COVID-19 lockdown period in Wuhan because the contributions of EMIS and AERO decreased more than those of HADV and VDIF.
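The totals and proportions reported in Table 6 can be derived from the same hourly tendencies. A sketch, under the assumption that each proportion is the absolute total of one process divided by the sum of the absolute totals of all processes:

```python
import numpy as np

def total_contribution(ipr):
    """Total contribution S_j and proportion P_j of each process j over the
    whole simulation; ipr as in the previous sketch."""
    s = {j: float(np.sum(c)) for j, c in ipr.items()}  # S_j: summed hourly changes
    denom = sum(abs(v) for v in s.values())
    p = {j: 100.0 * abs(v) / denom for j, v in s.items()}  # P_j in percent
    tot = sum(s.values())   # TOT: net change between first and last hour
    return s, p, tot
```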
The contribution of EMIS, which always increased the concentration, decreased to 42% of its Base value, and AERO changed from increasing the concentration to decreasing it, with its contribution changing to − 12%. HADV and VDIF, which always decreased the concentration, decreased to 21% and 43%, respectively.

The total contribution of each process for O 3 in Base and Exp3, the differences between Exp3 and Base (Exp3-Base), and the decline ratio from Base to Exp3 (Exp3/Base) are shown in Table 6(b). The summation over all the processes, i.e., the difference in concentration between the first and last hours of the simulation, was 5.8 μg/m 3 in Base and 21.4 μg/m 3 in Exp3, while the observed difference was 19.2 μg/m 3 , so Exp3 better reflects the real situation for O 3 .

Table 6. Total contribution of each process in the average over 10 stations in Wuhan from Feb. 08, 2020 to Apr. 08, 2020 (a. PM 2.5 , b. O 3 ). (The symbol + denotes a tendency to increase the concentration, − a tendency to decrease the concentration.)

The proportion of each process in the differences between Exp3 and Base (Exp3-Base) shows that VDIF and HADV dominate the decreasing changes, while CHEM dominates the increasing changes. The data indicate significant differences in each process, as follows. VDIF is the dominant physical increasing process and accounts for approximately 34% of all the processes in Base and 32% in Exp3, with the change attributed to the COVID-19 lockdown being about 45%. HADV also contributed significantly and accounts for approximately 13% of all the processes in Base and 16% in Exp3, with the change attributed to the COVID-19 lockdown being about 57%. CHEM dominated the decrease of O 3 and accounted for approximately 45% of all the processes in Base and 34% in Exp3, with the change attributed to the COVID-19 lockdown being about 37%. In our simulation, the reason why O 3 increased during the COVID-19 lockdown period in Wuhan is that the contribution of CHEM decreased more than those of HADV and VDIF: the contribution of CHEM, which always decreased the concentration, declined to 37%, while the contributions of HADV and VDIF, which always increased the concentration, declined to 57% and 45%, respectively.

The two dominant physical processes in the CMAQ model, HADV and VDIF, are highly connected with the meteorological parameters. HADV and VDIF are solved in the science algorithms of the CMAQ model as:

$\frac{\partial (\gamma \varphi_i)}{\partial t} = -\nabla_\xi \cdot (\gamma \varphi_i \mathbf{V}_\xi)$  (12)

$\frac{\partial (\gamma \varphi_i)}{\partial t} = \frac{\partial}{\partial \xi} \left( \gamma \rho(T) K \frac{\partial q_i}{\partial \xi} \right) + \gamma \rho F^{3}_{q_i} + Q_{\varphi_i}$  (13)

where γ is the Jacobian of the coordinate transformation, φ_i is the trace species concentration in density units, t is the time step, V_ξ is the horizontal wind in the coordinates ξ, ξ is the terrain-influenced vertical coordinate (whose value increases monotonically with height), q_i = φ_i/ρ is the species mass mixing ratio, ρ(T) is the air density, connected with temperature T, K is the eddy diffusivity, F^3_{q_i} represents frictional forcing terms, and Q_{φ_i} is the source or sink term.

In formula 12, HADV is mainly connected with the horizontal wind. As shown in Fig. 6, the average wind direction during the COVID-19 lockdown in Wuhan was north-easterly, and in Fig. 6(a) and (b) the concentration of PM 2.5 in the north-east of Wuhan was much lower than in the main urban area; this favorable wind direction led to lower PM 2.5 concentrations. In Fig. 6(g) and (h), the situation is different for O 3 , whose concentration was always lower in the main urban area. Therefore, instead of decreasing it, HADV increased the concentration of O 3 in the main urban area.

In formula 13, the eddy diffusion concept (K-theory) (Sutton, 1932) is used; as the theory shows, the vertical flux is governed by the air density ρ(T) and by the trace species concentrations φ_i at different vertical levels. The simulations of PM 2.5 and O 3 had the same temperature gradient but different concentration gradients. Recent observations in China have shown that the concentration of PM 2.5 generally decreases with height while the concentration of O 3 increases with height. Therefore, the different tendencies of VDIF for PM 2.5 and O 3 in our study can be explained by VDIF decreasing the surface concentration of PM 2.5 as a flux from the surface to upper layers and increasing the surface concentration of O 3 as a flux from upper layers to the surface, consistent with the studies of Li et al. (2016) and Hogrefe et al. (2018).
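The sign argument for VDIF can be illustrated with a single K-theory flux estimate between two layers. This is a toy two-layer calculation with made-up numbers, not CMAQ's actual multi-layer solver:

```python
def surface_vdif_tendency(c_surface, c_aloft, dz=100.0, kz=10.0):
    """One-step K-theory estimate of the surface-layer tendency from a
    two-layer gradient: flux F = -Kz * dC/dz, so a downward flux
    (concentration increasing with height) adds mass to the surface layer.
    Units: c in ug/m3, dz in m, kz in m2/s."""
    flux_down = kz * (c_aloft - c_surface) / dz   # positive when C grows aloft
    return flux_down / dz                         # surface-layer tendency

# PM2.5 decreases with height -> VDIF removes mass from the surface (negative)
print(surface_vdif_tendency(c_surface=60.0, c_aloft=30.0))
# O3 increases with height -> VDIF adds mass to the surface (positive)
print(surface_vdif_tendency(c_surface=40.0, c_aloft=70.0))
```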
In order to investigate why the AERO of PM 2.5 had opposite trends in Base and Exp3, and why the HADV of O 3 was always positive in the average over the ten stations in Wuhan, the horizontal distribution of the average rates of the AERO of PM 2.5 from Feb. 08, 2020 to Apr. 08, 2020 in Base, Exp3 and their difference is shown in Fig. 8. In Fig. 8, the AERO of PM 2.5 had a positive impact in the main urban area and a negative impact in the other areas of Wuhan. The COVID-19 lockdown slowed down the positive rates of AERO but sped up the negative rates. Fig. S3 shows the rates of the individual aerosol processes, and reveals that the changes in aerosol species due to condensation were the main cause. The already low emissions in the north-east of the main urban area of Wuhan became even lower due to the COVID-19 lockdown. The AERO of PM 2.5 around the stations in the north-east of the urban area increased the concentration of PM 2.5 in Base but changed to decreasing it in Exp3, leading to the opposite trends of the AERO of PM 2.5 in Base and Exp3.

Fig. 8. The horizontal distribution of the average rates of the AERO of PM 2.5 from Feb. 08, 2020 to Apr. 08, 2020; the black points mark the locations of the 10 air quality stations. (a. Base, b. Exp3, c. differences, defined as Exp3 minus Base.)

Performances of each process while the pollution increased or decreased

In order to determine the impact of the COVID-19 lockdown during periods of increasing and decreasing pollution, the average contribution of each process was examined according to whether the concentration of the pollutant increased or decreased from the previous hour (increasing and decreasing stages), as summarized in Table 7. A positive value for an increasing or decreasing stage means that, averaged over the whole simulation, the process increased the concentration during that stage; a negative value means that it decreased the concentration. The ratio is the average concentration change contributed by a process while the pollution increased divided by that while the pollution decreased. The ratio reflects the difference between the periods of increasing and decreasing pollution in Base and Exp3, providing a direct target for finding the influences of the COVID-19 lockdown on air pollution in Wuhan. As for PM 2.5 , the ratio of EMIS was 1.07 in Base and 1.08 in Exp3, showing that EMIS played a similar role in the increasing and decreasing stages; it decreased in almost equal proportion in the two stages during the COVID-19 lockdown. The ratio of HADV was 1.01 in Base and 2.63 in Exp3, showing that HADV played a similar role in the increasing and decreasing stages in Base but contributed more in the increasing stages than in the decreasing stages in Exp3.
It is clear that the COVID-19 lockdown had more impact on the decreasing stages of HADV. The ratio of VDIF was 0.29 in Base and 0.24 in Exp3, showing that VDIF contributed more in the dissipating stages; it decreased in almost equal proportion in the increasing and decreasing stages during the COVID-19 lockdown. The ratio of AERO was 3.02 in Base and − 0.48 in Exp3, showing that AERO contributed more to increasing the concentration during the increasing stages in Base, then changed to contributing more to decreasing the concentration during the decreasing stages in Exp3. It is also evident that the COVID-19 lockdown had a significant impact on AERO in the decreasing stages. As for O 3 , the ratio of HADV was 0.75 in Base and 0.86 in Exp3, showing that HADV contributed more in the decreasing stages; in addition, the COVID-19 lockdown had slightly more impact on the decreasing stages of HADV. The ratio of VDIF was 1.79 in Base and 1.97 in Exp3, showing that VDIF contributed more in the increasing stages, with the COVID-19 lockdown having slightly more impact on the decreasing stages of VDIF. The ratio of CHEM was 0.70 in Base and 0.28 in Exp3, showing that CHEM contributed more in the decreasing stages, but the COVID-19 lockdown had a significant impact on the increasing stages of CHEM. In conclusion, the COVID-19 lockdown had a greater impact on the decreasing stages of the horizontal advection and aerosol processes of PM 2.5 , and on the increasing stages of the chemical processes of O 3 , which might be caused by the lack of NO 2 and NO, as explained in section 4.2.5.
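The stage-wise contributions and ratios in Table 7 follow from splitting the hourly tendencies by the sign of the hour-to-hour concentration change. A minimal sketch, with illustrative names:

```python
import numpy as np

def stage_ratio(process_change, conc):
    """Average contribution of one process during hours when the pollutant
    concentration rose vs. fell from the previous hour, and their ratio.
    process_change and conc are hourly 1-D arrays of equal length."""
    dc = np.diff(conc)                   # hour-to-hour concentration change
    p = np.asarray(process_change)[1:]   # align process tendencies with dc
    rising = p[dc > 0].mean()            # mean contribution, increasing stage
    falling = p[dc < 0].mean()           # mean contribution, decreasing stage
    return rising, falling, rising / falling
```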
Daily change of O 3

In order to find the period of the day most affected by the COVID-19 lockdown, Fig. 9 shows the daily cycle of O 3 in the observations, Base and Exp3, and the differences of each process between Exp3 and Base, averaged over Feb. 08, 2020-Apr. 08, 2020, i.e., over the whole COVID-19 lockdown period. TOT means the total contribution of all the processes, and OBS means the observations. There is no doubt that Exp3 had the better performance in simulating the daily cycle of O 3 . In our simulation, it is apparent that Base had a lower O 3 concentration at night than Exp3. As seen in Fig. 9, for most of the day the total contribution of all the processes was slightly lower in Exp3, except during 4 p.m.-7 p.m. The significant total differences of all the processes during 4 p.m.-7 p.m. between Base and Exp3 resulted in higher O 3 levels at night during the COVID-19 lockdown period in Wuhan. The difference between the maximum and minimum concentrations of the average daily cycle during Feb. 08, 2020-Apr. 08, 2020 was 63.38 μg/m 3 in Exp3 and 77.23 μg/m 3 in Base. These results are in alignment with those in section 4.2.4 and agree that the COVID-19 lockdown made the difference between the high and low concentrations of O 3 smaller; the COVID-19 lockdown had a greater impact on the increasing stages of the chemical processes of O 3 as an hourly average. Beyond that, Fig. 9 shows that the biggest difference in the chemical processes attributed to the COVID-19 lockdown occurred at about 4 p.m.-7 p.m., in the decreasing stages.

Fig. 9. The daily change of O 3 in observation, Base and Exp3 (lines) and the differences of each process between Exp3 and Base (bars), averaged over the 10 stations in Wuhan from Feb. 08, 2020 to Apr. 08, 2020.

What, then, causes the significant differences between Base and Exp3? Fig. 9 shows that the decline in the rates of the concentration-decreasing chemical processes was much larger than the decline in the rates at which vertical diffusion and horizontal advection increased the concentration. The differences between Base and Exp3 around 6 p.m. led to different rates of O 3 consumption, and thus to higher O 3 at night. Chemical processes at about 4 p.m.-7 p.m. were the main factors that led to the high O 3 pollution during the COVID-19 lockdown in Wuhan. As described in previous studies, the main chemical processes of O 3 are its direct production through the photolysis of NO 2 (R1), in which the oxygen atom (O) rapidly recombines with molecular oxygen to produce ozone (O 3 ), normally counterbalanced by the reaction of NO with ozone (R2):

NO 2 + hν → NO + O, followed by O + O 2 + M → O 3 + M  (R1)

NO + O 3 → NO 2 + O 2  (R2)

There is always a net removal of ozone at nighttime, and surface O 3 is normally low when NO emissions are high; significant daytime removal of ozone via reaction (R2) occurs in the vicinity of large NO emission sources (Kleinman et al., 2000; Lin et al., 1998). In our situation, NO x decreased during the lockdown, leading to lower rates of ozone titration, which led to higher O 3 at night and thus to a higher daily concentration of O 3 . One could then hypothesize that the cause of the high O 3 conditions during the lockdown was that the reduced fresh NO emissions weakened the nighttime titration of O 3 .
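The titration effect of R1 and R2 can be made quantitative with the daytime photostationary-state (Leighton) relationship, [O 3 ] ≈ j_NO2 [NO 2 ] / (k_R2 [NO]). A back-of-the-envelope sketch with typical midday rate values; all numbers are illustrative and not fitted to Wuhan:

```python
def leighton_o3(no2_ppb, no_ppb, j_no2=8e-3, k_r2=4.4e-4):
    """Photostationary-state O3 from R1/R2: [O3] = j_NO2[NO2] / (k_R2[NO]).
    j_no2 in 1/s and k_r2 in 1/(ppb*s) are typical midday values."""
    return j_no2 * no2_ppb / (k_r2 * no_ppb)

# Halving fresh NO (NO2 held fixed) doubles the steady-state O3,
# i.e. weaker titration supports higher O3.
print(leighton_o3(no2_ppb=20.0, no_ppb=10.0))   # ~36 ppb
print(leighton_o3(no2_ppb=20.0, no_ppb=5.0))    # ~73 ppb
```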
Discussion and conclusion

In order to study the variation of air pollutant concentrations and their formation mechanisms through chemical reactions and physical processes in the atmosphere during the COVID-19 lockdown, the WRF-CMAQ model and its Process Analysis module were used in this paper to simulate pollutant concentrations and the contribution of each process. The changes in air pollutant ratios during the COVID-19 lockdown period in Wuhan were studied as well. The main conclusions are as follows.

First, the observations show that PM 2.5 decreased by 31.7% and O 3 increased by 43.9% from Feb. 8, 2020 to Apr. 8, 2020 compared to the same period in 2019. The observational results of this study are in alignment with other studies (IQAir, 2020; Le et al., 2020; Ministry of Ecology and Environment of the People's Republic of China, 2020), adding validity to the data.

Secondly, five simulation experiments were carried out. In order to properly calibrate the initial emission source of the model, the Eva experiment was used to assess the performance of the CMAQ simulation. The Base experiment was designed as a baseline without considering the COVID-19 lockdown in Wuhan. The other three experiments, which considered the COVID-19 lockdown in Wuhan, show that Exp3, with a decrease to 20% in industry and 10% in agriculture and transportation, had the best performance in simulating PM 2.5 and O 3 . Therefore, Exp3 might reflect the most plausible reduction of emissions caused by the COVID-19 lockdown in Wuhan.

Thirdly, the impacts of the COVID-19 lockdown on the chemical and physical processes were examined, as summarized in Table 6. Comparing the results of the process analysis of PM 2.5 and O 3 between Exp3 and Base, it was found that (1) the reduction of PM 2.5 was mainly due to the reduction of emissions, whose contribution dropped to 42% in Exp3 compared with Base. The concentration contribution of the aerosol process in Exp3 changed to − 12%, meaning that the aerosol process had an opposite tendency in changing the PM 2.5 concentration in Exp3 compared with Base. In addition, the concentration contributions in Exp3 were reduced to 43% for vertical diffusion and 21% for horizontal advection. (2) The increase of O 3 was mainly due to the weakening of the chemical process (weakened to 37% compared with Base), which was unfavorable to O 3 consumption, even though the rates at which diffusion and advection increased the O 3 concentration were also reduced, to 45% and 57%, respectively.

As for horizontal advection, it is mainly connected with the horizontal wind. The average wind during the COVID-19 lockdown in Wuhan was north-easterly, and the concentration of PM 2.5 in the north-east of Wuhan was much lower than in the main urban area; this favorable wind direction led to lower PM 2.5 concentrations. For O 3 the situation was different, since the concentration was always lower in the main urban area; therefore, instead of decreasing it, horizontal advection increased the concentration of O 3 there. The north-east wind favored the decrease of PM 2.5 , while the higher O 3 concentration in the north-east of the main urban area contributed to the increase of O 3 under this unfavorable wind direction.

As for vertical diffusion, the eddy diffusion concept (K-theory) (Sutton, 1932) shows that the vertical flux is governed by the temperature and by the trace species concentrations at different vertical levels. The simulations of PM 2.5 and O 3 had the same temperature gradient but different concentration gradients. Recent observations in China have shown that the concentration of PM 2.5 generally decreases with height while the concentration of O 3 increases with height. Therefore, the different tendencies of VDIF for PM 2.5 and O 3 in our study can be explained by VDIF decreasing the concentration of PM 2.5 as a flux from the surface to upper layers and increasing the concentration of O 3 as a flux from upper layers to the surface, consistent with the studies of Li et al. (2016) and Hogrefe et al. (2018).

As for the aerosol process of PM 2.5 , it showed opposite tendencies during the COVID-19 lockdown: it had a positive impact in the main urban area and a negative impact in the other areas of Wuhan. The COVID-19 lockdown slowed down the positive rates of the aerosol processes but sped up the negative rates, changing them from increasing to decreasing the concentration of PM 2.5 in the north-east of the urban area. The results also revealed that the changes in aerosol species due to condensation were the main cause.

The results of this study quantify the impacts of the COVID-19 lockdown on air pollution and offer an answer as to what an unprecedented emission mitigation measure can do to prevent air pollution. They show that although an unprecedented emission mitigation measure was carried out and the concentration of PM 2.5 decreased significantly, the concentration of O 3 increased on the contrary.
This result points out that preventing O 3 and PM 2.5 pollution at the same time is more difficult than suggested by existing proposed measures (Li et al., 2011) in the field, which were primarily emission reduction campaigns including closing factories, industrial plants, construction sites and gas stations and keeping vehicles off the road.

Fourth, in order to find the differences in the COVID-19 lockdown impacts between periods of increasing and decreasing pollution, the performance of each process while pollution increased or decreased due to the COVID-19 lockdown in Wuhan was examined, as summarized in Table 7. Concentration contribution ratios were used: by dividing the average process rates of increasing periods by those of decreasing periods in Base and Exp3, the ratios reflect the difference between the periods of increasing and decreasing pollution, providing a direct target for finding the influences of the COVID-19 lockdown on air pollution in Wuhan. (1) As for PM 2.5 , the concentration contribution ratio of horizontal advection was 2.63 in Exp3 and 1.01 in Base (increasing relative to decreasing pollution), showing that horizontal advection exhibited a larger difference between increasing and decreasing pollution in Exp3. (2) The ratio of the aerosol processes was − 0.48 in Exp3 and 3.02 in Base, indicating that in Exp3 the aerosol processes were more likely to decrease the concentration while the pollution decreased, rather than increasing it. (3) As for O 3 , the ratio of the chemical processes was 0.28 in Exp3 and 0.75 in Base, showing that the chemical processes exhibited a larger difference between increasing and decreasing pollution in Exp3. The COVID-19 lockdown had a greater impact on the decreasing stages of the horizontal advection and aerosol processes of PM 2.5 , and on the increasing stages of the chemical processes of O 3 . These results can help to better understand the differences between pollution-increasing and pollution-decreasing stages when an emission mitigation measure is carried out, and can further help the government to consider how to prevent O 3 and PM 2.5 pollution at the same time. According to the results of this paper, focusing on the decreasing stages of PM 2.5 and the increasing stages of O 3 might be more effective when air pollution needs to be avoided.

Finally, in order to find the period of the day most affected by the COVID-19 lockdown, Fig. 9 revealed significant differences at about 4 p.m.-7 p.m. between Base and Exp3, which resulted in O 3 being higher at night during the COVID-19 lockdown period in Wuhan. The cause of these significant differences might be that the restriction of traffic reduced fresh NO emissions, which weakened the titration reaction between O 3 and NO and alleviated ozone titration. This study pointed out the exact period of the day that was most affected, in addition to producing results similar to previous studies (Huang et al., 2021; Le et al., 2020) showing that the reduction of fresh NO emissions alleviates ozone titration, leading to higher O 3 during the COVID-19 lockdown. Beyond that, this result further confirms the previous conclusions through quantitative analysis and suggests a possible new way to avoid O 3 pollution: enhancing ozone titration at about 4 p.m.-7 p.m. of the day.
Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
HYPERPROLACTINEMIA ASSOCIATED WITH CALCIFICATION OF THE PITUITARY STALK: CASE REPORT

In this work, the authors report the case of a 24-year-old female patient with hyperprolactinemia who presented a calcification of the pituitary stalk on CT scan. Once other possible etiologies were excluded, we concluded that the hyperprolactinemia was probably related to the calcification, through interruption of the input of dopamine to the pituitary gland.

Sustained hyperprolactinemia can result from the use of medication, neurogenic disorders, hypothyroidism, chronic renal failure, cirrhosis, and hypothalamic or pituitary diseases, among which prolactinomas stand out (1). Among the alterations of the pituitary stalk related to hyperprolactinemia is its deviation from the median line, eventually suggesting the presence of a pituitary adenoma and secondary compression by a pseudoprolactinoma (2).

The aim of this study is to report the case of a patient with hyperprolactinemia probably due to interruption of the pituitary stalk by calcification. This situation, as far as we know, has not been reported in the literature.

CASE REPORT

A 24-year-old white female patient presented with amenorrhea of two years' duration. She had been using 2.5 mg per day of bromocriptine for 5 months, prescribed after the repeated observation of hyperprolactinemia of 200 ng/ml (normal reference range = 3-24) and an estradiol level of 17 pg/ml (30-150). The patient reported decreased libido and denied other complaints. She reported seizures since 13 years of age, with the last episode 12 months earlier, when she was under treatment with phenobarbital. At physical examination she weighed 61 kg and presented with galactorrhea and a small diffuse goiter. The prolactin concentration at that time was 116 ng/ml, thyroid function tests were normal, and CT scan was negative for a sellar lesion but detected a calcification of the pituitary stalk (Fig 1).

From that moment onwards, the dosage of bromocriptine was progressively increased. Her menses were re-established and her libido improved with 5 mg/day, but the hyperprolactinemia persisted. The lowest serum prolactin concentration was 29 ng/ml, with the use of 12.5 mg of bromocriptine.

DISCUSSION

The hyperprolactinemia presented in this case is not related to the use of drugs or to hypothyroidism. Nevertheless, the presence of a microprolactinoma cannot be ruled out, since CT scan of the sellar region, although useful in the definition of bone alterations, does not always have sufficient sensitivity for the diagnosis of microadenomas (3). The most probable etiology of the hyperprolactinemia observed here is a disturbance of the neuroendocrine mechanisms which control prolactin secretion, represented by the interruption of the input of dopamine to the pituitary gland. High levels of prolactin, above 150 ng/ml, which are classically suggestive of a tumoral etiology, should not be used alone as a discriminating factor for the presence or absence of an adenoma not identified through imaging methods. In a recent series of pseudoprolactinomas, with histological confirmation of the absence of immunohistochemical reactivity for prolactin in all tumors, 7% of the patients presented serum prolactin concentrations higher than 200 ng/ml, reaching 504 ng/ml in one case (4).
The pituitary stalk, with origin in the median eminence and insertion in the neurohypophysis, measures approximately 3.2 mm in transverse diameter at the level of the optic chiasm and 1.9 mm at the level of the pituitary insertion (5), and can present anatomic and/or functional alterations. The stalk is anatomically absent in cases of typical congenital hypopituitarism with diabetes insipidus (6) and as a consequence of previous trauma (7). It can also be found deviated from the median line, an inclination previously interpreted as suggestive of the presence of a pituitary adenoma. This concept has been undergoing modification, since it has been noticed that inclination of the stalk can occur in up to 46% of the general population, whether by eccentricity of the pituitary gland in relation to the median line of the brain or by an eccentric insertion of the pituitary infundibulum in relation to the gland's median line, representing a variation of normality (8). A third anatomical alteration is thickening of the stalk, associated with lymphocytic infundibulo-hypophysitis (9), granulomatous diseases (10) or local or metastatic tumors (11).

Finally, from the functional point of view, the stalk can suffer compression by a tumoral mass which obstructs the capillaries, blocking the input of hypothalamic dopamine (compression syndrome or stalk section effect). These tumors, designated pseudoprolactinomas, are usually non-functional pituitary adenomas, but also craniopharyngiomas and other parasellar tumors. As far as we know, there are no previous reports in the literature on calcification of the pituitary stalk, which would behave as a space-occupying lesion, interfering with the transit of neurotransmitters in this pathway.

Pathologic calcifications in the central nervous system, when not idiopathic, are related to various causes, among them metabolic disturbances (especially of calcium and phosphorus), toxic-anoxic, vascular and tumoral causes, and parasitic and infectious diseases. Independently of the cause, the calcification process is similar in structure and chemical composition. In the case presented here, it is difficult to establish the diagnosis of the primary event which originated the calcification, due to the absence of data from the current and previous history or of findings on physical examination suggestive of a systemic disease. At the same time, there are no tomographic alterations in the central nervous system, nor thickening of the stalk compatible with an inflammatory lesion, nor a diagnosis of diabetes insipidus. Magnetic resonance imaging, inaccessible to the patient for financial reasons, is the gold standard method for the evaluation of the sellar and parasellar region (12). However, the possible detection of a microadenoma would not definitively clarify the etiology of the hyperprolactinemia, as the anatomic lesion of the stalk would persist. On the other hand, the finding of minimal lesions suggesting granulomatous disease could add more information about the etiology of the calcification.

In conclusion, the authors describe another lesion of the central nervous system related to hyperprolactinemia, represented by calcification of the pituitary stalk, which compromises the input of dopamine from the nervous terminals of the median eminence.

Fig 1. CT scan of the sellar region showing calcification of the pituitary stalk.
CDC Grand Rounds: Improving the Lives of Persons with Sickle Cell Disease

Approximately 100,000 Americans have sickle cell disease (SCD), a group of recessively inherited red blood cell disorders characterized by abnormal hemoglobin, called hemoglobin S or sickle hemoglobin, in the red blood cells. Persons with hemoglobin SS or hemoglobin Sβ0 thalassemia, also known as sickle cell anemia (SCA), have the most severe form of SCD. Hemoglobin SC disease and hemoglobin Sβ+ thalassemia are other common forms of SCD. Red blood cells that contain sickle hemoglobin are inflexible and can stick to vessel walls, causing a blockage that slows or stops blood flow. When this happens, oxygen cannot reach nearby tissues, leading to attacks of sudden, severe pain, called pain crises, which are the clinical hallmark of SCD. The red cell sickling and poor oxygen delivery can also cause damage to the brain, spleen, eyes, lungs, liver, and multiple other organs and organ systems. These chronic complications can lead to increased morbidity, early mortality, or both. Tremendous strides in treating and preventing the complications of SCD have extended life expectancy. Now, nearly 95% of persons born with SCD in the United States reach age 18 years (1); however, adults with the most severe forms of SCD have a life span that is 20-30 years shorter than that of persons without SCD (2).

Most of the morbidity and mortality among pediatric patients with SCD is associated with pneumococcal sepsis, strokes, and pain crises. In 1986, researchers concluded that oral penicillin prophylaxis should start at age 4 months for children with SCA, because of the high rates of morbidity and mortality associated with sepsis in early childhood, and that screening for SCD should take place in the neonatal period (3). As a result, since 2006, newborns are universally screened for SCD in all U.S. states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands (4). Current recommendations for pneumococcal infection prevention also include a series of pneumococcal vaccines (5). Transcranial Doppler (TCD) ultrasonography was found to predict which children with SCD had the highest risk for developing a stroke (7). A few years later, a study demonstrated that chronic blood transfusions lowered the risk for first stroke in children with an abnormal TCD result by 92% (8). The current clinical recommendations are annual TCD screening for children aged 2-16 years with SCA and, in an effort to prevent stroke, referral of children with abnormal TCD results to a chronic transfusion specialist (5).

Increased access to and utilization of health care services by children with SCD is a key component in decreasing morbidity and mortality. However, recent data from the Maryland Medicaid program found that 38% of children with SCD had not seen a hematologist by age 2 years, and 54% of children aged 12-17 years had not seen a hematologist in 2 years, suggesting that "the ambulatory care of many Medicaid-insured children with SCD might be inadequate" (9). Furthermore, although bone marrow transplantation is a promising cure for SCD, with 93% survival and 91% event-free survival after 5 years of follow-up, only 1,000 patients with SCD worldwide have received an HLA-identical sibling transplant (10). These findings indicate that, although gains have been made in the treatment of children with SCD, room for improvement remains.
The Transfer from Pediatric to Adult Care and Continued Challenges

As persons living with SCD age, issues concerning adherence, treatment, complications, and the health care system become different from those encountered during childhood. With so many variables in play, it is often difficult to determine the correlation between these factors and the changes in health status that can take place during and after the transfer from pediatric to adult care. Adolescence, in particular, represents a period of medical vulnerability for persons with SCD, given competing demands of normalcy with peers, increasing autonomy in self-management, and advancing disease. For example, Medicaid data suggest that the period of transition from pediatric to adult care is associated with a rise in complications, including pain crises, pulmonary complications, and use of emergency departments (9,11). The causes of these increased complication rates are multifaceted and include lack of access to qualified health care providers with an understanding of and interest in SCD, changes in insurance coverage, psychosocial factors, and others.

Hydroxyurea is a chemotherapeutic agent that increases the production of fetal hemoglobin and decreases SCD-related complications. In adults with SCA, the annual rate of painful crises was significantly lower, and the median times to both first and second crises were longer, in patients receiving hydroxyurea than in those receiving placebo (12). Hydroxyurea use was also found to lower the occurrence of acute chest syndrome (a vaso-occlusive crisis of the pulmonary vasculature) and the need for transfusion therapy. Hydroxyurea is currently labeled for use in adults but is also prescribed to children with SCA. Although hydroxyurea might reduce the occurrence of SCD-related issues, the burden of chronic organ damage is increasingly important: contemporary data indicate that chronic organ damage is now the leading cause of death for adults with SCD (13).

Most adults with SCD have health care insurance, usually Medicaid or Medicare, or both. Still, gaps in coverage might preclude their access to care. For example, insurance plans might not cover necessary services, or high deductibles might preclude use of services. Intermittent or sporadic coverage can occur because of loss of a job that provided insurance or gain of a job that provides a level of income resulting in ineligibility for income-based programs. This lack of access to expert providers and care can further complicate a patient's disease course. As with the pediatric SCD population, broad opportunity exists for a multistrategy approach to improve health outcomes for adults with SCD.

A Health Policy Approach

Increasingly, health policy makers advocate the Triple Aim as a model for improving population health (14). The first aim is to improve population health, the second is to enhance patient experience, and the third is to reduce health care costs by eliminating preventable acute care utilization and readmissions. With the Triple Aim as the goal, researchers and policy makers are now trying to determine a way to achieve these aims for the SCD community that aligns with current health care priorities and occurs at the individual, provider, and health care system levels. Insufficient data, however, have limited recent efforts to incorporate SCD into health policy initiatives.
For example, Healthy People 2020 contained 10 new objectives focused on sickle cell disease (BDBS-1-10); however, all were "archived due to lack of a viable data source" (15).

A Community Approach

The only national community-based organization for SCD, the Sickle Cell Disease Association of America (SCDAA), focuses on improving the quality of life of persons with SCD and finding a cure. SCDAA initiatives include advocacy for increased access to high-quality health care across the lifespan, increased drug development and therapeutic interventions to decrease disease-related complications, and increased availability of low-risk cures for all persons with SCD. To accomplish these goals, SCDAA developed Get Connected, an information-sharing, patient-powered registry. Through this web-based platform, multiple stakeholders can receive information important to the sickle cell community, such as new therapies, opportunities for enrollment in clinical trials, research results, and the locations of knowledgeable providers. The database and network include children and adults with SCD, families, community members, community-based organizations, health care providers, and government and private industry stakeholders.

A Public Health Approach

The shortage of long-term follow-up programs, registries, and data collection systems has limited the understanding of SCD. To address this gap in knowledge, in 2015 CDC implemented the Sickle Cell Data Collection program to meet the need for a public health approach to improving health outcomes. Using state-based surveillance systems, the program provides important population-level data about disease course and the impact of interventions, health care use, and premature death, and identifies providers and sites of care. Understanding the onset and progression of complications helps when planning strategies for prevention, early detection, and intervention. The four main objectives of the Sickle Cell Data Collection program are to 1) establish a health profile of the SCD population in the United States; 2) track changes in the SCD population's outcomes over time; 3) ensure that the SCD community has credible, scientifically sound information to inform standards of care; and 4) inform policy and health care changes. By achieving these goals, the program could improve quality of life, life expectancy, and health of persons living with SCD. Without data and mechanisms to track and understand SCD care and outcomes, evidence of what works and where improvements could be made is limited. These two current efforts, Get Connected and the Sickle Cell Data Collection program, along with adequate resources and support, have the potential to provide the evidence base to inform health care policies and improve the lives of persons living with SCD.
A single-cell atlas of the Culex tarsalis midgut during West Nile virus infection

Abstract

The mosquito midgut functions as a key interface between pathogen and vector. However, studies of midgut physiology and associated virus infection dynamics are scarce, and in Culex tarsalis, an extremely efficient vector of West Nile virus (WNV), nonexistent. We performed single-cell RNA sequencing on Cx. tarsalis midguts, defined multiple cell types, and determined whether specific cell types are more permissive to WNV infection. We identified 20 cell states comprised of 8 distinct cell types, consistent with existing descriptions of Drosophila and Aedes aegypti midgut physiology. Most midgut cell populations were permissive to WNV infection. However, there were higher levels of WNV RNA (vRNA) in enteroendocrine cells and cells enriched for mitochondrial genes, suggesting enhanced replication in these populations. In contrast, proliferating intestinal stem cell (ISC) populations had the lowest levels of vRNA, a finding consistent with studies suggesting that ISC proliferation in the midgut is involved in viral control. Notably, we did not detect significant WNV-infection-induced upregulation of canonical mosquito antiviral immune genes (e.g., AGO2, R2D2) at the whole-midgut level. Rather, we observed a significant positive correlation between immune gene expression levels and vRNA in individual cells, suggesting that within midgut cells, high levels of vRNA may trigger antiviral responses. Our findings establish a Cx. tarsalis midgut cell atlas and provide insight into the midgut infection dynamics of WNV by characterizing cell-type-specific enhancement/restriction of, and immune response to, infection at the single-cell level.

Introduction

Arthropod-borne viruses represent a severe and ever-growing public health threat (1). Mosquito-borne viruses alone are estimated to cause over 400 million infections globally each year (2). For transmission of a mosquito-borne virus to occur, a mosquito must first become infected with the virus via ingestion of an infectious bloodmeal after feeding on a viremic host (3,4). The virus must establish infection in the mosquito midgut before it escapes the midgut and disseminates into the body cavity, eventually entering the salivary glands and saliva, where transmission occurs (3,4). The mosquito midgut is a complex organ comprised of a variety of cell types with distinct functions, including digestion, nutrient absorption, endocrine signaling, and innate immune activity (5,6). The midgut is also the site of infection and escape barriers that strongly influence virus population dynamics (4,5). Previous studies have demonstrated that successful infection of the midgut epithelium, and replication and immune evasion therein, is essential for establishing disseminated infection in an insect vector (3,7). In these ways, for hematophagous disease vectors like mosquitoes, the midgut serves as a critical interface between vector and pathogen.
Despite the importance of Cx. tarsalis as a vector of WNV and other important human viruses, studies examining the cellular composition of its midgut, and WNV infection dynamics therein, are nonexistent. The recent publication of the full Cx. tarsalis genome, in conjunction with the growing body of work demonstrating the successful application of single-cell RNA sequencing methodologies in insect models, has made it possible to address this significant knowledge gap (12-20). Therefore, we performed single-cell RNA sequencing (scRNA-seq) on dissociated midgut cells from both mock and WNV-infected Cx. tarsalis mosquitoes to gain a better understanding of how the midgut functions as the interface between vector and WNV.

We utilized a scRNA-seq approach previously demonstrated to be flavivirus RNA inclusive, which allowed us to detect WNV viral RNA (vRNA) in addition to host transcripts (21). Through this approach we identified distinct midgut populations corresponding to midgut cell types previously described in Drosophila and Aedes aegypti midguts - enterocytes (nutrient absorption cells), enteroendocrine cells (secretory cells), cardia (peritrophic matrix-secreting cells), intestinal stem cells/enteroblasts (undifferentiated progenitor cells), proliferating intestinal stem cells/enteroblasts, visceral muscle cells, and hemocytes (immune cells) - and characterized the infection and replication dynamics of WNV within each population (5,6,14,16,17). We found that WNV infects most midgut cell types, with evidence suggesting enhanced replication in enteroendocrine cells and cells enriched for mitochondrial genes, and reduced replication in proliferating intestinal stem cells/enteroblasts. Additionally, we characterized the Cx. tarsalis immune response to WNV infection at both the whole-midgut and single-cell level. This study has bolstered our understanding of WNV midgut infection in a highly competent vector and elucidated the midgut biology of Cx. tarsalis.

Results

Single-cell RNA sequencing of female Cx. tarsalis midguts identified 20 distinct cell populations. Using the 10X Genomics platform, we performed scRNA-seq on dissociated mock and WNV-infected Cx. tarsalis midgut pools at 4 and 12 days post-infection (dpi). We recovered an average of 2,416 cells per pool with an average coverage of 255,000 reads per cell, which were mapped to the Cx. tarsalis genome (Supplemental File 1). Following quality control (QC) filtering, we retained data for 12,886 cells at 4dpi (7,386 WNV-infected, 5,500 mock) and 9,301 cells at 12dpi (4,609 WNV-infected, 4,692 mock) for downstream analyses (Supplemental File 1). Cells retained after QC contained an average of 597 (611 WNV-infected, 580 mock) and 407 (448 WNV-infected, 367 mock) unique genes per cell at 4 and 12dpi, respectively.
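QC filtering of this kind typically amounts to a few thresholding steps on per-cell metrics. A minimal sketch in Scanpy terms, assuming the counts are loaded into an AnnData object; the file name and cutoffs are illustrative and are not the thresholds or toolchain used in this study:

```python
import scanpy as sc

# Illustrative QC sketch; 'midgut.h5ad' and all cutoffs are assumptions.
adata = sc.read_h5ad("midgut.h5ad")
sc.pp.calculate_qc_metrics(adata, inplace=True)               # per-cell gene/count totals
adata = adata[adata.obs["n_genes_by_counts"] >= 200].copy()   # drop low-complexity cells
adata = adata[adata.obs["total_counts"] <= 20000].copy()      # drop likely doublets
sc.pp.normalize_total(adata, target_sum=1e4)                  # depth normalization
sc.pp.log1p(adata)                                            # log-transform
```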
Intestinal stem cells/enteroblasts (ISC/EB) were identified by visualizing klumpfuss (klu) expression localized to these clusters via feature expression map (Supplemental Figure 5). Klu is a canonical marker for EBs, not ISCs; however, EBs and ISCs are often indistinguishable by UMAP (14,17,20). One of the ISC/EB clusters was significantly enriched for PCNA and aurora kinases A and B - markers for cell proliferation and mitosis - and was therefore named ISC/EB-prol to reflect this (Figure 1C, Supplemental Figure 6). A cluster that shared identical conserved markers with cardia-1 and was also significantly enriched for PCNA was identified as cardia-prol (Figure 1C, Supplemental Figure 6). A cluster of Malpighian tubule cells (MT) that was only present in one sample (mg5c) (Supplemental Figure 1C) was identified by significant enrichment for an inward rectifier potassium channel gene (irk-2) as well as several glutathione and vacuolar ATPase genes (Figure 1C). This indicates that Malpighian tubule tissue was inadvertently retained upon midgut collection for sample mg5c. Clusters without identifying markers are subsequently referred to by cluster number (e.g., cluster 4). Importantly, HCs and MT cells are not midgut cells but are considered associated with the midgut, while EC, EE, cardia, ISC/EB, and VM cell populations comprise the midgut (Figure 2A, C). We compared the proportion of each cluster between mock and WNV-infected replicates and found no significant differences (Supplemental Figure 1A-B). The percent of the total population comprised by each cluster/cell-type can be found in Supplemental Table 1.

Characterization of Cx. tarsalis midgut secretory and immune cells. Enteroendocrine cells (EE) are the secretory cells of the midgut (Figure 2A) that, through the secretion of neuropeptides, regulate behavioral responses associated with feeding, satiety, stress, etc. (14,17,22). These cells, and the neuropeptides they secrete, have been previously characterized in Ae. aegypti and Drosophila, but never in Cx. tarsalis (23,24). We identified Cx. tarsalis orthologs for previously described insect gut hormones found in EE cells - short neuropeptide F (sNPF), bursicon (Burs), ion transport peptide (ITP), and tachykinin (Tk) receptor (14,17,20,23-25). However, tachykinin receptor was the only detectable neuropeptide/neuropeptide receptor identified in our EE populations (Figure 2B). The EE population was significantly enriched for canonical neuroendocrine genes (IA2 and SCG5) (26,27) and Syt1, and showed expression of Syt4, Syt6, Syx1A and nSyb (genes associated with vesicle docking and secretion) (Figure 2B) (14,28). Interestingly, the EE population showed strong enrichment for NEUROD6, a neuronal differentiation gene known to be involved in behavioral reinforcement in mammals (Figure 2B) (29).

Hemocytes (HC) are immune cells that circulate in the hemolymph (Figure 2C) and play a central role in the mosquito immune response, the exact nature of which varies by class of HC (15,16,18,30,31). Much like EEs, hemocytes have not been characterized in Cx. tarsalis. We distinguished the classes of our HC populations using identifying markers. The HC-1 population was identified as mature granulocytes due to expression of SCRASP1 (Figure 2D) and enrichment for c-type lectin, defensin, and cecropin genes (Supplemental File 3).
The HC-2 population was identified as oenocytoids by expression of SCRB3 (Figure 2D) (15,16,18,30). SPARC was present in both HC classes; however, NIMB2, a gene previously identified as a marker present in all hemocyte classes in Anopheles gambiae (18), was only detected in granulocytes (HC-1) (Figure 2D). The oenocytoid populations in Cx. tarsalis do not appear to express NIMB2 (Figure 2D).

COG profiles demonstrate homogeneity between midgut cell populations despite differences in conserved markers. We next examined the transcriptional profiles of each cluster to understand the function of unidentified clusters and compare the transcriptomes of distinct cell populations. We used clusters of orthologous genes (COG) categories and a two-pronged approach to visualization: COG profiles of all genes expressed in >75% of cells in a cluster (termed "base genes"), and COG profiles of all significant (p<0.05) conserved cluster markers with positive log2 fold-changes (log2FC) relative to the other clusters (Supplemental Figure 1A, B). Base gene and cluster marker gene profiles were derived from the total population at each timepoint. Despite the varying cell types, we noted homogeneity across base genes for each cluster, with the plurality of each profile for most clusters comprised of genes involved in translation and ribosomal biogenesis (J) and energy production and conversion (C) (Supplemental Figure 1A). However, ISC/EB-prol, cardia-2, and cardia-prol all possessed fewer 'J' COGs than other clusters. The COG profiles of EC-like populations show variability between their transcriptomes and those of ECs (Supplemental Figure 1A-B). As expected, VM populations contained the highest proportions of cytoskeletal genes (Z) in both base gene and cluster marker profiles compared to other cell types (Supplemental Figure 1A-B). Interestingly, base gene profiles differed dramatically between the HC-1 and HC-2 populations, which reflects the distinct HC classes (granulocytes and oenocytoids) comprising each population (Supplemental Figure 1A).

WNV vRNA is detected at varying levels in the majority of midgut cell populations. In addition to characterizing the cellular heterogeneity of Cx. tarsalis midguts, we also sought to examine WNV infection dynamics at the single-cell level. The five-prime bias of the scRNA-seq chemistry captured and allowed us to detect the WNV 5' UTR as a feature in our data. Importantly, WNV viral RNA (vRNA) was only detected in our WNV-infected samples and was broadly detected across most cell populations at both time points (Figure 3A). Within WNV-infected replicates, we compared the percent of cells with detectable vRNA (calculated as percent expressing) and the average vRNA level (calculated as average expression) in the total population for each timepoint (Figure 3B). We saw no significant difference in the total percent of WNV-infected cells between timepoints, but a significant increase in average total vRNA level by 12dpi (Figure 3B).
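Percent expressing and average expression are simple summaries of the normalized count matrix. A minimal sketch, assuming the WNV 5' UTR was quantified as a feature named "WNV-5UTR" in an AnnData object; the feature and key names are illustrative, not those of the actual pipeline:

```python
import numpy as np

def vrna_by_cluster(adata, feature="WNV-5UTR", cluster_key="cluster"):
    """Percent of cells with detectable vRNA and mean normalized vRNA level
    per cluster; clusters with <=5 cells are skipped, as in the text."""
    x = adata[:, feature].X
    x = np.asarray(x.todense()).ravel() if hasattr(x, "todense") else np.ravel(x)
    out = {}
    for cl in adata.obs[cluster_key].unique():
        vals = x[(adata.obs[cluster_key] == cl).values]
        if vals.size <= 5:
            continue
        out[cl] = {"pct_expressing": 100.0 * float((vals > 0).mean()),
                   "avg_expression": float(vals.mean())}
    return out
```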
Within clusters (replicates within time points combined), cells contained variable levels of vRNA; however, some clusters (cluster 17, cardia-prol, etc.) were either not present or comprised of ≤5 cells in the WNV-infected condition (Figure 3C). At both timepoints, cluster 4 contained the highest average level of vRNA and was significantly enriched for vRNA relative to the other clusters (Figure 3C-D). Cluster 4 lacked canonical markers but was significantly enriched for mitochondrial genes and mitochondrial tRNAs, suggesting these cells are in states of increased energy demand or stress (Figure 3D). There was minimal expression of pro-apoptotic genes across all clusters, confirming that cell death is neither driving clustering nor causing the upregulation of mitochondrial genes and tRNAs in cluster 4 (Supplemental Figure 7).

WNV vRNA levels differ between epithelial cell populations. Next, we sought to compare the presence and level of vRNA in the epithelial cell populations: EC and EC-like, EE, cardia, ISC/EB, and ISC/EB-prol. Average expression and percent expression values derived from clusters comprised of ≤5 cells in a given replicate were excluded from this comparison. At 4dpi the EC-like-2 population had the highest percentage of cells containing vRNA, significantly more than the EC-like-1, EC, EE, ISC/EB-prol, and cardia populations (Figure 4A). Interestingly, the other EC-like population at 4dpi (EC-like-1) had significantly lower percentages of cells containing vRNA compared to other populations (Figure 4A). There were no significant differences in the percent of cells containing vRNA between any epithelial cell populations at 12dpi (Figure 4B). At both time points, the EC-like-2 and EE populations had the highest average levels of vRNA (Figure 4C). At 12dpi EE populations had significantly higher levels of vRNA than all other epithelial cell populations (Figure 4D).

To further explore the epithelial cell populations and their involvement in WNV infection, we used slingshot (v2.10.0) to perform a trajectory inference and identify cell lineages. We identified two lineages: (1) ISC/EB → ISC/EB-prol → EE, and (2) ISC/EB → ISC/EB-prol → EC-like → EC (Figure 4E). As expected, we observed decreases in Klu and PCNA expression in both lineages before pseudotime 10, and saw increases in expression of the EC cell marker POU2F1 and the EE cell marker PROX1 corresponding with the differentiation of lineages 2 and 1, respectively (Supplemental Figure 12). Plotting levels of WNV vRNA by lineage revealed that vRNA levels decrease in the ISC/EB-prol population and increase in fully differentiated EE and EC cells (Figure 4F).
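The lineage-level pattern in Figure 4F amounts to summarizing vRNA within pseudotime bins along each lineage. Slingshot itself is an R package; the sketch below only illustrates the downstream binning step, with illustrative names:

```python
import numpy as np

def vrna_along_pseudotime(pseudotime, vrna, n_bins=20):
    """Mean vRNA level in equal-width pseudotime bins along one lineage.
    pseudotime and vrna are per-cell arrays for cells assigned to the lineage."""
    edges = np.linspace(np.nanmin(pseudotime), np.nanmax(pseudotime), n_bins + 1)
    which = np.digitize(pseudotime, edges[1:-1])   # bin index per cell, 0..n_bins-1
    return np.array([vrna[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(n_bins)])
```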
Identification of genes associated with WNV infection at the whole-tissue and single-cell level.

Bulk-RNA sequencing comparing WNV-infected to uninfected Cx. tarsalis midguts has not yet been described, so we performed a pseudo-bulk differential expression (DE) analysis to identify differentially expressed genes (DEGs) associated with mock and WNV-infected midguts at the whole-tissue level. We identified six significant DEGs at 4dpi: homocysteine S-methyltransferase, DMAS1 (aldo-keto reductase), and GBE1 (deltamethrin resistance-associated gene) were upregulated in response to WNV infection, while BCAN (c-type lectin), uncharacterized gene11056, and a serine protease gene were downregulated (Figure 5A). At 12dpi, we identified 10 significant DEGs: an ML (MD-2-related lipid recognition) domain-containing gene, four CRYAB (heat shock protein) genes, and a chitin-binding domain-containing gene were upregulated in response to WNV infection, and three uncharacterized genes (gene13447, gene11056, gene9296) and fibrinogen/fibronectin were downregulated (Figure 5B). DEGs differed for each timepoint and, as such, we next examined DEGs in the WNV-infected condition between timepoints. We found many significant DEGs between timepoints; several leucine-rich repeat-containing genes were upregulated at 4dpi, and the host immune gene LYSC4 was upregulated at 12dpi (Figure 5C).

To further examine genes associated with WNV infection, we performed a gene correlation analysis on normalized counts for the top 500 variable genes (genes that have variation in expression across all cells) for each timepoint, determined significance via bootstrapping, then extracted and visualized characterized genes correlated (>0.65) with vRNA (Figure 5D, Supplemental Table 2). The transcription regulator ATRX, a cytochrome p450 gene, and several uncharacterized genes were strongly correlated with vRNA at 4dpi (Figure 5D, Supplemental Table 2). At 12dpi, GSTE4, HAO1, METTLE20, PROX1, UROS, BCAN, DHDH, and CHKov1 were strongly correlated with vRNA, along with serine protease, AMP-dependent ligase, cytochrome p450, mitochondrial ribosomal S26, glutathione S-transferase, and aldo/keto reductase family genes (Figure 5D, Supplemental Table 2). Many of these genes have no documented roles in flavivirus infection. However, ATRX has been implicated in the cellular response to DNA damage and chromatin remodeling, processes which many viruses exploit during infection (32)(33)(34). Further, cytochrome p450 enzymes and serine proteases have been purported to play a role in the mosquito response to viral infections (35)(36)(37)(38).

Upon observing that PROX1, the canonical marker for EE cells, was significantly positively correlated with vRNA at 12dpi, we examined the correlation between vRNA and several previously described neuroendocrine genes (Figure 2B, 5E). Two previously described housekeeping genes, RPL8 and RPL32 (39), were validated as having broad expression throughout the total population and were used both to confirm that the high prevalence of vRNA in these populations was not confounding the results and to provide a visual reference for a biologically insignificant correlation value (Figure 5E). Additionally, for each timepoint we determined the correlation between WNV vRNA and 1,000 random genes from the dataset (the unlabeled solid line denotes the average of this calculation with 95% confidence intervals) (Figure 5E). At 4dpi, PROX1, Syt6, and Syx1A, and at 12dpi, PROX1, IA2, and Syx1A had strong positive correlations with vRNA (Figure 5E).
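The text does not spell out the exact bootstrap scheme used for significance. One plausible reading, resampling cells with replacement and keeping genes whose 95% bootstrap interval excludes zero, can be sketched as follows; the function and argument names are illustrative, not the authors' code.

```python
import numpy as np

def bootstrap_corr(gene_expr, vrna, n_boot=1000, seed=0):
    """Pearson correlation between one gene and vRNA across cells, with a
    bootstrap 95% interval used as the significance check."""
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(gene_expr, vrna)[0, 1]
    n = gene_expr.size
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample cells with replacement
        boots[b] = np.corrcoef(gene_expr[idx], vrna[idx])[0, 1]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    significant = (lo > 0) or (hi < 0)   # interval excludes zero
    return obs, significant

# Genes would then be retained if significant and |obs| > 0.65, as in the text.
```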
Characterization of the midgut immune response to WNV infection at the whole-tissue and single-cell level.

While previous work demonstrated an increase in hemocyte proliferation upon bloodmeal ingestion and infection, there were no significant increases in the proportion of hemocyte populations associated with infection at either time point (Figure 6A-B) (15). Upon observing that no mosquito immune genes were identified as significantly upregulated in response to WNV infection by the pseudo-bulk DEG and correlation analyses, we manually compared the percent of cells expressing, and the expression level of, key immune genes that have been implicated in viral control and the infection response (19,30,(40)(41)(42)(43). We identified orthologs in the Cx. tarsalis genome for the mosquito immune genes DOME, NANOS1, MYD88, IMD, AGO2, R2D2, STAT5B, Cactus, PIAS1, SUMO2, LYSC4, MARCH8, PIWIL1, PIWIL2, DICER2, and NFKB1, and found no significant differences in the percent of cells expressing or the average expression of these genes at either time point (Figure 6C-D).

Next, we examined the relationships between expression of these immune genes and vRNA at the single-cell level in the WNV-infected population (Figure 6E). Interestingly, almost all genes were significantly positively correlated with vRNA at both timepoints (Figure 6E). To further confirm these findings, we visualized the relationship of the four most highly correlated immune genes (IMD, PIWIL1, PIAS1, and DOME) with vRNA in individual cells, and compared the expression level of each immune gene in both mock and WNV-infected conditions (timepoints combined) (Figure 6F-G, Supplemental Figure 11). These genes and vRNA were correlated despite comparable expression levels of each gene across infection conditions, confirming that while vRNA load is correlated with specific immune genes at the individual cell level, it does not induce significant population-level immune gene enrichment (Figure 6F-G, Supplemental Figure 11).

Discussion

In this study, we sought to generate a midgut cell atlas (i.e., a map of cell type and function) for Cx. tarsalis and to characterize WNV infection of the midgut at single-cell resolution by performing scRNA-seq on mock and WNV-infected midguts collected at days 4 and 12 post infection. We identified and described nutrient absorptive (enterocyte), secretory (enteroendocrine), peritrophic matrix-secreting (cardia), undifferentiated progenitor (intestinal stem cell/enteroblast), visceral muscle, and immune (hemocyte) cell populations (5,17,18). The distribution and proportion of each cell type in the total population varied between timepoints; however, we identified at least one cluster comprised of each cell type at each timepoint. Several clusters were precluded from identification due either to a lack of canonical markers/enrichment patterns or to origination from a single replicate. Nonetheless, we have demonstrated that single-cell sequencing of Cx. tarsalis midguts is feasible and that distinct cell populations can be identified and characterized using previously described canonical cell-type markers and enrichment patterns (14,(16)(17)(18)20).

We detected WNV RNA (vRNA) in the majority of midgut cells at both timepoints. While vRNA significantly increased in the total midgut by 12dpi, the percent of infected cells did not, demonstrating that the majority of midgut cells that will become infected are infected by 4dpi, while vRNA load increases as infection progresses.
The high percentage of WNV-infected cells and the permissiveness of most cell populations to infection support previous work demonstrating the extreme competence of Cx. tarsalis as a WNV vector (9,(44)(45)(46). Interestingly, while WNV infected almost all midgut cell populations, cluster 4 was significantly enriched for vRNA at both timepoints. This high WNV-expressing cluster was associated with very few (<15) defining cluster markers, precluding us from identifying its cell type. The cluster markers associated with this cell state are comprised entirely of mitochondrial genes and mitochondrial tRNAs, suggesting a heightened state of cell stress and/or energy production. Importantly, this cluster was present and enriched for the same mitochondrial genes in the mock condition, suggesting that WNV was able to replicate to higher levels in cells enriched for mitochondrial genes, and not that viral replication induced significant stress and/or energy production responses in these clusters (47,48). Several previous studies have demonstrated that positive-sense single-stranded RNA viruses like dengue virus (DENV) and SARS-CoV-2 modulate mitochondrial dynamics to facilitate replication and/or immune evasion (48)(49)(50), suggesting potential beneficial interactions between WNV and the mitochondria. Additionally, a study in Lepidoptera (moths and butterflies) purported that enrichment of mitochondrial genes is associated with insect stress resistance (47). Stress resistance responses modulate cell viability, and it is known that the maintenance of cell viability is central to productive WNV infection (51,52).

Moreover, we demonstrated that several heat shock genes (known to be protective against cell stress) were significantly upregulated in WNV-infected midguts, further suggesting interplay between WNV infection and the stress response (53)(54)(55). However, these heat shock genes were predominantly localized to cluster 8, an untyped cluster that does not contain the high level of vRNA seen in cluster 4. Further work is needed to tease out the complexities of these cell states and their impact on WNV replication.

Our enteroendocrine (EE) cell populations contained only one of the previously described mosquito neuropeptides/neuropeptide receptors, tachykinin receptor, and only in a small subset of cells (14,17,20,23,25). This could be due to the known bias of scRNA-seq towards highly expressed genes, or to Cx. tarsalis EE cell secretion of as-yet uncharacterized neuropeptides. High expression of NEUROD6, a neurogenic differentiation factor frequently found in neurons involved in behavioral reinforcement, in EE populations at both timepoints supports the hypothesis that additional/uncharacterized neuropeptides may be present in Cx. tarsalis EE cells (29). Interestingly, EE populations contained more vRNA than other epithelial cell populations. Further, PROX1 (the canonical marker for EE cells) and select neuroendocrine and vesicle-docking genes present in EE cells were strongly positively correlated with vRNA at both timepoints, supporting our hypothesis that EE cells serve as sites of enhanced WNV replication during midgut infection. This hypothesis is further supported by previous studies suggesting that arboviruses preferentially replicate in highly polarized cell types, such as EE cells (56,57). Additionally, previous work with Sindbis virus (SINV), a mosquito-borne alphavirus, identified EE cells as a site of infection initiation in Ae. aegypti (58).
A previous study characterizing ISC dynamics in response to DENV in the Ae. aegypti midgut found that ISC proliferation increased refractoriness to infection, suggesting that cell renewal is an important part of the midgut immune response (59). Interestingly, we observed that proliferating ISC/EB populations consistently had the lowest levels of vRNA compared to other epithelial cell types. Further, cell lineage trajectory analysis showed that vRNA decreased to its lowest levels in the ISC/EB-prol population before rebounding in fully differentiated EE and EC cell types. Proliferating cell states expressed notably fewer base genes involved in translation, ribosomal structure, and biogenesis. Together, these findings suggest that the transcriptional state of proliferating ISC/EBs impedes WNV replication.

While the presence of vRNA alone does not signify active replication, average vRNA levels increased between timepoints in all epithelial cell populations apart from the ISC/EB and ISC/EB-prol populations (Supplemental Figure 13), suggesting that the high vRNA levels in EE populations and the low vRNA levels in ISC/EB-prol populations are the result of enhanced and restricted replication, respectively. The lack of significant differences in the percent of cells in each population containing vRNA at 12dpi further supports that the varying levels of vRNA in EE and ISC/EB-prol cells are due to permissivity to replication and not susceptibility to infection. The factors that determine cell-type-specific enhancement or suppression of WNV replication are not currently known, but could include more efficient evasion of antiviral pathways, more efficient mechanisms of midgut escape/dissemination, or the abundance of pro- or anti-viral genes.

We identified granulocyte and oenocytoid hemocyte populations, known to play important roles in the mosquito innate immune response, at both timepoints, allowing us to characterize a cellular immune component of mosquito midguts (5,15). There was no evidence of immune cell proliferation or immune gene upregulation in the total infected population compared to mock, suggesting little to no immune activation in the midgut upon WNV infection. This was surprising given the importance of the midgut as a site of innate immune activation (5,6,45,46). Although scRNA-seq of Cx. tarsalis after WNV infection has not been described, several previous studies in Ae. aegypti and Cx. pipiens have noted significant upregulation of IMD and Toll pathway genes in response to viral infection and have highlighted that the innate immune response in mosquitoes is a strong determinant of vector competence (53,60,61). The absence of a notable immune response to WNV infection in Cx. tarsalis could be a determinant of the vector's extreme susceptibility and competence (9,(44)(45)(46). However, in individual cells, most immune genes had some degree of significant positive correlation with vRNA, suggesting that, while WNV infection does not cause significant enrichment of these genes in the total population, WNV infection and replication influence the expression of these genes at the single-cell level. This finding highlights that scRNA-seq is a powerful tool for characterizing infection dynamics that are not apparent when looking at the population average.
Limitations and Future Directions

Our inability to detect certain genes (i.e., neuropeptide genes and additional canonical markers we would expect to see) could be due to the low percentage (~30%) of reads mapped to the Cx. tarsalis genome (due to a predominance of reads that were too short to map), or to the absence of those genes in the existing annotation file. Future scRNA-seq studies in Culex mosquitoes could potentially benefit from adjusting the fragmentation time recommended by 10X Genomics. Further, improvement of the existing Cx. tarsalis genome annotation would facilitate studies of gene expression in this species. A multitude of genes detected in our dataset remain uncharacterized due to a lack of appropriate orthologs, which could be explained by the evolutionary divergence between Cx. tarsalis and the species from which most gene orthologs were derived: Cx. quinquefasciatus and Ae. aegypti, which diverged 15-22 million years ago (MYA) and 148-216 MYA, respectively (12). High levels of vRNA in specific cell types imply that replication is occurring or has occurred, but it is important to note that the presence of vRNA is not analogous to active viral replication (e.g., the presence of vRNA could be the result of phagocytosis of an infected cell). Future studies could use qRT-PCR, focusing on our top genes of interest, to measure expression kinetics and levels following midgut infection in Cx. tarsalis and other relevant vectors. Finally, early WNV infection in the Cx. tarsalis midgut will be further studied in our lab via immunofluorescence assays using cell-type-specific RNA probes in whole midguts, putting the findings described here into a spatial context.

Conclusion

The work presented here demonstrates that WNV is capable of infecting most midgut cell types in Cx. tarsalis. Moreover, while most cells within the midgut are susceptible to WNV infection, we observed modest differences in virus replication efficiency that appeared to occur in a cell-type-specific manner, with EE cells being the most permissive and proliferating ISC/EB cells being the most refractory. Our findings also strongly suggest interplay between WNV infection and the cell stress response, and we have provided evidence that WNV infection of the Cx. tarsalis midgut results in the upregulation of cell stress-associated genes. We observed mild to no upregulation of key mosquito immune genes in the midgut as a whole; however, we show that immune gene expression is correlated with WNV vRNA level within individual cells. Additionally, we have generated a midgut cell atlas for Cx. tarsalis and, in doing so, improved the field's understanding of how WNV establishes infection in this highly competent vector.

Declaration of interests

The authors declare no competing interests.
Inclusion and diversity

We support inclusive, diverse, and equitable conduct of research.

Figure 6 legend: (A-B) Percent of the total population of each replicate comprised of hemocytes, compared between mock and WNV-infected conditions at each timepoint. Significance was determined by unpaired t-test. (C-D) Average expression of, and percent of cells expressing, mosquito innate immune genes, compared between mock and WNV-infected replicates for each timepoint. Significance was determined by multiple unpaired t-tests. (E) Correlation of immune genes and WNV vRNA at 4dpi (purple) and 12dpi (teal). Red triangular points denote non-significant correlation. The unlabeled solid line represents the mean correlation of WNV vRNA with 1,000 randomly selected genes at the specified timepoint (unlabeled dotted lines represent the upper and lower 95% confidence intervals for this value). Labeled dotted lines denote vRNA correlation with the RPL8 and RPL32 housekeeping reference genes. (F) Feature scatter of IMD vs. the WNV 5' UTR for both timepoints combined. (G) Expression level of IMD in mock and WNV-infected conditions for both timepoints combined. Panels E-F were derived from only WNV-infected replicates.

Methods

Virus. All infections were performed with a recombinant barcoded WNV (bcWNV) passage 2 stock (epidemic lineage I, strain 3356) grown on Vero cells. The titer of the stock was determined by standard Vero cell plaque assay (62).

Mosquito infection. Mosquito studies were conducted using laboratory colony-derived Cx. tarsalis mosquitoes (>50 passages). WNV infections in mosquitoes were performed under A-BSL3 conditions. Larvae were raised on a diet of powdered fish food. Mosquitoes were maintained at 26°C with a 16:8 light:dark cycle and 70-80% relative humidity, with water and sucrose provided ad libitum. Cx. tarsalis mosquitoes were transferred to A-BSL3 conditions 48 hours prior to blood feeding, and dry-starved 20-24 hours before blood feeding. Seven days after pupation (6-7 days after emergence), mosquitoes were exposed to an infectious bloodmeal containing a 1:1 dilution of defibrinated calf's blood and bcWNV stock diluted in infection media (Dulbecco's Modified Eagle's Medium, 5% penicillin-streptomycin, 2% amphotericin B, and 1% fetal bovine serum (FBS)) for a final concentration of 3-6e7 PFU/mL, or a mock bloodmeal containing a 1:1 dilution of defibrinated calf's blood and infection media. All bloodmeals were provided in a hog's gut glass membrane feeder warmed by circulating 37°C water. Following 50-60 minutes of feeding, mosquitoes were cold-anesthetized, and engorged females were separated into cartons and maintained on sucrose.

Collection of mosquito tissues. At the indicated time points, mosquitoes were cold-anesthetized and transferred to a dish containing Sf900III insect cell culture media (Gibco) with 5% FBS. Midguts were dissected and transferred to tubes containing 500 µL Sf900III media + 5% FBS and kept on ice for the duration of dissections. Ten pooled midguts per sample/tube were collected for dissociation and sequencing.

Midgut dissociation and single-cell suspension preparation.
A dissociation buffer containing Bacillus licheniformis protease (10 mg/mL) and DNAse I (25 U/mL) was prepared in Sf900III media (Gibco). Pooled midguts were resuspended in dissociation media, transferred to a 96-well culture dish, and triturated with a p1000 pipet at 15-20 minute intervals for 105 minutes. At each interval, 100-125 µL containing dissociated single cells was collected (with replacement) from the top of the dissociation reaction and transferred to 25 mL of Sf900III + 5% FBS on ice. Dissociation reactions were kept covered at 4°C between triturations. Upon complete tissue dissociation, the entire remaining volume of each reaction was transferred to Sf900III + 5% FBS on ice. Collection tubes with dissociated cells were centrifuged at 700xg for 10 minutes at 4°C, resuspended in 500 µL of Sf900III + 5% FBS, and passed through a 40 µm small-volume filter (PluriSelect).

Immediately prior to loading on the Chromium Controller, cell suspensions were spun down at 700xg for 10 minutes, washed twice in 1 mL PBS + 0.04% bovine serum albumin (BSA), and resuspended in 50 µL of PBS + 0.04% BSA. Cell concentration was determined using the Countess II Automated Cell Counter (Thermo Fisher Scientific), and the appropriate cell suspension volume (target recovery of 10,000 cells) was loaded on the Chromium Controller. Further details on this protocol can be found here: dx.doi.org/10.17504/protocols.io.j8nlke246l5r/v1.

Gel Bead-In Emulsion (GEM) generation and cDNA synthesis. GEM generation and cDNA synthesis were performed using the Next GEM Single Cell 5' GEM kit v2 (PN-1000266) and Next GEM Chip K Single Cell Kit (PN-1000286) (10X Genomics). Reactions for GEM generation were prepared according to the Chromium Next GEM Single Cell 5' Reagent Kit v2 (dual index) user guide with one alteration: a primer specific to the WNV envelope region of the genome was added at a concentration of 10 nM (7.5 µL, displacing 7.5 µL of the total H2O added to each reaction). GEMs were generated using the 10X Chromium Controller X series, and cDNA was synthesized according to the Chromium Next GEM Single Cell 5' Reagent Kit v2 (dual index) user guide.

Library preparation and sequencing. Libraries were prepared using the Next GEM single cell 5' v2 library construction kit and Dual Index Kit TT Set A (10X Genomics, PN-1000190 and PN-1000215, respectively). Library construction was carried out according to the Chromium Next GEM Single Cell 5' Reagent Kit v2 (dual index) user guide. Library concentration was determined by KAPA Library Quantification Kit (Roche). Libraries were then diluted to 15 nM, pooled by volume, and sequenced at the CU Anschutz Genomics and Microarray core on the NovaSeq 6000 (150x10x10x150) (Illumina) at a target coverage of 4.0e8 read pairs per sample, equating to 40,000 read pairs per cell. Average sample coverage and cell recovery per sample were 5.3e8 read pairs and 2417.9 cells, respectively (Supplemental File 1).
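As a quick check of the depth figures above: the target of 4.0e8 read pairs per sample over 10,000 cells gives the stated 40,000 read pairs per cell, and because far fewer cells were recovered than targeted, the realized per-cell depth was roughly five times higher.

```python
# Target depth: 4.0e8 read pairs per sample over 10,000 targeted cells.
print(4.0e8 / 10_000)      # 40000.0 read pairs per cell, as stated

# Realized depth: ~2,418 cells recovered per sample on average.
print(5.3e8 / 2417.9)      # ~219,000 read pairs per cell
```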
Reference generation and sample processing with Cell Ranger. The gene feature files associated with the Cx. tarsalis and WNV genomes were converted to gene transfer format (gtf) using AGAT (v1.0.0) prior to being filtered with the Cell Ranger (v7.0.1) mkgtf function (63). An 'MT-' prefix was manually added to all non-tRNA features located in the mitochondrial chromosome of the Cx. tarsalis genome. The contents of the filtered WNV genome feature file and fasta file were appended to the Cx. tarsalis feature and fasta files, respectively, and run through CellRanger::mkref. All sequencing data were processed and mapped to the aforementioned Cx. tarsalis reference genome using CellRanger::count with the following parameters: --include-introns=true --expect-cells=10000.

Quality control and Seurat workflow. Cell Ranger output files were individually read into RStudio (RStudio v2023.09.0+463, R v4.3.2) as SingleCellExperiment objects using the singleCellTK package (v2.12.0). Doublet identification and ambient RNA estimation were performed with singleCellTK::runCellQC using the algorithms "scDblFinder" and "DecontX", respectively. Samples were filtered for doublets and ambient RNA contamination by keeping cells with the following metrics: decontX_contamination < 0.6, scDblFinder_doublet_score < 0.9. Samples were then converted to Seurat objects, log-normalized, and merged into one Seurat object, with columns pertaining to sample of origin, infection condition, and timepoint added to the object metadata prior to processing as described in the Seurat (v4.3.0.1) guided clustering tutorial (64). Briefly, the mitochondrial gene percentage for each cell was calculated, and cells with the following metrics were retained: nFeature_RNA > 100, nFeature_RNA < 2500, percent_mt < 25 (14). Cell retention metrics were informed by a previous study of the Drosophila midgut (14). The percent of cells retained after QC for each sample can be found in Supplemental File 1.

Features were log-normalized, variable features were identified, the top 2000 variable features were scaled, a principal component (PC) analysis dimensionality reduction was run, and the number of PCs needed to adequately capture variation in the data was determined via elbow plot. Nearest neighbors were computed, and appropriate clustering granularity was determined with Clustree (v0.5.0). Uniform Manifold Approximation and Projection (UMAP) dimensional reduction was performed, and clusters were visualized with the UMAP reduction. Cluster markers were identified with Seurat::FindConservedMarkers using default parameters and infection condition as the grouping variable. For clusters that had very few conserved markers, we split the dataset by infection condition and used Seurat::FindMarkers on clusters in either the mock or WNV-infected condition. WNV vRNA as a cluster marker was identified by splitting the dataset by infection condition and using Seurat::FindAllMarkers on the WNV-infected samples. In all cases where calculations were performed on individual replicates or individual conditions, the merged Seurat object was split by sample or condition using Seurat::SplitObject, so that calculations performed on subsets of the data or individual replicates were derived from a dataset that had been normalized as one. Percent expression and average expression were calculated with scCustomize::Percent_Expressing (v1.1.3) and Seurat::AverageExpression, respectively. Feature expression levels were visualized with Seurat::FeaturePlot and Seurat::VlnPlot.
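The retention thresholds above were applied in R with singleCellTK and Seurat. An equivalent filter, sketched in Python over a hypothetical per-cell metadata table with the same column names, would look like this:

```python
import pandas as pd

def qc_filter(meta: pd.DataFrame) -> pd.DataFrame:
    """Keep cells passing the doublet, ambient-RNA, feature-count, and
    mitochondrial-percentage thresholds quoted in the text."""
    keep = (
        (meta["decontX_contamination"] < 0.6)
        & (meta["scDblFinder_doublet_score"] < 0.9)
        & (meta["nFeature_RNA"] > 100)
        & (meta["nFeature_RNA"] < 2500)
        & (meta["percent_mt"] < 25)
    )
    return meta[keep]
```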
Trajectory inference with Slingshot. Lineage structure and pseudotime inference were performed using the Slingshot (v2.10.0) and tradeSeq (v1.16.0) functions getLineages, getCurves, and fitGAM, successively, on the dataset containing the total population. ISC/EB and ISC/EB-prol populations were specified as the start state, and EC and EE populations were specified as the end states for lineage determination.

Pseudo-bulk differential expression analysis with DESeq2. The pseudo-bulk dataset was generated and the differential expression analysis performed as described in Khushbu Patel's (aka bioinformagician) pseudo-bulk analysis for single-cell RNA-seq data workflow tutorial (65). Briefly, raw counts were aggregated at the sample level using Seurat::AggregateExpression, and the aggregated counts matrix was extracted and used to create a DESeq2 object (dds). The dds object was filtered to retain genes with counts >= 10 prior to running DESeq2 and extracting results for the appropriate contrast. DESeq2 results were visualized via EnhancedVolcano (v1.20.0). We identified infection-associated DEGs between timepoints by performing a pseudo-bulk DE analysis between 4dpi and 12dpi WNV-infected samples and mock samples separately. We then filtered out DEGs associated only with bloodmeal consumption (genes that came up as significantly differentially expressed between timepoints in our mock condition), leaving only DEGs associated with infection.

Figure captions:
Figure 1. Single-cell sequencing of Cx. tarsalis midguts and cell-typing of midgut cell populations.
Figure 2. Characterization of enteroendocrine cells and hemocytes.
Figure 3. WNV vRNA is detected in most midgut cell populations.
Figure 4. Examining WNV vRNA levels in intestine epithelial cells.
Figure 5. Identifying genes upregulated in response to WNV infection with differential expression and correlation analyses.
Figure 6. Lack of key immune gene upregulation in response to WNV infection.
Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure

The Sentinel-2 satellite mission, developed by the European Space Agency (ESA) for the Copernicus program of the European Union, provides repetitive multi-spectral observations of all Earth land surfaces at a high resolution. The Level 2A product is a basic product requested by many Sentinel-2 users: it provides surface reflectance after atmospheric correction, with a cloud and cloud shadow mask. The cloud/shadow mask is a key element to enable an automatic processing of Sentinel-2 data, and therefore its performances must be accurately validated. To validate the Sentinel-2 operational Level 2A cloud mask, a software program named Active Learning Cloud Detection (ALCD) was developed to produce reference cloud masks. Active learning methods allow reducing the number of necessary training samples by iteratively selecting them where the confidence of the classifier is low in the previous iterations. The ALCD method was designed to minimize human operator time thanks to a manually-supervised active learning method. The trained classifier uses a combination of spectral and multi-temporal information as input features and produces fully-classified images. The ALCD method was validated using visual criteria, consistency checks, and comparison to other manually generated cloud masks, with an overall accuracy above 98%. ALCD was used to create 32 reference cloud masks, on 10 different sites, with different seasons and cloud cover types. These masks were used to validate the cloud and shadow masks produced by three Sentinel-2 Level 2A processors: MAJA, used by the French Space Agency (CNES) to deliver Level 2A products; Sen2Cor, used by the European Space Agency (ESA); and FMask, used by the United States Geological Survey (USGS). The results show that MAJA and FMask perform similarly, with an overall accuracy around 90% (91% for MAJA, 90% for FMask), while Sen2Cor's overall accuracy is 84%. The reference cloud masks, as well as the ALCD software used to generate them, are made available to the Sentinel-2 user community.

Introduction

Thanks to their open access policy, their systematic and frequent revisit, and their data quality, the Landsat [1] and Copernicus Sentinel-2 [2] missions have revolutionized optical Earth observation at a high resolution. Before this open access era, most users had access to only a very limited number of images per year on their sites and used to process the data manually, or at least in a very supervised manner. The amount of data provided by these missions pushes users to automate their processing; reciprocally, a manual approach would prevent an efficient use of the data provided by Sentinel-2. To allow a robust and automatic exploitation of Sentinel-2 data, "Analysis Ready Data" (ARD) [3] products are therefore requested by most users. ARD products take care of the common burdens necessary for most applications, which include the cloud detection and atmospheric correction steps.
The detection of clouds and cloud shadows is one of the first issues encountered when processing optical satellite images of land surfaces. The difficulty lies in the large diversity of cloud types and Earth surface landscapes [4]. It is frequent to confuse bright landscapes with clouds, and it is especially difficult to detect semi-transparent clouds, for which the observed reflectances contain a mixture of cloud and land signals. The detection of cloud shadows is also complex, as a similarly low reflectance range can frequently be observed on targets that are not obscured by clouds. This leads to confusion with water pixels, burnt areas, or topographic shadows. In the case of semi-transparent clouds, shadow detection is even more challenging [5].

Until 2008, when Landsat data started to become free and easily accessible, images with a decametric resolution were expensive, and as a result, users ordered mostly cloud-free images from the image providers. For a given user, the number of images to process was usually low, and the production of a manual cloud mask was possible. At that time, cloud classification methods existed, but they were mainly dedicated to providing a cloud percentage per image in the catalog [6].

The availability of operational imaging satellites that image all lands frequently, such as Landsat 8 [7] and Sentinel-2 [2], and moreover the free and open access to these data, have prompted new applications based on time series of images covering large territories. For instance, Inglada et al. [8] used all data acquired by Sentinel-2 or Landsat-8 during one year over France to produce a land cover map of France, applying a supervised classification method. Even if a supervised classification method can cope with the presence of a few outliers in a time series, reliable cloud and shadow masks are important and can only be obtained automatically.

The reliability of the cloud mask is also a key element that determines the noise present in reflectance time series. Figure 1 shows a time series of top-of-atmosphere reflectances obtained with Sentinel-2 Level 1C data over a mid-altitude meadow in the center of France, for the blue, green, red, and near-infra-red spectral bands. The same plot after a good cloud screening (Figure 2, top plot) is much smoother, although several dates were screened out because of the cloud cover. It is therefore much easier to process automatically. It may also be noted (Figure 2, bottom plot) that on this site, which usually has a low aerosol content, the effect of atmospheric correction on the smoothness improvement is much less important than the effect of cloud screening.

The Sentinel-2 mission consists of two twin satellites, Sentinel-2A and Sentinel-2B, each one equipped with an optical Multi-Spectral Instrument (MSI). For the next 10 years at least, the Sentinel-2 mission will provide time series of images that combine the following features: thirteen spectral bands from 0.44-2.2 µm, high resolution images (10 m-60 m according to the spectral band), and steady and frequent observations. Since the system became fully operational in October 2017, Sentinel-2 has performed acquisitions above each land pixel at least every fifth day.
Several providers have developed so-called Level 2A (L2A) processors, which provide surface reflectances after atmospheric correction and a mask that flags clouds and cloud shadows. At least three organizations are distributing L2A products for Sentinel-2: ESA is distributing L2A data generated with the Sen2Cor processor [9]; the United States Geological Survey (USGS) distributes L2A products whose cloud masks come from the FMask algorithm [10]; and the French land data center, named Theia, distributes L2A products generated with the MAJA processor [4]. These methods are explained in the next section. The Centre National d'Etudes Spatiales (CNES) and the Centre d'Etudes Spatiales de la Biosphère (CESBIO), on the one hand, and the German Aerospace Centre (DLR), on the other hand, separately developed Level 2A processors containing cloud and shadow detection methods applicable to Sentinel-2 time series of images. CNES developed the Multi-sensor Atmospheric Correction and Cloud Screening software (MACCS) [4,11], while DLR developed the Atmospheric Correction software (ATCOR) [12]. In 2015, the institutes decided to merge their efforts to develop the MACCS-ATCOR Joint Algorithm (MAJA). MAJA is built on the structure of MACCS, whose methods are extended with a few complementary elements from ATCOR.

Sen2Cor and FMask are mono-temporal methods that use only the image being processed to determine the cloud cover, while MAJA's methods exploit multi-temporal information. Although some cloud detection methods based on machine learning algorithms are starting to appear ([13,14]), the three methods we selected are rule-based methods that apply thresholds to selected features.

In order to compare the results provided by the three processors quantitatively, reference cloud masks are necessary. To our knowledge, there is no in situ reference database that provides the cloud cover on a regular basis with good coverage at a decametric resolution. The validation of cloud masks classically relies on manually classified images ([13,15]) or polygons selected in a large number of images ([16,17]). Such a validation dataset has already been generated for Sentinel-2 ([16]): a human operator selected and labeled polygons within a set of images. However, looking at the selected polygons, we found that the authors had avoided selecting pixels near cloud limits, where part of the difficulty and subjectivity of identification lies. For a more complete validation, we needed reference cloud masks in which all the pixels are classified. To do that with a limited amount of manual work, we developed a new method based on machine learning.

As for most existing methods, the generation of our reference masks relies on the ability of well-trained human operators to recognize a cloud, a shadow, or a cloud-free pixel. However, since it would take too long for an operator to manually classify all the pixels of an image, we decided to use an active machine learning algorithm. Our method, named Active Learning Cloud Detection (ALCD), is iterative. The operator labels a small number of pixels, which are used to train a machine learning algorithm, which in turn produces a classification. After this step, the operator visually determines the possible imperfections of the classification and labels new pixels where the classification is wrong or uncertain. This procedure is iterated several times to get a satisfying reference cloud mask.
The article is organized as follows: Section 2 describes the dataset used in the study, Section 3 recalls the detection methods used by Sen2Cor, FMask, and MAJA, Section 4 describes the ALCD method and its validation, and Section 5 describes and discusses the comparison of the validation results of the three selected operational processors.

Dataset

We selected 10 sites and 32 Sentinel-2 L1C scenes, described in Table 1, which were used to validate the active learning method and then to evaluate the performances of the three selected operational processors. The 10 sites were chosen to ensure a large diversity of scenes. Several dates at different seasons were selected to obtain various atmospheric conditions, cloud types, and land covers.

Five out of the ten sites are mostly covered by vegetation. We chose an equatorial forest site, Alta Floresta in Brazil; two very diverse sites, Arles (France) and Ispra (Italy), ranging from mountains to agricultural plains; and two flatter agricultural sites in Munich (Germany) and Orleans (France), which also include large patches of forest. We also added five arid sites, four in Africa and one in North America, with various degrees of aridity: Gobabeb, Namibia, is a desert site; Marrakech, Morocco, is mainly a high-elevation semi-desert, which includes the highest peaks of the Atlas mountains; Railroad Valley (USA) includes very bright patches of sand and some mountains. The two last sites, Pretoria, South Africa, and Mongu, Zambia, contain grassland, savannah, and dry woodland. The sites are also very diverse in terms of elevation, with flat low-altitude sites in Orleans, Gobabeb, and Alta Floresta; contrasted sites that go from mountains to sea level (Arles, Ispra) or from 450 m above sea level (a.s.l.) to above 4000 m (Marrakech); and flat, more elevated sites in Munich (600 m a.s.l.), Mongu (800 m a.s.l.), Railroad Valley (1200 m a.s.l.), and Pretoria (1500 m a.s.l.).

Cloud and Cloud Shadow Detection Methods

The free availability of Landsat time series, which started in 2008 [18], pushed several teams to develop reliable methods to generate cloud masks. The cloud detection methods applied to Landsat rely greatly on the Thermal InfraRed (TIR) bands, using the fact that the cloud top temperature is often much lower than the temperature of cloud-free surfaces ([19]). For satellites that lack TIR bands, the classical cloud detection methods consist of a series of rules using thresholds on reflectances or reflectance ratios ([4,10,16,19,20]). They usually combine a set of criteria such as the following (a toy sketch of these rules is given after the list):

1. a threshold in the visible range, preferably after a basic atmospheric correction, as surface reflectance is low while cloud reflectance is higher;
2. spectral tests to check that the cloud is white in the visible and near-infra-red range;
3. a threshold on the reflectance in the 1.38-µm band, when it is available (for instance on Landsat-8 and Sentinel-2). This spectral band is centered on a deep water vapor absorption band that absorbs all light in that wavelength passing through the lower layers of the atmosphere ([21]). As a result, only objects in the upper layers can be observed. These objects are usually clouds, but some mountains in a dry atmosphere can also be observed [22];
4. thresholds on the Normalized Difference Snow Index (NDSI) to tell snow from clouds, because snow has a much lower reflectance in the short wave infra-red ([23]).
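As an illustration only, the four criteria above can be sketched as array tests. The threshold values below are placeholders, not the tuned values of any operational processor.

```python
import numpy as np

def classical_cloud_tests(blue, green, red, nir, swir, cirrus,
                          t_blue=0.2, t_white=0.15, t_cirrus=0.01, t_ndsi=0.4):
    """Toy version of the four rule-based criteria; inputs are
    TOA-reflectance arrays of identical shape."""
    bright = blue > t_blue                                       # criterion 1
    visnir = np.stack([blue, green, red, nir])
    white = (visnir.max(axis=0) - visnir.min(axis=0)) < t_white  # criterion 2
    high_cloud = cirrus > t_cirrus                               # criterion 3
    ndsi = (green - swir) / (green + swir + 1e-6)
    snow = (ndsi > t_ndsi) & (swir < 0.12)                       # criterion 4
    return ((bright & white) | high_cloud) & ~snow
```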
If these criteria are efficient at detecting thick clouds and high clouds (when the sensor possesses the 1.38 µm band), they usually tend to confuse cloud-free pixels that have a high reflectance in the blue, such as bright deserts, with semi-transparent low clouds. Depending on the threshold value, either cloud-free high-reflectance pixels will be classified as clouds, or thin and low clouds will be classified as cloud-free.

Several arguments also push cloud mask developers to dilate the cloud masks they obtain:

• Cloud edges are usually fuzzy, and some parts can remain undetected.
• Clouds also scatter light to their neighborhood, and this adjacency effect is very complicated to correct, as it strongly depends on the cloud altitude and thickness, which are not well known.
• Sentinel-2 spectral bands observe the Earth with viewing angles that can differ by about one degree. A parallax of 14 km is observed on the ground, which is corrected by the geometric processing of the L1C product. However, this processing takes only the terrain altitude into account and not the cloud altitude, resulting in uncorrected parallax errors, which can reach 200 m for the bands that have the largest separation (B2 and B8a). Moreover, the acquisitions of these two bands are also separated by two seconds, and wind speeds of 10-20 m/s are not uncommon in the atmosphere, adding a few tens of meters to the possible displacement. The parallax effect occurs mostly along the direction of the satellite motion, and slightly in the perpendicular direction because of the time difference between acquisitions.

Because of these three items, it is therefore necessary to dilate cloud masks by 200-500 m. Throughout this section, we denote by ρ_band the Top Of Atmosphere (TOA) reflectance for a given spectral band, and by ρ*_band the TOA reflectance corrected for gaseous absorption and Rayleigh scattering. The interest of performing a basic atmospheric correction before cloud detection is that it allows using the same threshold value whatever the viewing and Sun zenith angles are. Regarding band names, we use the denominations provided in Table 2.

MAJA Cloud and Cloud Shadow Detection

For the sake of conciseness, all the details of the method and the threshold values are not provided here; they have been described in depth in [4]. However, a few changes have recently been made to the method, which are highlighted here.

Sentinel-2's cirrus band centered at 1.38 µm was designed to detect high clouds, as the water vapor in this band absorbs all the light that would otherwise reach the Earth's surface and travel back to the satellite [21]. Except in very dry atmospheric conditions, the only light reflected to the satellite in this band comes from altitudes above 1000-2000 m. As a result, only clouds or high mountains can be observed in this band. In theory, the threshold to detect clouds with this band should evolve as an exponential function and should depend on the atmospheric water vapor content. In practice, very thin clouds are very frequent, but a large part of them is thin enough to bring a very limited disturbance to the surface reflectance in the bands outside the cirrus band. We used for MAJA a quadratic law of variation of the cirrus band threshold with altitude. A high cloud is detected if Equation (1) is verified, where h is the pixel altitude in km above sea level.
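As a side note on the dilation step discussed above, a minimal sketch (not MAJA's implementation) using scipy is shown below; at the 240-m working resolution mentioned later, a two-pixel dilation corresponds to the 480 m applied by MAJA.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_mask(cloud_mask: np.ndarray, pixels: int = 2) -> np.ndarray:
    """Dilate a binary cloud mask by `pixels` in every direction.
    At 240 m resolution, pixels=2 corresponds to a 480 m dilation."""
    structure = np.ones((2 * pixels + 1, 2 * pixels + 1), dtype=bool)
    return binary_dilation(cloud_mask, structure=structure)
```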
To the classical multi-spectral threshold methods described above, MAJA adds several multi-temporal criteria, because surface reflectances usually tend to change slowly with time. To use multi-temporal criteria, MAJA relies on a reference composite image that contains the most recent cloud-free observation of each pixel. With each new image, MAJA updates the composite with the newly available cloud-free pixels. Due to that, MAJA has to process the data for a given location in chronological order.

The multi-temporal method detects the pixels for which a sharp increase of reflectance in the blue is observed. However, ground surface reflectance can also increase with time, especially if the pixels in the composite were acquired a long time ago due to a persistent cloud cover. Because of that, the threshold on the blue reflectance increase becomes higher when the time difference between the acquisition and the reference increases. If a pixel is declared cloudy by that criterion, it has to be whiter than in the reference composite image to be accepted as a cloud.

As MAJA is a recurrent method, it needs to be initialized. For that, we have also implemented a mono-temporal cloud mask, which tends to be less selective than the multi-temporal cloud mask. As a result, it is only used when the first image of a time series is processed or when, because of a persistent cloud cover, the most recent cloud-free pixel in the composite is too old to be used to detect clouds (more than 90 days). In [4], a very simple threshold on the blue reflectance was used, but starting from MAJA Version 2.0, we replaced it by the ATCOR mono-temporal cloud mask, defined as:

A final control checks the correlation of each cloudy pixel's neighborhood with previous images, as already done in [24]. We used a neighborhood of 7 × 7 pixels at 240-m resolution. Given that it is very unlikely to observe a cloud at exactly the same place with the same shape in successive images, we discarded all cloud pixels for which a correlation coefficient greater than 0.9 was found. A sketch of this control is given below.

MAJA tests are not performed at full resolution, but currently at 240 m, in order to (i) spare computation time and (ii) avoid false cloud or shadow detections that could occur at full resolution if spectral bands are not perfectly registered. Moreover, man-made structures, such as buildings, greenhouses, roads, and parking lots, can have very diverse spectra and generally contain bright or dark objects that could be classified as clouds or shadows. Of course, due to the processing at a lower resolution, thin clouds with a size of about 100 m can be omitted from the MAJA cloud mask.

Once clouds are detected, it is possible to detect their shadows. The MAJA shadow detection method has been considerably updated since [4] was written, so we describe it here in more detail. MAJA also uses a multi-temporal method to detect the darkening of pixels due to cloud shadows. Cloud shadows are usually more noticeable in the infra-red wavelengths, but when the vegetation cover changes, the surface reflectances in the NIR and SWIR bands also exhibit more variation with time than at the shorter wavelengths. As a result, the best multi-temporal detection results for cloud shadows were obtained using the red band.

Two cloud shadow masks were generated: a so-called "geometric" one, which only searches for shadows where a detected cloud can cast a shadow, and a "radiometric" one for the shadows that may have been generated by clouds lying outside the image.
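Before detailing the two shadow masks, here is a hedged sketch of the neighborhood-correlation control described above, written as a naive per-pixel loop over co-registered 240-m images (MAJA's actual implementation is certainly more optimized):

```python
import numpy as np

def correlation_control(current, previous, cloud_mask, half=3, t_corr=0.9):
    """Discard cloud pixels whose 7x7 neighborhood (half=3) correlates
    strongly with the previous image: a cloud is unlikely to reappear
    with the same shape at the same place."""
    out = cloud_mask.copy()
    for i, j in zip(*np.nonzero(cloud_mask)):
        a = current[max(i - half, 0):i + half + 1, max(j - half, 0):j + half + 1]
        b = previous[max(i - half, 0):i + half + 1, max(j - half, 0):j + half + 1]
        r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
        if not np.isnan(r) and r > t_corr:
            out[i, j] = False       # too similar to the past image: not a cloud
    return out
```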
The "geometric" shadow algorithm computes the intersection between the zones where there could be a cloud shadow (because of the presence of a cloud in the neighborhood) and the zones in which the darkening in the red band seems high enough. For that, the MAJA cloud mask was used to compute the zone where a cloud shadow could exist, considering cloud altitudes from the ground up to ten kilometers. For each altitude, the cloud shadow zone was computed using a geometric projection, accounting for the solar and viewing directions, as in [5].

We then determined a threshold on the relative variation of the reflectance in the red band to detect the real cloud shadows within the possible shadow zone. To do that, we computed a red reflectance ratio:

r(D) = ρ_red(D) / ρ*_red(D_ref)

where ρ_red(D) is the reflectance of the pixel at date D, and ρ*_red(D_ref) is the reflectance value in the cloud- and shadow-free composite reference image (after a basic atmospheric correction, as explained above). A pixel may be a shadow if it belongs to the possible shadow region and if:

r(D) < T_shadow

with T_shadow a threshold value, which depends on the image content. To compute T_shadow, a histogram of the red band reflectance ratio was computed over the cloud-free pixels. The threshold value was set to a certain percentage of the cumulative histogram, the actual value depending on the cloud cover proportion (the higher the cloud cover, the higher the percentage). To avoid local over-detections, the area of shadows was tested cloud by cloud to ensure that the cloud shadow area was not greater than the cloud area, with a 20% margin. If it was greater, a lower threshold was used to reduce its size.

For the "radiometric" mask, we used a threshold depending on the actual darkening of the shadows detected in the rest of the image by the previous ("geometric") algorithm, or a default value if no shadow had been detected. As for the cloud detection, a final correlation test was used to eliminate some remaining over-detections.

Finally, as explained in the Introduction, all the cloud and shadow masks were dilated by two coarse pixels, i.e., 480 m.

The MAJA cloud mask is provided as a set of binary bits, which contain the results of each of the cloud detection tests, and two bits used as a summary of the detection:

• cloud or shadow detected by any of the above tests,
• cloud detected by any of the above cloud detection tests.

Sen2Cor Cloud Detection Method

The Sen2Cor cloud detection method is described in depth in [9], although this description applies to an older version of Sen2Cor and has not been updated since then. It mainly uses the four classical thresholds defined at the beginning of this section, but in a slightly different manner and with some complementary thresholds.

The specificity of the Sen2Cor method is that each test provides a probability map of cloud presence (between 0 and 1). All these individual probability masks are then multiplied to obtain a global probability mask. Various thresholds are applied to the global probability mask to get three masks: a low-probability cloud mask, recently renamed "unclassified", a medium-probability cloud mask, and a high-probability cloud mask.

The global threshold (1) is applied to the red band instead of the blue band. As this test tends to detect too many clouds, several tests on band ratios are applied to avoid detecting clouds on:

• senescent vegetation (using the near infra-red/green ratio),
• soils (using the short wave infra-red/blue ratio),
• bright rocks or sands (using a near-infra-red/short wave infra-red ratio).
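In the spirit of Sen2Cor's probability combination described above, with placeholder thresholds (Sen2Cor's actual values are not reproduced here), a toy sketch is:

```python
import numpy as np

def combine_probabilities(prob_maps, t_low=0.35, t_med=0.65, t_high=0.9):
    """Multiply per-test cloud-probability maps (values in [0, 1]) into a
    global map, then threshold it into three nested masks."""
    global_prob = np.ones_like(prob_maps[0])
    for p in prob_maps:
        global_prob *= p
    low = global_prob > t_low      # low probability ("unclassified")
    medium = global_prob > t_med   # medium-probability cloud
    high = global_prob > t_high    # high-probability cloud
    return low, medium, high
```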
Another test is used to tell snow from clouds, as described in the introduction of this section.

Sen2Cor also detects cloud shadows, using the fact that cloud shadows are dark, and only keeps the shadows that can be traced back to a cloud. In this study, we had access to Sen2Cor Version 2.5.5.

FMask Cloud Detection Method

The FMask (Function of mask) method [25] was initially developed for Landsat 5 and 7, and later extended to Landsat-8 [10]. It involves the surface reflectances and the brightness temperatures of the Thermal Infra-Red (TIR) channels. In [10], a variant of the method was also developed for Sentinel-2, without TIR bands.

The FMask method first computes potential cloud and cloud shadow layers based on single-date thresholds, including the ones described in the general methods above and, for Landsat only, on the thermal infra-red bands. After this first step, a second pass computes the cloud probability based on statistics computed on the pixels that are not in the potential cloud layer. Pixels with the highest probability are also included in the potential cloud layer. An object-based method then segments these layers and tries to match the clouds with their shadows by iterating on the cloud altitude. If a good match is found, potential shadow objects are confirmed.

In this paper, we had access to FMask 4.0, which is the most recent version of FMask. The improvements brought to the FMask methods have not been published so far, but they at least include the detection of clouds based on Sentinel-2 observation parallax, from [26].

How to Recognize a Cloud

In Sentinel-2 images, clouds are white and usually have a reflectance higher than that of the underlying surface, especially in the blue band [20]. They also have a greater SWIR reflectance than snow [23], and if they are high enough in the atmosphere, they have a non-null reflectance value in Sentinel-2 Channel 10, centered on the band at 1.38 µm [21]. They usually also look less sharp than the surface, and their shape changes from date to date at the same location. All these criteria can be used by a human operator to decide which pixels are clouds.

However, the first issue encountered in trying to build a reference cloud/shadow mask is the lack of an accurate definition of what should be considered a cloud. As said in [15], "note that a quantitative threshold does not exist to distinguish thin clouds and transparent features such as haze and aerosols, making thin cloud identification inherently subjective to any analyst". As it is possible, to some extent, to correct reflectances for the effects of aerosols, it is important not to confuse them with clouds. Moreover, even if it is easy to see a thin high cloud in the 1.38-µm band, this cloud can still be too thin to have an effect on the surface reflectances of the other channels. As a result, flagging as invalid all the pixels for which the surface reflectance at 1.38 µm is not null would result in discarding too much useful information for users.

Finally, the pixels we classified as clouds were either (i) opaque clouds that were easily identifiable, (ii) semi-transparent clouds that were identifiable in the band at 1.38 µm and could also be discerned in the other bands, or (iii) semi-transparent clouds that were identifiable in the visible bands, even if not visible in the 1.38-µm band. The clouds that were visible in the band at 1.38 µm were classified as high clouds, and the others as clouds.
Snow was distinguished from clouds by viewing the SWIR band at 2.2 µm (Band 12) and the digital elevation model. Similarly, shadows are quite easy to distinguish from water thanks to their shapes and textures; in case of doubt, we distinguished cloud shadows from other dark features by checking that there were clouds around that could cast that shadow. It was also possible to tell cloud shadow from topographic shadow using the DEM; in the case of a topographic shadow, the shadow is still visible in another cloud-free image acquired a few days apart. In many cases, thin clouds were observed above cloud shadows, and we classified those pixels as clouds. However, the distinction is not too important, as the aim of the cloud shadow detection method is to tell the pixels that are valid for land surface analysis from the invalid ones.

Finally, water is usually easy to tell from land visually, using the image context, and in the case of mixed pixels, an accurate distinction is not essential, as we considered both kinds of pixels as valid for monitoring surface reflectances.

Using these rules, we labeled samples with six classes: land, water, snow, high cloud, low cloud, and cloud shadow.

Active Learning Cloud Detection Method

In order to classify whole images according to the six classes defined above, we set up an Active Learning Cloud Detection (ALCD) method. Active learning was introduced in 1994 [27] in the context of text recognition, and was later introduced in remote sensing [28]. Active learning's main idea is that the training of a model based on a small ensemble of well-chosen samples can perform as well as a model trained on a large ensemble of randomly chosen samples. It therefore provides a strategy to reduce the amount of time devoted to the provision of training samples. In classical active learning methods, samples that are automatically selected are proposed to be labeled by the user. We did not fully implement such an active learning method; instead, after the first iteration, the user selects samples where the classifier had a low confidence or where it was obviously wrong. We provide here a general description of our methodology, then detail each step more accurately.

• Compute the image features to provide to the classifier.
• While the classification is not satisfactory:
  1. Select a set of samples for the six classes. After the first iteration, select these samples where the image classification is wrong or where the classification confidence is low.
  2. Train a random forest [29] model with the samples.
  3. Perform the classification with the model obtained at Step 2.

We chose to generate reference cloud masks at 60-m resolution, which corresponds to the lowest resolution of the Sentinel-2 bands.

Feature Selection

In the jargon of machine learning, the features are the information provided to the machine learning method. Regarding image classification, the features are information provided per pixel. In the ALCD method, for each scene, 26 features were computed (a sketch of the feature stack is given below):

• Twelve bands from the image to classify. Among the 13 available bands, the B8A band was discarded, as its information is redundant with that of B8.
• The Normalized Difference Vegetation Index (NDVI) [30] and the Normalized Difference Water Index (NDWI) [31] of the image to classify.
• The Digital Elevation Model (DEM). Its purpose is two-fold: first, it aims at improving the distinction between snow and clouds in high-altitude areas. Snow is generally present above a given altitude, and the threshold altitude is often more or less uniform over the scene. The combination of information coming from the reflectances and the DEM can help the classifier distinguish between those two classes. It can also partially avoid the false-positive detection of cirrus with Band 10 (1.38 µm) over mountains.
• The multi-temporal difference between bands of the image to classify and a clear image. For this, we used a cloud-free date (referred to as the "clear date") acquired less than a month apart from the date we wanted to classify. We provided the difference of these images to the classifier for all bands except Band 10, as it does not allow observing the surface except at high altitude. Band 8A was also discarded, for the reason given above. Thus, 11 multi-temporal features were computed from all the other bands. A sketch of these feature computations is given below.

MAJA also uses a multi-temporal method. In MAJA, the scenes are processed in chronological order using cloud-free pixels acquired before the date to classify. To obtain results that are independent of those of MAJA, the cloud-free reference image used for ALCD was selected after the date to classify. Moreover, the clear date has to be close to the one to be classified, to allow considering that the landscape did not change. Therefore, we constrained the ALCD reference date to be less than 30 days after the date to classify.

A side effect of requiring a cloud-free image within a month, but posterior to the date to classify, is that our method cannot produce a reference cloud mask for any Sentinel-2 image. In any case, there is no need to be exhaustive in the generation of cloud mask validation images. However, the need to find an almost cloud-free image as a reference for the ALCD method prevents us from working on sites that are almost always cloudy, such as regions of Congo or French Guyana for instance. As the multi-temporal features of MAJA would not be very efficient for such regions, this can introduce a small bias in favor of MAJA in our analysis. Still, we included three cloudy sites in our analysis (Alta Floresta, Orleans, Munich) to try to minimize this bias. Finally, the necessity to find a cloud-free reference image has an advantage: it is an objective criterion for the selection of the images to use as reference cloud masks, which avoids a subjective bias in the selection of the images to be processed.

Sample Selection

Along with the feature creation, ALCD generates an empty vector data file per class. The user is then invited to populate each file using a GIS software program (we used QGIS [32]). The user can either select the pixels on a "true color" red-green-blue color composite image made with Channels 4-3-2 of Sentinel-2 or on a display of Band 10 to see the potential cirrus clouds. After the first iteration, the user can also view the classification generated at the previous iteration, as well as a confidence map described in the next paragraphs. This information is used to select and label samples where the classification looks wrong or where the confidence is low.
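To make the per-pixel feature set described above concrete, here is a minimal Python sketch of its assembly with NumPy. The function `alcd_features` is a hypothetical helper, and resampling of all inputs to the common 60-m grid is assumed to have been done beforehand.

```python
import numpy as np

def alcd_features(bands, clear_bands, dem):
    """Assemble the 26 per-pixel features described above.

    bands / clear_bands: dicts mapping the 12 retained Sentinel-2 band
    names ('B01'...'B12', B8A excluded) to 2-D reflectance arrays, for
    the date to classify and for the cloud-free "clear date".
    dem: 2-D elevation array on the same grid.
    """
    eps = 1e-6  # avoid division by zero in the normalized indices
    ndvi = (bands["B08"] - bands["B04"]) / (bands["B08"] + bands["B04"] + eps)
    ndwi = (bands["B03"] - bands["B08"]) / (bands["B03"] + bands["B08"] + eps)
    single_date = [bands[b] for b in sorted(bands)]              # 12 features
    multi_temporal = [bands[b] - clear_bands[b]
                      for b in sorted(bands) if b != "B10"]      # 11 features
    # 12 bands + NDVI + NDWI + 11 differences + DEM = 26 features in total
    return np.stack(single_date + [ndvi, ndwi] + multi_temporal + [dem], axis=-1)
```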
Before producing a large quantity of cloud masks, we studied whether it was more efficient to label points or polygons. By efficient, we mean the possibility of obtaining a reliable classification quickly. Polygons have been used by [16] and other authors [33]. The main advantage of a polygon is that it allows selecting a large number of pixels. Therefore, for a similar number of clicks, the number of training samples will be greater with polygons than with points. However, it is time-consuming to draw polygons while making sure that they do not include any other class, and it is highly likely that the pixels belonging to a given polygon carry similar information that does not improve the description of the class.

On the contrary, points can be placed quickly and precisely, even close to the border of clouds or on small areas. Finally, the diversity of pixels is far greater in the case of points than polygons. We thus decided to use points, after several tests on real cases.

The reference samples were selected according to the principles exposed in Section 4.1. We also paid attention to selecting a number of samples proportional to the area represented by each class, with some increase for classes covering a small area, such as shadow or water. On the 32 scenes we classified, a mean of 120 samples was selected at the first iteration, and about 300 points on average overall. To increase the information provided to the classifier, a data augmentation was done, which consisted of adding pixels in the 3 by 3 pixel neighborhood of the 60-m pixels.

Machine Learning

For the machine learning part, we used the Orfeo Tool Box (OTB) [34], which is an up-to-date satellite image processing library distributed as open source software. After several tests, we found that the Random Forest (RF) classifier [29] provided the best results with a shorter training time, as [35] assessed.

Classification and Confidence Evaluation

At each iteration, after having trained a model, the classifier produces a classification map and a confidence map. The random forest classification method produces a large number of binary decision trees, which are used as a committee of classifiers. A vote is then performed, and the class that receives a majority of votes is chosen for each pixel. OTB also returns a confidence map, which is, for the random forest classifier, the proportion of votes for the majority class [34]. For each pixel, a confidence score between 0 and 1 is therefore given.

The confidence map can be used to help the operator select the next samples to classify. A map of low-confidence regions is provided to the operator by applying a median filter to the confidence map to reduce its complexity. A layer style with discrete colors is provided to the users, to improve the readability of the map in QGIS.

With those two maps, the user can choose where it is interesting to add new labeled samples. He/she usually has to iterate a couple of times before reaching a satisfying result. When the user is satisfied with the maps, he/she can stop the iterative process and decide to use the last classification as the final one.
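The train/classify/confidence step can be sketched as follows. The paper uses the OTB random forest, so the scikit-learn classifier below is only a stand-in with the same committee-vote logic; `predict_proba(...).max(...)` approximates the proportion-of-votes confidence that OTB returns.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_scene(features, X_train, y_train):
    """Train an RF on labeled samples and classify a whole scene.

    features: (rows, cols, n_features) array, e.g. from alcd_features above;
    X_train, y_train: labeled sample vectors and their class ids.
    """
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    flat = features.reshape(-1, features.shape[-1])
    labels = clf.predict(flat).reshape(features.shape[:2])
    # Confidence map: fraction of trees backing the majority class,
    # mirroring the confidence returned by OTB for random forests.
    confidence = clf.predict_proba(flat).max(axis=1).reshape(features.shape[:2])
    return labels, confidence
```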
Visual Evaluation

The simplest option is to check visually whether the masks are consistent with the corresponding image. This is done by overlaying the contours of the classes on the true color image and comparing the classification map with the original image. An example is available in Figure 3. We checked all our masks this way, and only stopped the iterative process after they were satisfactory. Miniatures of the other scenes are provided in Figure 4 and in the Supplementary Material.

Cross-Validation

A classical way to validate the performances of a classification is to use a part of the available samples for training and another part for validation. However, as the number of samples we had was quite low, using only half of them for the training would probably reduce the performance.

To avoid that, we used, for each scene, a 10-fold cross-validation [36], for which the available samples were randomly split into 10 parts. Ten validation experiments were made, each time using a different part for validation and the 9 other parts to train the classifier. This enabled us to get an estimate of the quality of the classification and, by analyzing the dispersion of results over the 10 validation experiments, to check the stability of the result.

As our end goal was to provide accurate binary validity masks, we gathered the low clouds, high clouds, and cloud shadows into the invalid super-class, whereas the land, water, and snow classes were gathered into the valid super-class. Thus, a sample whose real class was low cloud, but which was classified as high cloud, was considered correct.

The results were therefore binary, and it was possible to compute a number of statistics:

• TP, the number of True Positive pixels, for which both the classified pixel and the reference sample were invalid,
• TN, the number of True Negative pixels, for which both the classified pixel and the reference sample were valid,
• FN, the number of False Negative pixels, for which the classified pixel was valid and the reference sample was invalid,
• FP, the number of False Positive pixels, for which the classified pixel was invalid and the reference sample was valid.

From these quantities, we can compute the overall accuracy as (TP + TN)/(TP + TN + FP + FN); it is the quantity we tried to maximize.

We can also compute the recall (or sensitivity, or producer's accuracy) as TP/(TP + FN) and the precision (or user's accuracy) as TP/(TP + FP). The recall is the proportion of true positives that have been detected. A recall of 100% means all the true positive pixels (clouds or shadows) have been detected, while a precision of 100% means that no valid pixel was classified as invalid. The F1-score is defined as the harmonic mean of recall and precision, which can also be written as 2TP/(2TP + FP + FN).

The resulting metrics for all the 32 scenes are plotted in Figure 5. The global mean accuracy was 98.9%, the mean F1-score 98.5%, the precision 99.1%, and the recall 97.8%. All these figures indicate a good overall quality of the classifications. The maximum standard deviation of the overall accuracy was 4%. One should note that, after the first iteration, the active learning procedure led us to label samples where the classification was not easy. Consequently, the accuracy and F1-score of the 10-fold cross-validation were computed mainly on difficult cases, which probably provides an underestimate of the classification's real performance.
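The four statistics can be computed directly from boolean valid/invalid masks, as in this sketch (a direct transcription of the formulas above):

```python
import numpy as np

def binary_scores(pred_invalid, ref_invalid):
    """Compute the statistics defined above from boolean masks where
    True means 'invalid' (low cloud, high cloud or cloud shadow)."""
    tp = np.sum(pred_invalid & ref_invalid)
    tn = np.sum(~pred_invalid & ~ref_invalid)
    fp = np.sum(pred_invalid & ~ref_invalid)
    fn = np.sum(~pred_invalid & ref_invalid)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)         # producer's accuracy
    precision = tp / (tp + fp)      # user's accuracy
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, recall, precision, f1
```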
Comparison with an Existing Dataset

Hollstein et al. created a publicly-available database [16]. It consists of manually-classified Sentinel-2A images with polygons. Their classes were the same as the ALCD ones: cloud, cirrus, snow/ice, shadow, water, and clear sky. A direct comparison is therefore possible. As only one of the images they classified was part of our 32 scenes, we decided to classify 6 additional scenes with ALCD that are part of their dataset. We selected images for which a clear image was available within a month of the desired date. This dataset was not used as the reference for the validation of operational cloud masks in Section 5, because a large part of the images were acquired very early in Sentinel-2's life, when the acquisitions of Sentinel-2 were not steady yet and some auxiliary files were sometimes incorrect.

The comparison of masks showed a very good agreement, but we noticed on one image that the "shadow" class from the Hollstein dataset also included terrain shadows. On another image, a few polygons with the "cirrus" class were not thick enough to be classified as clouds according to our criteria defined above. Therefore, for those two images, along with the unchanged original reference cloud masks from Hollstein et al., a corrected version of the dataset was produced, to validate the ALCD output more accurately. The results for the correctly-classified pixels are given in Table 3. It has to be noted that the cloud masks provided by Hollstein et al. do not contain the edges of the clouds, where the distinction between valid and invalid pixels is difficult and somewhat subjective. This contributed to the very good agreement between both datasets, as some of the most difficult pixels to classify were not included in the analysis. Still, based on this comparison, we concluded that ALCD can produce satisfactory classifications, consistent with other teams' work.

We used three methods to validate the reference cloud masks provided by the ALCD method. The first one used visual photo-interpretation to check that the obtained results corresponded to our definition of a cloud and a cloud shadow. The second, based on a 10-fold cross-validation, checked that the methodology was able to correctly classify whole images with the information provided by a limited number of manually-labeled samples. The third, which compared ALCD cloud masks with other cloud masks from Hollstein et al. [16], showed that our definition of clouds was consistent by 98.3% with that of other experts.

Validation Results for Operational Processors and Discussion

As mentioned above, in this study, we compared the performances of three operational cloud mask processors (MAJA Version 3.2, Sen2Cor Version 2.5.5, and FMask Version 4.0) over the 32 scenes described in Section 2.
For each processor, we selected the most recent version we had access to, even if this version was not yet used in the operational ground segments. This choice was made to obtain a consistent level of progress among these codes, and because we can expect that the ground segments will soon be updated to use these newer versions. As a result, the performances obtained were probably better than those obtained in the official products delivered before our work. Each of the three processors uses a particular set of classes for its mask output. There is, for instance, no cirrus class in FMask, and no distinction between a medium-probability cloud and a high-probability cloud in MAJA and FMask, which is present in Sen2Cor. In order to compare the results fairly, the multi-class classifications were transformed into binary classifications: each pixel was thus classified as valid or invalid, which is moreover what most users need to know: is a pixel valid to monitor surface reflectance or not?

For MAJA, a pixel is valid if its cloud/shadow mask value is zero, and invalid otherwise. The invalid pixels include a buffer of 480 m around the detected clouds or shadows.

For Sen2Cor, FMask, and ALCD, the conversion from the multi-class classification to the valid/invalid classes is given in Tables 4-6. This conversion allows a direct comparison of the outputs of each chain with the ALCD reference. The workflow for the Sen2Cor processor is given in Figure 6 as an example, and the procedure was the same for MAJA and FMask.

An example of the comparison for the three processors is given in Figure 7, and similar figures for all scenes are provided in the Supplementary Material. On this particular scene, Sen2Cor had a great amount of false valid pixels, i.e., it did not detect clouds where it should have, and FMask also had quite a few false valid pixels, but to a lesser extent. For MAJA, the number of false valid pixels was again lower, but it also had some false invalid pixels, indicating that MAJA detected clouds where there were none. This was partly due to the dilation of clouds used by MAJA.

The different approaches to dilation are therefore an issue for a fair comparison. To solve this issue, we present comparison results expressed in two ways:

• comparison of non-dilated cloud masks, which means we had to erode the MAJA cloud mask, as the dilation is built into MAJA;
• comparison of dilated cloud masks, for which we dilated ALCD, FMask, and Sen2Cor using the same kernel as the one used by MAJA; in this comparison, of course, we used the MAJA cloud mask with its built-in dilation.

Comparison of the Results for Non-Dilated Cloud Masks

The results for the 32 scenes are compiled in Table 7. Table 7 and Figure 8 show the comparison of the cloud masks provided by the three processors with the reference cloud masks generated with ALCD. The result for one of the masks, the August scene for the Orleans site, is not included, because the cloud cover percentage after dilation was above the 95% threshold, above which MAJA does not issue the product, to save computing time and space. In the case of MAJA, the output cloud mask was eroded by 240 m to compensate for the dilation that is done within MAJA processing. However, the compensation was not perfect, as a dilation followed by an erosion closes the small gaps in the cloud cover.
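A minimal sketch of the mask dilation used for the second comparison follows, assuming boolean invalid-pixel masks at the 60-m reference resolution, where the 480-m buffer corresponds to a disk of radius 8 pixels. The disk-shaped structuring element is an assumption; the exact kernel used by MAJA may differ.

```python
import numpy as np
from scipy import ndimage

def dilate_mask(invalid, radius_px=8):
    """Dilate a boolean invalid-pixel mask with a disk-shaped kernel.

    radius_px = 8 corresponds to a 480-m buffer at the 60-m working
    resolution of the reference masks.
    """
    yy, xx = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    disk = (xx ** 2 + yy ** 2) <= radius_px ** 2
    return ndimage.binary_dilation(invalid, structure=disk)
```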
The results showed quite good overall accuracies, with average values at 93% for MAJA, 91.5% for FMask, and 90% for Sen2Cor. However, in this comparison, the fact that the MAJA cloud mask was dilated then eroded, while the reference and the masks from the two other processors were not, makes a difference, disfavoring MAJA. Moreover, as explained in Section 3, the cloud masks should be dilated.

Comparison of Results for Dilated Cloud Masks

It is therefore more interesting to analyze the results obtained when the cloud masks from the reference and all the processors are dilated. As stated in Section 3.1, MAJA dilates its cloud and shadow masks by 480 m. The results presented in Table 8 and Figure 9 therefore compare the MAJA output (which is dilated) to dilated cloud masks of ALCD, Sen2Cor, and FMask, using the same dilation kernel as the one used in MAJA.

The four statistics (overall accuracy, F1-score, recall, and precision) are summarized in Figure 9. The mean accuracy for MAJA, FMask, and Sen2Cor was 90.8%, 89.8%, and 84%, respectively. MAJA and FMask have a similar good quality, while Sen2Cor has an overall accuracy lower by 7% compared to MAJA. This means that 16% of pixels were wrong within the dilated Sen2Cor validity mask, while the errors were reduced to 10% for FMask and 9% for MAJA. However, depending on the scenes, there was a large dispersion of results, which probably means that improvements can be expected by merging the good points of each method, for instance by combining in the same method multi-temporal criteria, from MAJA, and detection using the observation parallax, from FMask.

Compared to the performances obtained without dilation, the overall accuracies decreased for the three tested methods. In the case of Sen2Cor, the large decrease is due to the number of buildings and other bright pixels classified as clouds by Sen2Cor. After dilation, the surface of these false clouds increased greatly. For MAJA, the source of the decrease lay in the fact that MAJA can miss small clouds, because the cloud detection is performed at 240-m resolution. When dilated, the size of these clouds in the reference masks increased, and the small clouds were given a greater importance in the statistics.

Regarding thin cirrus clouds, the comparison of the reference masks with the three methods shows (see the Supplementary Material) that, despite our definition of what a cloud is (a cloud visible in the cirrus band is still a valid pixel if it is not noticeable in the other bands), the ALCD cirrus masks tended to include more pixels than those of the three tested methods. Sen2Cor was often the closest to the ALCD reference in the case of high and thin clouds.

We have provided in the Supplementary Material of this paper figures similar to Figure 7 for each reference image. A detailed analysis of the results shows that Sen2Cor tended to rely mainly on the 1.38-µm band. As a result, it performed well on scenes with cirrus clouds, but its performance was degraded on scenes with low clouds. It also tended to detect cirrus clouds over mountains (for instance on the scene over Marrakesh on 2017-01-02). Sen2Cor also frequently over-detected clouds over bright targets such as bare soil or buildings. When these bright spots were observed on dark landscapes, the false clouds also tended to generate false shadows, such as those observed for the reference scenes of Arles and Ispra in winter. After dilation, this issue was of course worsened, which is why one cannot advise users to dilate Sen2Cor cloud masks.
FMask's precision and recall rates were well balanced, which probably results from a good optimization of the parameters, but we noticed that it tended to omit a larger proportion of cloud shadows than MAJA. We also observed false cloud detections for FMask over very bright targets such as the ones in the Railroad Valley images. MAJA had slightly better performances than FMask when the dilation was accounted for. However, MAJA tended to ignore small clouds or small shadows due to the lower resolution of the cloud mask processing. It also had a very low amount of false positive clouds and a greater amount of false negative clouds, indicating that some improvement could arise from a better tuning of the parameters.

Among the three methods, MAJA's results tended to be the most homogeneous, with no scene with an overall accuracy below 80%, while FMask had three images below 80%. Sen2Cor obtained an overall accuracy below 80% for seven scenes, among which two scenes were just above 50%: Marrakesh in January, where the Sen2Cor cirrus test classified mountains as clouds because of the dry air, and Arles in December, an image with scattered bright pixels detected as clouds, whose false shadows were then found on a dark surface. The area concerned by these false positive pixels was then extended by the dilation step.

Conclusions

This article presents the results of the validation of Sentinel-2 cloud masks delivered by MAJA, FMask, and Sen2Cor. The validation was conducted with reference cloud masks generated with a new tool based on a supervised active learning procedure. ALCD was designed to allow generating reference cloud masks over whole Sentinel-2 images while minimizing the time spent by the operator. A trained operator was able to generate a good and complete reference cloud mask in less than two hours.

The ALCD method was validated using three methods. First, a visual evaluation of the produced cloud masks gave satisfactory classification maps. Second, a k-fold cross-validation was computed on the 32 scenes, leading to a global mean overall accuracy of 98.9%. Third, a comparison with an existing dataset indicated that the ALCD tool was capable of producing masks with a quality similar to manual classification via polygons, with an accuracy of 98.3% on the original dataset and 99.7% on the corrected one. However, the fact that the ALCD method requires the existence of a cloud-free image prevented us from choosing reference scenes acquired on permanently cloudy sites such as Guyana or Congo. As a result, we were not able to provide validation in such situations, which are not conducive to multi-temporal methods such as MAJA.

The ALCD tool could also be used to prepare reference cloud masks for other multi-temporal sensors such as Landsat, and its methodology could also be used to build other types of reference masks, such as water, snow, or forests for instance. ALCD is currently being used at CNES to prepare water and snow masks.
Thirty-two reference cloud masks were generated on 10 different sites in various biomes selected around the world. These 32 scenes have been made available for free download [37], and the source code of ALCD is available in an online source repository [38]. These cloud masks were used to compare the performance of three cloud masking methods used to provide Level 2A products operationally, namely MAJA, Sen2Cor, and FMask. FMask and MAJA gave very similar results, with a small advantage for MAJA. The accuracy of Sen2Cor was on average 6% lower than that of the two other methods when considering dilated cloud masks, and 3% lower with non-dilated cloud masks.

These results show that the multi-temporal cloud mask enabled MAJA to perform better than the other methods, but the difference from the well-tuned FMask was still quite low. The advantages of the multi-temporal cloud masks seem to be counterbalanced by the processing at a lower resolution needed to speed up the complex processing due to the use of multi-temporal methods. The MAJA processor computing time is currently being optimized to allow computing cloud masks at a better resolution. The ALCD dataset should also be used to improve the tuning of the thresholds of MAJA to get a better balance of false positive and false negative errors.

Figure 1. Time series of top of atmosphere reflectances from the Sentinel-2 Level 1C product, regardless of cloud cover, for a mid-altitude meadow in the center of France, for four spectral bands centered at 490 nm (blue dots), 560 nm (green dots), 670 nm (red dots), and 860 nm (black dots).

Figure 2. Same as Figure 1, but after removal of the detected clouds and shadows, with top of atmosphere reflectances on the top plot and surface reflectances on the bottom plot.

Figure 3. Visualization of the reference mask for the Marrakesh site, Tile 29RPQ, on 18 November 2017. Clouds are outlined in green, cloud shadows in yellow, water in blue, and snow in purple.

Figure 4. Miniatures of the 32 reference cloud masks generated in the study, with contours provided as in Figure 3. This figure is provided to show the diversity of landscapes and cloud covers. Larger images are provided in the Supplementary Material with this article.

Figure 5. Mean and standard deviation of the overall accuracy and F1-score of a 10-fold cross-validation procedure for each scene.

Figure 6. Procedure to derive the comparison against the reference for Sen2Cor.

Figure 7. The top left image shows the image from Arles on 02 October 2017, with the overlaid contours from ALCD, as in Figure 3. The three other images show the comparison of each processor to the ALCD reference masks (top-right, MAJA; bottom-left, FMask; bottom-right, Sen2Cor). Green corresponds to true positive, red to false negative, deep blue to true negative, and purple to false positive.

Figure 8. Mean and box plot of each metric for the original version of the cloud masks of Sen2Cor and FMask and the eroded version of MAJA over the 32 scenes, compared to the non-dilated ALCD cloud mask. Red dots correspond to average values and the horizontal bars in the colored box to the 25%, 50% (median), and 75% quartiles. The dashed lines extend to the minimum and maximum values.

Figure 9.
Mean and box plot of each metric for the dilated cloud masks of the three processors compared with the dilated version of the ALCD cloud mask, over the 32 scenes. Red dots correspond to average values and the horizontal bars in the colored box to the 25%, 50% (median), and 75% quartiles. The dashed lines extend to the minimum and maximum values.

Supplementary Materials: Images of all reference cloud masks and their comparison to the operational cloud masks from FMask, MAJA and Sen2Cor are available online at http://www.mdpi.com/2072-4292/11/4/433/s1.

Author Contributions: L.B. programmed the ALCD method, generated the cloud masks, obtained the validation results, and contributed to the article. C.D. contributed to the validation results and to the writing of the article. O.H. designed the study, supervised the work, and wrote the main parts of the article.

Funding: The work of Louis Baetens during a six-month training period was funded by CNES.

Table 1. Description of each scene used as reference.

Table 2. Spectral bands used in the various cloud detection tests. NIR stands for Near Infra-Red and SWIR for Short Wave Infra-Red.

Table 3. Comparison of ALCD masks to Hollstein reference masks on 7 scenes. The mean is weighted by the number of pixels in the polygons.

Table 4. Conversion of Sen2Cor mask classes to valid/invalid.

Table 5. Conversion of FMask mask classes to valid/invalid.

Table 6. Conversion of ALCD classes to valid/invalid.

Table 7. Accuracy and F1-score for each scene, with the original masks for ALCD, Sen2Cor, and FMask, and for MAJA after erosion. In bold, the best metrics for each scene. Dates are written in the format YYYYMMDD to save space.

Table 8. Accuracy and F1-score for each scene, with the original masks for MAJA and the dilated masks for ALCD, Sen2Cor, and FMask. In bold, the best metrics for each scene. Dates are written in the format YYYYMMDD to save space.
2019-02-22T21:53:12.380Z
2019-02-20T00:00:00.000
{ "year": 2019, "sha1": "4f337ed61c6c205dd702e983901951af87405062", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/11/4/433/pdf?version=1550654973", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4f337ed61c6c205dd702e983901951af87405062", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Geology" ] }
250461409
pes2o/s2orc
v3-fos-license
Evaluation of forensic cases presented to the pediatric emergency department

OBJECTIVE: Child forensic cases constitute an essential part of emergency presentations. The most crucial point is that the correct planning of protective and preventive activities depends on the correct analysis of the problem; therefore, there is a need for studies on childhood forensic cases. This study aimed to obtain data on the etiological characteristics of forensic cases presented to the pediatric emergency department. We believe that the collected data will guide the social measures in preventing forensic cases.

METHODS: This retrospective study consists of forensic cases aged from 1 month to 18 years presented to the pediatric emergency service of Adana City Training and Research Hospital between January 1, 2018, and December 31, 2019. The general forensic examination reports of the cases were surveyed.

RESULTS: For this study, 6577 general forensic examination reports were surveyed. 40% of the patients were female, and 60% were male. Traffic accidents were the most common (35.1%) cause of emergency presentation, followed by assault (16.5%), fall from height (9.2%), accidental drug-caustic corrosive substance intake (7.8%), early pregnancy (7.4%), blunt or sharp force injuries (6.3%), electrical burn injuries (5.7%), suicide (5.1%), carbon monoxide-food poisoning (2.7%), and others consisting of work accident, firearm injury, substance ingestion, suffocation, animal attack, sudden death, and missing child (4.2%).

CONCLUSIONS: This extensive study, with 6577 cases, has several important implications. First of all, traffic accidents continue to be an important public health problem today. Second, cases presented to the emergency department due to assault and blunt or sharp force injuries constitute an important part of forensic cases, and children being driven to violence and crime in childhood is a situation that requires immediate action. Our third yet most important result is that early pregnancy is a much-ignored social problem despite its importance.

Introduction

Forensic cases are extraordinary situations that occur due to external factors and consequently cause physical and mental disorders in a person's health. All kinds of assault, torture, traffic accidents, firearm-explosive substance injuries, work accidents, injuries, poisoning, burns, illegal substance use, sexual assault and abuse cases, suicide, accidents, murder, or sudden suspicious deaths are considered forensic cases. [1,2] Since emergency service units are the initial place of presentation, intervention, and treatment for forensic cases, they are also critical centers for developing preventive interventions for detecting forensic cases. [3-5] Children tend to continuously explore and discover their environment without being aware of the possible risks. It is commonly accepted that the prevalence of forensic cases can be reduced with preventive measures and that accidents in childhood are primarily due to preventable causes. The National Action Plan for Child Injury Prevention is an essential guide in preventing childhood injuries. As can be understood from the guide, the most crucial point is that the correct planning of protective and preventive activities depends on the correct analysis of the problem; therefore, there is a need for studies on childhood injuries.
[6,7] This study aimed to obtain data on the etiological characteristics of forensic cases presented to the pediatric emergency department. We believe that the collected data will guide the social measures in preventing forensic cases.

Methods

This retrospective study consists of forensic cases aged from 1 month to 18 years presented to the pediatric emergency service of Adana City Training and Research Hospital between January 1, 2018, and December 31, 2019. The general forensic examination reports of the cases were surveyed, and demographic information, reason for presentation, presence of a lesion, laboratory and radiological examinations, requested consultations, hospitalization, and outcomes were recorded.

The reasons for presentation were collected in 10 groups as follows: traffic accident; assault; blunt or sharp force injury; fall from height; suicide; carbon monoxide-food poisoning; accidental drug-caustic corrosive substance intake; early-age (<18 years) pregnancy; electrical burn injury; and other (work accident, firearm injury, substance ingestion, suffocation, animal attack, sudden death, and missing child). If situations such as falling or jumping from a height or taking medication were due to suicide, the case was included in the suicide group; otherwise, they were considered accidental. If an injury with a blunt or sharp object was inflicted by another person with the aim of attack, the case was included in the assault group. If the injury was self-inflicted, the case was classified as suicide. If the event was accidental (injury with a broken object, cutting oneself during cooking, being crushed under a blunt object, etc.), the case was classified as a blunt or sharp force injury.

Approval for the study was obtained from the Adana Training and Research Hospital Clinical Research Ethics Committee on December 16, 2020, with decision no. 1177.

Statistical analysis

The data were evaluated using the Statistical Package for the Social Sciences (SPSS 21.0, IBM Corp, Armonk, NY, USA). Descriptive statistics for the categorical variables were provided as frequencies and percentages. The continuous variable, age, was assessed using the Kolmogorov-Smirnov test and described as mean ± standard deviation (minimum-maximum). A cross-table was created to observe the distribution of two different categorical variables relative to each other.

Box-ED Section

What is already known on the study topic?
• Pediatric emergency units are the initial place of presentation for forensic cases and of intervention and treatment.
• Pediatric emergency units are critical centers for developing preventive interventions for detecting forensic cases.

What is the conflict on the issue? Is it important for readers?
• This study aimed to obtain data on the etiological characteristics of forensic cases presented to the pediatric emergency department. We believe that the collected data will guide the social measures in preventing forensic cases.

How is this study structured?
• This was a single-center, retrospective study that includes data from 6577 cases.

What does this study tell us?
• Traffic accidents continue to be an important public health problem today.
• Cases presented to the emergency department due to assault and blunt or sharp force injuries constitute an important part of forensic cases, and children being driven to violence and crime in childhood is a situation that requires immediate action.
• Early pregnancy is a much-ignored social problem despite its importance.

Results

In our results, 77.5% of the cases were treated as outpatients, 12.4% were hospitalized, and 4% were treated in the emergency department. Hospitalized patients were most frequently transferred to the neurosurgery, burns, and gynecology and obstetrics units. Finally, 0.3% of the cases were already dead at the time of admission or died in the emergency department.

Discussion

Pediatric forensic cases constitute an essential part of emergency presentations. Although accidental injuries, considered forensic cases, are defined as preventable events, childhood accidents are still a serious public health problem worldwide. [8,9] The American Centers for Disease Control and Prevention defines drowning, falls, burns, traffic accidents (in and out of vehicles, pedestrian, and bicycle accidents), poisoning, and asphyxia as accidental injuries. These events, all included within the scope of forensic cases, account for 44% of deaths of individuals aged between 1 and 19 years in the United States. Moreover, the death rate is two times higher in boys than in girls, and traffic accidents are the most common cause. It was also reported that 9.2 million children in the same age group are treated in the emergency department every year for nonfatal injuries, and that male gender is a risk factor. The most common cause of nonfatal injuries was falling from height for children aged 1-14 years and assault for those aged 15-19 years. [7,10] Similar results were observed in studies conducted in Turkey. Yazar et al. [11] stated that 1.71% of the patients presenting to the pediatric emergency department were forensic cases. Demir et al. [12] observed that 66% of the cases were male, and the mean age was 8.8 ± 4.37 years. Korkmaz et al. [13] reported that 21.6% of the cases presented to the pediatric emergency department were forensic, the mean age was 9.9 ± 5.5 years, and 61% of the cases were male. Congruent with the related literature, the frequency of male cases was higher in this study. In forensic cases, the male predominance was attributed to the fact that the most common reasons for admission, such as trauma and assault, were relatively more common in males. This situation is thought to be due to boys being more active, growing up with a higher sense of freedom in society, and becoming active parts of the workforce earlier. [8]

In our study, traffic accidents were the most common presentation. As in the whole world, traffic accidents are among Turkey's most important causes of death, resulting in severe injuries and disability. Furthermore, the population and the number of motor vehicles participating in traffic increase daily; statistical results show that traffic accidents are a significant socioeconomic problem. As prospective solutions to reduce traffic accidents, further support for driver, passenger, and pedestrian training, and continuously informing society through public service advertisements, have been proposed. In addition, it will be helpful to evaluate the parameters affecting traffic accidents and to arrange accident reports according to these parameters.
It is also recommended that national road maintenance, operation, and accident information systems be developed. [14]

The literature reports that children are increasingly involved in the judicial process. [8,12,15,16] School absenteeism, spending time with problematic peers and ganging up, city life, low parental education, and low income are associated with juvenile delinquency. Among juvenile forensic cases, theft, injury, damage to property, sexual crimes, and substance use are the most common causes of crime. [17] Based on the study's data, cases due to assault ranked second, and cases due to sharp-piercing-blunt instrument injury ranked sixth. In addition, these cases accounted for approximately one-fifth of all cases, and the results are important in drawing attention to the issue of children driven to violence and delinquency in childhood. Children's learning about violence at such an early age will affect their future life, relationships, and trust in society, and prevent them from becoming individuals who can look to their future with confidence. Therefore, for societies to be healthy, it is imperative to care for every individual and raise them in a healthy society. [18]

Fall from height is the third most common reason for forensic cases in our study. Studies show an increase in suicidal jumping cases in adolescents and young adults. [7] However, in this study, since jumping cases were included in the suicide group and accidental cases were included in the fall-from-height group, it was seen that 60% of the cases were children aged between 1 and 87 months, and 62.6% of all were male. Kılıç et al. [19] reported that falling accidents most frequently happen on balconies, at windows, and in trees, with 67.9% of cases being male. Atmış et al. [20] showed that 62.6% of the cases presented to the emergency department due to head trauma were male, 60.8% fell from a height indoors, and 25.4% fell from a height outdoors. Thus, falling in the preschool period is a significant cause of mortality and morbidity for children, and our results are in parallel with the literature. In this regard, it is recommended that parents increase their supervision of children and take the necessary precautions, primarily at home. [21]

According to the American Centers for Disease Control and Prevention data, poisoning is most common in males aged 1-4 years. [10] In our study, accidental drug-caustic corrosive substance intake was the fourth most common reason for presentation (7.8%). It was more common in males, and the cases were observed most frequently among children aged 1-87 months. The frequent occurrence of poisoning from the intake of such substances at young ages, particularly in boys, is attributed to children being more in contact with the environment and their curiosity about it. In addition, children cannot distinguish whether a substance is toxic, and boys are generally more active than girls. [11] Therefore, similar to previous studies, we suggest that parents should be more cautious about keeping and storing drugs and harmful substances.

According to our study, 7.4% of all cases were pregnancies under 18 years of age, which was the fifth most common presentation among forensic cases. They were most frequently observed among children aged 174-216 months. In Turkey, adolescent pregnancy prevalence is associated with cultural reasons, lack of knowledge on birth control, low socio-cultural and economic level, low education level, and ethnic reasons.
Early marriage and adolescent pregnancy are significant public health problems in Turkey and worldwide, and we suggest that joint studies in sociology, psychology, and medicine should be supported and that preventive measures be taken. [22,23]

In our study, burn and electrical injuries constituted the seventh most common reason for emergency presentation and were observed most frequently between 1 and 44 months of age. Since such injuries mainly occur in the home, it is possible to prevent them with precautions taken at home. It is recommended that drugs, corrosive substances, and sharp tools such as scissors and knives be kept out of the reach of children, and that hot drinks not be given to the child, or only under parental control. [9]

Cases presented to the emergency department due to suicide rank eighth in the current study. They are most frequently observed between 174 and 216 months and are more common among girls. Physicians have essential duties regarding this situation, a significant public health problem for young adults. It is recommended to maintain contact with the adolescent; to be alert to mood disorders, depression, drug and substance use, and risky sexual behaviors; to inform the parents about these issues; and to work together with the family in risky situations. [7,24]

Excessive use of vehicles that produce carbon monoxide, severe winds, and living in enclosed areas cause an increase in cases of poisoning with this substance in autumn and winter. [25] Carbon monoxide and food poisoning were the reason for presentation in 2.7% of forensic cases, and they can be prevented by raising awareness in society.

Forensic cases related to child abuse and neglect were observed at rates between 1.2% and 5%. The low rates in cases of abuse and neglect are attributed to the tendency to hide such incidents due to the socio-cultural structure of the society. [8,12,15] Our study observed a small percentage of abandoned children and child abuse. However, we argue that the necessary studies on this subject should be rigorously conducted, since abuse is assumed to occur more frequently in the community than is reported. After raising awareness in society through activities led by non-governmental organizations and official institutions on child abuse, studies should be carried out to correct the behavior of families at home and of teachers and children at school. Effective policies should be developed against child abuse, including measures to prevent abuse, treatment and rehabilitation of the victim, and severe punishment of the perpetrator. [26-28]

While the most common causes of hospitalization were suicide, firearm injury, fall from height, and traffic accidents, 0.4% of the patients lost their lives due to traffic accidents or falls from height. [11,12,29] Duramaz et al. [30], examining forensic cases followed up in the pediatric intensive care unit, showed that 71.8% of the cases were hospitalized for non-traumatic reasons, most frequently due to accidental drug intake (92.5%). Only 28.1% of the cases were followed up for traumatic reasons, with a mortality rate of 5%, and the most common cause of death was traffic accidents, followed by falls from height. As a result, considering the hospitalization rates of centers that treat trauma and those that do not, the latter group has higher hospitalization rates in forensic cases. In addition, trauma patients have a higher mortality rate despite a low hospitalization rate.
In our study, similar to the literature, 77.5% of the cases were treated as outpatients, 12.4% were hospitalized, and 4% were treated in the emergency department.

Limitations

The limitation of our study is that it is retrospective, and the details of the history, the mode of arrival at the hospital, and the arrival time are insufficient.

Conclusions

Our study has several important implications. First of all, traffic accidents continue to be an important public health problem today. Second, cases presented to the emergency department due to assault and blunt or sharp force injuries constitute an important part of forensic cases, and children being driven to violence and crime in childhood is a situation that requires immediate action. Our third yet most important result is that early pregnancy is a much-ignored social problem despite its importance. Measures determined by social, cultural, medical, and legal studies need to be implemented more effectively. Finally, reminding and warning parents about injuries that can be prevented with simple precautions, most of which occur at home, will protect children from undesired situations. Physicians should have information about forensic cases occurring in childhood and should fill in the necessary forms and documents, being aware that reporting is a responsibility and a legal obligation when faced with a forensic case.

Author contributions

İA and KİD conducted the study concept and design and the analysis and interpretation of the data; İA conducted the drafting of the manuscript and the critical revision of the manuscript for important intellectual content; İA and KİD conducted the statistical analysis and the acquisition of the data.

Conflicts of interest

None declared.

Ethical approval

For the study, ethical approval was obtained from the Adana Training and Research Hospital Clinical Research Ethics Committee on December 16, 2020, with decision no. 1177.

Informed consent

Written informed consent was not obtained due to the retrospective nature of this study.

Funding

None.
2022-07-13T14:35:13.748Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "98cf495c0272d23df557ab3c98771a42d0210ab6", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "957e240bc2c95b0672d508deb96efcdb0013feb8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
235467358
pes2o/s2orc
v3-fos-license
The single-point insulin sensitivity estimator (SPISE) index is a strong predictor of abnormal glucose metabolism in overweight/obese children: a long-term follow-up study

Purpose

To investigate the relationship between the single-point insulin sensitivity estimator (SPISE) index, an insulin sensitivity indicator validated in adolescents and adults, and the metabolic profile in overweight/obese children, and to evaluate whether basal SPISE is predictive of impaired glucose regulation (IGR) development later in life.

Methods

The SPISE index (= 600 × HDL^0.185 / (Triglycerides^0.2 × BMI^1.338)) was calculated in 909 overweight/obese children undergoing metabolic evaluations at the University of Cagliari, Italy, and in 99 normal-weight, age- and sex-comparable children selected as a reference group, together with other insulin-derived indicators of insulin sensitivity/resistance. 200 overweight/obese children were followed up for 6.5 [3.5-10] years, and these data were used for longitudinal retrospective investigations.

Results

At baseline, 96/909 (11%) overweight/obese children had IGR; in this subgroup, SPISE was significantly lower than in normo-glycaemic youths (6.3 ± 1.7 vs. 7 ± 1.6, p < 0.001). The SPISE index correlated positively with the insulin sensitivity index (ISI) and the disposition index (DI), and negatively with age, blood pressure, HOMA-IR, and basal and 120-min blood glucose and insulin (all p values < 0.001). A correlation between SPISE, HOMA-IR and ISI was also reported in normal-weight children. At the 6.5-year follow-up, lower basal SPISE (but not ISI or HOMA-IR) was an independent predictor of IGR development (OR = 3.89 (1.65-9.13), p = 0.002; AUROC: 0.82 (0.72-0.92), p < 0.001).

Conclusion

In children, a low SPISE index is significantly associated with metabolic abnormalities and predicts the development of IGR later in life.

Introduction

Overweight and obesity in childhood are conditions epidemically spread worldwide, and the dramatic increase in their incidence in the last decades has become a relevant public health issue around the world [1]. Data show that 17.9% of European children were overweight or obese during the period 2006-2016. The prevalence estimate of obesity was 5.3%, with the highest values reported in the Southern European countries [2]. The increasing prevalence in children comes with escalations in morbidity and mortality, both in childhood and in future adulthood [3], as well as in concomitant costs [4]. Early-onset obesity is an independent risk factor for the development of insulin resistance and type 2 diabetes (T2D) [5], and insulin resistance represents the most common metabolic disorder associated with obesity [6,7]. Insulin resistance and T2D are well-known independent risk factors for cardiovascular diseases [8]. The pathogenesis of insulin resistance in children is a multi-factorial process, and obesity is the most prevalent risk factor [9]. Genetic predisposition [10], gestational diabetes [11], being born small for gestational age (SGA) [12], rapid post-natal weight gain [13], premature birth [14], and smoking during pregnancy [15] also increase the risk of insulin resistance during childhood. Not all obese children are insulin resistant, and insulin resistance can also occur in non-obese children [16]. The prevalence of insulin resistance in obese children varies from 33.2 to 52.1% [17-19], depending on the method and cut-off used to define insulin resistance.
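For reference, the SPISE formula quoted above, together with the standard HOMA-IR formula used throughout the paper, can be written as a short sketch. The units are indicated in the comments, and the example values are illustrative, not taken from the study data.

```python
def spise(hdl, tg, bmi):
    """SPISE = 600 * HDL^0.185 / (TG^0.2 * BMI^1.338),
    with HDL-C and triglycerides in mg/dL and BMI in kg/m2."""
    return 600.0 * hdl ** 0.185 / (tg ** 0.2 * bmi ** 1.338)

def homa_ir(fbg, fsi):
    """Standard HOMA-IR: fasting glucose (mg/dL) * fasting insulin (µU/mL) / 405."""
    return fbg * fsi / 405.0

# Illustrative example: HDL-C 45 mg/dL, TG 110 mg/dL, BMI 27 kg/m2 -> SPISE ≈ 5.8,
# i.e. within the lower SPISE range reported for children with IGR.
print(round(spise(45, 110, 27), 2), round(homa_ir(90, 12), 2))
```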
Indeed, the identification of accurate tools for risk stratification in obese/overweight children is a crucial step in designing strategies to prevent metabolic diseases and their complications later in life. However, standards for assessing insulin resistance in children in the clinical setting are still lacking, and insulin-derived indexes of insulin resistance/sensitivity are affected by methodological issues, such as the poor standardization of insulin measurement [16, 20-22]. Among the non-insulin-derived indices, the single-point insulin sensitivity estimator (SPISE) index is a lipid- and BMI-based index of insulin sensitivity that showed better accuracy than other indicators, such as its forerunner the TG/HDL-C ratio and HOMA-IR, in the prediction of metabolic syndrome, and has recently been validated in adolescents and adults [23,24]. To date, the usefulness of the SPISE index has not been investigated in a pediatric population. Furthermore, no prospective data exist on the predictive value of this parameter for insulin resistance and glucose abnormalities. Therefore, the aims of this study were (i) to investigate the relationship between the SPISE index, glyco-metabolic profile and insulin sensitivity in a large population of children, with and without obesity, and (ii) to evaluate whether basal SPISE is predictive of the development of impaired glucose metabolism later in life.

Study population

For the purposes of this study, we carried out a cross-sectional and a longitudinal retrospective investigation. For the cross-sectional phase, we analysed the SPISE index from data obtained in 909 overweight or obese children (median age [interquartile range]: 10 [8-13] years) consecutively recruited at the Paediatric Endocrine outpatient clinics of the Paediatric Hospital for Microcitaemia, Cagliari, Italy. Study participants were selected among all those referred to the clinic for the presence of excess bodyweight, as indicated by the general practitioner; the main exclusion criteria were the presence of endocrine disorders or genetic syndromes, including syndromic obesity. After the visit, all children with overweight or obesity were instructed to follow an educational program including dietary and lifestyle modifications.

To provide a reference range in children with normal metabolic status, the SPISE index was also calculated in 99 normal-weight healthy children (median age [interquartile range]: 11 [9-12.9] years) with an age and sex distribution comparable to the first cohort. These normal-weight children, without any endocrine, cardiovascular, gastrointestinal or renal disorder, were recruited in the same clinical setting and selected among those referred to the Paediatric outpatients' clinic for routine clinical assessment. The enrolment of the whole study population occurred between May 2007 and May 2010.

200 out of 909 overweight/obese children were followed up between 2013 and 2016, with a median (range) follow-up duration of 6.5 (3.5-10) years, and the data collected were used for the longitudinal investigation (see reference [25] for a description of this cohort).

Clinical and biochemical evaluations

At baseline, all the study participants (n = 1008) underwent medical history collection, clinical examination and fasting blood sampling.
Children with fasting blood glucose less than 126 mg/dl underwent an oral glucose tolerance test (OGTT) at both the baseline and the follow-up visit, following clinical recommendations for children (1.75 g of glucose administered per kg bodyweight, up to 75 g) [26]. Blood glucose and insulin concentrations were measured at baseline and 120 min after the oral glucose load. The presence of abnormal glucose metabolism (AGM), in terms of impaired glucose regulation (IGR: impaired fasting glucose, IFG, or impaired glucose tolerance, IGT) or diabetes mellitus, was diagnosed according to criteria from the ADA Standards of Medical Care in Diabetes 2021 [27].

Systolic and diastolic blood pressure (SBP, DBP, mmHg) were measured after a 10-min rest, and the average value of three measurements was recorded for the analysis. Overweight, obesity and the standard deviation score of body mass index (SDS-BMI) were defined according to the Italian growth charts for height, weight and BMI in people aged 2-20 years; a BMI more than one standard deviation (SD) above the mean defined overweight, and more than 2 SD defined obesity [28]. Study participants were classified as pre-pubertal or pubertal according to the Tanner stage of pubertal development (pre-pubertal, for Tanner's stage I: boys with pubic hair and gonadal stage I, girls with pubic hair stage and breast stage I; pubertal, for Tanner's stages II-V: boys with pubic hair and gonadal stage ≥ II and girls with pubic hair stage and breast stage ≥ II) [29].

Laboratory procedures

Blood samples were obtained from the antecubital vein after 12-h fasting for evaluating routine biochemistry and metabolic profile, including blood glucose (FBG, mg/dL), insulin (FSI, IU/mL), aspartate aminotransferase (AST, IU/L), alanine aminotransferase (ALT, IU/L), total cholesterol (mg/dL), high-density lipoprotein cholesterol (HDL-C, mg/dL), triglycerides (mg/dL) and uric acid (mg/dL). Plasma glucose levels were measured by the glucose oxidase method (Autoanalyzer, Beckman Coulter, USA) and insulin concentration by radio-immunoassay (DLS-1600 Insulin Radioimmunoassay Kit, Diagnostic System Laboratories Inc., Webster, Texas, USA) on samples separated, frozen and stored at −80 °C until the analyses. AST, ALT, total cholesterol, HDL-C, triglycerides and uric acid were measured in the local laboratory by standard methods. The low-density lipoprotein cholesterol (LDL-C) value was obtained using the Friedewald formula.

Ethics standards

The study protocol was reviewed and approved by the local Ethics Committee and conducted in conformance with the Helsinki Declaration. Informed written consent was obtained from the children or their legal guardians before all the study procedures.

Statistics

All the analyses were performed using the SPSS statistical package, version 25.0. Values are shown as mean ± standard deviation (SD), median (interquartile range, IQR) or percentage, as appropriate. Skewed variables were log-transformed before the analyses. Differences between two independent groups were compared by Student's t test for continuous variables and by the χ2 test for categorical parameters. Correlations were estimated by Pearson's and Spearman's tests, in relation to the type and distribution of the variables. Univariate regression analyses were performed to test the association between binomial (i.e. sex, AGM…) and continuous variables.
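The two-group comparisons described above can be sketched as follows, with SciPy standing in for SPSS; the inputs are assumed to be prepared elsewhere, and the log-transform is applied only because the example values are taken to be skewed, as stated in the text.

```python
import numpy as np
from scipy import stats

def compare_groups(cont_a, cont_b, table):
    """Student's t test on log-transformed (skewed, positive-valued)
    continuous values of two independent groups, and a chi-square test
    on a two-way contingency table of categorical variables."""
    t_stat, t_p = stats.ttest_ind(np.log(cont_a), np.log(cont_b))
    chi2, chi_p, dof, _ = stats.chi2_contingency(table)
    return t_p, chi_p
```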
The predictive value of the SPISE index at baseline for the onset of abnormal glucose metabolism (IFG and/or IGT or T2D) at follow-up was estimated by multiple logistic regression analysis adjusted for age, sex, and fasting and 120-min glucose and insulin levels at baseline. An adjusted area under the receiver-operating characteristic curve (AUROC) of SPISE for AGM, with 95% confidence interval (C.I.), was also calculated controlling for the same covariates. P values < 0.05 were considered statistically significant, with a C.I. of 95%. Results In these subjects, the prevalence of alterations of glucose metabolism at baseline was almost 11% (96 out of 909 subjects). Children with IGR had significantly lower SPISE than those with normal glucose tolerance (mean ± SD SPISE: 6.3 ± 1.7 vs. 7.0 ± 1.6, p < 0.001). In our study population, triglycerides, AST, ALT, blood insulin, DI, HOMA-IR, HOMA-β%, ISI and SPISE had a skewed distribution; the characteristics of the whole study population, overall and according to glucose tolerance profile, are illustrated in Table 1. At the bivariate analysis, the SPISE index correlated positively with the ISI and DI, whereas an inverse association was found between SPISE and age, blood pressure, HOMA-IR, and basal and 120-min blood glucose and insulin (Table 2). SPISE did not associate with sex at the univariate logistic analysis (β coefficient = 0.79, p = 0.19). The characteristics of this study subgroup at baseline and follow-up are illustrated in Table 3. The baseline SPISE index correlated inversely with age, BMI, SDS-BMI and waist circumference at follow-up. Having a lower SPISE index at baseline was associated with the development of higher blood pressure levels and an impaired glucose and lipid profile at the follow-up evaluation (Table 4). Belonging to the lowest quartile of the SPISE index distribution (i.e. SPISE index below 6.08) at baseline was associated with the development of IGR later in life, with OR = 3.89 (1.65-9.13), β = 1.36, p = 0.002, at the multivariate logistic regression analysis adjusted for age, sex, and fasting and 120-min glucose and insulin levels at baseline. The SPISE index showed high specificity and sensitivity in predicting future IGR, with AUROC = 0.82 (0.72-0.92), p < 0.001, in the adjusted AUROC model corrected for the same covariates (Fig. 2). Notably, unlike SPISE, neither ISI nor HOMA-IR at baseline was able to predict the development of IGR in obese children later in life in multivariate logistic regression models adjusted for sex, age and BMI [ISI: OR = 0.94, …]. Discussion The main finding of this study is that the SPISE index correlates with insulin-derived indicators of insulin resistance and sensitivity in children, and significantly predicts the development of glucose metabolism abnormalities later in life in this population. Indeed, on the one hand, the SPISE index displayed a strong cross-sectional correlation with dynamic OGTT-derived indicators of insulin sensitivity such as ISI, and with the widely used proxy of insulin resistance HOMA-IR. On the other hand, low basal SPISE was able to predict the development of impaired glucose regulation at the 6.5-year follow-up with an OR of 3.89 (1.65-9.13), regardless of major metabolic confounders such as sex, age, and the results of the OGTT performed at baseline. At variance with SPISE, in our study all the other insulin-derived indexes of insulin sensitivity/resistance calculated at baseline failed to demonstrate any correlation with the future onset of IGR in children with body weight excess.
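To make this type of analysis concrete, here is a rough Python sketch of an adjusted logistic regression with an AUROC readout. The study itself used SPSS, so this is only an analogous re-creation on entirely synthetic data: every variable, coefficient and random draw below is made up for illustration.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200                                     # same size as the follow-up subgroup
covars = np.column_stack([
    rng.normal(10, 2, n),                   # age (years)
    rng.integers(0, 2, n),                  # sex
    rng.normal(90, 10, n),                  # fasting glucose
    rng.normal(120, 25, n),                 # 120-min glucose
    rng.normal(15, 6, n),                   # fasting insulin
    rng.normal(60, 30, n),                  # 120-min insulin
])
spise_low = rng.integers(0, 2, n)           # 1 = lowest SPISE quartile at baseline
logit = -2.0 + 1.36 * spise_low             # synthetic effect, roughly OR ~ 3.9
igr = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # synthetic IGR at follow-up

exog = sm.add_constant(np.column_stack([spise_low, covars]))
model = sm.Logit(igr, exog).fit(disp=0)
print("OR, lowest SPISE quartile:", float(np.exp(model.params[1])))
print("AUROC:", roc_auc_score(igr, model.predict(exog)))
```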
This is the first study aiming to test the reliability of the SPISE index as an indicator of insulin sensitivity in children and to assess its predictive value for the identification of glucose-insulin metabolism disorders later in life. In the cross-sectional phase of this study, the association between the SPISE index and the OGTT-derived ISI, a validated indirect indicator of insulin sensitivity also applied in children and adolescents [34, 35], was demonstrated in over 900 youths with body weight excess and was then confirmed in normal-weight children. A number of studies have previously investigated the relationship between the SPISE index and insulin-derived indicators of insulin homeostasis [23, 24, 36-39]; the data showed that the SPISE index was comparable to the Matsuda ISI, QUICKI and HOMA-IR when used for the identification of conditions of altered insulin sensitivity in adults [23]. Moreover, a lower SPISE index significantly correlated, in adults or adolescents, with the presence of T2D [36], metabolic syndrome [37, 39], risk of cardiovascular disease [36], non-alcoholic fatty liver disease (NAFLD) [38], abdominal obesity, higher levels of C-reactive protein (CRP) and lower levels of adiponectin [24]. Finally, in line with our results obtained in youths, Sagesaka et al. demonstrated, in a longitudinal investigation of over 27,000 individuals without diabetes, that the basal SPISE index was significantly lower in adults who developed T2D 10 years later than in those who did not progress to diabetes [40]. Our study is the first investigation testing the SPISE index in children. The rise of childhood obesity in Western countries is paralleled by the increasing prevalence of T2D and other metabolic diseases such as NAFLD and metabolic syndrome in children and pre-adolescents [41, 42], as a significant proportion of overweight/obese children is also affected by subclinical insulin resistance [16, 18, 19]. Therefore, identifying an easy, reliable and cost-effective tool for the stratification of cardio-metabolic risk in youths with body weight excess is a primary goal to achieve for containing the burden of T2D and other metabolic complications of obesity across the new generations [43].
[Table 3: Characteristics of the subgroup of overweight/obese children (n = 200) undergoing the 6.5-year follow-up (baseline and end of observation). Data are mean ± SD unless otherwise indicated; differences compared by Student's t test. BMI, body mass index; SDS-BMI, standard deviation score of BMI; SBP/DBP, systolic/diastolic blood pressure; HDL-C/LDL-C, high-/low-density lipoprotein cholesterol; FBG, fasting blood glucose; FSI, fasting serum insulin; SPISE, single-point insulin sensitivity estimator; ISI, insulin sensitivity index; DI, disposition index; HOMA-IR, homeostasis model assessment of insulin resistance; HOMA-β%, homeostasis model assessment of insulin secretion.]
Indeed, although the euglycaemic-hyperinsulinaemic clamp represents the "gold standard" for measuring insulin sensitivity [44], this technique is invasive, expensive and difficult to perform in clinical practice. Thus, surrogate indexes of insulin resistance have been developed [23, 31-33]. Some of these involve insulin or glucose loading and measurements at set time intervals, such as the Matsuda index, the intravenous glucose tolerance test (IVGTT) and the insulin tolerance test (ITT) [45].
Other indexes consist of the measurement of insulin levels in a steady state (including HOMA-IR, QUICKI, 1/HOMA, HOMA-1%S, etc.) [45-47]. However, the value of insulin measurement and insulin-derived indicators for metabolic risk stratification in the young population is still debated, and their interpretation is not univocal due to the pulsatility of insulin release [20], its short half-life [22] and the presence of poorly standardized assays [21]. In this setting, the identification of SPISE, a lipid- and BMI-derived index of insulin sensitivity, as a novel predictor of impaired glucose-insulin metabolism provides a unique tool to be used in clinical practice for phenotyping children at high risk of metabolic diseases, such as those with obesity. Moreover, since insulin measurement is not advised for screening insulin resistance in large groups or for preventive purposes [16], whereas most population-based health surveys include BMI and the lipid profile [23], the SPISE index may also be used as a sensitive and easy method to assess insulin sensitivity at the population level. The SPISE index was validated in a cross-sectional investigation including a large cohort of over 1200 non-diabetic adults and 29 obese adolescents [23]. In that study, a cut-off value of SPISE below 6.61 was proposed to indicate the presence of insulin resistance, as estimated by comparison with the clamp-derived M value. Conversely, our study is the first investigation exploring the SPISE index in the prediction of the development of impaired glucose regulation in overweight and obese children. Thus, rather than using a previously identified SPISE cut-off obtained in a non-comparable population and study design, in our study we explored whether belonging to the lowest quartile of the SPISE index distribution, i.e. SPISE below 6.08, at baseline was associated with the development of altered glucose metabolism. A SPISE index cut-off < 6.08 may therefore be proposed as a novel threshold for low insulin sensitivity in children which could predict the development of dysglycaemia later in life in a real-world setting. The rationale of the SPISE index for identifying insulin resistance is particularly intriguing: changes in TG and HDL-C are among the earliest lipid and lipoprotein manifestations of insulin resistance [48-50]. Indeed, insulin resistance measured by the euglycaemic clamp is associated with adverse lipid and lipoprotein changes favouring atherosclerosis even in subjects without diabetes. The addition of BMI, another easy indirect measure of adipose tissue and insulin sensitivity, further enhances the sensitivity of the SPISE index. For all these characteristics the SPISE index, but not traditional insulin-derived indicators of insulin sensitivity/resistance such as ISI and HOMA-IR, performed very well as a strong independent predictor of the development of IGR in the large population of overweight and obese children included in this study. In conclusion, this study demonstrates that the SPISE index is a strong indicator of insulin sensitivity in children with and without body weight excess, and that in overweight/obese individuals it predicts the development of impaired glucose regulation later in life independently of potential confounders.
Finally, given its noninvasive, low-cost and easily calculated nature, the SPISE index may represent an easy surrogate of insulin sensitivity in overweight/obese children, to be used as a screening tool for metabolic risk assessment on a large scale. Funding Open access funding provided by Università degli Studi dell'Aquila within the CRUI-CARE Agreement. This work was supported by research grants from the Department MeSVA, University of L'Aquila (Bando Ricerca FFO 2020 and FFO 2021) to Marco G. Baroni, and from Sapienza University of Rome ("Ricerca Ateneo") to M.G. Cavallo. Ilaria Barchetta is supported by a grant from the Eli Lilly Foundation. Availability of data and material The authors agree to share data upon request. Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent Informed written consent was obtained. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
A study of autologous melanocyte transfer in treatment of stable vitiligo Background: Replenishing melanocytes selectively in vitiliginous macules by autologous melanocytes is a promising treatment. With expertise in culturing melanocytes, it has now become possible to treat larger recipient areas with smaller skin samples. Aim: To study the extent of repigmentation after autologous melanocyte transplantation in patients with stable vitiligo. Methods: The melanocytes were harvested as an autologous melanocyte-rich cell suspension from a donor split-thickness graft. Melanocyte culture was performed in selected cases where the melanocyte cell count was insufficient to meet the requirement of the recipient area. These cells were then transplanted to the recipient area, which had been superficially dermabraded. Results: An excellent response was seen in 52.17% of cases with the autologous melanocyte-rich cell suspension (AMRCS) technique and in 50% with the melanocyte culture (MC) technique. Conclusion: Autologous melanocyte transplantation can be an effective form of surgical treatment in stable but recalcitrant lesions of vitiligo. INTRODUCTION There are many modalities for the treatment of vitiligo, but there is still a need for a treatment that is promptly effective. Replenishing melanocytes selectively within vitiliginous macules by autologous melanocytes is a promising treatment [1]. Moreover, with expertise in culturing melanocytes, it has now become possible to treat larger recipient areas with smaller skin samples [1]. There are a few techniques for seeding autologous melanocytes. To study the extent of repigmentation after autologous melanocyte transplantation in patients with stable vitiligo, we employed two methods, melanocyte-rich cell suspension and cultured melanocytes, for replenishing melanocytes. METHODS Patients with stable vitiligo were selected for the study, the criteria for stability being no increase in the size of the lesions for at least 2 years and no new lesions in the last 2 years. Exclusion criteria were active disease, infection at the recipient site, age below 8 years, evidence of a Koebner response in the past, bleeding diathesis, keloidal tendencies and poor general condition. The study was conducted at the Department of Dermatology, Civil Hospital, BJ Medical College, Ahmedabad, over a span of 3 years. Patients were chosen randomly after they met the selection criteria. Of the 27 patients recruited, a single vitiliginous lesion was taken as a control in 20 cases. The control patch was only superficially dermabraded and subsequently dressed. Pre-operative work-up consisted of informed consent, a clinical photograph, screening for HIV and hepatitis B virus infection, and charting of the area to be grafted. A prophylactic course of an antibiotic, usually oral erythromycin 500 mg thrice daily, was started 1 day before the procedure, and alprazolam 0.5 mg was administered orally on the previous night. Two techniques were employed: the autologous melanocyte-rich cell suspension (non-cultured) technique [1-3] and the cultured melanocyte technique [4-6]. Both these techniques share a common principle of selective replenishment of melanocytes at the recipient stable vitiligo macules. The culture technique was used when the harvested melanocytes, counted with a Neubauer chamber, were fewer than required.
Donor site About one-tenth the size of the recipient area was selected as the donor site, usually at non-cosmetically important sites like the thighs, buttocks or waist. It was cleaned with povidone iodine (Betadine®) and 70% ethanol, and draped. The site was anesthetized with 1% lidocaine (Xylocaine®) infiltrated into the subcutis. The skin was stretched and a very superficial sample was obtained with Silver's skin grafting knife or a sterile razor blade held in a straight hemostat forceps. The superficial wound was then dressed with Sofra-Tulle®. Laboratory procedure for cell separation [3] The skin graft was immediately transferred to 6 ml of 0.25% trypsin-EDTA solution in a petri dish. The skin graft inside the petri dish was turned back and forth to ensure complete contact with the trypsin-EDTA solution. This mixture of skin sample with trypsin-EDTA solution was incubated at 37 °C and 5% CO₂ for 50 minutes. Three ml of trypsin inhibitor (soya protein solution; Sigma Chemicals, St. Louis, MO, USA) was then added to neutralize the action of trypsin. The epidermis was separated from the dermis with the help of a pair of forceps. The epidermis was cut into tiny pieces and transferred to a 15 ml test tube with a pipette. Three ml of MK medium (melanocyte-keratinocyte culture medium; Sigma Chemicals, St. Louis, MO, USA) was added to it. The contents of MK medium were DMEM (Dulbecco's Modified Eagle's Medium), insulin (25 µg), adenine (0.38 nmol), basic fibroblast growth factor (b-FGF) [7-10], human albumin [or fetal calf serum (10%)], hydrocortisone (0.1 µg/ml), streptomycin (0.1 mg/ml) and penicillin (100 U/ml). All materials used were tissue culture grade (endotoxin level < 0.1 EU/ml, chrome-LAL test). The test tube was then centrifuged at 2000 rpm for 10-15 minutes. A pellet formed at the bottom. The supernatant was discarded and the pellet, containing cells from the stratum basale and the lower half of the stratum spinosum that were rich in melanocytes, was taken. The melanocytes were stained with trypan blue and counted with a Neubauer chamber under the light microscope; this also identified whether the suspension was viable, as dead cells pick up the blue stain. Around 1000-1500 cells/mm² of recipient area were required [8]; when the separated cells were fewer than 1000 cells/mm², they were subjected to cell culture. Recipient site It was cleaned, painted and draped with Betadine® and 70% ethanol, and washed thoroughly with normal saline. It was anesthetized with EMLA cream (Prilox®) or 1% lignocaine. Transplantation [1] The recipient area was abraded with a high-speed motor dermabrader fitted with a diamond fraise wheel until tiny pinpoint bleeding spots were seen, which implied that the dermoepidermal junction had been reached. The denuded area was covered with a saline-moistened gauze piece. The suspension was poured evenly from the pipette onto the denuded surface, which was then covered with a collagen dressing (Kollagen® 5x5). This was then covered by a small gauze piece moistened in MK medium. The dressing was kept in place by a Tegaderm® dressing. The patient was allowed to go home 30 minutes later. The dressing was removed at the first follow-up visit to the hospital a week later.
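As a back-of-envelope illustration of the seeding arithmetic implied by the protocol (a minimal sketch; the lesion size and counted concentration below are hypothetical, and only the densities come from the text above):

```python
# Hypothetical numbers throughout; only the densities (1000-1500 cells/mm^2,
# >= 2000 for areas such as the areola) come from the protocol above.
recipient_area_mm2 = 25 * 40              # e.g. a 25 mm x 40 mm vitiliginous patch
target_density = 1500                      # cells per mm^2 of recipient area
cells_needed = recipient_area_mm2 * target_density        # 1,500,000 cells

harvested_cells = 8.0e5                    # total viable cells from the Neubauer count
if harvested_cells / recipient_area_mm2 < 1000:
    print("fewer than 1000 cells/mm^2 available -> culture for 1-3 weeks first")

concentration_per_ml = 2.5e6               # cells/ml after final re-suspension
print(f"{cells_needed} cells needed -> apply "
      f"{cells_needed / concentration_per_ml:.2f} ml of suspension")
```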
Preparation of autologous culture of melanocytes [4] The graft was harvested in a similar manner as above. The tissue was transferred to a petri dish containing 6 ml of 0.25% trypsin-EDTA solution with the help of a pair of forceps and incubated for 50-60 min at 37 °C and 5% CO₂. Later, a few milliliters of trypsin inhibitor were added to neutralize the action of trypsin. The epidermis was separated and cut into tiny pieces with the help of a small surgical knife, and the entire contents were transferred into a centrifuge tube, to which 3 ml of MK culture medium was added. The tube was then centrifuged at 2000 rpm for 15 minutes. The supernatant was removed and the pellet was taken out with a pipette. It was re-suspended in 1 ml of MK medium and 0.5 ml of the solution was inoculated into a T-25 tissue culture flask. Another 5 ml of MK medium was added to the flask, which was kept in the incubator at 5% CO₂ and 37 °C for a maximum of 3 weeks [4]. The medium in the flask was carefully changed daily and incubation continued. Melanocytes were detached from the flask with trypsin-EDTA solution at the end of 2-3 weeks. The trypsin-EDTA solution was again replaced with MK medium for a few minutes. This solution was further incubated for 5 minutes and then another 5 ml of MK medium was added. The entire contents were collected with a pipette and poured into a centrifuge tube. It was again centrifuged at 2000 rpm for 10-15 minutes. The supernatant was discarded and the contents were re-suspended in 0.5 ml of the culture medium. A microscopic preparation of this was made and stained with trypan blue to check the viability of the melanocytes. Viable melanocytes did not take up the stain and dead cells appeared blue in color [Figures 1 and 2]. The melanocytes were counted and the desired number of cells, in the form of a certain quantity of suspension (cells/ml), was taken as per the area of the recipient site. Around 1000-1500 melanocytes/mm² were spread uniformly over the recipient surface after superficial dermabrasion. However, in certain areas like the areola, the number of melanocytes implanted was more than 2000 cells/mm². Erythromycin and nimesulide 100 mg daily (if required) were continued for 7 days following the transplantation. The dressings at the donor and recipient sites were removed on day 7. A light dressing was applied to the recipient area for the next 7 days, if found necessary. The patients were called at 1 month, 3 months and 6 months to assess the extent of repigmentation. Photographs were taken and the observations were tabulated. The response was graded according to the extent of repigmentation in the transplanted areas as follows: excellent, >90% repigmentation; good, 65% to 89% repigmentation; fair, 25% to 64% repigmentation; and poor, below 25% repigmentation. RESULTS Of the total of 27 patients with vitiligo, 12 (44.4%) were male and 15 (55.6%) female. The majority (16, 59%) of the patients were in the age group of 21 to 30 years. Vitiligo was of the vulgaris type in 25 patients and of the segmental type in 2. Table 1 shows details of the duration of disease in our patients. At the first follow-up, soon after the removal of the dressing, the treated area appeared bright pink. Repigmentation was first seen 2-3 weeks after the procedure and was completed in up to 6 months. It was almost of a uniform color. In a few cases, there was initial hyperpigmentation that subsequently faded to match the normal skin color [1].
This hyperpigmentation may be caused by hyperactivity of the transplanted cells from the culture or an oversupply of growth factors and melanogenic peptides such as b-FGF during wound healing [1,7]. In most patients we observed pigmentary islands irrespective of leukotrichia or a paucity of hair follicles. The optimum time for successful culture was 1-3 weeks. At the end of 3 weeks, the cell count was raised 50- to 100-fold after primary culture and subculture. The melanocyte content of these cultures was 95%. Fifty-one sites in 27 patients were chosen for autologous melanocyte transplantation. The most common sites were the feet (45.1%), legs (29.4%), hands (9.8%), knees (3.9%) and face (3.9%) [Table 2]. The results were most favorable on the legs, feet, face and forearms, and poor on the elbows and acral areas of the hand. Repigmentation was not observed in any of the control patches. Repigmentation was best when there were 1500 cells/mm² or more. In 4 of the 27 patients, the cells were cultured because of a large recipient area or because the number of cells separated was less than 1000 cells/mm² of recipient area [Table 3]. An excellent response was seen in 12 (52.2%) and 2 (50.0%) patients with AMRCS and melanocyte culture respectively [Table 4]. Some minor complications were observed. Strikingly, there was no milia formation or scarring. Two (7.4%) patients had infection at the donor area and three (11.1%) developed infection at the recipient surface. Only one patient developed a Koebner response at the donor area. DISCUSSION In patients with stable vitiligo, autologous melanocyte transfer is a simple and effective technique to produce homogeneous pigmentation quickly. It has an advantage over conventional split-thickness grafting as it requires very little donor skin (usually only one-tenth of the recipient site) [9]. Patients were generally satisfied with the results, as the quality of repigmentation was superior. Further large-scale patient studies are required, especially with melanocyte culture methods, to confirm the efficacy of autologous melanocyte transfer techniques. The outcome was optimal when more than 1000 AMRCS cells/mm² were applied, but proportionately poorer when the number was lower. The autologous melanocyte-rich cell suspension (AMRCS) technique of autologous melanocyte transfer was equally effective for smaller lesions, but for a larger recipient area melanocyte culture (MC) was found more suitable, although the technique of AMRCS was simple and efficient [11]. Overall, an excellent response was seen in 14 patients (51.8%), a good response in 5 (18.5%), and a fair response in 3 (11.1%). Five patients (18.5%) had a poor response. The response to both techniques was comparable, with an excellent response in 52.2% of cases with the AMRCS technique and 50% with melanocyte culture. Repigmentation was generally first observed at 2-3 weeks and was complete by 6 months. It was seen as multiple islands of pigmentation that later coalesced to a uniform color. The location of the recipient site was the major determinant of the outcome; acral parts including the dorsal aspects of the hands and feet, and the skin over the joints, were less responsive, as 2 patients each with lesions on the hands and feet, and 1 patient with lesions on the elbow, had a poor response [1]. The response was comparable to that in studies done by Mulekar [9]. In a study of 27 patients, Lontz et al reported an excellent response in 40.7%, a good response in 7.4%, and a moderate response in 51.8% [1].
Lontz et al emphasize that the anatomical location is the major factor that determines the response [1]. The fingers, knuckles and elbows were the most difficult areas to repigment, partly because of the relative uncertainty in controlling the depth of dermabrasion of such heavily cornified areas and also because of the high mobility of the skin covering these joints. Olsson and Juhlin have also made a similar observation [4]. Both these techniques had some minor complications. Infection at the donor area was seen in 7.4% of patients and at the recipient site in 11.1%. Infection probably occurred because patients did not comply with instructions to avoid unnecessary movements of the neighboring joint, and subsequently the dressing slipped. Only 1 patient developed a Koebner response at the donor site. None of our patients had milia formation or scarring. Figure 1: Melanocytes in culture (day 3); note the developing dendrites (blue filter used in surface phase-contrast microscopy).
Analysis and optimisation of the glass/TCO/MZO stack for thin film CdTe solar cells Abstract Magnesium-doped Zinc Oxide (MZO) films have recently been proposed as a transparent buffer layer for thin film CdTe solar cells. In this study, the band gap of MZO buffer layers was tuned for CdTe solar cells by increasing the substrate temperature during deposition. Films were deposited by radio-frequency magnetron sputtering. Devices incorporating an optimised MZO buffer layer deposited at 300 °C with a band gap of 3.70 eV yielded a mean efficiency of 12.5% and a highest efficiency of 13.3%. Transmission electron microscopy showed that MZO films are uniformly deposited on the transparent conductive oxide (TCO) layer surface. The favourable band alignment seems to positively counterbalance the low doping level of the MZO layer and its high lattice mismatch with CdTe. Titanium-doped indium oxide, tin-doped indium oxide and aluminium-doped zinc oxide TCOs were also used as alternatives to fluorine-doped tin oxide (FTO), in combination with MZO films. The use of titanium-doped indium oxide and tin-doped indium oxide TCOs did not improve the device efficiency achieved compared with FTO, however using aluminium-doped zinc oxide coupled with a boro-aluminosilicate glass substrate the mean and highest efficiencies were further improved to 12.6% and 13.4% respectively. Introduction Thin film cadmium telluride (CdTe) photovoltaics (PV) are an extremely promising and scalable PV technology. First Solar, Inc., the largest manufacturer of CdTe PV modules to date, currently provides utility-scale levelised cost of electricity (LCOE) which is competitive with all other renewable and non-renewable energy sources [1]. One of the main drivers for thin film CdTe LCOE reduction is to increase the PV module power conversion efficiency [2]. The efficiency of thin film CdTe modules and laboratory scale cells has been significantly improved in the last decade, achieving record efficiencies of 18.6% and 21.0% respectively [3]. The improvement was mainly a result of increases in the short circuit current density of the solar cell/module, achieved by using a very transparent window layer as well as including a CdSeTe alloy to grade the CdTe band gap and absorb more infrared light [4]. Recently, magnesium-doped zinc oxide (MZO) has been proposed as an effective buffer and alternative to CdS [5]. MZO is a high band gap semiconductor (Eg > 3.3 eV) and transmits a larger fraction of the solar spectrum through to the underlying absorber, as compared with CdS (Eg = 2.45 eV) [6]. There are several other semiconductors with a band gap comparable to MZO; however MZO is effective due to the tuneability of its band alignment with CdTe [7]. This depends upon the Mg concentration in the film and in particular on the MgO/ZnO ratio, with MgO being the high band gap (Eg = 7.8 eV) material that mixed with ZnO causes the energy band structure to shift. The optimal band structure of MZO can be achieved by adding the right amount of Mg to ZnO. In a previous study an alternative method to optimize the MZO energy band gap as a high resistance transparent (HRT) layer for CdTe solar cells was presented, rather than a buffer layer [8]. By using a single target composition, this method enables widening of the energy band gap by increasing the temperature the MZO film is deposited at. 
In the current study, we have increased the MZO buffer layer deposition temperature to create a favourable MZO/CdTe alignment, thereby maximising the efficiency of CdS-free thin film CdTe solar cells. In the second part of the study, different glass/TCO/MZO combinations were investigated in an effort to further improve device performance. Devices incorporating aluminium-doped zinc oxide (AZO), titanium-doped indium oxide (ITiO) and tin-doped indium oxide (ITO) TCOs have been compared to the FTO substrates (TEC10, Pilkington NSG) which are typically used when fabricating CdTe solar cells. Finally, soda lime glass was substituted with boro-aluminosilicate (Eagle XG, Corning) glass to analyse the impact of using a more transparent substrate on the device output. Experimental ITO, AZO, ITiO and MZO thin films were deposited by radio-frequency (RF) magnetron sputtering. 4 mm thick soda lime glass (SLG) and 1 mm thick boro-aluminosilicate glass were used as substrates. The glass was cleaned using a solution composed of 1/3 isopropanol, 1/3 acetone and 1/3 deionised water in an ultrasonic bath at 50 °C for 60 min. Thin films were deposited using an Orion 8 HV magnetron sputtering system (AJA International, USA) equipped with an AJA 600 series RF power supply. All sputtering targets (ITO: 10% SnO₂ and 90% In₂O₃ wt%; AZO: 0.5% Al₂O₃ and 99.5% ZnO wt%; ITiO: 2% TiO₂ and 98% In₂O₃ wt%; MZO: 11% MgO and 89% ZnO wt%) were 3″ in diameter. The glass substrates were rotated at 10 rpm during deposition to enhance the uniformity of the films. The TCO sputtering process was carried out at a constant power density of 3.95 W cm⁻² and a pressure of 1 mTorr (0.133 Pa) using pure Ar as the process gas. MZO and ZnO films were sputtered in a 1% O₂ in Ar atmosphere at 5 mTorr and a power density of 3.95 W cm⁻². The temperature of the substrates was kept at 450 °C for the deposition of ITO and ITiO, and at 300 °C for AZO films. The deposition temperature for MZO films was varied between 20 °C and 400 °C. The deposition temperature for ZnO films was 20 °C. The FTO substrates used in this study were NSG TEC™ C10 glass (Pilkington). The optical properties were investigated using a Varian Cary 5000 UV-VIS-NIR spectrophotometer. The composition of the films was measured using an X-ray photoelectron spectrometer (XPS) (Thermo Scientific K-alpha). Hall effect measurements were carried out in the Van der Pauw configuration to measure the resistivity, Hall mobility and carrier concentration of the different layers; the Hall effect measurements presented in this work were carried out using an Ecopia HMS-3000 Hall measurement system. The structural properties of the films were analysed by X-ray diffraction (XRD) using a Bruker D2 Phaser desktop X-ray diffractometer equipped with a Cu Kα X-ray gun. The XRD measurements were obtained using 15 rpm rotation, a 1 mm beam slit and a 3 mm anti-scatter plate height. Devices were fabricated in a superstrate configuration on the different TCO/MZO combinations. The CdTe absorber was deposited by close space sublimation (CSS) at a pressure of 1 Torr (133 Pa) in a 6% O₂ in Ar atmosphere, with a CdTe source plate temperature of 630 °C and a substrate temperature of 515 °C, for 2 min. All samples have CdTe films deposited with thicknesses in the range of 4-4.7 µm to rule out the effect of absorber thickness variation on device performance. The spacing between substrate and source plate was set to 2 mm.
The CdCl₂ activation treatment was carried out by thermal evaporation and subsequent annealing. A quartz crucible was loaded with 0.5 g of CdCl₂ pellets, which were evaporated at ~1 × 10⁻⁶ Torr for 20 min. The samples were then annealed on a hot plate at a dwell temperature of 425 °C for 3 min. The dwell temperature was reached using a 22 °C/min ramp rate, bringing the temperature from 25 °C to 425 °C in 18 min, for a total annealing duration of 21 min. Devices were rinsed with DI water to clean the CdTe surface of CdCl₂ and completed with 80 nm gold contacts deposited by thermal evaporation. No intentional copper has been added to these devices. The current density-voltage (J-V) characteristics of devices were determined using a bespoke solar simulator under a simulated AM1.5G spectrum. External quantum efficiency (EQE) measurements were carried out using a PVE300 EQE system (Bentham Instruments Limited, UK) with a 5 nm resolution. Samples for transmission electron microscopy (TEM) were prepared by focused ion beam milling using a dual beam FEI Nova 600 Nanolab. A standard in situ lift-out method was used to prepare cross-sectional samples. An electron-beam-assisted platinum (e-Pt) over-layer was deposited onto the sample surface above the area to be analysed, followed by an ion-assisted layer to define the surface and homogenize the final thinning of the samples down to 100 nm. TEM analysis was carried out using a Tecnai F20 operating at 200 kV to investigate the detailed microstructure of the cell cross sections. Images were obtained using a bright field (BF) detector. Theory Due to the tuneability of the MZO band gap, an MZO buffer layer can enhance the absorber type inversion of thin film CdTe solar cells [7,9]. The type inversion refers to the inversion of the majority/minority carrier densities. Within a thin film solar cell, the CdTe absorber layer is p-type; therefore, within this layer holes are majority carriers and electrons are minority carriers. However, when p-type CdTe is contacted with an n-type semiconductor to create a p-n junction, the resulting electric field can invert this situation such that electrons become majority carriers and holes become minority carriers in the CdTe layer near the buffer/absorber interface. Simulations of thin film heterostructure solar cells show that increasing this absorber inversion can be beneficial for device performance [7,9]. Changing the buffer layer/CdTe band alignment and the buffer layer n-type doping are two ways to affect this device parameter. Fig. 1 shows the simulated energy band diagram and free carrier distribution for two different situations, using the SCAPS-1D software [10]. The first simulation (a) shows the energy band diagram for a CdTe solar cell where the inversion is smaller than in (b). An indicator of the absorber inversion is the energy gap between the CdTe valence band and the Fermi energy at the buffer/CdTe interface, E_p,a,x=0, which is 0.83 eV in (a) and 1.27 eV in (b). The higher E_p,a,x=0 in (b) is due to an increased buffer layer carrier concentration and a change in the conduction band offset from negative (−0.1 eV) to positive (+0.1 eV). From the carrier concentration profiles shown below the energy band structures in Fig. 1 it is clear that the increased E_p,a,x=0 results in an enhanced absorber inversion, where electrons become majority carriers further away from the CdTe/buffer interface.
The interface defect density can be very high compared to the bulk, and limiting interface recombination can be crucial for device performance. The simulated situation (b) is more favourable than (a), since interface recombination is limited by the lack of holes for electrons to recombine with. MZO band gap tuning The MZO band gap depends upon the MgO/ZnO ratio in the film, with a larger band gap achieved with Mg-rich films. The band gap change has been analysed optically [11], by X-ray and UV photoelectron spectroscopy [12], and by density functional theory [13], and is primarily due to an upward shift in the conduction band minimum. In this study, heating of the glass substrate during sputtering of the MZO films has been used to increase the band gap, as reported in our previous study [8] and in a study by Hwang et al. [14]. The estimation of the film band gap, performed graphically using the Tauc plot technique, shows that changing the substrate temperature between 20 °C and 400 °C caused a band gap increase of almost 0.2 eV, from 3.56 eV to 3.73 eV (Fig. 2). X-ray photoelectron spectroscopy was also used to confirm the Zn/Mg ratio of the different films; there is a clear increase in the Mg ratio with deposition temperature, which also corresponds to the band gap increase (Fig. 2). At 5 mTorr, Zn and Mg form a vapour at ≈290 °C and ≈380 °C respectively [15], and so during the deposition it is possible that free Zn is lost from the hot substrate surface, reducing the Zn content significantly at higher deposition temperatures. This at least qualitatively explains the significant difference in Mg content and band gap with increasing deposition temperature. Measuring the exact band alignment between the MZO layer and the CdTe layer is complex, and different results are presented in the literature [7,12]; the conduction band offset (CBO) can also vary depending on the film deposition technique and parameters. Because of this uncertainty it is difficult to precisely estimate the band alignment with CdTe provided by MZO compositions with different band gaps, and it is more effective to analyse the effect of the MZO band gap widening on device performance. The electrical properties of these films could not be analysed because their conductivity was too low to be effectively measured using our Hall effect system. The parameters of the resulting devices are presented in Fig. 3. The performance of devices using ZnO buffers has been included to highlight the influence of Mg. All device parameters clearly improve when Mg is added to the buffer layers. The device efficiency further improves with increasing deposition temperature up to 300 °C. There is a clear trend between the substrate deposition temperature, the MZO film band gap and the device Voc. The Voc is a strong indicator of recombination, suggesting that the increase of the MZO band gap towards a flat or slightly positive CBO with the CdTe layer reduces the interface recombination. At 400 °C the device Voc and efficiency degrade slightly compared with samples with an MZO buffer deposited at 300 °C. With the exception of devices incorporating MZO deposited at room temperature, the current density output decreases with increasing deposition temperature of the MZO layer. The higher Jsc of the MZO device deposited at 100 °C, compared to room temperature, is due to a higher EQE response across all wavelengths (Fig. 4). Further increase of the MZO deposition temperature reduces the EQE response at wavelengths below 600 nm (200 °C).
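For readers unfamiliar with the graphical Tauc procedure mentioned above, the sketch below illustrates a direct-gap extrapolation on synthetic data; the curve shape, prefactor and fitting window are invented stand-ins for the measured (αhν)² spectra.

```python
import numpy as np

# Synthetic direct-gap Tauc analysis; in practice (alpha*h*nu)^2 would come from
# the measured transmission spectra. Eg and the prefactor are invented here so
# the extrapolation can be checked against a known answer.
hv = np.linspace(3.4, 4.0, 300)                     # photon energy, eV
Eg_true = 3.70                                      # assumed band gap, eV
tauc = 5e9 * np.clip(hv - Eg_true, 0.0, None)       # (alpha*h*nu)^2, direct gap

mask = tauc > 0.3 * tauc.max()                      # fit only the steep linear region
slope, intercept = np.polyfit(hv[mask], tauc[mask], 1)
print(f"Eg ~ {-intercept / slope:.2f} eV")          # x-intercept of the linear fit
```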
Subsequently, at higher deposition temperatures (300 °C and 400 °C) the decrease occurs across the whole spectrum. The degradation of photo-generated carrier collection over the full active spectrum of the solar cell could be related to the MZO/CdTe band alignment: the increasingly positive CBO resulting from the increase of the MZO band gap can act as a barrier for the electrons flowing from the CdTe to the MZO layer [17,18]. There is a slight shift in the high-energy absorption edge of the EQE, corresponding to the MZO band gap variation seen in Fig. 2. A more significant shift is shown when ZnO is used as a buffer, since it has a lower band gap than MZO. The lower EQE response in the ultraviolet range of the solar spectrum of samples incorporating ZnO buffers is the cause of their low Jsc. The J-V characteristics indicate that increasing the MZO layer band gap has a positive impact on device Voc and efficiency, up to a band gap of approximately 3.7 eV, in agreement with previous work [5]. Increasing the substrate temperature during deposition was found to be an effective method to tune the MZO band gap. Although it was not possible to estimate the n-type doping of the material, increasing the free carrier concentration of MZO films also has the potential to increase the absorber inversion and device performance. TEM and XRD analysis of the MZO films TEM cross-section imaging of devices shows a thin (100 nm) but uniform MZO layer deposited at 300 °C, separating FTO and CdTe (Fig. 5). The buffer layer uniformity is considered to be an important aspect of achieving high efficiencies, as interruptions or particularly thin buffer layer areas may result in weak diodes, with a consequent degradation of device Voc, FF and efficiency [19]. The crystal structure of MZO films grown on top of the FTO TCO has been investigated by XRD in the 2θ angular range 20-70°. All films exhibited an MZO (002) peak at 34.6° ± 0.2° (Fig. 6). All other peaks observed in Fig. 6 are attributed to the FTO film. This indicates that the MZO films grow as a single phase with the wurtzite structure typical of ZnO, avoiding secondary phases related to the MgO cubic structure (200 and 002 peaks) which can appear at higher MgO/ZnO atomic ratios [20]. The MZO peak is slightly shifted when compared to that of intrinsic ZnO films, also deposited on FTO [21] (ICDD 00-003-0752). This slight peak shift corresponds to a slightly smaller MZO lattice constant c (5.18 Å) compared to ZnO films (5.21 Å). The buffer/absorber lattice mismatch can be an indicator of the quality of the junction that will form between the two, as a larger lattice mismatch can lead to a large number of dislocations and defects at their interface [22,23]. MZO has a large lattice mismatch with the CdTe zinc blende (cubic) structure, which has a lattice constant of 6.48 Å. The mismatch is even larger considering the MZO lattice constant a (3.17 Å, c/a = (8/3)^(1/2) = 1.633 [24]). The lattice mismatch is higher than that between CdS and CdTe (the wurtzite CdS lattice constants are a = 4.136 Å, c = 6.713 Å, ICDD 00-006-0314). Considering this large lattice mismatch, it appears that a favourable band alignment can also mitigate the negative effect of a high defect density at the buffer/absorber interface.
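As a quick arithmetic check (not taken from the paper), Bragg's law applied to the quoted (002) peak position reproduces the stated lattice constant, assuming the standard Cu Kα wavelength of 1.5406 Å:

```python
import math

lam = 1.5406                 # Cu K-alpha wavelength, angstrom (assumed)
two_theta = 34.6             # MZO (002) reflection from Fig. 6, degrees
d002 = lam / (2.0 * math.sin(math.radians(two_theta / 2.0)))  # Bragg: lambda = 2 d sin(theta)
c = 2.0 * d002               # for the (002) planes of a wurtzite cell, d = c/2
print(f"d(002) = {d002:.3f} A -> c = {c:.2f} A")   # ~5.18 A, matching the text
```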
TCO properties In this section, different TCO materials and glass substrates have been used to improve the SLG/FTO/MZO interface. The electrical properties of the investigated TCOs are summarised in Table 1. Each film was deposited to a thickness which produces a sheet resistance, R_sheet, lower than or equal to 10 Ω/sq. ITO had the lowest resistivity due to both a high carrier concentration and a high mobility. AZO exhibited the highest resistivity, although this improves slightly when the AZO film is deposited on BSG. ITiO achieves a low resistivity due to a very high mobility [25]. The transmission (T%) and the 100 − A(%) spectra of each TCO are shown in Fig. 7(a) and (b). The 100 − A(%) spectrum was calculated as 100 − A(%) = T(%) + R(%), where R% is the reflectance of the layer stack. The 100 − A(%) data present the light available through the TCO assuming no reflection losses at the air/glass and TCO/air interfaces during the measurement. Using these data as a comparison is useful because the TCO transmission and reflection spectra are affected by the films subsequently deposited on top of the TCO (in this case the MZO and CdTe), due to the differing refractive indices of these layers; this means that the solar cell structure can be optimised to maximise T% and minimise R%, while the absorption is a characteristic of the glass/TCO only and is an effective measure of the optical quality of the film. The transmittance drop in the near-infrared (NIR) is strongest for ITO, due to the high free carrier absorption caused by the high free carrier density of the film. ITiO does not show free carrier absorption within the wavelength range analysed, due to the low free carrier concentration and thickness. Below the absorption onset of CdTe at 826 nm, ITiO exhibits a very high 100 − A(%). The AZO deposited on BSG also has a high 100 − A(%) within this range in the visible and near-infrared, but a higher absorption in the UV range caused by the lower band gap of ZnO-based TCOs compared to In₂O₃-based TCOs. Comparing the optical properties of AZO films deposited on the different glass substrates, it is clear that BSG glass strongly improves the transparency of the stack, due to a lower iron content than SLG [26]. Also, because of the higher conductivity of the film deposited on BSG, a thinner AZO film is sufficient to obtain a sheet resistance equivalent to that of AZO on SLG. FTO represents a compromise in electrical and optical characteristics compared to the other TCOs in this study. Device performance parameters are presented in Fig. 8. The performance of the different glass/TCO/MZO combinations does not depend on the properties of the TCO only, but also on the interface properties of the window layer stack. AZO TCOs deposited on SLG substrates in combination with MZO buffers produce efficiencies comparable to the FTO baseline. When combined with a thinner BSG substrate, the efficiency is improved, largely through an increased Jsc. Devices incorporating ITiO TCOs yield lower efficiencies, although the TCO has a high conductivity and high transparency. The low efficiencies are primarily a consequence of the low Voc and FF of the devices, while the Jsc is relatively high due to the high transparency of the material. Devices incorporating ITO yield relatively low efficiencies due to poor FF and Jsc. The low current output is due to the high free carrier absorption in the VIS-NIR wavelength range. The EQE of CdTe devices incorporating the different glass/TCO/MZO combinations is shown in Fig. 9 in comparison with the transmission and 100 − A(%) spectra.
The UV absorption edge of devices incorporating the high band gap TCOs (ITO: Eg = 3.85 eV; FTO: Eg = 3.85 eV; ITiO: Eg = 3.93 eV) is shifted to longer wavelengths, presumably by the smaller band gap of the MZO layer, whilst this is not observed when using an AZO TCO, which has a lower band gap than MZO (AZO: Eg = 3.30 eV). Generally, the transmittance spectrum is lower than the EQE. This suggests that more photons reach the absorber layer than implied by the transmission curves of the glass/TCO combination alone, and that the further addition of MZO and CdTe reduces the interfacial reflectance. The 100 − A(%) data, on the other hand, represent the maximum transmission limit. Reducing the gap between this limit and the EQE can be achieved by maximising the photo-generation and extraction efficiencies of charge carriers. This gap can be visualised in Fig. 9 as the red area between the 100 − A(%) curve and the EQE curve. Following this approach, the SLG/AZO devices are the most effective in converting the available light, converting roughly 90% of the available, non-absorbed photons. This percentage was calculated by dividing the device Jsc estimated from the EQE data (Jsc_EQE) by the maximum ideal Jsc (Jsc_max). Jsc_max was calculated as Jsc_max = q ∫ [1 − A(λ)] φ(λ) dλ, where q is the electron charge, A(λ) is the wavelength-dependent film absorption and φ(λ) is the wavelength-dependent photon flux of the AM1.5G spectrum. This calculation assumes that all photons not absorbed by the glass substrate and the TCO will be converted into current by the cell. Jsc_EQE is calculated as Jsc_EQE = q ∫ EQE(λ) φ(λ) dλ, where EQE(λ) is the wavelength-dependent EQE response of the solar cell. Devices with AZO deposited on BSG, although having a higher Jsc and a larger number of available photons, convert a lower fraction of the potentially available photons (86%). FTO-based devices also convert 86% of the photons not absorbed by the TCO, while the fractions for devices with ITO (84%) and especially ITiO (81%) are much lower. ITiO has remarkable opto-electronic properties: the Jsc_max available after the light passes through the BSG/ITiO bilayer, assuming no reflection occurs, is 28.4 mA/cm². This value is close to the maximum available for thin film CdTe solar cells (29.0 mA/cm²), confirming that little parasitic absorption loss takes place within this layer [27]. Understanding the Jsc, Voc and FF losses related to the devices containing this TCO can potentially lead to even higher device efficiencies.
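A minimal numerical version of the two integrals above is sketched below; the photon flux, absorption and EQE arrays are toy placeholders for the tabulated AM1.5G spectrum and the measured curves of Figs. 7 and 9, and a simple Riemann sum on a uniform wavelength grid stands in for whatever integration scheme was actually used.

```python
import numpy as np

q = 1.602176634e-19                       # electron charge, C
wl = np.arange(300.0, 880.0, 5.0)         # wavelength grid, nm
dwl = 5.0                                 # grid spacing, nm

# Placeholder spectra -- the real calculation would use the tabulated AM1.5G
# photon flux and the measured A(lambda) and EQE(lambda):
phi = 3.0e18 * np.ones_like(wl)           # photon flux, photons m^-2 s^-1 nm^-1
A   = 0.05 * np.ones_like(wl)             # toy glass/TCO absorption fraction
eqe = np.clip(0.90 - 5e-4 * np.abs(wl - 550.0), 0.0, None)   # toy EQE curve

# Jsc_max = q * integral of (1 - A) * phi ;  Jsc_EQE = q * integral of EQE * phi
Jsc_max = q * np.sum((1.0 - A) * phi) * dwl * 0.1   # A m^-2 -> mA cm^-2
Jsc_eqe = q * np.sum(eqe * phi) * dwl * 0.1
print(f"Jsc_max = {Jsc_max:.1f} mA/cm^2, Jsc_EQE = {Jsc_eqe:.1f} mA/cm^2, "
      f"converted fraction = {100.0 * Jsc_eqe / Jsc_max:.0f}%")
```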
Conclusions This study focused on the analysis and improvement of the window layer of a thin film CdTe solar cell, including the glass substrate. The band gap of MZO films was widened by increasing the deposition temperature during sputtering. The results suggest that the band gap increase helps create a favourable band alignment between the MZO layer and the CdTe absorber. The MZO films are uniformly deposited on the TCO surface. However, MZO films have a low doping density and a larger lattice mismatch with CdTe compared with CdS, which can introduce a high number of interface defects. The results suggest that a favourable buffer/absorber band alignment positively dominates over the negative effects of the large MZO/CdTe interface lattice mismatch and the low MZO doping density. A number of TCOs were also examined as partners for MZO and as alternatives to FTO. AZO TCOs, when deposited on boro-aluminosilicate glass, yielded better opto-electronic properties and gave the highest Jsc and overall efficiencies. ITiO TCOs showed exceptional opto-electronic properties thanks to the high free carrier mobility and relatively low carrier concentration in the films. However, in this study, the indium-oxide-based TCOs (ITO and ITiO) did not yield efficiencies as high as those of devices including AZO or FTO in combination with MZO. The TCO/buffer interface chemistry and/or the band alignment may play a key role in the functioning of these devices and might explain the different TCO behaviour; however, further investigation is required to improve our understanding of the mechanisms occurring within the window structure of these samples.
On GCD(Φ_N(a^n), Φ_N(b^n)) There has been interest during the last decade in properties of the sequence {gcd(a^n − 1, b^n − 1)}, n = 1, 2, 3, ..., where a, b are fixed (multiplicatively independent) elements in either the rational integers, the polynomials in one variable over the complex numbers, or the polynomials in one variable over a finite field. In the case of the rational integers, Bugeaud, Corvaja and Zannier have obtained an upper bound exp(εn) for any given ε > 0 and all large n, and demonstrate its approximate sharpness by extracting from a paper of Adleman, Pomerance, and Rumely a lower bound exp(exp(c log n / log log n)) for infinitely many n, where c is an absolute constant. The upper bound generalizes immediately to gcd(Φ_N(a^n), Φ_N(b^n)) for any positive integer N, where Φ_N(x) is the Nth cyclotomic polynomial, the preceding being the case N = 1. The lower bound has been generalized in the first author's Ph.D. thesis to N = 2. In this paper we generalize the lower bound to arbitrary N, but under GRH (the generalized Riemann Hypothesis). The analogue of the lower bound result for gcd(a^n − 1, b^n − 1) over F_q[T] was proved by Silverman; we prove a corresponding generalization (without GRH).

1. Introduction

There has been interest in the properties of the sequences {gcd(a^n − 1, b^n − 1)}, n = 1, 2, 3, ..., where a, b are fixed elements in one of Z, C[T], or F_q[T]. Motivated by recurrence sequences and the Hadamard quotient theorem, Bugeaud, Corvaja and Zannier [3] bounded the cancellation in the sequence (b^n − 1)/(a^n − 1) by proving the following upper bound result:

Theorem 1.1. [3] Let a, b be multiplicatively independent positive integers. Then for every ε > 0, log gcd(a^n − 1, b^n − 1) < εn for all sufficiently large n.

They also demonstrated the approximate sharpness of this bound with the following lower bound:

Theorem 1.2. [3] For any two positive integers a, b, there exist infinitely many positive integers n for which log gcd(a^n − 1, b^n − 1) > exp(c log n / log log n), where c is an absolute constant.

The result in [1] from which this is derived is an improvement of a result of Prachar [10]:

Theorem 1.3. [10] Let δ(n) denote the number of divisors of n of the form p − 1, with p prime. Then there exist infinitely many n such that δ(n) > exp(c log n / (log log n)²).

The improvement in [1] (with a similar proof) removes the exponent 2 (and the p − 1 are squarefree):

Theorem 1.4. [1] Let δ(n) denote the number of divisors of n of the form p − 1, with p prime and p − 1 squarefree. Then there exist infinitely many n such that δ(n) > exp(c log n / log log n).

It is interesting to note that in [10], Prachar was motivated by a paper of Nöbauer [9] which dealt with the group of invertible polynomial functions on Z/nZ, and particularly the subgroup of functions of the form x^k, whereas in [1], Adleman, Pomerance and Rumely were motivated by the computation of a lower bound on the running time of a primality testing algorithm. In his Ph.D. thesis [4], the first author tests the robustness of these results and asks what happens to Theorem 1.2 if gcd(a^n − 1, b^n − 1) is replaced by gcd(a^n + 1, b^n + 1) or by gcd(a^n + 1, b^n − 1), and proceeds to prove the analogous results for these sequences, using [1]:

Theorem 1.5. [4] For any two positive nonsquare integers a, b, there exist infinitely many positive integers n for which log gcd(a^n + 1, b^n + 1) > exp(c log n / log log n), where c is a constant depending on a and b. The same result holds for gcd(a^n + 1, b^n − 1).

(The corresponding analogues of Theorem 1.1 follow immediately from x^n ± 1 | x^{2n} − 1.) If one observes that the polynomials x − 1 and x + 1 are the first and second cyclotomic polynomials Φ_N(x), N = 1, 2, we may ask whether Theorems 1.1 and 1.2 also hold for gcd(Φ_N(a^n), Φ_N(b^n)) for any positive integer N, or even for gcd(Φ_M(a^n), Φ_N(b^n)) for suitable positive integers M, N.
For Theorem 1.1, this is immediate from Φ_N(x) | x^N − 1. In this paper we deal with this generalization for Theorems 1.2 (and 1.5). It should be remarked that Corvaja and Zannier have made far-reaching generalizations of Theorem 1.1 in [5], in other directions. In Section 2 we prove the above generalization for gcd(Φ_N(a^n), Φ_N(b^n)) for any positive integer N under the Generalized Riemann Hypothesis (GRH). The explanation for this is that the generalization of Prachar's argument in this situation leads to an application of the effective Chebotarev density theorem to a tower of Galois extensions L_d/Q, where the exceptional zeros of the corresponding zeta functions of the L_d are required to be bounded away from 1 as d goes to infinity. Since we do not know if the exceptional zeros in our tower are bounded away from 1, we apply the stronger GRH version of the effective Chebotarev density theorem, in which there are no exceptional zeros. An additional attempt to avoid GRH using the Bombieri-Vinogradov theorem has so far not been successful. Silverman [12] has proved an analogue of Theorem 1.2 for the global function fields F_q(T):

Theorem 1.6. Let F_q be a finite field and let a(T), b(T) ∈ F_q(T) be nonconstant monic polynomials. Fix any power q^k of q and any congruence class n_0 + q^k Z ∈ Z/q^k Z. Then there is a positive constant c = c(a, b, q^k) > 0 such that deg gcd(a(T)^n − 1, b(T)^n − 1) ≥ cn for infinitely many n ≡ n_0 (mod q^k).

In Section 3 we apply the method of Section 2 to prove (unconditionally) the corresponding cyclotomic generalization of Silverman's theorem.

Acknowledgment. We are grateful to Zeev Rudnick, Ram Murty and Jeff Lagarias for helpful discussions at various stages of the preparation of this paper. We also thank Joe Silverman for helpful comments on the initial draft.

2. The case a, b ∈ Z

Theorem 2.1 (contingent on GRH). Let N be a positive integer, with N = ℓ_1^{s_1} ··· ℓ_r^{s_r}, ℓ_1 < ℓ_2 < ··· < ℓ_r, the factorization of N into primes. Let a, b be positive integers, relatively prime to N, which are not ℓ_i-th powers in Q for i = 1, ..., r. Then there exist infinitely many positive integers n such that log gcd(Φ_N(a^n), Φ_N(b^n)) > exp(c log n / log log n), where c is a positive constant depending only on a, b, N.

Proof. Suppose p is a prime congruent to 1 mod N such that neither a nor b is an ℓ_i-th power mod p for i = 1, ..., r. Suppose also that n is a positive integer prime to N and divisible by (p − 1)/N. Then p | gcd(Φ_N(a^n), Φ_N(b^n)). Indeed, (a^n)^N ≡ 1 (mod p). The orders of a and of a^n mod p are equal and divide N. If a has order less than N, then there is a prime ℓ | N such that a^{N/ℓ} ≡ 1 (mod p), so a^{(p−1)/ℓ} ≡ 1 (mod p), whence a is an ℓ-th power mod p, contrary to hypothesis. The idea of the proof of the theorem, a generalization of the proof in Prachar's paper, is to use the pigeonhole principle to produce, for large x, an n ≤ x² with more than exp(c log x / log log x) divisors of the form (p − 1)/N, p prime, c an absolute constant. The result then follows. Fix 0 < δ < 1. Let x be a positive real number and let K = K_δ(x) be the product of all the primes p ≤ δ log x, p ∤ N. Let A be the set of pairs (m, p), m a positive integer, p a prime, with m ≤ x, p ≤ x, gcd(m, N) = 1, p ≡ 1 (mod N), p ≡ 1 (mod Nℓ_i), i = 1, ..., r, neither a nor b an ℓ_i-th power mod p, i = 1, ..., r, and K | m(p − 1)/N. For d | K, let A'_d = {m ≤ x : gcd(m, N) = 1, (K/d) | m} and A''_d = {p ≤ x : d | (p − 1)/N, p ≡ 1 (mod Nℓ_i), and neither a nor b is an ℓ_i-th power mod p, i = 1, ..., r}. To bound |A'_d × A''_d| from below, it suffices to bound each of |A'_d|, |A''_d| from below and take the product of the two lower bounds.
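The divisibility criterion at the start of the proof is easy to verify numerically. The following sketch is purely illustrative (the choices N = 3, a = 2, b = 5 and the search bound are arbitrary): it finds primes p ≡ 1 (mod N) for which neither a nor b is an ℓ-th power mod p, builds an n prime to N that is divisible by each (p − 1)/N, and checks that every such p divides gcd(Φ_N(a^n), Φ_N(b^n)).

```python
import math
from sympy import primerange, primefactors, cyclotomic_poly, Symbol

N, a, b = 3, 2, 5                      # a, b are not cubes in Q
x = Symbol('x')
Phi_N = cyclotomic_poly(N, x)

def is_lth_power_mod(c, l, p):
    # for p = 1 (mod l): c is an l-th power mod p  <=>  c^((p-1)/l) = 1 (mod p)
    return pow(c, (p - 1) // l, p) == 1

good = [p for p in primerange(5, 400)
        if p % N == 1
        and math.gcd((p - 1) // N, N) == 1          # keeps the lcm below prime to N
        and not any(is_lth_power_mod(c, l, p)
                    for c in (a, b) for l in primefactors(N))]

ps = good[:3]                                       # e.g. [7, 61, 79]
n = math.lcm(*[(p - 1) // N for p in ps])           # (p-1)/N | n for every chosen p
assert math.gcd(n, N) == 1

for p in ps:
    for c in (a, b):
        # Phi_N has integer coefficients, so Phi_N(c^n) = Phi_N(c^n mod p) (mod p)
        assert int(Phi_N.subs(x, pow(c, n, p))) % p == 0
print(f"n = {n}: each p in {ps} divides gcd(Phi_{N}(a^n), Phi_{N}(b^n))")
```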
First, since gcd(d′, N) = 1, the count |A′_d| = #{m ≤ x : gcd(m, N) = 1, d′ | m} is bounded below by a constant multiple, depending only on N, of x/d′.

To bound |A″_d| from below we use the effective form of Chebotarev's density theorem due to Lagarias and Odlyzko [8], as formulated by Serre [11] under the Generalized Riemann Hypothesis (GRH). The condition d | (p - 1)/N is equivalent to p ≡ 1 (mod Nd), which is equivalent to p splitting completely in Q(µ_{Nd}), where µ_n denotes the group of n-th roots of unity. The condition "a is an ℓ-th power mod p" (ℓ prime) is equivalent to the condition "x^ℓ - a has a root mod p", which for p ≡ 1 modulo ℓ is equivalent to the condition "x^ℓ - a splits into linear factors mod p", which is equivalent to the condition that p splits completely in the Galois extension Q(µ_ℓ, a^{1/ℓ}).

Consider the Galois extension F_d/Q generated by these fields, with Galois group G_d, and let C_d ⊆ G_d be the union of conjugacy classes singled out by the splitting and non-splitting conditions above. It follows from the definition of C_d that |A″_d| counts the primes p ≤ x whose Frobenius class in G_d lies in C_d. Here |C_d| = ∏_i (ℓ_i - 1)^{e_i}, where e_i = 2 or 3 according to whether or not a, b are multiplicatively dependent mod ℓ_i-th powers in Q.

Proof: First we look at the case r = 1 (N is a power of ℓ_1) and write ℓ = ℓ_1. We need an elementary observation: let H be the direct product of three cyclic groups of order ℓ, H = U × V × W. In the general case, for i = 1, ..., r, set H_i = U_i × V_i × W_i if a, b are multiplicatively independent mod ℓ_i-th powers in Q, and H_i = U_i × W_i if not. The subgroups whose union we are looking at (in the definition of C_d) can be identified with the subgroups of the form H_1 × ··· × H_{i-1} × U_i × V_i × H_{i+1} × ··· × H_r, or H_1 × ··· × H_{i-1} × V_i × W_i × H_{i+1} × ··· × H_r, or H_1 × ··· × H_{i-1} × U_i × W_i × H_{i+1} × ··· × H_r, for those i for which a, b are multiplicatively independent mod ℓ_i-th powers in Q, and the subgroups H_1 × ··· × H_{i-1} × U_i × H_{i+1} × ··· × H_r or H_1 × ··· × H_{i-1} × W_i × H_{i+1} × ··· × H_r for those i for which a, b are multiplicatively dependent mod ℓ_i-th powers in Q. An element (h_1, ..., h_r) is in the union of these if and only if some h_i lies in one of the listed subgroups of H_i. We conclude that |C_d| = ∏_i (ℓ_i - 1)^{e_i}.

By the effective Chebotarev density theorem cited above, under GRH for the Dedekind zeta function of F_d, and by [11], Prop. 5, p. 128, |A″_d| is bounded below by a constant multiple of (|C_d|/|G_d|) x / log x, up to an error term controlled by the degree and discriminant of F_d, where c_1 is an absolute constant. We now bound this error term by c(N, a, b) x^{δ+ε} (using φ(d) < d < K < x^δ and log x < x^ε for any given ε and sufficiently large x). From this, combining the lower bounds for |A′_d| and |A″_d| over the divisors d of K, it then follows that |A| K / x^2 ≥ exp(c_2 log x / log log x), where ω(K) denotes the number of primes dividing K and enters through the number 2^{ω(K)} of divisors of K. For the last inequality we use [7], 22.2, p. 341, and 22.10, p. 355.

Now the number of positive integers n ≤ x^2 such that K | n is at most x^2/K. Furthermore, for every pair (m, p) ∈ A, m(p - 1)/N is such an n. Therefore there exists an n ≤ x^2 with K | n having at least |A| K / x^2 ≥ exp(c_3 log x / log log x) representations of the form m(p - 1)/N, for x sufficiently large, where c_2, c_3 are absolute constants. For fixed n, distinct representations correspond to distinct primes p, each of which divides gcd(Φ_N(a^n), Φ_N(b^n)) by the first paragraph of the proof. It follows that gcd(Φ_N(a^n), Φ_N(b^n)) is a product of at least exp(c_3 log x / log log x) distinct primes, and the theorem follows.

The same method yields the following generalization:

Theorem 2.2 (contingent on GRH). Let M, N be positive integers, let L be the modulus determined by M and N, and let L = ℓ_1^{s_1} ··· ℓ_r^{s_r}, ℓ_1 < ℓ_2 < ··· < ℓ_r, be the factorization of L into primes. Let a, b be positive integers, relatively prime to L, which are not ℓ_i-th powers in Q for i = 1, ..., r. Then there exist infinitely many positive integers n such that log gcd(Φ_M(a^n), Φ_N(b^n)) > exp(c log n / log log n), where c is a positive constant depending only on a, b, N. The proof is similar to the proof of Theorem 2.1; we omit the details. Also here, the case M = 1, N = 2 was proved unconditionally in [4].

3. The case a(T), b(T) ∈ F_q[T]

In this section we will generalize Silverman's Theorem 1.6 [12]: Let F_q be a finite field and let a(T), b(T) ∈ F_q[T] be nonconstant monic polynomials. Fix any power q^k of q and any congruence class n_0 + q^k Z ∈ Z/q^k Z. Then there is a positive constant c = c(a, b, q^k) > 0 such that deg(gcd(a(T)^n - 1, b(T)^n - 1)) ≥ cn for infinitely many n ≡ n_0 (mod q^k). The generalization will be as in the preceding section, replacing a(T)^n - 1 with Φ_m(a(T)^n) for an arbitrary fixed positive integer m.
The proof will be similar in parts to the proof of Theorem 2.1, but there will be some changes in notation.

Theorem 3.1. Let F_q be a finite field, and let m be a positive integer prime to q, m = ℓ_1^{e_1} ··· ℓ_s^{e_s}, ℓ_1 < ℓ_2 < ··· < ℓ_s, the factorization of m into primes. Let a(T), b(T) ∈ F_q[T] be nonconstant monic polynomials which are not ℓ_i-th powers in F_q[T] for i = 1, ..., s. Fix a power q^k of q, and any congruence class n_0 + q^k Z ∈ Z/q^k Z. Then there is a positive constant c = c(m, a, b, q^k) > 0 such that

deg gcd(Φ_m(a(T)^n), Φ_m(b(T)^n)) ≥ cn

for infinitely many n ≡ n_0 (mod q^k).

Proof. Assume first that (n_0, q) = 1. Choose the smallest positive integer r such that (r, m) = 1 and rmn_0 ≡ -1 (mod q^k). Let Q = q^t, where t ≥ k and q^t ≡ 1 (mod mr) (e.g. t = kφ(mr)). Let n = (Q^N - 1)/(mr), where N is a positive integer. Let π = π(T) be a monic irreducible polynomial of degree N in F_Q[T] not dividing a(T)b(T) (this holds e.g. if deg(π) > deg(a(T)b(T))).

Then, writing a = a(T), b = b(T): π | Φ_m(a^n) if and only if a^n is a primitive m-th root of unity mod π, i.e. a^{nm} ≡ 1 (mod π) and a^{nm/ℓ} ≢ 1 (mod π) for every prime ℓ | m. Substituting n = (Q^N - 1)/(mr), this holds if and only if a^{(Q^N - 1)/r} ≡ 1 (mod π) and a^{(Q^N - 1)/(rℓ)} ≢ 1 (mod π) for all ℓ | m. The first condition holds if and only if there exists A ∈ F_Q[T] such that a ≡ A^r (mod π). For such an A, the second condition is equivalent to A^{(Q^N - 1)/ℓ} ≢ 1 (mod π), which is equivalent to saying that A is not an ℓ-th power mod π, and since (r, ℓ) = 1, this is equivalent to saying that a is not an ℓ-th power mod π. It follows that the two conditions hold together if and only if a is an r-th power mod π and a is not an ℓ-th power mod π for all ℓ dividing m. We conclude that π | Φ_m(a^n) if and only if a is an r-th power mod π and a is not an ℓ-th power mod π for all ℓ dividing m. Similarly, π | Φ_m(b^n) if and only if b is an r-th power mod π and b is not an ℓ-th power mod π for all ℓ dividing m.

To count the number of π dividing gcd(Φ_m(a^n), Φ_m(b^n)), we will use an effective version of Chebotarev's density theorem for global function fields [6], p. 62, Prop. 5.16. For this purpose, let F be the extension of F_{Q^N}(T) generated by r-th roots of a and b, and let E be the extension generated by ℓ-th roots of a and b for the primes ℓ dividing m. Since deg π = N, π splits completely in F_{Q^N}(T). Therefore a and b are r-th powers mod π if and only if π splits completely in F. Furthermore, a and b are not ℓ-th powers mod π for all ℓ dividing m if and only if π does not split completely in F_{Q^N}(T)(a^{1/ℓ}) nor in F_{Q^N}(T)(b^{1/ℓ}) for all ℓ dividing m. Accordingly, proceeding as in Section 2, consider the Galois extension EF/F_Q(T) with Galois group G_N, and let C_N ⊆ G_N be the corresponding union of conjugacy classes. Then π splits completely in F, and π does not split completely in F_{Q^N}(T)(a^{1/ℓ}) nor in F_{Q^N}(T)(b^{1/ℓ}) for all ℓ dividing m, if and only if (π, EF/F_Q(T)) ⊆ C_N.

Now the same counting argument as in the preceding section gives |G_N| = N r^2 ∏_i ℓ_i^{e_i} and |C_N| = ∏_i (ℓ_i - 1)^{e_i}. Applying [6], p. 62, Prop. 5.16 (and observing that a conjugacy class can be replaced by any union of conjugacy classes in that theorem), we get a lower bound of the shape c Q^N / N for the number of such π, for some constant c not depending on N; since each such π has degree N and n = (Q^N - 1)/(mr), this yields deg gcd(Φ_m(a^n), Φ_m(b^n)) ≥ cn. This proves Theorem 3.1 when (n_0, q) = 1. The case (n_0, q) ≠ 1 follows from the case (n_0, q) = 1 as in [12].

As in the previous section, the proof of Theorem 3.1 can be generalized to yield the following:

Theorem 3.2. Let F_q be a finite field, u, v be positive integers, d = gcd(u, v), and assume gcd(u/d, d) = gcd(v/d, d) = 1. Let a = a(T), resp. b = b(T) ∈ F_q[T] be monic nonconstant polynomials which are not ℓ-th powers in F_q[T] for all ℓ | u, resp. ℓ | v. Fix a power q^k of q, and any congruence class n_0 + q^k Z ∈ Z/q^k Z.
Then there is a positive constant c = c(u, v, a, b, q^k) > 0 such that deg gcd(Φ_u(a(T)^n), Φ_v(b(T)^n)) ≥ cn for infinitely many n ≡ n_0 (mod q^k). The details are omitted.
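For reference, the GRH form of the effective Chebotarev density theorem invoked in Section 2 has, in Serre's formulation, roughly the following shape. This is our paraphrase, not a quotation; the notation n_L, d_L is assumed here, and the precise constants and normalizations should be checked against [11].

```latex
% L/Q Galois with group G; C a subset of G stable under conjugation;
% n_L = [L:Q]; d_L = |disc(L/Q)|; pi_C(x) = #{p <= x, unramified in L,
% whose Frobenius conjugacy class lies in C}. Under GRH for zeta_L:
\[
  \left|\, \pi_C(x) \;-\; \frac{|C|}{|G|}\,\operatorname{Li}(x) \right|
  \;\le\; c_1\, \frac{|C|}{|G|}\, x^{1/2}\bigl(\log d_L + n_L \log x\bigr),
\]
% with c_1 an absolute constant. Taking L = F_d and C = C_d, the main
% term yields the lower bound for |A''_d| once the error term is
% bounded by c(N, a, b) x^{delta + epsilon}, as in the proof above.
```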
Spectral Cytopathology of Cervical Samples: Detecting Cellular Abnormalities in Cytologically Normal Cells

Aim
Spectral Cytopathology (SCP) is a novel spectroscopic method for objective and unsupervised classification of individual exfoliated cells. The limitations of conventional cytopathology are well-recognized within the pathology community. In SCP, cellular differentiation is made by observing molecular changes in the nucleus and the cytoplasm, which may or may not produce morphological changes detectable by conventional cytopathology. This proof of concept study demonstrates SCP's potential as an enhancing tool for cytopathologists by aiding in the accurate and reproducible diagnosis of cells in all states of disease.

Method
Infrared spectra are collected from cervical cells deposited onto reflectively coated glass slides. Each cell has a corresponding infrared spectrum that describes its unique biochemical composition. Spectral data are processed and analyzed by an unsupervised chemometric algorithm, Principal Component Analysis (PCA).

Results
In this blind study, cervical samples are classified by analyzing the spectra of morphologically normal looking squamous cells from normal samples and samples diagnosed by conventional cytopathology with low grade squamous intraepithelial lesions (LSIL). SCP discriminated cytopathological diagnoses amongst twelve different cervical samples with a high degree of specificity and sensitivity. SCP also correlated two samples with abnormal spectral changes: these samples had a normal cytopathological diagnosis but had a history of abnormal cervical cytology. The spectral changes observed in the morphologically normal looking cells are most likely due to an infection with human papillomavirus, HPV. HPV DNA testing was conducted on five additional samples, and SCP accurately differentiated these samples by their HPV status.

Conclusions
SCP tracks biochemical variations in cells that are consistent with the onset of disease. HPV has been implicated as the cause of these changes detected spectroscopically. SCP does not depend on identifying the sparse number of morphologically abnormal cells within a large sample in order to make an accurate classification, as does conventional cytopathology. These findings suggest that the detection of cellular biochemical variations by SCP can serve as a new enhancing screening method that can identify earlier stages of disease.

The prevalence of cervical cancer decreased 75 percent between 1955 and 1992 as a direct result of the cervical cancer screening methods introduced by Papanicolaou. However, the American Cancer Society estimates that 11 270 women will be diagnosed with invasive cervical cancer in 2009. (3) A review by the U.S. Department of Health and Human Services' Agency for Healthcare Research and Quality (AHRQ) that evaluated cervical cytology found the specificity of Pap smear screening to be 0.98 (95% CI 0.97-0.99) and the sensitivity to be 0.51 (95% CI 0.37-0.66). (4) The low accuracy due to false negative and false positive results is widely recognized and acknowledged within the pathology community. False negative results are primarily due to a low number of abnormal cells in a sample, but may also be the result of poor sample collection; a gradual transition of cells from a normal morphology to an abnormal morphology; biological contaminants such as blood and infectious organisms; sample preparation; and the inherent subjectivity of the diagnosis.
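To make these operating characteristics concrete before discussing their consequences: with a sensitivity of 0.51 and a specificity of 0.98, the predictive values of a single screening test depend strongly on disease prevalence. The short sketch below is our illustration, not part of the study, and the 5% prevalence figure is an assumed example value.

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """Positive and negative predictive values from Bayes' rule."""
    tp = sens * prevalence              # true positives
    fn = (1 - sens) * prevalence        # false negatives (missed disease)
    fp = (1 - spec) * (1 - prevalence)  # false positives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# AHRQ figures for Pap smear screening; the prevalence is hypothetical.
ppv, npv = predictive_values(sens=0.51, spec=0.98, prevalence=0.05)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # roughly 0.57 and 0.97
```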
Consequences of false negative test results are compounded by low rates of interobserver reproducibility in accurately grading dysplasia of the uterine cervix. (5,6) These results affect patient care, and frequently require follow-up testing in terms of another Pap test and, in the case of ASCUS (atypical squamous cells of undetermined significance), an HPV DNA test. Current research has shown that combined Pap screening and HPV DNA testing may offer an improvement in sensitivity to over 99%. Although HPV DNA testing is the state of the art in terms of treating and tracking the course of the disease in infected women, co-testing would not be cost-effective. (7)

These diagnostic challenges are not limited to cervical smears alone; cytological diagnoses from other body sites have equal or worse sensitivity. In a study of the accuracy of urinary cytology in daily practice, it was found that the diagnostic sensitivity for high grade transitional cell neoplasms was roughly 79%, whereas the sensitivity for low grade transitional cell neoplasms was about 26%. (8) The accuracy of clinical diagnosis was also studied in interstitial lung diseases for patients who underwent surgical lung biopsy; in these cases the diagnoses were 62% accurate. (9) The accuracy rate is dependent on the presence of diagnostic morphological features and on adequate tissue samples.

In order to overcome some of the inherent obstacles of conventional cytopathology, a significant amount of research has been focused on applying SCP to procure a more objective diagnosis of cells. The aim of these studies is to develop a synergistic methodology using SCP that would assist cytopathologists in improving the overall accuracy of cytological diagnoses via collaborative work between bio-spectroscopists and cytopathologists. The work by Wong in the early 1990s claimed that IR-MSP could differentiate between normal, dysplastic, and cancerous cervical cells in pellets based on decreased glycogen peaks and increased symmetric and anti-symmetric phosphate stretching intensities. (10)(11)(12) However, follow-up studies undertaken by other groups indicated that the spectral changes observed by Wong were not related to the molecular composition of dysplastic cells, but to confounding contributions made by different cell types present within a smear. Benign variations such as inflammation, metaplasia, the ratio of non-dividing to dividing cells, and the overall divisional activity of the cells will also dramatically change the IR spectrum collected. (13)(14)(15)(16)(17)(18)(19) As these problems were recognized, it also became apparent that other contaminants may affect the spectra, including blood, mucus, micro-organisms and semen. (20)

Today, advances in instrumentation technology permit the collection of a spectrum from an individual cell, as opposed to cells in pellet form. In 2006, initial studies explored the spectral variance amongst individual cells from homogeneous cell samples. (21,22) A model system using individual cells from canines was employed to examine effects of cellular maturation. Homogeneous squamous cells were exfoliated from the cervix of both estrus and non-estrus dogs. Unsupervised statistical analysis using PCA showed distinct separation between these two states of maturation. These observed differences are due to hormones that initiate cell maturation when canines are in estrus. (21)
The aim of this research paper is to establish a primary proof of concept for SCP as a potential diagnostic tool for cytopathologists evaluating gynecological samples. Although hundreds of samples have been analyzed by SCP to date, only a small number of samples with sufficient patient background information were incorporated into this preliminary study. However, over 3 000 cells from 17 patient samples were included, a dataset large enough for statistically meaningful analysis. The method of analysis, PCA, is not a diagnostic algorithm, but an unsupervised classification algorithm that does not require training or validation data: PCA simply identifies quantifiable spectral differences and classifies the data accurately without user input. Therefore, in terms of this proof of concept study with a limited number of patient samples, PCA is a suitable classification algorithm.

SCP is unique in that it takes a snapshot of a cell's biochemical composition. This snapshot encompasses all the biochemical processes in the cell at the time of exfoliation. This includes, but is not limited to, any defining hormonal influences or inherent viral infections, which may not be reflected in the cell's morphology. The cervical samples used in the SCP analysis were correlated with cytopathological interpretations. Non-diagnostic, or normal looking, cells were selected from samples with a cytological diagnosis of low grade dysplasia. We show that SCP has the sensitivity required to differentiate these cells from normal cells of healthy patients based on the infrared analysis of their biochemical composition. Additional DNA testing was performed on 5 of the samples, diagnosed with normal cytopathology and LSIL, to confirm the presence of any high risk HPV (hrHPV) strains. Morphologically normal cells of hrHPV+ samples (1 LSIL sample, 1 normal cytology sample) were differentiated from cells of cytologically normal hrHPV− samples by PCA. This study parallels oral cytology studies in our laboratory that distinguish cells of a normal sample from those of a sample diagnosed as reactive or cancerous. (23)

Sample Preparation
All cervical samples were obtained in collaboration with the Cytopathology Division of the Pathology Department at Tufts Medical Center [Boston, MA USA] after routine testing and follow-up had been performed. Samples on cytological brushes were preserved in SurePath® solution [Burlington, NC USA], and adequate cellularity remained for SCP analysis. Subsequently, cells were vortexed off the brushes, filtered to remove debris, and deposited onto "low-e" microscope slides [Kevley Technologies, Chesterland, OH USA] using cytocentrifugation [CytoSpin, Thermo, Waltham, MA USA]. Ethical approval for this study was provided by a local institutional review board (IRB), and the work was supported by the National Institutes of Health (NIH).

Data Collection
Infrared spectral data were collected from a 4 mm × 4 mm area of the sample deposited on a low-e slide in imaging mode using one of two Perkin Elmer Spectrum One/Spotlight 400 imaging IR micro-spectrometers [Sheldon, CT USA] in the Laboratory for Spectral Diagnosis at Northeastern University. The instrument optical bench, the infrared microscope, and an external microscope enclosure box were purged with a continuous stream of dry air (−40°C dew point). Inside the purge chamber, the relative humidity is below the limits of detectability using standard commercial hygrometers (< 5% relative humidity).
The following data acquisition parameters were used: 4 cm−1 spectral resolution, 6.25 × 6.25 μm pixel size, Norton-Beer apodization, 1 level of zero-filling, no atmospheric background correction. Two co-added interferograms for each pixel were Fourier transformed to yield spectral vectors, each covering the 4000-700 cm−1 range at 2 cm−1 intervals. Background spectra for all 16 detector elements were collected using 128 co-added interferograms. Data were collected in reflectance mode. Raw datasets consist of 409,600 spectra, occupy ca. 2.54 GByte each, and are stored in native instrument data format (.fsm).

Image Processing
Raw data sets from the infrared micro-spectrometers were imported into a program developed in the authors' laboratory and referred to as PapMap. (24) This program is written in 64-bit MATLAB [The Mathworks, Natick, MA USA] in order to accommodate the large data matrices. PapMap reconstructs the spectra of individual cells collected in mapping mode from between 9 and 100 individual pixel spectra for each cell area (corresponding to cells with a diameter between ca. 19 and 63 μm). To this end, PapMap first establishes which pixel spectra belong to a given cell. This is accomplished by constructing a binary mask in which contiguous regions belonging to individual cells are identified. Such a binary mask is shown in Figure 1. The larger white areas, corresponding to squamous cells, may consist of up to 100 individual pixels in area, whereas the smallest white areas may contain as few as 9 pixels in area. Cell clumps are eliminated since they occupy areas larger than 100 pixels. This mask is established by defining a threshold for the amide I intensity (1650 cm−1), which is a specific signature of protein abundance (see below). For each contiguous area occupied by a cell (i.e. the white pixels), the cellular spectrum is calculated, starting from the spectrum with the largest amide I intensity. This spectrum is presumably from the nucleus of the cell, which always exhibits the strongest protein intensity. Subsequently, all spectra identified by the binary mask as being associated with a cell are co-added, subject to several constraints, in order to prevent very weak spectra with poor signal-to-noise from contaminating the cell spectrum, and to prevent spectra from the edges of a cell, which may be contaminated by scattering, (25)(26)(27) from being co-added. The co-added cellular spectra, as well as the coordinates of each cell, are exported for further data analysis.

After infrared data collection, the cells on a slide are manually stained using standard cytological stain combinations: Protocol OG6 [Fisher Scientific, Kalamazoo, MI USA]; EA-50 [Surgipath Medical Industries, Richmond, IL USA]; Hematoxylin 1, Clarifier 1, and Bluing [Richard-Allan Scientific, Kalamazoo, MI USA]. Tap water and solutions of ethanol are used in the washing steps. Finally, to avoid degradation, slides are dipped in xylene and cover-slipped for cytological analysis. Next, visual images at 40x magnification of each stained cell are collected at the coordinates indicated by the PapMap algorithm, using an Olympus BX40 microscope fitted with a computer-controlled stage and a QImaging GO3 3MB digital color camera. The images and cellular spectra are linked and stored in a database for easy identification. The cell images are diagnosed by a cytopathologist, and the resulting medical diagnosis is correlated to spectral and cytologic data.
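The masking and co-adding logic of PapMap is described above but not shown. The following is a minimal sketch of that approach in Python rather than the authors' 64-bit MATLAB implementation; the threshold value, array shapes and function names are our assumptions, and the additional signal-to-noise and edge-rejection constraints are omitted.

```python
import numpy as np
from scipy.ndimage import label

def cell_spectra(cube, wavenumbers, amide1=1650.0, threshold=0.1,
                 min_px=9, max_px=100):
    """Reconstruct per-cell spectra from an IR image cube.

    cube: (rows, cols, n_points) absorbance data; returns one co-added
    spectrum per accepted contiguous cell region.
    """
    # Binary mask: pixels whose amide I intensity exceeds the threshold.
    idx = int(np.argmin(np.abs(np.asarray(wavenumbers) - amide1)))
    mask = cube[:, :, idx] > threshold

    # Contiguous "white" regions of the mask are candidate cells.
    labels, n_regions = label(mask)
    spectra = []
    for k in range(1, n_regions + 1):
        region = labels == k
        n_px = int(region.sum())
        if n_px < min_px or n_px > max_px:  # reject debris and cell clumps
            continue
        spectra.append(cube[region].sum(axis=0))  # co-add pixel spectra
    return spectra
```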
Data Analysis
Data were pre-processed as follows: the spectral range for analysis of cervical cells was restricted to 3100-2800 and 1700-1200 cm−1, since the 2800-1700 cm−1 region is devoid of spectral information. Subsequently, second derivatives of the spectral intensities versus wavenumber were calculated using a 9-point sliding window, (28) and vector normalized. The spectral range from 1700-1200 cm−1 was chosen for this analysis because it eliminated confounding effects caused by the glycogen contribution. Glycogen exhibits a triad of specific peaks between 1250 and 1000 cm−1, and could be used to follow cyclic changes. (29,30) However, variations in glycogen abundance in cells can be caused by a number of conditions, and it is difficult to correlate these causes in a diagnostic application. The 1700-1480 cm−1 spectral region contains two dominant protein peaks, known as the amide I (1650 cm−1, C=O stretching) and amide II (~1550 cm−1, C-N stretching and N-H deformation) vibrations of the primary protein structure. Changes in these peaks, with regard to band shape, position, and appearance of shoulders, are due to overall changes in the abundance of specific proteins. The region between 1480-1200 cm−1 contains spectral bands related to other biochemical components, including DNA and RNA, phosphates and phospholipids. An increase in intensity of the RNA band may be indicative of a viral infection due to its high replication rate. The 3100-2800 cm−1 region contains the C-H stretching vibrations of mainly lipids and phospholipids.

At this point, Principal Component Analysis (PCA), an unsupervised method of multivariate analysis, was performed. The term 'unsupervised analysis' means that training data and validation data are not needed in order to accurately classify data. For a detailed explanation of PCA, the reader is referred to Adams. (31) PCA was carried out using the PLS Toolbox 4.02 [Eigenvector Research, Wenatchee, WA] in MATLAB. The results from PCA are presented in the form of 'scores plots', in which the spectrum of each cell is represented by a dot in a coordinate system that indicates the contributions of principal components (PCs) 2 and 3 to the spectrum of the cell. The PCs are obtained from the eigenvectors of the correlation matrix of the dataset, and represent a totally unbiased decomposition of cellular spectra. Although this method is not suitable for diagnostic purposes, we use it at this stage to determine whether or not there are systematic changes in cellular spectra and to classify these spectral changes.

Sample Selection
The cyclic fluctuations of estrogen and progesterone, the hormones influencing the proliferation and differentiation of the cervical epithelium, cause inevitable biochemical effects which make diagnosis by SCP more difficult, since the phase of the menstrual cycle has to be considered. Therefore, all cervical samples used in this study were from patients who were taking hormonal contraceptives. Hormonal contraceptives prevent pregnancy by maintaining the patient's hormone levels at those of the ovulation-luteal phase of a typical cycle. For this study, it is assumed that all patients have a similar estrogen and progesterone profile.
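The pre-processing chain described under Data Analysis (window restriction, 9-point second derivative, vector normalization, PCA) can be sketched as follows. This is our illustration, not the published MATLAB/PLS Toolbox code; in particular, the Savitzky-Golay polynomial order is an assumption, since the paper specifies only the 9-point window.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def preprocess(spectra, wavenumbers):
    """spectra: (n_cells, n_points) absorbances on a common wavenumber axis."""
    wn = np.asarray(wavenumbers)
    # Restrict to the informative windows: 3100-2800 and 1700-1200 cm-1.
    keep = ((wn >= 2800) & (wn <= 3100)) | ((wn >= 1200) & (wn <= 1700))
    x = np.asarray(spectra)[:, keep]
    # Second derivative over a 9-point sliding window.
    d2 = savgol_filter(x, window_length=9, polyorder=3, deriv=2, axis=1)
    # Vector (L2) normalization of each cell spectrum.
    return d2 / np.linalg.norm(d2, axis=1, keepdims=True)

# Scores for a PC2-vs-PC3 plot as in the paper's figures:
# scores = PCA(n_components=3).fit_transform(preprocess(spectra, wn))[:, 1:3]
```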
Superficial squamous cells from five samples diagnosed by a cytopathologist as "negative for intraepithelial lesion or malignancy" (referred to as normal for the remainder of this paper) and from five samples diagnosed by a cytopathologist with "epithelial cell abnormality: low grade squamous intraepithelial lesion (LSIL), encompassing HPV" were scrutinized by SCP. Two additional samples that had a normal cytopathological diagnosis at the time of sample collection, but had a recent history of cervical disease, were also included. Since data were acquired for unstained cells, both morphologically normal and abnormal cells are included in the datasets from LSIL samples. However, PCA was executed in two steps: first, normal cells from normal samples were compared to morphologically normal looking cells from LSIL samples (Figure 2); second, cells with morphological abnormalities were compared to morphologically normal looking cells from LSIL samples (Figure 3).

An additional study evaluated the presence of hrHPV strains (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68) by performing a DNA test using the HPV Hybrid Capture technique (Qiagen Inc., Valencia, CA USA). Five samples were included in this study (all on hormonal contraceptives): four had a normal cytopathological diagnosis, and one was diagnosed with low grade dysplasia. Three of the normal samples were hrHPV negative. The dysplastic sample and one of the normal samples were hrHPV positive. Only morphologically normal looking cells were analyzed. Figure 4 displays the PCA results.

The authors acknowledge the low number of samples in this proof of concept study. This paper reports partial results from a much larger study involving well over 100 patients; however, only those patients for whom the hormonal status was known accurately were included. Thus the 3 000 cellular spectra from 17 patient samples represent a dataset which is devoid of confounding hormonal influences. In the first part of the study (Figures 2 and 3), we demonstrate the spectral differentiation between truly normal cells from patients diagnosed as normal and morphologically normal looking cells from patients diagnosed with LSIL or a history of LSIL. In the second part of the study (Figure 4), we provide evidence that the spectral differences between healthy samples and diseased samples are due to the presence of hrHPV. The same trends were observed in a study of oral cells submitted previously. (23)

Results
Figure 2A depicts a PCA-based scores plot which blindly differentiates the majority of cells into two classes: cells from normal samples (blue) and morphologically normal looking cells from abnormal samples (red). In addition, two samples, shown in yellow, were diagnosed by the cytopathologist as normal, but had a recent history of abnormal cervical cytology. SCP classified both of these samples with the abnormal samples. Table 1 outlines the original cytological diagnosis as well as the number of individual cells used from each sample in the SCP analysis. Representative cells from each classification (normal, morphologically normal looking cells from abnormal samples, and samples with a history of abnormal Pap test results) are depicted in Figures 2B-D, respectively. Figure 2E depicts the mean cellular spectral changes of the second derivative spectra between the normal samples and the abnormal samples. The most notable spectral changes are in the amide I and amide II manifolds.
The term "manifold" is used here since these bands are superpositions of the protein spectral features of hundreds of cellular proteins, and therefore, do not represent the vibrations of one, but multiple proteins. Small variations in the peak height ratio for the amide I and amide II bands are apparent. More significantly, there is a discernable change in the ratio of the amide I and its low frequency component at 1624 cm −1 . These differences may be interpreted in terms of a change in the proteome of the cells. Changes in band shape and peak position are also apparent in the rest of the spectral region analyzed. This figure is the first report of biochemical compositional changes between morphologically normal looking cells from normal and diseased samples. Excellent spectral quality, and the averaging of hundreds of individual cells, allow for the characterization of these spectral differences, which are directly related to the discrimination of classes shown in Figure 2A. Conventional cytopathology requires the identification of abnormal cells, which may be few, within a sample of thousands of cells, in order to make a diagnosis. SCP does not rely on the chance of picking out diagnostic sparse cells. In fact, the infrared spectral signatures for cells exhibiting morphological abnormalities and normal looking cells from the same abnormal samples, are nearly indistinguishable, but quite different from those of normal cells from healthy patients. Figure 3A The PCA scores plot in Figure 4 shows for the first time that there are quantifiable differences between hrHPV infected and hrHPV negative cells, even when analyzing only normal looking cells. The three hrHPV− samples with corresponding normal cytology (blue circles) cluster away from the two hrHPV+ samples. One of these hrHPV+ samples had a normal cytological diagnosis with no known history of cervical disease. The most notable spectral changes shown in Figure 4B are in the amide I and amide II regions; however, there are some distinct changes in the DNA, RNA and phosphate bands as well as in the C-H stretching region. Discussion In the first part of this preliminary study, SCP was used to successfully differentiate cytologically indistinguishable squamous cells from normal samples and LSIL samples, and demonstrate how cells exhibiting abnormal patterns are not necessary for SCP an accurate diagnosis. In the second part of this study, SCP distinguished spectral differences between cells based on hrHPV status. Studies have shown that HPV DNA can be found in 99.7% of all cervical cancers,(32) and it has been proven that infection of high-risk HPV is a necessary prerequisit for the development of cervical cancer.(33) Cervical dysplasia is commonly an early manifestation of HPV infection, typically appearing 4-24 months post exposure. Invasive cervical cancer is a rare and late manifestation of HPV infection, with an average detection period of 10-20, even 30 years. (34) The PCA scores plot in Figure 2A undeniably illustrates the power of SCP, since there is a strong separation of the red and blue classes along PC2. PC2 is directly related to the the most significant spectral differences between normal healthy cells and morphologically normal looking cells from abnormal samples: the dramatic increase of the amide I low frequency component and the decrease in the amide I to amide II peak height ratio, shown in Figure 2E. 
The source of these changes in the protein composition may be the result of a viral infection, HPV, or of other, disease-specific changes in protein content. These samples were not tested for HPV DNA, but there is strong evidence that cervical abnormalities are due to viral infection. Similar results have been observed in oral SCP studies of patients infected with the herpes simplex virus: although the majority of the cells are morphologically normal, a progression of disease can be observed, ranging from healthy normal cells, to morphologically normal infected cells, and finally, cells infected with the herpes simplex virus. (23) Although the cells from the abnormal samples used in this analysis were morphologically normal, the majority of them showed abnormal spectral changes by SCP. Some morphologically normal looking cells did co-cluster with the normal cells in Figure 2A. This is expected, since all cells may not show the same biochemical changes.

The biochemistry of the cells in the samples with a history of cellular abnormality (yellow) in Figure 2A deserves further investigation. These samples were initially diagnosed as normal by the cytopathologist, but classified with the abnormals by SCP analysis. With a clinical history of previously abnormal Pap tests, it is possible that these samples are still infected with HPV. Studies have shown that some HPV infections are cleared by the body's immune system within two years. (3) In reported cases where a lesion was eventually detected during morphological analysis, an HPV DNA test had already been positive at a previous, cytologically normal Pap screening test; the HPV DNA remained positive in one or more follow-up Pap screening tests while the cytological evaluation was normal. (34)(35)(36)(37)(38)(39) Therefore, the SCP classification is most likely not a false-positive in these two instances, but rather a confirmation of the sensitivity of the technique in detecting a latent infection.

In general, it is not the actual infection with HPV, but rather a persistent infection by the virus, that is risky. HPV infects the basal layer cells of immature metaplastic epithelium in the transformation zone. HPV can remain in these cells as low copy episomal viral genomes (50-100 per cell), which are replicated only once per cycle. Therefore, these infected cells can provide a reservoir of virus in morphologically normal looking cells. Normally, the virus is cleared in a year or two, but it can persist in this stable episomal state for extended periods of time. This is a non-productive or latent infection. (40) In the two cases with a history of abnormal Pap test results, SCP has shown its potential in monitoring such latent infections, which could possibly lead to cervical cancer if the infected cells are not shed from the cervical epithelium.

Figure 3 illustrates that the cells which show morphological patterns of abnormality exhibit the same spectral patterns as the cells with normal morphology from abnormal samples. SCP reaches the same diagnosis as the cytopathologist without the presence of any diagnostic cells and using only unstained cells. Only eight cells from two samples diagnosed with LSIL were identified as reactive or atypical; there were no characteristic low grade dysplastic cells identified in any of the samples, although such cells would have been present in the original sample screened by the cytopathologist. In Figure 3A, the eight morphologically abnormal cells were analyzed in conjunction with the morphologically normal looking cells from the same two samples.
The scores plot shows co-clustering of the two groups; therefore, there are no statistically significant spectral differences between these two groups. The drastic differences in morphology are shown in Figures 3B and C. This analysis provides strong evidence that SCP can accurately classify samples without the cells exhibiting the morphological changes that conventional cytopathology requires. SCP would not only improve the sensitivity of the Pap test, but potentially also improve patient care.

In another blind test, PCA accurately differentiated cells based on hrHPV status (Figure 4A). By performing additional DNA testing on five other samples, we were able to confidently attribute the spectral changes to the presence of hrHPV strains. Four of the samples precisely correlated with their cytological diagnosis; however, one sample with a normal cytological diagnosis, with no known history of cervical disease, was hrHPV positive. This sample clustered with the other hrHPV positive sample diagnosed with low grade dysplasia. This result is not only important to show that SCP can distinguish HPV infection, but also demonstrates that SCP has the potential to provide more diagnostic information than standard cytology alone. The most notable spectral changes observed in Figure 4B are in the protein amide I and amide II bands (1670 to 1610 and 1550 to 1500 cm−1, respectively), which most likely can be attributed to a degradation of cellular proteins and an increased production of viral proteins. Changes in the DNA and RNA regions are due to the high replication rate of the viral genome.

This proof of concept study shows that Spectral Cytopathology can be used successfully to detect abnormalities in cervical cells that are morphologically indistinguishable from normal. Samples were used from patients who were taking hormonal contraceptives in order to eliminate variance due to the menstrual cycle. SCP reveals intrinsic spectral changes in the biochemical composition that differentiate cells from normal samples and morphologically normal looking cells from LSIL samples. SCP does not require the identification of a few abnormal cells in order to classify cells precisely, but rather detects biochemical compositional changes in cells that still exhibit normal morphology. Although HPV DNA testing is currently the state of the art for diagnostic cytopathology, SCP blindly differentiated cells based on their individual hrHPV status. In conclusion, there is substantial evidence that SCP has the potential to provide an objective, unsupervised and unbiased method of detecting and classifying exfoliated cells as a novel tool in diagnostic cytopathology.

Future Directions
Studies including a wider patient population are currently underway for patients not taking hormonal contraceptives and for samples diagnosed with more progressive diseases, mainly high grade dysplasia and carcinoma in situ, as well as ASCUS.

Figure 2. (A) PCA scores plot; see Table 1 for sample details. Each symbol represents the spectrum of an individual cell. There is a separation of the blue and red classes along PC2. The cellular spectra from samples of patients with a history of abnormal cervical cytology (yellow) co-cluster with the abnormal samples (red). All cells were morphologically normal, as shown in (B-D), representative high-resolution 40x images of cells from each classification.
(E) Mean second derivative vector normalized spectra of the normal and abnormal classes.

Figure 4. (A) PCA scores plot of 5 samples which were tested for hrHPV DNA: 3 samples with normal cytology were hrHPV negative (blue circles), 1 sample with normal cytology was hrHPV positive (pink triangles), and 1 sample was diagnosed by cytology with low grade dysplasia and was hrHPV positive (red squares). The samples differentiate along PC2 based on their HPV status. (B) Mean second derivative vector normalized spectra for each class for the spectral ranges 3100-2800 and 1700-1200 cm−1.
Modern social welfare in the light of the sustainability model

The paper presents an analysis of the interaction between social welfare and sustainable development. The aim of this paper is to show that social development mostly depends on the community values that form the pathway of social movement. It is shown that mankind can influence the future by choosing the optimum way of its development. Therefore, it is necessary to appreciate the personal aspects of sociogenesis and the mechanism of its functioning, and the differences between social and natural dynamics. From the authors' viewpoint, a philosophical understanding of sustainability by means of welfare, as a regulation mechanism, is one approach to the study of social life and social development. A model of socio-practical man's existence, mostly oriented towards the satisfaction of needs, helps to analyze the relationship between the categories under review, all the more so as the ordinary intake of consumer amenities is transformed into an instrument for the construction of social identity and the sociocultural integration of the individual with society. Social welfare is presented as a multiple-factor construct represented by a synthesis of cause and effect. Explication technique, a hermeneutical approach, and comparative study are used to clarify the basic notions of this research.

Introduction
The relevance of scientific interest in the problem of social welfare is currently conditioned by a shift of paradigms and principles of quality-of-living assessment in terms of the increasing instability of the modern world. Different research trends are unified by integrated factors and by economic, socio-political and moral guides to social development. The modern approach implies a search for a balance between the dominant economic and the ignored cultural determinant approaches. In other words, it is a search for synthesis and an integrated solution of contemporary problems, rather than separation or the superseding of one by another.

Alvin Toffler is one of the first futurists who attracted attention to the problem of social welfare and sustainability in his study of a survival strategy in which 'the response to a future shock is not a stability but a change'. At that, the development of the scientific scope 'threatens with the change of the production not only how but why' (Toffler, 2002).

In this context, the concept of welfare is directly connected with the idea of sustainable development, oriented towards the choice of solutions which create optimum proportions and equal opportunities for present and future generations, judging from the safe character of measures taken for the prospective existence of mankind. Helen Clark, the convenor of the United Nations Conference on Sustainable Development, also known as Rio+20, noted that 'justice, dignity, happiness, sustainability are significant for our life, however, absent in the GDP' (Rio+20, 2012).

Whitehead stated that 'civilization must regulate relational connections between people and the surrounding world in a way that will provide phenomena in which the dominant is the imperative harmony of stable things' (Whitehead, 1990). Is it possible to count welfare among such stable things?
To answer this, let us turn to the problem of survival. In order to survive, a society must solve at least three problems: to satisfy man's needs and develop production; to provide for the recovery of utilized resources; and to pursue a policy of prevention of, and protection against, socioeconomic turbulence from the inside. These problems can be solved in terms of the interconnection between welfare and sustainability, which act as basic values and ideas of human society.

Notions of welfare and sustainability
Aristotle was one of the first philosophers to notice the diversity of understandings of the term welfare, stipulated, first of all, by its multiple-factor construct represented by a synthesis of cause and effect. What is the relation, and what are the cause and the effect, of welfare? The cause of welfare lies in the necessity of overcoming a need for something, i.e. the satisfaction of needs, whereas the effect is saturation and satisfaction from accomplishment. This interconnection is traced at the etymological level of the broad conceptual construct relating to the term welfare, which includes the broad context of social existence. Social welfare is, concurrently, an objective and subjective phenomenon determined by the everyday conditions of people's vital activity, under which they satisfy their needs and implement plans and social expectations.

Traditionally, understanding of the term welfare is connected with the state of a society which possesses all necessary facilities for life support. The objective and subjective properties of man's existence are described by many notions, among which are satisfaction, life satisfaction, quality of living, welfare, happiness, etc. These understandings of the term welfare allow the conclusion that it serves as a socio-personal ideal which can be achieved by specific means. The subjective personal component reflects the concrete historical tension and social aspirations to the social order, and is charged with a choice among many prospects relating to certain resources and technologies for achieving new desirable social conditions.

To summarize the definitions stated above, three aspects can be emphasized in the understanding of the notion of welfare. First, it means satisfaction with life and is connected with its standards. It is also a global assessment of the quality of living in conformity with social and personal criteria. In these terms, welfare is the coincidence of the satisfaction of needs and aspirations with those really achieved, or with what a person possesses in a real situation. Second, it implies the existence of a certain welfare standard. This understanding is 'recorded' in the requirements for moral life, conditioned by correspondence to the value system accepted in a certain culture at a certain historical age or time. Third, understanding of the welfare standard is further reflected by subjective experience in reality, which is fixed by the notion of happiness.

The term sustainability is a synthesis of the concepts of change and stability, and expresses real dialectical contradictions as well as a tendency to their harmonization. The ontological character of these contradictions is defined by the scope of social existence.

Social welfare as a factor of sustainability
Welfare implies a self-evidence of satisfaction with what a person or society possesses in a real situation, because it means the obtainment of welfare that becomes possible due to external factors (subsistence resources, normalization of relationships, etc.)
and internal factors (psychological emotions, assessment of welfare standards based on individual self-sentiment).

Social welfare is a complex integral indicator of the efficiency of the social sphere which reflects the social health, welfare standard, quality of living, and social security of the society system. In this connection, social welfare is the most important indicator of stability, comprising certain achievements and opportunities available in the diverse vital activities of the individual and society as a whole, i.e. policy, economics, legislation, and so on.

The diversification of needs depending on different objective circumstances (historical time, governmental priorities, technological change, personal development) leads to instability of social existence, which is mostly evaluated precisely by social welfare, the indicator of a sustained interaction between man, society, and culture.

A society itself generates threats to social welfare, as shown by the level of offences, social anomie, etc. A society with low living standards is forced to contradict the ideas of sustainability, because the absence of efficient technologies leads to ineffective production. Therefore, in terms of a deficiency of subsistence resources, a person has to think about his/her own tomorrow rather than the others'. Thus, the growth of welfare and sustainability requires control over social processes and relations oriented towards the formation of favourable social and economic conditions which can facilitate the achievement of welfare by both the person and society, using personal and societal labour. On this account, the relations arising between people will not lead to socioeconomic turbulence from the inside.

The achievement and retention of a certain social state, fixed by the notion of health, is important from the point of view of welfare. Erich Fromm suggested applying the social analysis to the health criterion, which was fixed in the category of the sane society. Later, this concept was applied by Toffler in his book 'Future Shock'. For Fromm, the answer to the question of what society is sane is connected with the moral traits of the individual and of society as a whole, such as opportunity, responsibility, and obligation. A sane society is, first of all, a society in which nobody is a means towards an end of another, but is rather a goal in oneself; a society in which nobody is used, and nobody uses oneself, for aims other than the development of human capacities; a society in which man is the central part, and his economic and political activities are directed to his own development; a society in which the individual is involved in social problems and solves them as his own; a society in which man's attitude to a neighbor is not separated from the entire self-other system (Fromm, 2005).

Socio-practical existence is multidimensional and formed by coordinates that correspond to the main types of people's vital activity and efforts, i.e.
socio-ecological (reproduction), socio-economic (resource recovery), socio-cultural (living standard recovery), and many others. The speed of the changes which take place in the subsystems of social and human life should be maximally agreed. This agreement is possible only with the availability of a certain safety zone, or damper, capable of smoothing turbulences dangerous both for society and for the environment as a whole. A damper is created by the ideas, community ideals and values which form the basis for the choice of specific actions. Social welfare is one of the universal categories the ideas and implementation of which create this safety zone. In this capacity, welfare is a regulation mechanism. The concept of welfare is the regulation of social tension and processes, reflecting the integrating assessments of the quality of living and of events in terms of a socialized individual included in the community of other people. And the happiness of an individual depends on the happiness of all other people, as is proved by one of the eco-laws, which runs that 'no joy can come from causing grief'. So, only orientation to the mutual achievement of welfare becomes the moral imperative of human life. And it does not matter on whom welfare mostly depends, since it is already a shared responsibility. This mechanism is included in the functioning of a civil society in the quality of a social institution which expresses and protects personal interests and one's natural right, 'everyone to his trade'.

Welfare assurance is provided by the civil society interested in the achievement and regulation of the balance between the private and the public. A civil society represents the moral and political method of organizing people in a community, which demonstrates a phenomenon of autophagy, when people utilize themselves as a means of their survival, free choice and activity based on common values (Ivankina, 2013).

Being an artifact, social welfare is recorded in the system of beliefs, assessments, rationales, requirements, and values of the axiological dominants in man's existence. In this quality, welfare is a kind of chronotope of social development, a spatiotemporal characteristic of normative-regulatory transformations. Therefore, social welfare is a sociocultural policy directed to the establishment of axiological systems and norms that allow each person to feel safe.

Integrating or axiological basis for the welfare and sustainability relationship
A characteristic feature of modern civilization is a steady increase in production and consumption. Just in the 20th century, fuel consumption increased almost 30 times, and industrial production more than 50 times (Ryabchikov, 2002). At the same time, the problem of poverty and ill-being is still most relevant. How can it be solved? According to Fromm, it is necessary to be conscious of the pathological events occurring in society and to wish to change reality, and then the real situation, values, and standards. In P.
Sztompka's opinion, 'only by mutual agreement we can freeze some states important for our practical needs, treating them as single events, and speak of change or processes as the sequence of such frozen, "discrete" points' (Sztompka, 1996). According to Le Chatelier's principle, a society must be in a state of dynamic equilibrium in which internal processes compensate for external effects. When wishing another person well-being, what do we wish indeed? Certainly, good. The choice of good is the process of correlation between what we wish and what is really needed and valuable for an individual living in a society, because welfare is an attractive image of what a man would like to possess. This broad understanding of welfare forms the broad spectrum of its types, ranging from material subjects, food and things to diverse feelings and experiences. That is why the process of obtaining welfare is rather stable in its basic principle (the mechanism of wants) and, at the same time, diverse in the variety of wants.

The hermeneutical approach can be used to analyze social welfare, in which it refers to a material substrate. A person who satisfies his/her needs achieves welfare through a systemic consumption of the realia of the material and social world (things, food, clothes, etc.). The factors that affect social and personal welfare, as well as its dynamics, are arranged in conformity with the levels of needs shown by Maslow in his hierarchy of needs (Maslow, 1999). According to this theory, a person is connected to his/her setting mostly by the lower levels, and while moving upwards, he/she becomes more dependent on his/her own ideology and sense and value systems. In this case, gratuitousness is possible, when a man does not expect a response from other people to his ideas, ideology, gratefulness, etc., since his/her enthusiasm, self-actualization and goal-achievement compensate for the absence of other people's response. Inwardly, a person is free from compensation given by another person, which is difficult to achieve at other levels. According to Maslow, it is necessary to appreciate the superior rather than the inferior, because 'with increased personal responsibility for one's personal life and a rational set of values to guide one's choosing, people would begin to actively change the society in which they lived'.

A continuous stream of needs provides social welfare, the level of which is dynamic and shifts towards either an increase or a decrease of the level reached. Moreover, the stream of needs has an effect on the axiological purposes of man's provision with goods, to the extent of his/her dependence on them. The potential capitalization of man and a tendency to goods as one of the basic needs is supplemented with a tendency to the establishment of strict requirements for man's behavior, which include internal moral improvement.

According to the law of the growth of needs, the level of individual and group aspirations increases and constantly extends the boundaries of welfare, which comprise, along with the universal concept of welfare, those achievable and possible for a given community in a given historical time.

Using the notion of welfare, J. Habermas studied the idea of the rationalization of actions. In his opinion, welfare is not available to a person who has no rationality, and it can be formed and conditioned both in the process of rationalization of one's own expectations and by rational interpretation of the environment (Habermas, 1995).
The methodology which unifies the concepts of welfare and sustainability is the concept of the nature of social activities developed in the works of Garfinkel. He found that the understanding of social life by individuals occurs not only from the outside, by accepting common cultural standards and values, as M. Weber and T. Parsons insisted, but also from the inside. In Garfinkel's opinion, the social order is the product of one's own spontaneous activity, which is created by the participants of social interaction, allowing for rules and knowledge of the given cultural setting learned before and fixed by the notion of tacit expectations (Garfinkel, 1967).

Tacit expectations are the socially approved attitudes to one or another action observed by an individual, to which he/she assigns a rational meaning. This type of expectation forms the basic latent structures of social life, which can be revealed by highlighting a certain aspect of the multiple properties of their definition. These ideas are supported by Heimann, who concluded back in the 1930s that the quality of living is determined rather by the purposes of those groups to which a person considers himself/herself to belong than by those to which he/she really belongs.

As Albert Schweitzer states, the development of the concept of sustainability is directed to the acceptance of values oriented towards 'reverence for life'. The original idea and objective which unify social welfare and sustainability is man's development as the criterion of social progress, implemented by the broadening of choice, the increase of life duration, the level of education, income, etc. Recognition, the discovery of one's own daemon (translated from Greek as 'godlike power, fate, god') and the further organization of life in conformance with its laws has been the foremost duty of each man from the moment of the formation of civilization. The daemon comprises a reflectively critical attitude of man to himself and the world as a whole, or sensitivity to the truth phenomenon; moral conscience, or sensitivity to the welfare phenomenon; and aesthetic sensibility, or sensitivity to beauty (Mantatov, 2009).

A contravention of the principle of balance and proportion tends toward compensated justice, whereupon the problem of the very possibility of mankind's existence is sharply aggravated. Consensus achieved in social contacts does not automatically accompany activity, and requires a specific task and additional efforts for its accomplishment. To provide justice, it is necessary to form trust on the basis of positive personal and social purposes.

The process of sustainable development is complex, and the necessity of changing purposes is the priority, provided that mankind learns how to create and sustain constructive relations. Stability imperatives include the system of values allowing the assurance of a choice of decisions in favour of optimum proportions of equal possibilities for the life of present and future generations, judging from the priority of the safe character of measures taken for the prospective existence of mankind. In these terms, the concept of sustainability can be defined as a growing-point for new worldview orientations. One of the steps taken in this direction is to focus attention on the problem of social responsibility for making decisions and their further implementation, both at the individual and at the group activity levels.
The model of deterministic chaos includes a stable structural order at the macro-level formed by discrete chaos at the micro-level. Prototypes of this regularity of micro-organization are the archetypal symbols of the 'labyrinth' or the world tree, in which a transition to a higher level of order is connected with a transition to a wholly new (evolutionary) level. Thus, social welfare as a cultural universal acts as a regulator of control over order in chaos.

Conclusion
It has been shown that the regulation of welfare is an ontological concept that measures social reality as it changes during the processes of individual existence, and that acts in the capacity of a universal resource able to increase the degree of stability of a social object under conditions of uncertainty. Social welfare is a key factor in the stabilization of social relations. Sustainability and welfare are interconnected processes unified by the principle of inter-complementarity.
2019-05-04T13:07:41.445Z
2015-01-07T00:00:00.000
{ "year": 2015, "sha1": "44f9662162f8b1a0d2653706553f4783476c5684", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.sbspro.2014.12.493", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0536d30be726c8d489e8aeef87e98b538ceb79d1", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Sociology" ] }
12364619
pes2o/s2orc
v3-fos-license
Commentary: Extracellular peptidase hunting for improvement of protein production in plant cells and roots

A commentary on: Extracellular peptidase hunting for improvement of protein production in plant cells and roots by Lallemand, J., Bouché, F., Desiron, C., Stautemas, J., de Lemos Esteves, F., Périlleux, C., et al. (2015). Front. Plant Sci. 6:37. doi: 10.3389/fpls.2015.00037

Despite much recent success in plant-based protein production, key challenges, such as undesired plant proteolytic activities, still severely compromise current recombinant protein production, with peptidases affecting protein stability (Pillay et al., 2013). The paper by Lallemand et al. (2015), reporting the identification of extracellular peptidases that compromise protein production in plant cells and roots, is therefore an excellent contribution that ultimately advances our understanding of peptidase action in plant-based recombinant protein production (Lallemand et al., 2015). Since research has so far not paid a great amount of attention to this problem, a more detailed view, as taken in the paper, is highly beneficial for elucidating such peptidases in the extracellular space. This offers great benefits in terms of protein stability and higher protein production yield. Previous approaches used to address this challenge in plants have, for example, included peptidase silencing by applying RNA interference technology (Voinnet et al., 2003; Hatsugai et al., 2004) and also co-expressing specific protease inhibitors as "companions" to limit specific protease activities (Goulet et al., 2010, 2012; Pillay et al., 2012). However, silencing a specific peptidase or co-expressing a "companion" protease inhibitor always bears the risk that vital plant metabolic pathways are also affected (Van der Vyver et al., 2003; Senthil-Kumar et al., 2007). This can compromise efficient recombinant protein production in a plant-based system. In addition, work on Arabidopsis, as already done by Lallemand et al. (2015), with its existing wealth of transcriptome and gene data (The_Arabidopsis_Genome_Initiative, 2000), will enable future identification of similar peptidases in other plant species when comparative genomics approaches are applied in combination with Next Generation Sequencing. By investigating two plant species (Arabidopsis thaliana and Nicotiana tabacum), the Lallemand et al. (2015) study particularly unraveled that root secretions contained more peptidase activity than, for example, the extracellular medium of cell suspensions. A less proteolytically enriched environment is certainly more favorable for the production of recombinant proteins, especially antibodies. This key finding has, therefore, not only significantly extended our understanding of how particular plant species contribute to proteolytic activity and the type of peptidase produced but has also advanced our understanding of how proteases in different plant parts can compromise recombinant protein stability. The study has thereby set a strong working basis for exploring proteolytic action in greater depth in the future. Lallemand et al. (2015) also focused on establishing geno-transcriptome data. By tapping into the wealth of existing peptidase data, Lallemand et al. (2015) further carried out an in-depth in silico analysis of existing Arabidopsis genome and transcriptome data. Remarkably, the search resulted in the identification of serine and metallo-peptidases as the main peptidases involved in proteolytic processes. These peptidases were consistently expressed in the two investigated production systems. By merging activity assays with geno-transcriptome data, specific Ser-peptidases potentially responsible for target degradation were identified. Lallemand et al.
(2015) proposed that these peptidases should be the first prime candidates for modification to improve protein stability. Specific inhibition of Ser-proteases is certainly an attractive idea, which is also supported by previous findings (Goulet et al., 2012). However, the question remains how many other proteases there are, particularly in the plants currently applied in recombinant protein production, and what role(s) they play in protein production and stability. For example, commercial companies are primarily using Nicotiana benthamiana, and the unconventional method of producing proteins in carrot cells is also applied. These plant species might have very different protease profiles. Such commercial preferences in industry are excellent indicators of which production systems researchers should investigate and adopt in their methodology. Consequently, more definitive investigations are required in protease profiling, with the option of avoiding plant species whose specific profile is unfavorable for the production of a given recombinant protein. In this regard, recent Next Generation Sequencing and proteomics approaches for protease profiling (Vandenabeele et al., 2003; van Wyk et al., 2014) will allow the identification of a great number of peptidases as well as the establishment of their particular expression profiles in plant species targeted for recombinant protein production. In addition, more focused assessments of recombinant protein susceptibility to proteases have to be carried out to identify potential cleavage sites within the protein. These considerations and risks are encapsulated in our pipeline for enhancing protein expression (Figure 1), which illustrates two stages where proteins are most vulnerable to proteolysis. A different complement of plant-derived proteases may be released during the extraction process from a cellular compartment different to that where the target protein is originally localized, and these proteases may also be co-purified during the purification process. Once the inherent susceptibility of the target protein is determined, appropriate inhibitors can be used to ameliorate the negative effects of proteases during extraction and purification.

Figure 1. Pipeline for enhancing protein production in expression systems. Green circles represent areas that are free of danger from proteolysis whilst red circles represent areas where there is a danger of proteolysis.

Without doubt, the study is, as Lallemand et al. (2015) have already outlined, an excellent starting point for developing new strategies for identifying proteolytic activity with the goal of enhancing recombinant protein stability.
Funding
This work was supported by the National Research Foundation (NRF) as NRF incentive funding to KK and a NRF bursary to PP.
2016-05-04T20:20:58.661Z
2015-07-21T00:00:00.000
{ "year": 2015, "sha1": "7a3e9cfb78c708fc08d869886aa7af66d19e6b12", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2015.00557/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a3e9cfb78c708fc08d869886aa7af66d19e6b12", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244632568
pes2o/s2orc
v3-fos-license
THE INTERNATIONAL JOURNAL OF BUSINESS & MANAGEMENT

Foreign Direct Investment: A Look at Nigeria 2009-2019

Abstract: This study examined efforts of the Nigerian Government between 2009 and 2019 to boost foreign direct investment (FDI) with diversification from crude oil, as outlined in the Investment Policy Review (IPR) report submitted by the United Nations Conference on Trade and Development (UNCTAD) in 2008. The study adopted a conceptual approach anchored on the Dutch Disease model and the resource curse theory. Relying on secondary data, the study showed that Nigeria has not attracted FDI commensurate with its potential despite efforts and plans. It is emphasized that diversification in Nigeria can be achieved if focus is beamed on the manufacturing segment of the economy using foreign direct investment. This study recommends expedited action by the Nigerian Government and all stakeholders in setting up the manufacturing sector for FDI inflows.

Several recommendations to improve Nigeria's attractiveness were made in the 2009 IPR report. Key findings were also shared at a cabinet meeting chaired by the president of Nigeria in July 2008. This paper looks at Nigeria and its effort to attract foreign direct investment in the last decade, between 2009 and 2019, the challenges that must be surmounted, and recommendations on the way forward in line with global standards.

Research Methods
The study used a conceptual research design. Data were collected from past journals and the Nigeria Bureau of Statistics to explain and support the researchers' position.

Literature Review
For any nation to enjoy the full gains of foreign direct investment, national policies and the global investment framework must be attractive to investors. FDI is a propelling factor for economic growth in every nation when approached holistically (Asiama, Ofori & Arful, 2018). The numerous benefits of FDI for host countries impact the life of the average citizen in terms of employment generation and economic empowerment. It has been established that developing countries see FDI as a pipeline for economic growth, reduction of the unemployment rate, modernization and income growth, and assiduously go the extra length to create an enabling environment for FDI increase (OECD, 2002). Beyond the economic benefits of human capital development, international trade integration, transfer of technology and enhanced business development coupled with a competitive business environment, FDI is also a tool for improving the social circumstances in the host country. 'The major impact of FDI on human capital in developing countries appears to be indirect, occurring not principally through the efforts of multinational enterprises, but rather from government policies seeking to attract FDI with enhanced capital' (OECD, 2002 p.14). The rise in competition for FDI in the global market space drives nations to introduce incentives for attraction. These incentives come in the form of tax holidays, discounts, special tariffs, special economic zones, free land and different kinds of subsidies. Every country in Africa is keen on attracting FDI. The clamor for foreign direct investment by developing nations is not necessarily due to globalization but due to the consistent reduction in foreign aid and development assistance from the advanced countries (Yeboah & Anning, 2020). This drives them to design and act on policies, build institutions and become signatories to investment agreements (Olokoyo, 2012).
The whole essence of attracting FDI by different governments lies in its beneficial effects on employment, wages, balance of payments, technology and growth (Velde & Morrissey, 2002). It is no longer news that FDI benefits the host country via technology transfer and knowledge spillovers; however, the extent to which a country benefits from this externality depends upon its domestic economic conditions (Giwa, George, Okodua & Adediran, 2020). Policies on attracting FDI, no matter how encompassing, are not enough to generate drivers of steady economic growth (Elbasan, 2015). Refining the investment milieu is the ultimate deal for any country serious about significant economic growth.

Nigeria, between 1970 and the mid-1990s, was the highest recipient of FDI in the African continent, accounting for more than 30% of total FDI inflows to Africa. The overdependence on oil for FDI inflows, coupled with inadequate attention to other sectors, affected Nigeria in the long run. By 2007, the country had been overtaken by other African countries like Egypt and South Africa, which attracted FDI to different areas of their economies (UNCTAD, 2009). The extractive sector alone contributed as much as 70 percent of FDI inflows. This is detrimental to the industrial and manufacturing sector and has little effect on the country's per capita income (Akinmulegun, 2012). Previous governments have focused on transforming the Nigerian economy from an oil-based to an industrial one without much success (Nwosa, 2018). One of the issues with concentrating FDI in a particular sector (like oil in the case of Nigeria) is that most of the accruing benefits reside with the political and urban elite, leaving out the poor who comprise a greater portion of the populace (Delay, 2018). 'Oil export has contributed substantially to the revenue base of Nigeria but has entrenched a mono-cultural economy. Regrettably, other critical sectors such as agriculture and manufacturing have not been given much consideration by successive administrations' (Aljazeera, 2015). Nigeria's current situation is characteristic of the Dutch disease model explained by the resource curse theory. The Dutch disease model refers to the inhibiting effect natural resources have on other sectors of the economy. Due to windfalls from natural resources and the associated appreciation of the exchange rate, other sectors like manufacturing become less competitive and consequently enjoy less focus from stakeholders (John, 2010). Many African countries, including Congo, Angola and Sudan, boast of oil and solid minerals, but their citizens experience a poor quality of life, while countries situated on rocky islands with no exportable natural resource, like Japan, Korea, Singapore and Taiwan, have placed themselves on the global map for high living standards (Jeffrey, 2010-21). In Nigeria, a lot of policy discussions are held with principal focus on oil and oil derivatives, to the detriment of the industrial and manufacturing sectors. This is also evident in countries like Venezuela, Iran, Trinidad and Tobago and Russia, whose manufacturing sectors suffer decline due to overdependence on natural resources for foreign exchange. The resource curse theory explains why countries are unable to develop to their full potential despite the blessing of abundant natural resources. Such countries in essence fail to cater to the welfare of their citizens as expected. Peculiar to these countries are high rates of conflict and political and economic instability.
Oil and gas wealth in particular is characterized by high upfront costs, price volatility, stakeholders' interference, multiple value chains and the secrecy of the industry. Economists believe these factors differentiate oil wealth from other types of wealth (Natural Resource Governance Institute, 2015). The resource curse theory proposes that large quantities of fuel and mineral resources catalyze dismal economic performance in developing countries due to rent seeking and high levels of corruption surrounding the allocation and distribution of accruable wealth. In resource-poor countries, citizens pay taxes, while in resource-rich countries, multinationals pay taxes. Inherent issues in the resource curse revolve around:
• Democracy: When Government spending relies on citizen taxation, it is more likely to accede to the demands of the citizenry, and vice versa. Resource-rich countries are more prone to authoritarianism than resource-poor countries since their citizens feel less invested in government financial decisions.
• Conflict: Abundance of natural resources is usually central to internal conflicts in resource-rich countries. Different groups struggle for ownership and control of these resources due to accruable financial gains. The oil-rich Niger Delta region of Nigeria has witnessed a lot of resource control conflicts between the Federal Government, the Ijaw tribe and the multinationals.
• Boom-Bust Cycles: Oil resources are prone to boom-bust cycles due to fluctuations in global price and production. There are tendencies to overspend during the boom years and then make cuts when revenues decline due to bankruptcy. Nigeria currently borrows to fund its budget. Incessant borrowings will no doubt lead to a debt crisis or the mortgaging of national assets.

Over the last few years, the manufacturing sector in Nigeria has not benefitted in concrete terms from FDI. Through employment creation processes, FDI contributes to poverty reduction in developing countries. When the manufacturing sector benefits from FDI inflows, more jobs are created, leading to an increase in middle-income earnings with a consequent reduction in poverty (Assadzadeh & Pourqoly, 2013).

It is important to note that the resource curse is reversible. Like Nigeria, the bulk of FDI in the neighboring country Ghana was concentrated in the extractive industries. Around 70% of all FDI was in natural resources, but its government has significantly reduced the dependence on gold and oil. FDI now exists in the non-traditional agri-business, media, education, plastic and services sectors (Investment Policy Review, 2003). Currently, more than 72% of registered projects in Ghana are wholly foreign-owned (Yeboah & Anning, 2020). This has led to employment creation with consequential poverty reduction, especially in rural households. One of the crystal reasons for this is the political stability under President Nana Akufo-Addo, who has also been proactive in marketing Ghana to a global audience at every given opportunity (Levia, 2021). The chart below shows the GDP growth rate of Nigeria and Ghana for the period under study.

Figure: GDP Growth Rates (Nigeria and Ghana, 2009-2019). Source: Researcher's calculations based on data from UNCTAD.

Results and Discussions
From the above data, Nigeria's GDP growth rate is on a steady decline compared to Ghana's, a pointer to a significant difference in the investment climate of both countries despite their shared historical characteristics. Since Nigeria's return to democracy in 1999, successive governments tried to put in place different policies, like the National Economic Empowerment and Development Strategy (NEEDS), and restructured existing platforms, e.g. the Nigeria Investment Promotion Commission (NIPC), all in a bid to increase the volume of FDI in the country. In fact, former President Olusegun Obasanjo (who held power between 1999 and 2007) was in the news for his several visits abroad to woo foreign investors. NEEDS had FDI attraction as its main goal and guided the policies of the Government at that time. It was anchored on four key areas:
• Reforms in the public sector to improve efficiency
• Private sector involvement
• Putting into practice a social charter
• Value re-orientation

By mid-2007, NEEDS was replaced with the Seven-Point Agenda, which was to act as an anchor upon which private sector-led development would rely. The seven areas were Wealth Creation, Infrastructural Development (Power, Energy and Transport), Human Capital Development, Security, Land tenure changes, Regional development (Niger Delta) and Food Security. The ultimate aim was to make Nigeria a member of the top 20 global economies by 2020 (UNCTAD, 2009). Nigeria has two major laws that guarantee investments for international multinational enterprises and ensure hitch-free transfer of funds to and from Nigeria. These are the Nigerian Investment Promotion Commission (NIPC) Act 16 and the Foreign Exchange Act 17. Both were endorsed in 1995. The mandate of NIPC is:
• To handle issues of duplicity
• Guarantee profit, capital, interest and dividend transfer
• Assist with the provision of incentives
• Create an effective dispute resolution process for investor-state arbitration

The NIPC Act empowers foreign investors to invest in any enterprise in Nigeria except arms, ammunition, military uniforms and equipment, and narcotic drugs. The main objective of the commission is to market Nigeria as an attractive destination for FDI and design measures to ease the conduct of business in the country (Salihu & Shasore, 2019). A look at FDI inflows data shows a lackluster performance so far.

2011: The peak years: Oil prices were high in international markets. Several policy initiatives were launched in line with IPR recommendations. Sadly, none was translated into practice.

2014: The decline: Security issues and terrorist activities impacted investors' confidence, resulting in falls in FDI inflows. The Federal Government set up two special committees to oversee FDI attraction. No concrete investment reforms were carried out.

2015-2019: The crisis and the response: Severe shortage of foreign exchange due to the continued fall in the global oil price. The Central Bank adopted heterodox policies and restricted 41 categories of imports from accessing foreign currency in order to boost local production and lessen the pressure on external reserves. Security issues continued and many foreign investors exited the Nigerian market. The country entered recession in 2016. All these triggered the Government to rapidly respond with new policies aimed at rebuilding investors' confidence. A Presidential Enabling Business Environment Council supervised by the Vice President, Yemi Osibanjo, was established in 2017.

But as noted by UNCTAD, 'while the recent investment climate reform efforts and achievements are impressive, particularly following a decade of regulatory lethargy, they have so far focused on "low-hanging fruit". To significantly improve the business environment and to foster economic diversification and sustainable development, there is need for a continued policy drive to address deeper structural issues affecting the investment climate' (UNCTAD, 2018 p.5). Ghana, for instance, operates a one-stop liaison (mediator) between Government parastatals and investors. In Nigeria, by contrast, duties regarding foreign investment overlap across the Nigeria Investment Promotion Commission (NIPC), the Federal Ministry of Industry, Trade and Investment and the One Stop Investment Centre (OSIC), with weak co-ordination; UNCTAD's review of the investment climate stated that 'NIPC lacks adequate funding and suffers from poor use of existing funds. It also lacks clear targets against which' its performance can be assessed. The more foreign affiliates climb the ladder, the more they contribute to a country's economic development. When compared to most developing countries, foreign affiliates in Nigeria are way below average on this ladder (UNCTAD, 2009). How soon Nigeria is able to reverse the downtrend in FDI inflows also depends on the political climate. 'In fact, Nigeria has almost everything it needs to make it one of the first African countries to achieve economic take-off on its own. The only issue weighing heavily on the horizon is the governance, or even governability, of the country. This will certainly be the decisive issue for the future of West Africa as a whole' (OECD, 2020). A significant bottleneck against FDI in Nigeria borders on security and terrorism.
The fear of business disruptions and threats to life and property leads to a poor perception of the country by investors. There are incidences of kidnapping (for ransom) and wanton killings in the country, especially in the Northern region (UNCTAD, 2009). Nigeria is among the nations classified as fragile and conflict situations (FCS) countries by the World Bank. Investors in such countries are usually cautious due to high risks that must be weighed against potential gain. The high risks in most cases render investments inviable.

Conclusion
Ten years after the investment policy review of 2009, Nigeria is still struggling to boost FDI inflows into the country. With the overall directional trend pointing towards shorter value chains and higher value added (due to the Covid-19 pandemic and its aftermath), developing countries will face greater challenges, as they lack the required environment and technology to catch up with their developed counterparts (UNCTAD, 2020). FDI has proven to be the driving factor when it comes to growth and development in developing countries. It can turn the present Nigerian economy around into a more robust one through employment, income creation and investments. The focus of the Nigerian Government leans more towards attracting FDI, as seen in the various incentives and programs put in place to encourage investors; but the bulk of the work is in creating a stable and reliable socio-economic and political environment guided by the rule of law. FDIs are not likely to thrive in a tense business climate like Nigeria's. There is also need for the federal government to implement transparent and favorable exchange rate policies that encourage investment in the manufacturing sector. This study recommends that all stakeholders expedite action in setting up the non-oil sectors of the economy for FDI inflows (considering the gradual global shift from oil to non-fossil fuels to meet energy demands). This is achievable through consistent policies that do not just exist on paper.
2021-10-21T15:11:36.723Z
2021-07-30T00:00:00.000
{ "year": 2021, "sha1": "9209f08084a5f2cde43a8f533dccb32c8f3b6278", "oa_license": null, "oa_url": "http://www.internationaljournalcorner.com/index.php/theijbm/article/download/165564/113732", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2231eecda1e949bc02ce37552654406a577b65eb", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
57784163
pes2o/s2orc
v3-fos-license
Against (Design) Research

The content of this article is an attempt to show that the word research in the traditional sense (within the Cartesian scheme) is hostile to design activity. It reduces the process of design to a merely fixed mechanical process, burying the aspects of imagination, creativity and, more importantly, proposing, which are crucial to design. This imposition might indiscipline (in a formal academic sense) design as a discipline in the human sciences. To show this, the article is divided into three sections. The first section discusses what scientific research is and what is traditionally presupposed in it: that it is objective, neutral and a-historical; in short, the disinterestedness of research. The second discusses epistemological concerns about traditional scientific research raised by two philosophers of science: Thomas S. Kuhn and Paul K. Feyerabend. The third, closing section attempts to delineate design and what designers do in comparison to what scientists do, showing how design and designers differ in their manner from traditional science and scientists.

INTRODUCTION
On the Disinterestedness of Research
This first section of the article briefly discusses the presuppositions applied in research activity, specifically in what has become the tradition of scientific research. The word itself, research, as an English word, means a process of repeated searching for patterns that are manifest in available data and facts, taken as truth (Krippendorff, 2007). In German, Forschung signifies a rigorous activity of inquiry after truth that likewise requires repeated searching. Traditionally, science aims to validate propositions that state facts and depart from facts. Research is the activity that serves this aim. The facts in question are natural ones, and in what follows the words traditional science or traditional scientific research will refer to approaches (laws or theories) applied in natural science. Nature is positioned as an object of research for scientists, from which laws or theories are extracted in order to understand reality better. Placing nature as an object to be uncovered by this research activity is not an easy task, because nature doesn't talk, but humans do. So, since nature doesn't talk to reveal itself, scientists are the ones who do most of the talking among themselves, yet talking is not considered a scientific activity. In order for the research activity to be scientific, scientists, as observers of nature, try their best to approach nature objectively. Through excluding the observer's subjectivities from propositions about reality (the world), the key feature that signifies the word 'scientific' in 'scientific research' is validated. With this kind of objective approach to nature, science tends to apply and generalize this approach not only to natural science but also to the human sciences. The more objective one's approach while doing research, the more scientific it is. This objectivity in science presupposes a certain level of disinterestedness. The more disinterested, the more valid and scientific a research is. Nature (and also the human) is considered a bunch or collection of facts ready to be observed and exhausted. Under this claim of objective observation, collected and found data are considered neutral and fixed as they are. Data are a representation of nature the way it is, (claimed to be) detached from the observer's context and interest.
This view that science and the scientist should be disinterested towards nature, or towards the object that is researched or observed, has been quite an enchantment since the dawn of modernity. It was René Descartes (1596-1650), a French philosopher, whose philosophical view provided a modern ground for traditional science, especially his philosophy of the human being. His anthropological view of the human is understood as dualism (Hardiman, 2007): a concept, or ideology, that views the human mind/soul and the human body as two separate entities. This kind of dualistic view of the human and the world was actually set out earlier by Plato (427-347 BC), a Greek philosopher of the era of antiquity. The dualism in the Cartesian view of the human treats the mind or soul as the controller of the body; the body is considered a machine for the mind or soul. The word machine signifies that the body is capable of working automatically and mechanically apart from the mind or soul. The task of the mind or soul is to control this machine to work accordingly. The human body is seen as a mechanical object for the mind. The mind or soul is interpreted and understood as subject, and the body as an object for the mind or soul. To explain how these two separate entities are related to one another, Descartes pointed to a small gland inside the brain (glandula pinealis) to bridge them. This pineal gland, according to Descartes, made the relation between mind or soul and body possible. It makes it possible for one to smile physically while one's mind or soul is at the same time in a troubled or anxious state. How, then, does this Cartesian dualist view of the human ground and affect the traditional view of science? The most influential and enchanting dogma for science is his subject-object scheme, an implication of his distinction between mind or soul and body. As mentioned earlier, the mind or soul stands as the controlling subject while the body is a mechanistic machine controlled by the subject. We can simply replace the word 'subject' with the word 'scientist' and the word 'object' with the word 'nature' or 'world'. The scientist is disinterestedly detached from the nature that is observed, just as the subject (mind or soul) is a detached or separate entity from its object (the body). The body is necessarily an object; it is not viewed as part of the subject. The body is not a milieu for the conscious subject; it is different. Again, the more they are detached from one another, the more disinterested and objective the research is; it is considered a valid inquiry after truth.

Epistemological Concerns on Scientific Research
The implication of this Cartesian subject-object scheme is that objective interpretation is an absolute necessity for research to be scientifically valid. This mechanization of the world view through the subject-object scheme has traditionally infested the mind of the scientist. From this objective interpretation of the world emerge intellectualism and empiricism (Hardiman, 2007). The first supposes that the world or reality is embedded in the thinking subject; thus reality or the world is adequate to what is constructed inside our heads. The latter, empiricism, on the other hand, states that human consciousness is a reflection of the objective world. As long as it is reflected in our consciousness, it is validated as objective. Even though these two isms sound contradictory to one another, there is one respect in which they are identical: they both claim the capability of science to understand reality fully, to its very core.
This claim leaves no room for context, crisis, and creativity on the part of the scientist as a historical human individual. Nature and the world are viewed and exhausted objectively and a-historically. The scientist is seen as a mechanized, disinterested subject, like a mere quantity of data or natural fact. Responding to this kind of tradition in science, here we will discuss two views from two different philosophers of science. The first to be discussed is an American science historian and philosopher of science, Thomas S. Kuhn (1970), and the second Paul K. Feyerabend (1993), an Austrian philosopher of science who is considered the most controversial and adventurous figure of the post-Kuhn era (Godfrey-Smith, 2003). These two thinkers raised several epistemological issues regarding the approach of traditional scientific research. Kuhn emphasized the importance of context through his concepts of paradigm and crisis in science, while Feyerabend (1993) emphasized the importance of the freedom to think divergently in science. Kuhn's view is more ontological (about the being of science), while Feyerabend's is more existential (viewing the scientist as a free individual subject whose decisions define his existence). To describe the importance of Kuhn's view is to say that he shattered the traditional myths about traditional science, especially those embedded in the empiricist tradition (Godfrey-Smith, 2003). Kuhn showed that the work of science actually has little to do with traditional views on rationality and knowledge. In showing this, the article will focus on two important terms from Kuhn: first, paradigm, and second, crisis. The term paradigm in Kuhn's sense means a whole way of doing science; it is a package of claims about the world, methods for gathering and analyzing data, and also habits of scientific thought and action (Godfrey-Smith, 2003). This understanding of paradigm is considered the broad sense of paradigm. What paradigm in the broad sense tries to show is that the whole way of doing science is always situated within a certain particular context and is always historical, belonging to a particular age. Science and the scientist are never a-contextual, and the research method is never a-historical. Therefore the presupposition that merely emphasizes the disinterested aspect of science is shattered. What makes paradigm in the broad sense possible is paradigm in the narrower sense (Godfrey-Smith, 2003). What is this paradigm in the narrow sense? According to Kuhn, one key part of the paradigm in the broad sense is a specific achievement, or exemplar, proposed and signified by a scientist. This specific achievement or exemplar will, as time flows, inspire other scientists and suggest a different way to investigate the world; this is what is understood as paradigm in the narrow sense. The paradigm in the narrow sense is included within the paradigm in the broad sense. The specific achievement proposes a different way to investigate the world, and this different way of investigating will become a new whole way of doing science. So the whole way of doing science is sparked by an emerging specific exemplar. Any effort to organize one's scientific work takes place within a certain paradigm. These paradigms (both broad and narrow) stand as a context that, like it or not, influences how scientists do and decide their research methods, and influences or represents their interests. That's the way science is; it is the being of science.
The empiricist scientist tends to state that what is objective about nature is scientifically valid as long as it is reflected in the human mind (consciousness), a-historically and detached from the paradigm that stands as its context. One of the things desperately avoided and abhorred by scientists is the worry that science, if contextual, will fall into relativism. This might be the strongest concern put to Kuhn regarding his view of science through his account of paradigm: the relativism of science. How would Kuhn respond to this? Below is an excerpt from Kuhn's (1970) magnum opus, The Structure of Scientific Revolutions, which provides a counter-argument against the relativist's prejudice:

"Later scientific theories are better than earlier ones for solving puzzles in the often quite different environments to which they are applied. That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress." (Kuhn, 1970)

Depending on a context need not be a form of relativism; even if there is a slight chance of going there, it does not necessarily fall into it. If you are in circumstance A, then you should do B: this is not relativism, although not everyone might be in circumstance A. As mentioned earlier, another key term from Kuhn that will be discussed here is crisis. Crisis, in Kuhn's sense, is a special period when the existing paradigm has lost the ability to inspire and guide scientists, but when no new paradigm has emerged to get the field back on track (Godfrey-Smith, 2003). It is when scientists start to lose faith in their existing paradigm, a condition triggered by an emerging anomaly or a puzzle that has resisted solution. Through this condition of crisis, scientists are challenged to come up with or propose a different way of doing science, challenged to propose an inspirational exemplar. Put differently, crisis is the condition that makes a new paradigm possible. When a different paradigm arises within a particular context, science itself will prevail instead of falling into relativism. The comfort zone, in this case staying tenaciously with a disinterested, objectivist yet a-historical interpretation, is a sedative drug to the established paradigm and to science, and it will put science and its method of researching to sleep. Another epistemological concern about traditional science (the objectivist interpretation of the world) comes from Feyerabend (1993). His notorious view on science is set out in his famous book Against Method, where he argued for epistemological anarchism (Godfrey-Smith, 2003). The epistemological anarchist is opposed to all systems, rules, and constraints in science.

"Science is an essentially anarchic enterprise: theoretical anarchism is more humanitarian and more likely to encourage progress than its law-and-order alternatives." (Feyerabend, 1993)

This statement shows how much importance Feyerabend places on the freedom and creativity of the scientist as a human individual. A great scientist is, in some ways, opportunistic and creative: able to take the initiative to make use of available techniques for the sake of discovery and invention. A great scientist should be able to propose by questioning and criticizing older and existing methods. Any attempt to establish fixed rules or a fixed method in science tends to be dogmatic and will stifle the aspect of creativity in the scientist and in science itself. Feyerabend's deepest conviction was that science is an aspect of human creativity (Godfrey-Smith, 2003).
Unlike the objectivists in traditional science, who place Cartesian rationality on its highest throne in scientific research, Feyerabend did not think that way. To him, rationality is merely one of the ways to do scientific research. Feyerabend related science more to freedom and human wellbeing. Positioning rationality as the only way to do scientific research is like tying science to an altar of dogmatizing madness. Science with its rationality turns scientists into 'human ants' that are entirely unable to think outside of their training (Feyerabend, 1993). And the dominance of science in society dogmatizes each individual into a 'miserable, unfriendly, self-righteous mechanism without charm or humor' (Feyerabend, 1993). If science and rationality become a dogma for society, then we might as well be going back to the gloriously dark Medieval era. Put differently, Feyerabend simply encouraged the aspect of divergent thinking in science and, as for scientists, dared to challenge them to think outside the box or, in Kuhn's sense, to create a new and different paradigm. In spite of that, this critical view of science from Feyerabend is not free from criticism. What is missing in Feyerabend's picture is some rule or mechanism for the rejection and elimination of ideas, a lack of any converging aspect. But one must admit the great contribution of his view, which brings back the aspect of human freedom to scientists. One thing is common to both thinkers as briefly discussed above: science is not only about researching through one established method but also about proposing a different way of doing it in a certain context, a certain situation, by positioning its activity as a human-centered one. Any a-historical and a-contextual dogmatic approach to doing science should be questioned and opposed.

Design and Designers
Previously we have talked about what is presupposed in traditional scientific research and how this is responded to by borrowing two perspectives from two philosophers of science, Kuhn and Feyerabend. Regarding the title of this paper, Against (Design) Research, this section will discuss what designers do in comparison to what scientists traditionally do and how this is related to what Kuhn and Feyerabend have proposed for science. The contents of this section are mostly indebted to Klaus Krippendorff (2007), a contemporary design writer and practitioner. What is design? Going back to its etymology, design comes from the Latin de and signare. Literally, this means marking out, setting apart, or giving significance by assigning something to a use, user, maker or owner (Krippendorff, 2007). Put differently, the word 'design' can be read as a sense-making activity. The products of design, or the design itself, are to make sense to their users. Like what Feyerabend suggested for science, design is likewise a human-centered activity. What do designers do in comparison to scientists in the traditional sense? Unlike scientists, designers are not researchers who exclude their interest, context, and subjectivity for the sake of reaching an objective truth. Whereas scientific researchers traditionally privilege causal explanations that exclude them as initiators or originators of the observed phenomena, designers intend to affect their surroundings by their own actions, their own decisions; and this cannot merely result from causal explanations in scientific discourse.
Designers always intend to involve their subjectivity and their freedom in their decision-making activities. Whereas scientific researchers are concerned only with the truth of their propositions, established by claimed-to-be-objective facts or evidence, designers are concerned with the plausibility and compellingness of their proposals. Another strong difference between scientists and designers is the way they place rationality. In the traditional sense, scientists view rationality as the main way of searching for truth or, practically, of achieving a scientific and valid solution. Designers view rationality as just one of many ways to solve problems. In fact, rational problem solving (which is positioned at the top of the hierarchy in science) is just one mode of designing activity itself. Last but not least, scientists tend to research disinterestedly in search of truth, while designers fundamentally make proposals, embrace their own interest, and offer possibilities. The last point is similar to what was suggested earlier by Feyerabend (1993). What makes this urge to propose possible, or what motivates designers to propose, comes, as Krippendorff described, in three ways. First, along the way, designers will be situated in troublesome conditions, daunting problems and conflicts. This kind of situation makes challenges possible, challenging the designers to step out from their well-established comfort zone. Second, these harsh, negative conditions allow the designers to see opportunities that might not be seen by others: opportunities to improve other people's lives. Third, these opportunities open up possibilities of introducing variations that others might not dare to consider. From what has been described by Krippendorff, we can see how well this connects to Kuhn's and Feyerabend's critical stance on traditional science. The troublesome situation is crisis in Kuhn's terms. Opportunities and the offering of possible variations are paradigm in Kuhn's sense, and also what Feyerabend suggested regarding scientific activity. Fundamentally, designers propose, and what makes this possible, if for Kuhn it is the crisis, is, for designers, boredom. Boredom for designers is the condition of possibility for possible possibilities; and by this, and only through this, design will prevail existentially. CONCLUSION. The main critical concern in this article (knowing and spotting the similarity between design activity and Kuhn's and Feyerabend's critical suggestions for science) is this: most designers are doing, and have been doing, what Kuhn and Feyerabend suggested critically for science, while science, in certain particular regions or countries, is still holding to the traditional way and forcing design approach and activity to go along with it. This is probably most explicit in applying the word research to design, as design research. The objectivist and mechanical interpretations are still being imposed formally by institutions on design, lacking awareness that this can stifle and suffocate design itself, academically and professionally. In the worst-case scenario, this will kill design as a human-centered activity before it grows. Dehumanizing design while claiming design is for humans: how ironic. That is why any attempt to impose the objectivist interpretation as an absolute necessity upon design should be resisted; the best we can do is to let design be. Be what? As Feyerabend said: anything goes; embrace it.
2019-01-23T18:32:39.248Z
2011-10-31T00:00:00.000
{ "year": 2011, "sha1": "8d72f2afc9696ea9fcc4ae0a895d7102c4a7ff9c", "oa_license": "CCBYSA", "oa_url": "https://journal.binus.ac.id/index.php/Humaniora/article/download/3167/2553", "oa_status": "GOLD", "pdf_src": "Neliti", "pdf_hash": "f1c995ffd8bb803227c07822a2c711b6b7291496", "s2fieldsofstudy": [ "Philosophy", "Art" ], "extfieldsofstudy": [ "Sociology" ] }
198195170
pes2o/s2orc
v3-fos-license
Long-term outcomes after surgical dissection of inguinal lymph node metastasis from rectal or anal canal adenocarcinoma Background The 8th edition of the tumor-node-metastasis (TNM) classification classifies inguinal lymph nodes as regional lymph nodes for anal canal carcinoma but non-regional lymph nodes for rectal carcinoma. This difference might reflect the different prognosis of inguinal lymph node metastasis from anal canal carcinoma and rectal carcinoma. However, long-term outcomes of inguinal lymph node metastasis from rectal or anal canal adenocarcinoma are unclear, which we aimed to investigate in this study. Methods The study population included 31 consecutive patients with rectal or anal canal adenocarcinoma who underwent inguinal lymph node dissection with curative intent at the National Cancer Center Hospital from 1986 to 2017. Long-term outcomes were assessed and clinicopathologic variables analyzed for prognostic significance. Results Of the 31 patients, 12 patients had rectal adenocarcinoma and 19 patients had anal canal adenocarcinoma. Synchronous metastases were observed in 14 patients and metachronous metastases in 17 patients. After dissection of inguinal lymph node metastasis with curative intent, the 5-year overall survival rate was 55.2%, with 12 patients surviving for more than 5 years. Median survival time was 66.6 months. Multivariate analyses revealed that the location of the primary tumor (rectum versus anal canal) was not a prognostic factor, whereas lateral lymph node metastasis and histological findings were independent prognostic factors. Conclusion Given the good prognosis, inguinal lymph node metastasis in patients with rectal or anal canal adenocarcinoma appears to be regional rather than distant. If R0 resection can be achieved, inguinal lymph node dissection may be indicated for these patients. Background Anal canal cancer is the most common type of gastrointestinal malignancy that metastasizes to inguinal lymph nodes (LNs). Whereas inguinal LN metastasis from anal canal cancer is classified as N2 in the 7th edition of the tumor-node-metastasis (TNM) classification when metastasis is unilateral, or as N3 when metastases are bilateral [1], it is categorized in the 8th edition as N1a (N1a: metastases in inguinal, mesorectal, and/or internal iliac nodes) [2,3]. This classification was modified based on accumulating evidence from studies on anal canal squamous cell carcinoma [4,5]. Most anal canal cancer cases in Western countries involve squamous cell carcinoma, which accounts for almost 90% of these cases [6]. In contrast, adenocarcinoma is the predominant histological subtype of malignancy arising in the anal canal in Asian countries such as Japan and China [7]. Specifically, adenocarcinomas account for 63% of anal canal cancers in China [8] and 74% in Japan [7], although anal canal cancer itself is a rare disease in these countries. Given the rarity of this disease, little is known about the long-term outcomes of inguinal LN metastasis from anal canal adenocarcinoma. The largest study cohort of patients (21 patients) with inguinal LN metastasis from anal canal adenocarcinoma to date was reported by Su et al., showing a 5-year overall survival (OS) rate of 19.1% [8]. Inguinal LNs in rectal carcinoma are classified as non-regional LNs in the TNM classification [2]. Adenocarcinomas that originate from the lower rectum occasionally metastasize to inguinal LNs in a manner similar to anal canal cancer, with an incidence of approximately 2.0-4.5% [9,10].
Some studies have reported that inguinal LN metastasis from rectal adenocarcinoma occurs as a consequence of locally advanced primary tumors or recurrent pelvic malignancy, and that in these cases, only systemic chemotherapy and radiotherapy should be considered due to the frequency of distant metastasis and poor prognosis [9,11]. Other studies reported that solitary inguinal LN metastasis from rectal adenocarcinoma showed a favorable prognosis after LN excision and thus surgical treatment may be a reasonable therapeutic option for such patients [12,13]. Accordingly, appropriate treatment strategies for inguinal LN metastasis from rectal adenocarcinoma are unclear, and surgical treatment for inguinal LN metastasis remains controversial. The TNM 8th edition classifies inguinal LNs as regional LNs for anal canal carcinoma, but non-regional for rectal carcinoma [2]. No study to date has adequately accounted for this difference. Survival is thought to be an adequate indicator for determining regional versus distant metastasis. In this respect, this study aimed to investigate the long-term outcomes, specifically with respect to OS, of inguinal lymph node metastasis from rectal or anal canal adenocarcinoma, which makes it possible to speculate whether inguinal LN metastasis is regional or distant. In this study, inguinal LN metastasis from these two tumor types was considered a single entity and the combined data were analyzed, for three reasons: it is sometimes difficult to determine the anatomical origin (rectum or anus) of an anorectal adenocarcinoma, so rectal adenocarcinoma and anal adenocarcinoma sometimes overlap; the treatment strategies for rectal adenocarcinoma and anal adenocarcinoma are exactly the same according to the National Comprehensive Cancer Network (NCCN) guidelines 2018 [14,15] (surgery, sometimes followed by chemotherapy); and the sample size was limited. Patients Patients with inguinal LN metastasis from rectal or anal canal adenocarcinoma who underwent inguinal LN dissection with curative intent at the National Cancer Center Hospital from September 1986 to August 2017 were included in this study. Patients who had incomplete medical records and those who underwent only biopsy of inguinal LNs for diagnosis were excluded. Patients with inguinal LN metastasis from colon adenocarcinoma and patients with other histological types were also excluded. The Institutional Review Board (IRB) of the National Cancer Center Hospital approved this retrospective study (IRB code: 2017-437). Anatomic definition of lower rectum and anal canal tumors The TNM classification defines rectal carcinoma and anal canal carcinoma based on the anatomical location of the primary tumor. According to the TNM 8th edition [2], the anal canal begins where the rectum enters the puborectalis sling at the apex of the anal sphincter complex and ends with the squamous mucosa blending with the perianal skin. In the present study, tumor location was determined by colonoscopy and digital rectal examination before surgery. If the center of the tumor was located above the puborectalis sling, the tumor was defined as lower rectal cancer, and when below the puborectalis sling, as anal canal cancer. Treatment of rectal or anal canal adenocarcinoma in Japan Preoperative treatment, including chemoradiotherapy and chemotherapy, prior to total mesorectal excision is the current standard for locally advanced rectal cancer in many Western countries [16].
However, in Japan, surgery with total mesorectal excision plus lateral lymph node dissection (LLND), without preoperative therapy, is performed as the standard treatment for rectal cancer [17]. Thus, regardless of the clinical lateral lymph node status, LLND, including prophylactic dissection, without neoadjuvant chemoradiotherapy is usually performed for patients with locally advanced rectal or anal canal adenocarcinomas in Japan. Inguinal LN metastasis All the patients in this study had clinically positive inguinal nodes detected on CT. In most cases, a biopsy of the inguinal LNs was not performed prior to inguinal node dissection. Patients with pathologically positive inguinal nodes were included in this study. Prophylactic inguinal LN dissection was not performed in cases of lower rectum adenocarcinoma or anal canal adenocarcinoma without clinically positive inguinal nodes. Synchronous inguinal LN metastasis was defined as metastasis occurring within six months after the diagnosis of rectal or anal canal adenocarcinoma. Inguinal LN dissection Technical details of inguinal LN dissection are described below. At 3 cm below the inguinal ligament, a slanting incision is made parallel to the inguinal ligament. Reaching above the femoral artery, a 6 cm incision is made along the femoral artery. Both superficial and deep inguinal LNs, including Cloquet's nodes, are then dissected. After locating the femoral vein, the great saphenous vein is identified. All tissue between the fascia lata and Camper's fascia within the standard template for inguinal node dissection is freed and the great saphenous vein is sacrificed. The inguinal ligament, adductor muscle, sartorius muscle, and the intersection point between these muscles surround the dissection area. Statistical analysis The categorical variables were presented as frequencies with percentages. Pearson's chi-square test was used to compare categorical variables. The Kaplan-Meier method was used to estimate overall survival (OS), defined as the time (in days) from the date of inguinal LN dissection to the date of death from any cause. Survival was censored at May 1, 2018. We estimated OS for each covariate level, and we evaluated the association with each covariate using the log-rank test. The results are shown as median survival times and p-values. Multivariate Cox proportional hazards regression models with Firth's modification [18], which were used to avoid sparse data bias and related problems, were subsequently fitted to evaluate the factors independently associated with OS. The prediction model was selected based on the Akaike information criterion (AIC) from all conceivable models with different sets of covariates [19]. The results of the multivariate analyses were presented as hazard ratios (HRs), together with their 95% confidence intervals (95% CIs), for the selected prediction model. A probability value of P < 0.05 was considered statistically significant. All statistical analyses were performed using the JMP13 software program (SAS Institute Japan Ltd., Tokyo, Japan) or R version 3.5.3 and the 'coxphf' package (R Project). Characteristics of the study cohort One patient with an incomplete medical record, one patient who underwent a biopsy of an inguinal LN only for diagnostic purposes, and four patients with inguinal LN metastasis attributed to colon adenocarcinomas were excluded, leaving 19 patients with anal canal adenocarcinoma and 12 patients with lower rectal adenocarcinoma as the final study population.
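To make the survival workflow described in the statistical analysis section above concrete, here is a minimal Python sketch using the lifelines library; the paper itself used JMP and R's 'coxphf' package, so this is only an illustrative analogue. The column names and toy values are hypothetical, and because lifelines does not implement Firth's modification, an L2 penalizer is used as a rough sparse-data safeguard in its place.

```python
# Minimal sketch of Kaplan-Meier, log-rank and (penalized) Cox analysis.
# All data below are synthetic placeholders, not the study cohort.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [12.0, 66.6, 47.5, 8.2, 120.3, 30.1, 55.0, 90.4],  # time from LN dissection
    "death":  [1, 0, 1, 1, 0, 1, 1, 0],                          # 1 = died, 0 = censored
    "lateral_ln": [0, 0, 1, 1, 0, 1, 1, 0],                      # lateral LN metastasis
    "well_mod_histology": [1, 1, 0, 0, 1, 1, 0, 1],              # well/moderately diff.
})

# Kaplan-Meier estimate of overall survival.
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["death"])
print("median OS (months):", kmf.median_survival_time_)

# Log-rank test comparing patients with vs. without lateral LN metastasis.
a, b = df[df.lateral_ln == 1], df[df.lateral_ln == 0]
res = logrank_test(a["months"], b["months"],
                   event_observed_A=a["death"], event_observed_B=b["death"])
print("log-rank p-value:", res.p_value)

# Penalized Cox model; the L2 penalizer stands in for Firth's correction
# (lifelines has no Firth option), serving the same sparse-data purpose.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```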
The patient characteristics and primary tumor information are summarized in Table 1. Of the 31 patients, 23 patients underwent abdominoperineal resection, six patients underwent pelvic exenteration, and two patients underwent intersphincteric resection for primary cancer. Thirty patients underwent surgery (total mesorectal excision plus LLND) without any preoperative therapy, and one patient received neoadjuvant chemoradiotherapy. LLND was not performed in eight (26%) patients for the following reasons: five of these patients were clinical T1 stage, and the general condition of the other three patients was poor due to severe comorbidities or old age. Histological findings of the primary tumor showed a well- or moderately differentiated adenocarcinoma in 26 patients, poorly differentiated adenocarcinoma in three patients, and a mucinous adenocarcinoma in two patients. In all cases, the surgical margins were negative. We observed synchronous metastases in 14 patients and metachronous metastases in 17 patients. Two patients with synchronous inguinal LN metastasis also had liver metastasis. Bilateral inguinal LN metastasis was found in five patients. Seventeen patients had only one positive inguinal LN; four patients had two positive inguinal LNs; and 10 patients had more than two positive inguinal LNs. Five of 31 patients received adjuvant chemotherapy, and four patients received adjuvant radiotherapy, after inguinal LN dissection. Figure 1 shows types of inguinal LN metastasis classified by the presence or absence of mesorectal LN metastasis and lateral LN metastasis. Ten patients had neither mesorectal LN nor lateral LN metastasis (Fig. 1a), 11 patients had mesorectal LN metastasis without lateral LN metastasis (Fig. 1b), three patients had lateral LN metastasis without mesorectal LN metastasis (Fig. 1c), and seven patients had both mesorectal LN metastasis and lateral LN metastasis (Fig. 1d). Long-term outcomes after inguinal LN dissection In the entire cohort, 3- and 5-year OS rates were 76.5% and 55.2%, respectively, with a median follow-up time for survivors of 47.5 months (range, 1.9-276.6 months). Median survival time (MST) was 66.6 months. Notably, 12 patients survived for more than five years (Fig. 2). No significant difference was found between the prognosis of anal canal adenocarcinoma with inguinal LN metastasis and that of lower rectal adenocarcinoma with inguinal LN metastasis (p = 0.31). Twenty-five patients experienced recurrence after inguinal LN dissection during the study period; 10 patients had local pelvic recurrence, four patients had inguinal LN recurrence (three on the other side, one on the same side), 11 patients had lung metastasis, three patients had liver metastasis, and one patient had peritoneal dissemination. As for timing of recurrence after inguinal LN dissection, 15 had recurrence within one year, and 10 had recurrence more than one year after inguinal LN dissection. Median relapse-free time was 10.3 months. Recurrence after inguinal LN dissection was treated by a multidisciplinary team approach including surgical resection (n = 4), radiotherapy (n = 4), chemotherapy (n = 14), and a combination of surgical resection and chemotherapy (n = 2). Factors affecting prognosis of inguinal LN metastasis The univariate analyses revealed no significant association between the location of the primary tumor (rectum versus anal canal) and OS. The model including lateral LN metastasis and histological type (Table 2) was the optimal model based on the AIC from all conceivable models with different sets of covariates; both factors were independently associated with OS (Table 2).
The other variables were not selected as prognostic factors. Discussion The present study found that, for long-term outcomes after inguinal LN dissection from rectal or anal canal adenocarcinoma with curative intent, MST was 66.6 months and 5-year OS was 55.2%. These results are noticeably better than previous data reported for inguinal LN metastasis from anal canal or rectal adenocarcinoma (MST, 8-14.8 months; 5-year OS, 0-19.1%) [8,9,11,20]. This discrepancy could be due to the small sample size in the previous studies (8-32 patients) [8,9,11,20], as well as recent developments in chemotherapy and the multidisciplinary team approach. In our study, although almost 80% of patients experienced recurrence after inguinal LN dissection, a multidisciplinary team approach that included surgical treatment and chemotherapy for recurrent tumors could have led to the better prognosis. Our MST of 66.6 months was also better than those reported for colorectal cancer patients with distant metastasis. According to the Analysis and Research in Cancers of the Digestive System database, MST was 19.3 months in colorectal cancer patients with liver metastasis, 24.6 months in those with lung metastasis, and 16.3 months in those with peritoneal metastasis [21]. Patients who underwent curative resection of liver metastasis, and thus are expected to have a favorable prognosis among stage 4 colorectal cancer patients, had 5-year survival rates of about 40% [22,23], whereas patients who underwent curative resection of peritoneal metastasis had 5-year survival rates of about 30% [24,25]. Thus, from the perspective of long-term outcomes, inguinal LN metastasis from rectal or anal canal adenocarcinoma appears to be regional rather than distant. Previous studies have reported that, for rectal or anal canal adenocarcinoma with inguinal LN metastasis, unilateral inguinal LN metastasis, metachronous LN metastasis, and solitary inguinal LN metastasis are independent factors associated with a longer OS [12,13,20]. In contrast, we found that the absence of lateral LN metastasis and a histological type of well or moderately differentiated adenocarcinoma were independent factors associated with longer OS in patients with inguinal LN metastasis from rectal or anal canal adenocarcinoma. Lymph drainage at and proximal to the dentate line is directed toward the anorectal, perirectal, and paravertebral nodes and, to some extent, the internal iliac system nodes, and lymph drainage below the dentate line is mainly directed to superficial inguinal LNs [26,27]. There are two lymphatic routes from the rectum to inguinal LNs; one is a direct route [9], and the other is an indirect route which passes through the internal and external iliac vessels. As shown in Fig. 1, we classified patterns of LN metastasis into four types based on the presence or absence of mesorectal LN and lateral LN metastasis. The TNM classification defines rectal carcinoma and anal canal carcinoma based only on the anatomical location of the primary tumor, without accounting for histological type. One issue with this is that both adenocarcinoma and squamous cell carcinoma, which originate from the anal canal, are classified in the same category despite their different treatment strategies. Namely, the standard treatment for primary anal canal squamous cell carcinoma is chemoradiotherapy [4,6], whereas that for adenocarcinoma is surgery.
With regard to squamous cell carcinoma of the anal canal with inguinal LN metastasis, a previous study reported that the 5-year OS in patients with synchronous inguinal LN metastasis was 54.4%, and primary local control in the inguinal area after inguinal LN dissection was 68% [6]. In the present study of rectal and anal canal adenocarcinoma, 5-year OS was 55.2%, which is similar to that reported for anal canal squamous cell carcinoma patients. This study has some limitations. First, since the study was a single-center retrospective analysis, biases may exist. Prospective studies will be needed to confirm our results. Second, the sample size was relatively small due to the rarity of inguinal LN metastasis from rectal or anal canal adenocarcinoma, although the number of patients who underwent inguinal dissection represented the largest sample reported to date. Third, treatment regimens varied among patients after inguinal LN dissection. Further prospective studies will be needed to confirm that inguinal LN metastasis from both rectal and anal canal adenocarcinoma is regional rather than distant. Conclusion Based on the acceptable prognosis of patients who underwent inguinal LN dissection with curative intent, the presence of inguinal metastasis in patients with lower rectal and anal canal adenocarcinoma can be considered regional LN metastasis. If R0 resection can be achieved, inguinal LN dissection may be indicated in patients with inguinal LN metastasis from both rectal and anal canal adenocarcinoma.
2019-07-25T03:51:49.250Z
2019-07-24T00:00:00.000
{ "year": 2019, "sha1": "fa86afdb35e7478fa83f77f36ab54a62e4caa025", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-019-5956-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de71b079722b2b5f296f1fec89d1ecda3ff7f234", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
86031089
pes2o/s2orc
v3-fos-license
Squamous cell carcinoma-like giant keratoacanthoma with delayed presentation in a known case of xeroderma pigmentosum Introduction Keratoacanthomas are rapidly growing cutaneous tumours, which usually resolve leaving an atrophic scar. The term 'giant keratoacanthoma' is applied to a lesion greater than 2–3 cm in diameter. This case report discusses squamous cell carcinoma-like giant keratoacanthoma with delayed presentation in a known case of xeroderma pigmentosum. Case report A 50-year-old female patient, a known case of xeroderma pigmentosum, presented with a large, domeshaped, crateriform lesion over her right cheek that had persisted for the last four months. Histological examination showed an exophytic lesion with a large central keratin-filled crater surrounded by deep bulbous nodules of proliferating squamous cells that had abundant keratin with a lip of normal epidermis. The demarcation was discrete except for tiny foci of deep infiltration at the periphery. However, immunohistochemistry for p53 revealed strong positivity only in the basal layer of the infiltrating islands, weak Ki-67 and desmoglein positivity, along with down regulation of Bcl-2 and E-cadherin. Followup for 18 months did not reveal any site recurrence or metastasis. Conclusion This case highlights the uncommon, delayed, adult onset presentation of giant keratoacanthoma with borderline histological features that mimic squamous cell carcinoma but show negative immunohistochemistry. Introduction Keratoacanthoma (KA) was first described in 1889 by Sir Jonathan Hutchinson as a 'crateriform ulcer of the face'. KAs are rapidly growing, cutaneous tumours with atypical histological features similar to squamous cell carcinoma (SCC) that resolve leaving an atrophic scar [1]. KAs are either follicular KAs arising in hair-bearing skin or non-follicular KAs arising in the palm and sole [2]. Nearly all KAs are solitary lesions, less than 2 cm in diameter and arising on skin exposed to the sun. The term 'giant keratoacanthoma' is applied to a tumour greater than 2–3 cm in diameter [2]. Xeroderma pigmentosum (XP) is a rare, autosomal recessive genodermatosis characterised by deficient DNA repair, photophobia, severe solar sensitivity, cutaneous pigmentary changes and xerosis developing before the age of two years. Early development of mucocutaneous and ocular lesions, solar keratoses, cutaneous horns, KA, SCC and basal cell carcinoma, basosquamous carcinoma, atypical fibroxanthoma, malignant melanomas and angiomas, have been reported [2]. We present a case of a 50-year-old female patient, a known case of XP, with a large, solitary dome-shaped lesion on her right cheek. Case report A 50-year-old female patient, a known case of XP, presented with a large, solitary, dome-shaped lesion on her right cheek that had been rapidly growing for the past four months. There was no lymphadenopathy. Fine needle aspiration cytology showed only nucleated and anucleated squames. Surgery was performed with wide skin margins and the specimen was sent for histopathological examination with clinical suspicion of SCC. Gross A single skin-covered soft tissue mass measuring 6.5 × 6 × 3.5 cm in size was received. An ulcero-proliferative growth was identified 1.2 cm and 0.7 cm from both lateral margins and 0.5 cm and 0.2 cm from both vertical margins. Cut surface was grey white and firm with central necrosis. Procedure The tissue was paraffin-embedded and stained by haematoxylin and eosin.
Immunohistochemistry was done on the paraffin sections with the antibodies indicated in the text. Histological examination showed an exophytic lesion with a central keratin-filled crater surrounded by solid lobules of proliferating squamous cells with abundant keratin. A lip of normal epidermis was seen at the peripheral edge. Parakeratosis within the crater and micro-abscesses were seen towards the periphery. The cells showed mild anisonucleosis, prominent nucleoli and infrequent mitosis. The advancing edges at the base and sides of the lesion were distinct except for focal irregular areas with cords of squamous cells infiltrating the underlying dermis. A dense lymphocytic infiltrate admixed with polymorphs and eosinophils was seen at most of the tumour interface; however, the infiltrate was lymphoplasmacytic in the infiltrative areas (Figure 1). Additionally, immunohistochemistry for p53 revealed strong positivity only in the basal layer of infiltrating islands with weak Ki-67 staining. Loss of Bcl-2 and E-cadherin but preserved desmoglein was uniformly noted in all areas (Figure 2). Eighteen months of follow-up did not reveal any local recurrence or metastasis, favouring the diagnosis of giant KA with borderline features mimicking SCC. Discussion KA is seen in 3% of all reported cases of XP, but giant KA is uncommon [3,4]. KA is a self-limiting epithelial proliferation with a strong clinical and histopathological similarity to well-differentiated SCC. Exposure to excessive sunlight is the most frequently incriminated factor in the aetiology of both KA and XP, with 95% of the solitary lesions found on sun-exposed areas like the face, head and extremities [3,4]. Men are more commonly affected than women. Lesions pass through a proliferative phase, a maturation phase and a final resolving phase in which KAs reabsorb and expel the keratogenous core and eventuate into a scar with variable atrophy [1]. However, this case showed a rapidly increasing size with no involution. Kern et al. [5] observed that almost all of the confirmed KAs had invaginating keratin-filled craters with epidermal proliferation at the sides and the bottom of the lesion, and significant atypia and mitotic activity were rare, while SCCs showed considerable cellular anaplasia and pleomorphism, and many displayed significant mitotic activity, unlike this case. Cribier et al. [6] evaluated 14 histological criteria, mainly based on the architecture of the tumours, to differentiate KA from SCC. An epithelial lip (as seen in this case) and sharp demarcation between tumour and stroma have been described as distinctive features of KA. However, this demarcation was focally absent in this case, increasing the difficulty in ruling out SCC. It was also suggested that atypical or difficult cases should be treated as SCC, as a clear-cut distinction is not possible in such cases; however, a clear distinction and diagnosis were made in this case [6]. Cain et al. [7] compared the clinical, histological and immunohistochemical differences using antibodies to proliferating cell nuclear antigen, and wild-type and mutant-type p53 protein, but found no significant statistical differences. However, Kerschmann et al. [8] observed that 80% of KAs showed nuclear staining with anti-p53 antibody distributed along the outermost layers of the aggregates of neoplastic cells, similar to what was observed in our case, while 60% of the SCCs were uniformly p53 positive. Mean Ki-67 proliferation fraction was higher for KA than for SCC (55% vs. 46%), but this difference was not statistically significant [8].
Connolly et al. [9] concluded that immunohistochemistry for p53 and Ki-67 may help distinguish between a subungual SCC and a subungual KA. Loss of Bcl-2 expression with tumour maturity in KA has been reported, as seen in this case [10]. The use of a panel of immunoperoxidase stains for apoptosis-associated proteins, telomerase-associated protein and the cell adhesion protein E-cadherin helped to conclude that KA has a different pathogenesis and biochemistry from that of SCC and is thus a distinct entity [1]. Desmoglein 1 and 2 have demonstrated the down regulation
Figure 1: a. Central keratin-filled crater surrounded by an epidermal lip (H&E 100×). b. Solid lobules of proliferating squamous cells in the base with abundant keratin (H&E 100×). c. Solid lobules of proliferating squamous cells in the base with lymphocytic infiltrate at the interface (H&E 400×). d. Focal irregular areas with cords of squamous cells infiltrating the underlying dermis with lymphoplasmacytic infiltrate at the interface (H&E 400×).
2017-10-19T16:51:24.422Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "4e1fc14d2db28e8c6162afdf347d7aa37914cf9c", "oa_license": "CCBY", "oa_url": "http://www.oapublishinglondon.com/images/article/pdf/1393763226.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4e1fc14d2db28e8c6162afdf347d7aa37914cf9c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
149625131
pes2o/s2orc
v3-fos-license
The Viral Impact of Emotion on Social Transmission Under Control Context This research aimed to investigate the role of emotion in Chinese social transmission through an empirical study. Using a unique data set of Southern Weekly (one of the most famous political media outlets in China) articles published over an 87-day period, we explored the influence of specific emotional attributes on the viral forwarding of online content. The results indicate that, although positive emotions generally promote communication, the relationship between emotion and social transmission is more complex than valence alone. Among the 28 emotions common to all Chinese, only 2 emotions (positive admiration and negative hate/dislike) significantly inhibited transmission. Using a special model such as the zero-inflated negative binomial (ZINB) regression, the specific emotion (i.e., hate/dislike) can help to distinguish different generating processes for zero-repost articles, which are otherwise difficult to tell apart. Taken together, these findings shed light on how emotions can predict the viral transmission characteristics of online content under an information control environment. Introduction Sharing online content has become an integral part of the lives of Internet users all over the world. The number of Chinese social media users has also grown tremendously in recent years (Weibo 340 million, WeChat 900 million, and QQ 800 million). Social networks in China such as Weibo [1] and WeChat enable the rapid spread of online content among netizens. Affective attributes do affect people's online behaviour [2] and may be influenced by culture. Sharing behavior can lead to emotional contagion, and people who are affected are more willing to actively spread content [3]. It is well known that the Chinese government publicly restricts information dissemination, and has established its own microcosm of social media in order to closely monitor and control problematic content. Many studies have demonstrated that the emotional traits of content affect whether it is shared [4]; so what impact does emotion have on online communication under China's specific information regulation and cultural background? This article focuses on how emotions affect social transmission in this Chinese context. We analyse a unique data set of 230 Commentary articles in Southern Weekly, and examine how content's valence (i.e., whether an article is positive or negative) and the specific emotions it evokes (i.e., admiration, anger/rage, etc., as shown in Table 1 below) affect its spontaneous transmission on the social media platform Weibo. This research makes several important contributions. First, most similar research in the past took place in non-Asian regions, and this article focuses on the impact of general Chinese emotions on online sharing. By combining this with a large-scale examination of real transmission in the field under a tightly controlled context, we shed light on the underlying processes that drive people to share. Second, our findings provide insight into how to design successful viral marketing campaigns and effective communication content under an information control environment. Data and Analytic Method We collected online articles from Southern Weekly between 2007 and 2011 using a web crawler (during this time period there was relatively less constraint by the government on information dissemination in the Mainland).
The retrieval yielded a total of 230 Commentary articles and the number of times they had been reposted on Weibo until 0:00 on December 31, 2014. Rating professionals were employed for the rating-based emotional attributes. As shown in Figure 1, about 40% of the count data were zeroes. To examine how specific emotions evoked by content drive social transmission, we coded the articles on two dimensions. First, we relied on human coders to classify whether the content exhibited each of 28 emotional attributes, covering almost all the social emotions commonly perceived by Chinese people. Considering the professional nature of the content [2], we hired twelve media professionals instead of students as raters (they included senior reporters, editors, and chairpersons of some newspapers). These raters were all trained and blind to our research hypotheses. The raters were asked to rate the emotional attributes independently on a binary scale after reading each article, with 1 indicating the activation of the emotional attribute and 0 otherwise. Second, for the machine-based emotional indices, the Chinese Linguistic Inquiry and Word Count (CLIWC) [9] program was adopted, which has been widely used to analyze Chinese text in psychological research [10]. We quantified positivity as the difference between the percentage of positive and negative words in an article, and emotionality as the percentage of both positive and negative words [11]. Using the CLIWC programme [12], the positivity and emotionality of each article were computed. With count data as the outcome, we employed the negative binomial (NB) regression and the zero-inflated negative binomial (ZINB) regression to analyse the data [13]. The NB distribution is a generalization of the Poisson distribution that allows overdispersed count data (i.e., variance different from the mean). The ZINB regression was used to model the probability that some zero-repost articles were different from others in nature; it combines the NB and logistic regressions to model count data with excessive zeroes that could be generated by a different process. Results Descriptive statistics of both the machine- and rating-based emotional variables can be found in Table 1. As shown, just 5 of the 28 emotions were relatively prevalent and consistent (i.e., admiration, anger/rage, pity/compassion, love, hate/dislike). Table 2 gives the results of the three NB regression models (i.e., Model 1, machine-based predictors; Model 2, rating-based predictors; and Model 3, both). There was a significant negative effect for emotionality and a significant positive effect for positivity in Model 1. Both Models 1 and 2 were nested within Model 3, which contained all emotional variables. The results of the ZINB regressions can be found in Table 3. Note that since Model 3 was not significantly better than Model 2 (Δχ² = 5.03, Δdf = 3, p = .17), Model 2 was preferred due to the principle of simplicity. It was found that the significance of the emotional attributes differed largely across the two parts of the model (i.e., basic vs. zero-inflated). In general, in the basic (count) part, Commentary articles associated with less admiration and less hate/dislike were more likely to be forwarded. In the zero-inflated part, articles evoking more hate/dislike were more likely to remain unforwarded.
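As an illustration of the NB/ZINB modelling strategy described above, the following Python sketch fits both models with statsmodels on simulated data; the predictor names mirror the rated emotions, but all values here are synthetic rather than the Southern Weekly data.

```python
# Minimal sketch: NB vs. zero-inflated NB regression on simulated repost counts.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 230
# Hypothetical binary emotion ratings (1 = emotion present in the article).
admiration = rng.integers(0, 2, n)
hate_dislike = rng.integers(0, 2, n)
positivity = rng.normal(0, 1, n)  # stand-in for a CLIWC positivity score

X = sm.add_constant(np.column_stack([admiration, hate_dislike, positivity]))

# Simulated repost counts with excess zeroes, for illustration only.
mu = np.exp(1.0 - 0.5 * admiration - 0.6 * hate_dislike + 0.3 * positivity)
reposts = rng.poisson(mu) * rng.binomial(1, 0.6, n)  # 40% structural zeroes

# Plain negative binomial model (handles overdispersed counts).
nb = sm.NegativeBinomial(reposts, X).fit(disp=False)
print(nb.summary())

# Zero-inflated NB: the logit part models "structural" zero-repost articles,
# i.e., articles generated by a different (never-forwarded) process.
zinb = ZeroInflatedNegativeBinomialP(reposts, X, exog_infl=X, inflation='logit')
print(zinb.fit(method='bfgs', maxiter=500, disp=False).summary())
```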
However, the other three emotions (i.e., anger/rage, pity/compassion and love) did not significantly affect the sharing and dissemination of online content. Taken together, among all 28 common emotions, only two emotions (i.e., admiration and hate/dislike) significantly inhibited online sharing by Chinese people. Even when the number of reposts is 0, hate/dislike significantly predicts people's suppression of social sharing in the zero-inflated part (Appendix 1). Note: only effects significant at the 10% level or better are shown. *Significant at the 5% level; **significant at the 1% level; ***significant at the 0.1% level. Discussion Is positive or negative content more viral under the Chinese information control environment? The results indicate that the relationship between emotion and social transmission is more complex than valence alone. Online content that evokes certain positive (admiration) and negative (hate/dislike) emotions is less likely to be shared spontaneously. A significant negative effect of emotionality (see Table 2) shows that more affect-laden content is less likely to be shared. Does the fact that up to 40% of articles received zero reposts (see Figure 1) mean that Chinese readers were simply not interested in them? By employing both the NB and ZINB regressions, and by linking psychological and sociological approaches to studying diffusion, the findings show that some zero-repost articles differ from others in nature, and they can help predict the spread of online content in an information control environment. In short, the naturalistic setting allows us to pay attention to the use of emotions and to measure the relative importance of content characteristics when we want to increase the likelihood that content will be highly shared in China. Taken together, we examined how the emotionality, valence, and specific emotions evoked by an article can affect its viral transmission, and these findings shed light on why people share online content and how to design more effective viral marketing campaigns in China.
2019-05-12T14:23:40.595Z
2018-08-03T00:00:00.000
{ "year": 2018, "sha1": "293c8dd53bc2fb98c875c6fdaf59d3b48bedc67f", "oa_license": "CCBY", "oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.001499.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5f467ce31d47731a81f1762d8eb7a4cf2a56cde5", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Psychology" ] }
216157266
pes2o/s2orc
v3-fos-license
Decoding the vital segments in human ATP-dependent RNA helicase An analysis of the ATP-dependent RNA helicase using known functionally close analogs helps disclose the structural and functional information of the enzyme. The enzyme performs several interlinked biological functions, and there is an urgent need to interpret its key active-site residues to infer function and establish its role. The human protein Q96C10.1 is annotated using tools such as InterPro, GO and CDD. The physicochemical properties are estimated using the tool ProtParam. We describe the enzyme protein model developed using MODELLER to identify active-site residues. We used ConSurf to estimate the structural conservation, and its evolutionary relationship is inferred using known close sequence homologs. The active site is predicted using CASTp and its topological flexibility is estimated through CABS-flex. The protein is annotated as a hydrolase using available data, and DDX58 is found to be its top-ranked interacting protein partner. We show that about 124 residues are found to be highly conserved among 259 homologs, clustered in 7 clades, with the active site showing low sequence conservation. It is further shown that only 9 loci among the 42 active-site residues are conserved, with limited structural fluctuation from the wild-type structure. Thus, we document various useful information linked to the function, sequence similarity and phylogeny of the enzyme for its annotation as a potential helicase, as designated by UniProt. The data show a limited degree of conserved sequence segments with topological flexibility, unlike other subfamily members of the protein. Availability: The constructed files/datasets analyzed in this study are available from the corresponding author on reasonable request. Background: RNA helicase is ubiquitously present in viruses, bacteria, archaea and eukaryotes, and is the largest cluster of enzymes linked with RNA metabolism [1]. Being a highly conserved enzyme, it plays a phenomenal role in the unwinding of RNA duplexes [2] and requires the hydrolysis of nucleoside triphosphates [3]. The DEAH-box protein (DHX) family members are usually located in the nucleus region. The laboratory of genetics and physiology 2 (LGP2) protein is a member of the DEAD-box protein family and belongs to the ATP-dependent RNA helicase family [4,5], known to be involved in various steps of RNA metabolism [6] with several pleiotropic functions [7]. The catalytic core of these proteins encodes 12 highly conserved motifs [8]. LGP2 is a key regulator of the interferon induced with helicase C domain 1 (IFIH1)/melanoma differentiation-associated protein 5 (MDA5) and DExD/H-box helicase 58 (DDX58)/retinoic acid-inducible gene I (RIG-I)-mediated antiviral response [9,10]. When the antiviral pathway gets perturbed, RIG-I usually initiates a cascade of deregulated events, which further causes immunological disorders [11]. It shows a significant response against several viruses including Newcastle disease virus, rhabdoviruses, Sendai virus, Lassa virus, orthomyxoviruses (influenza), Ebola virus and flaviviruses (hepatitis). While it acts against both single- and double-stranded RNA, MDA5 is active against long double-stranded RNA and recognizes picornaviruses and vaccinia viruses. Both these proteins are shown to have an active response against dengue, West Nile and Japanese encephalitis viruses [12]. Thus, helicases play key roles in regulating the innate immune responses [13].
Active research is ongoing on RNA helicases, and numerous articles have been published to date (September 14, 2019) [14,15,16,17,18,19,20,21,22]. The two databases, National Center for Biotechnology Information (NCBI) Protein and Universal Protein Resource Knowledgebase (UniProtKB), respectively contain 1335 and 422 sequences, in contrast to 154 structures listed in the Protein Data Bank (PDB). The ever-increasing sequence-structure gap for this protein makes its sequence, structure, conservation or phylogeny analysis quite elusive for the evolutionarily distinct human sequence variants. Given its key role in the regulation and control of gene expression and RNA metabolism, there are growing implications for the DHX subfamily in human diseases and their treatment [23,24]. It is of interest to report an analysis of the ATP-dependent RNA helicase using known functionally close analogs to help disclose the structural and functional information of the enzyme. Materials and Methods: For functionally characterizing the un-annotated human protein sequence, the following strategy is developed, as depicted in the flowchart in Figure 1. Figure 1: Flowchart showing the robust annotation algorithm for the human protein sequence. To ascertain the predictions, the methodology deploys the key sequence, structural and evolutionary measures. Sequence retrieval: The amino acid sequence of ATP-dependent RNA helicase (Q96C10.1) is retrieved from the UniProtKB/Swiss-Prot database. Prediction of physicochemical properties: Several features, viz. residue composition, molecular weight, theoretical pI, instability index, extinction coefficient, atomic composition, aliphatic index, and grand average of hydropathicity (GRAVY) score, are essential to define the physicochemical properties and to estimate the structural features of a protein sequence. The parameters are estimated through the ExPASy ProtParam tool (https://web.expasy.org/protparam/) [25]. Molecular modelling: To construct a near-native structure of the RNA helicase sequence, HHpred [27] is used to screen the top-ranked functionally similar protein structure(s) (templates) from the PDB database by extending the sequence profile on the basis of 5 iterative rounds [28,29]. The template 5F9FE, sharing the highest sequence similarity of 53%, is selected and the protein model is built using MODELLER 9.19 [30]. The unaligned 1-residue N-terminal and 12-residue C-terminal (667-678) segments are truncated to curate the alignment file and to construct the 2-666 residue model structure. As the predicted decoy is found to encode several atomic clashes, it is energetically relaxed/refined through 3Drefine [31], and the best model is selected on the basis of qualitative model energy analysis (QMEAN) and ERRAT scores. The model is assessed through the discrete optimized protein energy (DOPE) and GA341 scores of MODELLER. By using the QMEAN server, the predicted top-ranked model is assessed through the MolProbity score on the basis of rotamer outliers and the atomic clash score. A Ramachandran map is subsequently plotted through the PROCHECK server to assess the topological accuracy of the predicted structure on the basis of phi and psi angles.
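As a concrete illustration of the ProtParam-style estimates described above, the following minimal Biopython sketch computes the same physicochemical descriptors locally; the sequence shown is a short hypothetical placeholder rather than the actual 678-residue Q96C10.1 sequence.

```python
# Sketch of ProtParam-style physicochemical estimates with Biopython.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MASTEYLKRHGILKDQWKECFNDAAGLVR"  # hypothetical placeholder sequence
pa = ProteinAnalysis(seq)

print("Molecular weight (Da):", round(pa.molecular_weight(), 1))
print("Theoretical pI:", round(pa.isoelectric_point(), 2))
print("Instability index:", round(pa.instability_index(), 2))
print("GRAVY score:", round(pa.gravy(), 3))
# Molar extinction coefficients (reduced Cys, oxidized cystines), in M^-1 cm^-1:
print("Extinction coefficients:", pa.molar_extinction_coefficient())
```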
Functional scrutiny: The sequence is fed to the InterPro server to retrieve the information regarding the superfamily, domains, repeats and gene ontology [32]. The Conserved Domain Database (CDD) is subsequently screened to affirm the credibility of the screened domains, for purging the spurious hits/superfamilies and selecting the credible ones [33]. To estimate the interaction of the selected protein with closely-related sequences, the STRING database is used [34]. For a robustly accurate analysis, its algorithm deploys several parameters including gene fusion, gene neighborhood, gene co-occurrence, text mining, and co-expression to estimate a confidence score. The score ranges between 0 and 1 and, for all the considered features, it is expected to markedly score the closely interacting protein pairs. To localize the three most-conserved motifs, the MEME suite is used [35]. On the basis of the gapped local alignment of motifs (GLAM2) protocol, it even covers the gapped motifs [36]. The algorithm helps to identify DNA and protein sequence motifs. The default motif length range of 6-50 is used for the analysis. ProFunc (http://www.ebi.ac.uk/thornton-srv/databases/ProFunc/) is further used to estimate the biochemical functions through sequence homology against the PDB database [37]. To reliably affirm the intracellular/cytosolic locus of the human helicase protein, the hidden Markov model-based server TMHMM 2.0 (www.cbs.dtu.dk/services/TMHMM) is used [38]. PeptideCutter, a web-based tool (https://web.expasy.org/peptide_cutter/), is subsequently used to predict the location of probable cleavage sites of chemicals/proteolytic enzymes. Conservation and flexibility analysis: To reliably affirm the sequence conservation profile of the sequence, the UniProtKB/Swiss-Prot database is screened through HMMER for the selected protein [39]. With a very strict E-value inclusion cutoff of 0.00001, the sequence profile is expanded through five iterative rounds. From a total of 728 ATP-dependent RNA helicases, 259 sequences are selected. As the sequence length of experimentally solved protein structures is found to be within 600-800 residues, a sequence length filter (580-820 residues) is conservatively used, along with the removal of bifunctional proteins. Sequences are retrieved using Batch Entrez and aligned by the ClustalW module of HHpred. ConSurf is subsequently used to track the degree of conservation across the chain [40]. Deploying the constructed sequence profile, the conservation scores are statistically estimated with Bayesian probability across the chain on a scale of 1-9. To define the functional conservation across the chain, it takes input from the sequence alignment and draws phylogeny connections among the sequences to plot it over the deployed/predicted reference structure through color gradations. Surface topography is further analyzed by the Computed Atlas of Surface Topography of proteins (CASTp) to locate the active site within the modelled protein structure [41]. It locates pockets, internal cavities, and cross channels along with their surface area and volume, and reveals the functionally important sites within a protein structure. To study flexibility across the active site and derive the root mean square fluctuations (RMSF) across the cavity, the CABS-flex 2.0 server is used [42]. It estimates the flexibility/rigidity of the secondary structures and key residues of the constructed model.
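Before mapping flexibility onto the topology (next paragraph), here is a much simplified stand-in for the per-residue conservation scoring performed by ConSurf above. ConSurf rests on a Bayesian estimate of per-site evolutionary rates over a phylogeny; the sketch below only scores each column of a toy alignment by Shannon entropy, where low entropy corresponds to high conservation.

```python
# Simplified per-column conservation scoring from a multiple sequence
# alignment. The toy alignment is made up; real input would be the
# 259-sequence ClustalW alignment described above.
import math
from collections import Counter

msa = [
    "MKDEADLLRQ",
    "MKDEADILRQ",
    "MRDEADLLKQ",
    "MKDESDLLRQ",
]

def column_entropy(column: str) -> float:
    """Shannon entropy of one alignment column, ignoring gaps."""
    residues = [c for c in column if c != "-"]
    counts = Counter(residues)
    n = len(residues)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

for i, col in enumerate(zip(*msa)):
    h = column_entropy("".join(col))
    tag = "conserved" if h == 0.0 else "variable"
    print(f"position {i + 1}: entropy = {h:.2f} ({tag})")
```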
For localizing these flexible sites in correlation with the topology of the predicted model, the PolyView-2D server (http://polyview.cchmc.org/) is used [43]. Phylogeny analysis: To draw a credible evolutionary analysis, Gblocks is used for eliminating the evolutionarily divergent regions and poorly aligned segments from the constructed alignment. It removes the ambiguous regions and takes into consideration only the conserved regions to construct a phylogenetic tree. The resultant output is fed to a phylogeny server (http://phylogeny.lirmm.fr/phylo_cgi/index.cgi) to construct an evolutionary tree [44,45]. Using the minimum value of SH-like and Chi-2 based tests, it statistically assesses the evolutionary relationship of the sequence dataset. The evolutionary distances are further computed using the Jones-Taylor-Thornton (JTT) matrix method. Results: Physicochemical properties: The physicochemical properties of the ATP-dependent RNA helicase are estimated through ProtParam. For the 678-residue sequence, the molecular weight is estimated to be 76.6 kDa. The sequence encodes 75 negatively and 73 positively charged residues, and it indicates that the protein is somewhat negatively charged. The theoretical pI is estimated to be 6.98, and it exhibits a slightly acidic nature. The extinction coefficient and in-vitro half-life of the protein are respectively estimated to be 66,350 M⁻¹ cm⁻¹ and 30 hours. The molecular formula is shown to be C3365H5391N983O994S34, and it shows a GRAVY score of -0.294. Secondary structure prediction: The secondary structure elements define a protein structure, and their encoded fractions play a key role in designing various bioanalytical experiments. Using PSIPRED, the fractions of α-helix, coil and β-strands are respectively estimated to be 46.6%, 37.4% and 15.9% (Figure 2A), indicating a substantial predominance of α-helix over the remaining elements. The estimated secondary structures are marked across the chain, along with their statistical confidence (Figure 2B). Through the STRING database, a resource of known and predicted protein-protein interactions, the top ten potentially interacting partners are screened. The server ranks the functionally associated partners through an integrated confidence score by genome-wide network connectivity, and the ten partners show a score higher than 0.83. The protein DHX58 is identified to be an ATP-dependent RNA helicase. It lacks the caspase activation and recruitment domain (CARD) and has a role in RIG-I- and MDA5-mediated signaling against infectious viruses or targeted cells. The predicted network of interacting protein partners shows a significantly higher confidence score of 0.964 for DDX58, an innate immune receptor. The network is constructed by retrieving data through co-expression and published experimental results, through text-mining and extensive database screening. Further, the low-ranked partner 2'-5'-oligoadenylate synthetase-like protein (OASL) shows a score of 0.835 (Figure 4). The proteins are known to actively participate in the immunological network of cellular proteins. The three most conserved motif sites, with E-value scores of 1e-2683, 1.9e-2409 and 5.9e-1728, are found for the ATP-dependent RNA helicase using the MEME server (Figure 5). The size of each logo character represents the evolutionary conservation of an amino acid at a specific site.
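For clarity, element fractions such as the 46.6% helix content reported above can be tallied directly from a PSIPRED-style per-residue prediction string, as in this small sketch with a made-up string.

```python
# Tallying secondary structure fractions from a per-residue prediction.
# H = helix, E = strand, C = coil; the string below is a toy example,
# not the real PSIPRED output for Q96C10.1.
ss = "CCCHHHHHHHHCCCEEEEECCCHHHHHHCCEEECCC"
total = len(ss)
for label, name in (("H", "alpha-helix"), ("E", "beta-strand"), ("C", "coil")):
    frac = 100.0 * ss.count(label) / total
    print(f"{name}: {frac:.1f}%")
```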
The results reveal that the DEAD motif, associated with ATP binding and hydrolysis, is encoded at positions 4-7 of the third motif. TMHMM predicts the location of transmembrane, intracellular, and extracellular regions, and it indicates that the ATP-dependent RNA helicase is an extracellular protein (Figure 6). Further, to find the cleavage sites of extracellular digestive enzymes, including caspase, trypsin, thermolysin, pepsin, and proteinase K, the PeptideCutter server is used. No cleavage sites are found for the caspase upstream and downstream enzymes, signifying programmed cell death. However, 334 cleavage sites are found for proteinase K, an enzyme responsible for the degradation of nucleases.
Molecular modeling: An HMM profile is constructed through HHpred for the selected sequence, and 5F9FE is found to be the top-ranked template structure. It shares 53% similarity and completely spans the target sequence. On the basis of secondary structure features estimated by PSIPRED, the sequence alignment is manually curated, and the selected sequence is modelled using MODELLER 9.19, as per the strategies discussed earlier [46,47]. For resolving non-physical atomic clashes, the predicted structure is iteratively refined through 3Drefine to extensively sample its conformational space. The refined structure shows a credible TM-score and Cα-RMSD of 0.96442 and 1.03 against 5F9FE, respectively. The model shows an ERRAT score of 94.1807, affirming the non-bonded interaction network in the model. The constructed decoy shows DOPE and GA341 scores of -77915.687 and 1.00, respectively. While the latter score indicates structural compactness, the former energetic measure confirms the near-native credibility of the predicted model. As shown in Figure 7, 90.70% and 7.50% of residues are found to be localized within the most-favored and additionally allowed regions of the Ramachandran map, plotted through PROCHECK, as detailed in Table 1 [48]. Assessed through the QMEAN server, the model shows a clash score, rotamer outlier percentage, and MolProbity score of 2.65, 0.52%, and 1.48, respectively. This affirms the local and global accuracy and suggests that the topological accuracy of the predicted decoy is comparable to a medium-resolution crystallographic structure.
Conservation and flexibility analysis: The conservation level, indicated by color gradation with maroon, white, and turquoise respectively representing the higher, medium, and lower orders of sequence conservation, is mapped onto the surface of the constructed protein model (Figure 8). The analysis reveals an average pairwise distance score of 1.49264, within the range of 1.01758e-07 to 3.2305, across the entire sequence length. While only 68 residues are found to be completely conserved, 124, 308, and 178 loci are found to be highly, moderately, and poorly conserved, respectively.
Figure 8: ConSurf-derived conservation analysis of the human ATP-dependent RNA helicase. Color-coding is used to mark the evolutionary rate of residues over the predicted model. Low, mean, and high evolutionary variability are marked as maroon, white, and turquoise, respectively.
For an input protein structure, CABS-flex outputs an ensemble-derived, atomic-resolution profile representing the flexibility of the structure. As the functionality of a protein is dependent on its topological flexibility, it is mandatory to map such vital sites across the protein sequence.
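The comparative-modelling step can be expressed in a few lines of MODELLER's Python API. The sketch below assumes a curated PIR alignment file (the filename 'helicase_5f9fe.ali' is hypothetical) pairing the target with template 5F9FE; scoring with DOPE and GA341 matches the assessment reported above. MODELLER 9.19 syntax is used:

from modeller import environ
from modeller.automodel import automodel, assess

env = environ()
env.io.atom_files_directory = ['.']  # directory holding the 5F9FE template PDB

# 'helicase_5f9fe.ali' is a hypothetical filename for the manually curated
# PIR alignment of the target against template 5F9FE described in the text.
a = automodel(env,
              alnfile='helicase_5f9fe.ali',
              knowns='5f9fe',       # template code used in the alignment
              sequence='helicase',  # target code used in the alignment
              assess_methods=(assess.DOPE, assess.GA341))

a.starting_model = 1
a.ending_model = 5  # build five decoys; keep the best-scoring one
a.make()

# Each generated model carries its DOPE and GA341 scores; pick the lowest DOPE.
ok = [m for m in a.outputs if m['failure'] is None]
best = min(ok, key=lambda m: m['DOPE score'])
print(best['name'], best['DOPE score'], best['GA341 score'])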
Overlapping the sequence conservation map of this protein with the active-site cavity further illustrates that the core cavity is not highly conserved. Structural mapping through Polyview-2D further shows that some flexible loci are significantly conserved, delineating that these residues are essential for protein function. To analyze the residue flexibility scores in correlation with the secondary structure and sequence conservation of the residues, the results are overlapped and the average structural fluctuations are marked with a red line (Figure 10). The average and standard deviation of the RMSF scores for these loci are found to be 0.781 and 0.495, respectively, against the respective scores of 0.804 and 0.593 for the complete structure, indicating that the active site is somewhat more structurally stabilized. However, only 9 residues (L24, A28, K30, Q256, R285, T438, S439, G444, and L459) are found to be conserved, showing that flexibility is natively vital for only a few residues.
Phylogeny analysis: The 259-sequence dataset is aligned through ClustalW and curated by eliminating poorly aligned positions and divergent regions. The Gblocks server is used to select the informative positions of the sequences. The dataset shows a mutual sequence identity within the range of 13. (Figure 11).
Figure 10: CABS-flex-estimated RMSF scores defined in correlation with their secondary structure and sequence conservation (marked with red superficial bars) for the ATP-dependent RNA helicase. Several flexible residues are found to be evolutionarily conserved across the topologically important secondary structure elements.
Figure 11: Phylogenetic tree of 259 sequences comprising 7 clades, in which species within a clade are more closely related to each other than to species in another clade.
The predicted interaction network (Figure 4) strongly indicates a potential role of the interacting partners in the immune signaling mechanism of DHX58. Only the motif segments are shown to be highly conserved, in contrast to significant variability across the N- and C-terminal domains, which are chiefly responsible for interacting with a diverse set of proteins. However, our ConSurf analysis shows statistically higher conservation for 168 residues through the HMM profile of the constructed 259-sequence dataset, as mapped onto the predicted near-native structure of DHX58 (Figure 8). Further, as estimated through CABS-flex (Figure 10), the structural fluctuations are found to be highest for some terminal residues, although the model shows significant structural fluctuation across the chain. Although this is in agreement with earlier results [58,59], it suggests that the fluctuation of the key residues could have a vital functional role. The evolutionary study shows a 7-clade distribution of the constructed 259-sequence dataset, and each clade is found to span sequences from all the available species. However, a structural-superimposition study of the DEAD domains of DDX2A, DDX2B, DDX5, DDX10, DDX18, DDX20, DDX47, DDX52, and DDX53, and the helicase domains of DDX25 and DDX41, shows a Cα-RMSD within 0.6-1.9 Å over a diverse sequence identity range of 27%-86% [60]. The study thus adds to the details reported earlier and implicates that these protein structures are robustly conserved over sequence alterations. Besides the interaction with a few molecules such as β-catenin, a protein involved in gene transcription [61], the active sites of helicases have not been extensively explored [62].
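The active-site versus whole-chain RMSF comparison reported above is a straightforward summary statistic over the CABS-flex profile. A minimal sketch, assuming the per-residue RMSF values have been exported to a text file (one value per residue; the filename is hypothetical):

import numpy as np

# Hypothetical export of the CABS-flex per-residue RMSF profile
# (one float per line, ordered by residue number).
rmsf = np.loadtxt("cabsflex_rmsf.txt")

# CASTp-derived active-site residue numbers (1-based); the nine conserved
# loci reported in the text are listed here as an illustration.
active_site = [24, 28, 30, 256, 285, 438, 439, 444, 459]
site_rmsf = rmsf[np.array(active_site) - 1]

print(f"active site : mean={site_rmsf.mean():.3f}  sd={site_rmsf.std(ddof=1):.3f}")
print(f"whole chain : mean={rmsf.mean():.3f}  sd={rmsf.std(ddof=1):.3f}")
# A lower active-site mean/SD, as reported (0.781/0.495 vs 0.804/0.593),
# indicates a comparatively stabilized catalytic cavity.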
To extend this further, it is observed that the enzyme encodes a set of 42 active-site residues, of which 9 are found to be conserved. The active site shows a lower topological fluctuation than the overall structure. Thus, the presented analysis provides a reliable framework for a more detailed evolutionary and structural analysis of the ATP-dependent RNA helicase.
Conclusion: It is of interest to annotate the human protein Q96C10.1 using known data to model its structure and infer function with a potential role in the pathway. We document that ten proteins, including DDX58 and OASL, are potential interacting protein partners of Q96C10.1. A dataset of 259 functionally similar homologs shows an evolutionary clustering within seven clades and shows conservation of only 9 active-site residues. It is inferred that the active-site residues are not highly conserved, consistent with the correspondingly low structural similarity among these enzyme proteins.
Authors' contributions: AR planned and supervised the experimental methodology. VK and AK carried out the work and took the lead in writing the manuscript. All authors read and approved the final manuscript.
An Analysis of Online Evaluations on a Physician Rating Website: Evidence From a German Public Reporting Instrument
Background: Physician rating websites (PRW) have been gaining in popularity among patients who are seeking a physician. However, little evidence is available on the number, distribution, or trend of evaluations on PRWs. Furthermore, there is no published evidence available that analyzes the characteristics of the patients who provide ratings on PRWs.
Objective: The objective of the study was to analyze all physician evaluations that were posted on the German PRW, jameda, in 2012.
Methods: Data from the German PRW, jameda, from 2012 were analyzed and contained 127,192 ratings of 53,585 physicians from 107,148 patients. Information included medical specialty and gender of the physician; age, gender, and health insurance status of the patient; as well as the results of the physician ratings. Statistical analysis was carried out using the median test and Kendall Tau-b test.
Results: Thirty-seven percent of all German physicians were rated on jameda in 2012. Nearly half of those physicians were rated once, and less than 2% were rated more than ten times (mean number of ratings 2.37, SD 3.17). About one third of all rated physicians were female. Rating patients were mostly female (60%), between 30-50 years (51%), and covered by Statutory Health Insurance (83%). A mean of 1.19 evaluations per patient could be calculated (SD 0.778). The most frequently rated medical specialties were orthopedists, dermatologists, and gynecologists. Two thirds of all ratings could be assigned to the best category, "very good". Female physicians had significantly better ratings than did their male colleagues (P<.001). Additionally, significant rating differences existed between medical specialties (P<.001). It could further be shown that older patients gave better ratings than did their younger counterparts (P<.001). The same was true for patients covered by private health insurance; they gave more favorable evaluations than did patients covered by statutory health insurance (P<.001). No significant rating differences could be detected between female and male patients (P=.505). The likelihood of a good rating was shown to increase with a rising number of both physician and patient ratings.
Conclusions: Our findings are mostly in line with those published for PRWs from the United States. It could be shown that most of the ratings were positive, and differences existed regarding sociodemographic characteristics of both physicians and patients. An increase in the usage of PRWs might contribute to reducing the lack of publicly available information on physician quality. However, it remains unclear whether PRWs have the potential to reflect the quality of care offered by individual health care providers. Further research should assess in more detail the motivation of patients who rate their physicians online.
(J Med Internet Res 2013;15(8):e157) doi: 10.2196/jmir.2655
Introduction
In many health care systems, quality of care improvement strategies have been implemented over the last few years [1]; nevertheless, quality deficits still remain [2-4]. Several studies have further shown remarkable variability in quality of care across health care providers [1,5-7]. However, patients are not likely to be generally aware of existing quality differences [8,9]. One reason for this is the limited amount of publicly reported information on the quality of health care providers [10].
It has become a major challenge to remedy this deficiency by improving transparency about the quality of health care providers [10,11]. This is supposed to increase overall quality by steering patients to better performing health care providers [12,13] and by motivating providers to make quality improvements [9,14]. Therefore, public reporting (PR) instruments have been put in place in many countries [15-22]. These instruments generally assess the quality of care by measuring adherence to clinical guidelines and by providing additional structural information [11]. However, patients have been slow to take advantage of these comparative reports in making their health care provider choices [9]. Possible reasons for this might be found in the fact that patients are not aware of the information, do not understand it, do not believe it, or are unwilling or unable to use the information provided [23].
The newest trend in the PR movement is the use of physician rating websites (PRWs) [24]. The primary objective of these websites lies in rating and discussing physician quality online by using user-generated data [25,26]. Although the usefulness of PRWs has been viewed critically from a scientific point of view [24], their popularity among patients has been increasing [24,27,28]. In contrast to traditional PR instruments, PRWs might have the advantage that the information can be more easily understood by patients. While traditional instruments report on measures such as the administration of beta blockers or angiotensin-converting enzyme inhibitors, which require a higher level of clinical knowledge than most patients have [8], PRWs concentrate on measuring patient satisfaction [24].
Although there is a vast amount of evidence regarding traditional PR instruments, little research has addressed PRWs [25]. A recently conducted systematic review identified 9 articles published in peer-reviewed journals [25]. In them, the number, distribution, and trend of the evaluations on PRWs were investigated [11,27-34]. Most of the investigations evaluated ratings for a (non)random sample of physicians, while 1 study assessed over 386,000 national ratings from 2005 to 2010 from the US PRW, RateMDs. Furthermore, there is no published evidence available that analyzes the characteristics of the patients who provide ratings. In this context, this paper adds to the literature by presenting an analysis of all physician evaluations posted on the German PRW, jameda, in 2012. Thereby, we provide descriptive analysis of (1) both physician and patient characteristics, and (2) the number, distribution, and results of the ratings. Analytical analyses were applied to assess (3) the impact of physician and patient characteristics on the overall performance measure, and (4) the correlation between the number of ratings per patient/physician and the overall performance.
Analysis of Jameda
This paper presents an analysis of all 127,192 physician evaluations that were posted on the German PRW, jameda, in 2012. In total, 107,148 patients completed evaluations on 53,585 physicians. The dataset contained the following information: the medical specialty and gender of the physician, as well as the gender, age, and health insurance status of the patient. Additionally, the results of the physician ratings for all mandatory and optional questions were included.
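Given a flat export of the rating records, the per-physician and per-patient counts reported below (Tables 1 and 2) reduce to simple group-by operations. A minimal sketch with pandas, using hypothetical file and column names rather than the actual jameda export:

import pandas as pd

# Hypothetical flat file: one row per rating, with (assumed) columns
# physician_id, patient_id, overall_score (1 = very good ... 6 = insufficient).
ratings = pd.read_csv("jameda_2012_ratings.csv")

per_physician = ratings.groupby("physician_id").size()
per_patient = ratings.groupby("patient_id").size()

print(f"ratings/physician: mean={per_physician.mean():.2f} sd={per_physician.std():.2f}")
print(f"ratings/patient  : mean={per_patient.mean():.2f} sd={per_patient.std():.2f}")

# Distribution of ratings per physician (cf. Table 2)
bins = pd.cut(per_physician, bins=[0, 1, 5, 10, 50, per_physician.max()],
              labels=["1", "2-5", "6-10", "11-50", ">50"])
print(bins.value_counts(normalize=True).sort_index())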
The mandatory physician rating system on jameda consists of 5 questions, rated according to the grading system in German schools on a 1-6 scale (1=very good; 2=good; 3=satisfactory; 4=fair; 5=deficient; and 6=insufficient) [35]. These relate to (Q1) satisfaction with the treatment offered by the physician, (Q2) education about the illness and treatment, (Q3) the relationship of trust with the physician, (Q4) the time the physician spent on the patient's concerns, and (Q5) the friendliness of the physician. A mean score ("overall performance") is calculated based on the results of these 5 questions. Beyond that, a narrative commentary has to be given, and 13 optional questions are available for answering (these are not addressed in this paper) [36]. We focused on jameda because it is likely to play the most significant role in the German PRW movement for the following reasons: (1) from a patient's perspective, jameda is the PRW to which a patient is most likely to be referred [24,31], (2) jameda is ranked highest in traffic among German PRWs [34], and (3) among German PRWs, jameda has been shown to contain the largest number of ratings so far [37].
Statistical Analysis
All statistical analyses were conducted using SPSS 21.0 (SPSS for Windows, version 21.0). The median test was used for nonparametric data of groups with different distributions. The Kendall Tau-b test was used to analyze specific correlations. Differences were considered significant if P<.05 and highly significant if P<.001.
Number and Distribution of Ratings
In total, 127,192 ratings of 53,585 physicians from 107,148 patients were posted on the PRW, jameda, in 2012. The German outpatient sector consists of approximately 146,000 physicians [38]; thus, 37% were rated in 2012. As displayed in Table 1, about one third of all rated physicians were female (34.1%). The rating patients were mostly female (60%), between 30-50 years (51%), and covered by Statutory Health Insurance (83%). The distribution of ratings demonstrates that nearly half of the physicians were rated once and less than 2% were rated more than ten times (see Table 2). Thereby, rated physicians had a mean of 2.37 individual ratings (SD 3.169, range 1-159). It could further be shown that 88% of the patients left a single rating and 12% of them left between two and five ratings. This leads to an average of 1.19 rated physicians per patient (SD 0.778, range 1-153). If the ratings are analyzed according to the medical specialty of the physicians in absolute terms, family physician/general practitioner, internist, and gynecologist were rated most often (13,466, 8709, and 6410 ratings, respectively) (see Table 3; [38,39]). In contrast, laboratory specialist, nuclear medicine, and child and youth psychotherapist were rated least frequently (13, 136, and 166 ratings, respectively). The distribution of ratings in relative terms, compared to the national physician composition, shows that the most frequently rated medical specialties were orthopedists, dermatologists, and gynecologists (59.20%, 58.90%, and 56.90%, respectively). In contrast, the least frequently rated medical specialties were radiologists, anesthetists, and laboratory specialists (10.40%, 7.90%, and 2.10%, respectively).
Evaluations
Table 4 shows the evaluation results of all 53,585 rated physicians (as they are displayed on the website). It can be shown that two thirds of all evaluations were assigned to the best rating category, "very good". An additional 13% of patients rated their experience with the physician as "good".
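The nonparametric group comparisons described under Statistical Analysis can be reproduced outside SPSS. As a sketch, SciPy's Mood's median test plays the role of the median test used here; the score vectors below are hypothetical toy data, not the study's dataset:

from scipy.stats import median_test

# Hypothetical overall-performance scores (1 = very good ... 6 = insufficient)
# for two groups of physicians; in the study these came from the jameda export.
scores_female = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4]
scores_male   = [1, 2, 2, 1, 3, 2, 1, 5, 2, 1]

stat, p_value, grand_median, table = median_test(scores_female, scores_male)
print(f"grand median = {grand_median}, chi2 = {stat:.3f}, P = {p_value:.3f}")
# The study reports P < .001 for the female/male physician comparison on the
# full dataset; the toy vectors above are for illustration only.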
Three percent of the physicians were rated with the worst score, "insufficient", in their overall performance. The median result of all questions was "very good", while the mean varied between 1.68 for question 5 (friendliness of the physician) and 1.85 for question 3 (relationship of trust with the physician). An analysis was performed to ascertain whether differences in the rating of a physician could be determined regarding both physician characteristics (ie, gender and medical specialty) and patient characteristics (ie, gender, age, and health insurance). The results are displayed in Table 5. They show that female physicians were rated better than their male colleagues and that the difference is statistically significant (the percentage of rated physicians below median is 61% for female and 59% for male physicians; P<.001). Furthermore, significant rating differences between medical specialties could be demonstrated (P<.001). The best rated medical specialties were laboratory specialists, anesthetists, medical practitioners without specialization, and family physicians/general practitioners (85%, 76%, 74%, and 70% below median, respectively). The lowest ratings were given to neurologists/psychiatrists, ophthalmologists, orthopedists, and dermatologists (including venereologists) (47%, 45%, 35%, and 35% below median, respectively). With respect to patient characteristics, no significant rating differences between female and male patients could be detected (percentage below median is 59% in each group; P=.505). However, it could be shown that older patients gave better ratings than did their younger counterparts (P<.001). Additionally, patients covered by private health insurance gave more favorable evaluations than did patients covered by statutory health insurance (P<.001).
Next, the correlation between the mean overall performance of a physician and the number of ratings per physician was addressed. As displayed in Figure 1, the total performance range can be observed for physicians with a low number of ratings. By contrast, physicians who received a higher number of ratings were shown to have better ratings (eg, all physicians with more than 60 ratings were rated as "very good"). As a result, the correlation between the mean overall performance of a physician and the number of ratings per physician could be shown to be statistically significant (Kendall Tau-b=0.193, P<.001). This is also true for all five mandatory questions (P<.001; data not presented here). We further investigated whether similar results could be detected for the number of ratings per patient compared to the mean overall performance given by that patient. The result is displayed in Figure 1 and shows a similar correlation (Kendall Tau-b=0.178, P<.001).
Principal Findings
In this section, the results obtained in this investigation are compared to published studies, mostly from the United States. The evidence from this investigation shows that 37% of physicians in the German outpatient sector were rated on jameda in 2012. This number exceeds those from previously published international studies. For example, Gao and colleagues showed that 16% of US physicians received an online review on RateMDs in the period between 2005 and 2010 [27]. Lagu et al reported that out of 300 Boston physicians, 27% had been rated [11], while Mostaghimi et al calculated percentages of between 0.4% and 21% for a sample of 250 randomly selected internal medicine physicians [33].
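The rank correlation between rating volume and mean overall performance (Kendall Tau-b = 0.193 for physicians, 0.178 for patients) can be checked with SciPy, whose kendalltau implementation computes the tau-b variant by default. A minimal sketch on hypothetical per-physician aggregates:

from scipy.stats import kendalltau

# Hypothetical per-physician aggregates: number of ratings received and the
# mean overall performance (lower = better on the 1-6 German school scale).
n_ratings  = [1, 1, 2, 3, 5, 8, 12, 20, 35, 60]
mean_score = [3.0, 1.0, 2.5, 2.0, 1.8, 1.5, 1.4, 1.2, 1.1, 1.0]

tau, p_value = kendalltau(n_ratings, mean_score)
print(f"Kendall tau-b = {tau:.3f}, P = {p_value:.4f}")
# Note: the sign of tau depends on how performance is coded (German grade
# scale, lower = better, versus a reversed "performance" axis). The study
# reports tau-b = 0.193 (P < .001) on the full 2012 dataset.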
In a sample of 500 randomly selected US urologists, the percentages varied between 0.4% and 53.6% [40]. Published results for German PRWs reported percentages of between 3.36% and 25.78% in 2009 [31] and between 3% and 28% in 2012 [34]. However, it is worth mentioning here that direct comparison is difficult, because data from one year were analyzed in this investigation, whereas most studies use ratings for a sample of physicians without any time constraints. It could also be shown that rated physicians had a mean of 2.37 individual ratings (SD 3.169, range 1-159). Published results for the US PRW, RateMDs, were quite similar, at 2.7 [30] and 3.2 [27], respectively. More recent US studies determined numbers of 2.35 [11] and 2.4 [40], while results for German PRWs were reported to be between 1.1 and 3.9 [34]. The number decreases to 0.87 when regarding all physicians in the German outpatient sector in 2012. This is slightly higher than the results obtained by Lagu and colleagues (mean 0.63) [11]. Nearly half of the physicians were rated only once, and 44% received between 2 and 5 ratings in this study. Less than 2% were rated more than 10 times and 0.1% more than 50 times. These numbers are in line with the results obtained by analyzing the ratings provided for 2010 on RateMDs. In that case, half of the physicians had a single rating and the percentage of physicians with 5 or more ratings was 12.50% [27]. Of 250 randomly selected physicians in Boston, 50 physicians (20%) had between 1 and 4 reviews on Healthgrades, 13 physicians (5.2%) on RateMDs, and 1 physician (0.4%) on Wellness. Only 3 physicians had more than 5 reviews on any of the rating sites [33]. About one third of all rated physicians on jameda were female. This is consistent with both the gender composition of physicians in Germany (female national average 40% [38]) and with the results of Gao and colleagues [27]. If the ratings are analyzed according to the medical specialty in relative terms (ie, compared to the national physician composition), the numbers are again confirmed by other study results. For example, Gao and colleagues showed that rated physicians were most likely to be classified as obstetrician/gynecologists and least likely to be classified as other specialists such as radiologists or anesthesiologists [27]. In this study, almost 80% of all evaluations could be assigned to the two best rating categories. Less than 3% of the physicians were rated with the worst score, "insufficient". These results are in line with most other studies: Lagu and colleagues categorized 88% of quantitative reviews as positive, 6% as negative, and 6% as neutral [11]. On RateMDs, 45.80% of the physicians received the best score and only 12% were rated with the worst score [27]. Kadry et al assessed the 10 most commonly visited US PRWs and found that the percentage of reviews rated ≥75 on a 100-point scale was 61.5%, ≥4 on a 5-point scale was 57.74%, and ≥3 on a 4-point scale was 74.0% [32]. On the Canadian PRW RateMDs, 70% of the comments were reported to be favorable and about 30% of the comments were negative [41]. In the sample of 500 randomly selected US urologists, 86% had positive ratings [40]. Moreover, the median result of all questions in this study was "very good". The means varied between 1.68 concerning the friendliness of the physician (question 5) and 1.85 regarding the relationship of trust with the physician (question 3).
In their study, Kadry et al determined the average rating to be 77 out of 100 for sites using a 100-point scale, 3.84 out of 5 for sites using a 5-point scale, and 3.1 out of 4 for sites using a 4-point scale [32]. For the US RateMDs, the mean scores were reported to be 3.93 [27] and 3.82 [30] on a 5-point scale, respectively. Finally, a comprehensive analysis of German PRWs showed the mean ratings to be between 1.1 and 1.5 (3-point scale, 1 "good", 3 "poor") [34]. The results of this study suggest that female physicians receive better ratings than do their male colleagues. The difference is small but statistically significant (P<.001). Better ratings for female physicians were also determined by Ellimoottil and colleagues (P=.72) [40]. However, this is in contrast to the results obtained by Gao and colleagues, who showed that male physicians received higher ratings than did female physicians (P<.001) [27]. But the differences in all three studies were shown to be quite small. We can further demonstrate significant rating differences among the analyzed medical specialties. Of these, the best rated were laboratory specialists, anesthetists, medical practitioners without specialization, and family physicians/general practitioners. The lowest ratings were given to neurologists/psychiatrists, ophthalmologists, orthopedists, and dermatologists. In line with the numbers obtained in this study, higher ratings were shown for physicians in primary care [27] and lower ratings for physicians in dermatology [30]. However, in another study, primary care physicians were rated as average [30]. Lagu et al found a similar percentage of positive, negative, and neutral quantitative reviews for generalists and subspecialists. They then concluded that, after accounting for the varying number of reviews per physician, generalists tended to have more positive reviews than did subspecialists [11]. This is the first study that allows for a closer analysis of the patients who rate their physicians. Approximately 73% of all patients provided information regarding gender, age, and health insurance. According to our results, most of the rating patients were female (60%) and were covered by Statutory Health Insurance (83%). One other notable fact could be shown: patients in the youngest age group (<30) made fewer ratings than did older patients. Whether or not this is due to more severe illness with increasing age cannot be assessed with these data. However, this question should be addressed in future research. The fact that hardly any patients leave more than a single rating (mean 1.19 rated physicians per patient) can be regarded as even more surprising. One might expect that once they were aware of the existence of such websites, patients would use them constantly in an active (ie, rating physicians) or passive (ie, only searching for physicians) manner, especially to assist other patients with information when seeking a physician. However, we could not investigate the motivation behind the patients' ratings, nor could we assess the reasons for not regularly rating physicians. Considering the mean of 14 [42] to 17 [43] physician contacts among Germans with statutory health insurance, there is still high potential for even more ratings. The fact that patients covered by private health insurance give more favorable ratings than do patients covered by statutory health insurance is not surprising, since they were found to have faster access to care [44]. This might well have had an effect on the rating differences.
Whether quality of care differences can be determined between the two groups, and whether this leads to rating differences, should be addressed in future studies. It could be shown that there is a significant correlation between the mean overall performance rating of a physician and the number of ratings received for that physician (P<.001). One possible explanation for this finding might be that physicians who are aware of these websites and use them as a marketing instrument may specifically ask satisfied patients to leave a (positive) rating on a PRW. Another explanation might be that some physicians, who are identified by patients on PRWs, simply provide outstanding quality of care and receive favorable ratings afterwards. Although our results prove that there is a significant correlation between these variables, we cannot prove which assumption is true. This should be addressed in further studies, which should contain additional information about the physicians.
Limitations
There are some limitations that have to be taken into account when interpreting the results of this investigation. First, we analyzed online ratings from only a single PRW, jameda. Although jameda has been shown to be the most frequently used German PRW, it is possible that other PRWs have more online reviews or show other results. Second, the data provided allowed for comprehensive analysis; however, there was no information available on the age of the physician, malpractice claims, or the medical school attended. This information would have allowed further analysis. Third, we were not able to present analysis conducted over a longer period of time. However, the data do reflect the entire year 2012. Fourth, we did not analyze results presented in narrative comments. Finally, there was no chance to verify the validity of the analyzed reviews. Therefore, it cannot be guaranteed that the ratings were not subject to manipulation [27].
Conclusions
Finally, it can be stated that there is a limited amount of publicly reported information on the quality of health care providers. To increase transparency, different approaches have been developed. There are traditional PR instruments that focus on adherence to evidence-based guidelines. Thus, they may have the potential to reflect the clinical quality of care provided by a health care professional. However, these instruments have not yet proven to be a meaningful measure for patients. In contrast, PRWs concentrate on patient satisfaction measures. Whether or not these results have the potential to reflect the quality of care provided by a health care professional should be addressed in future research as well. Since increasing usage of these websites has already been shown [24,27,28], PRWs might contribute to reducing the lack of publicly available information on quality, at least for those physicians who have been rated. Given that only a certain number of physicians have been rated so far, there is still no perfect transparency. However, given the increasing number of ratings on PRWs, the future impact for patients seeking a physician will continue to rise.
Cancer immunity and therapy using hyperthermia with immunotherapy, radiotherapy, chemotherapy, and surgery
Hyperthermia is a type of medical modality for cancer treatment using the biological effect of artificially induced heat. Even though the intrinsic effects of elevated body temperature in cancer tissues are poorly understood, increasing the temperature of the body has been recognized as a popular therapeutic method for tumorous lesions as well as infectious diseases since ancient times. Recently accumulated evidence has shown that hyperthermia amplifies immune responses in the body against cancer while decreasing the immune suppression and immune escape of cancer. It also shows that hyperthermia inhibits the repair of damaged cancer cells after chemotherapy or radiotherapy. These perceptions indicate that hyperthermia has potential for cancer therapy in conjunction with immunotherapy, chemotherapy, radiotherapy, and surgery. Paradoxically, the anticancer effect of hyperthermia alone has not yet been adequately exploited, because deep-heating techniques and devices that aggregate heat effects only in cancer tissues are difficult in practical terms. This review article focuses on the current understanding of cancer immunity and the involvement of hyperthermia in the innate and adaptive immune systems. The potential for combination therapy with hyperthermia and chemotherapy, radiotherapy, and surgery is also discussed.
INTRODUCTION
Cancer is one of the most fatal diseases in the world, inducing various conditions such as organ disorders from primary or metastatic lesions and cachexia. The three standard therapeutic methods are surgery, chemotherapy, and radiotherapy; however, these are not satisfactory on the whole. Difficulties in treating cancer are due to its distinctive abilities for immune escape, metastasis, and tolerance to cancer therapies. These abilities result from the heterogeneity of cancer cells [1], and the anticancer efficacy of conventional treatment methods has limitations. Recent progress has been made in cancer immunotherapy, a fourth cancer therapeutic method, including activated T-cell therapy and dendritic cell (DC) vaccines; however, their therapeutic efficacy is still limited [2,3]. The more recent discovery of immune check-point inhibitors has demonstrated sensational long-term benefits in patients with advanced cancer and has highlighted the importance of immune responses to cancer, but its efficacy has been recognized only in a minority of patients [4,5]. Hence, current therapeutic methods need to be improved, or new therapeutic cancer therapies developed for single or combination use. Recently, many reports have shown that an appropriate heat effect has potential anticancer efficacy and can enhance the efficacy of other cancer therapies. Nonetheless, fever itself is a complex physiological response [6], and the intrinsic effects of elevated body temperature are regarded as an important defense system for the body, increasing the immune reaction not only against infectious disease but also against cancer.
Hyperthermia
Even though the intrinsic effects of elevated temperature on tissues and cells are still being studied, it has been disclosed that the survival rate of cells is reduced by heating at 39-42 °C, and this is amplified remarkably by heating at ≥ 42.5 °C for ≥ 1 h. There is no variation in tolerance between tissue types [7,8]. Hyperthermia is a type of cancer treatment that uses this feature to target cancer cells and their surrounding environment [9]. In the early days, hyperthermia alone at 42-44 °C was performed against recurrent tumors derived from head and neck cancer and breast cancer, which appeared on the surface of the body. The objective antitumor response in this set of superficial tumors was around 40% [10-12]. However, most cancers, including primary sites as well as recurrent or metastatic sites, are located deep inside the body. This makes hyperthermia alone less effective, because it is quite difficult to heat only cancer tissues to more than 42 °C using currently available heating devices. Recently, the usefulness of mild hyperthermia at 39-42 °C (fever-range hyperthermia) for 1-2 h has been reported for combination use with other cancer therapies [13]. This method takes advantage of the difference in sensitivity to heat stress between normal tissues and cancerous tissues. The logic behind its use is that normal tissues have enough vascular distribution to drain the congestion of fever and avoid tissue damage in these shorter time periods. In contrast, in cancerous tissues, fever and heat stress tend to accumulate. Consequently, an anticancer effect can be obtained within the fever range while normal tissues endure [14]. Nevertheless, irradiation for a long period at a temperature higher than body temperature until the cancer is eliminated is still harmful to normal tissues and the homeostasis of the body.
In widespread use, the term hyperthermia generally includes regional hyperthermia, whole-body hyperthermia, and hyperthermic intraperitoneal chemotherapy (HIPEC). Ablation therapy, which uses microwaves or a laser at 80-100 °C, leading to direct cancer cell death by heat denaturation of proteins or necrosis [15], may also be categorized as hyperthermia in the broad sense. Regional hyperthermia is a less invasive method of thermal therapy. In this method, heat effects are limited to the range of irradiation and have an expected role as a chemosensitizer or radiosensitizer used to augment the efficacy of chemotherapy or radiotherapy in situ [9]. The currently popular method for hyperthermia in clinical practice is mild hyperthermia applied to the regional cancer area using an 8 MHz [16,17] or 13.56 MHz [18] radiofrequency capacitive heating device applied to the surface of the body directly above the cancer. In contrast, whole-body hyperthermia heats broad areas of the body and increases the systemic body temperature of patients [19]. This method is suitable for patients with metastases in multiple organs, including carcinomatosis. Increased effects on immune cells such as T cells and DCs located in the peripheral organs or circulation are also expected, in addition to the effect of regional hyperthermia [20].
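Neither this review nor the cited studies compute a formal thermal dose, but the temperature-time tradeoff described above (tolerance at 39-42 °C versus marked cell killing at ≥ 42.5 °C) is commonly quantified with the cumulative equivalent minutes at 43 °C (CEM43) metric of Sapareto and Dewey. The sketch below is an illustrative assumption, not part of the cited work:

def cem43(segments):
    """Cumulative equivalent minutes at 43 degC (Sapareto-Dewey).

    segments: iterable of (temperature_degC, minutes) pairs describing the
    heating profile. R = 0.5 at/above 43 degC and 0.25 below, a commonly
    used convention; tissue-specific values vary.
    """
    total = 0.0
    for temp, minutes in segments:
        r = 0.5 if temp >= 43.0 else 0.25
        total += minutes * r ** (43.0 - temp)
    return total

# A fever-range session (41 degC for 60 min) accumulates far less thermal
# dose than a short exposure above the 42.5 degC threshold:
print(cem43([(41.0, 60)]))   # 3.75 equivalent minutes
print(cem43([(43.5, 10)]))   # ~14.1 equivalent minutes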
T-cell-based immune responses to cancer
T-cells are key immune cells with regard to specific immune responses against cancer. The pivotal events involved in the induction of successful T-cell-mediated immune responses, and those in the immune effector phase, are shown in Figure 1. DCs are the major antigen-presenting cells (APCs) capable of initiating a T-cell-mediated immune response [21]. These cells usually reside in the epidermis of the skin and mucosal tissues, prepared to combat foreign enemies. This includes cancer antigens, which are fragments of tumor cells generated as a consequence of natural death or the interaction of tumors with innate immune cells such as natural killer (NK) cells [22,23]. After capturing cancer antigens, DCs process and present these fragments on their cell surface together with major histocompatibility complex (MHC) antigens. The complex of cancer antigens and MHC class I antigens is presented to CD8+ cytotoxic T lymphocytes (CTLs), while the complex of cancer antigens and MHC class II antigens engages and stimulates CD4+ helper T-cells. The CD4+ helper T-cells enhance the differentiation of CTLs into effector T-cells by secreting cytokines such as interferon-γ (IFN-γ) and interleukin-2 (IL-2).
Naïve T-cells receive the presentation of cancer antigens from DCs in the T-cell zones of lymphoid organs to acquire an appropriate immune response [24]. To achieve antigen presentation, these two cell types constantly migrate to the lymphoid organs (homing). The homing mechanism is regulated by the chemokine and chemokine receptor axis. CCL21 and CCL19 are homeostatic chemokines that are constantly secreted from secondary lymphoid organs, including lymph nodes and Peyer's patches, without any exogenous stimulation [25-27]. Meanwhile, C-C chemokine receptor 7 (CCR7) is the concomitant chemokine receptor for CCL21 and CCL19 [25]. Cells expressing CCR7 can migrate to organs where CCL21 and CCL19 are secreted, depending on the concentration gradient [28]. Most naïve T-cells constantly express CCR7. DCs matured by exogenous stimulation with antigens derived from bacteria or cancers begin to express CCR7, whereas immature DCs do not [24,29]. Thus, naïve T-cells and antigen-presenting DCs can migrate to secondary lymphoid organs from peripheral organs. As a final step in homing, these immune cells need to undergo extravasation through high endothelial venules to infiltrate the lymphatic organs from the lymphatic system. To complete homing through the gaps between endothelial cells, the expression of adhesion molecules such as L-selectin and integrin is up-regulated for rolling and adhesion with intercellular adhesion molecule (ICAM), a co-receptor for integrin [30,31]. After antigen presentation, T-cells are induced to proliferate and differentiate into effector T-cells. The expression of CCR7 on T-cells is then down-regulated so that they leave the lymphoid tissue and migrate to cancer tissues.
Effector T-cells recognize the complex of cancer antigens and MHC class I antigens expressed on cancer cells through the T-cell receptor (TCR). The lethal effects against cancer cells are then triggered along two different pathways: granule exocytosis and the death ligand/death receptor system. Granule exocytosis induces the secretion of perforins and granule enzymes (granzymes) [32,33]. Even though the nature and role of these proteases in the response to cancer are still unclear, perforins generally act as a carrier for the delivery of granzymes and build pores in the plasma membrane of target cells to allow granzymes to gain entry to the target-cell cytosol. Granzymes are considered to execute the target cells through the cleavage of factors required for replication and defense [33]. Death ligands are proteins, including Fas ligand (FasL), that are expressed on effector T-cells. FasL can engage target cells through the Fas receptor, which belongs to the tumor necrosis factor (TNF) superfamily, to induce target cell death by apoptosis [34]. The main role of FasL is to regulate the immune system, but cancer cells may over-express FasL spontaneously, or as a form of chemotherapy resistance, to mount a counterattack by inducing the apoptosis of tumor-infiltrating lymphocytes (TILs) and escape from the immune system [34].
T-cell based immunotherapy for cancer
As has been mentioned, the human immune system always works in response to antigens expressed on cancer cells, thus distinguishing cancer cells from noncancerous cells. This causes the induction of the TILs found in the tumor microenvironment [35]. However, anticancer immunity is usually not enough to overcome the tumor's growth speed, owing to the low immunogenicity of cancer cells, because these cells are derived from an individual's own cells. Thus, it was inevitable that immunotherapy would be developed to overcome the low immunity against cancer. With regard to T-cell-based immunotherapy, adoptive transfer of CD3-activated T-cells has traditionally been used as a compulsory activation stimulus to compensate for the reduction in stimulation frequency due to low antigenicity. The more recent discovery of immune check-point inhibitors achieved outstanding progress in cancer immunotherapy by showing sensational long-term benefits in patients with advanced cancer [4,5]. The purpose of this medicine is to inhibit immune-suppressive signals between cancer cells and T-cells; thus, the agent that eliminates a cancer during the final phase is the T-cell [36,37]. Immune check-point molecules such as programmed death-1 (PD-1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) are expressed on T-cells and play a vital role in limiting exaggerated immune responses, in both the adaptive immune response and the autoimmune response, to maintain homeostasis by acting as an inhibitory signal against APCs. Recently, it has been disclosed that cancer cells take advantage of this mechanism to survive. For example, cancer cells express PD-L1, a concomitant ligand for PD-1, to attenuate T-cell-based immune reactions in association with cancer progression. With the discovery of this mechanism, immune check-point inhibitors have been shown to carry great promise. However, their efficacy has still been recognized in only a small number of patients, and PD-L1 expression on tumor cells has been regarded as a negative prognostic factor [4,5].
Hyperthermia enhances immune systems in response to cancer
Body temperature elevation has been considered an important phenomenon associated with the regulation of both innate and adaptive immune responses [38]. Hyperthermia elicits various effects at several steps of the immune reaction against cancer. It up-regulates the homing of immune cells and the function of adhesion molecules on both immune cells and endothelial cells, activates immune cells including CTLs, DCs, and NK cells, and inhibits immune suppression. In this section, we discuss how thermal stress up-regulates the immune system. Hyperthermia, especially whole-body hyperthermia, has the potential to increase the homing of immune cells. Continuous secretion of homeostatic chemokines, including CCL21, and the expression of adhesion factors, including selectin, integrin, and ICAM, regulate immune homeostasis by maintaining the homing of these immune cells. The thermal effect can enhance the expression of ICAM-1 and CCL21 in high endothelial venules (HEVs) [39] and can up-regulate L-selectin- and integrin-dependent adhesive interactions to induce the adhesion and migration of DCs and T-cells toward HEVs [40]. Additionally, an increase in the migration capacity of DCs ex vivo has been reported [41].
We reported previously that heat treatment stimulated cytokine production from peripheral T-cells in vitro and in vivo in fresh peripheral venous blood obtained from 5 healthy volunteers [42]. We first incubated peripheral blood mononuclear cells (PBMCs) separated from the obtained blood samples at 37 °C or 39 °C for 2 h in a water bath; the PBMCs were then co-cultured with anti-CD3/CD28 monoclonal antibodies for 24 h at 37 °C. To evaluate the secretory properties of cytokines in T-cells, IFN-γ and IL-2 levels in the supernatant were measured. Results showed that both cytokine production levels were significantly increased (approximately twofold) when PBMCs were cultured at 39 °C [Figure 2].
Figure 2: IFN-γ and IL-2 production by PBMCs in response to monoclonal antibodies against CD3 and CD28. Blood samples were collected and incubated at 37 °C and 39 °C for 2 h, then PBMCs were extracted and co-cultured with monoclonal antibodies against CD3 and CD28 to measure IFN-γ and IL-2 production levels in each supernatant. Results are shown as fold over the control (37 °C) for the average of five separate donors, and expressed as mean ± SEM. Statistical differences from the control were evaluated using the paired t-test. P < 0.05 was recognized as statistically significant [42]. IFN: interferon; IL: interleukin; PBMCs: peripheral blood mononuclear cells.
Next, the volunteers underwent whole-body hyperthermia until the rectal temperature reached 38.5 °C (generally this required 1 h of treatment). After heating was terminated, the volunteers were covered with a leather tent for 60 min as a heat-retention phase. Blood samples were obtained four times: before the treatment, at the end of the heat-retention phase, and then 24 and 48 h after treatment. IFN-γ and IL-2 levels in each supernatant were measured in response to monoclonal antibodies against CD3 and CD28. The results showed significant increases in the production of both IFN-γ and IL-2; these were observed not only immediately after but also 24 h after whole-body hyperthermia. At 48 h after whole-body hyperthermia, the production levels of both cytokines had returned to the pretreatment levels [Figure 3].
Figure 3: IFN-γ and IL-2 production by PBMCs collected before (Pre), immediately after (Post), 24 h after (24 h), and 48 h after (48 h) whole-body hyperthermia. PBMCs from each time point were extracted and co-cultured with monoclonal antibodies against CD3 and CD28 to measure IFN-γ and IL-2 production levels in each supernatant. Results are shown as fold over the control (Pre) for the average of five separate donors, and expressed as mean ± SEM. Statistical differences from the control were evaluated using the paired t-test. P < 0.05 was recognized as statistically significant [42]. IFN: interferon; IL: interleukin; PBMCs: peripheral blood mononuclear cells.
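The significance testing used for Figures 2 and 3 (a paired t-test on per-donor measurements, with P < 0.05 as the threshold) is easy to reproduce. A minimal sketch with hypothetical cytokine values for the five donors; the numbers below are illustrative, not the study's data:

from scipy.stats import ttest_rel

# Hypothetical IFN-gamma production (pg/mL) for the five donors, measured
# after PBMC incubation at 37 degC (control) and 39 degC (heat-treated).
ifng_37 = [310, 275, 420, 360, 295]
ifng_39 = [640, 510, 830, 700, 605]

t_stat, p_value = ttest_rel(ifng_39, ifng_37)
fold = sum(b / a for a, b in zip(ifng_37, ifng_39)) / len(ifng_37)

print(f"mean fold change = {fold:.2f}, paired t = {t_stat:.2f}, P = {p_value:.4f}")
# The study reports an approximately twofold, statistically significant
# increase (P < 0.05) for both IFN-gamma and IL-2 [42].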
The potential mechanisms that stimulate cytokine production after hyperthermia may be explained by an increase in the membrane fluidity of T-cells. It was reported that physiological heat stress enhanced the membrane fluidity of T-cells. It also showed an increase in the cluster formation of the GM1+ CD-microdomain in CD8+ T-cells, clustering of TCRβ and the CD8 coreceptor, and enhanced conjugate formation between T-cells and APCs in mice [43]. These results suggest that a heat-stress-induced increase in membrane fluidity is one of the primary events, which subsequently triggers a cascade of molecular events that eventually make T-cells crosstalk more rapidly and efficiently with APCs. These cellular events, including the formation of TCR microclusters, involve several adhesion and signaling molecules [44], which accumulate at the immune synapse [45], also known as the central supramolecular activation complex [46].
Heat-shock proteins (HSPs) have been considered to play an important role in the effects of heat treatment on T-cell function. Indeed, the synthesis of HSPs was shown to increase with elevated body temperature in fever-range whole-body hyperthermia [47]. The essential function of HSPs is known to involve their actions as molecular chaperones. As part of this function, HSPs are involved in antigen presentation and cross-presentation in DCs by delivering chaperoned antigenic peptides to MHC class I molecules, thereby inducing antigen-specific T-cell activation [48-50]. It was reported that the presence of recombinant Hsp60 allows antigen-dependent T-cell activation with antigen-specific IFN-γ secretion under conditions in which stimulation alone is not sufficient to activate T-cells [51].
In contrast, Hsp70 is also expressed in cancers and acts as an effective inhibitor of apoptosis caused by heat stress, thereby participating in tumor progression [52]. Hsp70 can prevent aggregation, remodel folding pathways, and regulate the activity of cancer cells [53]. However, the reported effects of HSPs on DCs and T-cells are still contradictory [54]. Thus, the function of HSPs must continue to be investigated in order to clarify whether and how HSPs are involved in antigen presentation between T-cells and DCs during heat stress.
NK cells can behave as a spearhead of the innate immune response toward exogenous antigens and can make an initial attack against targets without prior exposure to the specific antigens. Basically, normal cells express MHC class I molecules, whereas aberrant cells such as cancer cells extinguish the expression of MHC class I molecules [55]. This phenomenon has been observed especially in pancreatic [56], cervical [57], breast [58], prostatic [59], and penile cancer [60,61]. Activated NK cells have nonspecific anticancer potential by secreting cytotoxic molecules including perforin and granzyme [33] and by expressing death ligands such as FasL, TRAIL, and TNF-α [62,63]. Additionally, heat stress can enhance the distinct clustering of NK cell-activating receptors such as NKG2D on the surface of NK cells and the expression of NK cell-activating ligands, including major histocompatibility complex class I-related chain A (MICA) [64,65]. Moreover, an increase in the activity of NK cells is exercised at this time to complement T-cell-based immune reactions, owing to the attenuated inhibitory signals between NK cells and cancer cells.
To avoid exaggerated immune responses that cause harmful effects on the body, the immune system is regulated to limit adaptive immune responses and prevent autoimmune responses and auto-inflammatory reactivity in normal situations. To achieve this, our immune system incorporates an immunological tolerance system. Regulatory T cells (Tregs) are a subpopulation of T-cells, expressing CD4, CD25, and FOXP3, which negatively modulate both innate and adaptive immune responses by down-regulating or suppressing the induction and proliferation of immune cells including T cells, DCs, and NK cells [67-69]. Even though Tregs usually account for about 4% of CD4+ T cells, they can make up as much as 20-30% of the total CD4+ population in the tumor microenvironment and are associated with poor prognosis in many cancers, such as ovarian, breast, renal, and pancreatic cancer [70]. Depletion of Tregs in animal models has been shown to increase the efficacy of immunotherapy. Thus, achieving the depletion of Tregs is one of the pivotal targets of recent research and therapy associated with cancer immunology [71]. The potential effect of hyperthermia is considered to enhance the cytotoxicity of NK cells against Tregs and to inhibit the induction of Tregs while the apoptosis of Tregs is induced. A significant decrease in the number of Tregs was observed, while NK cell activity and the percentage of NK cells increased, in peripheral blood samples of healthy volunteers after irradiation with fever-range hyperthermia to the upper abdominal region [72]. Moreover, combination therapy of intratumoral injection of immature DCs and local hyperthermia for patients with advanced malignant melanoma demonstrated decreased infiltration of Tregs and increased infiltration of activated CTLs, even though there was no statistical difference in overall survival time [73].
The efficacy of hyperthermia in down-regulating the expression of PD-L1 in some cancer cell lines has been reported. In this study, decreased expression of PD-L1 in cancer cell lines was shown when samples were exposed to temperatures between 40 °C and 43 °C [74]. Further accumulation of data associated with this new experimental model is eagerly awaited.
Combination therapy with immunotherapy
Hyperthermia has been reported to enhance the efficacy of DC vaccines by up-regulating IFN-γ secretion to stimulate naïve T-cells, enhancing DC migration toward lymphatic organs, and protecting DCs from apoptosis [75]. We introduced a whole-body hyperthermia device in combination with a DC vaccine and activated T-cells [Figure 5B]. This result indicated that the combined use of hyperthermia with a DC vaccine and activated T-cells had a positive impact on the induction of T-cell-based immune responses [76].
Combination therapy with radiotherapy
The enhancement of the anticancer efficacy of combined radiotherapy and hyperthermia has been clinically recognized in cervical, breast, and head and neck cancer, among others [77]. Even though radiological cytotoxicity induces DNA damage in cancer cells [78], some cancer cells can come back into existence (termed sublethal damage repair or lethal damage repair) [79,80]. In analyses of the cell cycle, quiescent tumor cells were more resistant to irradiation because cells in this stage have the potential for lethal damage repair [81]. In contrast, hyperthermia can inhibit the repair of radiation-induced damage in cancer cells, so that combination use of hyperthermia can enhance the anticancer efficacy of radiotherapy [82,83]. Cells in the synthesis (S) phase are also relatively radio-resistant, while they are the most sensitive to hyperthermia. Additionally, hypoxic cells in tumors are also radioresistant, while hyperthermia improves the anaerobic condition through oxygen delivery due to increased blood flow. These perceptions indicate a synergistic effect of the combined use of radiotherapy and hyperthermia [7]. Moreover, the additional use of DNA repair inhibitors was reported to further enhance this efficacy [84].
Combination therapy with chemotherapy

Chemotherapy is the most common therapeutic method for patients with inoperable cancer and recurrent or metastatic cancer; however, it has serious problems, including uncertain efficacy, drug resistance, and adverse effects. To improve therapeutic results, the combined use of hyperthermia has been tested, and increased anticancer effects have been reported with paclitaxel, docetaxel, gemcitabine, oxaliplatin, and irinotecan [85]. The interaction between chemotherapy and hyperthermia is considered to work as follows: drug uptake into cancer cells increases because heat damages the cancer-cell membrane and reduces oxygen-radical detoxification; eventually, DNA damage increases while DNA repair decreases. Additionally, hyperthermia has been reported to have the potential to circumvent drug resistance [86,87]. It is also expected that elevated blood flow results in a relative increase in anticancer drug concentration within the tumor. Moreover, adverse effects may be decreased because drug clearance from normal cells is accelerated by their up-regulated metabolism. On the other hand, some anticancer drugs, including 5-fluorouracil, gemcitabine, and oxaliplatin, are considered to enhance cancer immunity by inducing the infiltration of CTLs while reducing Tregs in the tumor [88]. Accordingly, enhancing the efficacy of chemotherapy secondarily up-regulates cancer immunity.

Combination therapy with surgery

Chemotherapy is usually performed for peritoneal metastases, but the prognosis is nonetheless poor because blood flow to the peritoneum is limited by the peritoneal-plasma barrier [89]. Hyperthermia is considered to impair the peritoneal-plasma barrier and thereby increase the resorption of anticancer drugs into peritoneal tumors. Hence, combining hyperthermia with intraperitoneally administered chemotherapy resulted in greater anticancer drug accumulation in peritoneal tumors than chemotherapy alone [90]. Using this concept, the effectiveness of cytoreductive surgery with subsequent HIPEC has been reported for peritoneal metastasis from gastric [91], colorectal [92,93], appendiceal [94], and adrenal cancer [95]. Generally, HIPEC is performed after resection of the cancer lesion, with or without systematic peritonectomy, by intraperitoneal administration of saline containing an anticancer drug; the solution is heated in advance so that the peritoneal surface is maintained at around 43 °C while the drug solution is irrigated.
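The temperature-maintenance step of HIPEC can be pictured as a simple closed-loop control problem. The toy simulation below uses a bang-bang (on/off) heater with made-up thermal constants; it sketches only the control logic and does not describe any clinical perfusion device.

```python
# Hypothetical sketch of perfusate temperature maintenance for HIPEC: keep
# the circulating solution near a target so the peritoneal surface stays
# around 43 degC. Heater and heat-loss rates are invented for illustration.
TARGET_C = 43.0      # target perfusate temperature (degC)
HYSTERESIS_C = 0.3   # allowed deviation before toggling the heater
HEAT_RATE = 0.20     # degC gained per step while heater is on (assumed)
LOSS_RATE = 0.04     # degC lost per step to the body and tubing (assumed)

temp = 37.0          # perfusate starts at body temperature
heater_on = True
for step in range(120):
    # Bang-bang control: switch the heater at the hysteresis bounds.
    if temp >= TARGET_C + HYSTERESIS_C:
        heater_on = False
    elif temp <= TARGET_C - HYSTERESIS_C:
        heater_on = True
    temp += (HEAT_RATE if heater_on else 0.0) - LOSS_RATE
    if step % 20 == 0:
        state = "on" if heater_on else "off"
        print(f"step {step:3d}: {temp:5.2f} degC, heater {state}")
```

The trace shows a warming phase followed by small oscillations around the target, which is the qualitative behavior required to hold the peritoneal surface near 43 °C during irrigation.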
Adjuvant chemotherapy is often given after surgery for certain types of cancer, such as pancreatic, colorectal, and breast cancer, to improve prognosis by reducing the potential for recurrence and metastasis [96-99]. However, in some cancers, including intrahepatic cholangiocarcinoma (ICC), patient prognosis is extremely poor because the recurrence rate after curative operation is very high and there is no standard adjuvant regimen. We reported previously that postoperative adjuvant immunotherapy with intradermal administration of a DC vaccine and intravenous administration of activated T-cells would be a feasible and effective treatment for preventing recurrence and achieving long-term survival in patients with ICC. In that study, the median progression-free survival and overall survival were 18.3 and 31.9 months in the patients receiving adjuvant immunotherapy, versus 7.7 and 17.4 months in the group with surgery alone (P = 0.005 and 0.022, respectively). Additionally, among the patients receiving adjuvant immunotherapy, those whose skin reactions at the vaccine site were ≥ 3 cm showed a dramatically better prognosis [100].

As explained above, hyperthermia can elicit early skin reactions when used in conjunction with immunotherapy, including DC vaccines and activated T-cell transfer [76]. Hyperthermia can also augment the efficacy of adoptive immunotherapy by up-regulating IFN-γ secretion to stimulate naïve T-cells and by enhancing the homing of DCs and T-cells [75]. For these reasons, hyperthermia is considered useful in the adjuvant setting.

Neoadjuvant chemotherapy and chemoradiotherapy have become well established, especially for esophageal cancer [101]. These neoadjuvant therapies improved the long-term survival rate, but the therapeutic benefit is sometimes countered by a significant increase in adverse effects [102]. In addition, postoperative complications, including cardiac and pulmonary disease, are much more severe after chemoradiotherapy [103,104]. Preoperative radiotherapy also increases the risk of postoperative anastomotic leakage, an unfavorable complication.

As mentioned above, hyperthermia has the potential to augment the effects of chemotherapy or radiotherapy. Combining hyperthermia with chemotherapy or radiotherapy may therefore be useful even in the neoadjuvant setting, suppressing the likelihood and severity of adverse effects and complications by allowing the dose of chemotherapy or radiotherapy to be reduced while maintaining or increasing the anticancer effect. Indeed, in 1995, the results of a randomized phase III study of patients with resectable squamous cell carcinoma of the thoracic esophagus were disclosed. Patients underwent neoadjuvant chemoradiotherapy with or without radiofrequency local hyperthermia; the 3-year survival rate was 50.4% with hyperthermia versus 24.2% without, and there were no procedural complications [105]. Additionally, in 2010, the results of a randomized phase III trial in patients with high-risk soft-tissue sarcoma were reported. Patients underwent neoadjuvant chemotherapy consisting of etoposide, ifosfamide, and doxorubicin with or without regional hyperthermia. The treatment response rate in the group that received regional hyperthermia was 28.8%, compared with 12.7% in the group receiving chemotherapy alone (P = 0.002) [106].
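For readers who want to reproduce this kind of comparison, the snippet below runs a two-proportion z-test on the reported response rates. The arm sizes are hypothetical placeholders, since the text does not give the trial's enrollment; only the proportions come from the cited report.

```python
# Sketch: compare response rates like the 28.8% vs. 12.7% reported for the
# soft-tissue sarcoma trial. Arm sizes below are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

n_heat, n_chemo = 150, 150           # hypothetical arm sizes
resp_heat = round(0.288 * n_heat)    # responders with regional hyperthermia
resp_chemo = round(0.127 * n_chemo)  # responders with chemotherapy alone

stat, p_value = proportions_ztest(
    count=[resp_heat, resp_chemo], nobs=[n_heat, n_chemo]
)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```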
During the postoperative period, immunity is suppressed by operative invasion [107] and by the administration of anesthetic drugs such as opioids [108], which can encourage postoperative cancer metastasis [109]. Concerning the biological effect, preoperative fever-range whole-body hyperthermia has been reported to augment postoperative cancer immunity by increasing blood levels of TNF-α and HSP60 [77]. Thus, hyperthermia can add to the benefit of neoadjuvant therapy.

Adverse effects of hyperthermia

Acute and late adverse effects of regional hyperthermia are uncommon and usually minor, owing to recent developments in heating techniques, thermometry, and treatment scheduling [110]. Adverse effects of hyperthermia include skin burns and skin pain, but these events usually heal spontaneously [111]. In combination therapies with hyperthermia, radiation toxicity is not increased, but the toxicity of chemotherapy might be enhanced, in keeping with the increase in drug efficacy [110]. In rare cases of combination therapy with chemotherapy, severe subcutaneous fat or muscle necrosis requiring surgical treatment has been reported. Adverse effects of regional hyperthermia vary with the targeted organ and with the heating device and technique. Whole-body hyperthermia is a somewhat more invasive method, accompanied by a sensation of heat, tiredness, and fluid loss from sweating due to the rapid elevation of core body temperature. Dehydration, heat illness, cardiac disease, or thrombosis may appear depending on the underlying disease or physical condition. In addition, whole-body hyperthermia carries a risk of toxicity to the peripheral nervous system; hence, this method is contraindicated for patients with neurological diseases such as multiple sclerosis [112].

On the other hand, to date there are no reports describing hyperthermia promoting cancer progression through its biological effects.

Overall, hyperthermia is considered a convenient therapeutic method as long as it is used appropriately. Paradoxically, its safety is maintained partly by avoiding the excessive energy delivery that deep heating would require, because it is still difficult to confine the heating effect to cancer tissue alone. By using hyperthermia in combination with chemotherapy or radiotherapy, the doses of those therapies may be reduced to ease their side effects without reducing therapeutic effect, because hyperthermia can augment the effect of chemotherapy or radiotherapy in a less invasive manner.

CONCLUSION

This report shows that hyperthermia offers the following biological advantages. Heat stress lowers the survival rate of all cells, but normal tissues tolerate it better than cancerous tissues. Heat can augment immune responses while decreasing immune suppression. Heat inhibits the recovery of cancer cells from DNA damage. Heat enhances the uptake of anticancer drugs into cancer cells. The sensitivity of cancer cells to heat and radiation differs depending on their position in the cell cycle. The anticancer efficacy of hyperthermia alone, with currently available heating devices, is not sufficient to support its use as a standalone therapy. However, some studies have shown that combination therapy with conventional methods, including immunotherapy, radiotherapy, chemotherapy, and surgery, improves anticancer efficacy in vitro and in vivo.
Perspective

Currently, clinical experience and data on oncological hyperthermia are still limited because neither information about nor devices for hyperthermia have become common. Therefore, multicenter clinical trials of cancer treatments including hyperthermia should be conducted to provide convincing data. The combination of hyperthermia with immune checkpoint inhibitors should be included in these studies to achieve fuller anticancer efficacy with fewer adverse effects. Development of drugs such as DNA repair inhibitors or regulators of HSPs is also expected to augment the efficacy of hyperthermia itself. Additionally, further efforts will be required to elucidate the mechanisms by which hyperthermia acts on cancer in order to optimize therapy. The development of heating devices and thermometry is also needed to achieve heat delivery that is more precisely limited to the tumor lesion.

Figure 2: In vitro hyperthermia stimulates IFN-γ and IL-2 production from T-cells stimulated with monoclonal antibodies against CD3 and CD28. Blood samples were collected and incubated at 37 °C and 39 °C for 2 h; PBMCs were then extracted and co-cultured with monoclonal antibodies against CD3 and CD28 to measure IFN-γ and IL-2 production levels in each supernatant. Results are shown as fold over the control (37 °C) for the average of five separate donors and are expressed as mean ± SEM. Statistical differences from the control were evaluated using the paired t-test; P < 0.05 was recognized as statistically significant [42]. IFN: interferon; IL: interleukin; PBMCs: peripheral blood mononuclear cells

Figure 3: Whole-body hyperthermia stimulates IFN-γ and IL-2 production from T-cells. Blood was obtained from donors before (Pre), immediately after (Post), 24 h after (24 h), and 48 h after (48 h) whole-body hyperthermia. PBMCs from each time point were extracted and co-cultured with monoclonal antibodies against CD3 and CD28 to measure IFN-γ and IL-2 production levels in each supernatant. Results are shown as fold over the control (Pre) for the average of five separate donors and are expressed as mean ± SEM. Statistical differences from the control were evaluated using the paired t-test; P < 0.05 was recognized as statistically significant [42]. IFN: interferon; IL: interleukin; PBMCs: peripheral blood mononuclear cells

Figure 4: Approach for immunotherapy with whole-body hyperthermia. (A): cell preparation for DC vaccine and activated T-cell therapy; (B): fever-range hyperthermia using the Heckel HT-3000 device; (C): representative data of body temperature during whole-body hyperthermia (heating phase and retention phase; time in min). DC: dendritic cell; PBMCs: peripheral blood mononuclear cells

Figure 5: Required number of immunotherapy injections with DC vaccine and activated T-cells to elicit a DTH-like skin reaction. (A): comparison between patients who received immunotherapy alone or in combination with hyperthermia; (B): comparison of patients' rectal temperatures during hyperthermia [76]. DC: dendritic cell; DTH: delayed-type hypersensitivity
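The fold-over-control analysis described in the Figure 2 and 3 captions can be reproduced along the following lines. The cytokine values below are synthetic stand-ins, not the published measurements; only the analysis steps (per-donor fold change, mean ± SEM, paired t-test at P < 0.05) follow the captions.

```python
# Sketch of the caption's analysis: paired cytokine measurements from five
# donors, expressed as fold over each donor's own control, compared with a
# paired t-test. All numbers below are synthetic placeholders.
import numpy as np
from scipy import stats

# IFN-gamma (pg/mL) per donor: control (37 degC) vs. heated (39 degC).
control = np.array([120.0, 95.0, 150.0, 110.0, 130.0])  # synthetic
heated = np.array([180.0, 150.0, 210.0, 170.0, 160.0])  # synthetic

fold_change = heated / control                       # fold over own control
t_stat, p_value = stats.ttest_rel(heated, control)   # paired t-test

print(f"mean fold over control = {fold_change.mean():.2f} "
      f"+/- {stats.sem(fold_change):.2f} (SEM)")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```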
Caladium 75-14, a spotted, fancy-leaved cultivar for containers and sunny landscapes.

As a common pot and landscape plant, caladium (Caladium ×hortulanum Birdsey, Araceae Juss.) is valued for its colorful leaves and low maintenance requirements (Evans et al., 1992). Commercial caladium plants are grown from tubers. Central Florida growers produce greater than 95% of the tubers for the worldwide market (Bell et al., 1998; Deng et al., 2005). Tuber yield is one of the primary factors determining a caladium cultivar's production value and whether the cultivar will be acceptable to growers and viable in commercial production. Poor tuber yield has been one of the main reasons why many early cultivars were removed from commercial tuber production and many new breeding lines with novel colors or coloration patterns have not been commercialized. Developing caladium cultivars with good tuber yield has been one of the main breeding objectives of the University of Florida's caladium breeding program at the Gulf Coast Research and Education Center since the program began in 1976.

Caladium 75-14 (Figs. 1 and 2) is a new spotted, fancy-leaved cultivar with superior tuber yield. Plants of caladium 75-14 are vigorous and can quickly fill a landscape space. Its leaves are resistant to sunburn, allowing this cultivar to perform well in full-sun landscapes. With its multiple branching habit and plant vigor, caladium 75-14 produces high-quality plants in container forcing.

Origin

Caladium 75-14 was initially selected in 2002 as GCREC-1075-14 from a population of progeny of a cross made in 2001 between 'Gingerland' and 'Florida Moonlight' (Fig. 3). 'Gingerland' was selected as the seed parent because of its sun tolerance and bright leaf spots. 'Florida Moonlight' was used as the pollen parent for its high tuber yield, multiple branching habit, pure white leaf color, and heart-shaped leaf. 'Florida Moonlight' was a progeny of the cross 'Aaron' × 'Candidum Junior' (Miranda and Harbaugh, 2003).
The ancestry of 'Gingerland', 'Aaron', and 'Candidum Junior' is unknown, although 'Candidum Junior' is suspected to be a field mutation of 'Candidum' (Wilfret, 1991).

Description

Color designations for plant parts [e.g., Royal Horticultural Society (RHS) 200B] are based on comparison with the Royal Horticultural Society Colour Chart (RHS, 1986). Plants used for describing color were grown in 11.5-cm containers in a 45% shaded greenhouse from No. 1 (3.8 to 6.4 cm) de-eyed tubers. Leaves of caladium 75-14 are peltate and sagittate-cordate with green-white (RHS 157A) palmate-pinnate venation. The upper surface has dark green (RHS 141A) margins, 2 to 3 mm wide, bordering the entire leaf except for the basal leaf valley, where it is grayed purple (RHS 185A). Interveinal areas are green-white (RHS 157A) near the central main vein and change to dark green (RHS 141A) near the margin. Leaves have a small red-purple blotch (1 to 3 mm in diameter) at the petiole attachment and numerous (1 to 40 mm in diameter) grayed purple (RHS 185B) spots. Netted green-white (RHS 157D) venation occurs on the leaf surface. The undersurface has a grayed green (RHS 191B) margin 2 to 3 mm wide. Primary veins are grayed green (RHS 194B), and netted venation is grayed green (RHS 191A). Interveinal areas are green-white (RHS 157A) near the central main and large veins and change to grayed green (RHS 191A) near the margin. Grayed purple spots (RHS 186A) are numerous and scattered between primary veins. Petioles are 3 to 6 mm in diameter and light green (RHS 138D) at the apex, but the color diffuses into a dark brown (RHS 200B) at the base, which is ≈5 to 9 mm in diameter. Plants of caladium 75-14 grown for ≈4 months in full sun in ground beds had an average height of 40 cm. The largest leaf on plants grown in a 45% shaded greenhouse from an intact No. 1 tuber in an 11.4-cm pot averaged 19 cm long and 12 cm wide 8 weeks after planting. Jumbo-sized (greater than 6.4 cm and less than 8.9 cm in diameter) tubers are multisegmented, bearing five to six dominant buds. Tuber surfaces are brown (RHS 200C), with the cortical area yellow-orange (RHS 15C).

Performance

Caladium 75-14 was evaluated for tuber production and plant performance at the Gulf Coast Research and Education Center in Wimauma, FL, in 2005 and 2006. The soil was EauGallie fine sand with ≈1% organic matter and a pH of 6.2. Plants were grown on plastic-mulched raised beds with a constant water table maintained using a seep irrigation system (Geraldson et al., 1965). In 2005, ground beds were fumigated on 25 Feb. (6 weeks before planting) with a mixture of 67% methyl bromide and 33% chloropicrin (by volume) at the rate of 392 kg·ha−1; in 2006, the beds were fumigated on 10 Mar. (10 d before planting) with the same fumigant mixture at 196 kg·ha−1. The beds were 91 cm wide and 20 cm high, with 2.54-cm caladium seed pieces (tuber pieces) planted 15 cm apart in three rows. Osmocote 18N-2.6P-10K 8-9 month controlled-release fertilizer (Scotts Co., Marysville, OH) was applied to the bed surface, with nitrogen at 336 kg·ha−1, when shoots were emerging from the soil. Tubers were harvested in Nov. 2005 and Dec. 2006, respectively. Dried tubers were graded by their maximum diameter: No. 2 (greater than 2.5 cm and less than 3.8 cm), No. 1 (greater than 3.8 cm and less than 6.4 cm), Jumbo (greater than 6.4 cm and less than 8.9 cm), Mammoth (greater than 8.9 cm and less than 11.4 cm), and Super Mammoth (greater than 11.4 cm).
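These grade boundaries translate directly into a lookup function, sketched below. How tubers falling exactly on a boundary were assigned is not stated in the text, so the half-open intervals here are an assumption.

```python
# Straightforward encoding of the tuber grades defined above. Boundary
# handling (e.g., a tuber of exactly 3.8 cm) is an assumption.
GRADES = [
    ("No. 2", 2.5, 3.8),
    ("No. 1", 3.8, 6.4),
    ("Jumbo", 6.4, 8.9),
    ("Mammoth", 8.9, 11.4),
    ("Super Mammoth", 11.4, float("inf")),
]

def grade_tuber(diameter_cm):
    """Return the grade for a tuber of the given maximum diameter (cm)."""
    for name, low, high in GRADES:
        if low <= diameter_cm < high:
            return name
    return "cull"  # below No. 2 size; not marketable

print(grade_tuber(4.5))  # -> No. 1
print(grade_tuber(9.0))  # -> Mammoth
```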
A production index (an indicator of the economic value of the harvested tubers from each plot) was calculated as a weighted sum over the tuber grades, where N is the number of tubers in each grade. Field plots were organized in a randomized complete block design with three replications. Each plot was 1.2 m² and was planted with 30 propagules (tuber pieces). Three major commercial cultivars with a similar (spotted) coloration pattern were also planted in the field as controls to assess the tuber yield and plant performance of caladium 75-14: 'Galaxy' (fancy-leaved, vigorous), 'Gingerland' (lance-leaved, prostrate), and 'Miss Muffet' (fancy-leaved, dwarf). An analysis of variance was conducted using the GLM procedure in the SAS program, followed by mean separation with protected Fisher's least significant difference (SAS Institute, 2003).

Landscape performance of caladium 75-14 grown under full-sun conditions was evaluated in 2005 and 2006 on the same plots used for assessing tuber production. Plant height, number of leaves, and leaf sizes were recorded ≈4 months after planting. Three plants in the center of each plot (of 30 plants) were randomly selected for the objective measurements. Plants in each plot were also evaluated for overall plant quality and leaf sunburn tolerance, three times in the 2005 growing season and twice in the 2006 growing season. The scale for plant quality evaluation was 1 to 5, with 1 being very poor (few leaves and lack of vigor) and 5 being excellent (full plants, numerous leaves, and bright color display). The scale for leaf sunburn tolerance was also 1 to 5, with 1 being very susceptible to sunburn (leaves having numerous sun-damaged areas or holes) and 5 being resistant to sunburn (no visible sun-damaged areas).

Caladium 75-14 plants were slightly taller (≈10 cm) than 'Galaxy' and 'Gingerland', but ≈26 cm taller than 'Miss Muffet'. Its leaves were smaller than those of 'Galaxy' and 'Gingerland' but similar in size to those of 'Miss Muffet' (Table 2). For overall plant quality, caladium 75-14 received the highest scores among the cultivars tested in both growing seasons (2005 and 2006), and its scores were significantly higher than all controls in three of the five evaluations. The leaf sun tolerance of caladium 75-14 was rated 4 to 5 (good to excellent) in all evaluations during the two growing seasons. Except for one evaluation in Sept. 2006, caladium 75-14's sun tolerance ratings were higher than those of all the controls.

Caladium 75-14's performance in container forcing was evaluated by planting No. 1 tubers in 11.4-cm containers. Dry tubers were planted either intact or de-eyed in a peat/vermiculite mix (VerGro Container Mix A; Verlite, Tampa, FL) on 26 Mar. 2007. The tests were performed in a greenhouse with 45% light exclusion during the summer in Wimauma, FL. Average daily temperatures ranged from a low of 16 °C (night) to a high of 29 °C (day) during the tests. Plants were grown on metal benches in the greenhouse and arranged in a randomized complete block design with 10 replications. Three fancy-leaved commercial cultivars, 'Candidum Junior' (a cultivar commonly used for pot plant production), 'Galaxy', and 'Miss Muffet', were included as controls. Plant height, number of leaves, and leaf sizes were recorded 8 weeks after planting. At the same time, each pot plant was rated on a scale of 1 to 5 for quality as a pot plant: 1 = very poor, 3 = fair, and 5 = very good (many leaves; bright, full plants).
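Building on the grading sketch above, the production index is a weighted combination of the tuber counts N per grade. The index formula itself did not survive in this text, so the weights below are hypothetical placeholders chosen only to illustrate the calculation, not the weights used in the trial.

```python
# Illustrative production-index calculation. The actual grade weights were
# lost from the source text; these values are hypothetical placeholders.
HYPOTHETICAL_WEIGHTS = {
    "No. 2": 1,
    "No. 1": 2,
    "Jumbo": 4,
    "Mammoth": 6,
    "Super Mammoth": 8,
}

def production_index(tuber_counts):
    """Weighted sum of tuber counts N across grades (illustrative weights)."""
    return sum(HYPOTHETICAL_WEIGHTS[g] * n for g, n in tuber_counts.items())

plot_counts = {"No. 2": 12, "No. 1": 20, "Jumbo": 8,
               "Mammoth": 3, "Super Mammoth": 0}
print(production_index(plot_counts))  # 12*1 + 20*2 + 8*4 + 3*6 = 102
```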
An analysis of variance and mean separation were conducted using the GLM procedure in the SAS program to compare the performance of caladium 75-14 with the controls. Caladium 75-14 sprouted 37 d (intact or de-eyed) after planting, similar to 'Galaxy' but 3 to 6 d later than 'Candidum Junior' and 7 to 10 d later than 'Miss Muffet' (Table 3). Caladium 75-14 plants were 27 cm (intact tubers) or 25 cm (de-eyed tubers) tall, similar to 'Galaxy' in height, but significantly taller than 'Candidum Junior' (by 8 to 10 cm) and 'Miss Muffet' (by 10 to 12 cm), a known dwarf cultivar. Caladium 75-14 had seven leaves on intact plants 8 weeks after planting, similar to 'Galaxy' but fewer than 'Candidum Junior' or 'Miss Muffet' plants, although the difference was not statistically significant. When tubers were de-eyed, caladium 75-14 produced more leaves (13 per plant). In leaf size (length and width), caladium 75-14 was similar to 'Galaxy' and 'Candidum Junior'. Tuber de-eyeing significantly improved the quality rating of the pot plants, from 3.3 to 4.2. This indicates that caladium 75-14 can be used for forcing in small containers, but tuber de-eyeing will be required to produce high-quality plants.

In summary, caladium 75-14 is a new spotted, fancy-leaved cultivar. It has shown superior tuber production potential in replicated field trials, and this yield is consistent with growers' trials (T. Cantwell-Bates, personal communication). With its vigorous growth habit, caladium 75-14 can quickly fill a landscape space with many leaves and resists sunburn. These characteristics allow it to perform well in full-sun landscapes. In container forcing, caladium 75-14 behaves much like 'Galaxy', with similar sprouting time, leaf size, and plant height, but caladium 75-14 produces pot plants of higher quality regardless of tuber treatment (intact or de-eyed). Tuber de-eyeing can improve caladium 75-14's plant quality when forced in small containers (10 cm in diameter), although this practice is not required for producing pot plants in 20-cm or larger containers (Z. Deng, personal observation). For commercial tuber production, growers are encouraged to use preplant hot water treatment (Rhodes, 1964) and standard postharvest treatment (Harbaugh and Tjia, 1985).

Availability

Caladium 75-14 will be trademarked as Berry Patch. A plant patent will be sought from the U.S. Patent and Trademark Office, and plant patent rights will be assigned to the University of Florida Board of Trustees. Propagation and distribution will be licensed by the Florida Foundation Seed Producers, Inc., P.O. Box 110200, Gainesville, FL 32611. Information on tuber availability and propagation agreements can be obtained from the Florida Foundation Seed Producers, Inc.

Table footnotes: Plant quality was rated on a scale of 1 to 5, with 1 being very poor, 3 fair and acceptable, and 5 excellent in plant vigor, fullness, and color display, in June, July, and Aug. 2005 and Aug. and Sept. 2006, respectively. Leaf sunburn tolerance was rated on a scale of 1 to 5, with 1 being very poor, 3 fair and acceptable, and 5 excellent (no signs of leaf burn or resulting holes on leaf surfaces), in June, July, and Aug. 2005 and Aug. and Sept. 2006, respectively. Mean separation within columns was by protected Fisher's least significant difference test at P ≤ 0.05 unless otherwise indicated in the last row by the P values (last two columns under the sun tolerance ratings).
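Both the field and container trials were analyzed as randomized complete block designs with the SAS GLM procedure. For readers working outside SAS, a rough Python analogue using statsmodels is sketched below, with synthetic data standing in for the trial measurements.

```python
# Rough Python analogue of the SAS GLM analysis described above: a
# randomized-complete-block ANOVA with cultivar and block effects. The data
# frame is a synthetic stand-in, not the trial data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "cultivar": ["75-14", "Galaxy", "Gingerland", "Miss Muffet"] * 3,
    "block": [1] * 4 + [2] * 4 + [3] * 4,
    "prod_index": [120, 95, 90, 80, 130, 100, 85, 82, 125, 97, 92, 78],
})

# Fit the RCBD model: response ~ cultivar + block (both as categorical).
model = smf.ols("prod_index ~ C(cultivar) + C(block)", data=data).fit()
print(anova_lm(model, typ=2))  # F-tests for cultivar and block effects
```

After a significant cultivar F-test, pairwise mean separation (the protected Fisher's LSD step) would follow; statsmodels offers pairwise comparisons, though the exact protected-LSD procedure would need to be assembled by hand.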